I was having a conversation with some friends the other day about what schemes might mitigate some of these risks, something like an anti-safe word, i.e. a "danger word" that someone can use to remotely validate that a loved one is authentically in danger.

This is fairly low-tech and likely susceptible to various kinds of social engineering, but I'm curious what a more robust approach might look like that doesn't involve us all regurgitating 6-digit codes like robots all the time.
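(For context, the "6-digit codes" are typically TOTP per RFC 6238: both parties derive the same short-lived code from a secret exchanged ahead of time, so either can challenge the other for it over any channel. A minimal sketch using only the Python standard library; the secret value here is hypothetical:)

    import hmac, hashlib, struct, time

    def totp(shared_secret: bytes, timestep: int = 30, digits: int = 6) -> str:
        """RFC 6238 time-based one-time password from a pre-shared secret."""
        counter = int(time.time()) // timestep          # current 30-second window
        msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
        digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Both sides compute the same code from the secret they exchanged in person,
    # so one can challenge the other for it on any channel.
    secret = b"exchanged-in-person"  # hypothetical pre-shared secret
    print(totp(secret))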



Let's say you call or text me, requesting gift cards or my password or something else weird.

I tell you I'll email you immediately to verify. I email you to confirm you want N $X gift cards. You do so.

Or if the inbound contact is by email, you verify by text or phone call. Or Signal/Discord/FB chat apps, etc. Heck even LinkedIn Inmail could be your verification channel.

You could still be compromised, but that'd require attackers to have significantly more access and readiness. And if you reach the other person and they're unaware, now you both have some support as you figure it out.
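To make the invariant concrete: treat the request as pending until it has been confirmed on a channel other than the one it arrived on. A minimal sketch in Python; the channel names and the 30-minute freshness window are assumptions, not anything prescribed above:

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import Optional

    @dataclass
    class PendingRequest:
        summary: str                  # e.g. "3 x $100 gift cards"
        inbound_channel: str          # channel the request arrived on ("text", "email", ...)
        created: datetime = field(default_factory=datetime.now)
        confirmed_on: Optional[str] = None

        def confirm(self, channel: str) -> None:
            # Confirmation only counts if it comes from a different channel.
            if channel == self.inbound_channel:
                raise ValueError("confirmation must use a different channel")
            self.confirmed_on = channel

        def is_verified(self, max_age: timedelta = timedelta(minutes=30)) -> bool:
            fresh = datetime.now() - self.created <= max_age
            return fresh and self.confirmed_on is not None

    req = PendingRequest("3 x $100 gift cards", inbound_channel="text")
    req.confirm("email")              # out-of-band confirmation
    print(req.is_verified())          # True only if confirmed elsewhere, in time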


If one service is compromised, isn't it reasonable to think the scammer could access other services during the same breach?


Yep, this is the way



