Hacker News | past | comments | ask | show | jobs | submit | ErigmolCt's comments

On the credit card point though, cards don't work perfectly as age verification either. Plenty of minors can access prepaid cards or family cards

>cards don't work perfectly as age verification either.

there are 0 "perfect" age verification systems.

plenty of minors can have a sibling or parent supply their ID or do the verification video. the on-device verification discord rolled out was broken within hours; i remember news reports of kids submitting photos of their dogs and being verified as of-age.

credit card solves most of the problem with much less downside than submitting my face (i am already okay putting my card info into most sites)


Prepaid cards can't masquerade as credit cards, since there are easy ways to differentiate them (the numbers have meaning), and a minor getting access to the family credit card amounts to the parents giving them permission. I'm not convinced credit-card age verification is a good solution for all cases, but for services where you've already used a credit card to pay, it would be perfect.
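For what it's worth, "the numbers have meaning" refers to the issuer identification number (IIN/BIN, the leading 6–8 digits) plus the Luhn check digit, per ISO/IEC 7812. A rough sketch of how a merchant-side check could look — note the BIN table below is purely illustrative (made-up prefixes), since real prepaid-vs-credit classification requires a licensed, regularly updated BIN database from the card networks:

```python
def luhn_valid(pan: str) -> bool:
    """Luhn checksum (ISO/IEC 7812): validates structure only, not issuance."""
    digits = [int(d) for d in pan if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Illustrative only -- real BIN-to-product-type mappings come from
# network-provided BIN databases and change constantly.
EXAMPLE_BINS = {
    "414720": "credit",
    "400022": "prepaid",
}

def card_product_type(pan: str) -> str:
    """Classify a card by its BIN prefix; 'unknown' if not in our table."""
    if not luhn_valid(pan):
        return "invalid"
    return EXAMPLE_BINS.get(pan[:6], "unknown")
```

The point being: a site can reject prepaid BINs at submission time without ever charging the card, which is presumably what the parent comment is gesturing at.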

I agree, we shouldn't be optimizing for the case where a child steals a credit card. That's just not in the threat model. I mean, they could steal IDs too, and children can already steal credit cards and buy, like, vbucks or whatever. Which probably causes more tangible real-world harm than seeing a pair of boobies or whatever we're trying to protect against.

However, I still think credit cards are overkill. They reveal way too much information, including addresses. I wouldn't trust most companies with my credit card either, at least not online. In person it's different: the terminals are secure, especially if you use tap to pay. But online, you just have a pinky promise that your info isn't being stored.

Frankly, I'm getting sick and tired of being put in the situation where I have no choice but to just blindly trust people to do the right thing. Obviously, it's not working, and we need real solutions.


I agree that CCs are overkill for every case except those where you have already given them a CC. There is no risk of revealing too much information for age verification when you are already giving them all that information anyway.

Whether these systems are a good idea is still very much being debated

The uncomfortable part is that both sides are right: there are real harms to kids online, but tying real-world identity to routine internet access fundamentally changes what the internet has been for decades

Good press is helpful, but public investors also care about regulatory risk


"Didn't sell its soul" is a pretty high bar for any large AI lab taking government and enterprise money


Not really. The dispute is that Anthropic wanted to keep restrictions against domestic mass surveillance and fully autonomous weapons, while the Pentagon reportedly wanted the models available for any "lawful" use


It probably boosts their reputation with one segment of the market while making them much less attractive to another (just my thoughts)


And one of those segments is about 50% of the US population, and the other is about 50% of the US population + the rest of the world.

Source: a Norwegian who just cancelled his ChatGPT Plus subscription and will consider Gemini or Claude instead.


This feels bad for the industry. If every AI company learns that having explicit red lines gets you blacklisted, the incentive is to keep safety language vague and negotiable


I suspect that's why experienced officers sometimes intervene like in the OP's story


Clearance forms are weird in that they're not just legal documents, they're inputs into an investigative process

