>cards don't work perfectly as age verification either.
there are 0 "perfect" age verification systems.
plenty of minors can have a sibling or parent supply their id or sit in for the verification video. the on-device verification discord rolled out was broken within hours; i remember news reports of kids submitting photos of their dogs and being verified as of-age.
a credit card solves most of the problem with much less downside than submitting my face (i am already okay putting my card info into most sites).
Prepaid cards can't masquerade as credit cards, since there are easy ways to differentiate them (the numbers have meaning), and a minor getting access to the family credit card amounts to the parents giving them permission. I'm not convinced a credit card is a good age-verification solution for every case, but where you've already used a credit card to access the service it would be perfect.
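On "the numbers have meaning": the first six-to-eight digits of a card number (the IIN/BIN) identify the issuer and product type, and the full number carries a Luhn checksum. A minimal sketch in Python, with a made-up BIN table just for illustration (actually classifying credit vs prepaid requires a maintained BIN database):

```python
def luhn_valid(number: str) -> bool:
    """Check the Luhn checksum that every valid card number carries."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9      # same as summing the two digits of the product
        total += d
    return total % 10 == 0

# Toy BIN (Issuer Identification Number) table -- entries are invented
# for the example; real lookups use a licensed, regularly updated database.
TOY_BIN_TABLE = {
    "411111": "credit",
    "400000": "prepaid",
}

def card_type(number: str) -> str:
    """Classify a card by its 6-digit BIN prefix, or 'unknown'."""
    return TOY_BIN_TABLE.get(number[:6], "unknown")

print(luhn_valid("4111111111111111"))  # True (classic test number)
print(card_type("4111111111111111"))   # credit
```

The Luhn check only catches typos, not fraud; the point is that the number itself encodes enough structure for a service to refuse prepaid BINs up front.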
I agree, we shouldn't be optimizing for the case where a child steals a credit card. That's just not in the threat model. I mean, they could steal IDs too, and children can already steal credit cards and buy, like, vbucks or whatever. Which probably causes more tangible real-world harm than seeing a pair of boobies or whatever we're trying to protect against.
However, I still think credit cards are overkill. They reveal way too much information, including addresses. I wouldn't trust most companies with my credit card either, at least not online. In person it's different, the scanners are secure especially if you use tap to pay. But online, you just have a pinky promise that your info isn't being stored.
Frankly, I'm getting sick and tired of being put in the situation where I have no choice but to just blindly trust people to do the right thing. Obviously, it's not working, and we need real solutions.
I agree that CCs are overkill for every case except those where you have already given them a CC. There's no risk of revealing too much information for age verification when you're already giving them all that information.
The uncomfortable part is that both sides are right: there are real harms to kids online, but tying real-world identity to routine internet access fundamentally changes what the internet has been for decades.
Not really; the dispute is that Anthropic wanted to keep restrictions against domestic mass surveillance and fully autonomous weapons, while the Pentagon reportedly wanted the models available for any "lawful" use.
This feels bad for the industry. If every AI company learns that having explicit red lines gets you blacklisted, the incentive is to keep safety language vague and negotiable.