Well, I'd appreciate that Karen is willing to talk and explain whatever inconvenient policies they have. A faceless bureaucracy is even more hopeless to deal with.
My wife and I have had many troubles (delays due to additional security checks, endless requests for documents) with visa and other immigration-related applications in the US. We cannot even find a government official to complain to. Email inquiries all end in boilerplate responses. Many agencies have no phone service, and even when one does, you are connected to an unhelpful call-center worker who can only provide generic info and has no permission to discuss your case. And lawyers told us we could do little because all the procedures are legitimate. We may sue the government (and we did once in the past), but only after an "unreasonable" delay, by which point much harm has already been done.
This week the US consulate emailed me asking for official documents about a minor past civil suit against me in China, including "a police certificate", for my visa application. Why the heck does a US visa have anything to do with a civil suit, and in which country does a civil suit involve the police?
That's not my understanding. This is what the bill says: "Provide a developer who has requested a signal with respect to a particular user with a digital signal via a reasonably consistent real-time application programming interface that identifies [the age group]."
So the app requests a signal (like, calling an API), and the OS returns the signal (returning the age group).
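To make that request/response shape concrete, here is a purely illustrative sketch. All names (`AgeBracket`, `get_age_signal`) and the exact bracket boundaries are my invention; the bill only mandates that some such API exist.

```python
# Hypothetical sketch of the OS-provided age signal described in the bill.
# Names and bracket boundaries are invented for illustration only.
from enum import Enum


class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_15"
    TEEN_16_17 = "16_17"
    ADULT = "18_plus"


# Stand-in for the OS-side record of each user's declared age bracket.
_os_user_bracket = {"user-1": AgeBracket.UNDER_13}


def get_age_signal(user_id: str) -> AgeBracket:
    """The app calls this; the OS returns only the age bracket, no PII."""
    return _os_user_bracket.get(user_id, AgeBracket.ADULT)


# The app then gates its behavior on the bracket alone:
if get_age_signal("user-1") != AgeBracket.ADULT:
    print("show restricted experience")
```

The point of the design is that the app never sees a birthdate or ID, only a coarse bucket.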
Regarding API vs. installation lock, TBH I don't think the law concerns itself with that level of detail. An OS or app-store installation lock that checks app ratings could be considered a valid implementation.
The California law is horrible because it forces everyone to let tech companies and governments decide what's suitable for children, rather than letting parents decide. It tells parents to give every app their child's age and trust that the apps will do the right thing. It also legitimizes personal data collection (in this case, the user's age) by every app and service on the Internet that wants to know your age.
The password-based app installation lock I proposed in my original comment doesn't require any kind of age checking at all, so it naturally doesn't fit the California law. The device owner (in this case, the parent who buys the device for their child) gets to decide what apps can be installed on their child's phone on an app-by-app basis using a password set by the parent. The app store doesn't need to know, and the apps don't need to know.
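A minimal sketch of that idea (purely illustrative, not any real OS API): the OS checks a parent-set password before allowing an install, and no age information exists anywhere in the flow.

```python
# Illustrative sketch of a password-gated app installer. Not a real OS API;
# the class and method names are invented for this example.
import hashlib
import hmac
import os


class InstallLock:
    def __init__(self, parent_password: str):
        # Store only a salted hash of the parent's password, never the password.
        self._salt = os.urandom(16)
        self._digest = hashlib.pbkdf2_hmac(
            "sha256", parent_password.encode(), self._salt, 100_000)

    def try_install(self, app_name: str, password_attempt: str) -> bool:
        """Allow the install only if the parent's password is entered."""
        attempt = hashlib.pbkdf2_hmac(
            "sha256", password_attempt.encode(), self._salt, 100_000)
        if hmac.compare_digest(attempt, self._digest):
            print(f"installing {app_name}")
            return True
        return False


lock = InstallLock("parent-secret")
print(lock.try_install("some_game", "parent-secret"))  # True: install proceeds
print(lock.try_install("some_game", "hunter2"))        # False: install blocked
```

Note what's absent: no age, no ID, no signal to the app store or the app. The gate is purely local to the device.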
You have a point, though I suspect that average parents are either too lazy or not tech-literate enough.
I do want to note that this California law alone says nothing about content restriction. I wouldn't be surprised if there was (or will be) another bill assigning that responsibility (which may be more controversial). But the current law is only about the age-gating mechanism. And on the positive side, it removes the need for actual age verification (like using an ID), which other regions still insist on.
The California law is the closest thing to what we do in the physical world, but better. We already decided as a society to limit the purchase of pornography, gambling, alcohol, tobacco, prostitution, and drugs via age gates, and to hold the merchant liable for that. We already find this reasonable as a society. The California law recognizes the tracking problems of requiring a verifiable ID online, and instead accepts that parental self-assertion at the point of account creation is enough.
Since tracking children is generally illegal, you can also voluntarily lie and label yourself as a child when you don't want to access such content.
We have decided as a society to age-gate the purchase of a very small selection of goods and services, but this did not require a law saying all merchants have the right to know your age. And in this case, it's not even just all merchants, but anyone who serves you any kind of information. The real-world equivalent of this California bill would be more like: anyone you've ever talked to has the right to know your age.
A more reasonable approach would be for parents to keep tabs on (or, for stricter parents, control) who their child is associating with and where they're going, and advise their child on who and what to stay away from when they're out alone. And of course that takes parenting effort. The digital equivalents of this are things like password-gating app installation in the OS and website-blocking in the WiFi router. But I will say, I don't think these kinds of analogies are good, because the Internet is too different from the physical world.
And let's not underestimate the tracking power of a legally mandated data point: an age contains about 6 bits of information that can be used to link your user accounts across apps and websites, even if the age you enter is fake.
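A back-of-the-envelope check on that 6-bit figure (the assumption of roughly 80 plausible age values is mine):

```python
# Rough entropy of an age value as a tracking signal.
import math

# A raw age in years, with ~80 plausible values, carries about log2(80) bits.
print(round(math.log2(80), 1))  # -> 6.3, i.e. "about 6 bits"

# Even four coarse brackets (e.g. under 13 / 13-15 / 16-17 / 18+) still
# carry up to 2 bits that can be combined with other signals.
print(math.log2(4))  # -> 2.0
```

Six bits alone narrows a user to one of ~80 groups; combined with a few other fingerprinting signals, that goes a long way toward uniquely identifying an account.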
TBH the California one doesn't require age verification (while many other states' laws do). It only requires the OS to provide a mechanism for the user to indicate their age group, and apps should use that information (instead of asking for PII themselves). It's age verification in name only, but it has somehow drawn the most attention.
If that is true of the California case, it is basically a fluke; lobbyists don't have total control of the legislation after all. It sounds almost benign when posed that way, but it is the wrong solution either way. The better solution is to tell people to install filtering software to block content they don't want. Then you don't have to worry about the compliance of individual sites, personal information, or any of it. The filtering strategy also makes sense for privacy and for handling the subjective nature of what is age-appropriate or offensive.
> Testing workloads that take hours to run still take hours to run with either a human or LLM testing them out (aka that is still the bottleneck)
Actually, I had some terrible experiences when asking the agent to do something simple in our codebase (like renaming files and fixing the build scripts and dependencies): it spent much longer than a human would, because it kept running the full CI pipeline to check for problems after every attempted change.
A human would, for example, rely on the linter to catch basic issues, run a partial build on the affected targets, etc., to save time. But the agent probably has no sense of elapsed time.
Except for the times you do want it to run the CI.
LLM issues can often be solved by being more and more specific, but at some point being specific enough is just as time consuming as jumping in and doing it yourself.
If you actually read this law, it does exactly the opposite: to avoid every random app/website having to do age verification (as traditional age verification laws require), it requires only the OS to ask the user's age (not even verify it). Individual apps then use the age buckets signaled by the OS.
Reposting my comment from the other thread: I know this sounds absurd, but let me try not to be cynical and explain how we got here, as I understand it:
First, let's admit the push for age verification laws isn't a partisan or ideological thing; it's a global trend. This California law has bipartisan sponsorship, and its only major organized opponent is the evil G [1]. While age verification is unpopular in the tech community, I imagine a lot of average adult voters agree that limiting children's access to the wilder parts of the Internet is a good thing.
On this premise, the discussion then becomes who should be responsible for age verification. The traditional model is to require app developers / website owners to gatekeep -- like the Texas and Ohio laws that require PornHub to verify users' IDs. But that model puts too much burden on small developers, and it's a privacy nightmare to have to share your PII with random apps.
This is why we see this new model. States started to believe it is more viable to dump the responsibility on big tech / platforms. A newer Texas law adopts this model (on top of the traditional model), requiring app stores to verify user age (though it was recently blocked by a court) [2]. And this California law pretty much takes the same model -- the OS (think iOS / Android / Windows with an app store) shall obtain the user's age and provide "a signal regarding the users age bracket to applications available in a covered application store".
While many people here are concerned about open-source OSes, and the language does cover all OSes, my intuition is that no lawmaker ever thought about them and they were not the target.
TBH my kids have limited access to their (Android) phones via Family Link, but I don't see an option there to:
- block a specific list of sites
- put up blocking walls inside YouTube, for example
- limit the amount of scrolling time vs. the amount of learning time (this could be implemented quite easily)
So just give the tools to parents and stop requiring IDs from adults. What happens if a kid gets an adult's phone? The same thing that happens when a kid gets dad's rifle or car keys. It doesn't mean that all rifles and car keys should now include blood-sample-based age verification mechanisms.
--Edit--
Apple's family management is even worse. The best I've heard of is the one implemented in the Switch console.
1. This California law doesn't require IDs. (Some states like TX do, but mainly for websites "harmful to minors".)
2. If I think through your examples -- purchasing cars and firearms requires strict ID checks that go well beyond age verification, and if a kid drove a car or used weapons owned by their parents, I'm fairly confident the parents are liable in most jurisdictions. But I think I can guess your real concern -- 24x7 online tracking can be much more intrusive and terrifying than a one-time background check -- and with that I actually agree.
3. In fact, you can read this law as requiring exactly that: OSes (think iOS/Android/Windows/macOS) must "give the tools to parents" -- the ability to indicate at the OS level that the user is a minor and expose that information to apps.
No. Age verification law is not a partisan or ideological thing; it's a global trend. This law is sponsored by both parties: https://calmatters.digitaldemocracy.org/bills/ca_202520260ab... , and Texas has a newer law (the App Store Accountability Act) that requires app stores to verify user ages and obtain parental consent for minors.