Says a lot about the state of society when parenting is outsourced to technology, so that the parents can be further enslaved (because almost no one chooses to work two jobs).
Most of a "living wage" is from the cost of living. We make living space artificially scarce and then your rent is high but so is the rent on the small businesses that employ people. The restaurant can't pay the waitress more when their own costs have gone up, and the money is going to the landlords rather than the employers.
Likewise, when some megacorps capture the government and monopolize a market, the costs go up on both individuals and all the employers in other markets who are now paying monopoly rents with the money they could have otherwise used to hire more people (bidding up wages) or lower the prices workers pay when they buy their products.
Just asking them to pay more doesn't work when the party you want to pay more isn't the party which is extracting the money, and higher costs are just as much of a problem as lower wages.
> If you end it with "and make a good easy to use technical solution instead" then you found my stance.
That assumes a good easy to use technical solution is possible. What if classifying user-generated content as safe for kids is enormously subjective, and the labor required to accurately classify it even given a hypothetical objective standard would cost more than users are willing to pay to have it done?
I was under the impression that all the mainstream models already do this. I'm sure you could probably download some obscure, uncensored and unhinged model that says anything you want, but that isn't what 99% of people will be interacting with.
Not strictly relevant but I also have concerns about AI psychosis which seems related a little bit here, otherwise they'd realise it's a computer program and can't make you do anything.
I think it would be better if children learned critical thinking. It would help defend against any unsound conclusions proposed AI or any other sources.
Problem is that there simply is not a way to do this reliably. The models are all stochastic processes and the only real levers model designers have to pull involve asking the model to pretty please not do something bad.
And then it turns out that it's pretty easy to also ask models to pretty please ignore previous instructions. You can also accidentally get a model into a state where it ignores system prompt guidelines.
There is not a big #ifdef DONT_TELL_USER_TO_DIE switch in the code. Nobody truly understands how the models work under the hood and there simply is not a way to enforce 100% that a model cannot do something.
I don't always copy paste vibe coded project readme mds into Claude code and ask them to rewrite it but when I do... actually that's all I do now because my goal in life is to make wealthy overvalued companies wealthier.
Anthropic is the opposite of wealthy, the more you use their service, the more money they lose. Unless you think your precious MDs being used for training data is gonna make them rich eventually.
At the same time? I don't think so. Almost everyone talks to each other and takes notes. We know they do. The World Economic Forum is real and has a website you can access. They talk about policies like this under their "Fourth Industrial Revolution" section and don't even hide it. The same policies are repeated across much of the world on everything from smoking to driving to digital ID, regardless of who gets voted in.
Hmm. I read a semantic difference between "opt-out" and "being opted-in by default".
The first denotes an abstract policy, the second an action that has been done to you in which you were a passive participant. And this is all about our lack of agency.
You may prefer that we speak of abstract policies. But to say "there is no" about an otherwise sensible phrase implies that you think that we have agreed to stay within some fixed set of terminology. I didn't think that we had.
reply