supermatt's comments | Hacker News

The solution is parents using the parental control feature on their children’s devices.

If laws need to be made about something it should be to punish those parents who neglect to safeguard their children using the tools already available to them.

If the parental controls currently provided aren’t sufficient then they should be modified to be so - in addition to filtering, they should probably send a header to websites and a flag to apps giving an age/rating.


Australian laws decided to explicitly not blame the parents and place the responsibility on the platform. Turns out not all parents are responsible adults with a diploma in dark pattern navigation, and some kids don't even have parents. So if the goal is to help the kids, rather than have someone to blame when they get abused, you can't just pass the buck.

Curious: are you ok with the other laws in place around the world to prevent underage people from engaging in all sorts of activities? Like, for example, having to show an ID to be able to purchase alcohol?

They aren't comparable. Showing an ID to a staff member isn't stripping my anonymity. I know the retailer won't have that on file forever, tied to me on subsequent visits. Also they stop ID'ing you after a certain age ;)

There isn't any way to achieve the same digitally.


Actually there is; various age verification systems exist where the party asking does not need to process your ID, like the Dutch iDIN (https://www.idin.nl/en/), which works not unlike a digital payment: the bank knows your identity and age, just like they know your account balance, and can sign off on that kind of thing just like a payment.

I hope this becomes more widespread / standardized; the precursor for iDIN is iDEAL which is for payments, that's being expanded and rebranded as Wero across Europe at the moment (https://en.wikipedia.org/wiki/Wero_(payment)), in part to reduce dependency on American payment processors.


The privacy issue has two facets: when I show ID to get into a club or buy alcohol, the entire interaction is transient. The merchant isn't keeping that information, and the issuer of the credential (i.e. the government) doesn't know that it happened.

Just allowing a service provider to receive a third-party attestation that you're "allowed" still allows the third party to track what you are doing even if the provider can't. That's still unacceptable from a privacy standpoint; I don't want the government, or agents thereof, knowing all the places I've had to show ID.


> Just allowing a service provider to receive a third-party attestation that you're "allowed" still allows the third party to track what you are doing even if the provider can't. That's still unacceptable from a privacy standpoint; I don't want the government, or agents thereof, knowing all the places I've had to show ID.

Isn't this solvable by allowing you to be the middle man? A service asks you to prove your age, you ask the government for a digital token that proves your age (and the only thing the government knows is that you have asked for a token) and you then deliver that to the service and they only know the government has certified that you are above a certain age.

The service gets a binary answer to their question. The government only knows you have asked for a token. Wouldn't a setup like that solve the issue you're talking about?


We have a similar system in Italy, so the age verification process itself doesn't personally concern me that much, since the verification is done by the government itself and they obviously already have my information.

I'm personally more interested in the intuition people have when it comes to squaring rejecting age verification online with accepting it in a multitude of other situations (both online and offline).


My main issue is trust.

In real-world scenarios, I can observe them while they handle my ID. And systematic abuse (e.g. some video that gets stored and clearly shows it) would be a violation taken seriously.

With online providers it's barely newsworthy if they abuse the data they get.

I'm not against age verification (at least not strongly), but I'd want it in a two-party, zero-trust way. I.e. one party signs a JWT-like thing containing only one bit, and the other validates it without ever contacting the issuer about the specific token.

So one knows the identity, one knows the usage, but they are never related.
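A minimal sketch of that flow, with HMAC standing in for the issuer's asymmetric signature (an assumption for brevity; in a real public-key scheme the verifier would hold only the public key and could not mint tokens itself). All names here are illustrative:

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

# Toy issuer key. In a real deployment this would be an asymmetric keypair
# (e.g. Ed25519): the issuer signs with the private key, websites verify
# with the public key and can never forge tokens.
ISSUER_KEY = secrets.token_bytes(32)

def issue_token(over_18: bool) -> str:
    """Issuer (e.g. a government service) signs a claim carrying no
    identity: one bit, a short expiry, and a random nonce."""
    claim = {"over_18": over_18,
             "exp": int(time.time()) + 300,
             "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str) -> bool:
    """Verifier (the website) checks the signature offline. It never
    contacts the issuer, so the issuer can't log where it was used."""
    payload_b64, sig_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64decode(sig_b64), expected):
        return False
    claim = json.loads(payload)
    return bool(claim["over_18"]) and claim["exp"] > time.time()

print(verify_token(issue_token(True)))   # True
print(verify_token(issue_token(False)))  # False
```

The key property is in `verify_token`: it is pure local computation, so the issuer only ever learns "a token was requested", never where it was presented.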


> So one knows the identity, one knows the usage, but they are never related

I could be wrong but I think this is how the system we have in place in Italy works. And I agree that it's how it should work.


I know they're not comparable. I'm asking if you're also ok with those. There are also plenty of situations where you are asked to provide an ID digitally when above a certain age, for example booking hotels and other accommodations.

Personally I'm still trying to figure out where my position is when it comes to this whole debate because both camps have obvious pros and cons.


Which hotel asks for ID online? I've only ever had to provide it once, on-site while checking in.

And even then, only when I'm in foreign countries.


Happens quite often with Airbnb for example. You often don't meet the host in person so there's no way to show them a physical ID.

The difference is the internet is forever. A one-time unrecorded transaction like showing your ID at the bar is not. It is a false equivalence.

Not only is the internet forever, but what is on it grows like a cancer and gets aggregated, sold, bundled, cross-linked with red yarn, multiplied, and multiplexed. Why would you ever want cancer?


> It is a false equivalence

It's a false equivalence only if you decide to equate the two. My question wasn't worded that way. I'm curious to know if someone who opposes this type of law is also for or against other laws that deal with similar issues in other contexts.

Also, as I said in another post, there are plenty of places, online, where you have to identify yourself. So this is already happening. But again, I'm personally interested in people's intuitions when it comes to this because I find it fascinating as a subject.


Personally, I am pro-both. Even if it helps a single child not fall into a bad situation, it's worth the many other cons that come with it. <tinfoilhat>I believe that the original concept had good intent, then flowed through a monetization process before delivery.</tinfoilhat> If our weird reality eventually balances out, at least we'll have this on our side. People > Money.

I'm a lot more okay with that because alcohol purchasing doesn't have free speech implications.

It's weird how radicalized people get about banning books compared to banning the internet.


> It's weird how radicalized people get about banning books compared to banning the internet.

I don't think asking for age verification is the same as banning something. Which connection do you see between requiring age and free speech?


First, children also have a right to free speech. It is perhaps even more important than for adults, as children are not empowered to do anything but speak.

Second, it's turn-key authoritarianism. E.g. "show me the IDs of everyone who has talked about being gay" or "show me a list of the 10,000 people who are part of <community> that's embarrassing me politically" or "which of my enemies like to watch embarrassing pornography?".

Even if you honestly do delete the data you collect today, it's trivial to flip a switch tomorrow and start keeping everything forever. Training people to accept "papers, please" with this excuse is just boiling the frog. Further, even if you never actually do keep these records long term, the simple fact that you are collecting them has a chilling effect because people understand that the risk is there and they know they are being watched.


> First, children also have a right to free speech.

Maybe I'm wrong (not reading all the regulations that are coming up), but the scope of these regulations is not to ban speech but rather to prevent people under a certain age from accessing a narrow subset of the websites that exist on the web. That to me looks like a significant difference.

As for your other two points, I can't really argue against those because they are obviously valid but also very hypothetical and so in that context sure, everything is possible I suppose.

That said, something has to be done at some point, because it's obvious that these platforms are having a profound impact on society as a whole. And I don't mean just the kids; I'm talking in general.


> narrow subset of the websites on the web

Under most of these laws, most websites with user-generated content qualify.

I'd be a lot more fine with it if it was just algorithms designed for addiction (defining that in law is tricky), but AFAIK a simple forum where kids can talk to each other about familial abuse or whatever would also qualify.


> but AFAIK a simple forum where kids can talk to each other about familial abuse or whatever would also qualify.

I'm currently scrolling through this list https://en.wikipedia.org/wiki/Social_media_age_verification_... and it seems to me these are primarily focused on "social media" but missing from these short summaries is how social media is defined which is obviously an important detail.

Seems to me that an "easy" solution would be to implement some sort of size cap; that way you could easily leave old-school forums out.

It would not be a perfect solution, but it's probably better than including every site with user-generated content.


> I'd be a lot more fine with it if it was just algorithms designed for addiction (defining that in law is tricky)

An alternative to playing whac-a-mole with all the innovative bad behavior companies cook up is to address the incentives directly: ads are the primary driving force behind the suck. If we are already on board with restricting speech for the greater good, that's where we should start. Options include (from most to least heavy-handed/effective):

1) Outlaw endorsing a product or service in exchange for compensation. I.e. ban ads altogether.

2) Outlaw unsolicited advertisements, including "bundling" of ads with something the recipient values. I.e. only allow ads in the form of catalogues, trade shows, industry newsletters, yellow pages. Extreme care has to be taken here to ensure only actual opt-in advertisements are allowed and to avoid a GDPR situation where marketers with a rapist mentality can endlessly nag you to opt in or make consent forms confusing/coercive.

3) Outlaw personalized advertising and the collection/use of personal information[1] for any purpose other than what is strictly necessary[2] to deliver the product or service your customer has requested. I.e. GDPR, but without a "consent" loophole.

These options are far from exhaustive and out of the three presented, only the first two are likely to have the effect of killing predatory services that aren't worth paying for.

[1] Any information about an individual or small group of individuals, regardless of whether or not that information is tied to a unique identifier (e.g. an IP address, a user ID, or a session token), and regardless of whether or not you can tie such an identifier to a flesh-and-blood person ("We don't know that 'adf0386jsdl7vcs' is Steve at so-and-so address" is not a valid excuse). Aggregate population-level statistics are usually, but not necessarily, in the clear.

[2] "Our business model is only viable if we do this" does not rise to the level of strictly necessary. "We physically can not deliver your package unless you tell us where to" does, barely.


The chilling effect of tying identity to speech means it directly affects free speech. The Founding Fathers of the US wrote under many pseudonyms. If you think you may be punished for your words, you might not speak out.

We know we cannot trust service providers on the internet to take care of our identifying data. We cannot ensure they won't turn that data over to a corrupt government entity.

Therefore, we can not guarantee free speech on these platforms if we have a looming threat of being punished for the speech. Yes these are private entities, but they have also taken advantage of the boom in tech to effectively replace certain infrastructure. If we need smart phones and apps to interact with public services, we should apply the same constitutional rights to those platforms.

https://en.wikipedia.org/wiki/List_of_pseudonyms_used_in_the...


> If we need smart phones and apps to interact with public services, we should apply the same constitutional rights to those platforms.

Are private social media platforms "public services"? And also, you mentioned constitutional rights. Which constitution are we talking about here? These are global scale issues, I don't think we should default on the US constitution.

> We know we cannot trust service providers on the internet to take care of our identifying data.

Nobody needs to trust those. I can, right now, use my government-issued ID to identify myself online using a platform that's run by the government itself. And if your rebuttal is that we can't trust the government either, then yeah, I don't know what to say.

Because at some point, at a certain level, society is built on at least some level of implicit trust. Without it you can't have a functioning society.


> Because at some point, at a certain level, society is built on at least some level of implicit trust. Without it you can't have a functioning society.

This is somewhat central to being able to remain anonymous.

Protesters and observers are having their passports cancelled or their TSA precheck revoked due to speech. You cannot trust the government to abide by the first amendment.

Private services sell your data to build a panopticon, then sell that data indirectly to the government.

Therefore, tying your anonymous speech to a legal identity puts one at risk of being punished by the government for protected speech.


> You cannot trust the government to abide by the first amendment.

Again, this is a global issue. There is no first amendment here where I live. But the issue of the power these platforms have at a global level is a real one and something has to be done in general to deal with that. The problem is what should we do.


> The solution is parents using the parental control feature on their children’s devices.

This is a stopgap at best, and to be blunt, it's naive. They can go on their friends' phones, or go to a shop and buy a cheap smartphone to circumvent the parental controls. If the internet is locked down, they'll use one of many "free" VPN services, or just go to school / library / a friend's place for unrestricted network access.

Parents can only do so much, realistically. The other parties that need to be involved are the social media companies, ISPs, and most importantly the children themselves. You can't stop them, but they need to be educated. And even if they're educated and know all about the dangers of the internet, they may still seek it out because it's exciting / arousing / etc.

I wish I knew less about this.


>> This is a stopgap at best, and to be blunt, it's naive

Not if the controls are designed so that circumvention isn't the easy path. For example, if you could parent-control lock the camera roll to a whitelist of apps.

Want to post on social media so your friends would see? No can do, but you can send it to them through chat apps. Want to watch TikTok? Go ahead. Want to post on TikTok? It's easier to ask a parent to allow it on the list than to circumvent, and then the parent would know that their child has a TikTok presence and, if necessary, could help the child by monitoring it.

The current options for parental control are very limited indeed. You can't switch most apps to read-only, even if you are okay with your child reading them; it's posting you are worried about.

But in an ideal world there would be better options that provide more privacy and security for the child, while helping parents restrict functions if they feel their child isn't ready to use some of them.


Yeah, I think there is a way to do this elegantly. I didn't have my own device until I was 20 or so, actually, and it wasn't a big problem. As a young teenager I could use the family desktop for education and entertainment. I had online friends in my late teens I played games with, and would have done much more so if I had a more powerful computer lol. Should mention though, these friends were from in-person networks, on Discord, so I wasn't really in the public square I guess.

So I could explore things but not get into anything naughty.

When I decided to get into software dev I got my own computer, and my own phone once I had a job in dev.

Might seem pretty conservative but it worked, and I'm technical enough now. I wish I had gotten into coding earlier, but I've done alright, so :shrug: Depending on the environment for my kids I'd move the timeline back a little, but not too much. Having too much time and just the unfiltered internet to fill it is too dangerous for young teens.


In what universe do you live where children have enough disposable income to buy a smartphone?

You can get a usable smartphone for well under 100 USD on AliExpress or a reasonable secondhand one from a reputable brand for about the same price here in Norway on online trading sites. Don't teenagers get pocket money or do weekend jobs any more? My sons were grown up by the time smartphones were affordable but No. 2 son bought his own Siemens C65 with saved up pocket money when he was in his early teens.

You only need $25-30. It'll be locked to a carrier, but that doesn't matter and is perhaps preferable (no monthly fee for a subsidized device) if you are able to use wifi. There's an ETA Prime video which explores using a 2025 Moto 5G as a handheld game console: https://www.youtube.com/watch?v=5ad5BrcfHkY

tl;dw: it's quite capable for the money and could easily get on social media apps/sites.


You can get smartphones for 80 dollars (like the Moto E15).

If you make smartphones an 18+ item like alcohol, many of these problems would go away.

That would also spur the market to produce actually nice pure communication devices. Flip phones could stop being for people with AARP cards again and would give better options to adults who don't want the smart phone all the time.

And have schools stop giving kids laptops or tablets. I wonder how much of the Chromebooks-for-schools initiative was to develop a new market for Google.

It's wild seeing these opinions on hackernews of all places. Do we want future generations to know nothing about computing?

I would not be here if I didn't get my start in my early teen years.


When I was an early teen I had access to the internet but my activities weren't entirely unsupervised (and I doubt yours were either). Since it was a new technology there was a lot of discussion around how best to talk to children and make sure they felt safe reporting threats or harms to parents.

A smart phone is too disconnected of a device when compared to the desktops we all grew up on. No one is talking about fully banning <18s from the internet (at least no one serious) - it's a discussion about making sure that the way folks <18 use the internet is reasonably safe and that parents can make sure their children aren't being exposed to undue harm. That's quite difficult to do with a fully enabled smart phone.


Mine was not supervised, because of immigrant parents who didn't really know anything about computers. So: more or less entirely unsupervised.

By 16 I was regularly ignoring my parents to go to bed when I was up coding or gaming and doing dumb script kiddie stuff on IRC.

I had an adult introduce me to Astalavista (https://en.wikipedia.org/wiki/Astalavista.box.sk)

Thinking back to that I was very well aware of the fucked up part of the internet much more so than most adults around me. People did in fact meet up in person with strangers from the internet even back then.

I think it's more important to teach around age 10-14 about the dark side of the internet so that late teens can know how to stay safe. Rather than simply throwing them into the reality of it unprepared as "adults".

Also frankly I don't want to know the search history of a late teen. There's a degree of privacy everyone is entitled to.


Do you think the younger generations are properly prepared to view the internet as having a dark side? My impression has been that such an early introduction has caused those warnings to be delayed and lost, and younger folks are much more trusting of the internet than most millennials were.

It's also important to acknowledge that in our day not every kid used the internet, and usage varied wildly. Nowadays it's an expectation that everyone is at least moderately online (often required by academia), and often that their presence is tied to their real name.


I think so yes. What's acceptable changes over time. Gore "content" of WW2 is now presented to 12 year olds as history.

It's not the porn or the LiveLeak gore content that would have me worried. It's groomers and other adults with bad intentions. Not something you can easily block and not something this ID check will stop. A groomer will slow burn a social relationship with someone until they are legal adults. That's something you can only teach someone to look out for. And even adults are susceptible to this.


I remain skeptical. I have some 20s-ish cousins who seem highly aware of the potential dangers online and were pretty clear-eyed about it in their teens, but the relatives I have who are five years younger seem absurdly trusting. This is anecdotal of course, but it's concerning to me.

Many parents of preteens and young teens that I know simply do not allow their children to use social media on their own devices. Doesn't sound like that bad a solution.

Age verification clearly does not work either. Teens will circumvent it, or use alternative technologies.

Just make the internet 18+; solves everything. Demand IDs at the time of purchase.

ICE agents will LOVE this one neat hack

purchasing alcohol must be a dream come true for them lol

Weird jump between voicing your political points online and which beer you prefer, but okay.

I think firstly the kids need to get education about this subject in school. The dangers online, the tools to use to protect oneself etc.

Secondly the parents need some similar education, either face-to-face education or information material sent home.

It will not prevent everything, but we cannot expect kids and parents to already know about parental control features, uBlock Origin-type tools, or what dangers are out there.

We have to trust parents and kids to protect themselves, but to do that they need knowledge.

Of course some parents and kids don't care, or don't understand, or want to bypass any filters and protections, but at least a more informed society is for the better and a first step.


>The solution is parents using the parental control feature on their children’s devices.

Yeah but many parents are stupid and want the government to force everyone to wear oven mitts to protect their kids from their poor/lack of parenting. What do you do then?

Remember how, after a lot of men died in WW2, kids grew up in fatherless homes, which led to a rise in juvenile delinquency? Instead of admitting fatherless homes were the issue, the "researchers" blamed violent comic books, and the comics industry, under pressure from government and parents, introduced the Comics Code Authority regulations.

People and governments are more than happy to offload the blame for societal issues messing up their kids onto external factors: be it comic books, rock music, MTV, shooter videogames, now the internet platforms, etc.


And they hate that people are using different agents (like opencode) with their subscription - to the extent that they have actively been trying to block it.

With stupidity like this what do they expect? It’s only a matter of time before people jump ship entirely.


> like chat control.

That has never been passed in any form.

> the opposite of how it works here in the US.

It appears that you have conveniently forgotten about FISA, EARN IT, CLOUD act, PATRIOT act, LAED, etc, etc.



It hasn’t been passed in that “voluntary” form either.

Everything is a squircle (or whatever those continuously curved corners are called). I quite like it, but it definitely has Ive's signature style.

There was a great post the other day showing low latency end to end using Nvidia models on a single GPU with pipecat

Discussion: https://hackernews.hn/item?id=46528045

Article: https://www.daily.co/blog/building-voice-agents-with-nvidia-...


> now you need Docker-in-Docker

Or you can just mount the socket and call docker from within docker.
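For reference, the socket-mount pattern (often called "docker-out-of-docker") is a one-liner; the image and flags here are just the common form, not the only one:

```shell
# Mount the host's Docker socket so the docker CLI inside the container
# talks to the HOST daemon; containers it starts are siblings, not children.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps
```

Anything holding that socket effectively has root-equivalent control of the host daemon, which is the trade-off raised in the reply.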


Correct, which I wanted to avoid because:

> Mounting the Docker socket grants the agent full access to your Docker daemon, which has root-level privileges on your system. The agent can start or stop any container, access volumes, and potentially escape the sandbox. Only use this option when you fully trust the code the agent is working with.

https://docs.docker.com/ai/sandboxes/advanced-config/#giving...


PM for Docker Sandboxes here.

We have an updated version of Sandboxes coming out soon that uses MicroVM isolation to solve this exact problem. This next version will let your agent access a Docker instance within the MicroVM, therefore allowing you to do this securely.


It doesn’t.

It gives a very naive approach that doesn’t support any complex styling.

For that you need to wrap the input and additional styling elements in a ref’ed label.


Out of interest what's an example of styling that the radix/shadcn version enables that their approach doesn't? I was able to (AFAICT) replicate the radix docs example by just moving their styles around: https://codepen.io/mcintyre94/pen/pvbPVrP


In the example they are just using an empty <RadioGroup.Indicator/> for the pip as it is easy to target with a classname, but you can put any content in there instead for e.g. card-style radios (as used for complex selections, like a subscription tier).

By using radix, the underlying behaviour is compliant and identical for each of those implementations - you just change the content. Radix isn't looking at it like an html radio element, it is looking at it as a completely unstyled unique item selector.

The pseudo-element styling approach limits you to 3 layers: the container and the 2 pseudo-elements, none of which you can provide with meaningful content besides plain text. The best you can do is apply basic styles and set a background image. For anything else you need to use labels to either wrap the radio (in which case you can access state via sibling selectors) and/or ref them with "for" (in which case you can't access the state).
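As a sketch of the label-wrapping route (class names hypothetical), modern CSS's `:has()` even lets the wrapper itself react to the hidden input's state, on top of the sibling-selector pattern:

```html
<style>
  /* Keep the native input in the DOM but visually hidden
     (opacity, not display:none, so it stays focusable/accessible). */
  .tier input { position: absolute; opacity: 0; }
  .tier {
    display: inline-block;
    border: 1px solid #ccc;
    border-radius: 8px;
    padding: 1rem;
    cursor: pointer;
  }
  /* The wrapping label reacts to the input's checked state. */
  .tier:has(input:checked) { border-color: royalblue; }
</style>

<label class="tier">
  <input type="radio" name="plan" value="pro">
  <strong>Pro</strong> tier, unlimited projects
</label>
```

Because the input is wrapped, no `for`/`id` pairing is needed and arbitrary card-style markup can live inside the label.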


Wrapping it in a label is the idiomatic and correct way, and should be done even when not styling. Perhaps especially when not styling.

Putting an adjacent label is also possible, but scales poorly due to needing unique ids.


Can you give an example please? What kind of complexity are we talking about?


Any kind of nested markup: styled content, additional animation layers, etc.


Author here. Can you provide a screenshot or more detail?

I'd be happy to implement an HTML + CSS only solution and share it with you.

Thanks


But is that still less complex than what the author found?


> Why would you want to do this?

Have you tried completely customising a radio button with CSS? Feel free to demonstrate a heavily customised radio button style where you don’t hide the native appearance.


There's literally an example of that in the post.

> where you don’t hide the native appearance

What do you mean by this? Seems like an arbitrary requirement to set. Could you show an actual example of how this overengineered style is easier to customize?


The pseudo element solution alone is extremely limiting in its ability to be customised. For more complex customisation you will need to decorate with additional elements within a ref’ed label - and then you are effectively back to what radix does.


> and then you are effectively back to what radix does

I certainly won't need to import x elements from a library that imports y elements itself


Yes, several times. I've been specializing in front-end dev for over a decade.

I shared a simple example because Shadcn has a simple design.

You do often hide the native appearance if you need something complex, but doing that via CSS is still much simpler than a bunch of JS and a third party dependency.

If you have a specific design in mind I can show you how to do it.


I almost had the same reaction tbh! Like, I remember inline-grid and place-content, for example, were not at all supported in CSS; it would've been a nightmare to do. But modern browsers' CSS support is way more powerful than my mental model of them still is. So it's time to update that mental model.


How do you test for ad effectiveness vs annoyance? Especially so for a captive audience where they can’t leave and go elsewhere?

It seems like every market leader that gets ads eventually "optimises" towards making them not look like ads. Obviously they will be more effective if people don't realise what they are, so how do they account for annoyance (and the other negatives a user experiences) while doing these A/B tests?


>How do you test for ad effectiveness vs annoyance?

In a walled garden like Apple's? You simply don't; just make the test gradual and long enough that people get used to it.


That’s the way it appears, sure. But my question is how would you do it if you did care. What metric would there be you could measure if they have no choice but to use the product.


Why would you care about annoyance when you captured your audience?


It is a question asking how you would do that if you cared. I.e. how do you measure/quantify that annoyance as a metric when they are captive and have no choice to leave.

Traditionally you would be able to measure annoyance by reduced usage, but that’s not the case in a captive market, so how do you measure it?


What does "solved with" mean? The author claims "I've solved", so did the author solve it or GPT?


When you use a calculator, did you really solve it or was it the calculator?


With a calculator I supply the arithmetic. It just executes it with no reasoning, so I'm the solver. I can do the same with an LLM and still be the solver as long as it just follows my direction. Or I can give it a problem and let it reason and generate the arithmetic itself, in which case the LLM is effectively the solver. That's why saying "I've solved X using only GPT" is ambiguous.

But thanks for the downvote in addition to your useless comment.

