Hacker News | cosmos0072's comments

Yes, they are similar at first glance. As I wrote:

> If you know nushell, they will feel familiar as they are inspired by it - the implementation is fully independent, though.

Looking deeper, there are two main differences:

- Nushell structured pipelines are an internal construct and only work with nushell builtins. Schemesh uses actual POSIX pipes, which also allows inserting external executables into the pipelines.

- Schemesh also allows inserting arbitrary Scheme code into the pipelines. Nushell does too, in a sense, but you have to write it in the nushell language.


I am really interested in giving it a try, since I am a big fan of nushell but also really enjoyed Lisp through Clojure.

Nushell is great because it is really "batteries included", in the sense that it ships with most of what you will ever need: an http client (no need for curl), export/import of most formats (yaml, toml, csv, msgpack, sqlite, xml), etc. That really avoids the library- or tool-shopping problem. You can copy the 50MB nushell binary onto almost anything and your script will work.

This is why it is really a "modern shell": it includes what you need in 2026, not 1979.

This is also why I use nushell rather than python or lua for most scripts.

I am not familiar with the Scheme ecosystem, but I see you are implementing a lot within your shell (http client, sqlite support), which is great. But does that mean I can produce a Schemesh binary that will include Chez Scheme and run my script on any Linux or FreeBSD host?

Anyway, I think what you are doing is really great!


> I am not familiar with the Scheme ecosystem, but I see you are implementing a lot within your shell (http client, sqlite support), which is great.

Yes. Initially, I wanted to integrate http client, sqlite client etc. directly in the main shell executable.

This turned out to be impractical, because it introduces compile-time dependencies on external libraries, and because any bug, vulnerability or crash in the C code would bring down the whole shell.

My current solution is to have separate C programs (currently `http` and `parse_sqlite`) that are not compiled by default: you need to run `make utils` to build them.

> But does that mean I can produce a Schemesh binary that will include Chez Scheme and run my script on any Linux or FreeBSD host?

Yes, exactly. Schemesh is a C program that includes both the Chez Scheme REPL and shell features.


First stable release appeared on HN one year ago: https://hackernews.hn/item?id=43061183 Thanks for all the feedback!

Today, version 1.0.0 adds structured pipelines: a mechanism to exchange (almost) arbitrary objects via POSIX pipes, and transform them via external programs, shell builtins or Scheme code.

Example:

  dir /proc | where name -starts k | sort-by modified
possible output:

  ┌───────────┬────┬───────────────┬────────┐
  │   name    │type│     size      │modified│
  ├───────────┼────┼───────────────┼────────┤
  │kcore      │file│140737471590400│09:32:44│
  │kmsg       │file│              0│09:32:49│
  │kallsyms   │file│              0│09:32:50│
  │kpageflags │file│              0│10:42:53│
  │keys       │file│              0│10:42:53│
  │kpagecount │file│              0│10:42:53│
  │key-users  │file│              0│10:42:53│
  │kpagecgroup│file│              0│10:42:53│
  └───────────┴────┴───────────────┴────────┘
Another example:

  ip -j route | select dst dev prefsrc | to json1
possible output:

  [{"dst":"default","dev":"eth0"},
  {"dst":"192.168.0.0/24","dev":"eth0","prefsrc":"192.168.0.2"}]
Internally, objects are serialized before writing them to a pipe - by default as NDJSON, but it can be set manually - and deserialized when reading them from a pipe.

This allows arbitrary transformations at each pipeline step: filtering, choosing a subset of the fields, sorting with user-specified criteria, etc. And each step can be an executable program, a shell builtin or Scheme code.

If you know nushell, they will feel familiar as they are inspired by it - the implementation is fully independent, though.
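The serialize/transform/deserialize mechanism described above can be sketched in a few lines of Python. This is only an illustration of the idea, not schemesh's actual implementation, and the record contents are made up:

```python
import json
import os

def write_records(fd, records):
    """Serialize each record as one NDJSON line and write it to the pipe."""
    with os.fdopen(fd, "w") as out:
        for rec in records:
            out.write(json.dumps(rec) + "\n")

def read_records(fd):
    """Deserialize NDJSON lines read from the pipe back into objects."""
    with os.fdopen(fd) as src:
        return [json.loads(line) for line in src]

# One pipeline step: keep only zero-sized entries, like a `where` builtin.
r, w = os.pipe()
write_records(w, [{"name": "kcore", "size": 140737471590400},
                  {"name": "kmsg", "size": 0}])
kept = [rec for rec in read_records(r) if rec["size"] == 0]
# kept == [{"name": "kmsg", "size": 0}]
```

Because the wire format is plain NDJSON over an ordinary pipe, any step can equally well be an external program that reads and writes lines.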


I am from the EU, and opposed to age verification laws in general.

My stance is that if somebody is a minor, their parents or legal guardians are responsible for what they can and cannot do online, and that the mechanism to enforce that is parental control on devices.

Having said that, open-source zero-knowledge proofs are infinitely less evil (I refuse to say "better") than commercial cloud-based age monitoring baked into every OS.


> Having said that, open-source zero-knowledge proofs are infinitely less evil (I refuse to say "better") than commercial cloud-based age monitoring baked into every OS

To be honest, I worry that the framing of this legislation and ZKP generally presents a false dichotomy, where second-option bias[1] prevails because of the draconian first option.

There's always another option: don't implement age verification laws at all.

App and website developers shouldn't be burdened with extra costly liability to make sure someone's kids don't read a curse word, parents can use the plethora of parental controls on the market if they're that worried.

[1] https://rationalwiki.org/wiki/Appeal_to_the_minority#Second-...


> App and website developers shouldn't be burdened with extra costly liability

Why not? Physical businesses have liability if they provide age restricted items to children. As far as I know, strip clubs are liable for who enters. Selling alcohol to a child carries personal criminal liability for store clerks. Assuming society decides to restrict something from children, why should online businesses be exempt?

On who should be responsible, parents or businesses, historically the answer has been both. Parents have decision making authority. Businesses must not undermine that by providing service to minors.


> Why not?

This implies the creation of an infrastructure for the total surveillance of citizens, unlike age verification by physical businesses.


Spell it out: how do ID checks for specific services (where the laws I've read all require no records be retained with generally steep penalties) create an infrastructure for total surveillance? Can't sites just not keep records like they do in person and like the law mandates? Can't in-person businesses keep records and share that with whomever you're worried about?

How do you reconcile porn sites as a line in the sand with things like banking or online real estate transactions or applying for an apartment already performing ID checks? The verification infrastructure is already in place. It's mundane. In fact the apartment one is probably more offensive because they'll likely make you do their online thing even if you could just walk in and show ID.


>create an infrastructure for total surveillance

I mean, we're talking about age verification in the OS itself in some of these laws, so tell me how it doesn't.

Quantity is a quality. We're not just seeing it for porn, it's moving to social media in general. Politicians are already talking about it for all sites that allow posts, that would include this site.

So you tell me.


App and website developers having liability is an alternative to OS controls. Mandatory OS controls are OS/device manufacturers having liability. I agree that's a poor idea, and actually said as much like a year ago pointing out that this California bill was the awful alternative when people were against bills like the one from Texas. It's targeting the wrong party and creates burdens on everyone even if you don't care about porn or social media.

No, in the CA law OS controls are part and parcel of app and website developer liability.

They're separate concepts. Clearly, obviously, mandating OS controls is creating liability for OS providers, not service operators. Other states do liability for providers without mandating some other party get involved.

California is also stupid for creating liability for service/app providers that don't even deal in age restricted apps, like calculators or maps. It's playing right into the "this affects the whole Internet/all of computing" narrative when in fact it's really a small set of businesses that are causing issues and should be subject to regulation.


Knowing if the user's over 18 doesn't imply total surveillance, it only implies a user profile setting that says if they're over 18.

It implies that the user has access to the technical infrastructure that supports age verification. Sucks to be you if you can't afford a recent Apple or Android device to run the AgeVerification app.

There is also the problem of mission creep. Once the infrastructure to control access to age-restricted content is in place, other services might become out of reach. In particular, anonymous usage of online forums might no longer be possible.


That technical infrastructure: a drop-down menu on the user's account settings

The EU Digital Wallet requires hardware attestation, so it only works on locked-down government-approved OSes. That opens the door for government control of all electronic devices.

What a shame. The California one is just an input box.

Do you know what the word "infrastructure" means?

Do you know what "total surveillance" means? It doesn't mean a checkbox for over 18

I can't tell if this is a troll or not.

OS-level ability to verify the age of the person using it absolutely provides infrastructure for the OS to verify all sorts of other things. Citizenship, identity, you name it. When it's at the OS level there's no way to do anything privately on that machine ever again.


I agree that a checkbox for whether the user is over 18 opens the door to a checkbox for whether the user is a citizen, and even a textbox for the user's full name (which already exists on Linux, so you better boycott Debian now!). I don't see how such input fields are "total surveillance".

> Physical businesses have liability if they provide age restricted items to children.

Ok, suppose the strip club is the website, and the club's door is the OS.

Would you fine the door's manufacturer for teens getting into the strip club?


Dueling physical analogies is never a productive way to resolve a conversation like this. It just diverts all useful energy into arguing about which analogy is more accurate but it doesn't matter because the people pushing this law don't care about any of them and aren't going to stop even if the entire internet manages to agree about an analogy. This needs to be fought directly.

>This needs to be fought directly.

How do we fight? It seems like agree or disagree, this isn't going to stop. There's so much money behind it in a time where the have nots can barely survive as is.


The OS is not the club's door. The OS is unrelated. The strip club needs to hire someone to work their door and check ID, not point at an unrelated third party. They should have liability to do so as the service provider.

For one thing, it's fairly uncommon for children to purchase operating systems. As long as there is one major operating system with age verification, parents (or teachers) who want software restrictions on their children can simply provide that one. The existence of operating systems without age verification does not actually create a problem as long as the parents are at least somewhat aware of what is installed at device level on their child's computer, which is an awful lot easier than policing every single webpage the kid visits.

So I agree that operating systems and device developers should not be liable. That's putting a burden on an unrelated party, and a bad solution that could possibly lead to locked-down computing. I meant that liability should lie with service providers, e.g. porn distributors: the people actually dealing in the restricted item. As a rule of thumb, we shouldn't make their externalities other people's problems (assuming we agree that their product being given to children is a problem externality).

What if all the useful apps refuse to run on the childproof operating system?

Then ditch proprietary software completely and join free-as-in-freedom OSes.

I think the market is pretty good at situations like that.

> Physical businesses have liability if they provide age restricted items to children.

These are often clear cut. They're physical controlled items. Tobacco, alcohol, guns, physical porn, and sometimes things like spray paint.

The internet is not. There are people who believe discussions about human sexuality (ie "how do I know if I'm gay?") should be age restricted. There are people who believe any discussion about the human form should be age restricted. What about discussions of other forms of government? Plenty would prefer their children not be able to learn about communism from anywhere other than the Victims of Communism Memorial Foundation.

The landscape of age restricting information is infinitely more complex than age restricting physical items. This complexity enables certain actors to censor wide swaths of information due to a provider's fear of liability.

This is closer to a law that says "if a store sells an item that is used to damage property whatsoever, they are liable", so now the store owner must fear that a full can of soda could be used to break a window.


That's not a problem of age verification. That's a problem of what qualifies for liability and what is protected speech, and the same questions do exist in physical space (e.g. Barnes and Noble carrying books with adult themes/language).

So again, assuming we have decided to restrict something (and there are clear lines online too like commercial porn sites, or sites that sell alcohol (which already comes with an ID check!)), why isn't liability for online providers the obvious conclusion?


> That's a problem of what qualifies for liability and what is protected speech

The crux is we cannot decide what is protected speech, and even things that are protected speech are still considered adult content.

> why isn't liability for online providers the obvious conclusion?

We tried. The providers with power and money (Meta) are funding these bills. They want to avoid all liability while continuing to design platforms that degrade society.

This may be a little tin-foil hat of me, but I don't think these bills are about porn at all. They're about how the last few years people were able to see all the gory details of the conflict in Gaza.

The US stopped letting a majority of journalists embed with the military. In the last few decades it's been easier for journalists to embed with the Taliban than the US Military.

The US Gov learned from Vietnam that showing people what they're doing cuts domestic support. I've seen people suggesting it's bad for Bellingcat to report on the US strike on the girls' school because it would hurt morale at home.

The end goal is labeling content covering wars/conflicts as "adult content". Removing any teenagers from the material reality of international affairs, while also creating a barrier for adults to see this content. Those who pass the barrier will then be more accurately tracked via these measures.


However, there are also parts of the internet that are clear cut, like porn.

What about nude paintings/photography that aren't made with erotic intent?

Anatomical reference material for artists with real nude models?

What about Sexual education materials? Medical textbooks?

Women baring their breasts in NYC where it's legal?

Where is the clear cut line of Pornography? At what point do we say any depiction of a human body is pornographic?


Some things being unclear doesn't mean all things are unclear.

>Plenty would prefer their children not be able to learn about communism

Plenty of people would prefer that children not learn about scientology from pro-scientology cultists too. It's not that they can't know about scientology (they probably should, in fact, because knowledge can have an immunizing effect against cults)...

And it's not that they can't know about communism (they probably should, in fact, because knowledge can have an immunizing effect against cults)...


Would you also be against learning about Capitalism from the Heritage foundation?

This is a comment section about large corporations lobbying against our ability to freely use computers, and you break out the 80's Cold War propaganda edition of understanding a complicated economic system, one that intertwines with a methodology for historical analysis and has seen various levels of implementation at the governmental level.

You're either a mark or trying to find a mark.


> Physical businesses

Physical businesses nominally aren't selling their items to people across state or country borders.

Of course, we threw that out when we decided people could buy things online. How'd that tax loophole turn out?


But when they do, federal law requires age verification (at least with e.g. alcohol).

It turned out we pretty much closed the tax loophole. I don't remember an online purchase with no sales tax since the mid 00s.


ZKP methods are just as draconian, as they rely on locking down end-user devices with remote attestation, which is why they're being pushed by Google ("Safety" net, WEI, etc).

The real answer to the problem is for websites/appstores to publish tags that are legally binding assertions of age appropriateness, and then browsers/systems can be configured to use those tags to only show appropriate content to their intended user.

This also gives parents the ability to additionally decide other types of websites are not suitable for their children, rather than trusting websites themselves to make that decision within the context of their regulatory capture. For example imagine a Facebook4Kidz website that vets posts as being age appropriate, but does nothing to alleviate the dopamine drip mechanics.

There has been a market failure here, so it wouldn't be unreasonable for legislation to dictate that large websites (over a certain number of users) must implement these tags, and that popular mobile operating systems / browsers implement the parental-controls functionality. But there would be no need to cover all websites and operating systems: untagged websites simply fail as unavailable in the kid-appropriate browsers, and parents would only give devices with parental controls enabled to their kids.


> The real answer to the problem is for websites/appstores to publish tags that are legally binding assertions of age appropriateness, and then browsers/systems can be configured to use those tags to only show appropriate content to their intended user.

Agreed; recycling a comment on reasons for it to be that way:

___________

1. Most of the dollar costs of making it all happen will be paid by the people who actually need/use the feature.

2. No toxic Orwellian panopticon.

3. Key enforcement falls into a realm non-technical parents can actually observe and act upon: What device is little Timmy holding?

4. Every site in the world will not need a monthly update to handle Elbonia's rite of manhood on the 17th lunar year to make it permitted to see bare ankles. Instead, parents of that region/religion can download their own damn plugin.


Good list of more reasons! I focused on what I consider the two most important.

To expand on your #3, it also gives parents a way to have different policies on different devices for the same child. Perhaps absolutely no social media on their phone (which is always drawing them in, and can be used in private when they're supposed to be doing something else), but allowing it on a desktop computer in an observable area (i.e. accountability).

The way the proposed legislation is made, once companies have cleared the hurdle of what the law requires, parents are then left at the mercy of whatever the companies deem appropriate for their kids. Which isn't terribly surprising for regulatory-capture legislation! But since it's branded as protecting kids and helping parents, we need to be shouting about all the ways it actually undermines those goals.


> There's always another option: don't implement age verification laws at all.

Where do you go to vote for this option?


App and website developers shouldn't be burdened with extra costly liability to make sure someone's kids don't read a curse word, parents can use the plethora of parental controls on the market if they're that worried.

App and website operators should add one static header. [1] That's it, nothing more. Site operators could do this in their sleep.

User-agents must look for said header [1] and activate parental controls if they were enabled on the device by a parent. That's it, nothing more. No signalling to a website, no leaking data, no tracking, no identifying. A junior developer could do this in their sleep.

None of this will happen of course as bribery (lobbying) is involved.

[1] - https://hackernews.hn/item?id=46152074
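The user-agent side of that proposal can be sketched in a few lines. The header name and values here are hypothetical stand-ins (the linked comment [1] has the actual details); the point is that the check runs entirely on the device and sends nothing back to the site:

```python
def should_block(response_headers, parental_controls_on):
    """Block rendering only when parental controls are enabled on the device
    AND the site declares itself adult via the (hypothetical) header."""
    declared = response_headers.get("Age-Restricted", "").lower()
    return parental_controls_on and declared in ("adult", "18+")

# A device with controls enabled blocks a self-declared adult site;
# every other device ignores the header entirely.
blocked = should_block({"Age-Restricted": "adult"}, parental_controls_on=True)
```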


The concern is ubiquitous, all-pervasive surveillance, control, and manipulation by algorithmic social media, and its objective consequences for child development and well-being. Not "kids reading a bad word". Disagree all you want, but don't twist the premise.

Surely you can find a rationalwiki article for your fallacy too.


If you want to avoid all pervasive surveillance, it might be wise to not mandate all pervasive surveillance in the OS by law.

In fact, I suspect adults, and not just children, would also appreciate it if the pervasive surveillance was simply banned, instead of trying to age gate it. Why should bad actors be allowed to prey on adults?


Luckily some of these laws, which we're rallying against, make it illegal to pervasively surveil.

I must have missed that. Which of them prevent pervasive data collection on all ages?

The California age input law says that the OS shall not give more data than necessary.

And what are the consequences for application vendors that collect more information, including after the age is collected?

>Disagree all you want, but don't twist the premise.

The 2 billion dollars are the ones twisting it.


You mean the same social media companies that want this legislation and wrote it themselves? The same legislation that introduces more surveillance and tracking for everyone, including kids?

Also, I heard the same thing about video games, TV shows, D&D, texting and even youth novels. It's yet another moral panic.

From the Guardian[1]:

> Social media time does not increase teenagers’ mental health problems – study

> Research finds no evidence heavier social media use or more gaming increases symptoms of anxiety or depression

> Screen time spent gaming or on social media does not cause mental health problems in teenagers, according to a large-scale study.

> With ministers in the UK considering whether to follow Australia’s example by banning social media use for under-16s, the findings challenge concerns that long periods spent gaming or scrolling TikTok or Instagram are driving an increase in teenagers’ depression, anxiety and other mental health conditions.

> Researchers at the University of Manchester followed 25,000 11- to 14-year-olds over three school years, tracking their self-reported social media habits, gaming frequency and emotional difficulties to find out whether technology use genuinely predicted later mental health difficulties.

From Nature[2]:

> Time spent on social media among the least influential factors in adolescent mental health

From the Atlantic[3] with citations in the article:

> The Panic Over Smartphones Doesn’t Help Teens, It may only make things worse.

> I am a developmental psychologist[4], and for the past 20 years, I have worked to identify how children develop mental illnesses. Since 2008, I have studied 10-to-15-year-olds using their mobile phones, with the goal of testing how a wide range of their daily experiences, including their digital-technology use, influences their mental health. My colleagues and I have repeatedly failed to find[5] compelling support for the claim that digital-technology use is a major contributor to adolescent depression and other mental-health symptoms.

> Many other researchers have found the same[6]. In fact, a recent[6] study and a review of research[7] on social media and depression concluded that social media is one of the least influential factors in predicting adolescents’ mental health. The most influential factors include a family history of mental disorder; early exposure to adversity, such as violence and discrimination; and school- and family-related stressors, among others. At the end of last year, the National Academies of Sciences, Engineering, and Medicine released a report[8] concluding, “Available research that links social media to health shows small effects and weak associations, which may be influenced by a combination of good and bad experiences. Contrary to the current cultural narrative that social media is universally harmful to adolescents, the reality is more complicated.”

[1] https://www.theguardian.com/media/2026/jan/14/social-media-t...

[2] https://www.nature.com/articles/s44220-023-00063-7

[3] https://www.theatlantic.com/technology/archive/2024/05/candi...

[4] https://adaptlab.org/

[5] https://pubmed.ncbi.nlm.nih.gov/31929951/

[6] https://www.nature.com/articles/s44220-023-00063-7#:~:text=G...

[7] https://pubmed.ncbi.nlm.nih.gov/32734903/

[8] https://nap.nationalacademies.org/resource/27396/Highlights_...


Practically, instead of requiring that sites verify age, require that they serve adult content with standardized headers. Devices can then be marketed as "child-safe" which refuse to display content with such headers.

Yes! This is the way: give parents the ABILITY to advertise the user's age to browsers, apps and everything in between. Only target corporations, do not target open source projects. Fine websites for not using this API (e.g. porn sites). Assume an adult if the signal is not present.

> Fine websites for not using this API (ex: porn sites).

Recent posters here are clear that porn sites are setting every available signal that they are serving adult-only content.

According to them, you are targeting the wrong audience.

Facebook/Instagram studying how to get young users addicted should be of greater concern. I have my doubts about the effectiveness of age-based blocking there, though.


> Facebook/Instagram studying how to get young users addicted should be of greater concern. I have my doubts about the effectiveness of age-based blocking there, though.

Yeah quite the opposite. Once they have that formalized attestation they will move in like sharks.


Both are problems, porn sites have also targeted children and any non-enforced age “verification” on these sites is simply plausible deniability that isn’t plausible at all

In what way have porn sites targeted children? They have no disposable income to target, and the product is literally self age-gated in appeal.

No. This is not the way.

> give parents the ABILITY to advertise the users age to browsers, apps and everything in between.

Accounts and applications for services that provide content are set to country-specific age-rating restrictions (PG, 12+, 18+, whatever). That's it.

None of the things you mentioned have any reason to concern themselves with the age or age bracket of the user in front of the device. This can and will be abused. This is very obvious. Think about it.


Why should the applications get to decide if they are appropriate for a particular age? Shouldn't that be up to the parent? I shouldn't need to tell my kid: "Well, to use this compiler software, you need to set your age to 18 temporarily, because some product manager 3,000 miles away decided to rate it 18+. But, set it back to age 13 afterwards because you shouldn't be on adult sites." It's stupid.

I get what you mean, but I might have miscommunicated a bit.

Clarification: "are set to" means by the parent. "Accounts and applications for services that provide content" means media-content-providing apps like Discord, Netflix, etc. that ARE able and/or bound to rate their content.

Package managers and software installation in general are usually locked behind root/admin passwords anyway. Especially on kids' devices, their user should be non-admin, no?

So, when any piece of software is installed, it is by choice of the parent.

That's not unreasonable then?


That is what I meant by age(-rating), you are correct. However, drop country specifics: too complicated. Age brackets are enough: child, preteen, teen, adult. At around 16-17 these should be dropped anyway, since at that point people are smart enough to get around these measures and usually have non-parent-controlled devices.

This is a great solution to the stated problem. The issue is that nobody is actually trying to solve the stated problem. This is a terrible solution to the real 'problem' which is the lack of surveillance power and information control.

>This is a great solution to the stated problem. The issue is that nobody is actually trying to solve the stated problem. This is a terrible solution to the real 'problem' which is the lack of surveillance power and information control.

So on the Sony consoles I created an account for my child, and guess what: they have implemented some measures to block children from adult content.

So if Big Tech actually wanted to prevent these laws from being created, they could make it easy for a parent to set up the account for a child (most children these days have mobile devices and consoles, so they could start with those). We just need the browsers to read the age flag from the OS and put it in a header; then website owners can respect that flag.

I know someone will say that some clever teen will crack their locked-down Windows/Linux to change the flag, but this is a super rare case; we should start with the 99% of cases. Mobile phones and consoles are already locked down, so an OS API that tells the browser whether this is a child account, plus a browser header, would solve the issue. Most porn websites or similar adult sites would have no reason not to respect this header: it would make their job easier than, say, Steam always having to pop up a birth-date prompt when a game is mature.


When one clever teen figures it out, they will share it with 80% of their friend group, making that number 80% and not 1%.

Let's go back to parenting: yes, the world is a scary place if you get into it unprepared.


100% law enforcement is harmful: https://hackernews.hn/item?id=47352848

When one teen figures out how to get alcohol without ID, 80% of them will.

From https://www.edgarsnyder.com/resources/underage-drinking-stat...:

  > Nearly 75% of 12th grade students...have consumed alcohol in their lifetimes.
and

  > 85 percent of 12th graders ... say it would be “fairly easy” or “very easy” for them to get alcohol.
So, yes?

That's why I suggested kernel-enforced security (a simple syscall) that applications could use and that is incredibly hard to spoof or build workarounds for, but I got downvoted to hell.

A permission-restricted registry entry (already exists) and a syscall that reads it (already exists) for Windows, and a file that requires sudo to edit (already exists) and a syscall to read it (already exists) for Linux. Works on every distro automatically, including Android phones, since they run the Linux kernel anyway. Apple can figure it out, and they already have Apple ID.
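The Linux half of that idea can be sketched with a hypothetical world-readable, root-writable flag file (the path, bracket names, and defaulting-to-adult behavior are all assumptions, not an existing mechanism):

```python
import os
import stat

def write_flag(path, bracket):
    """Run as root/sudo: record the bracket, then make the file
    world-readable but writable only by the owner (mode 0o644)."""
    with open(path, "w") as f:
        f.write(bracket + "\n")
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

def read_flag(path, default="adult"):
    """Any process (e.g. a browser) reads the bracket; no file means adult."""
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return default
```

A real deployment might use something like a file under `/etc` and have the browser translate the value into a request header; a non-admin child account cannot rewrite a root-owned file.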


For Linux we have the users and groups concept: the distro can add an adult group, and when you give your child a Linux device and create the account, you would just choose adult or minor, or enter a birthdate. No freedom lost for the geeks who install Ubuntu or Arch for themselves, and we do not need extra hardware for the rare cases where a child has access to some computer and can also wipe it and install Linux on it. Distro makers can set the live USB default user as not adult. Good-enough solutions are easy, but I do not understand why Big Tech (Google and Apple) did not work on a standard for this. (Maybe both Apple's and Google's profits would suffer if they did.)
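The groups-based variant can be sketched like this; the "minor" group name is hypothetical, and the inversion (marking minors rather than adults) is just one possible convention. Since only root can edit /etc/group, a non-admin child account cannot change its own bracket:

```python
import grp
import os

def age_bracket_from_groups(minor_group="minor"):
    """Report 'minor' if the current user belongs to a (hypothetical)
    'minor' Unix group, otherwise 'adult'."""
    for gid in os.getgroups():
        try:
            if grp.getgrgid(gid).gr_name == minor_group:
                return "minor"
        except KeyError:  # gid with no name in the group database
            continue
    return "adult"
```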

Definitely the latter, exploiting kids (roblox) is very very profitable.

Three states now implement this solution that you just called a great solution, and most of HN still hates it. Are they seeing something that you're not? https://hackernews.hn/item?id=47357294

Psst, I was talking about zero-knowledge proofs. Read twice before talking.

Can't see where you said that. You definitely commented about parental controls.

This is what I think. I saw someone else on HN suggested provide an `X-User-Age` header to these sites, and provide parents with a password protected page to set that in the browser/OS.

Responsibility should be on the website to not provide the content if the header is sent with an inappropriate age, and for the parent to set it up on the device, or to not provide a child a device without child-safe restrictions.

It seems very obviously simple to me, and I don't see why any of these other systems have gained steam everywhere all of a sudden (apart from a desire to enhance tracking).
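A server-side sketch of what honoring such a header could look like — the `X-User-Age` header name comes from the suggestion above; the threshold and the deny-by-default fallback are assumptions, not any site's actual policy:

```python
def allow_adult_content(headers, adult_age=18):
    """Decide whether to serve age-restricted content based on the
    hypothetical X-User-Age header; deny when absent or malformed."""
    value = headers.get("X-User-Age")
    if value is None:
        return False  # header not set: deny here (site policy could differ)
    try:
        return int(value) >= adult_age
    except ValueError:
        return False  # malformed value: treat as underage
```

The whole scheme hinges on the device being set up so the child cannot change the header, which is exactly the parental-controls part.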


Seems simple until you try to figure out what's allowed for what age, which surely will differ by country at a minimum.

To me that's a geo-ip lookup on the online service, which they kinda need to do anyways so seems fine?

(if there are further restrictions then it gets messy, but I feel like that's the current state of things anyways? at least for online services which I'm mostly speaking about here.)

Mostly my point is I don't think attestation is required. I think that responsibility should fall upon parents, and I don't want to have to give my ID to any online sites, because I don't remotely trust them to keep that safe. I'm less worried about them storing a number I send them about how old I am.


There are ~195 countries with 195 sets of laws.

And 50 US states.


Yeah, and frankly if you have a porn site (for example) you already need to deal with the different country restrictions.

Having no restrictions would be great, but since a bunch of countries are passing these laws I'd appreciate having a minimally invasive version instead.


That is correct: the parents are meant to pass on morals and parent the child. If the parents fall through, there is the community, such as church, neighbors, schools, etc. The absolute last resort is government or law-enforcement intervention, and this should be considered an extreme situation. But as John Adams noted, "Our Constitution was made only for a moral and religious people" -- in other words, all these laws start to rip at the seams when the fabric of society, the people who make up the society, no longer have morals. But I appreciate this article in general; we need to fight against mass surveillance at all costs.

>all these laws start to rip at the seams when the fabric of society, the people who make up the society no longer have morals

Morals like owning slaves, right?

A moral system that requires everyone to be white Christian males isn't a moral system, it's a theocracy.


"mechanism to enforce that is parental control on devices."

Meh, I use it, but it's super annoying, and I think that with my daughter I'll take a different approach (but it will be some years before that is relevant).

On Android: The kid can easily go on Snapchat (after approval of install of course, and then you can just see their "friends") before Pokemon Go (just a pain to get working, it keeps presenting some borked version which led to a lot of confusion at first). I just lied about his age in a bunch of places at some point. Snapchat is horrible and sick from our experiences in the first week.

On Windows: It's a curated set of websites (and no Firefox) or access to everything. It's not even workable just for school. Granting kids access to our own Minecraft servers: my god, I felt dirty about what the other parents had to go through to enable that.


> Granting kids access to our own Minecraft servers: my god, I felt dirty about what the other parents had to go through to enable that.

This is a hobby horse of mine to the point that coworkers probably wish I'd just stfu about Minecraft - but holy shit is it crazy how many different things you need to get right to get kids playing together.

I genuinely have no idea how parents without years of "navigating technical bullshit" experience ever manage to make it happen. Juggling Microsoft accounts, Nintendo accounts, menu-diving through one of 37 different account details pages, Xbox accounts, GamePass subscriptions - it's just fucking crazy!


I always wonder about this. I read most dialogs (as I do) but man, the sanity of most people must require that they just next next next this stuff right? Perhaps they even let their kids do it instead.

If you're using something like a Fire tablet and you set them up with a kids account, that's not how it works. If you next through everything, your kid cannot play Minecraft online, not even on your own little private server.

Getting an actual kids account to work online with Minecraft involves setting the right permissions across 2-4 websites and 1-3 companies. I think it took me around 4 hours of trial and error to get it working.


I might be wrong about this, but at least in my experience you just can't "next next next." There's too much complexity!

I'm essentially the maintainer of a series of accounts for each kid, these days. Woe unto anyone without a password manager!


> My stance is that if somebody is a minor, his/her/their parents/tutors/legal guardian are responsible for what they can/cannot do online, and that the mechanism to enforce that is parental control on devices.

Imho there is a place for regulation in that, actually. Devices that parents are managing as child devices could include an OS API and browser HTTP header for "hey is this a child?" These devices are functionally adminned by the parent so the owner of the device is still in control, just not the user.

Just like the cookie thing - these things should all be HTTP headers.

"This site is requesting your something, do you want to send it?

Y/N [X] remember my choice."

Do that for GPS, browser fingerprint, off-domain tracking cookies (not the stupid cookie banner), adulthood information, etc.

It would be perfectly reasonable for the EU to legislate that. "OS and browsers are required to offer an API to expose age verification status of the client, and the device is required to let an administrative user set it, and provide instructions to parents on how to lock down a device such that their child user's device will be marked as a child without the ability for the child to change it".

Either way, though, I'm far more worried about children being radicalized online by political extremists than I am about them occasionally seeing a penis. And a lot of radicalizing content is not considered "adult".


You could make the same case for parental control as evil.

"You're reading about evolution! Not in my house"


Parents already have a lot of control on children' education.

Examples: most children believe in the same religion as their parents, and can visit friends and places only if/when allowed by their parents.

This is simply extending the same level of control to the internet.

Government-mandated restrictions are completely another level.


I have personally worked with parents trying to prevent their children from using social media and it’s nearly impossible. Kids are almost always more tech savvy than their parents and unlike smoking it’s nearly impossible to tell a child is doing so without watching them 100% of the time.

Who checks your age when you try to buy alcohol?

Who checks your age when you want to see an R-rated movie?

This is simply extending the same level of control to the internet.

More control for parents is a completely different level.


There are no laws preventing children from seeing R-rated movies with or without their parents, theaters implement that policy by choice.

Sort of by choice. Often, the municipality won't let you build a movie theater unless you pinky-promise to abide by the rating system.

They rarely enforce it, but if it gets out of hand, the city will start getting on your case about it.


Welcome to the world where many countries aren’t the US

The OP is about legislation and companies in the US

And parent is from the EU and talks about age control in general.

Does the US have a zero-knowledge proof system that is mentioned in the discussion?


Disingenuous, but I'm sure you know that and were being intentionally so. The government is not using alcohol age laws as a justification to place a camera in your bedroom to make sure you aren't sneaking booze, but it is using internet age laws as a justification to surveil your entire life in a world which is becoming increasingly digital-mandatory to participate in government services or the economy. Nobody had a problem with internet age laws when "are you over 13? yes/no" was legally sufficient.

Is California doing this?

To my understanding, no, but California is crossing the Rubicon by compelling open-source / non-commercial software developers to include something in their offline software, which I believe has never been done before by any jurisdiction. Opening that floodgate is a clear slippery slope in this environment, because the week after it is implemented, Texas will be mandating that all operating systems come installed with JesusTracker. Apparently New York is already working on sliding down that slope, in fact, and for good measure wants to mandate anti-circumvention measures too.

You're missing the point

> Having said that, open-source zero-knowledge proofs are infinitely less evil (I refuse to say "better") than commercial cloud-based age monitoring baked into every OS

Parent prefers more control by parents over zero-knowledge proof


If that was your point, I don't think your previous comment did a very good job of making it at all.

I do think parental controls can be and are abused for evil, but they're still better than the alternative. Zero-knowledge proof is not an alternative, and to suggest that it is is misunderstanding the situation. These laws are proposed and funded by people who want complete surveillance of the population. Zero-knowledge proof is, therefore, explicitly contrary to the goal and will never be implemented under any circumstances. Suggesting that it can be muddies the issue and tricks people into supporting legislation that exists only to be used against them.

In a benevolent dictatorship, sure, go for a zero-knowledge proof verification as your solution. In the reality of democracy, where politicians are corporate puppets who cloak surveillance laws in "think of the children" to rally support from the masses, we need to convince people to see through the lie and reject the proposals outright while reassuring them that they can protect the children themselves via parental controls. You will never be able to sufficiently inform 50.1% of the population of any country of what zero-knowledge proof even means, let alone convince them to support age verification laws but strictly conditional on ZKP requirements. That level of nuance is far too much to ask of millions of people who are not technically-informed, and idealism needs to give way to pragmatism if we wish to avoid the worst-case scenario.


Same here: an EU citizen who thinks parents should do some parenting, after all. However, try confronting "modern" parents with your position. Many of them will fight you immediately, because they think the state is supposed to do their work... It's a very concerning development.

> My stance is that if somebody is a minor, his/her/their parents/tutors/legal guardian are responsible for what they can/cannot do online

As a parent, sure, that is my stance as well. What... what other stances are there even? How would they work?


The steelman argument is that parents are not necessarily up to date on the technology, and cannot reasonably be expected to supervise teenagers 24/7 up to the age of 18. Compare movie ratings or alcohol laws, for example: there's a non-parental obligation on third parties not to provide alcohol to children or let them in to R18 showings.

But the implementation matters, and almost all of these bills internationally are being done in bad faith by coordinated big-money groups against technologically illiterate and reactionary populist governments.

(if we really want to get into an argument, there's what the UK calls "Gillick competence": the ability of children to seek medical treatment without the knowledge and against the will of their parents)


In the UK, parents can give children alcohol below the age of 18. Parents get to make the final decision at home, so I do not think it's really comparable.

I would personally favour allowing parents to buy drinks for children below the current limits (18 without a meal, 16 for wine, beer and cider with a meal).

The alternative to this is empowering parents by regulating SIM cards (child safe cards already exist) and allowing parents to control internet connectivity either through the ISP or at the router - far better than regulating general purpose devices. The devices come with sensible defaults that parents can change.


Suggesting that regulating ISPs or SIM cards or even public WiFi is an alternative to the OSA or age and identity verification is ignoring the reality that mandatory filtering of internet connections has been a legal requirement for a very long time now, and it has been 'voluntary' (by the ISPs themselves, opt-out by customers, pushed on by Cameron) for even longer.

It is not a new or novel concept. There are legal adults taking part in these conversations that are simply too young to have ever experienced internet connections that weren't restricted and filtered mandated by legislation, and they would have been teenagers that were old enough to have a say in the conversation when the Conservatives were debating the OSA in parliament.

Mobile internet connections have been filtered since 2004 even, so it's entirely likely that this would also be true for some people that are pushing 30 today. The debate on whether it's appropriate for internet filters to block access to Childline, the NSPCC, the Police, the BBC, Parliament, etc, is 15 years old at this point. Fifteen.

The false dichotomy that exists between the entirely authoritarian measures of the OSA and the still fairly authoritarian measures of mandatory filtering serves only the interests of borderline monopolistic American tech companies who are in a position to weather such regulations as they stifle and snuff out any possibility of a less harmful web ecosystem, and people will cheer it on as they believe the social media platforms they blame for causing harm will themselves be harmed by the very laws they are writing.

The real alternative is not having mandatory filtering but instead voluntary filtering by the parents themselves, which is what everybody seems to think they are arguing for, and that conversation is long since dead. It is entirely beside the point, but contrast it with alcohol laws. The UK is one of the few countries in Europe that has consumption laws both in private(+) and in public, whereas half of Europe only has consumption laws in public while the other half has no consumption laws in either private or public. America on the other hand has many states that prohibit under-21s from drinking alcohol even in private. A better comparison may be content ratings, which are largely entirely voluntary and not a legal requirement.

(+) It's 5+ so there may as well be no laws on private consumption.


That steelman still stands on a core assumption that it's both the state's responsibility and right to step in and parent on everyone's behalf.

Maybe a majority of people today agree with that, but I know I don't and I never hear that assumption debated directly.


> I never hear that assumption debated directly.

The idea of the "nanny state" has been debated a lot, and this seems like a very literal example of that. But once some status quo is firmly entrenched, debate about it tends to die down because the majority of people no longer care enough about it.


The point of having a state at all is to create a framework where people are set up to succeed.

Everyone shouldn't have to lose their privacy just because you're too lazy to use parental controls or give your kids devices that are made for children.

Entering your child's age when you create their user account is a loss of privacy?

The current bills (e.g. the NY one at https://www.nysenate.gov/legislation/bills/2025/S8102/amendm... ) require age assurance that goes beyond mere assertions, so when creating your (adult) user account you would be required to give away your privacy to prove your age. If you can't implement a way for anonymous/pseudonymous people to verify that they indeed are adults (and not kids claiming to be so), these bills prohibit you from manufacturing internet-connected systems that can be used by anonymous/pseudonymous users.

We are also talking about the Illinois one, which doesn't do that

Where exactly are you getting that goal of a state from? Maybe that's one of the goals today, historically I don't think it was anywhere on the list.

Then frankly you haven’t seen many debates around age verification as it’s the main thing discussed every time it’s brought up

You are correct, I didn't pay close attention to any EU debates that may have happened, I haven't lived there in years. In the US I haven't seen much debate at all, regardless of the bill really we don't seem to have leaders openly and honestly debate anything.

The other stance is that most parents are not capable of winning a battle against tech giants for the mind of their children, just as parents were not capable of winning this fight with tobacco and alcohol companies.

The tech giants want this. They drafted the bill. They paid tens of millions of dollars to promote it. Think about that for a minute.

They want it because it absolves them of responsibility for what their app does to kids. They can then just point to the existence of an already working mechanism for parents to intervene. The alternative would be for each app to implement stringent age verification or redesign itself to avoid addictive patterns. Neither option is good for their earnings.

If this had anything to do with reining in tech giants, it would be done for adults as well, without restricting anyone's rights (well, aside from the people-corporations', of course). The issues are the manipulative algorithmic datafeeds, advertising, and datamining. Age verification does nothing for any of this and only provides the tech giants and governments the means to secure even more control over people.

ignore parent, outsource parenting to gov verification authority

TBH, many parents have done exactly that by giving phones/tablets to kids already in strollers


The latter is true, but we cannot regulate the vast majority of parents on the basis of the worst.

I'll go further. As a human being, I am responsible for myself. I grew up in an extremely abusive, impoverished, cult-like religious home where anything not approved by White Jesus was disallowed.

I owe everything about who I am today to learning how to circumvent firewalls and other forms of restriction. I would almost certainly be dead if I hadn't learned to socialize and program on the web despite it being strictly forbidden at home. Most of my interests, politics and personality were forged at 2am, as quiet as possible, browsing the web on live discs. I now support myself through those interests.

We're so quick to forget that kids are people, too. And today, they often know how to safely navigate the internet better than their aging caretakers who have allowed editorial "news" and social media to warp their minds.

Even for people who think they're really doing a good thing by supporting these kinds of insane laws that are designed to restrict our 1A rights: the road to hell is paved with good intentions.


This is obviously where it's going to go, at least in the US. Things that are non-religious, non-Christian especially, pro-LGBT, and similar will be disproportionately pulled under "adult content" to ensure that children are not able to be exposed to unapproved ideas during formative years.

That has already been going on for decades, with satanic panic and banning of library books.

The scary thing about legislation and software is that they can negatively reinforce each other if not properly designed and implemented. We run the risk of the codification of morality-of-the-week becoming deeply embedded into the compute stack, which will not self-correct until there is a great political movement for the liberation of compute.

Exactly. Having lived through it already, I know what it did to me and I would never wish that upon another child. The internet saved me from being a religious, colonial, racist piece of shit like the rest of my family.

[flagged]


Did you have an actual point to make or did you just choose two random words and hurl an insult with zero context? Were you looking for an actual discussion or is this all you had to offer?

I need this *so* often that I programmed my shell to execute 'cd ..' every time I press KP/ i.e. '/' on the keypad, without having to hit Return.

Other single-key bindings I use often are:

KP* executes 'ls'

KP- executes 'cd -'

KP+ executes 'make -j `nproc`'


How? Readline macros?


Literally with my own shell: https://github.com/cosmos72/schemesh
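For readers without schemesh, a rough approximation is possible with plain GNU readline macros in ~/.inputrc — note that the keypad escape sequences below are terminal-dependent guesses (VT220 application-keypad mode); check yours by pressing Ctrl-V followed by the key:

```
# ~/.inputrc fragment (escape sequences vary by terminal and keypad mode)
"\eOo": "cd ..\n"             # KP/ -> cd ..
"\eOj": "ls\n"                # KP* -> ls
"\eOm": "cd -\n"              # KP- -> cd -
"\eOk": "make -j `nproc`\n"   # KP+ -> parallel make
```

Unlike schemesh's native bindings, this only works when the terminal is in application keypad mode and readline is the line editor.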


The math looks suspicious to me, or at least how it is presented.

If, as stated, accessing one register requires ~0.3 ns and available registers sum up to ~2560 B, while accessing RAM requires ~80 ns and available RAM is ~32 GiB, then it means that memory access time is O(N^1/3) where N is the memory size.

Thus accessing the whole N bytes of memory of a certain kind (registers, or L1/L2/L3 cache, or RAM) takes N * O(N^1/3) = O(N^4/3).

One could argue that the title "Memory access is O(N^1/3)" refers to memory access time, but that contradicts the very article's body, which explains in detail "in 2x time you can access 8x as much memory" both in text and with a diagram.

Such a statement would require that accessing the whole N bytes of memory of a certain kind takes O(N^1/3) time, while the measurements themselves produce a very different estimate: accessing the whole N bytes of memory of a certain kind takes O(N^4/3) time, not O(N^1/3).
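The figures quoted above can be checked directly; a quick back-of-the-envelope comparison (using exactly the numbers from this comment, not new measurements):

```python
# Compare the quoted latency ratio against the cube root of the size ratio.
reg_bytes, reg_ns = 2560, 0.3            # registers: ~2560 B at ~0.3 ns
ram_bytes, ram_ns = 32 * 2**30, 80.0     # RAM: ~32 GiB at ~80 ns

predicted = (ram_bytes / reg_bytes) ** (1 / 3)  # O(N^1/3) prediction
observed = ram_ns / reg_ns                      # quoted latency ratio

print(f"predicted latency ratio: {predicted:.0f}")  # ~238
print(f"observed  latency ratio: {observed:.0f}")   # ~267
```

The two ratios land within ~12% of each other, which is the per-access cube-root scaling; the disagreement is only about whether a full scan then costs N times that.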


I did not interpret the article as you did, and thought it was clear throughout that the author was talking about an individual read from memory, not reading all of a given amount of memory. "Memory access, both in theory and in practice, takes O(N^⅓) time: if your memory is 8x bigger, it will take 2x longer to do a read or write to it." Emphasis on "a read or write".

I read "in 2x time you can access 8x as much memory" as "in 2x time you can access any byte in 8x as much memory", not "in 2x time you can access the entirety of 8x as much memory". Though I agree that the wording of that line is bad.

In normal big-O notation, accessing N bytes of memory is already O(N), and I think it's clear from context that the author is not claiming that you can access N bytes of memory in less time than O(N).


Nobody has ever had this confusion about the access time of hash tables except maybe in the introductory class. What you’re describing is the same reasoning as any data structure. Which is correct. Physical memory hierarchies are a data structure. Literally.

I’m confused by GP’s confusion.


This is completely false. All regularly cited algorithm complexity classes are based on estimating a memory access as an O(1) operation. For example, if you model memory access as O(N^1/3), linear search worst case is not O(N), it is O(N^4/3): in the worst case you have to make N memory accesses and N comparisons, and if each memory access takes N^1/3 time, this requires N^4/3 + N time, which is O(N^4/3).


> linear search worst case is not O(N), it is O(N^4/3)

No, these are different 'N's. The N in the article is the size of the memory pool over which your data is (presumably randomly) distributed. Many factors can influence this. Let's call this size M. Linear search is O(N) where N is the number of elements. It is not O(N^4/3), it is O(N * M^1/3).

There's a good argument to be made that M^(1/3) should be considered a constant, so the algorithm is indeed simply O(N). If you include M^(1/3), why are you not also including your CPU speed? The speed of light? The number of times the OS switches threads during your algorithm? Everyone knows that an O(N) algorithm run on the same data will take different speeds on different hardware. The point of Big-O is to have some reasonable understanding of how much worse it will get if you need to run this algorithm on 10x or 100x as much data, compared to some baseline that you simply have to benchmark because it relies on too many external factors (memory size being one).

> All regularly cited algorithm complexity classes are based on estimating a memory access as an O(1) operation

That's not even true: there are plenty of "memory-aware" algorithms that are designed to maximize the usage of caching. There are abstract memory models that are explicitly considered in modern algorithm design.


You can model things as having M be a constant - and that's what people typically do. The point is that this is a bad model, that breaks down when your data becomes huge. If you're tying to see how an algorithm will scale from a thousand items to a billion items, then sure - you don't really need to model memory access speeds (though even this is very debatable, as it leads to very wrong conclusions, such as thinking that adding items to the middle of a linked list is faster than adding them to the middle of an array, for large enough arrays - which is simply wrong on modern hardware).

However, if you want to model how your algorithm scales to petabytes of data, then the model you were using breaks down, as the cost of memory access for an array that fits in RAM is much smaller than the cost of memory access for the kind of network storage that you'll need for this level of data. So, for this problem, modeling memory access as a function of N may give you a better fit for all three cases (1K items, 1G items, and 1P items).

> That's not even true: there are plenty of "memory-aware" algorithms that are designed to maximize the usage of caching.

I know they exist, but I have yet to see any kind of popular resource use them. What are the complexities of Quicksort and Mergesort in a memory aware model? How often are they mentioned compared to how often you see O(N log N) / O(N²)?


> It is not O(N^4/3), it is O(N * M^1/3). There's a good argument to be made that M^(1/3) should be considered a constant

Math isn't mathing here. If M^1/3 >> N, like in memory-bound algorithms, then why should we consider it a constant?

> The point of Big-O is to have some reasonable understanding of how much worse it will get if you need to run this algorithm on 10x or 100x as much data

And this also isn't true and can be easily proved by contradiction. O(N) linear search over array is sometimes faster than O(1) search in a hash-map.


This is untrue; time complexity classes are measured with respect to specific operations. For example, comparison sorts are typically cited as O(n log n), but this refers to the number of comparisons, irrespective of the complexity of memory accesses. It's possible that comparing two values has time complexity O(2^n), in which case sorting such a data structure would be impractical, and yet the sorting algorithm itself would still have time complexity O(n log n), because time complexity is stated with respect to some given set of operations.

Another common scenario where this comes up and actually results in a great deal of confusion and misconceptions are hash maps, which are said to have a time complexity of O(1), but that does not mean that if you actually benchmark the performance of a hash map with respect to its size, that the graph will be flat or asymptotically approaches a constant value. Larger hash maps are slower to access than smaller hash maps because the O(1) isn't intended to be a claim about the overall performance of the hash map as a whole, but rather a claim about the average number of probe operations needed to lookup a value.

In fact, in the absolute purest form of time complexity analysis, where the operations involved are literally the transitions of a Turing machine, memory access is not assumed to be O(1) but rather O(n).


Time complexity of an algorithm specifically refers to the time it takes an algorithm to finish. So if you're sorting values where a single comparison of two values takes O(2^n) time, the time complexity of the sort can't be O(n log n).

Now, you very well can measure the "operation complexity" of an algorithm, where you specify how many operations of a certain kind it will do. And you're right that typically comparison sorting algorithms complexities are often not time complexities, they are the number of comparisons you'll have to make.

> hash maps, which are said to have a time complexity of O(1), but that does not mean that if you actually benchmark the performance of a hash map with respect to its size, that the graph will be flat or asymptotically approaches a constant value.

This is confused. Hash maps have an idealized O(1) average-case complexity, and O(n) worst-case complexity. The difficulty with pinning down the actual average-case complexity is that you need to trade off memory usage vs. chance of collisions, and people are usually quite sensitive to memory usage, so they will end up having more and more collisions as n gets larger, while the idealized average-case analysis assumes that the hash function has the same chance of collisions regardless of n. Basically, the claim "the average-case time complexity of hash tables is O(1)" is only true if you maintain a very sparse hash table, which means its memory usage will grow steeply with size. For example, if you want to store thousands of arbitrary strings with a low chance of collisions, you'll probably need a bucket array with a size like 2^32. Still, if you benchmark the performance of hash-table lookup with respect to its size, while using a very good hash function and maintaining a very low load ratio (so, using a large amount of memory), the graph will indeed be flat.
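To illustrate the load-factor point, here is a toy open-addressing table (linear probing, assumed fixed 50% load): the average probe count per successful lookup stays roughly flat as the table grows, which is what the O(1) average-case claim is really about.

```python
import random

def avg_lookup_probes(n_items, load=0.5, seed=42):
    """Build a linear-probing table at the given load factor and
    return the average number of probes per successful lookup."""
    rng = random.Random(seed)
    size = int(n_items / load)
    table = [None] * size
    keys = rng.sample(range(10**9), n_items)
    for k in keys:                 # insert with linear probing
        i = k % size
        while table[i] is not None:
            i = (i + 1) % size
        table[i] = k
    probes = 0
    for k in keys:                 # count probes for each lookup
        i = k % size
        p = 1
        while table[i] != k:
            i = (i + 1) % size
            p += 1
        probes += p
    return probes / n_items

for n in (1_000, 10_000, 100_000):
    print(n, round(avg_lookup_probes(n), 2))
```

At a fixed 50% load, the expected probe count is ~1.5 regardless of n; what a latency benchmark measures on top of that is the memory hierarchy, not the data structure.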


> if you model memory access as O(N^1/3), linear search worse case is not O(N), it is O(N^4/3)

This would be true if we modeled memory access as Theta(N^{1/3}) but that's not the claim. One can imagine the data organized/prefetched in such a way that a linear access scan is O(1) per element but a random access is expected Theta(N^{1/3}). You see this same sort of pattern with well-known data structures like a (balanced) binary tree; random access is O(log(n)), but a linear scan is O(n).


> "in 2x time you can access 8x as much memory"

is NOT what the article says.

The article says (in three ways!):

> if your memory is 8x bigger, it will take 2x longer to do a read or write to it.

> In a three-dimensional world, you can fit 8x as much memory within 2x the distance from you.

> Double the distance, eight times the memory.

The key words there are a, meaning a single access, and distance, which is a measure of time.

N is the amount of memory, and O() is the time to access an element of memory.


The operation GP is thinking of is a full scan, and that will always take n · n^(1/3) lower-bound time. Though if done right, all of that latency will be overlapped with computation, allowing people to delude themselves into thinking it doesn't matter.

But when something is constrained two or three ways, it drastically reduces the incentive to prioritize tackling any one of the problems with anything but small incremental improvements.


> The operation GP is thinking of is a full scan, and that will always take n(n^(1/3)) lower bound time.

It doesn't. Full scans are faster than accessing each memory address in an unordered way.

Let's look at a Ryzen 2600X. You can sustain 32 bytes per cycle from L1, 32 bytes per cycle from L2, and 20 bytes per cycle from L3. That's 64KB, 512KB, and 16MB caches all having almost the same bandwidth despite very different latencies.

You can also imagine an infiniband network that fills 2 racks, and another one that fills 50000 racks. The bandwidth of a single node is the same in both situations, so even though latency gets worse as you add more nodes and hops, it's going to take exactly O(n) time for a single thread to scan the entire memory.

You can find correlations between memory size and bandwidth, but they're significantly weaker and less consistent than the correlations between memory size and latency.


I don’t think I can agree to ignore the latency problem. Intermachine Latency is the source of a cube root of N term.


Once you start receiving data in bulk, the time it takes is quantity of data divided by your connection speed. Latency doesn't factor in.

Technically you need to consider the time it takes to start receiving data. Which would mean your total time is O(n + ∛n). Not O(n * ∛n). But not all nodes are ∛n away. The closest nodes are O(1) latency. So if you start your scan on the close nodes, you will keep your data link saturated from the start, and your total time will be O(n). (And O(n + ∛n) simplifies to O(n) anyway.)
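The gap between paying latency once (additive) and paying it on every access (multiplicative) is easy to check numerically; a toy Python comparison with purely illustrative cost models:

```python
def additive(n):
    # pay the worst-case latency once, then stream: n + n^(1/3)
    return n + n ** (1 / 3)

def multiplicative(n):
    # pay the latency on every access: n * n^(1/3)
    return n * n ** (1 / 3)

for n in (10**3, 10**6, 10**9):
    # per-element cost: additive stays ~1, multiplicative grows as n^(1/3)
    print(n, additive(n) / n, multiplicative(n) / n)
```

The additive model's per-element cost converges to a constant, which is why O(n + ∛n) simplifies to O(n), while the multiplicative model's per-element cost keeps growing.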


I don't think we have such a lower bound: from a “theoretical" point of view (in the sense of the post), your processor could walk on the cube of memory and collect each bit one by one. Each move+read costs O(1) (if you move correctly), so you get O(n) to read the whole cube of n bits.

If you require the full scan to be done in a specific order, however, then indeed, in the worst case, you have to go from one end of the cube to the other between reads, which incurs an O(n^{1/3}) multiplicative cost. Note that this does not constitute a theoretical lower bound: it might be possible to detect those jumps and use the time spent in a corner to store a few values that will become useful later. This does look like a fun computational problem; I don't know its exact worst-case complexity.


I think the article glossed over a bit about how to interpret the table and the formula. The formula is only correct if you take into account the memory hierarchy, and think of N as the working set size of an algorithm. So if your working set fits into L1 cache, then you get L1 cache latency, if your working set is very large and spills into RAM, then you get RAM latency, etc.
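One classic way to see the working-set effect is a pointer-chasing microbenchmark, where each access depends on the previous one so the CPU cannot prefetch. A rough Python sketch (interpreter overhead dominates here, so absolute numbers mean little; the shape is much clearer in C or Rust):

```python
import random
import time

def chase(size, steps=1_000_000):
    """Follow a random permutation of `size` slots; each access
    depends on the previous result, defeating prefetching.
    Returns average seconds per access."""
    perm = list(range(size))
    random.shuffle(perm)
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = perm[i]
    return (time.perf_counter() - t0) / steps

for size in (1 << 10, 1 << 16, 1 << 22):  # working sets from KBs to MBs
    print(size, chase(size))
```

As the working set grows past each cache level, per-access time steps up, tracing out the hierarchy the formula is really describing.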


I hate big O notation. It should be O(N) = N^(1/3)

That way O is the function and it's a function of N.

The current way it's notated, O is the effectively the inverse function.


Yes, the notation seems wrong, but that is because O(...) is a set, not a function. The function is what goes inside.

So it should be f€O(...) instead of f=...

(don't know how to write the "belongs" symbol on iOS).


Here you go: ∈ (so f ∈ (...))


That helps but how do I learn it?


O(N) is not a function, though, so the notation is doing a good job of saying this. When we say "the complexity class of an algorithm is O(log N)", we mean "the function WorstCaseComplexity(N) for that algorithm is in the class O(log N)". The function WorstCaseComplexity(N) measures how much time the algorithm will take for any input of size N in the worst-case scenario. We can also say "the average-case complexity of quicksort is O(N log N)", which means "the function AverageCaseComplexity(N) for quicksort is in the class O(N log N)", where AverageCaseComplexity(N) is a function that measures how much time quicksort will need to finish for an "average" input of size N.

Saying that a function f(n) is in the class O(g(n)) means that there exist an M and a C such that f(n) < C * g(n) for any n > M. That is, it means that, past some point, f(n) is always lower than C * g(n), for some constant C. For example, we can say the function f(n) = 2n + 1 is in the class O(n), because 2n + 1 < 7n for any n > 1. Technically, we can also say that our f(n) is in the complexity class O(2^n), because 2n + 1 < 2^n for any n >= 3, but people don't normally do this. Technically, what we typically care about is not O(f(n)); it's more of a "least upper bound", which is closer to big_theta(n).
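The definition is mechanical enough to spell out in code. A tiny Python illustration (a finite check over a range, of course, not a proof) of the two memberships from the examples above:

```python
def is_big_o_witness(f, g, C, M, n_max=2_000):
    """Check f(n) < C*g(n) for all M < n <= n_max.
    A finite sanity check of the big-O definition, not a proof."""
    return all(f(n) < C * g(n) for n in range(M + 1, n_max + 1))

f = lambda n: 2 * n + 1
print(is_big_o_witness(f, lambda n: n, C=7, M=1))       # 2n+1 < 7n for n > 1
print(is_big_o_witness(f, lambda n: 2 ** n, C=1, M=2))  # also in O(2^n)
```

Both print True: the same f(n) belongs to many O-classes at once, which is exactly why big-O is membership in a set rather than equality.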


There are a lot of misconceptions and things being confused together in your post. For one, complexity classes are related to decision problems, not algorithms. An algorithm does not have a complexity class; rather, it's decision problems that belong to a complexity class. 3SAT is a decision problem that belongs to the NP-complete complexity class regardless of any particular algorithm that implements a solution to it.

Secondly, it's a common misconception that O(f(n)) refers to worst case complexity. O(f(n)) is an upper bound on the asymptotic growth of f(n). This upper bound can be used to measure best case complexity, worst case complexity, average case complexity, or many other circumstances. An upper bound does not mean worst case.

For example I often perform code reviews where I will point out that a certain operation is implemented with the best case scenario having O(log(N)), but that an alternative implementation has a best case scenario of O(1) and consequently I will request an update.

I expect my engineers to know what this means without mixing up O(N) with worst-case scenarios, in particular I do not expect an improvement to the algorithm in the worst case scenario but that there exists a fast/happy path scenario that can avoid doing a lot of work.

Consequently the opposite can also happen, an operation might be implemented where the worst case scenario is o(N), note the use of little-o rather than big-o, and I will request a lower bound on the worst case scenario.


> An algorithm does not have a complexity class, rather it's decision problems that belong to a complexity class. 3SAT is a decision problem that belongs to the NP-complete complexity class regardless of any particular algorithm that implements a solution to it.

Both algorithms and problems have their own complexity. The complexity of an algorithm is the time it takes that algorithm to finish. For example, quicksort is an algorithm, and its average case complexity is in O(n log n). I called O(n log n) a complexity class, this was indeed sloppy on my part - I was just trying to find a name for this set.

> Secondly, it's a common misconception that O(f(n)) refers to worst case complexity.

I know, and my post was very explicitly avoiding this misconception. I explicitly gave the example that the average case complexity of quicksort is in O(n log n).

> Consequently the opposite can also happen, an operation might be implemented where the worst case scenario is o(N), note the use of little-o rather than big-o, and I will request a lower bound on the worst case scenario.

I'm not sure what you're trying to say in this part. o(n) is basically the set of all sublinear functions, e.g. log(n), but also 1. Any function in o(n) is also in O(n), though the converse doesn't hold. So, if you know an algorithm has worst-case complexity in o(n), you know more about the algorithm than if I tell you it has worst-case complexity in O(n). Knowing a lower bound is still useful in either case, of course, since a constant-time algorithm and a logarithmic-time algorithm are both in both sets.


No it shouldn't. The function you're talking about is typically called T(N), for "time". The problem is that you can't write T(N) = N^(1/3) because it's not exactly N^(1/3) -- for one thing, it's approximate up to a constant factor, and for another thing, it's only an upper bound. Big-O solves both of these issues: T(N) = O(N^(1/3)) means that the function T(N) grows at most as fast as N^(1/3) (i.e.: forms a relationship between the two functions T(N) and N^(1/3)). The "T(N) =" is often silent, since it's clear when we're talking about time, so at the end you just get O(N^(1/3)).


All their numbers are immediately suspect since they admit to using ChatGPT to get their numbers. Oh, and their wonderful conclusion that something that fits fully in the CPU's caches is faster than something sitting a few hops away in main memory.


For me it's more about "feeling" badly written code, as if it stinks and is unpleasant to look at, and conversely enjoying nicely written one, and seeing its elegance and beauty.

Especially when I have to familiarize myself with code written by someone else, I usually start by "cleaning" it: small refactorings, such as splitting overlong functions and simplifying expressions, that are trivial to prove correct even without deep knowledge of the code's purpose.

Only when the code looks sufficiently clean and familiar do I start adding the requested new features.


This is also how I have traditionally looked at new codebases. I don't necessarily actually clean it up (I dislike changing someone's style unless it is completely necessary) but I build up a mental model similar to what you're outlining. It's been really interesting with these LLM coding tools. I may not disagree with the implementation in terms of logic (it just does what I've prompted it to do after all) but something about it will 'stink'. Amusingly this feeling is the true 'vibe' of 'vibe coding' to me.

I sometimes wonder if this is the result of experience/education (I'm not a compsci major) or if it's just a naturally different way to think about systems.


I also do this sort of restless refactoring. I find interacting with a new codebase is a kind of brain hack that (subjectively) helps me get up to speed faster.


> So, people who use native REPLs, what do you do with them?

In my case, I use my interactive shell https://github.com/cosmos72/schemesh every day as login shell.

You can look at it as heavily customized Scheme REPL, where everything not inside parentheses is parsed and executed as shell syntax, and everything inside parentheses is parsed and executed as Scheme syntax.

Having arithmetic and procedure definitions within the login shell definitely feels liberating, at least to me.


I've used the Picolisp REPL like that, though not as a login shell proper, but as the shell where I actually do stuff. Mainly due to the ease with which it integrates with the below shell through 'in, 'out and 'fork.


You can choose the font with `twin --hs=X11,font=X11FONTNAME` or `twin --hw=xft,font=TRUETYPEFONTNAME`

I will have a look too, the terminal should not crash or stop being responsive


Thank you. Weird that this isn't in the man page or --help message. Another font didn't fix anything anyway.


I tried compiling https://github.com/panzi/ansi-img and running it inside the truecolor branch of twin, i.e. https://github.com/cosmos72/twin/tree/truecolor and it correctly displayed an example JPG I tried - no artefacts, and terminal remained responsive.

As I wrote above, I am debugging it and fixing some known issues, thus truecolor support is not yet in the default branch


Author here :)

I've been using Twin as my everyday terminal emulator and terminal multiplexer since ~2000, slowly adding features as my free time - and other interests - allowed.

As someone pointed out, the look-and-feel is reminiscent of Borland Turbo Vision. The reason is simple: I started writing it in the early '90s on DOS with a Borland C compiler, and I used the Borland Turbo Vision look-and-feel as a visual guideline (never actually looked at the code, though).

The port to Linux happened in 1999 (the project was basically dormant before that), and Unicode support was progressively added around 2015-2016 (initially UCS-2, i.e. only the lowest 64k codepoints, then full UTF-32 internally, with the terminal emulator accepting UTF-8). There are still some missing features, most notably: no grapheme clusters, no fullwidth (Asian etc.) support, no right-to-left support.

Right now I'm adding truecolor support (see https://github.com/cosmos72/twin/tree/truecolor) - it's basically finished, I'm ironing out some remaining bugs, and thinking whether wire compatibility with older versions is worth adding.

And yes, documentation has been stalled for a very long time.

In retrospect, I should have switched from C to C++ much earlier: lots of ugly preprocessor macros accumulated over time, and while I rewrote the C widget hierarchy as C++ classes, several warts remain.


Do symbols for legacy computing work with it? Especially the 1/8ths vertical/horizontal blocks?


If you mean the Unicode glyphs listed at https://en.m.wikipedia.org/wiki/Block_Elements they are supported - you just need a display driver that can render them. For example, `twin --hw=xft` (it's the default) or `twin --hw=X11`, both with a font that contains them


Xe means the Unicode block that is actually named "Symbols For Legacy Computing". It's not in the BMP. Some bloke named Bruce was doing TUI windows with scrollbars and sizer/menu boxes some years before TurboVision and code page 437. (-:



Reading the flags one: Unscii has font coverage, if you want to try that out on the emulators whose fonts were problems.

* https://hackernews.hn/item?id=43812026


Given that it's drawing TUI windows, the MouseText characters for doing that very thing on the Apple IIe would seem even more pertinent.

* https://tty0.social/@JdeBP/114409020672330885


Alas, it's not finished. You've made the mistakes that all of us have made, and haven't caught up with us, most of us having fixed those mistakes a few years back when implementing 24-bit RGB was in vogue.

This is not, as the function name suggests, a colon, but per ITU/IEC T.416 it should be:

https://github.com/cosmos72/twin/blob/truecolor/server/hw/hw...

And not only should this have colons too, but per ITU/IEC T.416 there's a colour space parameter that goes in there:

https://github.com/cosmos72/twin/blob/truecolor/server/hw/hw...

The unfortunate part is that when rendering to a terminal, you don't have any available mechanism apart from hand-decoding the family part of the TERM environment variable, and knowing who made which mistakes, to determine which of the 7 possible colour mechanisms are supported. They are:

1. ECMA-48 standard 8 colour, SGRs 30 to 37, 39, 40 to 47, and 49

2. AIXTerm 16 colour, ECMA-48 plus SGRs 90 to 97 and 100 to 107

3. XTerm 256 colour, ITU T.416 done wrongly with SGR 38;5;n and SGR 48;5;n

4. XTerm 256 colour corrected, ITU T.416 done right with SGR 38:5:n and SGR 48:5:n

5. 24-bit colour take 1, ITU T.416 done wrongly with SGR 38;2;r;g;b and SGR 48;2;r;g;b

6. 24-bit colour take 2, ITU T.416 done wrongly with SGR 38:2:r:g:b and SGR 48:2:r:g:b

7. 24-bit colour take 3, ITU T.416 done right with SGR 38:2::r:g:b::: and SGR 48:2::r:g:b:::

Few people support 4, and although quite a lot of us have finally got to supporting 7 it isn't quite universal. Egmont Koblinger, I, and others have been spreading the word where we can over the last few years.
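To make the difference concrete, here is what the foreground escape sequences look like as byte strings for variants 5 and 7 from the list above (hypothetical helper names; the sequences just spell out the SGR parameters described in the comment):

```python
ESC = "\x1b"

def fg_rgb_semicolons(r, g, b):
    # variant 5: ITU T.416 done wrongly, separators are semicolons
    return f"{ESC}[38;2;{r};{g};{b}m"

def fg_rgb_colons(r, g, b):
    # variant 7: ITU T.416 done right: colon separators, with empty
    # colour-space id and tolerance parameters left blank
    return f"{ESC}[38:2::{r}:{g}:{b}:::m"

print(repr(fg_rgb_semicolons(255, 128, 0)))
print(repr(fg_rgb_colons(255, 128, 0)))
```

The semicolon form is ambiguous to a strict ECMA-48 parser (it reads as three unrelated SGR parameters 38, 2, r, ...), which is exactly why the colon sub-parameter form was standardized.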

This is where I was at in 2019:

https://github.com/jdebp/nosh/blob/trunk/source/TerminalCapa...

There are a few updates to that that are going to come out in 1.41, but when it comes to colour they're mainly things like recognizing the "ms-terminal" and "netbsd6" terminal types in the right places.


Yep, I am well aware of the `;` vs `:` confusion in both 256 color and 24-bit color control sequences.

Short of hand-coding "which terminal supports which variant" I do not know any standard mechanism to detect that (beyond the well-known $TERM=...-256color and $COLORTERM=truecolor or $COLORTERM=24bit)

I guess I'll have to add command line options to choose among the variants 1...7 you helpfully listed above.

My main use is to render twin directly on X11, which avoids all these issues. And while rendering inside another terminal is important and is not going away, I am OK with a few minor color-related limitations (note: limitations, not bugs) in such a setup, especially if the other terminal does not follow the relevant standards.


> This is not, as the function name suggests, a colon, but per ITU/IEC T.416 it should be

Digressing, but I’m fascinated to see ODA still being referenced, even if only some small part of it


Compiling an expression to a tree of closures, and a list of statements to a slice of closures, is exactly how I optimized gomacro (https://github.com/cosmos72/gomacro), my Go interpreter written in Go.

There are more tricks available there, as for example unrolling the loop that calls the list of closures, and having a `nop` closure that is executed when there's nothing to run but execution is not yet at the end of the unrolled loop.


Impressive! I would love to learn how to implement that in Julia. Could you help me understand how you did that?

I'd love to see, if it's possible to create a libc-free, dependency-free executable without Nim (https://nim-lang.org/).


I put together this example after reading the article:

https://github.com/skx/simple-vm

Simpler than the full gomacro codebase, but perhaps helpful.


For optimal speed, you should move as much code as possible outside the closures.

In particular, you should do the `switch op` at https://github.com/skx/simple-vm/blob/b3917aef0bd6c4178eed0c... outside the closure, and create a different, specialised closure for each case. Otherwise the "fast interpreter" may be almost as slow as a vanilla AST walker.
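To make that concrete, compare dispatching inside the closure with creating a specialised closure per opcode (a Python sketch of the pattern, not the simple-vm code itself):

```python
def make_op_slow(op, arg):
    # anti-pattern: the switch on `op` runs on EVERY execution
    def run(stack):
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            raise ValueError(op)
    return run

def make_op_fast(op, arg):
    # the switch runs ONCE, at "compile" time; each closure is specialised
    if op == "push":
        def run(stack):
            stack.append(arg)
    elif op == "add":
        def run(stack):
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    else:
        raise ValueError(op)
    return run

# 2 3 add  =>  5
prog = [make_op_fast("push", 2), make_op_fast("push", 3), make_op_fast("add", None)]
stack = []
for step in prog:
    step(stack)
print(stack)  # [5]
```

Both versions compute the same result; the difference is that the fast version pays the opcode dispatch once per instruction at compile time instead of once per execution.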


That's fair, thanks for taking the time to look and giving the feedback.


The core idea is simple: do a type analysis on each expression you want to "compile" to a closure, and instantiate the correct closure for each type combination.

Here is a pseudocode example, adapted from gomacro sources:

https://gist.github.com/cosmos72/f971c172e71d08030f92a1fc5fa...

This works best for "compiling" statically typed languages, and while much faster than an AST interpreter, the "tree of closures" above is still ~10 times slower than natively compiled code. And it's usually also slower than JIT-compiled code.

