I would caution readers to do their due diligence: the presentation may be fancy, but that shouldn't translate into a signal of quality in itself, given the author has disclosed using Claude Code for a chunk of this work.
While I won't outright discount the findings (there is "too much" to reasonably verify), there are a few oddities in the source repo, such as errors where Claude tried to access sources, was denied, and noted as much, or where it seemingly fetched incorrect files and tried to interpret them (https://github.com/upper-up/meta-lobbying-and-other-findings...)
My impression is not that the author has done thorough due diligence, but rather that they have offloaded it to readers by saying "You can just check the sources yourself".
I'm also not really a fan of the 'what they're hiding from you' tone it takes (even if that's the subject). For example, because a website was made less than 100 days before a bill was signed, the report calls it a '77-day pipeline' to the bill, which jumped out as a dramatized rephrasing not present in the original Reddit post.
It also doesn't inline-link sources, like the Bloomberg article it mentions (this[1]). A more impartial voice and linked citations for quick reference would raise fewer red flags, even if the goal is worthwhile.
I would also encourage taking a critical look at the underlying investigation, as it seems mostly LLM-generated without much manual due diligence.
I also submitted https://hackernews.hn/item?id=47370954 because it was pointed out to me that a Reddit submission about the same story on r/linux had been taken down. If there was LLM content I suppose that might at least partially explain a moderator decision there... ?
No, the mods did not make a decision. It got flagged by an auto moderator bot, because of mass flagging. The mass flagging seems to be a brigade that happened on prior posts in that same subreddit discussing this topic of age verification. I don’t have any definite evidence, but it seems odd that a topic that is so relevant to that community would be flagged, so I assume it is a coordinated attack.
I’ve moderated on Reddit before: a mass-report bot targeting r/linux specifically over age verification is too strangely niche. Also, AutoModerator doesn’t remove flagged posts unless it has been configured to do so.
It’s also very definitely AI-generated, and makes several claims and implications. Users may have reported it as well.
I would hesitate to assume coordinated behavior at this stage.
Maybe it’s a dupe but I think it’s an important topic to discuss. And even if it is mostly LLM generated, that doesn’t mean it is completely invalid. Some of the major points around Meta’s lobbying, and Anthropic’s donations, are seemingly valid.
In one part of the report, there seems to be an implicit assumption that Linux and Horizon OS (Meta's VR OS) are somehow comparable, and that Meta will be better equipped than Linux if age verification is required.
It doesn't explicitly say "This will allow Horizon OS to become the de facto OS and Linux will die out", but that seems to be the impression I'm getting, which uhh... would make zero sense.
More broadly, this entire report (and others like it) is extremely annoying in that I've seen some Reddit comments either taking "lots of text" as a signal of quality or asking "Does anyone have proof that these claims are inaccurate?", which is
a) Of course entirely backwards as far as burden of proof
b) Not even the right rubric, because it's not facts versus lies; it's manufactured intent/correlations versus real-life intent/correlations (i.e. bullshit versus not)
All of this could be factually true without Meta being smart enough to play 5D chess
Or of authority, when they're not equipped to evaluate the data first-hand.
The Gish gallop technique in debate overwhelms opponents with so many arguments that they're unable to address them all before the time limit. Reports presented like this are functionally that, but against reader comprehension and attention.
Similarly, being the first, loudest, or only voice making a claim is unreasonably effective at establishing a perception of authority, where being unchallenged is tantamount to correctness. This also goes both ways: censorship in media, for instance, can be used to promote narratives by silencing competing views, as when platforms selectively amplify certain topics to frame them as more proven and widely supported than they actually are.
It's unfortunate that inexpert execution often positions well-meaning and potentially correct arguments to be discredited and derided by prepared opponents before their merits can be established. In this case, it may be true that Meta organized a well-coordinated shadow campaign for legislation using technically legal channels, but I'm sure they've anticipated this at some point, or are relying on the inertia of the system and initial buy-in to force the course.
The name “Calcalist” is indeed a play on “Economist” (it is not a proper Hebrew word, but fuses the Hebrew word for economy, “calcala”, with the English professional suffix “-ist”).
However, it is just an expanded version of Ynet’s business/economy section, and Ynet is probably the closest equivalent to USA Today or The Sun.
How can a word come from the Bible? It must have existed before the Bible in order to have a meaning inside of it. Or did you mean to write it came from Aramaic?
I mean that it already appears in the Bible, in old Hebrew (which is close to, but isn’t exactly, Aramaic), with the meaning “to feed and provide”, and I did not find any documentation about how it formed in (or came into) Hebrew.
Which means, of course, that it was already in use before the Bible was canonicalized.
This post raises a few flags in my mind that it was at least partly generated by an LLM. That isn't to suggest that this editor doesn't/won't exist, that the editor uses LLM-generated code (which is not a slight), or that the claims are untruthful.
The main thing that jumps out is the inconsistency in writing style: the author sometimes writes in all lowercase with no punctuation, yet the brief rundown has perfect spelling and grammar with em-dashes.
The "Not just" parts stick out like "Not just play them back — edit them" as well as "This isn’t a proof of concept or a weekend project. It’s a real authoring environment."
Anyway, best of luck to the author with their project!
> This document has been through ten editing passes and it still has tells in it.
The big one it missed: the headers are mostly "The [Noun:0.9|Adjective:0.1] [Noun]". LLMs (maybe just Claude?) love these. Every heading sounds like it could be a Robert Ludlum novel (The Listicle Instinct, The Empathy Performance, The Prometheus Deception).
> This document was written by an LLM (Claude) and then iteratively de-LLMed by that same LLM under instruction from a human, in a conversation that went roughly like this
I don't like lists like these, as I sometimes use half of the "signs" in my own writing. And it would be trivial to feed that list to an LLM and tell it to avoid that style.
Huh. This page claims "This website requires JavaScript." at the top, yet I can read everything fine. TFA on the other hand is blank without JavaScript.
Not quite everything. Some of the page doesn't load and not all of the functionality works, but it is nice enough to let you try to view the parts you can, rather than either refusing to load or acting like the entire page loaded fine.
> That isn't to suggest that this editor doesn't/won't exist, that the editor uses LLM-generated code (which is not a slight), or that the claims are untruthful.
If you look at the icons of the tools in the image, they appear to have been generated using an LLM. So yeah, it's probably vibecoded a lot. It would be cool if the author reported how much and how it was used, but I don't think Newgrounds would like it much.
Oh neat, I had come across the headless client yesterday (and submitted a now-fixed bug report for it after running into some issues).
Before opening HN this morning and seeing this post, I actually wrote a post about how I'm experimentally using headless to publish my blog: https://utf9k.net/blog/obsidian-headless/
Well, that post was my experiment but I'll be looking forward to trying it out going forward.
There are of course many alternatives, and I'm sure this workflow has its pains, but for now it feels like a lot less friction between actually writing and having it published.
I've used plain Git for many years, of course, and I've also tried other Rube Goldberg machines such as various Git-inside-Obsidian plugins, but there's always just a bunch of "stuff" between writing and putting it online.
Jeremy Lewin's tweet noted that "all lawful use" is the particular term that seems to be the sticking point.
While I don't live in the US, I could imagine the US government arguing that third-party doctrine[0] means that aggregation and bulk analysis of, say, phone-record metadata is "lawful use" in that it isn't /technically/ unlawful, even though it would be unethical.
Another avenue might be purchasing data from ad brokers for mass analysis with LLMs, which was written about in Byron Tau's Means of Control[1]
The term "lawful use" is a joke to the current administration, which goes after senators for sedition for reminding government employees not to carry out unlawful orders. It’s all so twisted.
To be clear, the sticking point is actually that the DoD signed a deal with Anthropic a few months ago that had an Acceptable Use Policy which, like all policies, is narrower than the absolute outer bounds of statutory limitations.
DoD is now trying to strongarm Anthropic into changing the deal that they already signed!
Great question. I came across this question myself during development, and I've tested the built-in "Study Modes" extensively; the difference comes down to Intent Persistence.
1. Instruction Drift vs. The Gatekeeper: General-purpose LLMs are trained to be "helpful and agreeable." If a student pushes or shifts the topic, the model often "drifts"—like you mentioned, it might start correcting grammar instead of pushing the child to derive the essay's core logic. Qurio uses a secondary "Gatekeeper" agent that audits every response turn specifically to ensure the "Socratic Loop" stays on the core concept, not just surface-level fixes.
2. The Walled Garden: A general-purpose AI is an open "Ducati"—it has the entire internet's biases and infinite distractions. Qurio provides a closed-loop logic environment. It removes the ads, tracking, and the constant temptation to "just get the answer" that is always one click away in a standard bot.
3. The "Architect" UI: Unlike a standard chat, our Cognitive Process Capsules (CPCs) record the thinking journey, not just the final result. This allows parents to see the logical steps their child took, which is a feature prioritized for education rather than just production.
Ultimately, a kid uses this because it treats them like a Future Architect who needs to understand the "Why," rather than just a user who needs a "Result."
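For what it's worth, the "Gatekeeper" pattern described in point 1 can be sketched roughly like this. This is a minimal illustration under my own assumptions, not Qurio's actual implementation; all names (`gatekeeper_approves`, `tutor_turn`) are hypothetical:

```python
# Sketch of a "gatekeeper" audit loop: a second pass that rejects tutor
# responses which hand over the answer instead of prompting the student.
# In a real system the audit would itself be an LLM call; here a simple
# heuristic stands in for it. All names are hypothetical illustrations.

def gatekeeper_approves(response: str) -> bool:
    """Audit one tutor turn: reject direct answers, require a question."""
    gives_answer = any(phrase in response.lower()
                       for phrase in ("the answer is", "here is the essay"))
    asks_question = "?" in response
    return asks_question and not gives_answer

def tutor_turn(generate, max_retries: int = 3) -> str:
    """Regenerate until the gatekeeper approves, else fall back to a prompt."""
    for _ in range(max_retries):
        response = generate()  # generate() wraps the underlying model call
        if gatekeeper_approves(response):
            return response
    return "What do you think the first step should be?"
```

The key design point is that the audit runs on every turn, so even if the base model "drifts" toward being agreeable, the drifted output never reaches the student.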
You caught me. English is not my native language, so I use an LLM to polish my thoughts and correct my grammar before posting. I want to make sure I’m explaining the technical parts of Qurio clearly, but I realize it can end up sounding a bit "robotic."
I'm a developer and a dad—the project is real, even if my grammar needs a boost! I'll try to let more of my own "unfiltered" voice through.
As for your query regarding ChatGPT: I tried its study mode to write an essay on climate control for a 10-year-old kid, and instead of focusing on the essay, it kept insisting that I correct my grammar. And having a switch to a full-fledged LLM right in front of you takes a lot of patience and dedication. I tried to convey this with the help of an LLM, though. Thanks
I’m sorry you think like that. Please be assured, I’m really a dev, not an AI. Nevertheless, my product solves a practical need, and I use AI for correcting my English. Thank you.
If you read the resignation letter, the warnings appear so cryptic as to not be real warnings at all, and perhaps are instead the writings of someone exercising their options to go and make poems.
Also, the trajectories of celestial bodies can be predicted with a reasonably decent level of accuracy. Pretending societal changes can be equally predicted is borderline bad faith.
Besides, you do realize that the film is a satire, and that the comet was an analogy, right? It draws parallels with real-world science denialism around climate change, COVID-19, etc. Dismissing the opinion of an "AI" domain expert based on fairly flawed reasoning is an obvious extension of this analogy.
> Let's ignore the words of a safety researcher from one of the most prominent companies in the industry
I think "safety research" has a tendency to attract doomers. So when one of them quits while preaching doom, they are behaving par for the course. There's little new information in someone doing something that fits their type.