Hacker News | past | comments | ask | show | jobs | submit | rdevilla's comments

This state of affairs presages the advent of a second dark age - one that will forever eclipse the era of radical openness & transparency that served the software community for decades. Tips, tricks, life hacks and other expert techniques will once again be jealously guarded from the prying eyes of the LLM who would steal their competitive advantage & replicate it at scale, until any possible information asymmetries have been arbitraged away. The development & secrecy of technique will once again become a deep moat as LLMs fall into local, suboptimal minima, trained on and marketed towards the lowest common denominator. The Internet, or at least The Web, becomes a Dark Forest of the Dead Internet (Theory), in which humans fear speaking out and capturing the attention of the LLM that would siphon their creative essence for more, ever more training data. Interaction contracts into small meshes of trusted, verifiably human participants to keep the tides of spamslop at bay. Quasi-monastic orders that still scribe with pen and paper emerge, believing there is still value in training and educating a human mind and body.

- Unknown, 19 Feb 2026


> Tips, tricks, life hacks and other expert techniques will once again be jealously guarded from the prying eyes of the LLM who would steal their competitive advantage & replicate it at scale

I've already started thinking this way, there's stuff I would have open sourced in the past but no longer will because I know it would get trained on. I'm not sure of any way I can share it with humans and only humans. If I let the LLMs have the UI patterns and libraries I've developed it would dilute my IP, like it has Studio Ghibli's art style.


It's worth questioning the underlying assumptions. It's humans - all humans - that benefit from LLMs. I see a lot of people having this attitude, but I can't help but see it as really being about seeking credit instead of generosity, and/or Dog in the Manger mindset.

Humans aren't benefiting from LLMs, only a few individuals are. Let's stop with the fake platitudes and realize that unless this technology is completely open sourced from top to bottom, it's a complete farce to think humans are going to benefit rather than just the rich getting richer.

> Humans aren't benefiting from LLMs, only a few individuals are.

Honest question: how is this different from traditional Open Source? Linux powers most of the internet, yet the biggest beneficiaries are cloud providers, not individual users. Good open weights models already exist and people can run them locally. The gap between "open" and "everyone benefits equally" has always been there...


Because the opposite is true for open source? It actually is free, whether you contribute to it or not. Anyone can legally use it for free. Torvalds can't just wake up one day and decide to charge more.

If you feel like Linux is too much of a monopoly, you can actually fork it and compete.


But I considered that when I said "Good open weights models already exist and people can run them locally."

You can have a great LLM model with vast coding knowledge running on your computer right now, for free. It won't be the best one nor the fastest one, but still a very good one.


The same is true of science as well. Taxpayer money is spent on research, but the outcomes of that research primarily benefit corporate interests.

I'm the last person to cheer for unrestrained capitalism, but this anti-billionaire / anti-AI narrative is getting ridiculous even by general-population standards, much less for HN. It's like people think their food or medicine or LLMs grow on fucking trees. No. Companies and corporations are how adults do stuff for other adults, at scale. Everyone understands that, except for a part of the software industry that, by an accidental confluence of factors, works by different rules than literally the rest of the world.


You must not be serious. Every single person using LLMs, whether paid or free tiers or open models, whether using them for chat or as part of some kind of data pipeline - so possibly without even knowing they're using them - benefits.

"Few individuals" get money mostly for providing LLMs as a service. As far as tech businesses go, this is refreshingly straightforward, literally just charging money for providing some useful service to people. Few tech companies have anything close to an honest business model like this.


Gemma4 is Apache 2.0 licensed.

I am unsure about the openness of the training data itself. That too should be required for an LLM to be considered 'open'.

Open source is the only way forward, I agree.


> It's humans - all humans - that benefit from LLMs

This is not true, though. The moment LLMs become necessary, we will all have to pay the monopoly owners as much as they can extract.

But they will never pay us.


I'm not seeing how the benefits have outweighed the costs at this point. Spam, scams, porn, being inundated with slop, people losing their skills and getting dumber, mass surveillance...

Is that worth possibly saving some time programming, but then not gaining the knowledge you would have gained by doing it yourself, knowledge that could be built on in the future?

I don't see technological advancement as good in itself if morality is in decline.


I reached the same conclusion. It also made me realise how most technologies have degraded our lives.

Before the TV, people would go to the theatre. It's becoming hard to find a theatre these days. Artificial light is convenient, but it made billions of people develop sleep disorders and we can't see stars at night. Mass food production supposedly nourished more people, yet veggies today have 20% of the mineral content they had 70 years ago... the list goes on and on.


> Mass food production supposedly nourished more people, yet veggies today have 20% of the mineral content they had 70 years ago... the list goes on and on.

I suggest you should have a look at malnutrition rates 100 years ago vs now. Without mass food production we would not be able to sustain even 50% of current population.


Why would that be bad? Why is more better?

Would you ask that of your starving great-n-grandparents as they worried about whether they'd be able to feed the infant who would later become your ancestor?

Isn't that a food distribution problem not a food production one?

I think it's more fair to say that every technology has tradeoffs. Consider the wheel: before the wheel, people were probably more physically fit, but they couldn't move as large of loads. Well, except in the Andes, where they figured out how to move gigantic stones well beyond the weight any wooden wheel could have carried, and to cut and place them into configurations that were earthquake resistant.

Technology and civilization are path dependent, and I think it's silly to make blanket statements about the merit of technological progress overall. Every choice (including the choice to do nothing) has unintended consequences. I would never condemn anyone for inventing a new technological solution to a problem, but once the systematic effects are understood, then we do need the collective ability to course correct (e.g. social media, AI, etc.).


It's odd to me that you live in a place where it's hard to find a theatre. Living in a cosmopolitan city, there are so many theatres with anything from professional shows to amateur dramatics, all at very reasonable price points.

> seeking credit instead of generosity, and/or Dog in the Manger mindset.

I have tried being generous to enemies. It only turns them into... bigger, hungrier enemies.

I'm happy with never getting "credit" for anything I "accomplish" (whatever those notions even mean under a system where thoughts can be property).

I mean: as long as my labor output cannot be subverted to benefit hostiles even the tiniest bit.

> It's humans - all humans - that benefit from LLMs

The set of "all humans" includes that power-hungry majority who find nothing wrong with subjecting other sentient beings to sadistic treatment.

Those who, as soon as they take notice of me - or my kind, or our speech, or our trail - more often than not become terrified into outright aggression.

So far we had been protected from their stupidity and lack of imagination, by their stupidity and lack of imagination.

Now they've had brain prostheses developed for 'em, and... well I can't really do much for those who haven't already begun to reevaluate their baseline safety, now can I?


Corporations are not humans.

And while sociopaths - who benefit the most from corporations - technically are humans, I don't consider them parts of humanity, more like a cancer tissue on top of it.

So whatever benefit humanity gets is more than cancelled by the growing cancer.


So am I to assume you're not using LLMs yourself, or any technology employing those models in the pipeline (which at this point includes many features in smartphones made in the last 3 years)? If that's not the case, then you are a beneficiary too.

There are some local benefits, there are some local and global costs. My point is that we are in a strongly net negative situation, mr Jack.

> there's stuff I would have open sourced in the past but no longer will because I know it would get trained on

Could you publish under AGPLv3, so any AI users with recognizable patterns from your code can get in trouble?


> I've already started thinking this way, there's stuff I would have open sourced in the past but no longer will because I know it would get trained on.

Same here.

I no longer post photos, code, or pretty much anything other than short comments on the internet.

I'm not going to do free work for trillion-dollar AI companies.

I do, however, find it interesting to watch AI destroy the whole "content creation" industry.

All of the "creators" and "influencers" and "I wanna be a YouTube star when I grow up" people are going to have to look for real jobs soon.

I've seen in the newspaper that there are real companies paying real money for fake AI-generated "influencers" to flog their products.

Why pay dollars to a wannabe, when you can pay pennies to an AI corp?


> Interaction contracts into small meshes of trusted, verifiably human participants to keep the tides of spamslop at bay

This is already happening and you don't have to look far to find it.

Personally HN is the only site I browse and comment on anymore (and I'm on here less than I once was). The vast, vast majority of my time online is spent in walled off Discords and Matrix chats where I know everyone and where there's a high bar to add new people. I have no real interest in open communities anymore.


> Quasi-monastic orders that still scribe with pen and paper emerge, that believe there is still value in training and educating a human mind and body.

https://en.wikipedia.org/wiki/Anathem#Plot_summary


A college instructor turns to typewriters to curb AI-written work and teach life lessons - https://apnews.com/article/typewriter-ai-cheating-chatgpt-co...

    The scene is right out of the 1950s with students pecking away at manual typewriters, the machines dinging at the end of each line.

    Once each semester, Grit Matthias Phelps, a German language instructor at Cornell University, introduces her students to the raw feeling of typing without online assistance. No screens, online dictionaries, spellcheckers or delete keys.

    The exercise started in spring 2023 as Phelps grew frustrated with the reality that students were using generative AI and online translation platforms to churn out grammatically perfect assignments.

Somehow made me think of Warhammer 40k (maybe pre men of iron?)

It’s a recurring theme, see dune’s references to Samuel Butler.

I say this with a multiple decades-spanning love of the game and the lore, but Warhammer 40k is what you get when teenagers try to create something immediately after reading Dune.

Sounds lifted from Alpha Centauri

Directionally correct. But seems overly optimistic to think that moats can be kept from the prying eyes of LLMs, unless you're not interacting with the market at all.

Scary... where can I find more of that?

There were no "dark ages"; that's the same common-wisdom blunder as "in the middle ages everybody was dressed in drab grey clothing, ate gruel and walked through mountains of poop everywhere". It was a time of transition away from the slave-powered empire to decentralized kingdoms and ultimately the Europe of today. It was by no means a time of standstill.

As far as I can tell, the dark ages were called the dark ages because there wasn't much evidence to be found: writing was less prominent during that time.

> It was a time of transition away from the slave powered empire to decentralized kingdoms and ultimately the Europe of today.

You are seeing the fall of the western part of the Roman Empire a bit too rosy. Compare and contrast https://acoup.blog/2022/01/14/collections-rome-decline-and-f...


Yes, Europe did not have dark ages, it only had a period of population decline, of less emissions, less building, less inventions, fewer records and severed trade networks.

Population decline? Less emissions? Haven't we reached consensus that those would be welcome today? Is it time for a pro-dark-age movement?

The world is projected to hit population decline already sometime between 2060 and 2080, so I guess the younger ones of us will find out definitively whether it's a good or bad thing.

I am very sorry, but you are wrong. Between the fall of Rome (476 AD) and the Carolingian empire (~800 AD) there was a period of not only standstill, but regression, devolution and forgetfulness. Compared with what came before, it can be rightly called the dark ages.

HN will flag and downvote anti AI sentiment. There is lots of it here, you just aren't allowed to see it.

I posted Bernie's "Conversation with Claude" a while back, and it was just about immediately taken down.

Let's face it Y combinator is mostly AI startups for the next few years, and any anti-AI sentiment is going to hurt the bottom line.

That being said, I disagree with Sanders on a number of points. He wants to stop data center construction. I can't think of a more luddite, un-nuanced solution to the "problem".

The real AI danger is not the threat to white collar jobs (which will simply have to evolve), but something we will see roughly 18 months from now, when Joe Schmo asks Claude Giga Max Supreme 8.0 to help him reduce his taxes, and it hacks into the IRS and deletes everyone's records.


You think there won’t be any taxes in 2028-2029 because of AI?

Just an example off the top of my head

I feel like we must be visiting different websites.

Have you seen how overwhelmingly anti-AI the non-software-engineering world is? (despite the hypocrisy as plenty use a chatbot these days) The resistance in here is pitiful.

I'm lurking in the indie game dev scene, and any mention of using LLMs for anything is downvoted and laughed at.


Gamedevs are vehemently against genai art, but code generation seems more accepted from what I've seen

Most discussion of using code generation on gamedev forums is taboo. As in, do what you want in the privacy of your own home, but in public, try to have some self-respect as an artist.

I've seen some "devs" livestreaming themselves coding a game using LLMs, and it's not pretty. But that's my opinion of vibecoding in general — it's the tool one uses when they don't want to think too much, which is the furthest path to greatness.

Some random examples off reddit (try to compare the sentiment if it had been posted on Hacker News):

- https://old.reddit.com/r/gamedev/comments/1rvafee/using_clau...

- https://old.reddit.com/r/gamedev/comments/1erb39r/is_it_poss...

- https://old.reddit.com/r/gamedev/comments/1qll8jr/has_anyone...

- https://old.reddit.com/r/gamedev/comments/1jitfbi/im_tired_o...

In general, interest in AI-assisted coding is associated with people that have no experience whatsoever and just want to make a game out of thin air without putting in the effort.


I'm the one who posted this link, and I think Bernie Sanders is a terrible man. But an op-ed by a U.S. Senator in the WSJ about a tech issue seems like proper HN material. It's been un-flagged by the mods.

What are some of the terrible things he's done?

That would be off-topic.

We have threaded commenting here, so feel free to go off-topic.

In Canada, demands are not actually demands. That way if the demand is avoided, there never was a demand to begin with; however, if it was fulfilled, then of course there was always a demand all along.

It's extremely prevalent in Canada as well; almost certainly even more so. It's really a North American thing.

I expect copious downvotes with no actual replies. Then the comment will be flagged by the bot armies so the administration here can preserve its dearly held national identity of being "diverse" while never having elected an ethnic minority PM.


I would expect more downvotes for your needless "I'm going to get downvoted because the sheep hate when wolves tell the truth!" persecution complex. Your comment was at least mildly interesting before you got to that.

I loved this game playing on an Arch Thinkpad in university with budget graphics capability.

The best part is being able to pin locations on the map for your teammates, so we were able to plot the adventures and battlegrounds of a goated unit by naming the pins "Ronant's Triumph," "Ronant's Revenge," "Ronant's Folly," and ultimately "Ronant's Last Stand." Great times with a few beers and the lads.

RIP Ronant, Wesnoth will never see another hero of your like again.


It's because people have discovered that (1) motte and bailey fallacies [0] and equivocation of language (between, e.g. identity and diagnosis) are highly effective rhetorical tools; (2) merely identifying as something ontologically changes the metaphysical structure of reality [1] which confers certain societal benefits.

There is a matrix of is-diagnosed, is-not-diagnosed, identifies-as, does-not-identify-as, which is open to exploitation by those who "identify as" something they are not diagnosed with. Who gets fucked? The people who are diagnosed but do not identify as their diagnosis.

God help me once we start adding another dimension of people who have a condition, but are also not diagnosed and do not identify with it either...

[0] https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy

[1] https://hackernews.hn/item?id=47538165


Articulation. The lens of articulation. Or otherwise, "eloquence;" the lens of eloquence.

LLMs ain't gonna do sheeeeeeeit if this is still where we are at...


That would not fit as well.

I have terrible news for you. Linguistics is descriptive, not prescriptive. We will torment you with word game playing until such time as you loosen up.

The torment is evidently yours [0]. However, I am ecstatic you reveal your own inner turbulence, which I have deliberately engineered.

By the way, how are those food prices working out for ya bud [1]? I understand that you are struggling, but please try not to get, quote, "violent".

Pause. Empathize. Apologize. Then post.

[0] https://hackernews.hn/item?id=47639747

[1] https://hackernews.hn/item?id=47636685


Only two or three weeks from incepting the idea of a token efficient LLM English dialect to seeing it in practice. I just never imagined it to take.... this particular form.....

https://hackernews.hn/item?id=47434846


I've had the thought that English is an efficiency barrier for a while now. Surely there are more information-dense representations of semantic concepts.

Some languages for example have single characters that represent entire ideas/phrases.

https://hackernews.hn/item?id=47442478


https://en.wikipedia.org/wiki/Le_Taureau

"Everything should be made as simple as possible, but not simpler"


Marx begs to differ. By the labor theory of value, Sisyphus should be the wealthiest man on Earth; unless, of course, you smuggle all the complexity and paradoxes of this theory into some ill-defined notion of "socially useful labour" (how does one measure or quantify utility?)...

Marx's labor theory of value very much still assumes a product of that labor to embody its value.

Metamath is fascinating to me in that it is the most "math-like," in terms of being both readable and executable on pen and paper through simple substitution. I've spent a month or two formalizing basic results in it and found it quite fun; unfortunately the proof assistant and surrounding tooling are archaic, to put it generously. However, the fact that the system still works, and that the proof tree is grounded in results from 1994 that still stand today without modification, is a testament to its design.

Most people seem to be rallying around Lean these days, which is powerful and quite featureful, but with tactics metaprogramming it feels more like writing C++ templates than the "assembly language of proof" that I liken Metamath to, for its "down to the metal" atomization of proofs into very explicit steps. Different (levels of) abstractions for different folks.

Once I return to a proper desktop I will probably woodshed myself into Lean for a week or two to get a better handle on it; for now, tactics feel like utter magic when I'm not just chaining `calc` everywhere.
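To make the contrast concrete, here's a minimal sketch of my own in Lean 4 (not anything from Metamath itself): the tactic proof hands the goal to automation, while the term proof names the exact lemma applied, closer in spirit to Metamath's explicit, atomized substitution steps.

```lean
-- Tactic-style proof: `omega` discharges the goal automatically,
-- and the actual reasoning steps stay hidden from the reader.
example (n : Nat) : n + 1 > n := by
  omega

-- Term-style proof of the same fact: the lemma being applied is
-- spelled out explicitly, one named step, nothing hidden.
example (n : Nat) : n + 1 > n :=
  Nat.lt_succ_self n
```

Metamath pushes the second style to its extreme: every proof is nothing but a sequence of such named, checkable substitutions.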


I feel the same. When I first heard about metamath I was blown away at how I could drill down to the base axioms (I had only tried Lean before). Lean also feels too magical for my taste, and I dislike that I don't have a good mental model of its execution under the hood. I care a lot about execution speed as well, and Lean... isn't always fast. It's another reason metamath's design really speaks to me.

You might find Metamath Zero (mm0) interesting; its kernel design has a similar focus on simplicity while cleaning up a lot of metamath's cruft: https://github.com/digama0/mm0

EDIT: and feel free to ask any questions about mm0, I don't know a ton about it yet but I have researched it a good deal. I'm hoping to use it more this fall when I take a class on first order logic and set theory!

