Hacker News | new | past | comments | ask | show | jobs | submit | thorum's comments

The models are primitive right now, but we’re clearly heading toward “AI as sound synthesis, human as artist” - much like how producers currently use a DAW to assemble premade loops and sounds from Splice, but with the producer now able to prompt any sound, filter, or effect they can imagine into existence and then rearrange them into a song.

See for example Suno Studio, which is not very good in my opinion, but shows the direction they’re going.


Isn’t this a permissions issue? Your “opt out” is using a GitHub access token that doesn’t allow it to happen.

I have the opposite experience: random HN/Reddit comments saying “this sucks” or “whoa this is a huge improvement” are the only benchmark that means anything. Standard benchmarks are all gamed and don’t capture the complexity of the real world.


Stars have been useless as signals for project quality for a while. They’re mostly bought, at this point. I regularly see obviously vibe-coded nonsense projects on GitHub’s Trending page with 10,000 stars. I don’t believe 10,000 people have even cloned the repo, much less gotten any personal value from it. It’s meaningless.


I'm with you on all points except for it being bought.

Programming has long succumbed to influencer dynamics and is subject to the same critiques as any other kind of pop creation. Popular restaurants, fashion, movies - these aren't carefully crafted boundary pushing masterpieces.

Pop books are hastily written and usually derivative. Pop music is the same, as is pop art. Popular podcasts and YouTube channels are usually just people hopping unprepared onto a hot mic and hitting record.

Nobody is reading a PhD thesis or a scholarly journal on the bus.

The markers for the popularity of pop works are fairly independent from the quality of their content. It's the same dynamics as the popular kid at school.

So pop programming follows this exact trend. I don't know why we expect humans to behave foundationally differently here.


> Nobody is reading a PhD thesis or a scholarly journal on the bus.

As someone who is involved in academia, I can attest that most of my colleagues (including myself) do in fact read quite a few papers on buses (and trams - can't forget those).


> I'm with you on all points except for it being bought.

Stars get bought all the time. I've been around the startup scene, and this is basically part of the playbook now for the open core model. You throw your code up on GitHub, call it open source, then buy your stars early so it looks like people care. Then charge for hosted or premium features.

There's a whole market for it too. You can literally pay for stars, forks, even fake activity. Big star count makes a project look legit at a glance, especially to investors or people who don't dig too deep. It feeds itself. More people check it out, more people star it just because others already did.


Meaningless is maybe too strong.

I have 60-ish repos: the vast majority have zero stars, one or two have a star or two, and one has 25-ish. To me that's a signal of interest in and usage of that project.

Doesn’t mean stars are perfect, or can’t be gamed, or anything in a universally-true-generalization sense. But they’re also not meaningless.


I star repos as bookmarks. I don't know if there's another feature for that.


Good day for Kling.


Ape thinking is a cognitive practice where a human deliberately solves problems with their own mind. Practitioners of ape thinking will typically author thoughts by thinking them with their own brain, using neurons and synapses.

The term was popularized when asking a computer to do it for you became the dominant form of cognition. "Ape thinking" first appeared in online communities as derogatory slang, referring to humans who were unable to outsource all their thinking to a computer. Despite the quick spread of asking a computer to do it for you, institutional inertia, affordability, and limitations in human complacency were barriers to universal adoption of the new technology.


The slogan of ape thinking (deliberately adjusted for machine readability): "Not AI, not machine generated slop <em-dash> genuine human intelligence."


Their design approach wasn’t particularly unusual, so I’m not sure what that sentence means.

I do miss the days when technical reports were clear and concise. This one has some interesting information, but it’s buried under a mountain of empty AI-written bloat.


It doesn't mean anything. It is just there to be there and catch low-hanging RL reward granting eyeballs.


It's annoying because it's a super common widget and it is interesting work. The first draft, or literally even the prompt they gave the AI, probably would've been a great post. All they had to do was not ensloppify it...


I agree this thing went on forever and seemed to have multiple summaries of the same concepts.


AI for help figuring things out and Timeshift for when you accidentally break something. One reboot and it’s fixed.


> but the number of problems requiring deep creative solutions feels like it is diminishing rapidly.

If anything, we have more intractable problems needing deep creative solutions than ever before. People are dying as I write this. We’ve got mass displacement, poverty, polarization in politics. The education and healthcare systems are broken. Climate change marches on. Not to mention the social consequences of new technologies like AI (including the ones discussed in this post) that frankly no one knows what to do about.

The solution is indeed to work on bigger problems. If you can’t find any, look harder.


I’m honestly surprised LLMs are still screwing up citations. It does not feel like a harder task than building software or generating novel math proofs. In both those cases, of course, there is a verifier, but self-verification with “Does this text support this claim?” seems like it ought to be within the capabilities of a good reasoning model.

But as I understand the situation, even the major Deep Research systems still have this issue.
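The self-verification loop I have in mind looks roughly like this. A minimal Python sketch, where `check_support` is a hypothetical stand-in: a real system would ask a reasoning model "Does this text support this claim?", whereas here a toy lexical-overlap test takes its place so the sketch is self-contained.

```python
def check_support(claim: str, source_text: str, threshold: float = 0.5) -> bool:
    """Placeholder verifier: fraction of the claim's words found in the source.

    In a real pipeline this would be a call to a reasoning model asked
    "Does this text support this claim?" rather than word overlap.
    """
    claim_words = {w.lower().strip(".,") for w in claim.split()}
    source_words = {w.lower().strip(".,") for w in source_text.split()}
    if not claim_words:
        return False
    overlap = len(claim_words & source_words) / len(claim_words)
    return overlap >= threshold


def verify_citations(claims_with_sources):
    """Return the claims whose cited source fails the support check."""
    return [claim for claim, source in claims_with_sources
            if not check_support(claim, source)]


# Hypothetical example pairs: (claim, text of the cited source).
flagged = verify_citations([
    ("The study reports a 40% error rate",
     "The study reports a 40% error rate in citations."),
    ("The paper proves P equals NP",
     "We survey recent benchmarks for code generation."),
])
```

The point of the sketch is just that the check is cheap to bolt on after generation: each citation is an independent (claim, source) pair, so a second model pass can flag unsupported ones before the report ever reaches the user.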


> LLMs [...] reasoning model

Found your problem right there

