Hacker News | minimaxir's comments

...Hacker News could use some more cute animal pictures, though.

Coming up on 20 years and we clearly went too far the other way.

One problem with cute animal pictures is that they appeal to almost everyone, including people who are incapable, for whatever reason, of posting well-reasoned, interesting, respectful comments. The fact that HN is a little dry makes it less appealing to dumbasses.

At any rate, it's too late. The era of organic 'cute animal' content on the internet is dead. AI slop has killed it.


(I was replying to a now-deleted response)

> Slop has an upside?

Not exactly. Rather, it's that the places where one does want to find pictures of people's cute cats and dogs now carry additional moderation / administration burdens to try to keep the AI-generated content out.

It's not a case of "cute pictures of cats overrunning some place" but rather "even in the places where it was appropriate to post pictures of one's pets, like #mypets or /r/cuteCatPics, precisely because such pictures belong there (so they don't overrun other places), people are now starting fights over AI-generated content."

An example I recently encountered: someone used AI to replace a loaf of bread that looked like a cat with a cat that was "loafing". A plain cat picture would have been fine (with a dozen "aww" and "cute" comments in reply)... the AI cat-loaf picture required moderation actions and some comment defusing over the use of AI.


AI generated "cutest possible animal" (and "make it cuter") might be mildly interesting.

Coming to LISP in 2038, just the right time when we hit the 2038 bug.

Interestingly, their CSP policies forbid even an extension from inserting an img tag.
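For anyone unfamiliar with the mechanism: a page-level Content-Security-Policy governs resource loads from the DOM, so even an img element injected by an extension's content script is generally fetched subject to the page's img-src directive. An illustrative header (not necessarily the site's actual policy) that would have this effect:

```
Content-Security-Policy: default-src 'self'; img-src 'self'
```

Under a policy like this, an injected img pointing at any third-party host would be blocked by the browser at load time, regardless of who inserted the tag.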

Strong opinions strongly held.

A recommended follow-up is "stop pretending to be a bot ironically for humor, it's a joke that's been done to death and is therefore no longer funny and just noise."

So you're saying it's not funny, it's annoying!

"Most" is not "All". Hacker News has always had an exception for extremely significant politics.

My bar for "extremely significant" is much higher than it appears to be here. Apparently most events in the US/Iran involvement are "extremely significant", if we take the votes on this site as guidance on how this rule is interpreted.

This forum was founded in 2007, when the US was very much involved in Iraq and Afghanistan. If the same bar for coverage had been in place at the time, HN would have been flooded with US military content the way it is now. So yeah, obviously the bar has moved lower for this particular matter, and it's because the current community on the site wants it to. Likewise, the "generated/AI-edited comments" guideline seems equally squishy to me. And despite a rule against being "curmudgeonly", I'm pretty sure 80% of this site's content is curmudgeonly rants.

IMO, at this scale dang, tomhow, and the other mods need to be much stricter. When HN was 1/10 the size, a shaming comment would often put a poster in their place. Now they just sneer back in another comment and post 20 other guideline-breaking things.


Well, it’s up to interpretation:

“most”

“extremely significant”

What’s extremely significant for someone is offtopic for someone else, and vice versa.


What are examples of highly-upvoted political stories on HN that you think are not appropriate for the HN community?

My experience has been that the large majority of political content posted here is (at least apparently) mainly here so that people (who are mostly in mutual agreement) can post about how they dislike some political entity or another. Personally I would like to see much less of this on HN; it's not insightful and does not promote curiosity.

US domestic politics

I won't give you examples, because all of them can be spun as being relevant:

"Well HN is an american site after all"

"Most of the HN users are american voters so it's relevant for them"

"Hackers need to be aware of what's happening in the world"

"You only say that because you disagree with that side"

etc

Same with the flagged Tesla stories. If you read the comments it's always the same: "The pro-Tesla crowd is flagging everything negative about Elon so the bad news never reaches the front page" vs "The anti-Tesla crowd is flagging everything because they hate Elon."

HN is the best without politics. But it's not up to me.


The blatantly LLM comments do get downvoted/flagged, it's just still noise.

Please don't vaguepost, as it wastes my time trying to track down which comment you thought was LLM-generated and why.

OP is likely referring to this one (https://hackernews.hn/item?id=47335032) by LuxBennu because it has an em-dash, though that's one of the few cases where it's used correctly. But the account's comment history contains comments that don't follow the typical LLM tropes yet are still odd for a human to write: https://hackernews.hn/user?id=LuxBennu

LuxBennu did reply to accusations of being an AI bot: https://hackernews.hn/item?id=47340704

> Fair enough — I've been lurking since 2019 and picked a bad day to start commenting on everything at once. Not a bot, just overeager. I'll pace myself.


Multiple people agree with me here. The account used em dashes almost everywhere and was rapidly posting complex comments (while having clearly read the articles) one or two minutes apart. There are also other subtle LLM-isms, like replying to a user with "<username> nails it". That's a typical Moltbook pattern. A human would at most write "You nailed it", anything else is just strange.

It's almost as if being immediately reactionary removes nuance and worsens discourse.

The "I asked <LLM>" disclosures vary between a) implying the LLM is an expert resource, which is bad, and b) disclosure that an LLM was referenced with the disclosure being transparent about it, which is typically good but more context dependent.

Unfortunately, (a) is more common, and the backlash against it has been removing the community incentive to provide (b).


The intent of this rule is to avoid the AI tropes that have become increasingly common in HN comments. Using AI as an organizational tool isn't inherently against the rules, but just copy/pasting output from ChatGPT without human oversight is.

The majority of the artist responses were a "hard no" in 2024. There's no way the artist demographic such a service would appeal to would be on board with anything even tangential to AI in 2026 (even done ethically), where the professional liability far exceeds the potential revenue.

Most artists I have spoken to don't believe it's possible to do this AI stuff ethically.

Maybe they're wrong, but I tend to agree. Or even if it is possible to do it ethically, it still never will be done that way, because there's just too much money in behaving unethically.


The only problem that people see in these models is the money flows.

If it were all non-profit, then no one would raise the ethical issue.


I don't know

I still think cutting artists' livelihood out from under them with tooling built on top of their own work is unethical, no matter how you cut it.


In a world where artists’ livelihood depends on their output, yeah. But that's not a force of nature. Our society is making the choice of letting them starve for not producing enough art. We need to decouple this. UBI now.

There's a spicy argument to be made that "Rewrite it in Rust" is actually an environmentalist approach.
