I also tried setting my phone to grayscale, but it takes me significantly longer to do useful things, as some UIs are harder to distinguish without color. Have you found a workaround for this?
I did this, and yeah, Google Maps is all but unusable without color; that and the camera take some getting used to.
I'm on an iPhone, but what I ended up doing was creating a shortcut that toggles the phone to grayscale and back, plus two automations: one that toggles grayscale off when I open any of the apps I actually want color in (Maps, Camera, Photos, etc.), and another that toggles grayscale back on when I close any of those same apps.
The option is located in Settings > Accessibility > Display & Text Size > Color Filters.
It isn't perfect, but it works most of the time (I also added the shortcut to the back of the home screen so if it is off or I need one-off color I can just toggle it manually).
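The two automations above amount to a simple state machine. A hypothetical sketch in Python (there is no real Python API for iOS Shortcuts; the app names and function are illustrative only):

```python
# Hypothetical model of the Shortcuts automation logic described above.
COLOR_APPS = {"Maps", "Camera", "Photos"}  # apps that should get color

def grayscale_state(current: bool, event: str, app: str) -> bool:
    """Return whether the grayscale color filter should be on
    after an app-open or app-close event."""
    if app in COLOR_APPS:
        if event == "open":
            return False   # automation 1: turn grayscale off
        if event == "close":
            return True    # automation 2: turn grayscale back on
    return current         # other apps leave the filter alone

state = True                                     # phone starts in grayscale
state = grayscale_state(state, "open", "Maps")   # color while Maps is open
state = grayscale_state(state, "close", "Maps")  # grayscale again on close
```

The manual shortcut on the home screen is just a third way to flip the same boolean when the automations miss a case.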
From where I sit, smartphones completely ruined the internet.
If you're defining "successful" in the sense of people-making-a-lot-of-money I completely agree.
If you're talking about the internet in 2005 vs 2025, smartphones completely ruined the internet. At one point I had half my high school using HTML in their AIM profiles because they thought mine was cool.
Now kids can hardly even type on an actual, physical keyboard.
The issue is that the examples you listed mostly rely on very specific machine learning tools (which are very much relevant and a good use of this tech), while the term "AI" in layman's terms is usually synonymous with LLMs.
Mentioning the mid-1990s internet boom is somewhat ironic, imo, given what happened next. The question is whether "business models mature" with or without a market crash, given that the vast majority of ML money goes to LLM efforts.
The comment was definitely not LLM generated. However, I certainly did use search for help in sourcing information for it. Some of those searches offered AI generated results, which I cross-referenced, before using to write the comment myself.
That in no way is the same as “an LLM-generated comment”.
For the benefit of external observers, you can stick the comment into either https://gptzero.me/ or https://copyleaks.com/ai-content-detector - neither are perfectly reliable, but the comment stuck out to me as obviously LLM-generated (I see a lot of LLM-generated content in my day job), and false positives from these services are actually kinda rare (false negatives much more common).
But if you want to get a sense of how I noticed (before I confirmed my suspicion with machine assistance), here are some tells:
"Large firms are cautious in regulatory filings because they must disclose risks, not hype." - "[x], not [y]"
"The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place." - "concrete examples" as a phrase is (unfortunately) heavily over-represented in LLM-generated content.
"Stock prices reflect broader market conditions, not just adoption of a single technology." - "[x], not [y]" - again!
"Failures of workplace pilots usually result from integration challenges, not because the technology lacks value." - a third time.
"The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest." - not just the infamous emdash, but the phrasing is extremely typical of LLMs.
The use of “ instead of ", two different types of hyphens/dashes, and the specific wording and sentence construction are clear signs that the whole comment was produced by ChatGPT. How much of it was actually yours (people sometimes just want an LLM to rewrite their thoughts) we will never know, but it's the output of an LLM.
I use ChatGPT daily to correct wording, and I work on LLMs; the construction and wording in your comment are straight from ChatGPT. I looked at your other comments, and a lot of them seem to be LLM output. This one is an obvious example: https://hackernews.hn/item?id=44404524
And anyone can go back to the pre-LLM era and see your comments on HN.
You need to understand that ChatGPT has a unique style of writing and overuses certain words and sentence constructions that are statistically different from normal human writing.
Rewriting things with an LLM is not a crime, so you don’t need to act like it is.
I've actually seen LLMs put spaces around em dashes more often than not lately. I've made accusations of humanity only to find that the comment I was replying to was wholly generated. And I asked, there was no explicit instruction to misuse the em dashes to enhance apparent humanity.
And you're responding to a comment where the LLM has been instructed not to use emdashes.
And I'm responding to a comment that was generated by an LLM that was instructed to complain about LLM-generated content with a single sentence. At the end of the day, we're all stochastic parrots. How about you respond to the substance of the comment and not to whether or not there was an emdash? Unless you have no substance.
Posting (unmarked) LLM-generated content on public discussion forums is polluting the commons. If I want an LLM's opinion on a topic, I can go get one (or five) for free, instantly. The reason I read the writing of other people is the chance that there's something interesting there, some non-obvious perspective or personal experience that I can't just press a button to access. Acting as a pipeline between LLMs and the public sphere destroys that signal.
Have you ever listened to a bad interview? Like, really bad? Conversely, have you ever listened to a really good interview? Maybe even one of the same subject? The phrase "prompt engineering" is a bit much, but there's still some skill to it. We know this is true, because every thread there's people saying "it doesn't work for me!" while others are saying it's the second coming.
So maybe while it makes you feel smart, because you're a stochastic parrot that can repeat "LLM generated!!111" like you're a model with a million parameters every time you see an emdash, it's a lazy dismissal and it tramples curiosity.
I have no idea what you think you're responding to. I use LLMs frequently in both professional and personal contexts and find them extremely useful. I am making a different, more specific claim than the thing you think I am saying. I recommend reading my comment more carefully.
This kind of legislation has been proposed so many times at the EU and national level, and it will fail as always, at the latest at the European Court of Justice for violating human rights.
This concept has already been studied extensively, e.g. [1] (in 2000!) by people like Rivest and Chaum, who have decades of actual competence in the field.
I think Worldcoin added identification via government e-passports this year (?) as well (not only the orb). All modern passports have an NFC/RFID chip; you won't get all the data from it through public means, but you can verify the signature and read basic information. There are already apps in the App Store doing that.
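The signature check those apps perform is roughly ICAO 9303 "passive authentication": the chip's Document Security Object (SOD) carries signed hashes of the data groups, so a reader can verify integrity without any secret key. A simplified, hypothetical sketch (real implementations also perform BAC/PACE access control and validate the SOD's CMS signature against country CSCA certificates, which is omitted here):

```python
import hashlib

# Hypothetical data groups read from the passport chip (DG1 = MRZ, DG2 = face).
data_groups = {
    1: b"P<UTOERIKSSON<<ANNA<MARIA<...",   # machine-readable zone data
    2: b"<jpeg2000 face image bytes>",      # stand-in for the photo
}

# Hashes listed in the Security Object (SOD). In reality these come from the
# chip and are protected by a CMS signature; here we just fabricate them.
sod_hashes = {dg: hashlib.sha256(content).digest()
              for dg, content in data_groups.items()}

def passive_auth(data_groups, sod_hashes):
    """Check each data group against the hash the SOD vouches for.
    A real verifier would first validate the SOD's signature chain."""
    return all(hashlib.sha256(content).digest() == sod_hashes.get(dg)
               for dg, content in data_groups.items())

assert passive_auth(data_groups, sod_hashes)          # untampered chip passes
```

This is why the check works "in a public way": the verification needs only public certificates, not any secret held by the passport holder or issuer.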
While that works for attacks that are like spam, bot detection for high margin attacks like show ticket scalping really wants an identity-oriented solution.
This is an extremely ignorant take. It's well known that one of the primary ways you stop spam is by making it economically infeasible, specifically by making the cost of distribution higher than the expected return. It's also well known that spam snail mail is subsidized by the US Post Office and doesn't pay normal postage rates.
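The economic argument is just an expected-value calculation; all figures below are made-up illustrations, not real campaign numbers:

```python
def spam_profit(messages: int, cost_per_msg: float,
                conversion_rate: float, revenue_per_conversion: float) -> float:
    """Expected profit of a spam campaign: revenue minus distribution cost."""
    revenue = messages * conversion_rate * revenue_per_conversion
    return revenue - messages * cost_per_msg

# Near-zero marginal cost (email): profitable even at tiny conversion rates.
print(spam_profit(1_000_000, 0.00001, 0.00005, 30.0))  # ≈ +$1,490

# Raise the per-message cost (postage, proof-of-work, fees): the sign flips.
print(spam_profit(1_000_000, 0.01, 0.00005, 30.0))     # ≈ -$8,500
```

Any mechanism that pushes `cost_per_msg` above `conversion_rate * revenue_per_conversion` makes the campaign a net loss, which is the sense in which subsidized postage props up junk mail.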