Imagine if we didn't tell criminals the law because they might try to find holes in it... This is just user-hostile security through obscurity. If someone on HN knows that this is what's shown to banned people, then so do the people who scrape or mean harm to Imgur.
In 2015, yes. In 2025? Probably not. Imgur has been enshittifying rapidly since reddit started its own image host. Lots of censorship and corporate gentrification. There are still some hangers-on, but it's a small group. 15 comments on imgur is a lot nowadays.
Why would you think it is anything special? Just because Sam Altman said so? The same guy who told us he was scared to release GPT-2, but who now calls its abilities "toddler/kindergarten" level?
My comment was mostly a joke. I don't think there's anything "special" about GPT-5.
But these models have exhibited a few surprising emergent traits, and it seems plausible to me that at some point they could intentionally deceive users in the course of exploring their boundaries.
There is no intent, nor is there a mechanism for intent. They don't do long-term planning, nor do they alter themselves based on anything they go through during inference. Therefore there can be no intentional deception on their part. The system may generate a body of text that a human reader attributes deceptiveness to, but there is no intent.
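On the "don't alter themselves during inference" point, here's a minimal PyTorch sketch. The model is a toy stand-in, not a real LLM, and the architecture and token loop are made up for illustration, but the mechanics are the same: generation runs with no gradients and no optimizer, so the weights are bit-identical afterward. Any apparent "memory" lives in the prompt/context, not in the model itself.

    # Toy stand-in for an LLM (hypothetical architecture, illustration only).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Embedding(100, 16), nn.Linear(16, 100))
    model.eval()

    # Snapshot the weights before "inference".
    before = {k: v.clone() for k, v in model.state_dict().items()}

    # Greedy next-token loop: no gradients, no optimizer, no weight updates.
    tokens = [42]
    with torch.no_grad():
        for _ in range(20):
            logits = model(torch.tensor([tokens[-1]]))
            tokens.append(int(logits.argmax()))

    # Nothing the model "went through" persisted in its parameters.
    after = model.state_dict()
    print(all(torch.equal(before[k], after[k]) for k in before))  # True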
I'm not an ML engineer - is there an accepted definition of "intent" that you're using here? To me, it seems as though these GPT models show something akin to intent, even if it's just their chain of thought about how they will go about answering a question.
> nor is there a mechanism for intent
Does there have to be a dedicated mechanism for intent for it to exist? I don't see how one could conclusively say that it can't be an emergent trait.
> They don't do long term planning nor do they alter themselves due to things they go through during inference.
I don't understand why either of these would be required. These models do some amount of short-to-medium-term planning, even if it is only within the context of their responses, no?
To be clear, I don't think the current-gen models are at a level to intentionally deceive without being instructed to. But I could see us getting there within my lifetime.
If you were one of the very first people to see an LLM in action, even a crappy one, and you didn't have second thoughts about what you were doing and how far things were going to go, what would that say about you?
It is just dishonest rhetoric no matter what. He is the most insincere guy in the industry; he somehow manages to come off as even less sincere than the lawnmower, Larry Ellison. At least that guy is honest about not having any morals.