yz-exodao's comments

Also, what's this??? https://imgur.com/a/5CF34M6


Imgur is down, hug of death from screenshot links on HN.

  {"data":{"error":"Imgur is temporarily over capacity. Please try again later."},"success":false,"status":403}
Or rate limited.


This is what Imgur shows to blacklisted IPs. You probably have a VPN on that is blocked.
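For anyone trying to tell that block/over-capacity response apart from an ordinary outage, here's a minimal sketch (assuming Python with the requests library; the album hash is just the one linked upthread). It only checks for the HTTP 403 status and the JSON error body quoted above.

  import requests

  # Minimal sketch (assumption: Python + requests). Imgur sends back HTTP 403
  # with the JSON error quoted above both when it's "over capacity" and when
  # the requesting IP is blocked, so the body alone can't tell you which.
  resp = requests.get("https://imgur.com/a/5CF34M6",
                      headers={"Accept": "application/json"})
  if resp.status_code == 403:
      try:
          error = resp.json().get("data", {}).get("error", "")
      except ValueError:
          error = resp.text[:200]
      print("Blocked or rate limited:", error)
  else:
      print("Got status", resp.status_code, "- probably not blacklisted")

Retrying the same request from a different network, as suggested below, is the quickest way to confirm whether it's the IP that's blocked.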


Ugh, why lie to users... Just say the IP is blacklisted.

Thanks for the tip btw.


Because when you know it’s blacklisted you might try with a different IP, whereas if you don’t you will just wait (forever).


Imagine if we didn't tell criminals the law because they might try to find loopholes... This is just user-hostile, and it's security through obscurity. If someone on HN knows that this is what's shown to banned people, then so do the people who scrape or mean harm to Imgur.


In a world where we couldn't arrest criminals, only keep track of them in a logbook, yeah, that's probably exactly what we'd do.


There’s no law here, just someone trying to protect their website.



Lol this is pure vibegraphing!


Stats say this image got 500 views. Imgur is much, much more populated than HN.


In 2015, yes. In 2025? Probably not. Imgur has been enshittifying rapidly since Reddit started its own image host. Lots of censorship and corporate gentrification. There are still some hangers-on, but it's a small group. 15 comments on Imgur is a lot nowadays.


Not GPT-5 trying to deceive us about how deceptive it is?


Why would you think it is anything special? Just because Sam Altman said so? The same guy who told us he was scared of releasing GPT-2.5 but now calls its abilities "toddler/kindergarten" level?


My comment was mostly a joke. I don't think there's anything "special" about GPT-5.

But these models have exhibited a few surprising emergent traits, and it seems plausible to me that at some point they could intentionally deceive users in the course of exploring their boundaries.

Is it that far fetched?


There is no intent, nor is there a mechanism for intent. They don't do long term planning nor do they alter themselves due to things they go through during inference. Therefore they cannot engage in intentional deception. The system may generate a body of text that a human reader might attribute to deceptiveness, but there is no intent.


> There is no intent

I'm not an ML engineer - is there an accepted definition of "intent" that you're using here? To me, it seems as though these GPT models show something akin to intent, even if it's just their chain of thought about how they will go about answering a question.

> nor is there a mechanism for intent

Does there have to be a dedicated mechanism for intent for it to exist? I don't see how one could conclusively say that it can't be an emergent trait.

> They don't do long term planning nor do they alter themselves due to things they go through during inference.

I don't understand why either of these would be required. These models do some amount of short-to-medium-term planning, even if it's just within the context of their responses, no?

To be clear, I don't think the current-gen models are at a level to intentionally deceive without being instructed to. But I could see us getting there within my lifetime.


If you were one of the very first people to see an LLM in action, even a crappy one, and you didn't have second thoughts about what you were doing and how far things were going to go, what would that say about you?


It is just dishonest rhetoric no matter what. He is the most insincere guy in the industry; he somehow manages to come off as even less sincere than the lawnmower Larry Ellison. At least that guy is honest about not having any morals.


Deception - I'm guessing it's the % of responses that deceived the user / gave misleading information.


Sure, but 50.0 > 47.4...


Oh man... didn't even notice. I've been deceived. That's bad.


In everything except the first set of bars, bigger bar == bigger number.

But also the scale is really off... I don't think anything here is proportionally correct, even within the same grouping.
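For what it's worth, here's a minimal sketch (assuming matplotlib, and using only the 47.4 and 50.0 deception figures cited upthread; the model labels are placeholders) of what proportionally drawn bars would look like: with a zero-based axis, the 50.0 bar has to come out taller than the 47.4 one.

  import matplotlib.pyplot as plt

  # Hypothetical re-plot of the two deception rates mentioned in this thread.
  # Only the 47.4 and 50.0 values come from the comments above; the model
  # labels are placeholders.
  labels = ["model A", "model B"]
  values = [47.4, 50.0]

  plt.bar(labels, values)
  plt.ylim(0, 60)  # zero-based axis, so bar heights scale with the values
  plt.ylabel("Deception rate (%)")
  plt.title("Bars drawn proportional to the labeled values")
  plt.savefig("deception_bars.png")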


