The fetishism of "byte count" (here, "732 byte python script") needs to stop, especially in a context like this, where they're trying to illustrate a real failure mode.
Looking at their source code [1] it starts with this simple line:
import os as g,zlib,socket as s
And already I'm perplexed. "os as g"? But we're not aliasing "zlib as z"? Clearly this was auto-generated by some kind of minimizer, likely because zlib is called only once and os multiple times. As a code author/reviewer, I would never write "os as g", and I would absolutely never approve code like this in review.
Anyway, I could go on. :) Let's just stop fetishizing byte count
Hilariously, "os as g" costs one more byte than it saves: os is used only 4 times, so the alias saves 4 bytes but takes 5 extra bytes to declare. And "socket as s" comes out even.
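The arithmetic checks out mechanically (assuming, as the parent does, four references to os in the script):

```python
# Cost/benefit of "import os as g" when os is referenced 4 times.
alias_cost = len(" as g")                # 5 extra bytes in the import line
per_use_saving = len("os.") - len("g.")  # 1 byte saved per reference
net = alias_cost - 4 * per_use_saving    # 5 - 4 = +1 byte overall
print(net)                               # 1: the alias is a net loss
```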
If you wanted real savings, you'd use "d=bytes.fromhex" instead of defining a function -- 17 bytes!! And d('00') -> b'\0' for -2 bytes.
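Both savings are easy to verify by counting the characters of the one-line forms:

```python
# Bind the method instead of wrapping it in a def: saves 17 bytes.
d = bytes.fromhex
assert d("00") == b"\x00"  # the helper behaves identically

saved = len("def d(x):return bytes.fromhex(x)") - len("d=bytes.fromhex")
print(saved)               # 17

# And the literal b'\0' is 2 bytes shorter than the call d('00').
print(len("d('00')") - len(r"b'\0'"))  # 2
```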
We could easily get the byte count down further by using base64.b85decode instead of bytes.fromhex (-70 or so), but ultimately we're optimizing a meaningless metric, as you mention.
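A rough sketch of where that estimate comes from, using a stand-in blob of the same size as the PoC's (~180 bytes per a sibling comment; the actual bytes don't matter for the size comparison):

```python
import base64

# Stand-in for the PoC's ~180-byte compressed blob.
data = bytes(range(180))
hex_len = len(data.hex())              # 360 chars as a hex literal
b85_len = len(base64.b85encode(data))  # 225 chars as a base85 literal
print(hex_len - b85_len)               # 135 chars saved on the literal...
# ...minus the longer decode call (base64.b85decode vs bytes.fromhex)
# and the extra import, which is roughly the "-70 or so" above.
```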
I don't get the 732-byte thing either, and while I think it's a relatively punchy and unusually informative landing page for a named vulnerability, there are little snags like this all over it.
But the fact that it's not a kernel-exec LPE and it's reliable across kernels and distributions is important; it's close to the maximum "exploitability" you're going to see with an LPE. Which the page does communicate effectively; it just gilds the lily.
yeah... definitely a bit of a rush to get the landing page out after a long time in the disclosure process. The folks putting this all together have been working like mad (finding the bug, disclosing, working a lot on patching, writing up PoCs and verifying exploitability in different scenarios) and stayed up really late to finish the landing page, which led to a lot of minor issues.
But the bug is real and people should patch :)
For the size: sometimes people will shove in kilobytes of offset tables or something into an exploit, so it'll fingerprint and then look up details to work. This is much smaller because it doesn't need any of that, which is important for severity. (I agree the "golf" nature is a bit of an aside, kind of like pwn2own exploits taking "10 seconds")
I don't see it as fetishizing byte count. I think of it as a proxy measure for how complicated or uncomplicated the exploit might be. They could just as well have said "we can do it in 3 lines of python" or "the Shannon entropy of the script implementing the exploit is really small" and I would have interpreted it similarly.
Where do you see this "fetishizing" happening most often? It's a strange thing to counter-fetishize about.
> I think of it as a proxy measure for how complicated or uncomplicated the exploit might be.
From a Busy Beaver, 256-byte compo, or Dwitter perspective, 732 bytes isn’t really that meaningful.
And the sample exploit is even optimizing the byte size by using zlib compression, which doesn’t make much sense for the purpose. It just emphasizes the byte count fetishization.
Again, I think the point is that compressed size is a reasonable measure of the inherent complexity of a program. I'm a crap mathematician, but I believe that is a fundamental concept in information theory.
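As a toy illustration of that idea (compressed size tracking information content, a la Kolmogorov complexity), two equal-length inputs compress very differently:

```python
import os
import zlib

boring = b"A" * 1000      # highly regular: little information
noisy = os.urandom(1000)  # incompressible: maximal information

print(len(zlib.compress(boring)))  # a handful of bytes
print(len(zlib.compress(noisy)))   # ~1000+ (compression adds overhead)
```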
If you have a choice between posting minimized exploit code, and posting regular exploit code, posting minimized code is virtually always the wrong choice.
If you have a choice between pointing out the byte size of the exploit, and not pointing out the byte size of the exploit, pointing it out is virtually always the wrong choice.
In both cases, doing the right thing is less work. So somebody is going out of their way to ensure they do it wrong. If they didn't care, they'd end up doing it right by default.
How does "import os as g" communicate the intent? How does hiding the payload behind zlib communicate the intent? This is the opposite: obfuscating the intent, so they can brag about 732 bytes instead of 846 bytes (or whatever it might have been).
It would have been less work for everyone involved to just release the unminified source.
While not formally reviewing code like this, I read a lot of it for fun. When it's clear and understandable, it's more educational and enjoyable. If the PoC code can also serve as a means of communication, that seems like an extra win.
I started to take the exploit script apart and reformat it into something readable. At about 1041 bytes it's actually readable. The heart of it includes a hex-encoded zlib-compressed blob that's 180 bytes long ('78daab77...'). This is decompressed with zlib.decompress(d(BLOB)) into a 160-byte ELF header.
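For anyone following along, the decode pattern round-trips like this (with a hypothetical stand-in payload; the real blob carries the actual 160-byte ELF header):

```python
import zlib

d = bytes.fromhex
# Stand-in for the real 160-byte decompressed payload.
payload = b"\x7fELF" + b"\x00" * 156
blob = zlib.compress(payload, 9).hex()  # what gets embedded in the script
assert blob.startswith("78da")          # 0x78da: zlib magic, matching '78daab77...'
assert zlib.decompress(d(blob)) == payload
```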
You're supposed to add "as a senior engineer" so we know you're 3 years out of bootcamp and can program in 1.25 languages. Or "as a staff..." if you've given an interview, know what 'make' is ("it's a command!") and are willing to do absolutely anything for the CTO.
While I agree that it doesn't make much sense to use a minimizer on code the reader could understand, the code-golfed byte count of a CVE repro communicates its complexity in a certain visceral way.
Then go on. zlib is only used once, so "zlib as z" in exchange for using z once doesn't get you anything. Using os directly and not renaming it g saves you 2 bytes though. But in this age where AI outputs reams of code at the drop of a hat, why shouldn't we enjoy how small you can get it to pop a root shell?
>As a code author/reviewer, I would never write "os as g" and I would absolutely never approve review of any code that used this.
lucky for them, it's an exploit script, not enterprise code.
all that needs to be "reviewed" is whether or not it exploits the thing it's supposed to.
edit: y'all really think a 10-line proof-of-concept script needs to undergo code review? wild. i shouldn't be surprised that the top comment on a cool LPE exploit is complaining about variable naming
It's just sloppy. Readers are human, and little mistakes like this take away from the article. Then you add a nonexistent RHEL version, and it just isn't a good look. Which is a shame, because it's otherwise a very interesting vuln.
Maybe you didn't care, but the length of this comment chain clearly shows that it matters. Effective communication is just as important as the engineering.
i just don't understand huffing and puffing over "os as g" in a 10-line poc script, and saying "well i would never approve this". it's not enterprise code. it's not code that will ever be used anywhere else, for anything. its sole purpose is to prove that the exploit is real, which it does!
the rest of the information is in the actual vulnerability report. the poc is a courtesy to the recipient, so that they can confirm the report itself isn't bullshit.
evidently, given the downvotes i am getting, people think exploit scripts should be enterprise-quality code. ¯\_(ツ)_/¯ half the reports i see flowing through mailing lists don't even have a poc.
amazingly HN-like to be upset about a variable name
>Disagree because to run the PoC you really ought to understand what it’s doing.
that is contained in the report, which will look similar to the blog. the maintainers will have an open line of contact with the reporters as well. the poc is a small part of the entire report. it's not like the linux maintainers only received this poc and have to work out the vulnerability from it alone.
>It is failing at letting people confirm the exploit easily.
it confirms the exploit incredibly easily. just run it, and you get confirmation.
go ahead and explain your point, rather than being cryptic, if you want to have an actual conversation about it.
you said "I need to know what the code does before I run it."
you know its an LPE. the mechanisms of the exploit are fully explained. what more do you need to know? please imagine yourself in the position of the kernel security team who would have received this poc in the first place when you answer, because that is the intended context of the poc.
if you think the kernel security team is going to get tripped up over "os as g", you have a crazy low view of the team.
I don't think anyone is saying it has to be "enterprise" quality; it's just that they clearly went out of their way to make it less readable. By all means advertise the golfed byte count, but also post the non-minified script.
Nordic Semi, or maybe ST Micro. I've got an STM32WB on my bench at the moment with sensitive coulomb counting and it looks very promising but without all those radios. Of course with all those radios (ie, if you need LoRa on a watch... which is a design decision I'm also skeptical of) then Nordic has a good track record.
Different use case (environmental data recorder) but our current product uses an STM that turns on an ESP32 (not trusting that sleep mode) when it needs radio, to run on 2 AAs forever.
I have the opposite experience: random HN/Reddit comments saying “this sucks” or “whoa this is a huge improvement” are the only benchmark that means anything. Standard benchmarks are all gamed and don’t capture the complexity of the real world.
Total gimmick. I guess we're "making progress", but this will never lead to any useful application other than "Yes, you're absolutely right" bots. What's needed for real applications is 10000× the input token context and 10× the output token speed, so we're off by a factor of ... 100,000×?
Correct, also with the context growing, the conversations cannot continue at the initial speed either. Gimmick or not, this is very sci-fi compared to 10-20 years ago.
[1] https://github.com/theori-io/copy-fail-CVE-2026-31431/blob/m...