Friendly reminder that just because someone is building security software it doesn't mean they are competent and won't cause more harm than good.
Every month the security team wants me to give full code or cloud access to some new scanner they want to trial. They love the fancy dashboards and lengthy reports, but if I granted even 10% of what they asked for we'd be pwned on the regular...
I audited Trivy's GitHub Actions a while back and found some worrying things. The most worrying bit was in the setup-trivy Action, which cloned the main branch of the trivy repo and executed a shell script from it. There was no ref pinning until somebody raised a PR a few months ago. So a security company gave themselves arbitrary code execution in everyone's CI workflows.
Aqua were breached earlier this month, failed to contain it, got breached again last week, failed to contain it again, and now the attackers have breached their Docker Hub account. Shit happens but they're clearly not capable of handling this and should be enlisting outside help.
Partial ref pinning is almost worse than no pinning. You can pin the action itself to a commit SHA, sure. But half the actions out there clone other repos, curl binaries, or run install scripts internally. Basically none of that is covered by your pin. You're trusting that the action author didn't stick a `curl | bash` somewhere in their own infra.
Audited our CI a few months back and found two actions doing exactly that. Pinned to SHA on our end, completely unpinned fetches happening inside.
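As a sketch of what that gap looks like in a workflow (the action name, SHA, and URLs below are hypothetical placeholders, not a real vendor):

```yaml
# Hypothetical workflow fragment. The SHA pin only covers the action's
# own repo; anything the action fetches at runtime is outside the pin.
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      # Pinned to a full commit SHA on our end (placeholder SHA)
      - uses: some-vendor/setup-scanner@2f4a9c1d8e7b3a5f6c0d1e2f3a4b5c6d7e8f9a0b
        with:
          version: latest
      # ...but internally the action may still do the equivalent of:
      #   git clone https://github.com/some-vendor/scanner  # clones main, unpinned
      #   curl -sSL https://example.com/install.sh | bash   # unpinned install script
```

The pin on the `uses:` line is verified by the runner; nothing verifies what the action's own code downloads once it starts executing.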
Granting broad access to "security" tools so some vendor can take another shot at your prod keys is not risk reduction.
Most of these things are just report printers that make more noise than a legacy SIEM, and once an attacker is inside they don't do much besides dump findings into a dashboard nobody will read.
If you want less self-inflicted damage, stick new scanners in a tight sandbox, feed them read-only mirror data, and keep them away from prod perms until they have earned trust with a boring review of exactly what they touch and where the data goes.
Otherwise you may as well wire your secrets to a public pastebin and call it testing.
Yet many of these tools have a setup like: create a service account, give it about a thousand permissions (if not outright full ownership) and send us the JSON private key.
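For contrast, a minimal sketch of what a scoped trial grant could look like on GCP (project and account names here are hypothetical): a single read-only role binding instead of owner, and no exported JSON key at all.

```json
{
  "bindings": [
    {
      "role": "roles/viewer",
      "members": [
        "serviceAccount:scanner-trial@my-project.iam.gserviceaccount.com"
      ]
    }
  ]
}
```

Even `roles/viewer` is often broader than a scanner needs; ideally you'd bind a custom role with only the read permissions the tool documents, and use short-lived credentials (e.g. workload identity federation) rather than a long-lived private key.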
Most of corporate security nowadays involves "endpoint security solutions" installed on all devices, servers and VMs, piping everything into an AI-powered dashboard so we can move fast and break everything.
My hypothesis is that generally, there's no quality floor at which security departments are "allowed" to say "actually, none of the options on the market in this category are good enough; we're not going to use any of them". The norm is to reflexively accept extreme invasiveness and always say yes to adding more software to the pile. When these norms run deeply enough in a department, it's effectively institutionally incapable of avoiding shitty security software.
Fwiw, w/r/t Trivy in particular, I don't think Trivy is bad software and I use it at work. We're unaffected by this breach because we use Nix to provide our code scanning tools and we write our own Actions workflows. Our Trivy version is pinned by Nix and periodically updated manually, so we've skipped these bad releases.
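A minimal sketch of that kind of pin, assuming nixpkgs conventions (the version and hash below are placeholders, not the versions we actually run):

```nix
# Hypothetical overlay pinning trivy to a manually reviewed release.
final: prev: {
  trivy = prev.trivy.overrideAttrs (old: rec {
    version = "0.55.0";  # placeholder; bumped by hand after review
    src = prev.fetchFromGitHub {
      owner = "aquasecurity";
      repo = "trivy";
      rev = "v${version}";
      hash = "";  # fill in with the real source hash
    };
  });
}
```

The point is that nothing updates until a human changes the version and hash, so a compromised release sits unused until someone actually reviews the bump.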
From having worked at and consulted with security software producing companies as well as security software consuming ones, I would say the security companies are worse than average at security.
And their security teams more cynical.
Sometimes they deliberately hire lower aptitude candidates to run internal security to prevent them from getting distracted by the product.
In other cases they are getting high on their own supply, more or less.
Jack Welch style management seems to take a deeper toll in this sector.
It doesn't help that a lot of security software is pretty niche. It's unreasonable to expect most candidates to know it or have experience.
In one case I was one of exactly two people out of 500 that had used the product as a paying customer. Neither of us was in management.
After a year or two the CISO drifted over and asked me to show him how to use the product, but he was more interested in sound bites than actually using the system.
It became a powerpoint exercise and I collected my attaboy.
On private trackers where people care about that stuff it's easier. The NFO usually has a pretty comprehensive description of the contents and all the tracks etc so you can decide which version you want before downloading.
Those claims were never confirmed, no? Some of it might be true or true-ish, but I'm not taking Bloomberg's anonymous sources' word for it, and with so much Supermicro gear out there you would think some other evidence would have shown up.
It depends on what you consider confirmed. It was kind of corroborated, at least: the CEO of a hardware security firm came forward after the original article and claimed his firm had actually found a hardware implant on a board during a security audit. It wasn't exactly as Bloomberg described, though.
His take was that it was very unlikely that it impacted exclusively Supermicro, though.
I don't think it was a confirmed story. That is, the tiny "grain of rice"-sized Ethernet module that the CEO of a security audit company allegedly found was not present in other SuperMicro servers. SuperMicro itself, as well as its biggest customers, did not confirm the findings.
From what I recall, the story was very vague: there were no pictures of the specific chip, no pictures of the motherboard that would include a serial number, i.e. none of the details that would accompany serious security research.
The only photo I saw of the "hidden Ethernet module" was a ceramic RF filter or diplexer, basically a passive $2 part that does nothing on its own, and that would have stuck out like a sore thumb if actually installed in the area where it was depicted.
Just a random surface-mount component that someone pulled off another board or found on the floor behind a workbench. Allegedly.
A supply chain attack similar to Supermicro's would be much more targeted and recalls with national security implications do get flagged via a separate chain.
Recently asked Codex (GPT-5.2) to write a small single-page HTML frontend to debug some REST endpoints. As it was just a one-off tool, I put in no instructions about looks or styling at all. Lo and behold, the tool it wrote came with exactly that round-box style.
It seems to be the "default" style of some models for some reason.
Which makes me wonder if people have already experimented with different style suggestions to get different results: "Make it look like a 1998 GeoCities page" / 2005 Facebook / Newgrounds / DeviantArt / HN / one of those Windows XP simulators with a built-in window manager / etc
I vibe code web apps with Google's Gemini and I think it actually mimics Google's UI and UX because I see similarities between my vibe coded web apps and Google's web apps.
If anything it's not going far enough. Corruption should be considered high treason and lead to public executions.
As a common citizen, very few things in this world are more demoralizing than witnessing corrupt politicians get filthy rich while trading favors, embezzling money, creating loopholes for corporations to exploit while pretending to regulate them, or whatever else it is that they do; there are so many scams it's impossible to enumerate them all. It breaks one's spirit to realize that it's the honest and the just who get punished on a daily basis while the corrupt get constantly rewarded. That is an incalculable crime against society and should absolutely be severely punished.
If they don't like it, they can just not be a politician. Nobody asked them to seek power, they fought to be there. Normal citizens get to go to jail, not them.
> But he that knew not and committed things worthy of stripes, shall be beaten with few stripes.
> For unto whomsoever much is given, of him shall much be required
> and to whom men have committed much, of him they will ask the more.
Revoke their right to privacy. Normal citizens get to have privacy protections. Not them. They are in power to serve us. We should be entitled to know everything they are doing at all times. Let's see them try to scapegoat anyone then. If they don't like it just resign.
If a solution isn't working, it's probably because it's not extreme enough. Always go further. If they try to weasel out, preempt them. We aren't dealing with innocent lambs here. These are rich elites who graduated from top universities, high functioning sociopaths who absolutely have enough education and mental acuity to know right from wrong. They take an almost sadistic pleasure in not only subverting morality but in getting away with it. They don't just do things, they rub it in our faces that they did it and got away with it, and think us fools for not stopping them.
Corruption is power without responsibility. The more power you have, the more scrutiny you should be subjected to and the more consequences you should suffer.
I got solar installed by the local power company and while it's well done and was a great deal regarding the price, the inverter stats are locked behind a really terrible app. At least there isn't a subscription cost but I wouldn't be surprised if they add one someday.
Would gladly pay more for a fully open and serviceable replacement.
I get that - but also solar should be cheap. If we lower the cost of power we knock off a lot of the bad externalities of power production and allow people to be more inventive with their power use.
Agreed - a lot of the inverters pull some real BS moves around data management, clearly a way to extract more value via a subscription model. It's mind-numbingly frustrating.
Could use some proofreading.