Some hard numbers [1] as to why GitHub is struggling with stability issues, directly from GitHub's COO:
Yup, platform activity is surging. There were 1 billion commits in 2025, and the rate is now 275 million per week, on pace for 14 billion this year if growth remains linear (spoiler: it won't).
GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week in 2025, and is at 2.1B minutes so far this week.
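Back-of-envelope, if you want to sanity-check the pace claim (a quick sketch; the weekly rates are from the post, the flat-rate annualization is mine):

    # Annualize the quoted weekly rates, assuming the rate stays flat.
    commits_per_week = 275e6
    actions_minutes_per_week = 2.1e9
    print(f"{commits_per_week * 52 / 1e9:.1f}B commits/year")          # ~14.3B
    print(f"{actions_minutes_per_week * 52 / 1e9:.0f}B minutes/year")  # ~109B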
So we're pushing incredibly hard on more CPUs, scaling services, and strengthening GitHub’s core features.
In a large enterprise, if you task a front-end team with solving a performance issue that is caused by the back end, invariably they’ll hack together some workaround… in the front end.
People only ever solve problems in the areas they have control over, whether that’s where the root cause is or not.
From what I remember, it got much worse the moment they started requiring JS for displaying what would otherwise be mostly static (and thus easily cached) content.
It used to be full page loads when you clicked on links, too; performance got a lot worse (for me), both network-wise and client-side, when that changed.
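For what it's worth, you can check how cacheable a given page is at the HTTP layer yourself (a minimal sketch; it needs the third-party requests package, and the repo URL is just an example target):

    # Print the response headers that govern HTTP caching for a page.
    import requests

    resp = requests.get("https://github.com/torvalds/linux")
    for header in ("Cache-Control", "ETag", "Vary"):
        print(f"{header}: {resp.headers.get(header)}")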
That doesn't mean it doesn't have usage patterns or other things telemetry would be useful for. And, at the rate these tools are being updated (multiple times a week, multiple times a day in some cases), they practically _are_ SaaS.
If the benchmarks are private, how do we reproduce the results? I looked up Humanity's Last Exam (https://agi.safe.ai/), which this model uses, and I can't seem to access it.
Yep, latency's also big if you play competitive multiplayer games. With DOCSIS you get ~11ms ±3ms added to every packet no matter what, because it's shoehorned onto existing cable infrastructure. Fiber is much better in this regard.
Ping to my public IP's gateway address:
    30 packets transmitted, 30 received, 0% packet loss, time 29031ms
    rtt min/avg/max/mdev = 1.449/1.915/2.212/0.166 ms
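If anyone wants to reproduce this against their own gateway (a minimal sketch; assumes Linux iputils ping, and GATEWAY is a placeholder for your gateway's address):

    # Ping the gateway 30 times and print the rtt summary line, e.g.
    #   rtt min/avg/max/mdev = 1.449/1.915/2.212/0.166 ms
    import subprocess

    GATEWAY = "192.0.2.1"  # placeholder, substitute your gateway address
    out = subprocess.run(
        ["ping", "-c", "30", GATEWAY],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip().splitlines()[-1])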
Does Intune have some sort of check that goes "if over 1% of devices are wiped within a certain timeframe, stop all new device wipe requests"? Seems like it should be a feature, especially if these kinds of attacks pick up.
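The check itself seems simple to express; something like this (hypothetical sketch, not an Intune feature or API; the window and threshold are made up):

    # Hypothetical tripwire: refuse new wipe requests once recent wipes
    # exceed a fraction of the fleet in a sliding window. Made-up logic,
    # not an Intune API.
    import time
    from collections import deque

    WINDOW_SECONDS = 24 * 3600     # look-back window
    MAX_WIPE_FRACTION = 0.01       # halt at 1% of the fleet per window

    recent_wipes: deque[float] = deque()  # timestamps of allowed wipes

    def allow_wipe(fleet_size: int) -> bool:
        """Return False (breaker tripped) once the window quota is spent."""
        now = time.time()
        while recent_wipes and now - recent_wipes[0] > WINDOW_SECONDS:
            recent_wipes.popleft()
        if len(recent_wipes) >= fleet_size * MAX_WIPE_FRACTION:
            return False  # hold all new wipes for human review
        recent_wipes.append(now)
        return True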
>During TechEd 2014, Emory University's IT department prepared and deployed Windows 7 upgrades to the campus's computers. If you've worked with ConfigMgr at all, you know that there are checks and balances that can be employed to ensure that only specifically targeted systems will receive an OS upgrade. In Emory University's case, the check-and-balance method failed and, instead of delivering the upgrade to applicable computers, delivered Windows 7 to ALL computers, including laptops, desktops, and even servers.
You can set dual authorization for resets, wipes, and deletes. Normally CISA would chime in with this kind of guidance. Anyone know what they’re up to now?
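Dual authorization is just the two-person rule; in sketch form (hypothetical, not Intune's actual interface, all names made up):

    # Two-person rule sketch: a wipe executes only after a second,
    # distinct admin approves it. Not Intune's real API.
    from dataclasses import dataclass

    @dataclass
    class WipeRequest:
        device_id: str
        requested_by: str
        approved_by: str | None = None

    def approve(req: WipeRequest, approver: str) -> None:
        if approver == req.requested_by:
            raise PermissionError("approver must differ from requester")
        req.approved_by = approver

    def execute(req: WipeRequest) -> None:
        if req.approved_by is None:
            raise PermissionError("wipe needs a second sign-off")
        print(f"wiping {req.device_id}")  # stand-in for the real action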
If AGI is coming, won't there just be autofishers and no one will ever have to fish again, completely devaluing one's fishing knowledge and the effort put in to learn it?
“Autofishers” are large boats with nets that bring in fish in vast quantities that you then buy at a wholesale market, or a supermarket a bit later, or they flash freeze it and sell it to you over the next 6-9 months.
Yet there’s still a thriving industry selling fishing gear. Because people like to fish. And because you can rarely buy fish as fresh as what you catch yourself.
Again, it’s not a great analogy, but I dunno. I doubt AGI, if it does come, will end up working the way people think it will.
Every new model is better than the previous one. When, and why, is that going to stop? Current models are limited by DC capacity, which is being rapidly expanded. Genuinely curious; I don't know the answer.
Because I want to know what the exploit is doing and how it works, and if it's even safe to run.
A privesc PoC is NOT the place for this kind of fun.