What if a sociopathic super genius is ever born and raised to his full ability, and everyone dies?
It’s not like AI is the only serious threat humanity has faced.
I'm guessing they use very conservative usage assumptions in their math. I'm on my Pixel all day long, but I barely use my iPhone 13 more than maybe an hour a day. I can leave it unplugged all weekend and come back Monday with enough charge to get me through the day.
The cars are programmed to intentionally violate the rules. That’s the only way to drive effectively. Just two examples: gap finding and turning left at a busy intersection.
I’m interested in learning more about your theory that these models can be trained more cheaply. Is anyone doing it from scratch, rather than adversarial distillation?
It is a lot cheaper to train a 27B model such as qwen3.6, which you can even vibe code or do agentic coding with, than it is to train a 1T+ parameter model. It runs on a single commodity GPU, for goodness' sake.
It's not a theory. These smaller models that are coming out are huge advances for the field.
I can't comment on companies' training practices; that would be proprietary, I guess. But I think the claims that the advances being made are due to distillation alone are completely unfair. The advances are not just about data.
Several times the speed of sound? That is meaningless when there is no medium for the sound waves.
I think a better unit might be furlongs per fortnight.
> 2.43 kilometers a second, or 1.51 miles a second, or 5,400 miles an hour, or 8,700 kilometers an hour.
> There is, of course, no air and no sound on the Moon, so a "Mach number" doesn't really make sense. But if there were air, the speed would be about Mach 7, seven times the speed of sound.
"If there were air". Air at what temperature, though? The speed of sound, and hence what a Mach number means, depends on the temperature of the air. The temperature air would have at the Moon's surface? By day or by night? Or the air at Earth's surface? Or at some other altitude?
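The temperature dependence is easy to quantify with the ideal-gas formula a = sqrt(γ·R·T) for dry air. A quick sketch (the lunar surface temperatures below are rough illustrative figures, not from the article):

```python
import math

GAMMA = 1.4     # ratio of specific heats for diatomic air
R_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

def speed_of_sound(temp_kelvin):
    """Speed of sound in dry air at the given temperature, in m/s."""
    return math.sqrt(GAMMA * R_AIR * temp_kelvin)

impact_speed = 2430.0  # m/s, the quoted 2.43 km/s

# Compare the implied Mach number under different temperature assumptions
for label, t in [("Earth sea level, 15 C", 288.15),
                 ("lunar daytime surface, ~120 C", 393.15),
                 ("lunar nighttime surface, ~-130 C", 143.15)]:
    a = speed_of_sound(t)
    print(f"{label}: a = {a:.0f} m/s, Mach {impact_speed / a:.1f}")
```

At 15 °C the speed of sound is about 340 m/s, so 2.43 km/s works out to roughly Mach 7, matching the quoted figure; with a much colder "atmosphere" the same impact speed would be well above Mach 10.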
Well, there is a speed of sound on the moon. Sound does travel through the regolith. If you were standing on the moon you would indeed "hear" this impact as the sound moved up through your feet. It would sound/feel like standing beside a subwoofer.
I still call it sound when I hear things under water. Gas isn't special.
When someone knocks on your door, you still say you heard the sound, even though the pressure wave was transmitted through the solid door material (before then being transmitted to the gas in the room). Likewise, we still file a noise complaint when the neighbor is throwing a raging party, even though we are feeling the bass as much as we are hearing it.
Didn’t Anthropic vibe code all of those integrations? If AI coding is as useful and successful as it is touted to be, then those integrations should be no moat at all.
SWEs are paid that because the industry makes so much money off advertising, and that sets the market for everything else.
It's more business model than skillset, because RF engineering is, in many ways, so much more technically challenging.
People who care about pay should mostly be thinking about how their potential employers make money. Do they have fat variable margins? Is there volume? Do I have the opportunity to impact those margins in some way? If you do, there's a good chance you can make good money, regardless of the actual technical challenge at hand.
For a lot of RF engineering, the answers are generally no, at least enough such that the general market isn't getting set at a high clearing rate.
Hardware engineers can get paid that, although it’s rarer. That said, there’s also a much broader base of hardware engineers than just the Bay Area… so cost of living is a lot lower, therefore salaries don’t need to be as sky high to compensate.
Imagine vibe coding your core consumer application and associated backend…
Oh wait, I don’t have to imagine. That’s what Anthropic does. A nice preview of what is in store for those who choose to turn off their brains and turn on their AI agents.