We are _miles_ behind successful embodied AI. The demos are cool but the success rates are not high enough.
You can tell we're on the cusp when level 5 self-driving cars are common and you have multiple companies deploying them on the street. Google is doing great work, but they poured TONS of effort into it and the thing still needs intense stacks of perception and processing. Much more than I've seen any humanoid company pour in.
L5 SDVs are much easier to achieve than humanoids, and they have tangible economic benefit. My thesis is that those will come first.
I'm really curious how quickly we would have huge numbers of L5 SDVs if we societally accepted ~equal rates of injury and death, both of passengers and pedestrians. I want to be very clear: I'm not advocating for this (and even if I were, I haven't the faintest idea how one would go about getting society more broadly to go along), but part of me thinks that the primary holdup isn't actually capability but standards.
This doesn't really argue against your point, because the standards are what they are, and like I said, I have no idea how one would go about changing them if one even decided they wanted to. And given what they are, it has taken, as you point out, enormous amounts of effort to reach those standards in a practical way.
That all being said, while I agree that SDVs are in many respects easier than other robotics tasks, they are also somewhat uniquely dangerous. Other categories of task, while potentially more complicated, won't have to worry nearly so much about safety, and so may operate under a different constraint regime. I think this means we may see adoption happen at a much more accelerated rate than we have seen in the automotive space.
Standards are not higher for self-driving cars. Musk lied a lot about the capability and safety of self-driving, creating the impression that it is safer than human driving.
I have no idea where you get this impression. Tesla is nowhere close to the majority (or even a plurality) of fully autonomous self-driving miles. Waymo is dramatically safer than human drivers (fewer injuries; not quite enough data yet to be certain about fatalities, but they are lower than average, we just can't yet claim statistical significance).
I haven't seen good stats on Tesla (they are less transparent than Waymo), but it would shock me if they weren't also at least slightly safer than the average human driver. Human drivers are really bad at driving.
But even if Tesla isn't safer, the self-driving industry as it currently exists, taken as a whole, still probably is, purely because it's mostly Waymo, and Waymo is dramatically safer.
It sounds like a problem with lack of volume, then? Since Macs are super common, you can find a lot of places that repair them. That doesn't say much about the HW comparison between the two, IMO.
I think this is the way. Human-machine co-design has worked great for me so far. Hell, even the test writing alone is great, because I can have more confidence in my code, and test writing was mostly drudgery. On the other hand, you _must_ have a good mental model of the thing in your head, or this will not work. And it's much easier to believe you have that model and not really have it if you don't engage with the codebase.