Tesla’s autonomous vehicles are crashing at a rate much higher than human drivers (electrek.co)
488 points by breve 3 days ago | 262 comments




The comparison isn't really like-for-like. NHTSA SGO AV reports can include very minor, low-speed contact events that would often never show up as police-reported crashes for human drivers, meaning the Tesla crash count may be drawing from a broader category than the human baseline it's being compared to.

There's also a denominator problem. The mileage figure appears to be cumulative miles "as of November," while the crashes are drawn from a specific July-November window in Austin. It's not clear that those miles line up with the same geography and time period.

The sample size is tiny (nine crashes), uncertainty is huge, and the analysis doesn't distinguish between at-fault and not-at-fault incidents, or between preventable and non-preventable ones.

Also, the comparison to Waymo is stated without harmonizing crash definitions and reporting practices.


All of your arguments are expounded upon in the article itself, and its conclusions still hold, based on the publicly available data.

The 3x figure in the title is based on a comparison of the Tesla reports with estimated average human driver miles without an incident, not based on police report data. The comparison with police-report data would lead to a 9x figure instead, which the article presents but quickly dismisses.
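
For anyone who wants to sanity-check those two ratios, here's a rough back-of-the-envelope sketch in Python using the approximate figures from the article (~500,000 robotaxi miles, 9 reported crashes, and the two human baselines). All inputs are estimates, so treat the outputs as ballpark:

    # Rough sanity check of the 9x and 3x figures; all inputs are estimates.
    tesla_miles = 500_000            # reported robotaxi fleet miles in Austin
    tesla_crashes = 9                # NHTSA-reported incidents in the window
    human_police_reported = 500_000  # est. miles per police-reported human crash
    human_all_incidents = 200_000    # est. miles per crash incl. unreported ones

    miles_per_crash = tesla_miles / tesla_crashes   # ~55,556 miles per crash
    print(human_police_reported / miles_per_crash)  # ~9.0 (police-report basis)
    print(human_all_incidents / miles_per_crash)    # ~3.6 (all-incident basis)

(The all-incident basis comes out closer to 3.6x than 3x here; presumably the article rounds or uses slightly different mileage.)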

The denominator problem is made up. Tesla Robotaxi has only been launched in one location, Austin, and only since July (well, 28th June, so maybe there is a few days' discrepancy?). So the crash data and the miles data can only refer to this same period. Furthermore, if the miles driven are actually based on some additional length of time, then the picture gets even worse for Tesla, as the true denominator for those 9 incidents gets even smaller.

The analysis indeed doesn't distinguish between the types of accidents, but this is irrelevant. The human driver estimates for miles driven without incident also don't distinguish between the types of incidents, so the comparison is still very fair (unless you believe people intentionally tried to get the Tesla cars to crash, which makes little sense).

The comparison to Waymo is also done based on incidents reported by both companies under the same reporting requirements, to the same federal agency. The crash definitions and reporting practices are already harmonized, at least to a good extent, through this.

Overall there is no way to look at this data and draw a conclusion that is significantly different from the article: Tesla is bad at autonomous driving, and has a long way to go until it can be considered safe on public roads. We should also remember that these robotaxis are not even fully autonomous! Each car has a human safety monitor who is ready to step in and take control of the vehicle at any time to avoid incidents - so the real incident rate, if the safety monitor weren't there, would certainly be even worse than this.

I'd also mention that 5 months of data is not that small a sample size, despite you trying to make it sound so (only 9 crashes).


Statistically 9 crashes is enough to draw reasonable inferences from. If they had the expected human rate of 3 over the period in question, the chance that they would actually get into 9 accidents is about 0.4%. And mind you, that’s with a safety driver. It would probably be much worse without one.
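
A minimal sketch of that calculation, modeling the crash count as Poisson; the 0.4% is the one-sided tail probability of seeing 9 or more events when only 3 are expected:

    from scipy.stats import poisson

    # If the expected number of crashes at the human rate were 3 over this
    # period, how likely is observing 9 or more by chance alone?
    expected_crashes = 3.0
    observed_crashes = 9
    p = poisson.sf(observed_crashes - 1, expected_crashes)  # P(X >= 9)
    print(f"{p:.4f}")  # ~0.0038, i.e. about 0.4%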

I agree with most of your points and your conclusion, but to be fair OP was asserting that human drivers under-report incidents, which I believe. Super minor bumps where the drivers get out, determine there's barely a scratch, and go on. Or solo low-speed collisions with garage walls or trees.

I don’t think it invalidates the conclusion, but it seems like one fair point in an otherwise off-target defense.


Sure, but the 3x comparison is not based on reported incidents, it's based on estimates of incidents that occur. I think it's fair to assume such estimates are based on data about repairs and other such market stats, that don't necessarily depend on reporting. We also have no reason a priori to believe the Tesla reports include every single incident either, especially given their history from FSD incident disclosures.

"estimates" (with air quotes)

> The 3x figure in the title is based on a comparison of the Tesla reports with estimated average human driver miles without an incident, not based on police report data. The comparison with police-report data would lead to a 9x figure instead, which the article presents but quickly dismisses.

I think OP's point still stands here. Who are people reporting minor incidents to, other than the police, that would be publicly available? This data had to come from somewhere, and police reports are the only source that makes sense to me.

If I bump my car into a post, I'm not telling any government office about it.


I don't know, since they unfortunately don't cite a source for that number, but I can imagine some sources of data - insurers, vehicle repair and paint shops. Since average miles driven without incident seems plausible to be an important factor for insurance companies to know (even minor incidents will typically incur some repair costs), it seems likely that people have studied this and care about the accuracy of the numbers.

Of course, I fully admit that for all I know it's possible the article entirely made up these numbers, I haven't tried to look for an alternative source or anything.


The article lists the crashes right at the top. One of 9 involved hitting a fixed object. The rest involved collisions with people, cars, animals, or injuries.

So, let's exclude hitting fixed objects as you suggest (though the incident we'd be excluding might have been anything from a totaled car and huge fire to zero damage), and also assume that humans fail to report injury / serious property damage accidents more often than not (as the article assumes).

That gets the crash rate down from an unbiased 9x to a lowball 2.66x higher than human drivers. That's with human monitors supervising the cars.

2.66x is still so poor they should be pulled off the streets IMO.
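
For what it's worth, here is one reconstruction of the arithmetic behind that 2.66x; the one-in-three reporting fraction is my inference, since the exact assumption isn't spelled out above:

    # Hypothetical reconstruction of the 2.66x figure; the 1-in-3 reporting
    # fraction is inferred, not stated upthread.
    tesla_miles = 500_000
    tesla_crashes = 9 - 1                 # exclude the fixed-object incident
    miles_per_crash = tesla_miles / tesla_crashes   # 62,500 miles per crash

    police_reported_baseline = 500_000    # est. miles per police-reported crash
    reporting_fraction = 1 / 3            # assume humans report ~1 in 3 crashes
    human_true_baseline = police_reported_baseline * reporting_fraction  # ~166,667

    print(human_true_baseline / miles_per_crash)  # ~2.67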


> So, let's exclude hitting fixed objects as you suggest (though the incident we'd be excluding might have been anything from a totaled car and huge fire to zero damage)

I don't know what data is available, but what I really care about more than anything is incidents where a human could be killed or harmed, followed by animals, then other property, and finally the car itself. So I'm not arguing to exclude hitting fixed objects, I'm arguing that severity of incident is much more important than total incidents.

Even when comparing it to human drivers, if Tesla autopilot gets into 200 fender benders and 0 fatal crashes, I'd prefer that over a human driver getting into 190 fender benders and 10 fatal crashes. Directionally though, I suspect the numbers would go the other way: more major incidents from automated cars, because when they succeed they handle situations perfectly, and when they fail they just don't see the stopped car in front of them and hit it at full speed.

> That gets the crash rate down from an unbiased 9x to a lowball 2.66x higher than human drivers. That's with human monitors supervising the cars.

> 2.66x is still so poor they should be pulled off the streets IMO.

I'm really not here to argue they are safe or anything like that. It just seems clear to me that every assumption in this article is made in the direction that makes Tesla look worse.


I'm using the data listed immediately after the introductory paragraph of the article.

FTA:

>> However, that figure doesn’t include non-police-reported incidents. When adding those, or rather an estimate of those, humans are closer to 200,000 miles between crashes, which is still a lot better than Tesla’s robotaxi in Austin.


Yeah, I've driven ~200k miles in my life and had quite a few incidents but most not recorded anywhere.

Probably 200K on my part, maybe a bit more. Some minor damage but no insurance claims or police reports. <touch wood> But some dings of various degrees.

Insurers?

I can't be certain about auto insurers, but healthcare insurers just straight up sell the insurance claims data. I would be surprised if auto insurers haven't found that same "innovation."


That's a fair point, but I'll note that the one time I hit an inanimate object with my car I wasn't about to needlessly involve anyone. Fixed the damage to the vehicle myself and got on with life.

So I think it's reasonable to wonder about the accuracy of estimates for humans. We (ie society) could really use a rigorous dataset for this.


Tesla could just share their datasets with researchers and NHTSA and the researchers can do all the variable controls necessary to make it apples to apples.

Tesla doesn't because presumably the data is bad.


Was going to say the same thing. I've had three of these sorts of "crashes": once was my fault, but no damage, so I apologized and we shook hands and drove away; once was their fault, but same result; third was unambiguously his fault, but I laughed and told him it was his lucky day that the scratch down the side of my car was only maybe the third worst on the beater I was driving at the time. That guy's bumper was all but ripped off, but (though we exchanged details) I never heard anything more about it, so I'm damn sure he never went to insurance. If you count only the (my fault) first of those I'm well ahead of the once every 200k miles average someone proposed up-thread - though if you count the time I ripped off the underlip of my bumper parking in front of a too-high curb, I'm probably back at par.

To add to this, more data from more regions means the estimate of average human miles without an incident is more accurate, simply because it is estimated from a larger sample, so more likely to be representative.

Tesla Robotaxi service is also available in the San Francisco Bay Area, with a service area greater than Waymo's. It has been since at least September 2025, and probably earlier.

TFA does a comparison with an estimated average rate of human incidents that are not police-reported, including low-speed contact events: one incident every 200,000 miles. I think that's high - if you're including backing into static objects in car parks and the like, you can look at workshop data and extrapolate that a lower figure might be closer to the mark.

TFA also does a comparison with other self-driving car companies, which you acknowledge, but dismiss: however, we can't harmonize crash definitions and reporting practices as you would like, because Tesla is obfuscating their data.

TFA's main point is that we can't really know what this data means because Tesla keep their data secret, but others like Waymo disclose everything they can, and are more transparent about what happened and why.

TFA is actually saying Tesla should open up their data to allow for better analysis and comparison, because at the moment their current reporting practices make them look crazy bad.


> TFA does a comparison with an estimated average rate of human incidents that are not police-reported, including low-speed contact events: one incident every 200,000 miles.

Where does it say that? I see "However, that figure doesn’t include non-police-reported incidents. When adding those, or rather an estimate of those, humans are closer to 200,000 miles between crashes, which is still a lot better than Tesla’s robotaxi in Austin."

All but one of the Tesla crashes obviously involved significant property damage or injuries (the remaining one is ambiguous).

So, based on the text of the article, they're assuming only 2/5ths of property damage / injury accidents are reported to the police. That's lower than I would have guessed (don't people use their car insurance, which requires the police report?), but presumably backed by data.


This might be UK-specific, but:

Car insurance often requires the payment of an excess, and a loss of no claims bonuses. I've had two prangs, only one was reported to my insurance as the damage caused by the lorry that smashed into me was significant. That was not reported to the police, and an insurance claim does not require a police report.


> TFA's main point is that we can't really know what this data means because Tesla keep their data secret

If that's so, then the article title is very poor.


I think the article's title is pretty fair because the thesis is that Tesla is keeping their data secret because it makes them look bad, which seems consistent with what we know.

Tesla could share real/complete data at any time. The fact that they don't is likely an indicator that the data does not look good.

You can do this with every topic. XYZ does not share this, so IT MUST BE BAD.

Yes, that's very often the case with things that would very likely be shared if it looked good.

There are things that don't get shared out of principle: for example, anonymous votes, behind-the-scenes negotiations without commitment, or security-critical data.

But given that Musk has tended to parade around vague promises for a very long time, it seems sharing data that looks very good would certainly be something they would do.


And it usually is.

[flagged]


happymellon is not trying to sell you a̵ ̵b̵r̵i̵d̵g̵e̵ a self driving car though.

If a company wants to sell you something, but wants to block access to information, the default position for everyone should be "it's probably because it's bad".

If I have an investment fund and I refuse to tell you about the current performance, I hope you would be sceptical.

If I try to sell you medicine and redact the information about whether it does what I claim, and block you from seeing how many people were poisoned from taking it, I hope that everyone would refuse to take it.

The insanity I'm seeing here from Tesla defenders is amazing. I can only assume they've fully bought in to the vision and tied assets to it and refuse to acknowledge that they might lose everything.


It's a public company making money off of some claims. Not being transparent about the data supporting those claims is already a huge red flag and failure on their part regardless of what the data says.

Are you familiar with the term "common sense"?

They say it ain't so common


I've actually started ignoring all these reports. There is so much bad faith going on in self-driving tech on all sides, it is nearly impossible to come up with clean and controlled data, much less objective opinions. At this point the only thing I'd be willing to base an opinion on is if insurers ask for higher (or lower) rates for self-driving. Because then I can be sure they have the data and did the math right to maximise their profits.

The biggest indicator for me that this headline isn't accurate is that Lemonade insurance just reduced the rate for Tesla FSD by 50%. They probably have accurate data and decided that Tesla's are significantly safer than human drivers.

But that's not what happened...

Lemonade announced an entirely new product for FSD driving, which it says should cut rates for FSD vehicles. And importantly, FSD driving is no longer covered by the regular policy, so FSD drivers now need two separate insurance policies if using Lemonade: the regular insurance policy, and another one just for when using FSD.

The actual combined cost of the two insurance policies is more than the previous policy was because they didn't reduce rates for the normal insurance policies.


Yes but they charge per mile, so the net change in price should be a substantial reduction.

Unless Tesla drivers were already paying obscene rates for car insurance, the per-mile charge will still be as much as most people with ICE vehicles pay for their insurance coverage... and for Tesla owners this will still be an additional insurance cost on top of their original car insurance cost.

Also, important to note: Lemonade isn't actually available in the states with the largest population of Tesla owners, like California. So...basically this is a big nothingburger.


There will be stronger evidence if more auto insurance companies follow suit.

Thank you. Everyone is hiding disengagements and settling to hide accidents. This will not be fixed or standardized without changes to the laws, which for self-driving have been largely written by the handful of companies in the space. Total, complete regulatory capture.

I think it's fair to put the burden of proof here on Tesla. They should convince people that their Robotaxis are safe. If they redact the details about all incidents so that you cannot figure out who's at fault, that's on Tesla alone.

While I think Tesla should be transparent, this article doesn't really make sure it is comparing apples to apples either.

I think it's weird to characterize it as legitimate and then say "Go Tesla, convince me otherwise", as if the same audience would ever be reached by Tesla, or as if people would care to do their due diligence.


It’s not weird. They have a history of over promising to the point that one could say they just straight up lie on a regular basis. The bar is higher for them because they have abused the public’s trust and it has to be earned again.

The results have to speak for Tesla very loudly and very clearly. And so far they don’t.


But this is more your feelings than actually factual.

I mean sure you can say that the timelines did slip a lot but that doesn't really have anything to do with the rest that is insinuated here.

I would argue a timeline slipping doesn’t mean you go about killing people and lie about it next. I would even go so far as to say that the timelines did slip to exactly avoid that.


That's not "feelings", that's reputational data.

Tesla continues to overpromise, about safety, about timelines that slip due to safety.

We should be a bit more hard-nosed and data-based when dealing with these things, rather than dismissing the core question due to "feelings" and due to Tesla not releasing the sort of data that allows fair analysis.


> But this is more your feelings than actually factual

Seems to be the other way, though I find that kind of rude to assert as opposed to asking me what informs my opinion. Other comments have answered that very well


> a timeline slipping

You're generous with your words to the point they sound like apologism. Musk has been promising fully autonomous driving "within 1-3 years" since 2013. And he's been charging customers money for that promise for just as long. Timelines have kept slipping for more than half of the company's existence now; that's not a slip-up anymore.

Tesla has never been transparent with the data on which they base their claims of safety and performance of the system. They tout some nice looking numbers but when anyone like the NHTSA requests the real data they refuse to provide it.

So when NHTSA shows you numbers, they're lying? If I tell you I have evidence Tesla is lying you'll tell me to show it or STFU. When Tesla withholds its evidence after so many people died, you go all soft and claim everyone else is lying. That's very one-sided behavior, more about feelings than facts.

> But this is more your feelings than actually factual.

The article is about "NHTSA crash data, combined with Tesla’s new disclosure of robotaxi mileage". Sounds factual enough. If Tesla is sitting on a trove of data that proves otherwise but refuse to publish it that's on them. If anyone is about the feels and not the facts here, it's you.


But these are not facts; they're your assumptions on the matter.

Even the already included escape route.

> If I tell you I have evidence Tesla is lying you'll tell me to show it or STFU.

I mean I wouldn’t choose those words but yes. Yes, you have to prove it, because you state it as a fact.

Innocent until proven guilty. There is a reason to this phrase.


Another commenter linked plenty of proof


> I mean sure you can say that the timelines did slip a lot but that doesn't really have anything to do with the rest that is insinuated here.

No. Not at all. This isn't "timelines slip". This is Musk saying, and I quote, "Self driving is a solved problem. We are just tuning the details." in 2016, and in 2021, "Right now our highest priority is working on solving the problem."

Somewhere along the line, it apparently got "unsolved".

"Timelines slipped" is far too generous for someone who, whenever Tesla is facing bad press, will imply that a new FSD release coming in 6 months, 3 months, a month, will solve all the issues plaguing it so far. Repeatedly. Those aren't real timelines.

Hell, even Tesla has had to add comments to investor and securities documents saying that "Musk's statements are aspirational and do not always reflect engineering realities."


I don’t see how this is connected to the point at hand here.

I think taking time to make sure the system works is the right call. Delaying it is the right call. Not shipping something just because you gave a different impression earlier is the right call.

I think it shows integrity to delay a product even when your investors might get angry. Is it a winning strategy on Wall Street? No, probably not.

But what is the argument here? "Musk bad" because he delays a product that's not ready?

I think doing the due diligence is required here. Musk's argument that "it's solved" could even be supported by "look at Waymo": they are doing it, aren't they?

Tesla is aiming for more than that, though. And as it sometimes goes in product development, you don't know what you don't know. Why would you focus on chains guarding parking spots that your cameras can't see, when your car can't even drive through the city?

This is such a big thing to solve, and 100% is impossible under some definitions.

Back to the article, I think delaying for safety is the right call, and that is also what the article says. It’s just that the article is in bad faith, as most of the arguments here are.

You would probably turn around and slam Musk for an obviously problematic system if that were the alternative; until then, all there is to say is that he delays.

And if it were obviously problematic, I think it would be much louder than just an article from a website that is known for having a biased view of things.


> I think taking time to make sure the system works is the right call. Delaying it is the right call.

You're absolutely right, but Musk's companies constantly fail to do that.

You should look up the past year or so of what’s happened at Tesla’s gigafactory in Germany. It’s pretty wild.

Also different company, same issue/owner: https://www.newschannel5.com/news/workers-walk-off-nashville...

Do you think Twitter/Musk took time to get Grok right before unleashing it on social media?


Tesla (Elon Musk really) has a long history of distorting the stats or outright lying about their self driving capabilities and safety. The fact that folks would be skeptical of any evidence Tesla provided in this case is a self-inflicted problem and well-deserved.

He did promise his electric trucks would be more cost-effective than trains (still nothing in 2026...). And the "world's fastest supercar". And, back in 2015, full self-driving by "next year". None of these are offered in 2026.

There have never been truthful statements from his companies, only hype & fluff for monetary gains.


There used to be [EDIT: still is] a website[1] that listed all of Musk's promises and predictions about his businesses and showed you how long it's been since he said the promise would materialize. It's full of mostly old statements, probably because it's impossible to keep up with the amount of content being generated monthly.

1: https://elonmusk.today


The burden of proof is on the article writer.

This has nothing to do with burden of proof, it has to do with journalistic accuracy, and this is obviously a hit piece. HN prides itself on being skeptical and then eats up "skeptic slop."

>I think it's fair to put the burden of proof here on Tesla.

That just sounds like a cope. The OP's claim is that the article rests on shaky evidence, and you haven't really refuted that. Instead, you just retreated from the bailey of "Tesla's Robotaxi data confirms crash rate 3x worse ..." to the motte of "the burden of proof here on Tesla".

https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy

More broadly, I think the internet is going to be a better place if comments/articles with bad reasoning are rebuked from both sides, rather than getting a pass from one side because they're directionally correct, e.g. "the evidence of WMDs in Iraq is flimsy, but that doesn't matter because Hussein was still a bad dictator".


The point is this: the article writer did what research they could do given the available public data. It's true that their title would be much more accurate if it said something like "Tesla's Robotaxi data suggests crash rate may be up to 3x worse than human drivers". It's then 100% up to Tesla to come up with cleaner data to help dispel this.

But so far, if all the data we have points in this direction, even if the certainty is low, it's fair to point this out.


It's not a Motte and Bailey fallacy at all; it's a statement of a belief about what should be expected if something is to be allowed as a matter of public health and safety implications.

They're saying that Tesla should be held to a very high standard of transparency if they are to be trusted. I can't speak to OP, but I'd argue this should apply to any company with aspirations toward autonomous driving vehicles.

The title might be misleading if you don't read the article, but the article itself at some level is about how Tesla is not being as transparent as other companies. The "shaky evidence" is due to Tesla's own lack of transparency, which is the point of stating that the burden of proof should be on Tesla. The article is about how, even with lack of transparency, the data doesn't look good, raising the question of what else they might not be disclosing.

From the article: "Perhaps more troubling than the crash rate is Tesla’s complete lack of transparency about what happened... If Tesla wants to be taken seriously as a robotaxi operator, it needs to do two things: dramatically improve its safety record, and start being honest about what’s happening..."

I'd argue the central thesis of the article isn't one of statistical estimation; it's a statement about evidentiary burden.

You don't have to agree with the position that Tesla should be held a high transparency standard. But the article is taking the position that you should, and that if you do agree with that position, that you might say that even by Tesla's unacceptable standards they are failing. They're essentially (if implicitly) challenging Tesla to provide more data to refute the conclusions, saying "prove us wrong", knowing that if they do, then at least Tesla would be improving transparency.


I don't think it's a motte-and-bailey fallacy, because the motte is not well established. Tesla clearly does not believe that the burden of proof is on them, and by extension, neither do regulators and legislators.

So, there are two theories:

a) Teslas are unsafe. The sparse data they're legally obligated to provide shows this clearly.

b) Elon Musk is sitting on a treasure trove of safety data showing that FSD finally works safely + with superhuman crash avoidance, but is deciding not to share it.

You're honestly going with (b)? We're talking about the braggart that purchased Twitter so he could post there with impunity. To put it politely, it would be out of character for him to underpromise + overdeliver.


You're not replying to the author of the article.

To add to that, most of the events described in the article (hitting a cyclist, being hit while reversing, hitting a stationary object in a parking lot, etc.) happen during low-speed city driving, but most miles for an average driver are driven on highways. So taxis, which are driven mostly in the city, would definitely get more crashes per mile than an average driver. So the 500,000/200,000-mile numbers can't really be applied to a taxi; I expect the numbers for taxis would be much lower.

Also, even for a non-taxi, 200,000 miles between minor hits on average seems incredibly high - that would mean that an average car in the US does not hit anything in the car's lifetime. I'm not sure where that number is coming from, if it includes non-reportable events.


electrek.co has a beef with Tesla, at least in recent years.

Absolutely.

Let's examine the Electrek editor's feed, to understand how "impartial" he is about Tesla:

https://x.com/FredLambert


Yup.

Btw, do you happen to know why electrek.co changed their tune in such a way? I was commenting on a similarly negative story by the same site, and said that they are always anti-Tesla. But then somebody pointed out that this wasn't always the case, that they were actually supportive, but then suddenly turned.


Fred Lambert was an early Tesla evangelist - he constantly wrote stories praising Tesla and Elon for years. He had some interactions with Elon on Twitter, got invited to Tesla events, referred enough people to earn free Tesla cars, etc.

People roasted him for being a Tesla/Elon fanboy: https://www.thedrive.com/tech/21838/the-truth-behind-electre...

Fred gradually started asking tougher questions when Tesla's schedule slipped on projects and Elon ended up feuding with Fred (and I think blocking him) on Twitter: https://www.reddit.com/r/teslamotors/comments/bgmwk8/twitter...

Since then Fred has had a more realistic (IMHO) outlook on Tesla, although some might call it a "beef" since he's no longer an Elon sycophant.


I think you're being a bit unfair to Lambert.

If we assume the best (per HN guidelines): Up to about 2018 Tesla was the market-leading EV company, and the whole thesis of Electrek is that EVs are the future. So, of course they covered Tesla frequently and in a generally positive light.

Since then, the facts have changed. Elon's become increasingly erratic, and has been making increasingly unhinged claims about Tesla's current and future products. At the same time, Tesla's offerings are far behind domestic standards, which are even further behind international competition. Also, many people have died due to obvious Tesla design flaws (like the door handles, and false advertising around FSD).

Journalistic integrity explains the difference in coverage over the years. Coverage from any fact-based outlet would have a similar shift in sentiment.


Yes, it's basically this: he drank the Tesla lemonade until he realized it was urine.

> The comparison isn't really like-for-like. NHTSA SGO AV reports can include very minor, low-speed contact events that would often never show up as police-reported crashes for human drivers, meaning the Tesla crash count may be drawing from a broader category than the human baseline it's being compared to.

Tesla's own stats don't count any accident without airbag deployment, regardless of severity (and modern airbag systems have a number of factors that play into deployment), and, for some unknown reason, they don't count fatalities in their crash statistics.


Oh. Well then. May we see the details of these minor contact events so that people don’t have to come here and lie for them anymore?

How corrupt and unaccountable to the public is the city of Austin Texas, even, for allowing them to turn in incident reports like this?


"insurance-reported" or "damage/repair-needed" would be a better criteria for problematic events than "police-reported".

> The comparison isn't really like-for-like.

This is a statement of fact but based on this assumption:

> low-speed contact events that would often never show up as police-reported crashes for human drivers

Assumptions work just as well both ways. Musk and Tesla have been consistently opaque when it comes to the real numbers they base their advertising on. Given this past history of total lack of transparency and outright lies it's safe to assume that any data provided by Tesla that can't be independently verified by multiple sources is heavily skewed in Tesla's favor. Whatever safety numbers Tesla puts out you can bet your hat they're worse in reality.


oh hacker news, never change. "crashes 3x as much as human driven cars" but is that REALLY bad? who knows? pure gold

Humans driving cars crash more than humans walking on side walks. But is humans driving cars really bad?


[flagged]


He's probably smart then

I would call strong opposition to Musk a democratic responsibility, not a derangement. We are talking about a guy with a fondness for the far right and throwing Nazi salutes, and whose destruction of USAID had, by November 2025, resulted in “hundreds of thousands of deaths”. [1] Those, of course, are just a couple of examples.

If strong opposition to that kind of evil makes me deranged, count me in.

1: https://hsph.harvard.edu/news/usaid-shutdown-has-led-to-hund...


Sure, but that is not a defence against the claim that his journalistic coverage is biased.

I strongly oppose the constant slander and the litany of lies partisan commenters post about Musk.

You don't get to throw out "fondness for throwing Nazi salutes" slander, based on a hoax immediately debunked at the time, and then act like you're doing democracy a favor. Try to stick to the facts.

Regarding the journalist discussed here, I had a look at his X account, and he posted no less than 20 posts attacking Tesla and Musk in just the last day. It's virtually all he posts, and it indeed appears deranged. The flagged comment was fair enough.


> based on a hoax immediately debunked at the time

We've all seen the video, there is no hoax and no doubt that he was doing a nazi salute, with some level of "humor" defense.


Seriously, what is up with all the Elon apologists? Dude's a nazi. Weaseled his way into everything digitally related to the American government and should be treated like a foreign intelligence agent. He has oversold and under-delivered everything he has bought from other people to claim for himself. Weird he's got so many dickriders on HN.

[flagged]


I think most of us don't care about the opinion of any Israeli politician, as they are doing Nazi things (genocide).

> You don't get to throw out "fondness for throwing Nazi salutes" slander, based on a hoax immediately debunked at the time, and then act like you're doing democracy a favor.

Just to clarify. This is the video context: https://www.youtube.com/watch?v=-VfYjPzj1Xw

Are you claiming that this is not an accurate depiction of what happened on stage? (That is the video is in some form fake. A deep fake, or special effects, or an Elon impersonator or whatever.)

Or are you claiming that the gesture seen is not a nazi salute?


Yes, "nazi salute" is obviously not an accurate description of the gesture Musk performed before saying "my heart goes out to you".

Here's a thought experiment for you.

If I stuck my middle finger up at you while saying "my heart goes out to you", what would you think?


Probably not that you support the Nazi regime, as that would be a ridiculous thing to think.

Particularly so if a year before you visited Auschwitz and stated it was "tragic that humans could do this to other humans", and told us how you attended a Hebrew preschool and have a lot of Jewish friends.


I didn't ask you what you wouldn't think. I asked you what you would think.

Did you even read the article you sent? It’s all based on estimates.

It is consensus seeking derangement at best


An article about a counterfactual (how many people would have survived had aid continued as before) can only be based on estimates, not real world data, yes, by its very nature. You can say the estimates are wrong, or that the source isn't trustworthy, maybe. But providing estimates for counterfactuals is not in any way illegitimate.

[flagged]


>The "salute" in particular is simply a politically-expedient freeze-frame from a Musk speech, where he said "my heart goes out to you all" and happened to raise his arm.

Yeah, no. I thought so as well initially, but then I saw the video. The guy throws his arm straight out multiple times.


I noticed the same thing. Not sure why you're being downvoted. The whole publication has turned sour recently.

After every glazing there is a sourness.

Musk glazing from Electrek was very significant 2002-2024 at least


[flagged]


>You sure like defending him a lot, [...]

That's... entirely expected of someone that has memories and a personality? It's like showing up to /r/starwars and telling some random person "you sure like star wars a lot"


This is from Electrek. You cannot believe anything you see from them regarding Tesla. Completely corrupt site.

I find it interesting that Lemonade insurance just began offering a 50% discount for Teslas with FSD.

Insurance companies are known for analytics and don't survive if they use bad data. This points to your comment being correct.


Who knows the backend deals there, everything up to and including Tesla subsidizing the premiums.

That's a completely different scenario than fully autonomous driving.

Good analysis. Just over a month ago, Electrek was posted here claiming that Teslas with human supervisors were crashing 10x more than humans alone.

That was based on a sample size of 9 crashes. In the month following that, they've added one more crash while also increasing the miles driven per month.

The headline could just as easily be about the dramatic decline in their crash rate! Or perhaps the data is just too small to analyze like this, and it's the Electrek authors being their usual overly dramatic selves.


That is an overly optimistic way to phrase an apparent decrease in crashes, when Tesla is not being upfront about data that at best looks like it's worse than human crash rates.

Unless one was a Tesla insider, or had a huge interest in Tesla over other people on the road, such spin would not be a normal thing to propose saying.

Media outlets, even ones devoted to EVs, should not adopt the very biased framing you propose.


I don't understand your claim.

Previous article: Tesla with human supervisor at wheel: 10x worse than human alone.

Current article: Tesla with remote supervisor: 3-9x worse than human alone.

Given the small sample sizes, this shows a clear trend: Tesla's autopilot stuff (or perhaps vehicle design) is causing a ton of accidents, regardless of whether it's being operated locally by customers or remotely by professionals.

I'd like to see similar studies broken down by vehicle manufacturer.

The ADAS in one of our cars is great, but occasionally beeps when it shouldn't.

The ADAS in our other car cannot be disabled and produces false positives every 10-20 miles. Every week or so it forces the vehicle out of its lane (either left of the double-yellow center line, or into another car's lane).

If the data on crash rates for those two models were public, I guarantee the latter car would have been recalled by now.


I don't think statistics work that way. A study of all Teslas and all humans in Austin for 5 months is valid because Electrek ran a ridiculous “study”, and this headline could “just as easily” have presented the flawed Electrek story as a legit baseline?

The 10x would be 9x if the methodology were the same. 9x->3x is going from reported accidents to inferred true accident rate, as the article points out.

To be honest I think the true story here is:

> the fleet has traveled approximately 500,000 miles

Let's say they average 10mph, and say they operate 10 hours a day, that's 5,000 car-days of travel, or to put it another way about 30 cars over 6 months.
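
Spelled out (the 10 mph and 10 hours/day figures are my own assumptions, not reported numbers):

    # Rough fleet-size estimate from the assumptions above.
    fleet_miles = 500_000
    avg_speed_mph = 10    # assumed average in-city speed
    hours_per_day = 10    # assumed daily operating hours

    miles_per_car_day = avg_speed_mph * hours_per_day  # 100 miles per car-day
    car_days = fleet_miles / miles_per_car_day         # 5,000 car-days
    cars = car_days / (6 * 30)                         # ~28 cars over ~6 months
    print(car_days, round(cars))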

That's tiny! That's a robotaxi company that is literally smaller than a lot of taxi companies.

One crash in this context is going to just completely blow out their statistics. So it's kind of dumb to even talk about the statistics today. The real take away is that the Robotaxis don't really exist, they're in an experimental phase and we're not going to get real statistics until they're doing 1,000x that mileage, and that won't happen until they've built something that actually works and that may never happen.


The more I think about your comment on statistics, the more I change my mind.

At first, I think you’re right - these are (thankfully) rare events. And because of this, the accident rate is Poisson distributed. At this low of a rate, it’s really hard to know what the true average is, so we do really need more time/miles to know how good/bad the Teslas are performing. I also suspect they are getting safer over time, but again… more data required. But, we do have the statistical models to work with these rare events.

But then I think about your comment about it only being 30 cars operating over 6 months. Which makes sense, except for the fact that it's not like having a fleet of individual drivers. These robotaxis should all be running the same software, so it's statistically more like one person driving 500,000 miles. This is a lot of miles! I've been driving for over 30 years and I don't think I've driven that many miles. This should be enough data for a comparison.

If we are comparing the Tesla accident rate to people in a consistent manner (accident classification), it’s a valid comparison. So, I think the way this works out is: given an accident rate of 1/500000, we could expect a human to have 9 accidents over the same miles with a probability of ~ 1 x 10^-6. (Never do live math on the internet, but I think this is about right).
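
Checking that live math under the same Poisson assumption (one expected crash per 500,000 miles):

    from scipy.stats import poisson

    # Expected human crashes over 500,000 miles at 1 crash per 500,000 miles.
    expected_crashes = 500_000 / 500_000     # = 1.0
    p = poisson.sf(9 - 1, expected_crashes)  # P(X >= 9)
    print(p)  # ~1.1e-06, matching the ~1 x 10^-6 estimate above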

Hopefully they will get better.


500,000 / 30 years is ~16,667 mi/yr. While it's a bit above the US average, it's not incredibly so. Tons of normal commuters will have driven more than that many miles in 30 years.

That’s not quite the point. I’m a bit of an outlier, I don’t drive much daily, but make long trips fairly often. The point with focusing on 500,000 miles is that that should be enough of an observation period to be able to make some comparisons. The parent comment was making it seem like that was too low. Putting it into context of how much I’ve driven makes me think that 500,000 miles is enough to make a valid comparison.

But that's the thing: in many ways it is a pretty low number. It's less than the number of miles a single average US commuter will have driven in their working years. So in some ways it's like trying to draw lifetime crash statistics while only looking at a single person in your study.

It's also kind of telling that despite supposedly having this tech ready to go for years, they've only bothered rolling out a few cars, which are still supervised. If this tech were really ready for prime time, wouldn't they have driven more than 500,000 mi in six months? If they were really confident in the safety of their systems, wouldn't they have expanded this greatly?

I mean, FFS, they don't even trust their own cars to be unsupervised in the Las Vegas Loop. An enclosed, well-lit, single-lane, private access loop and they can't even automate that reliably enough.

Waymo is already doing over 250,000 weekly trips.[0] The trips average ~4mi each. With those numbers, Waymo is doing 1 million miles a week. Every week, Waymo is doing twice as many miles unsupervised than Tesla's robotaxi has done supervised in six months.

[0] https://waymo.com/sustainability/
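
The mileage comparison, spelled out (trip count from Waymo's page; the ~4-mile average is the estimate stated above, so treat it as approximate):

    waymo_weekly_trips = 250_000
    avg_trip_miles = 4               # approximate average trip length
    waymo_weekly_miles = waymo_weekly_trips * avg_trip_miles  # 1,000,000 mi/week

    tesla_six_month_miles = 500_000
    print(waymo_weekly_miles / tesla_six_month_miles)  # 2.0, i.e. 2x every week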


Wait, so your argument is there's only 9 crashes so we should wait until there's possibly 9,000 crashes to make an assessment? That's crazy dangerous.

At least 3 of them sound dangerous already, and it's on Tesla to convince us they're safe. It could be a statistical anomaly so far, but hovering at 9x the alternative doesn't provide confidence.


No, my argument is you shouldn't draw a statistical conclusion with this data. That's all. I'm kind of pushing in the direction you were pointing in the second part - it's not enough data to make statistical inferences. We should examine each incident, identify the root cause and come to a conclusion as to whether that means the system is not fit for purpose. I just don't think the statistics are useful.

> The real take away is that the Robotaxis don't really exist

More accurately, the real takeaway is that Tesla's robo-taxis don't really exist.


Because it is fraud trying to inflate Tesla stock price.

The real term is “marketing puffery.” It’s a fun, legally specific way to describe a company bullshitting to hype its product.

Puffery is like saying this is the best canned chili ever made. Selling a can of chili but never actually giving the chili just an empty can is fraud.

Puffery should really be limited to subjective things like flavor and not self-driving cars.

The Robotaxi service might be puffery, selling "full self driving" is just fraud.

A robotaxi rollout where each car needs a safety driver is fraud.

What's even more unbelievable is that a significant number of people are still falling for it

We've known for a long time now that their "robotaxi" fleet in Austin is about 30-50 vehicles. It started off much lower and has grown to about 50 today. There's actually a community project to track individual vehicles that has more exact figures.

Currently it's at 58 unique vehicles (based on license plates) with about 22 that haven't been seen in over a month

https://robotaxitracker.com/


But deep learning is also about statistics.

So if the crash statistics are insufficient, then we cannot trust the deep learning.


I suspect Tesla claims they do the deep learning on sensor data from their entire fleet of cars sold, not just the robotaxis.

No, they exist, but they are called Waymo

>One crash in this context is going to just completely blow out their statistics.

One crash in 500,000 miles would merely put them on par with a human driver.

One crash every 50,000 miles would be more like having my sister behind the wheel.

I’ll be sure to tell the next insurer that she’s not a bad driver - she’s just one person operating an itty bitty fleet consisting of one vehicle!

If the cybertaxi were a human driver accruing double points 7 months into its probationary license, it would never have made it to 9 accidents: its license would have been revoked and suspended after the first two or three accidents in her state, and then it would have been thrown in JAIL as a "scofflaw" if it continued driving.


> One crash in 500,000 miles would merely put them on par with a human driver.

> One crash every 50,000 miles would be more like having my sister behind the wheel.

I'm not sure if that leads to the conclusion that you want it to.


From the tone, it seems that the poster's sister is a particularly bad driver (or at least they believe her to be). While having an autonomous car that can drive as well as even a bad human driver is definitely a major accomplishment technologically, we all know that threshold was passed a long time ago. However, if Tesla's robotaxis (with human monitors on board, let's not forget - these are not fully autonomous cars like Waymo's!) are at best as good as some of the worse human drivers, then they have no business being allowed on public roads. Remember that human drivers can also lose their license if [caught] driving too poorly.

> Remember that human drivers can also lose their license if [caught] driving too poorly.

Thank you, yes, I could have said that better. But yeah, as a new human driver, if I'm too sloppy and get into too many incidents, the penalty is harsh. I'd say that "none of the autonomous companies are held to the same standard", but that's not 100% true: we do have cities and states refusing to play ball or issue permits here.

But that’s exactly right. my opinion was/is that there should be a probationary period for the first year of new autonomous technology — or major deviations from existing and proven technologies — too. And if it causes too many accidents or violations, then, it should be held to the same standard I am