Before I go into this report – I’m just looking at the Level 2 one now, because the full ADS one deals with situations too different to be compared together – I should make clear where I personally stand when it comes to these systems: I think L2 driver assist systems are inherently flawed, which I’ve explained many times before.

My opinion on L2 is that it is always going to be compromised, not for any specific technical reason, but because these systems place humans into roles they’re just not good at. They introduce a paradox where the more capable a Level 2 system is, the worse people tend to be at remaining vigilant, because the more it does, and the better it does it, the less the human feels the need to monitor carefully and be ready to take over. So the fundamental safety net of the system, the human who’s supposed to be alert, is, in some ways, lost.

I’m not the only one who feels this way, by the way; many people far smarter than I am agree. Researcher Missy Cummings goes into this phenomenon in detail in her paper Evaluating the Reliability of Tesla Model 3 Driver Assist Functions from October of 2020:

With that in mind, let’s look at some of the data NHTSA found. It’s worth starting by looking at how the reported crashes were categorized:

So, this is a little confusing: crashes are categorized by the ADAS system the car was equipped with, and not necessarily what was reported as being active at the time of the crash? Also, the mistake made by “reporting entities” of referring to Level 2 semi-automated ADAS systems as “ADS,” or full self-driving systems (Automated Driving Systems), is a good reminder of how much public education still remains to be done regarding these systems.

The report also seems to contradict this a bit, as the criteria for which crashes get reported require that a Level 2 ADAS system was active at some point within 30 seconds of the crash (emphasis mine):

I think this means that the crashes only get reported if a Level 2 system was active within 30 seconds of the incident, but the categorization may just be organized based on the installed equipment. At least, that’s what I think it’s saying.

Let’s go right for the most headline-tempting chart, the total number of crashes by car brand using Level 2 Advanced Driver Assist Systems:
So, obviously Tesla looks pretty bad here, with 273 crashes, over three times as many as the next closest carmaker, Honda, whose count is in turn nine times more than the next one, Subaru, with 10 crashes. Does this mean that Tesla’s Autopilot and Full Self-Driving (FSD) systems are significantly more dangerous than everyone else’s Level 2 systems? Well, as you likely guessed, it’s not all that simple.

This chart of the number of crashes doesn’t tell us the total number of miles driven, how many cars are equipped with these systems, or what percentage of drivers actually use them, and without data like that, we really can’t assess whether Tesla’s system is actually three times as dangerous as, say, Honda’s, or whether Honda’s is in turn nine times more crash-prone than Subaru’s, and so on. (There’s a quick illustration of why raw counts alone aren’t enough a bit further down.)

Now, I do have some theories about what’s going on here, and why Tesla’s numbers are so high. Compared to all the other automakers on this list, Teslas are more likely to have some form of L2 driver assist system, since every Tesla comes with the basic Autopilot system, while similar systems are still more likely to be optional on other cars. So, I think there are simply more L2 ADAS-capable Teslas on the roads, percentage-wise, than other cars, and, equally importantly, I think Tesla drivers have two cultural traits that factor in here.

First, there’s much more focus in the Tesla community – hell, there’s more of a community, period, for better or worse, for Tesla owners than, say, for Hyundai or Honda owners – and that community pays a lot of attention to these sorts of driving automation systems. That makes them more likely to use them, to use them more often, and to use them in more situations and driving conditions where a Toyota or Volvo or whatever owner might just drive normally.

Second, this same culture encourages an almost cult-like faith in an ongoing “mission” to achieve self-driving and, from there, a still-mythical generalized Artificial Intelligence, and this sort of focus and faith can lead L2 users to attribute more driving capability to their cars than the cars actually have, which in turn leads to a lack of proper oversight and attention, which leads to crashes.

There’s also the possibility that Tesla’s data-logging and reporting capabilities, which are pretty extensive, may be a factor in how many crashes are reported. As the report states, carmakers that build in more advanced telematics and reporting systems will by their nature provide more data and communicate better, and cars with less advanced telematics and data-logging capability may simply not be recording or reporting all incidents:

For example, a Level 2 ADAS-equipped vehicle manufacturer with access to advanced data recording and telemetry may report a higher number of crashes than a manufacturer with limited access, simply due to the latter’s reliance on conventional crash reporting processes. In other words, it is feasible that some Level 2 ADAS-equipped vehicle crashes are not included in the Summary Incident Report Data because the reporting entity was not aware of them.
Furthermore, some crashes of Level 2 ADAS-equipped vehicles with limited telematic capabilities may not be included in the General Order if the consumer did not state that the automation system was engaged within 30 seconds of the crash or if there is no other available information indicating Level 2 ADAS engagement due to limited data available from the crashed vehicle. By contrast, some manufacturers have access to a much greater amount of crash data almost immediately after a crash because of their advanced data recording and telemetry.
There’s also the issue of usable domain: GM’s SuperCruise, for example, is designed to operate only on roads that have been mapped and approved for use by GM, where other systems, like Tesla’s Autopilot, are not limited this way. As a result, there are just fewer places for a GM L2 ADAS vehicle to operate, and the places where it can operate likely have a greater margin of safety, because they have been pre-vetted. So, does that mean that SuperCruise is safer than Autopilot? You could read it that way, but it’s still not really an apples-to-apples comparison.
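To make that exposure problem concrete, here’s a minimal back-of-the-envelope sketch in Python, using entirely made-up brand names and mileage figures (nothing here comes from the report), showing why raw crash counts can’t be compared across brands without knowing how many miles were driven with the systems engaged:

# Hypothetical numbers only: NHTSA's report gives crash counts,
# not exposure, so these mileage figures are invented for illustration.
hypothetical = {
    # brand: (reported crashes, millions of miles driven with L2 engaged)
    "Brand A": (273, 4000),   # many crashes, but a huge amount of L2 driving
    "Brand B": (10, 50),      # few crashes, but very little L2 driving
}

for brand, (crashes, million_miles) in hypothetical.items():
    rate = crashes / million_miles
    print(f"{brand}: {crashes} crashes, {rate:.3f} per million L2-engaged miles")

With these invented figures, Brand A’s 273 crashes work out to a lower rate (about 0.07 per million miles) than Brand B’s 10 crashes (0.2 per million miles), which is exactly why the bar chart alone can’t tell us whose system is more dangerous.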
Also interesting in the report is what the cars actually crashed into:

Passenger cars are, of course, at the top of the list, but the largest category is “unknown,” which sounds pretty mysterious. First responder vehicles are only listed twice, which is notable because NHTSA is already investigating Tesla L2-related crashes with first-responder vehicles in another study, and that one notes 14 crashes.
Also very telling is this chart that logs where the L2 ADAS-equipped vehicles were damaged:
The L2 ADAS-equipped vehicles involved in these crashes were most commonly damaged in the front, which suggests that it’s the L2 vehicles that are crashing into things, as opposed to being crashed into. Maybe it’s my own biases, but I feel like this data supports the basic issue I see with L2 systems: driver inattention.
If most of the wrecks are caused by running into things, causing damage to the front of the car, that would seem to suggest that drivers, perhaps lulled by their L2 systems doing the bulk of the driving for long, uneventful periods, are not paying attention to what their cars are driving toward. In cases where the ADAS system disengages without warning or, arguably worse, doesn’t realize that it should disengage because it is making poor driving decisions, and the driver’s attention and focus have been eroded by an otherwise effective system doing most of the work, wrecks caused by the car running into other cars, immobile barriers, or other obstacles seem a likely result.
This report is interesting, but also very incomplete, and I think it’s premature to use it to make any really sweeping generalizations about what brands of cars have more or less dangerous Level 2 driver assist systems. I think the one thing we can take from this report is that these systems are by no means flawless yet, and are no substitute for actually paying attention to the road, no matter what the hype suggests.
It’s wise for NHTSA to keep formal tabs on all of this, so I think this is a good start. I’ll be curious to see future reports that may be able to provide more data for a clearer picture of what is actually going on.
How, you ask?
Well, as an example, yesterday it dodged left in an attempt to spear me into a temporary barrier on a lane under construction.
Exhilarating! I’m wide awake now, thank you!
The level of bullshit around these cars is amazing. I’m not a fanboy; my wife knew basically nothing about Tesla, as she ignores my ramblings, rightfully so.
She’s never had such a good experience with a car purchase, or a car, like the Tesla.
I completely agree. I will never buy another car from a dealership, or an ICE car, again. I have a 2013 Mustang GT/CS, 2013 BRZ, 2017 Fiesta ST, 2009 Yaris (lol), 2010 Lexus RX350, and a 2006 Ram Mega Cab.
The basic Autopilot is amazing compared to all the other ADAS systems I’ve driven (never Chevys), and even the phantom braking is “awesome” in that it is CAUTIOUS and overly safe.
These graphs and observations are obviously useless since they don’t include miles driven and percentage of each manufacturer, at the least. But from my experience so far, only a truly stupid person would allow adas systems to get them in trouble. Customer education as an excuse is bullshit.
Every single human that buys any product risks killing themselves with it, if they don’t understand how that product works. I’m sorry, but this is on the purchaser alone. Assuming the manufacturer includes proper documentation of course.
Adas is only “inherently flawed” if you expect the worst from people. Personally, I’m with Darwin on this subject.
If someone gets themselves killed doing something stupid, I’m with you, I don’t pity them. But what about the family they t-boned? People that don’t have a Tesla have died because the Tesla driver used “AutoPilot” unsafely.
And the reason they used it that way is because they were sold a lie by Tesla. Now, on principle, I believe people should be smart enough to know companies lie if it makes them a buck. But in practice, that’s not how the world works. Tesla is scamming people into being beta testers of unproven technology, and everyone on the road is being dragged into it unwillingly. Tesla has blood on their hands, because they approach this technology in an uncaring and unethical manner that no other OEM does. How they haven’t been sued to death in such a litigious country I don’t know.
Communications were less polite than expected, very curt, to be clear, but nobody tried to sell me VIN etching!!!
Less than 15 minutes for the actual “transaction” at the store.
It has been wonderful so far, without any inkling of trying to go all SkyNet on my ass…
The system is great at helping you drive the car, but the car is never driving itself. You are driving the car, the computer is just taking away some of the effort. I don’t have radar inside my meaty head, so knowing that the car is double checking my environment with one is rather reassuring. This is more than enough to make rush hour or boring freeway miles a lot less tiresome.
In a way, it is a bit like riding a horse. You can generally trust the horse with safely taking you in the direction where you want it to go, but no matter how smart the horse, this doesn’t mean that you can go to sleep.
I don’t believe there would be a precipitous drop in demand should they change the name to “copilot”, and people would approach the system with a much healthier attitude. But unfortunately it seems that this ship has long sailed…
While the bar graph in the article can easily be used for confirmation bias, it is statistically worthless without crashes per mile, road types, more crash details, or even definitive data that the system was in operation immediately before the crash. The only really useful thing about this report is that it points out that the companies and government agencies are not collecting enough of the right type of information.
It also hits the brakes randomly and accuses me of needing coffee at moments when I’m actually hyper-aware of what’s happening. (Right after the lane centering nearly sideswipes a truck.)
Is that what they are comparing this to?