The crux of the Department of Justice's probe seems to be the disparity between how Autopilot is marketed and promoted and what the system is actually capable of. This imbalance is at the heart of pretty much all Level 2 systems, as a study from the Insurance Institute for Highway Safety revealed earlier this month. People who use Level 2 semi-automated driving systems – which require an alert driver to be ready to take control back with no warning – seem to think their systems are much more capable than they are, which can lead to serious safety issues.

I think Tesla has definitely implied that Autopilot is more capable than it is: it's a Level 2 system, and very much not a true self-driving system. This starts with the confusing name Autopilot, something that to most people suggests self-driving, and the company has done more than just suggest that Autopilot is self-driving. In one prominent and longstanding example, they've come right out and said it. That example is a video from 2016 that is still on Tesla's website, in a post titled "Full Self-Driving Hardware on All Teslas." This video, set to the Rolling Stones' "Paint It Black," begins with this text:
I mean, this feels pretty damning to me: "THE PERSON IN THE DRIVER'S SEAT IS ONLY THERE FOR LEGAL REASONS. HE IS NOT DOING ANYTHING. THE CAR IS DRIVING ITSELF." How is that not suggesting that Autopilot is capable of driving the car on its own? As an extra bit of irony, last year the New York Times found that during the shooting of this video, the car hit a roadside barrier and required repairs. I bet you won't be shocked to find that part didn't make it into the final video.

Of course, nothing is simple. Tesla has definitely provided all of the necessary warnings and disclaimers for Autopilot on their website and in their owners' manuals, and those are quite clear as well:
It comes right out and says Autopilot "does not make driving autonomous," and tells drivers to "remain alert at all times and be prepared to take immediate action." There's no disputing that. Tesla definitely is covering their ass here, and I'm not certain how the DOJ plans to incorporate this into their investigation; Reuters' sources didn't have specific commentary about this, either. These statements and warnings, though, are in smaller print on websites and deep inside owner's manuals. This sort of honest, forthright assessment of Autopilot's limitations is not what's normally seen in public statements, which are things like this:
Active Autopilot (car is driving itself) cuts crashes in half again. Doesn't mean there are no crashes, but, on balance, Autopilot is unequivocally safer. — Elon Musk (@elonmusk) April 17, 2021

…in that one he actually says "car is driving itself" when referring to "Active Autopilot." And when Tesla got pushback for calling their more advanced, still-in-development (yet deployed on public roads) semi-automated driver assist system "Full Self-Driving (FSD)," Elon Musk responded like this:

Saying you have a Full Self-Driving system that is "feature complete" sure sounds like it's a complete system where the car can drive itself. Plus, going on to say "feature complete but requiring supervision" and "feature complete but not requiring supervision" is confusing at best, because if it's "feature complete," what's the differentiator between requiring or not requiring supervision? Testing, I suppose? Then there's the distinction between not requiring supervision, and not requiring supervision "and regulators agree," which again goes back to the implication that it's government regulations, more than anything else, keeping your car from driving itself, which just isn't the case.

While I can't speak to exactly what the DOJ is investigating, what I can see are two very different kinds of messaging happening about Autopilot: the grim realism of the warnings on websites and in documentation, which make it very clear that the car is not self-driving and the driver must be ready to take control at any moment, and the more marketing-type messaging that suggests Autopilot-assisted driving is so much safer than human driving, and that were it not for all those pesky, fussy, fun-killing government regulators, you could be blasting down the highway at 90 mph watching VR porn on a headset.

So the real problem seems to be a clash between actual capability and how Tesla is presenting Autopilot to the world, and while I can't speak to what the DOJ will decide, I can note that all Level 2 systems are somewhat inherently going to be plagued by this, and the more advanced an L2 system gets, the worse the problem becomes. The issue is that all L2 systems require non-stop attention from a driver, because a disengagement could happen for any number of reasons, without any real warning, and the driver is needed to take control immediately. This is literally part of the definition of what a Level 2 system is:
See where it says "You are driving whenever these driver support systems are engaged," and "You must constantly supervise these support features"? That's pretty clear about what's going on. The problem is that highly developed L2 systems can seem like they're driving themselves, performing the vast majority of the driving task, yet they're still just L2 systems, and could require takeover at any moment. That's how we get mainstream articles like this New York Times article from earlier this year, where a writer driving a car with GM's Super Cruise L2 system dangerously mischaracterizes the system as being much more capable than it really is, stating that what was required of him as a driver was …which is, of course, absolutely untrue. That's not all the car is asking. It's asking you to be ready to drive, at any moment. This is a huge problem, because the more an L2 system does, the less it seems like the driver needs to do anything, and so the less attentive they allow themselves to be.

Yes, the DOJ is going after Tesla now, again, I think with good reason, but it may be time to evaluate how every automaker approaches Level 2 semi-automation, and decide if it makes any sense at all.

This translates to "Level 3 systems must be Level 4 systems." Which, IMO, is completely and utterly reasonable. Level 3 systems should never be allowed to share the road with humans.

I am reasonably sure you don't want the system to disengage, ever. Sound alarms and bring the vehicle to a controlled stop if the driver doesn't respond. Note that some car companies have gone down the "disconnect" route, but that leads to cars driving off the road. https://www.theverge.com/2021/5/27/22457430/tesla-in-car-camera-driver-monitoring-system

I can't see a situation where fully-autonomous and meat-operated vehicles can coexist. The tech isn't there, no matter how much any of us want to be sleeping or masturbating on our morning commutes.

Frankly, all I'd need is for the car to take over for both the boring and the risky bits. Bumper-to-bumper traffic-jam driving seems the most reasonable target with the most bang for the buck. Let it keep the car in the lane, and disengage the system once traffic speeds back up past a certain threshold or if things get confused (i.e., having to change lanes), but there would be ample warning to the driver because of the low speeds.

Problem is, the lane centering is unreliable. If it can't sense the painted lines on the road it goes "bugf**k" and wanders off. The automatic braking works fine, except that it has a habit of waiting until I get worried, then, bam, it really slows down. The cameras work fine when there is something to see and the lenses are clean. All in all, certainly not self-driving, but with some early pretensions of where the tech is today. Personally, they can keep all of it. I liked my 2007 STS a lot more.

In my Tesla, which doesn't have the latest hardware, the lane tracking is amazing. Last year, it successfully navigated a 100-foot curved section of freeway that had new paving and no lane markings. I had my hands firmly on the wheel, waiting to take over, but frankly, it did better than I could have done. The "Full Self Driving" name, on the other hand, is dangerous bullshit. In this case Elon is definitely overreaching.

Torch definitely has a point – pilots are highly trained professionals who are accountable and will monitor the situation constantly.
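A couple of the comments above describe essentially the same fallback idea: rather than silently disengaging, the system should escalate to alarms and then bring the car to a controlled stop if the driver never responds. Here is a minimal sketch of that escalation logic; the state names, thresholds, and timings are all hypothetical assumptions for illustration, not anything from an actual automaker's system:

```python
from enum import Enum, auto

class FallbackState(Enum):
    NORMAL = auto()           # driver appears attentive, assist stays engaged
    ALARM = auto()            # attention lost: escalate with audible/visual alarms
    CONTROLLED_STOP = auto()  # driver never responded: slow to a stop, hazards on

# Hypothetical thresholds, in seconds of detected driver inattention.
ALARM_AFTER_S = 5.0
STOP_AFTER_S = 15.0

def next_state(seconds_inattentive: float) -> FallbackState:
    """Escalate instead of disconnecting: alarm first, then a controlled stop."""
    if seconds_inattentive < ALARM_AFTER_S:
        return FallbackState.NORMAL
    if seconds_inattentive < STOP_AFTER_S:
        return FallbackState.ALARM
    return FallbackState.CONTROLLED_STOP
```

The point of the sketch is only that no path ends with the system simply "disconnecting": every outcome is either an attentive driver or the vehicle bringing itself to a stop.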
Motorists, on the other hand – getting a driver's license in general isn't hard, and many drivers can't be trusted not to abuse Autopilot-like Level 2 systems.

A more controlled environment like highway driving for long-haul trucks seems far simpler to program: any obstacle or vehicle the system notices doing something that falls outside "the norm" sounds an alarm for a human to take control, and the truck immediately starts braking to a quick stop at the side of the road unless a human operator intercedes to keep it going. Much less complex than the chaos of an urban environment full of baby strollers and so forth. It also gives developers the opportunity to rack up millions of miles in testing rather quickly. So… I just don't get it. Further, there is a huge financial incentive for such a system in long-haul trucking: professional drivers are expensive. Even if the equipment to operate a semi costs half a million dollars, it would only take a few years to recoup the cost in saved salaries. However, car manufacturers have chased the sizzle of putting self-driving into luxury cars instead.

What's going to be either great or annoying is how those trucks are driven. Robots don't have families to come home to, and don't care if it's light or dark. I think gas/electric efficiency and safety will become more important than ever (especially if they go electric, where wind resistance becomes even more important). I could see companies running these trucks at 45 mph on the expressways. It won't matter if a 500-mile trip takes 9 hours or 13 hours anymore if you don't have a human behind the wheel trying to maximize a paycheck.
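The long-haul trucking comment above is describing a similar fallback policy for a more constrained environment: anything outside "the norm" triggers an alarm, and the truck pulls over unless a human operator intercedes. A rough sketch of that decision rule, with every name and threshold an assumption made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    anomaly_score: float     # how far current traffic behavior is from "the norm"
    operator_override: bool  # a human operator has interceded to keep the truck going

# Hypothetical cutoff for "outside the norm."
ANOMALY_LIMIT = 0.8

def decide(obs: Observation) -> str:
    """Alarm on anything anomalous; pull over unless a human keeps the truck going."""
    if obs.anomaly_score < ANOMALY_LIMIT:
        return "continue"
    if obs.operator_override:
        return "continue_under_supervision"
    # No human response: sound the alarm and brake to a stop at the side of the road.
    return "alarm_and_pull_over"
```

The appeal the commenter is pointing at is that, on a limited-access highway, the "pull over and wait for a human" branch is almost always a safe default, which is not true of a city street.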