Choosing the wrong abstraction: Tesla Autopilot

I talked last week about choosing the right abstraction to present to your users. Tesla’s Autopilot feature is a textbook case of choosing the wrong abstraction.

Tesla ships a driver assistance feature in its vehicles. According to the Tesla website, it can:

  • Match speed to traffic conditions
  • Keep within a lane
  • Automatically change lanes without driver input
  • Transition from one freeway to another
  • Exit the freeway when your destination is near
  • Self-park when near a parking spot
  • Be summoned to and from your garage

It’s an impressive list of features, to be sure. To most people, however, “autopilot” means something bigger: a system that drives the car without any human intervention. And that’s where the problem lies. The name carries an implicit promise that the car will require no - or almost no - human intervention when Autopilot is activated. As a result, drivers have been taking their hands entirely off the wheel, and several crashes have resulted recently.

What interests me here is the way people talk about these crashes. They don’t say that a Tesla driver crashed into something; they say that a Tesla on Autopilot crashed into something. That framing shows how strong the implicit promise in the name “autopilot” is - it isn’t just branding that people shrug off while understanding its limitations. Non-Tesla-owners are blaming the cars, not the drivers.

Regarding Autopilot, Tesla also says:

Every driver is responsible for remaining alert and active when using Autopilot, and must be prepared to take action at any time.

To add to what I said last week: I do not like this way of describing an abstraction. Saying that the interface you present to your users is “pretty much X, but with Y limitations” turns out to be misleading in practice. I infer - from my own experience and from how the Tesla Autopilot crash stories are reported - that we tend to elide the second part of that statement. We don’t think about the specific limitations of the interface we’re using.

I presented two methods of resolving the discrepancy between promises and capability last week. One was to meticulously track and implement every promise your interface makes. The other was to change the interface - perhaps creating a new idea that encompasses only the behavior your interface actually has. In this case, I would give the feature a different name. Something like Tesla Driver Assistance would build on existing ideas to come much closer to the way this feature is (currently) supposed to be used. To go further, I would coin an entirely new name - something like Tesla Drive, which doesn’t have any preconceived notions associated with it. With a new name, users would have to ask “what’s Tesla Drive?” “It’s an advanced driver aid that can park for you, change lanes for you, even guide the car from the garage to your front steps to pick you up.” After that conversation, do you think the user would be more or less likely to let go of the wheel completely?
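If you translate that second method into code, here is a minimal sketch of the idea - the class and method names are hypothetical, not Tesla’s actual software - where the interface’s name promises only what it delivers, and the limitation is enforced rather than relegated to a footnote:

```python
# Hypothetical sketch: an abstraction whose name and behavior agree.
# None of these names are real Tesla APIs; they only illustrate the idea.

from dataclasses import dataclass


@dataclass
class DriverAssistance:
    """Assists an attentive driver; it does not replace one."""
    hands_on_wheel: bool = True

    def match_speed_to_traffic(self) -> str:
        self._require_attention()
        return "adjusting speed to surrounding traffic"

    def keep_lane(self) -> str:
        self._require_attention()
        return "steering to stay within the current lane"

    def _require_attention(self) -> None:
        # The limitation lives in the interface itself:
        # every capability demands an alert driver.
        if not self.hands_on_wheel:
            raise RuntimeError("driver must keep hands on the wheel")


if __name__ == "__main__":
    assist = DriverAssistance()
    print(assist.keep_lane())      # works while the driver is attentive

    assist.hands_on_wheel = False
    try:
        assist.keep_lane()         # refuses instead of over-promising
    except RuntimeError as err:
        print(f"refused: {err}")
```

The point isn’t the class itself; it’s that a user of DriverAssistance has nowhere to learn the wrong lesson - the name and the behavior tell the same story, which is exactly what “Autopilot” fails to do.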

I suspect Tesla isn’t worried about this: eventually Autopilot will meet all of its implicit promises, and the issue will be moot. Still, I can’t help but think there’s a more responsible way to handle this. I’d even say there’s a more brand-friendly way - Tesla is the only brand being blamed directly for what it then calls driver error. Tesla, perhaps you should look at the systemic issues here, and at what you’re saying when you say “autopilot.” It would save lives, and it would save face.

I’ll leave you with this screenshot I took from Tesla’s info page about Autopilot. Do you think it’s misleading?

Full Self-Driving Hardware on All Cars