Managing Expectations Around AI

Loren Davie · Published in Anti Pattern · Jul 6, 2016

As most know by now, on May 7th, Joshua Brown was killed while riding in his Tesla Model S. Significant details will likely still emerge in the weeks to come, but it appears that Tesla’s Autopilot mode was engaged at the time of the crash. Whether Brown was paying any attention to the road remains undetermined.

I’m not going to speculate about what happened that day, but there are certainly enough videos on YouTube of drivers abusing Autopilot mode. When describing Autopilot, Tesla states that drivers should still pay attention to the road, ready to take over as necessary. That makes it Level 2 automation under the National Highway Traffic Safety Administration’s classification, which essentially means the system offers automated functions like cruise control and lane centering while the driver remains responsible for monitoring the road. It is not fully automated driving (Level 4).

So given Tesla’s explicit messaging about Autopilot’s capabilities, why would anyone go ahead and assume that it could offer fully autonomous driving? This turns out to be a significant problem with AI and human interaction: we have a tendency to assume that an autonomous system has far more capabilities than it actually has, and we are surprised (and disappointed) when it comes up short.

Our naive understanding of technology has a very all-or-nothing quality. Either we see technology as a dumb tool (it’s a hammer; you hit things with it) or as a magic genie in a bottle, capable of anything. There is no middle ground. This creates a disconnect in expectations for members of the general public who interact with autonomous systems, and a problem for AI adoption.

Trying to find a suitable mental model, I found myself looking for pre-existing examples of less-than-human intelligence. For example, Amazon’s Echo will happily fire off a timer alarm and ignore your cries of “Stop, Alexa! Stop, Alexa!” But if you say “Alexa, stop!” it will stop the alarm. It has to hear its name first. Suddenly I realized I had a mental model: Alexa isn’t human. She’s more like a very smart dog. She needs to be addressed in a specific way, according to her training. If I think of Alexa that way, my expectations concerning her capabilities are kept in check. No one becomes upset that their dog can’t read the newspaper to them.
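
To make that “training” a little more concrete, here is a toy sketch in Python (purely illustrative, and not how the Echo actually works) of a wake-word-gated command handler. The wake word, function names, and parsing are all invented for this example; the point is simply that the command is ignored unless the name comes first.

```python
import string

# Toy illustration only, not Alexa's real implementation: the device acts
# on a command only when the wake word is the first thing it hears.

WAKE_WORD = "alexa"

def tokenize(utterance: str) -> list[str]:
    # Lower-case the utterance and strip punctuation from each word.
    return [w.strip(string.punctuation) for w in utterance.lower().split()]

def handle_utterance(utterance: str, alarm_ringing: bool) -> str:
    words = tokenize(utterance)
    if not words or words[0] != WAKE_WORD:
        return "ignored (no wake word heard first)"
    command = " ".join(words[1:])
    if command == "stop" and alarm_ringing:
        return "alarm stopped"
    return "unrecognized command: " + command

print(handle_utterance("Stop, Alexa! Stop, Alexa!", alarm_ringing=True))  # ignored (no wake word heard first)
print(handle_utterance("Alexa, stop!", alarm_ringing=True))               # alarm stopped
```

A real voice assistant is doing far more than string matching, of course, but the rigidity is similar: the interaction works only on the system’s terms.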

And herein lies the crux of the issue: the design of autonomous systems needs to create mental models that manage the expectations of their users. If a system is not as capable as a human, we need to be very cautious about anthropomorphizing it. Implicitly over-promising an autonomous system’s capabilities is an incredibly easy trap to fall into, even when we explicitly state otherwise, and it can have deadly consequences.
