I read an article posted today about a driver who was suddenly struck by a pulmonary embolism while driving home (TechCrunch). His reaction: he let the autopilot system drive him 20 miles down the highway and to a hospital. That’s a pretty incredible story. After reading it, the first thing I wondered was: did the driver keep any control of the wheel, or did he let the autopilot system take over completely?
I’ve heard a lot of cautionary comments warning people not to completely trust the Tesla Model X’s autopilot system. Even while the car was still under development, Tesla emphasized that the autopilot system is meant to assist the driver, not to replace the driver.
I’ve even heard about car crashes involving the autopilot system. The most recent one I know of involved an Ohio man. As with all of the accidents involving the autopilot system, Tesla investigated the cause of the Ohio man’s fatal crash; according to Tesla, it was a “technical failure” of the automatic braking system (NY Times). But despite the investigations, many people still believe the autopilot system contributed to the crash, and they remain skeptical of it.
The next thing I wondered about that incredible story is: did the driver choose the safest option? On one hand, he arrived at a hospital safely. On the other hand, the autopilot system has been in the spotlight for Tesla Model X accidents. Had the driver called 9-1-1 instead, he would probably have waited a long time for an ambulance to arrive. Plus, making a call in his condition would have been extremely difficult, perhaps even impossible.
Then I realized: whether the driver made the right call by staying on the road doesn’t really matter. What truly matters is that he had the option to continue “driving” with the autopilot system. And in those brief moments, the driver believed that was his best option.