The cause of the fatal crash of an Uber self-driving car appears to have been at the software level, specifically in the function that decides which objects to ignore and which to attend to, The Information reported. This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.
Given the multiplicity of vision systems and backups on board any given autonomous vehicle, it seemed impossible that the failure of any one of them could have prevented the car’s systems from perceiving Elaine Herzberg, who was crossing the street directly in front of the lidar and front-facing cameras. Yet the car didn’t even touch the brakes or sound an alarm. Combined with an inattentive safety driver, this failure resulted in Herzberg’s death.
The only possibilities that made sense were:
- A: Fault in the object recognition system, which may have failed to classify Herzberg and her bike as a pedestrian. This seems unlikely, since bikes and people are among the things the system should be most competent at identifying.
- B: Fault in the car’s higher logic, which makes decisions like which objects to pay attention to and what to do about them. There’s no need to slow down for a parked bike at the side of the road, for instance, but one swerving into the lane in front of the car is cause for immediate action. This mimics human attention and decision-making and prevents the car from panicking at every new object it detects.
The sources cited by The Information say that Uber has determined B was the problem. Specifically, the system was set up to ignore objects that it should have attended to; Herzberg seems to have been detected but dismissed as a false positive.
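To make that failure mode concrete, here is a minimal, hypothetical sketch of the kind of confidence-threshold filter a planning layer might apply to detections. Everything in it, from the field names to the threshold value, is an illustrative assumption, not Uber’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A hypothetical detection record; all fields are illustrative."""
    label: str         # e.g. "pedestrian", "bicycle", "plastic_bag"
    confidence: float  # classifier confidence in [0, 1]
    in_path: bool      # does the object's predicted track cross the car's path?

# Labels this sketch treats as safety-critical (an assumption).
CRITICAL_LABELS = {"pedestrian", "bicycle", "vehicle"}

def should_react(det: Detection, threshold: float = 0.8) -> bool:
    """Decide whether the planner acts on a detection.

    Set the threshold too high and real hazards get discarded as
    "false positives" -- the failure mode described above.
    """
    return (
        det.label in CRITICAL_LABELS
        and det.in_path
        and det.confidence >= threshold
    )

# A real pedestrian scored just under the cutoff is silently ignored.
print(should_react(Detection("pedestrian", 0.75, True)))  # False
```

The point of the sketch is that a single tunable number can sit between detection and action; set it wrong, and everything upstream of it is moot.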
This is not good.
Autonomous vehicles have superhuman senses: lidar that stretches out hundreds of feet in pitch darkness, object recognition that tracks dozens of cars and pedestrians at once, radar and other systems that watch the road unblinkingly.
But all these senses are subordinate, like our own, to a “brain”: a central processing unit that takes the information from the cameras and other sensors, combines it into a meaningful picture of the world around the car, then makes decisions based on that picture in real time. This is by far the hardest part of the car to create, as Uber has shown.
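As a rough illustration of why that layer is hard, here is a hedged sketch of the sense-fuse-decide-act loop such a brain runs. Every name and parameter below is a placeholder for illustration, not a real autonomy API.

```python
import time

def perception_planning_loop(sensors, fuse, plan, actuate, hz=10, cycles=3):
    """One illustrative 'brain' loop: sense, fuse, decide, act.

    `sensors` are callables returning raw readings, `fuse` merges them
    into a single world model, and `plan` turns that model into a
    driving command. All are stand-ins for this sketch.
    """
    period = 1.0 / hz
    for _ in range(cycles):
        start = time.monotonic()
        readings = [read() for read in sensors]  # lidar, cameras, radar...
        world = fuse(readings)                   # one coherent picture
        actuate(plan(world))                     # brake, steer, or continue
        # Each cycle has to fit in the period: a hard real-time budget
        # that is part of what makes this layer so difficult to build.
        time.sleep(max(0.0, period - (time.monotonic() - start)))

# Trivial stubs, just to show the shape of the loop.
perception_planning_loop(
    sensors=[lambda: "lidar_frame", lambda: "camera_frame"],
    fuse=lambda readings: {"objects": readings},
    plan=lambda world: "continue",
    actuate=print,
)
```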
It doesn’t matter how good your eyes are if your brain doesn’t know what it’s looking at or how to respond properly. I’ve asked Uber for comment.