As self-driving cars become increasingly common on roads around the world, their deployment has not been free of controversy or safety challenges. Tesla, a leading player in autonomous driving, recently recalled more than two million cars after the US National Highway Traffic Safety Administration (NHTSA) identified problems with its driver-assistance system. While Tesla disputed the agency's analysis, it has committed to improving the system's safeguards.
One of the critical challenges for autonomous vehicles, particularly driverless taxis or “robotaxis”, is their tendency to stop abruptly when they perceive a problem. This default behaviour has caused road chaos and, in some cases, accidents. The recent suspension of Cruise, a robotaxi company owned by General Motors, from operating in California underscored the severity of the issue after a series of high-profile incidents, including one in which a vehicle dragged a pedestrian following a collision.
The tendency of self-driving cars to halt abruptly brings attention to a fundamental challenge: how can these vehicles be designed to comprehend road scenarios and behave more like human drivers? Researchers, drawing on their experiences in designing self-driving cars at Nissan, have proposed a novel approach involving the use of video to analyse driving behaviour.
In their research, the team utilized video recordings of self-driving cars to identify and understand the mistakes made by these vehicles on the road. The incidents mentioned earlier, where self-driving cars halted unexpectedly, revealed a significant discrepancy between the perception of the road by autonomous vehicles and that of human drivers.
Unlike humans, self-driving cars construct a simplified picture of the world based on sensor data, categorizing objects into abstract groups like cars, pedestrians, and bicycles. This oversimplified view can lead to problems in identifying nuanced human behaviour, such as distinguishing between a pedestrian walking casually and one urgently chasing after a bus.
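The problem with this kind of abstraction can be sketched in code. The following is a hypothetical toy example, not the actual Nissan perception stack: all names, fields and categories are illustrative. It shows how two pedestrians with very different intentions collapse into the same coarse label once the perception layer has thrown away the behavioural cues.

```python
from dataclasses import dataclass

# Hypothetical sketch: a coarse perception layer that maps sensor
# detections to a small set of abstract categories.
@dataclass
class Detection:
    kind: str          # raw category from the sensor stack
    speed_mps: float   # measured speed in metres per second
    heading: str       # e.g. "along_pavement" or "toward_road"

def perceive(detection: Detection) -> str:
    """Map a detection to an abstract class, discarding cues
    (speed, heading) that might signal the person's intent."""
    return {"person": "pedestrian", "cyclist": "bicycle"}.get(
        detection.kind, "object")

strolling = Detection(kind="person", speed_mps=1.2, heading="along_pavement")
chasing_bus = Detection(kind="person", speed_mps=3.5, heading="toward_road")

# Both detections collapse to the same abstract label, even though a
# human driver would treat them very differently.
print(perceive(strolling))    # pedestrian
print(perceive(chasing_bus))  # pedestrian
```

A human driver effectively keeps the extra signals (speed, direction, body language) in view; a classifier like this discards them at the first step.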
The notion of “edge cases,” situations not anticipated by developers, becomes crucial in understanding the limitations of self-driving cars. While developers assume a finite number of unusual scenarios, the real world is dynamic and unpredictable, with the potential for entirely new edge cases to emerge continuously.
Humans rely on judgment when faced with unfamiliar situations, a capability lacking in self-driving cars. In the absence of judgment, autonomous vehicles often default to a seemingly neutral or safe solution: stopping. The researchers observed that the most common behaviour of self-driving cars in unusual situations, as seen in their video recordings, is to come to a halt on the road.
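The fallback logic described above can be illustrated with a minimal sketch. This is an assumption about how such a planner might be structured, not code from any real vehicle: the scenario names and actions are invented for illustration.

```python
# Hypothetical sketch: a planner that handles a fixed set of anticipated
# scenarios and falls back to stopping for anything else (an "edge case").
KNOWN_SCENARIOS = {
    "clear_road": "proceed",
    "red_light": "stop_at_line",
    "pedestrian_crossing": "yield",
}

def plan(scenario: str) -> str:
    # Anything the developers did not anticipate triggers the default:
    # come to a halt in the lane, even when stopping is itself hazardous.
    return KNOWN_SCENARIOS.get(scenario, "stop_in_lane")

print(plan("red_light"))              # stop_at_line
print(plan("firetruck_approaching"))  # stop_in_lane -- the risky default
```

Because the real world keeps producing scenarios outside `KNOWN_SCENARIOS`, the "safe" default ends up being the most common response to anything unusual.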
However, the safety of this default behaviour is questionable, especially when it involves stopping in front of emergency vehicles like fire trucks. This not only obstructs traffic but can create additional hazards. The inability of self-driving cars to navigate ordinary traffic situations and their continuous misunderstandings of human intent raise serious concerns about their impact on road safety.
In response to these challenges, the researchers proposed potential solutions for designing the motion of self-driving cars to enhance their communication with other road users. They identified five basic movement elements – gaps, speed, position, indicating, and stopping – which, when combined, could facilitate better understanding and interaction between autonomous vehicles and human drivers.
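One way to picture how the five elements might combine is as fields of a single motion "message". The structure and values below are purely illustrative assumptions, not the researchers' actual design:

```python
from dataclasses import dataclass

# Hypothetical sketch of the five movement elements as one data structure;
# field names and values are invented for illustration.
@dataclass
class MotionMessage:
    gap_m: float        # gap left to other road users, in metres
    speed_mps: float    # chosen speed
    lane_position: str  # e.g. "centre" or "edge_left"
    indicating: str     # turn-signal state
    stopping: bool      # whether the car is coming to a stop

def yield_to_pedestrian() -> MotionMessage:
    """Combine elements to communicate 'I have seen you, cross safely':
    a wide gap, a complete stop, and a central, predictable position."""
    return MotionMessage(gap_m=4.0, speed_mps=0.0,
                         lane_position="centre",
                         indicating="none", stopping=True)

msg = yield_to_pedestrian()
print(msg.stopping, msg.gap_m)  # True 4.0
```

The point of such a scheme is that stopping stops being a blunt default and becomes one of several deliberate signals, readable by the humans sharing the road.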
The possibilities for self-driving cars are vast, but these challenges must be addressed before widespread deployment. In particular, the default “halting” behaviour needs to be resolved if autonomous vehicles are to operate safely and efficiently on roads worldwide. Researchers and developers will need to collaborate on refining the technology, and on how these vehicles communicate with the people around them, before the full potential of self-driving cars can be realised.