As part of US regulations surrounding self-driving cars, anyone with a permit to test an autonomous vehicle in California has to report how often the human driver is forced to take control from the computer. The first round of filings is in, and it's not all good for the machines. Overall, seven companies filed reports covering "disengagements" in the 15-month period between September 2014 and November 2015. The California DMV defines a disengagement as either the self-driving software failing and needing a human to take control, or the test driver feeling compelled to take control to avoid a dangerous situation.
Across the seven companies that filed, there were 2,894 disengagements logged. Google recorded a total of 341, while Tesla's sitting pretty at zero. Of course, a disengagement doesn't mean the car was about to crash, and Google defended its results along those lines:
Disengagements are a critical part of the testing process that allows our engineers to expand the software's capabilities and identify areas of improvement. Our objective is not to minimise disengagements; rather, it is to gather, while operating safely, as much data as possible to enable us to improve our self-driving system. Therefore, we set disengagement thresholds conservatively, and each is carefully recorded.
The headline numbers also don't reflect the improvements Google has made to its self-driving technology: the highest number of disengagements (48) occurred in January 2015, when Google's cars drove 29,000km; in October, when the cars drove 75,600km, only 11 disengagements took place.
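A quick back-of-the-envelope sketch shows what those monthly figures mean once you normalise for distance driven (the figures are from the article; the script itself is just illustrative arithmetic):

```python
# Disengagement figures reported for Google's cars, from the article:
# (disengagements, kilometres driven) per month.
months = {
    "January 2015": (48, 29_000),
    "October 2015": (11, 75_600),
}

# Normalise each month to disengagements per 100,000 km driven.
rates = {
    month: disengagements / km * 100_000
    for month, (disengagements, km) in months.items()
}

for month, rate in rates.items():
    print(f"{month}: {rate:.1f} disengagements per 100,000 km")

# Relative improvement between the two months.
print(f"Improvement: {rates['January 2015'] / rates['October 2015']:.1f}x")
```

The rate falls from roughly 165 to roughly 15 disengagements per 100,000km, an improvement of a bit more than a factor of ten.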
The reports aren't a glowing endorsement of self-driving cars: in order for robot drivers to gain mainstream acceptance, the error rate will have to be ridiculously low, and 15 potential incidents per 100,000km is far from perfect. But in the space of ten months, Google's cars cut their disengagement rate by roughly a factor of ten; just imagine what a few more years of training will do.
Image via Google