Self-Driving Cars Required Humans to Intervene Thousands of Times Last Year

Under California's rules for testing self-driving cars, anyone with a permit to test an autonomous vehicle in the state has to report how often the human driver is forced to take control from the computer. The first round of filings is in, and it's not all good news for the machines. In all, seven companies filed reports covering "disengagements" over the 15-month period between September 2014 and November 2015. The California DMV defines a disengagement as either the self-driving software failing and needing a human to take control, or the test driver feeling compelled to take control to avoid a dangerous situation.

Across the seven companies that filed, 2,894 disengagements were logged. Google recorded a total of 341, while Tesla's sitting pretty at zero. Of course, a disengagement doesn't mean the car was about to crash, and Google made exactly that point in defence of its results:

Disengagements are a critical part of the testing process that allows our engineers to expand the software's capabilities and identify areas of improvement. Our objective is not to minimise disengagements; rather, it is to gather, while operating safely, as much data as possible to enable us to improve our self-driving system. Therefore, we set disengagement thresholds conservatively, and each is carefully recorded.

The raw numbers also don't reflect the improvements Google made to its self-driving technology over the reporting period: the worst month, January 2015, saw 48 disengagements across 29,000km of driving, or roughly one every 600km; by October, when the cars drove 75,600km, only 11 disengagements took place, roughly one every 6,900km.

The reports aren't a glowing endorsement of self-driving cars: for robot drivers to gain mainstream acceptance, the error rate will have to be ridiculously low, and 15 potential crashes per 100,000km is far from perfect. But in the space of ten months, Google's cars got roughly ten times better; just imagine what a few more years of training will do.
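
Those headline rates fall straight out of the filing figures. Here's a quick sanity check of the arithmetic in Python (a minimal sketch; the monthly counts and distances are the ones quoted above, the rest is just division):

    # Google's disengagement figures from its California DMV filing, as quoted above.
    jan_disengagements, jan_km = 48, 29_000    # January 2015 (worst month)
    oct_disengagements, oct_km = 11, 75_600    # October 2015

    # Normalise both months to disengagements per 100,000km.
    jan_rate = jan_disengagements / jan_km * 100_000
    oct_rate = oct_disengagements / oct_km * 100_000

    print(f"Jan 2015: {jan_rate:.0f} per 100,000km")     # ~166
    print(f"Oct 2015: {oct_rate:.0f} per 100,000km")     # ~15
    print(f"Improvement: {jan_rate / oct_rate:.1f}x")    # ~11x

The October figure is where the "15 potential crashes per 100,000km" comes from, and the January-to-October ratio is the "ten times better" claim.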

[Twitter]

Image via Google


Comments

    This is why you do testing. If they keep improving the way they are, the rate will be acceptably low in very short order. Then it's up to the governments to approve them, the companies to release them, and the public to accept them.

    If you ran it the other way around and had AI inside vehicles monitoring the human drivers, I'm pretty sure the AI would have had to intervene thousands of times to take control and avoid dangerous situations. Just look at the accident stats...

      Interestingly, there's good data on accidents, but mostly for those that result in injury or death.
      According to the ABS, in 2014 Australians drove ~245 billion km in 12 months.

      If you applied that October disengagement rate (roughly 1 per 6,800km) to that ABS figure, you'd get around 36 million disengagements a year in Australia alone (see the quick check below).

      That doesn't mean a disengagement would result in a crash or injury, but it's still a significant number when you look at how many km are travelled.

        ABS 2014 - there were 1,156 road fatalities
        Dept infrastructure 2012 - 34,000 road hospitalisations
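
      For reference, that extrapolation is straightforward to check (a sketch only, using the 1:6,800km rate and the ABS distance quoted in this thread):

          # Extrapolating the October 2015 disengagement rate to Australian driving.
          km_per_disengagement = 6_800    # ~75,600km / 11 disengagements (Oct 2015)
          australia_km = 245e9            # ~245 billion km driven in 2014, per the ABS

          disengagements = australia_km / km_per_disengagement
          print(f"~{disengagements / 1e6:.0f} million disengagements per year")  # ~36 million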

    Bear in mind that a disengagement doesn't always equal a potential crash. Another article described some of the problems the AI has - namely that it's too cautious. Some of the situations where humans took over could be like that - the AI wasn't willing to proceed because it decided the situation was too risky, i.e. it was actively stopping the car to *prevent* a crash.

    You could liken the current state of the AI to an overly cautious learner driver doing their practical licence test, trying desperately not to mess up in front of the examiner.

    I'm not necessarily a fan of self driving cars, but it's interesting to see how they progress.

      I remember someone reporting that they had unintentionally brought one of the Google cars to a complete standstill. They were riding their bike and stopped at a four-way stop sign, but instead of putting a foot down they did a track stand, balancing in place on the wheels. Because they were still moving slightly, the car just wouldn't go, and in the end the rider simply cycled through.

    Tesla at zero is actually concerning, given that I've seen YouTube videos of five situations where owners have had to take control. It makes me wonder about the extent of Tesla's testing - I guess the reporting only covers California and a limited period.

    How does the "15 potential crashes per 100,000km" figure compare with the actual crashes per 100,000km for regular cars?

    It'd be nice if they could get the number down to zero, but presumably there will be a point where self driving cars still occasionally kill someone but are better than the alternative.

    2891 disengagements were due to the driver, in frustration, overriding the car's better choice in music.
