The first fatal accident involving a self-driving car being tested by Uber dampened the general enthusiasm for autonomous vehicles that had prevailed until then.

Particularly worrying was that the cause of the crash lay in a mismatch between the maturity of the technologies involved and the way some developers of self-driving cars use them.

On the hardware side, the performance required from the car’s Lidar sensors, which use lasers to detect external objects, is not yet achievable in a vehicle travelling at high speed. More importantly, on the software side, machine learning is reaching the limits of how it can be used.

Machine learning cannot provide all the answers that safety demands. It is successfully applied in retailing, banking and insurance, providing a basis for systems that can deal with more than 90 per cent of the situations put to them.

But what about the remaining five or 10 per cent of borderline cases? While relatively unimportant in most fields, they are crucial where road safety is concerned. Machine learning works by correlation, not by determining a link between cause and effect. Yet mastery of such causal links is essential in situations where human life is at stake. Most companies in the sector, such as Nvidia and Mobileye, are working on machine learning solutions. Others, like Another Brain, are trying to get even closer to the way the human brain works.
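
To see why the last few per cent matter so much, consider a rough, purely illustrative calculation; the figures below are assumptions chosen for the sketch, not industry data. Even an impressive per-situation success rate compounds into poor odds over the many situations a car meets on a single journey.

```python
# Back-of-the-envelope illustration with hypothetical numbers: if a system
# handles each driving situation correctly with probability p, the chance of
# getting through n independent situations without a single error is p**n.

def prob_no_failure(p: float, n: int) -> float:
    """Probability of handling n independent situations with no error."""
    return p ** n

# Assume, purely for illustration, 1,000 distinct situations per journey.
for p in (0.90, 0.95, 0.999):
    print(f"per-situation success {p:.3f} -> "
          f"error-free journey probability {prob_no_failure(p, 1000):.2e}")
```

The point of the sketch is only that a hit rate which looks excellent in retailing translates into unacceptable odds on the road.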

But Raul Bravo, founder of Dibotics, reckons that the priority is to try to develop not just the machine’s capacity for complex reasoning but also its “reptilian brain”: the part of the brain that holds us back when we sense danger, even before we have identified its source with certainty.
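
A minimal sketch of what such a reflex layer could look like, under the assumption of a hypothetical architecture in which a fast rule fed by raw sensor readings can trigger braking before the slower, learned perception stack has classified the object. The thresholds and field names are invented for illustration; this is not Dibotics’ implementation.

```python
# Hypothetical "reptilian brain" safety layer: react to a raw danger signal
# (something close and closing fast) without waiting for classification.

from dataclasses import dataclass

@dataclass
class RawReading:
    distance_m: float        # distance to the nearest detected return
    closing_speed_ms: float  # how fast the gap is shrinking, in m/s

def reflex_brake(reading: RawReading) -> bool:
    """Trigger emergency braking on the raw signal alone."""
    time_to_collision = (
        reading.distance_m / reading.closing_speed_ms
        if reading.closing_speed_ms > 0 else float("inf")
    )
    return time_to_collision < 1.5  # brake if impact is under ~1.5 seconds away

# The slower, learned pipeline would run in parallel; the reflex layer can
# only add caution, never remove it.
if reflex_brake(RawReading(distance_m=12.0, closing_speed_ms=10.0)):
    print("emergency braking engaged")
```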

On an issue as sensitive as this, it is crucial to get back to the basics of the scientific approach. When assessing technological solutions, it will be essential to try to establish laws derived from experience, and not rely merely on inductive approaches that accumulate experience but do not guarantee safety.

The writer is CEO of Roland Berger
