Meet the Amygdala of the Self-Driving Car

All over the world, engineers are working feverishly on autonomous cars.

They all know that if this technology is going to be accepted by the public, there is almost zero room for error. The public will hold self-driving cars to a much higher standard than human-driven cars.

Making an autonomous car nearly 100% safe requires a combination of split-second reactions and complex judgments about the surrounding environment, things humans do instinctively.

An effective strategy might be to investigate what makes humans more successful than the current crop of automated vehicles and then replicate that process.

How the Human Brain Works—Dual Process Theory

The human brain is a jumble of different systems and processes, much more complex than any logically laid out computer.

There is a lot of duplication and a lot of conflicting actions.

The human brain can be modeled as having two different processes for assessing risk and processing information from the outside world, and these two processes are located in different parts of the brain.

The more primitive of the risk assessment processes is the responsibility of the amygdala. This is the part of the brain responsible for the fight-or-flight reflex. It is intuitive and instinctual.

The amygdala does not reason. It simply reacts.

In the neocortex, the human brain takes a more logical and deliberate approach to risk assessment. The neocortex needs as much data as possible before reaching a conclusion.

Typically, our amygdala overrules our neocortex in highly stressful situations. This is part of the human survival instinct.

Nobel Prize winner Daniel Kahneman calls these two different processes fast thinking and slow thinking. They are part of his dual process theory.

You need to know when to trust your slow-thinking neocortex and when to rely on your fast-thinking amygdala.

Kahneman calls the fast thinking System 1 and the slow, deliberate thinking of the neocortex System 2.

Driving is one example of where humans use both System 1 and System 2.

System 1 thinking is responsible for slamming on the brakes when something suddenly runs in front of the car.

But, when it is time to navigate the car into a narrow parking space, System 2 thinking is needed to judge the correct angle of attack, the right speed, and to make sure there is no contact with the other cars.

How Most Autonomous Cars Currently Operate

Just like humans need their senses to perceive the environment, autonomous cars use sensors such as LiDAR to gather information about the car’s surroundings.

The question is what autonomous cars do with the data gathered from the sensors.

The main trend is for the sensors to send the raw data to the brain (AI) of the car. The AI then translates the different data points into objects that drive behavior.

This process is suboptimal.

The AI has to sift through too much useless data. This extra information takes up valuable network bandwidth and costs too much in time and energy.
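To make that cost concrete, here is a back-of-the-envelope sketch in Python. The point rate, bytes per point, and sensor count are assumptions in the ballpark of automotive LiDARs, not figures from any specific product.

# Back-of-the-envelope cost of shipping raw LiDAR data to a central AI.
# All figures below are illustrative assumptions, not specs of any sensor.

points_per_second = 1_200_000   # assumed order of a modern automotive LiDAR
bytes_per_point = 16            # assumed: x, y, z floats + intensity/timestamp

raw_rate = points_per_second * bytes_per_point          # bytes per second
print(f"Raw stream: {raw_rate / 1e6:.1f} MB/s per sensor")

sensors = 4                     # assumed: several LiDARs on one vehicle
print(f"All sensors: {sensors * raw_rate / 1e6:.1f} MB/s before any filtering")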

Another popular approach goes about things the opposite way.

In this second approach, the AI receives processed information or “objects” from the sensors instead of raw data points.

The problem with this approach is that the AI does not receive rich enough information, especially for multi-sensor fusion purposes.

Also, because of the limited processing power available at the sensor, the sensors typically cannot process data as quickly or accurately as the AI in a central ECU. And in this edge-computing alternative, there is significant duplication of processing across the different sensors.
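As a toy illustration of the information loss, consider the sketch below. The ObjectDetection message and its fields are hypothetical, not any vendor's actual interface; it simply shows how many raw returns collapse into a handful of numbers before any fusion can happen.

from dataclasses import dataclass
import numpy as np

@dataclass
class ObjectDetection:
    """Hypothetical object-level message: all the sensor sends upstream."""
    centroid: tuple[float, float, float]  # metres, sensor frame
    extent: tuple[float, float, float]    # bounding-box size, metres
    label: str

# 500 synthetic raw returns on one vehicle, for illustration only.
rng = np.random.default_rng(0)
points = rng.normal(loc=[10.0, 2.0, 0.5], scale=0.4, size=(500, 3))

# Collapsing 500 points into 7 numbers discards the fine geometry a
# central fusion stage would need to cross-check against radar or camera.
det = ObjectDetection(
    centroid=tuple(points.mean(axis=0)),
    extent=tuple(np.ptp(points, axis=0)),
    label="car",
)
print(det)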

The inefficiencies of both approaches made us think differently.

Applying Dual Process Theory to the Perception Challenges of Autonomous Cars

What would it take to make autonomous cars more like human brains? One key could be to apply dual process theory to autonomous cars.

The car would use System 1 thinking to react to imminent danger and System 2 thinking for other more complex and deliberate tasks.

Right now, most of the attention in autonomous car development is on System 2 thinking.

The AI/Machine Learning is an excellent parallel to the neocortex of the human brain.
But, cars need a parallel to the Amygdala—which until now hasn’t existed, or at least not as an explicit design choice.

The Augmented LiDAR technology, developed by DIBOTICS, will allow autonomous cars to use both System 1 and System 2 thinking simultaneously when using LiDAR as a perception sensor.

The LiDAR sensors feed raw data directly to embedded software running in real time on a tiny, low-power chip.

This chip becomes the Artificial Amygdala of the AI.

Instead of working on raw data, the AI can work on classified point-cloud data.

As this data is classified at the point level and not at the object level, it becomes a set of enriched raw data: a level of abstraction high enough to be useful, but low enough to reduce time-to-decision, processing power, energy consumption, and communication bandwidth.
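As a rough sketch of what such enriched raw data could look like, the following snippet lays out a hypothetical per-point record: geometry plus a one-byte class label. The field names and class codes are illustrative assumptions, not the actual Augmented LiDAR format.

import numpy as np

# Hypothetical layout of one classified LiDAR point: geometry plus a
# one-byte class label. Still point-level data, but already meaningful.
classified_point = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),
    ("intensity", np.float32),
    ("label", np.uint8),   # e.g. 0=fixed, 1=moving, 2=moveable, 3=road ...
])

frame = np.zeros(100_000, dtype=classified_point)  # one sensor frame

# Downstream AI can now slice by meaning instead of re-deriving it:
moving = frame[frame["label"] == 1]     # only points on moving objects
print(classified_point.itemsize, "bytes per point;", moving.size, "moving points")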

Instead of using raw data from LiDAR, the AI can use the output from an Augmented LiDAR.

Because the software does not need a power-hungry GPU or any expensive hardware, it is practical, efficient, and most importantly, effective.

And all it needs is a tiny chip.

How the Augmented LiDAR Makes Autonomous Cars Safer

The Augmented LiDAR plays the role of the amygdala and classifies data points from the LiDAR for each individual frame.

There is no need to wait for several frames before the system can make a decision.
Because the classification is deterministic (no learning or a priori knowledge is required), a high level of safety and ISO 26262 compliance are much easier to achieve.

The Artificial Amygdala is not meant as a replacement for the System 2 thinking that the AI needs to do. Instead, Augmented LiDAR is a parallel system.

It provides fast-acting System 1 thinking for critical situations.
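A minimal sketch of the kind of deterministic, single-frame reflex such a System 1 layer could run is shown below. The corridor geometry, thresholds, and label codes are illustrative assumptions, not DIBOTICS logic.

import numpy as np

# A deterministic single-frame reflex: brake if any point labelled as a
# moving or moveable object falls inside the ego corridor ahead.
MOVING, MOVEABLE = 1, 2
CORRIDOR_HALF_WIDTH = 1.5   # metres, assumed vehicle half-width plus margin
BRAKE_DISTANCE = 8.0        # metres, assumed reflex trigger distance

def reflex_brake(xyz: np.ndarray, labels: np.ndarray) -> bool:
    """xyz: (N, 3) points in the vehicle frame (x forward, y left)."""
    ahead = (xyz[:, 0] > 0) & (xyz[:, 0] < BRAKE_DISTANCE)
    in_corridor = np.abs(xyz[:, 1]) < CORRIDOR_HALF_WIDTH
    hazard = np.isin(labels, (MOVING, MOVEABLE))
    return bool(np.any(ahead & in_corridor & hazard))

# One frame, one decision: no history, no learned model, fully auditable.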

But for System 1 to be useful, it needs to be smart: providing only basic ego-motion and free-space information is not enough.

3D SLAM on Chip

For the Artificial Amygdala to be useful, it should be able to solve some of the key perception challenges of the self-driving car (a sketch of the resulting outputs follows the list below):

a) Ego-Motion: understanding, frame by frame, how the vehicle is moving when there is no reference map, or localizing the vehicle when there is one.

b) 3D Mapping: creating a moving 3D map around the vehicle, allowing for a virtual sensor frame created by the integration of hundreds of actual sensor frames.

These two features together are commonly called SLAM, for Simultaneous Localization and Mapping. When running on a small chip, as in the Augmented LiDAR technology, we call it SLAM on Chip.

c) Point-wise classification: each LiDAR point is classified, in real time, into one of the following categories:

a. Fixed object, e.g., a building.

b. Moving object, e.g., a moving car or pedestrian.

c. Moveable object, e.g., a parked car or static pedestrian.

d. Vegetation.

e. Drivable road.

f. Undrivable ground.

g. Road markings.

h. Traffic signs.

d) Object detection and tracking, also called DATO: for the moving and moveable categories, the points are clustered and tracked over time.

Physical characteristics such as trajectory, speed, and volume are delivered to the AI in real time.

The tracked objects are also labeled:

a. Car.

b. Truck/Van/Bus.

c. Bicycle or motorcycle.

d. Pedestrian or animal.
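To tie the lists above together, here is a hypothetical data model for these outputs: per-point classes, per-frame ego-motion, and DATO tracks. All names and fields are illustrative assumptions, not the actual Augmented LiDAR API.

from dataclasses import dataclass
from enum import IntEnum

class PointClass(IntEnum):
    """Point-wise classification categories (illustrative codes)."""
    FIXED = 0              # e.g. a building
    MOVING = 1             # e.g. a moving car or pedestrian
    MOVEABLE = 2           # e.g. a parked car or static pedestrian
    VEGETATION = 3
    DRIVABLE_ROAD = 4
    UNDRIVABLE_GROUND = 5
    ROAD_MARKING = 6
    TRAFFIC_SIGN = 7

class ObjectLabel(IntEnum):
    """Labels for tracked objects (illustrative codes)."""
    CAR = 0
    TRUCK_VAN_BUS = 1
    TWO_WHEELER = 2        # bicycle or motorcycle
    PEDESTRIAN_OR_ANIMAL = 3

@dataclass
class EgoState:
    """Per-frame SLAM output: pose change since the previous frame."""
    translation: tuple[float, float, float]     # metres
    yaw_pitch_roll: tuple[float, float, float]  # radians

@dataclass
class TrackedObject:
    """One DATO track, refreshed every frame."""
    track_id: int
    label: ObjectLabel
    position: tuple[float, float, float]   # metres, vehicle frame
    velocity: tuple[float, float, float]   # metres per second
    volume: float                          # cubic metres, bounding estimate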

Meeting High Public Expectations for Safety and Consistency

Before the public accepts autonomous cars, the error rate will need to be near zero. However, System 2 thinking will not get the industry to the point of public acceptance on its own.

A deterministic, low-power, fast, and smart System 1 working together with an AI-based System 2 is safer than System 2 thinking alone.

Just as human drivers need their amygdalas to keep them safe, autonomous cars need System 1 thinking, in the form of an Artificial Amygdala, to keep them out of accidents.

A dual-process approach like the Augmented LiDAR technology can not only make autonomous cars safer than human drivers; it can make autonomous cars welcome to the public.


DIBOTICS will be exhibiting at AutoSens 2017 from 19-21 September, booth number 30.
