Intelligent sensor fusion for smart cars

Peter Els | 08/21/2017

Engineers have been installing the building blocks of modern autonomous vehicles since the 1980s: antilock brakes, traction control, electric power steering, drive-by-wire, adaptive cruise control, cameras and more. Now, as engineers tie these components together with lidar, radar and high-definition mapping, the car is becoming a thinking machine that is aware of its place in the world.

But, as NXP Automotive CTO Lars Reger told delegates at NXP’s ‘FTF Connects’ conference in San Jose in June 2017, “An autonomous car can only be as good as its environmental sensing.” In voicing this opinion he was expressing the sentiment of the entire industry, capturing in a single sentence both the challenge and the future of automated vehicles.

In order for humans to trust the technology, self-driving cars need to replicate our cognitive behavior, which is made up of a three-part process:


1) Perception of the environment

2) Decision making based on the perceived surroundings

3) Timely execution of each decision


In the automated vehicle, perception is mostly the domain of sensors such as cameras, radar, lidar and ultrasonic; decision making is handled by artificial intelligence (AI), algorithms and processing; and manipulation of the vehicle is carried out by actuators.
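To make that division of roles concrete, the following is a minimal, purely illustrative sketch of the perceive-decide-act loop in Python. All class names, thresholds and sensor values are hypothetical placeholders, not any production autonomous-driving API.

```python
# Minimal sketch of the perceive-decide-act loop described above.
# Every name and value here is an illustrative placeholder.

import time

class Perception:
    def read_sensors(self):
        # A real vehicle would poll camera, radar, lidar and ultrasonic
        # drivers here; this stub returns a dummy environment model.
        return {"obstacle_ahead": False, "distance_m": 50.0}

class Planner:
    def decide(self, environment):
        # 2) Decision making based on the perceived surroundings.
        if environment["obstacle_ahead"] and environment["distance_m"] < 30.0:
            return {"brake": 1.0, "steer": 0.0}
        return {"brake": 0.0, "steer": 0.0}

class Actuators:
    def execute(self, command):
        # 3) Timely execution of each decision via the vehicle's actuators.
        print(f"brake={command['brake']:.1f} steer={command['steer']:.1f}")

def control_loop(cycles=3, period_s=0.1):
    perception, planner, actuators = Perception(), Planner(), Actuators()
    for _ in range(cycles):
        env = perception.read_sensors()   # 1) perceive the environment
        cmd = planner.decide(env)         # 2) decide
        actuators.execute(cmd)            # 3) act
        time.sleep(period_s)

if __name__ == "__main__":
    control_loop()
```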

The perception systems can further be broken down into two categories:


  • Proprioceptive Sensors – responsible for sensing the vehicle’s internal state like wheel speed, inertial measurement and driver attentiveness
  • Exteroceptive Sensors – responsible for sensing the vehicle’s surroundings


The exteroceptive sensors are of particular importance for autonomous capabilities, as they deal with the external environment. It is their job to spot all important objects on or near the road, accurately identifying other vehicles, pedestrians, debris and, in some cases, road features such as signs and lane markings.

An increasingly complex range of on-board smart sensors – cameras, radar, ultrasonic, infrared and lidar, along with connected sensors for V2X communications and telematics – provides the rich, real-time data that the connected automated car must process to build a picture of its environment at any point in time:

  • Ultrasonic is good for judging a car’s distance to objects, but only at short ranges
  • Radar can detect objects at long ranges regardless of the weather but has low resolution
  • Lidar has high resolution but loses sight in heavy snow and rain
  • Cameras, on the other hand, lead the way in classification and texture interpretation. By far the cheapest and most widely available sensors, cameras generate massive amounts of data but also rely on good visibility.

The individual shortcomings of each sensor type cannot be overcome simply by using the same sensor type multiple times. Instead, the information coming from different types of sensors must be combined to best interpret the situation.

Sensor fusion

To achieve fully autonomous driving – SAE Level 4/5 – it is essential to make judicious use of the sensor data, which is only possible with multi-sensor data fusion. Instead of each system independently performing its own warning or control function in the car, in a fused system the final decision on what action to take is made centrally by a single entity.


Applying sensor fusion, the inputs of various sensors and sensor types are combined to perceive the environment more accurately, resulting in better and safer decisions than those made by independent systems.
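As a simple illustration of what combining those inputs can mean, the sketch below fuses two noisy estimates of the same quantity – say, the distance to the car ahead as reported by radar and by a camera – using inverse-variance weighting, one common textbook approach. The noise figures are invented for illustration and are not real sensor specifications.

```python
# Hedged sketch: fuse two noisy estimates of the same quantity by
# weighting each with the inverse of its variance. The numbers are
# invented for illustration, not real sensor specifications.

def fuse_estimates(measurements):
    """measurements: list of (value, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in measurements]
    fused_value = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)  # the fused estimate is less uncertain than either input
    return fused_value, fused_variance

# Radar: long range but coarse; camera: finer-grained but weather-dependent.
radar  = (42.7, 0.9)   # distance in meters, variance in m^2 (hypothetical)
camera = (41.9, 0.4)

value, variance = fuse_estimates([radar, camera])
print(f"fused distance: {value:.2f} m (variance {variance:.2f} m^2)")
```

The same weighting idea underlies more sophisticated estimators such as the Kalman filter, which additionally tracks how the state evolves between measurements.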

In some cars, like the Tesla, camera and radar data are fused and then processed by the car’s AI together with other sensory data, such as that received from the ultrasonic sensors.

Although multi-sensor fusion has many advantages, the primary benefit is fewer false positives and false negatives.

Any sensor can at any time report a false positive or a false negative, and it is up to the self-driving car’s AI to figure out which is which. This can be hard to do. The AI will typically canvass all of its sensors to determine whether any one unit is transmitting incorrect data. Some systems judge which sensor is right by pre-determining that certain sensors are more trustworthy than others; others use a voting protocol in which, if X sensors vote that something is there and Y vote that it is not, the detection is accepted when X exceeds Y by some majority.
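A minimal sketch of such a voting protocol, with hypothetical sensor names and trust weights, might look like this:

```python
# Hedged sketch of the voting idea described above: each sensor reports
# whether it believes an object is present, optionally with a reliability
# weight decided in advance. Sensor names and weights are illustrative only.

def weighted_vote(detections, weights=None, threshold=0.5):
    """detections: dict of sensor name -> bool ("object present?").
    weights: optional dict of sensor name -> relative trust.
    Returns True if the weighted 'yes' share exceeds the threshold."""
    if weights is None:
        weights = {name: 1.0 for name in detections}
    total = sum(weights[name] for name in detections)
    yes = sum(weights[name] for name, seen in detections.items() if seen)
    return yes / total > threshold

detections = {"camera": True, "radar": True, "lidar": False, "ultrasonic": False}
weights    = {"camera": 1.0, "radar": 1.5, "lidar": 1.5, "ultrasonic": 0.5}

print(weighted_vote(detections))           # unweighted: 2 of 4 'yes' votes, no majority -> False
print(weighted_vote(detections, weights))  # radar trusted more: 2.5 of 4.5 -> True
```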

Another popular method is the Random Sample Consensus (RANSAC) approach. RANSAC is an iterative technique that estimates the parameters of a model by repeatedly fitting it to small random samples of the observed data. Given a dataset whose elements contain both inliers and outliers, RANSAC uses a voting scheme to find the best-fitting result.
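The sketch below shows the core RANSAC idea on a toy problem: fitting a 2D line (think of a guard rail in a lidar scan) to points that include gross outliers. The iteration count, tolerance and data are invented for illustration.

```python
# Hedged RANSAC sketch: repeatedly fit a line to a minimal random sample,
# count the points that agree with it, and keep the model with the most
# inliers. All parameters and data are invented for illustration.

import random

def fit_line(p, q):
    """Return (slope, intercept) of the line through two points."""
    (x1, y1), (x2, y2) = p, q
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

def ransac_line(points, iterations=200, tolerance=0.2):
    best_model, best_inliers = None, []
    for _ in range(iterations):
        p, q = random.sample(points, 2)
        if p[0] == q[0]:
            continue                           # skip vertical sample pairs
        slope, intercept = fit_line(p, q)
        inliers = [(x, y) for x, y in points
                   if abs(y - (slope * x + intercept)) < tolerance]
        if len(inliers) > len(best_inliers):   # voting scheme: most support wins
            best_model, best_inliers = (slope, intercept), inliers
    return best_model, best_inliers

# Points roughly on y = 0.5x + 1, plus a few gross outliers.
points = [(x, 0.5 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(20)]
points += [(3.0, 9.0), (7.0, -4.0), (12.0, 15.0)]

model, inliers = ransac_line(points)
print(f"estimated line: y = {model[0]:.2f}x + {model[1]:.2f} ({len(inliers)} inliers)")
```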

More than the sum of the sensors: Sensor fusion combinations

Depending on the sensor configuration, sensor fusion can be performed in complementary, competitive, or cooperative combinations:

  • A sensor fusion combination is called complementary if the sensors do not directly depend on each other, but can be combined in order to give a more complete image of the environment, providing a spatially or temporally extended view.

Generally, fusing complementary data is easy, since the data from independent sensors can be added to each other, but the disadvantage is that under certain conditions the sensors may be ineffective, such as when cameras are used in poor visibility.

    • An example of a complementary configuration is the employment of multiple cameras each focused on different areas of the car’s surroundings to build up a picture of the environment.
  • Competitive sensor fusion combinations are used for fault-tolerant and robust systems. In a competitive configuration each sensor delivers an independent measurement of the same target, and combining this redundant information makes the system more robust.

There are two possible competitive combinations – the fusion of data from different sensors or the fusion of measurements from a single sensor taken at different instants.

A special case of competitive sensor fusion is fault tolerance. Fault tolerance requires an exact specification of the service and the failure modes of the system. In case of a fault covered by the fault hypothesis, the system still has to provide its specified service.

In contrast to fault tolerance, robust competitive configurations deliver a degraded level of service in the presence of faults. While this graceful degradation is a weaker guarantee than fault tolerance, the corresponding algorithms need fewer resources and work well with heterogeneous data sources.

    • An example would be the reduction of noise by combining two overlaying camera images.
  • A cooperative sensor network uses the information provided by two independent sensors to derive information that would not be available from either sensor on its own.

A cooperative fusion combination provides an emerging view of the environment by combining non-redundant information; however, the result is generally sensitive to inaccuracies in all of the participating sensors.

Because the fused data is sensitive to the inaccuracies of each individual sensor, cooperative sensor fusion is probably the most difficult type to design and, in contrast to competitive fusion, it generally decreases accuracy and reliability.

    • An example of a cooperative sensor configuration can be found when stereoscopic vision creates a three-dimensional image by combining two-dimensional images from two cameras at slightly different viewpoints.


To sum up the primary differences between sensor fusion combinations: competitive fusion increases the robustness of the perception, while cooperative and complementary fusion provide extended and more complete views. Which algorithms are used at the fusion level depends on the available resources, such as processing power and working memory, and on application-specific needs at the control level.


Furthermore, these three combinations of sensor fusion are not mutually exclusive.

Many applications implement aspects of more than one of the three types. An example of such a hybrid architecture is a set of multiple cameras monitoring a given area: in regions covered by two or more cameras the sensor configuration can be competitive or cooperative, while in regions observed by only one camera it is complementary.
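A toy sketch of that hybrid camera arrangement, with invented camera names and zone coverage: zones seen by a single camera are taken at face value (complementary), while zones with overlapping coverage are resolved by simple majority agreement (competitive).

```python
# Hedged sketch of the hybrid multi-camera case: each hypothetical camera
# covers a set of road zones. Single-coverage zones are complementary;
# overlapping zones are fused competitively by majority agreement.

from collections import defaultdict

# camera -> {zone: "object detected?"}; coverage and values are invented
camera_reports = {
    "front_left":  {"zone_A": True,  "zone_B": True},
    "front_right": {"zone_B": False, "zone_C": True},
    "rear":        {"zone_D": False},
}

def fuse_zones(reports):
    votes = defaultdict(list)
    for cam, zones in reports.items():
        for zone, detected in zones.items():
            votes[zone].append(detected)

    fused = {}
    for zone, ballots in votes.items():
        if len(ballots) == 1:
            # complementary: only one camera sees this zone, accept its view
            fused[zone] = ballots[0]
        else:
            # competitive: overlapping coverage, require a strict majority
            fused[zone] = sum(ballots) > len(ballots) / 2
    return fused

print(fuse_zones(camera_reports))
# zone_A, zone_C and zone_D come from single cameras; zone_B is overlapped
# and its 1-1 split does not reach a majority, so it fuses to False.
```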


As the technology matures, self-driving car makers are all jockeying to figure out how many sensors, which sensors and which combination of sensors make sense for a self-driving car: more sensors mean more data, more processing and higher hardware cost; fewer sensors mean less data, less processing and lower hardware cost. And, of course, all without compromising safety!

Sources

· Richard Truett; Automotive News; Self-driving perfection is still years away; July 2017; http://www.autonews.com/article/20170703/MOBILITY/170709980/Self-driving-autonomous-Fusion

· Christoph Hammerschmidt; eeNews Automotive; Solving tomorrow’s challenges will be easier with Qualcomm; June 2017; http://www.eenewsautomotive.com/news/solving-tomorrows-challenges-will-be-easier-qualcomm

· Altium; Unmanned Autonomous Vehicles: Pros and Cons of Multiple Sensor Fusion; May 2017; http://resources.altium.com/altium-blog/unmanned-autonomous-vehicles-multiple-sensor-fusion-pros-cons

· Steven Bohez, Tim Verbelen, Elias De Coninck, Bert Vankeirsbilck, Pieter Simoens, Bart Dhoedt; Sensor Fusion for Robot Control through Deep Reinforcement Learning; March 2017; https://arxiv.org/pdf/1703.04550.pdf

· iAngels; Ushering in the Era of Autonomous Vehicles: A Primer on Sensor Technology; March 2017; https://www.iangels.co/2017/01/enabling-the-era-of-autonomous-vehicles-a-primer-on-sensor-technology/

· Bill Howard; ExtremeTech; Big boost for self-driving cars: Osram cuts lidar cost to less than $50; November 2016; https://www.extremetech.com/extreme/239359-big-boost-self-driving-cars-osram-cuts-lidar-cost-less-50

· Danielle Muoio; Business Insider; Here's how Tesla's new self-driving system will work; January 2017; http://www.businessinsider.com/tesla-enhanced-autopilot-system-self-driving-features-2017-1/#a-recent-government-report-found-that-the-first-generation-autopilot-system-has-slashed-crash-rates-for-tesla-vehicles-by-40-16
