What’s driving safety in autonomous vehicles?

Al Tuttle | 02/26/2018

At some future point, when vehicles reliably and instantly communicate with each other in all traffic patterns, is it possible that they will also “know” what to do in emergencies? One of the murkiest, most difficult, and most important questions surrounding vehicles that drive themselves is: what will this machine do in a life-threatening situation?

This is the pivotal problem that needs a solution before any workable transportation system using driverless vehicles can be deployed. As everyone knows, open-road travel poses countless situational choices every time we get in a vehicle and drive. What happens in the normal course of driving – starting, stopping, turning, avoiding, and maneuvering in hundreds of subtle ways – is extremely complex.

Autonomously operating vehicles are the high point of the new vista of ground transportation, but driverless networks of any size are risky propositions. As vehicle systems become more complex every year, security and safety research must surge right alongside the robotics and AI research that drives this market.

We will look at some broad safety/security challenges in non-human driving, then drill down to some specific top-level engineering being done today.

An interesting dilemma

We have seen demonstrations and read many papers on the future of autonomous ground travel, but the security implications always seem insurmountable. For these systems to be put into public use, their theoretical behaviors, particularly in emergencies, must become predictable to the point of being at least as reliable as human behavior in the same situations.

Humans make good and bad instant decisions; driving is inherently dangerous because people react differently to the exact same situation. In this, machines can actually perform better, or at least more predictably, since they can be programmed to “choose” only one alternative. Can machines be further programmed to make ethical choices?

This proposition, posing ethical dilemmas to smart machines, is one subject of a study (1) about autonomous driving and emergencies. Known as the runaway trolley dilemma, the scenario is as follows: you are the only person on a runaway trolley heading for a branch in the tracks. The choice is to do nothing, staying on the same track and killing five or more people, or to throw a switch, taking the other track and killing four or fewer.

As an ethical situation, the former choice involves no conscious change in the trolley’s direction, while the latter means deliberately changing it. If this kind of decision is extended to artificially designed computer brains, will it provide sufficient safety for the passenger? When something goes horribly wrong during normal operation, what can be built into intelligent software so that it takes the safest route?
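As a minimal sketch of what “choosing only one alternative” could look like in code (the rule and casualty figures below are illustrative, not any manufacturer’s actual logic), a deterministic least-harm rule always returns the same choice for the same inputs:

```python
# Hypothetical least-harm rule for the trolley scenario.
# The options and numbers are illustrative only.

def choose_action(options):
    """Pick the option with the fewest expected casualties.

    options: list of (action_name, expected_casualties) tuples.
    Deterministic: identical inputs always produce the identical choice.
    """
    return min(options, key=lambda opt: opt[1])

# The runaway-trolley choice, encoded as data:
options = [
    ("stay_on_track", 5),  # do nothing: five or more people are hit
    ("throw_switch", 4),   # deliberately divert: four or fewer are hit
]

print(choose_action(options))  # -> ('throw_switch', 4)
```

Predictability, not morality, is what such a rule buys: the machine’s behavior in the dilemma is fixed in advance rather than decided in the moment.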

Intelligence based on outcomes

Obviously, machines are not moral; their movements are based purely on digitized directions from the CPU. Designers of the next generation of transportation systems must create avoidance rules that can react to never-before-encountered incidents.

One report puts the problem succinctly (2). In discussing a plan to race identical autonomous cars under controlled conditions, the author notes the main challenge: “Nevertheless, the biggest challenge self-driving cars will have to overcome on the road is being able to react to the randomness of traffic flow, other drivers, and the fact that no two driving situations are ever the same.”

AI software has some promising upsides. AI used in conjunction with new hardware can react to situations far faster than human drivers. Switching to mechanical means of safety like braking, turning, and signaling happens more quickly. A steering wheel (if that is what continues to be used) can be spun to its lock almost instantly, far faster than a driver could manage. It would be up to the suspension and tires to absorb the shock of instant turns, but safer avoidance is a real possibility when electro-mechanical actuation replaces human muscle.

According to this report, computing ability lies at the heart of the AI problem: it is not enough to receive thousands of data points per second; the software must interpret the data and signal action. Interpretation is where the system currently has difficulty. The report uses this example of visual interpretation through cameras: “There is a massive amount of computation required to be able to take these pixels and figure out, ‘is that a truck?’ or ‘is that a stationary cyclist?’ or ‘in which direction does the road curve?’ It’s this type of computer vision coupled with deep neural-network processing that is required by self-driving cars.”
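To give a feel for the kind of computation involved, here is a rough sketch of classifying a single camera frame with an off-the-shelf pretrained network. A production perception stack is far more elaborate; the file name and model choice here are assumptions for illustration only:

```python
# Hedged sketch: asking "what is in this frame?" with a pretrained
# classifier (requires torchvision >= 0.13). Not a production stack.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

frame = Image.open("camera_frame.jpg")       # hypothetical camera frame
batch = preprocess(frame).unsqueeze(0)       # shape: (1, 3, 224, 224)

with torch.no_grad():
    class_id = int(model(batch).argmax(dim=1))  # ImageNet class index
print(class_id)
```

Even this toy version runs millions of multiply-accumulate operations per frame, which hints at why interpretation, not data collection, is the bottleneck.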

Another paper about the intricacies of safety and functionality (3) noted that functionally safe systems might be built onto existing automotive frames or built entirely from scratch. However, meeting the physical demands of an autonomous platform is the easy part; the difficulty in today’s competitive marketplace is weaving manufacturers’ patented platforms into a cohesive unit. Every system in every vehicle must be able to seamlessly exchange and interpret signals with every other: “… the domain of systems architecting in the automotive world is still driven largely by qualitative aspects like legacy considerations, brand values, organizational and development processes, commitments to specific technology partners and so on,” the report said.

The most technical phases of engineering are still in the preliminary stages. Following the EU’s Technology Readiness Levels (TRL), the paper stated that as each level is reached, another level of software architecture becomes necessary. Building on legacy systems may or may not work. As the paper states, “sensing is easy; perception is difficult.”

Receiving electronic signals, in much the same way we see a car coming toward us, is only the beginning of the vastly complex processing that interprets the speed, distance, and safety concerns about that vehicle. The software must interpret the data so as to complete the task directed by its axioms. If the software is forbidden from making certain choices, it will never make them. The goal is to have choices that match any circumstance.

The paper defines the components necessary to create this movement from sensing to perception (a sketch of how they might chain together follows the list):

Sensing component – receives data from sensors such as cameras

Sensor fusion component – creates a hypothesis about the data from the sensors

Localization component – accurately and immediately places the vehicle at a location on a map

Semantic understanding component – attempts to classify objects and their movement from the data

World model component – attempts to place the results into an ever-growing knowledge base
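Here is a minimal sketch of one perception cycle through these components. All class and method names are invented for illustration, not taken from the paper:

```python
# Illustrative skeleton: sensing -> fusion -> localization ->
# semantic understanding -> world model. Names are hypothetical.

class PerceptionPipeline:
    def __init__(self, sensors, fusion, localizer, semantics, world_model):
        self.sensors = sensors          # sensing components (cameras, radar, ...)
        self.fusion = fusion            # sensor fusion component
        self.localizer = localizer      # localization component
        self.semantics = semantics      # semantic understanding component
        self.world_model = world_model  # world model component

    def step(self):
        raw = [s.read() for s in self.sensors]       # receive raw sensor data
        scene = self.fusion.fuse(raw)                # hypothesis about the scene
        pose = self.localizer.locate(scene)          # place vehicle on the map
        objects = self.semantics.classify(scene)     # trucks, cyclists, curves
        self.world_model.update(pose, objects)       # grow the knowledge base
        return self.world_model.snapshot()           # perception for this cycle
```

The point of the structure is interchangeability: if each stage keeps a stable interface, one supplier’s fusion component could, in principle, feed another supplier’s semantic understanding component.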

Manufacturers must cooperate to develop these components in the most interchangeable way possible. This cuts against the independence of major auto suppliers at every tier, which, like companies in other industries, hold their proprietary systems and patents dear. However, it appears to be the only way to get a fully functioning autonomous vehicle system to perform; any other approach imposes severe limitations on size and scope.

Some examples forging ahead

Platforms built around deep neural networks, like NVIDIA’s DRIVE™ PX, now perform a degree of semantic decision making, although still far from full autonomy (4). The Parker-based AutoChauffeur offers driverless transport for defined point-to-point trips, and the company plans to release a deep-learning computer called DRIVE PX Pegasus in 2018. That computer is the next step toward learning situations, processing choices, and creating vehicle movement (or shutdown) instantly.

Intel is also developing deep learning units for fleets and, eventually, complete autonomous driving systems (5). The company wants to perfect the fusion component, in which parallel and sequential signals are merged, read, and deciphered on the way to semantic understanding. Intel GO platforms combine three different processor types to produce a decision-making signal at the fusion level: “The compute required for autonomous driving can be divided into three intertwined stages: sense, fuse, and decide. Each stage requires different levels and types of compute performance.”

The first stage collects data from sensors, the second stage develops, or fuses, the data into an environmental picture, and the third stage makes a choice on how to proceed. Intel says the platform is designed to allow individual companies to develop the processing independently, particularly the third stage.
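Those three stages reduce to a simple control loop. The sketch below fills each stage in with placeholder logic; the sensor names, the five-meter braking threshold, and every function body are invented for illustration:

```python
# Hedged sketch of the sense -> fuse -> decide split. The real stages run
# on different processor types; every body below is a stand-in.

def sense(sensors):
    """Stage 1: collect raw data from every sensor."""
    return {name: read() for name, read in sensors.items()}

def fuse(raw):
    """Stage 2: merge parallel streams into one environmental picture."""
    return {"objects": raw.get("camera", []), "ranges": raw.get("radar", [])}

def decide(env):
    """Stage 3: choose how to proceed from the fused picture."""
    too_close = env["ranges"] and min(env["ranges"]) < 5.0  # invented threshold
    return "brake" if too_close else "cruise"

# One control cycle with dummy sensor readers:
sensors = {"camera": lambda: ["truck"], "radar": lambda: [42.0, 7.5]}
print(decide(fuse(sense(sensors))))  # -> 'cruise'
```

Intel’s point about letting companies develop stages independently corresponds to swapping out any one of these functions while keeping the interfaces between them fixed.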

As mentioned above, this may or may not be a good thing. Chipmakers and software developers should perhaps develop the processing together first and plan differentiation later, if that becomes feasible. Time will tell on this industrial dilemma. Can companies join together, offering the best of their legacy systems for collaboration, or will they remain vertical silos, releasing only selected details?

The answers will determine the direction of a rapidly growing and intensely watched new industry. The idea has so enthralled companies and countries that some are already planning large autonomous networks for the near future. The UK plans to “sweep away regulations and budget constraints” to launch an autonomous auto network by 2021, according to one report (6). In a move to show that the UK intends to lead these industries post-Brexit, Chancellor Philip Hammond recently issued a budget that included £75 million for development of artificial intelligence, along with other automotive expenditures.

When is “safe” truly safe?

The UK wants to show a very upbeat manufacturing and trade persona to Europe, and perhaps no other project is as hot on the world stage as driverless cars and trucks. To expect a driverless network by 2021 is very optimistic. What will it take for these vehicles to be self-contained, fully safe and secure on the road?

There are some tantalizing experimental projects now in process. One ongoing Tesla project allows its autonomous driving system to learn whether or not it is in charge of driving functions (7). According to this report, Tesla’s AI system uses a monitor mode to “learn” how decisions are made by a human driver, although it is not in command of any functions.

Tesla said that this “shadow mode” learning offers more complete experiential/reactive learning, since the system constantly compares what the human driver does to what it was programmed to do in the same situation. This could fill in a missing piece of the AI puzzle: it seems nearly impossible to pre-program a sensor-based system to understand and react to all emergencies before they happen.
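A toy sketch of that comparison loop follows; Tesla has not published its implementation, so every name here is invented:

```python
# Toy sketch of "shadow mode": the system predicts the action it would
# have taken, compares it with what the human actually did, and logs any
# disagreement as a future training example. All names are invented.

def shadow_step(model, observation, human_action, disagreement_log):
    proposed = model.predict(observation)     # what the AI would have done
    if proposed != human_action:              # disagreement = learning signal
        disagreement_log.append((observation, human_action, proposed))
    return proposed

# Later, the logged (observation, human_action) pairs become supervised
# training data, and aggregated logs can be shared across the whole fleet.
```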

Tesla then plans to share every vehicle’s learned data with a larger fleet of vehicles, and so on. The multiplicative effect could be astounding and could rapidly send the AI-driven fleet onto the roads. If Tesla or another AI trainer can continuously train the brain in use and share that data, we may see a truly safe vehicle sooner rather than later. The key question is: when is sooner and when is later? When will we know that a vehicle is ready to take on fully autonomous driving?

Here is another key to achieving a truly safe autonomous car: sharing data among all the companies working on this problem. As discussed earlier, the transportation industry tends to be very vertical, with each company holding its patents and technology close to home. However, Intel has proposed an algorithm that can be used as a benchmark not only by companies but by regulatory agencies as well (8).

The Cautious Command formula would guarantee that an autonomous vehicle makes no decision and takes no action that causes an accident. The set of cautious commands means the vehicle will never react outside of the command parameters. Mobileye (owned by Intel Corp.) devised a minimum command-parameter protocol (9) that it said could be used by all manufacturers of autonomous equipment. In the end, every vehicle would have the same Cautious Command set in use at all times.
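One concrete parameter in Mobileye’s published model (9) is a minimum safe following distance: even in the worst case, where the rear car accelerates through its whole reaction time and then brakes gently while the front car brakes as hard as possible, the two must not collide. Here is a sketch of that check, with example numbers chosen only for illustration:

```python
# Sketch of the minimum safe longitudinal distance from Mobileye's
# safety model (source 9). A command that would close the gap below this
# distance falls outside the "cautious" set and is rejected.

def min_safe_distance(v_rear, v_front, rho, a_accel_max, a_brake_min, a_brake_max):
    """All speeds in m/s, accelerations in m/s^2, reaction time rho in s.

    Worst case: the rear car accelerates at a_accel_max for rho seconds,
    then brakes gently at a_brake_min, while the front car brakes hard
    at a_brake_max.
    """
    v_rear_after = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_after ** 2 / (2 * a_brake_min)
         - v_front ** 2 / (2 * a_brake_max))
    return max(d, 0.0)

# Example: both cars at 20 m/s (~45 mph), 1 s reaction time.
print(round(min_safe_distance(20.0, 20.0, 1.0, 3.0, 4.0, 8.0), 1))  # -> 62.6
```

Because the formula depends only on speeds and agreed worst-case accelerations, any manufacturer, or a regulator, can apply the same check to any vehicle.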

Preventive maintenance and preventing accidents

Car owners today are notorious for allowing dangerous conditions in their vehicles to fester, or questionable equipment to go unrepaired. Some U.S. states require inspections once or twice a year; others have no inspections and rely only on law enforcement to stop vehicles with dangerous defects, such as broken lights.

Robotic cars can go a long way toward fixing this problem. Since the vehicle and its manufacturer will be liable (to some degree not yet known) for any accident in which there is no driver, it is possible to disallow any use of the vehicle until a dangerous situation is fixed. 

In the same way that breath-alcohol ignition interlocks have been used to keep drunk drivers from starting their cars, software could deny entry or ignition if, for example, the brakes are worn to a dangerous point. Missing or damaged lights or turn signals might trigger a warning for a few ignition starts, then prevent starts altogether until they are repaired.

This poses its own set of problems: a vehicle must be allowed to drive to a repair center when something is dangerously wrong; perhaps this could be programmed as the only allowed destination through GPS for the next start-up. Of course, the owner would have ample warning and a clear set of instructions before these drastic actions were implemented.
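A simple sketch of such a start-up policy follows; the fault names, the grace-start count, and the restriction text are all invented for illustration:

```python
# Illustrative ignition policy: hard faults block the start outright;
# soft faults allow a few warned starts, then restrict the vehicle to a
# repair-center destination. All thresholds and names are invented.

HARD_FAULTS = {"brake_pads_critical", "steering_fault"}
SOFT_FAULT_GRACE_STARTS = 3

def authorize_start(active_faults, warned_starts):
    """Return (allowed, restriction) for this ignition attempt."""
    if HARD_FAULTS & active_faults:
        return False, "start denied until repaired"
    if active_faults and warned_starts >= SOFT_FAULT_GRACE_STARTS:
        return True, "GPS destination locked to nearest repair center"
    if active_faults:
        return True, f"warning {warned_starts + 1} of {SOFT_FAULT_GRACE_STARTS}"
    return True, None

print(authorize_start({"left_turn_signal_out"}, 1))
# -> (True, 'warning 2 of 3')
```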

In the race to provide a wide network of AI vehicles, manufacturers, regulators, and nations must insist that a new type of commerce emerges, one in which all parties put safety and interconnection above all other profit considerations.

Sources

1. http://www.fuse-project.se/

2. http://www.roboticstrends.com/article/how_ai_is_making_self_driving_cars_smarter

3. https://www.kth.se/polopoly_fs/1.580283!/wasa2015.pdf

4. https://www.nvidia.com/en-us/self-driving-cars/drive-px/

5. https://www.intel.com/content/www/us/en/automotive/go-automated-driving.html

6. https://www.theguardian.com/technology/2017/nov/19/self-driving-cars-in-uk-by-2021-hammond-budget-announcement

7. https://www.scientificamerican.com/article/when-it-comes-to-safety-autonomous-cars-are-still-teen-drivers1/

8. https://www.engadget.com/2017/10/17/intel-mobileye-autonomous-vehicle-safety/

9. https://arxiv.org/pdf/1708.06374.pdf
