
Researchers hack adaptive cruise control, then show how to make it safer

October 10, 2022

Adaptive cruise control comes standard on most new vehicles, but it can be tricked into causing accidents, according to research from the UAB Department of Computer Science. The researchers also demonstrated a way to alert humans in time for them to take control.

Most of us think we’re pretty good behind the wheel. But even the worst driver’s-ed dropout can distinguish a speeding car from one barely inching forward. A new study from computer scientists at UAB shows, however, that advanced driving assistance systems such as adaptive cruise control, now standard equipment on many cars, can be tricked into exactly this mistake. The work, which will be presented at a global Internet of Things conference this fall, also demonstrates a way to keep cars grounded in reality and avert disaster.

The study merges two strands of research on the vulnerabilities of modern cars with advanced driving assistance systems.

One focuses on the nerve center of electronic communication in most cars, a hub called the Controller Area Network (CAN) bus. The CAN bus is reliable, cheap and great at prioritizing messages while rolling down the interstate at 70 mph. That is why it is the go-to piece of hardware for passing data between the dozens of electronic control units, or ECUs, in modern cars. These controllers are in charge of everything from airbag deployment to antilock brakes and engine timing. But one thing the CAN bus is not is secure: it is vulnerable to both physical and wireless attacks, as researchers have demonstrated time and again.
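To make that concrete, here is a minimal sketch of a classic CAN data frame modeled as a Python dataclass. It is meant only to illustrate two points from the paragraph above: priority on the bus is decided entirely by the arbitration ID (lower wins), and nothing in a frame identifies or authenticates its sender. The example IDs and payloads are hypothetical, not drawn from the study or from any particular vehicle.

```python
# Minimal sketch of a classic CAN 2.0 data frame, for illustration only.
from dataclasses import dataclass

@dataclass
class CanFrame:
    arbitration_id: int  # 11-bit message ID; it also serves as the priority
    data: bytes          # 0-8 payload bytes; their meaning is defined per ID
    # Note what is *not* here: no source address, no signature, no MAC.
    # Any node on the bus can transmit any ID, and receivers must trust it.

brake_cmd = CanFrame(arbitration_id=0x0A0, data=bytes([0xFF]))     # hypothetical ID
infotainment = CanFrame(arbitration_id=0x6F2, data=bytes([0x01]))  # hypothetical ID

# Bus arbitration: if two nodes transmit at once, the lower ID wins.
winner = min([brake_cmd, infotainment], key=lambda f: f.arbitration_id)
print(hex(winner.arbitration_id))  # 0xa0 -- the brake-related frame goes first
```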

Another hot topic for security researchers: how to protect self-driving cars from attacks on their sensors. (Researchers have been able to fool self-driving cars with strategically placed stickers on stop signs, lasers and hacked billboards.) But what if the attack were deeper, getting at the heart of how a self-driving car sees the world?

This is the question asked by Aminul Hoque, a doctoral student of Associate Professor Ragib Hasan, Ph.D., in the Department of Computer Science. Hasan runs the Secure and Trustworthy Computing (SECRET) Lab at UAB, which focuses on solving important real-life security problems. As part of his doctoral dissertation research under Hasan’s supervision, Hoque explores the security issues related to connected and autonomous vehicles.

“In this work, we showed a novel attack that manipulates the bus data to exploit modern advanced driving assistance systems,” Hoque said. What would happen, Hoque and Hasan asked in their research, if a hacked CAN bus were programmed to tell an adaptive cruise control system it was going 30 miles per hour when it was barely moving? Or, even more frightening, if the adaptive cruise control maneuvered as if it were barely moving, when in fact it was going 40 miles per hour into a sharp curve?
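Part of the problem is how little such a spoofing attack takes once an attacker can write to the bus. The sketch below, written against the python-can library and a Linux virtual CAN interface (vcan0), injects a frame claiming an arbitrary vehicle speed. The arbitration ID and byte encoding are made-up placeholders, not values from the study or from any real vehicle, and the researchers’ own attack was carried out in simulation.

```python
# Hypothetical sketch of injecting a spoofed speed frame onto a CAN bus.
# Assumes the python-can library and a SocketCAN virtual interface (vcan0);
# the ID 0x158 and the 0.01-mph-per-bit encoding are invented placeholders.
import can

bus = can.interface.Bus(channel="vcan0", interface="socketcan")

def spoof_speed(mph: float) -> None:
    """Broadcast a frame that claims the vehicle is travelling at `mph`."""
    raw = int(mph * 100).to_bytes(2, "little")  # invented scaling: 0.01 mph per bit
    msg = can.Message(arbitration_id=0x158, data=raw + bytes(6), is_extended_id=False)
    bus.send(msg)  # no authentication: receivers cannot tell this frame is fake

# Tell adaptive cruise control the car is barely moving while it is actually
# doing 40 mph into a curve -- the scenario described in the paragraph above.
spoof_speed(2.0)
```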

The answer, as they demonstrated in a series of simulations, is predictable: crashes. (Watch one simulation below.)

But human drivers do not just go by the speedometer. They can look out the window and, seeing the countryside flying past, realize that a speed of 2 miles per hour is not accurate. So, in the second part of Hoque’s study, he proposed a backup source of truth to help self-driving cars do the same thing. Using GPS and the machine-readable HD maps that self-driving cars use to keep themselves on the road (neither of which runs through the CAN bus), he was able to generate an approximate value for the vehicle’s speed. It varied by a few miles per hour from the actual speed. But it was close enough to provide a check on the values coming through the CAN bus, and to alert the driver quickly enough for a human response if something was off. “Our prototype can detect discrepancies of more than 7 or 8 miles per hour in three seconds or less,” Hoque said.
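The general shape of such a cross-check can be sketched in a few lines. The code below derives an independent speed estimate from successive GPS fixes and raises an alert when it disagrees with the CAN-reported speed for a sustained interval. This is an illustration of the idea, not the researchers’ prototype (which also draws on HD map data); the 7 mph threshold and 3 second window simply echo the figures quoted above, and all names are hypothetical.

```python
# Sketch of a GPS-based sanity check on CAN-reported speed (illustrative only).
import math

EARTH_RADIUS_M = 6_371_000.0
MPS_TO_MPH = 2.23694

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

class SpeedCrossCheck:
    """Alert if GPS-derived speed and CAN-reported speed disagree for too long."""

    def __init__(self, threshold_mph=7.0, window_s=3.0):
        self.threshold_mph = threshold_mph
        self.window_s = window_s
        self.prev_fix = None        # (time, lat, lon) of the last GPS fix
        self.mismatch_since = None  # time the current disagreement started

    def update(self, t, lat, lon, can_speed_mph):
        """Feed one GPS fix plus the CAN speed; return True if the driver should be warned."""
        alert = False
        if self.prev_fix is not None:
            t0, lat0, lon0 = self.prev_fix
            dt = t - t0
            if dt > 0:
                gps_speed_mph = haversine_m(lat0, lon0, lat, lon) / dt * MPS_TO_MPH
                if abs(gps_speed_mph - can_speed_mph) > self.threshold_mph:
                    if self.mismatch_since is None:
                        self.mismatch_since = t
                    alert = (t - self.mismatch_since) >= self.window_s
                else:
                    self.mismatch_since = None
        self.prev_fix = (t, lat, lon)
        return alert
```

Because raw GPS fixes are noisy, a speed derived this way is only approximate, which is consistent with the article’s description of an estimate that varies from the true speed by a few miles per hour but is still close enough to catch large discrepancies.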

The paper will be presented at the IEEE 8th World Forum on Internet of Things conference, which starts Oct. 26 in Yokohama, Japan.

“As we move into the age of driverless vehicles and connected cars, driving assistance technologies are becoming universal,” Hasan said. “This means we must identify such attacks and design countermeasures to prevent them. As with most of our systems, cars were designed for performance and security is often an afterthought. This has led to such glaring security issues in cars we use every day. That is why Hoque’s dissertation research in my lab is so important and timely — it has the potential to save countless lives.”

The next step in the research, Hoque says, is to investigate other advanced driving assistance systems and attack methods, including manipulating the reported steering angle to foil lane-keeping assistance technology. “What if the attacks are more complex?” Hoque said. “We need to alert the driver at least two seconds before a possible collision to give them time to respond. How good is our detection mechanism in those cases? These are the questions we will explore.”

Hoque should complete his dissertation and graduate in December 2022. What then? “I’m looking for a postdoctoral position in academia or a researcher position in the self-driving industry,” he said. “Self-driving cars are coming. Analyzing their security and safety is really important.”


Hoque’s research used simulation software designed for OpenPilot, open-source self-driving software that has become popular among autonomous driving enthusiasts.