Mitigating IoT transduction attacks
Modern sensors face serious risks from “transduction attacks,” according to research conducted jointly by Kevin Fu of the University of Michigan and Wenyuan Xu of Zhejiang University. The vulnerability could cause real-world problems for anyone who uses devices equipped with sensors.
To be clear, sensors (also called transducers) are electrical components that turn analog signals, such as radio, sound, and light, into electrical signals that can be interpreted by a computer. A transduction attack exploits a vulnerability in the physics of a sensor to manipulate its output or induce intentional errors, says Michael Patterson, CEO of Plixer.
For example, malicious acoustic interference can influence the output of sensors trusted by software in systems ranging from smartphones to medical devices to autonomous vehicles. For consumers and businesses, this means that devices put in place for safety could themselves have serious, even dangerous, ramifications.
It has long been understood that vehicles equipped for remote connectivity can be hacked. However, those systems can be updated continuously over the air (OTA) to provide additional layers of security at the software level. Such is not the case with sensors.
So, if your car can connect to Pandora, Spotify, or the like, it may be possible to connect to it remotely and take control of the many software-controlled systems on board. The manufacturer of your vehicle, however, can update that software to make it more difficult to hack.
This is not the case with sensors. Sensors are connected to the electrical components of the vehicle and relay data related to their function to the car’s internal OS. For example, a proximity sensor on the front of your vehicle feeds information to the car’s emergency braking software, which automatically brakes the vehicle should the driver be unaware of an impending crash.
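To make that trust relationship concrete, here is a minimal, hypothetical sketch of braking logic that acts directly on a proximity reading. The function name and the 5-metre threshold are illustrative assumptions, not taken from any real vehicle; the point is simply that spoofing the reading changes the decision.

```python
# Hypothetical sketch only: braking logic that trusts a single raw
# proximity reading. The threshold and names are invented for
# illustration and do not come from any real vehicle.

BRAKE_DISTANCE_M = 5.0  # brake automatically when an obstacle is closer

def should_emergency_brake(distance_m: float) -> bool:
    """Decide whether to brake based on one raw sensor value.

    The reading is trusted as-is, so a transduction attack that spoofs
    a longer distance suppresses braking, while a spoofed short
    distance triggers phantom braking.
    """
    return distance_m < BRAKE_DISTANCE_M
```

Because the decision depends on nothing but the single reported value, the sensor’s physics become part of the attack surface.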
While the name “transduction attack” is new, the actual threat is not. DolphinAttack, revealed last summer, is an example of a transduction attack that has been successful in the wild. Additionally, as was indicated in the Communications of the ACM article, Fu and Xu’s research showed that Tesla’s sensors were fooled into hiding and spoofing obstacles. In the case of DolphinAttack, converting voice commands to ultrasound frequencies was a quick way to gain near-complete control of a device like Apple’s iPhone or Amazon’s Echo.
What remains to be seen is how the manufacturers and software developers who produce sensors will react to provide additional safeguards for devices and services. In many cases, software changes can prevent faulty sensors from being exploited in malicious ways.
For example, if speech recognition software were written to process input only from normal human voice frequencies, ignoring frequencies used to trick the sensors (i.e., ultrasound), the physics of the sensor would no longer pose a security concern, because only audible attacks would activate it. Such attacks would be unlikely to succeed: you can imagine how quickly an attacker would be stopped if they were sending audible “Hey Siri” commands over the air to try to take over someone’s iPhone.
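A frequency gate along those lines could be sketched as follows. This is an assumption-laden illustration: the 4 kHz voice cutoff, the crude zero-crossing frequency estimate, and all names are mine, not drawn from any real speech-recognition stack, which would use proper band-pass filtering instead.

```python
import math

SAMPLE_RATE_HZ = 44100
VOICE_MAX_HZ = 4000  # assumed rough upper bound for speech content

def dominant_frequency_hz(samples, sample_rate=SAMPLE_RATE_HZ):
    """Crude dominant-frequency estimate from zero-crossing counts.

    A pure sine at f Hz crosses zero 2*f times per second, so
    crossings / (2 * duration) approximates its frequency.
    """
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    duration_s = len(samples) / sample_rate
    return crossings / (2 * duration_s)

def accept_command_audio(samples):
    """Reject audio whose dominant frequency sits above the voice band."""
    return dominant_frequency_hz(samples) <= VOICE_MAX_HZ

def tone(freq_hz, seconds=1.0):
    """Generate a pure sine test tone at the given frequency."""
    n = int(SAMPLE_RATE_HZ * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE_HZ)
            for i in range(n)]
```

With this gate, a 300 Hz tone in the voice band passes while a near-ultrasonic 20 kHz tone is rejected before it ever reaches the recognizer.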
It is difficult to say, though, how likely companies are to address these sensor vulnerabilities in their software. After all, many audio sensors are used to track users’ locations for advertising purposes. If software were written to ignore the ultrasound frequencies projected by companies looking for advertising opportunities, there could be a significant loss in ad revenue.
Understanding the problem, though, is only half the battle. Educating sensor creators about cyber vulnerabilities is key to reducing them. Manufacturers that produce sensors should take a system-centric approach to security, ensuring the validity of data even if a sensor is compromised or becomes faulty. Doing so, though, will require third-party validation.
Installing additional sensors that watch for the environmental variations used to circumvent a system’s primary sensors could provide an extra layer of protection against such attacks. With these additional sensors, the operating system or software could notify users of a fault and prevent unexpected behavior.
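One way to picture that cross-check is the hypothetical sketch below. The 0.5-metre disagreement tolerance and the fail-safe choice of the nearer reading are my assumptions for illustration, not a published design.

```python
from dataclasses import dataclass

MAX_DISAGREEMENT_M = 0.5  # assumed tolerance between independent sensors

@dataclass
class Reading:
    distance_m: float
    fault: bool  # True when the system should alert the user

def cross_validate(primary_m: float, reference_m: float) -> Reading:
    """Compare a primary sensor against a redundant reference sensor.

    When the two disagree beyond tolerance, flag a fault and fall back
    to the more conservative (nearer) obstacle estimate, so the system
    fails safe instead of acting on a single possibly-spoofed value.
    """
    if abs(primary_m - reference_m) > MAX_DISAGREEMENT_M:
        return Reading(distance_m=min(primary_m, reference_m), fault=True)
    return Reading(distance_m=(primary_m + reference_m) / 2, fault=False)
```

The fault flag is what lets the software notify the user and degrade gracefully rather than silently trusting a manipulated reading.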
To fully mitigate such attacks, manufacturers and software developers must work together to create purpose-built systems that remove, as far as possible, the intrinsic vulnerabilities sensors necessarily have as a mechanism of their function.
While this may mean that sensors will no longer be usable in thousands of devices (they will be built and used only for a specific application), it will mean that the sensors in use are safer for the consumers who rely on them.
The author of this blog is Michael Patterson, CEO of Plixer
About the author:
Michael Patterson is CEO of Plixer. Michael worked in technical support and product training at Cabletron Systems while he finished his Master’s in Computer Information Systems at Southern New Hampshire University. He joined Professional Services for a year before he left the ‘Tron’ in 1998 to start Somix, which eventually became Plixer International.