IoT testing can be a complex process and, as a result, many vendors aren't yet on board with it. Concerns over their intellectual property, the level of commitment required, and how to interpret and act upon the results deter many from embarking upon security testing.
But, as Andrew Tierney, consultant at Pen Test Partners, says, in the long run the process is beneficial, giving the vendor the opportunity to correct issues that could compromise the brand.
Nearly all of the published research on IoT vulnerabilities focuses on the device and training on attacking the device. But when it comes down to it, a real-world IoT system is far more complex. There are the devices, the operating system and software that run on them, the mobile application, the servers and the builds deployed on those servers, to name but a few. Compounding this, the devices can be placed in physically exposed locations and on potentially hostile networks over which you have no control. They are installed by people with no networking knowledge. And the painful fact is that you have placed your system directly in the hands of the attacker. This is very, very different from normal infrastructure IT.
There are three methodologies that can be used to test IoT systems, each with their own advantages. Black box testing sees the testers approach the system as real-world attackers. The only knowledge they have is what is publicly available. Often, the testing will focus on recovering firmware or rooting the device to obtain information about how the system operates, including APIs. This can be crucial in finding serious systemic issues. It tends to be time-boxed rather than task-driven and the testing will flow in an organic manner, following paths most likely to yield vulnerabilities.
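When firmware is recovered during a black box test, one of the first triage steps is to work out which regions of the image are encrypted or compressed and which are plain code and data. As a rough illustration of that step (my sketch, not Pen Test Partners' actual tooling), the following computes per-block Shannon entropy using only the Python standard library:

```python
import math
import os
from collections import Counter

def block_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 for constant data, ~8.0 for random."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def entropy_profile(image: bytes, block_size: int = 4096):
    """Yield (offset, entropy) per block of a firmware image. Sustained high
    entropy (above roughly 7.5) usually indicates encrypted or compressed
    regions; low entropy suggests padding, or plain code and strings that
    are worth inspecting first."""
    for off in range(0, len(image), block_size):
        yield off, block_entropy(image[off:off + block_size])

# Demo with a synthetic image: flash padding scores 0.0,
# pseudo-random data scores close to 8.0.
image = b"\xff" * 4096 + os.urandom(4096)
profile = list(entropy_profile(image))
```

In practice testers reach for dedicated tools such as binwalk, which combine this kind of entropy analysis with filesystem and compression signature scanning; the sketch just shows the underlying idea.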
Alternatively, white box testing sees the testers given access to design documentation, specifications, data sheets, schematics, architectural diagrams, firmware, and even potentially source code. Using this knowledge, they attack the system. Unlike black box testing, it can be task-driven, as open access to documentation allows the tester to develop a plan before testing starts.
Between the two is grey box testing. Some information is provided, which avoids time being wasted on reverse engineering. A typical scenario might involve a period of black box testing which, if it fails to yield access to the device or firmware, leads to "break glass" access, at which point grey box testing continues. Grey box testing often offers some of the best results, providing confidence that, through defence-in-depth principles, the device will withstand attack from real-world attackers.
Concerns over testing expressed by vendors include whether the test will lead to a compromise so extreme that their product is pushed back to the drawing board. In reality, tests tend to discover vulnerabilities that can be fixed that then prevent mass compromise, stopping the kind of take-down achieved by proof-of-concept hacks like the Miller and Valasek Jeep attack.
Will testing find all the issues? That's unlikely, but white box testing will nearly always find more issues than black box testing. Should you fix even low-risk issues? Yes. Many high-severity issues are the result of multiple medium- and low-severity issues that, chained together, cause the product to fail.
Some vendors are also reluctant to hand over documentation, firmware or source code, but dropping these barriers gives better results and saves time. There's nothing in your IP the tester hasn't seen before. Making the tester hunt for the JTAG port or brute-force a password is a waste of precious testing time.
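To illustrate why handing over the firmware saves time rather than exposing anything novel: one of the very first things any tester (or attacker) does with an image is pull out its printable strings and look for embedded credentials. A minimal `strings`-style scan in Python shows the idea; the marker list is illustrative, not an exhaustive or real-world ruleset:

```python
import re

# Match runs of 6+ printable ASCII characters, as the Unix `strings` tool does.
PRINTABLE = re.compile(rb"[\x20-\x7e]{6,}")

# Substrings that often flag hardcoded credentials or keys (illustrative list).
SUSPICIOUS = (b"passwd", b"password", b"secret", b"PRIVATE KEY", b"admin")

def suspicious_strings(image: bytes):
    """Return (offset, string) pairs from a firmware image whose text hints
    at embedded credentials - a quick triage step, not proof of a flaw."""
    hits = []
    for m in PRINTABLE.finditer(image):
        s = m.group()
        if any(marker in s for marker in SUSPICIOUS):
            hits.append((m.start(), s.decode("ascii")))
    return hits

# Demo on a synthetic image fragment.
sample = b"\x00\x01\x02bootldr\xffadmin:password123\xff\xff\x00"
findings = suspicious_strings(sample)
```

Anyone with the device in hand can do this in seconds, which is exactly why withholding the firmware from a paid tester buys no real protection.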
Will testing keep within the parameters of your time-to-market plan? While the tester will seek to provide a timeline, the process will always have an element of the unexpected so that even white box testing may deviate from the plan. Shorter test periods will tend to focus on the more probable vulnerabilities.
Security testing needs to become part of the IoT product development cycle, but for now, without the compulsion of regulation, IoT testing remains optional. By shedding some light on the process, I hope more vendors will embark on testing rather than having to answer an email from me alerting them to a serious vulnerability or, worse, fending off a real attack and answering to an angry customer base.
Black Box Testing
- Lowest customer effort – a scope is agreed, production devices are sent to us, and testing begins.
- Possible with nearly all systems – legal, technical or even internal politics sometimes prevent external parties performing white box testing.
- Simulates the real world – this is how any attacker would approach the system.
- For us, reverse engineering is great fun!
- Does not maximise use of time – spending time on tasks such as obtaining restricted-access data sheets, determining part numbers from obfuscated parts, and reverse engineering multi-layer boards does not improve your security.
- Potentially low coverage – as time is spent reverse engineering and exploring the system, less time is spent on security testing itself.
- Often ignores defence-in-depth – if the firmware of a device cannot be recovered during the test, but that firmware is full of hidden vulnerabilities, you will not have the multiple layers of protection that are required to maintain a secure system.
White Box Testing
- Maximises use of time – the testers can move directly to security testing.
- Full coverage – everything that is documented can be tested appropriately.
- Defence-in-depth – many layers of the system can be examined, from aspects such as compile time security protections, through to intrusion detection on hosts.
- Highest customer effort – providing the documentation and other information can be time consuming.
- Not real-world – this can result in remediation efforts being spread too thin, focusing on deeply hidden vulnerabilities that are unlikely to be exploited.
- Hard to get buy-in – many vendors are still unwilling to allow this level of access to their documentation, systems, intellectual property or code.
- Third parties – if the system being tested involves multiple third parties, obtaining white box-level access across all of them can be extremely challenging.
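Taking the "compile time security protections" point above: with white box access, checking hardening layers is mechanical rather than forensic. As a rough stdlib-only sketch (my illustration, not Pen Test Partners' tooling), one such marker, whether a binary was built as a position-independent executable (PIE), can be read straight from its ELF header:

```python
import struct

# ELF e_type values relevant here (from the System V ABI).
ET_EXEC = 2  # fixed-address executable: no ASLR benefit for the main image
ET_DYN = 3   # position-independent: used by PIE executables (and shared objects)

def elf_is_pie(header: bytes) -> bool:
    """Rough PIE check: an executable whose ELF e_type is ET_DYN was built
    position-independent, one compile-time hardening layer. A real audit
    would also cover RELRO, stack canaries and NX, e.g. via checksec."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF image")
    endian = "<" if header[5] == 1 else ">"  # EI_DATA: 1 = little, 2 = big
    (e_type,) = struct.unpack_from(endian + "H", header, 16)
    return e_type == ET_DYN

# Demo on minimal synthetic little-endian headers (magic + padding + e_type).
pie_header = b"\x7fELF\x02\x01\x01" + b"\x00" * 9 + struct.pack("<H", ET_DYN)
exec_header = b"\x7fELF\x02\x01\x01" + b"\x00" * 9 + struct.pack("<H", ET_EXEC)
```

A black box test that never recovers the binaries simply cannot make this class of check, which is one concrete way the white box coverage advantage plays out.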
The author of this blog is Andrew Tierney, consultant at Pen Test Partners