For both developers and network providers, the road to the Internet of Things (IoT) has meant stepping into the unknown, but purpose-built test tools and services are now lighting the way, according to Stephen Douglas, Solutions & Technical Strategy lead, Internet of Things, Spirent Communications.
From the providers’ side
The network provider sees a superlative market opportunity ahead with all these new devices needing to be connected, but then come the doubts:
- What are these devices? The IMEI data informs the network that a particular off-the-shelf connectivity module has been added – but what is it serving? It could be simply a parking meter, or it could be a critical heart rate monitor.
- What traffic patterns might they impose on my network? It may be safe to assume that most small IoT devices will put very little load on the network during normal operation, but what might happen if, following a power cut, tens of thousands of them all log onto the network at the same time?
- Are they safe to go on my network? For example: what is the risk of adding a large population of very simple, unprotected elements to a network that has previously served only computers and smartphones incorporating their own sophisticated security measures?
- What QoS do they require? There will be a world of difference between the service demands of a parking monitor, a driverless vehicle, a smoke detector and a remote heart monitor.
- How might they impact my traditional services? As in the second question, it is necessary to think not only about everyday operation, but also what might happen under extreme conditions.
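The power-cut scenario above can be put into rough numbers with a small back-of-envelope simulation (the function name and the figures are illustrative, not drawn from any operator's data). It compares the peak attach rate when 50,000 devices all retry in the same second against the same devices spreading their retries over a randomised back-off window:

```python
import random
from collections import Counter

def peak_attach_rate(n_devices: int, window_s: int, seed: int = 42) -> int:
    """Peak attach attempts in any one second when each device waits a
    random back-off of 0..window_s-1 seconds before reconnecting."""
    rng = random.Random(seed)
    attempts = Counter(rng.randint(0, window_s - 1) for _ in range(n_devices))
    return max(attempts.values())

n = 50_000
# No back-off: every device retries in the same second after power returns.
print(peak_attach_rate(n, 1))    # -> 50000: a signalling storm
# A 5-minute randomised back-off spreads the same load thinly.
print(peak_attach_rate(n, 300))  # roughly n/300, i.e. a few hundred per second
```

The point of the sketch is that the total signalling load is identical in both cases; only the peak differs, and it is the peak that overwhelms attach procedures on a real network.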
The network provider’s problem has more to do with the sheer scale of the IoT. It is one thing to accommodate a new device on an existing network, but quite another to predict what might happen when many thousands of devices are installed.
As the above questions suggest, there is also the challenge of sheer diversity of needs, traffic and criticality. The whole IoT population can be divided into two main types.
“Mission Critical IoT” includes devices such as alarm systems, remote medicine, driverless vehicles and financial systems that demand stringent reliability guarantees, across a wide range of latency requirements. Meanwhile the network is also serving a “Massive IoT”: a population of very cheap, low power devices that offer little or no guarantee against breakdown or malware infection.
The IoT ‘chasm’
This then is the chasm between the two cultures. On the one hand there are equipment developers wanting to ship devices that will work perfectly on networks all around the world. On the other hand there are providers expected to support a massive new population of strangers on their networks.
So the IoT industry needs to develop mechanisms to better recognise IoT devices and monitor their activity – how often they connect, how much data they send, and to how many different destinations. Network providers should begin to insist on self-reporting capabilities, or at least offer incentives such as lower certification costs to encourage the adoption of devices that themselves provide the basic data behind the network KPIs.
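As a sketch of what such self-reporting might look like – the record structure and field names below are purely hypothetical, not taken from any published standard – a device could periodically emit a compact KPI summary that the network can aggregate:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical self-reported KPI record; the field names are
# illustrative only, not part of any certification scheme.
@dataclass
class DeviceReport:
    imei: str
    category: str              # e.g. "parking-meter", "heart-monitor"
    connects_per_day: int      # how often it attaches to the network
    bytes_sent: int            # traffic volume over the reporting period
    distinct_destinations: int # fan-out: spam-like behaviour shows up here

report = DeviceReport("490154203237518", "parking-meter", 24, 4096, 1)
payload = json.dumps(asdict(report))
print(payload)
```

A parking meter reporting one destination and a few kilobytes a day is easy to whitelist; the same module suddenly reporting hundreds of destinations would stand out immediately.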
Also needed are standard categories of devices and typical operating parameters – to allow automated monitoring systems to recognise anomalous behaviour, such as a refrigerator transmitting spam mail. Some work is already being done: the Open Mobile Alliance is developing the Lightweight Machine-to-Machine protocol (OMA LwM2M) for managing simple IoT devices with limited processing resources.
Another simple application-layer protocol is the Constrained Application Protocol (CoAP), which is being extended as the number and diversity of M2M applications grows.
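To give a flavour of just how lightweight CoAP is, here is a minimal sketch of the 4-byte fixed message header defined in RFC 7252; a real device would use an established CoAP library rather than hand-packed bytes, but the sketch shows why the protocol suits constrained hardware:

```python
import struct

def coap_header(msg_type: int, code: int, message_id: int, tkl: int = 0) -> bytes:
    """Build the 4-byte fixed CoAP header from RFC 7252:
    2-bit version (always 1), 2-bit type, 4-bit token length,
    8-bit code, 16-bit message ID in network byte order."""
    version = 1
    first_byte = (version << 6) | (msg_type << 4) | tkl
    return struct.pack("!BBH", first_byte, code, message_id)

CON, GET = 0, 0x01  # Confirmable message carrying a GET request (code 0.01)
header = coap_header(CON, GET, message_id=0x1234)
print(header.hex())  # -> 40011234
```

Four bytes of fixed overhead, versus hundreds for an HTTP request line and headers, is the kind of saving that matters on a battery-powered sensor reporting a few bytes a day.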
Bridging the chasm
Meanwhile help is at hand for both sides – from an industry offering advanced network testing devices, services and strategies.
For the developer concerned about how a device will perform in other countries and on other cellular networks, there are compact solutions that emulate any type of cell network, allowing “real life” performance testing to be carried out under strict and repeatable laboratory conditions. An important and often under-considered question is how the device responds in negative situations: not just how it works on a certain provider’s network, but what actually happens if a user takes it to an area of bad reception – does it fail safely? There is a double benefit here: the developer can make sure that a first-class product will be delivered, and at the same time give the service provider assurance that it has been thoroughly tested for the environment they provide.
Similarly there are equipment and services that can audit any device’s ability to stay secure under all sorts of malware, denial of service, breakdown and attack conditions. As a service, this will provide more than a simple fail/pass indication: vulnerabilities will be prioritised, with a clear distinction between urgent gaps that must be closed before the device will be allowed on the network, versus lesser weak points the developer should be aware of.
Developers are also reminded of the need for a regular repeat security audit to make sure that their device stays ahead of hacking advances, especially if it or the network is regularly updated.
For the cellular network provider there are solutions that can emulate any type of IoT – containing millions of devices of different types, traffic patterns and QoS demands – and then load it onto a model of the network and allow testers to “play tunes” to see how the network will respond under all sorts of operating and failure conditions.
Part of that sort of testing is to model the impact on existing network users: might the IoT erode the QoS for regular cell-phone customers? Until recently the only option had been to buy a few representative physical devices and try them out on the network, with no way to scale the test up to millions of devices.
Once the network provider is sure of their ability to host an IoT, the next step is to develop IoT services and applications to offer to the device manufacturers. Here the test industry can offer what it calls “Lifecycle Service Assurance”: a means to not only test that a new application can be launched knowing that it will adequately support the IoT population without compromising existing users, but also to continue to monitor the actual network performance and proactively warn of likely problems as the number and range of devices grows.
So the operator can be sure not only of what should work on their network, but also that the network continues to support the IoT as it grows.
This is especially important when the provider faces the challenge of balancing the needs of Massive IoT, Mission Critical IoT and existing customers. Mission Critical IoT is a niche market that demands very high SLAs but rewards the provider with lucrative contracts, whereas Massive IoT promises almost unlimited business but very low per-device returns, even though those devices still consume network resources.
Lifecycle Service Assurance provides on-going visibility to anticipate trouble and maintain that balance, without sacrificing either type of business opportunity or existing customers’ loyalty.
Minding the gap
The network testing industry is providing a vital service for a very precocious IoT market. It offers a window between two cultures that will need to co-operate as never before. It could also help to build a common language for the future.
The author of this blog is Stephen Douglas, Solutions & Technical Strategy lead, Internet of Things, Spirent Communications.
Comment on this article below or via Twitter: @IoTNow_ OR @jcIoTnow