The Edge of Reason: how IoT needs edge analytics as it expands
Gadi Lenz, Chief Scientist at AGT International, details how analytics must be taken to the edge if we are to truly make the most of IoT applications.
Until very recently, the focus was on building Internet of Things devices, working out how they connect to the Internet, collecting the data they produce and making that data available for someone to use – a sort of "if you build it (the IoT) they will come".
But the real value of IoT solutions comes from analysing the data to gain some understanding of what it all means. A stream of temperature measurements from a sensor in a house or on the street has limited usefulness on its own. We need to process and analyse this data so that we can decide, for example, whether to turn on the air conditioning.
Hence, the IoT requires analytics. So the analytics associated with the Big Data market – predictive analytics, streaming analytics, real-time analytics and so on – are now being adopted by the IoT industry.
But the traditional approach to analytics isn't necessarily fit for purpose for IoT. With so many devices producing so much data, a correspondingly large array of analytics, compute, storage and networking infrastructure is necessary. A model in which all the data travels to a relatively small number of data centres, to be analysed in the cloud, simply will not scale. It will also be very costly, because transporting bits from here to there costs money.
The solution is clear – distribute the analytics to the edge, or very close to it. Let's harness the smartness of these myriad devices: their low-cost computational power allows us to run valuable analytics on the device itself. Multiple devices are usually connected to a local gateway, where more compute power is potentially available (as in Cisco's IOx), enabling more complex multi-device analytics close to the edge.
So how would distributed IoT analytics work? The hierarchy begins with "simple" analytics on the smart device itself, moves up to more complex multi-device analytics on the IoT gateways, and finally the heavy lifting – the Big Data analytics – runs in the cloud. This distribution of analytics offloads the network and the data centres, creating a model that scales.
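The hierarchy can be sketched in a few lines of code. This is purely illustrative – the tier boundaries, thresholds and function names are invented here, not part of any real IoT platform:

```python
# Minimal sketch of the three-tier analytics hierarchy: device -> gateway -> cloud.
# All thresholds and message shapes are illustrative assumptions.

def device_analytics(reading):
    """On-device: keep only readings outside the normal band, transmit nothing else."""
    normal_low, normal_high = 15.0, 30.0   # e.g. temperature in degrees C
    if normal_low <= reading <= normal_high:
        return None                        # nothing interesting; send nothing upstream
    return {"reading": reading}

def gateway_analytics(events):
    """Gateway: correlate events from multiple devices into one compact summary."""
    readings = [e["reading"] for e in events if e is not None]
    if not readings:
        return None
    return {"count": len(readings), "max": max(readings)}

def cloud_analytics(summaries):
    """Cloud: heavy lifting across all gateways (here, just a fleet-wide total)."""
    return sum(s["count"] for s in summaries if s is not None)

# Simulate one reporting cycle: five devices behind one gateway.
raw = [21.3, 34.9, 22.0, 14.2, 25.5]            # only two readings are anomalous
events = [device_analytics(r) for r in raw]
summary = gateway_analytics(events)
print(summary)                                   # {'count': 2, 'max': 34.9}
print(cloud_analytics([summary]))                # 2
```

Note how little data crosses each boundary: five raw readings become one small summary at the gateway, and the cloud only ever sees summaries.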
But edge IoT analytics is about more than operational efficiency and scalability.
Many business processes do not require "heavy duty" analytics, so the data collected, processed and analysed on or near the edge can drive automated decisions directly. For example, a local valve can be shut off when edge analytics detects a leak.
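A local rule like the valve example needs no cloud round-trip at all. The sketch below is hypothetical – the flow threshold and the valve interface are invented for illustration:

```python
# Hypothetical edge rule: shut a local valve when flow is detected while
# every tap is closed. Threshold and actuator are illustrative assumptions.

LEAK_THRESHOLD_LPM = 0.5   # flow (litres/min) expected when all taps are off

def check_for_leak(flow_lpm, taps_open):
    """Flow while every tap is closed suggests a leak."""
    return flow_lpm > LEAK_THRESHOLD_LPM and not taps_open

valve_open = True
if check_for_leak(flow_lpm=2.1, taps_open=False):
    valve_open = False     # act locally; no data leaves the site
print(valve_open)          # False
```

The decision and the action both stay on site, which is exactly the point: the cloud might later receive a log entry, but it is not in the control loop.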
Some actions must be taken in real time because they cannot tolerate any delay between the sensor-registered event and the reaction to it. This is especially true of industrial control systems, where there is sometimes no time to transmit the data to a remote cloud. A distributed model remedies this.
Many sensors create huge volumes of data – video, for example. Even when compressed, a typical video camera can produce a few megabits of data every second. Transporting these video streams requires bandwidth; not only does bandwidth cost money, but if you also want some quality of service, the whole thing becomes even more expensive. Thus, performing video analytics, or even storage, at the edge and transporting only the "results" is much cheaper.
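Some back-of-the-envelope arithmetic makes the video case concrete. The bitrate and event figures below are rough assumptions for a single camera, not measurements:

```python
# Rough comparison: shipping the raw video stream vs. shipping only the
# analytics "results". All figures are illustrative assumptions.

CAMERA_MBPS = 4.0                # compressed stream, ~4 Mbit/s
RESULT_BYTES_PER_EVENT = 200     # a small JSON result message
EVENTS_PER_HOUR = 20             # e.g. motion/object detections

raw_gb_per_day = CAMERA_MBPS * 3600 * 24 / 8 / 1000        # Mbit/s -> GB/day
result_mb_per_day = RESULT_BYTES_PER_EVENT * EVENTS_PER_HOUR * 24 / 1e6

print(f"raw stream:   {raw_gb_per_day:.1f} GB/day")         # 43.2 GB/day
print(f"results only: {result_mb_per_day:.3f} MB/day")      # 0.096 MB/day
```

Even with generous assumptions, the results-only stream is several orders of magnitude smaller than the raw one – and that is per camera, before any quality-of-service premium.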
As we consider edge analytics, some trade-offs are becoming apparent.
Edge analytics is all about processing and analysing subsets of all the data collected, and then transmitting only the results. So we are essentially discarding some of the raw data, and potentially missing some insights. The question is: can we live with this "loss", and if so, how should we choose which pieces we are willing to "discard" and which need to be kept and analysed?
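One common way to make that choice explicit is a deadband (report-by-exception) filter: a reading is transmitted only when it moves more than some tolerance away from the last reported value. The tolerance is an assumption that each application must tune for itself:

```python
# Sketch of a deadband filter: keep a sample only when it differs from the
# last *kept* sample by more than a tolerance. Tolerance is application-tuned.

def deadband(samples, tolerance):
    kept, last = [], None
    for s in samples:
        if last is None or abs(s - last) > tolerance:
            kept.append(s)
            last = s
    return kept

samples = [20.0, 20.1, 20.05, 21.5, 21.6, 19.0]
print(deadband(samples, tolerance=1.0))   # [20.0, 21.5, 19.0]
```

Here six raw samples become three transmitted ones. The "loss" is precisely quantified by the tolerance, which is what makes this kind of filter easy to reason about when deciding what an application can afford to discard.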
The answer is not simple, and it is determined by the application. Some organisations may never be willing to lose any data, but the vast majority will accept that not everything can be analysed. This is where we will have to learn by experience, as organisations begin to get involved in this new field of IoT analytics and review the results.
It’s also important to learn the lessons of past distributed systems. For example, when many devices are analysing and acting at the edge, it may be important to maintain a single "up-to-date view" somewhere, which in turn may impose various constraints.
The fact that many of the edge devices are also mobile complicates the situation even more.
If you believe that the IoT will expand and become as ubiquitous as predicted, then distributing the analytics and the intelligence is both inevitable and desirable. It will help us deal with Big Data and relieve bottlenecks in the networks and the data centres. However, it will require new tools for developing analytics-rich IoT applications.
In some fields, such as adaptive traffic signal control, some analytics are already being done at the edge, but not on the massive scale that will result from the rapid expansion of the IoT. While some people may associate the term "Distributed Intelligence" with apocalyptic visions à la Skynet in the "Terminator" movies, I believe that some really cool and exciting stuff is ahead of us and (excuse the cliché) it will change our lives for the better.
Check out Gadi’s blog, The Analytics of Everything, for more insights.