As we enter the age of the Internet of Things (IoT), Jeremy Cowan asks Oracle’s Dave Hofert how we can benefit from the burgeoning data streams now available to service providers. What lessons in device and data management have been learned from machine-to-machine (M2M) communications?
M2M Now: Are there many common problems and solutions when extracting data from devices in industries within M2M that differ as widely as automotive, healthcare and utility services?
Dave Hofert, Oracle: IoT and M2M solutions are in many ways all the same, but they are also unique. Every problem, and every solution, is very specific. “How can I operate this machine more efficiently? How many lights need to be on in this particular building? Where is the patient?” Those questions are extremely specific, and so are the rules around collecting and handling the data needed to answer them.
That said, the solutions follow something similar to an 80/20 rule. Every M2M or IoT solution collects some kind of sensory data appropriate to the problem, communicates it back to a data centre, performs some kind of an analysis, and compares that analysis against predicted or past performance.
While every solution has a specific objective, they all share a common set-up, a common design. Specific solutions or techniques can be leveraged, reused, and improved across all of these industries, so that new solutions can be deployed more quickly and easily.
M2M Now: Clearly the industry has moved beyond rebuilding and reinventing the wheel almost every time.
Dave Hofert: I believe that this is the year IoT will blossom. You are right, M2M has been around for well over a decade. Just as in the embedded market, the pattern was to design a one-off solution for a specific problem, then move on to the next problem and start over from scratch.
But with the combination of cheaper processors, better connectivity, more advanced processing capabilities and data transmission and sharing standards, developers can now take advantage of more consistent platforms.
At Oracle we view this as three steps. Number one: acquire and manage. Get your data into a standardised, scalable, secure device platform. Number two: integrate and secure. Bring that data into your business systems in a cost-effective and leveraged manner. Number three: analyse and act. Extract the business value from that data and take some action.
If you can make these individual steps more of a common platform, then you can apply this many places across your organisation.
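The three steps can be sketched as stages of a simple pipeline. This is an illustrative sketch only; the class and method names are hypothetical and not part of any Oracle API, and each stage stands in for what would be real hardware access, secure transport, and analytics.

```java
// Hypothetical sketch of the acquire -> integrate -> analyse/act pipeline.
// None of these names come from an Oracle product; they only illustrate the shape.
import java.util.ArrayList;
import java.util.List;

class IoTPipeline {

    // Step 1: acquire and manage -- collect a sensor reading on the device.
    static double acquire(double rawSensorValue) {
        return rawSensorValue; // in practice: read from hardware, validate, timestamp
    }

    // Step 2: integrate and secure -- move the reading into the business system.
    static void integrate(List<Double> store, double reading) {
        store.add(reading); // in practice: authenticated transport to the back end
    }

    // Step 3: analyse and act -- compare against expected performance and decide.
    static String analyseAndAct(List<Double> store, double expected, double tolerance) {
        double sum = 0;
        for (double r : store) sum += r;
        double mean = sum / store.size();
        return Math.abs(mean - expected) > tolerance ? "ALERT" : "OK";
    }

    public static void main(String[] args) {
        List<Double> store = new ArrayList<>();
        integrate(store, acquire(21.5));
        integrate(store, acquire(22.0));
        System.out.println(analyseAndAct(store, 22.0, 1.0));
    }
}
```

The point of making each step a distinct stage is exactly the one Hofert makes: the same pipeline shape can be reused across many solutions in an organisation, with only the sensor, the transport, and the analysis rule changing.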
M2M Now: Among the most commonly reported problems for M2M and IoT service providers are making services scalable and simple to deploy. How does Oracle address these issues?
Dave Hofert: We believe in a platform architecture and model that address the issues of scalability and deployment. Oracle’s model allows customers to deploy and create solutions, but also to take advantage of improvements ‘under the hood’. On devices, Oracle believes the Java platform is the key for acquiring and managing data. Java is both a language and a run-time. The language is widely known and something of a standard in IT for developers. Oracle and partners then create the run-times across a wide range of hardware.
When you create a Java application, you can run it without change across multiple different devices. So you can leverage business logic or algorithms, or just interface code across a wide range of devices, from the device side through the gateway, to the server side.
Now you have this platform that allows you to share business logic, and algorithms up and down the solution stack which gives you tremendous flexibility. As Oracle improves the platforms, you automatically see the benefits. This allows you to focus on your logic while Oracle focuses on better performance, functionality, and scalability.
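The portability claim rests on writing business logic in plain Java with no platform-specific dependencies, so the same compiled class can run on an embedded runtime, a gateway, or a server. A minimal sketch, with an entirely hypothetical geofencing class as the shared logic:

```java
// Hypothetical business-logic class with no platform-specific dependencies.
// Because it uses only java.lang, the same compiled class can run unchanged
// on a device-side Java runtime, on a gateway, or on a back-end server.
class GeoFence {

    private final double centreLat, centreLon, radiusKm;

    GeoFence(double centreLat, double centreLon, double radiusKm) {
        this.centreLat = centreLat;
        this.centreLon = centreLon;
        this.radiusKm = radiusKm;
    }

    // Haversine great-circle distance between two points, in kilometres.
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371.0; // mean Earth radius in km
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }

    // The same check can run on the device (to raise a local alert) or on the
    // server (to analyse a whole fleet) without any code changes.
    boolean contains(double lat, double lon) {
        return distanceKm(centreLat, centreLon, lat, lon) <= radiusKm;
    }
}
```

The design choice worth noting: keeping logic like this free of device APIs is what lets it move up and down the stack, which is the "leverage" Hofert describes.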
M2M Now: Even when those twin barriers have been overcome, some IoT enabled services that seem wonderful in the pipeline can still fail to be cost-effective. This seems to be a constant challenge.
Dave Hofert: For early IoT services or solutions, cost-effectiveness was somewhat subjective. If you could obtain information that wasn’t readily available, such as the temperature or the location of something, and realise a direct business benefit, then you had a win. The cost of the solution was weighed against not having the data at all, and the benefit seemed obvious.
However, the question is what happens over time? How much does it cost to maintain that solution? More importantly, how much does it cost to evolve that solution? I think IoT solutions are a bit like eating a packet of crisps: you have one piece of data, and then you simply want another. And modifying a solution in place becomes really expensive when it is hand-crafted.
For example, we talked with a partner that was working on an automated parking solution. The goal was to look at a parking space and ask: is it available? Can the data be transferred to a central server so that people who are looking can buy a parking spot? That is a great solution on its own merit.
Then the city says, “Well, I have all these sensors at work, can you tell me the temperature or measure the rainfall at this spot?” That is a good idea: there is infrastructure in place, there is a device there. The question is, how hard will it be to modify? That is really the issue for long-term cost-efficiency. A point solution may be fine, but it won’t be a point solution for long. Everyone is going to want to know more.
M2M Now: Do the cost pressures that can hamper profitable service delivery lie in modification?
Dave Hofert: Yes, again going back to the platform discussion, let’s use some examples. We have set up this IoT solution; “Where are my trucks? I am a crisp delivery business, and I need to know where my trucks are, because if I need to divert one, I want to pick the one that is closest.” So, you have collected data from a GPS module, you’ve sent that location to a server, which connects it to a map and gives you a picture on the screen showing you where all the trucks are.
Now the mechanic says, “Can I also know how fast it is going on average? Can you look at the tyre pressures?” To do this you have to modify that device, or add another inside the truck. If you have a device with a particular OS or development environment, you probably have a software team that handles it; they are going to have to do some work to make changes and collect more of this data.
This is not particularly hard, but now you need to look at the gateway. It is a different operating system, tool chain, and device. You are going to send the data to the back end which, of course, has the IoT development team.
You put the first solution in place and everything is great, but now we want to modify it. Now there are three different teams to coordinate, collaborate, and share information back and forth.
The testing is really the killer. Testing and maintenance costs escalate tremendously, because you may have individual environments, you might have an integrated environment. If something goes wrong, who figures out who is at fault? Then you have to fix it and then you have to deploy it.
So it quickly spirals out of control, and what happens is the answer comes back, “Sorry no, we can’t deal with the tyre pressures, it is too hard.”
If you are using a platform-based solution such as we propose with Java, you can have Java running on all of those devices. You can have one team working on the full solution. If you want to collect more data it is much more manageable, much easier to maintain and to test.
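One way to picture "one team, one platform" is a single telemetry class shared by device, gateway, and server code, so that adding the mechanic's new measurements is one change in one place. Everything here is hypothetical: the class, the field names, and the wire format are illustrative, not an Oracle or Java ME API.

```java
// Hypothetical telemetry message shared by device, gateway, and server tiers.
// Adding new measurements (average speed, tyre pressure) is a change to one
// class that a single Java team can make and test across the whole stack.
import java.util.Locale;

class TruckTelemetry {
    final String truckId;
    final double latitude;
    final double longitude;
    final double speedKmh;        // added later for the mechanic
    final double tyrePressureBar; // added later for the mechanic

    TruckTelemetry(String truckId, double latitude, double longitude,
                   double speedKmh, double tyrePressureBar) {
        this.truckId = truckId;
        this.latitude = latitude;
        this.longitude = longitude;
        this.speedKmh = speedKmh;
        this.tyrePressureBar = tyrePressureBar;
    }

    // One serialisation used by every tier; the CSV format is illustrative only.
    String toWire() {
        return String.format(Locale.ROOT, "%s,%.5f,%.5f,%.1f,%.2f",
                truckId, latitude, longitude, speedKmh, tyrePressureBar);
    }

    static TruckTelemetry fromWire(String line) {
        String[] f = line.split(",");
        return new TruckTelemetry(f[0], Double.parseDouble(f[1]),
                Double.parseDouble(f[2]), Double.parseDouble(f[3]),
                Double.parseDouble(f[4]));
    }
}
```

Because the device that produces the message and the server that consumes it compile against the same class, the three-team coordination problem Hofert describes collapses into one change set and one test suite.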
M2M Now: Dave, are better organisational processes as important as better platforms?
Dave Hofert: Yes, I think you have got it right. Of course we support and develop a wide range of standards, not only for the back end business integration side of the world, but for the device side in terms of communication and data format.
Fundamentally, the platforms plus the standards help keep the costs lower, but the point about organisation is important. Keeping with our delivery truck example, let’s assume we are able to collect all of the data. Now we are saving a ton of money, operating this crisp delivery service much more efficiently. All of this data, and the devices, have become critical parts of the organisation’s infrastructure.
Many, if not all of these M2M devices need to become first class citizens in an organisation’s IT infrastructure – this is a bit of an eye opener for the IT department, and probably for the product guys who are instrumenting the trucks. The IT department is saying, “Okay, you want me to control and manage what? It’s not in my IT server room. How am I supposed to control access to that?” The product guys are saying, “You’ve now got to follow IT rules for how I make changes, this is now really hard to do.”
There is no question it is a challenge, but I think it is a big opportunity, because IT in general has been working for decades on problems like provisioning systems, creating unified profiles for systems, security, distributed computing, and even data lifecycles. These are precisely the problems that IoT devices have when they move out of the one device, one solution silo. They need to be remotely provisioned, updatable, secure, controlled and uniquely addressable. These are all attributes of a typical IT system. The good news is, while it can be a challenge, Oracle and other companies have been working for a long time on how to solve these problems.
But the process challenge is even more interesting when you engage in an IoT solution that works with the environment.
For example, the infrastructure on your trucks might also share data with the city, because they are looking at traffic. Or the city might have sensors out there that the trucks utilise to anticipate problems. We are starting to commingle these infrastructures. They want to work together.
Your company’s IT, which has always been something of a ‘walled city’, now has this outer layer that can potentially interact or engage with other players out in the open. How will we handle that? Again, we have very good ideas and techniques, but there are new languages and new vocabularies that need to be created to enable us to work together seamlessly.
M2M Now: Are users best served by having more intelligent devices in specific sectors like utilities, building automation and healthcare? Or should IoT service providers build the intelligence closer to the heart of the network?
Dave Hofert: You have some rules that this device is supposed to follow, and additional information that provides some context. Now you are a bit smarter because you can react to changing scenarios and conditions. This is what we want from devices, right? In a smart home scenario, shouldn’t my house recognise that I am leaving for work, and automatically lock the doors, turn down the temperature, and turn off any lights?
We want devices to adapt to help us, because if they can adapt and act more intelligently, then we spend more time benefiting from them, rather than managing them, which is where we are today.
So, where does this intelligence live? We are evolving into a world of what I call “dynamic intelligence” which means both devices and servers are intelligent, but they have different responsibilities. Devices at the edge of the network need to be intelligent enough to react, they have to sense changes in the environment, the context, and then act. This is based on logic or models derived from the core functionality of the device and a longer-term analysis of how the functionality was delivered or executed.
The job of a server is to collect this data from the edge devices and continue to perform the ongoing analysis across all the devices out there, to look for patterns or exceptions and develop new, or better responses to input. Then over time these algorithms are tweaked, updated logic is sent to the edge, and the cycle continues. The overall solution continues to get smarter.
You can do all of this on the server side. But then you have the issues of connectivity and scalability. If connectivity is interrupted, then the device is extremely limited, and maybe even useless. As well, having more devices feeding in more data only increases pressure on the network, the storage, and on analytic components, which then start to become individual choke points.
What we propose is more of a distributed computing and processing model, where data is analysed, reduced, and enhanced at every stage to transmit the best and most relevant data possible. The device can make a few decisions about what is going on, and it can pass data onto a gateway, which can look at the inputs from multiple devices, and make some decisions, and reduce the raw data and send on what’s important.
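The gateway's role in this distributed model can be sketched as a simple reduction step: summarise a batch of raw device readings and forward only the summary plus any exceptional values. The reduction policy shown (min/max/mean plus out-of-range alerts) is an assumption for illustration, not a prescribed algorithm.

```java
// Sketch of gateway-side data reduction: raw device readings are summarised
// and only the aggregate, plus any exceptional readings, goes upstream.
// The policy (min/max/mean plus out-of-range alerts) is illustrative only.
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

class GatewayReducer {

    // Reduce a batch of raw readings to the few messages worth transmitting.
    static List<String> reduce(double[] readings, double low, double high) {
        List<String> upstream = new ArrayList<>();
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE, sum = 0;
        for (double r : readings) {
            min = Math.min(min, r);
            max = Math.max(max, r);
            sum += r;
            if (r < low || r > high) {
                upstream.add("ALERT " + r); // exceptions always go upstream
            }
        }
        upstream.add(String.format(Locale.ROOT,
                "SUMMARY min=%.1f max=%.1f mean=%.2f",
                min, max, sum / readings.length));
        return upstream;
    }
}
```

A batch of a hundred normal readings becomes one summary message, while an out-of-range reading still travels upstream immediately, which is the "best and most relevant data" trade-off described above.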
Everybody wants more data, but everything has a limit. If we all start passing on every single bit of data we are collecting, everything is brought to its knees. We need to be smart about how we handle this.
M2M Now: Thank you.