One of the technological advances that the implementation of IoT is driving is in operating systems (OS), which manage applications and processes. Virtual machines (VMs) have long fulfilled this function in the infrastructure world, but for most applications, VMs are too slow and too heavy on resources for use in IoT endpoints. Most endpoints require a light OS that uses few resources and can work quickly to automate various processes, and these requirements are largely responsible for the increasing use of containers in IoT, writes Peter Dykes
Containers differ from VMs in that the latter include an entire operating system as well as the applications. For example, a server running several VMs requires a commensurate number of operating systems underpinned by a hypervisor. However, because each container shares the host operating system with other containers, a server running the same number of containerised applications requires only a single operating system. Each container holds a complete runtime environment: an application along with all its dependencies, libraries and other binaries, as well as the configuration files needed to run it. The net result is that containers use far fewer resources than virtual machines, have a smaller footprint and they work faster.
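To make "an application along with all its dependencies" concrete, a container image is typically defined in a short build file. The sketch below is a hypothetical Dockerfile for a small Python-based sensor service; the file names and base image are illustrative, not drawn from any vendor mentioned here:

```dockerfile
# Start from a minimal base image rather than a full OS install
FROM python:3.12-slim

WORKDIR /app

# Bake the application's dependencies into the image...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...along with the application code and its configuration file
COPY sensor_service.py config.yaml ./

# The container runs a single process; the host kernel is shared
CMD ["python", "sensor_service.py"]
```

Because the image carries only the application layer and not a guest operating system, many such containers can run side by side on one host kernel, which is where the resource savings over per-VM operating systems come from.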
Jay Lyman, the principal analyst for cloud management and containers at 451 Research says, “Key container characteristics such as lightweight, flexible and portable are among the drivers for container software in IoT. Along with an explosion of devices and endpoints, there is also a need for more applications and infrastructure that can interface with those devices and endpoints.”
Lyman adds that application container software such as Docker plays a significant role in IoT, mainly in the software developer realm, where it helps to speed development, testing and deployment. It also allows flexibility and support for a greater number of application components, including languages, frameworks, databases and messaging, as well as infrastructures such as the traditional data centre and virtual, public, private and hybrid clouds.
Container management software such as Kubernetes, Mesos or Docker Swarm and lightweight operating systems such as Container Linux and RancherOS are used to manage containers and clusters and also play a role in IoT software since, as Lyman puts it, “[Container management software] is ideal for the things end of IoT. That is to say container orchestration frameworks and stripped-down, specific and lightweight operating systems make sense as an edge operating system for many devices.”
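To illustrate the orchestration role Lyman describes, the hypothetical Kubernetes manifest below declares a small fleet of identical containers; the service name and image are illustrative only. Kubernetes keeps the declared number of replicas running, restarting or rescheduling containers as nodes come and go:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: telemetry-collector        # hypothetical edge service
spec:
  replicas: 3                      # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: telemetry-collector
  template:
    metadata:
      labels:
        app: telemetry-collector
    spec:
      containers:
      - name: collector
        image: example.com/telemetry-collector:1.0   # illustrative image
        resources:
          limits:                  # a small footprint suits constrained edge nodes
            memory: "64Mi"
            cpu: "250m"
```

The declarative style is the point: the operator states the desired state of the fleet, and the orchestrator continuously reconciles reality against it, which is what makes this model attractive for large populations of devices and endpoints.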
Containers in infrastructure
But the efficacy of containers in IoT is not confined to the endpoints and the management thereof. One of the requirements of IoT is that endpoints can, if necessary, be constantly interrogated, continuously updated, patched and in some cases repurposed, tasks which in many instances must be carried out on demand and in real time. The infrastructure which lies behind an IoT network must mirror these capabilities, and while it has until now been almost exclusively the domain of virtual machines, it too is becoming fertile ground for the deployment of containers.
Of course, cloud-based infrastructure is the natural choice for IoT, with its speed, agility and global reach, but increasingly, when enterprises are looking for next generation infrastructure for their private clouds, they want the ownership experience to be similar to that of using a public cloud. The biggest disruption that public cloud computing has brought is that when using a public cloud, the infrastructure is seamlessly evolving behind the scenes and the enterprise doesn’t have to think about the upgrade status or the overheads of running and maintaining it.
As a result, software developers are beginning to concentrate more on operational issues rather than solely on developing and distributing software, and many see containers as the means of achieving a public cloud-like experience for their private cloud customers. One such is Sunnyvale, California-based OpenStack specialist Mirantis. The company’s co-founder and chief marketing officer Boris Renski says, “It’s really about operating the infrastructure environment for the customer in such a way that is seamless and is continuously delivered, which gives a public cloud-like experience. Containers enable us to do that.”
He adds, “In pragmatic terms, it involves doing some upstream work in OpenStack to make it more cloud-native by containerising all of the control plane services in OpenStack and using Kubernetes as a uniform underlay orchestrator for all the independent containerised OpenStack services. With this architecture, we are able to deliver the promise of continuously delivered, continuously evolving infrastructure to our customers much more simply and easily.”
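Renski’s description can be sketched as a Kubernetes manifest in which an OpenStack control plane service runs as just another containerised workload. The example below uses Keystone, OpenStack’s identity service, but the image name and replica count are illustrative, not Mirantis’s actual packaging:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keystone-api               # OpenStack's identity service, containerised
spec:
  replicas: 2
  selector:
    matchLabels:
      app: keystone-api
  template:
    metadata:
      labels:
        app: keystone-api
    spec:
      containers:
      - name: keystone
        image: example.com/openstack/keystone:latest   # illustrative image
        ports:
        - containerPort: 5000      # Keystone's public API port
```

Upgrading OpenStack then becomes an image update that the orchestrator rolls out container by container, which is what makes the continuously delivered infrastructure Renski describes operationally tractable.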
Indeed, when running systems like OpenStack, the issues around continuous delivery and upgrades are not small. Complexity breeds cost, and the main problem is how to take a very large-scale, complicated distributed system like OpenStack, which comprises many services running across many machines, and run it with as few operations personnel as possible in order to minimise operating costs. Much of the early development work on containers and Kubernetes was done by Google to solve this very problem. Google developed control groups (cgroups) in Linux over a decade ago; these ultimately evolved into containers, which have been taken mainstream by San Francisco-based Docker. Kubernetes is Google’s container orchestrator, an offshoot of the company’s large-scale cluster management system, Borg. With these tools going mainstream and effectively becoming open standards for the broader market, they are now being used to simplify the operations of systems like OpenStack, leading to the increasing use of containers in infrastructure.
A common platform for all?
While this is all very cutting edge and exciting, there remains the issue of uniting containers with VMs running on the infrastructure side, for as Lyman points out, “In enterprise software, we believe VMs will have some staying power amid the growth of containers because of the mature, battle-tested security, tooling and process around VMs. Even though they might not get all the benefits of containers, most enterprises prefer to continue to use their tried and true VM software.” If Lyman is right, and history suggests he is, VMs will be around for a very long time indeed and at present, bringing containers and VMs together on a single control plane using a single set of APIs remains something of a holy grail for the industry.