The next phase of the Internet of Things, part 2
All Things Wireless
It’s not the case that all Things will be wireless; just almost all of them. Some objects are static and sit close to the right kind of wire, but vastly more devices are not. The reasons are mobility, remoteness, convenience and, particularly in the home, aesthetics.
Utilising wireless technology raises three very familiar issues:
Power is needed to measure, process and transmit data to the Internet. Devices also receive data and take action, and taking action often requires far more power than receiving data. Arguably, though, it is in measurement that power is most critical. Applications vary widely, but many measurements are infrequent, particularly in tasks such as environmental monitoring or event detection.

Powering the Internet of Things
It’s not only the act of receiving data that uses power; remaining in a conventional standby state consumes power too. Most short- and long-range wireless protocols require devices to listen regularly for a beacon, and in such infrequent-measurement applications the power consumed in standby is often the largest component of the device’s total power budget.
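To illustrate how standby listening can dominate, here is a minimal power-budget sketch. All currents, timings and duty cycles are hypothetical example values for illustration only, not drawn from any particular protocol or datasheet.

```python
# Hypothetical power budget for a duty-cycled wireless sensor.
# Every number here is illustrative, not from a real protocol or part.

SECONDS_PER_DAY = 86_400.0

def average_current_ua(sleep_ua, listen_ma, listen_ms, beacon_interval_s,
                       burst_ma, burst_ms, bursts_per_day):
    """Average current in microamps over one day."""
    # Deep-sleep current flows (approximately) all day.
    sleep_charge = sleep_ua * SECONDS_PER_DAY

    # The radio wakes once per beacon interval to listen briefly.
    listens = SECONDS_PER_DAY / beacon_interval_s
    listen_charge = listens * (listen_ma * 1_000) * (listen_ms / 1_000)

    # Occasional measure-and-transmit bursts.
    burst_charge = bursts_per_day * (burst_ma * 1_000) * (burst_ms / 1_000)

    return (sleep_charge + listen_charge + burst_charge) / SECONDS_PER_DAY

# Example: 2 uA sleep; 15 mA receive for 5 ms at every 1 s beacon;
# 20 mA for 50 ms per measurement, 24 measurements per day.
avg = average_current_ua(2.0, 15.0, 5.0, 1.0, 20.0, 50.0, 24)
print(f"average current: {avg:.1f} uA")
```

With these example numbers, beacon listening alone contributes 75 µA of the ~77 µA average, which is exactly the point made above: in infrequent-measurement applications it is standby listening, not measurement, that dominates the power budget.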
There are four mainstream ways to power the IoT:
- Wired power
Wired power is simple, reliable and predictable, but is only practical within a few metres of an outlet, and only where accessibility and ergonomics don’t prevent its use.
- Rechargeable cells
Rechargeable cells work well in many consumer applications, provided users have the time and patience to recharge them. For fixed or remote applications, self-discharge makes the lifetime too short, and the need for recharging makes them impracticable.
- Energy harvesting
Energy harvesting has enabled some remarkable applications, including remote devices, in-body devices and near-field devices for payment and communications. While it is possible to derive power from some unexpected places, the technology remains confined to a narrow set of situations: in too many cases there is insufficient light, heat gradient, motion or physical space to generate the power needed.
- Dry cell battery
Dry cell batteries provide high capacity and low leakage. They are a good choice for powering remote, inaccessible devices, and in the home they preserve aesthetics and usability. However, replacement can be expensive, and periodic replacement creates a very undesirable operating expenditure.
Power is critical to reducing the total cost of ownership of Things. To enable new, differentiated applications, what’s needed is fit-and-forget power: no recharging, no wires and multi-year battery life. Achieving that means either a big battery or some truly low-power technology.
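As a rough sketch of what “multi-year” demands, the estimate below converts a battery capacity and an average current draw into years of life. The capacity, derating and current figures are illustrative assumptions, and the model deliberately ignores self-discharge and temperature effects, which shorten real lifetimes.

```python
# Rough battery-life estimate. Capacity and currents are assumed example
# values; self-discharge and temperature derating are ignored.

HOURS_PER_YEAR = 24 * 365

def battery_life_years(capacity_mah, avg_current_ua, usable_fraction=0.8):
    """Years of life from nominal capacity and average current."""
    usable_uah = capacity_mah * 1_000 * usable_fraction
    return usable_uah / avg_current_ua / HOURS_PER_YEAR

# Two AA cells (~2500 mAh assumed) at a 77 uA average draw:
print(f"{battery_life_years(2500, 77):.1f} years")   # about 3 years

# The same cells at a 10 uA average draw:
print(f"{battery_life_years(2500, 10):.1f} years")   # over 20 years on paper
```

The arithmetic makes the design pressure clear: at tens of microamps average, a pair of dry cells reaches the multi-year fit-and-forget target; at milliamp averages it does not come close.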
Fit-and-forget power is critical to the next phase of the Internet of Things.
Are standards a solution looking for a problem?
Standards have powered an incredible array of consumer electronics devices. Cellphones, wireless networks and short range personal devices have all been created as a result of the work of standards bodies and countless organisations that have sponsored them and developed technology based on them, including TTP.
At the physical layer, a wireless standard sits at a specific point in terms of spectrum, range and data rate. While most standards include options and/or fallbacks to mitigate interference, each standard is a compromise to meet the needs of a group of applications. Implementations make compromises too, selecting a set of functionality and interfaces to target a broad range of applications to maximise market share. These compromises have a direct impact on all aspects of implementation – range, performance, silicon area, memory and power consumption.
In return, there are two advantages to the use of a standard: networks and scale. Using a standard means that you can piggy-back onto someone else’s network and volumes derived elsewhere mean that silicon and module vendors have scale advantages. Higher volume drives increased integration and more investment in power-saving features. But is that enough?
Moore’s law is exhausted
In 1965, Gordon E Moore first predicted the exponential progress of silicon integration. This evolved into Moore’s law, which has proven to be one of the most accurate predictions of technological progress ever made. Nearly fifty years on, the complexity of silicon devices continues to increase, but the costs of implementing each increment are becoming more and more prohibitive, and Moore’s law is no longer able to predict a matching reduction in cost. Nvidia published its analysis in 2012: while available silicon area still roughly doubles with each shift in geometry, the cost saving from implementing the same functionality on a smaller process has diminished to almost zero.
Moore’s law has had a further unintended consequence. While incremental improvements in active power efficiency are still being achieved, static power consumption is becoming a serious problem. With ever-smaller physical features, electrons leak much more easily past barriers and across transistor boundaries. This directly affects designers, who must now constantly manage leakage and make tough implementation choices between high-performance and low-leakage logic.
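That trade-off can be sketched with the standard first-order CMOS power model: switching power scales as αCV²f, while leakage power, V·I_leak, flows regardless of clock frequency. The component values below are illustrative assumptions, not figures from any real process node.

```python
# First-order CMOS power model: dynamic (switching) + static (leakage).
# All parameter values used below are illustrative assumptions.

def chip_power_w(v, f_hz, c_switched_f, activity, i_leak_a):
    """Return (dynamic_watts, static_watts) for a simple CMOS power model."""
    dynamic = activity * c_switched_f * v**2 * f_hz  # alpha * C * V^2 * f
    static = v * i_leak_a                            # leakage, independent of f
    return dynamic, static

# Example: 1.0 V, 100 MHz, 1 nF switched capacitance, 10% activity,
# 5 mA total leakage current.
dyn, stat = chip_power_w(1.0, 100e6, 1e-9, 0.1, 5e-3)

# Halving the clock halves dynamic power, but static power is untouched.
dyn_slow, stat_slow = chip_power_w(1.0, 50e6, 1e-9, 0.1, 5e-3)
```

This is why leakage is so punishing for duty-cycled Things: slowing or gating the clock shrinks only the dynamic term, so in a device that spends almost all of its life idle, the static term sets the floor on power consumption.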
Since Moore’s law can no longer be relied upon to keep reducing system cost and power, applications where cost and power remain too high may need something more radical to bring them down.
Part 1 of this blog is available here
Michael Barkway, Consultant, TTP, www.ttp.com
Michael Barkway has more than 25 years’ experience in the cellular and wireless industries, pioneering the development of one of the world’s first GSM basestation designs through very early development work on custom silicon and software for DECT, 802.11, Bluetooth, 3G, HSDPA and LTE. Michael holds an MBA from Cranfield University and a BSc in Electronics from Manchester.