Enable a digital twin with the right memory


IDC predicts that between 2021 and 2027, the share of new physical assets and processes modelled as digital twins will increase from 5% to 60%. Although the concept of digitalising key elements of an asset's behaviour is not entirely new, various aspects of the technology – from precise sensing to real-time compute to better extraction of insight from large volumes of data – are now aligning to make machines and systems of operations more optimised and to accelerate scale and time to market. In addition, artificial intelligence/machine learning (AI/ML) models will help improve process efficiency, reduce product errors and deliver excellent overall equipment effectiveness (OEE).

Once we understand the challenges and the complexity of these requirements, we will begin to realise how important memory and storage are for enabling a digital twin.

Extracting the right data is the first challenge

Designing a digital twin is not just the isolated sensing of physical characteristics; it also requires modelling the interaction between external and internal subsystems. For example, sensing the harmonic profile of a generator's vibration should also yield insight into how that profile correlates with the physics of the motor, bearings and belts, and with the effects of their interaction. If one truly wants to build a digital twin of a machine, simply installing sensors all around it without any sense of how the measured values interdepend will not produce an accurate twin.

Brownfield adoption complicates this further, because adding new sensors to a machine that is already operating is not simple. In fact, the first stab at a proof of concept is often to add a DIY or embedded board with the minimal interface needed to support sensor-to-cloud data conversion. It is one thing to add the connectivity piece, but quite another to do the actual modelling, where you need to store dynamic data and compare it against your trained model. Moreover, this approach is certainly not the most scalable solution, considering the tens or hundreds of types of systems you may want to model.

Compute will continuously evolve

New processor architectures with built-in convolutional neural network (CNN) accelerators are a good first step towards faster inference compute. These devices can not only ingest analogue signals but also process them in-device, filtering out noise and passing on the values that are relevant to the model. They are well suited to intelligent endpoints, with parallel performance ranging from GFLOPS (billions of floating-point operations per second) up to roughly 20 teraOPS (trillions of operations per second).

Lower-cost, low-power GPUs are also critical: they provide hardware-based ML compute engines that are inherently more agile and offer the compute power for higher teraOPS. The industry is deploying edge-purposed GPUs at under 100 teraOPS, as well as infrastructure-class GPUs at 200 teraOPS and above.
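
To get a feel for where a given workload lands on this scale, it helps to estimate a model's operation count directly. The Python sketch below does this for a stack of convolution layers; the layer shapes, layer count and frame rate are purely illustrative assumptions, not figures from any particular device or model.

```python
# Back-of-the-envelope sketch: estimating the compute demand of a CNN to see
# where it lands on the GFLOPS-to-teraOPS scale. All layer shapes, layer
# counts and frame rates below are illustrative assumptions.

def conv_layer_ops(h_out: int, w_out: int, kernel: int, c_in: int, c_out: int) -> int:
    """Operations for one convolution layer, counting each multiply-accumulate as 2 ops."""
    return 2 * h_out * w_out * kernel * kernel * c_in * c_out

ops_per_layer = conv_layer_ops(112, 112, 3, 64, 64)   # ~0.9 billion ops
ops_per_frame = 50 * ops_per_layer                    # assume ~50 similar layers
fps = 30                                              # target inference rate
required_tops = ops_per_frame * fps / 1e12
print(f"~{required_tops:.1f} teraOPS needed at {fps} frames per second")
```

A modest vision pipeline like this lands in the low-teraOPS range – squarely in the intelligent-endpoint category described above, well below infrastructure-class GPUs.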

Low power DRAM memory is ideal for AI accelerated solutions

As you can imagine, the memory requirement depends on the architecture: multi-core general-purpose CPUs with accelerators may require a memory bus width of x16 or x32 bits, while higher-end GPUs can require I/O up to x256 bits wide.

The direct concern is that if you are moving gigabytes of data to or from external memory for the computation, you need higher bus-width performance from the memory. The table below shows the memory interface performance requirements implied by various INT8 teraOPS targets.
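
As a rough rule of thumb, the required bandwidth scales with the compute rate multiplied by the DRAM traffic generated per operation. The sketch below makes that relationship concrete; the bytes-per-op figure is purely an assumption (real values depend on on-chip caching and how much the model reuses data), not a published specification.

```python
# Illustrative only: DRAM bandwidth implied by an INT8 compute rate, assuming
# each operation generates a fixed amount of external-memory traffic. The
# 0.005 bytes/op figure (one byte of DRAM traffic per 200 ops) is an
# assumption chosen for illustration, not a measured or published value.

def required_bandwidth_gbs(int8_tops: float, bytes_per_op: float) -> float:
    """DRAM bandwidth in GB/s implied by a given INT8 compute rate."""
    return int8_tops * 1e12 * bytes_per_op / 1e9

for tops in (4, 25, 100):
    gbs = required_bandwidth_gbs(tops, 0.005)
    print(f"{tops:>3} teraOPS -> ~{gbs:.0f} GB/s of memory bandwidth")
```

Even under this generous data-reuse assumption, accelerators at the upper end of the edge range quickly outgrow what a narrow memory bus can deliver, which is why the wider, faster interfaces below matter.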

Memory is keeping pace with AI-accelerated solutions by evolving through new standards. For example, LPDDR4/LPDDR4X (low-power DDR4 DRAM) and LPDDR5/LPDDR5X (low-power DDR5 DRAM) deliver significant performance improvements over prior technologies.

LPDDR4 can run at up to 4.2 Gbps per pin and supports bus widths up to x64. LPDDR5X roughly doubles that performance, reaching data rates as high as 8.5 Gbps per pin. In addition, LPDDR5 offers 20% better power efficiency than LPDDR4X. These are significant developments that improve overall performance and keep pace with the latest processor technologies.
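
Peak theoretical bandwidth follows directly from bus width and per-pin data rate. Here is a minimal sketch using the data rates quoted above; the specific width/rate pairings are illustrative configurations rather than particular parts.

```python
# Peak theoretical DRAM bandwidth: bus width (bits) x per-pin data rate (Gbps),
# divided by 8 bits per byte. Data rates are the figures quoted above; the
# width/rate pairings are illustrative configurations, not specific parts.

def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

for name, width, rate in (
    ("LPDDR4  x32 @ 4.2 Gbps", 32, 4.2),
    ("LPDDR4  x64 @ 4.2 Gbps", 64, 4.2),
    ("LPDDR5X x64 @ 8.5 Gbps", 64, 8.5),
):
    print(f"{name}: ~{peak_bandwidth_gbs(width, rate):.1f} GB/s")
```

On these assumptions, moving from a x32 LPDDR4 interface to a x64 LPDDR5X interface roughly quadruples available bandwidth, from about 17 GB/s to about 68 GB/s.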

Embedded storage follows machine learning complexity

It is not enough to think of compute resources as limited only by the raw teraOPS of the processing unit or the bandwidth of the memory architecture. As machine learning models become more sophisticated, the number of parameters in a model is expanding exponentially as well.

Machine learning models and datasets keep expanding in pursuit of better model accuracy, so higher-performing embedded storage is needed as well. Typical managed NAND solutions such as eMMC 5.1 at 3.2 Gb/s are ideal not just for code bring-up but also for remote data storage. Newer technologies such as UFS run roughly seven times faster, at up to 23.2 Gb/s, allowing for more complex models.
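
To see why interface speed matters in practice, consider how long it takes just to load a model's parameters from storage. A rough sketch follows, assuming a hypothetical 500-million-parameter INT8 model and treating the quoted link rates as the bottleneck; sustained NAND throughput will be lower in practice.

```python
# Rough sketch: model load time as a function of storage interface speed.
# The 500M-parameter model is hypothetical; 3.2 Gb/s matches eMMC 5.1 and
# 23.2 Gb/s a UFS high-speed link, both treated here as peak interface
# rates rather than sustained NAND throughput.

def load_time_s(model_bytes: float, link_gbps: float) -> float:
    """Seconds to read model_bytes over a link of link_gbps gigabits per second."""
    return model_bytes * 8 / (link_gbps * 1e9)

model_bytes = 500e6 * 1   # 500M parameters, INT8-quantised: one byte each
for name, gbps in (("eMMC 5.1", 3.2), ("UFS", 23.2)):
    print(f"{name}: ~{load_time_s(model_bytes, gbps):.2f} s to load a 500 MB model")
```

Under these assumptions, the same model loads in roughly 1.3 seconds over eMMC 5.1 but under 0.2 seconds over UFS – a gap that compounds whenever models are updated or swapped in the field.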

These embedded storage technologies are also part of the machine learning resource chain.

Enable a digital twin with the right memory

Industry knows that edge endpoints and devices will generate terabytes of data – not just because of the fidelity of that data, but because ingesting it helps improve the digital models, which is exactly what a digital twin needs.

In addition, code will need to scale, not just to manage the data streams, but also to run the infrastructure of edge compute platforms and to support emerging everything-as-a-service (XaaS) business models.

Digital twin technology has great potential. But if your 'twin' models only the 'nose' or an 'eye' of a face, it will be hard to recognise the twin without the full image of the face. So, the next time you want to talk about a digital twin, know that there is a lot to consider, including what to monitor and how much compute memory and data storage it will need. Micron, a leader in industrial memory solutions, offers a broad range of embedded memory, including our 1-alpha technology-based LPDDR4/X and LPDDR5/X solutions for fast AI compute, and our 176-layer NAND technology embedded in our eMMC and UFS storage solutions. These memory and storage technologies will be key to meeting your computational requirements.
