University of Illinois, Micron enhance speed and battery life of mobile, IoT devices with deep in-memory architecture

7/25/2018 Kim Gudeman, Coordinated Science Lab

Research conducted through a partnership between the University of Illinois at Urbana-Champaign and Micron Technology, Inc., could help your cell phone and IoT devices run applications faster while conserving battery life. The advance comes from the team’s development of a new deep in-memory architecture (DIMA) for NAND flash memory, which provides data storage for most of the world’s mobile devices.

The research team, led by University of Illinois Professor Naresh Shanbhag, recently won a best paper award at the 2018 IEEE International Symposium on Circuits and Systems (ISCAS) for work that applies DIMA to NAND flash, a solid-state memory. DIMA speeds up processing while reducing power consumption because it lets mobile devices do more computation on the device itself rather than offloading work to the cloud, which costs both time and energy. Shanbhag and his graduate students Sujan Gonugondla and Mingu Kang, along with postdoctoral fellow Yongjune Kim, collaborated with Mark Helm and Sean Eilert of Micron Technology, Inc., on the project.

For mobile devices, flash is the best memory option, as it is non-volatile, meaning the device retains the contents of its memory even when the power is off. The downside is that it is notoriously slow, says Shanbhag, Jack S. Kilby Professor of Electrical and Computer Engineering at Illinois and a researcher in the Coordinated Science Lab.

“Flash provides very high storage capacity, but the moment you try to read it, it gets very slow, particularly for data-centric workloads,” he said.

The Illinois-Micron team tackled this problem by developing a DIMA for NAND flash memories. The approach is revolutionary because computing is traditionally done in the digital domain while memory accesses are analog, making the two incompatible: “One is from Mars and the other is from Venus,” Shanbhag says. To put DIMA in flash, the team had to create processing that works in the analog domain, an approach that is faster and more energy-efficient than its digital counterpart.
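The central payoff of computing inside the memory is that far less raw data has to cross the slow interface between the flash array and the processor. The minimal Python sketch below is purely illustrative: the matrix dimensions, the 16-bit result width, and the simple "bits moved" cost model are assumptions made for explanation, not figures from the team’s paper or Micron’s hardware. It contrasts a conventional read-then-compute flow with a DIMA-style flow that forms dot products near the stored data and ships out only the results.

    # Illustrative sketch only: a toy cost model contrasting a conventional
    # read-then-compute flow with an in-memory (DIMA-style) flow that computes
    # dot products near the storage array and returns only the results.
    # All sizes and widths below are hypothetical, not measured values.
    import numpy as np

    rng = np.random.default_rng(0)

    ROWS, COLS = 4096, 512                        # hypothetical stored matrix (e.g., model weights)
    data = rng.integers(0, 2, size=(ROWS, COLS))  # bits held in the flash array
    query = rng.integers(0, 2, size=COLS)         # input vector for a dot-product query

    # Conventional flow: read every stored bit out of the array, then compute digitally.
    bits_moved_conventional = data.size
    scores_conventional = data @ query

    # DIMA-style flow (conceptually): dot products form on the analog read path,
    # so only the per-row results cross the memory interface, not the raw bits.
    scores_in_memory = data @ query               # same math, modeled as happening in-array
    bits_moved_in_memory = ROWS * 16              # assume each result returns as ~16 bits

    assert (scores_conventional == scores_in_memory).all()
    print(f"bits moved, conventional: {bits_moved_conventional:,}")
    print(f"bits moved, in-memory:    {bits_moved_in_memory:,}")
    print(f"reduction in data moved:  {bits_moved_conventional / bits_moved_in_memory:.0f}x")

In actual DIMA hardware the arithmetic happens in analog circuitry on the memory’s read path rather than in software; the sketch only models why moving fewer bits off the array saves both time and energy.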

The DIMA concept was conceived as part of Shanbhag’s research with his collaborators in the SONIC Center, where it was first developed for static random-access memory, or SRAM. Their first paper on the topic, co-authored with Ken Curewitz and Sean Eilert of Micron, was presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing in 2014. In a series of later papers, Shanbhag and his students studied the energy and delay benefits of DIMA for various machine learning algorithms.

In 2017, Shanbhag’s team was awarded a U.S. patent on DIMA titled “Compute Memory.” Over the past year, Shanbhag and his students fabricated three DIMA prototype integrated circuits that demonstrated up to 100 times lower energy-delay product in lab tests than an optimized von Neumann digital architecture. Their SRAM-based DIMA work was featured in the March 2018 issue of IEEE Spectrum, in multiple publications in the IEEE Journal of Solid-State Circuits, the flagship journal of the IEEE Solid-State Circuits Society, and at the 2018 IEEE International Solid-State Circuits Conference.

With flash, the fabrication process, a complex one that must be accurate at the nanometer scale, is another hurdle to overcome, says Helm, a Senior Fellow at Micron and one of the paper’s authors. Still, he believes the payoff will be great and that the technology could be commercially available in devices within three to five years.

“There’s a huge amount of effort in the industry towards pursuing artificial intelligence and machine learning, and DIMA is really going to accelerate its adoption,” Helm said. “The entire world will benefit from this technology.”


This story was published July 25, 2018.