Graduate student architecting more reliable, small-scale processors

2/19/2013 April Dahlquist



Make it smaller.

John Sartori

For decades, this has been the mantra of computing-device manufacturing, and consequently devices have shrunk exponentially. According to Moore's law, the number of transistors that can fit on a chip doubles roughly every two years. As chips keep scaling down, however, the variability in strength among their transistors grows, and that variability can cause errors.

Inspired by Moore's law, PhD student John Sartori is researching ways to redesign processor architecture so that processors remain efficient in the face of errors.

“Let’s approach the design of the processor in a new way so we get even more benefits out of [exploiting error resilience],” Sartori said.

Despite large variations among individual transistors, most processors, which are composed of many transistors, are designed to operate at the speed of their weakest transistors, which is neither energy- nor time-efficient. Design has progressed so that a processor can operate at a higher level, with a mechanism installed to correct the errors of the weaker transistors. While this corrects the problem, Sartori wants to make processing with the error correction mechanism dramatically more efficient. This type of design, called stochastic design, would consume less power and run significantly faster, said Sartori's advisor, Professor Rakesh Kumar.
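The trade-off behind this better-than-worst-case approach can be sketched with a toy model. All of the numbers below are hypothetical, chosen only to illustrate why a faster clock can win even after paying a recovery penalty for occasional errors:

```python
# Toy model (illustrative only): a conventional design clocked for the slowest
# transistor path, versus a stochastic design that clocks faster and replays
# the occasional operation that hits a timing error.

def effective_throughput(cycle_time_ns, error_rate, recovery_cycles):
    """Operations per second after accounting for error-recovery replays."""
    cycles_per_op = 1 + error_rate * recovery_cycles
    return 1e9 / (cycle_time_ns * cycles_per_op)

# Worst-case design: slow clock, essentially no timing errors.
conventional = effective_throughput(cycle_time_ns=2.0, error_rate=0.0,
                                    recovery_cycles=0)

# Stochastic design: a 40% faster clock, but 1% of operations hit a timing
# error and are replayed at a cost of 10 extra cycles each.
stochastic = effective_throughput(cycle_time_ns=1.2, error_rate=0.01,
                                  recovery_cycles=10)

print(f"conventional: {conventional / 1e6:.0f} Mops/s")
print(f"stochastic:   {stochastic / 1e6:.0f} Mops/s")
```

With these made-up parameters the stochastic design comes out ahead despite the corrections; the point of the research is to keep the error and recovery costs low enough that this remains true in real hardware.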

“Given the fact that you are using error correction, [this research looks at] how can you make your architecture, circuit design and program more efficient,” Sartori said. “Let’s take these error corrections and approach design in a totally new way, and we will get a lot more benefits from them.”

The goal is to keep scaling down computer chips while dealing efficiently with the errors that occur. If the cost of ensuring that everything works equals the energy saved by scaling, there is no point in scaling down at all, Sartori said.

Recently, Kumar co-chaired a two-day workshop at the CSL called Silicon Errors in Logic - System Effects (SELSE), where Sartori presented his research.

“We’ve been making new inroads and showing what is possible,” Sartori said. “I think that people will continue on these roads and continue doing research in the area of stochastic computing.”



This story was published February 19, 2013.