Kumar and engineers aspire to build a computer with 40 GPUs
Researchers from the University of Illinois and the University of California, Los Angeles are working together to develop a computer that could speed up calculations almost 19-fold and reduce the combination of energy consumption and signal delay by more than 140-fold.
"The big problem we are trying to solve is the communication overhead between computational units," Kumar explains. According to IEEE Spectrum, "supercomputers routinely spread applications over hundreds of GPUs that live on separate printed circuit boards and communicate over long-haul data links."
Furthermore, these links consume energy and are slow compared with the interconnects within the chips. Because of the mismatch between the chips and the circuit boards, the processors must be housed in packages that limit their inputs and outputs. Sending data from one GPU to another takes "an incredible amount of overhead," says Kumar.
Using a process called thermal compression bonding, the researchers fuse copper pillars with the GPU's copper interconnects, which, according to the Illinois and UCLA researchers, allows 25 times more inputs and outputs to be squeezed into the same space. In addition, Kumar and his team had to consider several constraints in designing the wafer-scale GPU, including "how much heat could be removed from the wafer, how the GPUs could most quickly communicate with each other, and how to deliver power across the entire wafer."
Kumar is also affiliated with the Department of Electrical and Computer Engineering.
Read more from IEEE Spectrum here.