Preventing malicious attacks on computing systems through alteration of environmental control
“It is a good time to do this research,” said Zbigniew Kalbarczyk, CSL research professor. “Fortunately, this type of attack has not happened yet, but we think it is only a matter of time. When the right opportunity occurs someone will strike.”
An example of such an attack can be found on the University of Illinois campus. The Blue Waters supercomputer is well protected and housed in its own building. However, when the computer is active it generates a great deal of heat and relies on a dedicated system that delivers chilled water to absorb the heat from the computing nodes. An attack on the chilled water delivery system would cause the computer to overheat and be forcibly shut down, without the computer itself ever being breached.
“You may not be able to steal data from the computer but you can create an outage,” Kalbarczyk, an electrical and computer engineering research professor, said. “The entire system is shut down; to reboot the system costs money. Any computations in progress will have to be repeated, which costs money.”
Supercomputers aren’t the only targets susceptible to this type of malicious attack – the problem has broader implications as well. The growing proliferation of IoT devices (e.g., smart televisions, printers, or home security cameras) creates an open environment in which attackers can exploit vulnerabilities in these devices as stepping stones for launching attacks against high-value assets. Such attacks could go so far as to compromise major internet providers and indirectly impact customers such as Netflix or CNN.
In addition to improving the security surrounding supercomputers and other potential targets, Kalbarczyk and co-investigator Ravi Iyer, the George and Ann Fisher Distinguished Professor of Engineering, hope to improve detection of malicious attacks by improving the computer system’s ability to differentiate between a random system failure and an attack.
“In some situations, hackers can disguise their attacks as accidental failures,” Kalbarczyk explained. “This is important because if operators think that the system misbehavior is not due to a malicious attack, they won’t investigate or look deeper. This gives the malicious attacker a second and maybe even a third chance to execute the attack.”

Making it more difficult to disguise an attack would force hackers to spend more time and effort preparing each attempt, possibly discouraging them altogether. As part of their efforts to improve computer and overall system security, Kalbarczyk is adamant about the team’s purpose and methods.
“I want to stress one thing: we are not educating hackers,” Kalbarczyk said. “This approach also allows us to uncover vulnerabilities in the current settings of the system – what works and what doesn’t from the point of view of the operators. We can be proactive and make sure that any future attacks will not be successful.”
This project is funded by the National Science Foundation for three years at $500,000.