Nam Sung Kim earns multiple awards for papers, reviews

9/9/2025 Lilli Bresnahan

Nam Sung Kim has been recognized throughout the year for both his research and his expertise. These honors include a Best Paper Award, a Distinguished Artifact Award and a Best Paper Honorable Mention from the IEEE, and a Distinguished Reviewers Award from ASPLOS.


Nam Sung Kim

It has been an award-filled year for Professor Nam Sung Kim. So far, the W.J. ‘Jerry’ Sanders III – Advanced Micro Devices, Inc. Endowed Chair Professor of Electrical and Computer Engineering and his work have been recognized by the Institute of Electrical and Electronics Engineers (IEEE) and the ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), all for his research and expertise in high-performance, energy-efficient processor, memory, storage, network and system architectures.

IEEE Best Paper Award

Earlier this year, Kim’s paper “Exploiting Intel Advanced Matrix Extensions (AMX) for Large Language Model Inference,” published in IEEE Computer Architecture Letters, was selected for a 2024 Best Paper Award. The research addresses the expensive cost of providing large AI model-based services. The hardware to run a single instance of a large AI model can cost between $400,000 and $800,000 due to the large number of graphics processing units (GPUs) required, a price that is out of reach for most individuals and organizations.

However, Kim and his Ph.D. students, Hyungyo Kim and Nachuan Wang, presented a new system that instead uses the latest central processing units (CPUs) and a feature called Advanced Matrix Extensions to cut the number of GPUs required. The result is a system that is cheaper and performs competitively with GPU-based setups.

This is Kim’s second consecutive year earning a Best Paper Award from IEEE. His paper, “Unleashing the Potential of PIM: Accelerating Large Batched Inference of Transformer-Based Generative Models,” was published in IEEE Computer Architecture Letters in 2023. That paper proposes a technological solution, called AttAcc, to reduce the bandwidth used by generative AI models such as ChatGPT. The work uses the processing-in-memory (PIM) technology that Kim created at Samsung and shows how PIM can speed up large language models.

IEEE Distinguished Artifact Award and Best Paper Honorable Mention

Also from the IEEE, Kim and his student Jaeyoung Kang earned a 2025 Distinguished Artifact Award and Best Paper Honorable Mention at the IEEE International Symposium on Performance Analysis of Systems and Software in Ghent, Belgium, for their paper “Intel In-Memory Analytics Accelerator: Performance Characterization and Guidelines.”

This paper addresses the high cost of processing big data, a result of central processing unit performance gains slowing down. Kim said Intel integrated the In-Memory Analytics Accelerator (IAA) directly into its 4th-generation Xeon CPUs. IAA is a built-in tool designed to help the processor handle big data tasks faster, such as analyzing information and compressing data.

“Because it's on-chip, it offers fast, low-power processing with simpler programming than external accelerators,” Kim said. “This paper details IAA's architecture, evaluates its performance across various applications, and provides guidelines for its effective use.”

ASPLOS Distinguished Reviewers Award

In April, the ACM International Conference on Architectural Support for Programming Languages and Operating Systems presented Kim with an ASPLOS 2025 Distinguished Reviewers Award for his work evaluating papers. He was one of 22 distinguished reviewers.

ASPLOS is the leading academic forum for multidisciplinary computer systems research spanning hardware, software and their interaction. It focuses on computer architecture, programming languages, operating systems, and associated areas such as networking and storage.

According to Kim, ASPLOS is one of the four most respected architecture conferences, with an exceptionally low acceptance rate. The committee reviews submitted papers and decides whether they are accepted, rejected or returned for revision.

---

Nam Sung Kim is an Illinois Grainger Engineering professor in the Department of Electrical and Computer Engineering and the Siebel School of Computing and Data Science. He is affiliated with the Coordinated Science Laboratory and the National Center for Supercomputing Applications. Kim holds the W.J. ‘Jerry’ Sanders III – Advanced Micro Devices, Inc. Endowed Chair.

“Exploiting Intel Advanced Matrix Extensions (AMX) for Large Language Model Inference” was co-authored by Hyungyo Kim and Nachuan Wang, both PhD candidates in Electrical and Computer Engineering at Illinois Grainger Engineering; Gaohan Ye, a Master of Science student in Electrical and Computer Engineering at Illinois Grainger Engineering; and Amir Yazdanbakhsh, a research scientist at Google DeepMind.

“Unleashing the Potential of PIM: Accelerating Large Batched Inference of Transformer-Based Generative Models” was co-authored by Jaewan Choi, a PhD student at the Graduate School of Convergence Science and Technology, Seoul National University; Jaehyun Park, a PhD student in the Department of Intelligence and Information at Seoul National University; Kwanhee Kyung of Seoul National University; and Jung Ho Ahn, the Dean of the Graduate School of Convergence Science and Technology at Seoul National University.

“Intel In-Memory Analytics Accelerator: Performance Characterization and Guidelines” was co-authored by Jaeyoung Kang, a PhD candidate, and Qirong Xia, a PhD student, both in Computer Engineering at Illinois Grainger Engineering; Ipoom Jeong, an assistant professor in the Department of System Semiconductor Engineering at Yonsei University; and Yongjoo Park, an assistant professor in Computer Science at Illinois Grainger Engineering.


This story was published September 9, 2025.