Four faculty members from the Illinois Grainger College of Engineering have received a total of $475,000 in grants to support projects through the Amazon-Illinois Center on Interactive Conversational Experiences (AICE).
A collaboration between Amazon and the University of Illinois, the AICE Center aims to develop intelligent conversational systems that demonstrate contextual understanding and emotional intelligence, allow for personalization, and can interpret non-verbal communication, all while remaining ethical and fair.
Bin Hu, an assistant professor of electrical and computer engineering, has received funding to support “Building AI Engineers via Integrating LLMs and Domain Knowledge.” Hu’s project will integrate large language models (LLMs) with domain-specific knowledge from engineering disciplines including electrical, mechanical, aerospace, civil, and biomedical engineering to produce general-purpose AI engineers. By combining the cutting-edge capabilities of LLMs with the deep, nuanced understanding inherent in each discipline, the project aims to develop AI systems with unparalleled problem-solving abilities and innovative potential.
Jiaxuan You, an assistant professor of computer science, has received funding to support “GRAG: Enhance RAG Applications with Graph-structured Knowledge.” Current retrieval-augmented generation (RAG) systems powered by LLMs largely overlook the rich network of relationships embedded within external knowledge sources. You’s project proposes a novel approach that enables RAG systems to leverage this graph-structured information, aiming to significantly improve the accuracy, reasoning capabilities, and overall effectiveness of LLM-based RAG systems.
Yu-Xiong Wang, an assistant professor of computer science, has received funding to support two projects. “Mastering Extensive Environmental Interactions: Building Agentic Foundation Models with Reinforcement Learning” aims to overcome the inability of current foundation models to handle complex, long-term challenges. His research focuses on three key advancements: enabling efficient decision-making over extended periods; creating a versatile framework for tackling diverse, long-horizon challenges; and addressing challenging scenarios involving continuous actions, abstract reasoning, and internal sensing.
Wang’s second project, “Long-form Video Understanding via Active Token Compression,” introduces a new paradigm that minimizes the number of tokens processed within a vision-language model’s architecture. His work tackles the dual challenges of precision and efficiency in long-video understanding through active compression, introducing a novel token planner that produces compressed, carefully selected tokens for long videos.
Liangyan Gui, a research assistant professor of computer science, has received funding to support “Synergizing LLM and World Model for Zero-Shot 3D Human-Object Interaction Generation.” Gui’s work explores the zero-shot generation of realistic 3D human-object interactions from arbitrary text instructions by combining high-level semantic planning with low-level physics-based control. Her research will utilize a decoupled framework that separates semantic understanding from the dynamics of how the interaction unfolds in a realistic environment.