2/11/2026 Cassandra Smith
University of Illinois professor Klara Nahrstedt received $275,000 from the National Science Foundation to develop streaming technology for AI-generated neural video content. Her research focuses on enabling immersive experiences, such as photorealistic cleanroom environments where semiconductor students can work with various instruments.
Imagine first-year semiconductor students preparing for their careers by donning VR headsets and being transported into a cleanroom. They find themselves surrounded by photorealistic scientific instruments, watching procedures unfold from multiple angles as if they were really there.
This technology is at the heart of the latest research project from Klara Nahrstedt of the University of Illinois Grainger College of Engineering, which was awarded $275,000 by the National Science Foundation. She is tackling the following problem: how do we stream photorealistic content that does not exist as traditional video, but as a neural video representation called a Neural Radiance Field (NeRF), providing a 360-degree view? As generative artificial intelligence (generative AI) transforms how visual content is created and represented, Nahrstedt and her team are leading the charge on next-generation streaming technologies.
She recently gave a talk called “Next Generation of Video Streaming Services: Are We There Yet?” The short answer is no.
Generative AI is making video streaming services much richer. With generative AI, visual content can be represented as a neural video, using machine learning models and voxel grid features, to produce photorealistic 360-degree views of cleanroom environments. “Let’s say you as an instructor have a scientific instrument, and you take 2D pictures (or videos) of it from different sides with your mobile phone. Now, you won’t have all the pictures around it, but you will use generative AI to create a neural video content from these 2D pictures (videos) that will be stored as a ML model and features on a server. When students need to explore the instrument in the cleanroom, the neural video content gets streamed, delivered and rendered to students’ VR glasses, and they will see photorealistic 360 views of the instrument as they move their head,” Nahrstedt said. Neural network layers on the VR device render pixels in real time, filling in the gaps between the captured pictures to create a full 360-degree view of the instrument.
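To give a flavor of how a NeRF turns a learned scene representation into pixels, the sketch below shows the core volume-rendering step: sample points along a viewing ray, query a field for density and color, and alpha-composite the samples front to back. This is a minimal illustration, not Nahrstedt's actual system; the `toy_field` scene (a soft gray sphere) is entirely hypothetical, standing in for a trained model or voxel grid.

```python
import numpy as np

def toy_field(points):
    """Stand-in for a trained model / voxel grid: returns (density, rgb).
    A real NeRF would evaluate an MLP or interpolate voxel-grid features."""
    # Hypothetical scene: a soft sphere of radius 1 at the origin.
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 1.0, 5.0, 0.0)   # opacity per unit length
    rgb = np.full(points.shape, 0.7)           # uniform gray color
    return density, rgb

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Alpha-composite color along one ray, front to back."""
    t = np.linspace(near, far, n_samples)
    dt = (far - near) / n_samples
    points = origin + t[:, None] * direction           # (n_samples, 3)
    density, rgb = toy_field(points)
    alpha = 1.0 - np.exp(-density * dt)                # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)        # final pixel color

# A ray through the sphere picks up its color; a ray that misses stays black.
hit = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
miss = render_ray(np.array([0.0, 3.0, -2.0]), np.array([0.0, 0.0, 1.0]))
print(hit, miss)
```

Rendering many such rays, one per pixel, for whatever direction the headset is facing is what lets the student see the instrument from any angle, including views never captured by the instructor's phone.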
We know how to stream 2D and 3D pixel-based video representations using, for example, the DASH (Dynamic Adaptive Streaming over HTTP) protocol standard, but neural video needs more care and study when it comes to compressing, streaming, and rendering these representations. There is also the challenge of serving diverse devices such as VR/AR glasses, mobile devices, laptops, and large screens. Device diversity affects the storage, streaming, delivery, and rendering of neural video as well.
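The core adaptation idea behind DASH can be sketched simply: the client measures recent throughput and picks the highest-bitrate representation that safely fits. The bitrate ladder and safety margin below are hypothetical values for illustration; a neural-video analogue might choose among model/feature quality levels rather than pixel bitrates, which is part of what makes the research problem open.

```python
# Hypothetical bitrate ladder (kbps), as a server might advertise in a
# DASH manifest; real services publish their own rungs.
LADDER_KBPS = [400, 1200, 3000, 6000]

def pick_representation(throughput_kbps, safety=0.8):
    """Throughput-based rate adaptation: choose the largest rung that
    fits within a safety fraction of the measured throughput."""
    budget = throughput_kbps * safety
    viable = [rate for rate in LADDER_KBPS if rate <= budget]
    # Fall back to the lowest rung if even that exceeds the budget.
    return max(viable) if viable else LADDER_KBPS[0]

print(pick_representation(5000))   # ample bandwidth -> a high rung
print(pick_representation(300))    # constrained link -> lowest rung
```

For neural video, the open question is what the "rungs" even are: smaller models, coarser voxel grids, or fewer features all change quality in ways that do not map neatly onto pixel bitrates.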
“How do you satisfy every customer? That’s what all these streaming companies want,” said Nahrstedt, Swanlund Chair and professor in the Siebel School of Computing and Data Science and the Coordinated Science Laboratory, who also serves as the Discovery Partners Institute’s R&D director.
There are several real-world applications for this technology. Museums can use it to give virtual tours of their artifacts, or visitors could explore the inside of a restaurant.
Nahrstedt and her team’s primary focus is the streaming and delivery of neural videos for academic environments such as cleanrooms, enabling beginning students to train first in virtual cleanroom settings. Students can see instruments and procedures up close and learn their functions and characteristics before going to physical cleanrooms to conduct experiments.
As Nahrstedt and her team work to help train the next generation of physical scientists, they are also looking at other areas this technology could touch. While industry members focus on currently profitable solutions, Nahrstedt's team is exploring emerging needs. “For me, it’s just fun to think about these problems,” said Nahrstedt.
Right now, Nahrstedt’s team is focused on perfecting the experience for individual users, solving the technical challenges of connecting one person with VR glasses or a laptop to the neural video streams. The next step is to scale up to classroom settings where many students simultaneously explore cleanroom simulations.
From that cleanroom simulation to countless applications yet to be imagined, Nahrstedt's work is ensuring that streaming technology will be ready when neural video's full potential arrives.
Affiliations within The Grainger College of Engineering
Klara Nahrstedt is the Grainger Distinguished Chair in Engineering and Swanlund Endowed Chair in the Siebel School of Computing and Data Science. She is also the director of the Research & Development team within the Discovery Partners Institute.