Watch and learn: Human-led videos enable robot tool use

August 13, 2025 | Jeni Bushman

Researchers from The Grainger College of Engineering have introduced a new framework for training autonomous robots using video demonstrations. Their innovation makes data collection more accessible while transferring tool-use knowledge from humans to robots.


A robot walks into a bar and unracks a wine bottle.

It’s not the setup to a joke, but a new possibility for industries like hospitality and manufacturing thanks to recent robot-learning insights from researchers in The Grainger College of Engineering at the University of Illinois at Urbana-Champaign.

Autonomous robots perform tasks with little to no human intervention and are increasingly used in fields such as transportation, manufacturing, and healthcare. Historically, these autonomous functions were programmed by expert engineers in a time-intensive process limited to predetermined tasks and applications. More recently, breakthroughs in machine learning have enabled robots to learn from human demonstrations. Most of these robots are taught through teleoperation, a method in which human operators issue commands remotely. Although teleoperation is well suited to single-purpose applications like self-driving vehicles and surgical robots, it is expensive and requires specialized equipment, making it less suitable for the agile, variable task patterns of day-to-day human movement.

To address this challenge, Illinois Grainger engineers in the lab of Katie Driggs-Campbell, an assistant professor of electrical and computer engineering, developed a novel framework for teaching robots dynamic, real-world tasks without resource-intensive methods like teleoperation. Their findings represent an advance toward general-purpose, adaptable, and accessible robotic tool use in real-life environments.

Katie Driggs-Campbell, assistant professor of electrical and computer engineering. Photo credit: Katie Driggs-Campbell.

“Despite a lot of hype around robots, they are still limited in where they can reliably operate and are generally much worse than humans at most tasks,” Driggs-Campbell said. “We’re interested in designing frameworks and algorithms that will enable robots to easily learn from people with minimal engineering effort.”

Members of Driggs-Campbell’s lab used human-led 3D demonstration videos to teach robots how to manipulate household tools like hammers, spatulas, and frying pans. Videos on the lab’s website show robots hammering nails, scooping meatballs, flipping an egg, re-racking a wine bottle, and kicking a soccer ball.

“Our approach was inspired by the way children learn, which is by watching adults,” said Haonan Chen, a graduate student in Driggs-Campbell’s lab and the lead author of the paper. “They don't need to operate the same tool as the person they’re watching; they can practice with something similar. We wanted to know if we could mimic that ability in robots.”

The group’s training method generalized well across real-world tool-use tasks and achieved higher success rates, faster data collection, and greater efficiency than existing methods, including teleoperation and robotic hands. Their technique offers a scalable, intuitive, and low-cost avenue for data collection that requires neither specialized hardware nor operator expertise.
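The paper frames this approach as imitation learning from human tool usage. As a loose illustration of the general idea behind this kind of training, and not the lab’s actual pipeline, the sketch below fits a simple behavior-cloning policy that maps observations extracted from demonstration videos to tool poses. Every name and dimension here (PolicyNet, train_bc, the 64-dimensional observation, the 6-DoF pose) is a hypothetical stand-in.

```python
# Minimal behavior-cloning sketch (illustrative only, not the published method):
# learn a policy that maps observations to tool poses from demonstration pairs.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Maps a flattened observation to a 6-DoF tool pose (position + rotation)."""
    def __init__(self, obs_dim: int = 64, action_dim: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def train_bc(demos, epochs: int = 100, lr: float = 1e-3) -> PolicyNet:
    """demos: list of (observation, tool_pose) tensor pairs from human videos."""
    policy = PolicyNet()
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    obs = torch.stack([o for o, _ in demos])
    acts = torch.stack([a for _, a in demos])
    for _ in range(epochs):
        opt.zero_grad()
        # Regress the demonstrated tool poses: the essence of behavior cloning.
        loss = nn.functional.mse_loss(policy(obs), acts)
        loss.backward()
        opt.step()
    return policy

# Toy usage with random stand-in data; real demos would come from 3D video tracking.
demos = [(torch.randn(64), torch.randn(6)) for _ in range(256)]
policy = train_bc(demos)
```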

“These tasks were chosen to show the ways in which we’re bringing in-home robot assistants closer to reality,” Driggs-Campbell said. “But our work can be applied to a variety of settings, like manufacturing, cleaning, and healthcare.”

While promising, the newly proposed framework is limited in its ability to identify and locate objects of interest in the real world. Going forward, the Illinois Grainger engineers aim to improve the reliability of the robot’s perception pipeline through self-correction mechanisms and enable the robot to adapt its technique when faced with common real-life challenges like tool defects and slippage.

“If a robot sees me using one type of pen, can it transfer that knowledge to pens of different shapes and sizes?” Chen said. “Eventually we would like it to show some generalization capability.” 

The study, “Tool-as-Interface: Learning Robot Policies from Human Tool Usage through Imitation Learning,” has been accepted for publication at the Conference on Robot Learning and is available online. DOI: 10.48550/arXiv.2504.04612

Illinois Grainger Engineering Affiliations

Katie Driggs-Campbell is an Illinois Grainger Engineering assistant professor in the Department of Electrical and Computer Engineering. She is also affiliated with the Coordinated Science Laboratory.


