CSL researcher works to program robots with common sense


Written by Allie Arp, CSL

Personal robots have long been a staple of science fiction, but they are not yet a reality. One of the major roadblocks is that robots lack a key trait: common sense. CSL Assistant Professor Saurabh Gupta is leading a newly funded research project, “Scaling Up Robot Learning by Understanding Internet Videos of Humans,” which explores whether videos can provide robots with this kind of intuitive knowledge.
CSL Assistant Professor Saurabh Gupta

“One of the reasons we don’t have robots in daily life is because they lack common sense, for example, the knowledge that drawers can be opened by pulling on handles,” said Gupta, an electrical and computer engineering professor. “We’re looking at how to build a large repository of common sense so if you put a robot in a realistic environment, it can actually do something.”

Humans draw on prior experience to interact with unfamiliar surroundings. For instance, a person who has visited a few shopping malls can likely find the bathrooms or a water fountain efficiently in a new one. Or, when in an unfamiliar kitchen, they intuitively understand how to open different types of drawers. Gupta would like to give robots this same ability.

Describing his research as “the intersection of computer vision and robotics,” Gupta is combining classical techniques in robot learning with computer vision techniques for understanding videos.

“In computer vision, there’s a focus on trying to understand videos, but we are missing a clear task definition,” said Gupta. “In robotics, we are missing a source of the basic knowledge that we as humans have. If we think of the two problems together, we get a goal for video understanding, as well as a starting point for building generalizable robots.”

Gupta and his team have three points of focus to help them achieve this goal. The first is to analyze videos to understand the different states that objects can be in. The second is to learn high-level plans for solving tasks. The last is to control the robot so that it carries out those plans.

“We are very far from personal robots, but this research is a step in that direction,” said Gupta.

The National Science Foundation is funding this project.



This story was published October 15, 2020.