Adobe, Ma reconstruct 3D scenes from 2D images

2/21/2013 Elise King, CSL Communications

CSL Adjunct Professor Yi Ma and his graduate students recently received an $8,000 gift from Adobe Systems Incorporated to pursue a joint research project on reconstructing 3D models of objects, primarily urban structures, from 2D images. Ma is an adjunct associate professor of electrical and computer engineering.

Interest in reconstructing 3D models of urban scenes from 2D images has grown tremendously in recent years. However, researchers have found that traditional techniques are not well suited to these kinds of scenes, which is why CSL researchers are developing a new approach to the problem.

Using the traditional structure-from-motion (SFM) approach, a computer reconstructs a 3D model from 2D images by matching local features such as points or edges. For example, if you took several photos of a house from different angles, a computer could match the same corners and edges of the house in each image in order to reconstruct a 3D model of the building. But what if the building had several repetitive features, such as many identical windows? Reliably matching corners and edges becomes difficult when distinct features look nearly identical or the structure is symmetric.
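To make the contrast concrete, the sketch below shows the kind of local-feature pipeline the traditional approach relies on: keypoints are detected and matched across two views, and the relative camera motion is estimated from the matches. It is an illustrative example using OpenCV, not the CSL or Adobe code; the image file names, feature count, and camera intrinsics are placeholder assumptions.

```python
# Minimal sketch of classical local-feature matching for structure from motion.
# Image paths and the focal length below are hypothetical.
import cv2
import numpy as np

img1 = cv2.imread("house_view1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
img2 = cv2.imread("house_view2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)          # local feature detector/descriptor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force descriptor matching; repetitive facades (identical windows)
# yield many near-identical descriptors, which is where this step breaks down.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate relative camera pose from the matched points (assumed intrinsics K).
K = np.array([[800.0, 0.0, img1.shape[1] / 2],
              [0.0, 800.0, img1.shape[0] / 2],
              [0.0, 0.0, 1.0]])
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("Recovered rotation:\n", R)
print("Translation direction:", t.ravel())
```

When many windows look alike, the matching step produces ambiguous correspondences, and the pose estimate degrades; that is exactly the failure mode the new approach is meant to avoid.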

The new approach taken by CSL researchers exploits the strong structural regularities of urban scenes. For example, because the urban environment is often rich in planar regions, CSL researchers developed an approach that recovers structure and motion directly from one or more large planes in the scene. In addition, they propose to replace local features with a new type of global image feature called transform invariant low-rank textures (TILT), which has several desirable properties: it is holistic, meaning that it encodes accurate global geometric information; it remains reliable even when the camera's perspective distorts the scene; and it is robust to corruptions such as image noise.
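At the heart of TILT is the idea of splitting a rectified image patch into a low-rank texture plus a sparse error term. The sketch below illustrates that inner low-rank-plus-sparse decomposition for a fixed transform, using a standard augmented-Lagrangian scheme; it is a simplified stand-in, not the published TILT algorithm, and the parameter choices and toy data are assumptions for illustration.

```python
# Simplified sketch of the low-rank + sparse decomposition underlying TILT
# (for a fixed image transform): split a patch D into a low-rank texture A and
# a sparse error E by approximately solving
#   minimize ||A||_* + lam * ||E||_1   subject to   D = A + E.
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular-value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_sparse(D, lam=None, mu=None, iters=200, tol=1e-7):
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))                 # common default weight
    mu = mu or 0.25 * m * n / (np.abs(D).sum() + 1e-12)   # illustrative choice
    A = np.zeros_like(D); E = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        A = svd_shrink(D - E + Y / mu, 1.0 / mu)   # update low-rank texture
        E = shrink(D - A + Y / mu, lam / mu)       # update sparse error
        R = D - A - E
        Y += mu * R                                # dual (multiplier) update
        if np.linalg.norm(R) <= tol * np.linalg.norm(D):
            break
    return A, E

# Toy usage: a rank-1 "texture" corrupted at a few entries (hypothetical data).
rng = np.random.default_rng(0)
texture = np.outer(rng.random(40), rng.random(60))
corrupt = texture.copy()
idx = rng.choice(texture.size, 50, replace=False)
corrupt.flat[idx] += 1.0
A, E = low_rank_sparse(corrupt)
print("relative recovery error:", np.linalg.norm(A - texture) / np.linalg.norm(texture))
```

In the full method, this decomposition is coupled with a search over image transforms, so that the recovered low-rank component captures the global, regular structure of a facade rather than isolated points or edges.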

This new approach will allow computers to reconstruct 3D models without relying on local features such as points and edges, and can therefore overcome problems faced by the traditional SFM approach.

One of Ma’s graduate students, Zihan Zhou, worked on this project with Adobe last summer, when he held an internship at the company. “We continued to work on this after my internship,” Zhou said, noting that the goal of this project is “to facilitate long-term collaboration between our group and Adobe.”

Zhou, an ECE graduate student, said he hopes this project will benefit the next generation of Adobe products, and believes it will specifically benefit After Effects, software used to create motion graphics and visual effects.

“Adobe has very good products in imaging,” Zhou said. “It’s very good to see how this (research) is used in actual products.”


This story was published February 21, 2013.