Bresler receives grant for learning of sparse representations

9/24/2013 Katie Carr, CSL

CSL Professor Yoram Bresler was recently awarded a 3-year, $495,000 National Science Foundation grant to conduct research on theory and algorithms for learning sparse representations for large-scale problems.

According to Bresler, there has been an explosion in the quantity of data generated, for example by user inputs on the web or by millions of video cameras worldwide, all of which needs to be analyzed, processed, and organized.

Yoram Bresler
“We need a model for the data, to classify it, search and query repositories, identify, measure, quantify and extract useful information,” Bresler said. “All of that requires mathematical models, which in the past used to be postulated based on theory, physical models or hypotheses. But now there’s a much more effective way to learn the models: directly from the data itself.”

Bresler and his team, including postdoctoral scholar Kiryung Lee and graduate students Saiprasad Ravishankar, Luke Pfister and Bihan Wen, will be working to develop a data-driven learning algorithm for sparse representation. They will be addressing the limitations of current approaches, as well as developing new formulations, algorithms and theories that will be applicable to large-scale problems.

“We hope to develop a fundamental theory that explores the limitations of the approach and develop algorithms that can be applied in a variety of situations. In particular, we plan to develop applications in medical imaging that can hopefully provide significant performance gains, and ultimately improve patient outcomes,” Bresler said.

According to Bresler, if you can create a mathematical model from the data, you can be more flexible and automatically adapt the model to specific data, or to changes in the nature of the data. Methods for learning sparse representation to date have been useful, but limited, in this regard.
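To make the idea of learning a sparse model from data concrete, here is a toy sketch (not the project's algorithm): a dictionary D is learned so that each data vector is approximated by D times a sparse coefficient vector. The sketch alternates a hard-thresholding sparse-coding step with a MOD-style least-squares dictionary update; all sizes and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(D, Y, k):
    """Least-squares fit, then keep only the k largest-magnitude
    coefficients in each column (a crude sparse-coding step)."""
    X = np.linalg.lstsq(D, Y, rcond=None)[0]
    for j in range(X.shape[1]):
        small = np.argsort(np.abs(X[:, j]))[:-k]
        X[small, j] = 0.0
    return X

def learn_dictionary(Y, n_atoms, k, n_iter=20):
    """Alternate sparse coding with a MOD-style dictionary update."""
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = sparse_code(D, Y, k)
        # MOD update: best least-squares dictionary for the current codes,
        # computed via the pseudo-inverse for numerical stability.
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    # Recompute the codes against the final dictionary.
    X = sparse_code(D, Y, k)
    return D, X

# Synthetic data that genuinely has a sparse structure: each of the 200
# signals is a combination of 3 atoms from a hidden 20x30 dictionary.
D_true = rng.standard_normal((20, 30))
X_true = np.zeros((30, 200))
for j in range(200):
    idx = rng.choice(30, size=3, replace=False)
    X_true[idx, j] = rng.standard_normal(3)
Y = D_true @ X_true

D, X = learn_dictionary(Y, n_atoms=30, k=3)
err = np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)
print(f"relative reconstruction error: {err:.3f}")
```

Because the model is fit to the data itself, re-running the same procedure on a new dataset adapts the dictionary automatically, which is the flexibility the data-driven approach offers over hand-postulated models.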

“Once you get to large data, the computational requirements of current methods become infeasible, whereas with the approach we’re pursuing, it will be possible to scale to large problems at a lower cost, and produce practical solutions,” Bresler said.

Bresler sees the theories they develop being applied in areas such as the medical field, where they could help reduce the radiation dose required to produce accurate CT images or improve the accuracy of dynamic imaging across medical modalities, thus improving the diagnosis and treatment of disease. The work could also be applied to video and audio database searches, and to large-scale or high-speed image and data compression, such as in atmospheric modeling, remote sensing, or radio astronomy.

“It’s an exciting area of research and is an extension of work that I did with my students in the mid-1990s. That work, introducing the main ideas of sparsity and compressed sensing, actually preceded much of the recent work in this field,” Bresler said.

This story was published September 24, 2013.