11/18/2015 Ashish Valentine, ECE ILLINOIS
Atlas Wang, advised by CSL Professor Tom Huang, developed a platform called DeepFont for Adobe products, which can scan pictures and determine the fonts of text inside them.
A designer working for a boutique ad firm is stumped, trying to decide on the perfect font to complement her new banner. Ruminating on the problem on her bus ride home, she suddenly spies a concert poster on the street featuring just the right font.
The bus stops for a few seconds to let out a passenger, and she whips out her phone, snapping a photo of the poster. An app immediately responds with the name of the font, and she works it into her ad that very evening.
This software, described by Adobe principal scientist Hailin Jin at the company’s 2015 MAX conference as “Shazam for Fonts,” is DeepFont. What’s more, this isn’t a hypothetical technology: it has already shipped with Adobe’s latest editions of Photoshop and Typekit.
Atlas Wang
Before its release, a demo of the software was presented to an audience of design professionals at the MAX conference to roaring applause. Since then, DeepFont has amassed several endorsements from designers and firms around the world, just a few of which are visible on its rapidly expanding hashtag on Twitter.
Font recognition is a huge need for designers, who traditionally rely on professionals who charge high rates and typically take 45 minutes to an hour to reliably identify a font.
“I’m happy to resolve a need that the design community has been feeling for so long,” Wang said. “There’s definitely a sense of accomplishment when people like what you’ve developed, that’s a great feeling. I’m very excited too, we’ve been invited to several conferences to present our work, and academics are already expressing their appreciation.”
The software itself works using deep learning, a branch of machine learning that builds algorithms loosely modeled on the human brain: networks that continuously learn from examples, recognize patterns, and improve their performance with experience.
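The article doesn’t include code, but the core building block of such a network, the convolution that scans an image for visual patterns, can be sketched in a few lines. This is an illustrative toy (the hand-chosen `vertical_edge` filter and 8×8 “glyph” are assumptions), not Adobe’s actual model, which learns thousands of filters from labeled font images:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and take a
    weighted sum at each position -- the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity applied after each layer so the network can stack features."""
    return np.maximum(0, x)

# A toy 8x8 "glyph": a single vertical stroke, like part of a letter.
glyph = np.zeros((8, 8))
glyph[:, 3] = 1.0

# A hand-chosen vertical-edge filter; a real deep network *learns*
# filters like this one automatically during training.
vertical_edge = np.array([[-1.0, 0.0, 1.0]] * 3)

feature_map = relu(conv2d(glyph, vertical_edge))
# The map responds strongly beside the stroke -- the kind of low-level
# pattern a font classifier builds higher-level letterform features on.
print(feature_map.max())  # -> 3.0
```

Stacking many such layers, each detecting patterns in the previous layer’s output, is what lets a deep network move from edges to strokes to whole letterforms, and finally to a font label.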
Screen captures from DeepFont, identifying the font that handwritten text comes closest to. The user photographs text; the app then analyzes it and proposes a list of fonts that may match.
In a recently released paper, Wang and his co-authors also demonstrated DeepFont’s ability to account for text being photographed at an angle, or being partially covered by shadows. The algorithm now powers a new feature in Adobe products like Photoshop, and is compact enough to run on mobile phones.
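Robustness of this kind is commonly achieved by training the network on synthetically degraded copies of clean font images, so it learns to ignore shadows and perspective. A minimal sketch of that idea (the `augment` helper and its parameters are hypothetical, not taken from the DeepFont paper):

```python
import numpy as np

def augment(glyph, rng):
    """Simulate a 'shadowed, off-angle' photo of text: darken a random
    horizontal band (a fake shadow) and shift the columns slightly (a
    crude stand-in for perspective). Illustrative only."""
    out = glyph.copy()
    top = int(rng.integers(0, out.shape[0] - 2))
    out[top:top + 2] *= 0.5              # fake shadow band
    shift = int(rng.integers(-2, 3))     # small horizontal offset
    return np.roll(out, shift, axis=1)

rng = np.random.default_rng(42)
clean = np.ones((8, 8))                  # toy all-white glyph image
noisy = augment(clean, rng)
print(noisy.shape, noisy.min())          # same shape, darkened band present
```

Feeding many such randomized variants of each training image is a standard way to make a classifier tolerate real-world photo conditions without needing photos of every font in every lighting situation.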
“The chance to work with Adobe on DeepFont was incredible, it was definitely one of the best internship experiences I’ve had,” Wang said. “Besides the game-changing nature of what we were developing, the people at Adobe were always very friendly and passionate about their work. The teams were small and specialized enough that I could even have frequent lunch meetings with the development head of Photoshop. She could tell me exactly how she wanted DeepFont to look and feel, and I could work on her specifications the same day.”
Thomas Huang
“Atlas is amazingly good at multitasking,” Huang said. “In addition to DeepFont, Atlas is also deeply involved in studying image aesthetics, such as how the brain evaluates images, and building computational models of the brain, and he’s also working on a project with Statistics Professor Michelle Wang on using image recognition to determine the nutritional value of images of food. He is a brilliant young man, making many contributions to science and technology.”