CSL's SEAWARE research seeks to fill the niche of 360° video streaming


Allie Arp, CSL

Every month seems to bring a new streaming service to television consumers. From Discovery+ to Peacock to originals like Netflix and Hulu, one would think that viewers have all the options they could ever want. However, a group of CSL researchers has found streaming to be an unexplored niche for their 360° video research, and the group is being recognized for its efforts.

“SEAWARE: Semantic-Aware View Prediction System for 360° Video Streaming” is a paper about the joint project with the University of Massachusetts Amherst focusing on developing a 360° streaming system that provides a better quality of experience for its viewers. The paper recently received the IEEE International Symposium on Multimedia Best Paper Award.

“Winning the award for SEAWARE is very important to us since it shows the interest of the multimedia system community in this area and the importance of new ideas and consideration around the 360° video content distribution,” said Klara Nahrstedt, CSL Director and Ralph and Catherine Fisher Professor of Computer Science. “Results for 360° video content distribution systems have impact not only for this medium, but also for other 360° media such as virtual reality and augmented reality content and their distribution over the internet.”

Lead author Jounsup Park believes their work stood out from other research in the multimedia streaming area because of its system-wide approach. Many researchers focus on how deep-learning technology can improve a system’s performance without considering the computing load, but the CSL group worked on algorithms that could run on mobile devices rather than high-performance computers. 360° videos are stored on a server, and streaming them incurs a significant delay because of the amount of data being transferred. Park, along with a number of graduate and undergraduate students, has developed a way to predict user behavior and overcome the delay by requesting future video data in advance.
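The prefetching idea described above can be sketched in a few lines. This is an illustrative toy, not SEAWARE's actual interface: `fetch_segment`, the per-segment granularity, and the look-ahead window are all assumptions.

```python
def fetch_segment(index):
    # Stand-in for a network request to the 360-degree video server.
    return f"segment-{index}"

def stream(total_segments, lookahead=3):
    """Play segments in order while requesting future ones ahead of time."""
    buffer = {}   # segments fetched but not yet played
    played = []
    for t in range(total_segments):
        # Request segments up to `lookahead` positions ahead of the playhead,
        # so network delay is hidden behind ongoing playback.
        for i in range(t, min(t + lookahead + 1, total_segments)):
            if i not in buffer:
                buffer[i] = fetch_segment(i)
        # The current segment is already buffered when playback reaches it.
        played.append(buffer.pop(t))
    return played
```

The key point is that the request for segment `t + lookahead` overlaps with the playback of segment `t`, so the viewer never waits on the network once the buffer is warm.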

“Users are not watching the whole video at the same time. They pick the view, but we can predict where they will look,” said Jounsup Park, a CSL postdoctoral researcher. “What we have done in this project is analyze the objects moving in the video that could draw the viewer’s attention and lead their behavior. The proposed system allows the video analysis to be done at the server; therefore, the mobile devices do not need powerful computing resources to predict the user behavior.”
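One simple way to turn a moving object into a view prediction is to extrapolate its recent trajectory. The sketch below is an illustration of that idea, not SEAWARE's actual algorithm: the single-object model, the (yaw, pitch) representation in degrees, and the linear extrapolation are all simplifying assumptions.

```python
def predict_view(track, steps_ahead):
    """Linearly extrapolate an object's recent (yaw, pitch) positions.

    `track` is a list of (yaw, pitch) angles, one per frame; the predicted
    viewport center follows the object's most recent per-frame motion.
    """
    (y0, p0), (y1, p1) = track[-2], track[-1]  # last two observed positions
    dy, dp = y1 - y0, p1 - p0                  # per-frame motion
    return (y1 + dy * steps_ahead, p1 + dp * steps_ahead)

# A monster moving left across the scene: yaw decreasing frame to frame.
monster_track = [(40.0, 0.0), (35.0, 0.0), (30.0, 0.0)]
print(predict_view(monster_track, 2))  # → (20.0, 0.0): viewer expected to pan left
```

Because this computation is cheap relative to the video analysis that produces the tracks, the heavy analysis can stay on the server, matching the division of labor Park describes.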

A still from a 360° video shows the direction a viewer is likely to look, based on the group's research. With both the monster and one of the people in this scene moving toward the left of the screen, the system predicts that the viewer will move their view in that direction as well.

Viewers follow moving objects with such regularity that, in addition to predicting their behavior, the group can analyze the semantic information of the video to improve the prediction algorithm. The 360° videos they used ranged from a monster movie with a chase scene to a surfer moving through ocean waves, giving the team a variety of content to analyze.

Next steps for the group involve further improving 360° video streaming through better caching algorithms.

“Video streaming platforms usually use a cache in the network to lower the latency between the server and the clients,” said Park. “We’re exploring how we could improve the caching algorithm for 360° videos, which will also improve our ability to deliver high-quality 360° videos to the users.”
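A minimal version of the in-network cache Park describes can be sketched as a least-recently-used (LRU) store of video segments. This is a generic illustration, not the group's caching algorithm: a 360°-aware cache would additionally weigh which tiles of each segment viewers actually watch. The class name and capacity are assumptions.

```python
from collections import OrderedDict

class SegmentCache:
    """LRU cache mapping segment ids to segment data."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()  # segment id -> segment bytes

    def get(self, seg_id):
        if seg_id not in self._store:
            return None                    # miss: client must fetch from origin
        self._store.move_to_end(seg_id)    # mark as most recently used
        return self._store[seg_id]

    def put(self, seg_id, data):
        self._store[seg_id] = data
        self._store.move_to_end(seg_id)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = SegmentCache(capacity=2)
cache.put("seg-0", b"...")
cache.put("seg-1", b"...")
cache.get("seg-0")         # touching seg-0 makes seg-1 the eviction candidate
cache.put("seg-2", b"...")
print(cache.get("seg-1"))  # → None: seg-1 was evicted
```

Serving a segment from such a cache avoids a round trip to the origin server, which is the latency reduction the quote refers to.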

This work is funded in part by the National Science Foundation.