2/1/2022 Ivan Sanchez for CSL
As the hardware used for augmented reality and virtual reality becomes more widely available and popular among consumers, the associated video traffic is expected to exceed 4 exabytes per month by the end of January 2022. However, issues with network bandwidth, latency, and video quality in 360-degree video streaming have yet to be fully addressed, and they continue to degrade the quality of experience for consumers.
In recognition of work done to tackle that issue, Ayush Sarkar, a graduate research assistant in the Coordinated Science Laboratory (CSL) and PhD student in computer science, recently received the “Best Student Paper” award for his paper, “L3BOU: Low Latency, Low Bandwidth, Optimized Super-Resolution Backhaul for 360-Degree Video Streaming” at the 2021 IEEE International Symposium on Multimedia.
Sarkar is the first author on the paper, which was co-authored by John Murray, Mallesham Dasari, Michael Zink, and Sarkar’s advisor, CSL Director Klara Nahrstedt, the Grainger Distinguished Chair of Engineering Professor in the Department of Computer Science. The authors propose L3BOU, a three-tier distributed software framework, to reduce cloud-edge network bandwidth requirements and end-to-end latency without compromising quality of experience.
Unlike conventional 2D video, 360-degree video requires panoramic viewpoints to be rendered through head-mounted displays (HMDs) to achieve visual immersion in both virtual reality and augmented reality. 360-degree video also requires significantly higher resolution than 2D video, so streaming it through HMDs places greater demands on network bandwidth and bitrates. The result can be higher latency, which ultimately degrades the quality of experience with 360-degree media.
Traditional approaches assume a simple client-server architecture and ignore streaming scenarios that involve edge nodes with high-end computing power. That restriction leads to energy overheads on the client side when GPUs are used to perform super-resolution upscaling to generate high-quality video content. In many instances, this traditional approach fails outright because the client devices’ compute capacity is too low.
To avoid that problem, L3BOU takes into consideration the server, edge, and client in a distributed super-resolution process. Sarkar explains that process as follows: “First, L3BOU utilizes the edge compute to upscale the downscaled, ultra-low-resolution tiled video segments transmitted from the cloud, thus reducing the network bandwidth requirements on the cloud servers. Secondly, L3BOU masks latency overheads using a prediction algorithm to prefetch heavily compressed ultra-low-resolution tile segments. Thirdly, L3BOU uses the edge compute for the super-resolution of the micro models, thus delivering an enhanced video quality.”
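The three steps Sarkar describes can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in, not the actual L3BOU code: the function names, the 8x8 tile, the downscale factor, and the "keep watching the same tiles" prefetch heuristic are all illustrative assumptions, and simple pixel repetition stands in for the learned super-resolution micro-models run at the edge.

```python
# Hypothetical sketch of the three-tier flow: cloud sends ultra-low-resolution
# tiles, the edge upscales them, and the client prefetches predicted tiles.
# All names and parameters are illustrative, not from the L3BOU implementation.

def cloud_downscale(tile, factor=4):
    """Cloud tier: keep every `factor`-th pixel, standing in for encoding
    heavily compressed, ultra-low-resolution tile segments."""
    return [row[::factor] for row in tile[::factor]]

def edge_super_resolve(low_res_tile, factor=4):
    """Edge tier: upscale a low-resolution tile. Nearest-neighbor pixel
    repetition is a stand-in for a super-resolution micro-model."""
    upscaled = []
    for row in low_res_tile:
        wide = [px for px in row for _ in range(factor)]
        upscaled.extend([wide] * factor)
    return upscaled

def predict_next_tiles(viewport_history, num_tiles=2):
    """Client tier: trivial prefetch heuristic -- assume the viewer keeps
    looking at the most recently viewed tiles."""
    return viewport_history[-num_tiles:]

# Toy end-to-end pass: an 8x8 tile crosses the cloud-edge backhaul as 2x2,
# then is restored to 8x8 at the edge before delivery to the client.
tile = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
low = cloud_downscale(tile)         # 2x2: what crosses the cloud-edge link
restored = edge_super_resolve(low)  # 8x8: what the client receives

# Prefetch: with viewing history over tile coordinates, fetch the predicted
# tiles at ultra-low resolution ahead of time to mask latency.
prefetch = predict_next_tiles([(0, 0), (0, 1), (1, 1)])
```

The point of the sketch is the bandwidth arithmetic: only the 2x2 tile traverses the constrained cloud-edge backhaul (a 16x reduction in pixels at factor 4), while the compute-heavy upscaling step runs at the edge rather than on the cloud server or the power-limited client.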
Regarding his Best Student Paper award, Sarkar says, “Receiving such an award is not only inspiring but also an acknowledgment of the effectiveness of a collaborative effort. The recognition motivates me to perform further research and write more innovative papers with diverse teams of like-minded individuals.”
For now, Sarkar is continuing his work on virtual and augmented reality through an NSF project that seeks to improve situational awareness in firefighter training environments by stitching 2D videos and developing tools for action and object detection in 2D and 360-degree stitched videos.