Wireless AR/VR

Enabling Ultra-Low Latency Immersive Mobile Experiences with Viewpoint Prediction

Principal Investigator
Project Description

For 3DoF and 6DoF VR applications, the ultra-low latency requirement can be very challenging to meet under the user's head rotations and body movements, as new video needs to be rendered in response to these motions and then displayed on the HMD. Moreover, if a truly wireless experience is desired, the video needs to be rendered on and streamed wirelessly from a local/edge computing device, further adding to the delay. In this project, we focus on techniques to meet the ultra-low latency requirement under both head motion and body motion.

One possible way to address the ultra-low latency challenge is to always pre-fetch the entire rendered or natural video to the user device or edge node; when head motion happens, the corresponding Field of View (FOV) can be displayed almost immediately, thus significantly reducing latency. However, pre-fetching the entire view consumes very high bandwidth (about four times that of regular videos). Moreover, for 6DoF, since the user's location is not known in advance, it is not clear which video should be rendered and transmitted ahead of time. Therefore, in this project, we propose a novel approach of pre-fetching only the content within a predicted view frustum, based on head and body motion prediction of the user. Streaming this partial content is expected to consume much less bandwidth than the complete original content, so it can be transmitted at high quality in advance, satisfying both the latency requirement and a high quality of experience (QoE).
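As a concrete illustration of this idea, the sketch below predicts the user's near-future viewpoint by linearly extrapolating recent head-orientation samples and then selects the tiles of an equirectangular 360° frame that fall inside the predicted view frustum. The prediction horizon, tile grid, field-of-view width, and the linear predictor itself are illustrative assumptions, not project parameters; the project may use richer head- and body-motion models.

```python
import numpy as np

# Assumed parameters for illustration only.
PREDICTION_HORIZON_S = 0.1   # look-ahead covering network + render delay
FOV_HALF_WIDTH_DEG = 55.0    # half-width of the pre-fetched frustum
TILE_GRID = (6, 12)          # (rows, cols) tiling of the equirectangular frame


def predict_yaw_pitch(samples, horizon=PREDICTION_HORIZON_S):
    """Extrapolate (yaw, pitch) in degrees from samples [(t, yaw, pitch), ...]."""
    t = np.array([s[0] for s in samples])
    yaw = np.unwrap(np.radians([s[1] for s in samples]))
    pitch = np.radians([s[2] for s in samples])
    t_future = t[-1] + horizon
    # Degree-1 least-squares fit in time, i.e., a constant-velocity predictor.
    yaw_pred = np.polyval(np.polyfit(t, yaw, 1), t_future)
    pitch_pred = np.polyval(np.polyfit(t, pitch, 1), t_future)
    return np.degrees(yaw_pred), np.degrees(pitch_pred)


def tiles_in_predicted_fov(yaw_deg, pitch_deg, grid=TILE_GRID,
                           half_width=FOV_HALF_WIDTH_DEG):
    """Return (row, col) indices of tiles whose centers lie in the predicted FOV."""
    rows, cols = grid
    selected = []
    for r in range(rows):
        for c in range(cols):
            tile_pitch = 90.0 - (r + 0.5) * 180.0 / rows   # tile-center latitude
            tile_yaw = -180.0 + (c + 0.5) * 360.0 / cols   # tile-center longitude
            d_yaw = (tile_yaw - yaw_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
            if abs(d_yaw) <= half_width and abs(tile_pitch - pitch_deg) <= half_width:
                selected.append((r, c))
    return selected


if __name__ == "__main__":
    # Recent head-tracking samples: (time_s, yaw_deg, pitch_deg).
    history = [(0.00, 10.0, 0.0), (0.02, 12.0, 0.5),
               (0.04, 14.1, 1.0), (0.06, 16.0, 1.4)]
    yaw, pitch = predict_yaw_pitch(history)
    print("predicted viewpoint:", round(yaw, 1), round(pitch, 1))
    print("tiles to pre-fetch:", tiles_in_predicted_fov(yaw, pitch))
```

Only the selected tiles would be streamed ahead of time at high quality, which is how the partial pre-fetch saves bandwidth relative to sending the full panorama.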