The virtual reality industry is faced with a conundrum: Users must be tethered to a server or PC to use high-quality VR apps. At the same time, say researchers at Purdue University, next-generation smartphones and wireless networks will not be advanced enough to sever the tether. To solve the problem, they have proposed a new software platform called Furion.
“Is it feasible to put high-quality VR apps on untethered mobile devices such as smartphones?” asks Y. Charlie Hu, a professor of electrical and computer engineering at Purdue. “Today’s mobile hardware and wireless networks are only about one-tenth the speed needed for high-quality, immersive VR. And waiting for future mobile hardware or next-generation wireless networks is unlikely due to power limitations and greater computational demands needed for processing packets under higher data rates.”
For virtual reality to be of acceptable quality, each video frame must be rendered within about 16 milliseconds, a rate of 60 frames per second. However, rendering at this speed quickly exhausts the capacity of a smartphone’s central processing unit; Google’s Pixel XL, for example, takes around 111 milliseconds to render a frame.
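The frame-budget arithmetic above can be checked directly; this quick sketch uses only the millisecond figures quoted in the article:

```python
# Frame-time budget arithmetic using the numbers quoted above.
TARGET_FPS = 60
budget_ms = 1000 / TARGET_FPS        # ~16.7 ms available per frame

pixel_xl_ms = 111                    # local rendering on a Pixel XL
wifi_fetch_ms = 200                  # server rendering + Wi-Fi transfer

print(round(budget_ms, 1))                 # 16.7
print(round(pixel_xl_ms / budget_ms, 1))   # ~6.7x over budget locally
print(round(wifi_fetch_ms / budget_ms))    # 12x over budget via Wi-Fi
```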
Today’s high-quality VR systems consist of a headset and a server, which contains a powerful graphical processor; the user is tethered to the server. One strategy for untethered operation might be to render all the frames on the server and transmit them over Wi-Fi to the smartphone. But this takes even longer: around 200 milliseconds per frame, even at the highest Wi-Fi data rate current smartphones support.
“A key observation we made is that waiting for next-generation wireless networks such as 5G will not help because packet processing at 10 times higher data rate will exhaust the CPU on today’s smartphones,” says Hu.
Meanwhile, stagnating lithium-ion battery technology will limit the hardware performance of next-generation smartphones. “Battery capacity in mobile devices has barely doubled over the past 15 years, and this keeps smartphones’ CPUs from getting faster,” he adds.
At the same time, the clock rate of GPUs, which is critical to graphics performance, also has not improved much in recent years.
One reason for the heavy computational workload of VR apps is the constant need to render updates to the background environment in the virtual world. However, the background changes little from frame to frame; mountains and landscapes, for example, remain much the same, and what does change depends primarily on the user’s position.
“But the user’s position doesn’t change randomly,” Hu explains. “You move continuously and in a very predictable way. So we can predict how the background will change based on the user’s position and pre-render it.”
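A minimal sketch of the prediction idea Hu describes, assuming simple linear extrapolation of the user's motion (the article does not specify Furion's actual predictor, so this model and its function names are illustrative):

```python
def predict_next_position(prev, curr, dt=1.0):
    """Linear extrapolation: assume the user keeps moving at the same
    velocity. A simplification, not Furion's actual model."""
    vx = (curr[0] - prev[0]) / dt
    vy = (curr[1] - prev[1]) / dt
    return (curr[0] + vx * dt, curr[1] + vy * dt)

# A user walking steadily along x: the next position is predictable,
# so the background there can be pre-rendered before it is needed.
print(predict_next_position((0.0, 0.0), (1.0, 0.0)))  # (2.0, 0.0)
```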
Furion splits up the rendering, handling the computationally heavy background on the PC or server and the lighter foreground on the smartphone or other mobile device. This “cooperative rendering” approach cuts the frame-rendering time to 14 milliseconds on the Pixel XL.
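The cooperative-rendering split can be sketched as follows; the function names and string stand-ins are illustrative, not Furion's API:

```python
# Sketch of cooperative rendering: the server supplies a pre-rendered
# background for the user's position, the phone renders only the light
# foreground, and the two are composited into each frame.
def fetch_prerendered_background(position):
    return f"bg@{position}"      # stands in for a server-rendered panorama

def render_foreground(objects):
    return "+".join(objects)     # stands in for cheap local rendering

def compose_frame(position, objects):
    background = fetch_prerendered_background(position)
    foreground = render_foreground(objects)
    return f"{background}|{foreground}"

print(compose_frame((3, 4), ["avatar", "sword"]))  # bg@(3, 4)|avatar+sword
```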
Furion also renders the entire background as a panoramic image, split into four segments that are each decoded on one of the smartphone’s four microprocessor cores. The background can then be automatically cropped to match the user’s changing viewing angle.
“You may suddenly turn your head,” says Hu, “so if we render the whole panoramic frame for a location in the virtual world, it can simply be cropped properly to match wherever you are looking.”
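The split-decode-and-crop idea can be sketched like this, with worker threads standing in for the four CPU cores and a 360° panorama divided into four 90° segments (the segment layout and names are assumptions for illustration):

```python
# Sketch: decode four panorama segments in parallel, then crop to the
# segment covering the current viewing angle.
from concurrent.futures import ThreadPoolExecutor

def decode(segment):
    return segment.upper()       # stands in for video decoding work

panorama = ["seg0", "seg1", "seg2", "seg3"]
with ThreadPoolExecutor(max_workers=4) as pool:
    decoded = list(pool.map(decode, panorama))

def crop(frames, view_angle_deg):
    # Four 90-degree segments cover 360 degrees: pick the one in view.
    return frames[int(view_angle_deg % 360) // 90]

print(decoded)             # ['SEG0', 'SEG1', 'SEG2', 'SEG3']
print(crop(decoded, 200))  # SEG2
```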
In today’s VR systems, frames are handled in a five-step process that begins with the smartphone “fetching” a rendered frame from the server. Furion, however, “pre-fetches” backgrounds by anticipating fetch commands and issuing them ahead of time. This lets the smartphone keep up with rendering high-quality VR content.
“Prefetch means you predict the fetch and you start asking the server to get it before the VR game logic actually asks for it,” Hu says.
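The prefetch idea can be sketched as a small cache: the background for the predicted next position is requested before the game logic asks for it, so the fetch cost is hidden. The caching scheme here is illustrative, not Furion's:

```python
# Sketch of prefetching: issue the server request ahead of time, then
# serve the frame from the cache when the game logic asks for it.
cache = {}

def server_render(position):
    return f"bg@{position}"      # stands in for a round-trip to the server

def prefetch(predicted_position):
    cache[predicted_position] = server_render(predicted_position)

def fetch(position):
    # Cache hit: the frame was prefetched and is available immediately.
    # Cache miss: fall back to a (slow) on-demand server render.
    return cache.pop(position, None) or server_render(position)

prefetch((2, 0))             # issued before the game logic needs it
print(fetch((2, 0)))         # bg@(2, 0)
```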