Tips for Enhancing XR Experiences

Wednesday 11/13/19 09:12am | Posted By Brian Vogelsang


With today's powerful edge processors driving new extended reality (XR) glasses and headsets, developers can build experiences that simply weren't possible before. Moreover, the distinctions between AR, VR, and even reality itself continue to blur as these new devices redefine the reality–virtuality continuum.

XR Tips & Tricks Image 1

Each generation of devices brings new features that developers use in new and creative ways. For example, the transition from 3-DoF to 6-DoF motion tracking a few years ago meant that users could go beyond just ‘looking around’ a virtual reality environment to actually ‘moving around’ within it. Today, we’re seeing equally impressive innovations such as apelab’s SpatialStories Unity SDK, which allows kids to build VR environments in just a few days.

But beyond the new hardware, it’s the developers who are really driving innovation. So in this blog, we thought we’d share some tips and tricks you can use to enhance your XR experiences.

Eye Tracking

Many of today’s devices include eye tracking capabilities, which provide information about where the user’s eyes are focused. This information is often accessible to developers through the device’s API.

Developers should use eye tracking if it’s available, as it can provide real-time (or near-real-time) information about the user. And when analyzed across a series of frames, it can provide hints about what the user may be doing or even thinking. This can be useful for creating more dynamic experiences such as:

  • determining if the user has seen all the items or areas they are expected to (see the sketch after this list),
  • feeding back eye direction and blinks to an avatar in a social XR setting, or
  • enhancing the XR experience by updating the user interface based on where the eyes are focused.
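As a rough illustration of the first idea, here’s a minimal Python sketch of a gaze-dwell checklist. How you obtain the gaze ray and map it to a focused item depends entirely on your device’s eye tracking API, so those pieces are left out; the names and the dwell threshold below are illustrative assumptions, not any particular SDK.

# Hypothetical sketch: track which items the user has dwelled on long enough
# to count as "seen". Replace the per-frame focused_item_id lookup with a
# raycast driven by your platform's gaze data.

DWELL_SECONDS = 0.5  # assumed threshold for "the user actually looked at this"

class GazeChecklist:
    def __init__(self, item_ids):
        self.dwell = {item_id: 0.0 for item_id in item_ids}

    def update(self, focused_item_id, dt):
        # Call once per frame with the item currently under the gaze ray (or None).
        if focused_item_id in self.dwell:
            self.dwell[focused_item_id] += dt

    def all_seen(self):
        return all(t >= DWELL_SECONDS for t in self.dwell.values())

Once all_seen() returns True, the app knows the user has at least glanced at every required item and can move the experience forward.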

Foveated Rendering

Headsets place the viewport directly in front of the user’s eyes via two small lenses. Given this close proximity, a rendering engine can render different parts of each image at varying levels of detail, based on the regions that correspond to the user’s peripheral vision.

XR Tips & Tricks Image 2

This is called foveated rendering, and its purpose is to limit the amount of detail rendered in order to reduce processing requirements. More specifically, the rendering engine can divert the saved resources into rendering the areas the user is looking at in greater detail. This also helps reduce battery consumption and heat.

In the figure above, the white areas would be rendered at full detail, while the colored areas, which correspond to the user’s peripheral vision, would be rendered at less detail. If eye tracking capabilities are also available, developers can use that information to dynamically shift the foveation areas based on where the user’s eyes are pointed for a given frame.
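As a simple illustration of the region logic (not any particular GPU API), the Python sketch below chooses a detail level per screen tile from its angular distance to the current gaze point. Real engines usually apply this through hardware features such as variable rate shading; the two thresholds here are assumptions chosen for readability.

import math

FOVEA_DEG = 10.0  # assumed inner region rendered at full detail
MID_DEG = 25.0    # assumed transition region rendered at reduced detail

def detail_level(tile_center_deg, gaze_deg):
    # Both arguments are (x, y) angles in degrees from the center of the view.
    dx = tile_center_deg[0] - gaze_deg[0]
    dy = tile_center_deg[1] - gaze_deg[1]
    eccentricity = math.hypot(dx, dy)
    if eccentricity <= FOVEA_DEG:
        return "full"     # the white region in the figure above
    elif eccentricity <= MID_DEG:
        return "half"     # e.g., half shading rate or resolution
    else:
        return "quarter"  # far periphery, lowest detail

With eye tracking, gaze_deg is updated every frame; without it, a fixed gaze_deg of (0, 0) gives the static, lens-centered foveation shown in the figure.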

Interpupillary Distance Manipulation

In addition to foveation, developers can also take advantage of the interpupillary distance (IPD) between the eyes. Many headsets allow users to adjust the real IPD, which is the physical distance between the lenses, to obtain the correct viewport and to reduce blurriness, similar to adjusting a pair of binoculars.

Developers can dynamically change the virtual IPD, which is the distance between the virtual camera viewports of the two lenses, to change the sense of scale. Widening the virtual IPD tends to make the scene feel miniaturized, while narrowing it makes objects and the surrounding world appear larger.
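Here is a minimal sketch of that idea, with illustrative numbers: the measured IPD is multiplied by a scale factor before deriving the per-eye camera offsets.

# Hypothetical sketch: derive left/right virtual camera offsets from a scale
# factor applied to the user's measured IPD. The constant below is just an
# example value, roughly the adult average.

REAL_IPD_M = 0.063  # example physical IPD in meters

def eye_offsets(world_scale):
    # Returns (left_x, right_x) camera offsets from the head center, in meters.
    # world_scale > 1 widens the virtual IPD (the scene feels miniaturized);
    # world_scale < 1 narrows it (the world feels larger).
    half = (REAL_IPD_M * world_scale) / 2.0
    return (-half, +half)

A world_scale of 1.0 reproduces natural scale; ramping the factor smoothly avoids a jarring jump in perceived size.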

Inside-out Motion Tracking

Tracking a user’s position and orientation in a virtual environment, and how they correspond to the real surrounding environment, is essential for keeping the user within the bounds of both.

Just a few years ago, this was performed using outside-in tracking, where external sensors, markers, or lasers provided bounding information to the headset. In addition, that generation of headsets was often wired, which tethered the user and further limited their mobility.

XR Tips & Tricks Image 3

Depiction of outside-in tracking involving room markers and a wired headset.

Over the last few years, this has evolved into wireless inside-out tracking, where the headset uses built-in cameras and other sensors, together with machine vision techniques, to sense the surrounding environment and estimate the head pose.

XR Tips & Tricks Image 4

When assessing target platforms for your application, it’s recommended that inside-out tracking be one of the top requirements, as it makes for a much better user experience. In addition, developers may want to consider processing the camera and sensor data using simultaneous localization and mapping (SLAM) techniques. SLAM can provide the accuracy needed both for tracking a user’s position in virtual reality and for steady rendering of virtual objects, such as those placed in augmented reality.
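To illustrate why an accurate, low-latency pose translates into steady rendering, here’s a small NumPy sketch. It assumes the tracking stack hands you a 4×4 world-from-head transform each frame (the actual API varies by platform); the anchored object is expressed in world/map coordinates, so it stays put as the head moves.

import numpy as np

# A fixed point in the mapped room (homogeneous coordinates), e.g., a virtual
# object pinned to a real table.
anchor_world = np.array([0.0, 1.2, -2.0, 1.0])

def anchor_in_view(world_from_head):
    # The view transform is simply the inverse of the tracked head pose, so a
    # world-anchored point lands in the right place for every frame's pose.
    head_from_world = np.linalg.inv(world_from_head)
    return head_from_world @ anchor_world

Any drift or jitter in world_from_head shows up directly as drift or jitter of the anchored object, which is why SLAM-grade tracking matters for AR-style placement.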

Visual Anchors

Motion sickness is a common side effect of being immersed in a virtual or semi-virtual environment. It’s caused primarily by high motion-to-photon latency, which is the delay between when the user moves or turns and when the viewport is rendered to reflect that change. As a guideline, developers should aim for a motion-to-photon latency of less than 20 ms.

Common solutions involve reducing this latency and limiting experiences to short time periods. In addition, we’ve found it effective to provide visual anchors, or objects that remain fixed in space during movement. Giving the user a fixed visual reference point can help their visual system distinguish their movement relative to the environment and reduce the onset of motion sickness. This is another area where SLAM techniques can be used to provide steady rendering of virtual objects.

Motion sickness can also occur when the app takes over moving or rotating the user’s viewpoint, especially when the user is not expecting it. If your app needs to perform such an effect, we recommend doing it either with a quick cut in the scene, or by fading to dark, moving the viewpoint, and then bringing it back into view.
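A minimal sketch of that fade pattern, assuming hypothetical engine calls for fading and for repositioning the camera:

# Hypothetical sketch of the fade-out / move / fade-in pattern for app-driven
# viewpoint changes. The engine object and its methods are placeholders for
# whatever fade and camera controls your engine exposes.

FADE_SECONDS = 0.25  # assumed: long enough to hide the cut, short enough not to drag

def comfortable_teleport(engine, new_position, new_yaw):
    engine.fade_to_black(FADE_SECONDS)           # hide the motion from the user
    engine.set_viewpoint(new_position, new_yaw)  # move while the screen is dark
    engine.fade_from_black(FADE_SECONDS)         # reveal the new viewpoint

The key point is that the viewpoint never visibly slides or rotates on its own; the user simply finds themselves at the new position.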

5G and Edge Processing

With 5G’s promise of higher bandwidth and lower latency, and with edge processing becoming ever more capable, XR developers now have more options as to what should be processed at the edge (i.e., on the headset) and what data should be transmitted for processing in the cloud. This supports the idea of boundless XR, where users can experience XR almost anywhere.

In particular, 5G’s support for edge clouds brings cloud services closer to the device, giving XR developers more choice as to where real-time processing and rendering take place.

For example, a headset can send the head pose to the edge cloud for partial rendering, receive back the frame, and complete the rendering on the device, all while maintaining an acceptable motion-to-photon latency:

XR Tips & Tricks Image 5

Diagram illustrating how the edge cloud can augment on-device processing with 5G.
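To make that loop concrete, here is a rough Python sketch of one way such split rendering could be structured. Every function name below is a placeholder rather than a real SDK call, and the late-stage reprojection step is an assumption about how the remote frame would be reconciled with the newest head pose.

MOTION_TO_PHOTON_BUDGET_MS = 20.0  # the comfort guideline mentioned earlier

def frame_loop(tracker, edge_link, compositor):
    # tracker, edge_link, and compositor are hypothetical objects standing in
    # for the tracking stack, the 5G link to the edge cloud, and the display
    # compositor, respectively.
    while True:
        pose, t_pose = tracker.latest_pose()   # 6-DoF head pose + timestamp
        edge_link.send_pose(pose)              # upstream to the edge cloud
        frame = edge_link.receive_frame()      # frame partially rendered remotely
        # Re-project the remote frame against the newest pose so the image
        # still tracks head motion despite the network round trip.
        newest_pose, _ = tracker.latest_pose()
        compositor.reproject_and_present(frame, rendered_pose=pose,
                                         display_pose=newest_pose)
        latency_ms = (compositor.photon_time() - t_pose) * 1000.0
        if latency_ms > MOTION_TO_PHOTON_BUDGET_MS:
            compositor.reduce_remote_detail()  # e.g., shift more work on-device

In practice, the balance between remote and on-device work would be tuned continuously based on measured link latency and the motion-to-photon budget.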

For more information, check out our article on how 5G is going to help make AR and VR experiences more photorealistic.

Making it a Reality

As devices continue to evolve and new techniques are devised to solve new challenges, developers are constantly coming up with new tricks to enhance XR experiences.

Qualcomm Technologies has a number of mobile platforms powering XR experiences, including the Snapdragon® 845 Mobile Platform, the Snapdragon XR1 Platform, and now the Snapdragon 855+ Mobile Platform.

To see some of the devices that we’re helping to power, see our XR Get Started page.

Snapdragon is a product of Qualcomm Technologies, Inc. and/or its subsidiaries.