I am excited for the upcoming Snapdragon Summit in November and have a feature request related to the 2D Mode Productivity use case outlined below:
One of my primary use cases on PC/Mac/Phones will be 2D Mode Productivity i.e. screen output via DP alt-mode.
At present 2D Mode does not react to 6DOF tracking sensor data.
I would like to request the ability to utilise the 6DOF sensors in 2D Mode.
In particular, this would only make sense if the SoC on the glasses fed the tracking sensor data into the display controller, which would then shift the 2D image based on head/body position.
This perspective rotation of the 2D screen input would be done by the chips integrated into the glasses, completely independently of the CPU/GPU on the PC/Mac/phone.
In other words, Task Manager should report 0-3% GPU/CPU utilisation while I am looking at the 2D desktop with head tracking fully functional on the glasses.
This request does not apply to 3D Mode use case with OpenXR enabled apps or 3D Mode Gaming.
It does, however, still apply to 2D Mode Gaming.
Is the above something Qualcomm are working on specifically for 2D Mode use on PC/Mac?
Will this be supported by XR2 with internal firmware?
Or will this be supported by the next AR chipset?
Could this be expected in 2022?
Thanks for the question,
While there is a DSP on the glasses that is available for programming (there is one on the phone as well), the glasses just pass sensor data to the phone. The entire rendering pipeline resides on the phone, and the rendered scene is then blitted to the glasses' display. It is possible to grab the 6DOF parameters in a program, but nothing on the glasses is doing "rendering". The current use case we're working with is a tethered/wireless headset connected to a phone/Android device that does most of the work. Different architectures using Qualcomm XR chips (like the Quest 2) are self-contained, but there is still essentially a CPU/GPU unit plus the HMD display.
We are fully behind OpenXR as a uniform standard, so that it's easy for HMD manufacturers to build on Qualcomm (or any other conforming SoC) and create apps with engines that conform to the OpenXR interfaces. Generic portability. OpenXR does have an overlay subsystem that could do what you describe, but you'd have to code that into the rendering engine, and that will always run on the CPU/GPU.
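To make the "overlay coded in the rendering engine" point concrete, here is a minimal sketch of what such a layer does every frame: compose the tracked head pose with a fixed head-relative offset to get the panel's world pose. This is plain Python with the quaternion maths written out; the function names are illustrative only and not tied to any real OpenXR binding.

```python
def quat_mul(q, r):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * conj(q)."""
    w, x, y, z = q
    qv = (0, *v)
    q_conj = (w, -x, -y, -z)
    _, rx, ry, rz = quat_mul(quat_mul(q, qv), q_conj)
    return (rx, ry, rz)

def head_locked_pose(head_pos, head_rot, offset):
    """World pose of a panel held at a fixed head-relative offset.

    head_pos: (x, y, z) head position, metres
    head_rot: (w, x, y, z) head orientation quaternion
    offset:   panel position in head space, e.g. (0, 0, -1) = 1 m ahead
    """
    off_world = quat_rotate(head_rot, offset)
    pos = tuple(h + o for h, o in zip(head_pos, off_world))
    return pos, head_rot  # the panel inherits the head orientation

# Identity head pose: the panel sits 1 m straight ahead (-Z is forward).
print(head_locked_pose((0, 0, 0), (1, 0, 0, 0), (0, 0, -1)))
# → ((0, 0, -1), (1, 0, 0, 0))
```

Run once per rendered frame with the latest predicted head pose, this is the whole of "head-locked" placement; it is cheap, but it has to live inside the engine's frame loop on the CPU/GPU, which is the point above.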
You might review the non-Snapdragon Spaces software for the Lenovo glasses - they utilize the glasses as an external 2D display, whereas Spaces is intended for XR applications.
- Snapdragon Spaces Support
So you can see I'm working backwards through the forum posts :-)
While I wait for a response from Lenovo: what you'd need to do this is the 6DOF information (the head-tracking data) that tells you where the user's head is positioned and pointed. So while you might be able to use DDC/CI to move the display, you'd almost certainly need OpenXR functionality to get the 6DOF info (at least in a hardware-agnostic manner). Since the render pipeline is already hooked up to the 6DOF tracking, anything you render (3D or 2D) can be placed in the scene and will respond as expected when the user moves their head. If you look up "head-locked display" you'll find more information on that; it's usually used for HUDs, but you do have frame-by-frame control over the position.

Additionally, the render pipeline is designed not to tear when the user moves their head (each eye is independent, the runtime's prediction algorithms are involved, and time-warp, space-warp, etc. are thrown in) - something that would not hold if you directly manipulated the display parameters outside of the rendering sync.