We do not currently have a Guardian-style setup to establish the floor or other boundaries on the A3. You can use Plane Detection as you mentioned, and we recommend including some UX that directs the user to look at the floor and manually confirm it. Alternatively, you could use Image Recognition and place an image target on the floor, or even start the experience with the glasses resting on the floor, but these are less ideal solutions.
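To illustrate the plane-detection approach, here is a rough, hypothetical sketch of how an app might pick the floor from a set of detected planes and re-origin the rig. All names and the plane data structure are illustrative assumptions, not the Spaces SDK's actual API; the heuristic (lowest, large, upward-facing plane) is a common convention, and the user should still confirm the choice:

```python
from dataclasses import dataclass

# Hypothetical plane record -- not a Spaces SDK type.
@dataclass
class Plane:
    center_y: float  # height of the plane's center in rig space (metres)
    normal_y: float  # y component of the unit normal (1.0 = facing straight up)
    area: float      # surface area in square metres

def find_floor(planes, min_area=0.5, min_upward=0.9):
    """Return the lowest large, upward-facing plane, or None if none qualify."""
    candidates = [p for p in planes
                  if p.area >= min_area and p.normal_y >= min_upward]
    return min(candidates, key=lambda p: p.center_y, default=None)

def floor_level_offset(floor):
    """Vertical offset to apply to the rig so the floor sits at y == 0."""
    return -floor.center_y
```

So with an eye-level origin and a confirmed floor plane at `center_y = -1.6`, `floor_level_offset` returns `1.6`: raise the rig's content origin by that amount (or subtract it from tracked positions) and `y == 0` lands on the physical floor, matching the Oculus-style "floor level" rig mentioned below.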
ok thanks for the tips :)
Our app needs to know where the floor of the physical space is. Is this something the SDK offers? I can see there is a plane detection sample, but it wasn't clear if there is a way to know which plane is the floor.
Coming from the Oculus SDK, their camera rig offers an option to be "floor level", meaning (0, 0, 0) is actually on the floor. It seems like the camera you get in the Snapdragon Spaces SDK is "eye level". Any tips on achieving a "floor level" camera rig with Spaces? thanks!!