
QCHT hand pose - in what space?

Hello.


I am currently in the process of creating UI components that track hand features.

I made a simple version that attaches a UI element to the palm of a detected hand with an offset of 15 centimeters on the X and Z axes.

However, for some reason, the menus are positioned correctly on startup, but when I start walking, the menus drift tremendously.


This is my current code situation:


Pose pose = hand.GetHandJoint(XrHandJoint.XR_HAND_JOINT_PALM);

transform.position = Vector3.Lerp(
    transform.position,
    new Vector3(pose.position.x + offset.x, pose.position.y + _arCameraTransform.position.y, pose.position.z + offset.y),
    MovementSmoothness);

 

Keep in mind that "offset" is a Vector2, without a Z component.


What am I doing wrong? Do I need to convert the hand position to some other space?


Thank you in advance!


Hey Danilo, 


All comments in the forum currently need manual approval to avoid spam. This can sometimes take a while, especially on Fridays.


I think your problem has to do with lerping between transform.parent.position and the hand pose.position. The screenshot you sent is cut off, but I'm assuming that you use the Lerp function properly; check the Unity docs for that: Unity - Scripting API: Vector3.Lerp (unity3d.com). A common mistake is a t value larger than 1 or smaller than 0, which would cause the offset you're seeing. The script has a property "Movement Smoothness" set to 20, which, when used as the t value, would cause such an offset.
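If you want smoothing that stays valid, a minimal sketch is to treat "Movement Smoothness" as a speed and derive a clamped t from it each frame (this damping formula is just one common option, not necessarily what your script intends):

using UnityEngine;

public class SmoothFollow : MonoBehaviour
{
    public Transform target; // whatever the panel should follow
    public float MovementSmoothness = 8f; // treated as a speed, not as t

    void Update()
    {
        // Clamp the interpolation factor into [0, 1]; feeding a raw 20 as t overshoots the target.
        float t = Mathf.Clamp01(MovementSmoothness * Time.deltaTime);
        transform.position = Vector3.Lerp(transform.position, target.position, t);
    }
}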


If I understand your video correctly, the script the screenshot is from (HandFloatingPanel) is located on UI Menu Spaces Anchor, making LayerMenu the parent object of the panel. Lerping from the current parent position to the desired position seems valid. I suggest you rule out problems with the lerp by setting the position of the UI element directly to the palm pose position and checking whether that works.
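As a quick way to run that test, you can pin the panel directly to the palm and skip the Lerp entirely (same GetHandJoint call as in your snippet):

// If the panel sticks to the palm with this, the drift comes from the lerp/offset math.
Pose pose = hand.GetHandJoint(XrHandJoint.XR_HAND_JOINT_PALM);
transform.position = pose.position;
transform.rotation = pose.rotation;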


I can't see your XR Input Setup under AR Session Origin, but something must have been modified there, because the usual outline for our QCHT hands is missing. Having the outline visible would help with debugging, since you could see where the hands are detected. Try removing and reimporting the QCHT package and samples.



On a side note, you can enable the XRIT device simulator to test in the editor, in case you're not using it already. Here's the solution article for that: Helpdesk : Qualcomm

Hello, I typed a long comment earlier but I'm not sure whether it failed to send or is being moderated, so just in case, here it is again:


For some reason I am still experiencing drift after simplifying the code as much as possible:

Additionally, the AR Session Origin has been reset to (0, 0, 0).

Problem

Scene UI setup


Additionally, I manually set the Interaction mode to Gaze Pointer through code.

Hello. 

Thank you for your response; however, I am still battling with drift.

I did as you advised and reset my Session Origin to 0.


I also stripped down my UI positioning code to its bare minimum as a sanity check.


Here are my code, my scene setup, and a video of the weird behavior.

Please excuse me if this happens to be a trivial mistake; I am still trying to grasp this SDK.


Issue:

https://youtu.be/6GWscMN3iTQ


Scene setup:

https://youtu.be/xE-x44-qdW8


I have also noticed that the "ray" coming from the hands is not positioned correctly, to say the least.

So, if this is not a trivial mistake on my part, I assume my whole scene is messed up?




Translating the Session Origin by -1.75 would usually result in the content being positioned below the floor. Your setup seems a bit odd. 

If you want to have Gaze and QCHT active simultaneously, following the setup guide will not lead to the right result, as it only shows how to set things up with one input method active at a time.


If no other perception features are needed, I'd start with the Spaces Main Menu scene.

For one input method at a time: use the interaction prefab's InteractionManager (description at Interaction Components (qualcomm.com)) to control which interaction method is active at any given moment.

For more than one input method: build your own interaction method switching logic by splitting the interaction manager prefab into its individual parts; a rough sketch follows below.
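As a sketch of what that switching logic could look like, assuming the split-off parts end up as separate GameObjects (all names here are placeholders, not SDK types):

using UnityEngine;

// Hypothetical switcher: gazePointer and handTracking stand in for whatever
// pieces you split off the interaction manager prefab.
public class InputMethodSwitcher : MonoBehaviour
{
    public GameObject gazePointer;
    public GameObject handTracking;

    public void EnableGazeOnly()
    {
        gazePointer.SetActive(true);
        handTracking.SetActive(false);
    }

    public void EnableBoth()
    {
        gazePointer.SetActive(true);
        handTracking.SetActive(true);
    }
}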


Stability depends on a couple of factors and can be influenced by:

  • Snapdragon Spaces Services and plugin version installed on your device
  • Physical environment (e.g. a white empty room will track worse than an environment with a lot of features)
  • Number of polygons in your scene
  • Application specific code


If you are having stability issues because of low performance, it is a good idea to run the Unity profiler to check for potential bottlenecks. 

Thank you, that solved the drifting issue. I had some weird things going on with my scene setup and AR Session Origin positioning, because I wanted the experience to be at head-height and therefore I translated the AR SO by -1.75 on the Y axis. 


However, I am still experiencing certain stability issues and am failing to replicate the stability you have in your examples. I am not sure if I damaged some of the prefabs or what.


Question - what is your recommended blank starting scene? Will following the setup guide guarantee scene stability with Gaze and QCHT functionality?


Thank you in advance

Hey Danilo,


Adding _arCameraTransform.position.y to your y-axis will result in the offset changing with your head movements, which is likely the error here. The pose you get from hand.GetHandJoint() should be in world coordinates, so simply deleting "+ _arCameraTransform.position.y" should work.
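For clarity, here is your snippet with only that term removed (I've also clamped the t value into [0, 1], since a raw MovementSmoothness of 20 is not a valid interpolation factor):

Pose pose = hand.GetHandJoint(XrHandJoint.XR_HAND_JOINT_PALM);

// The palm pose is already in world space, so no camera-relative correction is needed.
// "offset" is your Vector2, applied to X and Z as before.
transform.position = Vector3.Lerp(
    transform.position,
    new Vector3(pose.position.x + offset.x, pose.position.y, pose.position.z + offset.y),
    Mathf.Clamp01(MovementSmoothness * Time.deltaTime));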


You can also try attaching a transform point to the hand prefab's palm joint instead, which would automatically update your desired position for the UI element; then lerp from the UI element's transform to the transform attached to the hand.
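A minimal sketch of that approach, assuming you parent an empty anchor object under the hand prefab's palm joint in the hierarchy (the names are placeholders):

using UnityEngine;

public class HandFollowingPanel : MonoBehaviour
{
    // Empty GameObject parented under the hand prefab's palm joint;
    // it inherits the palm's world pose automatically every frame.
    public Transform palmAnchor;
    public float movementSmoothness = 8f;

    void LateUpdate()
    {
        float t = Mathf.Clamp01(movementSmoothness * Time.deltaTime);
        transform.position = Vector3.Lerp(transform.position, palmAnchor.position, t);
    }
}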
