When using image target tracking I'm getting jitter, with a sense of some sort of extended-style tracking. What I'm looking to do is use the image target to get a position relative to the target, and then hand off to 6DOF tracking.
6DOF (positional) tracking is rock solid and I barely get any drift. Since my target won't be moving, I'd like to combine the stability of 6DOF tracking with the positional registration of the image target.
For context: I'm building a multi-user experience where everyone looks at a large target to get their relative position before gameplay kicks off. Since we're all in the same room, having each player's pose vector as close to the real world as possible helps with both the multiplayer logic and the feel of the game.
Bonus points if we can detect that drift has occurred and re-ping to search for the target again.
Our team in Austria is currently investigating and will provide a response once they have their findings.
Could you give a little more context on your issue? It sounds like you are recreating a shared ARAnchor. If you are doing your shared target's 6DOF update calculations yourself, it is very easy to introduce errors into its 6DOF. I'd also like to understand how your users obtain the shared target's 6DOF.

It sounds like you are using a single anchor and then recalculating each player's 6DOF, overriding the HMD's reported 6DOF. Internally, the HMD's 6DOF is carefully calculated and uses prediction algorithms to smooth it out; this is done per eye. You should instead reorient each player's world-view matrix, i.e. move the world around the player, not the player in the world. That said, there is an existing solution you should look at.
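The "move the world, not the player" idea amounts to a coordinate re-basing: each player keeps the HMD's native 6DOF untouched and instead computes one transform that maps their local tracking space onto a shared frame anchored at the image target. A minimal sketch of that math (plain Python, yaw-plus-translation only; all names here are illustrative, not a Snapdragon Spaces or Unity API):

```python
import math

def local_to_shared(p_local, target_pos, target_yaw):
    """Map a point from a player's local tracking space into the shared
    frame whose origin is the image target (target facing = +Z, yaw 0).

    p_local    -- (x, y, z) point in the player's local tracking space
    target_pos -- (x, y, z) position of the detected target, local space
    target_yaw -- yaw of the detected target in radians, local space

    A full AR framework would use a 4x4 pose matrix; this yaw-only
    version just shows the re-basing idea: translate so the target is
    the origin, then rotate by -yaw so every player agrees on the
    target's facing direction.
    """
    dx = p_local[0] - target_pos[0]
    dy = p_local[1] - target_pos[1]
    dz = p_local[2] - target_pos[2]
    c, s = math.cos(-target_yaw), math.sin(-target_yaw)
    # Rotation about the vertical (y) axis, left-handed y-up convention.
    x = dx * c + dz * s
    z = -dx * s + dz * c
    return (x, dy, z)
```

Each player would compute this transform once when the target is detected, apply it to their world root (or push networked poses through it), and rely on the HMD's native 6DOF from then on.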
I'm going to assume you are using Unity. If so, you should look at World Locking Tools and ARAnchorManager, as they are the preferred methods for sharing world items.
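Regarding the "bonus" drift question: one common pattern (a sketch under assumptions, not a Spaces or Unity API) is to periodically re-detect the image target and compare the fresh detection against the pose stored at alignment time; if the discrepancy exceeds a threshold, re-run the alignment. The comparison itself is simple:

```python
import math

# Hypothetical thresholds -- tune for your target size and room scale.
POSITION_DRIFT_M = 0.05            # 5 cm
YAW_DRIFT_RAD = math.radians(3.0)  # 3 degrees

def has_drifted(anchored_pos, anchored_yaw, redetected_pos, redetected_yaw):
    """Compare the target pose stored at alignment time with a fresh
    image-target detection; True means re-alignment is warranted."""
    dist = math.dist(anchored_pos, redetected_pos)
    # Smallest signed angle between the two yaws (handles wrap-around).
    dyaw = (redetected_yaw - anchored_yaw + math.pi) % (2 * math.pi) - math.pi
    return dist > POSITION_DRIFT_M or abs(dyaw) > YAW_DRIFT_RAD
```

When `has_drifted` fires, the app would re-enable image target search and recompute the shared alignment, rather than continuously correcting the HMD's 6DOF.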
Let me know if this was helpful; I look forward to hearing from you again.
- Snapdragon Spaces Support