The project we are working on involves placing 3DCG furniture in an office space with white walls and large windows.
The project uses local anchors, but the customer's new office contains no distinctive objects or pictures, so we were unable to set an anchor (see the attached image).
However, by placing a single legal-size image of a feature design at eye level, we were able to set an anchor on the floor and display the furniture.
Could you tell us whether this is an appropriate way to compensate the space so that local anchors work correctly?
Please let us know if there is a better method.
Monochromatic surfaces provide few visual features that can be used for tracking. In general, matte, non-reflective surfaces with some kind of irregular visual pattern produce the best tracking results. Wall artwork, for example, can help improve tracking.
Can you go into more detail about what you mean by a "legal-size image of a feature design"? I'm having difficulty picturing what you mean by that.
I will ask the team about any best practices we have for anchors and the environments they are used in.
Thank you for your prompt response.
The area where my client wants to place the 3DCG furniture is completely empty, so I cannot obtain the distinctive spatial information I would expect.
I understand that the best approach is to place illustrations with many large feature points on the walls, or to cover a wide area with irregular patterns.
However, that approach cannot be used here.
After testing, I was able to set anchors by placing printed images as shown in the attached image, and managed to place the 3DCG furniture in the space.
However, I am looking for a better way.
The prints fall under the category of irregular surface patterns and definitely help tracking accuracy.
Persistent tracking is not currently possible without enough surface features.
It could be worth trying image tracking with extended tracking enabled.
You would need:
- one image marker placed somewhere in the room, and
- a prefab containing all of the furniture, with the correct dimensions and positions for the room.
The user would first look at the image marker, which spawns the furniture upon recognition. Once the furniture is spawned, extended tracking uses the device's internal sensors to continue tracking instead of the image marker, so the user can walk away from it. I'm not sure how stable this would be in your rather featureless environment, but it could produce better results than using anchors alone.
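The flow above can be sketched in an engine-agnostic way: the furniture layout is authored relative to the marker, and on recognition each piece is resolved into world space once, after which extended tracking keeps the world frame stable. This is a minimal illustration in plain Python; all names, poses, and offsets here are hypothetical, and in practice the AR SDK supplies the marker pose.

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """Build a 4x4 homogeneous translation matrix."""
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

# Hypothetical pose of the image marker in world space, as reported by the
# SDK on recognition (here: the marker hangs 1.5 m up on a wall).
marker_world = translation(0.0, 1.5, 0.0)

# Furniture offsets authored relative to the marker (the "prefab" layout);
# the y offset of -1.5 puts each piece back down on the floor.
furniture_offsets = {
    "desk":  translation(2.0, -1.5, 1.0),
    "chair": translation(2.5, -1.5, 1.6),
}

# On marker recognition, resolve each piece into world space once; extended
# tracking then maintains the world frame without seeing the marker again.
furniture_world = {name: mat_mul(marker_world, offset)
                   for name, offset in furniture_offsets.items()}
```

The key design point is that the marker is only needed once per session to establish the world frame; everything afterwards is ordinary world-space tracking.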
For persistence, you would need to save the transforms of the furniture objects and load them when spawning.
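Saving and restoring the transforms could look like the following minimal sketch. The file format and field names are assumptions for illustration; any serialization the engine supports would work equally well.

```python
import json

def save_layout(path, transforms):
    """Persist furniture transforms (position + rotation) between sessions."""
    with open(path, "w") as f:
        json.dump(transforms, f)

def load_layout(path):
    """Load previously saved furniture transforms for re-spawning."""
    with open(path) as f:
        return json.load(f)

# Hypothetical layout: positions in meters, rotations as Euler angles.
layout = {
    "desk":  {"position": [2.0, 0.0, 1.0], "rotation_euler_deg": [0, 90, 0]},
    "chair": {"position": [2.5, 0.0, 1.6], "rotation_euler_deg": [0, 180, 0]},
}

save_layout("room_layout.json", layout)
restored = load_layout("room_layout.json")
```

On the next session, the restored transforms would be applied to the spawned prefab instances once the marker re-establishes the world frame.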
Is persistency over several sessions required for your project?
Thank you for confirming that placing geometric patterns in the space, as I shared with you, can be effective.
As for the extended tracking you suggested: I have used it several times in past smartphone AR projects, but it was a supplementary feature and sometimes did not work properly.
I have three questions.
1. When the tracked area is extended outward from the image, does the floor need to be recognizable so that tracking can switch to plane recognition?
2. Does extended tracking work effectively when the device is pointed at glass or at reflective, monochromatic surfaces, for example a glossy single-color floor, a ceiling, or a white wall?
3. Does the display position of the furniture drift significantly at distances of more than a few meters?
Ultimately, we cannot know without trying, but due to limited time and budget it is difficult to test this within the current project.
We would like to try it the next time we have the opportunity.
You asked, "Is persistency over several sessions required for your project?", but I do not understand what this means. Could you put it in other words?