What is the recommended practice for getting the current controller or gaze raycast (depending on the mode) on the Lenovo A3 when using Spaces, to determine whether a GameObject is being selected and whether the gaze or pointer selection has been triggered? There does not seem to be a sample for this, and the OpenXR sample controller Unity example does not map easily to the A3. Our current approach is to get the host controller or gaze transform, perform the raycast ourselves, and capture the selection using the InputAction Pointer/select, but this approach seems fairly complex and only applies to the controller. XRControllerManager does not appear to provide any functionality for getting the current raycast, including the selected object and the ray information, for both gaze and pointer, and it is also part of the samples rather than a core SDK component. Thanks for any guidance.

Regarding your current approach, Physics.Raycast is indeed a flexible option that works well across platforms, firing from the device pointer pose or the camera pose depending on which interaction you are handling.

If you want to use the XR Interaction Toolkit, you can reference the grab functionality described in the XR Interaction Toolkit setup documentation, along with the events it exposes:
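The hand-rolled approach described in the question (raycast from the controller or gaze transform, selection captured via the Pointer/select input action) could be sketched roughly as below. This is a minimal, hypothetical component, not Spaces SDK API: the field names are placeholders, and it assumes the select binding is wired up through an InputActionReference in the Inspector.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch: raycast from the active pose (controller or gaze transform)
// and read the select action from the Input System each frame.
public class PoseRaycaster : MonoBehaviour
{
    [SerializeField] Transform pointerPose;            // controller or head/gaze transform (assumed supplied)
    [SerializeField] InputActionReference selectAction; // e.g. bound to "Pointer/select"
    [SerializeField] float maxDistance = 10f;

    public GameObject HoveredObject { get; private set; }

    void Update()
    {
        HoveredObject = null;

        // Fire the ray from the current pose along its forward vector.
        if (Physics.Raycast(pointerPose.position, pointerPose.forward,
                            out RaycastHit hit, maxDistance))
        {
            HoveredObject = hit.collider.gameObject;

            // Treat a press of the select action this frame as a selection.
            if (selectAction.action.WasPressedThisFrame())
            {
                Debug.Log($"Selected {HoveredObject.name}");
            }
        }
    }
}
```

Swapping the `pointerPose` reference between the controller transform and the camera transform would let the same component cover both pointer and gaze modes, which is one way to avoid the controller-only limitation mentioned above.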