In the 2016 R2 RealSense SDK for Windows, the ProjectDepthToCamera function can be used to map depth image coordinates to world coordinates.
If you need the reverse of this process, ProjectCameraToDepth maps world coordinates to depth coordinates.
Thank you for your answer. ProjectDepthToCamera seems to be a good choice. Unfortunately, I don't know how to call this function. I always get an error when I try to compile this:
PXCMPoint3DF32 pos_uvz;
PXCMPoint3DF32 pos3d;
PXCMProjection.ProjectDepthToCamera(pos_uvz, pos3d);
I am pretty sure I am using the function in the wrong way…
Besides this, am I right to assume that this function returns the XYZ world coordinates from XY depth image coordinates? That is, I pass in a captured depth frame and XY pixel coordinates, and it returns the XYZ world coordinates? Could you point me to an example project where I can see the use of PXCMProjection?
Thanks for supporting me!
You are very welcome, Martin!
Your assumption is right: ProjectDepthToCamera returns the XYZ world (camera-space) coordinates from depth pixel coordinates; the input is the pixel position (u, v) together with its depth value z. If you wanted to go the other way, from world coordinates back to depth pixel coordinates, you would use the ProjectCameraToDepth function instead.
I see your snippet uses the C# names (the PXCM prefix), whereas the C++ names use the PXC prefix.
I do not know of a C# sample for ProjectDepthToCamera, but there is a C++ one; you may be able to get some useful pointers from that. One thing to check in your snippet: ProjectDepthToCamera is an instance method, so it has to be called on a PXCMProjection object (obtained from the device's CreateProjection()) rather than on the class itself, and if I remember the C# interface correctly it takes arrays of PXCMPoint3DF32 rather than single structs, which would explain the compile error.
So far, I think I have got it now. But the example uses PCL (the Point Cloud Library), which seems to already be included in the RealSense SDK. When I try to compile the code, I can successfully include either the RSSDK or PCL, but not both; each seems to override some of the other's functions. Any ideas? Can I use the RSSDK's PCL functions?
Also, I am now using C++.