RealSense expert Samontab published a YouTube video about getting raw camera data into OpenCV. I don't know if it will be useful to your particular issue but it may be worth a look.
Thank you for your reply, but I saw that video at the beginning of all this (maybe 4 months ago). It was really helpful because it showed me the relationship between FPS and what I was able to stream.
Unfortunately that information is not useful anymore; getting the raw depth image is very different from getting distances and world coordinates.
Thanks anyway.
I did some further research. In this link, a person gets the world coordinates from the image coordinates in OpenCV with a non-RealSense camera using a function called solvePnP.
And in another YouTube video, somebody gets coordinates in mm using OpenCV's triangulatePoints function. The useful part is the detailed explanation of the process in the video description under 'Show More', not the video itself.
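For reference, triangulatePoints essentially solves a small linear system (the DLT method) built from two camera projection matrices and a pair of matched pixel coordinates. A minimal pure-NumPy sketch of that idea, with made-up intrinsics and a hypothetical second camera translated 100 mm along x (not any particular camera's calibration):

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation: recover the homogeneous 3D point X
    that best satisfies pt1 ~ P1 @ X and pt2 ~ P2 @ X."""
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # null-space vector of A
    return X[:3] / X[3]      # dehomogenise to (x, y, z)

# Hypothetical intrinsics and a stereo pair 100 mm apart (units: mm).
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0], [0]])])

# Project a known 3D point into both views, then recover it.
X_true = np.array([50.0, -20.0, 1000.0, 1.0])
p1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
p2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, p1, p2))  # ≈ [50, -20, 1000]
```

This is only the geometric core; the real OpenCV function also handles arrays of points and expects the projection matrices from calibration.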
This next video is about getting the image coordinates, which isn't quite what you asked for. However, it links to source code that still works, and in the comments the video's author tells someone asking about world coordinates that the same script can provide them if it is changed to use camera capture.
That was all I was able to find that seemed relevant. Sorry I could not be of more help in this case.
So, you want to know how to get the vertex for a specific pixel in the image, right?
If the pixel you're interested in is in the depth image, that's easy, as the vertices array is aligned to the depth image by default. For a pixel (u,v) in the depth image, the corresponding world point is vertices[u + v * depthImage.info.width].
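As a sketch of that row-major indexing (the array name and resolution here are illustrative, not taken from the SDK):

```python
import numpy as np

# Hypothetical depth resolution; a real stream supplies these values.
width, height = 640, 480

# Pretend vertices array: one (x, y, z) world point per depth pixel,
# stored row-major in exactly the same layout as the depth image.
vertices = np.zeros((width * height, 3))
vertices[100 + 200 * width] = (0.05, -0.02, 1.0)  # plant a known point

def vertex_at(u, v):
    # Same formula as vertices[u + v * depthImage.info.width]
    return vertices[u + v * width]

print(vertex_at(100, 200))  # the world point for depth pixel (100, 200)
```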
If it's in the colour image, it's a bit more complicated. You'll first have to map the colour points to the depth image. There are a couple of ways to do this, depending on your situation. The easiest is probably MapColorToDepth, but this is only efficient if you have a "small number" of pixels to map. If you need to map anything approaching the whole image and you're at all concerned about performance, it'd be best to use either ProjectColorToCamera or QueryInvUVMap. Once you have your colour pixel mapped to the depth image, the case reduces to the depth-image method above.
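Under the hood, these projection helpers rest on the pinhole camera model: a depth pixel (u, v) with depth z deprojects into camera coordinates via the intrinsics. A minimal sketch, ignoring lens distortion (the intrinsic values below are made up; a real application reads them from the SDK's calibration data):

```python
def deproject(u, v, z, fx, fy, cx, cy):
    """Pinhole deprojection: depth pixel (u, v) with depth z (metres)
    to a 3D point in the camera frame, ignoring lens distortion."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Hypothetical intrinsics for a 640x480 depth stream.
fx, fy, cx, cy = 580.0, 580.0, 320.0, 240.0

# A pixel at the principal point maps straight down the optical axis.
print(deproject(320, 240, 1.0, fx, fy, cx, cy))  # (0.0, 0.0, 1.0)
```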
Note, for help with any of the methods linked, search this forum/google for the method name and you should find some snippets to get you started.
This message was posted on behalf of Intel Corporation
Thank you for your interest in the Intel® RealSense™ Technology.
I was wondering if you could check the suggestion provided by jb455.
If you have any other questions or updates, don't hesitate to contact us.