It could be useful; however, I need to be able to map a point from the color image onto the depth image, and that point should be easily identifiable so I can find its coordinate in the depth image.
That is a very long process considering that I only need the equivalent coordinate, i.e.:
that pixel (57,23) in Color = (67,34) in Depth
EDIT: Taking a look at the option you mentioned, "MapColorToDepth", it turns out that this function takes a pixel at a specific coordinate in the color image and maps it to a pixel at a coordinate in the depth image. That means that if I want to map a pixel to its equivalent, I still need to know what its coordinate is, so we are back to the beginning of the problem.
As a further avenue to consider, I'll link you to a page with a script for mapping depth to color using the UV map. Click on the 'C++' tab to get the C++ version of the script.
I'll also link RealSense stream programming expert jb455 to this discussion to get his input. JB, what do you think is the best way to map depth to a single color point?
Do you only want to map a single pixel each time, or will you need to map lots of pixels?
If it's a single pixel, the easiest way will be to use MapColourToDepth to get the corresponding pixel in the depth image, then retrieve the depth of that pixel from the depthData object. This isn't very time/CPU-efficient when mapping many pixels though, so be aware of that.
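To illustrate the single-pixel case: once MapColourToDepth has given you the depth-image coordinate (u,v) for your colour pixel, reading the depth value is just a row-major index into the depth buffer. The helper below is a hedged sketch with illustrative names (`depthAt`, a `uint16_t` buffer), not the actual SDK API:

```cpp
#include <cstdint>
#include <vector>

// Sketch: read the depth value at depth-image coordinate (u, v).
// Assumes a row-major depth buffer with one 16-bit sample per pixel,
// which is how RealSense depth frames are commonly laid out.
uint16_t depthAt(const std::vector<uint16_t>& depthData,
                 int depthWidth, int u, int v)
{
    return depthData[u + v * depthWidth];
}
```

In practice `depthData` would come from the SDK's depth frame, and (u,v) from MapColourToDepth; everything else here is scaffolding for the example.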
If you want to do lots of pixels, it'd be best to use either the inverse UV map or ProjectColourToCamera.
Good morning, thanks for your answer. I already managed to map the pixels a long time ago with the code above; the real problem is that I need to know the equivalence of coordinates:
I have a pixel at position (x, y) in the color image and I need to know which pixel (x', y') it is equivalent to in the depth image.
As I have no way to put a visible marker on the color image, due to the problems of converting from CvMat to PXCImage, mapping the pixels graphically is of no use to me; I need to find the "mathematical" or "geographic" equivalence.
Ok, in the code I linked (inverse UV map), lines 13 & 14 calculate the coordinates in the depth image (u,v). So in your case, you'd need invuvmap[x + y * color.info.width] instead of invuvmap[i].
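The indexing jb455 describes can be sketched on its own: the inverse UV map has one entry per colour pixel, stored row-major, so the entry for colour pixel (x, y) sits at offset x + y * width. Names here are illustrative, not the SDK's:

```cpp
#include <cstddef>

// Sketch: flat row-major index of colour pixel (x, y) into an
// inverse UV map that has one entry per colour pixel.
std::size_t invUvIndex(int x, int y, int colourWidth)
{
    return static_cast<std::size_t>(x)
         + static_cast<std::size_t>(y) * static_cast<std::size_t>(colourWidth);
}
```

So for the questioner's example pixel (57, 23) in a 640-wide colour image, the entry to read would be invuvmap[invUvIndex(57, 23, 640)].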
BTW, this is equivalent to what MapColorToDepth does: you give it (i,j) colour-image coordinates and it returns the corresponding (u,v) depth-image coordinates, if a mapping exists. Because the cameras look at different places, the depth camera can't see everything the colour camera can see, so if there is no corresponding pixel it'll return (-1,-1). I don't understand what you mean in the edit of post #2 - from what you say you need, this should be perfect (again, assuming a small number of pixels).
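Since the (-1,-1) sentinel trips people up, here is a minimal hedged sketch of checking a mapped result before using it (the struct and function names are illustrative; the SDK's own point type would be used in real code):

```cpp
// Sketch: a mapped depth coordinate as returned by a
// MapColorToDepth-style call; (-1, -1) means "no mapping".
struct DepthCoord {
    float u;
    float v;
};

// True only when the depth camera actually saw that colour pixel.
bool hasDepthMapping(const DepthCoord& p)
{
    return p.u >= 0.0f && p.v >= 0.0f;
}
```

Skipping this check and indexing the depth buffer with -1 is a common source of crashes, so it's worth guarding every lookup.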