
Manually converting from Depth and color pixels to XYZ coordinate in SR300

aanaz1
Beginner

Hi

I am using the SR300 depth camera and I want to find a way to manually convert from depth and color pixels to XYZ coordinates in mm.

MartyG
Honored Contributor III

This forum thread may be useful to you.

https://software.intel.com/en-us/forums/realsense/topic/560784 Converting depth into 3D world coordinates intel real sense

aanaz1
Beginner

Thank you, I already saw that post, but I am looking for a way to convert manually from pixels to XYZ coordinates without using the ready-made functions, because I am using the measurements in my project and I need to know how they are calculated.

MartyG
Honored Contributor III

There was an article on converting the depth image from a Microsoft Kinect motion camera to real-world coordinates with equations. Maybe it is possible to adapt them for RealSense.

*******

When you record a scene with the Kinect, you can choose 'real coordinates' (x, y, z in cm) or 'Kinect coordinates' (i, j, z), where i ranges from 0 to 640, j ranges from 0 to 480, and z is still in cm. So a unit increment in 'i' or 'j' corresponds to a different 'real distance' depending on the depth 'z'.

To convert Kinect coordinates to real coordinates you have to calibrate your device in this way:

x = (i - 320) * a * z

y = (j - 240) * a * z

where, in our case, a = 0.00173667

The author attached an accompanying PDF to their original post. It is in Spanish but has a useful illustration in English. I've attached it to this message.
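As a minimal Python sketch of the conversion described above, assuming the calibration constant a = 0.00173667 quoted in the article (the value is device-specific, so you would need to calibrate your own camera):

```python
# Convert Kinect-style pixel coordinates (i, j) plus depth z (in cm)
# to real-world x, y (in cm), following the equations quoted above.
# `a` is the device-specific calibration constant from the article;
# 320/240 are the image centre of a 640x480 frame.

A = 0.00173667  # calibration constant quoted in the article

def kinect_to_real(i, j, z, a=A, cx=320, cy=240):
    """Map pixel (i, j) at depth z (cm) to real-world (x, y, z) in cm."""
    x = (i - cx) * a * z
    y = (j - cy) * a * z
    return x, y, z

# Example: the image centre maps to x = y = 0 at any depth.
print(kinect_to_real(320, 240, 100.0))  # (0.0, 0.0, 100.0)
```

Note that a pixel one step right of the centre moves further in x the deeper the point is, which is the depth-dependent scaling the article describes.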

aanaz1
Beginner

Actually I was hoping to find something like that but for the SR300, because "a" depends mainly on the device, and the equations themselves may also be different. I saw that post and other posts, and they each use different equations. So I need to find some official documentation in order to use it in my project.

Thank you so much

jb455
Valued Contributor II

The 'official' way of doing it (and so the only way that is properly documented) is to use the https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/pxcprojection.html Projection interface. But if you want to do it yourself you can try to reverse-engineer the formulae on https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/streamcalibration_pxccalibration.html this page (or, for a simpler but less accurate way, you can use http://forums.structure.io/t/getting-colored-point-cloud-data-from-aligned-frames/4094/2 this method, where QVGA_F_X/Y are the focal length and QVGA_C_X/Y are the principal point, both of which you get from the Calibration).
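The 'simpler' method boils down to standard pinhole-camera deprojection. A hedged Python sketch, where fx, fy (focal length in pixels) and cx, cy (principal point) stand in for the QVGA_F_X/Y and QVGA_C_X/Y values you would read from the camera's calibration (the numbers below are made-up placeholders, not real SR300 calibration data):

```python
def deproject(u, v, depth_mm, fx, fy, cx, cy):
    """Pinhole deprojection: pixel (u, v) plus depth -> XYZ in mm.

    fx, fy: focal length in pixels; cx, cy: principal point in pixels.
    These must come from your camera's calibration; the demo values
    below are placeholders, not real SR300 intrinsics.
    """
    x = (u - cx) * depth_mm / fx
    y = (v - cy) * depth_mm / fy
    return x, y, depth_mm

# Placeholder intrinsics for a 640x480 image (NOT real SR300 values):
fx = fy = 475.0
cx, cy = 320.0, 240.0

# The principal point deprojects to x = y = 0 at any depth.
print(deproject(320.0, 240.0, 500.0, fx, fy, cx, cy))  # (0.0, 0.0, 500.0)
```

This ignores lens distortion and the depth-to-color extrinsics, which is why the Projection interface is more accurate for aligned depth/color work.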

MartyG
Honored Contributor III

Thanks for your help JB!

jb455
Valued Contributor II

My attention was just drawn to the RF_MeasurementSP sample. If you look at lines 69-78 of Measurement.cpp, it uses the 'simpler' method I mentioned to get the world coordinates (the m_spIntrisics object it uses looks similar to the StreamCalibration object I linked to before).
