Back in March there was a case where someone was measuring the distance between two X-Y points that should have been 5 cm but was reading as 7 cm. An Intel support representative suggested using the 'Measure' sample program that comes with the RealSense SDK, and the user reported more accurate results when using Measure.
Thanks, but using their method of calculating the 3D points didn't seem to help; I still get the same results (which I guess is expected if my original point-cloud method was correct).
I can't run their C++ code at the moment, but I translated the calculation into my Python script.
Another point of reference besides the Measure sample is the Python example for measuring boxes. That example also supports multiple cameras.
RealSense cameras are usually well calibrated out of the box. Recalibration is typically only needed if the camera suffers a hard knock or is dropped onto the floor.
We do not get many questions about XY accuracy, as accuracy issues usually relate to Z-depth. You say that your depth accuracy is excellent, though.
Measurement in Python also came up in another recent discussion:
Within that discussion, a Python tutorial called 'Distance to Object' is highlighted.
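The core idea of that tutorial reduces to reading the raw depth value at the object's pixel and converting it to metres with the device's depth scale (which is what pyrealsense2's depth_frame.get_distance() does internally). A minimal NumPy-only sketch of that conversion; the array contents, pixel coordinates, and depth scale below are hypothetical stand-ins for real frame data:

```python
import numpy as np

# Hypothetical raw depth frame: 480x640, 16-bit values in device units
depth_image = np.zeros((480, 640), dtype=np.uint16)
depth_image[240, 320] = 1000      # object pixel, 1000 device units
depth_scale = 0.001               # metres per device unit (typical for D400)

def distance_to_object(depth_image, depth_scale, x, y):
    # Convert the raw depth value at pixel (x, y) to metres.
    # Note the numpy index order: [row, column] = [y, x].
    return depth_image[y, x] * depth_scale

print(distance_to_object(depth_image, depth_scale, 320, 240))  # → 1.0
```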
I seem to get much better results (perfect so far) when using the intrinsic parameters from the color frame instead of the depth frame, and otherwise following the example in librealsense/examples/measure at master · IntelRealSense/librealsense · GitHub .
For anyone else with the same problem, this is how I made a very simple distance function (assuming 'import numpy as np' and 'import pyrealsense2 as rs'):
def get_3d_coords(color_intr, depth_frame, xpix1, ypix1, xpix2, ypix2):
    # Depth (in metres) at each of the two pixels
    dist1 = depth_frame.get_distance(xpix1, ypix1)
    dist2 = depth_frame.get_distance(xpix2, ypix2)
    # Deproject each 2D pixel into a 3D point ([x, y, z] in metres)
    point1 = np.asarray(rs.rs2_deproject_pixel_to_point(color_intr, [xpix1, ypix1], dist1))
    point2 = np.asarray(rs.rs2_deproject_pixel_to_point(color_intr, [xpix2, ypix2], dist2))
    # Euclidean distance between the two 3D points
    return np.sqrt(np.sum(np.square(point1 - point2)))
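For intuition about what rs2_deproject_pixel_to_point is doing: with no lens distortion it is simply the pinhole back-projection, using the intrinsics' focal lengths (fx, fy) and principal point (ppx, ppy). A NumPy-only sketch of that math; the intrinsic values below are made-up examples, not real camera parameters:

```python
import numpy as np

def deproject_pixel(fx, fy, ppx, ppy, pixel, depth):
    # Pinhole back-projection: pixel (u, v) plus depth (metres)
    # -> 3D point in the camera frame, ignoring lens distortion.
    u, v = pixel
    return np.array([(u - ppx) / fx * depth,
                     (v - ppy) / fy * depth,
                     depth])

# Made-up intrinsics for illustration
fx = fy = 600.0
ppx, ppy = 320.0, 240.0

p1 = deproject_pixel(fx, fy, ppx, ppy, (320, 240), 1.0)  # on the optical axis
p2 = deproject_pixel(fx, fy, ppx, ppy, (920, 240), 1.0)  # 600 px to the right
print(np.linalg.norm(p1 - p2))  # → 1.0 (600 px at fx=600 and 1 m depth = 1 m)
```

This also makes clear why the choice of intrinsics matters: fx, fy, ppx and ppy differ between the depth and color sensors, so deprojecting with the wrong set scales the X-Y coordinates incorrectly.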
I don't yet know whether it's something I just misunderstood the first time, or whether it is connected to the fact that I align my depth and color images in a different part of my code. But this change of intrinsic parameters also fixed my problem with the measured distance changing with viewing angle!
Thanks for all the very fast answers!