Guess I will talk to myself then.
The point cloud calculator seems to always use the depth sensor intrinsics, which produces garbage PCs from aligned depth. This is especially noticeable with the D435, which uses different cameras for color and IR, so I am continuing my tests with the D435.
Calculating the PC manually with the RGB camera intrinsics produces more realistic results. The points are obviously shifted due to the different reference coordinate systems.
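Roughly what that manual calculation looks like, as a numpy sketch (names are illustrative; `depth_scale` would come from the depth sensor's `get_depth_scale()`, and since all distortion coeffs are zero here, the plain pinhole model is enough):

```python
import numpy as np

def deproject_aligned_depth(depth_image, intr, depth_scale):
    """Deproject an (H, W) uint16 aligned-depth image using the COLOR
    intrinsics (ppx/ppy/fx/fy) instead of the depth sensor intrinsics."""
    z = depth_image.astype(np.float32) * depth_scale  # raw units -> meters
    h, w = depth_image.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))    # pixel grid
    x = (u - intr.ppx) / intr.fx * z                  # pinhole deprojection
    y = (v - intr.ppy) / intr.fy * z
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                         # drop zero-depth pixels
```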
Now we get to the transformation part. The transform is extracted from color->depth extrinsics values and applied to the calculated point cloud. This is where I get strange behavior.
Color->Depth extrinsics rotation: [0.999981, 0.00394571, 0.00481309, -0.0039441, 0.999992, -0.000344145, -0.00481441, 0.000325155, 0.999988] translation: [-0.0144745, -0.000329277, -0.000862016]
Transform( Orientation( [[ 9.9998063e-01 3.9457134e-03 4.8130937e-03] [-3.9441027e-03 9.9999219e-01 -3.4414532e-04] [-4.8144138e-03 3.2515533e-04 9.9998838e-01]] ), Vector(-0.014474545605480671, -0.00032927736174315214, -0.0008620155858807266) )
Color intrinsics width: 1280, height: 720, ppx: 646.732, ppy: 356.376, fx: 928.967, fy: 928.493, model: Brown Conrady, coeffs: [0, 0, 0, 0, 0]
Depth intrinsics width: 1280, height: 720, ppx: 636.268, ppy: 365.072, fx: 636.515, fy: 636.515, model: Brown Conrady, coeffs: [0, 0, 0, 0, 0]
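For reference, the transform step looks roughly like this (a numpy sketch, assuming pyrealsense2 stream profiles from a started pipeline; reshaping the nine rotation values row-major is the interpretation being tested here):

```python
import numpy as np

def color_to_depth_transform(color_profile, depth_profile):
    """Fetch color->depth extrinsics from two rs.stream_profile objects and
    return (R, t), reading the nine rotation values row-major."""
    extr = color_profile.get_extrinsics_to(depth_profile)
    R = np.asarray(extr.rotation, dtype=np.float64).reshape(3, 3)
    t = np.asarray(extr.translation, dtype=np.float64)
    return R, t

def transform_points(points, R, t):
    """Apply p' = R @ p + t to an (N, 3) point array."""
    return points @ R.T + t
```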
PC from original depth frame (gray) and PC from aligned depth frame (using color intrinsics) (blue):
PC from original depth frame (orange) and PC from aligned depth frame with the transform applied (purple). Closer, but not an exact match:
PC from original depth frame (pink) and PC from aligned depth frame with applied transform TWICE (green):
This is as close as it gets, but what on earth is happening? Why does applying the extrinsics transformation TWICE produce better results? If I am missing something, please enlighten me. Alternatively, if you know another way of getting the depth from an aligned frame back into the original depth coordinate system, or of aligning color over the depth map (not the other way around, like every tutorial does), I would like to hear it.
This message was posted on behalf of Intel Corporation
I apologize for not getting back to you sooner.
I will continue to research this problem for you, and will get back once I find a better solution. This might be a calibration issue.
For now, have you tried the rs-align sample from the RealSense SDK 2.0? This sample aligns depth frames with some other stream (and vice versa).
So you would be able to align color over depth like you said.
The GitHub page for the rs-align sample can be found here.
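For illustration, here is a minimal pyrealsense2 sketch of the align processing block that the sample wraps (the C++ sample is equivalent; the stream settings here are just examples):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline.start(cfg)

# Align to the DEPTH viewport, i.e. map color over depth.
align = rs.align(rs.stream.depth)

frames = pipeline.wait_for_frames()
aligned = align.process(frames)
depth_frame = aligned.get_depth_frame()
color_frame = aligned.get_color_frame()  # color resampled into depth geometry
```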
I think I solved it yesterday. If I invert the extrinsics rotation matrix, I get the required transform to align the two point clouds. Though I am not sure why the extrinsics transform data is recorded this way.
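For anyone who hits the same thing, a sketch of the fix in numpy, using the extrinsics values printed above. One possible explanation, as far as I can tell: rs2_extrinsics stores the rotation column-major (rs2_transform_point_to_point in rsutil.h indexes it that way), so reshaping the nine values row-major yields the transpose, and for an orthonormal rotation the transpose is exactly the inverse. That would be why inverting the rotation (and only the rotation) lines the clouds up:

```python
import numpy as np

# Color->depth extrinsics as printed above, reshaped row-major.
R = np.array([[ 0.999981,    0.00394571,  0.00481309 ],
              [-0.0039441,   0.999992,   -0.000344145],
              [-0.00481441,  0.000325155, 0.999988   ]])
t = np.array([-0.0144745, -0.000329277, -0.000862016])

# Inverting the rotation: R is orthonormal, so inv(R) == R.T.
# The translation stays as-is, consistent with the column-major reading.
R_fixed = R.T

def transform_points(points, R, t):
    """Apply p' = R @ p + t to an (N, 3) point array."""
    return points @ R.T + t

# points_in_depth = transform_points(points_in_color, R_fixed, t)
```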
"This sample aligns depth frames with some other stream (and vice versa)" - unfortunately, "vice versa" doesn't seem to be the case; it has been mentioned in several places that rs_align only aligns depth data to other streams.