Intel recently introduced a new chessboard calibration product in its Click online store that can calibrate multiple 400 Series cameras, though it costs $1500 as it is aimed at engineering departments and manufacturing facilities rather than consumers.
I can calibrate between the left and right cameras (intrinsics and R/T), and between the left and color cameras (intrinsics and R/T), since they use the same reference: the left camera principal point as the origin, which is also the origin of the point cloud coordinate system.
Is the point cloud coordinate system reference for the D435 the left camera principal point BEFORE rectification or AFTER rectification?
If I use a checkerboard to calibrate the relative pose (R/T) between two D435 cameras (say Camera A and Camera B),
and I use only the left imager of Camera A and the left imager of Camera B to get the extrinsic calibration,
is that R/T between A and B the same as the transform between the point clouds generated by A and B?
I did the above, but point cloud A and point cloud B do not register correctly.
How do I use a single planar checkerboard to calibrate the relative pose (R and T) between two point clouds?
For calibration, RealSense provides the Y16 data format which is unrectified. I think all your questions are answered in this custom calibration white paper https://www.intel.com/content/www/us/en/support/articles/000026725/emerging-technologies/intel-realsense-technology.html.
Intel Customer Support
Thanks. I could not find the information in the Intel custom calibration document.
My question is:
Are the following two coordinate systems the same:
1) Coordinate system of Point Cloud generated by D435: Origin and X, Y, Z direction
2) Coordinate system of the Left Camera (OV9282): Origin and XYZ direction
Example of two D435 Cameras:
D435 Camera A, LeftCamA and RightCamA, generate PointCloudA.
D435 Camera B: LeftCamB and RightCamB, generate PointCloudB
If I do extrinsic calibration between LeftCamA and LeftCamB to get the rotation and translation matrices R/T, is the following true?
PointCloudB*R+T will align with PointCloudA
The answer to your question is Yes. This is implied in the multiple camera whitepaper, https://www.intel.com/content/www/us/en/support/articles/000028140/emerging-technologies/intel-realsense-technology.html. Page 10, section C. Aligning Point Clouds.
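To make the alignment concrete, here is a minimal pure-Python sketch (no external libraries) of the operation the whitepaper describes, written with the column-vector convention p' = R·p + T, where R is a row-major 3x3 rotation and T a 3x1 translation in millimeters mapping camera B's frame into camera A's. The identity rotation and 100 mm shift below are placeholder values, not real calibration output; if your pipeline uses the row-vector form PointCloudB*R+T, transpose R accordingly.

```python
def transform_point(point, R, T):
    """Apply p' = R * p + T to a single 3D point (column-vector convention)."""
    return [
        R[i][0] * point[0] + R[i][1] * point[1] + R[i][2] * point[2] + T[i]
        for i in range(3)
    ]

def align_cloud(cloud_b, R, T):
    """Transform every point of cloud B into cloud A's coordinate system."""
    return [transform_point(p, R, T) for p in cloud_b]

# Placeholder calibration: identity rotation plus a 100 mm shift along X.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T = [100.0, 0.0, 0.0]
cloud_b = [[0.0, 0.0, 500.0], [10.0, -5.0, 480.0]]
print(align_cloud(cloud_b, R, T))  # points now expressed in camera A's frame
```

With a real checkerboard calibration you would substitute the estimated R and T; the per-point math is unchanged.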
Intel Customer Support
How do I transform the point cloud inside the D435 camera?
I have an R/T matrix from extrinsic calibration of the D435 point cloud relative to my workspace.
How do I load the R/T into the D435 so that the camera applies the transform internally and the point cloud it outputs is already in my workspace coordinates?
You can use the WriteCustomCalibrationParameters function from Dynamic Calibration API to write those parameters in the camera.
Find more about it here: https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/RealSense_D400_Dyn_Calib_Programmer.pdf
The calibration write command writes the intrinsic parameters, stereo parameters, and RGB extrinsic parameters.
Yes, I use those for the custom intrinsic and stereo calibration.
But what I would like to do is write extrinsic parameters that transform the point cloud.
Does the D435 have a way to transform the point cloud (rotation and translation) inside the camera?
The Rotation and Translation parameters of the IR cameras can be written using this function.
They are rotationLeftRight (The rotation from the right camera coordinate system to the left camera coordinate system, specified as a 3x3 row-major rotation matrix) and translationLeftRight (The translation from the right camera coordinate system to the left camera coordinate system, specified as a 3x1 vector in millimeters).
You might also want to check the Projection in RealSense SDK 2.0 page.
The point cloud origin is the left camera principal point; the right-to-left transform is for stereo matching after rectification.
What I'd like to do is transform the point cloud (essentially from the IR left camera origin) to my workspace origin.
I can do this outside of the D435, but I'd like to do it inside the camera.
The SDK commands mentioned in the document, such as:
can those commands only be executed outside of the camera?
The depth data is generated from the overlap of the left and right imagers' fields of view.
The depth start point for the D435 is referenced at -3.2 mm from the front of the camera cover glass. You can find this information on page 57, section 4.7 Depth Start Point (Ground Zero Reference), of the datasheet.
In order to obtain depth data, you also need to consider the Min-Z depth (the minimum distance from the depth camera to the scene for which the Vision Processor D4 provides depth data), which for the D435 varies from 105 mm to 280 mm depending on the resolution. You can check the datasheet, page 56, section 4.4 Minimum-Z Depth, for the exact values.
Given that you need to take these into consideration, transforming the point cloud inside the camera is not possible.
Could you please let us know why you want to transform the point cloud inside the camera?
Thank you and best regards,
What we do now:
1) The point clouds are output from the D435 cameras (computed from disparity plus the intrinsic and stereo calibration parameters inside the camera).
2) We have an extrinsic calibration matrix to transform the point cloud from the D435 frame to our workspace.
What we want to do:
1) Get point clouds from the D435 that are already transformed to our workspace (we can provide the rotation and translation matrix to the camera), to save the conversion time on our computer.
Unfortunately, the camera is not capable of doing these advanced transformations. You will have to do that processing on your computer.
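Since the transform has to happen on the host, one common approach is to fold R and T into a single 4x4 homogeneous matrix and apply it to every point in one pass. A minimal pure-Python sketch of that host-side step (the 90-degree rotation and offsets below are hypothetical stand-ins for a real workspace calibration; in practice you would vectorize this, e.g. with NumPy):

```python
def make_homogeneous(R, T):
    """Pack a 3x3 rotation and 3x1 translation into one 4x4 matrix."""
    return [R[0] + [T[0]], R[1] + [T[1]], R[2] + [T[2]], [0.0, 0.0, 0.0, 1.0]]

def to_workspace(cloud, M):
    """Map camera-frame points into the workspace frame: p' = M * [p, 1]."""
    out = []
    for x, y, z in cloud:
        out.append([
            M[i][0] * x + M[i][1] * y + M[i][2] * z + M[i][3]
            for i in range(3)
        ])
    return out

# Hypothetical calibration result: 90-degree rotation about Z plus an offset.
R = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
T = [50.0, 20.0, 0.0]
M = make_homogeneous(R, T)
print(to_workspace([[100.0, 0.0, 300.0]], M))
```

Folding R and T into one matrix also makes it cheap to chain further transforms (e.g. workspace to robot base) by multiplying matrices once before touching any points.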