The D435 is a stereoscopic-type camera. This means that it has an RGB sensor and a pair of infrared imagers (left and right). The left and right IR imagers are used to construct a depth image.
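As a rough illustration of how the stereo pair yields depth (a simplified sketch, not the D4 ASIC's actual pipeline; the focal length below is a made-up assumption, and 50 mm is the approximate D435 IR baseline): depth is proportional to focal length and baseline, and inversely proportional to the left/right pixel disparity.

```python
# Sketch of stereo depth-from-disparity. The real D4 vision processor does
# much more (rectification, matching, filtering); values are illustrative.
FOCAL_LENGTH_PX = 640.0   # assumed focal length in pixels (placeholder)
BASELINE_M = 0.050        # D435 left/right IR baseline is roughly 50 mm

def depth_from_disparity(disparity_px):
    """Depth in metres for a given left/right pixel disparity."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity => point at infinity
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

print(depth_from_disparity(32.0))  # with these numbers, 32 px -> 1.0 m
```

Note how halving the disparity doubles the reported depth, which is why depth error grows with distance on stereo cameras.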
If you are aiming to align more than one RealSense camera but do not need their streams to be precisely synchronized, I wonder if the 'Multicam' sample program might meet your needs.
If you need to align a D435 with a non-RealSense device, then multi-camera hardware sync would be the way to go.
It is worth bearing in mind, though, that with a D435, to initiate the synchronization you would also need an additional D415 camera to act as the sync trigger pulse generator, or an external signal generator.
Yes, the Extrinsics object contains the translation between the two sensors in metres.
To get it, depending on what language you're using, you'll need to do something like this:
var extrinsics = depthStream.GetExtrinsicsTo(colourStream);
Alternatively, IIRC, the calibration tools/API include a command-line app for outputting calibration data from a device.
Intel's depth testing guide says about external projectors: "Projectors do not need to be co-located with the depth sensor. It is also acceptable to move or shake the projector if desired, but it is not needed".
Here is the rest of the text from the guide about external projectors.
a. While the internal projector is very good (shown below), it is really quite low power and is not designed to illuminate very large rooms. It is therefore sometimes beneficial to add one or more additional external projectors. The D4xx camera will benefit from any additional texture projected onto a scene that has no inherent texture, like a flat white wall. Ordinarily, no tuning needs to be done in the D4 VPU when you use any external projectors as long as it is on continuously or flickering at >50KHz, so as not to interfere with the rolling shutter or auto exposure properties of the sensor.
b. External projectors can be really good in fixed installations where low power consumption is not critical, like a static box measurement device. It is also good for illuminating objects that are far away, by adding illumination nearer to the object.
c. Regarding the specific projection pattern, we recommend somewhere between 5,000 and 300,000 semi-random dots. We normally recommend patterns that are scale invariant. This means that if the separation between the projector and the depth sensor changes, then the depth sensor will still see texture across a large range of distances.
d. It is possible to use light in visible or near-infrared (ranging from ~400-1000nm), although we normally recommend 850nm as being a good compromise in being invisible to the human eye while still being visible to the IR sensor with about 15-40% reduction in sensitivity, depending on the sensor used.
e. When using a laser-based projector, it is extremely important to take all means necessary to reduce the laser-speckle or it will adversely affect the depth. Non-coherent light sources will lead to best results in terms of depth noise, but cannot normally generate as high contrast pattern at long range. Even when all speckle reduction methods are employed it is not uncommon to see a passive target or LED projector give >30% better depth than a laser-based projector.
I don't think there are any numbers for this other than what is in the datasheet.
I believe (but you should double check): the RGB camera is 15 mm to the left of IR1... IR2 is 50 mm to the right of IR1. I think the Viewer takes the center of the IR baseline as the zero point... so IR1 is at -25 mm and RGB is at about -40 mm.
If you are using external analysis and want to combine or compare data with the RealSense results, you need to translate (and possibly rotate) your data if you aren't using the camera's zero point as the origin of your data universe.
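A minimal sketch of that translate-and-rotate step, assuming you have the transform as a 3x3 row-major rotation matrix and a translation vector in metres (the values below are made-up placeholders, not real D435 calibration data):

```python
# Transform a 3D point from an external coordinate frame into the camera
# frame: p_cam = R * p_ext + t. Rotation/translation values are placeholders.
def transform_point(point, rotation, translation):
    """Apply a 3x3 rotation (nested lists) and a translation to an xyz point."""
    return [
        sum(rotation[r][c] * point[c] for c in range(3)) + translation[r]
        for r in range(3)
    ]

# Hypothetical example: no rotation, origin shifted 40 mm along x.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
offset = [-0.040, 0.0, 0.0]

print(transform_point([0.1, 0.2, 1.0], identity, offset))
# x becomes roughly 0.06 m; y and z are unchanged
```

If your data and the camera disagree on orientation as well as origin, the rotation matrix carries that correction; apply it before adding the translation, as above.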
The depth data is already aligned to IR1 when you get it from the Viewer.