Note that the ZR300 and SR300 both use a laser projector (as discussed above), but maybe there is an important difference!
The ZR300 does have stereoscopic IR cameras, and this is how it calculates depth. The projector is there "to add texture to non-textured objects" (at least, that is the datasheet's explanation).
So it might behave differently if you used a single projector or IR source with two or more cameras. It could be interesting to find out!
My guess is that without the projector the accuracy would be less stable. It would depend on the contrast of the object you were imaging and on how well the pixel edges are detected by both sensors.
Stereoscopic depth relies on detecting the pixel shift (disparity) of a feature between the two perspectives to calculate Z, so it is a bit trickier.
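To make the disparity idea concrete, here is a minimal sketch of the standard stereo relation Z = f * B / d. The focal length and baseline are made-up example values, not the ZR300's actual calibration:

```python
# Illustrative stereo depth calculation: Z = f * B / d.
# focal_px and baseline_m below are invented example values,
# NOT the ZR300's real calibration parameters.
def depth_from_disparity(disparity_px, focal_px=600.0, baseline_m=0.07):
    """Depth in metres from the pixel shift of a feature between the two IR cameras."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift: feature is effectively at infinity
    return focal_px * baseline_m / disparity_px

# Large shift -> close object; small shift -> distant object.
# Note how a one-pixel matching error hurts far more at small disparities,
# which is why low-contrast (texture-less) surfaces make stereo unstable.
near = depth_from_disparity(60.0)
far = depth_from_disparity(6.0)
print(near, far)
```

This also illustrates why the projector helps: it adds texture so the matcher can find a reliable disparity on otherwise featureless surfaces.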
Also, one neat thing (though we haven't played with it yet) is that there is an integrated IMU (gyro & accelerometer). The hint is that the camera uses it somehow to correct for changing orientations of the sensor.
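We have not tested the IMU ourselves, so purely as an illustration of how gyro and accelerometer readings are commonly fused to track orientation, here is a one-axis complementary filter. All the readings below are invented, not real ZR300 IMU output:

```python
# Minimal one-axis complementary filter: a common way to fuse a gyro
# (fast but drifting) with an accelerometer (noisy but drift-free).
# The sample readings here are invented, not real ZR300 IMU data.
def complementary_filter(angle_deg, gyro_dps, accel_angle_deg, dt, alpha=0.98):
    # Integrate the gyro rate, then nudge toward the accelerometer's estimate.
    return alpha * (angle_deg + gyro_dps * dt) + (1.0 - alpha) * accel_angle_deg

angle = 0.0
for _ in range(100):
    # Pretend the sensor sits tilted at 10 degrees while the gyro reads zero rate:
    angle = complementary_filter(angle, gyro_dps=0.0, accel_angle_deg=10.0, dt=0.01)
print(round(angle, 2))  # converges toward 10 degrees
```

The point is only to show the principle: the gyro gives responsive short-term tracking while the accelerometer slowly corrects the drift.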
For your application it might not matter if the sensors are static.
Please let us know if you do any testing on the ZR!
I noticed that the depth and IR images have different resolutions on the ZR300. In spite of this, are the depth images always pixel-aligned with the IR images on the ZR300?
Also, for the ZR300, can we still use the client software to change parameters such as DEPTH_ACCURACY, etc.?
Does the ZR300 work well with librealsense for Linux? I would like to swap in a ZR300 for the SR300, plug and play, without having to heavily modify the code I have already written using librealsense.
Whilst I'll leave the depth align question for Chris to reply to, I'll try to answer the rest of your questions.
The ZR300 has identical IR components to the R200, so librealsense instructions that relate specifically to the R200 (e.g. r200_auto_exposure_kp_gain) should work with the ZR300 too.
Intel recommends that with the ZR300 you use the Intel RealSense SDK For Linux. This installs librealsense as a module, so you get the librealsense functions plus extra features not in basic librealsense, such as 'Person Tracking', 'Object Recognition', 'Object Localization', and 'Object Tracking'.
We haven't worked much with the ZR300, since our sensors are Windows-based at the moment. There has been some development with librealsense, but only a little, so I am not sure I can advise you on the specifics of the ZR300!
With the SR300 we use depth + color for our application, not depth + IR. But in that case there is a correlation between the depth pixels and the color pixels.
For a given color pixel you can find the corresponding depth pixel. I would expect the same to hold between depth and IR as well.
If you need it, I can try to post some snippets of the code that performs this depth/color extraction.
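In the meantime, here is a rough sketch of the usual mapping, shown in the depth-to-color direction (you need the depth value to deproject, so this is the natural direction). The pinhole intrinsics and extrinsics below are invented example values, not a real ZR300 calibration and not our actual snippet; with librealsense you would query the camera's real calibration instead:

```python
# Sketch of the standard pinhole mapping from a depth pixel to a color pixel:
# deproject to 3D, transform into the color camera's frame, reproject.
# All calibration numbers here are invented examples, NOT real ZR300 values.

def deproject(pixel, depth_m, fx, fy, cx, cy):
    """Depth pixel + measured depth -> 3D point in the depth camera frame."""
    u, v = pixel
    return ((u - cx) / fx * depth_m, (v - cy) / fy * depth_m, depth_m)

def transform(point, rotation, translation):
    """Apply a row-major 3x3 rotation and a translation (depth -> color frame)."""
    x, y, z = point
    return tuple(
        rotation[3 * i] * x + rotation[3 * i + 1] * y + rotation[3 * i + 2] * z
        + translation[i]
        for i in range(3)
    )

def project(point, fx, fy, cx, cy):
    """3D point in the color camera frame -> (sub-pixel) color image coordinates."""
    x, y, z = point
    return (x / z * fx + cx, y / z * fy + cy)

# Example extrinsics: identity rotation, 25 mm baseline along x.
R = [1, 0, 0, 0, 1, 0, 0, 0, 1]
t = [0.025, 0.0, 0.0]

p3d = deproject((320, 240), 1.0, fx=600, fy=600, cx=320, cy=240)
color_px = project(transform(p3d, R, t), fx=615, fy=615, cx=310, cy=245)
print(color_px)  # the depth pixel lands slightly offset in the color image
```

Because the offset depends on the depth value, the correspondence must be recomputed per pixel; going the other way (color to depth) requires a search or a precomputed aligned stream.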
I bought a ZR300. The interference is not bad anymore when I use the SR300 and the ZR300 together. Sometimes, depending on the angle, I can see the sweeping light of the SR300 in the ZR300 depth image, but it's not a big issue right now since the important areas in each depth image have good depth data.