Thanks for reaching out!
How the ZR300 generates depth data is described in section 4 of its datasheet (https://click.intel.com/media/ZR300-Product-Datasheet-Public-002.pdf). The following is taken from section 4.1:
"The ZR300 module uses stereo vision to calculate depth. The stereo vision implementation consists of left infrared camera, right infrared camera, and an infrared laser projector. The left and right camera data is sent to the ZR300 ASIC. The ASIC calculates depth values for each pixel in the image. The infrared projector is used to enhance the ability of the system to calculate depth in scenes with low amounts of texture..."
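To make the quoted description concrete, here is a minimal sketch of the triangulation relationship a stereo system like this relies on (depth = focal length × baseline / disparity). The focal length and baseline below are illustrative placeholders, not the ZR300's actual calibration values:

```python
# Stereo triangulation: z = f * B / d
#   f: focal length in pixels
#   B: baseline between the left and right IR cameras, in meters
#   d: disparity (pixel shift of a feature between the two images)
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        return float("inf")  # zero disparity: point effectively at infinity
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: f = 600 px, B = 70 mm.
# A 15 px disparity then corresponds to 2.8 m of depth.
z = depth_from_disparity(600.0, 0.07, 15.0)
```

Note how depth falls off as 1/disparity, so a small disparity error matters much more for distant objects than for near ones.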
The datasheet does not mention structured light, which leads me to assume this camera does not implement it. However, you could share the documentation you read with us and we could verify whether it also applies to the ZR300. I would also suggest taking a look at librealsense's source (https://github.com/IntelRealSense/librealsense/tree/master/src) and documentation (https://github.com/IntelRealSense/librealsense#documentation); they might give you further insight into how the camera works and how the data is handled.
Regarding the second part of your inquiry, which mentions that you are getting noisy clouds indoors, could you please tell us a little more about this? Perhaps you can share screenshots and any relevant information that might help us identify the issue.
Let us know.
I hope this information helps you,
* Thanks for the explanation and the detailed answer. Now I understand that the ZR300 uses the IR projector to assist the stereo depth calculation. I should have read the datasheet properly!
>The datasheet does not mention the structured light, which makes me assume this camera does not implement it. However, you could share the documentation you read with us and we could verify if it also applies to the ZR300.
* Sorry, I can't find the link right now. I found it on some other forum.
>Regarding the second part of your inquiry which mentions that you are getting noisy clouds indoors, could you please explain us a little bit more about this? Perhaps you can share screenshots and relevant information that might help us identify the issue.
* Here is a screen capture of the data I took in the basement of my building under fluorescent lighting. As you can see, the depth cloud is very sparse, the floor is not detected, and there are a lot of spurious detections/ghosts in the background:
* I came across these SDK design guidelines for the R200 camera: https://software.intel.com/sites/default/files/managed/d5/41/Intel%20RealSense%20SDK%20Design%20Guidelines%20R200%20v1_1…
Do you have a similar one for the ZR300? The instructions on pp. 9-13 seem general, and I will try playing with the gain/exposure settings to get better results. Do you have any suggestions on the settings I could tune for this particular scene in the video?
I see what you mean. I would suggest making sure that the room is as well lit as possible and that you are within the camera's detection range; remember that the camera's depth capture range is 0.55 m to 2.8 m. It is not clear from the video, but make sure you take this into consideration.
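As a small illustration of that advice, one could discard depth readings outside the rated working range before further processing. This is a sketch, not part of any SDK; the range constants come from the figures above, and the function name is just for illustration:

```python
# Keep only depth readings inside the camera's rated working range
# (0.55 m to 2.8 m, per the figures discussed above). Readings outside
# this band are unreliable and can show up as the sparse/ghost points
# described earlier in the thread.
MIN_RANGE_M, MAX_RANGE_M = 0.55, 2.8

def clip_to_working_range(depths_m):
    return [z for z in depths_m if MIN_RANGE_M <= z <= MAX_RANGE_M]

filtered = clip_to_working_range([0.3, 0.6, 1.5, 2.8, 4.0])  # keeps 0.6, 1.5, 2.8
```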
I am not aware of a similar document for the ZR300, but you can find gain/exposure details in the datasheet. You might also want to check librealsense's source and documentation to see if you can find useful information for your project.
You can check the examples in librealsense and compare them to your project to see if you are missing something, or you can base your project on one of the examples.
Another good source of information is this guide, which you might want to check out: https://software.intel.com/sites/products/realsense/intro/getting_started.html
I hope this information helps you.
Thanks for the detailed reply, I will look into the links you have provided.
Also, I was hoping the ZR300's depth range would be greater, since it is meant to be used for robotics applications like SLAM; 2.8 m seems like a very restrictive range for many applications. I will try some tuning and filtering algorithms to get a bit more range. Let's see.
Please let us know the outcome of the suggestions provided, any results will be appreciated by the community.
I understand what you mean regarding the camera's range. I'll pass your comments along to the corresponding development team. Thank you for your feedback.
Have a nice day.
Sorry for the late reply. I did not get a lot of time to tune the ZR300 parameters, but I did make some progress. For example, by increasing the "LR Gain" parameter (using ROS' dynamic reconfigure), I was getting better results for obstacles farther from the camera; I would say it was an improvement of about 1.5-2 meters in the operational range. I am also doing additional filtering using PCL, so I'm pretty satisfied with the results for now.
One strange thing I did notice is that in the raw point clouds I get from the camera, I do not get any points for the floor, maybe because my floor is texture-less. Anyway, this is favorable for me, as I previously had to remove the points corresponding to the floor from the point cloud using RANSAC. Now it's one less PCL filter to apply, so it's good.
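For readers following along, the floor-removal step mentioned above can be sketched in plain Python. This is a toy version of the RANSAC plane segmentation that PCL provides (its SACMODEL_PLANE mode), not the PCL API itself; threshold and iteration counts are illustrative:

```python
import random

# Toy RANSAC floor removal: repeatedly fit a plane to 3 random points,
# keep the plane with the most inliers, then drop those inliers (the
# presumed floor) from the cloud. PCL's SACSegmentation does the real
# version of this in C++.
def fit_plane(p1, p2, p3):
    # Plane normal = (p2 - p1) x (p3 - p1); plane equation: n . x + d = 0
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None  # degenerate (collinear) sample
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def remove_floor(points, threshold=0.02, iterations=200, seed=0):
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iterations):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    floor = set(map(tuple, best_inliers))
    return [p for p in points if tuple(p) not in floor]

# Example: a flat floor at z = 0 plus two obstacle points above it.
cloud = [[x * 0.1, y * 0.1, 0.0] for x in range(5) for y in range(5)]
cloud += [[0.5, 0.5, 1.0], [0.6, 0.5, 1.2]]
obstacles_only = remove_floor(cloud)  # the 25 floor points are removed
```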
I’m glad to hear that you are making some progress and that everything seems to be working favorably.
You may be right in your assumption about the floor being "texture-less"; the system may have trouble detecting surfaces with little texture.
Thank you for sharing your experience with the community, we really appreciate it.
Have a nice day.
I'm not sure it was made clear, but the IR projector is basically used to add "texture" to smooth surfaces. It's not structured light per se; the cameras are doing stereo, but stereo needs texture to work. Unfortunately, due to power limitations the projector has limited range. You CAN still get range data for things that are farther away as long as they (a) have texture and (b) are brightly illuminated IN IR. Take a look at the IR camera images to see. My guess is the floor is not well illuminated in IR and/or does not have enough texture. Another factor is that floors are hard to illuminate if they are shiny; naturally, the light from the projector will bounce "forward" in this case and not back to the camera. It's also possible for surfaces to be TOO bright in IR, so all texture is washed out. Take a look at the IR images and try to adjust the exposure settings (it sounds like you are already doing that) so that as much "texture" as possible is visible.
For very long distances there is another limitation: at some point, the spacing of the pixels in the cameras gets comparable to the image discrepancy. When you reach that limit the stereo algorithm simply can't recover depth information anymore, and close to the resolution limit the precision gets worse. You would need a larger baseline to get better data and/or a higher resolution sensor... but with a larger baseline, it gets harder to recover depth for things CLOSE to the camera, and a higher resolution camera would also mean less sensitivity and more processing (especially for short ranges). The baseline and resolution on the ZR300 was chosen in an attempt to hit a happy medium, but won't be perfect for everything.
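That quantization limit can be put in numbers: a one-pixel disparity error corresponds to a depth error of roughly z²/(f·B), so uncertainty grows with the square of the distance. The focal length and baseline below are illustrative placeholders, not the ZR300's real calibration values:

```python
# Depth sensitivity to disparity quantization: differentiating z = f*B/d
# with respect to d gives |dz| ~= z^2 / (f * B) per pixel of disparity error.
def depth_error_per_pixel(z_m, focal_px, baseline_m):
    return z_m * z_m / (focal_px * baseline_m)

# With placeholder values f = 600 px and B = 0.07 m:
# at 1 m, one pixel of disparity error is ~2.4 cm of depth error;
# at 3 m it is ~21 cm, i.e. nine times worse for three times the distance.
err_1m = depth_error_per_pixel(1.0, 600.0, 0.07)
err_3m = depth_error_per_pixel(3.0, 600.0, 0.07)
```

This quadratic growth is why a larger baseline or higher-resolution sensor extends useful range, at the cost of the near-field and processing trade-offs described above.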
Anyhow, the real answer to your question is "it's complicated."
Thanks for such a detailed response. I now have much more clarity on how the ZR300 calculates depth. Your answer made me curious about one other thing: can we control the ZR300's IR projector's power? I looked around and saw there is a parameter on SR300 cameras to set the projector power, but I didn't find one for the ZR300. I'm sure it would not be trivial to set the power because of eye-safety and laser power regulatory issues, but I would be glad to know if someone has "hacked" their way into controlling the projector power.