
Depth error

TTay1
Beginner
3,365 Views

Dear Intel,

For my application, I have a D435 mounted about 2 meters high pointing towards the ground at about a 45 degree angle. The application reads depth data (no RGB) from the D435 and converts it into a point cloud. The point cloud is visualized as in the image below. This image shows me with my hand about 0.5 meters above the ground.

This next image shows me with my hand slightly lower to the ground. Notice that the point cloud shows that my hand and the ground are connected even though they are not touching. In general, I've seen this issue in other scenarios where the depth visualization shows two objects connected even though they are not touching in reality.

What could be causing this problem? Does the D435 apply some kind of smoothing or regularization that could be causing two disjoint objects to appear connected?

0 Kudos
25 Replies
idata
Employee
1,224 Views

Hello Teeter,

 

 

Can you try the same experiment using the Depth Quality tool from the SDK 2.0? I tested your scenario and I saw a clear differentiation from my foot to the ground.

 

 

Regards,

 

Jesus

 

Intel Customer Support
0 Kudos
TTay1
Beginner
1,224 Views

Hello Jesus,

Are you referring to the rs-depth-quality tool? Doesn't that application only work when facing a flat surface like a wall? If the goal is simply to visualize depth, realsense-viewer can do the same thing. It is more difficult to see the problem with either tool because they colorize the point cloud with RGB or grayscale data, but I can demonstrate the problem with both of them as well.

Let me illustrate my setup with a diagram. In the diagram, the D435 points at the surface at about a 45 degree angle. You must transform the point cloud so that you have a good view of the area between the object and the surface, then move the object closer to or further from the surface. As the object approaches the surface, you will see spurious points appear between the object and the surface.

I've seen this problem in many cases. Below are two screenshots from realsense-viewer. As I pointed out earlier, the RGB colorization makes it hard to see the problem, so it may take a bit of imagination for you to see it. The first image shows me holding a magazine above a table. The second image shows me holding the magazine closer to the table and the red arrows show the spurious points between the magazine and the table. I suggest you download both images, and switch between them quickly to see the difference in the area that I pointed to.

It is easier to see this problem if the point cloud is plain gray. I didn't find any tool in the SDK that visualizes the point cloud in this way. If you provide me with such a tool, I'm happy to run it and show you the results.
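
(For what it's worth, if it would help, I could also dump an uncolored point cloud to a PLY file myself and inspect it in an external viewer. A rough sketch of what I have in mind, assuming the pyrealsense2 Python wrapper; the file name is just a placeholder:)

```python
# Rough sketch: capture one depth frame and export an untextured point cloud
# to PLY for inspection in an external viewer (e.g. MeshLab).
# Assumes pyrealsense2 is installed; "cloud.ply" is a placeholder name.
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # default depth stream, no post-processing applied

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()

    pc = rs.pointcloud()          # no texture is mapped, so the cloud stays plain
    points = pc.calculate(depth)
    points.export_to_ply("cloud.ply", depth)
finally:
    pipeline.stop()
```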

Let me know if there is any other information I can provide. Thanks.

0 Kudos
idata
Employee
1,224 Views

Hello Teeter,

 

 

I received some guidance from one of our developers.

 

 

Presets may help you. The ASIC in the D400 series can trade off between fill rate <-> Z-accuracy <-> XY-accuracy.

 

When you sacrifice XY-accuracy, the ASIC fills in missing depth from neighboring pixels.

 

 

The included presets have the potential to improve this situation. Also, whenever looking at depth quality in the Viewer or the Depth Quality Tool (DQT), it is important to disable post-processing. By default, when you use the SDK from code, the filters are disabled, but in the tools they are enabled.
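
(To make that concrete: from SDK code, nothing is applied to the depth unless you explicitly create and run the post-processing filters yourself. A minimal sketch, assuming the pyrealsense2 wrapper; the choice of filters here is only illustrative:)

```python
# Sketch: post-processing is opt-in when using the SDK from code.
# The raw depth frame is untouched unless filters like these are applied.
# Assumes pyrealsense2; the filters chosen here are only illustrative.
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

spatial = rs.spatial_filter()    # edge-preserving smoothing / hole filling
temporal = rs.temporal_filter()  # per-pixel smoothing over time

frames = pipeline.wait_for_frames()
raw_depth = frames.get_depth_frame()   # this is what you get by default

filtered = spatial.process(raw_depth)  # only now is any smoothing applied
filtered = temporal.process(filtered)

pipeline.stop()
```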

 

 

Experiment with the pre-set configurations and let us know if it helps.

 

 

Regards,

 

Jesus

 

Intel Customer Support

 

0 Kudos
TTay1
Beginner
1,224 Views

Hello Jesus,

Thanks for your reply. I've got several follow-up comments and questions.

1. I've played with the presets. The "High Accuracy" and "Hand" presets reduce the number of spurious points (though they do not completely eliminate them), but I also see more holes in areas where depth was previously interpolated correctly.

2. The presets change a large combination of controls in the Advanced Mode section. Can you explain what each control does? That would help me fine-tune these controls in an attempt to find a combination that works better for my application.

3. While running realsense-viewer, how do I disable/enable post processing? I don't see a control that is labeled "post processing".

4. My colleague has used the ZR300 with SDK 1.x. I've seen the point cloud his app creates with that hardware/SDK combination, and it does NOT exhibit this problem under the same scenario (camera pointed at 45 degrees to the surface and object). It would seem that this is a new issue with the D430/D435. Is there some mode or option I could set to better mimic the ZR300's behavior in this scenario?

Thanks.

0 Kudos
idata
Employee
1,224 Views

Hi,

 

 

1. You will see more holes with High Accuracy because that preset reports only very-high-confidence pixels.

 

2. We do not have documentation on the Advanced Mode controls. We have given the feedback to engineering that more documentation is needed.

 

3. The Post Processing option is at the bottom of the Depth controls area in the RealSense Viewer.

 

4. I will ask engineering for more direction.

 

 

Regards,

 

Jesus

 

Intel Customer Support

 

0 Kudos
TTay1
Beginner
1,224 Views

Hi Jesus,

Thanks for your reply. This is an important issue for our application, so please let me know when you have more information about how to improve the accuracy, especially with reference to how the ZR300 performed.

In the meantime, I've attached a screenshot of the Depth Control area. I'm not seeing an option that is called Post Processing. I may be missing something. Please point out what I should be adjusting to disable or enable post processing. Thanks!

0 Kudos
idata
Employee
1,224 Views

Hi, attached is a screenshot showing the Post Processing section. My apologies, it is at the bottom of the Stereo Module section.

Regards,

 

Jesus
0 Kudos
TTay1
Beginner
1,224 Views

Hello Jesus,

I was running v2.8.2, which I discovered doesn't have post processing. I upgraded to v2.9.1, which has the post processing options. Disabling these options produces a result similar to what I was seeing in v2.8.2, so the observations I made previously are still valid.

realsense-viewer v2.9.1 allows me to visualize the point cloud in 3D without RGB colorization. With post processing disabled, I set this option:

Depth Visualization > Color Scheme > White to Black

In 3D visualization mode, I rotated the viewpoint to produce a 3D display very similar to what my app produces. In this mode, it is easy to see the problem I reported.

Please let me know when engineering has any further information regarding how to reduce this error. Thanks.

0 Kudos
BHerl1
Beginner
1,224 Views

Not my intention to hijack your thread, but I have similar issues related to the density of the depth map and, in particular, how to influence it. See e.g. threads

Perhaps someone from Intel can summarize relevant information and threads on this topic and put it somewhere easily available? I have a feeling this topic is (and will be) relevant to many people.

Regards,

- Bjarne

0 Kudos
idata
Employee
1,224 Views

Hello Teeter,

 

 

Does the post-processing reduce the observed error? You can find more information on post-processing by viewing the header file at https://github.com/IntelRealSense/librealsense/blob/ba01147d65db16fdf4da36a3e718fe81c8421034/include/librealsense2/h/rs_processing.h.

 

 

You can also do a search for "post-processing" in https://github.com/IntelRealSense/librealsense and you will find where and how it is used.

 

 

If this helps, it is your best bet for improving the results.

 

 

Regards,

 

Jesus

 

Intel Customer Support
0 Kudos
TTay1
Beginner
1,224 Views

Hello Jesus,

Sorry, but the post processing does not help with this problem that I reported.

I'm hoping that your engineering team can compare the ZR300 vs the D435. They may be able to figure out why the ZR300 does not exhibit this particular problem while the D435 does.

Thanks!

0 Kudos
idata
Employee
1,224 Views

Teeter,

 

 

While I gather more info from engineering, you may want to read this new article on the pre-sets in the RealSense Viewer: https://github.com/IntelRealSense/librealsense/wiki/D400-Series-Visual-Presets
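
(If one of those presets ends up working well, it can also be applied from your own code rather than the Viewer, for example by loading a preset JSON exported from the Viewer through advanced mode. A rough sketch, assuming the pyrealsense2 wrapper; "preset.json" is just a placeholder file name:)

```python
# Sketch: load a Viewer-exported preset JSON through advanced mode.
# Assumes pyrealsense2 and a D400-series device; "preset.json" is a placeholder.
import pyrealsense2 as rs

ctx = rs.context()
device = ctx.query_devices()[0]
advanced = rs.rs400_advanced_mode(device)

if not advanced.is_enabled():
    # The device resets after this call, so it may need to be re-queried
    # before loading the JSON.
    advanced.toggle_advanced_mode(True)

with open("preset.json") as f:
    advanced.load_json(f.read())
```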

 

 

Regards,

 

Jesus
0 Kudos
TTay1
Beginner
1,224 Views

Hello Jesus,

Thanks for the article. It's helpful in understanding the differences between the presets.

Do you have information about what the "Vacuum Cleaner" and "Body Scan" presets do?

What does "point cloud spraying" in the article mean?

Thanks.

0 Kudos
idata
Employee
1,224 Views

Hello Teeter,

Point cloud spraying refers to the fuzziness that sometimes occurs around edges of objects. If you look at the pictures for the Default Preset, you will notice clean, sharp edges (especially around the mannequin).

There is a bug in Default Preset_435: an extra "," at the end of the last parameter causes an error when loading.

The Vacuum Cleaner preset is optimized for looking close to the floor.

Body Scan is optimized for doing 3D scans of a human body.

Does this help?

Regards,

 

Jesus

0 Kudos
TTay1
Beginner
1,224 Views

Hello Jesus,

Thanks for the explanation.

0 Kudos
TTay1
Beginner
1,224 Views

Hello Jesus,

If you recall, I mentioned earlier that my colleague uses the ZR300. He upgraded the cameras in his setup to D435s and is now observing depth errors like the ones I reported. Because his setup involves larger distances than mine, the errors he observes are also larger.

We are about to begin a major project with the D435s. The project is on hold while we wait to hear whether your team has any suggestions on how to reduce or fix these depth errors. We would much prefer to use the D435s we purchased for this project instead of reusing the ZR300s.

Please advise. Thanks.

0 Kudos
idata
Employee
1,224 Views

Hello Teeter, I received a full response from engineering (FYI - the R200 is the depth camera within the ZR300):

 

 

"The D430 is expected to have 20% more RMS depth error at 848x480 vs the R200. We recommend using this resolution. Reducing the resolution will degrade the depth. For this unit, increasing to 1280x720 will not improve the depth.

 

The FOV is 2x for the D430, the new processing sees more in both dark scenes and bright sunlight, and the minZ is >2x better. The D430 uses two 1MP monochrome global-shutter imagers, and is hence better for fast motion beyond 1 m/s.

 

 

The D415, by contrast, is designed to have RMS error that is 2x better than the R200, but of course in a smaller package and with a wider FOV. It should be operated at 1280x720 for best results. The D415 uses two 2MP RGB rolling-shutter imagers.

 

 

Tuning: As you say, users should experiment with the depth presets and exposure to get the best results, and these presets can have a big effect. Regarding improving presets: we provide a set of different presets, but allow users to tune many of the depth parameters under "advanced mode". These parameters are all interlinked, and we use machine learning to optimize the "weights" based on libraries of ground-truth images. There is no simple prescription we can offer users for tuning, but we do allow access to these parameters in the spirit of openness.

 

 

I hope this answers the questions."
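
(For reference, requesting the recommended 848x480 depth resolution from SDK code would look roughly like the sketch below, assuming the pyrealsense2 wrapper. The engineering note does not prescribe specific code, so treat this only as an illustration.)

```python
# Sketch: request the recommended 848x480 depth stream on a D430/D435
# and confirm what the device actually delivers. Assumes pyrealsense2.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
profile = pipeline.start(config)

stream = profile.get_stream(rs.stream.depth).as_video_stream_profile()
print("depth stream:", stream.width(), "x", stream.height())

pipeline.stop()
```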

 

 

Regards,

 

Jesus

 

Intel Customer Support
0 Kudos
TTay1
Beginner
1,224 Views

Hello Jesus,

Thanks for your reply.

The response from your engineering team seems to be that the D435 has features that make it better than the ZR300. I agree with that viewpoint and that's the reason we purchased D435s to use in our lab and in our production systems.

The problem I'm reporting is that the D435 seems to also introduce a new type of depth error that was not previously seen in the ZR300. We are seeing this type of depth error in many scenarios. If your engineering team has trouble reproducing it, let me know and I can show you how to use the RealSense SDK tools to reproduce this issue. I'm hoping that they will be able to investigate and fix this problem.

Thanks.

0 Kudos
idata
Employee
1,224 Views

Hi Teeter,

 

 

Please send us specific instructions and code for reproducing your issue.

 

 

Regards,

 

Jesus
0 Kudos
TTay1
Beginner
1,108 Views

Hello Jesus,

I've previously described how to reproduce the problem but I'll repeat it here.

Problem: when the D435 measures depth at an angle and two objects are close to each other, spurious depth measurements appear between the two objects. You must point the D435 at an angle to see this problem, and you can visualize the errors using realsense-viewer.

There are many ways to reproduce this problem. I will describe one scenario below.

1. Mount the D435 camera at about 2 meters high and point it downwards at about a 45 degree angle.

2. Run realsense-viewer and enable the stereo module. Do the following:

  • Disable post processing
  • Under Depth Visualization, set the Color Scheme to "White to Black".
  • Switch to 3D mode to visualize the point cloud. Note that the point cloud is initially viewed from the camera's position, which is about 2 meters above the floor, pointing down at about a 45 degree angle.
  • Using the mouse, change the viewing angle so that you are looking at the scene parallel to the floor.

Here is a screenshot I captured of my work area.

3. Now stand in front of the camera. Below is a screenshot of me standing about 1.5 meters in front of the camera. I drew red arrows to where the depth errors occur.

Left arrow: I am standing with my legs apart, but notice the gray points that appear in the area between my legs. Those are false depth measurements that should not be there. They disappear when I move my legs further apart, but as my legs get closer together, these spurious points appear to connect one leg to the other.

Right arrow: This area shows spurious points that connect my leg to the floor. There is nothing in that space, but the D435 is producing depth measurements that say there is.

If you set the preset to "High Accuracy", that helps a little, but it is still fairly obvious that there are many spurious points in empty areas.
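
(In case it helps your team reproduce this from SDK code rather than the Viewer, I believe the preset can also be applied programmatically. A rough sketch, assuming the pyrealsense2 wrapper; I have not verified that this matches the Viewer's "High Accuracy" behavior exactly:)

```python
# Sketch: apply the High Accuracy visual preset from code instead of the Viewer.
# Assumes pyrealsense2 and a D400-series depth sensor.
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

depth_sensor = profile.get_device().first_depth_sensor()
if depth_sensor.supports(rs.option.visual_preset):
    depth_sensor.set_option(rs.option.visual_preset,
                            int(rs.rs400_visual_preset.high_accuracy))

pipeline.stop()
```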

Is that enough information for you to see the problem in your lab?

0 Kudos