8 Replies Latest reply on Feb 21, 2018 6:24 PM by AndersGJ

# Depth resolution of the RealSense D400 at 0.2 meter

Hello,

What is the depth resolution of the D400 at a distance of 0.2 meter?

https://software.intel.com/en-us/realsense/d400

Thanks,  Yoni

• ###### 1. Re: Depth resolution of the RealSense D400 at 0.2 meter

The 400 Series cameras can sense depth at up to 1280x720 resolution.

• ###### 2. Re: Depth resolution of the RealSense D400 at 0.2 meter

Hi Marty,

How can I infer from these numbers (1280x720) the depth resolution sensed on a body placed at a distance of 0.2 meter from the camera?

Thank you,  Yoni.

• ###### 3. Re: Depth resolution of the RealSense D400 at 0.2 meter

The depth resolution will depend on the resolution that you have set.  According to the chart below, depth sensing is limited to a maximum of 30 frames per second (FPS) at the maximum 1280x720 resolution.  You can set the camera to use a particular resolution programmatically.
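As a hedged sketch of what "programmatically" means here: with the SDK 2.0 Python bindings (pyrealsense2), you request a depth stream at a given resolution and frame rate. Only the 1280x720 → 30 FPS cap comes from the chart above; the helper functions below are illustrative, not part of the SDK.

```python
def pick_depth_fps(width: int, height: int, requested_fps: int) -> int:
    """Clamp the frame rate to the 30 FPS cap that applies at 1280x720."""
    if (width, height) == (1280, 720):
        return min(requested_fps, 30)
    return requested_fps

def try_start_depth_stream(width: int, height: int, fps: int) -> bool:
    """Configure a depth stream; returns False if no SDK/camera is present."""
    try:
        import pyrealsense2 as rs  # real SDK 2.0 module; optional here
    except ImportError:
        return False
    config = rs.config()
    # Request a 16-bit depth (Z16) stream at the chosen mode.
    config.enable_stream(rs.stream.depth, width, height, rs.format.z16, fps)
    pipeline = rs.pipeline()
    try:
        pipeline.start(config)  # needs a connected D400-series camera
    except RuntimeError:
        return False
    pipeline.stop()
    return True

fps = pick_depth_fps(1280, 720, 60)  # capped to 30 at this resolution
started = try_start_depth_stream(1280, 720, fps)
print(f"requested 1280x720 @ {fps} FPS, stream started: {started}")
```

The import is guarded so the sketch degrades gracefully when no SDK or camera is attached.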

• ###### 4. Re: Depth resolution of the RealSense D400 at 0.2 meter

Also, do you have more specifications on the point cloud?

Thank you

• ###### 5. Re: Depth resolution of the RealSense D400 at 0.2 meter

The minimum depth scanning distance of the 400 series is 0.16 m for D415 and 0.11 m for D435.

The RealSense SDK 2.0 software used with the 400 Series cameras comes with a sample program called 'pointcloud':

librealsense/examples/pointcloud at master · IntelRealSense/librealsense · GitHub

I also found an additional sample program for the point cloud.

• ###### 6. Re: Depth resolution of the RealSense D400 at 0.2 meter

I think what he's asking is: what change in distance is the system able to distinguish?

In other words, say I have two cubes A and B, of the same size, next to each other, facing the camera, at the same distance (let's say 30cm).

If I start pushing cube B straight backwards away from the camera, how far do I have to move it for the system to register that cube B is farther away than cube A?

1mm? 10mm? 25mm?

Another way of looking at it is, what is the smallest change in Z that can be distinguished? Can it pick up wrinkles on a forehead? Or is the entire face including the tip of the nose considered to be the same distance?

I'm sure the answer is probably "it depends", so let's assume the configuration of camera, software, subject, subject distance from camera, etc, are all optimal for best possible performance on these matters.

The existence of 3D scanning software for RealSense devices suggests that it's capable of pretty good resolution for picking up details, but perhaps texture mapping is doing a lot of heavy lifting to make up for deficiencies.
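For a rough sense of that scale: in stereo depth cameras, the smallest distinguishable change in Z is commonly estimated as ΔZ ≈ Z² · Δd / (f · B), where Δd is the subpixel disparity precision, f the focal length in pixels, and B the stereo baseline. The numbers below (baseline, field of view, subpixel precision) are illustrative assumptions, not official D415 specifications; check the datasheet for real values.

```python
import math

# Illustrative assumptions, not official D415 specifications.
BASELINE_M = 0.055   # assumed stereo baseline (~55 mm)
HFOV_DEG = 65.0      # assumed horizontal field of view
WIDTH_PX = 1280      # depth stream width in pixels
SUBPIXEL = 0.08      # assumed subpixel disparity precision

def focal_px(width_px: float, hfov_deg: float) -> float:
    """Focal length in pixels from image width and horizontal FOV."""
    return (width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

def depth_step_m(z_m: float) -> float:
    """Approximate smallest distinguishable change in Z at distance z_m."""
    f = focal_px(WIDTH_PX, HFOV_DEG)
    return (z_m ** 2) * SUBPIXEL / (f * BASELINE_M)

for z in (0.2, 0.3, 0.5, 1.0):
    print(f"Z = {z:.1f} m -> depth step ~ {depth_step_m(z) * 1000:.2f} mm")
```

Under these assumptions the step is well under a millimeter at 0.2-0.3 m, and the error grows with the square of the distance, which is why close-range scans capture fine detail that disappears at a meter or two.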

• ###### 7. Re: Depth resolution of the RealSense D400 at 0.2 meter
This message was posted on behalf of Intel Corporation

Hello Jon_Hendry,

The Intel® RealSense™ D400 Series Datasheet contains tables on page 66 that describe the depth quality measurements for the cameras and the minimum-Z depth.

For more details, see the Intel® RealSense™ Depth Quality Testing White Paper.

I hope you find this information useful.

Best regards,

Josh B.
Intel Customer Support

• ###### 8. Re: Depth resolution of the RealSense D400 at 0.2 meter

There are two models, the D415 and the D435. Here are answers for both:

D415: Minimum distance is 44cm when operating at 1280x720. It is 22cm at 640x480, and 11cm at 320x240.

Assuming you put the camera into 640x480, at 22cm you will get an RMS depth error (plane fit ripple) of a few hundred microns.

However, to see this please do the following: 1. Under advanced mode, change the depth units from 1000 to 100. Each step in depth will then be interpreted as 0.1 mm rather than 1 mm. 2. Make sure the wall you are measuring is very flat and has nice texture.
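A short sketch of why that tweak matters: the SDK stores depth frames as 16-bit counts, and depth in meters is the raw count times the depth unit. The arithmetic below just encodes that scaling; the 1000 µm and 100 µm values are the ones from the tweak above.

```python
def depth_m(raw_count: int, depth_unit_um: float) -> float:
    """Convert a raw 16-bit depth count to meters at a given depth unit."""
    return raw_count * depth_unit_um * 1e-6

def max_range_m(depth_unit_um: float) -> float:
    """Largest depth representable in a uint16 frame at this unit."""
    return depth_m(0xFFFF, depth_unit_um)

# Default unit 1000 um: 1 mm steps, ~65.5 m max range.
# Finer unit 100 um: 0.1 mm steps, ~6.55 m max range.
for unit in (1000.0, 100.0):
    print(f"unit = {unit:.0f} um: step = {depth_m(1, unit) * 1000:.2f} mm, "
          f"max range = {max_range_m(unit):.2f} m")
```

The trade-off is visible in the numbers: ten times finer quantization costs ten times the maximum representable range, which is fine for close-range work like the 20-22 cm case discussed here.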

D435: Minimum distance is 17cm at 848x480. At 20cm the RMS error is around 200um, with the above-mentioned tweaks.

Also, please note that depth resolution does improve as distance is decreased, but only up to a limit, beyond which other artifacts start appearing. Namely, the lenses are currently focused from 50cm to infinity, so as you move closer the images will blur, degrading depth below what is theoretically achievable. Also, non-flat objects will have occlusion problems when you get this close. Basically, consider a finger moved close to your eyes: at some point your left eye sees the right side of the finger and your right eye sees the left side. At that point we cannot match the left and right images and cannot calculate depth.

I hope this helps.