If you are using Windows: the most recent RealSense SDK release, version 2.16.0, introduced a MATLAB wrapper. This wrapper contains point cloud support.
You can find the front page of the wrapper here:
If you are not using Windows, the discussion below looks at the process from the opposite direction (converting a point cloud to depth) and may provide some useful insights.
I have not used the MATLAB wrapper myself. If it works like the SDK's other wrappers, though, it should give MATLAB access to core SDK functions such as point cloud creation. The difficulty tends to be that, until example programs or script snippets for a particular function are published for a wrapper, it is hard to know how to write a script that produces the desired result. Such samples emerge over time, whether published by Intel or by members of the community. I understand that this does not help you today, though.
I did some further research. On the page linked below, there is a downloadable MATLAB function that converts a depth image to a point cloud. Scroll down to the heading 'Depth Image to Point Cloud (MATLAB)' and click the 'depthToCloud.m' link to start the download in your browser.
Fortunately, the approach you recommend is feasible, but as I said earlier, converting a depth image to a point cloud requires some camera parameters, such as:
I saw two focal lengths in the camera data, and I don't know which one to choose.
The other parameter is the physical dimensions of each pixel on the x and y axes of the image plane. That is the red part of the formula.
For clarity, here is the conversion formula:
Why must you use MATLAB? Here is the equivalent C++ code:
// Declare pointcloud object, for calculating pointclouds and texture mappings
rs2::pointcloud pc;
// We want the points object to be persistent so we can display the last cloud when a frame drops
rs2::points points;
// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();
auto frames = pipe.wait_for_frames();
auto depth = frames.get_depth_frame();
// Generate the pointcloud and texture mappings
points = pc.calculate(depth); // you get points without color
auto color = frames.get_color_frame();
// For cameras that don't have an RGB sensor, map the pointcloud to infrared instead of color
if (!color)
    color = frames.get_infrared_frame();
// Tell pointcloud object to map to this color frame
pc.map_to(color); // you get points with color
If you want to export the points, use export_to_ply() — just find it in the SDK — or you can write your own exporter.
Hope this helps.
The D415's focal length is 1.88 mm and the D435's focal length is 1.93 mm.
The dx/dy question is a harder one. It has been said to depend on the sensor. The D415's rolling shutter sensor has a pixel size of 1.4 µm × 1.4 µm, and the D435's global shutter sensor has a pixel size of 3 µm × 3 µm.
RealSense expert Samontab once gave the following explanation of pixel size:
A .ply file is a point cloud format that contains the information from a single frame.
If this is not what you are trying to accomplish, could you please give us more details in order to help you?
Thank you and best regards,
Thank you for the clarifications!
In this case you are interested in the camera's intrinsic parameters; you can read about them under Intrinsic Camera Parameters 2. Deprojection.
You might also find this GitHub thread helpful.
Please let us know if this information helps you with your project!
Thank you and best regards,