The discussion linked to below may be of use to you. It contains code references for getting pixel depth with just SDK 2.0, or by using SDK 2.0 in combination with the OpenCV vision software.
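For instance, with the SDK's Python wrapper, reading the depth value at a single pixel only takes a couple of lines (a minimal sketch; the pixel coordinates are just placeholders):

```python
import pyrealsense2 as rs

# Start a default pipeline, which includes the depth stream
pipeline = rs.pipeline()
pipeline.start()

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    if depth_frame:
        # Depth in metres at an example pixel (320, 240)
        dist = depth_frame.get_distance(320, 240)
        print("Distance at (320, 240): {:.3f} m".format(dist))
finally:
    pipeline.stop()
```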
Thanks for your answer MartyG.
Let me explain my issue in a better way.
I obtained a depth image, an RGB image and a point cloud (.pyd) from the Intel RealSense Viewer. Then I did a segmentation process in MATLAB, deleting some points from the original point cloud, but I still have a .pyd file. Now I need to convert this .pyd file (the processed point cloud) into a depth image, either using MATLAB or Intel's SDK. How can I do it?
I really appreciate your help.
Getting a .pyd file to load into MATLAB without crashing does not seem to be easy. The most useful references I found were:
It sounds as though it would be much easier to process the point cloud in the RealSense Viewer as much as you can, export it as a .ply, and import it into MATLAB. There are a couple of ply tools available for MATLAB.
I made a mistake in my last message: when I wrote .pyd, I really meant .ply :( (I obtained a depth image, RGB image and point cloud (.ply) from the Intel RealSense Viewer. Then I did a segmentation process in MATLAB, so I deleted some points from the original point cloud, but I still have a .ply file.)
I have already made some modifications to the original point cloud in MATLAB and saved it as another point cloud (.ply). Now I need to convert this .ply file (the processed point cloud) into a depth image, either using MATLAB or Intel's SDK.
Original data obtained from Intel's SDK Viewer:
a) Original depth image (I already have it in grayscale). b) Original point cloud
Processed point cloud, in MATLAB (it is also a .ply file). It is the same as the original, but I deleted some points:
c) I need to convert this one into a depth image.
This week, Intel highlighted a point cloud library for Python called Pyntcloud that can load ply files and perform a large number of different types of operation on them. It contains some example programs.
One of these example programs allows the visualization of the point cloud as a color map.
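For instance, loading a ply exported from the Viewer and viewing it with Pyntcloud is only a few lines (a minimal sketch; the filename is a placeholder):

```python
from pyntcloud import PyntCloud

# Load a ply exported from the RealSense Viewer (filename is a placeholder)
cloud = PyntCloud.from_file("processed_cloud.ply")

# Inspect the underlying pandas DataFrame of points
print(cloud.points.head())

# Interactive 3D plot (renders in a Jupyter notebook)
cloud.plot()
```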
But once I have the .ply in Python, I would have to use the function "rs2_project_point_to_pixel", which needs the intrinsic parameters.
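Something like the following is what I have in mind with the Python wrapper (a rough sketch only; I would still need to get an rs.intrinsics object for the depth stream from somewhere):

```python
import numpy as np
import pyrealsense2 as rs

def cloud_to_depth(points, intrin):
    """Project 3D points (in metres, camera coordinates) back into a depth image.

    points: (N, 3) array of x, y, z values read from the processed .ply
    intrin: rs.intrinsics describing the original depth stream
    """
    depth = np.zeros((intrin.height, intrin.width), dtype=np.float32)
    for x, y, z in points:
        if z <= 0:
            continue  # points at or behind the camera cannot be projected
        # rs2_project_point_to_pixel maps a 3D point to 2D pixel coordinates
        u, v = rs.rs2_project_point_to_pixel(intrin, [x, y, z])
        u, v = int(round(u)), int(round(v))
        if 0 <= u < intrin.width and 0 <= v < intrin.height:
            # If several points land on the same pixel, keep the nearest one
            if depth[v, u] == 0 or z < depth[v, u]:
                depth[v, u] = z
    return depth
```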
Is there a way to obtain these intrinsic parameters from the RealSense Viewer?
Are the intrinsic parameters always the same for the camera, or do they change according to the scene (like the maximum distance measured in the scene)?
Thank you for your time.
Apologies for the delay in responding; I was considering how best to answer your questions.
Yes, intrinsics can change, especially when calibrating the camera. The sample program 'Sensor Control' provides a pre-made means to interface with the camera's details.
The API How To page also provides scripting for getting field of view and video stream intrinsics.
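For instance, with the SDK's Python wrapper, the depth stream intrinsics can be read like this (a minimal sketch):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

# Query the intrinsics of the active depth stream
depth_stream = profile.get_stream(rs.stream.depth).as_video_stream_profile()
intrin = depth_stream.get_intrinsics()

print("Resolution: {}x{}".format(intrin.width, intrin.height))
print("Focal length: fx={:.2f}, fy={:.2f}".format(intrin.fx, intrin.fy))
print("Principal point: ppx={:.2f}, ppy={:.2f}".format(intrin.ppx, intrin.ppy))
print("Distortion model:", intrin.model)

pipeline.stop()
```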
You mentioned that intrinsics can change, especially when calibrating the camera, but can intrinsics change while acquiring data from the sensor?
Let's suppose I'm acquiring data from the sensor and I get 100 frames, and each frame has a point cloud and a depth image.
Is it possible that the camera used different intrinsic values while capturing the video, and therefore that each frame has different intrinsic values?
I am going to process each frame's point cloud independently, so I want to know whether it is necessary to obtain the intrinsic values for each frame while taking the video (this would be complicated).
I just want to be able to convert a processed point cloud to a depth image.
Unless the camera is absolutely motionless in position, there are very likely to be variations in the intrinsics. I have never heard of anyone having to manually update the intrinsics on every frame though. The RealSense SDK should automatically take care of those calculations as factors such as distance change.
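If you would like to reassure yourself about this, you could compare the intrinsics attached to each frame's stream profile over a capture. A quick check with the Python wrapper might look like this (a sketch only):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

try:
    seen = set()
    for _ in range(100):
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Each frame carries the stream profile it was captured with
        intrin = depth.profile.as_video_stream_profile().get_intrinsics()
        seen.add((intrin.fx, intrin.fy, intrin.ppx, intrin.ppy))
    print("Distinct intrinsics seen over 100 frames:", len(seen))
finally:
    pipeline.stop()
```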