The subject of getting the depth value from a .raw file has been discussed on the RealSense GitHub.
https://github.com/IntelRealSense/librealsense/issues/2231
There is a script below for getting XYZ measurements using the instruction rs2_deproject_pixel_to_point (which converts a 2D pixel coordinate plus depth into a 3D world coordinate), but I am not sure how to apply it to extracting coordinates from a .raw file.
https://github.com/IntelRealSense/librealsense/issues/1413#issuecomment-375265310
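As a sketch of how the depth values might be read directly from such a file, assuming the .raw dump is the camera's native 16-bit little-endian depth format and the capture resolution and depth scale are known (the defaults below are assumptions to match against your capture settings):

```python
import numpy as np

def load_raw_depth(path, width, height):
    # Read a raw 16-bit little-endian depth dump into a (height, width) array.
    # The resolution must match the one used when the frame was saved.
    depth = np.fromfile(path, dtype="<u2")
    return depth.reshape(height, width)

def depth_at(depth_image, x, y, depth_scale=0.001):
    # Convert the raw unit at pixel (x, y) to metres.
    # 0.001 m per unit is the D400-series default scale; check your device.
    return float(depth_image[y, x]) * depth_scale
```

A raw value of 0 means the camera had no valid depth reading at that pixel, so those points should be skipped when measuring.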
OK, I will try!
Another question: can I use the librealsense SDK 2.0 with a ".raw" file, for example
align_to = rs.stream.color?
This link describes how to align saved files:
https://github.com/IntelRealSense/librealsense/issues/1274#issuecomment-370226116
I've read your reply, but the methods that #1274 mentioned are:
1. rs2::software_device
2. rs2_deproject_pixel_to_point
3. using a .bag file
Methods 1 and 2 work with streams. Can I use a stream to process the saved ".raw" file?
I didn't save point cloud data, so I don't have a ".bag" file.
Could you please explain more about what you are trying to achieve? Thanks!
I'm sorry about my poor description!
I want to measure a soybean plant's height using the depth image.
The images were taken from above the soybean plant.
First, I need to find the highest point and a point on the ground.
But I only have the distances between each point and the camera.
I don't know how to find the angle from one point to the camera and then to the other point.
If that works, I can use vectors to calculate the height.
It would be even better if I could get the real-world (x, y, z) coordinates.
P.S. I only have the ".raw" file, a color ".png" and a depth ".png".
Thanks a lot!
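Once the top-of-plant pixel and a ground pixel have been converted to real-world (x, y, z) points, the vector calculation described above is short. A minimal sketch, assuming the ground normal is known (with the camera looking straight down it is simply the camera's z-axis, which is the default here):

```python
import numpy as np

def plant_height(top_xyz, ground_xyz, ground_normal=(0.0, 0.0, 1.0)):
    # Height is the length of the projection of (top - ground) onto the
    # unit ground normal. With the camera pointing straight down, this
    # reduces to the difference of the two depth values.
    n = np.asarray(ground_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.asarray(top_xyz, dtype=float) - np.asarray(ground_xyz, dtype=float)
    return abs(float(np.dot(d, n)))
```

If the camera was tilted, the ground normal can instead be estimated by fitting a plane to several deprojected ground points.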
May I first ask whether creating image files is a required part of your project, or whether you do not have access to a camera and have to do the calculations from image files only? I ask because there are already existing measuring programs for the 400 Series cameras.
Measure (C++)
https://github.com/IntelRealSense/librealsense/tree/master/examples/measure
Box Dimensioner (Python)
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam
I already have access to a camera, but I have to do the calculations using image files only. The soybeans are almost ripe, so I no longer have the right time to make new recordings.
It would probably be quite easy if you could have recordings provided to you in the .bag format, because the SDK's sample programs can run from a bag file recording instead of the camera if you make a simple three-line change to the program.
https://github.com/IntelRealSense/librealsense/blob/master/doc/sample-data.md
But am I right in thinking that it is too late to record a bag and you now have only the images to work with?
I spent a few hours trying to work out a process that will work for you. I believe the basic steps are:
STEP ONE
Get your color and depth file images aligned.
STEP TWO
Convert the aligned image into a point cloud using the instruction rs2_deproject_pixel_to_point.
STEP THREE
RealSense's SDK Manager, Dorodnic, has suggested using the SDK's point cloud example program to get the XYZ real-world coordinates, as "the output of pointcloud object is a collection of {x,y,z} coordinates in the real world".
https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud
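Since only image files are available, the deprojection math in step two can also be applied by hand if the camera intrinsics are known (they can be read with the SDK's rs-enumerate-devices tool, for example). The sketch below reproduces the pinhole back-projection that rs2_deproject_pixel_to_point performs for the no-distortion case; fx, fy, ppx and ppy are placeholders to be replaced with the real calibration values:

```python
def deproject_pixel_to_point(x, y, depth_m, fx, fy, ppx, ppy):
    # Pinhole back-projection: map pixel (x, y) with depth depth_m (metres)
    # to a camera-space point (X, Y, Z). fx/fy are the focal lengths in
    # pixels and (ppx, ppy) is the principal point. Lens distortion is
    # ignored here, which the SDK's full routine does correct for.
    X = (x - ppx) * depth_m / fx
    Y = (y - ppy) * depth_m / fy
    return (X, Y, depth_m)
```

Applying this to the top-of-plant pixel and a ground pixel gives the two real-world points needed for the height calculation.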
I wish I could give more detail, but advanced RealSense programming is not one of my areas of expertise. If you get stuck, I would advise asking a question on the RealSense GitHub website. You can do this by going to the link below and clicking the 'New Issue' button.
https://github.com/IntelRealSense/librealsense/issues
Edit: a shortcut may be to use the program below with the ROS software to convert a collection of images into a bag file.
https://github.com/raulmur/BagFromImages
Thanks for your time! I will try following your guide!
Thanks!