10 Replies · Latest reply on Mar 12, 2018 4:44 AM by tk_eab

    Mapping from color to depth using the RealSense SDK 2.0

    tk_eab

      Hello,

       

      I have been using the RealSense SDK for Windows (i.e., the one with "pxcsensemanager.h"), but now I'm switching over to the SDK 2.0 (i.e., the one with "librealsense2/rs.hpp"). With the SDK for Windows, I use the "MapColorToDepth" function (Intel® RealSense™ SDK 2016 R2 Documentation) for mapping a single pixel on the color frame to its corresponding pixel on the depth frame. What is the equivalent function in the SDK 2.0?

       

      I know how to align a whole depth frame to a whole color frame with the SDK 2.0 but this method slows down FPS from 60 to 30 or less on my computer. So I just want to do the mapping for a single pixel.

       

      I also know how to manually transform a depth pixel to a world point and then to a color pixel using the camera intrinsics and extrinsics. But this page (Projection in RealSense SDK 2.0 · IntelRealSense/librealsense Wiki · GitHub) shows that different RealSense models use different distortion models, so I don't think the equations I use are applicable to all RealSense models (I use the SR300 and D435). This is why I'd like to use a built-in function that accommodates the differences among the RealSense models.

       

      Thanks.

        • 1. Re: Mapping from color to depth using the RealSense SDK 2.0
          MartyG

          If different distortion models are a problem for you, then perhaps a solution would be to set the camera to use the 'none' setting for distortion.

           

          From the Projection page (which you have seen):

           

          None

          An image has no distortion, as though produced by an idealized pinhole camera.  This is typically the result of some hardware or software algorithm undistorting an image produced by a physical imager, but may simply indicate that the image was derived from some other image or images which were already undistorted. Images with no distortion have closed-form formulas for both projection and deprojection, and can be used with both rs2_project_point_to_pixel(...) and rs2_deproject_pixel_to_point(...).
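
          As a side note, you can check at runtime which distortion model a given stream actually reports. A minimal sketch, assuming an active rs2::pipeline named "pipe" (the variable names here are just placeholders):

          // Query the depth stream's intrinsics and inspect its distortion model
          auto depth_profile = pipe.get_active_profile().get_stream(RS2_STREAM_DEPTH).as<rs2::video_stream_profile>();
          rs2_intrinsics intrin = depth_profile.get_intrinsics();
          if (intrin.model == RS2_DISTORTION_NONE)
          {
              // closed-form projection/deprojection applies to this stream
          }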

          • 2. Re: Mapping from color to depth using the RealSense SDK 2.0
            tk_eab

            MartyG

             

            Thanks for replying to this question as well. As you suggested, I will try the "none" method and see what happens.

             

            By the way, although I have a basic understanding of how mapping from the depth frame to the color frame is accomplished, I don't understand how the "MapColorToDepth" function (Intel® RealSense™ SDK 2016 R2 Documentation) in the previous SDK manages to map a single pixel from the color frame to the depth frame so quickly. It seems to me that all depth-frame pixels need to be projected onto the color frame first, and then the depth-frame pixel corresponding to a particular color-frame pixel is picked out. But that would be the same as aligning a whole depth frame to a whole color frame, which reduces FPS. I'd like to maintain around 60 FPS during the mapping. Is there any way to take a look at what's actually going on inside the "MapColorToDepth" function? I looked for it and couldn't find it.

             

            Thanks.

            • 4. Re: Mapping from color to depth using the RealSense SDK 2.0
              jb455

              If you look at the source for the Align procedure (librealsense/align.cpp at master · IntelRealSense/librealsense · GitHub), you can see how it does it. Essentially, you'd want to copy align_images but modify it so you can pass a single point instead of looping over the whole frame. Though that method starts from depth points and maps to colour, so you'd need to do the reverse if you're starting from a colour point. The project/transform/deproject methods it uses (source available here: librealsense/rsutil.h at master · IntelRealSense/librealsense · GitHub) deal with distortion, so you won't need to worry about that.

              • 5. Re: Mapping from color to depth using the RealSense SDK 2.0
                tk_eab

                Thanks MartyG. I took a look. I may ask further questions.

                • 6. Re: Mapping from color to depth using the RealSense SDK 2.0
                  tk_eab

                  jb455

                   

                  Thanks for replying to my question. Actually, I have tried to do the reverse for a single pixel before, but I couldn't manage it by myself. So please help me with that.

                   

                  I understand how the depth-to-color mapping is accomplished (a rough code sketch follows these steps):

                   

                  #1) With a depth-frame pixel as a starting point, I specify its x and y in pixel coordinates. Then I get the depth in meters at (x, y).

                  #2) Using the x, y, and depth along with depth camera intrinsics, I can get a point in the 3D world coordinate for the depth-frame pixel.

                      --> This should correspond to "rs2_deproject_pixel_to_point".

                  #3) Using camera extrinsics, I transform the point for the depth-frame pixel to the corresponding point for the color-frame pixel.

                      --> This should correspond to "rs2_transform_point_to_point".

                  #4) Using color camera intrinsics, I transform the latter point to the color-frame pixel.

                      --> This should correspond to "rs2_project_point_to_pixel".

                   

                  Now I have a problem in the color-to-depth mapping: I want to transform a color-frame pixel to a point in 3D world coordinates, but, unlike a depth-frame pixel, the color-frame pixel has no depth value. Thus, I cannot take steps similar to #2-4 above.

                   

                  How can I do the reverse of align_images?

                   

                  Thanks.

                  • 7. Re: Mapping from color to depth using the RealSense SDK 2.0
                    jb455

                    Ah right, I see what you mean. You need the depth value to be able to do the mapping, but you need the mapping to get the depth value.

                     

                    I'm not sure how you'd do this without using align then (I'm not an expert, only started looking at stuff like this a month or so ago).

                     

                    Actually, maybe this thread would be of use to you: How to project points to pixel? Librealsense SDK 2.0. But then, if you're generating the pointcloud and its UV map, you may get the same performance problems that you've had with align.
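
                    To illustrate the idea from that thread, here's a rough, untested sketch: texture-map the point cloud to the colour stream, then search the UV map for the depth point whose texture coordinate is nearest your target colour pixel. color_frame, depth_frame, target_x/target_y and the 640x480 resolution are placeholders (you'd also need <limits> in addition to the librealsense headers):

                    rs2::pointcloud pc;
                    pc.map_to(color_frame);                          // texture the cloud with the colour stream
                    rs2::points points = pc.calculate(depth_frame);  // one 3D point per depth pixel
                    const rs2::texture_coordinate* uv = points.get_texture_coordinates();
                    const rs2::vertex* verts = points.get_vertices();

                    // target colour pixel, normalised to [0, 1] texture coordinates
                    float u = (float)target_x / 640.0f;
                    float v = (float)target_y / 480.0f;

                    // brute-force search for the depth point that maps closest to (u, v)
                    size_t best = 0;
                    float best_dist = std::numeric_limits<float>::max();
                    for (size_t i = 0; i < points.size(); ++i)
                    {
                        float du = uv[i].u - u, dv = uv[i].v - v;
                        float d = du * du + dv * dv;
                        if (d < best_dist) { best_dist = d; best = i; }
                    }
                    // verts[best] is then the 3D point (and hence depth pixel) nearest the target colour pixel

                    The per-lookup scan over every depth point is part of why this can end up no faster than align.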

                     

                    You could also try compiling the library with OpenMP turned off. This reduces the CPU usage when streaming so you may get a better framerate.

                    • 8. Re: Mapping from color to depth using the RealSense SDK 2.0
                      tk_eab

                      jb455

                       

                      Yes, that's exactly the problem I encountered when trying the color-to-depth mapping. But there is a hint for accomplishing it without reducing FPS much. In the old version of the SDK, the color-to-depth mapping (MapColorToDepth) function (Intel® RealSense™ SDK 2016 R2 Documentation) requires a depth frame in the PXCImage format, unlike the depth-to-color mapping (MapDepthToColor) function (Intel® RealSense™ SDK 2016 R2 Documentation). This suggests that the MapColorToDepth function relies on the align method and then somehow achieves the mapping for a single pixel (but not for many pixels) in a quick way. This is why I want to see what's going on inside the MapColorToDepth function. Is that possible? In C++, I could reach the header file where the function is declared, but not the actual code for the function.

                       

                      This is the first time I've heard the term "OpenMP", but it seems to correspond to "#pragma omp parallel for..." in align.cpp. If so, do I just need to omit that part of the code to turn off OpenMP?

                       

                      Thanks.

                      • 9. Re: Mapping from color to depth using the RealSense SDK 2.0
                        jb455

                        Unfortunately the source for the old SDK was never shared so we can't see how any of it worked. You could try asking on GitHub (Issues · IntelRealSense/librealsense · GitHub), a few of the RealSense developers answer questions on there so maybe one of them will know.

                         

                        To build with OpenMP off, you need to:

                        1. Clone the librealsense source locally
                        2. Install CMake
                        3. Point CMake at the librealsense source folder
                        4. Click Configure. Make sure you choose the correct generator for the platform you want to build for (e.g., "Visual Studio 2017" for x86, "Visual Studio 2017 Win64" for x64)
                        5. Untick "BUILD_WITH_OPENMP"
                        6. Click Generate, then Open Project
                        7. In Visual Studio, press ctrl+shift+b to build the library

                        Then you can sub in this dll for the one you're currently using - all usage will be the same (in terms of code), but you should see a difference in CPU utilisation while running.

                        • 10. Re: Mapping from color to depth using the RealSense SDK 2.0
                          tk_eab

                          jb455

                           

                          Thanks for telling me the steps for turning off OpenMP. I will do that.

                           

                          By the way, I've just figured out how to do something equivalent to the MapColorToDepth function in the old SDK. Having tested my programs many times, I noticed that the reduction in FPS results from two factors: 1) aligning images; and 2) using a point cloud to get 3D coordinates of each pixel. What I actually needed was the 3D coordinate for a single pixel and so I modified the second part.

                           

                          I will leave a note here for those who have the same issue as mine (Just like the MapColorToDepth function, this method only works for a few pixels without reducing FPS).

                           

                          Prep)

                          #include <librealsense2/rs.hpp>

                          #include <librealsense2/rsutil.h>

                           

                          rs2::config cfg;

                          rs2::pipeline pipe;

                           

                          cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 60);

                          cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 60);

                           

                          rs2::pipeline_profile prf = pipe.start(cfg);

                          auto stream = prf.get_stream(RS2_STREAM_COLOR).as<rs2::video_stream_profile>(); // use the color intrinsics, since the depth frame gets aligned to the color frame below

                          struct rs2_intrinsics intrin = stream.get_intrinsics();

                           

                          Step 1) Align a whole depth frame to the corresponding color frame:

                          rs2::frameset frames = pipe.wait_for_frames();

                           

                          rs2::align align(rs2_stream::RS2_STREAM_COLOR);

                          rs2::frameset aligned_frame = align.process(frames);

                          rs2::frame color_frame = frames.get_color_frame();

                          rs2::frame depth_frame = aligned_frame.get_depth_frame();

                           

                          Step 2) Transform a pixel on the (color-aligned) depth frame to a point in 3D coordinates

                          rs2::depth_frame df = depth_frame.as<rs2::depth_frame>();

                           

                          float d_pt[3] = { 0 };

                          float d_px[2] = { (float)x, (float)y }; // where x and y are the 2D coordinates of a pixel on the (color-aligned) depth frame

                          float depth = df.get_distance(x, y);

                           

                          rs2_deproject_pixel_to_point(d_pt, &intrin, d_px, depth);

                           

                          // d_pt[0], d_pt[1], and d_pt[2] are X, Y, and Z in 3D coordinates, in meters

                           

                          Note: There may be some typos