1 Reply Latest reply on Sep 7, 2018 5:52 AM by Intel Corporation

    Relationship between pointcloud data indices and aligned frame indices




      I am working on a RealSense D435 multicamera setup where I render only those 3D points which are close enough to their camera.


      My current workflow for each camera is:

           Calculate the calibration rotation matrix and translation vector from a circle pattern using OpenCV

           Apply some post-filters to reduce aliasing

           Run the align process based on the example

           Calculate the pointcloud
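
      For the calibration step above, applying the estimated rotation matrix and translation vector to a 3D point is a plain affine transform. This is a minimal sketch with my own type aliases and function name, not code from the actual setup:

```cpp
#include <array>

using Vec3 = std::array<float, 3>;
using Mat3 = std::array<std::array<float, 3>, 3>;

// Transform a point from the camera frame into the common world frame:
// p_world = R * p_cam + t
Vec3 apply_extrinsics(const Mat3& R, const Vec3& t, const Vec3& p)
{
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        out[i] = R[i][0] * p[0] + R[i][1] * p[1] + R[i][2] * p[2] + t[i];
    return out;
}
```

      With an identity rotation and t = {1, 0, 0}, the point {0, 0, 2} maps to {1, 0, 2}.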


      I had an issue with using post-filtering before align, but I got the answer in another topic. Currently I have a processing_block that does the post-filtering, the align, and the pointcloud calculation.

      I also noticed that when I use the aligned video_frame and the aligned depth_frame to generate the pointcloud, the result looks really strange, so instead I use the post-filtered depth_frame to compute the pointcloud and the aligned video_frame for the texture mapping. I don't understand that issue either, but my main problem is that I don't know how to properly get, during align, the indices of those points which are close enough to the related camera. I noticed that the decimation filter changes the resolution, but I still haven't figured it out.
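
      As far as I understand, rs2::pointcloud::calculate() returns one vertex per depth pixel, so the vertex for pixel (x, y) of the depth frame sits at linear index y * depth_width + x. A distance cull can therefore collect indices directly in the depth frame's own coordinates. Here is a minimal sketch with a synthetic depth buffer; the sizes, threshold, and function name are placeholders of mine:

```cpp
#include <vector>

// Collect the linear vertex indices of depth pixels closer than max_dist.
// Depth values are in meters here for simplicity; a real D435 frame stores
// uint16_t units that must first be multiplied by the depth scale.
std::vector<int> close_point_indices(const std::vector<float>& depth,
                                     int width, int height, float max_dist)
{
    std::vector<int> indices;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            float d = depth[y * width + x];
            if (d > 0.0f && d < max_dist)           // 0 = invalid pixel
                indices.push_back(y * width + x);   // same index as the vertex
        }
    return indices;
}
```

      For a 2x2 depth buffer {0.5, 3.0, 0.0, 1.0} and a 2 m threshold this keeps indices 0 and 3.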


      Here is my code for the align_process:


      rs2::processing_block align_processing(
          [&](rs2::frameset data,        // Input frameset (from the pipeline)
              rs2::frame_source& source) // Frame pool that can allocate new frames
      {
          std::vector<int> point_indicies;
          auto frameset_data = align_frameset(profile, data, point_indicies,
                                              (*filter_config_ptr)->get_process_option("Depth_Clip"));

          // The first element is the aligned video_frame, the second the post-filtered depth_frame
          rs2::frameset aligned_frameset = source.allocate_composite_frame({ frameset_data.first, data.get_depth_frame() });

          auto depth_frame = aligned_frameset.get_depth_frame();
          auto rgb_frame = aligned_frameset.get_color_frame();

          auto points = pc.calculate(depth_frame);

          output_d.points = points;           // rs2::points
          output_d.pois = point_indicies;     // Only these points will be drawn
          output_d.rgb_frame = rgb_frame;
          if (calibration_m.valid) output_d.calibration_m = calibration_m;
      });






      In the remove_background() function I store the indices like this:

           point_indicies.push_back((y / scale) * (width / scale) + (x / scale));

      where y runs over the rows, x over the columns, and scale = aligned_depth_frame.get_width() / depth_frame.get_width()
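
      One thing worth checking in the formula above: the decimated depth frame's width is not always exactly width / scale (the decimation filter can adjust the output size), and the integer division in scale itself can round, so every subsequent row of indices drifts. A safer mapping scales each coordinate against the actual frame dimensions. This is a minimal sketch under that assumption; the function name is mine:

```cpp
// Map a pixel (x, y) of the aligned full-resolution frame to the linear
// index of the corresponding pixel in the (possibly decimated) depth frame,
// using the real frame dimensions instead of a precomputed scale factor.
int depth_index_for_aligned_pixel(int x, int y,
                                  int aligned_w, int aligned_h,
                                  int depth_w, int depth_h)
{
    int dx = x * depth_w / aligned_w;   // integer division floors safely here
    int dy = y * depth_h / aligned_h;
    return dy * depth_w + dx;
}
```

      For example, with a 1280x720 aligned frame and a 640x360 decimated depth frame, the last aligned pixel (1279, 719) maps to index 230399, the last vertex of the pointcloud.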


      The result is:


      I marked with a red circle the area where it should not render the points.


      Can someone see where my problem is, or does anyone know a better way to get those indices from the pointcloud?


      Thanks for your support!