
    Extracts from the webinar "Hands-on with Multiple Intel RealSense D415 & D435 camera systems"

    MartyG

      Hi everyone,

       

      I attended the RealSense webinar session "Hands-on with Multiple Intel RealSense D415 & D435 camera systems" and took some notes, which I will share below.

       

      EXTRACTS OF SPOKEN ANSWERS TO QUESTIONS

       

      *  A new model of the D435 is planned, hopefully for release by the end of the year.  It is tentatively called the D435i (the name may change), though release of the unit is not guaranteed as plans can change.  If it is released, it will be identical to the D435 except for the addition of an IMU (which will be hardware-timestamped and synced), and D435 and D435i units will be able to be synced together.

       

      *  Only hardware depth sync is available on D435, not color sync.  If the D435i is released, it will not have color sync either as it is identical to D435 except for the IMU.

       

      *  Cabling is not absolutely required for syncing.  It is best used when micro-second precision in the syncing is required.  The parts for creating a cable, available from DigiKey, should only cost a few dollars, and making the cable is relatively simple.

       

      *  The more cameras that are added to a multi-camera setup, the higher the recommended PC specification (for example, an Intel i7 processor).  Ideally, each camera should be connected directly to a USB port rather than through a USB hub, so that each camera has its own USB controller.
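
      As a rough illustrative sketch (not code from the webinar), the pyrealsense2 snippet below opens each connected camera on its own pipeline, selected by serial number; the stream settings are example values only.

      import pyrealsense2 as rs

      ctx = rs.context()
      pipelines = []

      # Open each connected camera on its own pipeline, selected by serial number.
      for dev in ctx.query_devices():
          serial = dev.get_info(rs.camera_info.serial_number)
          cfg = rs.config()
          cfg.enable_device(serial)  # bind this config to one specific camera
          cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)  # example settings
          pipe = rs.pipeline(ctx)
          pipe.start(cfg)
          pipelines.append((serial, pipe))

      # Poll one frameset from every camera.
      for serial, pipe in pipelines:
          frames = pipe.wait_for_frames()
          print(serial, frames.get_depth_frame().get_frame_number())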

       

      *  Intel are continually enhancing the performance of the cameras through firmware and SDK updates.  The D435, for example, now performs better in outdoor sunlight due to the enabling of 'fractional exposure'.

       

      *  By using two cameras overlapping in parallel, you can get much more comprehensive depth data because there is more redundancy.

       

      *  There is no interference to RealSense cameras when they are used with non-Intel cameras, though the non-Intel cameras may experience interference.

       

      *  A new 'Modified Huffman' lossless compression will enable streaming at higher resolutions and frame rates over USB2, and more channels in multi-camera configurations.

       

      *  RealSense cameras can run indefinitely as long as the camera's temperature is kept within tolerances.  Problems with streaming ending prematurely are usually related to the PC and to problems with maintaining the USB connection.

       

      *  Intel are not planning on releasing their own SLAM solution, and strongly encourage other companies to provide a solution if they wish.   "We will be happy to post whatever you have on our website, and you can even sell it there in the future".

       

      *  The camera lenses cannot be changed without breaking the camera.  For large-volume orders, though, Intel are willing to work with companies on custom designs.

       

      *  When choosing a large heat-sink for the camera to aid long-term running, it is suggested to look at the maximum power specification of USB, as this gives the worst-case scenario for what a heat sink needs to be able to cope with.

       

      *  Using an external sync source requires very fine frequency resolution.  Instead, it is recommended to use one camera as the master.
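
      For reference, the master/slave role can be set from software through the SDK's inter-camera sync option.  The pyrealsense2 sketch below (a hedged example, not the webinar's code) designates the first detected camera as master and the rest as slaves, assuming the cameras are wired with the sync cable described in the multi-camera white paper.

      import pyrealsense2 as rs

      ctx = rs.context()

      # Mode 1 = master, mode 2 = slave, per the multi-camera white paper's convention.
      for i, dev in enumerate(ctx.query_devices()):
          sensor = dev.first_depth_sensor()
          if sensor.supports(rs.option.inter_cam_sync_mode):
              sensor.set_option(rs.option.inter_cam_sync_mode, 1 if i == 0 else 2)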

       

      *  MIPI may work if V-sync is closely matched with the RealSense camera, though the sync frequency window is very small.   It is therefore much easier if only RealSense cameras are used in hardware syncing.

       

      "If you are able to get a sync pulse from the MIPI interface camera, and its frame rate is sufficiently close to the expected frame rate for the RealSense camera, then you should be able to .. you can read the white paper for the requirements on how closely you have to match the frame rate of the sensor.  It's a limitation of the sensor ... primarily a limitation of the rolling shutter, and we may look at doing something else for the global shutter solution ... right now there is a limitation of those sensors that they can only have a very small window in which they will allow you to trigger externally"..

       

      *  When doing hardware syncing of RealSense cameras, it is recommended that D415s are synced with D415s, and D435s with D435s.  The two models can be mixed, but it increases the chance of instabilities.

       

      *  Regarding how to do multiple camera 3D point cloud alignment, calibration and software tools: Vicalib can be used.  "It uses a board that you show to each of the cameras in turn, and it establishes overlapping regions to then minimize the pose of each of those together".

       

      Link to Vicalib software: GitHub - arpg/vicalib: Visual-Inertial Calibration Tool

       

      *   Regarding aligning multiple point clouds together: "Vicalib can do this, but there is a simpler approach, which will work in 90% of cases.  This is to take the point cloud from every one of the cameras and then do an Affine Transform.  Basically, just rotate and move the point clouds in 3D space, and then once you've done that, you append the point clouds together and just have one large point cloud".
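
      A minimal numpy sketch of that "rotate and move, then append" step is shown below; the point arrays and 4x4 poses are placeholder data that would in practice come from the depth frames and from calibration.

      import numpy as np

      def transform_points(points, pose):
          """Apply a 4x4 rigid/affine transform to an (N, 3) array of points."""
          homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 4) homogeneous coords
          return (homog @ pose.T)[:, :3]

      # Placeholder data: one point cloud and one camera-to-world pose per camera.
      cam_points = [np.random.rand(1000, 3) for _ in range(2)]
      cam_poses = [np.eye(4) for _ in range(2)]

      # Transform each cloud into the common space, then append them into one large cloud.
      merged = np.vstack([transform_points(p, T) for p, T in zip(cam_points, cam_poses)])
      print(merged.shape)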

       

      TRANSCRIPT OF TEXT FAQ

       

      Is it possible to get microsecond sync of RGB and depth on the D435?

       

      Yes for the D415, but NO for the D435.  The D435's color camera is NOT HW synced.

       

      How do we do combined reconstruction from all cameras?

       

      You need to calibrate the cameras' extrinsic poses (we used Vicalib), and then perform a rigid transform of the point clouds, using the extrinsics, into the same space.

       

      How do I turn the cameras on / off individually from software?

       

      We would suggest that you look at the pipeline in librealsense; it should have open and close functionality.
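
      As a small hedged sketch of that suggestion (pyrealsense2; the serial number below is a placeholder), an individual camera can be switched on and off by starting and stopping its own pipeline:

      import pyrealsense2 as rs

      cfg = rs.config()
      cfg.enable_device("819312070XXX")  # placeholder: serial number of the camera to control
      pipe = rs.pipeline()

      pipe.start(cfg)                    # camera "on": begin streaming
      frames = pipe.wait_for_frames()
      pipe.stop()                        # camera "off": stop streaming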

       

      What time protocol does D435 use?

       

      We don't use a protocol per se.  The HW sync is a voltage pulse.  The timestamp you can query is for system time or HW time (recommended).
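
      As a small illustrative example (pyrealsense2), the per-frame timestamp and its domain (hardware clock vs. system time) can be read like this:

      import pyrealsense2 as rs

      pipe = rs.pipeline()
      pipe.start()

      depth = pipe.wait_for_frames().get_depth_frame()
      # get_timestamp() is in milliseconds; the domain reports whether the value comes
      # from the camera's hardware clock (recommended) or the host's system time.
      print(depth.get_timestamp(), depth.get_frame_timestamp_domain())

      pipe.stop()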

       

      Is there any example code of the skeleton tracking neural network?

       

      Not currently.

       

      Can simulation be done on Gazebo with multiple cameras?

       

      I believe so, if you are able to transform the cameras based on their extrinsics to reflect the real world.

       

      All of the D series cameras were advertised with camera synchronization.  When will the D435 have support for hardware sync?

       

      The D435 can HW sync depth, but NOT color.  This will not be supported in the future either, as it is a HW limitation. The D415 can sync color and depth.

       

      What kind of compression is used on the depth data?  LZW?

       

      Huffman

       

      We have multiple D415 cameras connected with hardware sync.  Each has a dedicated USB host controller.  A lot of the time, individual depth or color streams stop updating and don't return until the camera is restarted.  Could this be HW sync related or a problem with the USB bus?

       

      It could be ESD related to the HW sync cable, which is why we recommend the RC filter at the cable ends.

       

      So, what about the sync cable?  If I want to connect several cameras (more than 2) with sync, is it possible?

       

      We use a single master camera that is connected to all of the other slaves.

       

      Will you publish depth decompression source code in SDK?

       

      Yes.

       

      FURTHER INFORMATION

       

      *  At a future date, times-3 (x3) compression of data will be enabled.  This will allow more channels and more cameras to be supported.

       

      *  Regarding sync cable construction, there is an EMI/ESD bug where the units sometimes reset and the counters are reset.  Best practice is to place a 2.2 kOhm resistor / 22 nF capacitor near the slave end, and to use twisted-pair cable.

       

      *  Contact Intel if you are interested in a unit with a long baseline.  You can make contact through support.intel.com or through GitHub, and NDA customers can use their own dedicated contact channel.

       

       

      *  The webinar demonstrated markerless motion capture.

       

       

      The process:

       

      - Calibrate an inward-facing configuration of multiple cameras using the open-source Vicalib software, so that the extrinsic poses of each of the cameras can be obtained.

       

      -  Use a rigid transformation for each of the point clouds to align them in the same space.

       

      -  Run a 2D landmark detector on each of the cameras; the 2D detections are then triangulated into a single 3D estimate for each of the body parts in the captured sequences (a rough triangulation sketch follows this list).

       

      -  This effectively provides markerless motion capture that can be used with VR, providing depth and color information for multiple people.  It can also track their body parts and provide interaction at full frame rate.
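
      Below is a rough numpy sketch of the triangulation step referenced above.  It uses a standard linear (DLT) triangulation rather than the webinar's actual implementation, and the projection matrices and 2D detections are placeholder values that would come from calibration and from the landmark detector.

      import numpy as np

      def triangulate(proj_matrices, pixels):
          """Linear (DLT) triangulation of one landmark seen by several calibrated cameras.

          proj_matrices: list of 3x4 projection matrices, one per camera.
          pixels: list of (u, v) detections of the same landmark, one per camera.
          Returns the least-squares 3D position of the landmark.
          """
          rows = []
          for P, (u, v) in zip(proj_matrices, pixels):
              rows.append(u * P[2] - P[0])
              rows.append(v * P[2] - P[1])
          _, _, vt = np.linalg.svd(np.asarray(rows))
          X = vt[-1]
          return X[:3] / X[3]  # dehomogenize

      # Placeholder example: two cameras with identity intrinsics and a small baseline.
      P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
      P1 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
      print(triangulate([P0, P1], [(0.15, 0.02), (0.10, 0.02)]))  # approximately (0.3, 0.04, 2.0)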