
    RealSense Q&A by Brian Pruitt, RealSense Peripheral Segment manager, Intel

    MartyG

      Hi everyone,

       

      Brian Pruitt, RealSense Peripheral Segment manager at Intel, did a webinar session on Tuesday and answered questions put to him by attendees.  I've posted them below for the RealSense community.

       

      **********

       

      1.   Will you share slides later?

       

      I do not plan to; check out our website for all the info!

       

      2.  What is the difference between active and passive?

       

      Active uses an emitter to add texture to a scene, allowing the two imagers to differentiate features.  Passive does not have this, meaning the camera will have more trouble determining depth on backgrounds like a flat, same-colored wall.
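
      For readers curious about the principle behind this, here is a minimal sketch of generic stereo depth-from-disparity math (the numbers are illustrative assumptions, not D435 specifications, and this is not the D4 ASIC's internal algorithm):

```python
# Generic stereo depth-from-disparity math -- illustrative only.
# focal_px and baseline_m are assumed example values, not D435 specs.
focal_px = 640.0     # focal length in pixels
baseline_m = 0.050   # distance between the two imagers in metres

def depth_from_disparity(disparity_px: float) -> float:
    """Z = f * B / d: the smaller the disparity, the farther the point."""
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(16.0))  # ~2.0 m for a 16-pixel disparity
```

      Without texture (passive mode on a blank wall), the matcher cannot find a reliable disparity in the first place, which is why the projector helps.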

       

      3.  What is the z-error described here?  Depth error?

       

      Yes – Depth Error

       

      4.  Can multiple D435 sensors have their shutters synchronized?

       

      Yes – Check out our whitepaper on http://realsense.intel.com
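
      For the software side, below is a minimal sketch assuming the pyrealsense2 wrapper; the whitepaper covers the sync cabling, and the mode values follow the librealsense inter_cam_sync_mode option (1 = master, 2 = slave):

```python
# Minimal sketch, assuming pyrealsense2 and two or more D400 cameras attached.
import pyrealsense2 as rs

ctx = rs.context()
for i, dev in enumerate(ctx.query_devices()):
    depth_sensor = dev.first_depth_sensor()
    if depth_sensor.supports(rs.option.inter_cam_sync_mode):
        # First camera drives the sync signal; the others follow it.
        depth_sensor.set_option(rs.option.inter_cam_sync_mode, 1 if i == 0 else 2)
```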

       

      5.  Are you planning to support Windows 7?

       

      No – unless there is a really large unit commitment, in which case we would consider porting to Windows 7.  In short, we decided to use Microsoft's current release when we designed and built the camera.

       

      6.  How can I find out more about multi-sensor synchronization?

       

      Check out our whitepaper on http://realsense.intel.com

       

      7.  Is there a way to switch between presets for reconstruction through the SDK?

       

      Yes – check out the SDK – if we don’t say anything in the doc, please post in our community!
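
      As a concrete starting point, here is a minimal sketch assuming the pyrealsense2 wrapper; the preset names follow the librealsense rs400_visual_preset enum:

```python
# Minimal sketch, assuming pyrealsense2: switch the depth sensor between
# the built-in D400 visual presets (e.g. high accuracy vs. high density).
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
try:
    depth_sensor = profile.get_device().first_depth_sensor()
    if depth_sensor.supports(rs.option.visual_preset):
        depth_sensor.set_option(rs.option.visual_preset,
                                int(rs.rs400_visual_preset.high_accuracy))
finally:
    pipeline.stop()
```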

       

      8.  The R200 SDK included a rich feature set.  Is the previous SDK compatible with the D400 series?

       

      No – we focused more on depth across multiple OSes and wrappers.  This was in much higher demand than the other features.

       

      9.  You mentioned the possibility for 3rd-party software developers, like 3DiVi, to align with you.  How can we as a 3rd party approach Intel with our ideas & proposals?

       

      Please contact us via our website!

       

      10.  What's the price on the 30-packs?

       

      Check with your favorite distributor

       

      11.  Thanks!  Is the D4 processor based on the low power Movidius acquisition?

       

      No, the D4 VPU is a fixed-function processor, which allows for low power.  Movidius allows for different, programmable vision processing.

       

      12.  How soon can we get the D435 units?

       

      It's about a 9-10 week delay, but getting better!

       

      13.  Can you talk about the T260 tracking module?

       

      Check out our website.  I bet there will be a webinar in the future…

       

      14.  What is the price for the bundle?

       

      Check with your distributor

       

      15.  How does the D4 VPU stand up to other solutions?

       

      We think quite well for fixed purpose.  Check out the material we have online.

       

      16.  When will the skeletal tracking be available?

       

      Now – Check out 3diVi and Nuitrack.

       

      Edit by Marty: 3DiVi is a company that offers software called Nuitrack.

       

      Intel RealSense D415/D435 and Nuitrack skeletal tracking SDK replace Kinect SDK - YouTube

       

      17.  Does reconstruction use GPU processing?

       

      We provide depth and RGB information.  After it reaches the compute platform, it is up to the developer to determine what is done with it.
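
      For example, generating a point cloud from a depth frame is host-side work; here is a minimal sketch assuming pyrealsense2 (whether that work then moves to the GPU is the developer's choice):

```python
# Minimal sketch, assuming pyrealsense2: the camera supplies depth frames,
# and the point cloud (plus any reconstruction after it) is computed on the host.
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    pc = rs.pointcloud()
    points = pc.calculate(depth)   # host-side computation
    print("points in cloud:", points.size())
finally:
    pipeline.stop()
```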

       

      18.  Are there any object recognition databases available for developers?

       

      We provide depth.  Over time we hope the community and third parties will provide middleware like object recognition.

       

      19.  The camera keeps crashing with laptops (probably due to the USB port not providing enough power because of power-saving features).  Will/can this problem be resolved?

       

      Please submit a ticket on our website.  We do not see this issue as long as you are using USB 3.

       

      20.  What are the options to provide wireless communications with the D400 series cameras?

       

      There is no built-in wireless.  You could plug the camera into a compute board and then have the compute board send data to the cloud or wherever.
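
      As a rough sketch of that split (assuming pyrealsense2 on the compute board; the receiver address is a hypothetical placeholder, and a real system would add compression and framing):

```python
# Minimal sketch: grab a depth frame on the compute board and push it over TCP.
# RECEIVER_HOST/RECEIVER_PORT are hypothetical placeholders, not Intel endpoints.
import socket
import numpy as np
import pyrealsense2 as rs

RECEIVER_HOST, RECEIVER_PORT = "192.0.2.10", 5000   # placeholder address

pipeline = rs.pipeline()
pipeline.start()
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((RECEIVER_HOST, RECEIVER_PORT))
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Raw 16-bit depth buffer; a real pipeline would compress and add framing.
    sock.sendall(np.asanyarray(depth.get_data()).tobytes())
finally:
    sock.close()
    pipeline.stop()
```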

       

      21.  I am interested in point cloud processing, including finding the nearest NURBS and conical surfaces.

       

      Great – Check out our website and if you have a specific need not answered please post!

       

      22.  Are you planning to add a microphone array to the camera package, similar to the MS Kinect v2?

       

      No – microphones were used in <1% of the previous kits we made.

       

      23.  What are the possibilities for developers to obtain a camera for test purposes (e.g. as with AR glasses)?

       

      Yes – feel free to purchase from our website or any distributor.

       

      24.  Is the D435 camera limited to 90 fps?

       

      Yes.

       

      25.  What is the frame rate of your fastest module?

       

      90 FPS

       

      26.  For high-res head tracking the SR300 seems most suitable. At 1m distance, can the D400 series achieve comparable accuracy?

       

      Yes.

       

      27.  How fast is the reconstruction?

       

      Depends on the software doing the reconstruction and the SoC powering it.  Remember, we provide the data portion.  We do all the depth algorithm processing on the camera, so that part is done.

       

      28.  What is the accuracy when measuring object length (X, Y), assuming Z is distance?

       

      <2% error @ 4m.  Closer is more accurate, further is less.  It varies depending on conditions.
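
      To turn that into an (X, Y) length measurement, the usual approach is to deproject two pixels into 3D using the depth intrinsics; here is a minimal sketch assuming pyrealsense2 (the pixel coordinates are arbitrary examples):

```python
# Minimal sketch, assuming pyrealsense2: measure the metric distance between
# two image points by deprojecting pixel + depth into 3D camera coordinates.
import math
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    intrin = depth.profile.as_video_stream_profile().get_intrinsics()

    def to_3d(u, v):
        # (pixel, depth in metres) -> (X, Y, Z) in metres
        return rs.rs2_deproject_pixel_to_point(intrin, [u, v], depth.get_distance(u, v))

    p1, p2 = to_3d(300, 240), to_3d(340, 240)   # example pixels on the object edges
    print("object span: %.3f m" % math.dist(p1, p2))
finally:
    pipeline.stop()
```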

       

      29.  Will the LabVIEW adapter allow us to visualize the 3D point cloud similarly to Intel's platform, or is it an adaptor for bringing depth data into the environment?

       

      Please check out our site for more on LabVIEW!

       

      30.  Is access to these cameras limited to RGB in current web browsers?

       

      RGB and depth are provided via all the different OSes and wrappers.  See our website for details.

       

      31.  What are the output formats for pointcloud and images?

       

      Please check out our website for the different output formats.
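
      As one concrete example (a sketch assuming pyrealsense2): depth frames can be read as raw 16-bit arrays, and the SDK can write a textured point cloud straight to a .ply file:

```python
# Minimal sketch, assuming pyrealsense2: save a textured point cloud as PLY.
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()          # default config typically streams depth + colour
try:
    frames = pipeline.wait_for_frames()
    depth, color = frames.get_depth_frame(), frames.get_color_frame()
    pc = rs.pointcloud()
    pc.map_to(color)                       # attach RGB texture to the cloud
    points = pc.calculate(depth)
    points.export_to_ply("capture.ply", color)
finally:
    pipeline.stop()
```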

       

      32.  Is there any way to use any of the cameras with one (or several) USB 2.0 ports?

       

      Not today but maybe in the future… the near future.

       

      33.  Will it be available for Windows using Visual Studio C# WPF development?  Will there be C# WPF source code demo samples?

       

      Please monitor our GitHub for all samples and wrapper additions.

       

      34. How many camera modules can a single D4(m) support, and at what levels of performance?

       

      A D4 VPU handles one module.  See our website for performance data.  We will talk more about the D4m in the future.