2 Replies Latest reply on Aug 10, 2018 8:26 AM by MartyG

    Notes and Q&A from June 27 2018 RealSense webinar


      Hi everyone,


      On June 27 2018, well-known RealSense development leader Sergey "Dorodnic" Dorodnicov hosted an online webinar session titled "Intel RealSense Software Overview: Architecture, Strategy, and Roadmap".  Summary notes and a transcript of questions and answers from the session are posted in the comments below.


      If you missed the previous RealSense webinar back on 21 March 2018 with RealSense manager Brian Pruitt, a transcript of the question and answer session from that event is available at the link below:


      RealSense Q&A by Brian Pruitt, RealSense Peripheral Segment manager, Intel


      UPDATE 11/08/2018: The slides from the webinar are now available in a document.



        • 1. Re: TODAY: Free RealSense webinar June 27 with Sergey "Dorodnic" Dorodnicov

          July 03 2018: Updated with new information from the second webinar session


          Posted below are summary snippets of information from the two sessions of the webinar held on June 27.  The full transcript of the questions and answers, posted with permission of the webinar moderator, is in the comment below that.




          *  You can "try before you buy" with RealSense 400 Series cameras by downloading the RealSense Viewer and loading pre-made data samples into its test mode, so that you can explore the Viewer without first purchasing a D415 or D435 camera.


          librealsense/sample-data.md at master · IntelRealSense/librealsense · GitHub


          *  Intel want to encourage the scaling up of RealSense cameras - not just one camera, but tens of cameras.  They believe RealSense can be everywhere, and on many platforms, and that depth sensing is the future in many fields.


          *  Intel have produced new Python tutorials for learning how to use RealSense.  They are new enough that they are not on GitHub yet, though the webinar provided a Google Drive location for the Python tutorials in the meantime.  The tutorials cover stream alignment, depth, and object detection with Python.




          Producing tutorials to help new users get started is the direction Intel is taking with RealSense education, so that users and potential camera purchasers can learn about a subject such as using OpenCV with RealSense from a tutorial in their browser.
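          In the spirit of those tutorials, depth frames arrive as raw 16-bit depth units that must be scaled into metres before use.  A minimal pure-Python sketch of that step (no camera required; the 0.001 depth scale used here is the typical D400 default and is an assumption — real code queries it at runtime):

```python
# Convert raw 16-bit depth units into metres, as done after reading a
# depth frame. DEPTH_SCALE is the typical D400 default (an assumption);
# real code queries it via depth_sensor.get_depth_scale().
DEPTH_SCALE = 0.001  # metres per depth unit (assumed default)

def depth_units_to_meters(raw_units):
    """Scale raw depth readings into metres; a raw 0 means 'no data'."""
    return [u * DEPTH_SCALE if u != 0 else None for u in raw_units]

# A synthetic row of depth readings: 0 marks pixels with no valid depth.
row = [0, 500, 1000, 1500]
print(depth_units_to_meters(row))  # [None, 0.5, 1.0, 1.5]
```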


          *  Working to make Android more and more accessible with the 400 Series cameras.


          *  Looking into providing Java support later.


          *  Going in the direction of the camera being able to work without complex installation procedures.


          *  Aiming for regular SDK updates (around every two weeks) with bug-fixes, responding quickly to feedback where possible and introducing updates addressing more complex user-reported problems and needs later if possible.  Users can use the Issues page on Github to highlight problems they may be having, and this page is monitored regularly by the engineering and development teams.


          Issues · IntelRealSense/librealsense · GitHub


          If a posted question is not answered in a timely manner then it will be automatically escalated to Intel management, so that everyone should get an answer eventually.  "You can be sure that if you ask something, we will work on it".


          *  Another aspect of Intel's RealSense support is the Pull Requests page on Github.  The aim is to provide transparency so that when Intel is working on a feature, you can see the work being discussed and you can participate, give feedback and say whether it is a great idea or a terrible one.  In this way, users can be part of the process instead of just waiting for the next release.


          "We strongly appreciate and encourage community participation ... we wanted to make it very easy for people to take the library and modify it to their needs, and share it with more people, even if it doesn't line up with our vision of the product".


          If a customer writes a feature, documents it and submits it, then it is "a very strong tool for influencing the direction of the product".  Intel will almost always happily accept a contribution and try to maintain it from that point onward.


          Pull Requests · IntelRealSense/librealsense · GitHub


          *  Linux kernel 4.16 is aimed to be supported in 2018.  Intel are still patching some issues in librealsense to enable support for it.  Problems with kernel patching should be in the past within about a year and a half from now.


          *  Aiming to make the 400 Series cameras easier to use with platforms that do not support kernel patching, or whose patches do not work well, by bypassing the operating system and talking directly to the camera.  This bypass is enabled with the CMake flag -DFORCE_LIBUVC=true


          "When you enable this flag in CMake, what will happen is that the library will translate all the UVC commands into buffer and then send it to the camera via USB, therefore bypassing any specific streaming APIs.  It's a cool idea.  When it works, it just works, but we don't want to be entirely reliant on this, because we are essentially bypassing the kernel and the driver.  It's a bit messy, but if someone wants to try it ... you have this option and we encourage people to use it".
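          For reference, enabling the bypass is a one-line change when generating the build; a typical invocation (the out-of-source build directory layout is an assumption) looks like:

```shell
# From a librealsense checkout; the build directory name is an assumption.
mkdir build && cd build
cmake .. -DFORCE_LIBUVC=true
make -j4
```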


          *  Working to address support for motion tracking on the 400 Series.  This "will be solved in time".


          *  Possible to use R200 and 400 Series cameras together, as they are based on the same kind of technology, though the 400 Series is far more advanced.  Different versions of librealsense can also co-exist side by side.  Intel are not developing new features for the previous-generation cameras, though.


          *  During a discussion of the Open3D open-source point cloud library, the possibility of combining point clouds into a "mega point cloud" was mentioned.




          *  It is easy to install the Python wrapper from PyPI with $ pip install pyrealsense2 and then get something working in a small number of lines.
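          A minimal sketch of that "small number of lines", assuming pyrealsense2 is installed and a 400 Series camera is plugged in (the import is guarded so the sketch degrades gracefully without either):

```python
# Minimal pyrealsense2 quick-start: grab one frameset and sample the
# depth at the centre pixel. Guarded so it returns None when the
# library (or a camera) is unavailable.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None

def center_distance_meters():
    """Return the depth (in metres) at the frame centre, or None."""
    if rs is None:
        return None
    pipeline = rs.pipeline()
    pipeline.start()  # default config enables the depth stream
    try:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        return depth.get_distance(depth.get_width() // 2,
                                  depth.get_height() // 2)
    finally:
        pipeline.stop()

print(center_distance_meters())
```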



          *  Intel wants the SDK to be deployable to "very weak, limited hardware" as well as full PCs.  This goal is aided by the SDK having a very low footprint of around 100 MB, with a 5 MB binary footprint and little to no third-party installations required.


          *  The RealSense API is designed to be a single system that works for everything and is not fractured into different branches.  If you see a particular function in one of the RealSense sample programs, you should be able to replicate that function in your own custom-created library.


          "One of the hidden features that not many people know about is that in CMake, there is a specific flag that will force any transition into the library to be logged so all API calls will be logged, including input values, output, how much time it took ... this will obviously impact performance negatively, but it's a cool way to try to understand which APIs a certain demo is using".
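          The talk does not name the flag; librealsense's CMake options include TRACE_API, which matches this description, though you should verify the exact option name against your SDK version:

```shell
# Assumed flag name - verify against your librealsense version.
# Logs every API call with inputs, outputs and timing, at a
# performance cost.
cmake .. -DTRACE_API=true
```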



          *  "We continue to design and research new depth sensing technologies".


          *  "The sample program 'Software Device' can be used to create a fake RealSense device when it is run.  The example will teach you how to inject synthetic frames into this fake device, and then you can use the regular point cloud, texture mapping, all the regular APIs to work with it.  It can be useful to inject data from other sensors into our existing ecosystem.  We're also thinking about giving people tools to map RealSense to other sorts of input".
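          A sketch of that idea in Python, following the software-device pattern (the stream parameters and sensor name here are illustrative choices, and the import is guarded so the sketch is a no-op without pyrealsense2):

```python
# Sketch: create a fake ("software") RealSense device and declare a
# synthetic depth stream on it, after the 'Software Device' example.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None

def make_fake_depth_device(width=640, height=480, fps=30):
    """Return (device, sensor, profile) for a synthetic depth stream."""
    if rs is None:
        return None
    dev = rs.software_device()
    sensor = dev.add_sensor("Synthetic Depth")  # sensor name is arbitrary
    stream = rs.video_stream()
    stream.type = rs.stream.depth
    stream.uid = 0
    stream.width = width
    stream.height = height
    stream.fps = fps
    stream.bpp = 2            # Z16 depth: two bytes per pixel
    stream.fmt = rs.format.z16
    profile = sensor.add_video_stream(stream)
    # Synthetic frames are then injected with sensor.on_video_frame(...),
    # and the regular point-cloud / alignment APIs consume them as usual.
    return dev, sensor, profile

print(make_fake_depth_device() is None)
```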


          The full Q&A text is posted in the comment below.

          • 2. Re: TODAY: Free RealSense webinar June 27 with Sergey "Dorodnic" Dorodnicov

            Do you plan to add body tracking to the RealSense SDK any time soon?


            Hi - We plan to focus our SDK on providing depth across multiple OSes and wrappers.  We are working with 3rd-party providers for what we call middleware.  This middleware will work with our SDK.  For skeletal tracking check out http://www.nuitrack.com


            Where do I find SDK documentation?


            On our webpage https://realsense.intel.com/ and on GitHub: https://github.com/IntelRealSense/librealsense


            When is the T260 tracking module going to be available?


            Schedules are available to our NDA customers.  Watch for public announcements.


            You mentioned support for Android but that it was perhaps still in early development - do you have a time frame for 'robust' support for Android that would run on a 'standard' Android device without rooting it?


            Please refer to https://github.com/IntelRealSense/librealsense/blob/master/doc/android/Android.md for rooted devices.  As for additional Android support, we are working with Android 8.1 on integrating our Face Authentication RealSense APIs.  If you have additional requests, please file an issue via our community or GitHub links.


            Do you plan to support motion tracking using SLAM for D400 in Realsense SDK?


            Not at this time.  Please follow our webpage - https://realsense.intel.com/ - for updates


            I've seen some references to a L500 projector and a IVCAM 2.0 in some recent librealsense commits.  Is there going to be further information about this camera soon?


            Schedule and future products are available for NDA customers. Please check our webpage https://realsense.intel.com/ for more updates.


            Are there any plans to support OpenNI 2?


            Yes, please check our webpage https://realsense.intel.com/ and github for future examples and capabilities.


            You said you're working to improve support for Android - does that mean Android support doesn't quite exist yet?  Or it does currently?


            Hi, did you see my previous answer?  Please check github and search for Android on the main page.


            Please show a 3D scan using the D415.  Thanks


            We do not have scanning software.  We leave this to companies specializing in scanning.  We will announce some third parties we are working with next quarter.


            Can I hack a D41x device to get high resolution scanning in near field of view?  I meant to work in a depth range from 20 cm to 30cm with under mm resolution.  Is it possible?


            Please check the datasheet, available on our webpage https://realsense.intel.com/, for the supported resolutions and minZ.  It changes according to the resolution.


            What are the similar products available on the market?


            We do not discuss our competitors.  Feel free to google depth cameras and take a look.


            Where is the detailed step by step document of camera calibration tools?


            Please check our webpage https://realsense.intel.com/ under the calibration section: https://realsense.intel.com/intel-realsense-downloads/#cal


            When will the matlab wrapper be available ?


            Soon, in a couple of sprints.


            You mentioned that you support OSX.  Please could you send me the link to get the viewer for Mac?


            Please check the GitHub main page - https://github.com/IntelRealSense/librealsense search for MAC


            Can you please tell us if SLAM is in your product roadmap for D400?


            Not at this point.  The D400 family is a depth camera focused on providing depth.  If we add, say, an IMU, we will release a notification to those signed up on our website http://realsense.intel.com


            Does the firmware now support hardware sync with external sensors?  If not, is it in the works, and when can we expect it to be available?


            Yes, it is being worked on now and coming soon.  Stay tuned for updates


            Do you have any depth accuracy analysis for the camera?


            Yes, it is on the datasheet and in our whitepaper.  Check out http://realsense.intel.com


            Do you have any coupon codes if we want to purchase through intel right now?


            That would be awesome but not at this time.


            Seems like the bottleneck is the image processing.  Could framerate be increased with redundant ASICs? i.e. one camera, 4 processing chips?


            We do not support multiple ASICs.  The bottleneck is actually the throughput through the host.  Dedicated real-time depth processing: up to 36.6 MP/sec, multiple resolutions up to 1280x720, and frame rates up to 90 fps
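            As a back-of-the-envelope sanity check on those numbers (not an official spec): 36.6 MP/sec corresponds almost exactly to the 848x480 depth mode at 90 fps, while 1280x720 fits at roughly 40 fps within the same pixel budget.

```python
# Back-of-the-envelope check of the quoted depth-processing budget.
BUDGET_PIXELS_PER_SEC = 36.6e6  # "up to 36.6 MP/sec" from the answer

def max_fps(width, height):
    """Max frame rate the pixel budget allows at a given resolution."""
    return BUDGET_PIXELS_PER_SEC / (width * height)

print(round(max_fps(848, 480)))   # ~90 fps
print(round(max_fps(1280, 720)))  # ~40 fps
```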


            I am new to depth sensing technology and cameras.  I want to know about integrating OpenNI2 with Intel RealSense, and, if Intel is not doing this right now, how Intel can help users trying to integrate them.


            Yes we have OpenNI2 on the roadmap.  Keep watching github for the release!


            How frequently will the D435 need calibration when it is out on the field on a moving base?


            Our experience shows that the devices remain calibrated when in normal usage.  Usually they require calibration when they fall or go through something extraordinary.


            How can I get depth data from the D435 into ROS?


            Please check the ROS GitHub; there's a link from the main GitHub page https://github.com/IntelRealSense/librealsense


            If I use two D435s to see the same object (one to see the right side and the second one for the left side), could this configuration generate interference between them, especially with the IR light?


            No issues.  Use as many as you want.  We only use IR to add texture - not for depth calculation


            Is there any problem in mounting the D435 vertically?




            Any examples or APIs for coordinate mapping between RGB,IR and depth?


            For spatial alignment please see the align example on GitHub.  If you mean temporal alignment (synchronization) it is on by default when using pipeline.
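            For the spatial case, the core of the align example comes down to a few lines; a sketch assuming pyrealsense2 and a frameset obtained from a running pipeline (the import is guarded so the sketch is a no-op without the library):

```python
# Sketch of spatial depth-to-color alignment with rs.align, as in the
# align example on GitHub. Guarded: a no-op without pyrealsense2.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None

def aligned_depth_frame(frames):
    """Map the depth frame in `frames` onto the color stream's viewpoint."""
    if rs is None:
        return None
    align = rs.align(rs.stream.color)  # align everything to color
    aligned = align.process(frames)    # returns a re-mapped frameset
    return aligned.get_depth_frame()
```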


            What's your advice on using an external IMU with D400?


            Please reach out to your Intel contact.


            My question is for OpenNI2 integration.  You have it on your roadmap but say if I want to integrate Intel RealSense with OpenNI2 then what help can your team provide to me?


            You can feel free to contribute OpenNI support on GitHub until we support it.  Thank you.


            Sometimes we have trouble with a plain wall background farther than 5 m.  Is there a way to get an IR projector unit, or to use an extra D435 / D415 in a simple projector mode without a PC?


            Yes - use any IR pattern to augment the scene.  We only use IR to add texture; it is not used for depth calculation, so an external projector is fine.