
Notes and Q&A from June 27 2018 RealSense webinar

MartyG

Hi everyone,

On June 27 2018, well-known RealSense development leader Sergey "Dorodnic" Dorodnicov hosted an online webinar session titled "Intel RealSense Software Overview: Architecture, Strategy, and Roadmap". Summary notes and a transcript of questions and answers from the session are posted in the comments below.

If you missed the previous RealSense webinar back on 21 March 2018 with RealSense manager Brian Pruitt, a transcript of the question and answer session from that event is available at the link below:

UPDATE 11/08/2018: The slides from the webinar are now available in a document.

https://realsense.intel.com/wp-content/uploads/sites/63/realsense_software_architecture_webinar.pdf

MartyG

July 03 2018: Updated with new information from the second webinar session

Posted below are summary snippets of information from the two sessions of the webinar that were held on June 27. The full transcript of the questions and answers, posted with permission of the webinar moderator, is in the comment below that.

INFORMATION SNIPPETS

* You can "try before you buy" with RealSense 400 Series cameras by downloading the RealSense Viewer and loading pre-made data samples into its test mode, letting you evaluate the software before purchasing a D415 or D435 camera (a short playback sketch follows the link below).

https://github.com/IntelRealSense/librealsense/blob/master/doc/sample-data.md
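
The same idea works in code: a recorded sample file can be fed into a pyrealsense2 pipeline instead of a live camera. This is a minimal sketch; "outdoors.bag" is a placeholder for whichever sample file you downloaded from the page above.

    import pyrealsense2 as rs

    # Point the pipeline at a recorded .bag file instead of a live camera.
    # "outdoors.bag" is a placeholder for the sample file you downloaded.
    cfg = rs.config()
    cfg.enable_device_from_file("outdoors.bag")

    pipe = rs.pipeline()
    pipe.start(cfg)
    frames = pipe.wait_for_frames()
    depth = frames.get_depth_frame()
    print("Depth frame: %dx%d" % (depth.get_width(), depth.get_height()))
    pipe.stop()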

* Intel wants to encourage the scaling up of RealSense cameras - not just one camera, but tens of cameras. The company believes RealSense can be everywhere, on many platforms, and that depth sensing is the future in many fields.

* Intel has produced new Python tutorials for learning how to use RealSense. The tutorial is new enough that it is not on GitHub yet, though the webinar provided a Google Drive location for it in the meantime. It covers stream alignment, depth and object detection with Python (a minimal alignment sketch appears after the link below).

https://colab.research.google.com/drive/10YTLAf2i0R80-XX_6-1gTPtOMD0cRIx5
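
To give a feel for the stream alignment topic the tutorial covers, a minimal pyrealsense2 sketch (assuming a D415 or D435 is attached) might look like this:

    import pyrealsense2 as rs

    pipe = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipe.start(cfg)

    # Map each depth frame onto the color frame's viewport, so that pixel
    # (x, y) in both images refers to the same point in space.
    align = rs.align(rs.stream.color)
    frameset = align.process(pipe.wait_for_frames())
    aligned_depth = frameset.get_depth_frame()
    color = frameset.get_color_frame()
    pipe.stop()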

Producing tutorials to help new users get started is the direction Intel is taking with education on RealSense topics, so that users and potential camera purchasers can learn about a subject such as using OpenCV with RealSense from a tutorial in their browser.

* Working to make Android increasingly accessible to the 400 Series cameras.

* Looking into providing Java support later.

* Going in the direction of the camera being able to work without complex installation procedures.

* Aiming for regular SDK updates (around every two weeks) with bug fixes, responding quickly to feedback where possible, and introducing updates that address more complex user-reported problems and needs later where possible. Users can use the Issues page on GitHub to highlight problems they may be having; this page is monitored regularly by the engineering and development teams.

https://github.com/IntelRealSense/librealsense/issues

If a posted question is not answered in a timely manner then it will be automatically escalated to Intel management, so that everyone should get an answer eventually. "You can be sure that if you ask something, we will work on it".

* Another aspect of Intel's RealSense support is the Pull Requests page on Github. The aim is to provide transparency so that when Intel is working on a feature, you can see the work being discussed and you can participate, give feedback and say whether it is a great idea or a terrible one. In this way, users can be part of the process instead of just waiting for the next release.

"We strongly appreciate and encourage community participation ... we wanted to make it very easy for people to take the library and modify it to their needs, and share it with more people, even if it doesn't line up with our vision of the product".

If a customer writes a feature, documents it and submits it, then it is "a very strong tool for influencing the direction of the product". Intel will almost always happily accept a contribution and try to maintain it from that point onward.

https://github.com/IntelRealSense/librealsense/pulls

* Linux kernel 4.16 support is aimed for 2018; Intel is still patching some issues in librealsense to enable it. Problems with kernel patching should be a thing of the past within about a year and a half from now.

* Aiming to make the 400 Series cameras easier to use on platforms that do not support kernel patching, or whose patches do not work well, by bypassing the operating system and talking directly to the camera. This bypass is enabled with the CMake flag -DFORCE_LIBUVC=true.

"When you enable this flag in CMake, what will happen is that the library will translate all the UVC commands into buffer and then send it to the camera via USB, therefore bypassing any specific streaming APIs. It's a cool idea. When it works, it just works, but we don't want to be entirely reliant on this, because we are essentially bypassing the kernel and the driver. It's a bit messy, but if someone wants to try it ... you have this option and we encourage people to use it".

* Working to address support for motion tracking on the 400 Series. This "will be solved in time".

* It is possible to use R200 and 400 Series cameras together, as they are based on the same kind of technology, though the 400 Series is far more advanced. Different versions of librealsense can also co-exist side by side. Intel is not developing new features for the previous-generation cameras, though.

* During a discussion of the Open3D open source point cloud library, the possibility of combining point clouds into a "mega point cloud" was mentioned (a sketch of the idea follows the link below).

http://www.open3d.org/
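
Purely as an illustration of that idea, a minimal Open3D sketch might look like the following. The .ply file names are hypothetical exports from two cameras, both clouds are assumed to already be registered into a common world frame, and the method names reflect recent Open3D releases.

    import open3d as o3d

    # Hypothetical point cloud exports from two different RealSense cameras
    pcd_a = o3d.io.read_point_cloud("camera_a.ply")
    pcd_b = o3d.io.read_point_cloud("camera_b.ply")

    # Open3D point clouds can be concatenated with "+"
    merged = pcd_a + pcd_b
    # Thin out the overlapping regions of the combined cloud
    merged = merged.voxel_down_sample(voxel_size=0.005)
    o3d.visualization.draw_geometries([merged])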

* It is easy to install the Python wrapper from PyPI with $ pip install pyrealsense2 and then get something working in a small number of lines, as the sketch below shows.
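
For instance, this minimal sketch (assuming a camera is connected) reads the distance at the center of the depth image:

    import pyrealsense2 as rs

    pipe = rs.pipeline()
    pipe.start()  # default configuration includes a depth stream
    frames = pipe.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters at the center pixel of the depth image
    print(depth.get_distance(depth.get_width() // 2, depth.get_height() // 2))
    pipe.stop()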

* Intel wants the SDK to be deployable to "very weak, limited hardware" as well as full PCs. This goal is aided by the SDK having a very low footprint of around 100 MB, with a 5 MB binary footprint and little to no third-party installations required.

* The RealSense API is designed to be a single system that works for everything, not fractured into different branches. If you see a particular function in one of the RealSense sample programs, you should be able to replicate that function in your own custom application.

"One of the hidden features that not many people know about is that in CMake, there is a specific flag that will force any transition into the library to be logged so all API calls will be logged, including input values, output, how much time it took ... this will obviously impact performance negatively, but it's a cool way to try to understand which APIs a certain demo is using".

* "We continue to design and research new depth sensing technologies".

* "The sample program 'Software Device' can be used to create a fake RealSense device when it is run. The example will teach you how to inject synthetic frames into this fake device, and then you can use the regular point cloud, texture mapping, all the regular APIs to work with it. It can be useful to inject data from other sensors into our existing ecosystem. We're also thinking about giving people tools to map RealSense to other sorts of input".

The full Q&A text is posted in the comment below.

MartyG

Do you plan to add body tracking to the RealSense SDK any time soon?

Hi - We plan to focus our SDK on providing depth across multiple OSes and wrappers. We are working with third-party providers for what we call middleware. These middleware packages will work with our SDK. For skeletal tracking, check out http://www.nuitrack.com

Where do I find SDK documentation?

On our webpage https://realsense.intel.com/ and on GitHub: https://github.com/IntelRealSense/librealsense

When is the T260 tracking module going to be available?

Schedules are available to our NDA customers. Watch for public announcements.

You mentioned support for Android but that it was perhaps still in early development - do you have a time frame for 'robust' support for Android that would run on a 'standard' Android device without rooting it?

Please refer to https://github.com/IntelRealSense/librealsense/blob/master/doc/android/Android.md for rooted devices. As for additional Android support - we are working with Android 8.1 on integrating our Face Authentication RealSense APIs. If you have additional requests, please file an issue via our community or GitHub links.

Do you plan to support motion tracking using SLAM for D400 in the RealSense SDK?

Not at this time. Please follow our webpage https://realsense.intel.com/ for updates.

I've seen some references to an L500 projector and an IVCAM 2.0 in some recent librealsense commits. Is there going to be further information about this camera soon?

Schedules and future products are available for NDA customers. Please check our webpage https://realsense.intel.com/ for more updates.

Are there any plans to support OpenNI 2?

Yes, please check our webpage https://realsense.intel.com/ and GitHub for future examples and capabilities.

You said you're working to improve support for Android - does that mean Android support doesn't quite exist yet? Or it does currently?

Hi, did you see my previous answer? Please check GitHub and search for Android on the main page.

Please show a 3D scan using the D415. Thanks

We do not have scanning software; we leave this to companies specializing in scanning. We will announce some of the third parties we are working with next quarter.

Can I hack a D41x device to get high-resolution scanning in a near field of view? I mean working in a depth range from 20 cm to 30 cm with sub-millimeter resolution. Is it possible?

Please check the datasheet, available on our webpage https://realsense.intel.com/, for the supported resolutions and minZ; it changes according to the resolution.

What are the similar products available on the market?

We do not discuss our competitors. Feel free to Google depth cameras and take a look.

Where is the detailed step by step document of camera calibration tools?

Please check our webpage https://realsense.intel.com/ under the calibration section: https://realsense.intel.com/intel-realsense-downloads/#cal

When will the MATLAB wrapper be available?

Soon, in a couple of sprints.

You mentioned that you support OSX. Could you please send me the link to get the Viewer for Mac?

Please check the GitHub main page - https://github.com/IntelRealSense/librealsense - and search for Mac.

Can you please tell us if SLAM is in your product roadmap for D400?

Not at this point. The D400 family is a depth camera focused on providing depth. If we add, say, an IMU, we will release a notification to those signed up on our website http://realsense.intel.com

Does the firmware now support hardware sync with external sensors? If not, is it in the works, and when can we expect it to be available?

Yes, it is being worked on now and coming soon. Stay tuned for updates!
