Thank you for your interest in Intel RealSense technology.
Could you please tell me which Intel RealSense camera you are trying to achieve this with?
Thank you in advance,
I was able to do some research and unfortunately, we do not support the Jetson TX2.
I would suggest that you upload your screenshots and describe the specific errors you are getting and at what point they occur; that way someone from the community may be able to help.
One guide that could also prove helpful can be found here: https://www.jetsonhacks.com/2018/04/09/intel-realsense-d400-librealsense2-nvidia-jetson-tx-dev-kits/
Let me know if you have any other questions.
Some more questions. I am not sure why a USB 3 camera cannot work with Ubuntu and the TX2. I am also not sure why a big kernel patch and rebuild is required, but I am no Linux expert. It is also confusing to me that the TX2, the most powerful ARM solution, is not supported.
I also want you to know that the install instructions on GitHub include a section on installing on a Jetson TX2 that refers to Jetsonhacks.
The support of Jetson TX2 by librealsense is provided on a best effort basis by Intel but it is not officially supported. The Jetson installation page (https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md) on the librealsense Github states:
"NOTE: Intel does not officially support the Jetson line of devices. Furthermore, there are several known issues (https://github.com/IntelRealSense/librealsense/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20jetson) with running librealsense on jetson."
The added support for TX2 in librealsense 2.13.0, as stated in the Release Notes, comes in the form of:
- Adding CUDA-optimized implementation (https://github.com/IntelRealSense/librealsense/pull/1866) for Jetson-TX (arm) platform
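For anyone building 2.13.0 from source on the TX2, enabling the CUDA path is done at configure time. A minimal sketch, assuming CUDA and the usual build dependencies are installed; the BUILD_WITH_CUDA flag name is taken from the librealsense build documentation and is an assumption here:

```shell
# Sketch: configure librealsense 2.13.0 with the CUDA-optimized code path
# on a Jetson TX2. BUILD_WITH_CUDA is assumed from the librealsense docs.
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
git checkout v2.13.0
mkdir build && cd build
cmake .. -DBUILD_WITH_CUDA=true -DCMAKE_BUILD_TYPE=Release
make -j4
```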
The librealsense Github pages point to Jetsonhacks for content and more information because that is where you will find the most information. The Jetsonhacks community has taken the lead on this support.
As explained in the BuildLibrealsense2TX page (https://github.com/jetsonhacks/buildLibrealsense2TX) on Github, "
In order for librealsense to work properly, the kernel image must be rebuilt and patches applied to the UVC module and some other support modules.
The Jetsons have the v4l2 module built into the kernel image. The module should not be built as an external module, due to needed support for the carrier board camera. Because of this, a separate kernel Image should be generated, as well as any needed modules (such as the patched UVC module)."
In other words, RealSense cameras, because of their special depth formats, require patched video modules. These modules are built into the kernel image for the Jetsons so the kernel must be rebuilt in order to patch those modules for RealSense.
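A quick way to see why the rebuild is necessary is to check how the UVC driver was configured in the running kernel. A small sketch, assuming a kernel built with IKCONFIG so that /proc/config.gz exists (the helper function name is hypothetical):

```shell
# Check whether the UVC driver (CONFIG_USB_VIDEO_CLASS) is built into the
# kernel (=y) or built as a loadable module (=m). On the Jetson kernels it
# is expected to be built in, which is why patching it means a full rebuild.
uvc_mode() {
    # Reads a kernel config stream on stdin; prints y, m, or nothing.
    grep -E '^CONFIG_USB_VIDEO_CLASS=' | cut -d= -f2
}

# On a live system with IKCONFIG enabled:
#   zcat /proc/config.gz | uvc_mode
```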
Is this information helpful?
The Jetsonhacks narrator pointed out how risky rebuilding the kernel is, but supplied the scripts/instructions to do it.
As in, it may not work.
I followed his instructions with version 2.12 and everything was OK. I upgraded to 2.13 because of some improvements for the TX2, and that leaves me wondering whether his video and scripts were for version 2.12, not 2.13.
So if I rebuild the kernel with his scripts and the TX2 boots, does that mean it's all OK? It would be nice if the kernel was not involved.
I am not sure what direction to go in anymore.
Intel are aiming to make the 400 Series cameras easier to use with platforms that do not support kernel patching, or whose patches do not work well, by bypassing the operating system and talking directly to the camera. This bypass is enabled with the CMake flag -DFORCE_LIBUVC=true
They are also working on eliminating the need for kernel patching in the next 18 months.
Edit: I transcribed some further information from the webinar given by Dorodnic on June 27, in which the bypass was mentioned.
"When you enable this flag in CMake, what will happen is that the library will translate all the UVC commands into buffer and then send it to the camera via USB, therefore bypassing any specific streaming APIs. It's a cool idea. When it works, it just works, but we don't want to be entirely reliant on this, because we are essentially bypassing the kernel and the driver. It's a bit messy, but if someone wants to try it ... you have this option and we encourage people to use it".
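Trying the bypass described in the quote is a one-line change at configure time. A hedged sketch, assuming a standard CMake build of librealsense from an existing checkout; only the FORCE_LIBUVC flag itself is named in the source:

```shell
# From an existing librealsense checkout: configure with the libuvc
# bypass (flag named above), then rebuild and reinstall.
cd librealsense
mkdir -p build && cd build
cmake .. -DFORCE_LIBUVC=true
make -j4
sudo make install
```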