1. Like their RealSense predecessors, the D415 and D435 are 3D depth-sensing cameras.
2. These cameras use the new RealSense SDK 2.0, which is an advanced form of Librealsense. The SDK is in ongoing development and is updated over time. At present it does not have face tracking and gestures built in, though you should be able to gain these features by using the SDK in combination with OpenCV and/or ROS, which have face and hand tracking modules.
3. Depth-sensing cameras such as RealSense lose image quality when scanning reflective surfaces, such as glass. Water is therefore likely to be among the reflective materials whose surface the camera would have difficulty seeing when scanning from above the water.
4. Apparently it is possible to use RealSense cameras underwater, with a protective housing to keep the water out. However, the range they can scan to is limited; one suggestion is that this is because infrared does not travel well in water.
The discussion below may be of use to you if you have not seen it already.
Also, there is an engineering paper on underwater scanning with affordable depth sensing cameras such as RealSense.
The 400 Series cameras have a greatly enhanced operating range over previous RealSense camera models, plus a D4 vision processing chip. They would still be subject to the same physical limitations of IR light traveling in water, though. So you may get improved performance, but there is no way to tell how much without actually trying the camera underwater.
5. The SR300 camera can be used with the new cross-platform RealSense SDK 2.0 for Windows and Linux, with full Mac support coming (there is partial Mac support at the moment).
Thanks for the valuable response.
I need some more information:
1. Does the SR300 camera support face tracking and gestures?
2. Do any other Intel camera models support object tracking, face tracking, and gestures?
3. I read about the ZR300 and that it supports object detection, person detection, and so on.
Does it still have SDK support? If yes, please share the link.
4. Are there any plans to stop SDK support for the SR300 and ZR300?
1. For face tracking and gestures with the SR300, you would have to use the '2016 R2' or '2016 R3' versions of the RealSense SDK if you do not want to use OpenCV or ROS with SDK 2.0.
2. The R200 can do face tracking, but it does not have hand tracking.
3. If you need object detection and person detection for the ZR300, you should use the RealSense SDK For Linux. The R2, R3 and 2.0 SDKs do not support the ZR300. You can use Librealsense with the ZR300 but it does not have the object detection and person tracking features that the RealSense SDK For Linux has.
4. The SR300 is supported in the new RealSense SDK 2.0. I do not have any knowledge about how far into the future that support will continue. The ZR300 USB camera is being retired, so the RealSense SDK For Linux is unlikely to receive further updates as Intel focuses on the RealSense SDK 2.0 for SR300, D415 and D435.
Thanks for your valuable response.
I need some more information about the ZR300 and D400 series.
We are planning to buy the ZR300, since it supports object detection, tracking, and so on.
1. Does the ZR300 RealSense SDK have built-in APIs for object detection/tracking?
If yes, please share the example link or API names.
If not, which APIs do we have to use for object detection/tracking?
2. If there is any example already available for object/person detection/tracking with ZR300, please share the link.
3. The ZR300 supports a minimum depth of 0.5 m or 0.6 m. How do we configure this using the SDK or hardware configuration?
4. What is the maximum depth range for ZR300?
5. It is mentioned that the recommended platform is the Intel Joule developer kit.
Does it support any other platform?
Can we use Eclipse for development?
6. Is the Intel Joule developer kit mandatory for building the SDK?
7. Is there a ZR300 RealSense SDK for Windows with the above functionality?
8. If there is any example video of object detection/tracking, please share the link.
9. Do you provide any support for SDK for object detection/tracking for ZR300?
10. What are the output resolution and frame rate?
11. Do you have RealSense SDKs for Windows, Android, and iOS as well?
Regarding the D400 Series:
1. Does the D400 series support person detection and tracking, and single or multiple object detection, recognition, and tracking?
2. If yes, which SDK version supports this?
3. If not, when will these features be included? When will we get an updated SDK with object detection/tracking features?
4. How soon will an SDK for the D400 series be available with the same features as the ZR300?
5. Does this SDK support Windows, Ubuntu, Android, and iOS?
6. On which platforms can we use this SDK and camera?
I'll try to zip through your questions.
1 & 2. You should use the Intel RealSense SDK For Linux for object detection and tracking.
3. The minimum depth range of the ZR300 is probably not changeable.
4. The maximum depth capture range of the ZR300 is 2.8 m.
5. You can also use the ZR300 with the Up Core development board.
Text version of link: up-board.org/upcore/
6. Eclipse can be used with the RealSense SDK For Linux. Click on the link below and look a couple of sections further down for the section headed 'Eclipse'.
Text version of link: software.intel.com/sites/products/realsense/sdk/getting_started.html#Run_the_Pipeline_Async_Sample
7. As stated above, the ZR300 is also validated to work with the Up Core development board. The camera was even sold in Up's online store.
8. Video of object recognition with the ZR300.
Text link: youtube.com/watch?v=fGQhs_K_hO8
9. You can receive support for the RealSense SDK For Linux and its features on this forum.
10. Resolution chart of the ZR300:
11. The only other SDK option supporting the ZR300 besides the RealSense SDK For Linux is the original Librealsense SDK (now called the Legacy version after the release of RealSense SDK 2.0). It has far fewer features, though, and the RealSense SDK For Linux installs Librealsense as one of its modules anyway. So the For Linux SDK is the better choice of the two options.
There are no SDKs for Android and iOS.
I have listed your 6 questions about the 400 series as questions 12 to 17.
12 & 13. Regarding the 400 Series cameras, you may be able to use object detection / tracking features if you use RealSense SDK 2.0 in combination with OpenCV and the OpenCV modules providing those features.
14 & 15. It is probable that features such as object tracking will not be provided as built-in options for SDK 2.0, and - as stated above - will have to be gained by using 2.0 with OpenCV or other options such as ROS and LabVIEW.
16. SDK 2.0 supports Windows, Linux, and partial support for Mac OSX (full Mac support is being worked on).
17. SDK 2.0 is the only SDK for the 400 Series cameras.
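As a rough illustration of the OpenCV route mentioned in 12 & 13, the sketch below grabs one color frame from a 400 Series camera via pyrealsense2 (SDK 2.0's Python wrapper) and runs OpenCV's stock Haar-cascade face detector on it. This is not an official Intel sample; the imports are guarded so the helpers degrade gracefully on a machine without the libraries or a connected camera.

```python
# Illustrative sketch only: RealSense SDK 2.0 (pyrealsense2) + OpenCV face detection.
try:
    import numpy as np
    import pyrealsense2 as rs
except ImportError:
    rs = None
try:
    import cv2
except ImportError:
    cv2 = None

def grab_color_frame():
    """Return one BGR frame from the first RealSense camera, or None if
    pyrealsense2 or a connected camera is unavailable."""
    if rs is None:
        return None
    pipeline = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    try:
        pipeline.start(cfg)
        frames = pipeline.wait_for_frames()
        return np.asanyarray(frames.get_color_frame().get_data())
    except RuntimeError:        # no device connected
        return None
    finally:
        try:
            pipeline.stop()
        except RuntimeError:
            pass                # pipeline was never started

def detect_faces(bgr_image):
    """Return face bounding boxes [(x, y, w, h), ...] from OpenCV's
    pre-trained frontal-face Haar cascade; [] if OpenCV or the image is missing."""
    if cv2 is None or bgr_image is None:
        return []
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    return list(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))

if __name__ == "__main__":
    print(detect_faces(grab_color_frame()))
```

The same pattern generalizes to other OpenCV modules (e.g. trackers or DNN-based detectors): the SDK only delivers frames, and the detection logic lives entirely on the OpenCV side.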
Thanks for the continued responses.
I need a few more clarifications:
1. The ZR300 has USB 3.0 connectivity.
Is there any option or hardware connector available to connect this device to smartphones (Windows, Android, or iOS phones)?
Or how can we use this camera with mobiles, and is that possible at all?
2. Which library does the Intel RealSense SDK for Linux use for object detection/tracking?
Does this SDK have OpenCV built in, or Intel's own library for detection/tracking?
3. You mentioned that SDK 2.0 supports the ZR300.
So can I use SDK 2.0 with OpenCV on Windows/Mac for object detection/tracking?
4. Does the D400 series have any support for connecting to smartphones?
1. Yes, the ZR300 has USB 3.0 connectivity.
No, you cannot connect the ZR300 to mobile devices such as iOS / Android ones. You also cannot use RealSense with the ordinary built-in cameras of mobiles. In 2016 there was a RealSense Smartphone Developer Kit that used a built-in ZR300, Android, and Google Project Tango, but it was cancelled before it went on sale as Intel shifted their RealSense strategy away from mobiles. I believe there are some pre-release preview units of the device in people's possession.
2. The RealSense SDK For Linux uses its own Intel library for object tracking.
3. No, SDK 2.0 does not support ZR300, unfortunately. 2.0 only supports the SR300, D415 and D435 models. Whilst you may be able to use 2.0 for object tracking with OpenCV, you would have to use one of the supported camera models.
4. The 400 Series cameras do not connect directly to smartphones. They do have a set of 4 GPIO pins (like the ones on a Raspberry Pi board) for connecting external hardware to the camera to synchronize timing between them. So you may be able to create a cable to join the camera GPIO to a peripheral connector on a smartphone, though I am not sure what kind of data you could pass along such a connection.
This message was posted on behalf of Intel Corporation
Hello Sakthivel-S and Marty,
The ZR300 and R200 cameras are discontinued and their software stacks are only minimally supported for the time being. Only the SR300 and D400 cameras with SDK 2.0 will continue to be supported in the future. The SR300 development kits will continue to be sold until the current inventory runs out, and no more will be built. SR300 camera modules will continue to be available from Intel Authorized Distributors.
Thanks for your understanding.
Thanks for the continued responses.
A few more questions about the ZR300:
1. Is the ZR300 a 3D camera?
2. It has four camera modules (left and right IR cameras, an HD color camera, and a fisheye camera).
If I capture an image, will it capture using all cameras and output a single image, such as a 3D image or an accurate depth image?
3. Is capturing with multiple cameras done automatically, or do we have to program it using the SDK?
4. Is it possible to see the individual camera outputs?
5. Is it possible to record a video using the ZR300 by default, or do we have to do some programming to record video?
6. Even for video recording, does it capture using all cameras and store the result as a single video?
That is, does it capture using all cameras, process all the camera outputs, and produce a single image or video as output?
1. All RealSense cameras are 3D depth sensing cameras.
2. Each camera captures its own independent stream, and they do not combine together unless you program them with scripting to do that.
3. Librealsense supports use of multiple cameras, but you would have to use programming code to access that function. Librealsense has a multiple camera example program called cpp-multicam.
4. Yes, you can see individual camera outputs. Librealsense has a sample program for this called cpp-capture that shows the RGB, depth and IR streams as four separate boxes on the screen.
5 and 6. This section of the RealSense SDK For Linux covers record and playback.
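For the current SDK 2.0 cameras, the record idea from points 5 and 6 can be sketched with the pyrealsense2 wrapper as below. The ZR300's For Linux SDK exposes its own, different record/playback API, so treat this purely as an illustration of the concept, not as the exact ZR300 calls.

```python
# Conceptual sketch: record the default streams to a .bag file with pyrealsense2.
import time
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None

def record_bag(path, seconds=5):
    """Record the default streams to a .bag file for `seconds`; returns
    True on success, False if the bindings or a camera are unavailable."""
    if rs is None:
        return False
    pipeline = rs.pipeline()
    cfg = rs.config()
    cfg.enable_record_to_file(path)   # everything streamed gets written here
    try:
        pipeline.start(cfg)
        end = time.time() + seconds
        while time.time() < end:
            pipeline.wait_for_frames()
        return True
    except RuntimeError:              # no camera attached
        return False
    finally:
        try:
            pipeline.stop()
        except RuntimeError:
            pass                      # pipeline was never started
```

Playback works the same way in reverse: pass the recorded file to `rs.config().enable_device_from_file(path)` and the pipeline replays it in place of a live camera.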
I appreciate your continuous support.
1. Does the ZR300 SDK (Intel RealSense SDK) have any API to combine all camera outputs into a single image?
Or how easy is it to program combining all camera outputs using the Intel RealSense SDK?
2. The following image shows that all camera output goes to the "Interposer".
Does the interposer combine all camera outputs into a single image? If not, what does the interposer do?
3. For capturing with multiple cameras, you mentioned using Librealsense.
Is there any API available in the Intel RealSense SDK (ZR300)?
Does the RealSense SDK also have support for fetching the individual camera outputs?
4. Is it possible to include Librealsense APIs in the Intel RealSense SDK (ZR300)?
The questions above are from a ZR300 perspective, so please answer them for the ZR300.
Please also answer all of the above questions from a D400 series perspective.
Apologies for the delayed response.
1. I believe that to merge streams together in Librealsense, you should 'align' them. The link below is useful for explaining the process.
2. The interposer is the part of the camera that the data cable connects to in order to connect the camera to whatever PC device is being used, as shown by the two-way arrows on the diagram.
3. If you are asking if you could use a RealSense-compatible SDK other than Librealsense or the SDK For Linux with the ZR300, the answer is unfortunately no.
4. Librealsense is installed as an included module when you install the RealSense SDK For Linux. So if you have the For Linux SDK, you will have Librealsense installed too.
Going through the same questions again for the D-cameras.
1. RealSense SDK 2.0 has a sample program called rs-align:
2. An interposer on the D-cameras would likely perform the same kind of connector / data transfer functions as on the ZR300.
3. RealSense SDK 2.0 does support multiple cameras and has a sample program called Multicam.
4. RealSense SDK 2.0 is an advanced form of Librealsense.
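The alignment idea behind rs-align can be sketched in SDK 2.0's Python wrapper (pyrealsense2) roughly as follows. An `rs.align` processing block re-projects the depth frame into the color camera's viewpoint so the two images line up pixel for pixel. This is a sketch rather than the official sample, with guarded imports for machines without the library or a camera.

```python
# Sketch of depth-to-color alignment with pyrealsense2 (the rs-align idea).
try:
    import numpy as np
    import pyrealsense2 as rs
except ImportError:
    rs = None

def aligned_depth_and_color():
    """Return (depth, color) numpy arrays mapped to the color camera's
    viewpoint, or None if pyrealsense2 or a camera is unavailable."""
    if rs is None:
        return None
    pipeline = rs.pipeline()
    try:
        pipeline.start()
        align = rs.align(rs.stream.color)   # align all streams to color
        frames = align.process(pipeline.wait_for_frames())
        depth = np.asanyarray(frames.get_depth_frame().get_data())
        color = np.asanyarray(frames.get_color_frame().get_data())
        return depth, color
    except RuntimeError:                    # no device connected
        return None
    finally:
        try:
            pipeline.stop()
        except RuntimeError:
            pass                            # pipeline was never started
```

After alignment, the pixel at (x, y) in the depth array corresponds to the pixel at (x, y) in the color array, which is what makes overlays and colored point clouds possible.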
I appreciate your continuous support.
I may be asking too many questions; I just want a clear understanding of the ZR300 product and its SDK.
1. You mean Librealsense is included in the RealSense SDK for Linux.
So I can use the multiple-camera and individual-camera example programs from Librealsense. Am I right?
2. How many cameras does the "Person tracking library" use to track a person?
Does it take input from only one camera, or from multiple cameras?
3. When we start programming, we have to enable specific cameras based on our requirements, right?
4. Your last reply has info on merging streams, including aligning infrared with color. Thanks for this info.
I still want to know more about merging streams. This may be a repeated question; if you have any other info about this operation, please let me know.
The camera has four modules. What is the possibility of merging all four camera outputs into a single image, and also into a combined video output?
Your last response covered aligning infrared with color, but is it possible to merge the fisheye camera, color camera, and left and right IR camera outputs?
Mainly, I want to know how we can get a high-accuracy depth image. Is it possible using any one camera, or is it required to combine all four cameras to get a high-accuracy depth image/video?
Please share whatever information you have about merging the streams of these four cameras. Does the RealSense SDK for Linux have APIs for merging the streams of all four cameras?
1. The Librealsense part of the RealSense SDK For Linux should enable you to use multiple cameras.
2. The Person Library that provides person tracking in the For Linux SDK is designed for one camera.
3. If you are using more than one camera, you can switch between cameras using a process called Enumeration. Here is information for doing this in Librealsense.
4. I do not have any additional information on merging streams with Librealsense, unfortunately.
I would think that in gaining a high-quality image, the most important factors are the environment (e.g. lighting conditions, amount of reflections, etc.) and the camera model's capabilities. If you combine four cameras that all have a poor image, the end result is not going to be of a higher quality. As the old saying goes: garbage in, garbage out. So if your source images are not good, you will not get a combined image of higher quality.
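The enumeration approach from point 3 can be sketched with SDK 2.0's Python wrapper as below. The legacy Librealsense API used by the ZR300 differs, so treat the exact calls as illustrative: the idea is to list connected cameras by serial number, then bind a pipeline to one specific serial.

```python
# Sketch: enumerate RealSense cameras and open one by serial (pyrealsense2).
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None

def list_serials():
    """Serial numbers of all connected RealSense cameras ([] if the
    bindings or cameras are unavailable)."""
    if rs is None:
        return []
    ctx = rs.context()
    return [dev.get_info(rs.camera_info.serial_number)
            for dev in ctx.query_devices()]

def open_camera(serial):
    """Start and return a pipeline bound to the camera with this serial."""
    cfg = rs.config()
    cfg.enable_device(serial)   # restrict the pipeline to one device
    pipeline = rs.pipeline()
    pipeline.start(cfg)
    return pipeline
```

Switching between cameras is then just a matter of calling `open_camera()` with a different serial from `list_serials()`, one pipeline per device.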