1. RealSense IR sensors require a certain amount of light in order to generate a good quality image. If the location is too dark then the image will be mostly or completely black.
However, I ran a test with a D415 camera in a mostly dark room at 7 am (not completely dark), with the lights off and the blinds shut, and still got a decent IR image.
2. When the lights were turned on, it was a very smooth, instant transition to the stronger lighting conditions.
3. You can extract RGB color, depth, and IR. There are different formats for each of these main stream types. The link below has a script with a list of image formats on lines 54 to 74.
4. If you use the RealSense Viewer tool that comes with SDK 2.0 to generate the infrared stream, you can snap a single frame using the SDK's record and playback functions. To capture a frame instead of recording a sequence of frames with the Record button, click on the Pause icon to pause the stream, and then click on the Snapshot button next to it to capture that paused frame as a PNG image.
The quickest method would likely be to take the Capture sample (which shows the color and depth streams) and alter the code to show the IR stream instead of the color one.
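As a rough sketch of that change (assuming SDK 2.0's rs2::config API; the stream index, resolution, and FPS values here are assumptions - pick ones your camera actually supports):

```cpp
// Instead of the default pipe.start(), pass a config that requests
// the left IR imager (stream index 1) in 8-bit grayscale (Y8).
rs2::config cfg;
cfg.enable_stream(RS2_STREAM_INFRARED, 1, 640, 480, RS2_FORMAT_Y8, 30);
pipe.start(cfg);

// Then, inside the loop, fetch the IR frame instead of the color one:
rs2::video_frame ir = pipe.wait_for_frames().get_infrared_frame();
// ir.get_data() now points to width*height bytes of IR intensity.
```

This is a configuration fragment, not a complete program; it drops into the Capture sample's existing pipeline setup and frame loop.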
Thank you very much that was very helpful.
Just to give you an idea of the application I am planning to develop:
I did this with the Kinect, but the Kinect is no more, so I am trying to find an alternative.
Also, what are the recommended PC specs for using the D435?
Finally, I received my D435 after a long wait.
At this time I am using Windows 10 to develop my prototype application, but I will eventually move to Ubuntu.
I installed the latest SDK, 2.12.0. I am using MS Visual Studio 2015 to compile the code.
The following code compiles without any problem:
// License: Apache 2.0. See LICENSE file in root directory.
// Copyright(c) 2017 Intel Corporation. All Rights Reserved.
#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include <opencv2/opencv.hpp>   // Include OpenCV API
#include <iostream>

int main(int argc, char * argv[]) try
{
    // Declare depth colorizer for pretty visualization of depth data
    rs2::colorizer color_map;
    // Declare RealSense pipeline, encapsulating the actual device and sensors
    rs2::pipeline pipe;
    // Start streaming with default recommended configuration
    pipe.start();

    using namespace cv;
    const auto window_name = "Display Image";
    namedWindow(window_name, WINDOW_AUTOSIZE);

    while (waitKey(1) < 0 && cvGetWindowHandle(window_name))
    {
        rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
        rs2::frame depth = color_map(data.get_depth_frame());

        // Query frame size (width and height)
        const int w = depth.as<rs2::video_frame>().get_width();
        const int h = depth.as<rs2::video_frame>().get_height();

        // Create OpenCV matrix of size (w,h) from the colorized depth data
        Mat image(Size(w, h), CV_8UC3, (void*)depth.get_data(), Mat::AUTO_STEP);

        // Update the window with new data
        imshow(window_name, image);
    }

    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception& e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}
But when I run the application I get the following error:
API version mismatch: librealsense.so was compiled with API version 2.9.1 but the application was compiled with 2.12.0! Make sure correct version of the library is installed (make install)
Please ignore my question. I worked it out. Copying the realsense2.dll file to the application directory fixed the problem.
I would like to know how I can get the raw (non-colorized) depth frame in grayscale, and also the depth value at given x,y coordinates.
In OpenNI2 I was able to save the depth info from a frame into a uint16 array. Is it possible to do the same here?
I researched your question carefully, but there are no precedents to refer to for a Windows user - the small number of cases involving this error are all based on the Linux version of the SDK.
The solutions in those Linux cases could be summed up as:
1. Compile the 2.12.0 SDK from source code instead of using a pre-made executable, if the pre-built SDK is what you are using.
2. Download the old 2.9.1 version of the SDK and try your program with that.
I progressed further: I was able to get the depth data in meters for each frame and process it.
I have posted sample code showing how to do it in the previous link you provided, so it may be useful for someone.
I have one challenge to overcome. I am processing the IR frame, and it seems to be affected by turning on the light in the room. The Kinect IR frame was not affected by this.
Any advice on this is welcome.