Thanks for reaching out; this sounds like an interesting project.
I'll try to answer your questions in order:
I'm not sure whether hand gesture recognition is possible with the R200, so let me investigate a little more and I'll let you know once I have something helpful.
Regarding your issues with Java, check the Java instructions in the SDK documentation and download the latest Java JRE; we are using JRE 8.
You should also copy the directories "C:\Program Files (x86)\Intel\RSSDK\framework\common" and "C:\Program Files (x86)\Intel\RSSDK\framework\Java" to the same writable folder.
Create a Java project using either DF_CameraViewer or DF_FaceTracking.
When setting up the Java project in Eclipse, make sure to add the JAR libpxcclr.java.jar to the build path and run it as a Java Application.
I hope you find this information useful and good luck with your project.
Thanks for the reply
1. I succeeded in opening these samples in Eclipse (thanks a lot), except for the HandViewer. Do you know why?
2. Have you made any progress on whether hand gesture recognition is possible with the R200? Maybe some code for that?
3. Where can I find the library functions I can use in Java (SDK / API)? For example, I want to capture a photo and do some processing on it (like deep learning).
4. I have read that there might be an adapter to work with Matlab. The Matlab environment is more familiar to me. Is there a way to work with the R200 in Matlab?
I tried this adapter but it's not working:
Thanks in advance,
Hello Kfir, let me answer your questions. First, some history. Initially, the RealSense cameras were designed to be integrated into PCs and tablets. The F200 and SR300 models were to be placed on the front of the computer lids, facing the user. So the F200 and SR300 cameras are also called user-facing or front-facing. They are meant for short range usages so they are also called short-range cameras. The R200 camera model was designed to be placed in the rear of tablets, facing away from the user, toward the world, meant for long-range usages. So the R200 is also called rear-facing, world facing, or long range.
1. If you look at the names of the SDK samples, they have a prefix of FF, RF, or DF. FF means Front-Facing, which means the sample will work only with front-facing cameras like the SR300 or the F200. RF stands for Rear-Facing, which means the sample will work only with rear-facing cameras like the R200. DF stands for Dual-Facing, which means it will work with any RealSense camera, both front-facing and rear-facing. The Hands Viewer sample is called FF_HandsViewer, which means it will only work with front-facing cameras like the SR300.
2. The R200 is not designed to do well with gesture recognition. Remember, it was designed as a rear-facing camera on a tablet. A user who is holding a tablet with two hands with the R200 facing away will not use it for gesture recognition. The SR300 is meant to be used facing the user so it is better suited for close range gesture recognition.
3. All of the APIs for all of the SDK languages can be found in the SDK documentation online at https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?doc_devguide_introduction.html or in your PC under C:\Program Files (x86)\Intel\RSSDK\doc. When you find the APIs under the different sections you will see tables with different tabs for each language implementation, including Java.
4. We are trying to get you answers to your Matlab question. Please stay tuned.
I hope this information has been helpful for you.
Intel Customer Support
Thanks Jesus Garcia,
Your answer was very useful.
1. I will wait for the answer about Matlab. It would really help me if it can be done; it would make the whole work much simpler.
2. I understand the limitations of the camera now, but I still need to succeed with my final project, and I only have the R200.
In my project I need to recognize hand gestures in real time (or give the feel of it), process the picture, and send a command to a robot.
The commands are move left, move right, go, and stop. I have the platform to connect with the robot; my main concern is the hand recognition.
3. Do you think it is possible to do hand gesture recognition with the R200 using these steps:
- Take pictures from the R200 every few moments.
- Find the nearest pixel using the depth image in the picture.
- Crop a polygon from the image the size of the hand.
- Use machine learning / deep learning / computer vision to figure out which gesture it is.
- Send the robot the command.
- Repeat the whole process.
Thanks in advance,
For hand gesture recognition, your best bet is to use the SR300 camera. If you purchased the R200 on click.intel.com you may return it, within 90 days of purchase, and exchange it for an SR300. The RealSense SDK already includes gesture recognition algorithms that work with the SR300 and you can leverage them for your project.
Intel Customer Support
90 days have already passed since the purchase of the camera,
so I have to do my project with the R200.
Do you think it is possible to do what I asked in the previous post?
Is there any progress with Matlab?
I have the R200 camera, and I'm trying to take a picture and save it as a JPEG.
I'm using Eclipse.
I tried to use your code for "Accessing photo" plus saving it,
but it looks like Eclipse does not recognize PXCMPhoto (I added the relevant reference).
Well, that's weird. I replicated the issue but have not been able to find out why it is happening.
Just to let you know, some Java functions don't work in SDK 2016 R2, and support for Java has been deprecated as of 2016 R3. Personally, I recommend you use another programming language to avoid these issues.
I wrote some code in C++ that captures a photo and saves it to the computer as a JPEG.
How can I also save the depth image as a JPEG?
This is my code:
// Create a SenseManager instance
PXCSenseManager *sm = PXCSenseManager::CreateInstance();

// Create a photo instance
PXCPhoto *photo = sm->QuerySession()->CreatePhoto();

// Select the color and depth streams
sm->EnableStream(PXCCapture::STREAM_TYPE_COLOR, 320, 240, 30);
sm->EnableStream(PXCCapture::STREAM_TYPE_DEPTH, 320, 240, 30);

// Initialize and stream samples
sm->Init();

// This function blocks until both samples are ready
if (sm->AcquireFrame(true) >= PXC_STATUS_NO_ERROR) {
    // Retrieve the captured sample
    PXCCapture::Sample *sample = sm->QuerySample();

    // Import the captured sample into the photo before saving
    photo->ImportFromPreviewSample(sample);

    // Save the photo to the computer
    photo->SaveXDM(L"C://Users//kfir//Desktop//FP//Hand_Gesture//photos_FP//LiveStram//photo.jpg");

    // Release the frame so the next samples can be fetched
    sm->ReleaseFrame();
}

// Close down
photo->Release();
sm->Release();
printf("Bye bye");
Nice code! Good job.
Regarding your question, I found some information in the SDK documentation, so I recommend you check it and try to adapt it to your code: https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?doc_essential_capture_color_or_depth_samples.html
I hope you find this helpful.
Have a nice day.