This message was posted on behalf of Intel Corporation
Thanks for reaching out!
I've never heard of anyone measuring the latency of hand recognition with the SR300. However, could you help us understand your goal? Do you want to reduce this latency, or are you simply looking to confirm the number?
Let us know.
I managed to track down a research study of the older R200 and F200 cameras. Among the numerous factors discussed in the paper are latency and 'time to recognition' values. In their study, the authors aimed for a target of under 50 ms from making a hand gesture to recognition of the gesture on the F200 camera (the direct predecessor of the SR300).
Ultimately though, the latency is unlikely to be measurable in absolute terms, as it may be influenced positively or negatively by factors such as the capabilities of the host device, the stability and behavior of that machine's USB ports, and so on.
Thanks for your help.
Do you think the CPU plays an important role in hand recognition latency? The CPU I use is a 2014 Xeon E3. I thought the 3D reconstruction was done by the ASIC inside the SR300 module, while the computer was in charge of processing the depth data, e.g. hand recognition.
I will also look into my code again to see if something else influences the results.
Since there is a computer involved, there is bound to be the potential for a processing bottleneck somewhere. For example, even if the camera has independent hardware that can handle detection speedily, there may be a delay in doing something with that data afterwards (e.g. triggering an event in a program in response to the detection, since the program would be handled by the PC's processor).
So whether your measurement includes some lag or not depends on what point in the process you are measuring. If you were measuring the speed that the camera reacts to a detection event then there may certainly be some lag caused by the PC hardware.
If you were simply measuring the initial detection time, though, and that part was handled solely by the camera hardware without involvement of the PC hardware, then you could be confident that detection speed should be relatively consistent across all units of that camera model (though no two cameras of the same model are precisely identical, due to manufacturing variances at the factory reflected in each camera's "intrinsic matrix").
I should add that some developers have speculated that temperature is a factor that can influence camera hardware performance. Page 30 of the SR300 data sheet document would seem to back this up.
Your 2014-era Xeon E3 is probably a 4th generation Haswell architecture. The SR300's recommended minimum is a 6th generation Skylake architecture processor. However, Xeons exist in their own 'strange universe' that defies normal specification rules, and they do work with the SR300.
I really appreciate your elaborate explanation. There are truly many factors that may influence the results, including the cable length. Since I can only use the SDK for latency measurement, I can't find a better way to eliminate all this noise. But based on your answer, I ran a few more tests.
Firstly, I decreased the color camera fps to 30 and recorded the processing loop time, which mainly consists of "senseManager->AcquireFrame()" and "handData->Update()". Each loop took about 33 ms, which matches the 30 fps setting, so I conclude the computer has enough computing power to finish its work in time. I then recorded the loop time when the algorithm detects a hand. Initially I thought the program would take extra time processing images that contained only part of a hand, so lag would accumulate before a "whole" hand was detected. However, my assumption didn't agree with the measured data.
1. The first test is the normal "ALERT_HAND_DETECTED" program at 30 fps. "1" means a hand was detected and "0" means no hand was detected.
2. The second test is designed to see whether the image output process influences hand recognition latency.
In the first test, we can basically say the computer can decide whether a hand exists within 33 ms. (6 ms and 10 ms are the loop times for the first two frames.)
In the second test, the image output process does add processing time to the loop, and the next two loops' processing times drop a lot. This is probably because the frame is already stored in the image buffer, so the program can process it immediately. However, I don't think this process influences hand recognition latency, because the previous frame completes within 33 ms.
We are probably not able to measure the camera's true latency through the SDK since, as you mentioned, it varies from computer to computer. However, I do hope Intel can provide some reference figures.
Yes, cable length is a factor in signal quality. The cable supplied with the camera is rated specifically for efficient use with the camera, and using a USB extension cable can degrade performance. Most users find that the camera works with a 1 m extension cable, but not a 2 m one unless that cable is a premium, high-grade one.
I did some further research but wasn't able to find additional information about RealSense's detection speed that might be relevant to your project, aside from an SDK function called QueryEngagementTime that reports when the cursor is ready to be controlled by the hand after detection.