The easiest way to develop accessibility applications would likely be to use an SR300 model RealSense camera with the '2016 R2' software development kit (SDK). This SDK has a range of pre-made tools for gesture, hand and face detection.
The new 400 Series D-cameras use a different SDK ('SDK 2.0') that does not have these features built in, and instead relies on integration with other software platforms such as OpenCV to gain those detection / tracking functions. Developing an accessibility application with the D-cameras would therefore require programming experience.
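To illustrate what that integration involves, here is a minimal sketch of the kind of depth processing an SDK 2.0 application typically hands off to OpenCV or NumPy. The frame below is a synthetic NumPy array standing in for what pyrealsense2's `get_data()` would return from a live camera, and the nearest-object segmentation is an illustrative stand-in for a proper hand-tracking pipeline, not a feature of the SDK itself.

```python
import numpy as np

def nearest_object_mask(depth_mm: np.ndarray, band_mm: int = 100) -> np.ndarray:
    """Return a boolean mask of the pixels belonging to the nearest
    object in a depth frame (e.g. a hand held in front of the camera).

    depth_mm: 2-D array of depth values in millimetres; 0 means "no data".
    band_mm:  depth tolerance around the nearest valid pixel.
    """
    valid = depth_mm > 0
    if not valid.any():
        return np.zeros_like(depth_mm, dtype=bool)
    nearest = depth_mm[valid].min()
    return valid & (depth_mm <= nearest + band_mm)

# Synthetic 4x4 "frame": a near blob at ~500 mm against a ~2000 mm background.
frame = np.full((4, 4), 2000, dtype=np.uint16)
frame[1:3, 1:3] = 500
mask = nearest_object_mask(frame)
print(mask.sum())  # → 4 (the four near pixels)
```

With a real camera you would fill `frame` from the SDK each loop iteration and feed the mask into whatever tracking or gesture logic your application needs.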
The official Intel SR300 USB camera has been retired and is no longer sold. However, there are third-party SR300-compatible cameras that work fine with the 2016 SDK. These are:
- Creative BlasterX Senz3D. This is virtually identical to Intel's SR300, as Creative produced the official SR300 camera for Intel.
- Razer Stargazer
Both of these cameras work with Windows.
Another advantage of using the SR300 is that the 2016 SDK has a function called the Touchless Controller that enables the camera to perform mouse functions such as clicks.
Also, the 2016 SDK works excellently with the Unity game engine, and I have authored an extensive range of detailed step-by-step guides on using the SDK's tools to create RealSense camera applications in Unity with the SR300.
As an example of using RealSense to control a game application, here's an old video from my own Unity game project.
Thanks MartyG for your detailed reply, and links to your resources.
I appreciate that you've given me a lead on what the easiest path to development would be, and it may be a good way to prototype. However, I'm concerned about using 2016 tech going forward. I also read that the new D400 series cameras contain superior technology, which would make it more future-proof.
I'm not quite sure whether I've understood your reply properly, or whether I'm not making myself clear. I'm wondering whether someone has developed software that allows the RealSense camera to behave as a controller for Windows, and therefore for any application that runs on the OS, like any other HID.
From what you're saying it seems not, and it also seems that every app or game would have to be developed with code written specifically for the device.
Surely it would make sense to have an API which developers could use to offer RealSense "controller" support for their existing games?
The number of games and apps offered on the Intel app store suggests that not many developers are creating for this medium, compared to, say, the PlayStation or Xbox. Yet neither of those offers the resolution and precision of the RealSense hardware.
Sorry if I'm rambling, I'm new to this development environment and trying to understand it!
I can understand your need for future-proofing, given your previous experience with the ending of Kinect. I recommended the SR300 and the 2016 SDK because it represented the best way to develop an accessibility application rapidly and easily. If development time or project difficulty is not a consideration for you though then the 400 Series cameras would be a viable option.
The new SDK 2.0 is an open-source product in continuous development, through contributions by Intel and by members of its community of users. So whilst your project goals may be achievable, they may be more complex to implement now than they will be once the SDK matures.
You could likely make development of a Windows accessibility application with RealSense faster and easier if you integrate the TouchDesigner program into your application. Users of previous RealSense cameras have done this.
RealSense cameras are not treated by Windows as traditional input devices such as joypads and mice. There has to be some kind of software to convert body / face inputs in front of the camera into inputs that the application can do something useful with, such as move a cursor or do a click-action.
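As a sketch of what that conversion layer might look like, the snippet below maps a tracked hand position (normalised 0..1 coordinates, from whatever tracker is in use) to screen-pixel coordinates, and uses a dwell timer as a stand-in for a click action. The screen resolution, dwell radius, and timing values are assumptions for illustration only; a real application would then pass the result to an OS-level input API.

```python
import time

SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution

def to_cursor(nx: float, ny: float):
    """Map normalised hand coordinates (0..1, from any tracker)
    to pixel coordinates, clamping values that leave the frame."""
    nx = min(max(nx, 0.0), 1.0)
    ny = min(max(ny, 0.0), 1.0)
    return round(nx * (SCREEN_W - 1)), round(ny * (SCREEN_H - 1))

class DwellClicker:
    """Report a 'click' when the cursor stays within `radius` pixels
    of one spot for `dwell_s` seconds -- a common accessibility
    substitute for a physical button press."""
    def __init__(self, radius: int = 30, dwell_s: float = 1.0):
        self.radius, self.dwell_s = radius, dwell_s
        self.anchor = None   # position the user is dwelling on
        self.since = 0.0     # when the dwell started

    def update(self, pos, now=None) -> bool:
        now = time.monotonic() if now is None else now
        moved = (self.anchor is None
                 or abs(pos[0] - self.anchor[0]) > self.radius
                 or abs(pos[1] - self.anchor[1]) > self.radius)
        if moved:
            self.anchor, self.since = pos, now
            return False
        return (now - self.since) >= self.dwell_s

print(to_cursor(0.5, 0.5))  # → (960, 540)
```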
I wonder whether a VR system such as the Oculus Rift, with its Oculus Touch controller that enables full-finger gesture control, may be an easier way for you to control a wide range of Windows applications.
A VR headset could also translate head motions into conventional control inputs.
Unlike the SR300, the R200 unfortunately didn't have an alternative version produced by another company.
A system with a finger-touch physical controller such as Oculus Touch, where you do not have to program a new application for each Windows program you use with it (just update the control definitions to convert VR inputs to conventional HID inputs for a particular program) still seems to be the best solution for your needs.
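A control-definition table of that kind could be as simple as a lookup from abstract gesture or controller events to the conventional inputs each program already understands. The program names, gesture names, and bindings below are purely hypothetical, sketched to show the idea.

```python
# Hypothetical per-program control definitions: each maps an abstract
# gesture/controller event to the conventional input a given Windows
# program already understands. All names are illustrative only.
CONTROL_DEFS = {
    "photo_viewer": {"pinch": "left_click", "swipe_left": "key:Right"},
    "web_browser":  {"pinch": "left_click", "fist": "key:F5"},
}

def translate(program, event):
    """Look up the conventional input for a gesture event, or None
    if the active program has no binding for it."""
    return CONTROL_DEFS.get(program, {}).get(event)

print(translate("web_browser", "fist"))   # → key:F5
print(translate("photo_viewer", "fist"))  # → None
```

Supporting a new program then only means adding a new table entry, rather than writing a new application, which is the advantage being described above.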
If you want to purchase R200's, you can do so by buying the Robotic Development Kit in Intel's online click store. It is a bundle deal of an R200 and a miniature single-board computer to attach it to. The bundle is heavily discounted to clear the stock now that the R200 is retired. The R200 in the bundle is exactly the same as the one that was sold individually, even with the same packaging.
The 2016 RealSense SDK was the final Windows SDK that supported it, though. Also, unlike the SR300 it does not have finger joint tracking support, which may make it unsuitable for your project unless you use it with a hand tracking solution that does not rely on joint tracking.
If you need range, a 400 Series camera with a 10 m tracking range may be the best choice of RealSense camera. The R200 model, meanwhile, can depth-sense up to 4 m.
It seems that in a way I'm looking for a combination of things that might not be possible with one of the older cameras. I need the longer range for working in a classroom environment with multiple users (for less precise whole body tracking), and ideally I need the precision of joint tracking for gesture control. But I can probably get away with using hand only tracking as you suggest.
I can't order direct from the Intel store, and this is what I can find in South Africa: