This message was posted on behalf of Intel Corporation
Thanks for reaching out!
I did a quick search online and I was able to find the source code of those files in the following links:
Nevertheless, since you are using the R3 version of the SDK, could you try version R2 to see if this issue can be fixed?
The version R2 can be downloaded from http://registrationcenter-download.intel.com/akdlm/irc_nas/vcp/9078/intel_rs_sdk_offline_package_10.0.26.0396.exe
I hope this helps.
I am also failing to compile the FaceTracking sample for the RealSense SDK 2016 R3 (for use with the SR300 camera), for the same reasons as above, i.e. 3 missing header files. I have tried using the GitHub headers you pointed to, but the sample still fails to compile with the rest of the example code. Is there a bugfix release due any time soon, and if not, what earlier version of the SDK should I use, where can I get it, and will this work with the latest SR300 camera driver?
I have also tried installing the previous version of the SDK and runtime (intel_rs_sdk_offline_package_10.0.26.0396.exe and intel_rs_sdk_runtime_10.0.26.0396.exe). These install, but claim not to be able to find the camera hardware, and the samples do not work with the more up-to-date DCM installed (intel_rs_dcm_sr300_184.108.40.20618.exe). The link to the previous DCM for the SR300 on https://software.intel.com/en-us/articles/previous-intel-realsense-install is dead.
Any suggestions on a work-around?
These headers were part of the old-style setup that went away when the API was refactored.
They were supposed to still be supported, so I guess this was yet another oversight. (The examples all seem to be 'old style' so far.)
I am currently combing through everything in order to switch to the new-style API, as unfortunately there doesn't seem to be a clear conversion guide.
To see the new structure, look in /(RSSDK_Install)/include/RealSense/.
If you look under /Face/ you will notice that all the previous functionality seems to be there, just in a different form.
I'll try to post my experience if I have any success.
There has not been emotion recognition in the SDK since 2015. It was originally a component provided by a third-party company called Emotient, but it was withdrawn after RealSense's first year, possibly because the licensing agreement wasn't renewed after Emotient was purchased by Apple.
Using my records, I managed to track down the last SDK in which emotion was fully supported. It was R3 (not 2016 R3). So the emotion sample is presumably included in that install package, but the download links on the SDK page for past versions are now broken and probably never coming back. It's a pity, as the emotion sample was good fun.
You can still interface with the landmark points on the face - it just won't give you a description of what an expression means in terms of emotion. I helped somebody with this a while ago - I'll link you to the documentation I directed them to.
Basically, you can query individual landmark points instead of all of them by using a function called QueryPoint.
The values of particular landmark points are on a face chart.
It is also possible to recognize emotional expressions on the face from the camera data in the Unity game engine, but it's not built into the SDK. It involves custom mechanisms that I built myself. I can try to explain them if you are interested.
It wasn't with the landmark information. In my original system, I analysed the current angle of objects representing the parts of the face and used 'If' logic statements to draw conclusions based on the angle of the object generated by the SDK's face tracking script.
For example, with the eyebrows, eyelids and lips, I would assume an angle around the 0-degree mark to be "neutral" - neither flexing up into a happy expression nor down into a frown / grimace.
If the angle increased above 0 degrees towards something like +10 degrees, that was assumed to be a happy expression, as the camera script rotated the object representing the face part upwards.
If the angle dropped below 0 degrees (e.g. -10 or -15), the expression was judged to be a negative one.
Using the If logic conditions, you could take each individual analysis and use AND logic to compare different face parts.
For example, IF only the lips are in the negative range then the emotion is "Sad".
IF the lips are in the negative range AND the eyebrows are in the negative range then the emotion is "Angry".
My current system is more complex than this, but the above method works fine for simple emotional analysis of facial inputs to the camera.
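To make the method above concrete, here is a minimal sketch of the angle-threshold approach in plain Python. All function and variable names are my own illustration (nothing here is RealSense SDK API); the 0-degree neutral point, the roughly +/-10 degree ranges, and the Sad/Angry AND rules come straight from the description above, while the small neutral tolerance band is my own assumption.

```python
# Sketch of the angle-threshold emotion classification described above.
# Assumption: a small tolerance band around 0 degrees counts as "neutral";
# the thresholds and AND rules follow the post, not any SDK API.

def part_mood(angle_deg, tolerance=2.0):
    """Classify one face part's angle as positive, neutral, or negative."""
    if angle_deg > tolerance:
        return "positive"   # part rotated upwards -> happy-leaning
    if angle_deg < -tolerance:
        return "negative"   # part rotated downwards -> frown-leaning
    return "neutral"

def classify_emotion(lip_angle, brow_angle):
    """Combine per-part results with AND logic, as in the post."""
    lips = part_mood(lip_angle)
    brows = part_mood(brow_angle)
    if lips == "negative" and brows == "negative":
        return "Angry"      # lips AND eyebrows both in the negative range
    if lips == "negative":
        return "Sad"        # only the lips are in the negative range
    if lips == "positive":
        return "Happy"      # lips flexed upwards
    return "Neutral"

print(classify_emotion(-8.0, 1.0))    # lips down, brows neutral -> Sad
print(classify_emotion(-8.0, -9.0))   # lips and brows down -> Angry
```

In a real Unity setup the angles would come from the rotation of the objects driven by the SDK's face tracking script each frame; this sketch just isolates the decision logic.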