Somebody else asked this question back in December 2015, and Intel support staffer David Lu replied that Intel had not yet had the chance to create a Unity version of the Person Tracking C++ sample. That is still the case: a Unity sample was never made and is unlikely to be, because support for the R200 in the official RealSense SDK ended after '2016 R2'.
To be honest, the Person Tracking feature is very hard to get working even when programming it outside of Unity. Trying to get it working in Unity would add a whole new layer of complication on top. Also, it was classified as a Preview feature and so only had limited functionality before development of it ceased when official SDK support for the R200 ended after R2.
There are ways to simulate full body motion in Unity that are much easier, if you think cleverly about how to use hand and face tracking. Although the R200 does not have hand joint tracking, you can achieve similar control with Blob Tracking by using it to track the palm of the hand.
Check out the full-body avatar with complete virtual limb control in my own Unity project.
Edit: I realize I made the assumption that you were using an R200 camera, because that is what most people who are interested in Person Tracking use. The SR300 has Person Tracking too, though. Which camera are you using, please?
Sorry for the late answer, I'm using the SR300 camera. Thank you for your response!
I also had the idea of using Blob Tracking, but I thought it might be easier to get the Person Tracking module working in Unity than to completely re-implement Person Tracking with the Blob Tracking module. As far as I know, there are also no examples of Blob Tracking in Unity?
Blob Tracking was removed in the latest '2016 R3' SDK, according to its release notes. So you would have to use the '2016 R2' SDK to use that feature. The notes say that some features were removed in R3 as Intel refocuses the SDK's feature list based on developer feedback.
Blob Tracking is supported in Unity as part of the 'TrackingAction' script. On the TrackingAction menu beside 'Tracking Source', you can drop that down and choose Blob Tracking instead of Hand Tracking.
I had success getting the Blob module working in the 2016 R2 SDK in Unity.
But I don't know how to use it. Can anybody give me a hint or an example?
Are you using the TrackingAction implementation of blob tracking, or did you write your own blob tracking script?
To use blob tracking, you should simply have to move a large flat-ish area of the body such as the forehead or the hand palm close to the camera in order for tracking to be activated.
This message was posted on behalf of Intel Corporation
I was wondering if you could check the questions asked by MartyG.
If you have any other question or update, don’t hesitate to contact us.
I did use the TrackingAction implementation, the one from the screenshot you provided. I don't know what I am doing wrong.
How can I make the tracking visible, or should this happen automatically like with the Hand or Face tracking actions?
When using TrackingAction, the most obvious evidence that tracking is occurring is that the object that contains the TrackingAction is moving when you move your hand in front of the camera. If it does not seem to be moving, please bear in mind that with Blob Tracking, you need to put your hand much closer to the camera than with the more advanced Hand Tracking method.
Another way to see if your object is reacting to your hand movements is to run your project in the editor's test mode with the object containing your TrackingAction highlighted, so that the details of the TrackingAction script are shown in the Inspector panel of Unity. If the Position or Rotation values at the top of the Inspector are changing during tracking, then the object is reacting to your input.
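That "are the values changing?" check is easy to reason about outside Unity too. Here is a minimal language-agnostic sketch in Python (the function name and jitter threshold are my own illustrative choices, not part of the SDK or TrackingAction):

```python
def is_tracking(prev_pos, curr_pos, epsilon=1e-4):
    """Return True if the object moved more than a tiny jitter
    threshold between two frames, i.e. tracking input is arriving."""
    return any(abs(c - p) > epsilon for p, c in zip(prev_pos, curr_pos))

# A static object means no tracking input; a shifting one means input arrived.
print(is_tracking((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))   # -> False
print(is_tracking((0.0, 1.0, 0.0), (0.05, 1.0, 0.0)))  # -> True
```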
Another thing to remember is that unless you set Position constraints in the 'Constraints' section of the TrackingAction then it may seem as though your object disappears when the hand is detected, because the object moves so far that it goes offscreen. Position constraints ensure that the object can only move a certain distance on the screen before it is forced to stop.
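The clamping logic behind those Position constraints is simple. A minimal sketch in Python, with made-up axis limits (this is not the actual TrackingAction script, just the idea):

```python
def clamp(value, lo, hi):
    """Keep a single coordinate within [lo, hi]."""
    return max(lo, min(hi, value))

def constrain_position(pos, limits):
    """pos: (x, y, z) tuple; limits: dict mapping axis name to (min, max).
    Axes without a limit pass through unchanged."""
    return tuple(
        clamp(p, *limits[axis]) if axis in limits else p
        for axis, p in zip("xyz", pos)
    )

# A hand moved far to the right can only push the object to x = 2.0,
# so it stops at the screen edge instead of vanishing offscreen.
print(constrain_position((5.0, 0.3, -1.0), {"x": (-2.0, 2.0), "y": (0.0, 1.0)}))
# -> (2.0, 0.3, -1.0)
```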
It would be very helpful if you could provide an image of your TrackingAction's configuration, including having the constraints section of it expanded open so we can see the settings you have in there. Thanks!
I got it working now. I had to alter the SenseToolkitManager instance because the BlobExtractor class was deprecated in my version of the SDK. I only had to change BlobExtractor to BlobModule and it works now.
Now what would I need to do in order to recognize the right body parts? I guess I need to extract the segmentation image and compare it to individual images of the right body parts. Is this possible?
Your idea about segmentation sounds feasible, but you should not try to match the images too strictly or it will only recognize the body that the comparison images were taken from.
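One common way to compare masks loosely rather than exactly is Intersection-over-Union with a tolerant threshold. Here is a hypothetical Python sketch using flat 0/1 lists as masks (the real SDK gives you image data you would first convert to a binary mask; names and the 0.6 threshold are illustrative):

```python
def iou(mask_a, mask_b):
    """Intersection-over-Union of two same-sized binary masks
    (flat lists of 0/1). 1.0 = identical, 0.0 = no overlap."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

def matches(mask, template, threshold=0.6):
    """Loose match: a low threshold tolerates different body shapes."""
    return iou(mask, template) >= threshold

template = [0, 1, 1, 1, 0, 0, 1, 1]   # stored body-part silhouette
observed = [0, 1, 1, 0, 0, 1, 1, 1]   # similar but not identical blob
print(iou(observed, template))        # -> 0.666... so matches() is True
```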
I'm a little bit stuck here, actually. To be honest, I don't know how I could compare the segmentation images. Could you please give me a hint? Or is it a better approach to compare the contour data of a blob?
This subject is outside of my knowledge, sadly. Hopefully someone else reading this today can offer useful advice on an approach for you to take. Good luck!
Yeah, I hope so. Anyway, thank you! You're always helping!
In Unity, I simulated recognition of the movement of most body parts (shoulders, lower arms, waist, etc) by using a method I developed called Reverse Thinking. Instead of trying to track the body points that RealSense can't follow, I track a hand or face point and then use scripting to calculate automatically how body sections affected by movement of those hand and face points should be affected.
For example, if my hand palm moves up in front of the camera then my system knows that the lower arm should be lifting a little and the shoulder lifting a lot. If I lower my head then the system recognizes that the waist joint should be bending, causing the upper body to lean forwards. By applying this method of thinking, a few tracked points can be used to work out the positions of most of the body's parts.
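The palm-lift example above could be sketched like this in Python (the angle ranges and proportions are illustrative guesses of mine, not the actual project script):

```python
def infer_arm_pose(palm_height, rest_height=0.0, max_lift=0.5):
    """Infer shoulder and elbow angles from a single tracked palm point.
    palm_height: vertical palm coordinate relative to the camera, in metres.
    Returns (shoulder_deg, elbow_deg): the shoulder rotates a lot,
    the lower arm only a little, as described in the post."""
    # Normalise the lift into 0..1 over the palm's assumed travel range.
    lift = max(0.0, min(1.0, (palm_height - rest_height) / max_lift))
    shoulder = lift * 80.0  # shoulder rotates up to 80 degrees
    elbow = lift * 20.0     # lower arm lifts only a little
    return shoulder, elbow

# Palm halfway up its range -> shoulder lifted a lot, elbow a little.
print(infer_arm_pose(0.25))  # -> (40.0, 10.0)
```

The same pattern generalises: one more tracked point (the face) drives the waist bend, and each derived joint is just a scaled function of a tracked input.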
Here's an old video from my project that demonstrates these tracking principles with a full-body avatar.