In the Unity game engine's implementation of RealSense, at least, the camera can track foot soles and toe joints as well as it tracks the palm and finger joints of the hands. This is because, to the camera's internal image map, a sole looks like a palm and the toes look like fingers.
Here's a quite old video of me controlling an avatar's arms and fingers with movement of the feet and wiggling of the toes.
This method also works for controlling an object with leg movements, as the camera treats the knee as a large palm. The approach never got beyond experiments, though, because it is inherently impractical: the user would have to remove their trousers for knee tracking, or their shoes and socks for foot tracking. It may still have niche applications as a means of object control for physically disabled users, though.
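The technique above amounts to retargeting: the tracker still reports hand-style joints when shown a bare foot, and those joints are simply remapped onto the avatar's foot bones. A minimal sketch of that idea follows; the joint and bone names are purely illustrative assumptions, not the SDK's actual identifiers.

```python
# Minimal sketch of retargeting "hand" joints (as reported by the tracker
# when shown a bare foot) onto an avatar's foot bones. Joint and bone names
# are illustrative, not the SDK's actual identifiers.

# Hypothetical mapping: tracker joint name -> avatar bone name
FOOT_RETARGET_MAP = {
    "palm_center": "foot_sole",
    "thumb_tip":   "big_toe_tip",
    "index_tip":   "second_toe_tip",
    "middle_tip":  "third_toe_tip",
    "ring_tip":    "fourth_toe_tip",
    "pinky_tip":   "little_toe_tip",
}

def retarget(tracked_joints):
    """Map tracked joint positions onto avatar bones.

    tracked_joints: dict of joint name -> (x, y, z) position.
    Returns a dict of avatar bone name -> position, skipping any joints
    the tracker did not report this frame.
    """
    return {
        FOOT_RETARGET_MAP[name]: pos
        for name, pos in tracked_joints.items()
        if name in FOOT_RETARGET_MAP
    }

frame_joints = {"palm_center": (0.0, 0.1, 0.5), "thumb_tip": (0.05, 0.12, 0.48)}
print(retarget(frame_joints))
```

In practice the remapped positions would drive the avatar's bone transforms each frame; the point is only that no foot-specific tracking is needed, because the hand tracker's output is reused as-is.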
Thanks for the information MartyG, very helpful.
So tracking through clothes/shoes is not possible? Even just for placement, and perhaps not toe movement specifically?
There is a less refined form of tracking in the SDK called Blob Tracking. Instead of looking for specific bone joints, it simply reacts when it sees a large, relatively flat area and treats it as a palm. A disadvantage of this method is that the body area has to be much closer to the camera lens before it triggers. If a user lifted the sole of their shoe up to the camera it might trigger, though. The same goes for putting a knee near the camera.
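The trigger behaviour described above can be sketched as follows. This is an illustration of the principle only, not the SDK's Blob Tracking API: scan a depth frame for a connected region that is both close to the lens and large enough, and treat its centroid as a "palm". The distance and area thresholds are made-up numbers, not SDK defaults.

```python
# Illustrative sketch of the blob-trigger idea: find a connected region of
# depth pixels closer than a trigger distance, and only react if that
# region covers enough pixels (a sole or knee, not a fingertip).
# Thresholds below are assumptions for the example, not SDK defaults.

TRIGGER_DISTANCE_MM = 300   # region must be closer than this to trigger
MIN_AREA_PX = 4             # region must cover at least this many pixels

def find_blob(depth):
    """Return the centroid (row, col) of the first large near blob, or None.

    depth: 2D list of distances in millimetres (0 = no reading).
    """
    rows, cols = len(depth), len(depth[0])
    seen = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or not (0 < depth[r][c] < TRIGGER_DISTANCE_MM):
                continue
            # Flood-fill the connected near region starting here.
            stack, region = [(r, c)], []
            seen[r][c] = True
            while stack:
                y, x = stack.pop()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and 0 < depth[ny][nx] < TRIGGER_DISTANCE_MM):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if len(region) >= MIN_AREA_PX:
                cy = sum(y for y, _ in region) / len(region)
                cx = sum(x for _, x in region) / len(region)
                return (cy, cx)
    return None

# A 4x4 frame with a 2x2 near patch (250 mm) against a far background.
frame = [
    [900, 900, 900, 900],
    [900, 250, 250, 900],
    [900, 250, 250, 900],
    [900, 900, 900, 900],
]
print(find_blob(frame))  # -> (1.5, 1.5), the centroid of the near patch
```

This also shows why the body area must be brought close to the lens: nothing triggers until a connected region falls under the distance threshold, no matter how large it is.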
You could experiment with the toe-cap of a shoe but I'm not sure the surface area would be large enough to cause a reaction.
This message was posted on behalf of Intel Corporation
I was wondering if you had the chance to check the suggestion provided by MartyG.
If you have any other questions, don't hesitate to contact us.
Andres, I have not, as I currently do not have a RealSense camera. The questions were asked because we are always looking into new technology and which camera would be best for our product. Thank you for the follow-up.