Information about the position of areas of the body can be captured using the R200's 'Person Tracking' system.
My own approach is to animate a full-body avatar from camera inputs and make logic decisions based on what the joints of that avatar are doing. This is done with a custom system I built called CamAnims. My blog post below explains the basic principles of CamAnims, and a small sketch of the idea follows.
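To illustrate the principle, here is a minimal Python sketch: tracked joint positions are copied onto an avatar rig each frame, and a logic decision fires when a joint relationship (here, a hand raised above the head) is detected. All class and function names are hypothetical placeholders, not the actual CamAnims or RealSense SDK API.

```python
from dataclasses import dataclass

@dataclass
class Joint:
    """One camera-tracked joint in camera space (metres, y-up assumed)."""
    name: str
    x: float
    y: float
    z: float

class Avatar:
    """Minimal stand-in for a rigged full-body avatar."""
    def __init__(self):
        self.joint_positions = {}

    def apply(self, joints):
        # Copy camera-space joint positions onto the avatar rig each frame.
        for j in joints:
            self.joint_positions[j.name] = (j.x, j.y, j.z)

def hand_raised(avatar, threshold=0.25):
    """Example logic decision: is the right hand above the head by some margin?"""
    hand = avatar.joint_positions.get("right_hand")
    head = avatar.joint_positions.get("head")
    if hand is None or head is None:
        return False
    return hand[1] > head[1] + threshold

def on_frame(joints, avatar):
    # Called once per camera frame with whatever joints the tracker provides.
    avatar.apply(joints)
    if hand_raised(avatar):
        print("Trigger: right hand raised")  # e.g. start an animation or game event
```

In practice the joint list would come from the Person Tracking middleware and the print statement would be replaced by whatever animation or gameplay logic the gesture should trigger.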
Body language includes the position of the fingers; can the R200's 'Person Tracking' capture that?
Unfortunately, the R200 does not support finger joint tracking the way the SR300 camera model does. It can only follow the palms through a method such as blob tracking.
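As an illustration of that palm-level fallback, here is a hedged Python/OpenCV sketch of blob tracking on a depth frame. It is not the RealSense SDK's own Blob Tracking module; the depth band and area thresholds are assumptions you would tune for your scene.

```python
import numpy as np
import cv2

def find_palm_blobs(depth_mm: np.ndarray):
    """Return keypoints for hand-sized blobs in a 16-bit depth frame (millimetres)."""
    # Keep only pixels within an assumed arm's-reach band in front of the camera.
    near, far = 400, 900  # mm; illustrative values, not R200 specifications
    mask = cv2.inRange(depth_mm, near, far)  # 8-bit mask, 255 inside the band

    params = cv2.SimpleBlobDetector_Params()
    params.blobColor = 255            # look for bright (in-range) regions
    params.filterByArea = True
    params.minArea = 500              # assumed minimum palm size in pixels
    params.maxArea = 10000
    params.filterByCircularity = False
    params.filterByConvexity = False
    params.filterByInertia = False

    detector = cv2.SimpleBlobDetector_create(params)
    return detector.detect(mask)

# Usage: feed each depth frame to find_palm_blobs() and treat the keypoint
# centres as approximate palm positions from frame to frame.
```

This only gives a rough palm position per frame, which is why finger-level gestures are better served by the SR300's hand tracking.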