I'm not sure if there is an easy, definitive answer to this question. Traditionally with older (pre-400 Series) RealSense cameras, the two main ways to store RealSense data in a file that will work outside of the RealSense SDK is to place the data in an image (like you did) or to convert the RealSense stream data into a MATLAB .mat format file.
Could you clarify what you are trying to achieve in your project please? Are you trying to detect the skeleton or to track its joint points? Thanks!
We want to track the joint points of the upper body, but using the third dimension, which is depth. We know that we can track the movement in 2D by just using the images. But our main goal is to use the depth data to move a skeletal model in Unity; the skeletal model is supposed to follow the movement captured by the camera. Does that make sense?
If you are using Unity then there is a solution much simpler than recording the depth data into a file. The '2016 R2' RealSense SDK comes with a program called the Unity Toolkit. This can be used to import RealSense support into a Unity project, along with a number of useful pre-made tracking scripts that can be dropped into Unity objects to control them with camera input.
One of these scripts is called TrackingAction. Once placed in an object, you can configure it in Unity's Inspector panel with menus to move that object in response to the face, without needing any programming knowledge.
The Unity Toolkit can be found in the SDK's RSSDK > framework > Unity folder. To run it, you should first open your Unity project, and then run the Unity Toolkit file. This causes a list of RealSense toolkit files to pop up in your Unity window, and gives you the option to click 'Import' to import them automatically into your project, along with the camera driver files.
Whilst the R200 can only provide face tracking inputs, as it doesn't have hand joint tracking, the movement of the head can be used to determine the movement of other parts of the body. For example, if the face moves towards the camera in the Z-depth direction then that can be interpreted as the waist bending forwards, whilst moving the head back from the camera can be interpreted as straightening the waist up.
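As a rough illustration of that head-depth-to-waist mapping, here is a minimal Unity C# sketch. It is only a hypothetical example, not part of the RealSense Unity Toolkit: the `OnHeadTracked` callback, the `waistJoint` bone reference, and the tuning values are all assumptions, and in practice TrackingAction lets you set up this kind of mapping in the Inspector without writing code.

```csharp
using UnityEngine;

// Hypothetical sketch: assumes some other component calls OnHeadTracked()
// each frame with the tracked head position in camera space (metres).
public class WaistBendFromHeadDepth : MonoBehaviour
{
    public Transform waistJoint;        // the skeletal model's waist bone (assumed)
    public float restDepth = 0.6f;      // head distance from camera when upright (assumed)
    public float degreesPerMetre = 90f; // how strongly a depth change bends the waist (assumed)

    public void OnHeadTracked(Vector3 headPosition)
    {
        // Moving the head towards the camera (smaller Z) bends the waist
        // forwards; moving it back towards restDepth straightens it again.
        float bend = Mathf.Max(0f, restDepth - headPosition.z) * degreesPerMetre;
        waistJoint.localRotation = Quaternion.Euler(bend, 0f, 0f);
    }
}
```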
Below is an old YouTube video from my own Unity project that demonstrates how the joints of a model can be realistically moved with simple camera inputs.
I have a large range of published step-by-step Unity RealSense guides for the 2016 R2 SDK here: