It is theoretically possible to create new RealSense gestures, but there is no dedicated tool for doing so - you usually have to program them from scratch yourself, which makes it a difficult undertaking.
There was a developer who used a program called TouchDesigner to create custom RealSense gestures by defining finger positions. However, I'm not sure how easy it would be to integrate those into your RealSense application, since TouchDesigner presumably saves gestures in a different format from the one used by the RealSense SDK.
Others have asked in the past about programming custom gestures directly in the RealSense SDK, but as far as I know no tutorials exist. It used to be relatively easy with RealSense's predecessor, the 2013 'Perceptual Computing' camera (also known as the Creative Senz3D - not to be confused with the more modern, SR300-compatible Creative BlasterX Senz3D). It became more difficult in RealSense because its architecture is more complex than that of the Perceptual Computing SDK.
Regarding TouchDesigner: there is an article that introduces integrating TouchDesigner with RealSense. There are also follow-up guides, including one (in Part 3) that teaches how to capture body pose.
My own approach to gesture and body pose recognition was to build a custom system in the Unity game engine that I call CamAnims. It detects body positions and finger configurations from the rotation angles of virtual objects driven by real-time RealSense camera input.
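To make the angle-threshold idea concrete, here is a minimal sketch of that style of pose matching. This is my own hypothetical illustration, not the actual CamAnims code (which lives in Unity); the joint names, pose templates, and threshold values are all invented for the example. Each pose is defined as a set of allowed rotation ranges per joint, and a measured set of angles matches a pose if every joint falls inside its range.

```python
# Hypothetical angle-threshold pose matching (illustrative only; the
# real system described above is built in Unity, and these joint
# names, templates, and degree ranges are made up for this sketch).

# Each pose template maps a joint name to an allowed rotation range
# in degrees: an open hand has barely-curled fingers, a fist has
# strongly-curled ones.
POSE_TEMPLATES = {
    "open_hand": {"thumb": (0.0, 20.0), "index": (0.0, 20.0),
                  "middle": (0.0, 20.0), "ring": (0.0, 20.0),
                  "pinky": (0.0, 20.0)},
    "fist":      {"thumb": (60.0, 120.0), "index": (60.0, 120.0),
                  "middle": (60.0, 120.0), "ring": (60.0, 120.0),
                  "pinky": (60.0, 120.0)},
}

def classify_pose(joint_angles, templates=POSE_TEMPLATES):
    """Return the name of the first pose template whose every joint
    range contains the measured angle, or None if nothing matches.
    Missing joints fail the match (NaN compares False)."""
    for name, template in templates.items():
        if all(lo <= joint_angles.get(joint, float("nan")) <= hi
               for joint, (lo, hi) in template.items()):
            return name
    return None
```

In a real pipeline the `joint_angles` dictionary would be refreshed every frame from the camera-driven virtual objects, e.g. `classify_pose({"thumb": 5.0, "index": 10.0, "middle": 3.0, "ring": 8.0, "pinky": 12.0})` would report `"open_hand"`.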
Thanks for your helpful tips.
In the meantime, I just found out about Project Prague from Microsoft: https://labs.cognitive.microsoft.com/en-us/project-prague
This seems like a possible solution for creating custom gestures and/or poses. Does anyone have experience combining this with the RealSense SDK?
I wanted to try this out, but unfortunately my computer no longer recognizes the SR300 camera, so I need to fix that first.