I think the issue may have more to do with the camera's perception than with its sensing range. In RealSense, hands and faces are tracked by recognizing joint locations on the hands and landmarks on the face. The larger a feature is, the easier it is for the camera to detect it and keep tracking it, so the face can usually be tracked from further away than the hands, which have a much smaller surface area.
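To make the joint-tracking point concrete, here is a minimal sketch of how joint-based hand tracking looks in the legacy RealSense SDK (the PXC* API that shipped with SR300 hand tracking). I'm writing this from memory, so treat the setup details as assumptions rather than a definitive implementation; the key point is that each joint is queried individually, and tracking of a joint simply fails when the camera can't resolve it:

```cpp
// Sketch only - assumes the legacy Intel RealSense SDK (2016 R2, PXC* API),
// which is what SR300 hand tracking used. Header names may vary by release.
#include <pxcsensemanager.h>
#include <pxchanddata.h>
#include <cstdio>

int main()
{
    PXCSenseManager* sm = PXCSenseManager::CreateInstance();
    sm->EnableHand();                       // enable the hand tracking module
    sm->Init();

    PXCHandModule* handModule = sm->QueryHand();
    PXCHandData* handData = handModule->CreateOutput();

    // Poll some frames and report the wrist joint of each tracked hand.
    for (int frame = 0; frame < 100; ++frame)
    {
        if (sm->AcquireFrame(true) < PXC_STATUS_NO_ERROR) break;
        handData->Update();

        for (pxcI32 i = 0; i < handData->QueryNumberOfHands(); ++i)
        {
            PXCHandData::IHand* hand = nullptr;
            if (handData->QueryHandData(PXCHandData::ACCESS_ORDER_BY_TIME, i, hand) >= PXC_STATUS_NO_ERROR)
            {
                // Each of the hand's joints is queried individually; the query
                // fails when a joint is too small in the image (too far away)
                // or too close for the camera to resolve it.
                PXCHandData::JointData wrist;
                if (hand->QueryTrackedJoint(PXCHandData::JOINT_WRIST, wrist) >= PXC_STATUS_NO_ERROR)
                    printf("wrist at %.3f m from camera\n", wrist.positionWorld.z);
            }
        }
        sm->ReleaseFrame();
    }
    sm->Release();
    return 0;
}
```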
So even though the SR300 camera can scan as far as 150 cm in applications such as 3D model scanning, this may not matter for hand tracking if the joint the camera is following moves far enough from the lens that it can no longer be detected. The same perception problem occurs if you move your hand too close to the camera: the tracking stalls because, at such close proximity, the camera can no longer distinguish the individual joints.
Another developer and I once discussed the theoretical possibility of extending the camera's viewing range by fitting a smartphone zoom lens attachment over the RealSense lens so that - like looking through the zoom lens on a digital camera - a distant image would be perceived by the camera as though it were up close.
We never actually tried it out in practice, so I can't say whether it would work. Of all the options we considered, though, a smartphone zoom attachment seemed the most likely to succeed. You can extend the depth scanning range with scripting in the RealSense SDK, but for hand tracking the perceived image of the user's hand features would, I'd think, be just as small to the camera - it could just see further past the hand!
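For reference, the scripted range extension I had in mind looks something like the snippet below - again a hedged sketch from memory, assuming the legacy PXC* SDK, where the IVCAM "motion range trade-off" device property (0-100) trades motion sensitivity for depth range on F200/SR300 cameras. Note that it only lengthens the depth range; it does nothing to make the hand's joints appear any larger:

```cpp
// Sketch only - assumes the legacy PXC* SDK and its IVCAM device properties.
#include <pxcsensemanager.h>

int main()
{
    PXCSenseManager* sm = PXCSenseManager::CreateInstance();
    sm->EnableHand();
    sm->Init();

    // Favor depth range over motion sensitivity (0 = motion, 100 = range).
    PXCCapture::Device* device = sm->QueryCaptureManager()->QueryDevice();
    device->SetIVCAMMotionRangeTradeOff(100);

    // ... run the hand tracking loop as in the earlier sketch ...

    sm->Release();
    return 0;
}
```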