1 Reply Latest reply on Apr 28, 2017 7:29 AM by MartyG

    Maximum distance for simple hand tracking

    JPvM

      We are using the SR300 for a special pilot project. Here we have a very important physical limitation: the user will be at 120 cm in front of the SR300. The SR300 is advertised to have a range of up to 150 cm, so it should be OK.

       

      However, hand tracking only seems to work at distances under roughly 100 cm, so we need about 20 cm more. The hand is held out in front of the person, so it is at about 70 cm from the camera on average, while the head/face is at about 120 cm.

       

      Which software or hardware parameters can we change in order to achieve this range?

       

      Thanks in advance for your response

        • 1. Re: Maximum distance for simple hand tracking
          MartyG

          I think the issue may be more to do with the camera's perception than its sensory range.  In RealSense, hands and faces are tracked by recognizing joint locations on the hands and landmarks on the face.  The larger the feature, the easier it is for the camera to detect it and maintain tracking of that point, so the face can typically be tracked from further away than the hands, which have a smaller surface area.

           

          So even if the SR300 camera can scan as far as 150 cm in applications such as 3D model scanning, this may not matter for hand tracking if the hand points the camera is following move far enough away from the lens that they can no longer be detected. The same perception issue occurs if you move your hand too close to the camera - tracking stalls because, at close proximity, the camera can no longer see the individual joints.
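          To get a feel for why small hand features drop out of tracking before the face does, here is a rough pinhole-camera sketch in Python. The SR300 numbers below (roughly a 72 degree horizontal depth FOV at 640 px width) and the feature sizes (a ~1.5 cm finger joint, a ~15 cm face) are illustrative assumptions, not measured values - check your own camera's intrinsics.

```python
import math

# Assumed SR300 depth-stream parameters (illustrative, not measured).
H_FOV_DEG = 72.0   # approximate horizontal field of view, degrees
H_RES_PX = 640     # horizontal depth resolution, pixels

def feature_width_px(feature_cm: float, distance_cm: float) -> float:
    """Approximate on-sensor width, in pixels, of a feature of the
    given physical width at the given distance (pinhole model)."""
    # Physical width of the scene visible at this distance.
    view_width_cm = 2.0 * distance_cm * math.tan(math.radians(H_FOV_DEG / 2.0))
    return feature_cm * H_RES_PX / view_width_cm

# A ~1.5 cm finger joint shrinks noticeably between 70 cm and 100 cm,
# while a ~15 cm face is still large at 120 cm.
print(round(feature_width_px(1.5, 70), 1))    # → 9.4  (joint at 70 cm)
print(round(feature_width_px(1.5, 100), 1))   # → 6.6  (joint at 100 cm)
print(round(feature_width_px(15.0, 120), 1))  # → 55.1 (face at 120 cm)
```

          Under these assumptions, a finger joint only spans a handful of depth pixels by 100 cm, which is consistent with hand tracking failing there while face tracking still works at 120 cm.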

           

          Another developer and I once discussed the theoretical possibility of extending the camera's view range by fitting a smartphone zoom-lens peripheral over the RealSense lens so that - like looking through the zoom lens on a digital camera - a distant image could be perceived by the camera as though it were up close.

           

          We never actually tried it out in practice, though, so I can't say whether it would work.  Of all the options we considered, a smartphone zoom attachment seemed the most likely to succeed.  This is because, while you can extend the depth scanning range with scripting in the RealSense SDK, in the case of hand tracking the perceived image of the user's hand features would still be just as small to the camera, I would think - it could just see further past the hand!
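          As a hedged illustration of adjusting the depth range in software: with the newer librealsense2 Python bindings (pyrealsense2), the SR300 exposes a motion-vs-range trade-off option (rs.option.motion_range), where higher values favor longer depth range over motion sensitivity. The sketch below assumes an attached SR300 and the pyrealsense2 package; it queries the option's valid range at runtime rather than hardcoding limits. Note that, as discussed above, this extends the depth range but does not make hand features any larger to the camera.

```python
def pick_option_value(lo: float, hi: float, fraction: float) -> float:
    """Map a 0..1 fraction onto an option's valid [lo, hi] range,
    clamping out-of-range fractions."""
    fraction = min(max(fraction, 0.0), 1.0)
    return lo + fraction * (hi - lo)

def extend_depth_range(fraction: float = 0.8) -> None:
    """Sketch: bias an SR300's motion-range trade-off toward longer
    depth range. Requires pyrealsense2 and a connected camera."""
    import pyrealsense2 as rs  # assumed installed; hardware required
    pipe = rs.pipeline()
    profile = pipe.start()
    try:
        sensor = profile.get_device().first_depth_sensor()
        if sensor.supports(rs.option.motion_range):
            rng = sensor.get_option_range(rs.option.motion_range)
            value = pick_option_value(rng.min, rng.max, fraction)
            sensor.set_option(rs.option.motion_range, value)
    finally:
        pipe.stop()
```

          The `pick_option_value` helper is a hypothetical convenience, not part of the SDK; it just keeps the chosen value inside whatever range the driver reports.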