I want to detect a human lying on the floor and detect body parts, like skeleton recognition. How can I solve this problem?
This is a very tricky one. I remember that a games company working with the Kinect camera had to write its own algorithm to detect when a person's skeleton was in the 'lying on the floor' pose.
If the camera is looking down on the body from above, then it should be relatively straightforward, since from the camera's downward-pointing perspective the person looks the same as if they were sitting up with the camera in front of them.
If you want to detect the person lying on their side, though, it becomes more problematic. I tested this a couple of years ago and found that while hand tracking still worked when the hand was viewed side-on, face tracking did not, because the camera relied on the facial landmarks being viewed front-on with the head upright, not tilted onto its side (as the head would be when lying on one's side).
In my own RealSense project, in the Unity game engine, I solved this problem by tracking the nose landmark. When the camera detects the nose moving downwards below the default value (which represents standing up straight), the code decides that the user is crouching. Once the value has decreased to a certain level, the code decides that the user is fully crouched. If the value continues decreasing past that point, the code decides that the user is lying down.
The process also works in reverse: as the user's nose rises and the value increases, the user is detected as un-crouching. When the default value is reached, the user is standing up straight. If the nose rises above the default value, the code decides that the user is tilting their head back to look upwards.
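The thresholding idea described above can be sketched roughly as follows. The threshold heights and the tolerance here are hypothetical illustrations, not values from my project; a real implementation would calibrate the standing default per user.

```python
def classify_pose(nose_y, standing_y=1.6, crouched_y=1.0, lying_y=0.5, tol=0.05):
    """Classify the user's pose from the tracked nose height (metres).

    All thresholds are hypothetical; calibrate the standing default per user.
    """
    if nose_y > standing_y + tol:
        return "looking up"   # nose above the standing default: head tilted back
    if nose_y >= standing_y - tol:
        return "standing"     # at (or near) the default: standing up straight
    if nose_y > crouched_y:
        return "crouching"    # below the default but not yet fully crouched
    if nose_y > lying_y:
        return "crouched"     # decreased to the fully-crouched level
    return "lying"            # value kept decreasing past the crouch level
```

Feeding each new nose reading into this function gives the state transitions in both directions (crouching on the way down, un-crouching on the way up) for free.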
This old video from my project, in which the nose motions are converted into virtual character motions, demonstrates the principle quite well (the avatar tech is much more advanced now).
https://www.youtube.com/watch?v=HHzXdLqI8p4 'My Father's Face' Tech Trailer 2 - "Thought" controlled avatar legs - YouTube
Apparently, you can rotate the camera's image, according to Intel support staffer David Lu.
https://software.intel.com/en-us/forums/realsense/topic/621939 Rotation Prior to Face Tracking
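The linked thread covers doing the rotation through the SDK itself; as a generic illustration of the idea, you can also rotate the frame yourself before handing it to the face tracker, e.g. with NumPy (this sketch is not SDK code, and the function name is my own):

```python
import numpy as np

def upright_for_face_tracking(frame, quarter_turns=1):
    """Rotate a color frame so a side-lying face appears upright.

    `frame` is an HxWx3 image array; quarter_turns=1 rotates 90 degrees
    counter-clockwise. Run the face tracker on the rotated frame, then map
    any landmark coordinates back through the inverse rotation.
    """
    return np.rot90(frame, k=quarter_turns)
```

Remember that landmark coordinates returned on the rotated frame must be transformed back to the original image's coordinate system before you use them.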
Hi kylerhwj,
Do you still need assistance with this case?
-Sergio A
Yes, I do. I haven't solved my problem yet. I need more assistance with how to detect a human lying pose.
Hi kylerhwj,
Thank you for your reply. We want to help you. Could you please provide more detailed information about your goal: what have you tried so far, how far along are you in the development of your project, what tools have you used, and what specific questions do you have at the moment?
With just the information provided, it's hard to tell specifically what issues you're facing at the moment.
We'll be waiting for your response.
-Sergio A
Thank you for your help. I'm sorry that my English is poor. I can now get joints on the color image, so I have (x,y) pixels on the color image. I want to get (x,y,z) in world coordinates from that (x,y); in other words, I want to know the depth value at a color-image pixel. How can I do this?
Here's a script for mapping depth to color using the UV map.
https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?doc_essential_coordinates_mapping.html Intel® RealSense™ SDK 2016 R2 Documentation
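For reference, once the depth stream is mapped to the color image, the pixel-to-world step itself is just the pinhole camera model. The sketch below assumes you already have an aligned depth value (in metres) at color pixel (u, v) and the color stream's intrinsics; the values fx, fy, cx, cy are placeholders, and the SDK's own mapping functions do this (plus distortion correction) for you:

```python
def pixel_to_world(u, v, depth_m, fx, fy, cx, cy):
    """Deproject pixel (u, v) with depth (metres) to camera-space (X, Y, Z).

    Assumes an undistorted pinhole model and a depth map already aligned to
    the color stream; fx, fy (focal lengths) and cx, cy (principal point)
    come from the camera intrinsics.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

At the principal point the x and y coordinates are zero, and the point sits straight ahead of the camera at the measured depth.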