
3D facial scanning

NMant1
Beginner
2,938 Views

Hello there,

I'm using the SR300 to make a 3D facial scan, which will be used in an external algorithm to identify the face. This is done in Visual Studio in C#. So far, I create the point cloud with the following specifications:

StreamType: STREAM_TYPE_DEPTH

PXCMImage.PixelFormat: PIXEL_FORMAT_DEPTH_RAW

Then I apply the QueryVertices function, which converts the depth data to vertices.
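In outline, the code looks something like this (simplified; the exact calls are from the 2016 SDK C# wrapper and error handling is left out):

PXCMSenseManager sm = PXCMSenseManager.CreateInstance();
sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_DEPTH, 640, 480, 30);
sm.Init();

if (sm.AcquireFrame(true) >= pxcmStatus.PXCM_STATUS_NO_ERROR)
{
    PXCMCapture.Sample sample = sm.QuerySample();
    PXCMImage depth = sample.depth;

    // One vertex per depth pixel, in camera coordinates (mm).
    PXCMPoint3DF32[] vertices = new PXCMPoint3DF32[depth.info.width * depth.info.height];
    PXCMProjection projection = sm.QueryCaptureManager().QueryDevice().CreateProjection();
    projection.QueryVertices(depth, vertices);

    projection.Dispose();
    sm.ReleaseFrame();
}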

I've tried to improve the quality in some ways, like averaging 5 frames, but so far the results have been insufficient for the identification algorithm. Now I've looked through the datasheet and found some filters and presets which might help, like the preset "PRESET_FACE_LOGIN" and the filters "FILTER_SCANNING" and "FILTER_SKELETON".

Are there good ways to increase the fidelity of the facial image? Do these presets and filters help, and how do I apply them?

Thanks a lot for reading,

Nahuel

0 Kudos
30 Replies
MartyG
Honored Contributor III
771 Views

QueryVertices converts every depth point it sees, which has the drawback that it can be slow. For this reason, developers often prefer a more targeted instruction such as ProjectCameraToDepth, which is faster because it allows you to specify how many points should be processed.

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/projectcameratodepth_pxcprojection.html ProjectCameraToDepth
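A minimal sketch of how it is called in C#, assuming you already have a PXCMProjection object (the 'projection' variable below) created from the camera device:

// Map a chosen set of 3D camera-space points (mm) back to depth image coordinates.
PXCMPoint3DF32[] worldPoints = new PXCMPoint3DF32[]
{
    new PXCMPoint3DF32() { x = 0f, y = 0f, z = 300f }   // a point 30cm in front of the camera
};
PXCMPointF32[] depthPixels = new PXCMPointF32[worldPoints.Length];
projection.ProjectCameraToDepth(worldPoints, depthPixels);
// depthPixels[0].x / .y now hold the (column, row) position in the depth image.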

0 Kudos
NMant1
Beginner
771 Views

Thanks, I will look into this. However, speed is not my main problem at the moment. Removing the unimportant vertices also misaligns my matrices and removes the possibility of averaging multiple depth images.

Do you have any advice on how to improve the quality of the 3D scan? Can filters help me with this? I can't really find any examples of filters in practice, so I've been unable to test this.

Thanks,

Nahuel

0 Kudos
MartyG
Honored Contributor III
770 Views

The SetIVCAMFilterOption instruction can be used to get low smoothing and high sharpness, with high accuracy and low noise at close range.

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/setivcamfilteroption_device_pxccapture.html SetIVCAMFilterOption
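A rough C# sketch of applying it, assuming 'sm' is your initialised PXCMSenseManager (the option values below are only examples; check the table in the documentation for what each index means):

// Apply IVCAM depth options on the camera device.
PXCMCapture.Device device = sm.QueryCaptureManager().QueryDevice();
device.SetIVCAMFilterOption(5);          // filter option index 0-7, trading smoothing against sharpness/range
device.SetIVCAMMotionRangeTradeOff(0);   // 0 favours short exposure / close range, 100 favours longer range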

0 Kudos
NMant1
Beginner
770 Views

Thanks! That looks like something I could use. The main problem I see is vertical lines running over the face, which I think are artifacts of the depth measuring system. I don't see any specific filters which take these out. Are there other ways of doing this?

0 Kudos
MartyG
Honored Contributor III
770 Views

Last year, a user called Eli E. suggested the following to someone who had wave-like ridges in their depth image:

"The easiest way to smooth your data is to use the SDK's module called ScenePerception. This module contains a KinectFusion style algorithm that performs volumetric integration and tracking. You don't need any of the tracking functionality, but you should get a fairly nice mesh after 15+ frames."

0 Kudos
NMant1
Beginner
770 Views

I'm afraid I can't find the post you mentioned. Also, ScenePerception seems to improve the image by moving the camera around, while I would like to scan the face of a stationary person. I'm guessing I could try to find a function which calculates where the ridges in the model will be, since they're caused by a stationary IR projector, but this seems like overkill for what I'm trying to achieve.

0 Kudos
jb455
Valued Contributor II
770 Views

I had some problems with the ridges a while back. The best I could do to improve it was to set the laser power to 1, the filter option to 5 and, most importantly, limit the range between the camera and skin to over about 22cm (sometimes needs to be at least 25-28cm away depending on lighting conditions).

Using PIXEL_FORMAT_DEPTH_F32 helped with the depth data smoothness too, but I still do get the occasional clump of outlying points which I filter out using a nearest neighbour type algorithm I made myself.
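The nearest-neighbour filter is nothing clever, by the way. The idea is roughly this (a sketch rather than my exact code; 'vertices' is the width*height array filled by QueryVertices, with z in mm and 0 meaning no data):

// Flag isolated depth points whose z value jumps away from all four grid neighbours.
static bool[] KeepMask(PXCMPoint3DF32[] vertices, int width, int height, float maxJumpMm)
{
    bool[] keep = new bool[vertices.Length];
    for (int y = 1; y < height - 1; y++)
    {
        for (int x = 1; x < width - 1; x++)
        {
            int i = y * width + x;
            float z = vertices[i].z;
            if (z == 0) continue;                                   // no data here anyway
            float[] nbrs = { vertices[i - 1].z, vertices[i + 1].z,
                             vertices[i - width].z, vertices[i + width].z };
            bool supported = false;
            foreach (float n in nbrs)
                if (n != 0 && System.Math.Abs(z - n) <= maxJumpMm) { supported = true; break; }
            keep[i] = supported;                                    // drop points with no nearby neighbour
        }
    }
    return keep;
}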

0 Kudos
MartyG
Honored Contributor III
770 Views

Here's the link to the article I quoted from, which relates to IR on the R200 model of camera rather than the SR300 you have. Some of the principles discussed are transferable, though.

https://software.intel.com/en-us/forums/realsense/topic/616443 Way to configure R200 gain and other parameters

Samontab also created a laser configuration utility for the F200 camera, and an accompanying article in which he provides images that have lines in the scan.

https://software.intel.com/en-us/forums/realsense/topic/537872 Utility for changing laser camera parameters (IVCAM v0.5)

0 Kudos
NMant1
Beginner
770 Views

Thanks for the links. Doing some further testing I realized the SR300 has a lot more trouble with human skin than with other objects. Within 30cm distance, the ridges in the face model start to show up, while they don't for the model of a synthetic face. I'm guessing it has something to do with the infrared light which can partially penetrate the skin. I don't have the option to move my face further away from the camera, as the resolution of the face model would become insufficient.

I filmed the IR light of the projector with a high-framerate IR camera. This shows that the SR300 projects purely vertical lines, as shown below. It uses 10 patterns with various line thicknesses.

The ridges I find on the skin coincide with the thinnest line pattern, and I think (but certainly don't know for sure) it's caused by the scattering of the IR light by the skin.

Is there a mode which increases performance for face or skin scanning? And is there a way to filter/alter my data which comes out of 'PXCMCapture.Sample sample.depth' before turning it into vertices? This would allow me to try and remove the ridges myself.
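To be concrete, something along these lines is what I have in mind, if it is possible (untested, so the exact access calls may be off):

// Read the raw 16-bit depth pixels into an array, filter them, then turn them into vertices.
PXCMImage depthImage = sample.depth;
PXCMImage.ImageData data;
if (depthImage.AcquireAccess(PXCMImage.Access.ACCESS_READ,
        PXCMImage.PixelFormat.PIXEL_FORMAT_DEPTH, out data) >= pxcmStatus.PXCM_STATUS_NO_ERROR)
{
    int w = depthImage.info.width, h = depthImage.info.height;
    short[] pixels = new short[w * h];
    data.ToShortArray(0, pixels);        // depth values in mm, 0 = no data
    // ... filter 'pixels' here to suppress the vertical ridges ...
    depthImage.ReleaseAccess(data);
}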

Thanks,

Nahuel

0 Kudos
jb455
Valued Contributor II
770 Views

Hi Nahuel,

/thread/111206 Here is the thread where I brought this issue up before with my findings. Unfortunately I didn't get any proper answers from Intel but hopefully it'll be different this time! We are able to get by with being further away so that is an effective workaround for us, though not ideal. Perhaps you can find a way of physically reducing the strength of the IR laser without distorting the pattern by using some sort of filter?

Good luck!

James

0 Kudos
MartyG
Honored Contributor III
770 Views

I found a link to the discussion that jb455 mentioned earlier in this thread about his own results with skin scanning, and his subsequent comments about those results.

Edit: speedy JB beat me to it!

The final outcome of that discussion thread was that Intel support member Andres said to JB, "I would like to inform you that the effective range of the SR300 camera starts at 20 cm, so any data captured at a shorter distance is indeterminate."

0 Kudos
NMant1
Beginner
770 Views

Thanks to both for the link.

I understand the range of the camera is limited, but in my experience the quality of a face scan starts to deteriorate at 30cm, while the quality of a synthetic (plastic) face remains good down to 15cm.

jb455, I don't understand why it would help to physically reduce the strength of the IR laser. The SetIVCAMLaserPower() function allows changing the laser strength, but I don't see how this would help. Aren't these lines the only thing the SR300 uses to calculate depth information? Do you mean the light overexposes the sensor at close distances?

I found this in the documentation:

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v2016r3/documentation/html/index.html?exportdata_pxcimage.html ExportData (PXCImage)

Do you know if this allows me to export the depth data to an array, edit it, and return it to a depth image to convert it to vertices? I can't seem to find an example of how to use it.

Thanks again for all your help,

Nahuel

0 Kudos
jb455
Valued Contributor II
771 Views

My reasoning was, and this is just a theory, that up close too much IR is reflected back from the skin at the peaks in the IR pattern, which causes the ridges in the point cloud. When you move back, the IR is less intense, so not as much is reflected and the point cloud is OK. Therefore, if we can reduce the amount of IR by more than the SDK lets you (despite there nominally being 16 steps, there are actually only 2 levels, off and auto, but this isn't mentioned in the docs), it may help, as there will be less IR bouncing back at close range. I don't know how effective this will be in reality, but if you're running out of other ideas and can find a cheap and easy way to test it, it may be worth a go.

I did have a play with the Import/Export methods a while ago but couldn't find a way to reconstruct the projection so I could map the depth & colour images after importing, and Intel again weren't able to help. I ended up doing my own import/export methods by going via a string to save and reload depth data. If you just want to do error correction on the depth values though you can edit the z component of each point in the vertices array directly.
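By "edit the z component directly" I mean something like this (a rough sketch; 'vertices' is the array from QueryVertices, and MyRidgeCorrection is just a placeholder for whatever filtering you come up with):

// Correct the depth of each vertex in place after QueryVertices.
for (int i = 0; i < vertices.Length; i++)
{
    if (vertices[i].z == 0) continue;                    // no depth data at this pixel
    vertices[i].z = MyRidgeCorrection(vertices[i].z, i); // hypothetical per-point correction
}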

0 Kudos
NMant1
Beginner
771 Views

Thanks for your feedback. I will try to create some kind of filter to see if the situation improves and get back with the results. Only changing the Z value of each vertex is not a complete solution though, since the X and Y values are influenced by the Z value of each pixel in functions like QueryVertices. I noticed that setting SetIVCAMLaserPower to 0 turns the IR light off. Is 1 the auto mode, and 2-16 full on?

I used Samontab's https://software.intel.com/en-us/forums/realsense/topic/537872 Utility for changing laser camera parameters (IVCAM v0.5) to test the SetIVCAMLaserPower function, and setting it to 1 gave a much better result, allowing me to bring my head as close as 15cm without ridges. I'm just not able to replicate this in my own program, sadly. I'm continuing with this tomorrow, so I'll tell you how it goes.

0 Kudos
jb455
Valued Contributor II
771 Views

NahuelM wrote:

Only changing the Z value of each vertex is not a complete solution though, since the X and Y values are influenced by the Z value of each pixel in functions like QueryVertices.

Ah yes, that's true. Perhaps you could use https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?projectcolortocamera_pxcprojection.html ProjectColourToCamera and supply the z values with whatever you end up with after your filtering?
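Roughly like this (untested; 'filteredDepth' and 'colourWidth' are placeholders for your own data, and 'projection' is your PXCMProjection):

// Build (colour x, colour y, corrected depth) triples and project them into camera space.
PXCMPoint3DF32[] ijz = new PXCMPoint3DF32[filteredDepth.Length];
for (int i = 0; i < ijz.Length; i++)
{
    ijz[i] = new PXCMPoint3DF32()
    {
        x = i % colourWidth,          // colour pixel column
        y = i / colourWidth,          // colour pixel row
        z = filteredDepth[i]          // your corrected depth value in mm
    };
}
PXCMPoint3DF32[] cameraPoints = new PXCMPoint3DF32[ijz.Length];
projection.ProjectColorToCamera(ijz, cameraPoints);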

The IVCAM app uses an old version of the SDK, so maybe that is actually changing the laser power by 16 discrete steps as it used to do (IIRC) before they introduced the auto laser control a few versions ago, and the lowest step is lower than the auto laser can go.

0 Kudos
NMant1
Beginner
771 Views

So, today has not yet produced anything useful, since I've been fighting with code. Every observation I've made matches your overexposure theory perfectly, so I completely agree with you on the cause of the ridges. I don't know which SDK version IVCAM uses, but setting the laser control to 1 makes it adjust automatically.

Today I tried to make a loop to render a video stream to my screen, in order to see if the auto laser control fixes things. Sadly, none of the examples I could find on the internet worked when executed. The RawStreams.cs sample that comes with the SDK doesn't seem to have any documentation, and I've been unable to integrate it into my project. Is there an easy source I can use for this?

0 Kudos
MartyG
Honored Contributor III
771 Views

If you access the C# version of RawStreams from the Sample Browser application, there is a 'documentation' button that leads to a short page of documentation for the sample. I will post it here for your convenience. There is also a 'sources' button to access the sample's C# source code if you do not have it already.

************

The DF_RawStreams and DF_RawStreams.cs samples show how to visualize raw depth and color streams.

From the menu, you can choose the following items:

Device: Select from a list of I/O devices for streaming.

Color, Depth, (IR), Left, or Right: Select the corresponding stream configuration. The stream types are camera device specific.

Mode: Select whether to do live streaming, recording or playback. If the playback or recording mode is selected, the sample will prompt for the playback or recording file name.

C/D Sync (Sync SW) or No Sync: Select whether to use synchronous or asynchronous color and depth streaming during visualization. The former synchronizes the color sample with the corresponding depth sample, while the latter visualizes them at their own frame rates.

Sync HW: Performs strong hardware-based synchronization on the enabled streams.

From the side buttons, you can choose the following options:

Color: Visualize the color stream in the main display. If picture-in-picture is enabled, the stream previously displayed in the main display goes to the picture-in-picture display.

Depth: Visualize the depth stream in the main display. If picture-in-picture is enabled, the stream previously displayed in the main display goes to the picture-in-picture display.

IR: Visualize the infrared stream in the main display. If picture-in-picture is enabled, the stream previously displayed in the main display goes to the picture-in-picture display.

Scale: Scale the image to the size of the display window, or the actual size.

Mirror: Flip the image horizontally to show the camera view or the user view.

PIP: Open a picture-in-picture window to visualize the second stream in streaming. Multiple clicks can choose the window size and location.

Start: Start streaming.

Stop: Stop streaming.

0 Kudos
NMant1
Beginner
771 Views

Yes, this shows very nicely what you can do with the application. The code, however, is undocumented. I have tried to find my way through the dozens of functions and classes, but I'm completely lost as to how to implement this in my own app.

0 Kudos
MartyG
Honored Contributor III
771 Views

This old tutorial PDF on creating a raw stream viewer might be useful to you, as it gives notes on each section of the code.

https://software.intel.com/sites/default/files/Capturing_Raw_Streams.pdf Capturing Raw Streams (PDF)
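If it helps, a bare-bones version of such a loop in C# would look something like the sketch below (untested; it assumes a WinForms app with a PictureBox called 'pictureBox1', and uses the SDK's RGB32 rendering of the depth image so it can be shown directly):

PXCMSenseManager sm = PXCMSenseManager.CreateInstance();
sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_DEPTH, 640, 480, 30);
sm.Init();

while (sm.AcquireFrame(true) >= pxcmStatus.PXCM_STATUS_NO_ERROR)
{
    PXCMCapture.Sample sample = sm.QuerySample();
    PXCMImage.ImageData data;
    if (sample.depth.AcquireAccess(PXCMImage.Access.ACCESS_READ,
            PXCMImage.PixelFormat.PIXEL_FORMAT_RGB32, out data) >= pxcmStatus.PXCM_STATUS_NO_ERROR)
    {
        using (System.Drawing.Bitmap frame = data.ToBitmap(0, sample.depth.info.width, sample.depth.info.height))
        {
            pictureBox1.Image = new System.Drawing.Bitmap(frame);   // copy before releasing access
        }
        sample.depth.ReleaseAccess(data);
    }
    sm.ReleaseFrame();
    System.Windows.Forms.Application.DoEvents();   // crude way to keep the UI responsive in a simple loop
}
sm.Dispose();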

0 Kudos
MartyG
Honored Contributor III
776 Views

There is also a Raw Streams sample by Intel from their IDF 2015 conference. You have to download it to get all the files. I downloaded it myself to check whether its script has documentation notes in it, and it does. Here's the download link for the whole sample package:

https://software.intel.com/en-us/articles/raw-streams Raw Streams Tutorial from IDF 2015 Intel® RealSense™ Lab | Intel® Software

And for your convenience, here's the script so you can view its annotations.

/*******************************************************************************
INTEL CORPORATION PROPRIETARY INFORMATION
This software is supplied under the terms of a license agreement or nondisclosure
agreement with Intel Corporation and may not be copied or disclosed except in
accordance with the terms of that agreement
Copyright(c) 2011-2014 Intel Corporation. All Rights Reserved.
*******************************************************************************/

#include <windows.h>
#include "pxcsensemanager.h"
#include "pxcmetadata.h"
#include "util_cmdline.h"
#include "util_render.h"
#include <conio.h>

int wmain(int argc, WCHAR* argv[]) {
    /* Creates an instance of the PXCSenseManager */
    // ToDo 1: e.g. PXCSenseManager *pp = PXCSenseManager::CreateInstance();
    if (!pp) {
        wprintf_s(L"Unable to create the SenseManager\n");
        return 3;
    }

    // Create stream renders
    // Insert ToDo 2

    pxcStatus sts;

    // Configure the components
    // Insert ToDo 3

    /* Initializes the pipeline */
    sts = pp->Init();
    if (sts < PXC_STATUS_NO_ERROR) {
        wprintf_s(L"Failed to locate any video stream(s)\n");
        pp->Release();
        return sts;
    }

    /* Stream Data */
    for (;;) {
        /* Waits until new frame is available and locks it for application processing */
        // Insert ToDo 4

        /* Render the frame */
        // Insert ToDo 5
        // Insert ToDo 6

        /* Releases lock so pipeline can process next frame */
        // Insert ToDo 7

        if (_kbhit()) { // Break loop
            int c = _getch() & 255;
            if (c == 27 || c == 'q' || c == 'Q') break; // ESC|q|Q for Exit
        }
    }

    wprintf_s(L"Exiting\n");
    // Release the Instance
    // Insert ToDo 2
    return 0;
}

0 Kudos
Reply