Yes, the Kinect 2 was a 'time of flight' camera. The 400 Series cameras' projection method is officially described as stereo, or stereoscopic, because of their left and right imagers. For comparison, the earlier RealSense SR300 camera model uses coded light, like the Kinect 1.
The 400 Series has an error range of less than 1% thanks to its advanced D4 Vision Processor component.
You may also like to refer to the detailed data sheet document for the complete 400 Series.
An earlier data sheet that focused exclusively on the D415 and D435 rather than the full 400 Series is also available.
Here is a description from the latter data sheet of how the stereo system works.
The Intel RealSense depth camera D400 series uses stereo vision to calculate depth. The stereo vision implementation consists of a left imager, a right imager, and an optional infrared projector. The infrared projector projects a non-visible static IR pattern to improve depth accuracy in scenes with low texture.
The left and right imagers capture the scene and send imager data to the depth imaging processor, which calculates depth values for each pixel in the image by correlating points on the left image to the right image and measuring the shift (disparity) between a point on the left image and its match on the right image.
The depth pixel values are processed to generate a depth frame. Subsequent depth frames create a depth video stream.
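The correlation step described above reduces to a simple relationship: depth is inversely proportional to disparity, the horizontal shift of a point between the left and right images. A minimal sketch of that conversion, using illustrative focal length and baseline values rather than official D400 calibration numbers:

```python
import numpy as np

# Illustrative stereo parameters (assumptions, not D400 calibration values)
focal_px = 640.0       # focal length in pixels
baseline_m = 0.050     # separation between left and right imagers, metres

def disparity_to_depth(disparity_px):
    """Convert a disparity map (pixels) to depth (metres).
    Zero disparity means no match was found, so depth is left invalid (0)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.zeros_like(disparity_px)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# With these numbers, a point shifted 32 px between the images is 1 m away
print(disparity_to_depth([32.0, 16.0, 0.0]))  # -> [1. 2. 0.]
```

The inverse relationship is also why stereo depth precision degrades with distance: far objects produce small disparities, so a fixed matching error in pixels translates into a larger error in metres.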
Thanks, I'm not 100% sure yet because I haven't coded against it (that will happen soon), but so far it seems the Kinect One provides more detail.
With the Kinect One I was able (with my own software corrections) to get to a 2 mm depth resolution over a scanline, and rather high contrast precision, meaning detecting edges etc. (or even fine facial details).
So far (though I'm only using the Intel Viewer) it seems less detailed: depth resolution ±5 mm, object resolution around 1 cm objects at 60 cm distance, quite blob-like. I see about three times more pixels with a RealSense, but with an estimated 60% less detail; in total less detail, yet more pixels to process for robotics applications, which might be a penalty (but with the bonus that it will work outdoors, which the Kinect One didn't).
Others have also observed that they could not reproduce with the D-cameras values that they had on Kinect cameras. The 400 Series is undoubtedly far superior technically to the Kinects, though the differences in how they implement projection likely account for a number of differences in results. Areas in which the 400 Series compares negatively will probably be ironed out in time with SDK and firmware updates and new software tools, though the RealSense and Kinect cameras will never be exactly the same.
You can define your own custom visual presets for the camera to re-balance its processing so that some functions are improved at the expense of others. An example is the pre-made High Accuracy setting, which boosts accuracy but lowers the fill rate.
Well, the maths might be a bit ahead, but technically ToF is more complex to do.
And which performs best remains to be seen, but you're right, it has only just been released.
Even the images in those articles were not final images; I like that they are very open about it.
Much more open than Microsoft ever was, and maybe we can improve this one too.
What I kind of wonder is: why is it so wobbly at larger scales?
Does the Viewer have a manual? Can those wobbles be removed with altered settings?
And I wonder if it could be improved by some math if it were given known depth areas.
Often in robotics we have a table or conveyor belt with some unused areas.
We could simply mark those areas red or so, and tell a customer those should be kept clean and empty.
I also noticed hot air bubbles indoors (or dust?), something I've seen in other industrial ToF depth cams as well.
Though those didn't use stereo vision; if you think about that, it's kind of strange.
Normal RGB vision (my eyes) doesn't show that, but some IR cams have that 'flaw' (then why use such tech?).
Or is something else going on here? (Perhaps it's a ghost-haunted area.)
Yes, Intel has a community-focused approach with the open-source SDK 2.0 software, and has already incorporated community contributions into the SDK.
I do not have experience with image wobble in the 400 Series cameras, but I do encounter it when programming virtual camera views in the Unity game creation engine. In that instance, it tends to be caused by the camera image constantly updating in response to processing what it is seeing. For example, I use a routine that stops the camera view from passing through the surface of objects, so in a confined space where the camera is close to the walls, it tends to bounce the image as it constantly adjusts to prevent passing through the object's surface, until the camera is moved away from the objects into a more open space.
Yesterday I researched extensively whether a manual exists for the Viewer's settings such as Rau and Hdad, but could not find one. I am waiting for a response from Intel regarding whether such documentation exists yet.
I would recommend de-selecting the 'Enable auto exposure' option in the viewer to see if the wobbles decrease when exposure is switched to manual control.
Developments in math that are incorporated into the SDK, whether by the RealSense developer team or by community contributions, are sure to refine the camera's capabilities over time. For example, Intel recently demonstrated using four D435 cameras for volumetric capture at the Sundance festival.
Ah yes, spirit orbs. You are correct, these can also be dust motes that are captured by the camera, which is why professional paranormal researchers are very careful before declaring that such an orb might be a spirit. Cameras can see more of the light spectrum than the human eye can. For example, using depth cameras under fluorescent lights can cause image disruption due to flickering that is hard to see with our own eyes.
Maybe I used the wrong word (I'm Dutch). The wobbling is like this: when I go to the point cloud view I get lots of 'vibrations', and those vibration waves seem rather large.
I'd assume depth 'noise' would be a bit more random, rather like white noise, and not such large waves.
A nice idea might be to use constant calibration: CurrentDepth[x,y] = (DepthOld[x,y]*n + CurrentDepth[x,y]) / (n+1) (any n >= 0 keeps the divisor n+1 non-zero),
applied to configurable areas, so the other areas might use them as a reference of a known depth.
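That formula is a running average that converges on the true depth of a known-static area; larger n makes the reference change more slowly. A minimal sketch with simulated frames (the 600 mm reference distance and noise level are illustrative, not measured values):

```python
import numpy as np

def update_reference(depth_ref, depth_new, n):
    """Running average in the form given above:
    new_ref = (old_ref * n + current) / (n + 1).
    Any n >= 0 keeps the divisor n + 1 non-zero; larger n smooths harder."""
    return (depth_ref * n + depth_new) / (n + 1)

# Simulate a flat reference area at 600 mm with 5 mm of per-frame noise
rng = np.random.default_rng(0)
ref = np.full((4, 4), 600.0)
for _ in range(100):
    frame = 600.0 + rng.normal(0.0, 5.0, size=(4, 4))
    ref = update_reference(ref, frame, n=9)

print(round(float(ref.mean()), 1))  # stays close to the true 600 mm
```

The pixels outside the marked reference area could then be corrected against the drift observed inside it, which is essentially the "known depth area" idea described in the earlier post.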
Can you help decipher this a bit further? I have a very similar question that would be super helpful to understand in terms of mm or cm of resolvable data.
How should I be thinking about this, and is there any easily translatable way of getting to an answer from the terms Intel uses? For example, at 1.5 m distance, how many cm or mm of resolution should I expect, and how can I arrive at that answer for varying distances?
I do not think it is as easy to describe in practice as saying 'at x distance the image quality will be a certain state'. There are all kinds of environmental factors, such as lighting in a location, that could affect the image results.
According to the data sheet for the 400 Series cameras, "the depth pixel value is a measurement from the parallel plane of the imagers and not the absolute range".
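As a rough rule of thumb for any stereo camera (not an official Intel specification), depth error grows with the square of distance: dZ ≈ Z² · Δd / (f · B), where f is the focal length in pixels, B the baseline, and Δd the subpixel disparity matching error. A sketch with illustrative numbers; the 50 mm baseline is in the ballpark of the D435's published figure, while the focal length and subpixel error here are assumptions:

```python
# Rule-of-thumb stereo depth error: dZ = Z^2 * subpixel_err / (f * B).
# All three parameters below are illustrative assumptions, not data
# sheet values; substitute your camera's calibration to get real numbers.
focal_px = 640.0        # assumed focal length in pixels
baseline_m = 0.050      # assumed imager separation in metres
subpixel_err = 0.08     # assumed disparity matching error in pixels

def depth_error_m(z_m):
    """Approximate depth uncertainty (metres) at distance z_m."""
    return (z_m ** 2) * subpixel_err / (focal_px * baseline_m)

for z in (0.6, 1.0, 1.5, 2.0):
    print(f"{z:.1f} m -> +/- {depth_error_m(z) * 1000:.1f} mm")
```

The quadratic growth is the key takeaway: doubling the distance roughly quadruples the depth error, which matches the general advice to work as close to the subject as the camera's minimum range allows.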
Well, yes, lighting conditions can and do matter. For the Kinect I once saw reports where people just recorded a few distances,
pointing it at a flat wall, and then calculated errors (by the way, it also had a warm-up error, so some people used extra cooling on those devices):
error as static deviation, max error, and average error.
Since the Kinect used ToF it could easily work that out, but Intel relies on stereoscopic vision,
so to measure that I think the laser dots should be used, so it has a pattern to work with on an office white/grey wall,
as the pattern has an effect that might influence the result.
Next you can put the camera at a certain known distance, record a single frame, and record the distance differences per pixel.
Then you take a few more frames (a hundred or so) and per pixel calculate the averaged distance variance, max difference, and standard deviation.
Since it's per pixel, show it in a map where those three values can be made visible using some colour graphics.
Then you could repeat this over several distances, for example 60, 80, 100, 120 cm.
Those are interesting from an industrial point of view, being common conveyor belt sizes (hence my interest in 60 cm).
Or just at every 10 cm increase, or even at every cm.
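The per-pixel statistics described above are straightforward to compute once the frames are stacked into a single array. A minimal sketch using simulated frames of a flat wall (with real hardware these would be captured depth frames instead; the 600 mm distance and 5 mm noise are made-up test values):

```python
import numpy as np

# Simulate 100 depth frames of a flat wall at a known 600 mm distance,
# with 5 mm of Gaussian noise; real frames would come from the camera.
rng = np.random.default_rng(42)
true_depth_mm = 600.0
frames = true_depth_mm + rng.normal(0.0, 5.0, size=(100, 48, 64))

mean_map = frames.mean(axis=0)                       # average distance per pixel
std_map = frames.std(axis=0)                         # standard deviation per pixel
max_dev_map = np.abs(frames - mean_map).max(axis=0)  # worst-case deviation per pixel
bias_map = mean_map - true_depth_mm                  # static error vs known distance

print(f"mean std dev:        {std_map.mean():.2f} mm")
print(f"worst max deviation: {max_dev_map.max():.2f} mm")
```

Each of the three maps could then be rendered as a colour image (for example with matplotlib's `imshow`), and the whole run repeated at 60, 80, 100 and 120 cm to build the per-distance report described above.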
Though this is just the Z resolution; since the Intel has lower detail resolution, maybe we need to think of another method for measuring detail at a certain distance.
But what I wrote in this post above would already give some indication, I think.
Yes, well, that was at a time when I had way more time to read and test things out.
I once did it with a measuring tape for a single spot, but later I found that a university wrote reports and created maps as described above, at a single distance.
It lacked only multiple distances, but it confirmed some of the distortion/noise ideas I then had about the device.
Those reports showed interesting barrel-like distortions, and a middle spot that required corrections.
Only with such reports can you see such errors; I hope Intel can create them, because it's best if you can compare a few cameras of the same model.
(Well, I remember at some point I had several Kinects, and strangely one was particularly bad, much worse than the others; some specific hardware release version.)
This message was posted on behalf of Intel Corporation.
Intel has a documented procedure for testing RealSense depth quality. Please find it at the link below:
Please feel free to use this testing procedure and even compare it to your own. However, keep in mind that the procedure I am sharing with you is how Intel tested their cameras.