
Many SR300, One IR light, While Looking at Same Object (i.e., Use Two SR300, And One IR Light)

LBill
New Contributor I
2,779 Views

If I have two SR300 cameras looking at the same object, how can I make one RealSense (with Laser Power set to zero) use the IR light of the other SR300? I tried it, but I cannot see the depth image of the SR300 that has Laser Power set to zero. Is there a way to do this?

0 Kudos
21 Replies
MartyG
Honored Contributor III
1,017 Views

I am not sure from your description what the goal of your project is. Are you trying to take scans of an object from different sides of it, with the SR300 cameras arranged around it?

If this is the case, then the approach I would use would be to cycle through the cameras by ID number. You would make one camera active and capture an image, then deactivate that camera and activate the next camera in the sequence to capture an image. When the last camera is reached, the cycle returns to the first camera again. E.g. if you had 4 cameras: 1 - 2 - 3 - 4 - 1.
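As a rough sketch of that cycle (against the librealsense 1.x C++ API the SR300 uses; the devices list and stream settings here are placeholders, not tested code):

#include <librealsense/rs.hpp>
#include <vector>

int main() {
    rs::context ctx;
    std::vector<rs::device *> devices;
    for (int i = 0; i < ctx.get_device_count(); ++i)
        devices.push_back(ctx.get_device(i));

    // One camera at a time, so each laser is only on while its owner streams.
    for (auto dev : devices) {
        dev->enable_stream(rs::stream::depth, rs::preset::best_quality);
        dev->start();            // projector powers up with the stream
        dev->wait_for_frames();  // block until a frame set arrives
        const void * depth = dev->get_frame_data(rs::stream::depth);
        // ... copy or save the frame before dev->stop() ...
        dev->stop();             // projector off before the next camera starts
    }
    return 0;
}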

Once you have taken a series of captures from different sides, you can stitch the separate images together into a single 360 degree model of the object using 3D modeling software.

If what I am suggesting is different from what you are trying to do with your project, please explain your project more. Thanks!

0 Kudos
LBill
New Contributor I
1,017 Views

MartyG,

I am recording videos of the same object with two cameras at the same time at 30 FPS (two streams per camera). The cameras are synchronized. I tried deactivating one camera while the other is running, but the cameras don't react fast enough, the videos don't look good, and frames are skipped.

I am running a while loop that captures two frames (one IR, one Depth) from each camera:

while (true) {
    for (auto dev : devices) {
        /// Wait for new images, otherwise we may get empty frames
        if (dev->is_streaming()) {
            dev->wait_for_frames();
        }

        ///***************** DEPTH *******************************************
        image = cv::Mat(cv::Size(width, height), CV_16UC1,
                        (void *)dev->get_frame_data(rs::stream::depth),
                        cv::Mat::AUTO_STEP);
        ...

The image is then either displayed or stored in a video file.

There is interference, because I see a wave of lines on both the IR and Depth videos of each camera.

I tried turning off the laser of the camera after I was done grabbing a frame, but that made things worse (because it takes time to turn it back on, etc.).
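The toggle itself is a single option write (a minimal sketch, assuming librealsense 1.x, where the SR300 exposes the F200-family laser power option):

double min = 0, max = 0, step = 0;
dev->get_option_range(rs::option::f200_laser_power, min, max, step);

dev->set_option(rs::option::f200_laser_power, 0);    // laser off after grabbing
// ... the other camera grabs its frames here ...
dev->set_option(rs::option::f200_laser_power, max);  // restore full power

The slow part is not the option write but the projector settling again afterwards.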

0 Kudos
MartyG
Honored Contributor III
1,017 Views

Yes, I thought that there might be interference between the cameras. The newer ZR300 camera model can be used in multi-camera setups without interference, but the SR300 cannot. It is not an easy problem to solve. In previous cases, I have suggested placing some kind of electromagnetic shielding around the cameras to try to isolate their signals from each other.

https://en.wikipedia.org/wiki/Electromagnetic_shielding Electromagnetic shielding - Wikipedia

A cheap example of this is smartphone covers that have an inner 'Faraday' lining (named after the signal-blocking Faraday cage) to shield a device from wireless hacking or reduce the body's exposure to EM signals. You can find examples of this on stores such as Amazon for $10 to $20 by searching for 'faraday phone'.

Someone else asked whether they could put an external shutter on the outside of their camera, presumably blocking the laser when the shutter was closed. A shutter might be able to react faster than turning a camera on and off.

Another approach that someone tried was to deactivate the camera's Sense Manager and then re-activate it. This has a time delay involved too though regarding the time it takes for the Sense Manager to re-activate.

0 Kudos
CAike
Novice
1,017 Views

Marty,

If I understand correctly, the "interference" is the IR pattern corruption (for depth) and the IR overlap (for IR capture).. I don't think it is electrical noise/crosstalk, so I don't think the shielding would help out much.

The IR projector is very slow to start (really) and stop (not so much).. but setting power = 0 is relatively fast to take effect. Still TOO SLOW for video, but better than stopping the projector itself..

From what we understand (and see) the projector is a resonance type.. so.. it likes to stay running.. or you REALLY pay a time penalty! I believe that the firmware wants to allow time for it to stabilize before taking depth measurements.

And btw.. Thanks for being so active on the forum! I wish I had the time to spend to help out more. We have lots of F200s, SR300s, and a few ZR300s around if ever there is a need for some specific information that we can test quickly.

Chris

0 Kudos
MartyG
Honored Contributor III
1,017 Views

Thanks muchly, Chris. I know about EM signal interference, but not so much about pattern interference, so it's always good to have expert voices contributing to add to the community's knowledge-base.

I'm reminded of the Ghostbusters and their proton guns ... "Don't cross the streams!"

Back in 2014, a user cut open a camera's USB cable and spliced in a cable that provided an extra 5V of power for increased reliability (this was how it was discovered that mains-powered hubs solve a lot of camera detection / connection problems). Maybe it would be possible to splice in a circuit that introduces resistance (cutting the power to the camera) and then drops the resistance and restores power again.

I once designed a mechanism where a cable would have a gap in it with an electrical terminal pad on each end. In the gap would be a powered spinning rotor that delivered power to the cable terminals whenever its rotation carried it across the contacts, and cut the power once its angle moved past them.

But that would probably turn off the projector too when the power cut out, so maybe not so useful.

I wonder ... if the projector uses less power than the laser, maybe the resistance could be tuned so that the laser shuts off but the projector still gets enough power to stay warm. I'm just designing off the top of my head here (I have a product engineering degree).

There isn't any publicly available information on the D-series cameras yet other than what is on their pre-release promo page. I guess you'll be buying some to add to your impressive RealSense camera inventory.

0 Kudos
CAike
Novice
1,017 Views

I think I misunderstood what Pototo was asking! (Happens a lot now that I am getting old.. my wife and employees have confirmed this often.. grin)

If you turned off the laser from one sensor.. and then used the other only.. maybe that would work for IR capture. In theory you could even use an external IR source.

But, for depth, I would seriously doubt you can use the structured light from one sensor for the other. I am almost sure there is a critical relationship between the projector and capture on the ASIC.

If, for example, the sensors sit side by side and each is looking at the pattern from the other.. then I bet you are back at some timing issue from the ASICs.. The projector of one is not synchronized with the capture of the other..

If the sensors were opposed.. I don't think you have a chance, since the principle is to detect the pattern distorted/translated by the object under inspection... The object would simply obscure the pattern completely.

Yep.. crossing streams is bad.. but who you gonna call?

For "browning-out" the camera.. I doubt that would be reliable in the end. Probably every camera would have it's own "magically" resistance.. and you would probably cook something due to overheating (as you drop the voltage, the current would increase).

It would be NICE to have a hardware blanking and trigger signal for the sensors.. It would open up a world of new possibilities. Especially for commercial applications where more than one camera is needed. Maybe we need to find the hardware guys and buy them a nice dinner? (Hardware guys.. really.. the offer stands).

We have already signed up for the notice on the D4XX kits.. And you are right.. we will add to our collection!

We have invested a lot in these sensors.. Have developed our own external calibration stand for sensor specific calibrations (for distortion correction of the depth and color images).. We generate a correction file for each sensor.. it is amazing what you can gain..

Now..the quest continues (again.. F200, SR300.. now D4xx)!

Thanks for guiding the conversations and the efforts.. if we can help out somewhere, feel free to message us.

0 Kudos
MartyG
Honored Contributor III
1,017 Views

Thanks too for your kind words.

One more idea I had ... the HTC Vive's sensors use a tracking box called Lighthouse in which the laser is spun round at 1000+ RPM. Maybe that is adaptable in this case ... if the RealSense camera were spun round fast on a turntable then its laser would always be on, but it would be pointing away from the object for half of the time (potentially stopping the streams from crossing during those periods).

Spinning a camera on a turntable like this would be physically tricky because of the potential for the cable getting wound up. A way around this may be to have a hole in the center of the turntable's base and thread the lead through that, so the cable is spinning round on the spot instead of winding round the turntable's stand.

There is also the matter of whether the camera's FPS could cope with such rotation. The 90 FPS on the new D-series cameras might be better suited.

If the scanning area were a white box surrounding the object then it might reduce the potential for capturing bad data when the camera faces away from the object, as the depth would read as '0' when it could only see the white surface, since there'd be nothing for the IR to lock onto.

0 Kudos
CAike
Novice
1,017 Views

Interesting.. The cable would be a trouble.. but there are slip rings.. rated for USB 3.0, though? That is the sneaky part. (See here - https://www.adafruit.com/product/736?gclid=CjwKCAjww9_MBRAWEiwAlaMJZsqOObasCBP7DVvSy2BEY92ARoHhhDcpS66yZGUivFpPMOsGxodmERoC8_kQAvD_BwE )

Early on there was a "Shake n Sense" approach that vibrated the Primesense-based sensors (Kinect V1) so that multiple units could run together..

Check this out.. http://www.i-programmer.info/news/194-kinect/3869-shake-n-sense-makes-kinects-work-together.html Shake n Sense Makes Kinects Work Together!

We never tried it.. It looked sort of brutal.. but maybe it is worth an investigation?

0 Kudos
MartyG
Honored Contributor III
1,017 Views

By googling for 'slip ring usb 3.0' I saw a couple that claimed to work with USB 3. Example:

http://www.directindustry.com/prod/jinpat-electronics-co-ltd/product-144799-1691805.html USB slip ring / capsule / modular - USB/power/signal capsule slip ring - JINPAT Electronics Co., Ltd.

Yeah, I believe they changed the projection method on Kinect 2? It might be worth testing with an F200, though beware in case it breaks it! The Japanese build shock absorbers under their buildings to dampen earthquake motion.

0 Kudos
CAike
Novice
1,017 Views

I have bookmarked the slip ring.. that is nice.. 1GB Ethernet as well! Could come in handy one day.

The F200 and the Primesense were really close.. We have the Kinect V2.. but we abandoned it early on since it was big and required external power. It was really hard to repackage. And they did change the approach: it is a "Time of Flight" technology, not structured light.

For the F200 and SR300 it is still structured light.. but I am not sure if this approach could work. It was magic even for the Primesense sensor.

For the D400s it's anyone's guess, since we don't yet know the technology inside.

I really wish Intel had a DEVELOPERS PROGRAM for Realsense. A real one.. under NDA if needed.. and not free. We would pay for it.

That way we could have a way to get a view of what's coming and be prepared, know timelines, get attention if there is a real tech trouble.. etc..

Thanks again!

0 Kudos
MartyG
Honored Contributor III
1,017 Views

Chris, just wanted to add, regarding your wish for advance info, that a good place to follow for this is Intel's press Newsroom. The D-cameras were first mentioned there last August, for example, when they were known as the R400.

https://newsroom.intel.com/ Intel Newsroom | Intel Official News and Information

0 Kudos
LBill
New Contributor I
1,017 Views

Do you have any sample videos and images of ZR300 cameras looking at the same object and displaying the depth images?

0 Kudos
CAike
Novice
1,017 Views

Pototo.. I don't think we do. These are only supported under Linux, and we haven't played with them much as yet.

I will check with my developers and see if anyone has played with them at home.

0 Kudos
CAike
Novice
1,017 Views

Marty, Thanks! I will make a note!

0 Kudos
LBill
New Contributor I
1,017 Views

I think a ZR300 would be the best option. I will try getting one.

0 Kudos
CAike
Novice
1,017 Views

Pototo,

Note that the ZR300 and SR300 both use a laser projector (as discussed above), but maybe there is an important difference!

The ZR does have stereoscopic IR cameras.. and this is how it calculates depth. The projector is there "to add texture to non-textured objects" (at least that is the datasheet explanation).

So.. It might behave different if you used a single projector or IR source and two or more cameras. It could be interesting to find out!

My guess is that without the projector the accuracy could be more unstable. It would depend somehow on the contrast of the object you were imaging.. and how the pixel edges are detected by both sensors.

Stereoscopic depth depends on detectable pixel shifts between the two perspectives to calculate Z.. so it is a bit trickier.
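(For context: classic stereo depth is roughly Z = f * B / d, where f is the focal length in pixels, B is the baseline between the two IR cameras, and d is the measured disparity in pixels. On a low-contrast surface the disparity match becomes ambiguous, which is exactly why the projector is there to add texture.)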

Also, one neat thing (though we haven't played with it yet) is that there is an integrated IMU (gyro & accelerometer). The hint is that it uses this somehow to correct for changing orientations of the sensor.

For your application it might not matter if the sensors are static.

Please let us know if you do any testing on the ZR!

Chris

0 Kudos
LBill
New Contributor I
1,017 Views

I noticed the depth and IR images have different resolutions on the ZR300. In spite of this, are the depth images always pixel-aligned with the IR images on the ZR300?

Also, for the ZR300, can we still use the client software to change parameters, such as DEPTH_ACCURACY, etc.?

Does the ZR300 work well with librealsense for Linux? I would like the replacement of the SR300 with a ZR300 to be plug-and-play, without having to heavily modify the current code I wrote using librealsense.

Thanks,

0 Kudos
MartyG
Honored Contributor III
1,017 Views

Whilst I'll leave the depth align question for Chris to reply to, I'll try to answer the rest of your questions.

The ZR300 has identical IR components to the R200, so Librealsense instructions related to the R200 specifically (e.g. r200_auto_exposure_kp_gain) should work with the ZR300 too.
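For example, something along these lines should carry over unchanged (a sketch against the librealsense 1.x option API; check rs.hpp for the full option list):

// R200/ZR300-family options use the same set_option call as the SR300 ones:
dev->set_option(rs::option::r200_lr_auto_exposure_enabled, 1);  // auto-exposure for the IR stereo pair
dev->set_option(rs::option::r200_emitter_enabled, 0);           // IR projector off

The F200/SR300-specific options (such as the accuracy one you mentioned) won't apply on the ZR300, as far as I know.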

Intel recommends that with the ZR300 you use the Intel RealSense SDK for Linux. This installs Librealsense as a module, so you get the Librealsense functions plus extra features not in basic Librealsense, such as 'Person Tracking' and 'Object Recognition' / 'Object Localization' / 'Object Tracking'.

https://software.intel.com/sites/products/realsense/sdk/ Intel® RealSense™ SDK for Linux: Main Page

0 Kudos
CAike
Novice
924 Views

We haven't worked much with the ZR300 since our sensors are Windows based at the moment. There has been some development with LibRealSense.. but only a little.. so I am not sure I can advise you on the specifics of the ZR300!

With the SR300 we use depth + color for our application, not depth + IR. But in this case there is a correlation between the depth pixels and the color pixels.

For a given color pixel you can find the corresponding depth pixel. I would expect that the same would be the case between depth and IR as well.

If you need, I can try to post some snippets of the code that performs this depth/color extraction.
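As a quick sketch of the idea in the meantime (librealsense 1.x exposes pre-aligned streams, so no manual reprojection is needed; the stream sizes here are just examples):

dev->enable_stream(rs::stream::depth, 640, 480, rs::format::z16, 30);
dev->enable_stream(rs::stream::color, 640, 480, rs::format::rgb8, 30);
dev->start();
dev->wait_for_frames();

// The library remaps depth into the color camera's pixel grid,
// so the same (x, y) indexes matching pixels in both images:
const uint16_t * depth = (const uint16_t *)dev->get_frame_data(rs::stream::depth_aligned_to_color);
const uint8_t * color = (const uint8_t *)dev->get_frame_data(rs::stream::color);

For depth vs. IR on the SR300 the two streams come from the same imager, so they should already be pixel-aligned.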

0 Kudos
LBill
New Contributor I
924 Views

I bought a ZR300. The interference is not bad anymore when I use the SR300 and the ZR300 together. Sometimes, depending on the angle, I can see the sweeping light of the SR300 in the ZR300 depth image, but it's not a big issue right now since the important areas of each depth image have good depth data.

0 Kudos