I suspect that it may be a waste of money. RealSense camera bandwidth deteriorates quickly when USB extension cables are used. The most reliable maximum extension cable length with ordinary USB cables is 1 m. Most 2 m cables, apart from very high-quality ones, prevent the camera from working.
It may be that the signal can be sustained over the high-grade fiber cables these boxes use, but I would not risk my own money on it unless I could afford to lose it if it didn't work. Somebody has to be the first to try it, though, before the rest of us can know for sure whether it works.
For long distance data transfer like this, I usually recommend streaming the data over an internet connection instead from one PC to another.
Thanks MartyG. I know the Black Box version has been successfully tested with the Kinect V2, so I'm asking the community whether they have done any testing too. But regarding your recommendation, how would you stream the data over an internet connection? In our situation, the SR300 would use the shortest USB 3 cable we could, with the extender mentioned above in between.
I believe RealSense is more sensitive to USB cable conditions than Kinect 2 was. This was naturally a cause of great discussion in the early days of RealSense as users tried to see whether Kinect knowledge could be applied to the new RealSense cameras.
RealSense can record and play back data in a video format called .RSSDK, which is based on the H.264 video codec. So that offers two possibilities for internet transfer - live-streaming the data as it is generated, or recording it to a file and then sending the complete, finished file over the internet for playback on another machine.
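If you go the live-streaming route, a common pattern is to push each frame over a TCP socket with a length prefix, so the receiving PC knows where one frame ends and the next begins. The Python sketch below is only an illustration of that framing - the function names are my own invention, not SDK calls, and a synthetic byte buffer stands in for real camera frames (on the capture PC you would read frames from the camera SDK instead).

```python
import socket
import struct
import threading

def send_frame(sock, frame_bytes):
    # Length-prefix each frame so the receiver knows where it ends.
    sock.sendall(struct.pack("!I", len(frame_bytes)) + frame_bytes)

def _recv_exact(sock, n):
    # Keep reading until exactly n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock):
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

# Demo over loopback with a synthetic "depth frame".
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

fake_frame = bytes(range(256)) * 4  # stand-in for raw depth data

def serve():
    conn, _ = server.accept()
    send_frame(conn, fake_frame)
    conn.close()

t = threading.Thread(target=serve)
t.start()
client = socket.socket()
client.connect(("127.0.0.1", port))
received = recv_frame(client)
t.join()
client.close()
server.close()
print(received == fake_frame)  # True
```

Over the public internet you would also want compression and some form of authentication, but the framing idea is the same.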
Another way to transfer RealSense content over the internet would be in the peer-to-peer online networking system (called UNet) of the game creation engine Unity. I use UNet in my own RealSense-equipped Unity project to connect full-body avatar characters together in a single shared environment.
I believe our developer is using the Live Stream mode because we are trying to see if a person is in a particular position so we can take a photo (using a different camera). Does this new information help?
When you say 'position', do you mean that they are standing at a particular location (like security cameras that activate when motion is detected in their view), or that they are putting their body in a certain pose (e.g. bending over or holding an arm up in the air)?
I apologize. By position, I mean when their head is in a certain xyz spot. It's like a photo booth without the timer, we are using the camera to take the shot when a single head is in perfect position.
In my own project, the body is tracked to mirror the user's body almost 1:1, including their head movements. By creating objects to represent the human body like this, you can trigger events when the objects enter a certain range of coordinates or angles.
This may be a much more complex approach than you are looking for, though. It would be simpler to use the RealSense SDK's 'ProjectCameraToDepth' instruction to convert an image of a person that the camera sees into real-world coordinates, and then trigger the picture capture when the XYZ real-world coordinates are within a certain range.
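To make the trigger logic concrete: converting a depth pixel into real-world XYZ is the standard pinhole camera model, and the 'in position' test is just a per-axis tolerance check. The Python sketch below is illustrative only - the function names are mine (not the SDK's), and the intrinsics values are made-up plausible numbers; in a real app you would read the intrinsics and depth values from the SDK.

```python
def deproject_pixel_to_point(u, v, depth_m, fx, fy, ppx, ppy):
    """Convert a depth-image pixel (u, v) with depth in metres to a
    camera-space XYZ point using the standard pinhole camera model.
    fx/fy are focal lengths and ppx/ppy the principal point, all in
    pixels (supplied by the SDK as the stream intrinsics)."""
    x = (u - ppx) / fx * depth_m
    y = (v - ppy) / fy * depth_m
    return (x, y, depth_m)

def head_in_target_zone(point, target, tolerance_m=0.05):
    """True when the tracked point is within tolerance of the target
    XYZ position on every axis - the moment to fire the photo camera."""
    return all(abs(p - t) <= tolerance_m for p, t in zip(point, target))

# Illustrative numbers only: a head near the centre of a 640x480
# depth image at 1.2 m, with plausible intrinsics.
point = deproject_pixel_to_point(320, 200, 1.2,
                                 fx=475.0, fy=475.0, ppx=320.0, ppy=240.0)
if head_in_target_zone(point, target=(0.0, -0.1, 1.2), tolerance_m=0.05):
    print("capture!")  # trigger the separate photo camera here
```

The tolerance controls how 'perfect' the position has to be; 5 cm per axis is a reasonable starting point for a photo-booth head position.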
Marty, you are very helpful. I'll ask our developer if he is using ProjectCameraToDepth with the SR300. If so, can we use the USB3-over-fiber extender to our PC, which then sends the signal to our capture camera? It would be great if it was all housed inside our photo booth, but that is not an option.
I would think that it may be possible to incorporate the PC into the photo booth if you use a small form-factor PC. Intel has a range of powerful mini-PCs called NUC that are only 4x4 inch and can be hugely customized to your specific component and OS needs. NUCs were hugely popular with RealSense developers when RealSense was first launched in 2014, and were the most common machines being used with the camera because of the camera's rock-solid compatibility with them.
You could design your booth to have a cavity for a NUC at the rear of the booth, or above the booth ceiling.
Edit: it occurs to me that another option to explore would be the Intel Aero Compute Board and the Intel Vision Accessory Kit that can be attached to it. Aero is basically a small bare board designed for drones that uses the Yocto Project flavor of Linux to develop apps for it. The Vision Kit is a set of three cameras - an R200 RealSense camera, an 8 megapixel camera and a VGA camera - that can be connected to the Aero board. The 8 megapixel camera or VGA camera could act as the capture camera. You can also buy a pre-made enclosure for the Aero board.
Yes, I was thinking this would be our Plan B option. In our server room we have a #2 PC that is storing all the photos, another #3 PC that is doing some heavy photo processing, an Audio Interface Machine that needs to send audio to 2 locations, Network Switcher to connect online, etc.
On top of that:
#1 PC is running a Touch Screen Interface in the booth, handling the RealSense Camera, outputting two audio feeds, sending video to Touch Screen and outputting HD picture to our Projector.
Not sure #1 PC can go inside booth.
Maybe a NUC can handle just the RealSense camera information and send the command back to the server room to take the photo? Would this be your recommendation?
I would think that one of the latest 7th generation Kaby Lake NUCs would have enough grunt to run both the RealSense camera and the touchscreen and video. The following article has specs on the Kaby Lake NUC range.
The 8th generation Coffee Lake processor is also due out this year, so I would expect some even more powerful Coffee Lake NUCs to become available around 2018.
I totally understand the need to have the other machines in the back office for data security reasons and to do the work that the booth does not need to be involved with.
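Regarding the handshake with the server room: the trigger itself can be a tiny network message with an acknowledgement, so the NUC knows the photo actually fired. The Python sketch below, over UDP on loopback, is just one way to structure it - the command names and the stand-in callback are my own invention, and in a real deployment you would use the server-room PC's LAN address.

```python
import socket
import threading

CAPTURE_CMD = b"CAPTURE"
ACK = b"ACK"

def send_capture_command(sock, server_addr, timeout_s=1.0):
    """NUC side: tell the server-room PC to fire the photo camera and
    wait for an acknowledgement, so a failed trigger is detectable."""
    sock.settimeout(timeout_s)
    sock.sendto(CAPTURE_CMD, server_addr)
    reply, _ = sock.recvfrom(16)
    return reply == ACK

# Server-room side: bind first so no trigger is lost, then wait for
# one command, run the (stand-in) photo-camera callback, and reply.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_sock.bind(("127.0.0.1", 0))   # real use: the server's LAN IP
server_addr = server_sock.getsockname()
fired = []

def serve_one_trigger():
    cmd, client = server_sock.recvfrom(16)
    if cmd == CAPTURE_CMD:
        fired.append(True)           # here: actually take the photo
        server_sock.sendto(ACK, client)

t = threading.Thread(target=serve_one_trigger)
t.start()
client_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ok = send_capture_command(client_sock, server_addr)
t.join()
client_sock.close()
server_sock.close()
print(ok, fired)  # True [True]
```

For an installation that must run unattended for years, you would wrap this in retries and logging, but the one-command-one-acknowledgement idea keeps the NUC's job very small.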
The other issue I'm forgetting: for this project to be successful, it must run by itself for years. So having the NUC do all the handling and cross-communication worries me. But if it can reliably handle the RealSense, stay cool, and be accessible remotely, then that is probably our solution. One machine for one task is OK in this situation. A USB3 extender costs almost twice the price of a decent NUC. It will just need to be developed to handshake with our other computer inside the server room.
I don't think the NUC reliability with RealSense is in question. I haven't seen any requests on the RealSense forums for help with NUC in about 2 years. Also, I have been using an old F200 RealSense camera (the predecessor of the SR300) for three years solid on an almost daily basis and it is still working fine. RealSense cameras are powerful and reliable little beasts even when you push their capabilities hard day in, day out for years.
You can also get fanless NUC cases that act as a giant heatsink, reportedly provide superior cooling and quietness, and remove the only moving part from the NUC, helping to ensure longevity.
We ended up moving our computer closer to the RealSense. We are now using the non-barebones model. We are noticing a lot of heat, but I'll save that question for another thread. Thanks for your help.