This message was posted on behalf of Intel Corporation
Thank you for your interest in Intel® RealSense™ Technology.
To answer your questions:
1) The specifications indicate you need a 6th gen Intel CPU. However, it also says that this was related to the testing scenario. Does the system run on older processors?
The D400 series cameras can run with any Intel or ARM processor. The specifications are incorrect and will be corrected soon. The only platform requirement for the camera to run is a USB 3.0 port.
2) It's a bit odd that the FOV of the depth sensor is larger with the 435, but the image sensor FOV remains smaller and the same as that of the 415. I presume that means you can't retrieve any colour information for depth pixels outside of the image sensor FOV. If so, approx how many pixels are covered?
This is not entirely straightforward. The streams from the individual sensors can be mapped to each other through a process called projection, using the calibrated intrinsics of each sensor and the extrinsics between them. You should be able to find some information in this link (search for the word "Projection"): https://github.com/IntelRealSense/librealsense/blob/165ae36b350ca950e4180dd6ca03ca6347bc6367/third-party/realsense-file/rosbag/msgs/sensor_msgs/CameraInfo.h.
This link contains projection information for the legacy RealSense cameras but will give you an idea of what is involved: https://github.com/IntelRealSense/librealsense/blob/legacy/doc/projection.md. We are working on getting better documentation that describes how projection works on the D400 series.
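To give a feel for what the projection process involves, here is a minimal sketch of the depth-to-color mapping using a simple pinhole model. The intrinsics, rotation, and translation values below are made up purely for illustration; real values come from the camera's factory calibration (exposed by librealsense), and the SDK provides functions that perform these steps for you.

```python
# Sketch of depth-to-color projection with a pinhole camera model.
# All calibration numbers below are hypothetical, for illustration only.

def deproject(pixel, depth, fx, fy, ppx, ppy):
    """Map a depth pixel plus depth value (metres) to a 3D point."""
    u, v = pixel
    x = (u - ppx) / fx * depth
    y = (v - ppy) / fy * depth
    return (x, y, depth)

def transform(point, rotation, translation):
    """Apply the rigid-body extrinsics (3x3 rotation, 3-vector translation)."""
    return tuple(
        sum(rotation[r][c] * point[c] for c in range(3)) + translation[r]
        for r in range(3)
    )

def project(point, fx, fy, ppx, ppy):
    """Project a 3D point back into pixel coordinates."""
    x, y, z = point
    return (x / z * fx + ppx, y / z * fy + ppy)

# Hypothetical intrinsics: a wide-FOV 1280x720 depth sensor and a
# narrower-FOV 1920x1080 colour sensor, offset by ~15 mm.
DEPTH_INTRIN = dict(fx=640.0, fy=640.0, ppx=640.0, ppy=360.0)
COLOR_INTRIN = dict(fx=1400.0, fy=1400.0, ppx=960.0, ppy=540.0)
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity rotation
T = [0.015, 0.0, 0.0]  # 15 mm translation between the sensors

# A depth pixel at the left edge of the wider depth FOV, 1 m away:
point = deproject((0, 360), 1.0, **DEPTH_INTRIN)
color_pixel = project(transform(point, R, T), **COLOR_INTRIN)
print(color_pixel)  # u is negative: outside the colour image, no colour sample
```

This illustrates the situation you asked about: a depth pixel near the edge of the wider depth FOV can project to coordinates outside the colour image, in which case it simply has no colour information. How many pixels that affects depends on resolution, calibration, and scene depth.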
3) What is the actual depth sensing technology at work here? There is an IR beam and stereo cameras. Is the IR beam just providing a flood illumination source? Is there any structured light source being used? Is depth extracted by comparing the features in stereo images?
The primary means of depth detection is stereo vision: depth is triangulated by matching features between the left and right imagers. This is assisted by an IR projector on the module, which casts a structured-light pattern onto the scene to add texture, improving stereo matching on otherwise featureless surfaces.
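The triangulation step above can be sketched in a few lines: once stereo matching has found the horizontal shift (disparity) of a feature between the two imagers, depth follows directly from the baseline and the focal length. The baseline and focal length below are assumed values for illustration, not actual D400 calibration data.

```python
# Minimal sketch of stereo triangulation: depth = baseline * focal / disparity.
# The projector's structured-light pattern exists so that even blank surfaces
# produce features that the stereo matcher can find.

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Triangulate depth in metres from disparity in pixels."""
    if disparity_px <= 0:
        return float("inf")  # feature at infinity, or no match found
    return baseline_m * focal_px / disparity_px

BASELINE = 0.050  # assumed 50 mm spacing between the two imagers
FOCAL = 640.0     # assumed focal length in pixels

# A feature shifted 32 px between the left and right images:
print(depth_from_disparity(32.0, BASELINE, FOCAL))  # 1.0 (one metre)
```

Note the inverse relationship: nearby objects produce large disparities and are measured precisely, while distant objects produce small disparities, so depth error grows with range.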
4) It seems that multiple sensors can work at the same time. Can they be synchronised? Is there any risk of interference if you add more and more sensors to the same scene?
Yes, multiple cameras can be used at the same time. This feature is still being developed and should be fully functional in Q1'2018. There may be interference if the IR projector of one camera points directly into another camera's imagers, so the cameras should be aimed at the object or scene rather than at each other.
I hope you find this information useful.
Please do not hesitate to contact us again if you need further assistance.
Intel Customer Support