This section of the librealsense manual explains the distortion modes.
The SR300 uses Inverse Brown-Conrady distortion for its depth and IR streams, whilst Modified Brown-Conrady is used by the R200's color stream. The None mode can, I assume, be used with all camera models.
The appendix notes for the SR300 say that color images have no distortion, so you can skip the distortion step when projecting.
I did read that page a day or so ago, and it didn't seem overly useful. It was informative, but it doesn't say whether the underlying data is already 'calibrated' or not. If it is already calibrated, yet still shows such severe distortion, then the calibration or the model is not very effective.
The SR300 appendix says: "The depth and infrared images will always use the Inverse Brown-Conrady distortion model", but it doesn't explain how to apply the distortion correction (i.e. which API calls are available) to get a result equivalent to distortion mode NONE. I don't care about overlaying RGB on depth; I simply want the depth data itself to be undistorted (within reason, of course).
Is it true that I can use the "rs_deproject_pixel_to_point" function to convert the distorted Inverse Brown-Conrady depth image into an equivalent "NONE"-distortion image?
edit: It looks like there's no direct way to replace the depth image with an undistorted one. The API calls simply produce a calibrated 3D point from a given pixel (X, Y) and its depth value, one pixel at a time. Intel really should provide a way to request a calibrated depth stream; as it stands I need a pile of ugly boilerplate (probably inefficient, too) to get something that should be standard.
I will have to use these calls to get the XYZ points and then rebuild a depth image in the same format as before.
I just tested this now:
I went through and used the depth values (converted to metres) to compute XYZ points with the "rs_deproject_pixel_to_point" function, then plotted the points into a 'calibration image' with 1x1 mm spacing, shifting everything into positive space first.
The image looked just as bad as the original depth image: big, curled, distorted edges on what should have been a flat table top.
Are there confirmed results from people who have re-done the calibration process and obtained better results than the factory calibration?
edit: Because I'm using librealsense right now, I also posted an issue on GitHub: SR300 depth distortion · Issue #565 · IntelRealSense/librealsense · GitHub
It might be this; it's most obvious at the four corners, where the depth data for what is a flat object appears to curl up or down.
The user Alex, who wrote the barrel-distortion post I linked to, said they were going to try to create "a distortion correction using the Stream calibration parameters." Users of the old Kinect camera, which also suffered from barrel distortion, likewise concluded that they would probably have to write their own distortion-correction filter to solve the problem.