A couple of months ago, Jesus G from Intel customer support said to another UWP user, "There is no UWP API that is specific to the RealSense cameras. The purpose of the driver is to allow the cameras to work with any UWP app using the Microsoft APIs."
The full discussion, if you have not seen it already, can be viewed here:
Hey Marty, thanks for linking to the other post. I will reply there as well now.
That is exactly the issue with the RealSense UWP driver: it does not provide the proper API mapping. The CoordinateMapper and the CoordinateTransform just return null. They should provide the mapping between the streams, such as depth to RGB or RGB to depth.
The data is there in Intel's native SDK as extrinsics, but it is not exposed via the UWP Windows.Media.Capture API. This is preventing our holographic telepresence solution from running with RealSense, so we won't buy more RealSense devices for now, which could have been a lot for our customers.
We are hoping Intel will fix that flaw in their UWP driver, or provide a workaround in the meantime, such as a static transformation matrix if the devices are manufactured to similar tolerances. Right now it's a show stopper.
I researched your case carefully regarding the possible workaround you mentioned of static transforms. Apparently, others have been able to publish the static transform of an older R200 RealSense camera model using an instruction in the ROS software called tf_static. The new SDK 2.0 for the 400 Series cameras is fully compatible with ROS, so I would expect that tf_static ought to be usable on these camera models too.
The ROS documentation says "it is expected that any transform on this topic can be considered true for all time. Internally any query for a static transform will return true".
ROS has a wiki page on the subject.
Thanks. I just need the matrix Depth to RGB in plain text so I can hardcode it for now. Do you have it handy?
But in fact Intel needs to fix their UWP driver to support proper coordinate mapping; otherwise it's not useful at all. I mean, Microsoft advertises it as the Kinect replacement, so I'd expect Intel and MSFT to work together to provide something equally good.
There is a script for SDK 2.0 that contains formulas for calculating its distortion and transform, if that is any use to you.
Hello @rschu and @rosme,
Intel and Microsoft are working together on a fix. I will post here when I have an update. For now, you can use this workaround in the CameraStreamCorrelation sample.
Please add the following lines under FrameRenderer.cpp:126:
CameraIntrinsics^ cameraIntrinsic = colorFrame->VideoMediaFrame->CameraIntrinsics;
SpatialCoordinateSystem^ spatialCoordinateSystem = colorFrame->CoordinateSystem;
if (spatialCoordinateSystem == nullptr)
{
    return; // No coordinate system available; the mapper cannot be created.
}
// Create the coordinate mapper used to map depth pixels from depth space to color space.
DepthCorrelatedCoordinateMapper^ coordinateMapper = depthFrame->VideoMediaFrame->DepthMediaFrame->TryCreateCoordinateMapper(cameraIntrinsic, spatialCoordinateSystem);
Thank you Intel! Great to hear.
Any ETA when the fix will be in the driver?
Also, is there a way to get the 4x4 matrix with a workaround and not use the CoordinateMapper?
We are doing a manual registration / mapping based on camera intrinsics and extrinsics. This call would normally provide a 4x4 matrix for doing the transformation:
var matDepthToRgb = depthSource.Info.CoordinateSystem.TryGetTransformTo(colorSource.Info.CoordinateSystem);
I tried the suggested workaround but it fails the same way, with a null CoordinateSystem.
This check evaluates to true, so the code returns early:
if (spatialCoordinateSystem == nullptr)
Any ETA when a fix will be delivered? A coarse date would be fine. Like next month or are we talking about H2 of 2018?
Thank you again.
Thank you for your reply.
We will be letting you know as soon as we have any update.
Thank you for your patience and understanding.
Intel Customer Support