# Transforming point cloud from aligned depth frame back into depth coordinates (D415)

**_xct** · Oct 22, 2018 8:40 AM

**Problem:**

With RealSense D415, I use color camera data to modify a depth map, then calculate a point cloud from it. In order to do this, I have to align the depth frame to the color frame.

The main issue is that the resulting point cloud is in the coordinate system of the color sensor and not the left imager (like all the other point clouds).

I want to transform the point cloud back to the coordinate system of the depth sensor.
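For context, mapping a point between the two sensors is just the rigid transform `p_depth = R @ p_color + t` given by the color->depth extrinsics. A minimal NumPy sketch (the helper name `color_to_depth` is made up for illustration; note that librealsense documents `rs2_extrinsics.rotation` as a *column-major* 3x3 matrix, so a plain row-major `reshape((3, 3))` gives you the transpose):

```python
import numpy as np

def color_to_depth(points, rotation_9, translation_3):
    """Map (N, 3) points from color-sensor to depth-sensor coordinates.

    rotation_9:    the 9-element rs2_extrinsics.rotation list, which
                   librealsense stores column-major, hence order='F'.
    translation_3: the 3-element rs2_extrinsics.translation list.
    """
    R = np.asarray(rotation_9, dtype=np.float64).reshape((3, 3), order="F")
    t = np.asarray(translation_3, dtype=np.float64)
    # p' = R @ p + t for every row; row-vector form uses R.T on the right.
    return np.asarray(points, dtype=np.float64) @ R.T + t
```

With an identity rotation this reduces to a pure translation, which is close to the D415 case (the color and left-imager origins differ mainly by the baseline along x).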

**Current result:**

I am trying to use the color->depth extrinsics to transform the point cloud back into the original depth coordinate system, but the result is misaligned.

Color to depth extrinsics:

```
Transform(
    Orientation(
        [[ 0.9999483   0.0047962   0.00896312]
         [-0.00485901  0.9999637   0.00699891]
         [-0.00892923 -0.0070421   0.9999353 ]]
    ),
    Vector(-0.014856948517262936, 0.00011981908755842596, -6.095561184338294e-05)
)
```
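As a sanity check (my addition, not part of the original post), the orientation above should be a proper rotation matrix: orthonormal with determinant +1. A quick NumPy check on the printed values:

```python
import numpy as np

# The orientation printed above, as a plain array.
R = np.array([[ 0.9999483,   0.0047962,  0.00896312],
              [-0.00485901,  0.9999637,  0.00699891],
              [-0.00892923, -0.0070421,  0.9999353 ]])

# A valid rotation satisfies R @ R.T == I and det(R) == +1
# (both also hold for its transpose, so this cannot detect a
# row-major vs column-major mix-up on its own).
assert np.allclose(R @ R.T, np.eye(3), atol=1e-5)
assert np.isclose(np.linalg.det(R), 1.0, atol=1e-5)
```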

**Desired result:**

The two point clouds should end up in the same coordinate system (in this case, identical, since both are computed from the same frameset).

**Relevant code:**

```python
# not aligned
frameset = camera.pipeline.wait_for_frames()
depth_frame = frameset.get_depth_frame()
color_frame = frameset.get_color_frame()

# get color->depth sensor transform
extrinsics = color_frame.profile.get_extrinsics_to(depth_frame.profile)
orient = m3d.Orientation(np.array(extrinsics.rotation).reshape((3, 3)))
pos = m3d.Vector(extrinsics.translation)
transform = m3d.Transform(orient, pos)

depth_frame = postprocessing.process([depth_frame])  # only spatial filter in here
depth = np.asanyarray(depth_frame.get_data())
points = camera._pc.calculate(depth_frame)
vertices = np.asanyarray(points.get_vertices())
pc = PointCloud(vertices.view(np.float32).reshape((vertices.size, 3)))
pc.save('not_aligned.npy')

# aligned
align = rs.align(rs.stream.color)
# frameset = camera.pipeline.wait_for_frames()  # working with the same frameset to test similarity
frameset = align.process(frameset)
depth_frame = frameset.get_depth_frame()
depth_frame = postprocessing.process([depth_frame])
depth = np.asanyarray(depth_frame.get_data())
points = camera._pc.calculate(depth_frame)
vertices = np.asanyarray(points.get_vertices())
pc = PointCloud(vertices.view(np.float32).reshape((vertices.size, 3)))
pc.save('aligned.npy')

# transform pc back using color->depth extrinsics
pc = transform * pc
pc.save('transformed.npy')  # must be similar to not_aligned, but isn't
```
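One thing worth checking when the transformed cloud is misaligned is the direction of the transform: the color->depth extrinsics must be exactly the inverse of depth->color, and accidentally applying the wrong one (or a transposed rotation) produces a small but visible offset. A minimal sketch of inverting a rigid transform by hand (the helper name is hypothetical, not a librealsense API):

```python
import numpy as np

def invert_extrinsics(R, t):
    """Invert a rigid transform: if p' = R @ p + t,
    then p = R.T @ p' - R.T @ t."""
    R = np.asarray(R, dtype=np.float64)
    t = np.asarray(t, dtype=np.float64)
    return R.T, -R.T @ t
```

Fetching both `color_frame.profile.get_extrinsics_to(depth_frame.profile)` and the reverse direction, then confirming one equals the inverse of the other, rules out a direction mix-up.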