Another developer and I had a similar idea a while back: using smartphone zoom-lens peripherals with RealSense to see whether magnifying the perceived size of hands and faces would extend tracking range, but we never tried it in practice. So it would be fascinating to hear what results you get if you try it!
This won't work, unfortunately. As you note, there are three cameras in the R200 (one color and two IR). They are carefully calibrated with respect to each other, and their lens distortions have been measured and accounted for. By the way... see the image below... it was taken by pointing one R200 at another and looking at the IR output; the front panel is transparent in IR.
Adding external lenses (even if you could line them up with the IR cameras...) will throw off this calibration, and wide-angle lenses are notorious for having strange distortions (for instance, all rays tend not to pass through a common point, so you can't even in principle warp the images back to a projective view). Finally... making the laser projector wider-angle probably won't do much beyond decreasing range and accuracy (by spreading out the dots). The RealSense doesn't care so much about the actual pattern as about the density of dots and the fact that there IS texture on the surfaces.
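To see why "all rays through a common point" matters: the pinhole model behind stereo rectification assumes every pixel corresponds to a ray through a single optical center, so calibrated undistortion is just a per-pixel remap. Here's a sketch of that model with Brown-Conrady radial distortion; the intrinsics and distortion coefficients below are made-up placeholders, not actual R200 calibration values:

```python
def project_pinhole(X, Y, Z, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project a 3D point (meters, camera frame) to pixel coordinates
    through a pinhole camera with Brown-Conrady radial distortion.
    This model is only meaningful when all rays pass through one optical
    center; a cheap add-on wide-angle lens generally violates that."""
    x, y = X / Z, Y / Z                       # ideal normalized image coords
    r2 = x * x + y * y
    d = 1 + k1 * r2 + k2 * r2 * r2            # radial distortion factor
    return fx * d * x + cx, fy * d * y + cy   # distorted pixel coordinates

# Made-up intrinsics for illustration only:
u, v = project_pinhole(0.1, 0.05, 1.0, 600, 600, 320, 240, k1=-0.1)
```

If this model holds, the distortion can be calibrated out once and applied as a fixed warp. A lens whose rays don't share a center can't be described by any such per-pixel remap, which is the "can't even in principle" part.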
What WILL work to increase the FOV is adding more cameras. The R200 has a 56 degree by 44 degree FOV, so a full circle would take 360/56 ≈ 6.4 cameras horizontally or 360/44 ≈ 8.2 vertically. After some experimentation I found that if I reduce the frame rate to 30Hz, I can hang three R200s off one Joule (one USB 3.0 root port). If you overlap their fields of view slightly, you get about 150 degrees with the cameras in a horizontal orientation and about 130 degrees in a vertical orientation. Here is a config I am playing with (it happens to be a ZR300 and two R200s, but three R200s would also work):
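To make the coverage numbers concrete, here's the back-of-the-envelope math; the ~9-degree horizontal and ~1-degree vertical overlaps are assumptions I picked to match the figures above, not measured values:

```python
def combined_fov(per_camera_fov_deg, num_cameras, overlap_deg):
    """Total field of view of a fan of cameras where each adjacent
    pair overlaps by overlap_deg degrees."""
    return num_cameras * per_camera_fov_deg - (num_cameras - 1) * overlap_deg

# R200: 56 degrees horizontal x 44 degrees vertical
print(combined_fov(56, 3, 9))   # horizontal fan: 3*56 - 2*9 = 150 degrees
print(combined_fov(44, 3, 1))   # vertical fan:   3*44 - 2*1 = 130 degrees
print(360 / 56, 360 / 44)       # full circle: ~6.4 or ~8.2 cameras
```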
PS: by "won't work" I mean "the camera may give you some data, but it will probably be extremely hard to interpret." In particular, you could look at the disparity output, and IF you were able to account for all the distortions in your lenses and calibrate the system somehow, you might be able to recover depth from it. But that would be a huge project, and frankly it would probably be easier to just start with a pair of off-the-shelf wide-angle cameras and your own projector.
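For reference, the relation you'd be fighting to recover is the standard rectified-stereo formula, Z = f * B / d. The numbers below are illustrative assumptions (a ~70 mm baseline roughly like the R200's IR camera spacing, and a placeholder focal length), not calibration data:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified-stereo depth: Z = f * B / d. Only valid after full
    calibration and rectification, which is exactly what external
    add-on lenses would destroy."""
    return focal_px * baseline_m / disparity_px

# Assumed numbers for illustration only:
z = depth_from_disparity(30.0, 600.0, 0.07)   # 600 * 0.07 / 30 = 1.4 m
```

Note the inverse relationship: halving the disparity doubles the reported depth, which is why small calibration errors in a hacked-together lens rig blow up badly at range.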