Yes, it is true that most of the tools and documentation for using RealSense with Unity relate to the previous version, '2016 R2'. R3 has a very different structure from the previous SDKs, so the only sample available for it is RawStreams, and the old tools do not work with it.
If you want to continue using the R3 SDK then I have authored a step by step guide to setting up RealSense in R3.
For a new Unity user though, you may find it easier to use the R2 SDK because of the RealSense tools available for it in its Unity Toolkit. I have also authored a very large range of step by step guides to RealSense in R2.
I apologize that I have not had time to write similar documentation for R3 aside from the setup guide.
If you need to use R2, you can download it directly via the following link once R3 has been uninstalled.
Thanks for the response, I will try to install the previous version. The only module I need to implement in Unity is user background segmentation. Is it possible to integrate this module within Unity? Has anyone done it before with R3? I couldn't find anything on the internet or even here.
Is it possible to get a higher resolution from the color stream when working with the segmentation module, or is 1280x720 the maximum? Also, is it possible to limit the depth distance when using the segmentation module, or is it predefined and unchangeable?
Most camera features can be made to work in Unity if you know how to convert them. Since Unity uses C# code, I would use the C# Segmentation sample's source code as the starting point for the conversion to Unity.
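As a very rough sketch of what such a conversion might look like, here is the general shape of a Unity script wrapping the pipeline. The class and method names follow the R3-style API shown elsewhere in this thread (`Seg3D.Activate`); the exact namespaces and frame-handling details depend on your SDK install, so treat this as an assumption-laden outline rather than working code:

```csharp
using UnityEngine;

// Hedged sketch: hosting the RealSense segmentation pipeline in a Unity MonoBehaviour.
// Assumes the R3 managed assembly (Intel.RealSense) is referenced by the Unity project.
public class SegmentationBehaviour : MonoBehaviour
{
    private SenseManager sm;
    private Seg3D seg;

    void Start()
    {
        sm = SenseManager.CreateInstance();
        seg = Seg3D.Activate(sm);   // enable the segmentation module on the pipeline
        sm.Init();                  // initialise the camera and module
    }

    void Update()
    {
        // Non-blocking acquire so the Unity render loop is not stalled.
        if (sm.AcquireFrame(false) == Status.STATUS_NO_ERROR)
        {
            // ...read the segmented image from the module and copy it
            // into a Texture2D here...
            sm.ReleaseFrame();
        }
    }

    void OnDestroy()
    {
        sm.Dispose();
    }
}
```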
I remembered that back in 2014 with the original RealSense SDK, I tried adapting a C# script to Unity and wrote a short piece about it. As I didn't get much feedback on it, I'm not sure how well the process works for other people but I'll link you to it in case you want to have a look.
I would think that the color stream could go as far as the SR300's maximum RGB resolution of 1920 x 1080 / 30 FPS (this is a guess rather than known fact as far as the capabilities of segmentation go), though if you needed 60 FPS then 1280 x 720 is the appropriate choice for you.
The depth range of the camera can be altered in scripting with the SetIVCAMFilterOption feature.
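As a rough illustration, calling that feature from C# looks something like the snippet below. The option value `5` is an arbitrary example and the exact meaning of each value should be checked against the SDK's Device documentation:

```csharp
// Hedged sketch: adjusting the camera's depth range via the IVCAM filter option.
// Requires the RealSense 2016 SDK's managed assembly (libpxcclr.cs).
PXCMSenseManager sm = PXCMSenseManager.CreateInstance();
sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_DEPTH, 640, 480, 30);
sm.Init();

PXCMCapture.Device device = sm.QueryCaptureManager().QueryDevice();
// Filter option values run roughly from long-range to very-close-range presets;
// lower values generally extend the usable depth distance. '5' is only an example.
device.SetIVCAMFilterOption(5);
```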
Edit: I tracked down an Intel article on segmentation in Unity 5.
Thanks for the help. I followed the instructions and, with trial and error, I was able to get Unity to work with my camera. I was wondering if you could help me with the image size I can get from the segmentation module. On this page: Intel® RealSense™ SDK 2016 R3 Documentation, it says that the output will be [color 640x360 / depth 640x480], and that's the only thing I can get. Do you know the procedure for changing it?
Should I set these parameters before initializing the SenseManager, or afterward?
The documentation for segmentation appears to contradict itself. The opening introduction page of the segmentation section that you linked to implies that both color and depth resolutions can be changed. In a reply to a similar question in January 2017, though, Intel support staff member Pablo said, after investigating the R3 documentation, that "it seems like the only property that can be set / changed in the Seg3D module is FrameSkipInterval But there is no other configuration possible".
The Properties section of the Seg3D documentation seems to back this up.
Isn't that strange? There is another sample for the segmentation module, written in C#, that is able to switch between different color and depth resolutions. Does the limitation you're referring to apply only to the Seg3D module for Unity, or to all of them? And if this limitation exists, will there be a new SDK that solves it, or does Intel have no plan to update it?
The documentation on segmentation is not entirely clear. Pablo's interpretation of the 'User Segmentation' intro section was that you could choose color resolutions, but the depth resolution was fixed at 640x480. Then, in regard to the Seg3D module, he concluded that FrameSkipInterval was the only thing you could really change. This was the obvious conclusion to draw from the selection of options listed on the Properties page of the segmentation docs.
I think the key to understanding these discrepancies is whether the User Segmentation section and the Seg3D section are talking about two different systems, rather than it all being about the same module (one about segmentation in general, and the other about the Seg3D sample in particular).
The User Segmentation section said that the output resolution of the color mode was dependent on the input resolution that you set. So if you set Input resolution to a particular setting, the Output resolution will match it.
I ran the 3D Segmentation sample in the Sample Browser (3DSeg.cs) to try to see what the documentation is describing. You can change resolution in that sample using the 'Profile' menu of the sample. Each menu option lists a Color and a Depth setting, and confirms that the Depth resolution is the same for all options.
I wasn't able to find information about how to change the Color resolution with SDK scripting. But since RealSense samples come with the source code, you could probably find the code you need to change resolution in that.
I would find it very strange if either of them were limited to 640x480 resolution, though when I try to acquire the image from the SenseManager, I get an image with 640x360 resolution. I looked into the Seg3D sample project for C#, but I couldn't port it to Unity. I've tried, but apparently the camera doesn't let me make any changes.
I also found this page from SDK 2016 R3: Intel® RealSense™ SDK 2016 R3 Documentation, which again says the color resolution can be set up to 1280x720.
If I am reading the documentation page correctly, if your program is going to use more than one module then you need to set a single resolution in the SenseManager that all the camera modes (face tracking, segmentation, etc) will use, and you should not try to set separate resolutions for each aspect of tracking.
Advanced stream programming is not in my skill-set, though I have enough knowledge to read and understand such a script. As such, the following advice is based on a reasonable guess rather than certain knowledge. I believe that if you want to set a resolution of 1280x720, you should use a line such as this in the SenseManager section of your code:
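The line I have in mind (my reconstruction, based on the standard `PXCMSenseManager` C# API, so treat it as a sketch):

```csharp
// Request the 1280x720 colour stream at 60 FPS before calling sm.Init().
sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_COLOR, 1280, 720, 60);
```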
which gives a resolution of 1280x720 at a frame rate of 60 FPS.
whilst depth would be set with:
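The depth line would, I assume, follow the same pattern (again a guess at the original line rather than known-correct code):

```csharp
// Request the 640x480 depth stream at 60 FPS, alongside the colour stream.
sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_DEPTH, 640, 480, 60);
```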
The above lines were adapted from the script on this page:
The above approach, to my knowledge, only works when you want to have a raw stream. When I try to have:
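By which I mean a stream request such as the following (my best guess at the line being referred to):

```csharp
// Explicit colour stream request, which appears to conflict with Seg3D.Activate().
sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_COLOR, 1280, 720, 60);
```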
and at the same time having:
seg = Seg3D.Activate(sm);
then the camera becomes unavailable. I know that with the StreamReader I can increase the resolution, but then I won't be able to use the segmentation module.
If there is no way to increase Seg3D's output resolution, then I might go with the raw stream and do my own segmentation. Do you have any suggestion/recommendation on that (even though you said it's not your skill-set)?
I've never used 3DSeg either, though I do manually do something similar in C# (WPF): I have the colour & depth streams going, map depth points to the colour image, and colour each pixel in the colour image depending on its depth value. So you could easily do "if depth greater than [whatever value you like] set pixel black or transparent or whatever", assuming the code transfers to Unity alright. Search the forum for "QueryInvUVMap" and you should find some snippets I've posted to get you started.
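To illustrate the approach described above, here is a simplified sketch of the per-pixel loop. Stream setup, buffer acquisition, and error checks are omitted; `colorPixels`, `depthData`, and the dimension variables are assumed to have been filled in already, and the 750 mm cutoff is an arbitrary example value:

```csharp
// Hedged sketch: manual background removal by depth thresholding, using
// PXCMProjection.QueryInvUVMap to map colour pixels to depth pixels.
PXCMProjection projection = device.CreateProjection();
PXCMPointF32[] invUV = new PXCMPointF32[colorWidth * colorHeight];
projection.QueryInvUVMap(depthImage, invUV);

for (int i = 0; i < invUV.Length; i++)
{
    // invUV entries are normalised [0..1] coordinates into the depth image;
    // negative values mean no depth data exists for that colour pixel.
    if (invUV[i].x < 0 || invUV[i].y < 0) { colorPixels[i] = 0; continue; }

    int dx = (int)(invUV[i].x * depthWidth);
    int dy = (int)(invUV[i].y * depthHeight);
    ushort depthMm = depthData[dy * depthWidth + dx];

    // Anything farther than 750 mm (example cutoff), or with no reading at all,
    // becomes transparent (ARGB value 0).
    if (depthMm == 0 || depthMm > 750)
        colorPixels[i] = 0;
}
```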