The information posted and the FAQ seem to suggest that (1) there will be an SDK 2.0 for Windows which supports only the SR300 and D4xx cameras; (2) this SDK is separate from, and different from, the cross-platform API; and (3) it will contain similar functionality to the (now discontinued) Windows SDK. I suppose the obvious question is: do you have any information about when this SDK 2.0 might be available? Just for the avoidance of doubt, can you confirm the accuracy of points (1) to (3) above? I suppose I also ought to ask whether the reference to GitHub in relation to the SDK 2.0 means that Intel intends to open-source the code of the SDK 2.0.
I refer in particular to the passage in the discontinuation notice that reads: "For the Intel® RealSense™ SDK 2.0—our next generation SDK—support will only be available for Intel® RealSense™ cameras SR300 and D400-Series through GitHub" and also later in the Frequently Asked Questions, reply to the first question "The new software support will be aligning to our camera, supporting the SR300 and the D400-Series cameras." This seems to imply that another SDK will be available, although it does not say that it supports Windows.
Many thanks for any light you can shed on this.
The fact that the new SDK is open-source is confirmed on the information page that promotes the new D415 and D435 cameras.
I do not have any information about when the 2.0 SDK will be released, though it is highly likely to be around the same time as the D-camera launch in September. No release date is available yet. I recommend keeping an eye on the RealSense section of the Intel Click online store. If a date is announced in a notice on the forum, I will make a post to draw it to users' attention.
The 'Windows 10 RS3' referred to as the means of providing Windows support refers to Windows 10 Redstone 3, also known as the Fall Creators Update. The FAQ says: "We plan to make the Intel RealSense SDK for Windows available through the release of Windows 10 RS3". Since the Windows SDK is already available, my interpretation of this sentence is that Intel plan to keep offering the Windows SDK while the Fall Creators Update is the current Windows version, but make no commitment to keep offering it in major Windows builds after the Fall Creators Update.
Regarding your questions:
1. I am not an employee of Intel, so if Intel say that the SDK 2.0 supports the D-cameras and the SR300, then those are the facts of the matter as far as I know them.
2. Each time the information says 'Intel RealSense Cross Platform API', it is referring to Librealsense. There is not enough information available yet for me to make an educated guess about the nature of SDK 2.0, but I believe it will technically be a very different product from Librealsense.
3. It is not possible to speculate on the feature list of SDK 2.0 either. Even in the previous Windows SDKs, features were added or removed between releases. The information page for the D-cameras may give clues about the kind of new features those cameras may support (which presumably the SR300 will not, as it lacks the necessary hardware). But ultimately the best advice is to wait for further details to become available when the D-cameras are released.
Based on the information available, I believe that support for the F200, R200 and ZR300 cameras (and the SR300, for those who want to keep using it that way) will be provided by Librealsense, and through the older Windows SDKs at least for as long as Redstone 3 / the Fall Creators Update is the current major version of Windows.
Regarding the question of open-sourcing the code of the SDK 2.0: whilst a certain amount of code is open-sourced, Intel tends not to release the contents of algorithms it has created, as they are Intel's protected intellectual property.
That's very interesting. I had thought you were an Intel employee, hence my presumption that you might know more than the rest of us. I'm reassured that you seem to have interpreted Intel's communications in much the same way as I did -- namely that there will be an SDK 2.0, which will not be the same as librealsense (and hopefully not the same as the current Linux SDK, which seems to do practically nothing). It also seems likely that Intel will announce the SDK at the same time as releasing the D4xx cameras, because how could they not?
I suppose I'm less concerned about the continuing availability of the current SDK, as it's currently broken for the SR300 which is the device I'm most interested in using. It would be nice if Intel were to fix this, but given they're discontinuing the SDK I don't see much hope for that.
Anyway, thanks for your views on this subject, they've been very helpful.
I'm glad the information provided was helpful to you.
A user reported that their previously malfunctioning SR300 was working for them consistently from Windows Insider Preview build 16273 onwards. The Windows issue with the SR300 is one that affects some PCs and not others though, so we will have to wait until the new Windows builds become more widely distributed before we can tell whether this user's success was an isolated case or not.
Ok, thank you very much for your feedback. Admittedly the user who got it working had special conditions on their PC as well as having the new build. Their camera started working after they installed the User Background Segmentation feature, presumably in the '2016 R3' SDK, where it is a separately installed module.
I mentioned this success recently to the Intel staffer who has been handling the SR300. This method will not be suggested as an official fix though, simply because there is apparently no single cause of, or solution for, everyone's SR300 problems. This is underlined by a comment posted immediately after the success report above, in which another user says they tried the User Segmentation installation method themselves and it did not work in their case.
I understand that there are multiple causes of this problem. Interestingly, since I wrote my last reply, I have been able to coax some life out of the SR300, albeit fleetingly. I de-installed and re-installed the DCM (126.96.36.19918) and the camera came alive, although it did not seem to return any data. Subsequent invocations of the same program lit up the OEM F200 which is also installed on the system. This is characteristic of the type of failure I've been seeing all along, so nothing new here -- the problem is unchanged. As far as I can see, that exhausts my options for getting this camera to work. I expect it'll find a new home in my desk drawer along with all the other stuff that doesn't work! Many thanks MartyG for your helpful comments and suggestions along the way.
You are right of course: the new SDK may not have the same problems, and I may return to it when it becomes available. I'm not limited to using Windows -- Linux is an option too, and re-writing my code to run with the new software remains a possibility. I think I'll wait for the launch of the D4xx cameras and the associated SDK and then decide whether I want to invest more time and money in developing for Intel RealSense products. I understand that these products are, effectively, prototypes and that no development is without risk, but that needs to be weighed against the likelihood of spending the time more profitably doing other things.
I admit to being concerned that Intel does not, in my opinion, seem to know what it wants to do with these products. Originally launched on Windows and the PC platform, they made an abrupt -- and abortive -- left turn to mobile products, and now seem to be targeted at IoT applications (whatever they are). It does rather raise warning signs of being an answer in search of a question. If I had to guess, I'd say that the inclusion of the SR300 in the SDK 2.0 is mostly because of an unwillingness on Intel's part to be seen to be breaking with the past completely, and I wouldn't be surprised if SR300 support is short-lived in the new SDK. It seems hard to fathom how any SDK written to take advantage of the new camera hardware can also work well with the SR300. This means, in effect, committing to the D4xx cameras, and that line of development, wherever it may take us. I think I'd like more information about precisely what the cameras and the SDK can do before making any decisions, so I'll wait and see.
The past year has certainly been a transitional one for Intel in various product ranges including wearable sensors, RealSense and development boards. There are signs that things are stabilizing though, and clear strategies are emerging to answer questions that end users may have had in recent months. It is very reasonable for you to want further information before drawing conclusions about your future investment decisions. I am hopeful about what I am seeing emerge now though.
I am also a bit perturbed by Intel not showing a very clear sense of direction and a concrete product plan with RealSense. But when I think more deeply about it, it may be partly because technology in this field has changed far more rapidly than anyone could have envisaged. A few examples:
(1) RealSense initially started with a focus on the PC. In fact there were many technologists who wanted a Linux SDK, but Intel wouldn't oblige them; after a long wait, Intel only released a Linux SDK for the ZR300, afaik. When the first-generation cameras arrived, stereo vision using active patterns was very nascent. At that time the Kinect was considered revolutionary: an indoor depth camera seemed really magical, and no one imagined that outdoor cameras with similar performance would arrive soon. Pose estimation, gesture estimation and face recognition were largely unsolved, as deep learning had not arrived. Now, a few years down the line, we have outdoor depth cameras with a range of more than 100 feet, and things like gesture estimation and face recognition are considered solved problems. Mobile robotics is today considered one of the big challenges, and mobile SLAM is a major focus for computer/robot vision researchers. This is also where most of the money will be in the coming years. So it may be fair that Intel too has shifted its goalposts.
(2) Something similar has happened with the way computer vision algorithms are processed today. Deep learning and CNNs were barely heard of 5 years ago, but almost every computer vision engineer has shifted to this approach today. So Galileo and the Joule are no longer the focus; Google is pushing TPUs, and my hunch is that Intel will start promoting Movidius-based hardware very soon. That may be the only thing roboticists will favour, as end-to-end deep learning is gaining momentum. We have seen such a huge change from 2012 to 2017; I wonder if by 2022 we will again be in a situation we are not even imagining today. For us product developers, it is risky to choose the wrong platform to start with. But for organizations like Intel, the risk is even bigger: if they fail to anticipate and keep pace with rapidly changing technology, they will soon face extinction. Though I am worried that Intel has not followed the path they planned, at least one positive is that they have been able to catch up with the state of the art. Let's see what they reveal in the coming months.
A fascinating analysis, Mr Snipe. I enjoyed it a lot.
Regarding Movidius hardware, are you aware of the new Movidius Myriad X that was announced in the past week?
I agree with MartyG: an interesting analysis. Judging by the specs, the D4xx cameras will be better cameras than their predecessors. I've heard that Intel is using the Movidius chip in them for stereo matching, and it's the case that good stereo matching is more important for outdoor work where structured light is overwhelmed by ambient sunlight. It's certainly the case that the F200 and SR300 don't tolerate sunlight well. I have no idea how well the ZR300 works outdoors, but if it's based on the SR300, I suspect it won't work well.
I also agree that there are areas where Intel's offering is behind the state of the art. In particular, the speech recognition facility (dropped in the R3 release) did not perform well compared to, say, Google's speech recognition on Android phones.
However, I don't currently see any link between deep learning and CNNs and what Intel is doing -- it may be the case that they are moving in that direction, and that future announcements will clarify that. While mobile robotics is presently an area attracting much interest and activity, it's far from being the only area where face recognition, gesture recognition and speech recognition are being used. Certainly here in the UK, many of the big banks are experimenting with these technologies with the eventual aim of using them in customer-facing systems. I believe that the banks are not the only large organisations doing this sort of thing. I think there's still a case to be made for deploying RealSense technology on the Windows/PC platform, although I agree that Intel seems to be moving away from it.
What continues to trouble me is the element of "flavour of the month" in Intel's decision making. First Windows/PC, then mobile (and who decided mobile was a good idea when the cameras won't work outdoors?), now robotics and IoT. As you say, things may become clearer in the near future when further announcements are made. At the moment, I'd settle for something that works, and that I know will remain supported for a reasonable time.
The ZR300's IR components are identical to those of the R200 camera. The SR300 was designed as an indoor camera, whereas the R200 and ZR300 can be used both indoors and outdoors. The new D-cameras also support both indoor and outdoor use.
Intel have dropped features in the past on the basis that they didn't work as well as intended, such as recognition of head-tilting. In other cases, particularly where a feature is provided by a third-party company under licence, the feature may be removed if the licence is not renewed. A company called Emotient provided the emotion recognition in the original 2014-2015 SDKs, but then Apple purchased Emotient.
Apple also purchased Metaio, which provided 3D tracking map creation to RealSense via the Metaio Toolbox. That feature continued to be in the RealSense SDK though up until '2016 R2'. The speech recognition component was provided under licence by Nuance, I believe.