Good point ChicagoBob.
My assumption was that on the D435 and D415, Intel may already be using a Movidius processor for depth computation. However, I am not 100% sure about it.
I am also hoping that the NCS App Zoo will get more and more content in the future. As of now there is not much there except some basic examples.
About Facial Recognition, I have not tried it myself. Is it computationally too expensive?
Gary Brown, the director of marketing for Movidius, was asked whether it would be possible to combine NCS with RealSense in an August 2017 interview with InfoQ. He replied:
"Yes, of course. For developers creating deep neural networks requiring depth information, Intel RealSense technology can be combined as an input to a deep neural network running on Movidius Neural Compute Stick."
Thanks for sharing it. Really interesting interview.
However, I think ChicagoBob wanted to know if Intel plans to release an SDK with NCS. So basically, as programmers, we would only call the APIs, and the SDK would do the job of deciding what to run on the NCS.
I would like to see some initiative by Intel where they combine RealSense and NCS to build a platform for developers, similar to what they have already done with Euclid. Recently I talked to someone at Intel and they said that there are no such plans. As far as I know, the Movidius and RealSense teams are still pretty separate at Intel, and they have no plans to collaborate. So it is up to users to combine RealSense and NCS. But that may not be what ChicagoBob wanted?
Yes, I knew that this probably wasn't what ChicagoBob was looking for. I figured that some news of interoperability was probably better than none though. Without an SDK, I guess one might be able to make some form of connection to the NCS via the D435's GPIO connections for linking to external devices, though it's unknown what you could usefully do with such a link.
I imagine that some enterprising person with tech skills might find a way to create a mod that accesses NCS functions on a setup with a RealSense camera attached. If NCS has a channel for accepting RealSense as an input, then you would think a path could somehow be made to flow the other way, from NCS to RealSense.
I just got the NCS stick yesterday. Got an old laptop, added an SSD, and built an environment to work with.
Not totally done with that yet; currently Python doesn't know what TensorFlow is.
Anyway, as a solution developer who runs a million miles an hour in all directions at the same time, it's hard to
dig deep into any tech that is not producing some positive results out of the box. Running the NCS getting-started
examples made me scratch my head.
The getting started guide is really lacking.
There was no information as to what the NCS helped with, which features were performance-enhanced, or what I should expect.
I am going to post this question on the developer support site. Since DNN and RealSense are solutions with some
synergy, you would think that the whiz kids at Intel and NCS would instantly find ways to make
the solution end-to-end and be able to point out how great it is to do so. IPP (Integrated Performance Primitives),
which I think is under the sheets of the RealSense stuff, has been around for a long time, and those coders should
be able to use the NCS primitives to squeeze out the fastest results, don't you think?
Me, I am still trying to figure out what I bought.
Maybe thePaintedStripe can let me in on what I should be seeing.
I got some numbers as results, with no explanation as to what I am even running.
I had a look just now through the NCS Getting Started manual that you mentioned. I don't have the stick myself and so can't run practical tests. I'm pretty good at converting manual-speak into plain English though, so I'll have a try. I can see your point - I have an engineering degree and I was scratching my head too.
What do I do with this stick?
The manual thinks that the main uses for the stick will be to either
(a) set up a neural network on a desktop or laptop computer, as you did; or
(b) create an application on the computer that accesses the NCS stick's hardware in order to accelerate the processing of a neural network. The NCS' API software provides the linkages for the application to connect with the stick hardware.
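For anyone wondering what that "application accesses the stick via the API" flow looks like in practice, here is a minimal sketch of the call sequence in the NCSDK v1 Python API (the `mvnc.mvncapi` module). The function names follow Intel's NCSDK documentation, but I haven't run this against real hardware, so treat it as an illustration rather than a tested recipe:

```python
# Sketch of the NCSDK v1 Python API call sequence for one inference.
# Names per the NCSDK docs; not verified on real hardware.
try:
    from mvnc import mvncapi as mvnc  # ships with the NCSDK
except ImportError:
    mvnc = None  # SDK not installed on this machine

def run_inference(graph_path, input_tensor):
    """Load a compiled graph onto the first NCS stick and run one inference."""
    if mvnc is None:
        raise RuntimeError("NCSDK (mvnc) is not installed")
    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError("No NCS stick found")
    device = mvnc.Device(devices[0])
    device.OpenDevice()
    with open(graph_path, "rb") as f:
        graph = device.AllocateGraph(f.read())  # graph file made by mvNCCompile
    graph.LoadTensor(input_tensor, None)        # push input to the stick
    output, _ = graph.GetResult()               # blocks until inference finishes
    graph.DeallocateGraph()
    device.CloseDevice()
    return output
```

The point is that the application never sees the Myriad hardware directly; it only ever enumerates devices, loads a pre-compiled graph, and exchanges tensors through the API.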
How do I get started with it once set up?
The workflow diagram refers to undertaking a 'training' phase before the NCS stick is actually used. That is the sole mention of training in the manual though, and it is not explained how to do so after that. Without being certain, I would speculate that it refers to the process of training a Convolutional Neural Network (CNN) model. The documentation implies that you need to train your own CNN model if you do not have an existing one to import.
Here's a guide on CNN that includes training information.
To me the stick is a WAY WAY cheaper GPU accelerator for TensorFlow or Caffe.
Even using 4 of them simultaneously is cheaper than a single GPU card for accelerating inference computation.
That said, the jury is still out for me; I'm still poring through the new docs. I am not sure it's working as
quickly as they claim or as I had hoped.
I am not sure if this is the right place to discuss what NCS can do or what its performance is. But for now I will share some information here, and then restrict the conversation beyond that. We could connect on the NCS developers forum for more information. I too got the NCS only around 10 days back and spent a couple of days setting up the environment to see what it can do. My exposure to it is very limited, but I can share what I know.
(1) What is NCS meant for: it is a hardware accelerator for executing DNNs on your platform. Theoretically, any platform running one of the supported Ubuntu versions can use NCS.
(2) The NCS SDK contains tools which can compile a Caffe or TensorFlow model to run on NCS. So if you have a Caffe model and weights, or a TensorFlow model and weights, the NCS tools will help you generate files which can be executed on NCS. As you may already know, TensorFlow support was added only recently.
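The compile step above is done with the SDK's command-line compiler, `mvNCCompile`. As a sketch of what a typical invocation for a Caffe model looks like, here is a small helper that builds the command line; the tool name and flags follow the NCSDK docs, but the file paths are placeholders, not real files:

```python
# Sketch: building an mvNCCompile invocation (the NCSDK compiler) that turns
# a Caffe model into a graph file the stick can execute. Paths are placeholders.

def mvnc_compile_cmd(prototxt, weights, shaves=12, out="graph"):
    """Build the mvNCCompile command line for a Caffe model."""
    return ["mvNCCompile", prototxt,
            "-w", weights,      # trained weights (.caffemodel)
            "-s", str(shaves),  # number of SHAVE cores to use (max 12)
            "-o", out]          # output graph file loaded onto the stick

cmd = mvnc_compile_cmd("deploy.prototxt", "model.caffemodel")
print(" ".join(cmd))
# → mvNCCompile deploy.prototxt -w model.caffemodel -s 12 -o graph
```

The resulting `graph` file is what the API later loads onto the stick; for TensorFlow models the same tool takes the frozen model instead of a prototxt/caffemodel pair.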
(3) NCS contains a Movidius Myriad 2 VPU, which has 12 SHAVE vector cores. You can use multiple NCS sticks, but each one needs to be used independently. There is some example code on the NCS forum demonstrating this. Basically, if you want to execute a particular task on 4 NCS sticks, you create 4 graphs, one on each NCS, and distribute your processing. Suppose you want to run inference on a 20 FPS video; then you could send each frame to a different NCS, with each one processing 5 FPS. But if the task is sequential in nature, then you won't be able to utilise it so easily, I think.
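The frame-distribution idea above can be sketched in a few lines. The round-robin assignment is the whole trick; in a real setup each per-stick list would be fed to one NCS device through the SDK's API, which is omitted here:

```python
# Sketch of distributing a 20 FPS stream round-robin across 4 sticks,
# so each stick only has to handle 5 FPS. Inference itself is omitted.
from collections import defaultdict

def distribute_frames(frame_ids, n_sticks):
    """Assign each incoming frame to a stick round-robin."""
    per_stick = defaultdict(list)
    for i, frame in enumerate(frame_ids):
        per_stick[i % n_sticks].append(frame)
    return dict(per_stick)

# One second of 20 FPS video across 4 sticks:
assignment = distribute_frames(range(20), 4)
print({stick: len(frames) for stick, frames in assignment.items()})
# → {0: 5, 1: 5, 2: 5, 3: 5}, i.e. an effective 5 FPS per stick
```

This only helps when frames are independent; a task where each result feeds into the next frame's processing cannot be parallelised this way, which is the sequential limitation mentioned above.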
(4) You might have seen the examples which come with the NCS SDK. They are pretty self-explanatory. Apart from this, there is an App Zoo on GitHub which has more examples. Though very limited, it is a good place to start.
(5) I have a gaming laptop from Dell with an Nvidia GPU. I ran the tiny_yolo example from the darknet website on it with GPU & CUDA support enabled. On my laptop webcam I got about 30 FPS.
I found a TinyYOLO implementation for NCS where, with the same webcam, I got 4-5 FPS. I am not sure if it is an exact implementation of the network, but I would still say that 4-5 FPS is not bad. Here is the link if you want to try it.
(6) Also, here is a paper which gives some benchmarks on NCS. But when I ran the examples myself I got much better results. So it could be that the paper used an older hardware version, or maybe the compiler has become more efficient.
I guess someone at Intel is listening. I was at the Movidius site today and they mentioned they have added ROS
support and support for the Intel D-series cameras. Makes me wonder when the D435 will be released, or if it's real close now.
Either way, it was GREAT news that Intel is trying to merge things together. I am hoping they create a Myriad X stick soon, which will
be much more powerful.
Thanks for sharing the update.
I just checked webpage for the ROS NCS wrapper on git.
Looks really interesting. Will give it a shot some time in coming days.
Finally Intel seems to be providing everything developers want.
Unified software support for NCS, RealSense, and ROS really makes sense if they are planning to target the robotics industry. It's a small effort for them, as they only need to do it once, and they already have software teams which are well familiar with NCS and RealSense. If Intel doesn't do it, the effort will have to be replicated a thousand times, as each user will have to manage it on their own.
Really excited to see recent progress.
Sorry, this is off-topic but interesting; not sure if you are following the Intel-AMD hookup.
It's more than a little interesting, but there's no TensorFlow acceleration by AMD that I know of.
This makes me scratch my head.