I haven't looked closely at the demo, but from a quick review of the article you linked, it uses OpenCV, and OpenCV does support video processing. So assuming you have some development experience and the time to get up to speed with OpenCV, you should be able to adapt the demo to work on live video. Instead of reading the image from a file, you'd use a VideoCapture instance to grab frames from your camera in a loop and run the demo's processing on each frame - beyond that and some minor adjustments, I'd expect you could mostly reuse the rest of the code.
Note that, to achieve a good frame rate, you may need to implement your own video-capture logic: in my experience, OpenCV (at least 2.4.9) doesn't handle buffering well on Linux, so even with the camera configured to run at 30 FPS I only got an effective frame rate of ~15 FPS - the internal buffer fills with stale frames while you're busy processing, and `read()` hands you old data.
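One common workaround is to read frames on a background thread and keep only the most recent one, so the processing loop never blocks on a stale buffer. A sketch of that idea (the class name and structure here are my own, not from the demo - anything with a `read()` method returning `(ok, frame)`, such as a `cv2.VideoCapture`, can be passed in):

```python
import threading

class LatestFrameGrabber:
    """Continuously drains a capture source on a background thread,
    keeping only the most recent frame so the consumer always gets
    fresh data instead of buffered, stale frames."""

    def __init__(self, cap):
        self.cap = cap
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        self.thread = threading.Thread(target=self._reader, daemon=True)
        self.thread.start()

    def _reader(self):
        # Read as fast as the source allows, overwriting the stored frame.
        while self.running:
            ok, frame = self.cap.read()
            if not ok:
                break
            with self.lock:
                self.frame = frame

    def read(self):
        """Return the latest frame (or None if nothing captured yet)."""
        with self.lock:
            return self.frame

    def stop(self):
        self.running = False
        self.thread.join()
```

You'd wrap your `cv2.VideoCapture` in this and call `grabber.read()` in the processing loop; the background thread discards frames you're too slow to process, rather than letting them queue up.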
Hope that helps,