31 Replies · Latest reply on Jul 23, 2015 6:15 PM by João Silva

    Frame grab performance problems using OpenCV on Edison


      Hi everyone,


      I'm attempting to use a USB camera module (the SB101D to be specific) with the Edison for an image processing application, but I'm hitting a wall with some performance issues that I just can't troubleshoot.  I'm hoping someone more experienced with the platform might point me in the right direction.


      I have loaded the uvcvideo package and OpenCV using Alex_T's package repo.  (Many thanks BTW, Alex, you're a god among men!)  I can successfully use OpenCV to establish a VideoCapture and retrieve frames.  The SB101D advertises support for 30 FPS (640x480, YUYV, for what it's worth).  However, I'm finding that just grabbing a frame (i.e., capture.grab()) takes about 80 ms.  This means I'm essentially starting at about 12 FPS before doing any image processing.


      I tried enabling all traces on the uvcvideo driver and checking dmesg.  Each time I attempt to grab a frame, the log is flooded with the following message:

      "uvcvideo: Dropping payload (out of sync)"


      My understanding is that this message can legitimately appear when you first start retrieving images from the device, but that it shouldn't occur continuously, so it makes me think the delay is some driver-level synchronization issue rather than a basic performance limitation of the Edison.  I did try reducing the frame rate and image size, but the behavior doesn't change much.  Decreasing to 320x240 did reduce the grab delay to ~60 ms, but that's still quite bad...


      Anyway, does anyone have any ideas what I could check?  Thanks in advance!


        • 1. Re: Frame grab performance problems using OpenCV on Edison

          Hi Aenimated


          There are a lot of related reports on the web about "uvcvideo: Dropping payload (out of sync)". Some of the suggestions are:

               Reduce the size of the image (you have already tried this).

               In some cases, the message only appears on the first run, and after that it works fine.

               Use the latest kernel version.

          Could you tell me which image you are using? I'm assuming you are using the Yocto image from Edison - Software Downloads, but let me check this. And please verify that you have the latest version of the image on your board.

          In the meantime I will search more about this issue and try to get more useful information to you.




          • 2. Re: Frame grab performance problems using OpenCV on Edison

            Thanks for the quick response, CMata.  I believe I'm using the latest image:

                 cat /etc/version


            • 3. Re: Frame grab performance problems using OpenCV on Edison

              It may be the case that you need to enable one of the uvcvideo workarounds (so-called quirks) for this specific camera. I haven't found any specific USB Vendor and Device ID on the module vendor's page; what does lsusb report for it?


              Another thing to check (though it shouldn't cause such an error message) is the actual frame rate V4L2 is setting for the module. You can check that with "v4l2-ctl -P". Please also post the output of "v4l2-ctl --list-formats-ext" (all supported formats) and "v4l2-ctl -V" (currently set format), maybe there's something there that could be tweaked as well.

              • 4. Re: Frame grab performance problems using OpenCV on Edison

                Edit: I tried this same experiment with another webcam - the Logitech B500.  The results are the same, so it doesn't seem that the issue is isolated to a specific camera.  I am using an uncompressed format...  Is it possible that I'm just seeing fundamental limitations of the Edison platform?  Has anyone else achieved 30 FPS at 640x480 or higher resolution without compression?  Thanks!


                Thanks for your response, AlexT.  I've posted the requested info at the end of this reply.


                I have explored uvcvideo quirks to some degree, but haven't had any success so far.  I decided to download the uvcvideo source, and I noticed the following in uvc_driver.c for the Aveo Technology USB 2.0 Camera:

                     driver_info = UVC_QUIRK_PROBE_MINMAX | UVC_QUIRK_PROBE_EXTRAFIELD

                The idVendor is the same as my camera, but the idProduct is different.  So I tried manually applying those quirks like so (just in case my camera had the same deficiencies):


                ~# rmmod uvcvideo

                ~# modprobe uvcvideo quirks=6


                Unfortunately, this didn't alter the behavior aside from adding the following entries to the trace:

                [ 1158.772637] uvcvideo: Forcing device quirks to 0x6 by module parameter for testing purpose.

                [ 1158.772651] uvcvideo: Please report required quirks to the linux-uvc-devel mailing list.


                Also, in uvc_video.c, I noticed that UVC_QUIRK_STREAM_NO_FID is checked just after the "Dropping payload (out of sync)" trace message is reported, so I tried that as well.  (I'm just shooting in the dark here, really - I have no real reason to believe the device isn't toggling the FID appropriately.)  Anyway, that caused all but the first frame grab to time out with the "select timeout" error.


                Finally, based on some suggestions I read online, I also tried nodrop=1 and timeout=5000.  Unfortunately, I still see the same behavior.


                I haven't exhaustively tried all the permutations, but it seems like something more fundamental is wrong.  It makes me wonder if there's simply some HW issue with the camera...  I don't own another webcam that supports UVC video, but I'll see if I can track one down.  At least that way I'll have a sense of whether the problem is isolated to this camera.


                Again, thanks for your time.



                ~# lsusb

                Bus 001 Device 002: ID 1871:0d01 Aveo Technology Corp. USB2.0 Camera


                ~# v4l2-ctl -P

                Streaming Parameters Video Capture:

                        Capabilities     : timeperframe

                        Frames per second: 30.000 (30/1)

                        Read buffers     : 0


                ~# v4l2-ctl --list-formats-ext

                ioctl: VIDIOC_ENUM_FMT

                        Index       : 0

                        Type        : Video Capture

                        Pixel Format: 'YUYV'

                        Name        : YUV 4:2:2 (YUYV)

                                Size: Discrete 640x480

                                        Interval: Discrete 0.033 s (30.000 fps)

                                Size: Discrete 160x120

                                        Interval: Discrete 0.033 s (30.000 fps)

                                Size: Discrete 320x240

                                        Interval: Discrete 0.033 s (30.000 fps)

                                Size: Discrete 176x144

                                        Interval: Discrete 0.033 s (30.000 fps)

                                Size: Discrete 352x288

                                        Interval: Discrete 0.033 s (30.000 fps)


                ~# v4l2-ctl -V

                Format Video Capture:

                        Width/Height  : 640/480

                        Pixel Format  : 'YUYV'

                        Field         : None

                        Bytes per Line: 1280

                        Size Image    : 614400

                        Colorspace    : SRGB

                • 5. Re: Frame grab performance problems using OpenCV on Edison

                  I don't mean to take this thread off topic, but I can clearly see this is the crowd that would have a good answer for this. We're about to embark on a low-power embedded Linux project that makes use of ffmpeg and OpenCV for the most part. My question is: which would be better suited / better performing, the Yocto build that comes on the Edison, or the ubilinux Debian Wheezy build for Edison? Are there any major performance drawbacks if we run Wheezy vs. the Yocto build? Would the time it takes to reconfigure a package for a product that has already been deployed (with a Yocto build) be negated by using Wheezy instead, with some simple bash scripts to perform updates and changes after the fact?


                  Thanks in advance!

                  • 6. Re: Frame grab performance problems using OpenCV on Edison

                    Thanks for the information, everything generally looks legit. This one is of the tough kind, though - only uncompressed streams, which no USB controller likes, because this frame size and frame rate require ~17 MB/s of bandwidth, and that's about half of the practically achievable USB 2.0 bandwidth.


                    I don't see this specific VID:PID on the uvcvideo list; however, all other entries with this VID are marked as fully supported (without quirks), so I think we're down to either USB bandwidth being insufficient or processing in the kernel/driver being suboptimal.


                    I have tried uncompressed once, but it was many months ago and I simply don't remember what the outcome was (though I definitely don't recall seeing such error messages from uvc). Also, I switched to MJPG very quickly after seeing the difference in the frame sizes (my webcam supported both).


                    Try setting the lowest resolution/frame rate and see if that changes anything. If it does, then there's a bottleneck. If it doesn't, there must be a bug somewhere, because the Edison is quite a powerful compute platform and I wouldn't expect it to give up that fast.


                    Another thing you can try is disabling the USB controller autosuspend feature. Execute the following steps in this exact order:

                    1. Make sure the mechanical switch is flipped closer to the port where you connect the camera (BTW, are you using the Arduino expansion board or the mini-breakout one?)
                    2. Run "echo -1 > /sys/module/usbcore/parameters/autosuspend"
                    3. Plug the camera module into the USB port
                    4. Try capturing frames and see if there's any difference
                    • 7. Re: Frame grab performance problems using OpenCV on Edison

                      Ubilinux uses the same kernel as the Yocto build, so I think it mostly boils down to what exactly you're going to use in userspace and whether Ubilinux has the features you're after enabled (IIRC there were some things that didn't work compared to the Yocto build).


                      And yes - you'd better ask this in a dedicated or Ubilinux thread (or ping @David_J_Hunt directly); for this one it's really off-topic and will confuse people.

                      • 8. Re: Frame grab performance problems using OpenCV on Edison

                        Thanks for the suggestions, Alex.  I tried disabling autosuspend as you suggested, but unfortunately it didn't improve the performance any.  (I am using the Arduino expansion board BTW.)


                        I also tried using the lowest supported resolution (160x120).  At that resolution, it takes ~56 ms to grab each frame.  It's interesting that lowering the resolution does have a noticeable effect.  It seems like there is some kind of performance issue, although maybe the issue is in the OpenCV implementation.  I haven't looked at the source code, but I have heard that OpenCV's VideoCapture isn't particularly optimal with its buffer management.


                        By the way, have you used OpenCV to retrieve MJPEG-encoded video streams?  It would appear that the Linux OpenCV implementation doesn't support changing the FOURCC code, at least not using VideoCapture::set(CV_CAP_PROP_FOURCC, <fourcc code>).  I spent some time searching online for how I might work around that, but I haven't had any luck so far.  Maybe I need to use a different API.  (My original camera doesn't even support MJPG, but the other webcam I'm testing with does - it seems that CV_CAP_PROP_FOURCC just isn't implemented in the Linux version of OpenCV 2.4.9.)


                        Anyway, thanks again for your time and expertise.

                        • 9. Re: Frame grab performance problems using OpenCV on Edison

                          You are most welcome and no, I haven't used OpenCV beyond a couple of tutorial programs yet. In my projects I used Motion/v4l2grab and these don't use OpenCV at all, so I'm not able to comment on the fourcc thing.


                          Though theoretically you could just take a look at the OpenCV sources; if that set() does nothing, that should be more or less obvious, I'd guess.

                          • 10. Re: Frame grab performance problems using OpenCV on Edison

                            I'm able to grab 720p at 60 FPS; however, I'm using MJPEG (much smaller), and I had to move the image processing off-board (in my case onto a smartphone).  My experience is that the OpenCV frame grab has a number of issues (including setting the FPS correctly), and performance-wise on the Edison I found it lacking (I even built it from source trying to optimize).  Also, aperture priority will lower the achievable frame rate if you are in low light (i.e., indoors).


                            I recommend grabbing using V4L2, and if you really need to use OpenCV locally, load the buffer into a Mat and force it onto the other core - this obtained the best results for me.  Avoid writing the buffer to flash and be very careful with memory usage (Mats can be big and bulky).


                            Good starting points for this approach are: Capture images using V4L2 on Linux — Jay Rambhia and Modified uvccapture at dreamport | nonoo.hu.


                            Hope this helps ....

                            • 11. Re: Frame grab performance problems using OpenCV on Edison

                              Thanks for the response, mpapini!


                              I decided to experiment with capturing images using V4L2 as you suggested.  I got Jay Rambhia's code working, but even after removing everything except the image capture logic, the performance was still similar to that of OpenCV.  I was able to change the mode to MJPEG, but it didn't have any substantial performance impact.  Apparently, there's a much larger bottleneck causing the issue.


                              After doing some profiling, I discovered that almost all of the delay occurs during the select() call.  When I activate all traces on the uvcvideo driver, here's what I see:

                              1) The first capture happens quickly.

                              2) In the next loop iteration (long before the next frame is ready), I see the poll request, but then, for the entire duration of the frame (~33 ms), I continually see the "Dropping payload (out of sync)" message.

                              3) Finally, I see the frame-complete message at the time it would be expected, but the software never retrieves that frame.

                              4) The software then waits until the next frame is complete (another 33ms) before actually retrieving it.


                              So I wind up seeing an effective frame rate of ~15 FPS instead of 30 FPS since roughly every other frame is dropped.


                              I also played around with deliberately sleeping between frame grabs to see if that makes a difference.  I noticed that I don't see the "Dropping payload" messages until I run select() on the video device, and the messages continue only until the frame is finished being processed.  In any case, it didn't have any effect on the overall behavior.


                              Given that you have successfully achieved 60 FPS at a much higher resolution, it sounds like this isn't a fundamental limitation of the Edison.  Maybe my uvcvideo driver is corrupted.  I installed it using the package repo created by @AlexT_Intel - maybe I need to dig deeper into that package and build my own image.  I've been shying away from that since I expect it will be a challenge; I don't have much experience with Linux.


                              Thanks again for any further insights you can provide.


                              • 12. Re: Frame grab performance problems using OpenCV on Edison

                                First of all (unless there's someone with this sort of experience in this community), I'd suggest you additionally ask on the uvcvideo mailing list - I doubt it's a driver problem, but they may be able to help you pinpoint it, knowing what a normally functioning driver is supposed to do.


                                In addition to that, I suspect it may be webcam-specific (the model, not your specific unit). If you share the code, I could try running it with my webcam to see how it behaves - that would give us some additional data points.

                                • 13. Re: Frame grab performance problems using OpenCV on Edison

                                  That's a good idea; I'll definitely post to the uvcvideo mailing list as well.  I've searched online extensively, but I haven't posted to any other forums as of yet.


                                  It's possible that the issue is related to the webcams, but I have tried two different models with similar results, so I'm not sure. Initially, I was trying to use the SB101D, which is a cheap USB camera with a tiny form factor, but I also tried a Logitech B500 in both YUYV and MJPEG modes, again with similar results.  It's not that I'm hung up on any specific camera - it's just difficult to find a very small USB camera with UVC support.  (Ideally, I'd like to find something small enough that it could be embedded in an HO-scale toy bus.  If you happen to know of anything that might be suitable, I would definitely be willing to try a different camera.)


                                  Anyway, if you're game, I'd definitely love to know how the attached code works with your camera.  The code in its current state configures the image format for 320x240 YUYV mode, but that can easily be adjusted by changing the set_image_format function.  (The code was mostly copied from Jay Rambhia's code, which mpapini linked to - it doesn't use OpenCV.)  BTW, I realize this doesn't explicitly set the frame rate, but 30 is the default, and I can see in the dmesg output that it's being set correctly.  In any case, here's a typical output from the program (in this case using the SB101D):


                                  Driver Caps:

                                    Driver: "uvcvideo"

                                    Card: "USB2.0 Camera"

                                    Bus: "usb-dwc3-host.2-1"

                                    Version: 1.0

                                    Capabilities: 84000001

                                  Camera Cropping:

                                    Bounds: 320x240+0+0

                                    Default: 320x240+0+0

                                    Aspect: 1/1

                                    FMT : CE Desc


                                    YUYV:    YUV 4:2:2 (YUYV)

                                  Selected Camera Mode:

                                    Width: 320

                                    Height: 240

                                    PixFmt: YUYV

                                    Field: 1

                                  Length: 153600

                                  Address: 0xb5f0f000

                                  Image Length: 0

                                  Profile (QBUF): 0 ms

                                  Profile (STREAMON): 10 ms

                                  Profile (SELECT): 350 ms

                                  Profile (DQBUF): 350 ms

                                  Profile (QBUF): 0 ms

                                  Profile (SELECT): 79 ms

                                  Profile (DQBUF): 79 ms

                                  Profile (QBUF): 0 ms

                                  Profile (SELECT): 79 ms

                                  Profile (DQBUF): 80 ms

                                  Profile (QBUF): 0 ms

                                  Profile (SELECT): 79 ms

                                  Profile (DQBUF): 79 ms



                                  It basically stabilizes at about an 80 ms delay for each frame.  Assuming you see better results (and you have the time), I'd also be curious to know what the dmesg output looks like with all traces active.  At this point, I have no idea what normal output looks like.  For example, once in a while I see a message indicating that some callbacks were ignored.


                                  Anyway, thanks again.  Sorry to be taking so much of your time!


                                  • 14. Re: Frame grab performance problems using OpenCV on Edison

                                    The benchmark is somewhat flawed (I only figured it out after having run it and remembering slamming my head into this particular wall).  Your code feeds a single buffer to the driver, waits for it to fill, takes it back, "processes" it (or at least wastes enough time to miss the start of the next frame), and then passes it back to the driver... meanwhile, the driver is dropping a frame or two while you play with the ONLY buffer it had.  I've included the version that I modified to get better timing resolution and stay C-only (sorry, I have an allergy to mixing languages in the same file), plus excerpts from my code that allocates 32 buffers (NOTE: 32 is the max the driver would take), then spins up the stream and feeds the buffers back as they are no longer needed.  I use a circular queue and can have a window of 20 or so buffers to play with without dropping frames.  The format size at this stage shouldn't matter (MJPEG vs YUYV), since we are simply passing a pointer back and forth to the kernel (driver).

                                    Having said all of that, there is still something funky going on, since my queue/de-queue operations tend to take on the order of 20/250 microseconds respectively. My best guess is that you have a ton of noise kicking around or a crappy connection somewhere.  Just to remove my last doubt: you are using the OTG USB port and not the serial-converted USB - right?

                                    Attached, please find: (well, I wanted to attach these, but I don't have the patience to figure out how...)

                                    1) Modified version of your code with my results showing q/dq in the ranges I mention above. (my times are all nanoseconds so divide by 10^6 to get milliseconds)

                                    2) Code extracts (sorry they won't compile directly and they are a bit "hacky") with more buffers allocated; camera_thread is clearly a separate thread.

                                    Hope this helps - Mario
