1 Reply Latest reply on Oct 15, 2013 1:43 PM by Patrick_Kutch

    Performance problem with 82599 NIC and Xen

    chaiahn

      Routing Experiment Setting :

      I am running an experiment with Xen PV-domains and an Intel 82599 NIC.

      There are three machines in the experiment: a sender, a receiver, and a routing machine.

      The routing machine hosts a PV-domain using Xen and SR-IOV.

      The sender generates 64B packets at a 10 Gb/s rate.

      The packets are transmitted to the PV-domain on the routing machine,

      and the PV-domain performs routing and forwards the packets to the receiver.
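      For context on what "10 Gb line rate with 64B packets" means in pps, here is a quick sketch of the calculation. It assumes the standard Ethernet on-wire overhead of 8B preamble/SFD plus a 12B inter-frame gap per frame:

      ```python
      # Theoretical maximum packet rate for minimum-size (64B) Ethernet
      # frames on a 10 Gb/s link. On the wire, each frame also carries
      # 8B of preamble/SFD and a 12B inter-frame gap, so every packet
      # occupies 84B (672 bits) of link time.
      LINE_RATE_BPS = 10_000_000_000
      FRAME_BYTES = 64        # minimum Ethernet frame size
      OVERHEAD_BYTES = 8 + 12 # preamble/SFD + inter-frame gap

      max_pps = LINE_RATE_BPS / ((FRAME_BYTES + OVERHEAD_BYTES) * 8)
      print(f"line rate for 64B frames: {max_pps / 1e6:.2f} Mpps")
      # -> line rate for 64B frames: 14.88 Mpps
      ```

      So a full-rate 64B stream is roughly 14.88 Mpps arriving at the routing machine.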

       

      Result & Problem :

      The performance of our modified routing module is about 7 Mpps with 1 PV-domain.

      But when we load 2 PV-domains (2 routing flows), the performance of each PV-domain is 4 Mpps (8 Mpps in total).

      That is a per-domain degradation of about 40% in pps, which is a severe drop.
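      As a quick sanity check on those numbers:

      ```python
      # Per-domain vs. aggregate throughput from the two test cases (Mpps).
      single_domain = 7.0   # 1 PV-domain, 1 routing flow
      per_domain_two = 4.0  # each of 2 PV-domains, 2 routing flows

      per_domain_drop = (single_domain - per_domain_two) / single_domain
      aggregate_two = 2 * per_domain_two

      print(f"per-domain drop: {per_domain_drop:.0%}")  # -> 43%
      print(f"aggregate: {aggregate_two:.0f} Mpps vs {single_domain:.0f} Mpps")
      ```

      So the aggregate rate barely improves (7 to 8 Mpps) even though each domain has its own cores and VF, which is what makes a shared bottleneck plausible.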

       

      My Opinion :

      Each VM has its own pinned cores and its own physical NIC, so theoretically there should be no conflict.

      With packet sizes of 128B or more there is no problem; the issue occurs only at 64B.

      So I suspect the cause is either a chipset bug between 82599 SR-IOV and Xen,

      or that the 82599 SR-IOV Virtual Function has a performance limit on packet rate (packets per second).

       

      Has anyone experimented with multi-PV-domain routing under SR-IOV and Xen?

      Or, what is your opinion?

        • 1. Re: Performance problem with 82599 NIC and Xen
          Patrick_Kutch

          Virtualized Ethernet performance under a Linux-based OS has historically been slow.  There is simply a great deal of overhead in the networking architecture.  On top of that, 64B packets are, by their nature, going to result in the lowest throughput and the highest CPU utilization.

           

          In our testing in a virtualized environment, 64B packets achieve around 2.6 Gbps Tx at over 20% CPU utilization, and only around 0.3 Gbps on the Rx side.

           

          Virtualization technologies such as SR-IOV and VMDq can significantly improve this performance; however, vanilla Ethernet virtualization in a Xen or KVM environment remains pretty poor.  This is due to the virtualization of the Ethernet stack, as opposed to the Ethernet devices themselves.

           

          It has been a while since I personally dug into this area, but I do recall quite a number of articles discussing this very problem several years ago.

           

          Hope this helps,

           

          Patrick
