1 2 3 Previous Next 44 Replies Latest reply: Jul 12, 2012 9:13 AM by Patrick_Kutch Go to original post RSS
  • 30. Re: a problem with 82599 using SR-IOV in linux
    chbarg Community Member

    Patrick,

     

    Thank you for your help.

     

    All these VMs are at home in a very boring network: no VLANs, single subnet 192.168.1.0.

     

    Each VM has two VFs (one from each PF). They are configured with static, consecutive IP addresses: .40 & .41, .50 & .51, .55 & .56, .118 & .119.

     

    I program the MACs for the VFs in the script that launches the VMs, in order to have consistent MACs between reboots. I use MACs ending in PF:VF, with PF from 0 to 1 and VF from 0 to 5. The driver in the host is set to enable 6 VFs per PF.

     

    Except for the pings that fail between some combinations of VMs, communication with the rest of the network seems to work well. Independent physical PCs can see all the VMs at all their IPs without any issues. I just pinged all the IPs listed above and all respond immediately (in less than 1 ms).

     

    I will need to test whether the PFs talk to each other. I cannot run those tests from the office without risking disconnecting myself from the host. I will test it tonight.

     

    Thanks again for your help.

  • 31. Re: a problem with 82599 using SR-IOV in linux
    chbarg Community Member

    Patrick,

     

    Beginner's ignorance: I can specify which NIC to ping from. I did not know that I do not need to disable the other NICs to select the source of the ping...
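    A minimal sketch of a source-selected ping, assuming eth2 and .33 stand in for the actual PF and peer (shown as an echoed dry run; remove the echo to really ping):

```shell
# -I picks the outgoing interface, so the other NICs can stay up.
# Interface name and address are examples from this thread.
SRC_IF=eth2
PEER=192.168.1.33
CMD="ping -I $SRC_IF -c 3 $PEER"
echo "$CMD"
```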

     

    The two PFs with IPs .32 & .33 talk to each other in both directions.

  • 32. Re: a problem with 82599 using SR-IOV in linux
    Patrick_Kutch Community Member

    Can you do an ip link show on each PF and provide the output?

     

    thanx,

     

    Patrick

  • 33. Re: a problem with 82599 using SR-IOV in linux
    chbarg Community Member

    Patrick,

     

    After I rebooted the server, I lost half of the VFs.

    Is it an issue that two NICs are sharing the same IRQ?

     

    03:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection

        Subsystem: Super Micro Computer Inc Device 10d3

        Flags: bus master, fast devsel, latency 0, IRQ 17

        Memory at fdce0000 (32-bit, non-prefetchable) [size=128K]

        I/O ports at d800 [size=32]

        Memory at fdcdc000 (32-bit, non-prefetchable) [size=16K]

        Capabilities: [c8] Power Management version 2

        Capabilities: [d0] MSI: Enable- Count=1/1 Maskable- 64bit+

        Capabilities: [e0] Express Endpoint, MSI 00

        Capabilities: [a0] MSI-X: Enable+ Count=5 Masked-

        Capabilities: [100] Advanced Error Reporting

        Capabilities: [140] Device Serial Number 00-25-90-ff-ff-4b-6d-18

        Kernel driver in use: e1000e

        Kernel modules: e1000e

     

    04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection

        Subsystem: Super Micro Computer Inc Device 10d3

        Flags: bus master, fast devsel, latency 0, IRQ 18

        Memory at fdde0000 (32-bit, non-prefetchable) [size=128K]

        I/O ports at e800 [size=32]

        Memory at fdddc000 (32-bit, non-prefetchable) [size=16K]

        Capabilities: [c8] Power Management version 2

        Capabilities: [d0] MSI: Enable- Count=1/1 Maskable- 64bit+

        Capabilities: [e0] Express Endpoint, MSI 00

        Capabilities: [a0] MSI-X: Enable+ Count=5 Masked-

        Capabilities: [100] Advanced Error Reporting

        Capabilities: [140] Device Serial Number 00-25-90-ff-ff-4b-6d-19

        Kernel driver in use: e1000e

        Kernel modules: e1000e

     

    05:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

        Subsystem: Intel Corporation Ethernet Server Adapter I350-T2

        Flags: bus master, fast devsel, latency 0, IRQ 16

        Memory at fe800000 (32-bit, non-prefetchable) [size=1M]

        Memory at fde7c000 (32-bit, non-prefetchable) [size=16K]

        Expansion ROM at fde80000 [disabled] [size=512K]

        Capabilities: [40] Power Management version 3

        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+

        Capabilities: [70] MSI-X: Enable+ Count=10 Masked-

        Capabilities: [a0] Express Endpoint, MSI 00

        Capabilities: [100] Advanced Error Reporting

        Capabilities: [140] Device Serial Number a0-36-9f-ff-ff-03-ee-86

        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)

        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)

        Capabilities: [1a0] Transaction Processing Hints

        Capabilities: [1c0] Latency Tolerance Reporting

        Capabilities: [1d0] Access Control Services

        Kernel driver in use: igb

        Kernel modules: igb

     

    05:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

        Subsystem: Intel Corporation Ethernet Server Adapter I350-T2

        Flags: bus master, fast devsel, latency 0, IRQ 17

        Memory at fea00000 (32-bit, non-prefetchable) [size=1M]

        Memory at fe9fc000 (32-bit, non-prefetchable) [size=16K]

        Capabilities: [40] Power Management version 3

        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+

        Capabilities: [70] MSI-X: Enable+ Count=10 Masked-

        Capabilities: [a0] Express Endpoint, MSI 00

        Capabilities: [100] Advanced Error Reporting

        Capabilities: [140] Device Serial Number a0-36-9f-ff-ff-03-ee-86

        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)

        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)

        Capabilities: [1a0] Transaction Processing Hints

        Capabilities: [1d0] Access Control Services

        Kernel driver in use: igb

        Kernel modules: igb

     

     

     

    The output of ip link show is:

     

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

        link/ether 00:25:90:4b:6d:18 brd ff:ff:ff:ff:ff:ff

    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

        link/ether 00:25:90:4b:6d:19 brd ff:ff:ff:ff:ff:ff

    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000

        link/ether a0:36:9f:03:ee:86 brd ff:ff:ff:ff:ff:ff

        vf 0 MAC da:96:f0:e3:d5:a8

        vf 1 MAC ee:b3:6e:8f:e1:9a

        vf 2 MAC 7e:87:4e:d9:5e:9f

        vf 3 MAC ae:6c:e9:5e:56:16

        vf 4 MAC 06:a2:2d:c1:f7:53

        vf 5 MAC 1a:35:b5:81:44:71

    5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000

        link/ether a0:36:9f:03:ee:87 brd ff:ff:ff:ff:ff:ff

    6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000

        link/ether ae:ea:c6:0d:f0:44 brd ff:ff:ff:ff:ff:ff

    7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000

        link/ether 46:5e:11:bb:b1:8b brd ff:ff:ff:ff:ff:ff

    8: eth6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000

        link/ether 46:dc:23:45:4a:a3 brd ff:ff:ff:ff:ff:ff

    9: eth7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000

        link/ether 3a:ff:e5:67:49:5e brd ff:ff:ff:ff:ff:ff

    10: eth8: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000

        link/ether 3e:8a:d7:00:a1:69 brd ff:ff:ff:ff:ff:ff

    11: eth9: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000

        link/ether e6:40:38:06:8d:8e brd ff:ff:ff:ff:ff:ff

    12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN

        link/ether 00:25:90:4b:6d:18 brd ff:ff:ff:ff:ff:ff

     

    As far as I can tell, the PF that is sharing IRQ 17 with the other NIC is the one that is not generating VFs.

     

    The system has the latest igb (3.4.7) and igbvf (1.1.5) drivers.

     

    Any idea where to go next?

     

    Thank you for your help !!!

  • 34. Re: a problem with 82599 using SR-IOV in linux
    Patrick_Kutch Community Member

    When you specify max_vfs, do you use just one number or two?

     

    The kernel driver (the 'inbox' driver) only takes a single parameter and creates that many VFs for every device that uses that driver for SR-IOV.

     

    The one you downloaded allows you to specify the number of VFs per port, separated by commas.

     

    My guess is that you have max_vfs=6. If you use max_vfs=6,6, you should be back to where you were.
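    As a concrete illustration of the difference (file path and VF counts are examples; the comma-separated form applies to the SourceForge driver only):

```shell
# /etc/modprobe.d/igb.conf -- SourceForge igb driver.
# Per-port VF counts, comma-separated: 6 VFs on each of the two ports.
options igb max_vfs=6,6

# The in-kernel ('inbox') igb driver instead takes a single number
# that is applied to every port it drives:
# options igb max_vfs=6
```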

     

    I do not know where all those extra eth devices are coming from - they do not have the same MAC addresses as the VFs listed for eth2.

     

    - Patrick

  • 35. Re: a problem with 82599 using SR-IOV in linux
    chbarg Community Member

    Patrick,

     

    You guessed right. I was passing max_vfs=6 to the driver. Now I have 6 VFs per PF by passing max_vfs=6,6.

     

    I will check if the VMs can see each other later (possibly tomorrow).

     

    I am also trying to solve the issue that every time I restart the host I get different MACs for the VFs. I am currently changing the MACs with a script.

    Do you know any way to make the VF MACs permanent? Some igb driver parameter?
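    A sketch of such a script, assuming the thread's scheme of MACs ending in PF:VF; the a0:36:9f:00 prefix and the eth2/eth3 PF names are made-up examples. The commands are echoed as a dry run; remove the echo and run as root to apply:

```shell
#!/bin/sh
# Assign a stable, locally chosen MAC to every VF so reboots do not
# reshuffle them. The last two octets encode the PF and VF indices.
PREFIX="a0:36:9f:00"   # example prefix, not a real allocation

vf_mac() {  # vf_mac <pf-index> <vf-index>
    printf '%s:%02x:%02x' "$PREFIX" "$1" "$2"
}

pf=0
for dev in eth2 eth3; do          # the two PF netdev names (examples)
    for vf in 0 1 2 3 4 5; do     # six VFs per PF
        echo ip link set "$dev" vf "$vf" mac "$(vf_mac "$pf" "$vf")"
    done
    pf=$((pf + 1))
done
```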

     

    Thank you very much for your help.

  • 36. Re: a problem with 82599 using SR-IOV in linux
    chbarg Community Member

    Patrick,

     

    Thank you very much for your help and patience.

     

    Finally everything works now.

     

    The change between when I had problems and now is that I am using the driver directly from SourceForge. The system was rebooted many times while I was unable to get VFs from one of the PFs. As you pointed out, the Intel driver takes 2 numbers for the max_vfs parameter (I assume it will take 4 numbers on an I350-T4, right?). My mistake.

     

    Again, thank you very much for your help.

  • 37. Re: a problem with 82599 using SR-IOV in linux
    Patrick_Kutch Community Member

    Phew!  And Great!

     

    Very happy all is well now.  Yes, you can use 4 numbers for an I350-T4, and 6 numbers if you have both a T4 and a T2 :-)  Remember - this option is only for the SourceForge driver; the kernel driver only takes a single parameter, which is used for all ports.

     

    The static MAC addresses are indeed a problem, and one that we can't really control.  The only solution I am aware of is exactly what you are doing: setting the MAC for each VF yourself.

     

    Come back anytime!

     

    - Patrick

  • 38. Re: a problem with 82599 using SR-IOV in linux
    chbarg Community Member

    Patrick,

     

    One last question (for now):

     

    Is there a good guide for the parameters that I can pass to the igb and igbvf drivers?

     

    Thank you very much for your help.

  • 40. Re: a problem with 82599 using SR-IOV in linux
    chbarg Community Member

    Patrick,

     

    thank you for your answer.

     

    I was using that guide (your link). It may be helpful to note for others that the max_vfs parameter should be specified for each port, so it needs multiple numbers in setups that create VFs on more than one port.

     

    Thank you for your help.

  • 41. Re: a problem with 82599 using SR-IOV in linux
    liyu Community Member

    Hello, I'm sorry to ask about the failure "not enough MMIO resources". We have confirmed that SR-IOV capability is enabled in both the BIOS and the kernel, but the same message still appears. When we check /proc/iomem we find that no resources were allocated for SR-IOV, and the same when we look at the resources in the /sys/bus/pci/devices directory. We think this may be a bug in our motherboard.
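    One thing that sometimes helps when SR-IOV BARs get no MMIO space is asking the kernel to reassign PCI resources itself rather than trusting the BIOS. A hedged sketch of the boot-parameter approach (file location and GRUB syntax vary by distro; whether it helps is platform-dependent, and a BIOS update may still be required):

```shell
# /etc/default/grub (example location) -- ask the kernel to
# (re)allocate PCI BARs, including the large SR-IOV BARs, instead of
# keeping the assignments the BIOS made.
GRUB_CMDLINE_LINUX="pci=realloc"
```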

    Our BIOS information is as follows:

    BIOS Information

            Vendor: American Megatrends Inc.

            Version: 2.0a  

            Release Date: 09/29/2010

    The motherboard information is as follows:

    System Information

            Manufacturer: Supermicro

            Product Name: X8DTH-i/6/iF/6F

            Version: 1234567890

            Serial Number: 1234567890

            UUID: 02050D04-0904-0205-0000-000000000000

            Wake-up Type: Power Switch

            SKU Number: To Be Filled By O.E.M.

            Family: Server

    Recently we found a new motherboard, the Supermicro X8DTU-6TF+. Can you help us confirm whether this motherboard supports SR-IOV with the 82599? Looking forward to your response.

  • 42. Re: a problem with 82599 using SR-IOV in linux
    Patrick_Kutch Community Member

    Sorry that you continue to experience problems.

     

    Unfortunately we do not validate all of the various combinations of servers and BIOS available.

     

    We ensure that our devices and drivers conform to the SR-IOV spec with compliance tools, and of course we do some testing on a subset of available servers.

     

    I do know from personal experience that Supermicro systems support SR-IOV; however, the last time I ordered one from them, I had to specifically request a special BIOS that had SR-IOV support.  That was 3 years ago; I would assume, but cannot confirm, that SR-IOV support is now standard.

    At this time, I have nothing additional that can help you.  If I hear anything about this specific system, I will pass it along.  Hopefully another reader of this forum has some insight into your issue, which appears to be beyond the scope of the actual Intel device.

  • 43. Re: a problem with 82599 using SR-IOV in linux
    liyu Community Member

    Patrick,

    I have solved the problem now by updating the BIOS to the latest version. Here is another question: can the 82599 or 82576 provide a bandwidth guarantee?

    Thank you for your help!

  • 44. Re: a problem with 82599 using SR-IOV in linux
    Patrick_Kutch Community Member

    Great!  I figured it was the BIOS.

     

    I am not sure what you mean by bandwidth guarantee.  Both of those devices support rate limiting, where you can specify the maximum bandwidth available to a VF.  This is described in detail in my latest paper:

    http://communities.intel.com/community/wired/blog/2012/06/25/latest-flexible-port-partitioning-paper-is-now-available-learn-about-qos-and-sr-iov
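    The per-VF rate limit mentioned above can be set through iproute2; a sketch, with the PF name and limit as examples (echoed as a dry run; remove the echo and run as root to apply):

```shell
# Cap VF 0 on PF eth2 at 1000 Mb/s. 'rate' is the per-VF transmit
# rate limit in Mb/s; setting it to 0 removes the limit.
PF=eth2
VF=0
LIMIT=1000
CMD="ip link set $PF vf $VF rate $LIMIT"
echo "$CMD"
```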
