Ethernet Products

X710 dropping LLDP frames ?

mmich37
Beginner

Hello,

Our servers each have two XL710 quad-port cards.

Initially, we had this firmware version:

firmware-version: f4.22 a1.1 n04.26 e8000152d

LLDP was working fine, no problem; frames were received correctly.

But after upgrading to what seems to be the latest version:

firmware-version: 4.53 0x80001da6 0.0.0

We don't receive any LLDP frames anymore.

These frames seem to be dropped by the X710 cards.

I tried downgrading to the 4.42 firmware but ... same thing.

The problem occurs on both vSphere 6.0 and Linux (RHEL 6.6), on all five servers that have these Ethernet adapters with the updated firmware.

We have cards from another vendor in these same servers, plugged into the same network equipment, and they receive LLDP frames without any problem.

Some debug information (vSphere side; we upgraded the driver to the latest version according to the HCL, but the problem is the same with the "base" driver):

# lspci|grep Ethernet

0000:02:00.0 Network controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet [vmnic0]

0000:02:00.1 Network controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet [vmnic1]

0000:02:00.2 Network controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet [vmnic2]

0000:02:00.3 Network controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet [vmnic3]

0000:04:00.0 Network controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet [vmnic4]

0000:04:00.1 Network controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet [vmnic5]

0000:08:00.0 Network controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [vmnic10]

0000:08:00.1 Network controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [vmnic11]

0000:08:00.2 Network controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [vmnic12]

0000:08:00.3 Network controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [vmnic13]

0000:0b:00.0 Network controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [vmnic6]

0000:0b:00.1 Network controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [vmnic7]

0000:0b:00.2 Network controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [vmnic8]

0000:0b:00.3 Network controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [vmnic9]

# vmkchdev -l|grep vmnic6

0000:0b:00.0 8086:1572 8086:0002 vmkernel vmnic6

# ethtool -i vmnic6

driver: i40e

version: 1.3.38

firmware-version: 4.53 0x80001da6 0.0.0

bus-info: 0000:0b:00.0

Some debug information (Linux side; the i40e driver was upgraded too, but the problem is the same with the "base" driver):

# lspci | grep Ethernet

02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)

02:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)

02:00.2 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)

02:00.3 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)

04:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)

04:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)

08:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

08:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

08:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

08:00.3 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

0b:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

0b:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

0b:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

0b:00.3 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

# ethtool -i eth6

driver: i40e

version: 1.3.39.1

firmware-version: 4.53 0x80001da6 0.0.0

bus-info: 0000:0b:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

# ethtool -k eth6

Features for eth6:

rx-checksumming: on

tx-checksumming: on

tx-checksum-ipv4: on

tx-checksum-unneeded: off

tx-checksum-ip-generic: off

tx-checksum-ipv6: on

tx-checksum-fcoe-crc: off [fixed]

tx-checksum-sctp: on [fixed]

scatter-gather: on

tx-scatter-gather: on

tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: on

tx-tcp-segmentation: on

tx-tcp-ecn-segmentation: on

tx-tcp6-segmentation: on

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: on

generic-receive-offload: on

large-receive-offload: off

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: on

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: on [fixed]

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: off [fixed]

tx-gre-segmentation: off [fixed]

tx-udp_tnl-segmentation: off [fixed]

fcoe-mtu: off [fixed]

loopback: off [fixed]

 

vSphere says that "Link Layer Discovery Protocol is not available on this physical network adapter".

lldpd under Linux doesn't output any information about neighbor switches for the XL710 cards.

From the switch's perspective, we see the LLDP frames being transmitted on the port just fine!

But when trying to capture LLDP frames, neither tcpdump under Linux (tcpdump -i eth6 -s 1500 -XX 'ether proto 0x88cc') nor pktcap-uw under vSphere (pktcap-uw --uplink vmnic1 --ethtype 0x88cc -c 1) ever outputs any frames.

SYeo3
Valued Contributor I

Hi michelm,

LLDP may have been disabled when you updated the firmware. I'll check on this further.

Sincerely,

Sandy

mmich37
Beginner

Hi Sandy,

First, thanks for your reply.

Yes, LLDP seems to be disabled.

Do you perhaps know how to enable it again?

I tried playing with some settings via 'ethtool -K ...' but had no luck, and there is no obvious setting for LLDP.

MM.

SYeo3
Valued Contributor I

Hi michelm,

Please refer to "Configuring Link Layer Discovery Protocol agent daemon (lldpad)"

Start lldpad service and configure to start at boot time.

root# service lldpad start

Starting lldpad: [done] [ OK ]

root# chkconfig lldpad on

NOTE: There is no output from this command, but it will enable lldpad to start automatically when the system is booted.

User Guide:

http://www.intel.com/content/dam/www/public/us/en/documents/guides/fcoe-user-guide.pdf

Sincerely,

Sandy

mmich37
Beginner

Hello Sandy,

Same result after installing lldpad and enabling some settings.

LLDP RX frames are never seen from the OS side.

Here is what I do (after installing and starting lldpad):

# dmesg | grep i40e|grep eth|grep Link

i40e 0000:08:00.0: eth2: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

i40e 0000:08:00.1: eth3: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

i40e 0000:08:00.2: eth4: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

i40e 0000:08:00.3: eth5: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

i40e 0000:0b:00.0: eth6: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

i40e 0000:0b:00.1: eth7: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

i40e 0000:0b:00.2: eth8: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

i40e 0000:0b:00.3: eth9: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

# for i in `seq 2 9`; do
>   echo "Doing eth$i"
>   lldptool set-lldp -i eth$i adminStatus=rxtx
>   lldptool -T -i eth$i -V sysName enableTx=yes
>   lldptool -T -i eth$i -V portDesc enableTx=yes
>   lldptool -T -i eth$i -V sysDesc enableTx=yes
>   lldptool -T -i eth$i -V sysCap enableTx=yes
>   lldptool -T -i eth$i -V mngAddr enableTx=yes
> done

Doing eth2

adminStatus = rxtx

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

Doing eth3

adminStatus = rxtx

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

Doing eth4

adminStatus = rxtx

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

Doing eth5

adminStatus = rxtx

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

Doing eth6

adminStatus = rxtx

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

Doing eth7

adminStatus = rxtx

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

Doing eth8

adminStatus = rxtx

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

Doing eth9

adminStatus = rxtx

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

enableTx = yes

# /etc/init.d/lldpad restart

Stopping lldpad: [ OK ]

Starting lldpad: [ OK ]

# lldptool get-lldp -i eth6 adminStatus

adminStatus=rxtx

# lldptool -t -n -i eth6

# tcpdump -i eth6 -s 1500 -XX 'ether proto 0x88cc'

 

Some information regarding DCB:

# dcbtool gc eth6 dcb

Command: Get Config

Feature: DCB State

Port: eth6

Status: Device not capable

# dcbtool sc eth6 dcb on

Command: Set Config

Feature: DCB State

Port: eth6

Status: Device not capable

It seems that it's impossible to enable it?

Driver/firmware version (same as before):

# ethtool -i eth6

driver: i40e

version: 1.3.39.1

firmware-version: 4.53 0x80001da6 0.0.0

bus-info: 0000:0b:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

Tests are shown for eth6, but the result is the same with any other i40e Ethernet interface.

It really seems that the LLDP RX frames are dropped directly inside the NIC (a firmware setting to tune?) whatever the OS (on the vSphere side it's the same result: LLDP frames are never received on the ESXi host).

 

MM.

Jesse_B_Intel
Employee

LLDP is meant to be handled directly on the NIC in the X710, and the firmware defaults to this feature being on. You can work around the issue (at least under Linux) using

# echo lldp stop > /sys/kernel/debug/i40e/<pci-address>/command

where <pci-address> is the port's PCI bus address as reported by ethtool -i (the bus-info field), e.g. 0000:0b:00.0.
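With eight ports per server, the write can be scripted. A minimal sketch, assuming debugfs is mounted at /sys/kernel/debug and that the i40e debugfs nodes are named after the PCI bus address (matching ethtool's bus-info output); it only prints the writes so you can review them before piping the output to sh:

```shell
# Hypothetical helper: build the debugfs write for one i40e port.
# The directory layout is an assumption; verify it on your system first.
build_cmd() {
    printf 'echo lldp stop > /sys/kernel/debug/i40e/%s/command\n' "$1"
}

# Print the write for every i40e debugfs node present. Review the
# output, then pipe it to sh to actually stop the firmware LLDP agent.
for pci in /sys/kernel/debug/i40e/*/; do
    if [ -d "$pci" ]; then
        build_cmd "$(basename "$pci")"
    fi
done
```

Note the setting does not survive a reboot or driver reload, so it would need to be reapplied from an init script.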

 

mmich37
Beginner

Great, it's working now.

Perhaps Intel should document this somewhere ... ?

Thank you jbrandeb! :-)

JConn6
Beginner

This seems to be the root cause of the problem we're having as well. However, it seems that the kernel version we're using (2.6.x) requires a different method to disable the internal LLDP agent. When I attempt to echo 'lldp stop' to the path specified above, I get an error and no matter what I try, I'm unable to issue the command in the manner stated. Does Intel provide *any* documentation on how to disable this internal LLDP agent for Linux 2.6.x and 3.x?

Also asked:

http://serverfault.com/questions/775980/disable-internal-intel-x710-lldp-agent centos6 - Disable internal Intel X710 LLDP agent - Server Fault

TIA, Jim

JConn6
Beginner

Found the steps I needed to make the above (correct) step work. Please follow the link to ServerFault for the steps required if you're having the same issues. For the record, I'm quite disappointed in Intel for not making this a way more transparent process/issue.

oroma1
Beginner

Hello,

I am having the same problem. I use an XL710 card in HP DL380 servers running a Debian 7 OS.

We found out that the card does not pass LLDP packets up to the Linux stack; it discards them. This behavior differs from the X520 cards, which do pass LLDP packets to the Linux stack.

The workaround '# echo lldp stop > /sys/kernel/debug/i40e/<pci-address>/command' works, but we cannot do that on production systems.

Could Intel provide a proper way to set this value (for example with ethtool) in the next driver version? Or simply change the default so that the behaviour matches the X520 cards.

Thanks in advance.

Olivier
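For readers finding this later: i40e driver releases well after the 1.3.x versions in this thread reportedly expose the firmware LLDP agent as an ethtool private flag, which would be the "proper manner" asked for above. The flag name below is an assumption; check what your driver actually exposes before relying on it:

```shell
# List the private flags this driver version exposes (names vary).
ethtool --show-priv-flags eth6

# If a "disable-fw-lldp" flag is listed (newer i40e drivers), setting it
# turns off the on-NIC LLDP agent without touching debugfs.
ethtool --set-priv-flags eth6 disable-fw-lldp on
```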

idata
Employee

Is there a way to do this from ESXi? I need ESXi to exchange LLDP info with our switches for our SDN product to do its magic. ESXi currently shows that the physical adapter is not LLDP capable. Updating the firmware and driver to the latest versions did not help.

Thanks

idata
Employee

Same issue with me - X710 not able to transmit CDP or LLDP on ESXi 6.5 on a Dell R730XD

Product: VMware ESXi

Version: 6.5.0

Build: Releasebuild-5224529

Update: 0

Patch: 15

X710

Firmware: 16.5.20

Driver: 1.4.28

It was somewhat frustrating to discover that, as this is a Dell OEM X710, the only way to upgrade the firmware was via the Dell Lifecycle Controller process. Firmware downloaded directly from Intel didn't work, and no error message explained why. I didn't find this information anywhere on the Dell or Intel websites for this X710 NIC.

Thanks,

--Rob.

ken1
Beginner

 

Disabling the LLDP agent under ESXi:

esxcli system module parameters set -m i40en -p LLDP=0,0,0,0

Reference:

https://thevirtualist.org/enabling-lldp-for-intel-x-in-esxi/
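To confirm the parameter took, a quick check from the ESXi shell (this assumes the i40en driver as in Ken's command; a reboot or module reload is typically needed before the change takes effect):

```shell
# Show the current module parameters for i40en; LLDP=0,0,0,0 should
# appear in the Value column once the set command above has been run.
esxcli system module parameters list -m i40en | grep -i lldp
```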

 

best regards,

Ken
