
Wired Ethernet


See attached Microsoft® Windows® driver support matrix.


Storage How-to's and Solutions

Posted by Shingi May 25, 2016

Network storage-related topics, including how-to's, whitepapers, and solutions documentation.

Intel® Ethernet Converged Network Adapter X710/XL710: iSCSI Quick Connect Guide (Windows*)


Windows How-to's and Solutions

Posted by Shingi May 25, 2016

Linux How-to's and Solutions

Posted by Shingi May 25, 2016
How-to's and Solutions
Linux: Linux Driver, Firmware
Windows: Windows Driver, Firmware
Network Virtualization: SDN, SR-IOV, NFV, VXLAN, OpenStack, DPDK
Storage: iSCSI, iWARP, FCoE

Accelerating Mirantis OpenStack 8.0 with 10/40Gb Intel® Ethernet CNA XL710 and 10Gb Intel® Ethernet CNA X710 families


This how-to documents how to set up Mirantis OpenStack 8.0 (Liberty on Ubuntu 14.04 LTS) and how to verify that network virtualization offloads are enabled.

With stateless virtualization offload support for VXLAN, NVGRE, and GENEVE, the Intel® Ethernet Converged Network Adapter XL710 preserves application performance for overlay networks. With these offloads, network traffic can be distributed across multiple CPU cores.
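As a quick check once a compute node is running, the adapter's queue and offload configuration can be inspected with ethtool (the interface name enp4s0f0 is only an example, matching the one used later in this how-to; substitute your own):

# ethtool -l enp4s0f0          (shows the RX/TX queue counts used to spread traffic across cores)

# ethtool -k enp4s0f0 | grep tnl          (shows the UDP tunnel, i.e. VXLAN/GENEVE, offload flags)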


*It is important to review and consult the Mirantis Fuel 8.0 install documentation available here before and during the setup of your environment.


Node Types

Servers used in this Reference Architecture will serve as one of these node types: Infrastructure, Controller, Compute, or Storage.

Infrastructure Node

The Infrastructure node is an Ubuntu 14.04-based node, which carries two virtual appliances:

• Fuel Master node: an OpenStack deployment tool.

• Cloud Validation node: a set of OpenStack post-deployment validation tools, including Tempest and Rally.

Controller Node

The Controller node is a control plane component of a cloud, which incorporates all core OpenStack infrastructure services such as MySQL, RabbitMQ, HAProxy, OpenStack APIs, Horizon, and MongoDB. This node is not used to run VMs.

Compute Node

The Compute node is a hypervisor component of a cloud, which runs virtual instances.

Storage Node

The Storage node is a component of an OpenStack environment that keeps and replicates all user data stored in your cloud, including object and block storage. Ceph is used as the storage back end.

*The Storage node is excluded from this how-to to simplify deployment; storage nodes can easily be added from the Fuel admin page.


Network Topology


Node network port configuration


Recommended Server Configurations

Controller Node: Dell R630, Intel® 2699v3, 2 x 400GB SSD, Intel® XL710 40GbE

Compute Node: Dell R730, Intel® 2699v3, 2 x 400GB SSD, Intel® XL710 40GbE

Storage Node: Dell R730, Intel® 2699v3, 4 x 400GB SSD and 20 x 1.2TB SAS, Intel® XL710 40GbE

Infrastructure Node: Dell R630, Intel® 2699v3, 2 x 400GB SSD, Intel® XL710 40GbE


Recommended Switch Configurations

Cisco SG300-28

Arista 7060 (10 or 40GbE)

Switch configuration

For Cisco switches, use the following commands:

# enable

# configure terminal

# interface range GigabitEthernet 1/0/1 - 24

# switchport trunk encapsulation dot1q

# switchport mode trunk

# switchport trunk allowed vlan all

# end

*To work with both tagged and untagged VLANs, the switch ports need to be in trunk mode.


For Arista switches, use the following commands:

# conf t

# vlan 102

# interface Ethernet 1/1-32

# switchport mode trunk

# switchport trunk allowed vlan all
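
Before deploying, it is worth confirming that the ports actually came up as trunks. On most Cisco IOS and Arista EOS switches something like the following can be used (exact commands vary by switch model and OS):

# show vlan

# show interfaces switchport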



Build the Fuel server and deployment VM


Install the Fuel server OS and connect the networks:

1.     Download the latest Mirantis Fuel ISO from

2.     Install the ISO from CD-ROM or USB drive (the USB drive must be created from Linux) using option 1

3.     Configure Fuel eth0 for Admin PXE






4.     Configure eth3 for your public network and gateway




5.      Configure PXE Setup




6.     Ensure the DNS and Time Sync checks complete with no errors



7.     Quit Setup, select Save and Quit, and you should see the Fuel login prompt





Start up all the nodes that are part of your environment and set them to boot from PXE.


1.     Select bootstrap




2.     Confirm nodes booted correctly





Create a new OpenStack Environment


1.     Log in to Fuel at






2.     Create new OpenStack environment




3.     Select the following:


OpenStack Release - Liberty on Ubuntu 14.04 (default)

Compute - KVM

Network Setup - Neutron with tunnelling segmentation

Storage Backend - Block storage: LVM or Ceph, depending on whether you have storage nodes

Additional services - leave blank (feel free to install any you would like)


Now click Finish and Create, and then click the environment name to configure your environment.





4.     Identify your nodes


You will now see unallocated nodes with MAC addresses as their names. You can differentiate the compute, storage, and controller nodes by looking at the HDD size or network MAC address.

Select each node, rename it, and assign the appropriate role:




5.     Add controller nodes




6.     Add compute nodes




7.     Configure interfaces for each node

          a. Go to the Nodes tab, select the node, and click “Configure Interfaces”



          b. Select each interface and drag it to the appropriate port




8.     Configure network settings under the networks tab



Set up your networks as follows (an illustrative addressing plan is shown after this list):


          a.       Click “default” under Node Network Groups and set:

                    Public – set the IP address range that has access to your Internet connection

                    Storage – use the defaults and VLAN 102

                    Management – use the defaults and VLAN 101

                    Private – use the defaults and VLAN 103


          b.      Click “Neutron L3” and set the Floating IP range to addresses that have Internet access, usually from the same range as the Public IP addresses above.


Leave everything else with defaults.
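
For reference, a hypothetical addressing plan consistent with the VLAN assignments above could look like the following (the subnets and ranges are placeholders for illustration only, not necessarily Fuel's defaults):

Public: 10.20.1.0/24, untagged, gateway 10.20.1.1
Management: 192.168.0.0/24, VLAN 101
Storage: 192.168.1.0/24, VLAN 102
Private: 192.168.2.0/24, VLAN 103
Neutron L3 floating IP range: 10.20.1.130 - 10.20.1.200 (within the Public subnet)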



9.     Verify networks. Under the same Networks menu click “Connectivity Check” and then click “Verify Networks”



If everything is set up correctly, you will see a green success message.





10.     Deploy environment. Click Dashboard and then “Deploy Changes”





11.     Verify everything is installed and log in to OpenStack by clicking the “Horizon” link
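
Optionally, the deployment can also be sanity-checked from the command line on a controller node (this assumes the admin credentials file is at /root/openrc, where Fuel typically places it):

# source /root/openrc

# nova service-list

# neutron agent-list

All services and agents should report as up before you continue.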





Verify that the Intel® X710 or Intel® XL710 virtualization offloads are enabled on your compute nodes.


1.     Log in to the compute node and type the following command:

# ethtool -k enp4s0f0


You should see the following output:
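
A representative subset of the offload flags to look for is shown below (exact feature names and ordering can vary with the kernel and i40e driver version; the UDP tunnel lines are the ones that matter for VXLAN/GENEVE):

tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
generic-segmentation-offload: on
generic-receive-offload: on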


Switch# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Switch(config)# interface gigabitethernet0/2
Switch(config-if)# switchport mode dynamic desirable
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# end


The industry is abuzz about the specification under development by the NVM Express Working Group called “NVMe over Fabrics,” and for good reason. The goal of this specification is to extend the highly efficient and scalable NVMe protocol beyond direct-attached storage to include networked storage. For an excellent, up-to-date background on NVMe over Fabrics, I strongly recommend the December 2015 SNIA Ethernet Storage Forum webcast, Under the Hood with NVMe over Fabrics, presented by J Metz of Cisco and Dave Minturn of Intel.


SNIA-ESF is following up on this webcast with another on Tuesday, January 26, 2016, at 10 a.m. Pacific that focuses on how Ethernet RDMA fabrics specifically fit into this new specification: How Ethernet RDMA Protocols iWARP and RoCE Support NVMe over Fabrics. This will be co-presented by John Kim from Mellanox and yours truly from Intel. The webcast is free, and you can register at  If you are interested but can’t make it at that time, the webcast will be posted immediately afterwards at the SNIA ESF website:


David Fair


Chair, SNIA Ethernet Storage Forum

Ethernet Networking Mktg Mgr, Intel

Hi Forum.


I have an Intel PRO/1000 PT Dual Port Server Adapter card running on Windows 7 US 64-bit.


I have the latest drivers for the Intel card and Windows fully updated.


If I try to make a team with the two ports, it comes up with an error (and creates a 'Virtual adapter...' something). The second time I try, it works, but something is still not working correctly.


It did work fine on my old motherboard (Asus P8Z68 Pro) but not in my new computer (Asus Z97-A).


I have tried everything: same old driver, new driver, BIOS settings, etc. Nothing is working. Is there a known problem with the driver for this card and Windows 7 64-bit US?


I am out of ideas now...


Best regards



About cookies

Posted by sharan Nov 5, 2015

I set cookies in the app using the Intel XDK cache plugin. It's working in the emulator but not when I installed the app on an Android Moto G (1st generation) phone. I'm using these statements to set and get the cookie, respectively:

intel.xdk.cache.setCookie('username', uname, '-1');
un = intel.xdk.cache.getCookie('username');

We are investigating what appears to be a damaged PCIe input lane, PER_0_p and PER_0_n on the i350. The 100 Ohm (at DC) differential R is located on the die per the data sheet. With the i350 powered down, a differential Ohm measurement with a DMM simply reads as an open, ~ 11 M Ohm across PER_0_p and PER_0_n. Is the input AC coupled on the die, or is the 100 Ohm differential across PER_0_p and PER_0_n not realized until power is applied? Thanks.


We are interested in using the X540 Twinville Dual Port 10GbE MAC/PHY in our application. The marketing data sheet


Intel® Ethernet Controllers and PHYs


lists the operating temperature at 0-55C. Yet the data sheet


on pg 1188 lists a maximum case temperature of Tcase Max = 107C.


Please elaborate on the difference and meaning between these two figures. We need the 0-70C temperature range if we are to use this part.


Thank you.

From Dawn Moore, General Manager of the Networking Division, read her latest blog:  Better Together: Balanced System Performance Through Network Innovation


The IT environment depends on hyperscale data centers and virtualized servers, making it crucial that upgrading to the latest technology be viewed from a comprehensive systems viewpoint. Need more data center performance? Maximize investment by upgrading the CPU, network and storage.

Due to the rapid growth of ever-more powerful mobile devices, enterprise networks need to keep pace. NBASE-T™ technology boosts the speed of twisted-pair copper cabling up to 100 meters in length well beyond its designed limit of 1 Gigabit per second (Gbps). Capable of reaching 2.5 and 5 Gbps over 100m of Cat 5e cable, NBASE-T solutions implement a new type of signaling over twisted-pair cabling. The upcoming Intel® Ethernet Controller code named Sageville, a single-chip dual-port 10GBASE-T and NBASE-T controller, can auto-negotiate to select the best speed: 100 Megabit Ethernet (100MbE), 1 Gigabit Ethernet (GbE), 2.5GbE, and 5GbE over Cat 5e or Cat 6, and 10GbE over Cat 6A or Cat 7. Watch our recent demo from Cisco Live.




This year, I attended Cisco Live for the first time and it was quite a large event. At Intel’s booth, we showcased a network service chaining demo, which was a combination of Cisco’s optimized UCS platform and Intel’s Ethernet Controller XL710 along with Intel’s Ethernet SDI Adapter using our new 100GbE silicon code named Red Rock Canyon. By using network service headers (NSH) to forward packets to virtualized network functions, virtual packet processing pipelines can be established on top of physical networks. But with the exponential increase in networking bandwidth, high performance forwarding solutions are needed. This demo showed how packets with NSH headers can be forwarded to virtual machines running on UCS platforms using the latest generation Intel adapters operating at 40GbE and 100GbE. Watch our recent demo from Cisco Live.



If you want to learn more about service creation using NSH, see the Cisco and Intel webinar from April 2015. Register here to view the replay.


Dynamic Service Creation (Making SDN Work for Your Success with Network Service Header)


Host: Dan Kurschner, Sr. Manager, SP Mobility Marketing
Speakers:
Paul Quinn, Cisco Distinguished Engineer Cloud Systems Development
Humberto La Roche, Cisco Principal Engineer
Uri Elzur, Intel Engineer


Overview: We all like to talk about creating new customized services for the end user at “web speed”. But today there is no way to automate service creation or to dynamically effect changes (augmentation) to existing services without touching the network topology. This is because we use physical service chains across the data plane. To achieve automated flexibility in service creation, we must logically decouple the service plane from the transport plane, creating a software abstraction from specific network nodes. Cisco and Intel are leading a fast-growing ecosystem of network technology vendors, which includes Citrix and F5, to drive the Internet Engineering Task Force (IETF) standardization of the Network Service Header (NSH) protocol. Open source NSH implementations are available today for Open vSwitch (OVS) and OpenDaylight (ODL).