
Wired Ethernet


This how-to documents the setup of Mirantis OpenStack 9.1 (Mitaka) with the SR-IOV-enabled Intel® Ethernet XL710-QDA2 adapter (dual port) for carrier-grade NFV.

The Intel® Ethernet XL710 provides rock-solid, industry-proven Single Root I/O Virtualization (SR-IOV) support to enable Network Functions Virtualization (NFV).

NFV enables carriers to virtualize network functions by running them as software instances on any hardware platform anywhere within their networks.
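
As a quick illustration of what SR-IOV provides at the host level, Virtual Functions (VFs) can be created on an XL710 port through sysfs and then listed with lspci. This is only a minimal sketch: the interface name enp4s0f0 and the VF count of 4 are assumptions for illustration, and in the deployment described below Fuel and Nova manage the VFs for you.

# echo 4 > /sys/class/net/enp4s0f0/device/sriov_numvfs

# lspci | grep "Virtual Function"

# ip link show enp4s0f0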

 

 

For the best performance and stability, upgrade to the latest firmware and drivers for your XL710 adapters available here:

Latest firmware: https://downloadcenter.intel.com/download/25791/NVM-Update-Utility-for-Intel-Ethernet-Adapters-Linux-?product=36773

Latest driver: https://downloadcenter.intel.com/download/24411/Intel-Network-Adapter-Driver-for-PCI-E-Intel-40-Gigabit-Ethernet-Network-Connections-under-Linux-?product=83418

Latest Virtual Function driver: https://downloadcenter.intel.com/download/24693/Intel-Network-Adapter-Virtual-Function-Driver-for-40-Gigabit-Ethernet-Network-Connections?v=t
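
Before and after upgrading, you can confirm which driver and NVM firmware versions a port is currently running. A minimal sketch, assuming the interface name enp4s0f0 (substitute your own):

# ethtool -i enp4s0f0

# modinfo i40e | grep -i version

The ethtool -i output includes the driver version and the firmware-version string reported by the adapter.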

 

 

*It is important to review and consult the Mirantis Fuel 9.1 installation documentation before and during the setup of your environment: https://docs.mirantis.com/openstack/fuel/fuel-9.1/index.html

 

 

DOWNLOAD THE PDF ATTACHMENT BELOW

See the attached Microsoft® Windows® driver support matrix.

Shingi

Storage How-to's and Solutions

Posted by Shingi May 25, 2016

Network storage related topics including how-to's, whitepapers and solutions documentation.

Intel® Ethernet Converged Network Adapter X710/XL710: iSCSI Quick Connect Guide (Windows*)

Shingi

Windows How-to's and Solutions

Posted by Shingi May 25, 2016

Shingi

Linux How-to's and Solutions

Posted by Shingi May 25, 2016

How-to's and Solutions

Linux: Linux Driver, Firmware
Windows: Windows Driver, Firmware
Network Virtualization: SDN, SR-IOV, NFV, VXLAN, OpenStack, DPDK
Storage: iSCSI, iWARP, FCoE

Accelerating Mirantis OpenStack 8.0 with 10/40Gb Intel® Ethernet CNA XL710 and 10Gb Intel® Ethernet CNA X710 families

 

This how-to documents how to set up Mirantis OpenStack 8.0 (Liberty – Ubuntu 14.04 LTS) and how to verify that network virtualization offloads are enabled.

With stateless virtualization offload support for VXLAN, NVGRE, and GENEVE, the Intel® Ethernet Converged Network Adapter XL710 preserves application performance in overlay networks. With these offloads, network traffic can be distributed across multiple CPU cores.

 

*It is important to review and consult the Mirantis Fuel 8.0 installation documentation before and during the setup of your environment: https://docs.mirantis.com/openstack/fuel/fuel-8.0/index.html

 

Node Types

Servers used in this Reference Architecture will serve as one of these node types: Infrastructure, Controller, Compute, or Storage.

Infrastructure Node

The Infrastructure node is an Ubuntu 14.04-based node, which carries two virtual appliances:

• Fuel Master node – an OpenStack deployment tool.

• Cloud Validation node – a set of OpenStack post-deployment validation tools, including Tempest and Rally.

Controller Node

The Controller node is a control plane component of a cloud, which incorporates all core OpenStack infrastructure services such as MySQL, RabbitMQ, HAProxy, OpenStack APIs, Horizon, and MongoDB. This node is not used to run VMs.

Compute Node

The Compute node is a hypervisor component of a cloud, which runs virtual instances.

Storage Node

The storage node is a component of an OpenStack environment which keeps and replicates all user data stored in your cloud including object and block storage. Ceph is used as a storage back end.

*The Storage node is excluded from this how-to to simplify deployment; storage nodes can easily be added later from the Fuel admin page.

 

Network Topology

 

Node network port configuration

 

Recommended Server Configurations

 

 

Controller Node – Server Model: Dell R630; CPU: Intel® Xeon® E5-2699 v3; Memory: 256GB; Storage: 2 x 400GB SSD; Network: Intel® XL710 40GbE

Compute Node – Server Model: Dell R730; CPU: Intel® Xeon® E5-2699 v3; Memory: 256GB; Storage: 2 x 400GB SSD; Network: Intel® XL710 40GbE

Storage Node – Server Model: Dell R730; CPU: Intel® Xeon® E5-2699 v3; Memory: 256GB; Storage: 4 x 400GB SSD, 20 x 1.2TB SAS; Network: Intel® XL710 40GbE

Infrastructure Node – Server Model: Dell R630; CPU: Intel® Xeon® E5-2699 v3; Memory: 256GB; Storage: 2 x 400GB SSD; Network: Intel® XL710 40GbE

 

 

Recommended Switch Configurations

1GbE – Model: Cisco SG300-28; Quantity: 2

10 or 40GbE – Model: Arista 7060; Quantity: 1

 

 

Switch configuration

For Cisco switches use the following commands:

# enable

# configure terminal

# interface range GigabitEthernet 1/0/1 - 24

# switchport trunk encapsulation dot1q

# switchport trunk allowed vlan all

*To allow your switches to work with tagged and untagged VLANs, the ports need to be in trunk mode.

 

# end
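
To confirm that the ports are actually trunking, the usual IOS-style show commands can be used. This is only a sketch: exact command syntax and output vary by switch model and firmware (the SG300 CLI differs slightly from mainline IOS), and if a port does not come up as a trunk you may also need switchport mode trunk on that interface range.

# show vlan

# show running-config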

 

For Arista switches use the following commands:

#enable

#conf t

#vlan 102

#interface Ethernet 1/1-32

#switchport trunk allowed vlan all
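
After applying the interface settings, you can verify the VLAN and trunk state and save the running configuration so it survives a reload. A sketch only; exact output depends on the EOS version, and if the ports remain in access mode you may also need switchport mode trunk under the same interface range:

#end

#show vlan

#write memory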

 

 

Build Fuel server and deployment VM

 

Install the Fuel server OS and connect the networks:

1.     Download the latest Mirantis Fuel ISO from https://software.mirantis.com/openstack-download-form/

2.     Install the ISO from CD-ROM or USB drive (the USB drive must be created from Linux; see the dd sketch after this list) using option 1

3.     Configure Fuel eth0 for Admin PXE

 

 

 

 

 

4.     Configure eth3 for your public network and gateway

 

 

 

5.      Configure PXE Setup

 

 

 

6.     Ensure the DNS and Time Sync checks complete with no errors

 

 

7.     Quit Setup by selecting Save and Quit; you should then see the Fuel login prompt
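
For step 2 above, a bootable USB drive can be written from Linux with dd. A minimal sketch only: MirantisOpenStack-8.0.iso and /dev/sdX are placeholders for your actual ISO filename and USB device, and writing to the wrong device will destroy its contents.

# dd if=MirantisOpenStack-8.0.iso of=/dev/sdX bs=4M

# sync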

 

 

 

 

Start up all the nodes that are part of your environment and set them to boot from PXE.

 

1.     Select bootstrap

 

 

 

2.     Confirm nodes booted correctly

 

 

 

 

Create a new OpenStack Environment

 

1.     Log in to Fuel at https://10.20.0.2:8443

 

 

 

 

 

2.     Create new OpenStack environment


 

 

3.     Select the following:

 

OpenStack Release - Liberty on Ubuntu 14.04 (default)

Compute - KVM

Network Setup - Neutron with tunneling segmentation

Storage Backend - Block storage - LVM or Ceph, depending on whether you have storage nodes

Additional services - None (feel free to install any if you would like)

 

Now click Finish and Create, then click the environment name to configure your environment.

 

 

 

 

4.     Identify your nodes

 

You will now see unallocated nodes with MAC addresses as their names. You can differentiate the compute, storage, and controller nodes by looking at the HDD size or the network MAC address.

Select each node, rename it, and assign the appropriate role:

 

 

 

5.     Add controller nodes

 

 

 

6.     Add compute nodes

 

 

 

7.     Configure interfaces for each node

          a. Go to the Nodes tab, click Select, and click “Configure Interfaces”

 

 

          b. Select each interface and drag it to the appropriate port

 

 

 

8.     Configure network settings under the networks tab

 

 

Set up your networks as follows:

 

          a.       Click “default” under Node Network Groups and set:

                    Public – Set the IP addresses that have access to your Internet connection

                    Storage – Use the defaults and VLAN 102

                    Management – Use the defaults and VLAN 101

                    Private – Use the defaults and VLAN 103

 

          b.      Click “Neutron L3” and set the Floating IP range to addresses that have Internet access, usually from the same range as the Public IP addresses above.

 

Leave everything else with defaults.

 

 

9.     Verify networks. Under the same Networks menu click “Connectivity Check” and then click “Verify Networks”

 

 

If everything is set up correctly, you will see a green success message.

 

 

 

 

10.     Deploy environment. Click Dashboard and then “Deploy Changes”

 

 

 

 

11.     Verify everything is installed and log in to OpenStack by clicking the “Horizon” link
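
In addition to checking Horizon, you can sanity-check the deployment from the command line on a controller node. A sketch, assuming the Fuel-generated admin credentials file is at /root/openrc (the usual location on Fuel-deployed controllers) and using the Liberty-era nova and neutron clients:

# source /root/openrc

# nova service-list

# neutron agent-list

All services and agents should report as enabled and up.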

 

 

 

 

Verify that Intel® X710 or Intel® XL710 virtualization offloads are enabled on your compute nodes.

 

1.     Log in to the compute node and run the following command:

# ethtool -k enp4s0f0

 

You should see the following output:
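
The full feature list depends on the kernel and i40e driver version, but the encapsulation offload lines should look similar to the following trimmed, illustrative sketch (not captured output); you can filter for them directly:

# ethtool -k enp4s0f0 | grep -E "tnl|gre"

tx-udp_tnl-segmentation: on

tx-gre-segmentation: on

If a feature reports off, it can usually be toggled with ethtool -K enp4s0f0 <feature> on.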

 

 

The industry is abuzz about the specification under development by the NVM Express Working Group called “NVMe over Fabrics” — for good reason. The goal of this specification is to extend the highly efficient and scalable NVMe protocol beyond direct-attached storage to include networked storage. For an excellent up-to-date background on NVMe over Fabrics, I strongly recommend the December 2015 SNIA Ethernet Storage Forum webcast, Under the Hood with NVMe over Fabrics, presented by J Metz of Cisco and Dave Minturn of Intel.

 

SNIA-ESF is following up on this webcast with another on Tuesday, January 26, 2016 10 a.m. Pacific that focuses on how Ethernet RDMA fabrics specifically fit into this new specification: How Ethernet RDMA Protocols iWARP and RoCE Support NVMe over Fabrics.  This will be co-presented by John Kim from Mellanox and yours truly from Intel.  The webcast is free, and you can register at https://www.brighttalk.com/webcast/663/185909.  If you are interested, but can’t make it at that time, the webcast will be posted immediately afterwards at the SNIA ESF website: http://www.snia.org/forums/esf/knowledge/webcasts.

 

David Fair

 

Chair, SNIA Ethernet Storage Forum

Ethernet Networking Mktg Mgr, Intel

Hi Forum.

 

I have an Intel PRO/1000 PT Dual Port Server Adapter card running on Windows 7 US 64-bit.

 

I have the latest drivers for the Intel card and Windows fully updated.

 

If I try to make a team with the two ports, it comes up with an error (and creates a 'Virtual adapter...' of some kind). The second time I try, it works, but something is still not working correctly.

 

It did work fine on my old motherboard (Asus P8Z68 Pro) but not in my new computer (Asus Z97-A).

 

I have tried everything: the same old driver, a new driver, BIOS settings, etc. Nothing is working. Is there a known problem with the driver for this card and Windows 7 64-bit US?

 

I am out of ideas now...

 

Best regards

Michael.

sharan

About cookies

Posted by sharan Nov 5, 2015

I set cookies in the app using the Intel XDK cache plugin. It works in the emulator but not when I install the app on an Android Moto G (1st generation) phone. I'm using these statements to set and get the cookie, respectively: intel.xdk.cache.setCookie('username', uname, '-1'); un = intel.xdk.cache.getCookie('username');

We are investigating what appears to be a damaged PCIe input lane, PER_0_p and PER_0_n on the i350. The 100 Ohm (at DC) differential R is located on the die per the data sheet. With the i350 powered down, a differential Ohm measurement with a DMM simply reads as an open, ~ 11 M Ohm across PER_0_p and PER_0_n. Is the input AC coupled on the die, or is the 100 Ohm differential across PER_0_p and PER_0_n not realized until power is applied? Thanks.

Hi,

We are interested to use the X540 Twinville Dual Port 10GbE MAC/PHY in our application. The marketing data sheet

 

Intel® Ethernet Controllers and PHYs

 

lists the operating temperature at 0-55C. Yet the data sheet

 

http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/ethernet-x540-datasheet.pdf

 

on pg 1188 lists a maximum case temperature of Tcase Max = 107C.

 

Please elaborate on the difference and meaning between these two figures. We need the 0-70C temperature range if we are to use this part.

 

Thank you.

From Dawn Moore, General Manager of the Networking Division, read her latest blog:  Better Together: Balanced System Performance Through Network Innovation

 

The IT environment depends on hyperscale data centers and virtualized servers, making it crucial that upgrading to the latest technology be viewed from a comprehensive systems viewpoint. Need more data center performance? Maximize investment by upgrading the CPU, network and storage.

Due to the rapid growth of ever more powerful mobile devices, enterprise networks need to keep pace. NBASE-T™ technology boosts the speed of twisted-pair copper cabling up to 100 meters in length well beyond its designed limit of 1 Gigabit per second (Gbps). Capable of reaching 2.5 and 5 Gbps over 100m of Cat 5e cable, NBASE-T solutions implement a new type of signaling over twisted-pair cabling. The upcoming Intel® Ethernet Controller code-named Sageville, a single-chip dual-port 10GBASE-T and NBASE-T controller, can auto-negotiate to select the best speed: 100 Megabit Ethernet (100MbE), 1 Gigabit Ethernet (GbE), 2.5GbE, and 5GbE over Cat 5e or Cat 6, and 10GbE over Cat 6A or Cat 7. Watch our recent demo from Cisco Live.