
Wired Ethernet


I’ve been quiet for a while as I’ve been off working on different things.  Recently I started digging into the area of Network Functions Virtualization (NFV). 

 

As most of us likely do, when I need to learn something new, the first thing I do is search the Internet for information on the topic.  My search on NFV led me to a growing number of papers on using SR-IOV for NFV.

 

This made sense to me, given that SR-IOV bypasses the hypervisor and virtual switch and can provide better performance by doing so.  Yet while the published performance numbers in these documents all looked great, many of them covered only one or two VMs/VNFs, which didn’t seem like a very representative NFV use case.

 

When I think of NFV, I think of several Virtual Network Function (VNF) VMs (or perhaps containers) all running on the same platform, with traffic sometimes going from one VNF to another in a service chain.

 

Thus began a six-month experiment in which Brian Johnson and I set up test environments to see how performance scaled when using SR-IOV compared to Open vSwitch with DPDK enhancements.  We have written a technical paper on our results that we hope can provide guidance when you are looking at Ethernet solutions for your NFV needs.

 

I hope you find it of use.  If you do, please comment so we know whether folks actually read these documents.

 

The paper is available here:  http://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/sr-iov-nfv-tech-brief.pdf

How well do you know your cables and optics? This slide shows which cables and optics match up with which high-speed Ethernet technology.

Click to open the attached PDF below.

Shingi

Ethernet Ecosystem

Posted by Shingi Jan 26, 2017

Ethernet ecosystem related topics including how-to's, whitepapers and solutions documentation.

 

How well do you know your cables and optics?

A network is designed with the right amount of buffering in the right spots. Buffers absorb packets when there are not enough resources available for immediate processing in the next step of a pipeline. When the buffers are too small, packet loss goes up; when they are too large, valuable resources are wasted and high latency can become an issue.

 

Finding the right balance between the available compute/network resources and the buffer sizes is something that needs careful design.

 

This balance is also important when looking at the interaction within a server between the network cards (which have some on-board buffering) and the DPDK-managed buffer resources on the host. Better tuning of the buffer sizes can eliminate potential packet loss. This paper summarizes what to do when moving from one type of network card to another with different on-board buffer behavior. It can also explain and fix certain packet loss issues seen when moving from one generation of NIC to another (e.g., from the Intel® Ethernet Server Adapter X520 to the Intel® Ethernet Controller XL710).
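As a starting point for that tuning, the NIC's own descriptor ring sizes can be inspected and enlarged with ethtool before the DPDK mbuf pools are resized. This is only a minimal sketch; the interface name enp4s0f0 is an example, and the right ring size depends on your traffic profile.

Show the current and maximum supported RX/TX ring sizes:

# ethtool -g enp4s0f0

Grow the RX ring toward its reported maximum to absorb short bursts:

# ethtool -G enp4s0f0 rx 4096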

This how-to documents the setup of Mirantis OpenStack 9.1 (Mitaka) with the SR-IOV-enabled Intel Ethernet XL710-QDA2 adapter (dual port) for carrier-grade NFV.

The Intel Ethernet XL710 provides rock-solid, industry-proven Single Root I/O Virtualization (SR-IOV) support to enable Network Functions Virtualization (NFV).

NFV enables carriers to virtualize network functions by running them as software instances on any hardware platform anywhere within their networks.

 

 

For the best performance and stability, upgrade to the latest firmware and drivers for your XL710 adapters available here:

Latest firmware: https://downloadcenter.intel.com/download/25791/NVM-Update-Utility-for-Intel-Ethernet-Adapters-Linux-?product=36773

Latest driver: https://downloadcenter.intel.com/download/24411/Intel-Network-Adapter-Driver-for-PCI-E-Intel-40-Gigabit-Ethernet-Network-Connections-under-Linux-?product=83418

Latest Virtual Function driver: https://downloadcenter.intel.com/download/24693/Intel-Network-Adapter-Virtual-Function-Driver-for-40-Gigabit-Ethernet-Network-Connections?v=t
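Once the firmware and drivers are current, SR-IOV virtual functions (VFs) can be created through sysfs and verified before they are handed to OpenStack. This is a minimal sketch; the interface name enp4s0f0 and the VF count of 4 are assumptions for illustration.

# echo 4 > /sys/class/net/enp4s0f0/device/sriov_numvfs

# lspci | grep -i "virtual function"

# ip link show enp4s0f0

The lspci output should list the new XL710 virtual functions, and ip link should show one vf entry per VF under the physical port.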

 

 

*It is important to review and consult the Mirantis Fuel 9.1 install documentation before and during the setup of your environment: https://docs.mirantis.com/openstack/fuel/fuel-9.1/index.html

 

 

DOWNLOAD THE PDF ATTACHMENT BELOW

See attached Microsoft® Windows® driver support matrix.

Shingi

Storage How-to's and Solutions

Posted by Shingi May 25, 2016

Network storage related topics including how-to's, whitepapers and solutions documentation.

Intel® Ethernet Converged Network Adapter X710/XL710: iSCSI Quick Connect Guide (Windows*)

Shingi

Windows How-to's and Solutions

Posted by Shingi May 25, 2016
Shingi

Linux How-to's and Solutions

Posted by Shingi May 25, 2016

Accelerating Mirantis OpenStack 8.0 with 10/40Gb Intel® Ethernet CNA XL710 and 10Gb Intel® Ethernet CNA X710 families

 

This how-to documents how to set up Mirantis OpenStack 8.0 (Liberty – Ubuntu 14.04 LTS) and how to verify that network virtualization offloads are enabled.

With stateless virtualization offload support for VXLAN, NVGRE, and GENEVE, the Intel Ethernet Converged Network Adapter XL710 preserves application performance for overlay networks. With these offloads, it is possible to distribute network traffic across multiple CPU cores.
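As a quick sanity check of the overlay path outside of OpenStack, a VXLAN interface can be created on the XL710 port and the tunnel offloads inspected with ethtool. This is a minimal sketch; the interface name enp4s0f0, the VXLAN ID 42, and the remote endpoint address are assumptions for illustration.

# ip link add vxlan42 type vxlan id 42 dev enp4s0f0 remote 192.168.102.2 dstport 4789

# ip link set vxlan42 up

# ethtool -k enp4s0f0 | grep tx-udp_tnl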

 

*It is important to review and consult the Mirantis Fuel 8.0 install documentation before and during the setup of your environment: https://docs.mirantis.com/openstack/fuel/fuel-8.0/index.html

 

Node Types

Servers used in this Reference Architecture will serve as one of these node types: Infrastructure, Controller, Compute, or Storage.

Infrastructure Node

The Infrastructure node is an Ubuntu 14.04-based node that carries two virtual appliances:

• Fuel Master node – an OpenStack deployment tool.
• Cloud Validation node – a set of OpenStack post-deployment validation tools, including Tempest and Rally.

Controller Node

The Controller node is a control plane component of a cloud, which incorporates all core OpenStack infrastructure services such as MySQL, RabbitMQ, HAProxy, OpenStack APIs, Horizon, and MongoDB. This node is not used to run VMs.

Compute Node

The Compute node is a hypervisor component of a cloud, which runs virtual instances.

Storage Node

The Storage node is a component of an OpenStack environment that keeps and replicates all user data stored in your cloud, including object and block storage. Ceph is used as the storage back end.

*The Storage node is excluded from this how-to to simplify deployment; storage nodes can easily be added from the Fuel admin page.

 

Network Topology

 

Node network port configuration

 

Recommended Server Configurations

                Controller Node       Compute Node          Storage Node                     Infrastructure Node
Server Model    Dell R630             Dell R730             Dell R730                        Dell R630
CPU             Intel® 2699v3         Intel® 2699v3         Intel® 2699v3                    Intel® 2699v3
Memory          256GB                 256GB                 256GB                            256GB
Storage         2 x 400GB SSD         2 x 400GB SSD         4 x 400GB SSD, 20 x 1.2TB SAS    2 x 400GB SSD
Network         Intel® XL710 40GbE    Intel® XL710 40GbE    Intel® XL710 40GbE               Intel® XL710 40GbE

 

 

Recommended Switch Configurations

Speed          Model             Quantity
1GbE           Cisco SG300-28    2
10 or 40GbE    Arista 7060       1

 

 

Switch configuration

For Cisco switches use the following commands:

# enable

# configure terminal

# interface range GigabitEthernet 1/0/1 - 24

# switchport trunk encapsulation dot1q

# switchport trunk allowed vlan all

*To allow your switches to work with tagged and untagged VLANs, the ports need to be in trunk mode.

 

# end

 

For Arista switches use the following commands:

#enable

#conf t

#vlan 102

#interface Ethernet 1/1-32

#switchport trunk allowed vlan all

 

 

Build the Fuel server and deployment VM

 

Install the Fuel server OS and connect the networks:

1.     Download the latest Mirantis Fuel ISO from https://software.mirantis.com/openstack-download-form/

2.     Install the ISO from CD-ROM or USB drive using option 1 (the USB drive must be created from Linux; see the sketch after this list)

3.     Configure Fuel eth0 for Admin PXE

 

 

 

 

 

4.     Configure eth3 for your public network and gateway

 

 

 

5.      Configure PXE Setup

 

 

 

6.     Ensure the DNS and Time Sync checks complete with no errors

 

 

7.     Quit Setup and select Save and Quit; you should then see the Fuel login screen
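For step 2, the bootable USB drive can be written from a Linux machine with dd. This is a minimal sketch; the ISO file name is a placeholder for the file you downloaded, and /dev/sdX must be replaced with your actual USB device (dd overwrites it completely).

# dd if=MirantisOpenStack-fuel.iso of=/dev/sdX bs=4M

# sync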

 

 

 

 

Start up all the nodes that are part of your environment and set them to boot from PXE.

 

1.     Select bootstrap

 

 

 

2.     Confirm nodes booted correctly
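The same check can also be made from the Fuel Master console using the Fuel CLI that ships with the Fuel Master node (a minimal sketch; the column layout varies by release). Discovered nodes should be listed with the status "discover":

# fuel node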

 

 

 

 

Create a new OpenStack Environment

 

1.     Log in to Fuel at https://10.20.0.2:8443

 

 

 

 

 

2.     Create new OpenStack environment


 

 

3.     Select the following:

 

OpenStack Release - Liberty on Ubuntu 14.04 (default)

Compute - KVM

Network Setup - Neutron with tunneling segmentation

Storage Backend - Block storage: LVM or Ceph, depending on whether you have storage nodes

Additional services - Leave blank (feel free to install any if you would like)

 

Now click Finish and Create, then click the environment name to configure your environment.

 

 

 

 

4.     Identify your nodes

 

You will now see unallocated nodes with MAC addresses as their names.  You can differentiate the compute, storage, and controller nodes by looking at the HDD size or network MAC address.

Select each node, rename it, and assign the appropriate role to it:

 

 

 

5.     Add controller nodes

 

 

 

6.     Add compute nodes

 

 

 

7.     Configure interfaces for each node

          a. Go to the Nodes tab, click Select, and click “Configure Interfaces”

 

 

          b. Select each interface and drag it to the appropriate port

 

 

 

8.     Configure network settings under the networks tab

 

 

Set up your networks as follows:

 

          a.       Click “default” under Node Network Groups and set:

                    Public – Set the IP address range that has access to your Internet connection

                    Storage – Use the defaults and VLAN 102

                    Management – Use the defaults and VLAN 101

                    Private – Use the defaults and VLAN 103

 

          b.      Click “Neutron L3” and set the Floating IP range with addresses that have Internet access, usually from the same range as the Public IP addresses above.

 

Leave everything else with defaults.

 

 

9.     Verify networks. Under the same Networks tab, click “Connectivity Check” and then click “Verify Networks”

 

 

If everything is set up correctly, you will see a green success message.

 

 

 

 

10.     Deploy environment. Click Dashboard and then “Deploy Changes”

 

 

 

 

11.     Verify everything is installed and log in to OpenStack by clicking the “Horizon” link

 

 

 

 

Verify that the Intel® X710 or Intel® XL710 virtualization offloads are enabled on your compute nodes.

 

1.     Log in to the compute node and run the command:

# ethtool -k enp4s0f0

 

You should see the following output:
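The exact feature list varies with the i40e driver version; the lines of interest are the tunnel segmentation offloads. A quick way to isolate them, assuming the same interface name, is:

# ethtool -k enp4s0f0 | grep -E "tnl|gre"

Features such as tx-udp_tnl-segmentation and tx-gre-segmentation should be reported as "on".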

 

 

The industry is abuzz about the specification under development by the NVM Express Working Group called “NVMe over Fabrics,” and for good reason.  The goal of this specification is to extend the highly efficient and scalable NVMe protocol beyond direct-attached storage to include networked storage.  For an excellent, up-to-date background on NVMe over Fabrics, I strongly recommend the December 2015 SNIA Ethernet Storage Forum webcast, Under the Hood with NVMe over Fabrics, presented by J Metz of Cisco and Dave Minturn of Intel.

 

SNIA-ESF is following up on this webcast with another on Tuesday, January 26, 2016 10 a.m. Pacific that focuses on how Ethernet RDMA fabrics specifically fit into this new specification: How Ethernet RDMA Protocols iWARP and RoCE Support NVMe over Fabrics.  This will be co-presented by John Kim from Mellanox and yours truly from Intel.  The webcast is free, and you can register at https://www.brighttalk.com/webcast/663/185909.  If you are interested, but can’t make it at that time, the webcast will be posted immediately afterwards at the SNIA ESF website: http://www.snia.org/forums/esf/knowledge/webcasts.

 

David Fair

 

Chair, SNIA Ethernet Storage Forum

Ethernet Networking Mktg Mgr, Intel

Hi Forum.

 

I have an Intel PRO/1000 PT Dual Port Server Adapter card running on Windows 7 (US, 64-bit).

 

I have the latest drivers for the Intel card and Windows fully updated.

 

If I try to create a team with the two ports, it comes up with an error (and creates a 'Virtual adapter...' of some kind).  The second time I try, it works, but something is still not right.

 

It worked fine on my old motherboard (Asus P8Z68 Pro) but not in my new computer (Asus Z97-A).

 

I have tried everything: the same old driver, a new driver, BIOS settings, etc. Nothing is working. Is there a known problem with the driver for this card and Windows 7 64-bit (US)?

 

I am out of ideas now...

 

Best regards

Michael.

sharan

About cookies

Posted by sharan Nov 5, 2015

I set cookies in the app using the Intel XDK cache plugin. It works in the emulator but not when I install the app on an Android Moto G (1st generation) phone. I'm using these statements to set and get the cookie, respectively: intel.xdk.cache.setCookie('username', uname, '-1'); un = intel.xdk.cache.getCookie('username');
