
The Data Stack



Q1: Intel is engaged in a number of SDN and NFV community and standards-developing organizations worldwide. What is the reasoning behind joining the new Korea SDN/NFV Forum?

 

A1: Intel Korea is firmly committed to helping our local Korean partner ecosystem fully participate in and benefit from the transformation of the global networking industry towards software-defined networking (SDN) and network functions virtualization (NFV). Incorporating the latest SDN and NFV standards and open source technologies from initiatives such as OpenStack, OpenDaylight, OPNFV, OpenFlow and others into a coherent, value-added and stable software platform is complex. Working with our partners in the Korea SDN/NFV Forum, we hope to contribute to reducing this complexity, thereby accelerating the adoption of SDN and NFV in Korea itself as well as globally through the exported solutions of the Korean ICT industry.

 

Q2: What can Intel contribute to the Korea SDN/NFV Forum?

 

A2: Intel has been at the forefront of SDN and NFV technology for more than five years. During that time, the company has invested in working with a wide range of technology partners to develop cutting-edge SDN/NFV hardware and software, as well as best practices for rapid deployment. This customer-centric expertise in architecting, developing and deploying SDN/NFV in cloud, enterprise data center and telecommunication networks is core to our contribution to the Korea SDN/NFV Forum.

 

Another concrete example of our expertise is a deep level of experience in testing and validating SDN/NFV hardware and software solutions, which is an important component when developing credible proofs of concept (PoCs) for the next generation of SDN/NFV software. In addition, Intel has been operating Intel Network Builders, an SDN/NFV ecosystem program for solution partners and end users. Korea SDN/NFV Forum members can leverage this ecosystem to promote the products and solutions they develop globally.

 

Q3: Are there any specific working groups within the Forum that Intel will focus on?

 

A3:  Intel plans to contribute to the success of all working groups with a focus on the standard technology, service PoC, policy development, and international relations working groups. Through the global Intel network, we are also aiming to assist in the collaboration of the Korea SDN/NFV Forum with other international organizations.

 

Q4: What is Intel’s main goal for participating in the Korea SDN/NFV Forum in 2015?

 

A4: Primarily, we want to add value by helping the Korea SDN/NFV Forum get established and become a true source of SDN/NFV innovation for our partner ICT ecosystem here in Korea.

Back in 2011, I made the statement, "I have put my Oracle redo logs or SQL Server transaction log on nothing but SSDs" (Improve Database Performance: Redo and Transaction Logs on Solid State Disks (SSDs)). In fact, since the release of the Intel® SSD X25-E series in 2008, it is fair to say I have never looked back. Even though those X25-Es have long since been retired, every new product has convinced me further still that, from a performance perspective, a hard drive configuration just cannot compete. This is not to say that there have not been new skills to learn, such as the configuration details explained here (How to Configure Oracle Redo on SSD (Solid State Disks) with ASM). The Intel® SSD 910 series provided a definite step up from the X25-E for Oracle workloads (Comparing Performance of Oracle Redo on Solid State Disks (SSDs)) and proved that concerns about write peaks were unfounded (Should you put Oracle Database Redo on Solid State Disks (SSDs)). Now, with the PCIe*-based Intel® SSD DC P3600/P3700 series, we have the next step in the evolutionary development of SSDs for all types of Oracle workloads.

 

Additionally, there have been updates in operating system and driver support, so a refresh of the previous posts on SSDs for Oracle is warranted to help you get the best out of the Intel SSD DC P3700 series for Oracle redo.

 

NVMe

 

One significant difference in the new SSDs is the change in interface and driver from AHCI and SATA to NVMe (Non-Volatile Memory Express). For an introduction to NVMe, see this video by James Myers, and to understand the efficiency that NVMe brings, read this post by Christian Black. As James noted, high-performance, consistent, low-latency Oracle redo logging also needs high endurance, so the P3700 is the drive to use. With a new interface comes a new driver, which fortunately is included in the Linux kernel for the Oracle-supported Linux releases of Red Hat and Oracle Linux 6.5, 6.6 and 7.

I am using Oracle Linux 7.
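A quick way to confirm the NVMe driver is present and loaded is sketched below; the module name is standard, but the output will differ on your system.

[root@haswex1 ~]# modinfo nvme | head -5
[root@haswex1 ~]# lsmod | grep nvme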


Booting my system with both a RAID array of Intel SSD DC S3700 series and Intel SSD DC P3700 series shows two new disk devices:


First, the S3700 array using the previous interface:


Disk /dev/sdb1: 2394.0 GB, 2393997574144 bytes, 4675776512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes



Second, the new PCIe P3700 using NVMe:

 

Disk /dev/nvme0n1: 800.2 GB, 800166076416 bytes, 1562824368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes



Changing the Sector Size to 4KB

 

As Oracle introduced support for 4KB sector sizes in Oracle release 11g R2, it is important to be at a minimum of this release, or at Oracle 12c, to take full advantage of SSDs for Oracle redo. However, ‘out of the box’, as shown above, the P3700 presents a 512 byte sector size. We can use this ‘as is’ and set the Oracle parameter ‘disk_sector_size_override’ to true. With this we can then specify the blocksize to be 4KB when creating a redo log file. Oracle will then use 4KB redo log blocks and performance will not be compromised.
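A minimal sketch of that first option is shown below, assuming a diskgroup named +REDO as created later in this post. Note that on the releases I have seen, the override parameter is a hidden parameter, so it is set with a leading underscore and double quotes.

SQL> alter system set "_disk_sector_size_override"=TRUE scope=both;
SQL> alter database add logfile '+REDO' size 32g blocksize 4096;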


As a second option, the P3700 offers a feature called ‘Variable Sector Size’. Because we know we need 4KB sectors, we can set up the P3700 to present a 4KB sector size instead. This can then be used transparently by Oracle without the requirement for additional parameters. It is important to do this before you have configured or started to use the drive for Oracle as the operation is destructive of any existing data on the device.

 

To do this, first check that everything is up to date by using the Intel Solid State Drive Data Center Tool, available from https://downloadcenter.intel.com/download/23931/Intel-Solid-State-Drive-Data-Center-Tool. Be aware that after running the format command it will be necessary to reboot the system to pick up the new configuration and use the device.


[root@haswex1 ~]# isdct show -intelssd
- IntelSSD Index 0 -
Bootloader: 8B1B012D
DevicePath: /dev/nvme0n1
DeviceStatus: Healthy
Firmware: 8DV10130
FirmwareUpdateAvailable: Firmware is up to date as of this tool release.
Index: 0
ProductFamily: Intel SSD DC P3700 Series
ModelNumber: INTEL SSDPEDMD800G4
SerialNumber: CVFT421500GT800CGN



Then run the following command to change the sector size. The parameter LBAFormat=3 sets it to 4KB and LBAFormat=0 sets it back to 512 bytes.

 

[root@haswex1 ~]# isdct start -intelssd 0 Function=NVMeFormat LBAFormat=3 SecureEraseSetting=2 ProtectionInformation=0 MetaDataSetting=0
WARNING! You have selected to format the drive! 
Proceed with the format? (Y|N): Y
Running NVMe Format...
NVMe Format Successful.



After the format ran I rebooted. The reboot is necessary in order to perform an NVMe reset on the device, because I am on Oracle Linux 7 with a UEK kernel at 3.8.13-35.3.1. On Linux kernels 3.10 and above you can instead run the following command with the system online to perform the reset.

 

echo 1 > /sys/class/misc/nvme0/device/reset



The disk should now present the 4KB sector size we want for Oracle redo.

 

Disk /dev/nvme0n1: 800.2 GB, 800166076416 bytes, 195353046 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes



Configuring the P3700 for ASM

 

For ASM (Automatic Storage Management) we need a disk with a single partition. After giving the disk a gpt label, I use the following commands to create an aligned partition and to check its alignment.
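As a sketch of the labelling step that precedes this, using the device name shown earlier:

[root@haswex1 ~]# parted /dev/nvme0n1
(parted) mklabel gpt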

 

(parted) mkpart primary 2048s 100%                                        
(parted) print                                                            
Model: Unknown (unknown)
Disk /dev/nvme0n1: 195353046s
Sector size (logical/physical): 4096B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start  End         Size        File system  Name     Flags
1      2048s  195352831s  195350784s               primary

(parted) align-check optimal 1
1 aligned
(parted)  


     

I then use udev to set the device permissions. Note: the scsi_id command can be run independently to find the device ID to put in the file, and the udevadm command can be used to apply the rules (see the example after the rules file below). Rebooting the system is useful during configuration to ensure that the correct permissions are applied on boot.

 

[root@haswex1 ~]# cd /etc/udev/rules.d/
[root@haswex1 rules.d]# more 99-oracleasm.rules 
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="3600508e000000000c52195372b1d6008", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="nvme0n1p1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="365cd2e4080864356494e000000010000", OWNER="oracle", GROUP="dba", MODE="0660"
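As a minimal sketch of those supporting commands, using the device and file names above; the ID printed by scsi_id is the value that goes in the RESULT== field, although whether it returns an ID for an NVMe device depends on the kernel and udev versions in use.

[root@haswex1 rules.d]# /usr/lib/udev/scsi_id -g -u -d /dev/nvme0n1
[root@haswex1 rules.d]# udevadm control --reload-rules
[root@haswex1 rules.d]# udevadm trigger --type=devices --action=change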



With the rules successfully applied, the oracle user now has ownership of both the DC S3700 RAID array device and the P3700 presented by NVMe.

 

[root@haswex1 rules.d]# ls -l /dev/sdb1
brw-rw---- 1 oracle dba 8, 17 Mar  9 14:47 /dev/sdb1
[root@haswex1 rules.d]# ls -l /dev/nvme0n1p1 
brw-rw---- 1 oracle dba 259, 1 Mar  9 14:39 /dev/nvme0n1p1



Use ASMLIB to mark both disks for ASM.

 

[root@haswex1 rules.d]# oracleasm createdisk VOL2 /dev/nvme0n1p1
Writing disk header: done
Instantiating disk: done

[root@haswex1 rules.d]# oracleasm listdisks
VOL1
VOL2



As the Oracle user, use the ASMCA utility to create the ASM disk groups.

 

[Screenshot: creating the ASM disk groups with ASMCA]

 

I now have 2 disk groups created under ASM.

 

[Screenshot: the two ASM disk groups created]

 

Because of the way the disks were configured, Oracle has automatically detected and applied the 4KB sector size.

 

[oracle@haswex1 ~]$ sqlplus sys/oracle as sysasm
SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 12 10:30:04 2015
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
SQL> select name, sector_size from v$asm_diskgroup;

NAME                     SECTOR_SIZE
------------------------------ -----------
REDO                          4096
DATA                          4096


 

 

Spfiles in 4KB Diskgroups

 

In previous posts I noted Oracle bug “16870214 : DB STARTUP FAILS WITH ORA-17510 IF SPFILE IS IN 4K SECTOR SIZE DISKGROUP” and even with Oracle 12.1.0.2 this bug is still with us.  As both of my diskgroups have a 4KB sector size, this will affect me if I try to create a database in either without having applied patch 16870214.


With this bug, upon creating a database with DBCA you will see the following error.

 

[Screenshot: DBCA error caused by the spfile being created in a 4KB sector size diskgroup]


The database is created and the spfile does exist, so it can be extracted as follows:

 

ASMCMD> cd PARAMETERFILE
ASMCMD> ls
spfile.282.873892817
ASMCMD> cp spfile.282.873892817 /home/oracle/testspfile
copying +DATA/TEST/PARAMETERFILE/spfile.282.873892817 -> /home/oracle/testspfile



This spfile is corrupt and attempts to reuse it will result in errors.

 

ORA-17510: Attempt to do i/o beyond file size
ORA-17512: Block Verification Failed



However, you can extract the parameters by using the strings command and then create an external spfile, or a spfile in a diskgroup with a 512-byte sector size. Once complete, the Oracle instance can be started.
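A minimal sketch of that extraction, using the paths shown below; the strings output usually needs a little manual editing into valid pfile syntax before it can be used.

[oracle@haswex1 ~]$ strings /home/oracle/testspfile > /home/oracle/testpfile
[oracle@haswex1 ~]$ vi /home/oracle/testpfile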

 

SQL> create spfile='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfileTEST.ora' from pfile='/home/oracle/testpfile';
SQL> startup
ORACLE instance started



Creating Redo Logs under ASM


In viewing the same disks within the Oracle instance, the underlying sector size has been passed right through to the database.

 

SQL> select name, SECTOR_SIZE BLOCK_SIZE from v$asm_diskgroup;

NAME                   BLOCK_SIZE
------------------------------ ----------
REDO                      4096
DATA                      4096



Now it is possible to create a redo log file with a command such as follows:

 

SQL> alter database add logfile '+REDO' size 32g;



…and Oracle will create a redo log automatically with an optimal blocksize of 4KB.

 

SQL> select v$log.group#, member, blocksize from v$log, v$logfile where v$log.group#=3 and v$logfile.group#=3;

GROUP#
----------
MEMBER
-----------
BLOCKSIZE
----------
       3
+REDO/HWEXDB1/ONLINELOG/group_3.256.874146809
      4096



Running an OLTP workload with Oracle Redo on Intel® SSD DC P3700 series


To put Oracle redo on the P3700 through its paces I used a HammerDB workload. The redo is set up with a standard production-type configuration, without the commit_write and commit_wait parameters. A test shows we are running almost 100,000 transactions per second with redo at over 500 MB/second, and therefore we would be archiving almost 2 TB per hour.
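For reference, a quick way to confirm that those commit parameters are left at their defaults is sketched below, from a SQL*Plus session:

SQL> show parameter commit_wait
SQL> show parameter commit_write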

 

                     Per Second       Per Transaction   Per Exec   Per Call
Redo size (bytes):   504,694,043.7    5,350.6


Log file sync, even at this level of throughput, is just above 1 ms:

 

Event           Waits        Total Wait Time (sec)  Wait Avg(ms)  % DB time  Wait Class
DB CPU                       35.4K                                59.1
log file sync   19,927,449   23.2K                  1.16          38.7       Commit


…and the average log file parallel write shows an average disk response time of just 0.13 ms:

 

Event                    Waits      %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn  % bg time
log file parallel write  3,359,023  0           442                  0.13           0.12       2237277.09


 

There are six log writers on this system. As with previous blog posts on SSDs I observed the log activity to be heaviest on the first three and therefore traced the log file parallel write activity on the first one with the following method:

 

SQL> oradebug setospid 67810;
Oracle pid: 18, Unix process pid: 67810, image: oracle@haswex1.example.com (LG00)
SQL> oradebug event 10046 trace name context forever level 8;
ORA-49100: Failed to process event statement [10046 trace name context forever level 8]
SQL> oradebug event 10046 trace name context forever, level 8;
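To find the operating system pid of a particular log writer worker to pass to oradebug setospid in the first place, a query along these lines can be used (a sketch; the PNAME column is available in v$process on 12c):

SQL> select pname, spid from v$process where pname like 'LG0%';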


The trace file shows the following results for log file parallel write latency to the P3700.

 

Log Writer Worker  Over 1ms  Over 10ms  Over 20ms  Max Elapsed
LG00               1.04%     0.01%      0.00%      14.83ms

 

Looking at a scatter plot of all of the log file parallel write latencies, recorded in microseconds on the y axis, clearly illustrates that any outliers are statistically insignificant and that none exceed 15 milliseconds. Most of the writes are sub-millisecond, on a system that is processing many millions of transactions a minute while doing so.

[Figure: scatter plot of log file parallel write latencies, in microseconds on the y axis]

A subset of iostat data shows that the device is also far from full utilization.

 

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          77.30    0.00    8.07    0.24    0.00   14.39
Device:         wMB/s avgrq-sz avgqu-sz   await w_await  svctm  %util
nvme0n1        589.59    24.32     1.33    0.03    0.03   0.01  27.47
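For reference, output like the above can be captured with an invocation such as the following sketch; the interval is illustrative and the columns shown above are a subset of the extended statistics reported.

[root@haswex1 ~]# iostat -xm 10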


 

Conclusion


As a confirmed believer in SSDs, I have long been convinced that most experiences of poor Oracle redo performance on SSDs have been due to errors in configuration, such as sector size, block size and/or alignment, as opposed to the performance of the underlying device itself. By following the configuration steps I have outlined here, the Intel SSD DC P3700 series shows itself to be an ideal candidate to take Oracle redo to the next level of performance without compromising endurance.

By: Adrian Hoban

 

The performance needs of virtualized applications in the telecom network are distinctly different from those in the cloud or in the data center.  These NFV applications are implemented on a slice of a virtual server and yet need to match the performance that is delivered by a discrete appliance where the application is tightly tuned to the platform.

 

The Enhanced Platform Awareness initiative that I am a part of is a continuous program to enable fine-tuning of the platform for virtualized network functions. This is done by exposing the processor and platform capabilities through the management and orchestration layers. When a virtual network function is instantiated by an Enhanced Platform Awareness enabled orchestrator, the application requirements can be more efficiently matched with the platform capabilities.

 

Enhanced Platform Awareness is composed of several open source technologies that can be considered from the orchestration layers to be “tuning knobs” to adjust in order to meaningfully improve a range of packet-processing and application performance parameters.

 

These technologies have been developed and standardized through a two-year collaborative effort in the open source community.  We have worked with the ETSI NFV Performance Portability Working Group to refine these concepts.

 

At the same time, we have been working with developers to integrate the code into OpenStack®. Some of the features are available in the OpenStack Juno release, but I anticipate a more complete implementation will be a part of the Kilo release that is due in late April 2015.

 

How Enhanced Platform Awareness Helps NFV to Scale

In cloud environments, virtual application performance may often be increased by using a scaling out strategy such as by increasing the number of VMs the application can use. However, for virtualized telecom networks, applying a scaling out strategy to improve network performance may not achieve the desired results.

 

Scaling out NFV will not, on its own, ensure improvements in all of the important traffic characteristics (such as latency and jitter), and these are essential to providing the predictable service and application performance that network operators require. Using Enhanced Platform Awareness, we aim to address both performance and predictability requirements using technologies such as those below (a configuration sketch follows the list):

 

  • Single Root I/O Virtualization (SR-IOV): SR-IOV divides a PCIe physical function into multiple virtual functions (VFs), each with the capability to have its own bandwidth allocation. When virtual machines are assigned their own VF, they gain a high-performance, low-latency data path to the NIC.
  • Non-Uniform Memory Architecture (NUMA): With a NUMA design, the memory allocation process for an application prioritizes the highest-performing memory, which is local to a processor core.  In the case of Enhanced Platform Awareness, OpenStack® will be able to configure VMs to use CPU cores from the same processor socket and choose the optimal socket based on the locality of the relevant NIC device that is providing the data connectivity for the VM.
  • CPU Pinning: In CPU pinning, a process or thread has an affinity configured with one or multiple cores. In a 1:1 pinning configuration between virtual CPUs and physical CPUs, some predictability is introduced into the system by preventing host and guest schedulers from moving workloads around. This facilitates other efficiencies such as improved cache hit rates.
  • Huge Page support: Provides up to 1 GB page table entry sizes to reduce I/O translation look-aside buffer (IOTLB) misses, which improves networking performance, particularly for small packets.
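To make this concrete, here is a minimal sketch of how these capabilities are typically requested from OpenStack once the Enhanced Platform Awareness features are available. The flavor name and network ID are hypothetical, and the exact extra-spec keys land across the Juno and Kilo releases, so check the documentation for your release.

# hypothetical flavor for an EPA-aware VNF: pinned vCPUs and huge pages
nova flavor-key nfv.medium set hw:cpu_policy=dedicated
nova flavor-key nfv.medium set hw:mem_page_size=large
# request an SR-IOV virtual function for the VNF's data-plane port
neutron port-create <net-id> --binding:vnic_type direct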

 

A more detailed explanation of these technologies and how they work together can be found in a recently posted paper that I co-authored, titled A Path to Line-Rate-Capable NFV Deployments with Intel® Architecture and the OpenStack® Juno Release.

 

 

Virtual BNG/BRAS Example

The whitepaper also has a detailed example of a simulation we conducted to demonstrate the impact of these technologies.

 

We created a VNF with the Intel® Data Plane Performance Demonstrator (DPPD) as a tool to benchmark platform performance under simulated traffic loads and to show the impact of adding Enhanced Platform Awareness technologies. The DPPD was developed to emulate many of the functions of a virtual broadband network gateway / broadband remote access server.

 

We used the Juno release of OpenStack® for the test, which was patched with huge page support. A number of manual steps were applied to simulate the capability that should be available in the Kilo release such as CPU pinning and I/O Aware NUMA scheduling.

 

The results shown in the figure below are the relative gains in data throughput, as a percentage of 10 Gbps, achieved through the use of these EPA technologies. Latency and packet delay variation are also important characteristics for BNGs. Another study of this sample BNG includes results related to these metrics: Network Function Virtualization: Quality of Service in Broadband Remote Access Servers with Linux* and Intel® Architecture.

 


Cumulative performance impact on Intel® Data Plane Performance Demonstrators (Intel® DPPD) from platform optimizations

 

 

The order in which the features were applied impacts the incremental gains so it is important to consider the results as a whole rather than infer relative value from the incremental increases. There are also a number of other procedures that you should read more about in the whitepaper.

 

The two years of hard work by the open source community has brought us to the verge of a very important and fundamental step forward for delivering carrier-class NFV performance. Be sure to check back here for more of my blogs on this topic, and you can also follow the progress of Kilo at the OpenStack Kilo Release Schedule website.

By: Frank Schapfel

 

One of the challenges in deploying Network Functions Virtualization (NFV) is creating the right software management of the virtualized network.  There are differences between managing an IT Cloud and a Telco Cloud.  IT Cloud providers take advantage of centralized and standardized servers in large-scale data centers.  IT Cloud architects aim to maximize the utilization (efficiency) of the servers and to automate operations management.  In contrast, Telco Cloud application workloads are different: they have real-time constraints, government regulatory constraints, and network setup and teardown constraints.  New tools are needed to build a Telco Cloud to these requirements.

 

OpenStack is the open software community that has been developing IT Cloud orchestration management since 2010.  The Telco service provider community of end users, telecom equipment manufacturers (TEMs), and software vendors has rallied around adapting OpenStack cloud orchestration for the Telco Cloud.  Over the last few releases of OpenStack, the industry has been shaping and delivering Telco Cloud ready solutions. For now, let’s just focus on the real-time constraints. For the IT Cloud, the data center is viewed as a large pool of compute resources that needs to operate at maximum utilization, even to the point of over-subscription of the server resources; waiting a few milliseconds is imperceptible to the end user.  On the other hand, a network is real-time sensitive and therefore cannot tolerate over-subscription of resources.

 

To adapt OpenStack to be more Telco Cloud friendly, Intel contributed the concept of “Enhanced Platform Awareness” to OpenStack. Enhanced Platform Awareness in OpenStack offers fine-grained matching of virtualized network resources to the server platform capabilities.  Having a fine-grained view of the server platform allows the orchestration to accurately assign the Telco Cloud application workload to the best virtual resource.  The orchestrator needs NUMA (Non-Uniform Memory Architecture) awareness so that it can understand how the server resources are partitioned, and how CPUs, I/O devices, and memory are attached to sockets.  For instance, when workloads need line-rate bandwidth, high-speed memory access is critical, and huge page access is one of the latest technologies in the Intel® Xeon® E5-2600 v3 processor.

 

Now, at the Oracle Industry Connect event in Washington, DC, Oracle and Intel are demonstrating this collaboration using Enhanced Platform Awareness in OpenStack.  The Oracle Communications Network Service Orchestration uses OpenStack Enhanced Platform Awareness to achieve carrier-grade performance for the Telco Cloud. Virtualized network functions are assigned based on their needs for huge page access and NUMA awareness, while other cloud workloads, which are not network functions, are not assigned specific server resources.

 

The good news – the Enhanced Platform Awareness contributions are already up-streamed in the OpenStack repository, and will be in the OpenStack Kilo release later this year.  At Oracle Industry Connect this week, there is a keynote, panel discussions and demos to get even further “under the hood.”  And if you want even more details, there is a new Intel White Paper: A Path to Line-Rate-Capable NFV Deployments with Intel® Architecture and the OpenStack® Juno Release.

 

Adapting OpenStack for Telco Cloud is happening now. And Enhanced Platform Awareness is finding its way into a real, carrier-grade orchestration solution.

March has been a big month for demonstrating the role of Intel® Ethernet in the future of several key Intel initiatives that are changing the data center.

 

At the start of the month we were in Barcelona at Mobile World Congress demonstrating the role of Ethernet as the key server interconnect technology for Intel’s Software Defined Infrastructure (SDI) initiative; read my blog post on that event.

 

And just this week, Intel was in San Jose at the Open Compute Project Summit highlighting Ethernet’s role in Rack Scale Architecture, which is one of our initiatives for SDI.

 

RSA is a logical data center hardware architectural framework based on pooled and disaggregated computing, storage and networking resources from which software controllers can compose the ideal system for an application workload.

 

The use of virtualization in the data center is increasing server utilization levels and driving an insatiable need for more efficient data center networks. RSA’s disaggregated and pooled approach is an open, high-performance way to meet this need for data center efficiency.

 

In RSA, Ethernet plays a key role as the low-latency, high bandwidth fabric connecting the disaggregated resources together and to other resources outside of the rack. The whole system depends on Ethernet providing a low-latency, high throughput fabric that is also software controllable.

 

MWC was where we demonstrated Intel Ethernet’s software controllability through support for network virtualization overlays, and the OCP Summit was where we demonstrated the raw speed of our Ethernet technology.

 

A little history is in order. RSA was first demonstrated at last year’s OCP Summit, and as a part of that, we revealed an integrated 10GbE switch module proof of concept that included switch chip and multiple Ethernet controllers that removed the need for a NIC in the server.

 

This proof of concept showed how this architecture could disaggregate the network from the compute node.

 

At the 2015 show, we demonstrated a new design with our upcoming Red Rock Canyon technology, a single-chip solution that integrates multiple NICs into a switch chip. The chip delivered throughput of 50 Gbps between four Xeon nodes via PCIe, and multiple 100GbE connections between the server shelves, all with very low latency.

 

The features delivered by this innovative design provide performance optimized for RSA workloads. It’s safe to say that I have not seen a more efficient or higher-performance rack than this PoC; watch the video of its performance.

 

Red Rock Canyon is just one of the ways we’re continuing to innovate with Ethernet to make it the network of choice for high-end data centers.

Amber Huffman, Sr Principal Engineer, Storage Technologies Group at Intel


For Enterprise, everyone is talking about "Cloud," "Big Data," and “Software Defined X," the latest IT buzzwords. For consumers, the excitement is around 4K gaming and 4K digital content creation. At the heart of all this is a LOT of data. A petabyte of data used to sound enormous – now the explosion in data is being described in exabytes (1K petabytes) and even zettabytes (1K exabytes). The challenge is how to get fast access to the specific information you need in this sea of information.

 

NVM Express* (NVMe*) was designed for enterprise and consumer implementations, specifically to address this challenge and the opportunities created by the massive amount of data that businesses and consumers generate and devour.

 

NVMe is the standard interface for PCI Express* (PCIe*) SSDs. Other interfaces like Serial ATA and SAS were defined for mechanical hard drives, and these legacy interfaces are slow from both a throughput and a latency standpoint. NVMe jettisons this legacy and is architected from the ground up for non-volatile memory, enabling NVMe to deliver amazing performance and low latency. For example, NVMe delivers up to 6x the performance of state-of-the-art SATA SSDs1.

 

There are several exciting new developments in NVMe. In 2015, NVMe will be coming to client systems, delivering great performance at the low power levels required in 2-in-1s and tablets. The NVM Express Workgroup is also developing “NVMe over Fabrics,” which brings the benefits of NVMe across the data center and cloud over fabrics like Ethernet, Fibre Channel, InfiniBand*, and OmniPath* Architecture.

 

NVM Express is the interface that will serve data center and client needs for the next decade. For a closer look at the new developments in NVMe, look Under the Hood with this video. Check out more information at www.nvmexpress.org.

 

 



1Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Configuration: Performance claims obtained from data sheet, Intel® SSD DC P3700 Series 2TB, Intel® SSD DC S3700 Series: Intel Core i7-3770K CPU @ 3.50GHz, 8GB of system memory, Windows* Server 2012, IOMeter. Random performance is collected with 4 workers each with 32 QD. Configuration for latency: Intel® S2600CP server, Intel® Xeon® E5-2690v2 x2, 64GB DDR3, Intel® SSD DC P3700 Series 400GB, LSI 9207-8i, Intel® SSD DC S3700.

 

© 2015 Intel Corporation

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

It seems ironic to be blogging about wired Ethernet in conjunction with the world’s largest wireless technology show, Mobile World Congress.

 

But it’s actually very relevant because the “wireless” network is mostly wired and Ethernet is becoming a bigger part of this infrastructure.

 

That’s why we teamed with Cisco for a joint demonstration of a new Ethernet technology at MWC that shows the potential for virtualized Ethernet as a key part of Intel’s software-defined infrastructure initiative.

 

SDI is Intel’s vision of the future of the data center – both for enterprises and service providers. In an SDI data center, the network, compute and storage infrastructure are virtualized and managed by orchestration software to enable IT – or applications – to dynamically define and assign resources.

 

Compute and storage virtualization have been ongoing for some time, but the physical network is just now being virtualized via network virtualization overlays. NVOs are new packet encapsulation techniques that allow the industry to realize some of the same flexibility that we see today in virtualized compute and storage.

 

Intel has supported acceleration for early NVO protocols (Geneve, VXLAN, and NVGRE) in our Intel® Ethernet Controller XL710, codenamed Fortville. And now we’re supporting the latest NVO protocol - Network Service Header.

 

NSH is an IETF standard originally developed by Cisco that is an important advance in the ability to create service chains, making network design easier by routing packets through specific network services (firewall, encryption, etc.) in a virtualized network. When NSH is used for service chains, it simplifies the creation of complex services.

 

Which brings us to MWC where Intel and Cisco demonstrated an NFV security service-chaining application based on NSH using Intel’s Red Rock Canyon-based customer reference board.

 

Red Rock Canyon is a new breed of Ethernet product that integrates an Ethernet switch with high-speed network interface controllers. Red Rock Canyon includes PCIe 3.0 interfaces as a low latency way to connect Intel® Xeon-based servers to the network. Red Rock Canyon is now sampling to customers and will launch later this year.

 

It’s important to note that we’re supporting NSH in firmware at wire speed. This is an innovation that we originally developed for the XL710, which is now available in Red Rock Canyon too. This flexible protocol support is a critical precursor to Ethernet’s role as the interconnect for SDI.

 

The mission of my team is to provide Intel Ethernet network virtualization and switch innovations that enable the SDI vision.

 

This means that in addition to controllers, adapters and switches, we’ll be building our flexible Intel® Ethernet into Xeon CPUs and other devices to integrate it more completely with the compute node and with our SDI architecture.

 

The network may be the last component of the data center to be virtualized, but that is happening right now and it is a major milestone for the promise of SDI to be fully realized.

By Erin Jiang, Media Processing Segment Manager

 

 

As I was flying home from Mobile World Congress last week I reflected on what a great show it was for our virtual media processing and serving technology and how far this technology has come in its evolution.

It wasn't too long ago that video processing required a dedicated appliance. But at MWC, we teamed up with Huawei to demonstrate real-time NFV video conferencing with 4K video processing on an Intel® Core™ i7 powered, high-density, power-efficient server utilizing Intel's integrated hardware acceleration with Intel® Iris™ Pro graphics and Intel® Media Server Studio.


The demonstration was part of our collaboration with Huawei to deliver next-generation NFV video and audio processing solutions for the company’s cloud-based media platform.

 

Based on Intel media server solutions and Intel graphics virtualization technology, Huawei’s server solution delivers high-density video encoding, composition, and decoding on OpenStack-managed virtual machines.

This joint project with Huawei was definitely a highlight, but we had some other noteworthy mobile video service demonstrations in the Intel booth from partners Elemental Technologies, Artesyn Embedded Technologies and Vantrix.

 

Elemental demonstrated its next-generation software-defined video platform, including its Elemental Live and Elemental Delta products.  Elemental and Intel are collaborating to enable mobile network operators to monetize network investment in new ways with personalized advertising that is customizable by geography, node, device, or even by subscriber.

 

Artesyn and Vantrix combined their technologies to show virtualized video transcoding for over-the-top services and live/linear video content. This technology packs a very high stream density in a compact form factor and provides both mobile video delivery as well as speech transcoding.

 

The transcoding server is designed using SharpStreamer™ platforms, which leverage Intel’s integrated hardware-accelerated graphics, for cloud-based media processing and for cloud RAN and virtual RAN network functions.

With three partners using Intel technology to revolutionize video services delivery, I would say that MWC was a very good show that demonstrated just how far we’ve come and the promise of a great future.

Intel has been a key contributor to the Open Compute Project (OCP) since its inception in 2011. As a founding member of OCP, Intel strives to continue increasing the number of open solutions based on OCP specifications available on the market. That mission was front and center today at the annual OCP Summit in San Jose where we talked about a number of OCP-based products available from Intel and our ecosystem partners.

 

One of the highlights today at the OCP Summit was the introduction of the Intel Xeon processor D product family, announced by Intel on March 9th. It was an exciting moment to share more details with the OCP Summit audience about the first Intel Xeon based product manufactured on 14nm. Intel has leveraged our extensive data center experience and our leading 14nm process technology to create a highly integrated system-on-chip (SoC) that integrates Intel Xeon compute cores, Intel networking, and I/O onto a single processor.

 

During my keynote I had the pleasure of welcoming Jason Taylor, Facebook’s VP of Infrastructure, to talk about how Intel and Facebook are collaborating to create solutions and share them with the OCP. During our conversation, Jason talked about an Intel Xeon processor D-based system called Yosemite that Facebook will adopt and contribute to OCP, and the power and performance benefits delivered by this new SoC. For Intel, the most important part of launching a new product is helping our customers to be successful, and it is very rewarding to see a key partner such as Facebook be an early adopter of our latest product and join us at the OCP Summit to share with the audience how the Intel Xeon processor D product family will benefit their data center.

 

In our demo booth at the OCP Summit we are demonstrating the first implementation of Intel Rack Scale Architecture (Intel RSA) based on OCP-compliant hardware. Intel RSA is a logical architecture that enables the disaggregation and pooling of compute, storage, and networking resources, allowing our customers to deliver higher performance while lowering the TCO of their data center systems. Already, there is a growing RSA ecosystem focused on the development of OCP hardware, with RSA evaluations under way in the data centers of leading cloud service providers.

Intel is also contributing a wide range of new networking and storage products and technologies. At our booth you can check the first live demo of a 100-gigabit Ethernet switch for Intel RSA and also a new 40GbE adapter that supports the OCP 2.0 design specification.

 

As these examples should show, Intel is deeply committed to the OCP and its mission to enable the design and delivery of open, highly efficient hardware for scalable computing. To date, Intel has made several contributions to the OCP in the form of servers, racks, storage, and networking components aligned to OCP specifications and has worked with our partners on the development of 40 OCP systems.

 

You can expect us to continue to innovate and work together with our ecosystem partners to share specifications and best practices with OCP that deliver highly efficient solutions to the industry.

 

Intel and the Intel logo are trademarks of Intel Corporation in the United States and other countries. * Other names and brands may be claimed as the property of others.

All throughout the world you will find that some of the most innovative and impactful things have incredibly small beginnings. For example the giant sequoia, which grows to be the largest tree species in the world, starts as a tiny seed no longer than a grain of rice. All of the complex genetic makeup and intelligence that is needed to grow a massive living organism more than 250 feet tall is contained in a tiny footprint that can fit onto the tip of your finger. A small seed, like the giant sequoia, can provide the foundation for something that grows to massive heights and impacts the entire planet.


In the spirit of the giant sequoia seed, the Intel Xeon Processor D-1500 Product Family brings great intelligence in an extremely small footprint, enabling you to harness the incredible performance of Intel Xeon processors within a small, low power package that will help drive the transformation of the massive digital services economy and data center innovation.


Intel believes that everyone contains great potential, even though we as individuals represent only a small footprint on the face of the planet. To celebrate the opportunity we all have to start with something small and impact great things, Intel will provide the funds to plant one of the world’s largest tree species. Each time someone Tweets #XeonDTreePromo, Intel will donate $1.00 to the Penny Pines Reforestation Program in the Sequoia National Forest to fund the planting of a giant sequoia tree, so that we can all harness the great intelligence and innovation contained in the tiny footprint of a sequoia seed and help make the world a better place together.

 

 

 

 

Promotion Details:

Intel will donate $1.00 to The Penny Pines Reforestation Program for every instance that the phrase #XeonDTreePromo is Tweeted in an English language post, starting at 9:00 a.m. PST March 9, 2015 and ending 9:00 p.m. PST March 14, 2015, with the total possible donation limited to no more than $10,000. The funds donated to The Penny Pines Reforestation Program run through the Sequoia National Forest may be used to plant Sequoia seedlings, for general reforestation, or drawn upon as improvement projects are determined by resource managers within the program.

By Nidhi Chappell, Product Line Manager for the Intel® Xeon® Processor D family, Intel

 

 

Cloud and telecom service providers are caught in a constant battle to speed new service delivery, handle rapid growth in numbers of users, and contain IT costs. To achieve these critical goals, service providers need to look for opportunities to optimize infrastructure for density and cost, both in the data center and at the network edge.

 

That’s the idea behind the new Intel® Xeon® processor D family, the first system-on-a-chip (SoC) in the Intel Xeon processor portfolio. Designed for dense, small form factors, this new SoC puts the performance and advanced intelligence of Intel Xeon processors into dense, low-power networking, storage, and microserver infrastructure.

 

This is an ideal SoC for cloud service providers operating hyperscale data centers who want to use microservers to process lightweight workloads, such as dynamic web serving, memory caching, web hosting, and warm storage. They can now pack more compute density into their data centers. Better still, with support for up to 128 GB of memory, the SoC allows service providers to meet the needs of more users per server.

 

The Intel Xeon processor D family is also a great choice for telecommunications service providers who want to replace fixed function, proprietary, network edge appliances with open-standards Intel® Architecture. They can now get the benefits of Moore’s law, which has lowered the cost of compute by 60x over the past 10 years alone. They also get the benefit of standard Intel Architecture that can run common software across Intel product lines, generation after generation.

 

Intel Xeon Intelligence in a Low-power SoC

 

When it comes to performance, don’t let the small size fool you. We’re talking about “big core” performance and intelligence in a microserver form factor. This new SoC delivers up to 3.4 times the performance per processor and up to 1.7 times the performance per watt of the Intel® Atom™ processor C2750 SoC for dynamic web serving workloads. (1, 2)  Here’s a good analogy to help translate the performance density of the Xeon D: in a typical server rack, you could pack up to 150 Xeon D processors and be able to simultaneously host the entire population of L.A. and Chicago combined — all 6.6 million people! (3)

 

Despite its dense design, the Intel Xeon processor D family delivers all the intelligence you’ve come to expect in Intel Xeon processors. The SoC includes server-class Reliability, Availability, and Serviceability (RAS), hardware-enhanced security and compliance features, and Platform Storage Extensions that offer new intelligence for dense, low-power storage solutions.

 

Looking ahead, you can expect more good news about the Intel Xeon processor D family in the coming months. Intel plans to release more Xeon D processor versions in the second half of 2015, to build on the 4- and 8-core SoCs that are available today. These processors will be available in thermal design points of near 20 watts to 45 watts, making them ideal for networking and Internet-of-things usages.

 

All of this gives cloud and telecom service providers a lot to look forward to. They will soon have a broader range of options to address the pressing need for low-power, high-density infrastructure—from the data center to the network edge.

 

For a closer look at the Intel Xeon processor D family, visit intel.com/xeond.

 

 

 

 


Intel, the Intel logo, Xeon, Intel Atom, and Intel Core are trademarks of Intel Corporation in the United States and other countries.

* Other names and brands may be claimed as the property of others.

 

 

 

1. Source: Intel performance estimates based on internal testing of Dynamic Web Serving (Performance and Performance per Watt)

New Configuration: Intel® Xeon Processor D-based reference platform with one Pre-Production Xeon Processor D (8C, 1.9GHz, 45W), Turbo Boost Enabled, Hyper-Threading enabled, 64GB memory (4x16GB DDR4-2133 RDIMM ECC), 2x10GBase-T X552, 3x S3700 SATA SSD, Fedora* 20 (3.17.8-200.fc20.x86_64), Nginx* 1.4.4, Php-fpm* 15.4.14, memcached* 1.4.14, Simultaneous users=43843, Maximum un-optimized CRB wall power =114W, Perf/W=384.5 users/W. Note: Intel CRB (customer reference board) platform is not power optimized. Expect production platforms to consume less power. Other implementations based on microserver chassis, power=90W (estimated), Perf/W=487.15 users/W

Base Configuration: Supermicro SuperServer* 5018A-TN4 with one Intel Atom Processor C2750 (8C, 2.4GHz, 20W), Turbo Boost Enabled, 32GB memory (4x8GB DDR3-1600 SO-DIMM ECC), 1x10GBase-T X520, 2x S3700 SATA SSD, Ubuntu* 14.10 (3.16.0-23 generic), Nginx* 1.4.4, Php-fpm* 15.4.14, memcached* 1.4.14, Simultaneous users=12896. Maximum wall power =46W, Perf/W=280.3 users/W

2. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase. For more information go to http://www.intel.com/performance.

3. 150 servers * 43,843 simultaneous users per server, at approximately 90 W per server, in a 15 kW rack

We’ve worked with SAP* for more than a decade to deliver better performance for SAP applications, including SAP HANA*, running on Intel® architecture. And the results just keep getting better. The latest Intel® Xeon® processor E7 v2 family can help IT get even more insights from SAP HANA, faster. When you add VMware vSphere* to the mix, you’ll see a huge boost in efficiency without adding more servers.


Why virtualize? Data centers running mission-critical apps are pushing toward more virtualization because it can help reduce costs and labor, simplify management, and save energy and space. In response to this push, Intel, SAP, and VMware have collaborated to make a robust solution for data center virtualization with SAP HANA.


What does this mean for IT managers? Your data center can grow with more scalable memory. You can have the peace of mind your data is protected with greater reliability. And, you’ll see big gains in efficiency, even when virtualized.


Grow with scalable memory


The Intel Xeon processor E7 v2 family offers 3x more memory capacity than previous generations. This not only dramatically increases the speed of SAP HANA performance, but it also gives you plenty of room as your data grows. The Intel Xeon processor E7 v2 family also provides up to 6 terabytes of memory in four-socket servers and supports 64 GB dual in-line memory modules.

 

Relax, your mission-critical data is protected


We designed the Intel Xeon processor E7 v2 family to support improved reliability, availability, and serviceability (RAS) features, which means solid performance all day, every day, with 99.999% uptime[4]. Intel® Run Sure Technology adds even more system uptime and increases data integrity for business-critical workloads. Whether you run SAP HANA on physical machines, virtualized, or on a private cloud, your data and mission-critical apps are in good hands.

 

Be amazed at the gains in efficiency


When Intel, SAP HANA, and VMware join forces in a virtualized environment, efficiencies abound. Data processing can be twice as fast with a PCI Express* (PCIe) solid-state drive. You can get up to 4x more I/O bandwidth, which equates to 4x more capacity for data circulation.5,6 Hyper-threading doubles the number of execution threads, increasing overall performance for complex workloads.


In summary, the Intel Xeon processor E7 v2 family unleashes the full range of SAP HANA capabilities with the simplicity and flexibility of virtualization on vSphere. Read more about this solution in our recent white paper, “Go Big or Go Home with Raw Processing Power for Virtualized SAP HANA.”


Follow #TechTim on Twitter and his growing #analytics @TimIntel community.

By Juan F. Roche

 

Oracle’s new generation of X5 Engineered Systems, announced in late January, is powered by the latest top-of-the-line Intel® Xeon® E5 v3 processors to deliver significantly improved performance, power efficiency, virtualization, and security for businesses.

 

Oracle Engineered Systems, which are purpose-built converged systems with pre-integrated stacks of software and hardware engineered to work together, are a cost-effective, simple-to-deploy alternative to data center complexity. With Intel performance, security technologies, and flexibility co-engineered directly into the system, these integrated systems are built with business value in mind.

 

These cost-effective Oracle systems perfectly demonstrate the tight, ongoing collaboration between Oracle and Intel. For over twenty years, the two companies have worked closely together, and the relationship is built on much more than simply tuning CPUs for performance. The collaboration extends from silicon to systems, with both companies working to optimize architectures, operating systems, software, tools and designs for each other’s technologies. Intel and Oracle work together to co-engineer the entire software, middleware, and hardware stack to ensure that the Engineered Systems take maximum advantage of the power built into Intel® Xeon® processors.

 

How does this translate into business value for our customers? Here’s a real-world example. Intel worked closely with Oracle in developing the customized Intel® Xeon® E7-8895 v2 processor, creating an elastic enterprise workload SKU that allows users to vary core counts and frequencies to meet the needs of differing workloads. Because of optimizations between the Intel Xeon processors and Oracle operating system kernels and system BIOS, this single processor SKU can behave like a variety of other SKUs at runtime.

 

That means faster processing and more flexibility for addressing business computing requirements: Oracle Exalytics* workloads that would take nearly 12 hours to run on a non-optimized Intel Xeon-based platform now drop to a 6.83-hour runtime on the customized Intel Xeon platform—a speed-up of 1.72x. That’s the difference between being able to run analytical workloads overnight and having to wait until a weekend to get business-critical analytics. With on-demand analytics like this, businesses have timely, precise intelligence for real-time decision-making.

 

In addition, the Exalytics platform is flexible: not all workloads require heavy per-thread concurrency, and this elastic SKU can be tuned and balanced to vary core counts and frequency levels to meet the needs of different computing requirements.

 

Oracle Engineered Systems are also optimized to take advantage of Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI), which is built into advanced Intel Xeon processors. Intel AES-NI helps boost data security by eliminating the performance penalty usually associated with software-based encryption by running encryption technology on hardware. Intel AES-NI speeds up the execution of encryption algorithms by as much as 300 percent, enhancing encryption performance so businesses don’t have to pay a performance overhead to keep data more secure.

 

To learn more about Oracle Exadata Engineered Systems powered by Intel Xeon processors, download our ebook What Will It Take to Manage Your Database Infrastructure?

By John Healy, General Manager, Software Defined Networking Division, Network Platforms Group, Intel


Mobile World Congress is upon us and there is plenty of buzz again about the progress of network functions virtualization (NFV). I’m looking forward to many new NFV demos, product announcements and presentations on how mobile operators are solving problems using the technology.


I’m very bullish on the future of the NFV market. In this last year, the industry has successfully passed through a normative phase in which specifications and use cases were determined, applications were developed, and proofs of concept and demos were successfully conducted.




Now we are moving into the next phase where NFV applications move into operation in production networks.  I am excited at the progress that our partners have achieved in translating trials into deployments and the benefits that they are beginning to achieve and measure.


But at the same time, I realize that as an industry there is still significant work to do to accelerate the technology to a point where carriers can consider full deployment and scaled implementations.  I believe there are two significant themes that need to be addressed in the coming year.

 

Challenge 1 - Technology Maturity


There have been plenty of successful NFV demos over the last 18 months proving the capability of virtualized services and the performance of standards-based computing platforms.  Now, we need to achieve mass scale and ruggedized implementations and for that the various building block technologies need to be hardened and matured.


Through this work the many virtual network functions (VNFs) will be “ruggedized” in order to provide the same service and reliability levels as today’s fixed-function counterparts.  This need for “carrier-grade reliability” is the necessary maturing that will occur.


Much of this ruggedization will happen as operators test these VNFs in practical demonstrations; those that feature traffic types, patterns and volumes found in production networks.  Several announcements at MWC have highlighted the deployments into live networks that mark this new phase. We are actively involved in this critical activity with our partners and their customers.


But there’s also a need for more orchestration functionality to be developed and proven so that service providers can scale their networks through the automated composition of network functions and services.


The intelligent placement of network functions mapped to the best capabilities of the computing platforms enables network services orchestration (NSO) to achieve the best performance. Exciting demos of this NSO in practice in a multi-vendor environment are on show at MWC.


Many of our ecosystem partners are tackling the orchestration of lower-level functions such as inventory, security, faults, application settings and other infrastructure elements following the ETSI management and orchestration (MANO) model.  Others have focused on service orchestration based on models of the network’s resources and policy definition schemes.


The open source community is also a key enabler of this maturing phase, including efforts such as the Nova and Neutron projects, which are building orchestration functionality into OpenStack. The Open Platform for NFV (OPNFV) project is focused on hardening the NFV infrastructure and improving infrastructure management, which should improve the performance predictability of NFV services.
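As one concrete illustration of the NFV-oriented hooks appearing in OpenStack, Nova flavors can carry extra specs that request CPU pinning, huge pages and NUMA placement for a VNF. The sketch below records those keys in a plain Python dict; which keys are honored, and with what values, depends on the OpenStack release and compute-node configuration in use, so treat it as an assumption-laden example rather than a reference.

    # Illustrative only: NFV-oriented Nova flavor extra specs, recorded as a dict.
    # Verify each key against the documentation of the OpenStack release you deploy.
    nfv_flavor_extra_specs = {
        "hw:cpu_policy": "dedicated",   # pin guest vCPUs to dedicated physical cores
        "hw:mem_page_size": "large",    # back guest memory with huge pages
        "hw:numa_nodes": "1",           # keep the guest within a single NUMA node
    }

    # Such properties are typically applied with the OpenStack client, for example:
    #   openstack flavor set vnf.large --property hw:cpu_policy=dedicated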


All of these initiatives are important, and they must be implemented in carrier networks and stressed so that operators can be confident that services will perform predictably.

I’ve seen this kind of performance evolution take place at Intel as we tackled the challenge of consolidating multiple processing workloads on our general-purpose Intel architecture CPUs while growing packet-processing performance to enable the replacement of fixed-function packet processors.


In the mid-2000s, packet processing performance on Intel processors was not where we wanted it to be, so we made modifications to the microarchitecture and, at the same time, developed a series of acceleration libraries and algorithms that became the Data Plane Development Kit (DPDK).


After several product generations, we can now provide wire-speed packet processing performance delivering 160Gbps of layer-three forwarding on a single core. This is made possible through our innovations and through deep collaborations with our partners, a concept we have extended to the world of NFV and from which many of the announcements at MWC have originated.

 

Challenge 2 – Interoperability


Interoperability on a grand scale is what will make widespread NFV possible. That means specification, standardization and interoperability testing are major requirements for this phase of NFV.


The open source dimension of NFV creates the community-driven, community-supported approach that speeds innovation, but it needs to be married to the world of specification definition and standardization, which has traditionally moved at a much slower pace, one that is too slow for the new world that NFV enables.


This is a significant opportunity and challenge for the industry: we need to collectively find the bridge between the two worlds. This is new territory for many of the parties involved, and many of the projects are just starting on the path.

 

Intel’s Four Phase Approach to NFV


Intel is leading efforts to accelerate the maturity of the NFV market and we have outlined four key ways to do that.

First, we’re very active in developing and promoting open source components and standards. We are doing this by contributing engineering and management talent and our own technology to open source efforts. The goal is to ensure that standards evolve in an open and interoperable way.


Next, we have developed the Open Network Platform to integrate open source and Intel technologies into a set of server and networking reference designs that VNF developers can use to shorten their time to market.


Working with the industry is important, which is why we have developed Intel Network Builders, a very active ecosystem of ISVs, hardware vendors, operating system vendors and VNF developers. Network Builders gives these companies opportunities to work together and with Intel, and gives operators and others a place to find solutions and keep a finger on the pulse of the industry.


And lastly, we are working closely with service providers to support them in converting PoCs into full deployments in their networks. It was at last year’s MWC that Telefonica announced its virtual CPE implementation, to which Intel contributed; this year there are several more such announcements, and we have many other similar projects that we’re working on now.


While these engineering challenges are significant, they are the growing pains that NFV must pass through to become a mature and tested solution. The key will be to keep openness and interoperability at the forefront and to keep the testing and development programs active so that they can scale to meet the needs of today’s carriers. If MWC is an indicator of the future, that future is definitely very bright.

By Dana Nehama, Sr. Product Marketing Manager, Network Platforms Group (NPG), Intel


It's a busy time for the Intel Open Network Platform (ONP) Server team and our Intel Network Builders partners. This week at Mobile World Congress in Barcelona, there are no fewer than six SDN/NFV demos that are based on Intel ONP Server and developed by our Intel Network Builders ecosystem partners. Back home, we are releasing Intel ONP Server release 1.3, with updates to the open source software as well as the addition of real-time Linux kernel support and 40GbE NIC support.


The Intel ONP Server is a reference architecture that brings together hardware and open source software building blocks used in SDN/NFV. It helps drive the development of optimized SDN/NFV products in the telecom, cloud and enterprise IT markets.


The MWC demos illustrate this perfectly as they all involve Intel Network Builders partners showcasing cutting-edge SDN/NFV solutions.


The ONP software stack comprises Intel-developed and community-developed open source software such as Fedora Linux, DPDK, Open vSwitch, OpenStack, OpenDaylight and others. The key is that we address the integration gap across multiple open source projects and bring it all together into a single software release.


Here’s what’s in release 1.3:


  • OpenStack Juno 2014.2.2 release
  • OpenDaylight Helium.1 release
  • Open vSwitch 2.3.90 release
  • DPDK 1.7.1 release
  • Fedora 21 release
  • Real-Time Linux Kernel
  • Integration with 4x10 Gigabit Intel® Ethernet Controller XL710 (Fortville)
  • Validation with a server platform that incorporates the Intel® Xeon® Processor E5-2600 v3 product family


Developers who go to www.01.org to get the software will see the value of this bundle because it all works together.  In addition, the reference architecture guide available on 01.org is a “cookbook” that provides guidelines on how to test ONP servers or build products based on Intel ONP Server software and hardware ingredients.


A first for this release is support for a real-time Linux kernel, which makes ONP Server an option for latency-sensitive applications.
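For developers who want to verify that a compute node is actually running a real-time kernel before deploying latency-sensitive VNFs on it, a simple check is sketched below. It assumes a Linux host; /sys/kernel/realtime is exposed by PREEMPT_RT-patched kernels, with the kernel version string used as a fallback.

    import platform

    def is_realtime_kernel():
        # /sys/kernel/realtime is provided by PREEMPT_RT-patched kernels
        # (assumption: Linux host); fall back to the kernel version string.
        try:
            with open("/sys/kernel/realtime") as f:
                return f.read().strip() == "1"
        except OSError:
            pass
        version = platform.version()
        return "PREEMPT_RT" in version or "PREEMPT RT" in version

    print("real-time kernel detected" if is_realtime_kernel() else "standard kernel")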


Another important aspect of the new release is support for the 4x10GbE Intel Ethernet Controller XL710. This adapter delivers high performance with low power consumption. For applications like a virtual evolved packet core (vEPC), having the data throughput of the XL710 is a significant advance.


If you are an NFV/SDN developer who wants to get to market quickly, I hope you will take a closer look at the latest release of ONP Server and consider it as a reference for your NFV/SDN development.


If you can’t make it to Barcelona to see the demos, you can find more information at: www.intel.com/ONP or at www.01.org.
