
The Data Stack


Intel has been advancing its strategy to embed Intel® Ethernet into System-on-a-chip (SoC) designs and products that will enable exciting new applications. To enable these SoCs, the Networking Division has been developing Ethernet IP blocks based on our market-leading 10GbE controller.


Last month, we had one of our biggest successes to date with the launch of the Intel® Xeon® processor D product family. The Xeon D, as it is known, is the first SoC from Intel that combines the power of a Xeon processor and the performance of Intel 10GbE in a single chip.


In one sense, the integration of this IP into the Xeon chip was very fast, taking only one year from concept to tape-in. But you could also say that the revolution was 12 years in the making, because that's how long Intel has been delivering 10GbE technology and perfecting the performance, features, drivers, and software that customers trust.


In fact, I like to say that this device has a trifecta of reliability advantages:

  • Proven Xeon performance
  • Proven Intel Ethernet
  • Intel's leading-edge manufacturing process


The Xeon D is developed for applications that will make the most use of this trifecta: emerging microservers, wireless base stations, routers and switches, security and network appliances, as well as the build-out of Software Defined Networking (SDN) and Network Functions Virtualization (NFV). The opportunities are endless.


These applications make use of the high performance and tight integration between the processor and integrated network controller. And all of them have an ongoing need for components that reduce cost, shrink system footprint and reduce power consumption, both in the data center and at the network edge.


The road to the Xeon D SoC


While Xeon D is certainly a highlight of our Ethernet IP strategy, it’s not the first successful SoC integration of Intel® Ethernet.


Our first announced venture in this area was the Intel® Atom™ C2000 processor family, which is targeted at networking and communications devices and at emerging microservers and storage devices. These processors include four ports of Gigabit Ethernet that can also support up to 2.5GbE for backplane implementations, for up to 10GbE of total throughput.


It’s great to see Intel® Ethernet play such a big role in a significant product like the Xeon D.  The combination of our proven Ethernet with the performance of a Xeon CPU offers our customers a tremendous value, and will open up new and exciting applications.

Today, 70 percent of US consumer Internet traffic is video, and it's growing every day, with over-the-top (OTT) providers delivering TV and movies to consumers, and broadcasters and enterprises streaming live events. Cloud computing is changing the landscape for video production as well. Much of the work that used to require dedicated workstations is being moved to servers in data centers and offered remotely by cloud service providers and private cloud solutions. As a result, the landscape for content creation and delivery is undergoing significant changes. The National Association of Broadcasters (NAB) show in Las Vegas highlights these trends, and Intel will be there showing how we help broadcasters, distributors, and video producers step up to the challenges.


Intel processors have always been used for video processing, but today's video workloads place new demands on processing hardware. The first new demand is for greater processing performance: as video data volume explodes and encoding schemes become more complex, processing power becomes more critical. The second demand is for increased data center density: as video processing moves to servers in data centers, service cost is driven by space and power. And the third demand is for openness: developers want language- and platform-independent APIs like OpenCL* to access CPU and GPU graphics functions. The Intel® Xeon® processor E3 platform with integrated Intel® Iris™ Pro Graphics and Intel® Quick Sync Video transcoding acceleration provides the performance and open development environment required to drive innovation and create the optimized video delivery systems needed by today's content distributors. And it does so with unparalleled density and power efficiency.
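To see why density and power dominate service cost, it helps to run the arithmetic. The sketch below is a back-of-envelope model with entirely hypothetical numbers (they are not Intel-published figures); it simply shows how streams per rack and watts per stream fall out of node density and per-node throughput.

```python
# Illustrative cost-of-density sketch for a transcoding service.
# All hardware numbers below are hypothetical assumptions, chosen only
# to make the arithmetic concrete.

def streams_per_rack(nodes_per_rack, streams_per_node):
    """Total simultaneous transcode streams a rack can host."""
    return nodes_per_rack * streams_per_node

def watts_per_stream(node_power_w, streams_per_node):
    """Power cost attributable to a single stream on one node."""
    return node_power_w / streams_per_node

# Hypothetical dense cartridge-style chassis: 45 nodes per rack unit group,
# each node handling 10 hardware-accelerated HD streams at 60 W.
rack_capacity = streams_per_rack(nodes_per_rack=45, streams_per_node=10)
per_stream_w = watts_per_stream(node_power_w=60, streams_per_node=10)

print(rack_capacity, per_stream_w)
```

Doubling per-node throughput (say, via transcode acceleration) halves watts per stream without adding floor space, which is exactly the density argument made above.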


The NAB 2015 show provides an opportunity for attendees to see how these technologies come together in new, more powerful industry solutions that deliver video content across the content lifecycle: acquire, create, manage, distribute, and experience.


We've teamed with some of our key partners at NAB 2015 to create the StudioXperience showcase that demonstrates a complete end-to-end video workflow across the content lifecycle. Waskul TV will generate real-time 4K video and pipe it into a live production facility featuring Xeon E3 processors in an HP Moonshot* server and Envivio Muse* Live. The workflow is divided between on-air HD production for live streaming and 4K post-production for editorial and on-demand delivery. The cloud-based content management and distribution workflow is provided by Intel-powered technologies from technology partners to create a solution that streams our content to the audience via Waskul TV.


Other booths at the show let attendees drill down into some of the specific workflows and the technologies that enable them. For example, "Creative Thinking 800 Miles Away—It's Possible" lets attendees experience low-latency remote access for interactive creation and editing of video content in the cloud. You'll see how Intel technology lets you innovate and experiment with modeling, animation, and rendering effects—anywhere, anytime. And because the volume of live video content generated by broadcasters, service providers, and enterprises continues to explode, we need faster and more efficient ways of encoding it for streaming over the Internet. So Haivision's "Powerful Wide-Scale Video Distribution" demo will show how their Intel-based KulaByte* encoders and transcoders can stream secure, low-latency HD video at extremely low bitrates over any network, including low-cost, readily available public Internet connections.


To learn more about how content owners, service providers, and enterprises are using Intel Xeon processor E3-based platforms with integrated HD Graphics and Intel Quick Sync Video to tame the demand for video, check out the interview I did on Intel Chip Chat recently. And even if you're not attending NAB 2015, you can still see it in action: I'll be giving a presentation Tuesday, April 14 at 9:00 a.m. Pacific time. We'll stream it over the very systems I've described, and you can watch it on Waskul.TV. Tune in.





© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

By Christian Buerger, Technologist, SDN/NFV Marketing, Intel



This week I am attending the Intel Developer Forum (IDF) in Shenzhen, China, to promote Intel's software defined networking (SDN) and network functions virtualization (NFV) software solutions. During this year's IDF, Intel has made several announcements, and our CEO Brian Krzanich has showcased Intel's innovation leadership across a wide range of technologies with our local partners in China. On the heels of Krzanich's announcements, Intel Software & Services Group Senior VP Doug Fisher extended Krzanich's message to stress the importance of open source collaboration in driving industry innovation and transformation, citing OpenStack and Hadoop as prime examples.


I participated at the signing event and press briefing for a ground-breaking announcement between Intel and Huawei's enterprise division to jointly define a next-generation Network as a Service (NaaS) SDN software solution. Under the umbrella of Intel's Open Network Platform (ONP) server reference platform, Intel and Huawei intend to jointly develop an SDN reference architecture stack. This stack is based on integrating Intel architecture-optimized open source ingredients from projects such as Cloud OS/OpenStack, OpenDaylight (ODL), the Data Plane Development Kit (DPDK), and Open vSwitch* (OVS) with virtual network appliances such as a virtual services router and a virtual firewall. We are also deepening existing collaboration initiatives in various open source projects such as ODL (on Service Function Chaining and performance testing), OVS (SR-IOV-based performance enhancements), and DPDK.
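For readers unfamiliar with the Service Function Chaining work mentioned above, the core idea is that traffic is steered through an ordered list of virtual network functions. The toy model below illustrates only the concept; it is not OpenDaylight's implementation, and every function and field name is a hypothetical stand-in.

```python
# Toy illustration of service function chaining (SFC): a packet traverses
# an ordered chain of virtual network functions (VNFs), any of which may
# drop it. Names and fields here are hypothetical, for illustration only.

def firewall(packet):
    """Drop packets to a blocked port (e.g., telnet); pass the rest."""
    if packet["dst_port"] in {23}:
        return None  # dropped
    return packet

def services_router(packet):
    """Tag the packet with a (hypothetical) next hop and forward it."""
    packet["next_hop"] = "10.0.0.1"
    return packet

def apply_chain(packet, chain):
    """Run a packet through each VNF in order; stop early if dropped."""
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet

chain = [firewall, services_router]
print(apply_chain({"dst_port": 443}, chain))  # passes, gets a next hop
print(apply_chain({"dst_port": 23}, chain))   # dropped by the firewall
```

In a real SDN stack, the controller programs the switch (e.g., OVS) to steer flows through the chain; the ordering logic is the same.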


In addition to the broad range of open source SDN/NFV collaboration areas this agreement promotes, what makes it so exciting to me personally is the focus on the enterprise sector. Specifically, together with Huawei we are planning to develop reference solutions that target specific enterprise vertical markets such as education, financial services, and government. Together, we are extending our investments into SDN and NFV open source projects to not only accelerate advanced NaaS solutions for early adopters in the telco and cloud service provider space, but also to create broad opportunities to drive massive SDN adoption in the enterprise in 2015. As Swift Liu, President of Huawei’s Switch and Enterprise Communication Products, succinctly put it, Intel and Huawei “are marching from software-hardware collaboration to the entirely new software-defined era in the enterprise.”






At a press event on April 9, representatives from the U.S. Department of Energy announced that they had awarded Intel contracts, totaling just over $200 million, for two supercomputers as part of the department's CORAL program. Theta, an early production system, will be delivered in 2016 and will scale to 8.5 petaFLOPS and more than 2,500 nodes, while the 180-petaFLOPS, greater-than-50,000-node system called Aurora will be delivered in 2018. This represents a strong collaboration among Argonne National Laboratory, prime contractor Intel, and subcontractor Cray on a highly scalable and integrated system that will accelerate scientific and engineering breakthroughs.



Rendering of Aurora


Dave Patterson (President of Intel Federal LLC and VP of the Data Center Group) led the Intel team on the ground in Chicago; he was joined on stage by Peter Littlewood (Director of Argonne National Laboratory), Lynn Orr (Undersecretary for Science and Energy, U.S. Department of Energy), and Barry Bolding (Vice President of Marketing and Business Development for Cray). Also joining the press conference were Dan Lipinski (U.S. Representative, Illinois District 3), Bill Foster (U.S. Representative, Illinois District 11), and Randy Hultgren (U.S. Representative, Illinois District 14).


Dave Patterson at the Aurora Announcement (Photo Courtesy of Argonne National Laboratory)


This cavalcade of company representatives disclosed details on the 180-petaFLOPS, 50,000-node, 13-megawatt Aurora system. It utilizes much of the Intel product portfolio via Intel's HPC scalable system framework, including future Intel Xeon Phi processors (codenamed Knights Hill), second-generation Intel Omni-Path Fabric, and a new memory hierarchy composed of Intel Lustre* software, burst buffer storage, and persistent memory through high-bandwidth on-package memory. The system will be built on Cray's next-generation Shasta platform.


Peter Littlewood kicked off the press conference by welcoming everyone and discussing Argonne National Laboratory, the Midwest's largest federally funded R&D center, fostering discoveries in energy, transportation, protecting the nation, and more. He handed off to Lynn Orr, who announced the $200 million contract and the Aurora and Theta supercomputers. He discussed some of the architectural details of Aurora and the need for the U.S. to dedicate funds to build supercomputers that reach exascale, and how that will fuel scientific discovery, a theme echoed by many of the speakers to come.


Dave Patterson took the stage to give background on Intel Federal, a wholly owned subsidiary of Intel Corporation. In this instance, Intel Federal conducted the contract negotiations for CORAL. Dave touched on the robust collaboration with Argonne and Cray needed to bring Aurora online in 2018, and introduced Intel's HPC scalable system framework, a flexible blueprint for developing high performance, balanced, power-efficient, and reliable systems capable of supporting both compute- and data-intensive workloads.


Next up, Barry Bolding from Cray talked about the platform underpinning Aurora: the next-generation Shasta platform. He mentioned that when deployed, Aurora has the potential to be one of the largest and most productive supercomputers in the world.


And finally, Dan Lipinski, Bill Foster, and Randy Hultgren, who all represent Illinois (Argonne's home state) in the U.S. House of Representatives, each gave a few short remarks. They echoed Lynn Orr's earlier point that the United States needs to stay committed to building cutting-edge supercomputers to stay competitive in a global environment and tackle the next wave of scientific discoveries. Representative Hultgren put it very succinctly: "[The U.S.] needs big machines that can handle big jobs."



Dan Lipinski (Photo Courtesy of Argonne National Laboratory)



Bill Foster (Photo Courtesy of Argonne National Laboratory)


Randy Hultgren (Photo Courtesy of Argonne National Laboratory)


After the press conference, Mark Seager (Intel Fellow, CTO of the Tech Computing Ecosystem) said: "We are defining the next era of supercomputing." Al Gara (Intel Fellow, Chief Architect of Exascale Systems) took it a step further: "Intel is not only driving the architecture of the system, but also the new technologies that have emerged (or will be needed) to enable that architecture. We have the expertise to drive silicon, memory, fabric and other technologies forward and bring them together in an advanced system."



The Intel and Cray teams prepping for the Aurora announcement


Aurora’s disruptive technologies are designed to work together to deliver breakthroughs in performance, energy efficiency, overall system throughput and latency, and cost to power. This signals the convergence of traditional supercomputing and the world of big data and analytics that will drive impact for not only the HPC industry, but also more traditional enterprises.


Argonne scientists – who have a deep understanding of how to create software applications that maximize available computing resources – will use Aurora to accelerate discoveries surrounding:

  • Materials science: Design of new classes of materials that will lead to more powerful, efficient and durable batteries and solar panels.
  • Biological science: Gaining the ability to understand the capabilities and vulnerabilities of new organisms that can result in improved biofuels and more effective disease control.
  • Transportation efficiency: Collaborating with industry to design enhanced aerodynamic features, as well as enable production of better, more efficient, and quieter engines.
  • Renewable energy: Wind turbine design and placement to greatly improve efficiency and reduce noise.
  • Alternative programming models: Partitioned Global Address Space (PGAS) as a basis for Coarray Fortran and other unified address space programming models.
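The last bullet, Partitioned Global Address Space, deserves a word of explanation: in PGAS models like Coarray Fortran, every process ("image") can address one global array, but each index has an explicit owner, so locality stays visible to the programmer. The single-process Python toy below illustrates only that addressing idea; it is a teaching sketch with hypothetical names, not a real PGAS runtime.

```python
# Toy illustration of the PGAS idea: one global index space, partitioned
# across "images" (Coarray Fortran's term for processes). A real runtime
# would place each partition in a different process's memory; here all
# partitions live in one process purely to show the addressing model.

class PGASArray:
    def __init__(self, total_size, num_images):
        self.num_images = num_images
        self.chunk = total_size // num_images
        # Each image owns one contiguous partition of the global array.
        self.partitions = [[0] * self.chunk for _ in range(num_images)]

    def owner(self, global_index):
        """Which image owns this index: locality is explicit in PGAS."""
        return global_index // self.chunk

    def put(self, global_index, value):
        """Write through the global address space into the owner's partition."""
        img = self.owner(global_index)
        self.partitions[img][global_index % self.chunk] = value

    def get(self, global_index):
        """Read any index; remote reads would cost communication in reality."""
        img = self.owner(global_index)
        return self.partitions[img][global_index % self.chunk]

a = PGASArray(total_size=16, num_images=4)
a.put(10, 42)                 # global index 10 lands in image 2's partition
print(a.owner(10), a.get(10))
```

The appeal for applications like those above is that code reads like shared-memory code while the owner() boundary keeps communication costs visible.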


The Argonne Training Program on Extreme-Scale Computing will be a key program for training the next generation of code developers, so they are ready to drive science from day one when Aurora is made available to research institutions around the world.


For more information on the announcement, you can head to our new Aurora webpage or dig deeper into Intel’s HPC scalable system framework.







In March, we started off covering the future of next generation Non-Volatile Memory technologies and the Open Compute Project Summit, as well as the recent launch of the Intel® Xeon® Processor D-1500 Product Family. Throughout the second half of March we archived Mobile World Congress podcasts recorded live in Barcelona. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!


Intel® Chip Chat:

  • The Future of High Performance Storage with NVM Express – Intel® Chip Chat episode 370: Intel Senior Principal Engineer Amber Huffman stops by to talk about the performance benefits enabled when NVM Express is combined with the Intel® Solid-State Drive Data Center Family for PCIe. She also describes the future of NVMe over fabrics and the coming availability of NVMe on the client side within desktops, laptops, 2-in-1s, and tablets. To learn more visit: http://www.nvmexpress.org/
  • The Intel® Xeon® Processor D-1500 Product Family – Intel® Chip Chat episode 371: John Nguyen, a Senior Product Manager at Supermicro, discusses the Intel® Xeon® Processor D-1500 Product Family launch and how Supermicro is integrating this new solution into their products today. He illustrates how the small footprint and low power capabilities of the Intel Xeon Processor D-1500 Product Family are facilitating the production of small department servers for the enterprise, as well as enabling small businesses to take advantage of Intel Xeon processor family performance. To learn more visit: www.supermicro.com/products/embedded/
  • Innovating the Cloud w/ Intel® Xeon® Processor D-1500 Product Family – Intel® Chip Chat episode 372: Nidhi Chappell, Entry Server and SoC Product Marketing Manager at Intel, stops by to announce the launch of the Intel® Xeon® Processor D-1500 Product Family. She illustrates how this is the first Xeon processor in a SoC form factor and outlines how the low power consumption, small form factor, and incredible performance of this solution will greatly benefit the network edge and further enable innovation in the telecommunications industry and the data center in general. To learn more visit: www.intel.com/xeond
  • Making the Open Compute Vision a Reality – Intel® Chip Chat episode 373: Raejeanne Skillern, General Manager of the Cloud Service Provider Organization within the Data Center Group at Intel, explains Intel's involvement in the Open Compute Project and the technologies Intel will be highlighting at the 2015 Open Compute Summit in San Jose, California. She discusses the launch of the new Intel® Xeon® Processor D-1500 Product Family, as well as how Intel will be demoing Rack Scale Architecture and other solutions at the Summit that are aligned with OCP specifications.
  • The Current State of Mobile and IoT Security – Intel® Chip Chat episode 374: In this archive of a livecast from Mobile World Congress in Barcelona, Gary Davis (twitter.com/garyjdavis), Chief Consumer Security Evangelist at Intel Security, stops by to talk about the current state of security within the mobile and Internet of Things industries. He emphasizes how vulnerable many wearable devices and smartphones can be to cybercriminal attacks and discusses easy ways to help ensure that your personal information is protected on your devices. To learn more visit: www.intelsecurity.com or home.mcafee.com
  • Enabling Next Gen Data Center Infrastructure – Intel® Chip Chat episode 375: In this archive of a livecast from Mobile World Congress, Howard Wu, Head of Product Line for Cloud Hardware and Infrastructure at Ericsson, chats about the newly announced collaboration between Intel and Ericsson to launch a next-generation data center infrastructure. He discusses how this collaboration, which is in part enabled by Intel® Rack Scale Architecture, is driving the optimization and scaling of cloud resources across private, public, and enterprise cloud domains for improved operational agility and efficiency. To learn more visit: www.ericsson.com/cloud
  • Rapidly Growing NFV Deployment – Intel® Chip Chat episode 376: In this archive of a livecast from Mobile World Congress, John Healy, Intel's GM of the Software Defined Networking Division, stops by to talk about the current state of Network Functions Virtualization adoption within the telecommunications industry. He outlines how Intel is driving the momentum of NFV deployment through initiatives like Intel Network Builders, and how embracing the open source community with projects such as OPNFV is accelerating vendors' ability to offer solutions targeted toward network functions virtualization.


Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

By Dave Patterson, President, Intel Federal LLC and Vice President, Data Center Group, Intel



The U.S. Department of Energy's (DOE) CORAL program (Collaboration of Oak Ridge, Argonne and Lawrence Livermore National Laboratories) is impressive for a number of advanced technical reasons. But the recent award announcement to Intel has shone a spotlight on another topic I am very excited about: Intel Federal LLC.


Intel Federal is a subsidiary that enables Intel to contract directly and efficiently with the U.S. Government. Today we work with DOE across a range of programs that address some of the grand scientific and technology challenges that must be solved to achieve extreme scale computing. One such program is Intel’s role as a prime contractor in the Argonne Leadership Computing Facility (ALCF) CORAL program award.


Intel Federal is a collaboration center. We're involved in strategic efforts that need to be orchestrated in direct relationship with the end users. This involves engaging diverse sets of expertise from Intel and our partners, ranging from hardware to system software, fabric, memory, storage, and tools. The new supercomputer being built for ALCF, Aurora, is a wonderful example of how we bring together talent from all parts of Intel in collaboration with our partners to realize unprecedented technical breakthroughs.


Intel’s approach to working with the government is unique – I’ve spent time in the traditional government contracting space, and this is anything but. Our work today is focused on understanding how Intel can best bring value through leadership and technology innovation to programs like CORAL.


But what I’m most proud of about helping bring Aurora to life is what this architectural direction with Intel’s HPC scalable system framework represents in terms of close collaboration in innovation and technology. Involving many different groups across Intel, we’ve built excellent relationships with the team at Argonne to gather the competencies we need to support this monumental effort.


Breakthroughs in leading technology are built into Intel’s DNA. We’re delighted to be part of CORAL, a great program with far-reaching impact for science and discovery. It stretches us, redefines collaboration, and pushes us to take our game to the next level.  In the process, it will transform the HPC landscape in ways that we can’t even imagine – yet.


Stay tuned to CORAL: www.intel.com/hpc







By Charlie Wuischpard, VP & GM High Performance Computing at Intel


Every now and then in business it all really comes together: a valuable program, a great partner, and an outcome that promises to go far beyond just business success. That's what I see in our newly announced partnership with the Supercomputing Center of the Chinese Academy of Sciences. We're collaborating to create an Intel Parallel Computing Center (Intel® PCC) in the People's Republic of China. We expect our partnership with the Chinese Academy of Sciences to pay off in many ways.


Through working together to modernize LAMMPS, the world's most broadly adopted molecular dynamics application, Intel and the Chinese Academy of Sciences will help researchers and scientists in fields ranging from physics and semiconductor design to biology, pharmaceuticals, DNA analysis, and genetics, and ultimately aid in identifying cures for diseases.


The establishment of the Intel® PCC with the Chinese Academy of Sciences is an important step. The relationship grows from our ongoing commitment to cultivate our presence in China and to find and engage Chinese businesses and institutions that will collaborate to bring their talents and capabilities to the rest of the world. Their Supercomputing Center has been focused on operating and maintaining supercomputers and exploiting and supporting massively parallel computing since 1996. Their work in high performance computing, scientific computing, computational mathematics, and scientific visualization has earned national and international acclaim. And it has resulted in important advances in the simulation of large-scale systems in fields like computational chemistry and computational material science.


We understand that solving the biggest challenges for society, industry, and science requires a dramatic increase in computing efficiency. Many organizations leverage High Performance Computing to solve these challenges, but seldom realize they are using only a small fraction of the compute capability their systems provide. Taking advantage of the full potential of current and future hardware (cores, threads, caches, and SIMD capability) requires what we call "modernization." Building supercomputing centers is an investment, and ensuring that software fully exploits modern hardware helps maximize the return on that investment. Customers will realize the greatest long-term benefit when they pursue modernization in an open, portable, and scalable manner.
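The "small fraction of the compute capability" claim is easy to make concrete with peak-FLOPS arithmetic. The sketch below uses hypothetical hardware numbers (not figures for any specific Intel product) to show how far purely scalar, single-core code falls short of a node's theoretical peak, which is what modernization work recovers.

```python
# Back-of-envelope utilization sketch. Theoretical peak FLOPS for a node is
# roughly cores x frequency x SIMD width x ops-per-lane-per-cycle (FMA).
# Every number below is a hypothetical assumption for illustration.

def peak_gflops(cores, ghz, simd_lanes, ops_per_lane):
    """Theoretical peak throughput in GFLOPS under the simple model above."""
    return cores * ghz * simd_lanes * ops_per_lane

# Hypothetical node: 16 cores at 2.0 GHz, 8 double-precision SIMD lanes,
# 2 ops per lane per cycle via fused multiply-add.
node_peak = peak_gflops(cores=16, ghz=2.0, simd_lanes=8, ops_per_lane=2)

# Legacy code: single-threaded and scalar, so 1 core, 1 lane, 1 op/cycle.
scalar_best = peak_gflops(cores=1, ghz=2.0, simd_lanes=1, ops_per_lane=1)

utilization = scalar_best / node_peak
print(node_peak, scalar_best, utilization)  # scalar code uses well under 1%
```

Threading recovers the cores factor, vectorization the SIMD factor; modernization means pursuing both (plus cache-friendly data layout) in portable code.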


The goals of the Intel® PCC effort go beyond just creating software that takes advantage of hardware, all the way to delivering value to researchers and other users around the world. Much of our effort is training and equipping students, scientists, and researchers to write modern code that will ultimately accelerate discovery.


We look forward to our partnership with the Chinese Academy of Sciences and the great results to come from this new Intel® Parallel Computing Center. You can find additional information regarding this effort by visiting our Intel® PCC website.






It is a very exciting time for the information and communication technology (ICT) industry as it continues its massive transformation to the digital services, or "on demand," economy. Earlier today I had the pleasure of sharing Intel's perspective and vision of the data center market at IDF15 in Shenzhen, and I can think of no place better than China to exemplify how the digital services economy is impacting people's everyday lives. In 2015, ICT spending in China will exceed $465 billion, comprising 43% of global ICT spending growth. ICT is increasingly the means to fulfill business, public sector, and consumer needs, and the rate at which new services are being launched and existing services are growing is tremendous. The result is three significant areas of growth for data center infrastructure: continued build-out of Cloud computing, HPC, and Big Data.


Cloud computing provides on-demand, self-serve attributes that enable application developers to deliver new services to market in record time. Software Defined Infrastructure, or SDI, optimizes this rapid creation and delivery of business services, reliably, with a programmable infrastructure. Intel has been making great strides with our partners toward the adoption of SDI. Today I was pleased to be joined by Huawei, who shared their efforts to enable the network transformation, and Alibaba, who announced their recent success in powering on Intel's Rack Scale Architecture (RSA) in their Hangzhou lab.


Just as we know the future of the data center is software defined, the future of High Performance Computing is software optimized. IDC predicts that the penalties for neglecting the HPC software stack will grow more severe, making modern, parallel, optimized code essential for continued growth. To this end, today we announced that the first Intel® Parallel Computing Center in China has been established in Beijing to drive the next generation of high performance computing in the country.  Our success is also dependent on strong partnerships, so I was happy to have Lenovo onstage to share details on their new Enterprise Innovation Center focused on enabling our joint success in China.


As the next technology disruptor, Big Data has the ability to transform all industries. For healthcare, Big Data analytics makes precision medicine a possibility, providing tremendous opportunities to advance the treatment of life-threatening diseases like cancer. By applying the latest Cloud, HPC, and Big Data analytics technologies and products, and working collectively as an industry, by 2020 we can sequence a whole genome, identify the fundamental genes that cause a cancer, and devise the means to block them through a personalized treatment, all in one day.


Through our partnership with China's technology leaders, we will collectively enable the Digital Services Economy and deliver the next decade of discovery, solving the biggest challenges in society, industry, and the sciences.







Demand for efficiency, flexibility, and scalability continues to increase, and the data center must keep pace as organizations move to digital business strategies. Diane Bryant, Intel's senior vice president and general manager of the Data Center Group, recently stated, "We are in the midst of a bold industry transformation as IT evolves from supporting the business to being the business. This transformation and the move to cloud computing calls into question many of the fundamental principles of data center architecture."


Those "fundamental principles of data center architecture" are on a collision course with the direction that virtualization has led us. Virtualization, in conjunction with automation and orchestration, is leading us to the Software Defined Infrastructure (SDI). The demand for SDI is driving new hardware developments, which will open a whole new world of possibilities for running a state-of-the-art data center and will eventually leave our legacy infrastructure behind. While we're not quite there yet, as different stages need to mature, the process has the power to transform the data center.




Logical Infrastructure


SDI rebuilds the data center into a landing zone for new business capabilities. Instead of comprising multiple highly specialized components, it's a cohesive and comprehensive system that meets all the demands placed on it by highly scalable, completely diversified workloads, from traditional workloads to cloud-aware applications.


This movement to cloud-aware applications will drive the need for SDI: by virtualizing and automating the hardware that powers software platforms, infrastructure will become more powerful, cost-effective, and efficient. This migration away from manual upkeep of individual resources will also allow systems, storage, and network administrators to shift their focus to more important tasks instead of acting as "middleware" to connect these platforms.
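The automation at the heart of SDI is often implemented as a reconciliation loop: operators declare a desired state, and the system computes the actions needed to converge the actual infrastructure toward it. The sketch below is a minimal toy model of that pattern, not any specific orchestration product; the resource model (name to replica count) is a hypothetical simplification.

```python
# Minimal sketch of declarative, software-defined reconciliation:
# compare desired state with actual state and emit converging actions.
# This is an illustrative toy, not a real orchestrator's API.

def reconcile(desired, actual):
    """Return actions moving `actual` toward `desired`.

    Both arguments map resource name -> replica count (hypothetical model).
    """
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(("scale_up", name, want - have))
        elif have > want:
            actions.append(("scale_down", name, have - want))
    for name, have in actual.items():
        if name not in desired:
            actions.append(("delete", name, have))
    return actions

desired = {"web": 4, "db": 2}
actual = {"web": 2, "cache": 1}
print(reconcile(desired, actual))
```

Running such a loop continuously is what replaces the manual, per-resource upkeep described above: administrators edit the desired state, and the infrastructure follows.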


Organizations will be able to scale their infrastructure in support of new business services and products, and bring them to market much more quickly with the power of SDI.


Hardware Still Matters


As the data center moves toward an SDI-driven future, CIOs should be wary of thinking that hardware no longer counts. Hardware that works in conjunction with software will be critical: it must ensure that the security and reliability of workloads are fully managed, and it must provide the telemetry and extensibility that allow specific capabilities to be optimized and controlled within the hardware itself.


The Future of the Data Center Lies with SDI


Data centers must be agile, flexible, and efficient in this era of transformative IT. SDI allows us to achieve greater efficiency and agility by allocating resources according to our organizational needs, application requirements, and infrastructure capabilities.


As Bryant concluded, “Anyone in our industry trying to cling to the legacy world will be left behind. We see the move to cloud services and software defined infrastructure as a tremendous opportunity and we are seizing this opportunity.”


To continue the conversation, please follow me on Twitter at @EdLGoldman or use #ITCenter.

I had the opportunity to attend Mobile World Congress and the Open Compute Summit this year where we demonstrated Red Rock Canyon (RRC) at both venues. At Fall IDF in San Francisco last year, we disclosed RRC for the first time. RRC is Intel’s new multi-host Ethernet controller silicon with integrated Ethernet switching resources.

The device contains multiple integrated PCIe interfaces along with Ethernet ports that can operate up to 100G. The target markets include network appliances and rack scale architecture which is why MWC and the OCP summit were ideal venues to demonstrate the performance of RRC in these applications.


Mobile World Congress

This was my first time at MWC, and it was an eye-opener: eight large exhibit halls in the middle of Barcelona with moving walkways to shuttle you from one hall to the next, booths the size of two-story buildings, and 93,000 attendees - a record number according to the MWC website.

At the Intel booth, ours was one of several demonstrations of technology for network infrastructure. Our demo, entitled “40G/100GbE NSH Service Chaining in Intel ONP,” highlighted service function forwarding using network services headers (NSH) on both the Intel Ethernet XL710 40GbE controller and the Intel Ethernet 100Gbps DSI adapter that uses RRC switch silicon. In case you’re not familiar with NSH, it is a new virtual network overlay industry initiative, driven by Cisco, that allows flows to be identified and forwarded to a set of network functions by creating a virtual network on top of the underlying physical network.

The demo was a collaboration with Cisco. It uses an RRC NIC as a 100GbE traffic generator to send traffic to an Intel Sunrise Trail server that receives the traffic at 100Gbps using another RRC 100GbE NIC card. Sunrise Trail then forwards 40Gbps worth of traffic to a Cisco switch, which, in turn, distributes the traffic to both another Sunrise Trail server and a Cisco UCS server, both of which contain Intel® Ethernet XL710 Converged Network Adapters.

The main point of the demonstration is that the RRC NIC, the XL710 NIC and the Cisco switch can create a wire-speed service chain by forwarding traffic using destination information in the NSH header. For NFV applications, the NIC cards can also forward traffic to the correct VM based on this NSH information.
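For illustration only (this is not the demo's actual code), the forwarding key that NSH carries is compact: the service path header is a single 32-bit word holding a 24-bit Service Path Identifier (SPI) and an 8-bit Service Index (SI). A minimal Python sketch of packing and parsing that word:

```python
import struct

def pack_nsh_service_path(spi: int, si: int) -> bytes:
    """Pack the NSH service path header: 24-bit SPI + 8-bit SI."""
    assert 0 <= spi < (1 << 24) and 0 <= si < (1 << 8)
    return struct.pack("!I", (spi << 8) | si)

def unpack_nsh_service_path(data: bytes):
    """Recover (spi, si) from the 4-byte service path header."""
    word, = struct.unpack("!I", data)
    return word >> 8, word & 0xFF

hdr = pack_nsh_service_path(spi=42, si=255)
print(unpack_nsh_service_path(hdr))  # (42, 255)
```

A switch or NIC doing NSH-based service chaining looks at exactly these fields to pick the next service function, decrementing the SI at each hop.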

Network Function Virtualization (NFV) was a hot topic at MWC this year, and we had many customers from leading network service providers and OEMs come by our booth to see the demo. In some cases they were more interested in our 100GbE link, which I was told was one of the only demos of this kind at the show.

Another 100G Intel Ethernet demo was at the Ericsson booth where they announced their project Athena, which demonstrated a 100GbE link using two RRC-based NIC cards. Athena is designed for hyperscale cloud data centers using Intel’s rack scale architecture framework.


Open Compute Project Summit

The very next week, I traveled to San Jose to attend the Open Compute Project Summit where RRC was part of a demonstration of Intel’s latest software development platform for its rack scale architecture. OCP was a much smaller show focused on the optimization of rack architectures for hyperscale data centers. At last year’s conference, we demonstrated an RSA switch module using our Intel Ethernet Switch FM6000 along with four Intel 10GbE controller chips.

This year, we showed our new multi-host RSA module that effectively integrates all of these components into a single device while at the same time providing 50Gbps of bandwidth to each server, along with multiple 100GbE ports out of the server shelf. This RSA networking topology not only provides a 4:1 cable reduction, it also enables flexible network topologies. We also demonstrated our new open source ONP Linux kernel driver, which will be up-streamed in 2015, consistent with our Open Network Platform strategy.

We had a steady stream of visitors to our booth, thanks in part to an excellent bandwidth performance demo.

After first disclosing RRC at IDF last year, it was great to be able to have three demonstrations of its high-performance capabilities at both MWC and the OCP Summit. It doesn’t hurt that these conferences are also targeted at two key market segments for RRC: network function virtualization and rack scale architecture.

We plan to officially launch RRC later this year, so stay tuned for much more information on how RRC can improve performance and/or reduce cost in these new market segments.

Q1: Intel is engaged in a number of SDN and NFV community and standards-developing organizations worldwide. What is the reasoning behind joining the new Korea SDN/NFV Forum?


A1: Intel Korea is firmly committed to helping enable our local Korean partner ecosystem to fully participate in and benefit from the transformation of the global networking industry towards software-defined networking (SDN) and network functions virtualization (NFV). Incorporating the latest SDN and NFV standards and open source technologies from initiatives such as OpenStack, OpenDaylight, OPNFV, OpenFlow and others into a coherent, value-added and stable software platform is complex. Working with our partners in the Korea SDN/NFV Forum, we are hoping to contribute to reducing this complexity, thereby accelerating the usage of SDN and NFV in Korea itself as well as globally through the exported solutions of the Korean ICT industry.


Q2: What can Intel contribute to the Korea SDN/NFV Forum?


A2: Intel has been at the forefront of SDN and NFV technology for more than five years. During that time, the company has invested in working with a wide range of technology partners to develop cutting-edge SDN/NFV hardware and software, as well as best practices for rapid deployment. This customer-centric expertise in architecting, developing and deploying SDN/NFV in cloud, enterprise data center and telecommunication networks is core to our contribution to the Korea SDN/NFV Forum.


Another concrete example of our expertise is our deep experience in testing and validating SDN/NFV hardware and software solutions, an important component when developing credible proofs-of-concept (PoCs) for the next generation of SDN/NFV software. In addition, Intel operates Intel Network Builders, an SDN/NFV ecosystem program for solution partners and end users. Korea SDN/NFV Forum members can leverage this ecosystem to promote the products and solutions they develop globally.


Q3: Are there any specific working groups within the Forum that Intel will focus on?


A3:  Intel plans to contribute to the success of all working groups with a focus on the standard technology, service PoC, policy development, and international relations working groups. Through the global Intel network, we are also aiming to assist in the collaboration of the Korea SDN/NFV Forum with other international organizations.


Q4: What is Intel’s main goal for participating in the Korea SDN/NFV Forum in 2015?


A4: Primarily, we want to add value by helping the Korea SDN/NFV Forum get established and become a true source of SDN/NFV innovation for our partner ICT ecosystem here in Korea.

Back in 2011, I made the statement, "I have put my Oracle redo logs or SQL Server transaction log on nothing but SSDs" (Improve Database Performance: Redo and Transaction Logs on Solid State Disks (SSDs)). In fact, since the release of the Intel® SSD X25-E series in 2008, it is fair to say I have never looked back. Even though those X25-Es have long since been retired, every new product has convinced me further still that, from a performance perspective, a hard drive configuration simply cannot compete. This is not to say that there have not been new skills to learn, such as the configuration details explained here (How to Configure Oracle Redo on SSD (Solid State Disks) with ASM). The Intel® SSD 910 series provided a definite step up from the X25-E for Oracle workloads (Comparing Performance of Oracle Redo on Solid State Disks (SSDs)) and proved that concerns about write peaks were unfounded (Should You Put Oracle Database Redo on Solid State Disks (SSDs)?). Now, with the PCIe*-based Intel® SSD DC P3600/P3700 series, we have the next step in the evolutionary development of SSDs for all types of Oracle workloads.


Additionally, we have updates in operating system and driver support, so a refresh of the previous posts on SSDs for Oracle is warranted to help you get the best out of the Intel SSD DC P3700 series for Oracle redo.




One significant difference in the new SSDs is the change in interface and driver from AHCI and SATA to NVMe (Non-Volatile Memory Express).  For an introduction to NVMe, see this video by James Myers, and to understand the efficiency that NVMe brings, read this post by Christian Black. As James noted, high-performance, consistent, low-latency Oracle redo logging also needs high endurance, so the P3700 is the drive to use. With a new interface comes a new driver, which fortunately is included in the Linux kernel in the Oracle-supported Linux releases: Red Hat and Oracle Linux 6.5, 6.6, and 7.

I am using Oracle Linux 7.

Booting my system with both a RAID array of Intel SSD DC S3700 series and Intel SSD DC P3700 series shows two new disk devices:

First, the S3700 array using the previous interface:

Disk /dev/sdb1: 2394.0 GB, 2393997574144 bytes, 4675776512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Second, the new PCIe P3700 using NVMe:


Disk /dev/nvme0n1: 800.2 GB, 800166076416 bytes, 1562824368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Changing the Sector Size to 4KB


As Oracle introduced support for 4KB sector sizes in release 11g R2, it is important to be at that release or later, such as Oracle 12c, to take full advantage of SSDs for Oracle redo. However, ‘out of the box,’ as shown above, the P3700 presents a 512-byte sector size. We can use this ‘as is’ and set the Oracle parameter ‘disk_sector_size_override’ to true. With that set, we can specify a 4KB blocksize when creating a redo log file; Oracle will then use 4KB redo log blocks, and performance will not be compromised.

As a second option, the P3700 offers a feature called ‘Variable Sector Size’. Because we know we need 4KB sectors, we can set up the P3700 to present a 4KB sector size instead. This can then be used transparently by Oracle without the requirement for additional parameters. It is important to do this before you have configured or started to use the drive for Oracle as the operation is destructive of any existing data on the device.


To do this, first check that everything is up to date by using the Intel Solid State Drive Data Center Tool from https://downloadcenter.intel.com/download/23931/Intel-Solid-State-Drive-Data-Center-Tool. Be aware that after running the format command it will be necessary to reboot the system to pick up the new configuration and use the device.

[root@haswex1 ~]# isdct show -intelssd
- IntelSSD Index 0 -
Bootloader: 8B1B012D
DevicePath: /dev/nvme0n1
DeviceStatus: Healthy
Firmware: 8DV10130
FirmwareUpdateAvailable: Firmware is up to date as of this tool release.
Index: 0
ProductFamily: Intel SSD DC P3700 Series
ModelNumber: INTEL SSDPEDMD800G4
SerialNumber: CVFT421500GT800CGN

Then run the following command to change the sector size. The parameter LBAFormat=3 sets it to 4KB, and LBAFormat=0 sets it back to 512 bytes.


[root@haswex1 ~]# isdct start -intelssd 0 Function=NVMeFormat LBAFormat=3 SecureEraseSetting=2 ProtectionInformation=0 MetaDataSetting=0
WARNING! You have selected to format the drive! 
Proceed with the format? (Y|N): Y
Running NVMe Format...
NVMe Format Successful.

After it ran, I rebooted. The reboot is necessary to perform an NVMe reset on the device, because I am on Oracle Linux 7 with a UEK kernel at 3.8.13-35.3.1. On Linux kernels 3.10 and above, you can instead run the following command with the system online to do the reset:


echo 1 > /sys/class/misc/nvme0/device/reset

The disk should now present the 4KB sector size we want for Oracle redo.


Disk /dev/nvme0n1: 800.2 GB, 800166076416 bytes, 195353046 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
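As a quick sanity check on the fdisk output, the sector counts reported before and after the format both divide the same raw capacity exactly (plain arithmetic, nothing device-specific):

```python
capacity_bytes = 800166076416  # reported by fdisk in both cases

# 512-byte LBA format (before) and 4KB LBA format (after)
assert capacity_bytes // 512 == 1562824368
assert capacity_bytes // 4096 == 195353046
print("sector counts consistent with one 800GB device")
```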

Configuring the P3700 for ASM


For ASM (Automatic Storage Management) we need a disk with a single partition, and after giving the disk a gpt label, I use the following commands to create and check an aligned partition.


(parted) mkpart primary 2048s 100%                                        
(parted) print                                                            
Model: Unknown (unknown)
Disk /dev/nvme0n1: 195353046s
Sector size (logical/physical): 4096B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start  End         Size        File system  Name     Flags
1      2048s  195352831s  195350784s               primary

(parted) align-check optimal 1
1 aligned


I then use udev to set the device permissions. Note: the scsi_id command can be run independently to find the device ID to put in the file, and the udevadm command can be used to apply the rules. Rebooting the system during configuration is useful to ensure that the correct permissions are applied on boot.


[root@haswex1 ~]# cd /etc/udev/rules.d/
[root@haswex1 rules.d]# more 99-oracleasm.rules 
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="3600508e000000000c52195372b1d6008", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="nvme0n1p1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="365cd2e4080864356494e000000010000", OWNER="oracle", GROUP="dba", MODE="0660"

With the rules successfully applied, the oracle user now has ownership of both the DC S3700 RAID array device and the P3700 presented by NVMe.


[root@haswex1 rules.d]# ls -l /dev/sdb1
brw-rw---- 1 oracle dba 8, 17 Mar  9 14:47 /dev/sdb1
[root@haswex1 rules.d]# ls -l /dev/nvme0n1p1 
brw-rw---- 1 oracle dba 259, 1 Mar  9 14:39 /dev/nvme0n1p1

Use ASMLIB to mark both disks for ASM.


[root@haswex1 rules.d]# oracleasm createdisk VOL2 /dev/nvme0n1p1
Writing disk header: done
Instantiating disk: done

[root@haswex1 rules.d]# oracleasm listdisks

As the Oracle user, use the ASMCA utility to create the ASM disk groups.




I now have 2 disk groups created under ASM.




Because of the way the disks were configured, Oracle has automatically detected and applied the 4KB sector size.


[oracle@haswex1 ~]$ sqlplus sys/oracle as sysasm
SQL*Plus: Release Production on Thu Mar 12 10:30:04 2015
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Automatic Storage Management option
SQL> select name, sector_size from v$asm_diskgroup;

NAME                     SECTOR_SIZE
------------------------------ -----------
REDO                          4096
DATA                          4096





In previous posts I noted Oracle bug “16870214 : DB STARTUP FAILS WITH ORA-17510 IF SPFILE IS IN 4K SECTOR SIZE DISKGROUP” and even with Oracle this bug is still with us.  As both of my diskgroups have a 4KB sector size, this will affect me if I try to create a database in either without having applied patch 16870214.

With this bug, upon creating a database with DBCA you will see the following error.



The database is created and the spfile does exist, so it can be extracted as follows:


ASMCMD> cp spfile.282.873892817 /home/oracle/testspfile
copying +DATA/TEST/PARAMETERFILE/spfile.282.873892817 -> /home/oracle/testspfile

This spfile is corrupt and attempts to reuse it will result in errors.


ORA-17510: Attempt to do i/o beyond file size
ORA-17512: Block Verification Failed

However, you can extract the parameters by using the strings command and create an external spfile, or an spfile in a diskgroup with a 512-byte sector size. Once complete, the Oracle instance can be started.
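A minimal sketch of that recovery with standard tools (the paths and parameter contents here are illustrative stand-ins; substitute the spfile copied out of ASM above):

```shell
SPFILE=/tmp/testspfile
PFILE=/tmp/testpfile

# Stand-in for the corrupt binary spfile: parameter text between binary bytes
printf '\001\002*.db_name='\''TEST'\''\000*.sga_target=8G\000' > "$SPFILE"

# strings keeps only the printable runs, which is exactly the parameter text
strings "$SPFILE" > "$PFILE"
cat "$PFILE"
```

Inspect and tidy the resulting pfile before creating an spfile from it.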


SQL> create spfile='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfileTEST.ora' from pfile='/home/oracle/testpfile';
SQL> startup
ORACLE instance started

Creating Redo Logs under ASM

In viewing the same disks within the Oracle instance, the underlying sector size has been passed right through to the database.


SQL> select name, SECTOR_SIZE BLOCK_SIZE from v$asm_diskgroup;

NAME                   BLOCK_SIZE
------------------------------ ----------
REDO                      4096
DATA                      4096

Now it is possible to create a redo log file with a command such as follows:


SQL> alter database add logfile '+REDO' size 32g;

…and Oracle will create a redo log automatically with an optimal blocksize of 4KB.


SQL> select v$log.group#, member, blocksize from v$log, v$logfile where v$log.group#=3 and v$logfile.group#=3;


Running an OLTP workload with Oracle Redo on Intel® SSD DC P3700 series

To put Oracle redo on the P3700 through its paces, I used a HammerDB workload. The redo is set with a standard production-type configuration, without the commit_write and commit_wait parameters.  A test shows we are running almost 100,000 transactions per second with redo over 500MB/second, meaning we would be archiving almost 2TB per hour.
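A quick back-of-envelope check of that archiving figure (just arithmetic on the numbers above):

```python
redo_mb_per_sec = 500                              # sustained redo rate from the test
tb_per_hour = redo_mb_per_sec * 3600 / 1_000_000   # MB/s -> TB/h (decimal units)
print(f"{tb_per_hour:.1f} TB of redo per hour")    # 1.8 TB, i.e. almost 2TB
```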


[AWR Load Profile screenshot: redo size in bytes, reported per second, per transaction, per exec, and per call]

Even at this level of throughput, the log file sync wait is just above 1ms:




Event          Waits       Total Wait Time (sec)  Wait Avg(ms)  % DB time  Wait Class
log file sync  19,927,449  23.2K                  1.16          38.7       Commit

…and the average log file parallel write shows an average disk response time of just 0.13ms:




Event                    Waits      %Time-outs  Total Wait Time (s)
log file parallel write  3,359,023  0           442





There are six log writers on this system. As with previous blog posts on SSDs, I observed the log activity to be heaviest on the first three, and therefore traced the log file parallel write activity on the first one with the following method:


SQL> oradebug setospid 67810;
Oracle pid: 18, Unix process pid: 67810, image: oracle@haswex1.example.com (LG00)
SQL> oradebug event 10046 trace name context forever level 8;
ORA-49100: Failed to process event statement [10046 trace name context forever level 8]
SQL> oradebug event 10046 trace name context forever, level 8;

The trace file shows the following results for log file parallel write latency to the P3700.


[Trace summary table: for each Log Writer Worker, counts of writes over 1ms, over 10ms, and over 20ms, plus the maximum elapsed time]

Looking at a scatter plot of all of the log file parallel write latencies (recorded in microseconds on the y-axis) clearly illustrates that any outliers are statistically insignificant and that none exceed 15 milliseconds. Most of the writes are sub-millisecond, on a system that is processing many millions of transactions a minute.

A subset of iostat data shows that the device is also far from full utilization.


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          77.30    0.00    8.07    0.24    0.00   14.39
Device:         wMB/s avgrq-sz avgqu-sz   await w_await  svctm  %util
nvme0n1        589.59    24.32     1.33    0.03    0.03   0.01  27.47



As a confirmed believer in SSDs, I have long been convinced that most experiences of poor Oracle redo performance on SSDs have been due to errors in configuration, such as sector size, block size, and/or alignment, rather than the performance of the underlying device itself. By following the configuration steps I have outlined here, the Intel SSD DC P3700 series shows itself to be an ideal candidate to take Oracle redo to the next level of performance without compromising endurance.

By: Adrian Hoban


The performance needs of virtualized applications in the telecom network are distinctly different from those in the cloud or in the data center.  These NFV applications are implemented on a slice of a virtual server and yet need to match the performance that is delivered by a discrete appliance where the application is tightly tuned to the platform.


The Enhanced Platform Awareness initiative that I am a part of is a continuous program to enable fine-tuning of the platform for virtualized network functions. This is done by exposing the processor and platform capabilities through the management and orchestration layers. When a virtual network function is instantiated by an Enhanced Platform Awareness enabled orchestrator, the application requirements can be more efficiently matched with the platform capabilities.


Enhanced Platform Awareness is composed of several open source technologies that can be considered from the orchestration layers to be “tuning knobs” to adjust in order to meaningfully improve a range of packet-processing and application performance parameters.


These technologies have been developed and standardized through a two-year collaborative effort in the open source community.  We have worked with the ETSI NFV Performance Portability Working Group to refine these concepts.


At the same time, we have been working with developers to integrate the code into OpenStack®. Some of the features are available in the OpenStack Juno release, but I anticipate a more complete implementation will be a part of the Kilo release that is due in late April 2015.


How Enhanced Platform Awareness Helps NFV to Scale

In cloud environments, virtual application performance can often be increased by using a scale-out strategy, such as increasing the number of VMs the application can use. However, for virtualized telecom networks, applying a scale-out strategy to improve network performance may not achieve the desired results.


Scaling out an NFV workload does not by itself ensure improvement in all of the important traffic characteristics (such as latency and jitter), and these are essential to the predictable service and application performance that network operators require. Using Enhanced Platform Awareness, we aim to address both the performance and predictability requirements using technologies such as:


  • Single Root IO Virtualization (SR-IOV): SR-IOV divides a PCIe physical function into multiple virtual functions (VFs), each with its own bandwidth allocation. When a virtual machine is assigned its own VF, it gains a high-performance, low-latency data path to the NIC.
  • Non-Uniform Memory Architecture (NUMA): With a NUMA design, the memory allocation process for an application prioritizes the highest-performing memory, which is local to a processor core.  In the case of Enhanced Platform Awareness, OpenStack® will be able to configure VMs to use CPU cores from the same processor socket and to choose the optimal socket based on the locality of the NIC device that provides the data connectivity for the VM.
  • CPU Pinning: In CPU pinning, a process or thread has an affinity configured with one or more cores. In a 1:1 pinning configuration between virtual CPUs and physical CPUs, some predictability is introduced into the system by preventing host and guest schedulers from moving workloads around, which facilitates other efficiencies such as improved cache hit rates.
  • Huge Page support: Provides page table entry sizes of up to 1GB to reduce I/O translation look-aside buffer (IOTLB) misses, which improves networking performance, particularly for small packets.
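In OpenStack terms, several of these knobs surface as flavor extra specs and port attributes. A hedged configuration sketch (the flavor name is hypothetical, and the availability of each key depends on the release, with CPU pinning and NUMA placement targeting Kilo):

```shell
# Flavor "nfv.small" is hypothetical; the hw:* keys are OpenStack extra specs.
nova flavor-key nfv.small set hw:mem_page_size=large   # back guest RAM with huge pages
nova flavor-key nfv.small set hw:cpu_policy=dedicated  # 1:1 vCPU-to-pCPU pinning
nova flavor-key nfv.small set hw:numa_nodes=1          # keep the VM on one socket

# SR-IOV is requested per port rather than per flavor:
neutron port-create sriov-net --binding:vnic_type direct
```

Any VM booted from such a flavor then receives pinned cores, huge-page-backed memory, and socket-local placement without per-instance manual tuning.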


A more detailed explanation of these technologies and how they work together can be found in a recently posted paper that I co-authored, titled A Path to Line-Rate-Capable NFV Deployments with Intel® Architecture and the OpenStack® Juno Release.



Virtual BNG/BRAS Example

The whitepaper also has a detailed example of a simulation we conducted to demonstrate the impact of these technologies.


We created a VNF with the Intel® Data Plane Performance Demonstrator (DPPD), a tool to benchmark platform performance under simulated traffic loads and to show the impact of adding Enhanced Platform Awareness technologies. DPPD was developed to emulate many of the functions of a virtual broadband network gateway / broadband remote access server.


We used the Juno release of OpenStack® for the test, patched with huge page support. A number of manual steps were applied to simulate capabilities that should be available in the Kilo release, such as CPU pinning and I/O-aware NUMA scheduling.


The results shown in the figure below are the relative gains in data throughput, as a percentage of 10Gbps, achieved through the use of these EPA technologies. Latency and packet delay variation are also important characteristics for BNGs; another study of this sample BNG includes results related to these metrics: Network Function Virtualization: Quality of Service in Broadband Remote Access Servers with Linux* and Intel® Architecture.



Cumulative performance impact on Intel® Data Plane Performance Demonstrators (Intel® DPPD) from platform optimizations



The order in which the features were applied affects the incremental gains, so it is important to consider the results as a whole rather than infer relative value from the individual increases. There are also a number of other procedures that you can read more about in the whitepaper.


Two years of hard work by the open source community have brought us to the verge of a very important and fundamental step forward in delivering carrier-class NFV performance. Be sure to check back here for more of my blogs on this topic, and you can also follow the progress of Kilo at the OpenStack Kilo Release Schedule website.

By: Frank Schapfel


One of the challenges in deploying Network Functions Virtualization (NFV) is creating the right software management for the virtualized network.  There are differences between managing an IT Cloud and a Telco Cloud.  IT Cloud providers take advantage of centralized, standardized servers in large-scale data centers, and IT Cloud architects aim to maximize server utilization (efficiency) and automate operations management.  In contrast, Telco Cloud application workloads have real-time constraints, government regulatory constraints, and network setup and teardown constraints.  New tools are needed to build a Telco Cloud to these requirements.


OpenStack is the open software community that has been developing IT Cloud orchestration management since 2010.  The Telco service provider community of end users, telecom equipment manufacturers (TEMs), and software vendors has rallied around adapting OpenStack cloud orchestration for the Telco Cloud, and over the last few releases of OpenStack, the industry has been shaping and delivering Telco Cloud-ready solutions. For now, let’s focus on just the real-time constraints. For the IT Cloud, the data center is viewed as a large pool of compute resources that needs to operate at maximum utilization, even to the point of over-subscription of server resources; waiting a few milliseconds is imperceptible to the end user.  A network, on the other hand, is real-time sensitive and therefore cannot tolerate over-subscription of resources.


To adapt OpenStack to be more Telco Cloud friendly, Intel contributed the concept of “Enhanced Platform Awareness” to OpenStack. Enhanced Platform Awareness in OpenStack offers a fine-grained matching of virtualized network resources to the server platform capabilities.  Having a fine-grained view of the server platform allows the orchestrator to accurately assign the Telco Cloud application workload to the best virtual resource.  The orchestrator needs NUMA (Non-Uniform Memory Architecture) awareness so that it can understand how the server resources are partitioned and how CPUs, IO devices, and memory are attached to sockets.  For instance, when workloads need line-rate bandwidth, high-speed memory access is critical, and huge page support is among the latest technologies in the Intel® Xeon® E5-2600 v3 processor.


Now in action at the Oracle Industry Connect event in Washington, DC, Oracle and Intel are demonstrating this collaboration using Enhanced Platform Awareness in OpenStack.  The Oracle Communications Network Service Orchestration uses OpenStack Enhanced Platform Awareness to achieve carrier-grade performance for the Telco Cloud: virtualized network functions are assigned resources based on their needs for huge page access and NUMA awareness, while other cloud workloads, which are not network functions, are not assigned specific server resources.


The good news: the Enhanced Platform Awareness contributions are already up-streamed in the OpenStack repository and will be in the OpenStack Kilo release later this year.  At Oracle Industry Connect this week, there is a keynote, panel discussions, and demos to get even further “under the hood.”  And if you want even more detail, there is a new Intel white paper: A Path to Line-Rate-Capable NFV Deployments with Intel® Architecture and the OpenStack® Juno Release.


Adapting OpenStack for Telco Cloud is happening now. And Enhanced Platform Awareness is finding its way into a real, carrier-grade orchestration solution.

March has been a big month for demonstrating the role of Intel® Ethernet in the future of several key Intel initiatives that are changing the data center.


At the start of the month, we were in Barcelona at Mobile World Congress, demonstrating the role of Ethernet as the key server interconnect technology for Intel’s Software Defined Infrastructure initiative; read my blog post on that event.


And just this week, Intel was in San Jose at the Open Compute Project Summit highlighting Ethernet’s role in Rack Scale Architecture, which is one of our initiatives for SDI.


RSA is a logical data center hardware architectural framework based on pooled and disaggregated computing, storage and networking resources from which software controllers can compose the ideal system for an application workload.


The use of virtualization in the data center is increasing server utilization levels and driving an insatiable need for more efficient data center networks. RSA’s disaggregated and pooled approach is an open, high-performance way to meet this need for data center efficiency.


In RSA, Ethernet plays a key role as the low-latency, high bandwidth fabric connecting the disaggregated resources together and to other resources outside of the rack. The whole system depends on Ethernet providing a low-latency, high throughput fabric that is also software controllable.


MWC was where we demonstrated Intel Ethernet’s software controllability through support for network virtualization overlays; and OCP Summit is where we demonstrated the raw speed of our Ethernet technology.


A little history is in order. RSA was first demonstrated at last year’s OCP Summit, and as part of that we revealed an integrated 10GbE switch module proof of concept that included a switch chip and multiple Ethernet controllers, removing the need for a NIC in the server.


This proof of concept showed how this architecture could disaggregate the network from the compute node.


At the 2015 show, we demonstrated a new design with our upcoming Red Rock Canyon technology, a single-chip solution that integrates multiple NICs into a switch chip. The chip delivered throughput of 50 Gbps between four Xeon nodes via PCIe, and multiple 100GbE connections between the server shelves, all with very low latency.


The features delivered by this innovative design provide performance optimized for RSA workloads. It’s safe to say that I have not seen a more efficient or higher-performance rack than this PoC; see this video of the performance.


Red Rock Canyon is just one of the ways we’re continuing to innovate with Ethernet to make it the network of choice for high-end data centers.
