
The Data Stack


Over the past few years, we’ve seen that big data is enabling fundamental changes across numerous industries, from finance to retail. But thus far, businesses, data scientists, and developers have been held back from realizing the full potential of data analytics by the absence of customizable, domain-specific apps, the complexity of big data infrastructure, and the lack of analytic functions with consistent application programming interfaces (APIs).


At this year’s Strata + Hadoop World, we introduced the Trusted Analytics Platform, an open source analytics Platform-as-a-Service (PaaS) designed to solve these issues, making it easier for data scientists and app developers to deploy predictive models faster on a shared big data analytics platform. This enables users to harness open source development for faster, lower cost innovation.


The Intel platform is open source, allowing data scientists to easily publish data sources, data analysis pipelines, and applications. In turn, developers can use this data to build applications ranging from visualizations to fully articulated recommender systems using local batch or streaming data, to the benefit of every developer and data scientist in the ecosystem.


How does it work? The platform provides an end-to-end solution with three key layers:


  • Data layer that includes Apache Hadoop, Spark, and other data components optimized for performance and security

  • Analytics layer that includes a data science tool kit to simplify model development and an extensible framework to generate predictive APIs

  • Application layer that includes a managed runtime environment for cloud-native apps


More importantly, how is the platform already starting to impact the big data landscape?


First, let’s take a look at our work with Penn Medicine, which is collaborating with Intel to advance healthcare analytics with a solution that combines patient vitals, lab records, and medications to develop predictive models that can forecast the risk of disease or readmission. Using TAP, Penn Medicine will be able to build better models for predicting risk and is currently evaluating the integration of TAP with Penn Signals.


Intel is also working with Oregon Health & Science University (OHSU) to develop the “Collaborative Cancer Cloud,” a big data analytics solution for precision medicine that allows hospitals to securely share patient genomic data to enable potentially lifesaving discoveries. OHSU is using TAP to securely manage patient data gathered from wearable devices, labs, and surveys in a central location. The deployment takes in more than 3 million records every day, with more than 360 million records already in the system. OHSU data scientists are using TAP to analyze the data to find new ways of determining a subject's overall health.


With a more open approach to big data applications and analysis, we’re creating an entirely new universe of possibilities. Greater efficiency and lower costs put big data innovation into the hands of the many instead of the few, which will lead to never-before-seen data insights that might just change the world as we know it.

Today’s enterprises are adept at capturing and storing huge amounts of data. While that alone is an accomplishment, it’s the easy part of a much harder problem. The real challenge is to turn those mountains of data into meaningful insights that enhance competitiveness.


Business analysts at enterprises today are hampered by the lack of domain-specific, customizable applications based on big data analytics. App developers are slowed by the complexity of big data infrastructure and the lack of analytic functions they depend on. And data scientists, in turn, are challenged by the time it takes to build handcrafted models, which are then re-coded for production and rarely reused, slowing deployment and updates.


This is a problem not for any one company to solve but for the entire community to address, and the new Trusted Analytics Platform (TAP) was built to help do just that. This open source platform, developed by Intel and our ecosystem partners, enables data scientists and app developers to deploy predictive models faster on a shared big data analytics platform.


The platform provides an end-to-end solution with a data layer optimized for performance and security, an analytics layer with a built-in data science toolkit, and an application layer that includes a managed runtime environment for cloud-native apps. Collectively, the capabilities of the Trusted Analytics Platform will make it much easier for developers and data scientists to collaborate by providing a shared, flexible environment for advanced analytics in public and private clouds.


And this brings us to a new collaborative effort with cloud service providers that will leverage the Trusted Analytics Platform to accelerate the adoption of big data analytics. Intel has formed a strategic collaboration with OVH.com to deliver big data analytics solutions built on TAP and running on OVH.com’s infrastructure. OVH.com, based in France, is one of the largest and fastest-growing cloud service providers in Europe and the first service provider we have announced as a collaborator.


Through this partnership, announced at the OVH Summit in Paris this week, the two companies will work together to optimize infrastructure for the best performance per workload and security while leveraging new Intel technologies to grow and differentiate OVH.com services.


At a broader level, partnerships like these reflect Intel’s long-running commitment to bring new cloud technologies to market and help our customers streamline and speed their path to deployment. We are deeply committed to working with partners like OVH.com to accelerate the adoption of big data technologies and further Intel’s mission of enabling every organization to unlock intelligence from big data.


To further fuel big data solutions, OVH.com and Intel are collaborating on a “big data challenge” to pilot the development of solutions using the new Trusted Analytics Platform. Here’s how the challenge works:


  • Submit your big data analytics challenge via OVH.com/bigdatachallenge by Oct 24th, 2015.
  • OVH and Intel will select up to three innovative big data use cases and help the winners develop and deploy their solutions with the Trusted Analytics Platform on OVH.com managed hosting.
  • The selected innovators will receive free hosting for the pilot period and be featured in global big data conferences next year.


Or if you’re a data scientist or a developer who wants to capitalize on TAP, visit http://trustedanalytics.github.io/ to get the tools you need to accelerate the creation of cloud-native applications driven by big data analytics.

By Leah Schoeb, Intel


As data centers manage their growing volumes of data while maintaining SLAs, storage acceleration and optimization become all the more important. To help enterprises keep pace with data growth, storage developers and OEMs need technologies that enable them to accelerate the performance and throughput of data while making the optimal use of available storage capacity.


These goals are at the heart of the Intel Intelligent Storage Acceleration Library (Intel ISA-L), a set of building blocks designed to help storage developers and OEMs maximize performance, throughput, security, and resilience, while minimizing capacity usage, in their storage solutions. The acceleration comes from highly optimized assembly code, built with deep insight into the Intel® Architecture processors.


Intel® ISA-L is an algorithmic library that enables users to get more performance out of Intel CPUs and reduce their investment in developing their own optimizations. The library uses dynamic linking so that the same code runs optimally across Intel’s line of processors, from Atom to Xeon, a technique that also ensures forward and backward compatibility, making it well suited for both software-defined storage and OEM or “known hardware” usage. Ultimately, the library helps end-user customers accelerate service deployment, improve interoperability, and reduce TCO by supporting storage solutions that make data centers more efficient.


This downloadable library is composed of optimized algorithms in five main areas: data protection, data integrity, compression, cryptography, and hashing. For instance, Intel® ISA-L delivers up to 7x bandwidth improvement for hash functions compared to OpenSSL algorithms. In addition, it delivers up to 4x bandwidth improvement on compression compared to the zlib compression library, and it lets users get to market faster and with fewer resources than if they had to develop (and maintain!) their own optimizations.


One way Intel® ISA-L can help accelerate storage performance cost-effectively is by accelerating data deduplication algorithms using its chunking and hashing functions. If you develop storage solutions, you know how data deduplication improves capacity utilization by eliminating redundant data. During the deduplication process, a hash function generates a fingerprint for each data chunk. Once each chunk has a fingerprint, incoming data can be compared against a stored database of existing fingerprints and, when a match is found, the duplicate data does not need to be written to disk again.
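To make that flow concrete, here is a minimal sketch of fingerprint-based deduplication with fixed-size chunking. It does not use ISA-L itself; the tiny chunk size and SHA-256 fingerprint are illustrative stand-ins for the optimized chunking and multi-buffer hashing routines the library provides.

```python
# Illustrative sketch of fingerprint-based deduplication (not ISA-L's API).
# Chunk size and hash choice are stand-ins; production systems use larger,
# often content-defined chunks and hardware-optimized hash routines.
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real systems use KB-sized chunks


def dedup_write(data: bytes, store: dict) -> list:
    """Split data into chunks, storing each unique chunk once by fingerprint.

    Returns the list of fingerprints (the 'recipe' to reassemble the data).
    """
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:  # only previously unseen chunks are written
            store[fp] = chunk
        recipe.append(fp)
    return recipe


store = {}
recipe = dedup_write(b"abcdabcdabcdxyz!", store)
print(len(recipe), len(store))  # -> 4 2  (4 chunks referenced, only 2 stored)
```

Reassembly is just `b"".join(store[fp] for fp in recipe)`; the three repeated "abcd" chunks occupy storage once.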


Data deduplication algorithms can be very CPU-intensive, leaving little processor headroom for other tasks; Intel® ISA-L removes this barrier. The combination of Intel® processors and Intel® ISA-L provides the tools to help accelerate everything from small-office NAS appliances up to enterprise storage systems.


The Intel® ISA-L toolkit is free to download, and parts of it are available as open source software. The open source version contains the data protection, data integrity, and compression algorithms, while the full licensed version also includes the cryptographic and hashing functions. In both cases, the code is provided free of charge.


Our investments in Intel® ISA-L reflect our commitment to helping our industry partners bring new, faster, and more efficient storage solutions to market. This is the same goal that underlies the new Storage Performance Development Kit (SPDK), launched this week at the Storage Developer Conference (SDC) in Santa Clara. This open source initiative, spearheaded by Intel, leverages an end-to-end user-level storage reference architecture, spanning from NIC to disk, to achieve performance that is both highly scalable and highly efficient.


For a deeper dive, visit the Intel Intelligent Storage Acceleration Library site. Or for a high-level overview, check out this quick Intel ISA-L video presentation from my colleague Nathan Marushak.

By Nathan Marushak, Director of Software Engineering for the DCG Storage Group at Intel



New non-volatile memory (NVM) technologies are transforming the storage landscape—and removing the historical I/O bottleneck caused by storage. With the performance of next-generation technologies like 3D XPoint, latency will now be measured not in milliseconds but nanoseconds.


While this is a huge step forward, fast media alone doesn’t get us to blazingly fast application performance. The rise of next-generation storage simply shifts the I/O bottlenecks from storage media to the network and software.


At Intel, we seek to address these bottlenecks, so developers can take full advantage of the potential performance of new NVM technologies. That’s a key goal driving the new Storage Performance Development Kit (SPDK), announced today at the Storage Developer Conference (SDC) in Santa Clara.


This new open source initiative, spearheaded by Intel, applies the high-performance packet processing framework of the open source Data Plane Development Kit (DPDK) to a storage environment. SPDK offers a way to do things in Linux user space that typically require context switching into the kernel. By enabling developers to work in user space, it improves performance and simplifies development for storage developers.


As part of the SPDK launch, Intel is releasing an open source user space NVMe driver. Why is this important? For developers building their storage applications in user space, this driver enables them to capitalize on the full potential of NVMe devices. Andy Warfield, CTO of Coho Data, says: "The SPDK user space NVMe driver removes a distracting and time consuming barrier to harnessing the incredible performance of fast nonvolatile memory. It allows our engineering team to focus their efforts on what matters: building advanced functionality. This translates directly into more meaningful product development and a faster time to market.”


A lot of storage innovation is occurring in user space. This includes efforts like containers, Ceph, Swift, Hadoop, and proprietary applications designed to scale storage applications out or up. For applications like these, the SPDK NVMe polled-mode driver architecture delivers efficient performance and allows a single processor core to handle millions of IOPS. Removing or circumventing system bottlenecks is key to realizing the performance promised by next-generation NVM technologies. For this reason, companies like Kingsoft Cloud have joined the SPDK community, and their experiences and feedback have already influenced the course of the project. Kingsoft Cloud is a leading cloud service provider that provisions online cloud storage and hosting services to end users in China. “We will continue to evaluate SPDK techniques and expect to leverage SPDK for improving Kingsoft Cloud’s scale-out storage service,” said Mr. Zhang, chief storage architect at Kingsoft Cloud.
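The polled-mode idea itself can be illustrated with a toy model (the class and method names below are hypothetical, not SPDK's actual API): instead of sleeping until an interrupt arrives, the application keeps checking the completion queue from user space, trading CPU cycles for the elimination of interrupts and kernel context switches.

```python
# Toy model of a polled-mode driver loop; names are hypothetical, not SPDK's API.


class FakeNvmeQueuePair:
    """Simulated device queue: a submission completes after a fixed number of polls."""

    def __init__(self, latency_polls=3):
        self.latency = latency_polls
        self.inflight = []  # each entry: [polls_remaining, callback, payload]

    def submit_read(self, lba, callback):
        # Asynchronous submission: returns immediately rather than blocking.
        self.inflight.append([self.latency, callback, f"data@{lba}"])

    def poll(self):
        """Check for completions and fire callbacks; returns how many completed."""
        done, still_pending = 0, []
        for entry in self.inflight:
            entry[0] -= 1
            if entry[0] <= 0:
                entry[1](entry[2])  # completion callback, run in user space
                done += 1
            else:
                still_pending.append(entry)
        self.inflight = still_pending
        return done


results = []
qp = FakeNvmeQueuePair()
qp.submit_read(0, results.append)
qp.submit_read(8, results.append)

# Polled-mode event loop: busy-poll instead of blocking on an interrupt.
while len(results) < 2:
    qp.poll()

print(results)  # -> ['data@0', 'data@8']
```

Because the whole loop stays in one user-space thread with no blocking syscalls, a real driver built this way can keep a single core saturated with I/O, which is what makes millions of IOPS per core feasible.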


With this announcement, storage developers can join an open ecosystem that is leveraging shared technology. This adds up to faster time to market while positioning organizations to capitalize on future NVM technology and storage advancements.


Intel’s work on SPDK won’t stop with today’s announcement. We expect to work closely with the broad ecosystem to enrich the features and functionality of SPDK in the months ahead. For example, we are already at work on additional modules that will join the NVMe driver in the SPDK open source community.


Ready to join the development effort? For easy access by all, the SPDK project resides on GitHub and related assets are available via 01.org, the online presence for the Intel Open Source Technology Center (OTC). To learn about other Intel storage software resources, visit software.intel.com/storage. And to learn more about next-generation non-volatile memory technologies, including the new 3D XPoint technology, visit http://www.intel.com/nvm.

Throughout the third quarter of 2015, we had many excellent guests and topics, including the launch of the Intel® Xeon® processor E7 v3, the launch of the Intel® Xeon® Processor E3-1200 v4 Product Family with integrated Intel® Iris™ Pro Graphics, and several interviews from the OpenStack Summit in Vancouver, B.C., including Margaret Chiosi from AT&T, Ken Won from HP, and Curt Aubley from Intel. We also got to chat with Intel’s Das Kamhout about the new Intel Cloud for All initiative, discuss OpenStack in Latin America with KIO Networks’ Francisco Araya, talk big data and analytics with Intel’s Alan Ross, and meet many other great guests.


If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!


Intel® Chip Chat:


In this livecast from the Intel Developer Forum (IDF) in San Francisco, Mike Ferron-Jones, Director of Datacenter Platform Technology Marketing, and Greg Matson, Director of SSD Strategic Planning and Product Marketing at Intel, discuss the announcement of Intel® Optane™ technology based on Intel’s new 3D XPoint™ technology. They outline how 3D XPoint technology is an entirely new class of nonvolatile memory that will revolutionize storage, enabling high-speed, high-capacity data storage close to the processor. This technology will be made available as SSDs (solid state drives) under the Intel Optane brand, as well as in a DIMM (dual in-line memory module) form factor, opening up new possibilities for the types of workloads and applications that can be accelerated or taken to whole new levels of big-memory computing. Greg emphasizes that Intel will make Optane SSDs available for servers, enthusiast clients, and laptops in 2016.


In this livecast from the Intel Developer Forum (IDF) in San Francisco, John Leung, Software & System Architect at Intel, and Jeff Autor, Distinguished Technologist with the Servers Business Unit at HP, stop by to discuss the release of Redfish 1.0. They highlight how on August 4, 2015, the DMTF (Distributed Management Task Force, Inc.) announced the availability of Redfish version 1.0, an adopted and approved industry-standard interface that simplifies the management of scalable compute platforms and is extensible beyond them. John and Jeff emphasize how Redfish 1.0 is a great example of what can be accomplished when technology leaders along the supply chain truly listen to the requests and feedback of end users and come together to satisfy those requests in an open and broad manner.


Jim Blakley, Visual Cloud Computing General Manager at Intel, chats about how Intel is driving innovation in visual cloud computing. He talks about the International Broadcasting Conference (IBC) and announces the launch of the Intel® Visual Compute Accelerator, a new Intel® Xeon® E3-based PCIe add-in card that brings media and graphics capabilities to Intel Xeon processor E5-based servers. Jim outlines how the Intel Visual Compute Accelerator enables real-time transcoding, specifically targeting AVC and HEVC workloads, and reduces the amount of storage and network bandwidth needed to deliver transcoded video streams. He also highlights several partners that will be demoing Intel technology or appearing in the Intel booth at IBC, including Thomson Video Networks, Envivio, Kontron, Artesyn, and Vantrix. To learn more, follow Jim on Twitter at twitter.com/jimblakley.


In this livecast from the Intel Developer Forum (IDF) in San Francisco Das Kamhout, Principal Engineer and SDI Architect at Intel discusses the Intel Cloud for All Initiative and how Intel is working to enable tens of thousands of new clouds for a variety of usage models across the world. Das illuminates a concept he covered in his IDF session contrasting more traditional types of cloud infrastructure with a new model of cloud based upon the use of containers and the ability to run an application by scheduling processes across a data center. He explains how container based cloud architectures can create a highly efficient delivery of services within the data center by abstracting the infrastructure and allowing application developers to be more flexible. Das also highlights how Intel is investing in broad industry collaborations to create enterprise ready, easy to deploy SDI solutions. To learn more, follow Das on Twitter at twitter.com/dkamhout.


In this livecast from the Intel Developer Forum in San Francisco Curt Aubley, VP and CTO of Intel’s Data Center Group stops by to talk about some of the top trends that he sees in data center technology today. Curt emphasizes how the fundamental shift in capabilities in the data center is enabling businesses to create an incredible competitive differentiator when they take advantage of emerging technologies. He brings up how new technologies like Intel’s 3D XPoint™ are creating an amazing impact upon real time analytics and calls out how dynamic resource pooling is helping to drive a transformation in the network and enable the adoption of software defined networking (SDN) and network functions virtualization (NFV) to remove networking performance bottlenecks. Curt highlights many other data center technology trends from rack scale architecture (RSA) and intelligent orchestration to cutting edge security technologies like Intel® Cloud Integrity Technology.


Caroline Chan, Director of Wireless Access Strategy and Technology at Intel stops by to talk about the shift to 5G within the mobile industry and how the industry will need to solve more challenges than just making the cellular network faster to make this shift possible. She stresses that there needs to be a focus on an end to end system that will enable the communications and computing worlds to merge together to create a more efficient network and better business model overall. Caroline also discusses possible upcoming 5G related showcases that will happen in Asia within the next 3-7 years and how Intel is collaborating immensely with many initiatives in Europe, Asia, and around the world to help drive the innovation of 5G.


John Healy, General Manager of the Software Defined Networking Division within Intel’s Network Platforms Group, stops by to chat about the current network transformation and how open standards and software are integral to building the base of a new infrastructure that can keep pace with the insatiable demand end users are putting on the network. He illustrates how Intel is driving these open standards and open source solutions through involvement in initiatives like OpenStack*, OpenDaylight*, and the development of Intel Network Builders to create interoperability and ease of adoption for end users. John also highlights the Intel® Open Network Platform and how it was awarded the Light Reading Leading Lights award for most innovative network functions virtualization (NFV) product strategy.


Alan Ross, Senior Principal Engineer at Intel outlines how quickly the amount of data that enterprises deal with is scaling from millions to tens of billions and how gaining actionable insight from such unfathomable amounts of data is becoming increasingly challenging. He discusses how Intel is helping to develop analytics platform-as-a-service to better enable flexible adoption of new algorithms and applications that can expose data to end users allowing them to glean near real-time insights from such a constant flood of data. Alan also illustrates the incredible potential for advances in healthcare, disaster preparedness, and data security that can come from collecting and analyzing the growing expanse of big data.


Francisco Araya, Development and Operations Research & Development Lead at KIO Networks, stops by to talk about how KIO Networks has delivered one of the first public clouds in Latin America based on OpenStack. He mentions that when KIO Networks first started implementing OpenStack, it took about two months to complete an installation; now, thanks to the strong OpenStack ecosystem, it takes his team only about three hours. Francisco emphasizes how the growing number of OpenStack offerings and provider use cases greatly increases ease and confidence when implementing OpenStack.


Rob Crooke, Senior VP and GM of NVM Solutions Group at Intel, discusses how Intel is breaking new ground in a type of memory technology that is going to help solve real computing problems and change the industry moving forward. This disruptive new technology is significantly denser and faster than DRAM and NAND technology. Rob outlines how this non-volatile memory will likely be utilized across many segments of the computing industry and have incredible effects on the speed, density, and cost of memory and storage moving into the future. To learn more, visit www.intel.com/ and search for ‘non-volatile memory’.


Das Kamhout, Principal Engineer and SDI Architect at Intel joins us to announce Intel’s launch of the Cloud for All initiative founded to accelerate cloud adoption and create tens of thousands of new clouds. He emphasizes how Intel is in a unique position to help align the industry towards delivery of easy to deploy cloud solutions based on standards based solutions optimized for enterprise capability. Das discusses that Cloud for All is a collaborative initiative involving many different companies including a new collaboration with Rackspace and ongoing work with companies including CoreOS, Docker, Mesosphere, Redapt, and Red Hat.


In this livecast from Big Telecom Sandra Rivera, Vice President and General Manager of the Network Platforms Group at Intel chats about the network transformation occurring within telecommunications and enterprise industries. She talks about how moving to an open industry standard solution base has created a shift in the industry paradigm from vertically integrated purpose built solutions supplied by one provider to a model where end users can choose best of breed modules from a number of different providers. This network transformation is providing a number of new business opportunities for many telecom equipment and networking equipment manufacturers. To learn more, follow Sandra on Twitter twitter.com/sandralrivera.

Brian McCarson, Senior Principal Engineer and Senior IoT System Architect for the Internet of Things Group at Intel, chats about the amazing innovations happening in the Internet of Things (IoT) arena and the core technology from Intel that enables IoT to achieve its full potential. He emphasizes how important the security and accuracy of data become as the number of IoT devices grows to potentially 50 billion by 2020, and how Intel provides world-class security software capabilities and hardware-level security that help protect against the risks associated with deploying IoT solutions. Brian also describes the Intel IoT Platform, which is designed to promote security, scalability, and interoperability and creates a standard that allows customers to reduce time to market and increase trust when deploying IoT solutions. To learn more, visit www.intel.com/iot.


Bill Mannel, General Manager and Vice President at Hewlett-Packard, stops by to discuss the growing demand for high performance computing (HPC) solutions and the innovative use of HPC to manage big data. He highlights an alliance between Intel and HP that will accelerate HPC and big data solutions tailored to meet the latest needs and workloads of HPC customers, leading with customized vertical solutions. Experts from both companies will be working together to accelerate code modernization of customer workloads in verticals including life sciences, oil and gas, financial services, and more. To learn more, visit www.hp.com/go/hpc.


In this livecast from the OpenStack Summit in Vancouver, B.C., Das Kamhout, Principal Engineer and SDI Architect at Intel, stops by to chat about democratizing cloud computing and making some of the most complicated cloud solutions available to the masses. He outlines key changes occurring in cloud computing today, like automation and hyperscale, highlighting how key technologies like OpenStack enable smaller cloud end users to operate in similar ways to some of the largest cloud-using organizations. To learn more, follow Das on Twitter at twitter.com/dkamhout.


In this livecast from the OpenStack Summit in Vancouver, B.C., Mauri Whalen, VP & Director of Core System Development in the Open Source Technology Center at Intel, discusses how beneficial open source software innovation like OpenStack is and how the collaborative process helps produce the highest-quality code and software possible. She also discusses the importance of initiatives like the Women of OpenStack and how supporting diversity within the open source community enables a better end product and ensures that all populations are represented in the creation of different solutions.


In this livecast from the OpenStack Summit in Vancouver, B.C., Cathy Spence, Principal Engineer at Intel, stops by to talk about Intel IT's move to the cloud and how its focus has evolved to center on agility, self-service provisioning, and on-demand services. She discusses how enterprise IT needs more applications designed for the cloud to take advantage of private cloud implementations and to use public cloud solutions more efficiently. Cathy also highlights Intel’s engagement with OpenStack, the Open Data Center Alliance, and other organizations that are driving best practices for cloud and advancing the industry as a whole. To learn more, follow Cathy on Twitter @cw_spence.


In this livecast from the OpenStack Summit in Vancouver, B.C., Margaret Chiosi, Distinguished Network Architect at AT&T Labs, stops by to chat about how OpenStack is influencing the telecommunications industry and AT&T’s goals for transforming to a software-centric network. She also discusses the Open Platform for Network Functions Virtualization (OPNFV) project and the work being done to create a platform that is accepted industry-wide to ensure consistency, innovation, and interoperability between different open source components.


In this livecast from the OpenStack Summit in Vancouver, B.C., Ken Won, Director of Cloud Software Product Marketing at HP, chats about the HP Helion strategy, which is helping customers shift from traditional to on-demand infrastructure environments to drive down costs and deal with common compliance issues. He also describes how HP is heavily engaged in the OpenStack community to help drive portability and standards for all different types of cloud environments, making it easy for end users to shift resources and utilize the right infrastructure based on their application needs. To learn more, visit www.hp.com/helion.


In this livecast from the OpenStack Summit in Vancouver, B.C., Curt Aubley, VP and CTO of the Data Center Group at Intel, stops by to talk about how OpenStack provides a foundational capability for cloud computing that allows customers to tailor and share common technologies to better address their specific needs. Curt discusses Intel® Cloud Integrity Technology and emphasizes how important it is to establish a foundation of trust to allow customers to easily move their workloads into the cloud. He highlights how standard approaches to security help facilitate flexibility and interoperability, which in turn lowers risk for everyone in the industry.


Jim Blakley, Visual Cloud Computing General Manager at Intel stops by to chat about the large growth in the use of cloud graphics and media processing applications and the increasing demands these applications are putting on the data center. He discusses the launch of the new Intel® Xeon® Processor E3-1200 v4 Product Family with integrated Intel® Iris™ Pro Graphics which provides up to 1.4x performance vs. the previous generation for video transcoding, as well as substantial improvement in overall media and graphics processing. These improvements not only benefit video quality and graphics rendering for end users, but also bring a better cost of ownership for data center managers by increasing density, throughput, and overall performance per rack. To learn more, visit www.intel.com/ and search for Iris™ Pro graphics, Intel® Xeon® Processor E3-1200 v4 Product Family, or Quick Sync Video.


Susan McNeice, Marketing Thought Leadership at Oracle Communications, stops by to chat about how OpenStack* Enhanced Platform Awareness (EPA), which is built on open source solutions and supported by Intel, is helping the industry rethink strategies for managing a telecommunications cloud. She also discusses how EPA is addressing the gap that exists today between orchestrating virtualized network function (VNF) activity, bringing services into the network, and the conversation with the processor platform. To learn more, visit www.oracle.com/communications.


Lynn Comp, Director of the Market Development Organization for the Network Products Group at Intel, stops by to chat about the advances that have been made in network virtualization and flexible orchestration enabling applications to be spun up within a virtual machine in minutes instead of months. She outlines how Intel is driving network transformation to a software defined infrastructure (SDI) by enabling network orchestrators to more rapidly apply security protocols to virtual applications. Lynn also highlights how enterprises are already employing virtualized routers, firewalls, and other aspects of network functions virtualization (NFV) and that NFV is already a mainstream trend with lots of reference material and applications available for enterprises to utilize. To learn more, follow Lynn on Twitter @comp_lynn or visit www.intel.com/itcenter.


In this archive of a livecast from Mobile World Congress, Guy Shemesh, Senior Director of the CloudBand Business Unit at Alcatel-Lucent, stops by to talk about how the CloudBand* platform enables service providers to accelerate adoption of Network Functions Virtualization (NFV). Guy emphasizes how important it is to embrace the open source community in such a rapidly changing industry in order to ensure the ability to adapt to different market trends and capture additional value for customers. To learn more, visit www.alcatel-lucent.com/cloudband.


Vineeth Ram, VP of Product Marketing at HP Servers, chats about how HP is working to reimagine the server for the data-driven organization and the wide breadth of solutions that HP has to offer. He outlines how HP is focused on redefining compute and how they are leveraging the infrastructure to deliver significant business outcomes and drive new insights from big data for their customers. To learn more, visit www.hp.com/go/compute.


Jim McHugh, VP of UCS & Data Center Solutions Marketing at Cisco, stops by to talk about new possibilities that the launch of the Intel® Xeon® processor E7 v3 family will bring to Cisco's Unified Computing System (UCS) in the big data and analytics arena. He emphasizes how new insights driven by big data can help businesses become intelligence-driven to create a perpetual and renewable competitive edge within their field. To learn more, visit http://www.cisco.com/c/en/us/products/servers-unified-computing/index.html.


Ravi Pendekanti, Vice President of Server Solutions Marketing at Dell, stops by to talk about the launch of Dell’s PowerEdge R930* four-socket server that incorporates the new Intel® Xeon® processor E7 v3 family. Ravi discusses how the PowerEdge R930 will help enterprise customers migrate from RISC-based servers to more energy-efficient servers like the R930 that will deliver greater levels of performance for demanding mission-critical workloads and applications. To learn more, visit www.dell.com/accelerate.


Scott Hawkins, the Executive Director of Marketing for the Enterprise Business Group at Lenovo, stops by to chat about how Lenovo is refreshing its high-end X6 portfolio to bring greater performance and security to its customers. He highlights how Lenovo’s X6 portfolio was truly enabled by the leadership collaboration between Intel and IBM and outlines how the launch of the Intel® Xeon® processor E7 v3 family incorporated into Lenovo solutions will bring end users the highest levels of processor and storage performance as well as memory capacity and resiliency. To learn more, visit www.lenovo.com/systems.


Lisa Spelman, General Manager of Marketing for the Datacenter Group at Intel, discusses the launch of the new Intel® Xeon® processor E7 v3 family and how it is driving significant performance improvements for mission-critical applications. She highlights how the incredible 12 terabyte memory capacity of the Intel® Xeon® processor E7 v3 is a game changer for in-memory computing that will enable enterprises to capture new business insights through real-time analytics and decision making.


Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

Over the years, people have talked about the potential of remote direct memory access (RDMA) to greatly accelerate application performance by bypassing the CPU and enabling direct access to memory. But there was a notable roadblock in this route to low-latency networking: slow storage media.


More specifically, with the slow speeds of widely used spinning disk and the relatively high cost of DRAM, there wasn’t a compelling reason for application developers to use RDMA for general purpose, distributed storage. Storage was basically a bottleneck in the I/O pipeline, and that bottleneck had the effect of negating the need for RDMA.


Now fast forward to 2015 and the arrival of a new generation of lightning-fast non-volatile memory (NVM) technologies, such as the upcoming Intel® Optane™ technology based on 3D XPoint™ memory. These new technologies are going to obliterate the storage bottlenecks of the past.


Consider these metrics from a fact sheet (PDF) from Intel and Micron, the joint developers of 3D XPoint technology:


  • HDD latency is measured in milliseconds, NAND latency is measured in microseconds, and 3D XPoint technology latency is measured in nanoseconds (one-billionth of a second)

  • 3D XPoint technology is up to 1,000x faster than NAND

  • In the time it takes an HDD to sprint the length of a basketball court, NAND could finish a marathon, and 3D XPoint technology could nearly circle the globe.
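To make those orders of magnitude concrete, here is a small arithmetic sketch. The figures are illustrative round numbers consistent with the latency classes named above (milliseconds, microseconds, nanoseconds), not measured values from the fact sheet:

```python
# Representative latencies for each storage class, in seconds.
# These are illustrative order-of-magnitude values, not benchmarks.
HDD_LATENCY = 5e-3        # spinning disk: milliseconds (~5 ms seek)
NAND_LATENCY = 100e-6     # NAND flash: microseconds (~100 us read)
XPOINT_LATENCY = 100e-9   # 3D XPoint class: nanoseconds (~100 ns access)

def speedup(slow: float, fast: float) -> float:
    """How many times faster the second medium is than the first."""
    return slow / fast

hdd_vs_nand = speedup(HDD_LATENCY, NAND_LATENCY)        # ~50x
nand_vs_xpoint = speedup(NAND_LATENCY, XPOINT_LATENCY)  # ~1,000x
hdd_vs_xpoint = speedup(HDD_LATENCY, XPOINT_LATENCY)    # ~50,000x
```

With these numbers, the fact sheet's "up to 1,000x faster than NAND" claim falls straight out of the unit change from microseconds to nanoseconds.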


So how do we make use of these revolutionary storage innovations?


As a first step, we need to remove the bottlenecks in storage software that was written for the era of spinning disk. The assumptions about storage speeds and memory access built into legacy code no longer apply.


After that problem is fixed, we need to move on to the networking side of the equation. With the new generation of NVM technologies, storage performance has leapt ahead of networking performance—at least when using common networking technologies. This evolutionary change in storage creates the need for the speed of RDMA, which does network processing much more efficiently by enabling direct access to memory.


Removing the imbalance between NVM and RDMA isn’t an untested proposition. One big cloud service provider, Microsoft Azure, is already there. They prove the concept every day. They scale workloads out over distributed cores and exploit RDMA to offload cycles related to network processing. RDMA is one of their keys to achieving low latency and high message rates in bandwidth-hungry cloud applications.


If you are attending the SNIA Storage Developer Conference in Santa Clara this week, you will have the opportunity to explore these topics at various levels in presentations from Intel and Microsoft, among others. To learn more about RDMA, check out my pre-conference presentation, where I will explore RDMA and Four Trends in the Modern Data Center, as well as presentations from Chet Douglas and Tom Talpey. I also recommend Bev Crair’s keynote on Next Generation Storage and Andy Rudoff’s talk exploring the Next Decade of NVM Programming.


Meanwhile, for a closer look at today’s new non-volatile memory technologies, including those based on 3D XPoint technology, visit http://www.intel.com/nvm.

For service providers, the rapid momentum of video streaming is both a plus and a minus.  On the plus side, millions of consumers are now looking to service providers to deliver content they used to access through other channels. That’s all good news for the business model and the bottom line.


On the minus side, service providers now have to meet the growing demands of bandwidth-hungry video streams, including new 4K media streaming formats. As I mentioned in a recent blog post, Video Processing Doesn’t Have To Kill the Data Center, today’s 4K streams come with a mind-boggling 8 million pixels per frame. And if you think today’s video workloads are bad, just stay tuned for more. Within five years, video will consume 80 percent of the world’s Internet bandwidth.


While meeting today’s growing bandwidth demands, service providers simultaneously have to deal with an ever-larger range of end-user devices with wide variances in their bit rates and bandwidth requirements. When customers order up videos, service providers have to be poised to deliver the goods in many different ways, which forces them to store multiple copies of content—driving up storage costs.


At Intel, we are working to help service providers solve the challenges of the minus side of this equation so they can gain greater benefits from the plus side. To that end, we are rolling out a new processing solution that promises to accelerate video transcoding workloads while helping service providers contain their total cost of ownership.


This solution, announced today at the IBC 2015 conference in Amsterdam, is called the Intel® Visual Compute Accelerator. It’s an Intel® Xeon® processor E3 based media processing PCI Express* (PCIe*) add-in card that brings media and graphics capabilities into Intel® Xeon® processor E5 based servers. We’re talking about 4K Ultra High Definition (UHD) media processing capabilities.


A few specifics: The card contains three Intel Xeon processor E3 v4 CPUs, each of which contains the Intel® Iris™ Pro graphics P6300 GPU. Placing these CPUs on a Gen3 x16 PCIe card provides high throughput and low latency when moving data to and from the card.




The Intel Visual Compute Accelerator is designed for cloud and communications service providers who are implementing High Efficiency Video Coding (HEVC), which is expected to be needed for 4K/UHD videos, and Advanced Video Coding (AVC) media processing solutions, whether in the cloud or in their networks.


We expect that the Intel Visual Compute Accelerator will provide customers with excellent TCO when looking at cost per watt per transcode. Having both a CPU and a GPU on the same chip (as compared to just a GPU) enables ISVs to build solutions that improve software quality while accelerating high-end media transcoding workloads.


If you happen to be at IBC 2015 this week, you can get a firsthand look at the power of the Intel Visual Compute Accelerator in the Intel booth – hall 4, stand B72. We are showing a media processing software solution from Vantrix*, one of our ISV partners, that is running inside a dual-socket Intel Xeon processor E5 based Intel® Server System with the Intel Visual Compute Accelerator card installed. The demonstration shows the Intel Visual Compute Accelerator transcoding using both the HEVC and AVC codecs at different bit rates intended for different devices and networks.


Vantrix is just one of several Intel partners who are building solutions around the Intel Visual Compute Accelerator. Other ISVs who have their solutions running on the Intel Visual Compute Accelerator include ATEME, Ittiam, Vanguard Video* and Haivision*—and you can expect more names to be added to this list soon.


Our hardware partners are also jumping on board. Dell, Supermicro, and Advantech* are among the OEMs that plan to integrate the Intel Visual Compute Accelerator into their server product lines.


The ecosystem support for the Intel VCA signals that industry demand for solutions to address media workloads is high. Intel is working to meet those needs with the Intel Xeon processor E3 v4 with integrated Intel Iris Pro graphics. Partners including HP, Supermicro, Kontron, and Quanta have all released Xeon processor E3 solutions for dense environments, while Artseyn* also has a PCI Express based accelerator add in card similar to the Intel VCA. These Xeon processor E3 solutions all offer improved TCO and competitive performance across a variety of workloads.


To see the Intel Visual Compute Accelerator demo at IBC 2015, stop into the Intel booth, No. 4B72. Or to learn more about the card right now, visit http://www.intelserveredge.com/intelvca.

Dawn Moore, GM Networking Division


Data center application performance today depends on balanced system performance: a combination of CPU power, faster storage, and high-throughput networks. Upgrading just one of these elements will not maximize your data center performance.


This wasn’t always the case. In years past, some IT managers could postpone network upgrades because slow storage would limit overall system performance. But now, with much faster solid-state drives (SSDs), the performance bottleneck has shifted from the hard drive to the network.


This means that in today’s IT environment—with hyperscale data centers and virtualized servers—it’s crucial that upgrading to the latest technology, like faster SSDs or 10/40GbE, be viewed from a comprehensive systems viewpoint.


Certainly, upgrading to a server with a new Intel® Xeon® Processor E5-2600 v3 CPU will provide improved performance. Similarly, swapping out a hard drive for an SSD or upgrading from 1GbE to 10GbE will improve performance.


Two recent whitepapers highlight how maximum performance depends on the interconnected nature of these systems. If the entire system isn’t upgraded, then the data center doesn’t get the best return from a new server investment.


The first paper* discusses the improvements in raw performance that can be seen in a complete upgrade. For example, when an older server with SATA SSDs and a single 10GbE NIC was replaced with a new Intel® Xeon® processor E5-2695 v3 based server, a PCIe SSD, and four 10GbE ports, the new system delivered 54% more transactions per minute and 42.4% more throughput, as well as much faster response times in these tests.


What can be done with this raw performance increase? The other whitepaper** answers that question by researching the increase in the number of virtual machines supported by an upgraded system.


With SDN in the data center, managers can automatically ramp up new virtual machines (VMs) as user needs grow. In the case illustrated in this paper, it was the ability to automatically spin up a VM and a new instance of Microsoft Exchange to support new email users. With all of this automation, the last thing that’s needed is for the infrastructure to restrict that flexibility.


In this example, a Dell PowerEdge R720 server replaced an older Dell PowerEdge R710 server-storage solution. These new systems featured the latest Intel® Xeon® processor, new operating system, SSD storage and Intel® Ethernet CNA X520 (10GbE) adapters. When the tests were finished, the new system supported 4.5 times more VMs than the previous system.


What is interesting to me is that the researchers measured the performance increase for each part of the upgrade—which really illustrates the point that these upgrades need to be done comprehensively.


In this test, when the researchers upgraded just the CPU and the OS, they saw performance increase 275 percent. Not bad. But when they added the higher-performance SSDs to the new CPU and OS, the result was a 325 percent improvement. And finally, when they added the new network adapters, overall VM density improvement climbed to 450 percent compared to the original base system.
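Treating the whitepaper's percentages as multipliers relative to the original system (my reading, not the paper's wording: "275 percent" taken as 2.75x baseline VM density), a quick calculation separates out what each stage of the upgrade contributed:

```python
# VM density relative to the original base system, per the whitepaper,
# interpreted here as multipliers (275 percent -> 2.75x, and so on).
BASE = 1.0
CPU_OS = 2.75        # CPU + OS upgrade only
CPU_OS_SSD = 3.25    # plus higher-performance SSDs
FULL = 4.50          # plus new network adapters

# Incremental gain contributed by each later stage of the upgrade.
ssd_gain = CPU_OS_SSD / CPU_OS   # ~1.18x on top of CPU + OS
net_gain = FULL / CPU_OS_SSD     # ~1.38x on top of CPU + OS + SSD

# The product of the stages reproduces the end-to-end 4.5x result.
total = BASE * CPU_OS * ssd_gain * net_gain
```

The point the arithmetic makes: the network adapters alone account for a further ~1.38x on top of everything else, which is exactly the gain left on the table if the network is not upgraded along with the server.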


More details on both of these examples are available in the white papers referenced below.


When it’s time to invest in new servers, take a look at the rest of your system, which includes your Ethernet and storage sub-system, and think about the combination that will give you the best return on your investment.


*Boosting Your Storage Server Performance with the Intel Xeon Processor E5-2600 V3 Product Family


**Increase Density and Performance with Upgrades from Intel and Dell

Here's an interesting disconnect: 84 percent of C-suite executives believe that the Internet of Things (IoT) will create new sources of revenue. However, only 7 percent have committed to an IoT investment.1 Why the gap between belief and action?


Perhaps it's because of the number of zeroes. Welcome to the world of overwhelming numbers: billions of things connecting to millions of sensors, with 1.6 trillion dollars at stake.2


What does a billion look or feel like, much less a trillion? If you're like me, it's difficult to relate to such large-scale numbers. So it's not surprising that many companies are taking a wait-and-see approach. They will wait for the dust to settle, and for the numbers to become less abstract, before taking action.


Analysts make some big claims, and it can feel like IoT promises the world. But many businesses both large and small aren't ready to invest in a brand new world, even if they believe that IoT can deliver on its promise. However, the same businesses that are wary of large promises could use connected things today to make small changes that might significantly impact profitability.

For example, changes in the way your users conduct meetings could dramatically improve efficiency. Imagine a routine meeting that is assisted by fully connected sensors, apps, and devices. These connected things, forming a simple IoT solution, could anticipate your needs and do simple things for you to save time. They could reserve the conference room, dim the lights, adjust the temperature, and send notes to meeting attendees.

That's why we here at Intel are so excited to partner with Citrix Octoblu. Designed with the mission to connect anything to everything, Octoblu offers a way for your business to take advantage of IoT today, even before all your things are connected.

Octoblu provides software and APIs that automate interactions across smart devices, wearables, sensors, and many other things. Intel brings Intel IoT Gateways to that mix, which are pretested and optimized hardware platforms built specifically with IoT security in mind. The proven and trusted Intel reputation in the hardware industry, combined with Octoblu, a noted pioneer in IoT, can help address concerns about security and complexity as companies look at the possibilities for connected things.


IoT is shaping up to be more than just hype. Check out a new infographic  that shows small, practical ways you can benefit from IoT today. Or read the Solution Brief  to learn more about how the Intel and Citrix partnership can help you navigate the uncharted territory surrounding IoT.





1 Accenture survey. "CEO Briefing 2015: From Productivity to Outcomes. Using the Internet of Things to Drive Future Business Strategies." 2015. Written in collaboration with The Economist Intelligence Unit (EIU).

2 McKinsey Global Institute. "Unlocking the Potential of the Internet of Things." June 2015.

By Barry Davis, General Manager, High Performance Fabrics Operation at Intel



Intel Omni-Path Architecture (Intel OPA) is gearing up to be released in Q4’15, which is just around the corner! As we get closer to our official release, things are getting real and we’re providing more insight into the fabric for our customers and partners. In fact, more Intel Omni-Path architectural level details were just presented on August 26th at Hot Interconnects. Before I talk about the presentation, I want to remind you that this summer at ISC ’15 in Germany, we disclosed the next level of detail and showcased the first Intel OPA public demo through the COSMOS supercomputer simulation.


For those who didn’t make it to Frankfurt, we talked about our evolutionary approach to building the next-generation fabric. We shared how we built upon key elements of Aries* interconnect and Intel® True Scale fabric technology while adding revolutionary features such as:


  • Traffic Flow Optimization: provides very fine-grained control of traffic flows and patterns by making priority decisions so that important data, like latency-sensitive MPI data, has an express path through the fabric and doesn’t get blocked by low-priority traffic. This results in improved performance for high-priority jobs and better run-to-run consistency.


  • Packet Integrity Protection: catches and corrects all single- and multi-bit errors in the fabric without adding the additional latency of other error detection and correction technologies. Error detection and correction is extremely important in fabrics running at the speed and scale of Intel OPA.


  • Dynamic Lane Scaling: guarantees that a workload will gracefully continue to completion even if one or more lanes of a 4x link fail, rather than shutting down the entire link, which was the case with other high performance fabrics.


These features are a significant advancement because together they help deliver enhanced performance and scalability through higher MPI rates, lower latency, and higher bandwidth. They also provide for improved Quality of Service (QoS), resiliency, and reliability. In total, these features are designed to support the next generation of data centers with unparalleled price/performance and capability.


At Hot Interconnects we provided even more detail. Our chief OPA software architect, Todd Rimmer, gave an in-depth presentation on the architectural details of our forthcoming fabric. He delivered more insight into what makes Intel OPA a significant advancement in high performance fabric technology. He covered the major wire-level protocol changes responsible for the features listed above – specifically the layer between Layer 1 and Layer 2, coined “Layer 1.5.” This layer provides the Quality of Service (QoS) and fabric reliability features that will help deliver the performance, resiliency, and scale required for next-generation HPC deployments. Todd closed, true to his software roots, by discussing how Intel is upping the ante on the software side: Intel OPA software improvements including the next-generation MPI-optimized fabric communication library, Performance Scaled Messaging 2 (PSM2), and powerful new features for fabric management.


Check out the paper Todd presented for a deep dive into the details!


Stay tuned for more updates as the Intel® Omni-Path Architecture continues the run-up towards release in the 4th quarter of this year.


Take it easy

By RadhaKrishna Hiremane, Director of Marketing for SDI, Cloud, and Big Data at Intel


Cloud is at the center of the digital service economy and the new wave of connected devices and IoT. As more workloads become cloud-deployed solutions for anytime, anywhere, any-device consumption, the demand on the cloud infrastructure grows in multiple ways.


Cloud infrastructure needs to scale up, scale out, and be reliable. More precisely, cloud infrastructure must provide a larger footprint of compute, storage, network bandwidth, and service capacity on demand, as application demands grow.


As workloads migrate to the cloud, the data that resides in the cloud also grows. Whether it is a new startup wanting to develop apps, a growing company requiring a database to manage users or business, or an IoT infrastructure needing analytics capability to distill end-point data and deliver meaningful insights, data storage demands are growing fast. And beyond capacity, I/O performance also makes a difference in the speed and agility of an application, particularly for devops.


Microsoft announced today something that I see as bringing a combination of performance and storage based on the Intel® Xeon® processor E5 v3 to Azure Cloud users. With its Azure* GS-series, devops users can expect up to 64 TB of storage per virtual machine (VM), 80,000 IOPS, and 16,000 Mbps of storage throughput. For applications that require both performance and storage, such as SQL applications, analytics, and insights, the capacity and performance combination per VM is unparalleled.


This solution is enabled on the previously announced custom Xeon E5 V3 SKU highlighting our collaborative work to drive workload optimized performance capabilities to address every data center workload requirement.


With reliability, performance, and I/O scalability of Intel Xeon E5 v3 based systems and Azure Cloud Platforms, the solution delivery advantage to customers spans from system level to the Azure services level across geographies. For more information on Microsoft Azure GS series, click here.

Traditionally, there has been a balance of intelligence between computers and humans where all forms of number crunching and bit manipulations are left to computers, and the intelligent decision-making is left to us humans.  We are now at the cusp of a major transformation poised to disrupt this balance. There are two triggers for this: first, trillions of connected devices (the “Internet of Things”) converting the large untapped analog world around us to a digital world, and second, (thanks to Moore’s Law) the availability of beyond-exaflop levels of compute, making a large class of inferencing and decision-making problems now computationally tractable.


This leads to a new level of applications and services in form of “Machine Intelligence Led Services”.  These services will be distinguished by machines being in the ‘lead’ for tasks that were traditionally human-led, simply because computer-led implementations will reach and even surpass the best human-led quality metrics.  Self-driving cars, where literally machines have taken the front seat, or IBM’s Watson machine winning the game of Jeopardy is just the tip of the iceberg in terms of what is computationally feasible now.  This extends the reach of computing to largely untapped sectors of modern society: health, education, farming and transportation, all of which are often operating well below the desired levels of efficiency.


At the heart of this enablement is a class of algorithms generally known as machine learning. Machine learning was most concisely and precisely defined by Prof. Tom Mitchell of CMU almost two decades back as, “A computer program learns, if its performance improves with experience”.  Or alternately, “Machine Learning is the study, development, and application of algorithms that improve their performance at some task based on experience (previous iterations).”   Its human-like nature is apparent in its definition itself.
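Mitchell's definition can be made concrete with a toy learner. In this sketch (entirely illustrative; the numbers are made up), the "task" is estimating an unknown quantity, "experience" is the number of observations seen, and "performance" is the absolute estimation error, which shrinks as experience grows:

```python
# A toy learner in Mitchell's sense: estimate an unknown true value
# from noisy observations by keeping a running mean. Its performance
# (absolute error) improves with experience (more observations seen).
TRUE_VALUE = 2.0
observations = [2.5, 1.3, 2.4, 1.6, 2.05, 1.95]  # illustrative noisy samples

def estimate_after(n: int, samples: list) -> float:
    """Running-mean estimate after seeing the first n samples."""
    return sum(samples[:n]) / n

# Error after 2, 4, and 6 observations: strictly decreasing, i.e.
# the program "learns" under Mitchell's definition.
errors = [abs(estimate_after(n, observations) - TRUE_VALUE)
          for n in (2, 4, 6)]
```

Every machine learning method, from logistic regression to deep neural networks, follows this same contract; only the model, the data, and the performance metric change.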


The theory of machine learning is not new; its potential however has largely been unrealized due to the absence of the vast amounts of data needed to take machine performance to useful levels.  All of this has now changed with the explosion of available data, making machine learning one of the most active areas of emerging algorithm research. Our research group, the Parallel Computing Lab, part of Intel Labs, has been at the forefront of such research.  We seek to be an industry role-model for application-driven architectural research. We work in close collaboration with leading academic and industry co-travelers to understand architectural implications—hardware and software—for Intel's upcoming multicore/many-core compute platforms.


At the Intel Developer Forum this week, I summarized our progress and findings.  Specifically, I shared our analysis and optimization work with respect to core functions of machine learning for Intel architectures.  We observe that the majority of today’s publicly available machine learning code delivers sub-optimal compute performance. The reasons for this include the complexity of these algorithms, their rapidly evolving nature, and a general lack of parallelism-awareness. This, in turn, has led to a myth that industry standard CPUs can’t achieve the performance required for machine learning algorithms. However, we can “bust” this myth with optimized code, or code modernization to use another term, to demonstrate the CPU performance and productivity benefits.


Our optimized code running on Intel’s family of latest Xeon processors delivers significantly higher performance (often more than two orders of magnitude) over corresponding best-published performance figures to date on the same processing platform.  Our optimizations for core machine learning functions such as K-means based clustering, collaborative filtering, logistic regression, support vector machine training, and deep learning classification and training achieve high levels of architectural, cost and energy efficiency.
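As a reference point for what one of those "core machine learning functions" looks like, here is a minimal, unoptimized K-means (Lloyd's algorithm) sketch in plain Python, with made-up data and starting centroids. The optimized kernels discussed above target exactly this kind of loop, which is dominated by distance computations and is highly parallelizable:

```python
# Minimal K-means (Lloyd's algorithm): alternate between assigning
# each point to its nearest centroid and moving each centroid to the
# mean of its assigned points. Illustrative 2-D data, k = 2.
points = [(0, 0), (0, 1), (1, 0), (1, 1),
          (8, 8), (8, 9), (9, 8), (9, 9)]
start_centroids = [(0.0, 0.0), (9.0, 9.0)]  # arbitrary initial guesses

def sq_dist(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: group points by nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: sq_dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters if c
        ]
    return centroids

final = kmeans(points, start_centroids)  # converges to the two cluster means
```

Both steps are embarrassingly parallel across points, which is why vectorization and parallelism-aware coding yield the large speedups described above.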


In most cases, our achieved performance also exceeds best-published-to-date compute performance of special-purpose offload accelerators like GPUs. These accelerators, being special-purpose, often have significantly higher peak flops and bandwidth than our general-purpose processors. They also require significant software engineering efforts to isolate and offload parts of computations, through their own programming model and tool chain. In contrast to this, the Intel® Xeon® processor and upcoming Intel® Xeon Phi™ processor (codename Knights Landing) each offer common, non-offload-based, general-purpose processing platforms for parallel and highly parallel application segments respectively.


A single-socket Knights Landing system is expected to deliver over 2.5x the performance of a dual socket Intel Xeon processor E5 v3 family based system (E5-2697v3; Haswell) as measured by images per second using the popular AlexNet neural network topology.  Arguably, the most complex computational task in machine learning today is scaling state-of-the art deep neural network topologies to large distributed systems. For this challenging task, using 64 nodes of Knights Landing, we expect to train the OverFeat-FAST topology (trained to 80% classification accuracy in 70 epochs using synchronous minibatch SGD) in a mere 3-4 hours.  This represents more than a 2x improvement over the same sized two socket Intel Xeon processor E5-2697 v3 based Intel® Endeavour cluster result.
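The synchronous minibatch SGD mentioned above has a simple structure worth seeing in code. This toy version (a one-parameter linear model on made-up data, with the allreduce step simulated by a plain average over worker shards) is a sketch of the pattern, not of Intel's implementation:

```python
# Toy synchronous data-parallel minibatch SGD: learn w for y = w * x
# (true w = 3) across two simulated workers. The allreduce step is
# simulated by averaging per-worker gradients; shards are equal size,
# so the average of shard means equals the global mean gradient.
shards = [
    [(1.0, 3.0), (2.0, 6.0)],   # worker 0's (x, y) pairs
    [(3.0, 9.0), (4.0, 12.0)],  # worker 1's (x, y) pairs
]

def local_gradient(w, shard):
    """Mean gradient of the squared error (w*x - y)**2 over one shard."""
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

w = 0.0    # shared model parameter, identical on every worker
lr = 0.05  # learning rate
for _ in range(10):
    grads = [local_gradient(w, s) for s in shards]  # parallel across nodes in reality
    global_grad = sum(grads) / len(grads)           # simulated allreduce (average)
    w -= lr * global_grad                           # same update applied everywhere
```

Because every worker applies the identical averaged gradient, the model stays in lockstep across nodes; the cost is a synchronization barrier per minibatch, which is what makes scaling this to 64 nodes a fabric and software challenge.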


More importantly, the coding and optimization techniques employed here deliver optimal performance for both Intel Xeon and Intel Xeon Phi processors, both at the single-node, as well as multi-node level.  This is possible due to their shared programming model and architecture.  This preserves the software investment the industry has made in Intel Xeon, and hence reduces TCO for data center operators.


Perhaps more importantly, we are making these performance optimizations available to our developers through the familiar Intel-architecture tool chain, specifically through enhancements over the coming couple of quarters to the Intel® Math Kernel Library (MKL) and Data Analytics Acceleration Library (DAAL).  This significantly lowers the software barrier for developers while delivering highly performant, efficient, and portable implementations.


Let us together grow the use of machine learning and analytics to turn big data into deep insights and prescriptive analytics – getting machines to reason and prescribe a course of action in real-time for a smart and connected world of tomorrow, and extend the benefit of Moore’s Law to new application sectors of our society.


For further information click here to view the full presentation or visit http://www.intel.com/idfsessionsSF and search for SPCS008.

In conversations with enterprise and cloud data center operators, hyper-converged infrastructure is a very hot topic. This new approach to infrastructure brings together server, storage, and networking components into an appliance designed for quicker installation and easier management. Some industry observers say hyper-converged systems are likely to play a significant role in meeting the scalability and deployment requirements of tomorrow’s data centers.


One view, for example comes from IDC analyst Eric Sheppard: “As businesses embark on a transformation to become data-driven entities, they will demand a data infrastructure that supports extreme scalability and flexible acquisition patterns and offer unprecedented economies of scale. Hyperconverged systems hold the promise and the potential to assist buyers along this data-driven journey.”


Today, Intel is helping fuel this hyper-converged infrastructure trend with a line of new server products announced at this week’s VMworld 2015 U.S. conference in San Francisco. Intel® Server Products for Hyper-Converged Infrastructure are designed to be high quality, unbranded, semi-integrated, and configure-to-order server building blocks optimized for the hyper-converged infrastructure solutions that enterprise IT and cloud environments have requested.


These new offerings, which provide certified hardware for VMware EVO:RAIL* solutions, combine storage, networking, and compute in an all-in-one system to support homogenous enterprise IT environments in a manner that reduces labor costs. OEMs and channel partners can now provide hyper-converged infrastructure solutions featuring Intel’s most innovative technologies, along with world-class validation, compatibility, certification, warranty, and support.


For OEMs and channel partners, these products pave a path to the rapidly growing and potentially lucrative market for hyper-converged solutions. Just how big a market are we talking about? According to IDC, workload and geographic expansion will help push global hyper-converged systems revenues past the $800 million mark this year, up 116 percent over 2014. Intel® Server Products for Hyper-Converged Infrastructure also bring together key pieces of the infrastructure puzzle, including Intel’s most innovative technologies designed for hyper-converged infrastructure enterprise workloads.


Intel® Server Products for Hyper-Converged Infrastructure include a 2U 4-Node chassis supporting up to 24 hot-swap hard disk drives, dual-socket compute modules offering dense performance and support for the Intel® Xeon® processor E5-2600 v3 product family, and eight high-speed NVMe* solid-state drives acting as cache to deliver high performance for VMware Virtual SAN* (VSAN*).


With all key server, storage, and networking components bundled together, OEMs and channel partners have what they need to accelerate the delivery of hyper-converged solutions that are easily tuned to the requirements of customer environments. Better still, they can provide their customers with the confidence that comes with Intel hardware that is fully validated and optimized for VMware EVO:RAIL and integrated into enterprise-class VSAN-certified solutions.


For a closer look at these new groundbreaking server products, visit the Intel hyper-converged infrastructure site.








1 IDC MarketScape: Worldwide Hyperconverged Systems 2014 Vendor Assessment. December 2014. Doc # 253267.

2 IDC news release. “Workload and Geographic Expansion Will Help Push Hyperconverged Systems Revenues Past $800 Million in 2015, According to IDC” April 30, 2015.

Last month, Diane Bryant announced the creation of the Cloud for All Initiative, an effort to drive the creation of tens of thousands of new clouds across enterprise and provider data centers and deliver the efficiency and agility of hyperscale to the masses. This initiative took another major step forward today with the announcement of an investment and technology collaboration with Mirantis. This collaboration extends Intel’s existing engagement with Mirantis with a single goal in mind: delivery of OpenStack fully optimized for the enterprise, to spur broad adoption.


We hear a lot about OpenStack being ready for the enterprise, and in many cases OpenStack has already delivered incredible value to clouds running in enterprise data centers. However, when talking to the IT managers who have led these deployment efforts, a few key themes arise: it’s too complex, its features don’t easily support traditional enterprise applications, and it takes significant time to optimize for deployment. While IT organizations have benefited despite that added deployment effort, the industry can do better. This is why Intel is working with Mirantis to tune and optimize OpenStack features, and while this work extends from network infrastructure optimization to storage tuning and beyond, a few common themes run through it.



The first focus is on increasing stack resiliency for traditional enterprise application orchestration. Why is this important? While enterprises have begun to deploy cloud-native applications within their environments, business is still very much run on what we call “traditional” applications: those written without the notion that they would someday run in a cloud. These traditional applications require an increased level of reliability, uptime during rolling software upgrades and maintenance, and control of the underlying infrastructure across compute, storage, and network.


The second focus is on increasing stack performance through full optimization for Intel Architecture. Working closely with Mirantis will ensure that OpenStack is fully tuned to take advantage of platform telemetry and platform technologies such as Intel VT and Intel Cloud Integrity Technology to deliver improved performance and security capabilities.


The final focus is on improving full data center resource pool optimization with improvements targeted specifically at software defined storage and network resource pool integration. We’ll work to ensure that applications have full control of all the resources required while ensuring efficient resource utilization.


The fruits of the collaboration will be integrated into Mirantis’ distribution as well as offered as upstream contributions for the benefit of the entire community.  We also expect to utilize the OpenStack Innovation Center recently announced by Intel and Rackspace to test these features at scale to ensure that data centers of any size can benefit from this work.  Our ultimate goal is delivery of a choice of optimized solutions to the marketplace for use by enterprise and providers, and you can expect frequent updates on the progress from the Intel team as we move forward with this collaboration.

Today at IDF 2015, Sandra Rivera, Vice President and GM of Intel’s Network Platforms Group, disclosed the Intel® Network Builders Fast Track program in her joint keynote, “5G: Innovation from Client to Cloud.” The mission of the program is to accelerate and broaden the availability of proven commercial solutions through a combination of means such as equity investments, blueprint publications, performance optimizations, and multi-party interoperability testing via third-party labs.



This program was specifically designed to help address many of the biggest challenges that the industry faces today with one goal in mind - accelerate the network transformation to software defined networking (SDN) and network functions virtualization (NFV).


Thanks to the new Intel Network Builders Fast Track, Intel® Open Network Platform (ONP) is poised to have an even bigger impact in how we collaborate with end-users and supply chain partners to deliver proven SDN and NFV solutions together.


Intel ONP is a reference architecture that combines leading open source software and standards ingredients together on a quarterly release that can be used by developers to create optimized commercial solutions for SDN and NFV workloads and use cases.


The Intel Network Builders Fast Track combines market development activities, technical enabling, and equity investments to accelerate time to market (TTM) for Intel Network Builders partners, and Intel ONP amplifies this with a reference architecture. With Intel ONP, partners can get to market more quickly with solutions based on open, industry-leading building blocks that are optimized for industry-leading performance on Intel® Xeon® processor-based servers.


Intel ONP Release 1.4 includes, for example, the following software:


  • OpenStack* Kilo 2015.1 release with the following key feature enhancements:
    • Enhanced Platform Awareness (EPA) capabilities
    • Improved CPU pinning to virtual machines
    • I/O based Non-Uniform Memory Architecture (NUMA) aware scheduling
  • OpenDaylight* Helium-SR3
  • Open vSwitch* 2.3.90
  • Data Plane Development Kit release 1.8
  • Fedora* 21 release
  • Real-Time Linux* Kernel, patches release 3.14.36-rt34
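To illustrate how the Enhanced Platform Awareness features above surface to operators, the sketch below shows how a Nova flavor in OpenStack Kilo can request dedicated CPU pinning and NUMA-aware placement via flavor extra specs. This is a minimal example, not part of the ONP release itself; the flavor name and sizing are hypothetical.

```shell
# Create a hypothetical flavor: 4096 MB RAM, 20 GB disk, 4 vCPUs
nova flavor-create nfv.small auto 4096 20 4

# Pin each vCPU to a dedicated host core (no CPU oversubscription)
nova flavor-key nfv.small set hw:cpu_policy=dedicated

# Expose a single guest NUMA node so the scheduler co-locates
# vCPUs and memory on one host socket
nova flavor-key nfv.small set hw:numa_nodes=1
```

Instances booted with such a flavor are scheduled only onto hosts that can satisfy the pinning and NUMA constraints, which is what makes these EPA capabilities useful for latency-sensitive NFV workloads.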


We’ll be releasing ONP 1.5 in mid-September. However, there’s even more exciting news just beyond release 1.5.


Strategically aligned with OPNFV for Telecom


As previously announced this week at IDF, the Intel ONP 2.0 reference architecture scheduled for early next year will adopt and be fully aligned with the OPNFV Arno* software components released in June of this year. With well over 50 members, OPNFV is an industry-leading open source community committed to collaborating on a carrier-grade, integrated, open source platform to accelerate the introduction of new NFV solutions. Intel is a platinum member of OPNFV, dedicated to partnering within the community to address key barriers to adoption such as packet processing performance, service function chaining, service assurance, security, and high availability, to name a few. Intel ONP 2.0 will also deliver support for new products such as the Intel® Xeon® processor D, our latest SoC, as well as showcase new workloads such as Gi-LAN. This marks a major milestone for Intel: aligning ONP with OPNFV architecturally and contributing to the OPNFV program on a whole new level.


The impact of the Network Builders Fast Track will be significant. The combination of the Intel Network Builders Fast Track and the Intel ONP reference architecture will mean even faster time to market, broader industry interoperability, and market-leading commercial solutions to fuel SDN and NFV growth in the marketplace.


Whether you are a service provider or enterprise looking to deploy a new SDN solution, or a partner in the supply chain developing the next-generation solution for NFV, I encourage you to join us on this journey with both the Intel Network Builders Fast Track and Intel ONP as we transform the network together.
