
The Data Stack


In conversations with enterprise and cloud data center operators, hyper-converged infrastructure is a very hot topic. This new approach brings server, storage, and networking components together in an appliance designed for quicker installation and easier management. Some industry observers say hyper-converged systems are likely to play a significant role in meeting the scalability and deployment requirements of tomorrow’s data centers.

 

One view, for example, comes from IDC analyst Eric Sheppard: “As businesses embark on a transformation to become data-driven entities, they will demand a data infrastructure that supports extreme scalability and flexible acquisition patterns and offers unprecedented economies of scale. Hyperconverged systems hold the promise and the potential to assist buyers along this data-driven journey.”

 

Today, Intel is helping fuel the hyper-converged infrastructure trend with a line of new server products announced at this week’s VMworld 2015 U.S. conference in San Francisco. Intel® Server Products for Hyper-Converged Infrastructure are high-quality, unbranded, semi-integrated, configure-to-order server building blocks optimized for the hyper-converged infrastructure solutions that enterprise IT and cloud customers have requested.

 

These new offerings, which provide certified hardware for VMware EVO:RAIL* solutions, combine storage, networking, and compute in an all-in-one system to support homogeneous enterprise IT environments in a manner that reduces labor costs. OEMs and channel partners can now provide hyper-converged infrastructure solutions featuring Intel’s most innovative technologies, along with world-class validation, compatibility, certification, warranty, and support.

 

For OEMs and channel partners, these products pave a path to the rapidly growing and potentially lucrative market for hyper-converged solutions. Just how big of a market are we talking about? According to IDC, workload and geographic expansion will help push global hyper-converged systems revenue past the $800 million mark this year, up 116 percent over 2014. Intel® Server Products for Hyper-Converged Infrastructure also bring together key pieces of the infrastructure puzzle, including Intel’s most innovative technologies designed for hyper-converged enterprise workloads.

 

Intel® Server Products for Hyper-Converged Infrastructure include a 2U 4-Node chassis supporting up to 24 hot-swap hard disk drives, dual-socket compute modules offering dense performance and support for the Intel® Xeon® processor E5-2600 v3 product family, and eight high-speed NVMe* solid-state drives acting as cache to deliver high performance for VMware Virtual SAN* (VSAN*).

 

With all key server, storage, and networking components bundled together, OEMs and channel partners have what they need to accelerate the delivery of hyper-converged solutions that are easily tuned to the requirements of customer environments. Better still, they can provide their customers with the confidence that comes with Intel hardware that is fully validated and optimized for VMware EVO:RAIL and integrated into enterprise-class VSAN-certified solutions.

 

For a closer look at these groundbreaking new server products, visit the Intel hyper-converged infrastructure site.


1 IDC MarketScape: Worldwide Hyperconverged Systems 2014 Vendor Assessment. December 2014. Doc # 253267.

2 IDC news release. “Workload and Geographic Expansion Will Help Push Hyperconverged Systems Revenues Past $800 Million in 2015, According to IDC” April 30, 2015.

Last month, Diane Bryant announced the creation of the Cloud for All Initiative, an effort to drive the creation of tens of thousands of new clouds across enterprise and provider data centers and deliver the efficiency and agility of hyperscale to the masses. This initiative took another major step forward today with the announcement of an investment and technology collaboration with Mirantis. This collaboration extends Intel’s existing engagement with Mirantis with a single goal in mind: delivering OpenStack fully optimized for the enterprise to spur broad adoption.

 

We hear a lot about OpenStack being ready for the enterprise, and in many cases OpenStack has provided incredible value to clouds running in enterprise data centers today. However, when talking to the IT managers who have led these deployment efforts, a few key topics arise: it’s too complex, its features don’t easily support traditional enterprise applications, and it takes time to optimize for deployment. While IT organizations have benefited from that added deployment effort, the industry can do better. This is why Intel is working with Mirantis to optimize OpenStack features for the enterprise, and while this work extends from network infrastructure optimization to storage tuning and beyond, there are a few common themes.

 


The first focus is on increasing stack resiliency for traditional enterprise application orchestration. Why is this important? While enterprises have begun to deploy cloud-native applications within their environments, business is still very much run on what we call “traditional” applications, those written without the notion that they would someday run in a cloud. These traditional applications require increased levels of reliability, uptime during rolling software upgrades and maintenance, and control of the underlying compute, storage, and network infrastructure.

 

The second focus is on increasing stack performance through full optimization for Intel architecture. Working closely with Mirantis, we will ensure that OpenStack is fully tuned to take advantage of platform telemetry and platform technologies such as Intel® Virtualization Technology (Intel® VT) and Intel® Cloud Integrity Technology to deliver improved performance and security capabilities.

 

The final focus is on improving full data center resource pool optimization, with improvements targeted specifically at software-defined storage and network resource pool integration. We’ll work to ensure that applications have full control of the resources they require while maintaining efficient resource utilization.

 

The fruits of the collaboration will be integrated into Mirantis’ distribution as well as offered as upstream contributions for the benefit of the entire community.  We also expect to utilize the OpenStack Innovation Center recently announced by Intel and Rackspace to test these features at scale to ensure that data centers of any size can benefit from this work.  Our ultimate goal is delivery of a choice of optimized solutions to the marketplace for use by enterprise and providers, and you can expect frequent updates on the progress from the Intel team as we move forward with this collaboration.

Today at IDF 2015, Sandra Rivera, Vice President and GM of Intel’s Network Platforms Group, disclosed the Intel® Network Builders Fast Track program in her joint keynote, “5G: Innovation from Client to Cloud.” The mission of the program is to accelerate and broaden the availability of proven commercial solutions through a combination of means such as equity investments, blueprint publications, performance optimizations, and multi-party interoperability testing via third-party labs.

 

 

This program was specifically designed to help address many of the biggest challenges the industry faces today, with one goal in mind: accelerating the network transformation to software-defined networking (SDN) and network functions virtualization (NFV).

 

Thanks to the new Intel Network Builders Fast Track, the Intel® Open Network Platform (ONP) is poised to have an even bigger impact on how we collaborate with end users and supply chain partners to deliver proven SDN and NFV solutions together.

 

Intel ONP is a reference architecture that combines leading open source software and standards ingredients in a quarterly release that developers can use to create optimized commercial solutions for SDN and NFV workloads and use cases.

 

The Intel Network Builders Fast Track combines market development activities, technical enablement, and equity investments to accelerate time to market (TTM) for Intel Network Builders partners; Intel ONP then amplifies this with a reference architecture. With Intel ONP, partners can get to market more quickly with solutions based on open, industry-leading building blocks optimized for performance on Intel® Xeon® processor-based servers.

 

For example, Intel ONP Release 1.4 includes the following software:

 

  • OpenStack* Kilo 2015.1 release with the following key feature enhancements:
    • Enhanced Platform Awareness (EPA) capabilities
    • Improved CPU pinning to virtual machines (see the configuration sketch after this list)
    • I/O-based Non-Uniform Memory Access (NUMA)-aware scheduling
  • OpenDaylight* Helium-SR3
  • Open vSwitch* 2.3.90
  • Data Plane Development Kit release 1.8
  • Fedora* 21 release
  • Real-Time Linux* kernel patches, release 3.14.36-rt34
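
To make these Enhanced Platform Awareness features a bit more concrete, here is a minimal sketch, in Python, of how an operator might request dedicated CPU pinning and single-NUMA-node placement through Nova flavor extra specs. The flavor name and sizes are hypothetical; the hw:cpu_policy and hw:numa_nodes keys are the commonly documented ones, but verify them against your OpenStack release.

```python
# Hypothetical illustration of requesting the EPA features listed above
# (dedicated CPU pinning and single-NUMA-node placement) via Nova flavor
# extra specs. The flavor name and sizes are made up; the extra-spec keys
# are the commonly documented ones, but check your OpenStack release.

pinned_flavor = {
    "name": "nfv.pinned.small",   # hypothetical flavor name
    "vcpus": 4,
    "ram_mb": 8192,
    "disk_gb": 40,
    "extra_specs": {
        "hw:cpu_policy": "dedicated",  # pin each vCPU to a host pCPU
        "hw:numa_nodes": "1",          # keep vCPUs and memory on one NUMA node
    },
}

def flavor_commands(flavor):
    """Render the equivalent OpenStack CLI calls for this flavor definition."""
    yield ("openstack flavor create {name} --vcpus {vcpus} "
           "--ram {ram_mb} --disk {disk_gb}").format(**flavor)
    for key, value in flavor["extra_specs"].items():
        yield 'openstack flavor set {} --property "{}={}"'.format(
            flavor["name"], key, value)

if __name__ == "__main__":
    for cmd in flavor_commands(pinned_flavor):
        print(cmd)
```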

 

We’ll be releasing ONP 1.5 in mid-September. However, there’s even more exciting news just beyond release 1.5.

 

Strategically aligned with OPNFV for Telecom

 

As previously announced this week at IDF, the Intel ONP 2.0 reference architecture scheduled for early next year will adopt and be fully aligned with the OPNFV Arno* software components released in June this year. With well over 50 members, OPNFV is an industry-leading open source community committed to collaborating on a carrier-grade, integrated, open source platform to accelerate the introduction of new NFV solutions. Intel is a platinum member of OPNFV, dedicated to partnering within the community to solve real challenges in key barriers to adoption such as packet processing performance, service function chaining, service assurance, security, and high availability, to name just a few. Intel ONP 2.0 will also deliver support for new products such as the Intel® Xeon® processor D, our latest SoC, and showcase new workloads such as Gi-LAN. This marks a major milestone for Intel: aligning ONP with OPNFV architecturally and contributing to the OPNFV program on a whole new level.

 

The impact of the Intel Network Builders Fast Track will be significant. The combination of the Intel Network Builders Fast Track and the Intel ONP reference architecture will mean even faster time to market, broader industry interoperability, and market-leading commercial solutions to fuel SDN and NFV growth in the marketplace.

 

Whether you are a service provider or an enterprise looking to deploy a new SDN solution, or a partner in the supply chain developing the next-generation solution for NFV, I encourage you to join us on this journey with both the Intel Network Builders Fast Track and Intel ONP as we transform the network together.

City-wide traffic visualization. Global shipping data. Airplane traffic patterns. Worldwide Facebook* connections. A stunning video highlighting the current deluge of data as both the world’s most abundant and most underutilized asset kicked off Doug Davis (SVP and GM, Internet of Things Group) and Diane Bryant’s (SVP and GM, Data Center Group) mega session on IoT and Big Data Insights at IDF. They spent their session time highlighting how vital it is that we enable the easy extraction of information from data, as that will allow for disruption across a variety of industries including transportation, energy, retail, agriculture, and healthcare.

 

Takeaway #1: Data doesn’t just disrupt the digital world

 

Even industries that have been around for thousands of years, like agriculture, are ripe for cutting-edge technology transformation. Jesse Vollmar, the co-founder and CEO of FarmLogs, joined Diane and Doug to talk about using sensor networks and agricultural robots to make it easier for farmers to make land more productive. By collecting data on everything from fertilization to pesticides to weed control, sensors are generating massive amounts of data that help farmers make better decisions about their crops.

 

[Photo: Jesse Vollmar from FarmLogs]

 

Takeaway #2: The edge is full of new innovation opportunity. Even Beyoncé is in play.

 

Edge analytics may seem daunting to traditional enterprises with little experience in BI. To show ease of implementation, Doug brought out a team of Intel interns who, in three weeks, programmed industrial robots to respond to gesture control via Intel® RealSense™ technology. The robots danced to popular tunes while an on-stage intern controlled their movements. Nothing like hearing a little “Single Ladies” at IDF. To help get started, the Intel® IoT Developer Program has expanded to include commercial solutions, enabling a fast, flexible, and scalable path to IoT edge analytics.

 

[Photo: Intel intern and a gesture-controlled robot]

 

So what do we need to develop in IoT to see an impact across a full range of industries? We need more sensors and more robots that are connected to each other and to the cloud. Think about what we could accomplish if a robot were connected to a cloud of Intel® Xeon® processors as its brain. The goal is to enable robots that are smart and connected, that gather information about their surroundings, and that have access to databases and predictive analytics, all resulting in fluid, natural interaction with the world. To get to this future vision, we need increased computing power, better data analytics, and more security.

 

In a world where extracting information from data is where the value lies, the data center becomes the brains behind IoT devices. According to Diane, the number one barrier to enterprise data analytics is making sense of the data. Solutions need to be usable by existing IT talent, allow for rapid customization, and enable an accelerated pace of innovation.

 

Takeaway #3: You may have a mountain of data, but you need to extract the gold through analytics

 

Diane brought out Dennis Weng from JD.com to discuss how the company used Streaming SQL on an Intel® Xeon® processor based platform to develop streaming analytics for customers based on browsing and purchase history. They’re handling 100 million customers and 4 million categories of products. The company reduced their TCO and development now takes hours instead of weeks.
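
JD.com’s actual Streaming SQL pipeline isn’t described here in detail, so as a rough illustration of the kind of continuous, sliding-window computation involved, here is a minimal Python sketch that tracks a customer’s most-viewed product category over a recent time window. The event fields, window length, and example data are all assumptions.

```python
# Minimal sketch of a sliding-window browsing aggregation, the kind of
# computation a streaming analytics pipeline performs continuously.
# Event fields, window length, and sample data are illustrative assumptions,
# not JD.com's actual implementation.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 30 * 60  # look at the last 30 minutes of activity

# customer_id -> deque of (timestamp, category) events inside the window
recent_views = defaultdict(deque)

def observe(customer_id, category, ts=None):
    """Record a browsing event and return the customer's current top category."""
    ts = ts or time.time()
    events = recent_views[customer_id]
    events.append((ts, category))
    # Evict events that have fallen out of the window.
    while events and events[0][0] < ts - WINDOW_SECONDS:
        events.popleft()
    counts = defaultdict(int)
    for _, cat in events:
        counts[cat] += 1
    return max(counts, key=counts.get)

if __name__ == "__main__":
    for cat in ["laptops", "laptops", "phones", "laptops"]:
        top = observe("customer-42", cat)
    print("Recommend more:", top)
```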

 

According to Owen Zhang, the top-ranked data scientist on Kaggle*, the ideal analytics platform will feature easy customization with access to different kinds of data, have an intuitive interface, and run at scale. Intel is committed to reaching that goal: Diane announced the release of Discovery Peak, an open-source, standards-based platform that is easy to use and highly customizable.

 

[Photo: Owen Zhang, a data scientist superhero]

 

Takeaway #4: Analytics isn’t just about software. Hardware innovation is critical

 

Another revolutionary innovation supporting in-memory database computing is Intel® 3D XPoint™ technology. First coming to SSDs in 2016, this new class of memory will also make its way to a future Intel® Xeon® processor-based platform in the form of DIMMs. Representing the first time non-volatile memory will be used in main memory, 3D XPoint technology will offer up to a 4x increase in memory capacity (up to 6 TB of data on a two-socket system) at a significantly lower cost per GB than DRAM.

[Photo: A giant Intel® 3D XPoint™ technology grid in the showcase]

 

Takeaway #5: Sometimes technology has the promise to change the world

 

And finally, Eric Dishman (Intel Fellow and GM of Health & Life Sciences) and Dr. Brian Druker from Oregon Health and Science University joined Diane and Doug for a deep dive into the future of analytics and healthcare. Governments around the world are working to improve the cost of, quality of, and access to healthcare for all. The goal is precision medicine: distributed and personalized care for each individual, or “All in a Day” medicine by 2020. We’ve been working toward that goal with OHSU and other organizations for a number of years and just announced another large step forward.

 

[Photo: Dr. Brian Druker from Oregon Health and Science University]

 

The Collaborative Cancer Cloud is a precision medicine analytics platform that allows institutions to securely share patient genomic, imaging, and clinical data for potentially lifesaving discoveries. It will enable large amounts of data from sites all around the world to be analyzed in a distributed way, while preserving the privacy and security of that patient data at each site.

 

The data analytics opportunities across markets and industries are endless. What will you take away from your data?

In a series of earlier posts, we took a trip down the road to software-defined infrastructure (SDI). Now that we have established an understanding of SDI and where it is today, it’s a good time to talk about the workloads that will run on the SDI foundation. This is where SDI demonstrates its true value.

 

Much of this post assumes that your code is developed to be cloud-aware (and that you understand what changes that requires). Cloud-aware apps know what they need to do to fully leverage the automation and orchestration capabilities of an SDI platform. They are written to expand and contract automatically and to maintain optimal levels of performance, availability, and efficiency. (If you want some additional discussion around cloud-aware, just let me know. It’s another topic that’s close to my heart.)
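
As a rough illustration of what “expand and contract automatically” can mean in practice, here is a minimal, framework-agnostic Python sketch of the scaling decision a cloud-aware service (or the orchestrator acting on its behalf) makes continuously. The metric, thresholds, and scaling limits are hypothetical placeholders, not a prescription.

```python
# Minimal sketch of the scale-out/scale-in decision a cloud-aware service
# (or its orchestrator) makes continuously. The metric, thresholds, and the
# instance limits are hypothetical placeholders for a real SDI/cloud API.

def desired_instances(current, queue_depth_per_instance,
                      scale_out_at=100, scale_in_at=20,
                      min_instances=2, max_instances=32):
    """Return how many instances the service should be running right now."""
    if queue_depth_per_instance > scale_out_at and current < max_instances:
        return current + 1          # falling behind: add capacity
    if queue_depth_per_instance < scale_in_at and current > min_instances:
        return current - 1          # over-provisioned: release capacity
    return current                  # within the comfort band: do nothing

if __name__ == "__main__":
    instances, queued = 4, 520      # 520 queued requests across 4 instances
    target = desired_instances(instances, queued / instances)
    print("running:", instances, "-> target:", target)
```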

 

With cloud-aware taken care of, one key workload targeting the SDI landing zone is business analytics, which is getting a lot of press today as it rises in importance to the enterprise. Analytics is the vehicle for turning mountains of raw data into meaningful business insights. It takes you from transactions to trends, from customer complaints to sentiment analysis, and from millions of rows of log data to hackers’ intent.

 

Analytics, of course, is not new. Virtually all IT shops have leveraged some form of analytics for years, from simple reporting presented in Excel spreadsheets to more complex data analysis and visualization. What is new is a whole set of technologies that allow for doing things differently, using new data and merging these capabilities. For example, we now have tools and environments, such as Hadoop, that make it possible to bring together structured and unstructured data in an automated manner, something that used to be very difficult to do. Over the next few blogs, I will talk about how analytics is changing and how companies might progress through the analytics world in a stepwise manner. For now, let’s begin with the current state of analytics.
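
As one small, concrete example of that kind of automated processing, here is a hedged sketch of a Hadoop Streaming-style mapper and reducer in Python that turns unstructured customer comments into structured (product, keyword, count) rows, which could then be joined with conventional structured data. The input layout and keyword list are assumptions for illustration.

```python
#!/usr/bin/env python
# Hadoop Streaming-style mapper + reducer that turns unstructured comment text
# into structured (product, keyword) counts. Run standalone for illustration,
# or wire mapper()/reducer() into a hadoop-streaming job.
# The input layout ("product_id<TAB>free text") and keyword list are assumptions.
from itertools import groupby

KEYWORDS = {"broken", "great", "slow", "refund", "love"}

def mapper(lines):
    for line in lines:
        product, _, text = line.rstrip("\n").partition("\t")
        for word in text.lower().split():
            word = word.strip(".,!?")
            if word in KEYWORDS:
                yield "%s\t%s\t1" % (product, word)

def reducer(lines):
    keyed = (line.rstrip("\n").rsplit("\t", 1) for line in lines)
    for key, group in groupby(keyed, key=lambda kv: kv[0]):
        yield "%s\t%d" % (key, sum(int(count) for _, count in group))

if __name__ == "__main__":
    sample = ["P100\tScreen arrived broken, want a refund",
              "P100\tBroken again. Broken!",
              "P200\tLove it, great battery"]
    # In a real job the shuffle phase sorts mapper output between these steps.
    for row in reducer(sorted(mapper(sample))):
        print(row)
```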

 

Today, most organizations have a business intelligence environment. Typically, this is a very reactive and very batch-dependent environment. In a common progression, organizations move data from online data sources into a data warehouse through various transformations, and then they run reports or create cubes to determine impact.

 

In these environments, latency between the initial event and actual action tends to be very high. By the time data is extracted, transformed, loaded, and analyzed, its relevance has decreased while the associated costs continue to rise. In general, there is the cost of holding data and the cost of converting that data from data store to data warehouse, and these can be very high. It should be no surprise, then, that decisions on how far back you can go and how much additional data you can use are often made based on the cost of the environment, rather than the value to the business.

 

The future of this environment is that new foundation technologies, such as Hadoop, in-memory databases, NoSQL and graph databases, advanced algorithms, and machine learning, will change the landscape of analytics dramatically.

 

These advances, which are now well under way, will allow us to get to a world in which analytics and orchestration tools do a lot of the hard work for us. When an event happens, the analytics environment will determine what actions would best handle the issue and optimize the outcome. It will also trigger the change and let someone know why something changed, all automatically and without human intervention.

 

While this might be scary for some, it is rapidly becoming a capability that can be leveraged. It is in use today on trading floors, for example, to determine if events are illegal or to trigger specific trades. The financial industry is where much of the innovation around these items is taking place.

 

It is only a matter of time before most companies figure out how to take advantage of these same fundamental technologies to change their businesses.

 

Another item to keep in mind: as organizations make greater use of analytics, visualization will become even more important. Why? Because a simple spreadsheet and graph will not be able to explain what is happening in a way humans can understand. This is where we start to see the inclusion of the modeling and simulation techniques that have existed in high-performance computing for years. These visualizations will help companies pick that needle out of a data haystack in a way that helps them optimize profits, go after new business, and win new customers.

 

In follow-on posts, I will explore the path forward in the journey to the widespread use of analytics in an SDI environment. This is a path that moves first from reactive to predictive analytics, and then from the predictive to the prescriptive.  I will also explore a great use case—security—for organizations getting started with analytics.

Last year I read an article in which Hadoop co-developers Mike Cafarella and Doug Cutting explained how they originally set out to build an open-source search engine. They saw it as serving a specific need to process massive amounts of data from the Internet, and they were surprised to find so much pent-up demand for this kind of computing across all businesses. The article suggested it was a happy coincidence.

 

I see it more as a happy intersection of computing, storage and networking technology with business needs to use a growing supply of data more efficiently. Most of us know that Intel has developed much of the hardware technology that enables what we've come to call Big Data, but Intel is working hard to make the algorithms that run on Intel systems as efficient as possible, too. My colleague Pradeep Dubey presented a session this week at the Intel Developer Forum in San Francisco on how developers can take advantage of optimized data analytics and machine learning algorithms on Intel® Architecture-based data center platforms. In this blog I thought I would back up a bit and explain how this came about and why it's so important.

 

The explosion of data available on the Internet has driven market needs for new ways to collect, process, and analyze it. In the past, companies mostly processed the data they created in the course of doing business. That data could be massive. For example, in 2012 it was estimated that Walmart collected data from more than one million customer transactions per hour. But it was mostly structured data that is relatively well behaved. Today the Internet offers up enormous amounts of mostly unstructured data, and the Internet of Things promises yet another surge. What businesses seek now goes beyond business intelligence. They seek business insight, which is intelligence applied.

 

What makes the new data different isn’t just that there’s so much of it, but that an estimated 80 percent of it is unstructured: text, images, and audio that defy confinement to the rows and columns of traditional databases. It also defies attempts to tame it with traditional analytics because it needs to be interpreted before it can be used in predictive algorithms. Humans just can’t process data efficiently or consistently enough to analyze all this unstructured data, so the burden of extracting meaning from it lands on the computers in the data center.

 

First, let's understand this burden a little deeper. A key element of the approach I described above is machine learning. We ask the machine to actually learn from the data, to develop models that represent this learning, and to use the models to make predictions or decisions. There are many machine learning techniques that enable this, but they all have two things in common: They require a lot of computing horsepower and they are complex for the programmer to implement in a way that uses data center resources efficiently. So our approach at Intel is two-fold:

 

  • Optimize the Intel® Xeon® processor and the Intel® Xeon Phi™ coprocessor hardware to handle the key parts of machine learning algorithms very efficiently.

  • Make these optimizations readily available to developers through libraries and applications that take advantage of the capabilities of the hardware using standard programming languages and familiar programming models.

 

Intel Xeon Phi enhances parallelism and provides a specialized instruction set to implement key data analytics functions in the hardware. To access those capabilities, we provide an array of supporting software, such as the Intel® Data Analytics Acceleration Library, a set of optimized building blocks that can be used in all stages of the data analytics workflow; the Intel® Math Kernel Library, math processing routines that increase application performance on Intel Xeon processors and reduce development time; and the Intel Analytics Toolkit for Apache Hadoop, which lets data scientists focus on analytics instead of mastering the details of programming for Hadoop and myriad open source tools.
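
To make the “optimized building blocks” idea concrete without reproducing the Intel-specific APIs, here is a minimal sketch in Python. It fits a ridge-regression model with NumPy; the dense matrix products and solve it performs are exactly the kinds of BLAS/LAPACK routines that the Intel Math Kernel Library accelerates when NumPy is built against it. The data is synthetic and the model is only illustrative.

```python
# Minimal sketch: a closed-form ridge regression fit using NumPy.
# The matrix products and the solve below are the dense linear-algebra
# kernels that libraries such as Intel MKL accelerate when NumPy is built
# against them; the data is synthetic and the model is only illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 10_000, 200

X = rng.standard_normal((n_samples, n_features))
true_w = rng.standard_normal(n_features)
y = X @ true_w + 0.1 * rng.standard_normal(n_samples)

lam = 1.0  # regularization strength
# w = (X^T X + lam * I)^-1 X^T y  -- dominated by BLAS/LAPACK calls
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

print("max coefficient error:", float(np.max(np.abs(w - true_w))))
```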

 

Furthermore, like the developers of Hadoop itself, we believe it's important to foster a community around data analytics tools that engages experts from all quarters to make them better and easier to use. We think distributing these tools freely makes them more accessible and speeds progress across the whole field, so we rely on the open source model to empower the data analytics ecosystem we are creating around Intel Xeon systems. That's not new for Intel; we're already a top contributor to open source programs like Linux and Spark, and to Hadoop through our partnership with Cloudera. And that is definitely not a coincidence. Intel recognizes that open source brings talent and investment together to create solutions that people can build on rather than a bunch of competing solutions that diffuse the efforts of developers. Cafarella says it's what made Hadoop so successful—and it's the best way we've found to make Intel customers successful, too.

Preventing data loss in servers has been an objective since the invention of the database. A tiny software or hardware glitch that causes a power interruption can result in lost data, potentially interrupting services and, in the worst case, costing millions of dollars. So database developers have been searching for ways to achieve high transaction throughput and persistent in-memory data.

 

The industry took a tentative step with power-protected volatile DIMMs. In the event of a server power failure, the power-protected DIMM activates its own small power supply, enabling it to flush volatile data to non-volatile media. This feature, referred to as Asynchronous DRAM Refresh (ADR), is limited and quite proprietary. Nevertheless, the power-protected DIMM gave architects a concrete device around which to consider improvements to a persistent memory software model.

 

To build the software model, the Storage Networking Industry Association (SNIA) assembled some of the best minds in the storage and memory industries into a working group. Starting in 2012, they developed ideas for how applications and operating systems could ensure that in-memory data was persistent on the server. They considered not only the power-protected DIMMs, but also how emerging technologies, such as resistive RAM memories, could fit into the model. Approved and published in 2013, the SNIA Persistent Memory Programming Model 1.0 became the first open architecture that allowed application developers to begin broad server enabling for persistent memory.

 


 

NVM.PM.VOLUME and NVM.PM.FILE mode examples

This graphic from the Storage Networking Industry Association shows examples of the programming model for a new generation of non-volatile memory.

 

Further impetus to program to the model emerged in late July 2015, when Intel and Micron announced that they had started production on a new class of non-volatile memory, the first new memory category in more than 25 years. Introduced as 3D XPoint™ technology, this new class of NVM has the potential to revolutionize database, big data, high-performance computing, virtualization, storage, cloud, gaming, and many other applications.

 

3D XPoint (pronounced “three-D-cross-point”) promises non-volatile memory speeds up to 1,000 times faster than NAND, today’s most popular non-volatile memory. It accomplishes this performance feat by putting large amounts of quickly accessible data close to the processor, where it can be accessed at speeds previously impossible for non-volatile storage.

 

The new 3D XPoint technology is the foundation for Intel DIMMs, announced at the Intel Developer Forum in August. These DIMMs will deliver up to 4X the system memory capacity of today’s servers, at a much more affordable price than DRAM. The result will be NVM DIMMs that can be widely adopted.

 

Of course, technology alone doesn’t deliver benefits to end users. Applications have to be written to take advantage of this disruptive technology. Building off the SNIA persistent memory programming model, open source developers have converted Linux file systems to be persistent memory aware and integrated those new capabilities into the Linux 4.0 upstream kernel.

 

Adding to the enabling effort, Intel and open source developers have been creating a Non-Volatile Memory Library (NVML) for Linux software developers. NVML enables developers to accelerate application development for persistent memory, based on the open SNIA persistent memory programming model.
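
NVML itself is a C library, so the sketch below only illustrates, in Python, the memory-mapped idiom behind the SNIA NVM.PM.FILE mode: map a file into the address space, update it with ordinary loads and stores, then explicitly flush the range so the update is durable. On real persistent-memory hardware, NVML replaces the page-cache flush with optimized cache-flush instructions. The file name and record layout here are arbitrary.

```python
# Illustration (in Python) of the memory-mapped idiom behind the SNIA
# NVM.PM.FILE mode: load/store into a mapped region, then explicitly flush
# the range so the data is persistent. The path and record layout are
# arbitrary; NVML's C APIs provide the optimized equivalent on real
# persistent-memory hardware.
import mmap
import os
import struct

PATH, SIZE = "counter.pmem", 4096

# Create (or reuse) the backing file, sized to one page.
fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as pm:
    # "Load" the current value directly from the mapped region...
    (count,) = struct.unpack_from("<Q", pm, 0)
    # ...update it with an ordinary in-memory store...
    struct.pack_into("<Q", pm, 0, count + 1)
    # ...and make the update durable before considering it committed.
    pm.flush(0, SIZE)

os.close(fd)
print("persistent counter is now", count + 1)
```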

 

It’s safe to say that developers will find this open source library to be extremely valuable. It hides a lot of the programming complexity and management details that can slow the development process, while optimizing instructions for better performance.

 

The five libraries in the NVML set will enable a wide range of developers to capitalize on 3D XPoint technology—and push applications into an all-new performance dimension.

 

Here’s the bottom line: 3D XPoint technology is coming soon to an Intel data center platform near you. If you’re a software developer, now is a good time to get up to speed on this new technology. With that thought in mind, here are a few steps you can take to prepare yourself for the coming performance revolution brought by a breakthrough technology.

 

Learn about the Persistent Memory programming model.

 

Read the documents and code supporting ACPI 6.0 and Linux NFIT drivers:

http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf

https://git.kernel.org/cgit/linux/kernel/git/djbw/nvdimm.git/log/?h=nd

https://github.com/pmem/ndctl

http://pmem.io/documents/

https://github.com/01org/prd

 

Learn about the non-volatile memory library (NVML) and subscribe to the mailing list.

 

Explore the Intel Architecture Instruction Set Extensions Programming Reference.

 

And if your application needs access to a large tier of memory but doesn’t need data persistence in memory, there’s an NVM library there for you too.

 

We’ll discuss more on big data, Java, and 3D XPoint™ in a future blog post.

 

1 Performance difference based on comparison between 3D XPoint technology and other industry NAND.

Here’s an interesting disconnect: 84 percent of C-suite executives believe that the Internet of Things (IoT) will create new sources of revenue. However, only 7 percent have committed to an IoT investment.[1] Why the gap between belief and action? Perhaps it’s because of the number of zeroes. Welcome to the world of overwhelming numbers: billions of things connecting to millions of sensors with 1.6 trillion dollars at stake.[2] What does a billion look or feel like, much less a trillion? If you’re like me, it’s difficult to relate to such large-scale numbers. So it’s not surprising that many companies are taking a wait-and-see approach. They will wait for the dust to settle, and for the numbers to become less abstract, before taking action.

Analysts make some big claims, and it can feel like IoT promises the world. But many businesses both large and small aren’t ready to invest in a brand new world, even if they believe that IoT can deliver on its promise. However, the same businesses that are wary of large promises could use connected things today to make small changes that might significantly impact profitability. For example, changes in the way your users conduct meetings could dramatically improve efficiency. Imagine a routine meeting that is assisted by fully connected sensors, apps, and devices. These connected things, forming a simple IoT solution, could anticipate your needs and do simple things for you to save time. They could reserve the conference room, dim the lights, adjust the temperature, and send notes to meeting attendees.

That’s why we here at Intel are so excited to partner with Citrix Octoblu. Designed with the mission to connect anything to everything, Octoblu offers a way for your business to take advantage of IoT today, even before all your things are connected. Octoblu provides software and APIs that automate interactions across smart devices, wearables, sensors, and many other things. Intel brings Intel IoT Gateways to that mix, which are pretested and optimized hardware platforms built specifically with IoT security in mind. The proven and trusted Intel reputation in the hardware industry, combined with Octoblu, a noted pioneer in IoT, can help address concerns about security and complexity as companies look at the possibilities for connected things.

IoT is shaping up to be more than just hype. Check out a new infographic that shows small, practical ways you can benefit from IoT today. Or read the Solution Brief to learn more about how the Intel and Citrix partnership can help you navigate the uncharted territory surrounding IoT.

[1] Accenture survey. “CEO Briefing 2015: From Productivity to Outcomes. Using the Internet of Things to Drive Future Business Strategies.” 2015. Written in collaboration with The Economist Intelligence Unit (EIU). https://www.accenture.com/t20150708T060455__w__/ke-en/_acnmedia/Accenture/Conversion-Assets/DotCom/Documents/Global/PDF/Dualpub_7/Accenture-CEO-Briefing-2015-Productivity-Outcomes-Internet-Things.pdf.

[2] McKinsey Global Institute. “Unlocking the Potential of the Internet of Things.” June 2015. http://www.mckinsey.com/insights/business_technology/the_internet_of_things_the_value_of_digitizing_the_physical_world.

Historically, platform-embedded firmware has limited the ways system-builders can customize, innovate, and differentiate their offerings. Today, Intel is streamlining the route for implementing new features with the creation of an “open engine” that lets system-builders run firmware of their own creation or choosing.

 

This important advance in platform architecture is known as the Innovation Engine. It was introduced this week at the Intel Developer Forum in San Francisco.

 

The Innovation Engine is a small Intel® architecture processor and I/O sub-system that will be embedded into future Intel data center platforms. The Innovation Engine enables system builders to create their own unique, differentiating firmware for server, storage, and networking markets. 

 

Some possible uses include hosting lightweight manageability features in order to reduce overall system cost, improving server performance by offloading BIOS and BMC routines, or augmenting the Intel® Management Engine for such things as telemetry and trusted boot.

 

These are just a few of the countless possibilities for the use of this new path into the heart of Intel processors. Truthfully, the uses for the Innovation Engine are limited only by the feature’s capability framework and the developer’s imagination.

 

It’s worth noting that the Innovation Engine is reserved for system-builders’ code, not Intel firmware. Intel supplies only the hardware, and the system-builder can tailor things from there. As for security, the Innovation Engine code is cryptographically bound to the system-builder: code not authenticated by the system-builder will not load.

 

As the name suggests, the Innovation Engine will drive a lot of great benefits for OEMs and, ultimately, end users. This embedded core in future Intel processors will foster creativity, innovation, and differentiation, while creating a simplified path for system-builders implementing new features and enabling full customer visibility into code and engine behavior.

 

Ultimately, this upcoming enhancement in Intel data center platforms is all about using Intel technology advancements to drive widespread innovation in the data center ecosystem.

 

Have thoughts you’d like to share? Pass them along on Twitter via @IntelITCenter. You can also listen to our IDF podcasts for more on the Innovation Engine.


 

Close your eyes and try to count the number of times you’ve connected with computing today.  Hard to do? We have all witnessed this fundamental change: Computing has moved from a productivity tool to an essential part of ourselves, something that shapes the way we live, work, and engage in community.

 

Now, imagine how many times today you’ve thought about the network connectivity making all of these experiences possible.  Unless you’re like me, someone who is professionally invested in network innovation, the answer is probably close to zero.  But all of those essential experiences delivered every day to all of us would not exist without an amazing array of networking technologies working in concert.

 

In this light, the network is everything you can’t see, but you can’t live without.  And without serious innovation of the network, all of the amazing computing innovations expected in the next few years simply won’t be able to be experienced in the way they were intended.

 

At the Intel Developer Forum today, I had the pleasure of sharing the stage with my colleague Aicha Evans and industry leaders from SK Telecom, Verizon, and Ericsson, as we shared Intel’s vision for 5G networks from device to data center.  In this post, I’d like to share a few imperatives to deliver the agile and performant networks required to fuel the next wave of innovation.  IDF was the perfect place to share this message given that it all starts with the power of community: developers from across the industry working together to deliver impactful change.

 

So what’s changing? Think about the connectivity problems we experience today: dropped calls, constant buffering of streaming video, or downloading delays. Imagine if not only those problems disappeared, but new immersive experiences like 3D virtual reality gaming, real-time telemedicine, and augmented reality became pervasive in our everyday lives. With 5G, we believe they will.

 

5G is, of course, the next major upgrade to cellular connectivity. It represents improved performance and, even more importantly, massive increases in the intelligence and flexibility of the network. One innovation in this area is Mobile Edge Computing (MEC). To picture the mobile edge, imagine cell tower base stations embedded with cloud-computing-based intelligence, or “cloudlets,” creating the opportunity for network operators to deliver high-performance, low-latency services like the ones I shared above.

 

As networks become more intelligent, the services that run on them become more intelligent too. MEC will provide the computing power to also deliver Service Aware Networks, which will dynamically process and prioritize traffic based on service type and application. As a result, operators gain more control, developers are more easily able to innovate new personalized services, and users gain higher quality of experience.

 

Another exciting innovation is Anchor-Booster technology, which takes advantage of the principles of Software Defined Networking (SDN). It allows devices to take better advantage of spectrum like millimeter wave to boost network throughput by 10X or more.

 

These technologies may seem futuristic, but Intel has already been working with the industry for several years to use cloud technology to reinvent the network, much as it reinvented the data center. We call this network transformation, and it represents the move from fixed-function, purpose-built network infrastructure to adaptable networks based on Network Functions Virtualization (NFV) and SDN. Within this model, network functions reside within virtual machines or software containers, managed by centralized controllers and orchestrators, and dynamically provisioned to meet the needs of the network. The change that this represents to the communication service provider industry is massive. NFV and SDN are dramatically changing the rate of innovation in communications networking and creating enormous opportunities for the industry to deliver new services at cloud pace.

 

Our work is well underway.  As a key pillar of our efforts, we established the Intel Network Builders program two years ago at IDF, and since its inception it has grown to over 170 industry leaders, including strategic end users, working together towards solution optimization, trials, and dozens of early commercial deployments.

 

And today, I was excited to announce the next step towards network transformation with the Intel® Network Builders Fast Track, a new investment and collaboration initiative to ignite solution stack delivery, integrate proven solutions through blueprint publications, and optimize solution performance and interoperability through new third party labs and interoperability centers. These programs were specifically designed to address the most critical challenges facing broad deployment of virtualized network solutions and are already being met with enthusiasm and engagement by our Intel Network Builders members, helping us all towards delivery of a host of new solutions for the market.  If you’re engaged in the networking arena as a developer of solutions or a provider, I encourage you to engage with us as we transform the network together.

 

Imagine: No more dropped calls, no more buffered video. Just essential experiences delivered in the manner intended, and exciting new experiences to further enrich the way we live and work. The delivery of this invisible imperative just became much clearer.

Russell L. Ackoff, the pioneer in operations research, said "A system is more than the sum of its parts … It loses its essential properties when it is taken apart." That also suggests the system doesn't exist and its essential properties cannot be observed until it is put together. This is increasingly important as communications service providers and network equipment vendors operationalize network functions virtualization (NFV) and software defined networking (SDN).

 

To date, most NFV efforts have focused on accelerating the parts – both the speed of development and net performance. OpenFlow, for example, defines communication and functionality between the control plane and the equipment that actually does the packet forwarding, and much of the initial effort has been to connect vendor A's controller to vendor B's router and to achieve pairwise interoperability between point solutions. Intel has been a key enabler of that through the Intel® Network Builders program. We've grown an ecosystem of more than 170 contributing companies developing dozens of Intel® Architecture-based network solutions.

 

But the desired vision of NFV—and what service providers tell us they need and want—is to be able to quickly assemble new systems offering new services from best-of-breed components and to enhance existing services quickly by incorporating optimized functions from a number of providers. To do that the parts must plug and play when combined into systems. That means proven integration and solution interoperability across stack layers and across networks. So that's what we're taking on next with the recently announced Intel® Network Builders Fast Track.

 

Intel Network Builders Fast Track builds on the progress we've already made with Intel® Network Builders. It's a natural evolution for us to take with industry partners and leading service providers to move NFV closer to fulfilling its promise. Through Intel market development activities and investments we will accelerate interoperability, quicken adoption of standards-based technologies using Intel® architecture, and drive the availability of integrated, ready-to-deploy solutions.

 

Specifically, through the Intel Network Builders Fast Track we will facilitate:

 

  • Solution stack optimization—we will invest in ecosystem collaborations to optimize solution stacks and further optimize the Open Network Platform reference architecture – a toolkit to enable solutions across the industry. We are also establishing Intel® Network Builders University to drive technical education for the broader Intel Network Builders community, and deeper collaboration on performance tuning and optimizations for Intel Network Builders members.
  • Proven integration—we will publish Solution Blueprints on top use cases targeted for carrier grade deployments, and we'll deepen our collaboration with key system integrators to deliver integrated and optimized solutions.
  • Solution interoperability—we will collaborate to establish third party compliance, performance tuning, plugfests and hackathons for use by Intel Network Builders members through new and existing innovation centers.

 

The concepts of SDN and NFV emerged when the communications industry saw what the IT industry was achieving with cloud computing—interoperable agile services, capacity on demand, and rapid time to market based on industry-standard servers and software.

 

At Intel, we played a key role in achieving the promise of cloud computing—not just with our product offerings, but with market development and our contributions to the open-source programs that have unified the industry behind a common set of interoperable tools and services. With Intel Network Builders Fast Track, we're bringing that experience and commitment to the communication industry, so we can achieve the solutions the industry needs faster and with less risk.

 

The transformed communications systems NFV can enable will flex with the service providers' businesses and customers' needs in a way Russell L. Ackoff couldn't have foreseen. And it will make them more competitive and better able to provide communications solutions to power the Internet era. We'll achieve that co-operatively, as an industry. That's what the Intel Network Builders Fast Track is designed to do.

By Gary Lee, Ethernet Switch Product Manager at Intel

 

 

Intel’s 100GbE multi-host controller silicon, code-named Red Rock Canyon, has been on a worldwide demo tour, with stops at four major industry events in Europe and the U.S. since it was disclosed one year ago at Intel Developer Forum (IDF) San Francisco 2014.

 

And the tour continues at IDF San Francisco 2015 this week, with presentations and live demos in five customer booths.

 

Red Rock Canyon is designed to provide low-latency PCIe 3.0 connections into Intel Xeon® processors in Intel® Rack Scale Architecture applications or Ethernet-based connections into Intel Atom™ processors for dense microserver deployments. The product is also ideal for high-performance Network Functions Virtualization (NFV) applications, providing flexible high-bandwidth connections between Intel Xeon processors and the network.

 

Here’s where you can see Red Rock Canyon in action at IDF 2015:

 

Quanta and EMC at the Intel booth: This demo will use a software development platform from Quanta showing Intel Rack Scale Architecture software asset discovery and assembly using OpenStack. Also on display in this rack will be a 100GbE performance demo and a software-defined storage application.

Intel Rack Scale Architecture and PCN at the Intel booth: This demo will use a software development platform from Quanta demonstrating OpenDaylight and big data (BlueData) running on an Intel Rack Scale Architecture system.

 

At the Huawei Booth: Huawei will show its Intel Rack Scale Architecture-based system based on its X6800 server shelf, which includes Red Rock Canyon.

 

At the Inspur Booth: Inspur will show its new Intel Rack Scale Architecture platform, which will include a live demo of Red Rock Canyon running data center reachability protocol (DCRP), including auto-configuration, multi-path and failover.

 

At the Advantech Booth: Advantech will show its new FWA-6500R 2U network application platform, based on Intel Xeon processors, which uses a multi-host Red Rock Canyon-based switch module to connect these processors to the network through flexible Ethernet ports.

 

400G NFV Demo: This is a showcase of a new NFV datacenter-in-a-box concept featuring a scalable 400G NFV infrastructure using Network Services Header (NSH) and multiple 100Gbps servers. The 400Gbps front-end NFV server is based on Intel Xeon processors that take advantage of Data Plane Development Kit (DPDK) to deliver performance that matches interface drivers for Red Rock Canyon, and Intel Scalable Switch Route Forwarding (S2RF) for scalable load balancing and routing.

 

In addition to these physical product demonstrations, an NFV technical session titled “Better Together: Service Function Chaining and the Road to 100Gb Ethernet” will present a service function chaining use case based on Intel technology. This session will be held on Aug. 20 at 2:15 p.m.; search for session NFS012 on the IDF 2015 website. An additional Tech Chat about Red Rock Canyon applications will be held on Aug. 18 at 1 p.m.

By David Cohen, System Architect and Senior Principal Engineer at Intel

 

 

With the arrival of new non-volatile memory (NVM) technologies, we are suddenly in the midst of the biggest data center transformation in the past 30 years. Data centers are now poised to move data at unprecedented speeds.

 

This isn’t hyperbole. This is the way it will be with the implementation of solutions built around new technologies like NVM Express* (NVMe) over Fabrics, NVMe in PCI Express, and 3D XPoint™. These technologies will bring down the cost of non-volatile memory and replace hard disk drives (HDDs) with solid-state storage, while taking storage performance to unprecedented levels.

 

A case in point: The new 3D XPoint (pronounced “3D cross-point”) technology from Intel and Micron enables NVM speeds that are up to 1,000 times faster than NAND, today’s most popular non-volatile memory. With its unique material compounds and cross-point architecture, 3D XPoint technology is 10 times denser than conventional memory. We’re talking about a category of NVM that has the potential to revolutionize any device, application, or service that can benefit from fast access to large sets of data.

 

Take a closer look at 3D XPoint technology.

 

Let’s take a step back and look at the bigger picture. In enterprise data centers, spinning disk drives (HDDs), which continue to carry a lot of the data storage load, have always been really slow (in relative terms) while everything else has been really fast. This amounts to a bottleneck in the application performance pipeline. At some level, it doesn’t matter how fast today’s processors and network switches are when overall performance is tied to the speeds of yesterday’s data storage devices.

 

We are now in the process of rewriting this tired equation. With the arrival of solutions based on the new non-volatile storage technologies, storage will be really fast and, in comparison, everything else will be slower. For the software developer, this new reality of lightning-fast storage creates an imperative to optimize other parts of the performance pathway to remove overhead that causes latency.

 

Explore the evolution of storage media architectures.

 

In essence, the goal is to move operations that are not critical to performance out of the performance path—such as bookkeeping functions related to the management of transactions and data replication operations that could take place elsewhere. The idea is to tease latency out of the system and allow all things to happen in parallel. The ultimate goal is a balanced system that capitalizes on the full potential of the latest server, storage, and networking components.
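
As a toy illustration of moving work off the performance path, here is a Python sketch in which the foreground write does only the latency-critical step and hands bookkeeping (audit records, replication triggers, metrics) to a background worker through a queue. It is a conceptual sketch under those assumptions, not a storage-stack implementation.

```python
# Toy sketch of keeping the performance path short: the caller's write does
# only the latency-critical work, while bookkeeping (audit trail, replication
# triggers, metrics) is queued for a background worker. Conceptual only.
import queue
import threading

bookkeeping_q = queue.Queue()
store = {}          # stand-in for the fast storage tier
audit_log = []      # stand-in for slower bookkeeping state

def bookkeeper():
    while True:
        item = bookkeeping_q.get()
        if item is None:            # shutdown sentinel
            break
        audit_log.append(item)      # slow work happens off the hot path
        bookkeeping_q.task_done()

def write(key, value):
    store[key] = value                             # latency-critical step
    bookkeeping_q.put(("wrote", key, len(value)))  # deferred, non-blocking

worker = threading.Thread(target=bookkeeper, daemon=True)
worker.start()

for i in range(3):
    write("object-%d" % i, b"x" * 100)

bookkeeping_q.join()        # for the demo, wait until bookkeeping catches up
bookkeeping_q.put(None)
worker.join()
print("objects stored:", len(store), "| audit records:", len(audit_log))
```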

 

At Intel, we are committed to making our end customers successful in the transformation to next-generation silicon-based storage. To that end, we are working actively with our ecosystem partners and end-user customers to help ensure that software is addressed in the right way—so that operating systems and applications can gain the greatest benefits from faster storage. At the same time, we are working closely with industry organizations to make sure there are standards in place that allow software developers and OEMs to capitalize on new storage technologies in a uniform way.

 

Here’s the bottom line: The future is upon us. It’s ours to make of it what we will. To maximize the potential of storage solutions, we must first embrace new technologies like NVM Express and Next Generation NVM, and then work actively to optimize the associated software for the new capabilities of NVM.

 

Our success in these efforts will throw open the doors to the next generation of data centers.

 

For a closer look at the new non-volatile memory technologies, and the future of storage itself, visit intel.com/storage.


1 Performance difference based on comparison between 3D XPoint technology and other industry NAND.

2 Density difference based on comparison between 3D XPoint technology and other industry DRAM.

For years, people have been talking about the coming convergence of memory and storage. To this point, the discussion has been largely theoretical, because the affordable technologies that enable this convergence were not yet with us.

 

Today, the talk is turning to action. With the arrival of a new generation of economical, non-volatile memory (NVM) technologies, we are on the cusp of the future—the day of the converged memory and storage media architecture.

 

The biggest news in this story is the announcement by Intel and Micron of 3D XPoint technology, which will enable a new generation of DIMMs and solid state drives (SSDs). This NVM technology couples storage-like capacity with memory-like speeds.

 

While it’s not quite as fast as today’s DDR4 technology, 3D XPoint (pronounced “3D cross-point”) is 1,000x faster than NAND and has 1,000x greater endurance. Intel DIMMs based on 3D XPoint will support up to 4X more system memory per platform than standard DRAM DIMMs alone, and they are expected to offer a significantly lower cost per gigabyte than DRAM DIMMs.

 

With the enormous leaps in NVM performance offered with 3D XPoint technology, latencies are so low that for the first time NVM can be used effectively in main system memory, side by side with today’s DRAM-based DDR4 DIMMs. Even better, unlike DRAM, the Intel DIMMs will provide persistent memory, so that data is not erased in the event of loss of power.

 

The biggest net gain is that a lot more data will be stored in memory, where it is closer to the processors. This is a hugely important advance when it comes to accelerating application performance. Intel DIMMs will allow for faster analysis and simulation results from more complex models and fewer interruptions in service delivery to users, and they will drive new software innovations as developers adjust their applications to take advantage of rapid access to vastly more data.

 

Elsewhere in the storage hierarchy, 3D XPoint technology will be put to work in Intel SSDs that use the NVM Express* (NVMe*) interface to communicate with the processors. Compared to today’s alternatives, these new SSDs will offer much lower latency and greater endurance.

These SSDs will be sold under the name Intel® Optane™ technology and will be available in 2016. The upcoming 3D XPoint technology-based DIMMs will be available in the next all-new generation of the Intel data center platform.

 

The good news is, these next-generation NVM SSDs and DIMMs are coming soon to a data center near you. Their arrival will herald the beginning of the era of the converged memory and storage media architecture—just in time for an onslaught of even bigger data and more demanding applications.

 

 

 

For performance info on 3D XPoint, please visit: http://www.intel.com/content/www/us/en/architecture-and-technology/non-volatile-memory.html.

Earlier this summer, Intel announced our Cloud for All initiative, signaling a deepening engagement with the cloud software industry on SDI delivery for mainstream data centers. Today at IDF 2015, I had my first opportunity since the announcement to discuss why Cloud for All is such a critical focus for Intel, for the cloud industry, and for the enterprises and service providers that will benefit from feature-rich enterprise cloud solutions. Delivering the agility and efficiency found today in the world’s largest data centers to broad enterprise and provider environments has the opportunity to transform the availability and economics of computing and reframe the role of technology in the way we do business and live our lives.

 

Why this focus? Building a hyperscale data center from the ground up to power applications written specifically for cloud is a very different challenge than migrating workloads designed for traditional infrastructure to a cloud environment. In order to move traditional enterprise workloads to the cloud, either an app must be rewritten for native cloud optimization or the SDI stack must be optimized to support enterprise workload requirements. This means supporting things like live workload migration, rolling software upgrades, and failover. Intel’s vision for pervasive cloud embraces both approaches, and while we expect applications to be optimized as cloud native over time, near-term cloud adoption in the enterprise hinges on SDI stack optimization to support both traditional and cloud-native applications.

 

How does this influence our approach to industry engagement in Cloud for All? It means that we need to enable a wide range of potential usage models while recognizing that a wide range of infrastructure solutions exists across the world today. While many organizations are still running traditional infrastructure without self-service, there is a growing trend toward enabling self-service on existing and new SDI infrastructure through solutions like OpenStack, providing the well-known “give me a server” or “give me storage” capabilities: call this Cloud Type A, server focused. Meanwhile, software developers over the last year have grown very fond of containers and are thinking not in terms of servers but in terms of app containers and connections: Cloud Type B, process focused. Looking out into the future, we can assume that many new data centers will be built with this as the foundation and will provide a portion of their capacity to traditional apps. The goal is convergence of usage models while bringing the infrastructure solutions forward.

 



The enablement of choice and flexibility, the optimization of the underlying Intel architecture-based infrastructure, and the delivery of easy-to-deploy solutions to market will help secure broad adoption.

 

So where are we with optimization of SDI stacks for underlying infrastructure? The good news is, we’ve made great progress with the industry on intelligent orchestration.  In my talk today, I shared a few examples of industry progress.

 

I walked the audience through one example with Apache Mesos, detailing how hyperscale orchestration is achieved through a dual-level scheduler and how frameworks can be built to handle complex use cases, even storage orchestration. I also demonstrated a new technology for Mesos oversubscription that we’re calling Serenity, which helps drive maximum infrastructure utilization. This has been a partnership between Mesosphere and Intel engineers in the community to help lower the TCO of data centers, something I care a lot about: real business results with technology.
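
Mesos’ real APIs are protobuf-based bindings, so the sketch below is only a framework-free Python illustration of the dual-level idea: the first level (the Mesos master and its allocator) decides which framework is offered which resources, and the second level (each framework’s own scheduler) decides which offers to accept and what tasks to launch on them. The agent names, resource sizes, and task shapes are made up.

```python
# Framework-free illustration of Mesos-style dual-level scheduling:
# level 1 (the allocator) decides which framework gets offered which
# resources; level 2 (the framework scheduler) decides what to accept
# and what tasks to launch. Names and numbers are illustrative only.

AGENTS = {"agent-1": {"cpus": 8, "mem": 32}, "agent-2": {"cpus": 4, "mem": 16}}

class AnalyticsFramework:
    """Level 2: accepts offers big enough for its pending tasks."""
    def __init__(self, pending_tasks, cpus_per_task=2, mem_per_task=4):
        self.pending, self.cpus, self.mem = pending_tasks, cpus_per_task, mem_per_task

    def resource_offers(self, offers):
        launched = []
        for agent, free in offers.items():
            while (self.pending and free["cpus"] >= self.cpus
                   and free["mem"] >= self.mem):
                free["cpus"] -= self.cpus
                free["mem"] -= self.mem
                launched.append((self.pending.pop(), agent))
        return launched   # (task, agent) placements; the rest is declined

def allocator_offer_round(frameworks, agents):
    """Level 1: offer each agent's free resources to frameworks in turn."""
    placements = []
    for fw in frameworks:
        offers = {a: dict(res) for a, res in agents.items()}
        accepted = fw.resource_offers(offers)
        for task, agent in accepted:
            agents[agent]["cpus"] -= fw.cpus
            agents[agent]["mem"] -= fw.mem
        placements.extend(accepted)
    return placements

if __name__ == "__main__":
    fw = AnalyticsFramework(pending_tasks=["task-%d" % i for i in range(5)])
    for task, agent in allocator_offer_round([fw], AGENTS):
        print(task, "->", agent)
```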

 

I also shared how infrastructure telemetry and infrastructure analytics can deliver improved stack management. I shared an example of a power- and thermal-aware orchestration scheduler that has helped Baidu net a data center PUE of 1.21, with 24 percent potential cooling energy savings. Security is also a significant focus, and I walked through an approach that uses Intel VT technology to improve container security isolation. In fact, CoreOS announced today that its rkt 0.8 release has been optimized for Intel VT using the approach outlined in my talk, and we expect more work with the container industry toward delivering security capabilities like those present today only in traditional hypervisor-based environments.

 

But what about data center application optimization for SDI? For that focus, I ended my talk by announcing the first Cloud for All Challenge, a competition for developers to rewrite infrastructure software applications for cloud-native environments. I’m excited to see developer response to our challenge, simply because the opportunity is ripe for introducing cloud-native applications to the enterprise using container orchestration, and Intel wants to help accelerate the software industry toward delivery of cloud-native solutions. If you’re an app developer, I encourage you to engage in this Challenge! The winning team will receive $5,000 of cold, hard cash and bragging rights at being at the forefront of your field. Simply contact cloudforall@intel.com for information, and please see the preliminary entry form.
