
Cloud for All

3 Posts authored by: Das Kamhout


 

My hometown of Portland, Oregon is home this week to the first ever KuberCon Launch event, bringing together the Kubernetes ecosystem at OSCON. While the industry celebrates the delivery of Kubernetes 1.0 and the formation of the Cloud Native Computing Foundation, this week is also an opportunity to gauge the state of development around open source container solutions.

 

Why so much attention on containers? Basically, it is because containers help software developers and infrastructure operators at the same time. This technology will help put mainstream data centers and developers on the road to the advanced, easy-to-consume, easy-to-ship-and-run, hyperscale technologies that are a hallmark of the world’s largest and most sophisticated cloud data centers. The container approach packages up applications and software libraries to create units of computing that are both scalable and portable: two keys to the agile data center. With the addition of Kubernetes and other key technologies like Mesos, the orchestration and scheduling of containers is making formerly impossible tasks simple.

 

This is a topic close to the hearts of many people at Intel. We are an active participant in the ecosystem that is working to bring the container model to a wide range of users and data centers, as part of our broader strategy of standards-based stack delivery for software-defined infrastructure. This involvement was evidenced earlier this year through our collaborations with both CoreOS and Docker, two leading software players in this space, as well as our leadership engagement in the new Open Container Project.

 

As part of the effort to advance the container cause, Intel is highlighting the latest advancements in our CoreOS collaboration to advance and optimize the Tectonic stack, a commercial distribution of Kubernetes plus CoreOS software. At KuberCon, Intel, Redapt, Supermicro, and CoreOS are showing a Tectonic rack running on bare metal, highlighting the orchestration and portability that Tectonic provides to data center workloads. Local rock-star company Jive has been very successful running its workloads on this platform, showing that its app can move between public cloud and on-premises bare metal cloud. We’re also announcing extensions of our collaboration with CoreOS to drive broad developer training for Tectonic, as well as title sponsorship of CoreOS’s Tectonic Summit event planned for December 2nd and 3rd in New York. For details, check out the CoreOS news release.

 

We’re also featuring an integration of an OpenStack environment running Kubernetes-based containers within an enterprise-ready appliance. This collaboration with Mirantis, Redapt, and Dell highlights the industry’s work to turn open source SDI stacks into solutions that address enterprise customers’ needs for simpler-to-deploy offerings, and demonstrates the progress the industry has made in integrating Kubernetes with OpenStack as Kubernetes reaches 1.0.

 

Our final demonstration features a new software and hardware collaboration with Mesosphere, the company behind much of the engineering for Mesos, which provides the container scheduling for Twitter, Apple Siri, and Airbnb, among other digital giants. Here, we’ve worked to integrate Mesosphere’s DCOS platform with Kubernetes on a curated and optimized hardware stack supplied by Quanta. This is yet another example of an open source SDI stack integrating efficient container-based virtualization to deliver the portability and orchestration of hyperscale.

 

For a closer look at Intel’s focus on standards-based innovation for the software-defined infrastructure stack, check out my upcoming presentation at the Intel Developer Forum (IDF). I’ll be detailing further advancements in our industry collaborations to deliver SDI to the masses, and going deeper into the technologies Intel is integrating into data center infrastructure to optimize SDI stacks for global workload requirements.

When it comes to the cloud, there is no single answer to the question of how to ensure the optimal performance, scalability, and portability of workloads. There are, in fact, many answers, and they are all tied to the interrelated layers of the software-defined infrastructure (SDI) stack. The recently announced Intel Cloud for All initiative is focused on working with cloud software vendors and the community to deliver fully optimized SDI stacks that can serve a wide array of apps and data. To better understand the underlying strategy driving Cloud for All, it’s important to see the relationships between the layers of the SDI stack.

 

In this post, we will walk through the layers of the SDI stack, as shown here.

 

[Figure: the layers of the SDI stack]

The foundation

 

The foundation of software-defined infrastructure is the creation of infrastructure resource pools establishing compute, storage, and network services. These resource pools utilize the performance and platform capabilities of Intel architecture to enable applications to understand, and then control, what they consume. Our work with the infrastructure ecosystem is focused on ensuring that the infrastructure powering the resource pools is always optimized for a wide array of SDI stacks.


The OS layer

 

At the operating system level, the stack includes commonly used operating systems and software libraries that allow applications to achieve optimum performance while enabling portability from one environment to another. Intel has a long history of engineering with both OS vendors and the community, and has extended this work to lightweight OSes that provide greater efficiency for cloud-native workloads.

 

The virtualization layer

 

Moving up the stack, we have the virtualization layer, which is essential to software-defined infrastructure; without virtualization, SDI would not be possible. But in this context, virtualization can include more than just typical hypervisors. To establish resource pools, the infrastructure components of compute, storage, and network are virtualized through various means, and the best resource pools are those that can continue to scale out to meet the growing needs of their consumers. Last but not least, the performance isolation provided by containers can be considered OS-level virtualization, which has enabled a whole new set of design patterns for developers. For both containers and hypervisors, Intel is working with software providers to fully utilize the capabilities of Intel® Virtualization Technology (Intel® VT) to drastically reduce performance overhead and increase security isolation. For both storage and network, we have additional libraries and instruction sets that help deliver the best possible performance for this wide array of infrastructure services.

 

The orchestration layer

 

There are numerous orchestration layers and schedulers available; for this discussion, however, we will focus on those being built in the open: OpenStack, Apache Mesos, and Kubernetes. This layer provides central oversight of the status of the infrastructure: what is allocated and what is consumed, how applications or tenants are deployed, and how best to meet the goal of most data center infrastructure teams, increasing utilization while maintaining performance. Intel’s engagement within the orchestration layer focuses on working with the industry both to harden this layer and to bring in advanced algorithms that can help all data centers become more efficient. Some examples are our work in the OpenStack community to improve the availability of the cloud services themselves and to provide rolling upgrades so that the cloud and its tenants are always on. In Mesos, we are working to help users of this technology use all available computing slack so they can improve their TCO.
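To make the "increase utilization while maintaining performance" goal concrete, here is a toy Python sketch of a best-fit placement heuristic. It is purely illustrative: the node structure and `place` function are my own invention, not the actual algorithm used by OpenStack, Mesos, or Kubernetes.

```python
# Toy illustration of the orchestration goal described above: place tasks
# so utilization rises while capacity limits are respected. This is a
# simplified best-fit heuristic, not any real scheduler's algorithm.

def place(task_cpu, nodes):
    """Pick the node whose remaining CPU best fits the task (best-fit),
    leaving the least unusable slack behind. Mutates the chosen node."""
    candidates = [n for n in nodes if n["free_cpu"] >= task_cpu]
    if not candidates:
        return None  # no capacity left: queue the task or scale out
    best = min(candidates, key=lambda n: n["free_cpu"] - task_cpu)
    best["free_cpu"] -= task_cpu
    return best["name"]

nodes = [{"name": "node-a", "free_cpu": 8.0},
         {"name": "node-b", "free_cpu": 2.0}]
print(place(1.5, nodes))  # best fit: node-b (leaves the least slack)
```

Real schedulers weigh many more dimensions (memory, affinity, failure domains), but the tension is the same: packing tightly raises utilization, while leaving headroom protects performance.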

 

The developer environment

 

The entire SDI infrastructure is really built to power the developers’ code and data, which all of us as consumers use every day of our lives. Intel has a long history of helping improve debugging tools, making it easier for developers to move to new design patterns like multi-threaded and, now, distributed systems, and helping developers get the most performance out of their code. We will continue to increase our focus here to make sure that developers can concentrate on making the best software, and let the tools help them build always-on, highly performant apps and services.

 

For a close-up look at Intel’s focus on standards-based innovation for the SDI stack, check out the related sessions at the Intel Developer Forum, which takes place August 18–20 in San Francisco. These events will include a class that dives into the Intel vision for the open, standards-based SDI stacks that are the key to mainstream cloud adoption.

Earlier this summer, Intel announced our Cloud for All initiative, signaling a deepening engagement with the cloud software industry on SDI delivery for mainstream data centers. Today at IDF2015, I had my first opportunity since the announcement to discuss why Cloud for All is such a critical focus for Intel, for the cloud industry, and for the enterprises and service providers that will benefit from feature-rich enterprise cloud solutions. Delivering the agility and efficiency found today in the world’s largest data centers to broad enterprise and provider environments has the potential to transform the availability and economics of computing and reframe the role of technology in the way we do business and live our lives.

 

Why this focus? Building a hyperscale data center from the ground up to power applications written specifically for cloud is a very different challenge than migrating workloads designed for traditional infrastructure to a cloud environment. To move traditional enterprise workloads to the cloud, either an app must be rewritten for native cloud optimization, or the SDI stack must be optimized to support enterprise workload requirements. That means supporting things like live workload migration, rolling software upgrades, and failover. Intel’s vision for pervasive cloud embraces both approaches, and while we expect applications to be optimized as cloud native over time, near-term cloud adoption in the enterprise hinges on SDI stack optimization to support both traditional and cloud-native applications.

 

How does this influence our approach to industry engagement in Cloud for All? It means that we need to enable a wide range of potential usage models while being pragmatic about the wide range of infrastructure solutions that exist across the world today. While many are still running traditional infrastructure without self-service, there is a growing trend toward enabling self-service on existing and new SDI infrastructure through solutions like OpenStack, providing the well-known “give me a server” or “give me storage” capabilities: call this Cloud Type A, server focused. Meanwhile, software developers over the last year have grown very fond of containers and are thinking not in terms of servers but in terms of app containers and connections: Cloud Type B, process focused. Looking into the future, we can assume that many new data centers will be built with Type B as the foundation and will provide a portion of their capacity to traditional apps. The goal is convergence of usage models while bringing infrastructure solutions forward.

 

[Figure: a potential path to SDI]

 

The enablement of choice and flexibility, the optimization of the underlying Intel architecture-based infrastructure, and the delivery of easy-to-deploy solutions to market will help secure broad adoption.

 

So where are we with optimization of SDI stacks for underlying infrastructure? The good news is, we’ve made great progress with the industry on intelligent orchestration.  In my talk today, I shared a few examples of industry progress.

 

I walked the audience through one example with Apache Mesos, detailing how hyperscale orchestration is achieved through a dual-level scheduler and how frameworks can be built to handle complex use cases, even storage orchestration. I also demonstrated a new technology for Mesos oversubscription, called Serenity, that helps drive maximum infrastructure utilization. This has been a partnership between Mesosphere and Intel engineers in the community to help lower the TCO of data centers, something I care a lot about: real business results with technology.
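The dual-level idea can be sketched in a few lines of Python. This is a hedged illustration of the model only: the class names, the `resource_offer` method, and the greedy framework policy are invented for this sketch and are not the Mesos API. Level one (the master) offers each agent’s unused resources; level two (the framework’s own scheduler) decides how much of each offer to accept.

```python
# Sketch of a dual-level scheduler: the master offers resources (level 1),
# and each framework's own scheduler decides what to accept (level 2).
# Illustrative only; not the actual Mesos interfaces.

class Master:
    def __init__(self, agents):
        self.agents = agents  # agent name -> unallocated CPUs

    def offer_round(self, framework):
        """Level 1: offer each agent's free resources to a framework."""
        launched = []
        for agent, cpus in self.agents.items():
            accepted = framework.resource_offer(agent, cpus)  # level 2 decides
            self.agents[agent] = cpus - accepted
            if accepted:
                launched.append((agent, accepted))
        return launched

class GreedyFramework:
    """Level 2: a framework scheduler that accepts whatever it still needs."""
    def __init__(self, demand):
        self.demand = demand

    def resource_offer(self, agent, cpus):
        take = min(self.demand, cpus)
        self.demand -= take
        return take

master = Master({"agent-1": 4.0, "agent-2": 4.0})
fw = GreedyFramework(demand=6.0)
print(master.offer_round(fw))  # [('agent-1', 4.0), ('agent-2', 2.0)]
```

The point of the split is that the master never needs to understand any framework’s placement logic; oversubscription schemes like Serenity can then reclaim the slack that cautious frameworks leave behind.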

 

I also shared how infrastructure telemetry and analytics can deliver improved stack management. One example is a power- and thermal-aware orchestration scheduler that has helped Baidu achieve a data center PUE of 1.21, with 24% potential cooling energy savings. Security is also a significant focus, and I walked through an approach that uses Intel VT to improve container security isolation. In fact, CoreOS announced today that its rkt 0.8 release has been optimized for Intel VT using the approach outlined in my talk, and we expect more work with the container industry toward delivering security capabilities previously present only in traditional hypervisor-based environments.
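To give a flavor of what thermal-aware placement means, here is a minimal sketch, assuming hosts report CPU utilization and inlet temperature telemetry. The scoring function and weights are hypothetical, chosen for illustration; they are not Baidu’s or Intel’s actual algorithm.

```python
# Illustrative power- and thermal-aware placement: prefer hosts that are
# cooler and less loaded, so hot spots (and cooling energy) are reduced.
# The score and 50/50 weighting are assumptions for this sketch.

def thermal_score(host, max_temp_c=85.0):
    """Lower is better: blend utilization with normalized inlet temperature."""
    return 0.5 * host["cpu_util"] + 0.5 * (host["inlet_temp_c"] / max_temp_c)

def pick_host(hosts):
    """Place the next workload on the coolest, least-loaded host."""
    return min(hosts, key=thermal_score)["name"]

hosts = [
    {"name": "rack1-h3", "cpu_util": 0.40, "inlet_temp_c": 31.0},
    {"name": "rack2-h7", "cpu_util": 0.35, "inlet_temp_c": 24.0},
]
print(pick_host(hosts))  # rack2-h7: cooler and less loaded
```

Feeding real telemetry into a score like this is how an orchestrator can spread heat across the room instead of concentrating it, which is where the cooling-energy savings come from.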

 

But what about data center application optimization for SDI? To that end, I ended my talk with the announcement of the first Cloud for All Challenge, a competition for developers to rewrite infrastructure software applications for cloud-native environments. I’m excited to see developer response to our challenge, because the opportunity is ripe for introducing cloud-native applications to the enterprise using container orchestration, and Intel wants to help accelerate the software industry toward delivery of cloud-native solutions. If you’re an app developer, I encourage you to take part in the Challenge! The winning team will receive $5,000 in cold, hard cash and bragging rights at being at the forefront of your field. Simply contact cloudforall@intel.com for information, and please see the preliminary entry form.