
Cloud for All


By Imad Sousou

 

Innovation is key to unlocking the full potential of OpenStack. At Intel, we see tremendous opportunity to drive growth in the data center. To achieve this growth the industry as a whole must enable new usage models, develop more robust security solutions, and help ensure that new features and technologies are easy to adopt.

 

In my keynote at the OpenStack Summit Vancouver today, I highlighted two requirements critical to our shared success. The first is for the developer community to focus on removing barriers to adoption including ease of deployment, rolling upgrades, and high availability of services and tenants. The second is for everyone to embrace the spirit of innovation necessary to drive new usages and adoptions while moving OpenStack forward.

 

As industry leaders, we all have this responsibility. Today I shared one of the innovations Intel released as part of our Clear Linux* Project for Intel® architecture, called Intel® Clear Containers. While standard Linux containers are an effective way to spin up an app in a trusted environment, the underlying kernel can still be attacked from within the container. In turn, all containers on the same host can be compromised, regardless of the intended isolation between them. To date, this has largely limited their use to in-house applications or single-tenant hosts.

 

Intel Clear Containers address the security concerns surrounding the popular container model for application deployment. Intel's approach offers enhanced protection rooted in hardware. By using the virtualization technology features (VT-x) embedded in the silicon, we can bring the security and isolation advantages of virtualization to containerized applications. Intel Clear Containers provide a secure, fast virtual machine (VM) with a small memory footprint, allowing for more VMs per physical machine.

 

One of the key challenges this solution addresses is boot time. Containers spin up very quickly, on the order of a hundred milliseconds or so. Our goal was to create a Linux environment that boots up as a guest at speeds comparable to a standard container. By focusing on the needs of the application container and optimizing the Linux boot process, we achieved this goal. As a result, containers can now reside in multi-tenant environments with very little performance overhead.

 

This new feature and the entire Intel Clear Linux Project are available on ClearLinux.org. We fully expect this extra security for containers, rooted in hardware, to drive new usages. Innovation is up to all of us. And we invite you to join in.

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others.

Cloud computing offers what every business wants: the ability to respond instantly to business needs. It also offers what every business fears: loss of control and, potentially, loss of the data and processes that enable the business to work. Our announcement at the OpenStack Summit of Intel® Cloud Integrity Technology 3.0 puts much of that control and assurance back in the hands of enterprises and government agencies that rely on the cloud.

 

Through server virtualization and cloud management software like OpenStack, cloud computing lets you instantly, even automatically, spin up virtual machines and application instances as needed. In hybrid clouds, you can supplement capacity in your own data centers by "bursting" to public cloud service providers to meet unanticipated demand. But this flexibility also brings risk and uncertainty. Where are the application instances actually running? Are they running on trusted servers whose BIOS, operating systems, hypervisors, and configurations have not been tampered with? To assure security, control, and compliance, you must be sure applications run in a trusted environment. That's what Intel Cloud Integrity Technology lets you do.

 

Intel Cloud Integrity Technology 3.0 is software that builds on security features of Intel® Xeon® processors to let you verify that applications running in the cloud run on trusted servers and virtual machines whose configurations have not been altered. Working with OpenStack, it ensures that when VMs are booted or migrated to new hardware, the integrity of virtualized and non-virtualized Intel x86 servers and workloads is verified remotely using Intel® Trusted Execution Technology (TXT) and Trusted Platform Module (TPM) technology on Intel Xeon processors. If this "remote attestation" finds discrepancies in the server, BIOS, or VM, suggesting the system may have been compromised by a cyber attack, the boot process can be halted. Otherwise, the application instance is launched in a verified, trusted environment spanning the hardware and the workload.
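The attestation gate described above can be sketched as follows. This is a toy illustration of the general pattern, not Intel's actual API: the measurement names, the hard-coded whitelist, and the return strings are all invented for the example, and a real deployment would obtain signed measurements from TXT/TPM via an attestation service.

```python
import hashlib

# Illustrative "known-good" measurements for a host. In a real
# deployment these come from an attestation service, not constants.
KNOWN_GOOD = {
    "bios": hashlib.sha256(b"trusted-bios-build").hexdigest(),
    "hypervisor": hashlib.sha256(b"trusted-kvm-build").hexdigest(),
}

def attest_host(measurements: dict) -> bool:
    """Compare reported platform measurements against the whitelist.

    Returns True only if every expected component matches; any
    discrepancy (tampered BIOS, altered hypervisor) fails attestation.
    """
    return all(
        measurements.get(component) == expected
        for component, expected in KNOWN_GOOD.items()
    )

def launch_vm(measurements: dict) -> str:
    # Halt the boot if remote attestation finds a discrepancy;
    # otherwise proceed to launch in the verified environment.
    if not attest_host(measurements):
        return "halted: host failed attestation"
    return "launched on trusted host"

# A trusted host reports measurements matching the whitelist;
# a compromised host reports an altered hypervisor measurement.
trusted = dict(KNOWN_GOOD)
compromised = dict(KNOWN_GOOD, hypervisor="deadbeef")

print(launch_vm(trusted))       # launched on trusted host
print(launch_vm(compromised))   # halted: host failed attestation
```

The key property is that the launch decision is made before the workload ever runs on the host, so a tampered platform is rejected rather than detected after the fact.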

 

In addition to assuring the integrity of the workload, Cloud Integrity Technology 3.0 also enables confidentiality by encrypting the workload prior to instantiation and storing it securely using OpenStack Glance. An included key management system that you deploy on premises gives the tenant complete ownership and control of the keys used to encrypt and decrypt the workload.
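As a toy illustration of the tenant-owned-key pattern described above (not Intel's implementation: the function names are invented, and the XOR/SHA-256 "cipher" merely stands in for a vetted algorithm such as AES-GCM):

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustrative only --
    # real deployments use a vetted authenticated cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_workload(image: bytes, tenant_key: bytes):
    # The tenant encrypts the image with a key it alone controls
    # before uploading it to the image store (e.g., Glance).
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(image, keystream(tenant_key, nonce, len(image))))
    return nonce, ct

def decrypt_workload(nonce: bytes, ct: bytes, tenant_key: bytes) -> bytes:
    # Decryption happens only at instantiation time, using the key
    # released by the tenant's on-premises key manager.
    return bytes(a ^ b for a, b in zip(ct, keystream(tenant_key, nonce, len(ct))))

key = secrets.token_bytes(32)          # held by the tenant, never the cloud
nonce, ct = encrypt_workload(b"vm-image-bytes", key)
assert decrypt_workload(nonce, ct, key) == b"vm-image-bytes"
```

The point of the pattern is that the cloud provider stores only ciphertext; the tenant's key manager decides when (and to which attested host) the key is released.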

 

Cloud Integrity Technology 3.0 builds on earlier releases to assure a full chain of trust from bare metal up through VMs. It also provides location controls to ensure workloads can be instantiated only in specific data centers or clouds. This helps address the regulatory compliance requirements of some industries (such as PCI DSS and HIPAA) and the geographical restrictions imposed by some countries.

 

What we announced at OpenStack Summit is a beta availability version of Intel Cloud Integrity Technology 3.0. We'll be working to integrate with an initial set of cloud service providers and security vendor partners before we make the software generally available. And we'll submit extensions to OpenStack for Cloud Integrity Technology 3.0 later this year.

 

Cloud computing is letting businesses slash time to market for new products and services and respond quickly to competitors and market shifts. But to deliver the benefits promised, cloud service providers must assure tenants their workloads are running on trusted platforms and provide the visibility and control they need for business continuity and compliance.

 

Intel Xeon processors and Cloud Integrity Technology are enabling that. And with version 3.0, we're enabling it across the stack from the hardware through the workload. We're continuing to extend Cloud Integrity Technology to storage and networking workloads as well: storage controllers, SDN controllers, and virtual network functions like switches, evolved packet core elements, and security appliances. It's all about giving enterprises the tools they need to capture the full potential of cloud computing.

Today, Intel announced that it is one of the founding members of the Open Container Project (OCP), an effort focused on ensuring a foundation of interoperability across container environments. We were joined by industry leaders including Amazon Web Services, Apcera, Cisco, CoreOS, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, HP, Huawei, IBM, Joyent, the Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat, and VMware in the formation of this group, which will be established under the umbrella of the Linux Foundation. This formation represents an enormous opportunity for the industry to "get interoperability right" at a critical point in the maturation of container use within cloud environments.

 

Why is this goal important? We know the tax that limited interoperability imposes on workload portability, and the limit it places on enterprises extracting the full value of the hybrid cloud. We also know how hard true interoperability is to achieve when it is not established in the early phases of technology maturity. This is why container interoperability is an important part of Intel's broader strategy for open cloud software innovation and enterprise readiness, and why we are excited to be joining other industry leaders in OCP.

 

Intel brings decades of experience working on open, industry-standard efforts to OCP, and we have reason to be bullish about the opportunity for OCP to deliver on its goals. We have the right players assembled to lead this program forward and the right commitments from vendors to contribute code and runtimes to the effort. We're looking forward to helping lead this organization to rapid delivery on its goals, and we plan to apply what we learn in OCP to our broader engagements in container collaboration.

 

Our broader goal is squarely focused on delivering containers that are fully optimized for Intel platforms and ready for enterprise environments, and on accelerating easy-to-deploy container-based solutions to market. You may have seen our earlier announcement of a collaboration with CoreOS on optimizing their Tectonic cloud software environment for Intel architecture to ensure enterprise capabilities. That announcement also featured work with leading solutions providers such as Supermicro and Redapt on delivering ready-to-deploy solutions at Tectonic GA. At DockerCon this week, we are highlighting our engineering work to optimize Docker containers for Intel Cloud Integrity Technology, extending workload attestation from VM-based workloads to containers. These are two examples of our broader efforts to ready containers for the enterprise, and they highlight the importance of the work of OCP.

 

If you are engaged in the cloud software arena, I encourage you to consider participating in OCP. If you're an enterprise considering integrating containers into your environment, the news of OCP should give you confidence in the portability of future container-based workloads, and evaluating container solutions should be part of your IT strategy.

With the cloud software industry advancing on a selection of Software Defined Infrastructure 'stacks' to support enterprise data centers, the question of application portability comes squarely into focus. A new 'style' of application development has started to gather momentum in both the public cloud and the private cloud. Cloud native applications, as this new style has been named, are applications that are container packaged, dynamically scheduled, and microservices oriented. They are rapidly gaining favor for their improved efficiency and agility as compared to more traditional monolithic data center applications.

 

However, creating a cloud native application does not eliminate the dependencies on traditional data center services. Foundational services such as networking, storage, automation, and of course compute are all still very much required. In fact, since the concept of a full virtual machine may not be present in a cloud native application, these applications rely significantly on their infrastructure software to provide the right components. When done well, a cloud native application SDI stack can provide efficiency and agility previously seen only in a few hyperscale environments.

 

Another key aspect of the cloud native application is that it should be highly portable. This portability between environments is a massive productivity gain for both developers and operators. An application developer wants the ability to package an application component once and have it be reusable across all clouds, both public and private. A cloud operator wants the freedom to place portions of an application where it makes the most sense, whether on a private cloud or with a public cloud partner. Cloud native applications are the next step in true hybrid cloud usage.

 

So, with this promise of efficiency, operational agility, and portability, where do data center managers look for definitions of how the industry will address movement of apps between stacks? How can one deploy a cloud native app and ensure it can be moved across clouds and SDI stacks without issue? Without a firm answer, can one really develop cloud native apps with confidence that portability will not be limited to environments running identical SDI stacks? These are the types of questions that often stall organizational innovation, and they are the reason Intel has joined with other cloud leaders in the formation of the Cloud Native Computing Foundation (CNCF).

 

Announced this week at the first-ever KuberCon event, the CNCF has been chartered to provide guidance, operational patterns, standards, and over time, APIs to ensure container-based SDI stacks are both interoperable and optimized for a seamless, performant developer experience. The CNCF will work with the recently formed Open Container Initiative (OCI) toward the synergistic goal of addressing the full scope of container standards and the supporting services needed for success.

 

Why announce this at KuberCon? The goal of the CNCF is to foster innovation in the community around these application models, and the best way to speed innovation is to start with some seed technologies. Much the same way it is easier to start writing (a blog, perhaps?) when you have a few sentences on screen rather than staring at a blank page, the CNCF is not starting from scratch: Kubernetes, having just passed its 1.0 release, will be one of the first technologies used to kick-start the effort. Many more technologies, and even full stacks, will follow, with the goal of several 'reference' SDI platforms that provide the required portability.

 

What is Intel's role here? Based on our decades of experience helping lead industry innovation and standardization across computing hardware and open source software domains, we are firmly committed to the CNCF goals and plan to participate actively in the leadership body and Technical Oversight Committee of the Foundation. This effort reflects our broader commitment to working with the industry to accelerate the broad use of cloud computing through the delivery of optimized, easy-to-consume, easy-to-operate, feature-complete SDI stacks. This engagement complements our existing leadership roles in the OCI, the OpenStack Foundation, and the Cloud Foundry Foundation, as well as our existing work driving solutions with the SDI platform ecosystem.

 

With the cloud software industry accelerating its pace of innovation, please stay tuned for more details on Intel's broad engagement in this space. To deepen your engagement with Intel, I invite you to join us at the upcoming Intel Developer Forum in San Francisco to gain a broader perspective on Intel's strategy for accelerating cloud adoption.

By Imad Sousou

 

At Intel, we firmly believe the industry moves forward faster when everyone is collaborating openly and innovating rapidly. This is why we are strong supporters of open source software and focused on upstreaming our code. These commitments are a key reason Intel is recognized as the leading contributor to the Linux kernel and a top contributor to OpenStack, AOSP, Chromium, BlueZ, and ConnMan, among others. In each of these cases, Intel collaborates across the industry to see technologies advance and help transitions occur.

 

Today, we are proud to join the Cloud Native Computing Foundation, also backed by Google, Cisco, IBM, VMware, Twitter, Box, Cycle Computing, CoreOS, Goldman Sachs, Joyent, Mesosphere, Red Hat, and SUPERNAP, as this foundation will help shape the evolution of cloud technology as application management evolves, while easing the adoption and implementation process. These efforts further Intel's drive to support the entire cloud ecosystem and help ensure technology solutions have a path to adoption.

 

This effort ties in nicely with the Clear Linux Project for Intel Architecture that Intel announced earlier this year. The Clear Linux distribution provides a reference implementation for features within Intel platforms. Our intent is for companies to look at the code, adopt it, and then implement it within their own solutions.

 

This is why I am personally excited to see the advances from companies like CoreOS. In addition to their Tectonic preview announcement today and the training we will do together, CoreOS demonstrated an integration of Tectonic with Kubernetes, using Intel® Clear Container technology, as part of Clear Linux. This demo illustrates how Clear Container technology allows a virtual machine (VM) to act as a container, providing the rapid boot time needed for cloud applications with the security model of a VM.

 

I love seeing this type of innovation. It’s exactly what we need to drive greater adoption of cloud technology.

 

For two years, Intel has worked to make Software Defined Infrastructure (SDI) a reality within IT departments worldwide. The industry has come a long way and work on projects like OpenStack is being validated today by the number of deployments by key companies in the past year. Our collaboration with Red Hat, Canonical, HP Helion, Mesosphere, Rackspace, Mirantis and others continues to advance enterprise cloud availability and functionality.


 

My hometown of Portland, Oregon is home this week to the first-ever KuberCon Launch event, which brings the Kubernetes ecosystem together at OSCON. While the industry celebrates the delivery of Kubernetes 1.0 and the formation of the Cloud Native Computing Foundation, this week is also an opportunity to gauge the state of development around open source container solutions.

 

Why so much attention on containers? Basically, it is because containers help software developers and infrastructure operators at the same time. This technology will help put mainstream data centers and developers on the road to the advanced, easy-to-consume, easy-to-ship-and-run, hyperscale technologies that are a hallmark of the world's largest and most sophisticated cloud data centers. The container approach packages up applications and software libraries to create units of computing that are both scalable and portable, two keys to the agile data center. With the addition of Kubernetes and other key technologies like Mesos, the orchestration and scheduling of containers is making the once impossible simple.

 

This is a topic close to the hearts of many people at Intel. We are an active participant in the ecosystem that is working to bring the container model to a wide range of users and data centers as part of our broader strategy for standards-based stack delivery for software defined infrastructure. This involvement was evidenced earlier this year through our collaborations with both CoreOS and Docker, two leading software players in this space, as well as our leadership engagement in the new Open Container Project.

 

As part of the effort to advance the container cause, Intel is highlighting the latest advancements in our CoreOS collaboration to advance and optimize the Tectonic stack, a commercial distribution of Kubernetes plus CoreOS software. At KuberCon, Intel, Redapt, Supermicro, and CoreOS are showing a Tectonic rack running on bare metal, highlighting the orchestration and portability that Tectonic brings to data center workloads. Local rock-star company Jive has been very successful running its workloads on this platform, showing that its app can move between public cloud and on-premises bare metal cloud. We're also announcing extensions of our collaboration with CoreOS to drive broad developer training for Tectonic, and title sponsorship of CoreOS's Tectonic Summit event planned for December 2nd and 3rd in New York. For details, check out the CoreOS news release.

 

We're also featuring an integration of an OpenStack environment running Kubernetes-based containers within an enterprise-ready appliance. This collaboration with Mirantis, Redapt, and Dell highlights the industry's work to drive open source SDI stacks into solutions that address enterprise customer needs for simpler-to-deploy solutions, and it demonstrates the progress the industry has made in integrating Kubernetes with OpenStack as it reaches 1.0.

 

Our final demonstration features a new software and hardware collaboration with Mesosphere, the company behind much of the engineering for Mesos, which provides the container scheduling for Twitter, Apple Siri, and Airbnb, among other digital giants. Here, we've worked to integrate Mesosphere's DCOS platform with Kubernetes on a curated and optimized hardware stack supplied by Quanta. This highlights yet another example of an open source SDI stack integrating efficient container-based virtualization to drive the portability and orchestration of hyperscale.

 

For a closer look at Intel's focus on standards-based innovation for the software-defined infrastructure stack, check out my upcoming presentation at the Intel Developer Forum (IDF). I'll be detailing further advancements in our industry collaborations to deliver SDI to the masses, as well as going deeper into the technologies Intel is integrating into data center infrastructure to optimize SDI stacks for global workload requirements.

By Imad Sousou

Intel is a strong believer in the value cloud technology offers businesses around the world. It has enabled new usage models like Uber*, Waze*, Netflix* and even AirBnB*. Unfortunately, access to this technology has been limited because industry solutions are fragmented and hard to deploy, and solution stacks are incomplete. The Intel® Cloud for All initiative aims to help address these challenges within the OpenStack Community and unlock the full cloud potential for companies of all sizes.

The Cloud for All initiative has three key components:

  • Invest in the community to help create enterprise-ready, easy-to-deploy solutions
  • Optimize workloads to make the most of the hardware
  • Work with the industry to help create standards and complete solutions in the open

 

We believe engaging directly with the open source community is a proven path to driving greater innovation across the entire industry. This requires a consistent approach to software development, transparent efforts, and commitment to working across companies and the industry to help ensure the best solutions are available. We share these core values with Rackspace, and it’s why we are together creating what we believe will be the largest single developer team focused on enhancing the OpenStack platform.

 

The Cloud for All team will work directly with the community to deliver bug fixes, patches, and align with the OpenStack Enterprise Work Group to provide features that spur enterprise adoption. This team will also provide training to help raise the skills and capabilities of the entire development community. And finally, Intel and Rackspace will build two 1,000+ node OpenStack Hybrid Cloud Testing Clusters that developers can use to test their code.

More information about the clusters and availability will be shared in the coming months. To receive the latest information on the collaboration, details on how you can get involved, and how to access the clusters when they are available, please connect with us.

Cloud computing has been a tremendous driver of business growth over the past five years. Digital services such as Uber, Airbnb, Coursera, and Netflix have defined the consumer zeitgeist while redefining entire industries in the process. This first wave of cloud-fueled business growth has largely been created by businesses leveraging cloud native applications aimed at consumer services. Traditional enterprises that seek the same agility and efficiency the cloud provides have viewed migrating traditional enterprise applications to the cloud as a slow and complex challenge. At the same time, new cloud service providers are seeking to compete at cost parity with large providers, and industry-standard solutions that can help have been slow to arrive. The industry simply isn't moving fast enough to address these very real customer challenges, and our customers are asking for help.

 

To help solve these real issues, Intel is announcing the Cloud for All initiative, with the goal of accelerating the deployment of tens of thousands of clouds over the next five years. The initiative is focused squarely on cloud adoption, to deliver the benefits of cloud to all of our customers. This represents an enormous efficiency gain and strategic transition for enterprise IT and cloud service providers. The key to delivering the efficiency of the cloud to the enterprise is rooted in software defined infrastructure, and this push for more intelligent and programmable infrastructure is something we've been working on at Intel for several years. The ultimate goal of software defined infrastructure is one where compute, storage, and network resource pools are dynamically provisioned based on application requirements.

 

Cloud for All has three key objectives:

 

  1. Invest in broad industry collaborations to create enterprise ready, easy to deploy SDI solutions
  2. Optimize SDI stacks for high efficiency across workloads
  3. Align the industry towards standards and development focus to accelerate cloud deployment

 

Through investment, Intel will utilize our broad ecosystem relationships to ensure that a choice of SDI solutions supporting both traditional enterprise and cloud native applications is available in easy-to-consume options. This work will include scores of industry collaborations that ensure SDI stacks integrate frictionlessly into data center infrastructure.

 

Through optimization, Intel will work with cloud software providers to ensure that SDI stacks are delivered with rich enterprise feature sets, are highly available and secure, and scale to thousands of nodes. This work will include the full optimization of software to take advantage of Intel architecture features and technologies, such as Intel Virtualization Technology, Cloud Integrity Technology, and platform telemetry, all to deliver optimal enterprise capabilities.

 

Through industry alignment, Intel will use its leadership role in industry organizations, as well as our work with the broad developer community, to ensure that the right standards are in place so that workloads have true portability across clouds. This standardization will help give enterprises the confidence to deploy a mix of traditional and cloud native applications.

 

This work has already started. We have been engaged in the OpenStack community for a number of years as a consumer, and more recently we joined the Foundation board last year. We have used that user and leadership position to push for the features needed in the enterprise. Our work does not stop there, however. Over the past few months we've announced collaborations with cloud software leaders including CoreOS, Docker, and Red Hat, highlighting enterprise readiness for OpenStack and container solutions. We've joined with other industry leaders to form the Open Container Initiative and the Cloud Native Computing Foundation to drive industry standards and frameworks for cloud native applications.

 

Today, we've announced our next step in Cloud for All: a strategic collaboration with Rackspace, the co-founder of OpenStack and a company with a deep history of collaboration with Intel. We've come together to deliver a stable, predictable, easy-to-operate, enterprise-ready OpenStack scalable to thousands of nodes. This will be accomplished through the creation of the OpenStack Innovation Center, where we will assemble large developer teams across Intel and Rackspace to work together on the key challenges facing the OpenStack platform. Our upstream contributions will align with the priorities of the OpenStack Foundation's Enterprise Work Group. To facilitate this effort, we will create the Hybrid Cloud Testing Cluster, a large-scale environment open to all developers in the community who wish to test their code at scale, with the objective of improving the OpenStack platform. In total, we expect this collaboration to engage hundreds of new developers, internally and through community engagement, to address critical requirements for the OpenStack community.

 

Of course, we've only just begun. You can expect dozens of announcements from us in the coming year, including additional investments and collaborations, as well as the results of our optimization and delivery work. I'm delighted to share this journey with you as Cloud for All gains momentum. We welcome discussion on how Intel can best work with industry leaders and customers to deliver the goals of Cloud for All to the enterprise.

When it comes to the cloud, there is no single answer to the question of how to ensure the optimal performance, scalability, and portability of workloads. There are, in fact, many answers, and they are all tied to the interrelated layers of the software-defined infrastructure (SDI) stack. The recently announced Intel Cloud for All Initiative is focused directly on working with cloud software vendors and the community to deliver fully optimized SDI stacks that can serve a wide array of apps and data. To better understand the underlying strategy driving the Cloud for All Initiative, it's important to see the relationships between the layers of the SDI stack.

 

In this post, we will walk through the layers of the SDI stack, as shown here.

 

[Figure: the SDI stack, from the infrastructure foundation up through the OS, virtualization, and orchestration layers to the developer environment]

The foundation

 

The foundation of software defined infrastructure is the creation of infrastructure resource pools establishing compute, storage, and network services. These resource pools utilize the performance and platform capabilities of Intel architecture to enable applications to understand and then control what they utilize. Our work with the infrastructure ecosystem is focused on ensuring that the infrastructure powering the resource pools is always optimized for a wide array of SDI stacks.


The OS layer

 

At the operating system level, the stack includes commonly used operating systems and software libraries that allow applications to achieve optimum performance while enabling portability from one environment to another. Intel has a long history of engineering with both OS vendors and the community, and has extended this work to lightweight OSes that provide greater efficiency for cloud native workloads.

 

The virtualization layer

 

Moving up the stack, we have the virtualization layer, which is essential to software-defined infrastructure; without virtualization, SDI would not be possible. But in this context, virtualization can include more than just typical hypervisors. To establish resource pools, the infrastructure components of compute, storage, and network are virtualized through various means, and the best resource pools are those that can continue to scale out to meet the growing needs of their consumers. Last but not least, the performance isolation provided by containers can be considered OS virtualization, which has enabled a whole new set of design patterns for developers. For both containers and hypervisors, Intel is working with software providers to fully utilize the capabilities of Intel® Virtualization Technology (Intel® VT) to drastically reduce performance overhead and increase security isolation. For both storage and network, we have additional libraries and instruction sets that help deliver the best possible performance for this wide array of infrastructure services.

 

The orchestration layer

 

There are numerous orchestration layers and schedulers available; for this discussion we will focus on those being built in the open: OpenStack, Apache Mesos, and Kubernetes. This layer provides central oversight of the status of the infrastructure: what is allocated and what is consumed, how applications or tenants are deployed, and how to best meet the goal of most data center infrastructure teams, increasing utilization while maintaining performance. Intel's engagement in the orchestration layer focuses on working with the industry both to harden this layer and to bring in advanced algorithms that can help all data centers become more efficient. Some examples are our work in the OpenStack community to improve the availability of the cloud services themselves and to provide rolling upgrades so that the cloud and its tenants are always on. In Mesos, we are working to help users of this technology use all available computing slack so they can improve their TCO.

 

The developer environment

 

The entire SDI infrastructure is really built to power the developers' code and data that all of us, as consumers, use every day. Intel has a long history of improving debugging tools, making it easier for developers to move to new design patterns (first multi-threaded, and now distributed systems), and helping developers get the most performance out of their code. We will continue to increase our focus here so that developers can concentrate on writing the best software and let the tools help them build always-on, highly performant apps and services.

 

For a close-up look at Intel's focus on standards-based innovation for the SDI stack, check out the related sessions at the Intel Developer Forum, which takes place August 18-20 in San Francisco. These sessions will include a class that dives into Intel's vision for the open, standards-based SDI stacks that are key to mainstream cloud adoption.

By Daniel Chacon


Arjan van de Ven, Senior Principal Engineer, Linux Kernel, Intel Open Source Technology Center

 

It has been a very exciting time since Intel announced Intel® Clear Containers, a feature of the Clear Linux* Project for Intel® Architecture, back in May. There has been a great deal of interest and enthusiasm from ecosystem vendors as well as the Linux developer community. My team and I have added a second day job: visiting with partners to discuss optimization of their solutions for Intel architecture by using various features of the Clear Linux Project!

 

Recently we have been working closely with CoreOS to integrate Intel Clear Containers technology with the rkt* container runtime. Due to the modular nature of rkt, which is implemented as pluggable "stages" of execution (0, 1, and 2), we can focus our efforts on Stage 1, which operates as root and sets up the containers, traditionally using cgroups. By modifying only this stage to integrate Intel Clear Containers, CoreOS can launch containers with security rooted in hardware, while continuing to deliver the deployment benefits of containerized apps.

 

We conceived of the Clear Linux Project as a vehicle to deliver a highly optimized Linux distribution that offers developers a reference for how to get the most out of the latest Intel silicon for cloud deployments. We are driving innovative approaches to existing issues, while also thinking of solutions to challenges that we anticipate in data centers and clouds of the near future. Intel Clear Containers offers a solution to one of the biggest challenges inhibiting the broader adoption of containers, especially in a multi-tenant environment like that of a public cloud service provider or even in an enterprise: security. Traditional Linux containers don’t provide enough security to give application developers the confidence to run their applications next to one from an unknown source. By optimizing the heck out of the Linux boot process, we have shown that Linux can boot with the security normally associated with virtual machines, almost as quickly as a traditional container. Thus we combine security rooted in hardware, via Intel Virtualization Technology (VT-x), with the development and deployment benefits which have caused application developers to gravitate to containers. Problem solved.

 

What’s next for the Clear Linux Project? We have recently turned our focus to performance:

 

  • AVX 2.0 deployment: We are making it easier to develop applications that take advantage of the new floating point and integer operations in new Haswell-based servers. AVX 1.0 allowed programmers and compilers to do floating point math highly in parallel (SIMD), which helps performance in compute-heavy workloads such as high-performance computing and machine learning. AVX 2.0 brought more operations and added support for integer operations alongside floating point math. However, because applications compiled for AVX 2.0 will generally not run on processors prior to Haswell, developers often favor the older AVX 1.0 for compatibility reasons. In Clear Linux we developed a key enhancement to the dynamic loader of glibc, making it possible to ship two versions of a library: one compiled for AVX 2.0 and one compiled for non-AVX 2.0 systems. Thus application performance is tailored to the underlying platform.
  • zlib compression improvements: zlib is a data compression library that is very widely used in the industry to reduce storage and networking requirements. We previously optimized zlib for Intel processors, making it up to 1.8x faster, and we continue to work on performance improvements in this library. That work was published in 2014, and we are using Clear Linux to showcase current and future improvements to the library.
  • AutoFDO: One of the compiler optimizations available in Linux is Feedback-Directed Optimization (FDO), which provides performance gains but is not widely used due to the high runtime overhead of profile collection, a tedious dual-compile usage model, and difficulties in generating representative data to train the tool. AutoFDO overcomes these challenges by collecting profile data on production systems, without requiring a second, instrumented build of the target binary. By integrating AutoFDO into Clear Linux, we see runtime performance gains resulting from software alone.
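A useful property of the zlib work above is that it is transparent to callers: the optimizations sit behind the standard API, so existing code benefits without modification. A minimal sketch of that API, here via Python's `zlib` binding to the same library:

```python
import zlib

# Compress a deliberately repetitive payload; higher levels trade
# CPU time for compression ratio. The payload here is made up for
# illustration - any bytes work.
data = b"software-defined infrastructure " * 1000
packed = zlib.compress(data, 6)
restored = zlib.decompress(packed)

assert restored == data  # lossless round trip
print(f"compression ratio: {len(data) / len(packed):.0f}x")
```

Because callers only see `compress`/`decompress`, a faster zlib underneath speeds up everything linked against it, which is why library-level optimization pays off so broadly.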

 

It’s a great time to be marrying cutting-edge open source software like containers with the features available in the underlying silicon, to solve real-world challenges. Stay tuned to ClearLinux.org to see what features we release next.

 

This week we will be at LinuxCon/CloudOpen/ContainerCon in Seattle, speaking about container security at two sessions. We will also be showcasing integration of Open Container Initiative/Docker application containers with Intel Clear Container technology, and speaking at the local Docker meetup. Right now, it’s all about containers! See related blogs at Core OS and in the Intel Data Stack.

Earlier this summer, Intel announced our Cloud for All initiative, signaling a deepening engagement with the cloud software industry on SDI delivery for mainstream data centers.  Today at IDF2015, I had my first opportunity since the announcement to discuss why Cloud for All is such a critical focus for Intel, for the cloud industry, and for the enterprises and service providers that will benefit from feature-rich enterprise cloud solutions. Delivering the agility and efficiency found today in the world's largest data centers to broad enterprise and provider environments has the potential to transform the availability and economics of computing and reframe the role of technology in the way we do business and live our lives.

 

Why this focus? Building a hyperscale data center from the ground up to power applications written specifically for cloud is a very different challenge than migrating workloads designed for traditional infrastructure to a cloud environment.  To move traditional enterprise workloads to the cloud, either an app must be rewritten for native cloud optimization or the SDI stack must be optimized to support enterprise workload requirements.  This means supporting things like live workload migration, rolling software upgrades, and failover. Intel's vision for pervasive cloud embraces both approaches, and while we expect applications to be optimized as cloud native over time, near-term cloud adoption in the enterprise hinges on SDI stack optimization that supports both traditional and cloud native applications.

 

How does this influence our approach to industry engagement in Cloud for All?  It means we need to enable a wide range of potential usage models while being pragmatic about the wide range of infrastructure solutions that exist across the world today.  While many are still running traditional infrastructure without self-service, there is a growing trend toward enabling self-service on existing and new SDI infrastructure through solutions like OpenStack, providing the well-known "give me a server" or "give me storage" capabilities: call it Cloud Type A, server focused.  Meanwhile, software developers over the last year have grown very fond of containers and are thinking not in terms of servers but in terms of app containers and connections: Cloud Type B, process focused.  Looking ahead, we can assume that many new data centers will be built with this as the foundation and will provide a portion of their capacity to traditional apps: a convergence of usage models that brings the infrastructure solutions forward.

 

[Figure: a potential path to SDI]

 

Enabling choice and flexibility, optimizing the underlying Intel architecture based infrastructure, and delivering easy-to-deploy solutions to market will help secure broad adoption.

 

So where are we with optimization of SDI stacks for underlying infrastructure? The good news is, we’ve made great progress with the industry on intelligent orchestration.  In my talk today, I shared a few examples of industry progress.

 

I walked the audience through one example with Apache Mesos, detailing how hyperscale orchestration is achieved through a two-level scheduler and how frameworks can be built to handle complex use cases, even storage orchestration.  I also demonstrated a new technology for Mesos oversubscription that we're calling Serenity, which helps drive maximum infrastructure utilization.  This has been a partnership between Mesosphere and Intel engineers in the community to help lower the TCO of data centers, something I care a lot about: real business results with technology.
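The core idea behind oversubscription can be sketched in a few lines. This is a simplified model, not the actual Serenity algorithm: slack is the gap between what tenants have reserved and what they actually use, and that slack can be offered to best-effort (revocable) tasks to raise utilization:

```python
def revocable_offer(allocated_cpus, used_cpus, safety_margin=0.1):
    """Estimate CPUs that could be offered as revocable resources.

    Simplified model: slack = allocated - used, minus a fraction of
    the allocation held back to absorb demand spikes. The 10% margin
    is an arbitrary choice for this sketch.
    """
    slack = allocated_cpus - used_cpus - safety_margin * allocated_cpus
    return max(0.0, slack)

# A node with 32 reserved CPUs averaging 12 in use can lend out
# roughly 16.8 CPUs of otherwise idle, already-paid-for capacity.
print(revocable_offer(32, 12))
```

Real oversubscription controllers must also detect interference and revoke best-effort work when the primary tenant's demand returns, which is where most of the engineering effort goes.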

 

I also shared how infrastructure telemetry and infrastructure analytics can deliver improved stack management. I gave an example of a power- and thermal-aware orchestration scheduler that has helped Baidu net a data center PUE of 1.21, with 24% savings in potential cooling energy.  Security is also a significant focus, and I walked through an approach that uses Intel VT technology to improve container security isolation.  In fact, CoreOS announced today that its rkt 0.8 release has been optimized for Intel VT using the approach outlined in my talk, and we expect more work with the container industry toward delivering security capabilities previously present only in traditional hypervisor-based environments.
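For reference, PUE (power usage effectiveness) is total facility energy divided by the energy delivered to the IT equipment itself, so a PUE of 1.21 means only 21% overhead goes to cooling, power delivery, and other non-IT loads. The energy figures below are hypothetical, chosen only to reproduce the cited ratio:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: 1.0 is the theoretical ideal."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical numbers chosen to yield the PUE cited in the talk.
print(round(pue(1210, 1000), 2))  # → 1.21
```

Telemetry-driven schedulers improve this ratio by shaping where and when work runs so that less of the facility's energy is spent on cooling.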

 

But what about data center application optimization for SDI?  On that front, I ended my talk by announcing the first Cloud for All Challenge, a competition for developers to rewrite infrastructure software applications for cloud native environments.  I'm excited to see developer response to our challenge because the opportunity is ripe to introduce cloud native applications to the enterprise using container orchestration, and Intel wants to help accelerate the software industry toward delivery of cloud native solutions.  If you're an app developer, I encourage you to take part in this Challenge!  The winning team will receive $5,000 of cold, hard cash and bragging rights at being at the forefront of your field.  Simply contact cloudforall@intel.com for information, and please see the preliminary entry form.