
The Data Stack


Ready to learn more about that journey along the road? In an earlier post, I posed the question, “Where are you on the road to software-defined infrastructure?” In that post, I outlined the road to SDI in terms of a maturity model framework with five stages: Standardized, Virtualized, Automated, Orchestrated, and SLA Managed. In this post, I will take a closer look at the Automated and Orchestrated stages.

 

These stages are a bit like twin cities on the road to SDI. To reach the second stop, Orchestration, you need to get beyond the first, Automation. So we will begin there.

 

In the Automated stage of the SDI maturity model, IT resources are pooled and provisioned with minimal manual intervention and with the right security in place. In a typical progression, an organization will first automate routine tasks, like the installation of patches and the deployment of virtual machines, to reduce the variability of managing its existing environments. Only then will it be able to move on to the automation of resource pools. With this complete, it can finally automate the application stack with approaches like platform as a service (PaaS) or other application-related layers.
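To make that first step concrete, here is a minimal sketch of what “minimal manual intervention” looks like in practice: a provisioning routine that is idempotent, so it can run repeatedly without side effects, and that applies security at creation time. The in-memory inventory is a hypothetical stand-in for a real provisioning API such as OpenStack or vSphere.

```python
# Idempotent VM provisioning with security applied up front. The
# in-memory "inventory" dict is a hypothetical stand-in for a real API.

inventory = {}  # vm name -> vm record

def provision_vm(name, image, flavor, security_group):
    """Create a VM only if it doesn't already exist (idempotent)."""
    if name in inventory:
        return inventory[name]  # already provisioned: nothing to do
    vm = {"name": name, "image": image, "flavor": flavor,
          "security_group": security_group, "state": "running"}
    inventory[name] = vm  # in real life: API calls, patching, and so on
    return vm

# Re-running the same request is safe, which is what lets a tool,
# rather than a person, drive provisioning:
provision_vm("web-01", "ubuntu-lts", "m1.small", "web-dmz")
provision_vm("web-01", "ubuntu-lts", "m1.small", "web-dmz")  # no-op
```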

 

There’s one important twist and turn in the road here. To put your organization on the best path to SDI, you need to start with more than a collection of standalone tools that provide automation capabilities, like the ones most environments already have. You need tools that can be, or already are, integrated into an end-to-end orchestration platform.

 

To achieve this goal of an integrated toolset, you basically have two choices. You can pick best-of-breed products and do the heavy lifting required to integrate your own custom environment, or you can buy an off-the-shelf solution that is tied to a single vendor’s platform. I explored this topic in an earlier post that asked the question, “Should you take the high road or the low road to SDI?”

 

Regardless of the route you take, automation provides the basic toolset and pooling necessary for orchestration, as well as capabilities for cloud-like provisioning models. So once your automation tools are in place, you’re poised to move on to the orchestration of IT processes.

 

Orchestration takes automated processes to a new, more intelligent level. At this higher stage of maturity, an orchestration platform optimizes the allocation of data center resources. It collects hardware telemetry data and uses that information to place applications on the best servers, with features that enable the acceleration of the workloads. (WARNING: For this stretch of road to be passable, most applications have to become more cloud-aware, a topic I will cover in a future blog.)

 

The orchestration engine takes its direction from policies that you set up at the front end. Following these policies, it places workloads in approved locations for optimal performance and the right levels of security. It also acts as an IT watchdog that spots performance issues and takes remedial actions automatically. Along the way, the orchestration platform accumulates learnings that it will put to use to make even better decisions in the future.
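As a rough illustration of this policy-driven placement, consider the sketch below. It is not any particular vendor’s engine; the telemetry fields, policy keys, and thresholds are all invented for the example. The engine filters out unhealthy or non-compliant servers, then picks the best remaining candidate.

```python
# Toy policy-driven placement: filter servers by health and policy,
# then choose the least-loaded compliant machine. All fields invented.

servers = [
    {"name": "rack1-n3", "cpu_free": 0.62, "zone": "secure", "healthy": True},
    {"name": "rack2-n1", "cpu_free": 0.85, "zone": "general", "healthy": True},
    {"name": "rack2-n7", "cpu_free": 0.91, "zone": "secure", "healthy": False},
]

policy = {"required_zone": "secure", "min_cpu_free": 0.25}

def place(workload_policy, candidates):
    """Return the least-loaded healthy server that satisfies the policy."""
    eligible = [s for s in candidates
                if s["healthy"]
                and s["zone"] == workload_policy["required_zone"]
                and s["cpu_free"] >= workload_policy["min_cpu_free"]]
    if not eligible:
        # This is where a real engine would take remedial action.
        raise RuntimeError("no compliant server available")
    return max(eligible, key=lambda s: s["cpu_free"])

print(place(policy, servers)["name"])  # -> rack1-n3
```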

 

Here’s another twist to keep in mind. We’re in a period of rapid evolution in the orchestration space. This is a young and immature part of the SDI ecosystem. We’re basically at Orchestration 1.0. Things will be quite different as we move to Orchestration 2.0, with more mainstream solutions, and 3.0, the point at which solutions will be firmly established and widely used.

 

What does this mean to your organization? At a practical level, you need to take a long-term view. Be flexible in your scripting, and create a foundation that gives you choices for the future. Think about your applications and how they can be designed to work in an orchestrated environment. On the road to SDI, people tend to focus on the next stop, rather than the entire journey. Those who focus on the big picture will be in a better position to realize the full promise of SDI.

 

From a big-picture point of view, automation and orchestration create the foundation for the ultimate destination—the SLA Managed stage in the SDI maturity model. At this stage, SDI allows you to deploy hybrid clouds, control workloads and enforce security through an end-to-end orchestration layer that automatically enforces policies for application workloads and the service levels they require. The SDI environment makes sure the application gets the resources it needs for optimal performance and full compliance with the governing policies.

 

This isn’t a dream. It’s a very achievable destination for enterprise data centers. The key is to find the route that is right for your organization.

 

Happy travels!

 

Find me on Twitter @EdLGoldman to share your thoughts.

Are you ready to continue the journey to software-defined infrastructure? In an earlier post, I explored Two Key Stops along the Road to SDI: Automation and Orchestration. These stops are essential milestones in the trip to the ultimate destination: an SLA Managed data center.



 

In the SDI maturity model, the Automation and Orchestration stages feed into the SLA Managed stage, but the truth is they alone won’t get you there. To get to your final destination, your applications must be written to take full advantage of a cloud environment. Cloud-aware apps are like the vehicles on the road to the SLA Managed data center.

 

In more specific terms, cloud-aware apps know what they need to do to fully leverage the automation and orchestration capabilities of the SDI platform. They are written to expand and contract automatically to maintain optimal levels of performance, availability, and efficiency. They understand that in a cloud environment, there are multiple routes to a database and multiple systems available to process data. In essence, they do not worry about redundancy, because the automation and orchestration layers manage it in the environment.
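A small sketch of the “multiple routes to a database” idea, with an invented connect() stub in place of a real driver: instead of binding to one endpoint, the app walks its list of replicas until one answers.

```python
# Cloud-aware connection handling: try every known database route
# before failing. connect() is a stub standing in for a real driver.

import random

REPLICAS = ["db-a.internal", "db-b.internal", "db-c.internal"]

def connect(endpoint):
    """Stub driver call; fails randomly to simulate a lost route."""
    if random.random() < 0.3:
        raise ConnectionError(f"{endpoint} unreachable")
    return f"connection<{endpoint}>"

def get_connection(replicas):
    """A conventional app gives up when its one endpoint dies;
    a cloud-aware app simply tries the next route."""
    last_error = None
    for endpoint in random.sample(replicas, len(replicas)):
        try:
            return connect(endpoint)
        except ConnectionError as err:
            last_error = err  # this route failed; move on
    raise RuntimeError("all database routes exhausted") from last_error

print(get_connection(REPLICAS))
```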

 

 

This is quite unlike the conventional approach to apps. Most of today’s apps are tightly coupled with a particular database and a certain set of infrastructure resources. They require items such as session persistence and connection management. If any of the links break—for example, the app loses its connection to the database—the app goes down and IT admins go into fire-drill mode as they scramble to bring the app back online.  Over the past 20 years, we have done our best to automate the fire drill.

 

In a metaphorical view, we’re talking about the difference between baseball and football. In baseball, things pretty much proceed in a linear and predictable manner. There are few moving parts—there’s one pitcher throwing to one batter—and aside from the occasional base-stealer you pretty much know where all the players are at all times. This is the way things work with the conventional app.

 

In a cloud environment, things are more football-like. The players are all over the place and the same play can unfold in very different ways. When a receiver runs the wrong route, the play doesn’t come to a stop. The quarterback simply looks for other receivers who are in position to make a play. The cloud-aware app functions like a quarterback who improvises to keep the ball moving down the field.

 

Here’s where things get harder. It’s not a trivial undertaking to make apps cloud-aware. In the case of legacy apps, the code has to be pretty much rewritten from top to bottom to build in cloud-awareness, or the legacy code needs to be wrapped in services so that cloud-aware behavior can be built around it. So we’re talking about a lot of heavy lifting for your software developers.

 

The good news is you don’t have to do all of this heavy lifting at once. We’re still quite some time away from the day of the SLA Managed data center. We have to first build the integrated orchestration platforms and automation toolsets that enable a software-defined approach to the data center. The key is to understand that this day is coming, and begin taking steps to make your apps cloud-aware.

 

Any new apps should be written to be cloud-aware. As for your legacy apps, you won’t be able to rewrite them all at once, so you’re going to need to identify the apps that are most likely to benefit from cloud awareness as you move to software-defined infrastructure, or simply wrap them in services.

 

The wrapped applications can help move many critical apps to a more cloud-like environment without rewriting a lot of code. But those apps won’t be able to benefit from all of the goodness of an SLA Managed data center. In an SLA Managed world, software-defined infrastructure and the apps work in tandem to deliver optimal performance with minimal downtime.

 

These gains are made possible by the ability of the orchestration platform to move workloads and supporting resources around on the fly to meet the policies you set for your applications. When demand spikes, the SDI environment grabs the resources the app needs to keep performance in line with the required service levels, even if that means bursting to a public cloud to gain additional processing power.
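In pseudocode terms, the decision the platform keeps making looks something like the sketch below. The policy, numbers, and function are invented; the point is the order of preference: do nothing while the SLA holds, use private capacity next, and burst to a public cloud only for the overflow.

```python
# Toy SLA-driven capacity decision; all names and numbers illustrative.

POLICY = {"max_latency_ms": 200}

def plan_capacity(demand_rps, private_capacity_rps, latency_ms):
    """Decide where the next increment of capacity comes from."""
    if latency_ms <= POLICY["max_latency_ms"]:
        return "steady: SLA met with current resources"
    if demand_rps <= private_capacity_rps:
        return "scale up: grab idle private-cloud resources"
    overflow = demand_rps - private_capacity_rps
    return f"burst: send ~{overflow} rps to the public cloud"

# A holiday-shopping spike that outruns the private cloud:
print(plan_capacity(demand_rps=12000, private_capacity_rps=9000,
                    latency_ms=340))
```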

 

If this sounds like IT nirvana, you’ve got it. In the SLA Managed data center, application downtime will be rare, and unpredictable application performance will seem more like a problem from the past than a constant threat in the present. You’ll be able to breathe easier when unusually large crowds of holiday shoppers converge on a particular app, because you’ll know that the backend systems will take care of themselves.

 

So that’s the 30,000-foot view of the last stretches of the road to SDI. If you consider where we are today and where we need to travel, you can see that we are talking about a long road, and one that can have many unique twists and turns. The key is to think about how you’re going to get to SDI, identify the vehicles that will move you forward, and then begin your journey.

 

Find me @EdLGoldman, and share your thoughts and comments.


If you have watched a movie on Netflix*, called for a ride from Uber* or paid somebody using Square*, you have participated in the digital services economy. Behind those services are data centers and networks that must be scalable, reliable and responsive.

 

Dynamic resource pooling is one of the benefits of a software defined infrastructure (SDI) and helps unlock scalability in data centers to enable innovative services.

 

How does it work? In a recent installment of Intel’s Under the Hood video series, Sandra Rivera, Intel Vice President, Data Center Group and General Manager, Network Platforms Group, provides a great explanation of dynamic resource pooling and what it takes to make it happen.

 

In the video, Sandra explains how legacy networks, built using fixed-function, purpose-built network elements, limit scalability and new service deployment. But when virtualization and software defined networking are combined into a software defined infrastructure, the network can be much more flexibly configured.

 

Pools of virtualized networking, compute and storage functionality can be provisioned in different configurations, all without changing the infrastructure, to support the needs of different applications. This is the essence of dynamic resource pooling.
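One way to picture it: a single shared pool, with application-specific slices carved out on request. The sketch below is purely conceptual; the units and the two example services are invented.

```python
# Conceptual resource-pool allocator: different configurations drawn
# from one shared pool, with no change to the underlying infrastructure.

pool = {"vcpus": 512, "storage_tb": 200, "net_gbps": 400}

def allocate(app, vcpus, storage_tb, net_gbps):
    """Carve an application-specific slice out of the shared pool."""
    request = {"vcpus": vcpus, "storage_tb": storage_tb,
               "net_gbps": net_gbps}
    if any(pool[k] < v for k, v in request.items()):
        raise RuntimeError(f"{app}: pool exhausted")
    for k, v in request.items():
        pool[k] -= v  # the slice leaves the shared pool
    return {"app": app, **request}

# Two very different services provisioned from the same infrastructure:
video = allocate("video-streaming", vcpus=64, storage_tb=80, net_gbps=200)
rides = allocate("ride-matching", vcpus=128, storage_tb=2, net_gbps=20)
```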

 

Getting to an infrastructure that supports dynamic resource pooling takes the right platform. Sandra talks about how Intel is helping developers build these platforms with a strategy that starts with powerful silicon building blocks and software ingredient technology, and extends to support for open standards development, ecosystem building, collaboration on technology trials, and delivery of open reference platforms.

 

It is an exciting time for the digital services economy – who knows what service will become the next Netflix, Uber or Square!

 

There’s much more to Sandra’s overview of dynamic resource pooling, so I encourage you to watch it in its entirety.

 

When it comes to the cloud, there is no single answer to the question of how to ensure the optimal performance, scalability, and portability of workloads. There are, in fact, many answers, and they are all tied to the interrelated layers of the software-defined infrastructure (SDI) stack. The recently announced Intel Cloud for All Initiative is aimed squarely at working with cloud software vendors and the community to deliver fully optimized SDI stacks that can serve a wide array of apps and data. To better understand the underlying strategy driving the Cloud for All Initiative, it’s important to see the relationships between each layer of the SDI stack.

 

In this post, we will walk through the layers of the SDI stack, as shown here.

 

[Figure: the SDI stack]

 

The foundation

 

The foundation of Software Defined Infrastructure is the creation of infrastructure resource pools that establish compute, storage, and network services. These resource pools utilize the performance and platform capabilities of Intel architecture to enable applications to understand, and then control, what they utilize. Our work with the infrastructure ecosystem is focused on ensuring that the infrastructure powering the resource pools is always optimized for a wide array of SDI stacks.

The OS layer

 

At the operating system level, the stack includes commonly used operating systems and software libraries that allow applications to achieve optimum performance while enabling portability from one environment to another. Intel has a long history of engineering with both OS vendors and the community, and has extended this work to lightweight OSes that provide greater efficiency for cloud-native workloads.

 

The Virtualization layer

 

Moving up the stack, we have the virtualization layer, which is essential to software-defined infrastructure: without virtualization, SDI would not be possible. But in this context, virtualization can include more than just typical hypervisors. In order to establish resource pools, the infrastructure components of compute, storage, and network are virtualized through various means. The best resource pools are those that can continue to scale out to meet the growing needs of their consumers. Last but not least, the performance isolation provided by containers can be considered OS-level virtualization, which has enabled a whole new set of design patterns for developers to use. For both containers and hypervisors, Intel is working with software providers to fully utilize the capabilities of Intel® Virtualization Technology (Intel® VT) to drastically reduce performance overhead and increase security isolation. For both storage and network, we have additional libraries and instruction sets that help deliver the best performance possible for this wide array of infrastructure services.

 

The Orchestration layer

 

There are numerous orchestration layers and schedulers available; for this discussion, however, we will focus on those being built in the open: OpenStack, Apache Mesos, and Kubernetes. This layer provides central oversight of the status of the infrastructure, what is allocated and what is consumed, how applications or tenants are deployed, and how to best meet the goal of most data center infrastructure teams: increasing utilization while maintaining performance. Intel’s engagement within the orchestration layer focuses on working with the industry both to harden this layer and to bring in advanced algorithms that can help all data centers become more efficient. Some examples are our work in the OpenStack community to improve the availability of the cloud services themselves and to provide rolling upgrades so that the cloud and tenants are always on. In Mesos, we are working to help users of this technology use all available computing slack so they can improve their TCO.
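As a concrete, if simplified, look at the utilization question, the sketch below uses the official Kubernetes Python client (assuming a working kubeconfig) to compare each node’s allocatable CPU against what the pods scheduled there have requested; that gap is exactly the “computing slack” an orchestrator tries to reclaim.

```python
# Compare requested vs. allocatable CPU per node with the official
# kubernetes Python client. Assumes a reachable cluster and kubeconfig.

from kubernetes import client, config

def cpu_to_millicores(value):
    """Parse Kubernetes CPU quantities such as '500m' or '2'."""
    return int(value[:-1]) if value.endswith("m") else int(float(value) * 1000)

config.load_kube_config()
v1 = client.CoreV1Api()

requested = {}  # node name -> total requested millicores
for pod in v1.list_pod_for_all_namespaces().items:
    node = pod.spec.node_name
    if node is None:
        continue  # pending pods haven't been placed yet
    for c in pod.spec.containers:
        reqs = (c.resources.requests or {}) if c.resources else {}
        requested[node] = requested.get(node, 0) + cpu_to_millicores(reqs.get("cpu", "0"))

for node in v1.list_node().items:
    name = node.metadata.name
    alloc = cpu_to_millicores(node.status.allocatable["cpu"])
    used = requested.get(name, 0)
    print(f"{name}: {used}/{alloc} mCPU requested ({used / alloc:.0%})")
```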

 

The Developer environment

 

The entire SDI infrastructure is really built to power the developers’ code and data that all of us, as consumers, use every day of our lives. Intel has a long history of helping improve debugging tools, making it easier for developers to move to new design patterns like multi-threaded and, now, distributed systems, and helping developers get the most performance out of their code. We will continue to increase our focus here to make sure that developers can focus on making the best software, and let the tools help them build always-on, high-performance apps and services.

 

For a close-up look at Intel’s focus on standards-based innovation for the SDI stack, check out the related sessions at the Intel Developer Forum, which takes place August 18 – 20 in San Francisco. These sessions will include a class that dives into the Intel vision for the open, standards-based SDI stacks that are the key to mainstream cloud adoption.

Cloud computing has been a tremendous driver of business growth over the past five years. Digital services such as Uber, AirBnB, Coursera, and Netflix have defined the consumer zeitgeist while redefining entire industries in the process. This first wave of cloud-fueled business growth has largely been created by businesses leveraging cloud-native applications aimed at consumer services. Traditional enterprises that seek the same agility and efficiency the cloud provides have viewed migration of traditional enterprise applications to the cloud as a slow and complex challenge. At the same time, new cloud service providers are seeking to compete on cost parity with large providers, and industry-standard solutions that can help have been slow in arriving. The industry simply isn’t moving fast enough to address these very real customer challenges, and our customers are asking for help.

 

To help solve these real issues, Intel is announcing the Cloud for All Initiative, with the goal of accelerating the deployment of tens of thousands of clouds over the next five years. This initiative is focused solely on cloud adoption, to deliver the benefits of cloud to all of our customers. It represents an enormous efficiency gain and strategic transition for enterprise IT and cloud service providers. The key to delivering the efficiency of the cloud to the enterprise is rooted in software-defined infrastructure. This push for more intelligent and programmable infrastructure is something we’ve been working on at Intel for several years. The ultimate goal of software-defined infrastructure is one where compute, storage, and network resource pools are dynamically provisioned based on application requirements.

 

Cloud for All has three key objectives:

 

  1. Invest in broad industry collaborations to create enterprise-ready, easy-to-deploy SDI solutions
  2. Optimize SDI stacks for high efficiency across workloads
  3. Align the industry on standards and development focus to accelerate cloud deployment

 

Through investment, Intel will utilize our broad ecosystem relationships to ensure that a choice of SDI solutions, supporting both traditional enterprise and cloud-native applications, is available in easy-to-consume options. This work will include scores of industry collaborations that ensure SDI stacks have frictionless integration into data center infrastructure.

 

Through optimization, Intel will work with cloud software providers to ensure that SDI stacks are delivered with rich enterprise feature sets, are highly available and secure, and can scale to thousands of nodes. This work will include the full optimization of software to take advantage of Intel architecture features and technologies like Intel Virtualization Technology, cloud integrity technology, and platform telemetry, all to deliver optimal enterprise capabilities.

 

Through industry alignment, Intel will use its leadership role in industry organizations, as well as our work with the broad developer community, to make sure the right standards are in place to ensure workloads have true portability across clouds. This standardization will help enterprises gain the confidence to deploy a mix of traditional and cloud-native applications.

 

This work has already started. We have been engaged in the OpenStack community for a number of years as a consumer, and more recently joined the Foundation board last year. We have used that user and leadership position to push for features needed in the enterprise. Our work does not stop there, however: over the past few months we’ve announced collaborations with cloud software leaders including CoreOS, Docker, and Red Hat, highlighting enterprise readiness for OpenStack and container solutions. We’ve joined with other industry leaders to form the Open Container Initiative and Cloud Native Computing Foundation to drive the industry standards and frameworks for cloud-native applications.

 

Today, we’ve announced our next step in Cloud for All: a strategic collaboration with Rackspace, the co-founder of OpenStack and a company with a deep history of collaboration with Intel. We’ve come together to deliver a stable, predictable, and easy-to-operate enterprise-ready OpenStack scalable to thousands of nodes. This will be accomplished through the creation of the OpenStack Innovation Center, where we will assemble large developer teams across Intel and Rackspace to work together to address the key challenges facing the OpenStack platform. Our upstream contributions will align with the priorities of the OpenStack Foundation’s Enterprise Workgroup. To facilitate this effort, we will create the Hybrid Cloud Testing Cluster, a large-scale environment open to all developers in the community wishing to test their code at scale, with the objective of improving the OpenStack platform. In total, we expect this collaboration to engage hundreds of new developers, internally and through community engagement, to address critical requirements for the OpenStack community.

 

Of course, we’ve only just begun. You can expect to hear dozens of announcements from us in the coming year, including additional investments and collaborations, as well as the results of our optimization and delivery work. I’m delighted to be able to share this journey with you as Cloud for All gains momentum. We welcome discussion on how Intel can best work with industry leaders and customers to deliver the goals of Cloud for All to the enterprise.


With the cloud software industry advancing on a selection of Software Defined Infrastructure ‘stacks’ to support enterprise data centers, the question of application portability comes squarely into focus. A new ‘style’ of application development has started to gather momentum in both the public cloud and the private cloud. Cloud native applications, as this new style has been named, are applications that are container packaged, dynamically scheduled, and microservices oriented. They are rapidly gaining favor for their improved efficiency and agility compared to more traditional monolithic data center applications.

 

However, creating a cloud native application does not eliminate the dependencies on traditional data center services. Foundational services such as networking, storage, automation, and, of course, compute are all still very much required. In fact, since the concept of a full virtual machine may not be present in a cloud native application, these applications rely significantly on their infrastructure software to provide the right components. When done well, a ‘Cloud Native Application’ SDI stack can provide efficiency and agility previously seen only in a few hyperscale environments.

 

Another key aspect of the cloud native application is that it should be highly portable. This portability between environments is a massive productivity gain for both developers and operators. An application developer wants the ability to package an application component once and have it be reusable across all clouds, both public and private. A cloud operator wants the freedom to position portions of an application where they make the most sense, whether on a private cloud or with a public cloud partner. Cloud native applications are the next step in true hybrid cloud usage.

 

So, with this promise of efficiency, operational agility, and portability, where do data center managers look for definitions of how the industry will address movement of apps between stacks? How can one deploy a cloud native app and ensure it can be moved across clouds and SDI stacks without issue? Without a firm answer, can one really develop cloud native apps with confidence that portability will not be limited to environments running identical SDI stacks? These are the types of questions that often stall organizational innovation, and they are the reason Intel has joined with other cloud leaders in the formation of the Cloud Native Computing Foundation (CNCF).

 

Announced this week at the first-ever KuberCon event, the CNCF has been chartered to provide guidance, operational patterns, standards and, over time, APIs to ensure container-based SDI stacks are both interoperable and optimized for a seamless, performant developer experience. The CNCF will work with the recently formed Open Container Initiative (OCI) toward a synergistic goal of addressing the full scope of container standards and the supporting services needed for success.

 

Why announce this at KuberCon? The goal of the CNCF is to foster innovation in the community around these application models, and the best way to speed innovation is to start with some seed technologies. Much the same way it is easier to start writing (a blog, perhaps?) when you have a few sentences on screen rather than a blank page, the CNCF is seeding the effort with proven code. Kubernetes, having just passed its 1.0 release, will be one of the first technologies used to kick-start this effort. Many more technologies, and even full stacks, will follow, with a goal of several ‘reference’ SDI platforms that support the portability required.

 

What is Intel’s role here? Based on our decades of experience helping lead industry innovation and standardization across computing hardware and open source software domains, we are firmly committed to the CNCF goals and plan to actively participate in the leadership body and Technical Oversight Committee of the Foundation. This effort is reflective of our broader commitment to working with the industry to accelerate the broad use of cloud computing through delivery of optimized, feature-complete SDI stacks that are easy to consume and operate. This engagement complements our existing leadership roles in the OCI, the OpenStack Foundation, and the Cloud Foundry Foundation, as well as our existing work driving solutions with the SDI platform ecosystem.

 

With the cloud software industry accelerating its pace of innovation, please stay tuned for more details on Intel’s broad engagement in this space. To deepen your engagement with Intel, I invite you to join us at the upcoming Intel Developer Forum in San Francisco to gain a broader perspective on Intel’s strategy for acceleration of cloud.


 

 

My hometown of Portland, Oregon is home this week to the first-ever KuberCon Launch event, bringing together the Kubernetes ecosystem at OSCON. While the industry celebrates the delivery of Kubernetes 1.0 and the formation of the Cloud Native Computing Foundation, this week is also an opportunity to gauge the state of development around open source container solutions.

 

Why so much attention on containers? Basically, it is because containers help software developers and infrastructure operators at the same time. This tech will help put mainstream data centers and developers on the road to the advanced, easy-to-consume, easy-to-ship-and-run hyperscale technologies that are a hallmark of the world’s largest and most sophisticated cloud data centers. The container approach packages up applications and software libraries to create units of computing that are both scalable and portable—two keys to the agile data center. With the addition of Kubernetes and other key technologies like Mesos, the orchestration and scheduling of containers is making the formerly impossible simple.

 

This is a topic close to the hearts of many people at Intel. We are an active participant in the ecosystem that is working to bring the container model to a wide range of users and data centers as part of our broader strategy for standards based stack delivery for software defined infrastructure.  This involvement was evidenced earlier this year through our collaborations with both CoreOS and Docker, two leading software players in this space, as well as our leadership engagement in the new Open Container Project.

 

As part of the effort to advance the container cause, Intel is highlighting the latest advancements in our CoreOS collaboration to advance and optimize the Tectonic stack, a commercial distribution of Kubernetes plus CoreOS software. At KuberCon, Intel, Redapt, Supermicro, and CoreOS are showing a Tectonic rack running on bare metal, highlighting the orchestration and portability that Tectonic provides to data center workloads. Local rock-star company Jive has been very successful in running its workloads on this platform, showing that its app can move between public cloud and on-premise bare-metal cloud. We’re also announcing extensions of our collaboration with CoreOS to drive broad developer training for Tectonic, and title sponsorship of CoreOS’s Tectonic Summit event planned for December 2nd and 3rd in New York. For details, check out the CoreOS news release.

 

We’re also featuring an integration of an OpenStack environment running Kubernetes-based containers within an enterprise-ready appliance. This collaboration with Mirantis, Redapt, and Dell highlights the industry’s work to drive open source SDI stacks into solutions that address enterprise customers’ needs for simpler-to-deploy solutions, and demonstrates the progress the industry has made in integrating Kubernetes with OpenStack as it reaches 1.0.

 

Our final demonstration features a new software and hardware collaboration with Mesosphere, the company behind much of the engineering for Mesos, which provides container scheduling for Twitter, Apple Siri, and AirBnB, among other digital giants. Here, we’ve worked to integrate Mesosphere’s DCOS platform with Kubernetes on a curated and optimized hardware stack supplied by Quanta. This highlights yet another example of an open source SDI stack integrating efficient container-based virtualization to drive the portability and orchestration of hyperscale.

 

For a closer look at Intel’s focus on standards-based innovation for the software-defined infrastructure stack, check out my upcoming presentation at the Intel Developer Forum (IDF). I’ll be detailing further advancements in our industry collaborations to deliver SDI to the masses, as well as going deeper into the technologies Intel is integrating into data center infrastructure to optimize SDI stacks for global workload requirements.

HP and Intel are again joining forces to develop and deliver industry-specific solutions with targeted workload optimization and deep domain expertise to meet the unique needs of High Performance Computing (HPC) customers. These solutions will leverage Intel’s HPC scalable system framework and HP’s solution framework for HPC to take HPC mainstream.

 

HP systems innovation augments Intel’s chip capabilities with end-to-end systems integration, density optimization and energy efficiency built into each HP Apollo platform.  HP’s solution framework for HPC optimizes workload performance for targeted vertical industries.  HP offers clients Solutions Reference Architectures that deliver the ability to process, analyze and manage data while addressing the complex requirements across a variety of industries including Oil and Gas, Financial Services and Life Sciences.  With HP HPC solutions customers can address their need for HPC innovation with an infrastructure that delivers the right Compute for the right workload at the right economics…every time!

 

In addition to combining Intel’s HPC scalable system framework with HP’s solution framework for HPC to develop HPC-optimized solutions, the HPC Alliance goes a step further by introducing a new Center of Excellence (CoE) specifically designed to spur customer innovation. This CoE combines deep vertical industry expertise and technological understanding with the appropriate tools, services, and support. This approach makes it simple and easy for our customers to drive innovation with HPC. This service is open to all HPC customers, from academia to industry.

 

Today, in Grenoble, France, customers have access to HP and Intel engineers at the HP and Intel Solutions Center. Clients can conduct a proof of concept using the latest HP and Intel technologies. Furthermore, HP and Intel engineers stand ready to help customers modernize their codes to take advantage of new technologies, resulting in faster performance, improved efficiencies, and, ultimately, better business outcomes.

 

 

HP and Intel will make the HPC Alliance announcement at ISC'15 in Frankfurt, Germany July 12-16, 2015. To learn more, visit www.hp.com and search ‘high performance computing’.

For High Performance Computing (HPC) users who leverage open-source Lustre* software, a good file system for big data is now getting even better. That’s a key takeaway from announcements Intel is making this week at ISC15 in Frankfurt, Germany.

 

Building on its substantial contributions to the Lustre community, Intel is rolling out new features that will make the file system more scalable, easier to use, and more accessible to enterprise customers. These features, incorporated in Intel® Enterprise Edition for Lustre* 2.3, include support for Multiple Metadata Targets in the Intel® Manager for Lustre* GUI.

 

The Multiple Metadata Target feature allows Lustre metadata to be distributed across servers. Intel Enterprise Edition for Lustre 2.3 supports remote directories, which allow each metadata target to serve a discrete sub-directory within the file system name space. This enables the size of the Lustre namespace and metadata throughput to scale with demand and provide dedicated metadata servers for projects, departments, or specific workloads.
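For a feel of how this is used in practice, here is a small sketch, to be run as root on a Lustre client with DNE enabled, that places each top-level project directory on its own metadata target using the standard lfs tool; the paths and MDT indexes are examples only.

```python
# Create Lustre "remote directories" whose metadata is served by a
# specific MDT, via the standard lfs utility. Paths/indexes are examples.

import subprocess

def make_remote_dir(path, mdt_index):
    """Create a directory whose metadata lives on the given MDT."""
    subprocess.run(["lfs", "mkdir", "-i", str(mdt_index), path],
                   check=True)

# Give each department a dedicated metadata server:
make_remote_dir("/mnt/lustre/genomics", 1)  # served by MDT0001
make_remote_dir("/mnt/lustre/physics", 2)   # served by MDT0002
```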

 

This latest Enterprise Edition for Lustre release supports clients running Red Hat Enterprise Linux (RHEL) 7.1 as well as client nodes running SUSE Linux Enterprise 12.

 

The announcements don’t stop there. Looking ahead a bit, Intel is preparing to roll out new security, disaster recovery, and enhanced support features in Intel® Cloud Edition for Lustre 1.2, which will arrive later this year. Here’s a quick look at these coming enhancements:

 

  • Enhanced security— The new version of Cloud Edition adds network encryption using IPSec to provide enhanced security. This feature can be automatically configured to ensure that the communication of important data is always secure within the file system, and when combined with EBS encryption (released in version 1.1.1 of Cloud Edition) provides a complete and robust end-to-end security solution for cloud-based I/O.
  • Disaster recovery—Existing support for EBS snapshots is being expanded to support the recovery of a complete file system. This feature enhances file system durability and increases the likelihood of recovering important data in the case of failure or data corruption.
  • Supportability enhancements—Cloud Edition supportability has been enhanced with the addition of client mounting tools, updates to instance and target naming, and added network testing tools. These changes provide a more robust framework for administrators to deploy, manage, and troubleshoot issues when running Cloud Edition.

 

Making adoption and use of Lustre easier for organizations is a key driver behind the Intel Manager for Lustre software. This management interface includes easy-to-use tools that provide a unified view of Lustre storage systems and simplify the installation, configuration, monitoring, and overall management of the software. Even better, the Intel Distribution includes an integrated adapter for Apache Hadoop*, which enables users to operate both Lustre and Apache Hadoop within a shared HPC infrastructure.

 

Enhancements to the Intel Distributions for Lustre software products are a reflection of Intel’s commitment to making HPC and big data solutions more accessible to both traditional HPC users and mainstream enterprises. This commitment to the HPC and big data space is also evident in Intel’s HPC scalable system framework. The framework, which leverages a collection of leading-edge technologies, enables balanced, power-efficient systems that can support both compute- and data-intensive workloads running on the latest Intel® Xeon processors and Intel® Xeon Phi™ coprocessors.

 

For a closer look at these topics, visit Intel Solutions for Lustre Software and Intel’s HPC Scalable System Framework.

 

 

 

Intel, the Intel logo, Xeon and Xeon Phi are trademarks of Intel Corporation in the United States and other countries.
* Other names and brands may be claimed as the property of others.

Next week will kick off the ISC High Performance conference, July 12 – 16 in Frankfurt, Germany. I will be joining many friends and peers across the HPC industry to share, collaborate, and learn about the advancements in High Performance Computing and Big Data.

 

During this international gathering, Intel’s Raj Hazra, our VP and GM of the Enterprise and HPC Platform Group, will speak about the changing landscape of technical computing and show how recent innovations and Intel’s HPC scalable system framework can help scientists, researchers, and industry maximize the potential of HPC for computation and data intensive workloads. Raj will also share details on upcoming Intel technologies, products, and ecosystem collaborations that are powering research-driven breakthroughs and ensuring that technical computing continues to fulfill its potential as a scientific and industrial tool for discovery and innovation.

 

In our booth, we are excited to feature an inside look at the research breakthroughs achieved by the COSMOS Supercomputing Team at the University of Cambridge. This team, led by the renowned scientist Stephen Hawking, is driving dramatic advances in cosmology in its studies of cosmic microwave background (CMB) radiation—the relic radiation left over from the Big Bang. Observing the CMB is like looking at a snapshot of the early universe.

 

The COSMOS team’s demo will showcase some of the new HPC technologies that are part of Intel’s HPC scalable system framework. One of these is Intel® Omni-Path Architecture, a high-performance fabric designed to deliver the performance required for tomorrow’s HPC workloads and the ability to scale to tens of thousands—and eventually hundreds of thousands—of nodes. This next-generation fabric builds on the Intel® True Scale Fabric with an end-to-end solution, including PCIe* adapters, silicon, switches, cables, and management software. The demo will also be powered by the second generation of the Intel® Xeon® Phi™ product family, code-named Knights Landing. This is the first time these technologies will be demonstrated publicly in Europe.

 

Conference participants will also have a chance to collaborate and learn about the latest efforts to modernize industry codes to realize the full potential of recent advancements in hardware. Come by our collaboration hub in booth #930, or check out software.intel.com/moderncode for a wide variety of coding resources for developers.

 

Intel has been an active participant and sponsor of ISC for many years, reflecting our commitment to working with the broader ecosystem to advance HPC and supercomputing.

 

I hope to see you at the conference. If you won’t be able to attend the event, you can get a closer look at the work Intel is doing to help push the boundaries of technical computing at intel.com/hpc.

By Andrey Vladimirov, head of HPC research at Colfax International

 

 

When it comes to high-performance computing, consumers can be divided into three basic user groups. Perhaps the most common and obvious case is the performance-hungry users who crave faster time to insight on complex workloads, cutting throughput times of days down to hours or minutes. Another class of users seeks greater scalability, which is often achieved by adding more compute nodes. Yet another type of user looks for more efficient systems that consume less energy to do a comparable amount of processing work.

 

As it happens, all of these situations can benefit greatly from parallel processing, which takes greater advantage of the capabilities of today’s multicore processors to improve performance, scalability, and efficiency. And the first step to realizing these gains is to modernize your code.

 

I will explore the benefits of code modernization momentarily. But first, let’s take a step back and look at the underlying hardware picture.

 

In the last three decades of the 20th century, processors evolved by increasing clock frequencies. This approach enabled ongoing gains in application performance until processors hit a ceiling: a clock speed of around 3 GHz, with its associated heat-dissipation issues.

 

To gain greater performance, the computing industry moved to parallel processing. Starting in the 1990s, people used distributed frameworks, such as MPI, to spread workloads over multiple compute nodes, which worked on different aspects of a problem in parallel. In the 2000s, multicore processors emerged that allowed parallel processing within a single chip. The degree to which intra-processor parallelism has evolved is very significant, with tens of cores in modern processors. For a case in point, see the Intel® Many Integrated Core Architecture (Intel® MIC Architecture), delivered via Intel® Xeon Phi™ coprocessors.

 

A simultaneous advance came in the form of vector processing, which adds to each core an arithmetic unit that can apply a single arithmetic operation to a short vector of multiple numbers in parallel. At this point, the math gets pretty interesting. Intel Xeon Phi products are available with up to 61 cores, each of which has 16 vector lanes in single precision. In theory this means that the processor can accelerate throughput on a workload by a factor of 60 x 16 (using 60 of the 61 cores, since one is typically reserved for the operating system)—for a 960x gain—in comparison to running the workload on a single core without vectors (in fact, the correct factor is 2 x 960 because of the dual-issue nature of Knights Corner architecture cores, but that is another story).

 

And here’s where application modernization enters the picture. To realize gains like this, applications need to be modified to take advantage of the parallel processing and vectorization capabilities in today’s HPC processors. If the application can’t take advantage of these capabilities, you end up paying for performance that you can’t receive.

 

That said, as Intel processor architectures evolve, you get performance boosts in some areas without doing anything with your code. For instance, such architectural improvements as bigger caches, instruction pipelining, smarter branch prediction, and prefetching improve performance of some applications without any changes in the code. However, parallelism is different. To realize the full potential of the capabilities of multiple cores and vectors, you have to make your application aware of parallelism. That is what code modernization is about: it is the process of adapting applications to new hardware capabilities, especially parallelism on multiple levels.
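The post’s real examples are C++, but the shape of the change can be shown in a few lines of Python with NumPy: the same computation, written once so the hardware sees only one element at a time, and once as a whole-array operation that the runtime can hand to vectorized (SIMD) kernels.

```python
# Same computation, scalar-style vs. vector-aware. NumPy's dot product
# dispatches to vectorized kernels; the explicit loop cannot.

import math
import numpy as np

x = np.random.rand(1_000_000)

def scalar_sum_of_squares(values):
    """'Unmodernized' style: the hardware sees one element at a time."""
    total = 0.0
    for v in values:
        total += v * v
    return total

def vector_sum_of_squares(values):
    """Whole-array form the runtime can vectorize and parallelize."""
    return float(np.dot(values, values))

assert math.isclose(scalar_sum_of_squares(x),
                    vector_sum_of_squares(x), rel_tol=1e-6)
```

On a typical machine the vector form runs orders of magnitude faster, for exactly the reason described above: the work was restructured so the parallel hardware could see it.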

 

With some applications, this is a fairly straightforward task. With others it’s a more complex undertaking. The specifics of this part of the discussion are beyond the scope of this post. The big point is that you have to modify your code to get the payoff that comes with a multicore processing platform with built-in vectorization capabilities.

 

As for that payoff, it can be dramatic. This graphic shows the gains made when the astrophysical application HEATCODE was optimized to take advantage of the capabilities of Intel platforms. In these benchmarks, the same performance-critical code, written in C++, was used on an Intel Xeon processor and on an Intel Xeon Phi coprocessor. Review the study.

 

[Chart: HEATCODE performance before and after code modernization]

 

Here’s another example of the payoff that comes with code modernization. This graphic illustrates the importance of parallelism and optimization on a synthetic N-body application designed as an educational “toy model.” Review the example.

 

[Chart: performance of the synthetic N-body “toy model” at increasing levels of parallelism and optimization]

 

As these examples show, when code is modernized to take full advantage of today’s HPC hardware platforms, the payoff can be enormous. That certainly applies to general-purpose multi-core processors, such as Intel Xeon CPUs. However, on top of that, for applications that know how to use multiple cores, vectors, and memory efficiently, specialized parallel processors, such as Intel Xeon Phi coprocessors, can further increase performance and lower power consumption by a factor of up to 3x. For details, see this performance-per-dollar and performance-per-watt study.

 

Intel Xeon Phi coprocessors build on the capabilities of the Intel Xeon platform, which is used in servers around the world. General-purpose Intel Xeon processors are available with up to 18 cores per processor chip, or 36 cores in a dual-socket configuration. These processors are already highly parallel. Intel Xeon Phi coprocessors take the architecture to a new, massively parallel level with up to 61 cores per chip.

 

A great thing about the Intel Xeon Phi architecture is that code written for the Intel Xeon platform can run unmodified on the Intel Xeon Phi coprocessor, as well as general-purpose CPU platforms. But there’s a catch: if the code isn’t modernized, it can’t take advantage of all of the capabilities of the Intel MIC Architecture used in the Intel Xeon Phi coprocessor. This makes code modernization essential.

 

Once you have a robust version of code, you are basically future-ready. You shouldn’t have to make major modifications to take advantage of new generations of the Intel architecture. Just like in the past, when computing applications could “ride the wave” of increasing clock frequencies, your modernized code will be able to automatically take advantage of the ever-increasing parallelism in future x86-based computing platforms.

 

At Colfax Research, these topics are close to our hearts. We make it our business to teach parallel programming and optimization, including programming for Intel Xeon Phi coprocessors, and we provide consulting services on code modernization.

 

We keep in close contact with the experts at Intel to stay on top of the current and upcoming technology. For instance, we started working with the Intel Xeon Phi platform early on, well before its public launch. We have since written a book on parallel programming and optimization with Intel Xeon Phi coprocessors, which we use as a basis for training software developers and programmers who want to take full advantage of the capabilities of Intel’s parallel platforms.

 

For a deeper dive into code modernization opportunities and challenges, explore our Colfax Research site. This site offers a wide range of technical resources, including research publications, tutorials, case studies, and videos of presentations. And should you be ready for a hands-on experience, check out the Colfax training series, which offers software developer trainings in parallel programming using Intel Xeon processors and Intel Xeon Phi coprocessors.

 

For more information: software.intel.com/moderncode

 

Intel, the Intel logo, Xeon, and Xeon Phi are trademarks of Intel Corporation in the United States and other countries. * Other names and brands may be claimed as the property of others.

©2015 Colfax International, All rights reserved

By Sarah Han and Nicholas Weaver

 


On Monday and Tuesday close to 3000 software engineers, system administrators, technology leaders, and innovators gathered for the second US-based DockerCon in San Francisco.

 

Since its first demo, Docker has been a major force in the movement to establish Linux containers as a new footprint for deploying applications. Many new startups focused on monitoring, networking, security, and operations of Docker-based containers have emerged from this developing ecosystem. This ecosystem arrived in full force for DockerCon 2015, increasing attendance six-fold.

 

Intel was proud to help bring together customers, competitors, partners, and creators alike at the San Francisco Exploratorium for The DockerCon Party. Attendees included thought leaders from leading open source cloud innovators such as CoreOS, Docker, Google, Mesosphere, and Redapt, reflecting the spirit of collective innovation captured in the Open Container Project announcement at the show.

 


 

The fascinating exhibits within the Exploratorium provided the perfect backdrop for captivating conversations on innovation within these new cloud models. There were many lively discussions on container persistence and schedulers like Mesos and Kubernetes. As they dove deeper into innovative topics, the intricacies of rewriting distributed protocols, speculations of future hardware optimizations, and operating patterns at scale emerged. Many of the discussions centered on the challenges and opportunities that lie ahead for the newly minted Open Container Project.

 


The Open Container Project is a virtual gathering of innovative leaders paving the way for a new container specification standard. The OCP launch signals the union of two competing standards: the Docker and AppC specifications. OCP provides a common and critical platform for Intel to enable performance, security, and scalability using Intel technologies that customers already have in their data centers. Highlighted in Docker’s announcement was the desire to integrate DPDK (Data Plane Development Kit), SR-IOV (Single Root I/O Virtualization), TPM (Trusted Platform Module), and secure enclaves into the OCP runtime.

 

Historically, Intel has created customer value through silicon features, thought leadership, and the creation of communities around collaborative innovation in open source. These communities become a gathering of minds, providing the perfect environment for innovation and Human-Defined Networking. Intel was proud to bring the container community together at DockerCon. Ultimately, the next generation of sophistication in cloud workloads will emerge from this gathering of minds.

 

To see the challenge facing the network infrastructure industry, I have to look no farther than the Apple Watch I wear on my wrist.

 

That new device is a symbol of the change that is challenging the telecommunications industry. This wearable technology is an example of the leading edge of the next phase of the digital service economy, where information technology becomes the basis of innovation, services and new business models.

 

I had the opportunity to share a view on the end-to-end network transformation needed to support the digital service economy recently with an audience of communications and cloud service providers during my keynote speech at the Big Telecom Event.

 

These service providers are seeking to transform their network infrastructure to meet customer demand for information that can help grow their businesses, enhance productivity and enrich their day-to-day lives.  Compelling new services are being innovated at cloud pace, and the underlying network infrastructure must be agile, scalable, and dynamic to support these new services.

 

The operator’s challenge is that the current network architecture is anchored in purpose-built, fixed-function equipment that cannot be used for anything other than the function for which it was originally designed. The dynamic nature of the telecommunications industry means that the infrastructure must be more responsive to changing market needs. The challenge of continuing to build out network capacity to meet customer requirements in a way that is more flexible and cost-effective is driving the commitment by service providers and the industry to transform these networks to a different architectural paradigm, one anchored in innovation from the data center industry.

 

Network operators have worked with Intel to find ways to leverage server, cloud, and virtualization technologies to build networks that give consumers and business users a great experience while easing and lowering the cost of deployment and operation.

 

Transformation starts with reimagining the network

 

This transformation starts with reimagining what the network can do and how it can be redesigned for new devices and applications, even including those that have not yet been invented. Intel is working with the industry to reimagine the network using Network Functions Virtualization (NFV) and Software Defined Networking (SDN).

 

For example, the evolution of the wireless access network from macro base stations to a heterogeneous network, or “HetNet,” using a mix of macro-cell and small-cell base stations, plus the addition of mobile edge computing (MEC), will dramatically improve network efficiency by providing more efficient use of spectrum and new radio-aware service capabilities. This transformation will intelligently couple mobile devices to the access network for greater innovation and improved ability to scale capacity and improve coverage.

 

In wireline access, virtual customer premises equipment moves service provisioning intelligence from the home or business to the provider edge to accelerate delivery of new services and to optimize operating expenses. And NFV and SDN are also being deployed in the wireless core and in cloud and enterprise data center networks.

 

This network transformation also makes possible new Internet of Things (IoT) services and revenue streams. As virtualized compute capabilities are added to every network node, operators have the opportunity to add sensing points throughout the network and tiered analytics to dynamically meet the needs of any IoT application.

 

One example of IoT innovation is safety cameras in “smart city” applications. With IoT, cities can deploy surveillance video cameras to collect video and process it at the edge to detect patterns that would indicate a security issue. When an issue occurs, the edge node can signal the camera to switch to high-resolution mode, flag an alert and divert the video stream to a central command center in the cloud. With smart cities, safety personnel efficiency and citizen safety are improved, all enabled by an efficient underlying network infrastructure.
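The edge-node logic in that example is simple enough to sketch; everything below (the camera handle, the command-center uplink, the detection threshold) is an invented stand-in, not a real API.

```python
# Toy smart-city edge node: analyze locally, escalate only on an issue.

from dataclasses import dataclass

@dataclass
class Frame:
    location: str
    motion_score: float

class Camera:
    def set_mode(self, mode):
        print(f"camera -> {mode} mode")

class CommandCenter:
    def alert(self, message, location):
        print(f"ALERT: {message} at {location}")
    def divert_stream(self, camera):
        print("diverting live video to the command center")

def process_frame(camera, center, frame):
    """Normal frames stay at the edge; only incidents use backhaul."""
    if frame.motion_score > 0.9:  # placeholder for a real detection model
        camera.set_mode("high_resolution")  # capture more detail
        center.alert("possible security issue", frame.location)
        center.divert_stream(camera)

process_frame(Camera(), CommandCenter(), Frame("5th & Main", 0.95))
```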

 

NFV and SDN deployment has begun in earnest, but broad-scale deployment will require even more innovation: standardized, commercial-grade solutions must be available; next-generation networks must be architected; and business processes must be transformed to consume this new paradigm. Intel is investing now to lead this transformation and is driving a four-pronged strategy anchored in technology leadership: support of industry consortia, delivery of open reference designs, collaboration on trials and deployments, and building an industry ecosystem.

 

The foundation of this strategy is Intel’s role as a technology innovator. Intel’s continued investment and development in manufacturing leadership, processor architecture, Ethernet controllers and switches, and optimized open source software provide a foundation for our network transformation strategy.

 

Open standards are critical to robust solutions, and Intel is engaged with all of the key consortia in this industry, including the European Telecommunications Standards Institute (ETSI), Open vSwitch, OpenDaylight, OpenStack, and others. Most recently, we dedicated significant engineering and lab investments to the Open Platform for NFV’s (OPNFV) release of OPNFV Arno, the first carrier-grade, open source NFV platform.

 

The next step for these open source solutions is to be integrated with operating systems and other software into open reference software that provides an on-ramp for developers into NFV and SDN. That’s what Intel is doing with our Open Network Platform (ONP): a reference architecture that enables software developers to lower their development cost and shorten their time to market. The innovations in ONP form the basis of many of our contributions back to the open source community. In the future, ONP will be based on OPNFV releases, enhanced by additional optimizations and proofs of concept in which we continue to invest.


We are also working to bring real-world solutions to market: we collaborate actively on trials and deployments, and we are investing deeply in an ecosystem that brings companies together to create interoperable solutions.


As just one example, my team is working with Cisco Systems on a proof of concept that demonstrates how Intel Ethernet 40GbE and 100GbE controllers, working within a Cisco UCS network, can chain services using the network service header (NSH). This is one of dozens of PoCs Intel has participated in this year alone, which collectively demonstrate the early momentum of NFV and SDN and their potential to transform service delivery.
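
For readers unfamiliar with NSH, the idea is that a small header is prepended to each packet identifying the service chain and the packet’s current position in it. Below is a minimal sketch of how the 4-byte NSH service path header is packed, following the IETF service function chaining specification; the helper names are illustrative, not taken from the PoC.

```python
import struct

def pack_service_path(spi, si):
    """Pack NSH's 4-byte service path header: a 24-bit Service Path
    Identifier (which chain) plus an 8-bit Service Index (position
    within the chain)."""
    if not (0 <= spi < 1 << 24 and 0 <= si < 1 << 8):
        raise ValueError("SPI is 24 bits, SI is 8 bits")
    return struct.pack("!I", (spi << 8) | si)

def forward(header):
    """Each service function decrements the service index before
    forwarding, so the header itself records progress along the chain."""
    (word,) = struct.unpack("!I", header)
    return pack_service_path(word >> 8, (word & 0xFF) - 1)
```

Because the chain state travels in the packet rather than living in each forwarding element, service functions stay simple, and the classification and forwarding steps become natural candidates for offload to high-throughput NICs.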


Much of our involvement in PoCs and trials comes from working with our ecosystem partners in Intel Network Builders. I was very pleased to share the stage with Martin Bäckström and announce that Ericsson has joined Network Builders. Ericsson is an industry leader and innovator, and its presence in Network Builders demonstrates a commitment to a shared vision of end-to-end network transformation.


The companies in this ecosystem are passionate software and hardware vendors, as well as end users, working together to develop new solutions. More than 150 Network Builders members are taking advantage of the program and driving forward with a shared vision: accelerating the availability of commercial-grade solutions.


NFV and SDN are being deployed now, but that is just the start of the end-to-end network transformation. A great deal of technology and business innovation is still required to drive NFV and SDN to scale, and Intel will continue its commitment to driving this transformation.



I invited the BTE audience – and I invite you – to join us in this collaboration to create tomorrow’s user experiences and to lay the foundation for the next phase of the digital services economy.

With the digital services economy projected to scale to $450B by 2020, companies are relying on their IT infrastructure to fuel business opportunity. The role of the data center has never been more central to our economic vitality, yet many enterprises still struggle to build the efficient, agile infrastructure required to drive the next generation of business growth.


At Intel, we are squarely focused on accelerating cloud adoption by working with the cloud software industry to deliver the capabilities required to fuel broad-scale cloud deployment across a wide range of use cases and workloads. We are ensuring that cloud software can take full advantage of Intel architecture platform capabilities to deliver the best performance, security, and reliability, while making cloud solutions simpler to deploy and manage.


That’s why our latest collaboration with Red Hat to accelerate the adoption of OpenStack in the enterprise holds incredible promise. We kicked off the OnRamp to OpenStack program in 2013, centered on educational workshops, early trials, and customer PoCs. Today, we are excited to extend that collaboration with a focus on accelerating OpenStack deployments, building on our long-standing technical partnership to speed feature delivery and drive broad proliferation of OpenStack in the enterprise.


This starts by expanding our focus on integrating enterprise-class features such as high availability of OpenStack services and tenants, ease of deployment, and rolling upgrades. What does this entail? High availability of OpenStack services ensures an “always on” state for the cloud control services. High availability of tenants covers capabilities such as improved VM migration and VM recovery from host failures. Ease of deployment helps IT shops get up and running faster and add capacity easily whenever required. And once the cloud is running, rolling upgrades allow OpenStack to be upgraded without downtime.
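
As one concrete illustration of the tenant-availability piece, the sketch below uses python-novaclient to live-migrate all instances off a compute node, as an operator might before taking it down for a rolling upgrade. Treat it strictly as a sketch: the credentials and host name are placeholders, and the exact client API varies across OpenStack releases.

```python
from novaclient import client as nova_client

# Placeholder credentials; a real deployment would use a Keystone
# session or environment variables rather than literals.
nova = nova_client.Client("2", "admin", "secret", "admin",
                          "http://controller:5000/v2.0")

def drain_host(hostname):
    """Live-migrate every instance off `hostname`. Assumes shared
    storage (so no block migration) and lets the scheduler choose
    each target host."""
    for server in nova.servers.list(
            search_opts={"host": hostname, "all_tenants": 1}):
        server.live_migrate(host=None, block_migration=False,
                            disk_over_commit=False)

drain_host("compute-03")  # hypothetical compute node name
```

Recovery from an unplanned host failure works in the opposite direction: instances from the dead node are rebuilt elsewhere, which is the VM-recovery capability mentioned above.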


We’re also excited to have industry leaders Cisco and Dell join the program to deliver a selection of proven solutions to the market. With their participation, we expect to upstream much of the work we’ve collectively delivered so that the entire open source community can leverage these contributions. What does this mean for you? If you’re currently evaluating OpenStack and are looking for stronger high availability or predictable, well-understood upgrade paths, please reach out to us to learn more about what the collaboration members are delivering. If you’re planning to evaluate OpenStack in your environment, the time is ripe to act: learn more about the Cisco, Dell, and Red Hat plans for delivering solutions based on this collaboration, and comment here with any questions or feedback.

Today, Intel announced that it is one of the founding members of the Open Container Project (OCP), an effort focused on ensuring a foundation of interoperability across container environments. We were joined by industry leaders including Amazon Web Services, Apcera, Cisco, CoreOS, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, HP, Huawei, IBM, Joyent, the Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat, and VMware in forming this group, which will be established under the umbrella of the Linux Foundation. This represents an enormous opportunity for the industry to “get interoperability right” at a critical point in the maturation of container use within cloud environments.




Why is this goal important?  We know the tax that limited interoperability places on workload portability, and how it prevents enterprises from extracting the full value of the hybrid cloud. We also know how hard true interoperability is to achieve when it is not established in the early phases of a technology’s maturity. That is why container interoperability is an important part of Intel’s broader strategy for open cloud software innovation and enterprise readiness, and why we are excited to join other industry leaders in OCP.


Intel brings decades of experience with open, industry-standard efforts to our work in OCP, and we have reason to be bullish about OCP’s ability to deliver on its goals. The right players are assembled to lead the program forward, with the right commitments from vendors to contribute code and runtime to the effort. We look forward to helping lead this organization to deliver rapidly on its goals, and we plan to apply what we learn in OCP to our broader engagements in container collaboration.


Our broader goal is squarely focused on delivering containers that are fully optimized for Intel platforms and ready for enterprise environments, and on accelerating easy-to-deploy, container-based solutions to market. You may have seen our earlier announcement of a collaboration with CoreOS to optimize their Tectonic cloud software environment for Intel architecture and ensure enterprise capabilities. That announcement also featured work with leading solutions providers such as SuperMicro and RedApt to deliver ready-to-deploy solutions at Tectonic GA. At DockerCon this week, we are highlighting our engineering work to optimize Docker containers for Intel Cloud Integrity Technology, extending workload attestation from VM-based workloads to containers. These are two examples of our broader efforts to ready containers for the enterprise, and they highlight the importance of the work of OCP.


If you are engaged in the cloud software arena, I encourage you to consider participating in OCP. If you’re an enterprise considering containers in your environment, the news of OCP should give you confidence in the portability of future container-based workloads, and evaluating container solutions should be part of your IT strategy.
