
The Data Stack


By Sarah Han and Nicholas Weaver

 


On Monday and Tuesday close to 3000 software engineers, system administrators, technology leaders, and innovators gathered for the second US-based DockerCon in San Francisco.

 

Since its first demo, Docker has been a major force in the movement to enable Linux containers as a new footprint for deploying applications. Many new startups have emerged from this developing ecosystem focused on monitoring, networking, security and operations of Docker-based containers. This ecosystem arrived in full force for DockerCon 2015, increasing attendance six-fold.

 

Intel was proud to help bring together customers, competitors, partners, and creators alike at the San Francisco Exploratorium for The DockerCon Party. Attendees included thought leaders from leading open source cloud innovators such as CoreOS, Docker, Google, Mesosphere, and Redapt, capturing the spirit of collective innovation reflected in the Open Container Project announcement at the show.

 


 

The fascinating exhibits within the Exploratorium provided the perfect backdrop for captivating conversations on innovation within these new cloud models. There were many lively discussions on container persistence and schedulers like Mesos and Kubernetes. As attendees dove deeper, the conversation turned to the intricacies of rewriting distributed protocols, speculation about future hardware optimizations, and operating patterns at scale. Many of the discussions centered on the challenges and opportunities that lie ahead for the newly minted Open Container Project.

 


The Open Container Project is a virtual gathering of innovative leaders paving the way for a new container specification standard. The OCP launch signals the union of two competing standards, the Docker and AppC specifications. OCP provides a common and critical platform for Intel to enable performance, security, and scalability using Intel technologies that customers already have in their data centers. Highlighted in the announcement by Docker was the desire to integrate DPDK (Data Plane Development Kit), SR-IOV (Single Root I/O Virtualization), TPM (Trusted Platform Module), and secure enclaves into the OCP runtime.

 

Historically, Intel has created customer value through silicon features, thought leadership, and the creation of communities around collaborative innovation in open source. These communities become a gathering of minds, providing the perfect environment for innovation and Human-Defined Networking. Intel was proud to bring the container community together at DockerCon. Ultimately, the next generation of sophistication in cloud workloads will emerge from this gathering of minds.

 

To see the challenge facing the network infrastructure industry, I have to look no farther than the Apple Watch I wear on my wrist.

 

That new device is a symbol of the change that is challenging the telecommunications industry. This wearable technology is an example of the leading edge of the next phase of the digital service economy, where information technology becomes the basis of innovation, services and new business models.

 

I recently had the opportunity to share a view on the end-to-end network transformation needed to support the digital service economy with an audience of communications and cloud service providers during my keynote speech at the Big Telecom Event.

 

These service providers are seeking to transform their network infrastructure to meet customer demand for information that can help grow their businesses, enhance productivity and enrich their day-to-day lives.  Compelling new services are being innovated at cloud pace, and the underlying network infrastructure must be agile, scalable, and dynamic to support these new services.

 

The operator’s challenge is that the current network architecture is anchored in purpose-built, fixed-function equipment that cannot be used for anything other than the function for which it was originally designed. The dynamic nature of the telecommunications industry means that the infrastructure must be more responsive to changing market needs. The challenge of building out network capacity to meet customer requirements in a more flexible and cost-effective way is driving service providers and the industry to transform these networks toward a different architectural paradigm anchored in innovation from the data center industry.

 

Network operators have worked with Intel to find ways to leverage server, cloud, and virtualization technologies to build networks that give consumers and business users a great experience while easing deployment and lowering the cost of operation.

 

Transformation starts with reimagining the network

 

This transformation starts with reimagining what the network can do and how it can be redesigned for new devices and applications, even including those that have not yet been invented. Intel is working with the industry to reimagine the network using Network Functions Virtualization (NFV) and Software Defined Networking (SDN).

 

For example, the evolution of the wireless access network from macro base stations to a heterogeneous network, or “HetNet,” using a mix of macro cell and small cell base stations, together with the addition of mobile edge computing (MEC), will dramatically improve network efficiency by providing more efficient use of spectrum and new radio-aware service capabilities.  This transformation will intelligently couple mobile devices to the access network for greater innovation and an improved ability to scale capacity and improve coverage.

 

In wireline access, virtual customer premises equipment moves service provisioning intelligence from the home or business to the provider edge to accelerate delivery of new services and to optimize operating expenses. And NFV and SDN are also being deployed in the wireless core and in cloud and enterprise data center networks.

 

This network transformation also makes possible new Internet of Things (IoT) services and revenue streams. As virtualized compute capabilities are added to every network node, operators have the opportunity to add sensing points throughout the network and tiered analytics to dynamically meet the needs of any IoT application.

 

One example of IoT innovation is safety cameras in “smart city” applications. With IoT, cities can deploy surveillance video cameras to collect video and process it at the edge to detect patterns that would indicate a security issue. When an issue occurs, the edge node can signal the camera to switch to high-resolution mode, flag an alert and divert the video stream to a central command center in the cloud. With smart cities, safety personnel efficiency and citizen safety are improved, all enabled by an efficient underlying network infrastructure.
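As a rough illustration of that edge control loop, here is a minimal, hypothetical sketch in Python. The class and function names are invented for clarity and do not correspond to any Intel or city platform API; real analytics would replace the simple pattern check.

```python
class Camera:
    """Hypothetical edge camera with a switchable capture mode."""
    def __init__(self, cam_id):
        self.id, self.mode = cam_id, "low-resolution"

    def set_mode(self, mode):
        self.mode = mode


class CommandCenter:
    """Hypothetical cloud command center that receives alerts and streams."""
    def raise_alert(self, cam_id):
        print(f"ALERT raised for camera {cam_id}")

    def divert_stream(self, cam_id):
        print(f"diverting stream from camera {cam_id} to central command")


def detect_security_pattern(frame):
    # Stand-in for real edge video analytics running on the edge node.
    return "intruder" in frame


def handle_frame(camera, frame, command_center):
    # Escalate only when a pattern is detected: sharpen the feed, flag an
    # alert, and divert the stream to the central command center.
    if detect_security_pattern(frame):
        camera.set_mode("high-resolution")
        command_center.raise_alert(camera.id)
        command_center.divert_stream(camera.id)


if __name__ == "__main__":
    cam, center = Camera("gate-7"), CommandCenter()
    for frame in ["empty street", "intruder at gate"]:
        handle_frame(cam, frame, center)
    print("camera mode:", cam.mode)
```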

 

NFV and SDN deployment has begun in earnest, but broad-scale deployment will require even more innovation: standardized, commercial-grade solutions must be available; next-generation networks must be architected; and business processes must be transformed to consume this new paradigm. Intel is investing now to lead this transformation and is driving a four-pronged strategy anchored in technology leadership: support of industry consortia, delivery of open reference designs, collaboration on trials and deployments, and building an industry ecosystem.

 

The foundation of this strategy is Intel’s role as a technology innovator. Intel’s continued investment and development in manufacturing leadership, processor architecture, Ethernet controllers and switches, and optimized open source software provide a foundation for our network transformation strategy.

 

Open standards are critical to robust solutions, and Intel is engaged with the key industry consortia in this space, including the European Telecommunications Standards Institute (ETSI), Open vSwitch, OpenDaylight, OpenStack, and others. Most recently, we dedicated significant engineering and lab investments to the Open Platform for NFV’s (OPNFV) release of OPNFV Arno, the first carrier-grade, open source NFV platform.

 

The next step for these open source solutions is to be integrated with operating systems and other software into open reference software that provides an on-ramp for developers into NFV and SDN. That’s what Intel is doing with our Open Network Platform (ONP), a reference architecture that enables software developers to lower their development cost and shorten their time to market. The innovations in ONP form the basis of many of our contributions back to the open source community. In the future, ONP will be based on OPNFV releases, enhanced by additional optimizations and proofs of concept in which we continue to invest.

 

We also are working to bring real-world solutions to market and are active in collaborating on trials and deployments and deeply investing in building an ecosystem that brings companies together to create interoperable solutions.

 

As just one example, my team is working with Cisco Systems on a service chaining proof of concept that demonstrates how Intel Ethernet 40GbE and 100GbE controllers, working with a Cisco UCS network, can provide service chaining using the network service header (NSH). This is one of dozens of PoCs that Intel has participated in this year alone, which collectively demonstrate the early momentum of NFV and SDN and their potential to transform service delivery.

 

A lot of our involvement in PoCs and trials comes from working with our ecosystem partners in the Intel Network Builders. I was very pleased to have had the opportunity to share the stage with Martin Bäckström and announce that Ericsson has joined Network Builders. Ericsson is an industry leader and innovator, and their presence in Network Builders demonstrates a commitment to a shared vision of end-to-end network transformation.

 

The companies in this ecosystem are passionate software and hardware vendors, and also end users, that work together to develop new solutions. There are more than 150 Network Builder members taking advantage of this program and driving forward with a shared vision to accelerate the availability of commercial grade solutions.

 

NFV and SDN are being deployed now, but that is just the start of the end-to-end network transformation. There is still a great deal of technology and business innovation required to drive NFV and SDN to scale, and Intel will continue its commitment to drive this transformation.



I invited the BTE audience – and I invite you – to join us in this collaboration to create tomorrow’s user experiences and to lay the foundation for the next phase of the digital services economy.

With the digital service economy scaling to $450B by 2020, companies are relying on their IT infrastructure to fuel business opportunity.  The role of the data center has never been as central to our economic vitality, and yet many enterprises continue to struggle to integrate the efficient and agile infrastructure required to drive the next generation of business growth.

 

At Intel, we are squarely focused on accelerating cloud adoption by working with the cloud software industry to deliver the capabilities required to fuel broad scale cloud deployment across a wide range of use cases and workloads.  We are ensuring that cloud software can take full advantage of Intel architecture platform capabilities to deliver the best performance, security, and reliability, while making it simpler to deploy and manage cloud solutions.

 

That’s why our latest collaboration with Red Hat to accelerate the adoption of OpenStack in the enterprise holds incredible promise. We kicked off the OnRamp to OpenStack program in 2013, a program centered on educational workshops, early trials, and customer PoCs. Today, we are excited to augment this collaboration with a focus on accelerating OpenStack deployments, building on our long-standing technical collaboration to speed feature delivery and drive broad proliferation of OpenStack in the enterprise.

 

This starts by expanding our focus on integrating enterprise-class features such as high availability of OpenStack services and tenants, ease of deployment, and rolling upgrades. What does this entail? With high availability of OpenStack services, we are ensuring an “always on” state for cloud control services. High availability of tenants focuses on a number of capabilities, including improved VM migration and VM recovery from host failures. Ease of deployment will help IT shops get up and running faster and add capacity whenever required, with simplicity. Once the cloud is up and running, rolling upgrades enable OpenStack upgrades without downtime.

 

We’re also excited to have industry leaders Cisco and Dell join the program to deliver a selection of proven solutions to the market.  With their participation, we expect to upstream much of the work we’ve collectively delivered to ensure that the entire open source community can leverage these contributions.  What does this mean to you? If you’re currently evaluating OpenStack and are seeking improvement in high availability features or predictable and understood upgrade paths, please reach out to us to find out more about what the collaboration members are delivering.  If you’re looking to evaluate OpenStack in your environment, the time is ripe to take action.  Take the time to learn more about Cisco, Dell and Red Hat plans for delivery of solutions based on the collaboration, and comment here if you have questions or feedback on the collaboration.

Today, Intel announced that it is one of the founding members of the Open Container Project (OCP), an effort focused on ensuring a foundation of interoperability across container environments. We were joined by industry leaders including Amazon Web Services, Apcera, Cisco, CoreOS, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, HP, Huawei, IBM, Joyent, the Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat, and VMware in the formation of this group, which will be established under the umbrella of the Linux Foundation. This formation represents an enormous opportunity for the industry to “get interoperability right” at a critical point of maturation of container use within cloud environments.

 


 

Why is this goal important?  We know the tax that limited interoperability imposes on workload portability and how it limits enterprises from extracting the full value of the hybrid cloud.  We also know how hard true interoperability is to achieve when it is not established in the early phases of technology maturity.  This is why container interoperability is an important part of Intel’s broader strategy for open cloud software innovation and enterprise readiness, and why we are excited to be joining other industry leaders in OCP.

 

Intel brings decades of experience with open, industry standard efforts to our work with OCP, and we have reason to be bullish about the opportunity for OCP to deliver on its goals.  We have the right players assembled to lead this program forward and the right commitments from vendors to contribute code and runtimes to the effort.  We’re looking forward to helping lead this organization to rapid delivery on its goals, and we plan to apply what we learn in OCP to our broader engagements in container collaboration.

 

Our broader goal is squarely focused on delivering containers that are fully optimized for Intel platforms and ready for enterprise environments, as well as accelerating easy-to-deploy, container-based solutions to the market.  You may have seen our earlier announcement of a collaboration with CoreOS to optimize their Tectonic cloud software environment for Intel architecture to ensure enterprise capabilities.  That announcement also featured work with leading solutions providers such as Supermicro and Redapt to deliver ready-to-deploy solutions at Tectonic GA.  At DockerCon this week, we are highlighting our engineering work to optimize Docker containers for Intel Cloud Integrity Technology, extending workload attestation from VM-based workloads to containers.  These are two examples of our broader efforts to ready containers for the enterprise, and they highlight the importance of the work of OCP.

 

If you are engaged in the cloud software arena, I encourage you to consider participating in OCP.  If you’re an enterprise considering integrating containers into your environment, the news of OCP should give you confidence in the portability of future container-based workloads, and evaluation of container solutions should be part of your IT strategy.

Enterprises have a love-hate relationship with cloud computing. They love the flexibility. They love the economics. They hate the fact they can't guarantee the infrastructure and applications running their businesses and hosting their corporate data are completely trusted and haven't been tampered with by cyber criminals for nefarious purposes.


Even if organizations have confidence in the systems deployed in their data centers, in hybrid cloud environments on-premises systems may be instantly and automatically supplemented by capacity from a public provider. How do we know and control where application instances are running? Who attests to their trust? For cloud service providers, how do they demonstrate that the platforms they provide are secure and can be verified for compliance purposes? And how do we manage and orchestrate OS, VM, and application integrity across private and public clouds in an OpenStack environment? At Intel, we're developing a solution for hardware-assisted workload integrity and confidentiality that can answer those questions and create a platform for trusted cloud computing.

 

Intel® Xeon® processors offer a hardware-based solution using Intel Trusted Execution Technology (TXT) and Trusted Platform Module (TPM) technology to attest to the integrity and trust of the platform. That lets us assure nothing has been tampered with and that the platform is running the authorized versions of firmware and software. To access and manage this capability, we provide Intel® Cloud Integrity Technology (CIT) 3.0 software.

 

At the OpenStack Summit in May, we demonstrated how we use Intel CIT 3.0 to verify a chain of trust at boot time from the hardware to the workload in a Linux/Docker and Linux/KVM environment. That includes the hardware, firmware, BIOS, hypervisor, OS, and the Docker engine itself. When integrated with OpenStack, we assure that when an application is launched, it is launched in a trusted environment right up through its VM. In addition, VM images can be encrypted to assure their confidentiality. Intel CIT 3.0 provides enterprise ownership and control in clouds through encrypted VM storage and enterprise-managed keys.
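Conceptually, that boot-time verification amounts to comparing a chain of measurements against known-good values. The sketch below is a simplified, hypothetical illustration of the idea in Python; it is not the Intel CIT API, and the stage names and digests are invented for clarity.

```python
import hashlib

# Hypothetical known-good measurements for each stage of the boot chain,
# as an attestation service might store them (values here are illustrative).
WHITELIST = {
    "bios":          hashlib.sha256(b"bios-image-v1.2").hexdigest(),
    "hypervisor":    hashlib.sha256(b"kvm-build-2015.06").hexdigest(),
    "os":            hashlib.sha256(b"ubuntu-14.04-kernel").hexdigest(),
    "docker-engine": hashlib.sha256(b"docker-1.7.0").hexdigest(),
}

def measure(stage, blob):
    # On a real platform each measurement would be extended into a TPM PCR
    # at boot time; here we simply hash the bytes of the component.
    return stage, hashlib.sha256(blob).hexdigest()

def attest(measurements):
    # The platform is reported trusted only if every stage in the chain
    # matches its known-good value.
    return all(WHITELIST.get(stage) == digest for stage, digest in measurements)

if __name__ == "__main__":
    boot_chain = [
        measure("bios", b"bios-image-v1.2"),
        measure("hypervisor", b"kvm-build-2015.06"),
        measure("os", b"ubuntu-14.04-kernel"),
        measure("docker-engine", b"docker-1.7.0"),
    ]
    print("platform trusted:", attest(boot_chain))
```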

 

At DockerCon in San Francisco, we have taken that one step farther. We have extended the chain of trust up through the Docker container image and application itself to assure trusted launch of a containerized application.

 

For enterprises that need trusted cloud computing, it means:

 

  • You can assure at boot time that the platform running the Docker daemon or hypervisor has not been tampered with and is running correct versions.

  • You can assure when a VM or container is launched that the container and VM images—including the containerized application—have not been tampered with and are correct versions.

  • You can achieve the above when deploying VMs and containers from the same OpenStack controller to enable trusted compute pools.

 

VMs and containers can be launched from a dashboard, which also displays their execution and trust status. But the real power of the solution will come as the capabilities are integrated into orchestration software that can launch trusted containers transparently on trusted compute pools. And we are continuing our work to address storage and networking workloads like storage controllers, software-defined networking (SDN) controllers, and virtual network functions.

 

The demonstration at DockerCon is a proof of concept we built using CIT 3.0. We're currently integrating with a select set of cloud service providers and security vendor partners and will announce general availability after that is complete. CIT 3.0 protects virtualized and containerized workloads (Docker containers) running on OpenStack-managed Ubuntu, RHEL, and Fedora systems with KVM/Docker. It also protects non-virtualized (bare metal) environments. If you have one of those environments running on Xeon TXT-enabled servers with TPM activated by the OEM, we invite you to try it out under our beta program.

 

Integrity and confidentiality assurance is becoming a critical requirement in private, public, and hybrid cloud infrastructures, and cloud service providers must offer trusted clouds to their customers to provide them with the confidence to move sensitive workloads into the cloud. Intel Cloud Integrity Technology 3.0 is the only infrastructure integrity solution in the market that offers complete chain of trust, from the hardware to the application. We think enterprises will be loving cloud computing a lot more.

Three best practices for successful big data projects

 

Many people have asked me why only 27% of respondents in a recent consulting report believed their Big Data projects were successful.

 

I don’t know the particulars of the projects in the report, but I can comment on the key attributes of successful Big Data projects that I’ve seen.

 

Let’s look at an example. Intel recently published a case study about an entirely new Big Data analytics engine that Caesars Entertainment built on top of Cloudera Hadoop and a cluster of Xeon E5 servers. This analytics engine was intended to support new marketing campaigns targeted at customers with interests beyond traditional gaming, including entertainment, dining and online social gaming. The results of this project have been spectacular, increasing Caesars’ return on marketing programs and dramatically reducing the time to respond to important customer events.


Three ways that Caesars Entertainment got it right:

 

1. Pick a good use case

 

Caesars chose to improve the segmentation and targeting of specific marketing offers.  This is a great use case because it is a specific, well-defined problem that the Caesars analytics team already understands well.  It has the additional benefit that new unstructured and semi-structured data sources were available that could not be included in the previous generation of analysis.

 

Rizwan Patel, IT director, commented, “When it comes to implementation, it is … essential to select use cases that solve real business problems. That way, you have the backing of the company to do what it takes to make sure the use case is successful.”

 

2. Prioritize what data you include in your analysis

 

“We have a cross-functional team…that meets quarterly to prioritize and select use cases for implementation.”

 

This applies to both data and analytics. There is a common misconception that a data lake is like an ocean: Every possible source of data should flow into it.  My recommendation is to think of a data lake as a single pool where you can easily access all the data that is relevant to your projects. It takes a lot of effort to import, clean and organize each data source. Start with data you already understand.  Then layer in one or two additional sources, such as web clickstream data or call center text, to enrich your analysis.

 

3. Measure your results

 

“The original segments were not generating enough return on customer offers.”

 

It’s hard to declare a project a success if it has no measurable outcome.  This is particularly important for Big Data projects because there is often an unrealistic expectation that valuable insights will magically bubble to the surface of the data lake.  When this doesn’t happen, the project may be judged a failure, even when it has delivered real improvements on a meaningful metric. Be sure to define key metrics in advance and measure them before and after the project.

 

Your organization’s best odds

 

Big Data changes the game for data-driven businesses by removing obstacles to analyzing large amounts of data, different types of unstructured and semi-structured data, and data that requires rapid turnaround on results.

 

Give your organization the best odds possible for a successful Big Data project by following Caesars Entertainment’s good example.

Intel has had a long-standing collaboration with Cisco to advance Ethernet technology, a relationship that is growing stronger as both companies seek to evolve Ethernet to ensure that it meets the needs of next-generation data centers.

 

One indication of this increasing teamwork was the recent announcement that Cisco has joined Intel Network Builders, our ecosystem of companies collaborating to build SDN and NFV solutions. We recently published a white paper to our Network Builders website about our joint work to make NFV part of a flexible, open and successful transformation for service provider customers.

 

I saw more fruit from this partnership at the Cisco Live conference in San Diego, where the two companies worked together on important networking-related technology demonstrations.

 

The first demo showcased new NBASE-T technology, which allows a 10GBASE-T connection to negotiate links at 5 Gbps and 2.5 Gbps over 100 meters of Cat 5e cable. This capability is especially exciting for campus deployments and wireless access points.

 

Cisco and Intel have a long history of working together to ensure that our 10GBASE-T products work seamlessly together. And now in the data center we are seeing rapid growth in deployments of this technology. In fact, recent market projections from Crehan Research show that upgrades from 1GBASE-T to 10GBASE-T are driving the largest adoption phase ever for the 10GbE market.

 

Ethernet is the Best Interconnect for SDI

 

Ethernet is best positioned to be the leading interconnect technology in next-generation software defined infrastructure (SDI)-based data centers. With 40GbE and 100GbE options, Ethernet has the throughput for the most demanding data centers as well as the low latency needed for real-time applications.

 

The next step is to virtualize the network and network functions. To that end, we created a technology demo showing Intel® Ethernet controllers in a service chaining application using Network Service Headers (NSH).

 

Cisco initially developed NSH, and it is now working its way through the IETF standardization process. NSH creates a virtual packet-processing pipeline on top of a physical network and uses a chaining header to direct packets to particular VM-based services. This service chaining is an important element in next-generation network functions virtualization (NFV) services.
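To make the chaining header concrete, here is a minimal sketch in Python (standard library only) of the NSH service path header, whose 24-bit Service Path Identifier selects the chain and whose 8-bit Service Index is decremented at each service hop. It illustrates the concept rather than the full header layout of the specification, and the example chain of services is hypothetical.

```python
import struct

def pack_service_path_header(spi, si):
    # NSH service path header: a 24-bit Service Path Identifier (which chain
    # the packet belongs to) and an 8-bit Service Index (how many service
    # hops remain), packed into one 32-bit network-order word.
    assert 0 <= spi < 2**24 and 0 <= si < 2**8
    return struct.pack("!I", (spi << 8) | si)

def next_hop(header):
    # Simulate one service hop: decrement the Service Index.
    (word,) = struct.unpack("!I", header)
    spi, si = word >> 8, word & 0xFF
    if si == 0:
        raise ValueError("service index exhausted; packet would be dropped")
    return pack_service_path_header(spi, si - 1)

if __name__ == "__main__":
    hdr = pack_service_path_header(spi=42, si=3)   # chain 42, three hops left
    for service in ("firewall", "dpi", "load-balancer"):   # hypothetical chain
        print(f"deliver to {service}: header={hdr.hex()}")
        hdr = next_hop(hdr)
```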

 

This technology demonstration – which drew big crowds earlier this year at the Mobile World Congress in Barcelona – pairs Cisco’s UCS platform with Intel’s Ethernet Converged Network Adapters XL710 and 100GbE Red Rock Canyon adapters.

 

Red Rock Canyon is our new multi-host controller that is under development. This device provides low-latency, high-bandwidth interfaces to Intel® Xeon processing resources. It also provides flexible Ethernet interfaces including 100GbE ports. Red Rock Canyon has advanced frame-processing features including tunneling and forwarding frames using network service chaining headers at full line rate up to 100Gbps.

 

We have two videos from Cisco Live, one of the NSH demo and one of the NBASE-T demo, if you want to see these technologies in action.

 

The future for Ethernet in the evolving software-defined datacenter is bright and I look forward to continuing our work with Cisco to develop and promote the key technologies to meet the needs of this evolving market.

From May 12-14, Citrix Synergy 2015 took over the Orange County Convention Center in Orlando, providing a showcase for the Citrix technologies in mobility management, desktop virtualization, server virtualization and cloud services that are leading the transition to the software-defined workplace. Intel and Citrix have worked together closely (https://www.youtube.com/watch?v=gsm26JHYIaY) for nearly 20 years to help businesses improve productivity and collaboration by securely delivering applications, desktops, data and services to any device on any network or cloud.  Operating Citrix mobile workspace technologies on Intel® processor-based clients and Intel® Xeon® processor-based servers can help protect data, maintain compliance, and create trusted cloud and software-defined infrastructures that help businesses better manage mobile apps and devices, and enable collaboration from just about anywhere.

 

During Citrix Synergy, a number of Intel experts took part in presentations to highlight the business value of operating Citrix software solutions on Intel® Architectures.

 

Dave Miller, director of Intel’s Software Business Development group, appeared with Chris Matthieu, director of Internet of Things (IoT) engineering at Citrix, to discuss trends in IoT. In an interview on Citrix TV, Dave and Chris talked about how the combination of Intel hardware, Intel-based gateways, and the Citrix* Octoblu IoT software platform makes it easy for businesses to build and deploy IoT solutions that collect the right data and help turn it into insights that improve business outcomes.

 

Dave looked into his crystal ball to discuss what he saw coming next for IoT technologies. He said that IoT’s initial stages have been about delivering products and integrated solutions to create a connected IoT workflow that is secure and easily managed. This will be followed by increasingly sophisticated technologies for handling and manipulating data to bring insights to businesses. A fourth wave will use IoT data to fuel predictive systems, based on the increasing intelligence of compute resources and data analytics.

 

I also interviewed David Cowperthwaite, an engineer in Intel’s Visual and Parallel Computing Group and an architect for virtualization of Intel Processor Graphics. In this video, we discussed how Intel and Citrix work together to deliver rich virtual applications to mobile devices using Citrix* XenApp.  David explained how running XenApp on the new Intel® Xeon® processor E3 v4 family  with Intel® Iris™ Pro Graphics technology provides the perfect platform for mobile delivery of 3D graphics and multimedia applications on the highly integrated, cartridge-based HP* Moonshot System.  

 

One of the more popular demos showcased in the Intel booth was the Intel® NUC and Intel® Compute Stick as zero-client devices. Take a live look in this video. We also released a joint paper on XenServer; take a look.

 

For a more light-hearted view of how Citrix and Intel work together to help you Work Anywhere on Any Device, watch this fun animation.

By Dana Nehama, Sr. Product Marketing Manager, Network Platforms Group (NPG), Intel

 

I’m very pleased to announce the availability of a new release of the Intel® Open Network Platform – our integrated, open source NFV and SDN infrastructure reference architecture.

 

The latest version (v1.4) now offers improved virtual machine platform and communications performance through the integration of OpenStack Kilo, DPDK, and Open vSwitch among other software ingredients.

 

ONP is the Intel® open source SDN/NFV reference software platform that integrates the main software components necessary for NFV, so that the Intel Network Builders ecosystem, or any NFV/SDN software developer, has access to a high-performance reference NFV infrastructure optimized for Intel architecture.

 

One of the goals of ONP is to improve the deployability of the software components (such as OpenStack or OpenDaylight) by integrating recent releases and, in that process, addressing feature gaps, fixing bugs, testing the software and contributing development work back to the open source community.

 

The other major goal is to deliver the highest performance software possible, and with v1.4 there is a significant performance improvement for VMs thanks to new features in OpenStack Kilo and Open vSwitch 2.3.90.

 

Kilo Brings Enhanced Platform Awareness

Advancements in Enhanced Platform Awareness (EPA) in OpenStack Kilo will have a significant impact on the scalability of NFV solutions and on the predictability and performance of virtual machines. EPA is composed of several technologies that expose hardware attributes to the NFV orchestration software to improve performance. For example, with CPU pinning a VM process can be “pinned” to a particular CPU core.

 

The Non-Uniform Memory Architecture (NUMA) topology filter is a complementary capability that keeps memory resources in proximity to the CPU core, resulting in lower jitter and latency. I/O-aware NUMA scheduling adds the ability to select the optimal socket based on the I/O device requirements. With all of those capabilities in place, a VNF can pin a high-performance process to a core, ensure that it is connected locally to the relevant I/O, and have priority access to the highest-performing memory. All of this leads to more predictable workload performance and improves providers’ ability to meet SLAs.
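As a concrete illustration, the minimal sketch below composes a Nova flavor that requests dedicated CPU pinning and a single NUMA node via the standard hw:cpu_policy and hw:numa_nodes extra specs. The flavor name and sizing are hypothetical, and the script only prints the equivalent openstack CLI commands rather than running them, so it assumes nothing about your environment.

```python
# Minimal sketch: compose an EPA-aware Nova flavor that requests CPU pinning
# and a single NUMA node. The flavor name and sizing are hypothetical; the
# extra-spec keys are the standard OpenStack properties used for EPA scheduling.

FLAVOR = "vnf.dataplane.small"      # hypothetical flavor name
VCPUS, RAM_MB, DISK_GB = 4, 8192, 20

extra_specs = {
    "hw:cpu_policy": "dedicated",   # pin each vCPU to a dedicated host core
    "hw:numa_nodes": "1",           # keep vCPUs and memory on one NUMA node
}

commands = [
    f"openstack flavor create --vcpus {VCPUS} --ram {RAM_MB} --disk {DISK_GB} {FLAVOR}",
] + [
    f"openstack flavor set --property {key}={value} {FLAVOR}"
    for key, value in extra_specs.items()
]

if __name__ == "__main__":
    # Print the commands instead of executing them so the sketch runs anywhere;
    # pipe the output to a shell on a system with OpenStack credentials.
    print("\n".join(commands))
```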

 

If you’d like more information on EPA, my colleague Adrian Hoban has written a blog post and whitepaper that offers a great exploration of the topic.

 

Improved Virtual Switching

Another enhancement is the integration of Open vSwitch 2.3.90 with the Data Plane Development Kit (DPDK) release 1.8 libraries. The addition of DPDK’s packet acceleration technology to the virtual switch through user space improves VM-to-VM data performance. ONP Release 1.4 also adds support for VFIO, which secures the user space driver environment more effectively.

 

I would also like to take the opportunity to mention the significant milestone of the OPNFV Arno release, which represents an important advancement toward NFV acceleration and adoption. Intel ONP contributes innovation with partners into OPNFV and, in parallel, will continuously consume technologies delivered by OPNFV. For additional information on Arno, go to https://networkbuilders.intel.com/onp.

 

I am proud to announce Intel ONP was awarded Most Innovative NFV Product Strategy (Vendor) by Light Reading Leading Lights Awards 2015 on June 8, 2015. The award is given to the technology vendor that has devised the most innovative network functions virtualization (NFV) product strategy during the past year. For more information, please view the announcement here.

 

 

ONP 1.4 delivers innovation and significant new value to NFV software developers and I encourage all of you to check out this new reference release. The first step is to download our new v1.4 data sheet.

By Chris Buerger

 

I had the opportunity to participate in Dave Ward’s (Cisco CTO) presentation at the fantastic OpenStack Summit in Vancouver last month. As part of his session, he made a strong statement that warmed my (open source) heart.

 

To repeat his assertion: “By creating a ‘whole stack’ framework with multiple, loosely coupled open source projects, we can satisfy the need in NFVI for advanced functionality without forking. Stop forking up NFV.”

 

Let’s unpack Dave’s statement.

 

In my mind, the ‘whole stack’ framework concept that Dave talked about is squarely aligned with the intent of mid-stream NFV open source initiatives such as the Open Platform for NFV Project. OPNFV’s mission is to accelerate NFV through the creation of an integrated open platform based on the adoption of leading upstream projects such as OpenStack and OpenDaylight (ODL).

 

The Arno release represents the first step to making this goal a reality by releasing fundamental enabling platform integration and validation technology into the community. Two years ago, Rose Schooler articulated a similar vision at Open Networking Summit 2013 in her announcement of the Intel Open Network Platform. We have been steadily releasing quarterly updates of ONP on Intel’s 01.org, and partners such as HP and Oracle are starting to leverage its capabilities.

 

The main point is the same. While there is no doubt that individual ingredient open source projects are critical and are being integrated/consumed into existing systems (or are augmenting/replacing proprietary solutions), the ‘whole stack’ is the ultimate catalyst and delivery framework to create a complete solution to rapidly deploy SDN and NFV.

 

The whole stack concept of ‘loosely coupled open source projects’ also has challenges. One challenge is to provide one or multiple common (generally technical) capabilities that act as linkages between the different projects. Think of it like a red thread running through the whole stack.

 

For Intel, one such connecting capability is the popular Data Plane Development Kit (DPDK), a set of open source libraries and drivers for fast packet processing. This capability is enabled from OpenStack, through ODL and all the way into the Open Virtual Switch (OVS), providing a consistent approach to accelerating packet performance and to optimizing workload placement across the entire NFV/SDN stack.

 

Having spent many years on both sides of the supply/demand equation for service provider software solutions, I am acutely aware of the benefits that tightly integrated, highly innovative proprietary software products can provide.

 

I do believe that creating unique NFV/SDN intellectual property (IP) and technical innovation should generate a commensurate financial return. What I think is different in NFV today, and what I believe is central to Dave’s and my argument, is that vendors will not be successful by creating forks of the open source projects making up the entire NFV stack.

 

The level and timeliness of adherence to the latest open source ingredients has risen to become a primary purchase decision-making criterion. It is perfectly okay to augment open source with unique, monetizable IP; just do not attempt to lock in a customer by having them buy into your fork.

 

We have already seen this in OpenStack with different commercial solution vendors touting the ‘pureness’ and increasingly short timelines to adopt the latest community releases. I also believe that the same movement will occur as OpenDaylight matures.

 

Rose quoted Intel founder Andy Grove in her presentation at ONS two years ago. Her point was that with SDN, the networking industry had reached a strategic inflection point that is changing the way the industry thinks and acts.

 

Now, we are entering a new phase in this process, moving from the open source ingredient level to the complete NFV stack level while elevating the smart integration and non-forked nature of these different components as a main benefit to end-users.  Slightly rephrasing Dave’s statement: Don’t fork NFV if you want to thrive.

By Caroline Chan, Wireless access segment manager, Network Platform Group, Intel

 

 

Make no mistake about it - there’s a lot of work ahead for telecommunications equipment manufacturers (TEMs) who expect to play in 5G. That’s because in some locations, 5G networks will need to provide “much greater throughput, much lower latency, ultra-high reliability, much higher connectivity density, and higher mobility range,” according to Next Generation Mobile Networks (NGMN) Alliance.

 

Put another way, “5G is expected to have 1,000 times the capacity of 4G,” said Dr. Xiang Jiying, Chief Scientist at ZTE, in an interview with the Wall Street Journal.

 

Here are just a few areas where Intel is focusing efforts to help satisfy 5G requirements and the associated challenges.


 

Greater base station compute performance

 

Today’s 4G base stations typically have two to four antenna elements, moving to hundreds with 5G. This will be accompanied by massive multi-user, multiple-input, multiple-output (MU-MIMO) technology that is expected to deliver Gbit per second data rates. These massive arrays will generate ultra-narrow beam patterns, which can be steered towards intended users while simultaneously suppressing energy to unintended users.

 

“Unfortunately, the promised benefits of large-scale MIMO come at the cost of significantly increased computational complexity in the base station. In particular, data detection in the large-scale MIMO uplink is expected to be among the most critical tasks in terms of complexity and power consumption, as the presence of hundreds of antennas at the base station and a large number of users will increase the computational complexity by orders of magnitude,” write Michael Wu and co-authors in a paper published in an IEEE journal.

 

To address the need for these higher compute levels in a space-constrained form factor, Intel is developing faster processors per Moore’s Law and investigating new instructions that could accelerate MIMO algorithms.

 

Higher frequency spectrum

 

The spectrum for 4G is in the range of 700 MHz to 2.5 GHz, but higher frequencies are needed for 5G in order to handle the anticipated traffic increase. “The entire spectrum from 10 GHz up to 100 GHz, which is well into the millimeter wave (mmW) range, is being considered for use by 5G mobile communication systems,” suggest Erik Dahlman and others from Ericsson. Since 5G equipment will probably be expected to handle 4G/LTE as well, support for the entire spectrum, from below 1 GHz up to mmW frequency bands, will likely be needed.

 

Contributing to this effort, Intel along with 15 telecoms operators, vendors, research centers, and academic institutions from eight European countries launched a collaborative project to develop millimeter-wave radio technologies for use in ultra-fast 5G radio networks. Members of MiWaveS, which stands for millimeter-wave small-cell access and backhauling, believe deployment of small cells with millimeter-wave access in dense urban areas will increase the flexibility of the access infrastructure as well as improve spectral and energy efficiency for low-power access points.

 

At the Mobile World Congress 2015 in Barcelona, Intel showcased a prototype of Anchor Booster, which anchors signaling at a virtualized macro base station and boosts data capacity for small cells that operate at 60 GHz.

 

Lower edge latency

 

The user-experienced data rate for 5G is expected to take a big jump over typical 4G rates of 5 to 12 Mbps with a 100 Mbps peak rate. According to Ericsson, 5G should be able to provide the following data rates for particular scenarios:

 

  • Indoor and dense outdoor environments: 10 Gbps and higher
  • Urban and suburban environments: several hundred Mbps
  • Sparsely-populated rural areas (developed and developing countries): at least 10 Mbps

 

These higher 5G target data rates translate into the need for even lower latency than 4G, with end-to-end latency going down from around 10 milliseconds (ms) to 1 ms or less. One way to address this requirement is to deploy more services at the edge, which can provide users faster access to data. From the perspective of the base station, this means virtualization and interrupt overhead must be cut by an order of magnitude.

 

Intel, Wind River, and VMware are working on a series of processor architecture and software enhancements aimed at improving latency and determinism of computing platforms. In addition, Intel co-founded the ETSI Mobile-Edge Computing (MEC) Industry Specification Group (ISG) to help drive technical standardization, and enable IT and cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile subscribers. Since the ISG inception in September 2014, its membership has grown to over 40 companies. Making this easier, the Intel® Network Edge Virtualization SDK (NEV SDK) is an open standards-based solution that delivers a ready-to-use, low-latency, virtualized environment for base stations.

 

For more information, learn more about Intel’s solutions for wireless infrastructure.

The 2015 SAP SAPPHIRE NOW+ ASUG Annual Conference, held May 5-7 in Orlando, was a huge success for SAP. And with all due respect, it was a pretty great show for Intel, too.

 

SAP SAPPHIRE NOW is SAP’s major annual show and convention (and, apparently, the world’s largest business technology event) attracting some 25,000 attendees from around the world. It’s also a showcase for SAP’s major partners, and Intel chose the event to highlight some very exciting news.

 

During the week of SAP SAPPHIRE NOW, Intel announced the launch of the new Intel® Xeon® processor E7 v3 product family. These processors combine enormous memory capacity with breakthrough performance and reliability features to deliver the power of in-memory computing for real-time insights and faster decision-making.

 

 

To underscore the deep collaboration between Intel and SAP, Diane Bryant, senior vice president and general manager of Intel’s Data Center Group, spoke in an executive video to explain how the latest version of SAP HANA*, SAP’s flagship in-memory database, can take advantage of the new Intel® Transactional Synchronization Extensions (Intel® TSX). Diane’s video was shown during SAP CTO Quentin Clark’s presentation in Chairman and Founder Hasso Plattner’s day 3 keynote. Specifically, Intel TSX in the Intel Xeon processor E7 v3 improves parallel processing in multi-threaded systems and greatly improves the performance of in-memory database processing.

 

With that kind of added transactional power—all backed by the latest Intel Xeon processor security features, power management and reliability advances—SAP HANA becomes a database platform for more than analytics.

 

With support for Intel TSX, SAP HANA delivers the speed of in-memory computing to transactional workloads by increasing the scalability of thread synchronization and reducing database lock occurrences in high-core-count systems. This adds another business reason to consolidate both analytical and transactional workloads onto a SAP HANA platform running on Intel Xeon processor E7 v3—in effect, placing a business’s entire data warehouse in-memory to reap huge gains in real-time business intelligence.

 

SAP has long been a leader in optimizing its software to take advantage of Intel’s latest processor technologies. The result is a powerful processor and platform alignment that reduces IT complexity and greatly improves performance. According to Diane, the latest SAP HANA platform on the Intel Xeon processor E7 generates up to a 6x performance improvement over previous versions. Additionally, in a separate Lenovo test of 8-socket systems, Lenovo demonstrated up to a 10x transaction performance advantage. This kind of performance gain can help bring a business’s best ideas to life.

 

Read this solution brief to learn how the new generation of Intel Xeon processor E7 running the SAP HANA platform turns real-time transactions and analytics into a real business advantage.

 

How does the latest Intel Xeon processor family look from an SAP point of view? Read this blog from SAP engineer Addi Brosig to learn how the newest generation of the Intel Xeon processor E7 family delivers exceptional performance for SAP HANA platforms.

 

 

While at SAPPHIRE, I was interviewed by Rick Speyer, Cisco’s global senior manager for big data and SAP. We discussed the long collaborative relationship between Intel and Cisco, and I had the chance to explain the value of the new Intel Xeon processor E7 v3 to Rick.

 

I also had the privilege of presenting with HP, Lenovo, Cisco and Hitachi, which are a few of the 12 OEMs that have over 400 certified SAP HANA appliances ready to scale on the Intel Xeon E7 processor family.

 

Follow me at @TimIntel and #TechTim for the latest news on Intel and SAP.



Applying big data analytics is showing exceptional results for enterprises, allowing them to uncover new insights to develop new services, increase competitiveness, and optimize operational efficiency. Turning an eye outside the enterprise, though, there are a number of mounting problems in large urban areas today that will continue to grow as more than two-thirds of the world’s population comes to live in cities by mid-century. Could data analytics improve the quality of health-care through more targeted care, reduce traffic congestion and pollution in growing cities, and improve crop yields to feed a burgeoning global population?

 

Intel and the Europe-based Teratec consortium think so. To drive the successful adoption of these technologies, Intel and Teratec are collaborating to launch a big data lab that will spearhead research initiatives focused on personalized health-care, smart cities, and precision agriculture. The lab’s efforts will accelerate research initiatives, develop proofs of concept, and promote real-world trials that will eventually lead to full-scale deployments.

 

Ultimately, we expect that these initiatives will lead to practical and repeatable big data solutions that improve the human condition, not just in Europe, but around the world. While the goals are lofty, the mission is clear, and the consequences of not pursuing it could be dire.

 

The lab is located within the Teratec science and technology park which has brought together more than 80 technological and industrial companies, laboratories and research centers, universities, and engineering schools. Through Teratec, these organizations are combining their resources for simulation and high performance computing to address specific and very challenging problems for various industries.

 

The Teratec members understand that rapid population growth over the coming decades will stress the infrastructure of cities, further tax sources of sustainable food and water resources, and pose new barriers to high-quality health-care. But they aren’t stopping at the problems—they are focused on finding solutions.

 

Consortium members see opportunities to use big data analysis and high-performance computing (HPC) systems to unlock new insights that will help us all understand how to create more sustainable cities and resources, as well as improve health-care.

 

Those are noble goals, for sure, but the reality on the ground is something else. Efforts to solve these challenges are held back by a lack of access to computing resources and data science know-how. At Intel, we want to help address these issues—and advance the Teratec mission. For the new big data lab, we are contributing servers powered by Intel compute, networking, and storage architecture, data analytics software from Intel and other companies we collaborate with, and the expertise of our data scientists to help accelerate the consortium’s work.

 

At Intel, we feel gratified to be in a position to drive these efforts forward through the contribution of leading-edge hardware and software and the often scarce skill sets of some very sharp data scientists.

 

Please stay tuned for updates on the accomplishments of the Teratec consortium. And in the meantime, for a closer look at the power of big data and advanced analytics tools to solve human problems, visit our Center of Possibility site.

OPNFV Arno was released this week and that’s big news to companies like Intel that are heavily invested in the success of NFV and of the Open Platform for NFV Project.


We’re already seeing a positive impact from NFV on the telecom market through a wide variety of successful proofs of concept and active involvement in solutions and standards development from every facet of the telecommunication industry.


But Arno, and future OPNFV releases, will help to speed the transition from PoC to industry adoption by providing a standardized, proven, open source NFV infrastructure that is suitable for all NFV applications.


Arno is the first instantiation of an OPNFV platform and comprises the NFV infrastructure (NFVI) and virtual infrastructure manager (VIM) components of the NFV architecture specified by the European Telecommunications Standards Institute (ETSI).


Future releases will also implement APIs to other NFV elements, which together form the basic infrastructure required for virtualized network functions (VNFs) and management and network orchestration (MANO) components. You can find all of the details about Arno in the press release that the OPNFV Project issued this week.



 


Entering the Telecom Supply Chain

 

Arno is a developer release, meaning the software now enters the industry supply chain. It is available for OSVs, ISVs, TEMs, and technology providers like Intel to further test, fix, integrate, and customize the platform – a process much like the one that computer operating systems go through.

One example of how the industry will add value to Arno is our own Intel® Open Network Platform architecture, one of the company’s key NFV/SDN initiatives. ONP is a software reference architecture that delivers proven solutions to service providers, enterprise IT, and cloud service providers to enable SDN/NFV deployment. Until now, the ONP reference architecture has been developed directly from the upstream open source components that make up Arno (OpenStack, OpenDaylight, etc.). Now, we’ll be able to integrate Arno itself with Intel hardware and software optimizations to accelerate NFV deployments.

From an NFV industry perspective, what’s really momentous about Arno is that it’s the first time that the industry has come together to build a standardized open source NFV platform. The OPNFV community is working with upstream projects like OpenStack, OpenDaylight, Open vSwitch, KVM, DPDK and others to coordinate continuous integration and testing of them while also addressing feature and performance gaps to make them carrier-grade.

Many of the best minds from the nearly 60 industry leaders that currently make up the OPNFV Project have been involved in Arno, pooling their collective expertise on developing what is key to an NFV infrastructure. This expertise spans industry leaders across telecom equipment manufacturers, operating system vendors, independent software providers, telecom and cloud service providers and technology providers like Intel.

NFV is facilitating a transition to open, multi-vendor software based solutions running on industry standard, high volume servers. Ensuring that every aspect of the NFV solution chain – from the infrastructure to the orchestration to the virtual network functions – can all interoperate, and can be automated, makes the rapid deployment of NFV-based services possible. And it all starts with a standardized and scalable NFV infrastructure.

I think back to the Mobile World Congress 2015 event in February and the five NFV proof-of-concept demos that we hosted in the Intel booth. Significant efforts by each PoC team went toward building the platform infrastructure (NFVI) and the VIM. Arno and future OPNFV releases will deliver a standardized, scalable NFVI and VIM infrastructure that can be absorbed quickly and solidified by the industry supply chain to provide commercially supported NFV platforms. Instead of having to worry about the infrastructure, VNF developers can focus their resources and investments on their own value-add functionality, and getting software to market more quickly.

I’m truly excited about Arno and the possibilities it unleashes. My congratulations to the OPNFV community for the Arno release and I look forward to more great things to come.

