The Data Stack


Data centers everywhere are dealing with a flood of video traffic, and the deluge is only going to grow in the years to come. Consider numbers like these: online video streaming viewership jumped by 60 percent in 2014 alone,[1] video delivery has become the number one source of Internet traffic, and by 2018 video is set to comprise 80 percent of the Internet’s traffic.[2]

 

And it’s not just YouTube videos and Netflix streaming that are causing problems for data center operators. Many organizations are also dealing with the demands of complex 3D design applications and massive data sets that are delivered from secure data centers and used by design teams scattered around the world.

 

To keep pace with current and future growth in these graphics-intensive workloads, data center operators are looking to optimize their data center computing solutions specifically to handle an ever-growing influx of graphics-intensive traffic.

 

That’s the idea behind the new Intel® Xeon® processor E3-1200 v4 family with integrated Intel® Iris™ Pro graphics P6300 — Intel’s most advanced graphics platform. This next-generation processor, unveiled today at the Computex 2015 conference, also features Intel® Quick Sync Video, which accelerates portions of video transcoding software by running them in hardware.

 

This makes the Intel Xeon processor E3-1200 v4 family an ideal solution for streaming high volumes of HD video. It offers up to 1.4 times the transcoding performance of the Intel® Xeon® processor E3-1200 v3 family[3] and can handle up to 4,300 simultaneous HD video streams per rack.

 

The new Intel Xeon processor E3-1200 v4 family is also a great processing and graphics solution for organizations that need to deliver complex 3D applications and large datasets to remote workstations. It supports up to 1.8 times the 3D graphics performance of the previous-generation Intel Xeon processor E3 v3 family.[4]

 

I’m pleased to say that the new platform already has a lot of momentum with our OEM partners. Companies designing systems around the Intel Xeon Processor E3-1200 v4 family include Cisco, HP, Kontron, Servers Direct, Supermicro, and QCT (Quanta Cloud Technology).

 

Early adopters of the Iris Pro graphics-enabled solution include iStreamPlanet, which streams live video to a wide range of user devices via its cloud delivery platform. In fact, they just announced a new 1080p/60 fps service offering:

 

“We’re excited to be among the first to take advantage of Intel’s new Xeon processors with integrated graphics that provide the transcode power to drive higher levels of live video quality, up to 1080p/60 fps, with price to performance gains that allow us to reach an even broader market.” — Mio Babic, CEO, iStreamPlanet

 

The Intel Xeon processor E3-1200 v4 product family also includes Intel® Graphics Virtualization Technology for direct assignment (Intel® GVT-d). Intel GVT-d directly assigns the processor’s integrated graphics capabilities to a single user to improve the quality of remote desktop applications.

 

Looking ahead, the future is certain to bring an ever-growing flood of video traffic, along with ever-larger 3D design files. That’s going to make technologies like the Intel Xeon processor E3-1200 v4 family and Iris Pro graphics P6300 all the more essential.

 

For a closer look at this new data center graphics powerhouse, visit intel.com/XeonE3

 

 

 

[1] WSJ: “TV Viewing Slips as Streaming Booms, Nielsen Report Shows.” Dec. 3, 2014.

[2] Sandvine report. 2014.

[3] Measured 1080p30 20MB streams: E3-1286L v3=10, E3-1285L v4=14.

[4] Measured 3DMark® 11: E3-1286L v3=10, E3-1285L v4=14.

For cloud, media, and communications service providers, video delivery is now an essential service offering—and a rather challenging proposition.

 

In a world with a proliferation of viewing devices—from TVs to laptops to smart phones—video delivery becomes much more complex. To successfully deliver high-quality content to end users, service providers must find ways to quickly and efficiently transcode video from one compressed format to another. To add another wrinkle, many service providers now want to move transcoding to the cloud, to capitalize on cloud economics.

 

That’s the idea behind innovative Intel technology-based solutions showcased at the recent Streaming Media East conference in New York. Event participants had the opportunity to gain a close-up look at the advantages of deploying virtualized transcoding workflows in private or public clouds, with the processing work handled by Intel® architecture.

 

I had the good fortune to join iStreamPlanet for a presentation that explained how cloud workflows can be used to ingest, transcode, protect, package, stream, and analyze media on-demand or live to multiscreen devices. We showed how these cloud-based services can help communications providers and large media companies simplify equipment design and reduce development costs, while gaining the easy scalability of a cloud-based solution.

 

iStreamPlanet offers cloud-based video-workflow products and services for live event and linear streaming channels. With its Aventus cloud- and software-based live video streaming solution, the company is breaking new ground in the business of live streaming. Organizations that are capitalizing on iStreamPlanet technology include companies like NBC Sports Group as well as other premium content owners, aggregators, and distributors.

 

In the Intel booth Vantrix showcased a software-defined solution that enables service providers to spread the work of video transcoding across many systems to make everything go a lot faster. With the company’s solution, transcoding workloads that might otherwise take up to an hour to run can potentially be run in just seconds.

 

While they meet different needs, solutions from iStreamPlanet and Vantrix share a common foundation: the Intel® Xeon® processor E3-1200 product family with integrated graphics processing capabilities. By making graphics a core part of the processor, Intel is able to deliver a dense, cost-effective solution that is ideal for video transcoding, cloud-based or otherwise.

 

The Intel Xeon processor E3-1200 product family supports Intel® Quick Sync Video technology. This groundbreaking technology enables hardware-accelerated transcoding that delivers better performance than software-only transcoding—all without sacrificing quality.

 

Want to make this story even better? To get a transcoding solution up and running quickly, organizations can use the Intel® Media Server Studio, which provides development tools and libraries for developing, debugging, and deploying media solutions on Intel-based servers.
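
To make the transcode step concrete, here's a minimal sketch that drives a Quick Sync-accelerated transcode from Python using ffmpeg's QSV encoder, one readily available front end to the same hardware; Media Server Studio exposes it through its own C/C++ libraries. The file names, renditions, and bitrates below are illustrative assumptions, not details from any solution described here.

```python
# Hypothetical sketch: batch transcoding on Quick Sync hardware via ffmpeg's
# QSV encoder. Renditions and file names are illustrative only.
import subprocess

RENDITIONS = [("720p", "1280x720", "2500k"), ("1080p", "1920x1080", "5000k")]

def transcode(src: str) -> None:
    for name, size, bitrate in RENDITIONS:
        cmd = [
            "ffmpeg", "-y",
            "-i", src,                  # input mezzanine file
            "-c:v", "h264_qsv",         # H.264 encode on Quick Sync hardware
            "-s", size,                 # output frame size for this rendition
            "-b:v", bitrate,            # target video bitrate
            "-c:a", "copy",             # leave the audio track untouched
            f"{src.rsplit('.', 1)[0]}_{name}.mp4",
        ]
        subprocess.run(cmd, check=True)

if __name__ == "__main__":
    transcode("master.mp4")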

 

With offerings like Intel Media Server Studio and Intel Quick Sync Video Technology, Intel is enabling a broad ecosystem that is developing innovative solutions that deliver video faster, while capitalizing on the cost advantages of cloud economics.

 

For a closer look at the Intel Xeon processor E3-1200 product family with integrated graphics, visit www.intel.com/XeonE3.

In today’s world, engineering teams can be located just about anywhere in the world, and the engineers themselves can work from just about any location, including home offices. This geographic dispersion creates a dilemma for corporations that need to arm engineers with tools that make them more productive while simultaneously protecting valuable intellectual property—and doing it all in an affordable manner.

 

Those goals are at the heart of hosted workstations that leverage new combinations of technologies from Intel and Citrix*. These solutions, unveiled this week at the Citrix Synergy 2015 show in Orlando, allow engineers to work with demanding 3D graphics applications from virtually anywhere in the world, with all data and applications hosted in a secure data center. Remote users can work from the same data set, with no need for high-volume data transfers, while enjoying the benefits of fast, clear graphics running on a dense, cost-effective infrastructure.

 

These solutions were in the spotlight at Citrix Synergy, where event participants had the opportunity to see demos of remote workstations capitalizing on the capabilities of the Intel® Xeon® processor E3-1200 product family and Citrix XenApp*, XenServer*, XenDesktop*, and HDX 3D Pro* software.

 

Show participants also had a chance to see demos of graphics passthrough with Intel® GVT-d in Citrix XenServer* 6.5, running Autodesk* Inventor*, SOLIDWORKS*, and Autodesk Revit* software. Other highlights included a technology preview of Intel GVT-g with Citrix HDX 3D Pro running Autodesk AutoCAD*, Adobe* Photoshop*, and Google* Earth.

 

Intel GVT-d and Intel GVT-g are two of the variants of Intel® Graphics Virtualization Technology. Intel GVT-d allows direct assignment of an entire GPU’s capabilities to a single user—it passes all of the native driver capabilities through the hypervisor. Intel GVT-g allows multiple concurrent users to share the resources of a single GPU.

 

The new remote workstation solutions showcased at Citrix Synergy build on a long, collaborative relationship between engineers at Intel and Citrix. Our teams have worked together for many years to help our mutual customers deliver a seamless mobile and remote workspace experience to a distributed workforce. Users and enterprises both benefit from the secure and cost-effective delivery of desktops, apps, and data from the data center to the latest Intel Architecture-based endpoints.

 

For a closer look at the Intel Xeon processor E3-1200 product family and hosted workstation infrastructure, visit intel.com/workstation.

 

 

Intel, the Intel logo, Intel inside, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Citrix, the Citrix logo, XenDesktop, XenApp, XenServer, and HDX are trademarks of Citrix Systems, Inc. and/or one of its subsidiaries, and may be registered in the U.S. and other countries. * Other names and brands may be claimed as the property of others.

Today, 70 percent of US consumer Internet traffic is video, and it’s growing every day, with over-the-top (OTT) providers delivering TV and movies to consumers and with broadcasters and enterprises streaming live events. Cloud computing is changing the landscape for video production as well. Much of the work that used to require dedicated workstations is being moved to servers in data centers and offered remotely by cloud service providers and private cloud solutions. As a result, the landscape for content creation and delivery is undergoing significant changes. The National Association of Broadcasters (NAB) show in Las Vegas highlights these trends. And Intel will be there showing how we help broadcasters, distributors, and video producers step up to the challenges.

 

Intel processors have always been used for video processing, but today's video workloads place new demands on processing hardware. The first new demand is for greater processing performance. As video data volume explodes, encoding schemes become more complex, and processing power becomes more critical. The second demand is for increased data center density. As video processing moves to servers in data centers, service cost is driven by space and power. And the third demand is for openness. Developers want language- and platform-independent APIs like OpenCL* to access CPU and GPU graphics functions. The Intel® Xeon® processor E3 platform with integrated Intel® Iris™ Pro Graphics and Intel® Quick Sync Video transcoding acceleration provides the performance and open development environment required to drive innovation and create the optimized video delivery systems needed by today's content distributors. And it does so with unparalleled density and power efficiency.

 

The NAB 2015 show provides an opportunity for attendees to see how these technologies come together in new, more powerful industry solutions  to deliver video content across the content lifecycle—acquire, create, manage, distribute, and experience.

 

We've teamed with some of our key partners at NAB 2015 to create the StudioXperience showcase, which demonstrates a complete end-to-end video workflow across the content lifecycle. Waskul TV will generate real-time 4K video and pipe it into a live production facility featuring Xeon E3 processors in an HP Moonshot* server and Envivio Muse* Live. The workflow is divided between on-air HD production for live streaming and 4K post-production for editorial and on-demand delivery. The cloud-based content management and distribution workflow is provided by Intel-powered technologies from technology partners to create a solution that streams our content to the audience via Waskul TV.

 

Other booths at the show let attendees drill down into some of the specific workflows and the technologies that enable them. For example, "Creative Thinking 800 Miles Away—It's Possible" lets attendees experience low-latency remote access for interactive creation and editing of video content in the cloud. You'll see how Intel technology lets you innovate and experiment with modeling, animation, and rendering effects—anywhere, anytime. And because the volume of live video content generated by broadcasters, service providers, and enterprises continues to explode, we need faster and more efficient ways of encoding it for streaming over the Internet. So Haivision's "Powerful Wide-Scale Video Distribution" demo will show how their Intel-based KulaByte* encoders and transcoders can stream secure, low-latency HD video at extremely low bitrates over any network, including low-cost, readily available, public Internet connections.

 

To learn more about how content owners, service providers, and enterprises are using Intel Xeon processor E3-based platforms with integrated HD Graphics and Intel Quick Sync Video to tame the demand for video, check out the interview I did on Intel Chip Chat recently. And even if you're not attending NAB 2015, you can still see it in action. I'll be giving a presentation Tuesday, April 14 at 9:00 a.m. Pacific time. We'll stream it over the very systems I've described, and you can watch it on Waskul.TV. Tune in.

 

 

 

 

© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

TV binge watching is a favorite pastime of mine. Over an eight-week span between February and March of this year, I binge-watched five seasons of a TV series. I watched it on my Ultrabook, on a tablet at the gym, and even a couple of episodes on my smart phone at the airport. It got me thinking about how the episodes get to me, as well as my viewing experience on different devices.

 

Let me use today’s HP Moonshot server announcement to talk about high-density servers. You may have seen that HP today announced the Moonshot ProLiant m710 cartridge. The m710, based on the Intel® Xeon® processor E3-1284L v3 with built-in Intel® Iris Pro Graphics P5200, is the first microserver platform to support Intel’s best media and graphics processing technology. The Intel® Xeon® processor E3-1284L v3 is also a great example of how Intel continues to deliver on its commitment to provide our customers with industry leading silicon customized for their specific needs and workloads.

 

Now back to video delivery. Why does Intel® Iris™ Pro Graphics matter for video delivery? The 4K video transition is upon us. Netflix already offers mainstream content like Breaking Bad in Ultra HD 4K. Devices with different screen sizes and resolutions are proliferating rapidly. The Samsung Galaxy S5 and iPhone 6 Plus smartphones have 1920x1080 Full HD resolution, while the Panasonic TOUGHPAD 4K boasts a 3840x2560 Ultra HD display. And the sheer volume of video traffic is growing. According to Cisco, streaming video will make up 79% of all consumer internet traffic by 2018 – up from 66% in 2013.

 

At the same time, the need to support higher quality and more advanced user experiences is increasing. Users have less tolerance for poor video quality and streaming delays. The types of applications that Sportvision pioneered with the yellow 10-yard marker on televised football games are only just beginning. Consumer depth cameras and 3D video cameras are just hitting the market.

 

For service providers to satisfy these video service demands, network- and cloud-based media transcoding capacity and performance must grow. Media transcoding is required to convert video for display on different devices, to reduce the bandwidth consumed on communication networks, and to implement advanced applications like the yellow line on the field. Traditionally, high-performance transcoding has required sophisticated hardware purpose-built for video applications. But since the 2013 introduction of the Intel® Xeon® processor E3-1200 v3 family with integrated graphics, application and system developers have been able to create very high performance video processing solutions using standard server technology.

 

These Intel Xeon processors support Intel® Quick Sync Video and applications developed with the Intel® Media Server Studio 2015. This technology enables access to acceleration hardware within the Xeon CPU for the major media transcoding algorithms. Hardware acceleration can provide a dramatic improvement in processing throughput over software-only approaches, at a much lower cost than customized hardware solutions. The new HP Moonshot ProLiant m710 cartridge is the first server to incorporate both Intel® Quick Sync Video and Intel® Iris Pro Graphics, making it a great choice for media transcoding applications.

As video and other media take over the internet, economical, fast, high-quality transcoding of content becomes critical to meeting user demands. Systems built with special-purpose hardware will struggle to keep up. A server solution like the HP Moonshot ProLiant m710, built on standard Intel architecture technology, offers the flexibility, performance, cost, and future-proofing the market needs.

 

In part B of my blog I’m going to turn the pen over to Frank Soqui. He’s going to switch gears and talk about another workload – remote workstation application delivery. Great processor graphics aren’t just for transcoding and delivering TV shows like Breaking Bad; they’re also great at delivering business applications to devices remotely.

Cloud computing models are based on gaining maximum yields for all resources that go into the data center. This is one of the keys to delivering services at a lower cost. And power is one of the biggest bills in a cloud environment. Cloud data centers now consume an estimated 1–2 percent of the world’s energy.[1] Numbers like that tell you the cloud’s success hinges on aggressive power management.

 

So let’s talk about some of the steps you can take to operate a more efficient cloud:

 

  • Better instrumentation. The basis for intelligent power management in your data center is better instrumentation at the server level. This includes instrumentation for things like CPU temperature, idle and average power, and power and memory states. Your management capabilities begin with access to this sort of data.

 

  • Better power management at the server and rack level. Technologies like dynamic power capping and dynamic workload power distribution can help you reduce power consumption and place more servers into your racks. One Intel customer, Baidu.com, increased rack-level capacity by up to 20 percent within the same power envelope when it applied aggregated power management policies. For details, see this white paper.

 

  • Better power policies across your data center. Put in place server- and rack-level power policies that work with the rest of the policies in your data center. For example, you might allocate more power capacity to a certain set of servers that runs mission-critical workloads, and cap the power allocated to less important workloads. This can help you reduce power consumption while still meeting your service-level agreements. (A sketch of how these pieces fit together follows this list.)

 

  • Better power management at the facilities level. There are lots of things you can do to drive better efficiency across your data center. One of those is better thermal management through the use of hot and cold server aisles. Another is thermal mapping, so you can identify hot and cold spots in your data center and make changes to increase cooling efficiency.
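
Pulling the first three items together, here is a hypothetical sketch of a policy loop that reads per-server instrumentation and applies group-level power caps. The Server class, the POWER_POLICY table, and all wattages are illustrative stand-ins, not the interface of any real management product.

```python
# Hypothetical sketch of a policy loop: read instrumentation, then apply
# per-group power caps. The Server class and its fields stand in for
# whatever management API you actually use; none of these names come from
# a real library.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    group: str        # e.g., "mission-critical" or "batch"
    power_watts: int  # current draw, from platform instrumentation

# Policy: watts allowed per server, by workload group.
POWER_POLICY = {"mission-critical": 400, "batch": 250}

def apply_caps(servers: list[Server]) -> None:
    for s in servers:
        cap = POWER_POLICY[s.group]
        if s.power_watts > cap:
            # In a real deployment this would call the management API.
            print(f"capping {s.name} ({s.group}) from {s.power_watts}W to {cap}W")

apply_caps([
    Server("db01", "mission-critical", 420),
    Server("batch07", "batch", 310),
])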

 

Ultimately, the key is to look at power the way you look at all other resources that go into your data center: seek maximum output for all input.

 


 

[1] Source: Jonathan Koomey, Lawrence Berkeley National Laboratory scientist, quoted in the New York Times Magazine. “Data Center Overload,” June 8, 2009.

Historically, IT data centers operated like warehouses that focused on housing the equipment brought by application developers. Today, these passive warehouses are being converted into dynamic factories that focus on achieving the maximum “application yield” from all of the resources that go into the factory. This yield is much like that of an auto factory that must produce a variety of models with the same shared set of resources.

 

There are two fundamental ways to increase the yield from your data center: efficiency by design and efficiency by operations.

 

Efficiency by design is all about designing for optimal output. One example: As you update your infrastructure over time, your new servers, storage systems, and networking equipment should deliver measurable increases in throughput and power efficiency. With each generation of technology, you should get more yield out of your equipment investments.

 

Efficiency by operations is all about managing the “resource inventory” of the data center through automation. The key here is to use automated solutions to carry out time-consuming tasks that were previously handled manually. Automation not only helps your administrators increase their productivity, it helps your data center managers ensure that the inventory of compute, storage, and network resources is used to its maximum capacity.

 

For example, you can use automated tools to:

  • Move demanding workloads to systems with excess capacity (see the sketch after this list)
  • Allocate additional storage to applications that are running out of disk space and reduce storage allocated to those applications that are not using it
  • Cap the power that flows to certain workloads without impacting performance
  • Update security tools and firewall settings on user systems
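
As a concrete illustration of the first item, here is a minimal greedy-rebalancing sketch in plain Python; the host names and capacity numbers are invented for illustration, and a real tool would pull them from your inventory and telemetry systems.

```python
# Hypothetical sketch: greedily place the most demanding workloads onto the
# hosts with the most spare capacity. Numbers and names are illustrative.
def rebalance(hosts: dict[str, float], workloads: dict[str, float]) -> dict[str, str]:
    """hosts: free CPU capacity per host; workloads: CPU demand per workload."""
    placement = {}
    for wl, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        # Pick the host with the most headroom that still fits the workload.
        host = max(hosts, key=hosts.get)
        if hosts[host] >= demand:
            placement[wl] = host
            hosts[host] -= demand
    return placement

print(rebalance(hosts={"h1": 8.0, "h2": 3.0}, workloads={"web": 2.0, "etl": 6.0}))
# {'etl': 'h1', 'web': 'h2'}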

 

This is just a small sample of the actions you can take to increase the yield from your data center assets—including people, equipment, software, and power. There are many other things you can do. Just keep your eyes on the factory manager’s prize: maximum output for all resources that go into your facility.

How do you rate the maturity level of your power infrastructure?

 

As data centers grow in size and density, they take an ever-larger bite out of the energy pie. Today, data centers eat up 1.2 percent of the electricity produced in the United States. This suggests that IT organizations need to take a hard look at the things they are doing to operate more efficiently.

 

How do you get started down this path? Consider the following four steps toward a more energy-efficient data center. The degree to which you are doing these things is an indication of your power management maturity.

 

1. Power usage effectiveness (PUE) measurements: Are you using PUE measurements to determine the energy efficiency of your data center? PUE is a measure of how much power is coming into your data center versus the power that is used by your IT equipment. You can watch your PUE ratio to evaluate your progress toward a more energy-efficient data center. To learn more about PUE, see The Green Grid.
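
As a quick worked example, with illustrative numbers rather than measurements from any particular facility:

```python
# PUE = total facility power / IT equipment power. A facility drawing
# 1,800 kW to run 1,200 kW of IT gear has a PUE of 1.5; everything above
# 1.0 is overhead (cooling, power distribution, lighting).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

print(pue(1800, 1200))  # 1.5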

 

2. Equipment efficiency: Are you buying the most efficient equipment? Deploying more efficient equipment is one of the most direct paths to power savings. One example: You can realize significant power savings by using solid-state drives instead of power-hungry, spinning hard-disk drives. For general guidance in the U.S., look for Energy Star ratings for servers.

 

3. Instrumentation: Are your systems instrumented to give you the information you need? The foundation of more intelligent power management is advanced instrumentation. This is a pretty simple concept. To understand your power issues and opportunities, you have to have the right information at your fingertips. For a good example, see Intel Data Center Manager.

 

4. Policy-based power management: Have you implemented policy-based power management? This approach uses automated tools and policies that you set to drive power efficiencies across your data center. A few examples: You can shift loads to underutilized servers, throttle servers and racks that are idle, and cap the power that is allocated to certain workloads.

 

If you can answer yes to all of these questions, you’re ahead of the power-management maturity curve. But even then, don’t rest on your laurels. Ask yourself this one additional question: Could we save more by doing all of these things to a greater degree?

 

For a closer look at your power management maturity level, check out our Data Center Power Management Maturity Model. You can find it on the Innovation Value Institute site at http://ivi.nuim.ie/.

Years ago, data center managers didn’t think a whole lot about power expenditures. They were just a cost of doing business. But today, power expenditures have grown to the point that they are overwhelming IT budgets. Just how bad has it gotten? An IDC study conducted in Europe found that the cost of powering data centers now exceeds the costs of acquiring new networking hardware or new external disk storage.[1]

 

So let’s talk about five steps you can take to corral runaway power costs.

 

1. Dynamic power capping. With some workloads you can cap power without sacrificing performance. This might save you up to 20 watts per server. Power capping tends to work best with I/O intensive workloads, where CPUs spend a lot of time waiting for data. We’ve seen outstanding results with IT organizations that take a workload-centric approach to power capping.

 

2. Dynamic workload power distribution. When you have servers that are not fully loaded, you have the opportunity to consolidate virtualized workloads onto fewer machines; the freed-up servers can be put in a low-power state until they are called back into service. VMware’s Distributed Power Management (DPM) tool is the tip of the iceberg on this model.


3. Power capping to increase data center density. When server racks are under-populated, you’re probably paying for power capacity that you aren’t using. Intelligent power node management allows you to throttle system and rack power based on expected workloads and fit more servers in each rack (see the sketch after this list).

 

4. Optimized server platforms. Optimized server platforms can give you more bang for your energy buck. Here’s one example: When cores within a CPU are idling, they are still drinking up power. Integrated power gates on processors allow idling cores to drop to near-zero power consumption.


5. Solid state drives. Today, lots of people are talking about performance gains with solid state drives. But that’s only part of the story. In addition to performance benefits, solid state drives can save you a bundle on power when compared to standard hard-disk drives.
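
And here is the simple arithmetic behind step 3, using hypothetical wattages; the point is the shape of the calculation, not the specific numbers.

```python
# Hypothetical sketch: capping per-server power lets you fit more servers
# under a fixed rack power budget. All wattages are illustrative.
RACK_BUDGET_W = 8000

def servers_per_rack(per_server_w: int) -> int:
    return RACK_BUDGET_W // per_server_w

print(servers_per_rack(350))  # 22 servers uncapped
print(servers_per_rack(280))  # 28 servers with a 280 W cap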

 

And those runaway power costs we were talking about? Let’s go rope them in.

 

The first output from the Intel Cloud Builder Program:

 

For cloud service providers, hosters, and enterprise IT organizations looking to build their own cloud infrastructure, the decision to use a cloud for the delivery of IT services is best made by starting with the knowledge and experience gained from previous work. This white paper gathers into one place a complete example of running a Canonical Ubuntu Enterprise Cloud on Intel®-based servers, complete with detailed scripts and screen shots. Using the contents of this paper should significantly reduce the learning curve for building and operating your first cloud computing instance.


Since the creation and operation of a cloud requires integration and customization to existing IT infrastructure and business requirements, it is not expected that this paper can be used as-is. For example, adapting to existing network and identity management requirements is out of scope for this paper. Therefore, it is expected that the user of this paper will make significant adjustments to the design to meet specific customer requirements. This paper is intended as a starting point for that journey.

 

http://software.intel.com/en-us/articles/intel-cloud-builder/

 

http://blog.canonical.com/?p=348

Learn about Intel IT’s proof-of-concept testing and total cost of ownership (TCO) analysis to assess the virtualization capabilities of Intel® Xeon® processor 5500 series. Our results show that, compared with the previous server generation, two-socket servers based on Intel Xeon processor 5500 series can support approximately 2x as many VMs for the same TCO.

 

http://communities.intel.com/servlet/JiveServlet/downloadBody/3425-102-1-5699/VirtualizationXeon5500.pdf

So, it's not clear from this posting whether VMware's "Code Central" was announced or escaped, but this looks to be a very valuable repository for sharing vSphere scripts.

 

I'm a recent convert to the wonders of creating new capabilities through the vSphere SDK. Our team has been using it to prototype some interesting new usages for power-aware virtualization that we hope will eventually find their way into the VMware Distributed Power Management (DPM) tool.

 

The most interesting usage is what we call "platooning," where different server resource pools are kept in different power states, from fully powered on through power-capped to standby and fully off. Servers are moved from one platoon to the next (and workloads are migrated onto them) based on a set of policies for required application capacity headroom and power-on latency as load increases. Our belief is that, by carefully designing these policies, we'll be able to save significant power across the data center without impacting peak throughput or response time of any of the applications.
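
For a feel of what the policy logic might look like, here is a hypothetical sketch; the platoon names, thresholds, and sizing rules are illustrative only and don't reflect our actual prototype.

```python
# Hypothetical sketch of a platooning policy. Platoons are ordered from
# most to least available; the sizing rules and thresholds are invented
# for illustration.
PLATOONS = ["on", "capped", "standby", "off"]  # decreasing readiness

def target_counts(total: int, load: float, headroom: float = 0.2) -> dict[str, int]:
    """Decide how many servers belong in each platoon for a given load (0-1)."""
    on = min(total, int(total * (load + headroom)) + 1)  # keep headroom fully on
    capped = min(total - on, max(1, total // 10))        # warm reserve, fast to wake
    standby = min(total - on - capped, total // 5)       # slower, near-zero power
    off = total - on - capped - standby
    return {"on": on, "capped": capped, "standby": standby, "off": off}

print(target_counts(total=100, load=0.55))
# {'on': 76, 'capped': 10, 'standby': 14, 'off': 0}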

 

Unfortunately, we don't have the data to demonstrate these savings yet. That's where the SDK comes in. We're able to prototype the usage, collect the data, validate the feasibility and, even if it never shows up in DPM, still be able to implement it in production.

 

We're just coming up to speed on the SDK, having completed our first "Hello World" integration with it, but we think it's going to be a very valuable tool for experimenting and going to production with many new usages. I'm hoping Code Central provides a good source of examples to help bootstrap our development.
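
For anyone curious what a "Hello World" against the vSphere API looks like, here is a minimal sketch using the Python bindings (pyVmomi); our own prototype may use different bindings, and the host name and credentials below are placeholders.

```python
# Minimal "Hello World" against the vSphere API with pyVmomi: connect and
# list each host's power state, the kind of instrumentation a platooning
# policy would consume. Host and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name, host.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)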

What if every server in your virtualized data center was driving 10Gbps of traffic?

My team just completed a test with an end user where we drove nearly 10Gbps of traffic over Ethernet through a single Xeon 5500-based server running ESX 4.0. The workload was secure FTP. Our results will be published in the next 30 days. We’ve seen 10Gbps through a server in several other cases (notably, video streaming and network security workloads), but this is the first time we’ve really tried to do a 10Gbps “enterprise” workload in a virtualized environment. It took a fair amount of work to get the network and the solution stack to work (we had to get a highly threaded open source SSH driver from the Pittsburgh Supercomputing Center, for example, to make it scale). We also found good value in some of our specialized network virtualization technologies (i.e., the VT-c feature known as VMDq). But, regardless, by working at it moderately diligently, we got it to work at 10Gbps and don’t see any real barriers to doing that in real production environments.

We also found that the solution throughput is not particularly CPU-bound; it’s “solution stack bound.” That means workloads that are more “interesting” than virtualized secure FTP and video streaming are likely to be able to source and sink more than 10Gbps per server, too. And when we get to converged fabrics like iSCSI and FCoE that put the storage traffic on the same network path (or at least the same medium) as the LAN traffic, we’d expect the application need for higher Ethernet throughput to increase.

So what? Well, if you buy the fact that virtualized servers can do interesting things and still drive 10Gbps of Ethernet traffic, you have to wonder what’s going to happen to the data center backbone network. If you have racks with 20 servers each, putting out a nominal 6Gbps of Ethernet traffic, each rack will have a flow of 120Gbps, and a row of 10 racks will need to handle 1.2 Tbps. I’m not sure what backbone data center network architecture will be able to handle that kind of throughput. Fat-tree architectures help, especially if there are lots of flows between servers in close proximity to each other in the same data center. But fat-tree networks are very new and not widely deployed. Thoughts?

So, building off Bob's post from September (http://communities.intel.com/openport/thread/1905), I contend that, at least from a performance perspective, with the new capabilities in the next generation of virtualized infrastructure coming this year, the answer is yes!

As we look at the availability of ESX 4.0 from VMware and servers based on Intel's Nehalem-generation Xeon processors with new VT features for CPU, chipset, and I/O later this year, we're not seeing any mainstream applications that can't be virtualized. In the past, the mainstream apps we've consistently heard (allegedly) couldn't be virtualized are SAP and other complex business processing apps, mid-sized databases, and large enterprise email systems like Microsoft Exchange. While it's a little early to declare victory, we're thinking the next generation of technology will be more than good enough to run these workloads in most environments. We're currently running tests on the latest generation of infrastructure software and not seeing any reason why most of these apps won't be capable of being virtualized over the next couple of years.

Anyone think differently? Why?

Note, other issues remain:

  • Even if I don't run the applications on the same physical server as other applications, is the virtual infrastructure secure and reliable enough to support these important applications?
  • And, if I try to consolidate the app with other apps, can I be guaranteed that the app won't interfere with, or be interfered with by, other apps? Interference could be either unintentional resource contention or intentional security attacks.
  • Do I have the tools and support infrastructure to run such a critical application in a virtual infrastructure?


I'm making no claims on whether these particular challenges have been solved but I would be interested in whether they are real issues for you.

What do you think?

 

Jim Blakley

 

<Note: This is a duplicate of the blog I posted at VMworld Europe last week. I'll pull over the responses as replies to this.>

So, after four days of VMworld, there were two announcements that really resonated with me as an end-user proxy within Intel. For those who don't know me, my team's role is to look at the new technologies that are coming (or might come) from Intel through the eyes of the end user. We try to understand and quantify whether end users really find any value in these technology innovations and, through hands-on work in our own labs and directly in end-user IT environments, identify any technical and ecosystem barriers to adoption. When we find barriers, we work across the industry to address them. My team is specifically focused on the data center, and we have a big focus on data center virtualization. So, yes, the vision that Paul Maritz outlined in his keynote makes absolute sense to me. Plenty has been written about the keynotes (and maybe I'll add my own thoughts in a bit). I wanted to talk about a couple of specific things that Paul mentioned that, to me, were very encouraging and significant.

 

Technology innovations that directly and specifically address an expressed customer need don't always come to market quickly, especially if they require coordinated effort across different companies. I also don't believe the new conventional wisdom that, with virtualization, "the hardware doesn't matter." Two announcements at VMworld demonstrate great examples of the former and give the lie to the latter.

 

 

The first announcement was Cisco's unveiling of the Nexus 1000V virtual switch. One of the big issues for IT shops deploying virtualization has been that it's next to impossible to easily integrate virtual networking into the existing network management processes, roles, and responsibilities. It's been the CCNEs who have enabled physical networks to be managed for reliability, security, and compliance, and, until now, virtual switches have not allowed the separation of duties and transfer of skills that are embodied in the CCNEs. The Nexus 1000V, a virtual softswitch that will launch next year (according to the demonstrator in their booth), will run side-by-side with the VMware vSwitch inside ESX Server and give CCNEs full Nexus OS access for configuring and monitoring the vSwitch using the same interfaces they're used to on the "hard switches." It can also enforce a separation of duties between the network administrator and the server administrator. This issue is something we've heard repeatedly from end users as a barrier to adoption for virtualization 2.0 in the enterprise, and Cisco and VMware deserve a lot of credit for collaborating closely to make this a reality. (BTW, it also looks to me like the first tangible evidence that higher-level networking functionality is beginning to migrate back to where it started: to software on general-purpose computers. Perhaps more on that later.)

 

 

The second was the announcement by VMware of Enhanced VMotion and by Intel of VT FlexMigration. (Sorry if this part seems a little self-serving from an Intel guy.) These two capabilities, working together, address another key need of end users. Until now, each new generation of CPU needed to be maintained in a separate resource pool in the data center. If you didn't, and you VMotioned backward from a new generation to an old one, it was possible that the guest application would make use of an instruction that didn't exist in the older generation. So, that kind of migration was not permitted. This restriction meant that end users had to either grow resource pools by purchasing older-generation hardware (and forgo the energy efficiency and performance gains of the new hardware) or live with increasing fragmentation into resource "puddles." With Enhanced VMotion and FlexMigration, the hypervisor can now ensure that the backward-migrated VM doesn't use any of those new instructions. Voila, the backward migration can be allowed! Pools can be grown by adding new-generation servers to a pool of older servers, a much smoother and more efficient approach to evolution in the data center.

 

 

Now, in retrospect, both of these innovations seem "obvious," but actually getting them to market is challenging, and significant challenges still remain to implement them in real-world environments. Perhaps more significant is that they both required the two companies to recognize the need, align their business interests to address it, design a joint solution, and coordinate the launch of their respective product offerings. That's hard enough to do across teams in the same company, let alone across two companies.

 

 

So, do you see other technology challenges like this with your virtualization projects? Simple problems that seem obvious but no one seems to be addressing?
