
The Data Stack

21 Posts authored by: JimBlakley

Graphics virtualization and design collaboration took a step  forward this week with the announcement of support for Intel Graphics  Virtualization Technology-g (Intel® GVT-g) on the Citrix XenServer* platform.

 

Intel GVT-g running on the current generation graphics-enabled  Intel Xeon processor E3 family, and future generations of Intel Xeon®  processors with integrated graphics capabilities, will enable up to seven Citrix  users to share a single GPU without significant performance penalties. This new  support for Intel GVT-g in the Citrix virtualization environment was unveiled  this week at the Citrix Synergy conference in Las Vegas.

 

A little bit of background on the technology: With Intel  GVT-g, a virtual GPU instance is maintained for each virtual machine, with a  share of performance-critical resources directly assigned to each VM. Running a  native graphics driver inside a VM, without hypervisor intervention in  performance-critical paths, optimizes the end-user experience in terms of features,  performance and sharing capabilities.

 

All of this means that multiple users who need to work with  and share design files can now collaborate more easily on the XenServer  integrated virtualization platform, while gaining the economies that come with  sharing a single system and benefiting from the security of working from a  trusted compute pool enabled by Intel  Trusted Execution Technology (Intel® TXT).

 

Intel GVT-g is an ideal solution for users who need access  to GPU resources to work with graphically oriented applications but don’t  require a dedicated GPU system. These users might be anyone from sales reps and  product managers to engineers and component designers. With Intel GVT-g on the  Citrix virtualization platform, each user has access to separate OSs and apps  while sharing a single processor – a cost-effective solution that increases  platform flexibility.

 

Behind this story is a close collaboration among Intel, Citrix, and the Xen open source community to develop and refine a software-based approach to virtualization in an Intel GPU and XenServer environment. It took a lot of people working together to get us to this point.

 

And now we've arrived at our destination. With the combination of Intel GVT-g, Intel Xeon processor-based servers with Intel Iris Pro Graphics, and Citrix XenServer, anywhere, anytime design collaboration just got a lot easier.

For a closer look at Intel GVT-g, including a technical  demo, visit our Intel Graphics Virtualization  Technology site or visit our booth #870 at Citrix  Synergy 2016.

In just the last four months, Microsoft announced its HoloLens mixed reality headset, Google launched its VR View SDK that allows users to create interactive experiences from their own content, Facebook expanded its live video offering, Yahoo announced that it will live stream 180 Major League Baseball games, Twitter announced it will live stream 10 NFL games, Amazon acquired image recognition startup Orbeus, and Intel acquired immersive sports video startup Replay Technologies.


Are these events unrelated or are they part of something bigger? To me, they indicate the next wave of the Visual Cloud. The first wave was characterized by the emergence of Video on Demand (e.g., Netflix), User Generated Video Content (e.g., YouTube), and MMORPGs (e.g., World of Warcraft). The second phase will be characterized by virtual reality, augmented reality, 3D scene understanding and interactivity, and immersive live experiences. To paraphrase William Gibson, the announcements I listed above indicate that the future is already here – it's just not evenly distributed. And it won't take long for it to spread to the mainstream – remember that YouTube itself was founded in 2005 and Netflix only started streaming videos in 2007. By 2026, the second wave will seem like old technology. In the technology world, in five years, nothing changes; in ten years, everything changes.

 

But why now? As with any technology, a new wave requires the convergence of two things: compelling end-user value and technology that is capable and mature enough to deliver it.

 

It’s pretty clear that this wave can provide enormous user value. One early example is Google Street View (launched 2007). I’m looking for a new house right now and I can’t tell you how much time I’ve saved not touring houses that are right next to a theater or service station or other unappealing neighbor. While this is a valuable consumer application, the Visual Cloud also unlocks many business and public sector applications like graphics-intensive design and modelling applications and cloud-based medical imaging.

 

But, is the technology ready? The Visual Cloud Second Wave is an integration of several technologies – some are well established, some still emerging. The critical remaining technologies will mature over the next few years – driving widespread adoption of the second wave applications and services. In my opinion, the key technologies are (in decreasing order of maturity):

 

1. Cloud Computing – the Visual Cloud requires capabilities that only cloud computing can deliver. In most ways, the Visual Cloud First Wave proved out this technology. These capabilities include:

 

    • Massive, inexpensive, on-demand computing. Even something as comparatively simple as speech recognition (think Siri, Google Now, Cortana) requires the scale of the cloud to make it practical. Imagine the scale of compute required to support real time global video recognition for something like traffic management.

 

    • Massive data access and storage capacity. Video content is big - a single high-quality 4K video requires 30-50 GB of storage, depending on how it is compressed (see the rough arithmetic after this list).

 

    • Ubiquitous access. Many Visual Cloud applications are about sharing content between one user and another regardless of where they are in the world or what devices they are using to create and consume content.

 

    • Quick Start Development. Easy access to application development tools and resources through Infrastructure as a Service (IaaS) offerings like Amazon Web Services and Microsoft Azure makes it much faster for innovative Visual Cloud developers to create new applications and services and get them out to users.
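
To put the storage point above in rough numbers (the bitrate here is only an assumption for illustration; real figures vary widely with codec and quality settings), a two-hour 4K title encoded at about 40 Mbit/s works out to roughly 36 GB:

# 40 Mbit/s x 7,200 seconds = 288,000 Mbit; divide by 8 for megabytes, by 1,000 for gigabytes
$ echo "scale=1; 40 * 7200 / 8 / 1000" | bc    # prints 36.0 (GB)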

 

2. High Speed Broadband. See above re: Video Content is Big. Even today, moving video data around is a challenge for many service providers. Video is already over 64% of consumer internet traffic and is expected to grow to over 80% by 2019. High quality visual experiences also require relatively predictable bandwidth. Sudden changes in latency and bandwidth wreak havoc on visual experiences even with compensating technologies like HLS and MPEG-DASH. This is especially true for interactive experiences like cloud gaming or virtual and augmented reality. The deployment of wireless 5G technologies will be critical to enable the Visual Cloud to grow.

 

3. New End User Devices – Most of these advanced experiences don't rely solely on the cloud. For both content capture and consumption, devices need to evolve and improve. Capture technologies like the depth imaging in Intel® RealSense™ Technology provide visual information to applications that isn't available from traditional devices. On the consumption side, new form factors like VR headsets are needed to experience some of this content.

 

4. Visual Computing Technologies. While many visual computing technologies like video encoding and decoding and raster and ray-traced rendering have been around for many years, they have not been scaled to the cloud in any significant way. This process is just beginning. Other technologies, like the voxel-based 3D point clouds used by Replay Technologies, are just emerging. Advanced technologies like 3D scene reconstruction and videogrammetry are still several years from reaching the mainstream.

 

5. Deep Learning. Computer vision, image recognition, and video object identification have long depended on model-based techniques like histograms of oriented gradients (HOG). While those techniques have seen some limited use, in the last couple of years deep learning for image and video recognition – using neural networks to classify objects in image and video content – has emerged as one of the most significant new technologies in many years.

 

If you’re interested in learning more about emerging workloads in the data center that are being made possible by the Visual Cloud, you can watch our latest edition of the Under the Hood video series or check out our Chip Chat podcasts recorded live at the 2016 NAB Show. Much more information about Intel’s role in the Visual Cloud can be found at www.intel.com/visualcloud.

For several years, developers have been asking for a native integration of Intel Quick Sync Video in their FFmpeg-based applications. We heard your requests – and are excited to announce that an Intel-supported integration is now available in the FFmpeg 2.8 release. This integration of Intel Quick Sync Video enables AVC transcodes up to 10x faster than on Intel®-based systems that don't use the technology.

 

FFmpeg has been around for over 15 years and is probably the most popular open source media processing framework for media server applications. Intel Quick Sync Video is a hardware-based media acceleration technology available in Intel processors that integrate Intel® HD Graphics, Iris™ graphics, or Iris™ Pro graphics. It can offer speed increases and improved densities for large-scale media processing deployments. Until now there has not been a fully supported and optimized version of FFmpeg that enables easy access to Intel Quick Sync Video, which has made it a chore for media application developers building solutions around FFmpeg to realize the benefits of Intel's accelerated media transcode technology.

 

Once FFmpeg is built and installed with the free Intel® Media Server Studio – Community Edition, Intel Quick Sync Video can be enabled through a simple command line interface:

 

$ ffmpeg -i in.mp4 -vcodec h264_qsv out_qsv.mp4

 

The integration supports MPEG-2 and H.264/AVC encode/decode, VC-1 decode and, with Intel® Media Server Studio – Professional Edition, H.265/HEVC encode and decode.
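
For example, assuming an FFmpeg 2.8 build configured with Quick Sync support and the HEVC capability provided by Media Server Studio Professional Edition (file names here are placeholders), a fully hardware-accelerated AVC-to-HEVC transcode might look like the following, with h264_qsv selecting the hardware decoder and hevc_qsv the hardware encoder:

$ ffmpeg -c:v h264_qsv -i in.mp4 -c:v hevc_qsv out_hevc.mp4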

 

The performance is pretty amazing. For example, on a single Intel® Xeon® Processor E3-1285L v4 featuring Iris™ Pro Graphics and Intel Quick Sync Video, it's possible to transcode nine simultaneous 1080p @ 30fps AVC streams in real time at a medium quality preset. In contrast, if Intel Quick Sync Video is not used, a single processor running the same test can support slightly less than one real-time stream. So, by using Intel Quick Sync Video under FFmpeg, developers will see nearly a 10X density increase over software-only transcoding.

 

To see for yourself, follow the install guide, check out our performance test results, and let us know what you think.

A recent AnandTech post gave me pause. As the leader of Intel's efforts in Visual Cloud, which include our Intel® Xeon® processor E3 family with integrated graphics, I thought everyone knew that there are many (and a growing number of) options for getting Intel's best graphics, Iris™ Pro Graphics, in a server system. Obviously, we've not done a good job of getting the word out. I'm using this post to let you know what systems are in volume production that support our latest (Intel® microarchitecture codename Broadwell) generation of Iris Pro Graphics and, with it, Intel® Quick Sync Video, Intel's video processing acceleration technology.

 

A few caveats first: we don't track all the systems that may be out there, so apologies if I've missed any. Also, more are coming from new suppliers all the time, so this blog represents a snapshot of what I know as of year-end 2015. If you know of ones I've missed, feel free to post in the comment space below. We also don't endorse, promote, or certify any specific platform or vendor or their capabilities. You'll have to check directly with the supplier if you have questions about the level of support they provide for Intel's flagship visual cloud technologies: Iris Pro Graphics and Quick Sync Video.


A quick cheat to figure out whether a system might be able to support Intel's visual cloud technologies: if a server system supports the Intel Xeon processor E3 v4 family and uses the Intel® C226 Chipset, it has the basic capability to use our visual cloud technologies. This is not an absolute answer, because we've sometimes run into BIOS or other issues that prevent the technologies from working. The systems listed below have been used for visual cloud applications by solution providers and service providers. Typically, a user would populate these servers with the Intel Xeon E3-1285L v4 processor to get the maximum performance per watt. Please refer to the manufacturer's web site for more information on specific systems.
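
As a hypothetical quick sanity check from a Linux shell (not an official qualification method), you can confirm that the processor is an E3 v4 part and that the integrated GPU is actually exposed to the operating system, which is where BIOS issues usually show up:

$ lscpu | grep "Model name"        # look for an Intel Xeon E3-12xx v4 part
$ lspci | grep -Ei "vga|display"   # the Iris Pro GPU should appear as an Intel display controller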


There are three main types of systems that enable Intel  visual cloud technologies.


 

 

  • Finally, in some environments, it's useful to be able to land Intel visual cloud technologies in a more general purpose server system. For that purpose, there are two PCI Express* (PCIe) accelerator cards available. These plug into standard PCIe slots that meet the specific space and power requirements for the cards. A number of OEMs and distributors provide these cards pre-integrated into server systems.
    • Artesyn* SharpStreamer* PCIE-7207 (contains four 2.2 GHz dual-core Intel® Core™ i7-5650U processors with Intel® HD Graphics 6000) and SharpStreamer* Mini PCIE-7205 (contains one or two dual-core Intel® Core™ i5-5350U processors with Intel® HD Graphics 6000). These cards are known to be compatible with a number of OEM server systems.

 

I hope that gives you plenty of systems to choose from. If you want more info about Intel's efforts in Visual Cloud Computing, please see: http://www.intel.com/visualcloud.

For service providers, the rapid momentum of video streaming is both a plus and a minus.  On the plus side, millions of consumers are now looking to service providers to deliver content they used to access through other channels. That’s all good news for the business model and the bottom line.

 

On the minus side, service providers now have to meet the growing demands of bandwidth-hungry video streams, including new 4K media streaming formats. As I mentioned in a recent blog post, Video Processing Doesn't Have To Kill the Data Center, today's 4K streams come with a mind-boggling 8 million pixels per frame (3840 x 2160 = 8,294,400 pixels). And if you think today's video workloads are bad, just stay tuned for more. Within five years, video will consume 80 percent of the world's Internet bandwidth.

 

While meeting today’s growing bandwidth demands, service providers simultaneously have to deal with an ever-larger range of end-user devices with wide variances in their bit rates and bandwidth requirements. When customers order up videos, service providers have to be poised to deliver the goods in many different ways, which forces them to store multiple copies of content—driving up storage costs.

 

At Intel, we are working to help service providers solve the challenges of the minus side of this equation so they can gain greater benefits from the plus side. To that end, we are rolling out a new processing solution that promises to accelerate video transcoding workloads while helping service providers contain their total cost of ownership.

 

This solution, announced today at the IBC 2015 conference in Amsterdam, is called the Intel® Visual Compute Accelerator. It’s an Intel® Xeon® processor E3 based media processing PCI Express* (PCIe*) add-in card that brings media and graphics capabilities into Intel® Xeon® processor E5 based servers. We’re talking about 4K Ultra High Definition (UHD) media processing capabilities.

 

A few specifics: The card contains three Intel Xeon processor E3 v4 CPUs, each of which includes the Intel® Iris™ Pro graphics P6300 GPU. Placing these CPUs on a Gen3 x16 PCIe card provides high throughput and low latency when moving data to and from the card.

 

[Photo: Intel® Visual Compute Accelerator PCIe add-in card]

 

The Intel Visual Compute Accelerator is designed for cloud and communications service providers who are implementing Advanced Video Coding (AVC) and High Efficiency Video Coding (HEVC) media processing solutions, whether in the cloud or in their networks. HEVC is expected to be needed for 4K/UHD video.

 

We expect that the Intel Visual Compute Accelerator will provide customers with excellent TCO when looking at cost per watt per transcode. Having both a CPU and a GPU on the same chip (as compared to just a GPU) enables ISVs to build solutions that improve software quality while accelerating high-end media transcoding workloads.

 

If you happen to be at IBC 2015 this week, you can get a firsthand look at the power of the Intel Visual Compute Accelerator in the Intel booth – hall 4, stand B72. We are showing a media processing software solution from Vantrix*, one of our ISV partners, that is running inside a dual-socket Intel Xeon processor E5 based Intel® Server System with the Intel Visual Compute Accelerator card installed. The demonstration shows the Intel Visual Compute Accelerator transcoding using both the HEVC and AVC codecs at different bit rates intended for different devices and networks.

 

Vantrix is just one of several Intel partners who are building solutions around the Intel Visual Compute Accelerator. Other ISVs who have their solutions running on the Intel Visual Compute Accelerator include ATEME, Ittiam, Vanguard Video* and Haivision*—and you can expect more names to be added to this list soon.

 

Our hardware partners are also jumping on board. Dell, Supermicro, and Advantech* are among the OEMs that plan to integrate the Intel Visual Compute Accelerator into their server product lines.

 

The ecosystem support for the Intel VCA signals that industry demand for solutions to address media workloads is high. Intel is working to meet those needs with the Intel Xeon processor E3 v4 with integrated Intel Iris Pro graphics. Partners including HP, Supermicro, Kontron, and Quanta have all released Xeon processor E3 solutions for dense environments, while Artesyn* also has a PCI Express based accelerator add-in card similar to the Intel VCA. These Xeon processor E3 solutions all offer improved TCO and competitive performance across a variety of workloads.

 

To see the Intel Visual Compute Accelerator demo at IBC 2015, stop into the Intel booth, No. 4B72. Or to learn more about the card right now, visit http://www.intelserveredge.com/intelvca.

When it comes to video streaming, a hard problem keeps getting harder. Files are growing in size and complexity, the numbers of people streaming videos are multiplying, and end-user viewing devices are evolving rapidly. Video is on the verge of overrunning the data centers of many service providers.

Let's pause to consider a few relevant statistics. New 4K media streaming formats come with an unfathomable 8 million pixels per frame. In just five years, video will consume 80 percent of the world's Internet bandwidth,[1] and already Netflix alone accounts for a third of Internet traffic at peak hours.

 

As I noted in an earlier post on Optimizing Media Delivery in the Cloud, video delivery is becoming an ever-more complex proposition for the service providers who deliver content to end-user devices. Device types are proliferating, and video files come with more pixels, more frames per second, and more colors for every pixel. This all adds up to a heavier load for video streamers.


To help address this challenge, Intel is working with the ecosystem to deliver products that accelerate the delivery of high-quality video via faster transcoding and file compression. This week at the Intel Developer Forum (IDF) in San Francisco, we’re demonstrating one of these new products—a soon-to-be-launched Intel media processing card that supports the development of faster media processing solutions based on technologies like high-efficiency video coding (HEVC) and advanced video coding (AVC).

 

This new video processing card, code-named Valley Vista, is designed to accelerate high-end media transcoding in a standard Intel® Xeon® Processor E5 server. This means that applications that use these technologies can now run in many of the most common server platforms in the industry. More specific details will be announced later this year, when the card is formally launched.

 

We expect Valley Vista to be popular with software developers who want to capitalize on leading-edge media processing technologies for the cloud. Developers are going to need the kind of capabilities Valley Vista will deliver to stay ahead of the explosive demands of the era of video streaming and to differentiate their media solutions. These aren’t nice-to-have capabilities. These are capabilities that are essential to success in a time when video processing threatens to overrun the data center.

 

If you’re attending IDF this week, you can get a close-up look at the Valley Vista card in a demo offered in the Intel® Data Center and Software Defined Infrastructure Community, booth 288. Or for a deep dive into the topic of optimizing video processing and delivery with Intel® Xeon® processor solutions, sign up for Session DCWS003.

 

Of course, you don’t have to be at IDF to stay in the loop on Intel news that emerges at the conference. To get the latest updates, just follow @IntelITCenter and join the #IDF15 conversation on Twitter. You can follow me directly @jimblakley.




[1] The Washington Post. "In 5 years, 80 percent of the whole Internet will be online video." May 27, 2015.

Data centers everywhere are dealing with a flood of video traffic. This deluge is only going to grow in scope in the years to come. Consider numbers like these: online video streaming viewership jumped by 60 percent in 2014 alone,[1] video delivery has now become the number one source of Internet traffic, and by 2018 video is set to comprise 80 percent of the Internet's traffic.[2]

 

And it’s not just YouTube videos and Netflix streaming that are causing problems for data center operators. Many organizations are also dealing with the demands of complex 3D design applications and massive data sets that are delivered from secure data centers and used by design teams scattered around the world.

 

To keep pace with current and future growth in these graphics-intensive workloads, data center operators are looking to optimize their data center computing solutions specifically to handle an ever-growing influx of graphics-intensive traffic.

 

That's the idea behind the new Intel® Xeon® processor E3-1200 v4 family with integrated Intel® Iris™ Pro graphics P6300 — Intel's most advanced graphics platform. This next-generation processor, unveiled today at the Computex 2015 conference, also features Intel® Quick Sync Video, which accelerates portions of the video transcoding pipeline by running them in hardware.

 

This makes the Intel Xeon processor E3-1200 v4 family an ideal solution for streaming high volumes of HD video. It offers up to 1.4 times the transcoding performance of the Intel® Xeon® processor E3-1200 v3 family[3] and can handle up to 4,300 simultaneous HD video streams per rack.

 

The new Intel Xeon Processor E3-1200 v4 family is also a great processing and graphics solution for organizations that need to deliver complex 3D applications and large datasets to remote workstations. It supports up to 1.8 times the 3D graphics performance[4] of the previous generation Intel Xeon processor E3 v3 family.

 

I’m pleased to say that the new platform already has a lot of momentum with our OEM partners. Companies designing systems around the Intel Xeon Processor E3-1200 v4 family include Cisco, HP, Kontron, Servers Direct, Supermicro, and QCT (Quanta Cloud Technology).

 

Early adopters of the Iris Pro graphics-enabled solution include iStreamPlanet, which streams live video to a wide range of user devices via its cloud delivery platform. In fact, they just announced a new 1080p/60 fps service offering:

 

“We’re excited to be among the first to take advantage of Intel’s new Xeon processors with integrated graphics that provide the transcode power to drive higher levels of live video quality, up to 1080p/60 fps, with price to performance gains that allow us to reach an even broader market.” --- Mio Babic, CEO, iStreamPlanet

 

The Intel Xeon processor E3-1200 v4 product family also includes Intel® Graphics Virtualization Technology for direct assignment (Intel® GVT-d). Intel GVT-d directly assigns a processor’s capabilities to a single user to improve the quality of remote desktop applications.

 

Looking ahead, the future is certain to bring an ever-growing flood of video traffic, along with ever-larger 3D design files. That’s going to make technologies like the Intel Xeon processor E3-1200 v4 family and Iris Pro graphics P6300 all the more essential.

 

For a closer look at this new data center graphics powerhouse, visit intel.com/XeonE3

 

 

 

[1] WSJ: “TV Viewing Slips as Streaming Booms, Nielsen Report Shows.” Dec. 3, 2014.

[2] Sandvine report. 2014.

[3] Measured 1080p30 20MB streams: E3-1286L v3=10, E3-1285L v4=14.

[4] Measured 3DMark® 11: E3-1286L v3=10, E3-1285L v4=14.

For cloud, media, and communications service providers, video delivery is now an essential service offering—and a rather challenging proposition.

 

In a world with a proliferation of viewing devices—from TVs to laptops to smart phones—video delivery becomes much more complex. To successfully deliver high-quality content to end users, service providers must find ways to quickly and efficiently transcode video from one compressed format to another. To add another wrinkle, many service providers now want to move transcoding to the cloud, to capitalize on cloud economics.

 

That’s the idea behind innovative Intel technology-based solutions showcased at the recent Streaming Media East conference in New York. Event participants had the opportunity to gain a close-up look at the advantages of deploying virtualized transcoding workflows in private or public clouds, with the processing work handled by Intel® architecture.

 

I had the good fortune to join iStreamPlanet for a presentation that explained how cloud workflows can be used to ingest, transcode, protect, package, stream, and analyze media on-demand or live to multiscreen devices. We showed how these cloud-based services can help communications providers and large media companies simplify equipment design and reduce development costs, while gaining the easy scalability of a cloud-based solution.

 

iStreamPlanet offers cloud-based video-workflow products and services for live event and linear streaming channels. With its Aventus cloud- and software-based live video streaming solution, the company is breaking new ground in the business of live streaming. Organizations that are capitalizing on iStreamPlanet technology include companies like NBC Sports Group as well as other premium content owners, aggregators, and distributors.

 

In the Intel booth Vantrix showcased a software-defined solution that enables service providers to spread the work of video transcoding across many systems to make everything go a lot faster. With the company’s solution, transcoding workloads that might otherwise take up to an hour to run can potentially be run in just seconds.
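
Vantrix's implementation is proprietary, but the basic split-transcode-stitch idea behind distributed transcoding can be sketched with stock FFmpeg and shell tools. File names and settings here are illustrative only, and a real deployment fans the chunks out to many machines rather than to local processes:

# 1) Split the source into ~60-second chunks at keyframe boundaries, without re-encoding
$ ffmpeg -i master.mp4 -c copy -f segment -segment_time 60 chunk_%03d.mp4
# 2) Transcode the chunks in parallel (8 at a time on this host; a farm would distribute them)
$ ls chunk_*.mp4 | xargs -P 8 -I{} ffmpeg -nostdin -i {} -c:v libx264 -b:v 4M enc_{}
# 3) Stitch the encoded chunks back together in order
$ printf "file '%s'\n" enc_chunk_*.mp4 > list.txt
$ ffmpeg -f concat -i list.txt -c copy out.mp4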

 

While they meet different needs, solutions from iStreamPlanet and Vantrix share a common foundation: the Intel® Xeon® processor E3-1200 product family with integrated graphics processing capabilities. By making graphics a core part of the processor, Intel is able to deliver a dense, cost-effective solution that is ideal for video transcoding, cloud-based or otherwise.

 

The Intel Xeon processor E3-1200 product family supports Intel® Quick Sync Video technology. This groundbreaking technology enables hardware-accelerated transcoding that delivers better performance than software-only transcoding on the CPU cores—all without sacrificing quality.

 

Want to make this story even better? To get a transcoding solution up and running quickly, organizations can use the Intel® Media Server Studio, which provides development tools and libraries for developing, debugging, and deploying media solutions on Intel-based servers.

 

With offerings like Intel Media Server Studio and Intel Quick Sync Video Technology, Intel is enabling a broad ecosystem that is developing innovative solutions that deliver video faster, while capitalizing on the cost advantages of cloud economics.

 

For a closer look at the Intel Xeon processor E3-1200 product family with integrated graphics, visit www.intel.com/XeonE3.

In today’s world, engineering teams can be located just about anywhere in the world, and the engineers themselves can work from just about any location, including home offices. This geographic dispersion creates a dilemma for corporations that need to arm engineers with tools that make them more productive while simultaneously protecting valuable intellectual property—and doing it all in an affordable manner.

 

Those goals are at the heart of hosted workstations that leverage new combinations of technologies from Intel and Citrix*. These solutions, unveiled this week at the Citrix Synergy 2015 show in Orlando, allow engineers to work with demanding 3D graphics applications from virtually anywhere in the world, with all data and applications hosted in a secure data center. Remote users can work from the same data set, with no need for high-volume data transfers, while enjoying the benefits of fast, clear graphics running on a dense, cost-effective infrastructure.

 

These solutions are in the spotlight at Citrix Synergy. Event participants had the opportunity to see demos of remote workstations capitalizing on the capabilities of the Intel® Xeon® processor E3-1200 product family and Citrix XenApp*, XenServer*, XenDesktop*, and HDX 3D Pro* software.

 

Show participants also had a chance to see demos of graphics passthrough with Intel® GVT-d in Citrix XenServer* 6.5, running Autodesk* Inventor*, SOLIDWORKS*, and Autodesk Revit* software. Other highlights included a technology preview of Intel GVT-g with Citrix HDX 3D Pro running Autodesk AutoCAD*, Adobe* Photoshop*, and Google* Earth.

 

Intel GVT-d and Intel GVT-g are two of the variants of Intel® Graphics Virtualization Technology. Intel GVT-d allows direct assignment of an entire GPU’s capabilities to a single user—it passes all of the native driver capabilities through the hypervisor. Intel GVT-g allows multiple concurrent users to share the resources of a single GPU.

 

The new remote workstation solutions showcased at Citrix Synergy build on a long, collaborative relationship between engineers at Intel and Citrix. Our teams have worked together for many years to help our mutual customers deliver a seamless mobile and remote workspace experience to a distributed workforce. Users and enterprises both benefit from the secure and cost-effective delivery of desktops, apps, and data from the data center to the latest Intel Architecture-based endpoints.

 

For a closer look at the Intel Xeon processor E3-1200 product family and hosted workstation infrastructure, visit intel.com/workstation.

 

 

Intel, the Intel logo, Intel inside, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Citrix, the Citrix logo, XenDesktop, XenApp, XenServer, and HDX are trademarks of Citrix Systems, Inc. and/or one of its subsidiaries, and may be registered in the U.S. and other countries. * Other names and brands may be claimed as the property of others.

Today, 70 percent of US consumer Internet traffic is video, and it's growing every day, with over-the-top (OTT) providers delivering TV and movies to consumers, and broadcasters and enterprises streaming live events. Cloud computing is changing the landscape for video production as well. Much of the work that used to require dedicated workstations is being moved to servers in data centers and offered remotely by cloud service providers and private cloud solutions. As a result, the landscape for content creation and delivery is undergoing significant changes. The National Association of Broadcasters (NAB) show in Las Vegas highlights these trends. And Intel will be there highlighting how we help broadcasters, distributors, and video producers step up to the challenges.

 

Intel processors have always been used for video processing, but today's video workloads place new demands on processing hardware. The first new demand is for greater processing performance. As video data volume explodes, encoding schemes become more complex, and processing power becomes more critical. The second demand is for increased data center density. As video processing moves to servers in data centers, service cost is driven by space and power. And the third demand is for openness. Developers want language- and platform-independent APIs like OpenCL* to access CPU and GPU graphics functions. The Intel® Xeon® processor E3 platform with integrated Intel® Iris™ Pro Graphics and Intel® Quick Sync Video transcoding acceleration provides the performance and open development environment required to drive innovation and create the optimized video delivery systems needed by today's content distributors. And does it with unparalleled density and power efficiency.
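
As a small illustration of that openness, and assuming the Intel OpenCL runtime and the common clinfo utility are installed (neither is guaranteed on a given system), a developer can confirm that both the CPU and the integrated GPU show up as OpenCL devices:

$ clinfo | grep -E "Platform Name|Device Name"
# Typically lists the Intel OpenCL platform with both a CPU device and an Intel
# Iris Pro Graphics device, either of which OpenCL kernels can then target.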

 

The NAB 2015 show provides an opportunity for attendees to see how these technologies come together in new, more powerful industry solutions  to deliver video content across the content lifecycle—acquire, create, manage, distribute, and experience.

 

We've teamed with some of our key partners at NAB 2015 to create the StudioXperience showcase that demonstrates a complete end-to-end video workflow across the content lifecycle. Waskul TV will generate real time 4k video and pipe it into a live production facility featuring Xeon E3 processors in an HP Moonshot* server and Envivio Muse* Live. The workflow is divided between on air HD production for live streaming and 4K post-production for editorial and on demand delivery. The cloud-based content management and distribution workflow is provided by Intel-powered technologies from technology partners to create a solution that streams our content to the audience via Waskul TV.

 

Other booths at the show let attendees drill down into some of the specific workflows and the technologies that enable them. For example, "Creative Thinking 800 Miles Away—It's Possible" lets attendees experience low latency, remote access for interactive creation and editing of video content in the cloud. You'll see how Intel technology lets you innovate and experiment with modeling, animation, and rendering effects—anywhere, anytime. And because the volume of live video content generated by broadcasters, service providers, and enterprises continues to explode, we need faster and more efficient ways of encoding it for streaming over the Internet. So Haivision's "Powerful Wide-Scale Video Distribution" demo will show how their Intel-based KulaByte* encoders and transcoders can stream secure, low latency HD video at extremely low bitrates over any network, including low cost, readily available, public Internet connections.

 

To learn more about how content owners, service providers, and enterprises are using Intel Xeon processor E3 based platforms with integrated HD Graphics and Intel Quick Sync video to tame the demand for video, check out the interview I did on Intel Chip Chat recently. And even if you're not attending NAB 2015, you can still see it in action. I'll be giving a presentation Tuesday, April 14 at 9:00 a.m. Pacific time. We'll stream it over the very systems I've described, and you can watch it on Waskul.TV. Tune in.

 

 

 

 

© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

TV binge watching is a favorite pastime of mine. For an eight-week span between February and March of this year, I binge watched five seasons of a TV series. I watched it on my Ultrabook, on a tablet at the gym, and even a couple of episodes on my smart phone at the airport. It got me thinking about how the episodes get to me as well as my viewing experience on different devices.

 

Let me use today’s HP Moonshot server announcement to talk about high-density servers. You may have seen that HP today announced the Moonshot ProLiant m710 cartridge. The m710, based on the Intel® Xeon® processor E3-1284L v3 with built-in Intel® Iris Pro Graphics P5200, is the first microserver platform to support Intel’s best media and graphics processing technology. The Intel® Xeon® processor E3-1284L v3 is also a great example of how Intel continues to deliver on its commitment to provide our customers with industry leading silicon customized for their specific needs and workloads.

 

Now back to video delivery. Why does Intel® Iris™ Pro Graphics matter for video delivery? The 4K video transition is upon us. Netflix already offers mainstream content like Breaking Bad in Ultra HD 4K. Devices with different screen sizes and resolutions are proliferating rapidly. The Samsung Galaxy S5 and iPhone 6 Plus smartphones have 1920x1080 Full HD resolution while the Panasonic TOUGHPAD 4K boasts a 3840x2560 Ultra HD display. And, the sheer volume of video traffic is growing. According to Cisco, streaming video will make up 79% of all consumer internet traffic by 2018 – up from 66% in 2013.

 

At the same time, the need to support higher quality and more advanced user experiences is increasing. Users have less tolerance for poor video quality and streaming delays. The types of applications that Sportvision pioneered with the yellow 10-yard marker on televised football games are only just beginning. Consumer depth cameras and 3D video cameras are just hitting the market.

 

For service providers to satisfy these video service demands, network and cloud based media transcoding capacity and performance must grow. Media transcoding is required to convert video for display on different devices, to reduce the bandwidth consumed on communication networks and to implement advanced applications like the yellow line on the field. Traditionally, high performance transcoding has required sophisticated hardware purpose built for video applications. But, since the 2013 introduction of the Intel® Xeon® Processor E3-1200 v3 family with integrated graphics, application and system developers can create very high performance video processing solutions using standard server technology.

 

These Intel Xeon processors support Intel® Quick Sync Video and applications developed with the Intel® Media Server Studio 2015. This technology enables access to acceleration hardware within the Xeon CPU for the major media transcoding algorithms. This hardware acceleration can provide a dramatic improvement in processing throughput over software-only approaches and a much lower cost solution as compared to customized hardware. The new HP Moonshot ProLiant m710 cartridge is the first server to incorporate both Intel® Quick Sync Video and Intel® Iris Pro Graphics in a single server, making it a great choice for media transcoding applications.

As video and other media take over the internet, economical, fast, and high-quality transcoding of content becomes critical to meet user demands. Systems built with special purpose hardware will struggle to keep up. A server solution like the HP Moonshot ProLiant m710, built on standard Intel Architecture technology, offers the flexibility, performance, cost-effectiveness, and future-proofing the market needs.

 

In part B of my blog I’m going to turn the pen over to Frank Soqui. He’s going to switch gears and talk about another workload – remote workstation application delivery. Great processor graphics are not only great for transcoding and delivering TV shows like Breaking Bad, they’re also great at delivering business applications to devices remotely.

Cloud computing models are based on gaining maximum yields for all resources that go into the data center. This is one of the keys to delivering services at a lower cost. And power is one of the biggest bills in a cloud environment. Cloud data centers now consume an estimated 1–2 percent of the world’s energy.[1] Numbers like that tell you the cloud’s success hinges on aggressive power management.

 

So let’s talk about some of the steps you can take to operate a more efficient cloud:

 

  • Better instrumentation. The basis for intelligent power management in your data center is better instrumentation at the server level. This includes instrumentation for things like CPU temperature, idle and average power, and power and memory states. Your management capabilities begin with access to this sort of data.

 

  • Better power management at the server and rack level. Technologies like dynamic power capping and dynamic workload power distribution can help you reduce power consumption and place more servers into your racks. One Intel customer, Baidu.com, increased rack-level capacity by up to 20 percent within the same power envelope when it applied aggregated power management policies. For details, see this white paper. (A minimal command-line sketch of reading and capping node power follows this list.)

 

  • Better power policies across your data center. Put in place server- and rack-level power policies that work with the rest of the policies in your data center. For example, you might allocate more power capacity to a certain set of servers that runs mission-critical workloads, and cap the power allocated to less important workloads. This can help you reduce power consumption while still meeting your service-level agreements.

 

  • Better power management at the facilities level. There are lots of things you can do to drive better efficiency across your data center. One of those is better thermal management through the use of hot and cold server aisles. Another is thermal mapping, so you can identify hot and cold spots in your data center and make changes to increase cooling efficiency.
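
One minimal, generic way to exercise the instrumentation and capping ideas above is through a server BMC that implements the DCMI power-management extensions. The 350-watt figure is purely illustrative, and Intel Node Manager and vendor consoles expose richer policy controls on top of this:

# Read the node's current, minimum, maximum, and average power draw
$ ipmitool dcmi power reading
# Set and enforce a 350-watt cap on this node
$ ipmitool dcmi power set_limit limit 350
$ ipmitool dcmi power activate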

 

Ultimately, the key is to look at power the way you look at all other resources that go into your data center: seek maximum output for all input.

 


 

[1] Source: Jonathan Koomey, Lawrence Berkeley National Laboratory scientist, quoted in the New York Times Magazine. “Data Center Overload,” June 8, 2009.

Historically, IT data centers operated like warehouses that focused on housing the equipment brought by application developers. Today, these passive warehouses are being converted into dynamic factories that focus on achieving the maximum “application yield” from all of the resources that go into the factory. This yield is much like that of an auto factory that must produce a variety of models with the same shared set of resources.

 

There are two fundamental ways to increase the yield from your data center: efficiency by design and efficiency by operations.

 

Efficiency by design is all about designing for optimal output. One example: As you update your infrastructure over time, your new servers, storage systems, and networking equipment should deliver measurable increases in throughput and power efficiency. With each generation of technology, you should get more yield out of your equipment investments.

 

Efficiency by operations is all about managing the “resource inventory” of the data center through automation. The key here is to use automated solutions to carry out time-consuming tasks that were previously handled manually. Automation not only helps your administrators increase their productivity, it helps your data center managers ensure that the inventory of compute, storage, and network resources is used to its maximum capacity.

 

For example, you can use automated tools to:

  • Move demanding workloads to systems with excess capacity
  • Allocate additional storage to applications that are running out of disk space and reduce storage allocated to those applications that are not using it
  • Cap the power that flows to certain workloads without impacting performance
  • Update security tools and firewall settings on user systems

 

This is just a small sample of the actions you can take to increase the yield against your data center assets—including people, equipment, software, and power. There are many other things you can do. Just keep your eyes on the factory manager's prize: maximum output for all resources that go into your facility.

How do you rate the maturity level of your power infrastructure?

 

As data centers grow in size and density, they take an ever-larger bite out of the energy pie. Today, data centers eat up 1.2 percent of the electricity produced in the United States. This suggests that IT organizations need to take a hard look at the things they are doing to operate more efficiently.

 

How do you get started down this path? Consider the following four steps toward a more energy-efficient data center. The degree to which you are doing these things is an indication of your power management maturity.

 

1. Power usage effectiveness (PUE) measurements: Are you using PUE measurements to determine the energy efficiency of your data center? PUE is a measure of how much power is coming into your data center versus the power that is used by your IT equipment. You can watch your PUE ratio to evaluate your progress toward a more energy-efficient data center (a worked example follows this list). To learn more about PUE, see The Green Grid.

 

2. Equipment efficiency: Are you buying the most efficient equipment? Deploying more efficient equipment is one of the most direct paths to power savings. One example: You can realize significant power savings by using solid-state drives instead of power-hungry, spinning hard-disk drives. For general guidance in the U.S., look for Energy Star ratings for servers.

 

3. Instrumentation: Are your systems instrumented to give you the information you need? The foundation of more intelligent power management is advanced instrumentation. This is a pretty simple concept. To understand your power issues and opportunities, you have to have the right information at your fingertips. For a good example, see Intel Data Center Manager.

 

4. Policy-based power management: Have you implemented policy-based power management? This approach uses automated tools and policies that you set to drive power efficiencies across your data center. A few examples: You can shift loads to underutilized servers, throttle servers and racks that are idle, and cap the power that is allocated to certain workloads.
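
As a worked example of the first step above (the numbers are made up for illustration): a facility drawing 1,500 kW at the utility meter whose IT equipment consumes 1,200 kW has a PUE of 1.25. If, a year later, the facility draws only 1,380 kW for the same IT load, PUE improves to 1.15:

$ echo "scale=2; 1500 / 1200" | bc    # 1.25
$ echo "scale=2; 1380 / 1200" | bc    # 1.15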

 

If you can answer yes to all of these questions, you’re ahead of the power-management maturity curve. But even then, don’t rest on your laurels. Ask yourself this one additional question: Could we save more by doing all of these things to a greater degree?

 

For a closer look at your power management maturity level, check out our Data Center Power Management Maturity Model. You can find it on the Innovation Value Institute site at http://ivi.nuim.ie/.

Years ago, data center managers didn’t think a whole lot about power expenditures. They were just a cost of doing business. But today, power expenditures have grown to the point that they are overwhelming IT budgets. Just how bad has it gotten? An IDC study conducted in Europe found that the cost of powering data centers now exceeds the costs of acquiring new networking hardware or new external disk storage.[1]

 

So let’s talk about five steps you can take to corral runaway power costs.

 

1. Dynamic power capping. With some workloads you can cap power without sacrificing performance. This might save you up to 20 watts per server. Power capping tends to work best with I/O intensive workloads, where CPUs spend a lot of time waiting for data. We’ve seen outstanding results with IT organizations that take a workload-centric approach to power capping.

 

2. Dynamic workload power distribution. When you have servers that are not fully loaded, you can shift virtualized workloads onto fewer machines; the servers you free up can be put in a low-power state until they are called back into service. VMware's Distributed Power Management feature is the tip of the iceberg on this model.


3. Power capping to increase data center density. When server racks are under-populated, you’re probably paying for power capacity that you aren’t using. Intelligent power node management allows you to throttle system and rack power based on expected workloads and put more servers per rack.

 

4. Optimized server platforms. Optimized server platforms can give you more bang for your energy buck. Here’s one example: When cores within a CPU are idling, they are still drinking up power. Integrated power gates on processors allow idling cores to drop to near-zero power consumption.


5. Solid state drives. Today, lots of people are talking about performance gains with solid state drives. But that’s only part of the story. In addition to performance benefits, solid state drives can save you a bundle on power when compared to standard hard-disk drives.

 

And those runaway power costs we were talking about? Let’s go rope them in.

 
