
The Data Stack


We're in the depths of winter and, yes, the snow can be delightful… until you have to move your car or walk half a block on icy streets. Inside the datacenter, the IT Wonderland might lack snowflakes, but everyday activities are even more challenging year-round. Instead of snowdrifts and ice, tech teams face mountains of data.

 

So what are the datacenter equivalents of snowplows, shovels, and hard physical labor? The right management tools and strategies are essential for clearing data paths and allowing information to move freely and without disruption.

 

This winter, Intel gives a shout-out to the unsung datacenter heroes, and offers some advice about how to effectively avoid being buried under an avalanche of data. The latest tools and datacenter management methodologies can help technology teams overcome the hazardous conditions that might otherwise freeze up business processes.

 

Tip #1: Take Inventory

 

Just as the winter holiday season puts a strain on family budgets, the current economic conditions continue to put budget pressures on the datacenter. Expectations, however, remain high. Management expects to see costs go down while users want service improvements. IT and datacenter managers are being asked to do more with less.

 

The budget pressures make it important to fully assess and utilize the in-place datacenter management resources. IT can start with the foundational server and PDU hardware in the datacenter. Modern equipment vendors build in features that facilitate very cost-effective monitoring and management. For example, servers can be polled to gather real-time temperature and power consumption readings.

 

Middleware solutions are available to take care of collecting, aggregating, displaying, and logging this information, and when combined with a management dashboard can give datacenter managers insights into the energy and temperature patterns under various workloads.

 

Since the energy and temperature data is already available at the hardware level, introducing the right tools to leverage the information is a practical step that can pay for itself in the form of energy savings and the ability to spot problems such as temperature spikes so that proactive steps can be taken before equipment is damaged or services are interrupted.
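As a rough illustration of the kind of middleware described above, the sketch below polls hypothetical sensor readings, aggregates power draw per server for a dashboard view, and flags temperature spikes. The `Reading` record, server names, and thresholds are invented for the example; real deployments would gather this data over IPMI, Redfish, or a vendor API.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical reading from a server's onboard sensors; a real middleware
# layer would collect these over IPMI, Redfish, or a vendor-specific API.
@dataclass
class Reading:
    server: str
    watts: float
    temp_c: float

def flag_hot_servers(readings, temp_limit_c=35.0):
    """Return servers whose inlet temperature exceeds the limit,
    so proactive steps can be taken before equipment is damaged."""
    return sorted({r.server for r in readings if r.temp_c > temp_limit_c})

def average_power(readings):
    """Aggregate average power draw per server for a dashboard view."""
    by_server = {}
    for r in readings:
        by_server.setdefault(r.server, []).append(r.watts)
    return {s: mean(w) for s, w in by_server.items()}

readings = [
    Reading("rack1-node1", 310.0, 27.5),
    Reading("rack1-node2", 405.0, 36.2),  # running hot
    Reading("rack1-node1", 298.0, 28.1),
]
print(flag_hot_servers(readings))              # ['rack1-node2']
print(average_power(readings)["rack1-node1"])  # 304.0
```

Logging these aggregates over time is what builds the energy and temperature patterns the dashboard exposes.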

 

Tip #2: Replace Worn-Out Equipment

 

While a snow shovel can last for years, datacenter resources are continually being enhanced, changed, and updated. IT teams need tools that allow them to keep up with requests and to deploy and configure software efficiently, at a rapid pace.

 

Virtualization and cloud architectures, which evolved in response to the highly dynamic nature of the datacenter, have recently been applied to some of the most vital datacenter management tools. Traditional hardware keyboard, video, and mouse (KVM) solutions for remotely troubleshooting and supporting desktop systems are being replaced with all-software and virtualized KVM platforms. This means that datacenter managers can quickly resolve update issues and easily monitor software status across a large, dynamic infrastructure without having to continually manage and update KVM hardware.

 

Tip #3: Plan Ahead

 

It might not snow every day, even in Alaska or Antarctica. In the datacenter, however, data grows every day. A study by IDC, in fact, found that data is expected to double in size every two years, reaching 44 zettabytes by 2020. An effective datacenter plan depends on accurate projections of data growth and of the server expansion required to support that growth.
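The doubling rule of thumb is easy to turn into a capacity projection. The sketch below is a minimal illustration: the 4.4 ZB starting point for 2013 is the figure commonly cited alongside IDC's 44 ZB projection, and a strict two-year doubling lands in the same ballpark (roughly 50 ZB) rather than exactly 44.

```python
def project_data_size(base_zb, base_year, target_year, doubling_years=2):
    """Project stored data size assuming it doubles every `doubling_years`."""
    periods = (target_year - base_year) / doubling_years
    return base_zb * 2 ** periods

# A little over three doublings from ~4.4 ZB in 2013 gives roughly 50 ZB
# by 2020, in the same ballpark as IDC's 44 ZB projection.
print(round(project_data_size(4.4, 2013, 2020), 1))  # 49.8
```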

 

The same tools that were previously mentioned for monitoring and analyzing energy and temperature patterns in the datacenter can help IT and datacenter architects better understand workload trends. Besides providing insights about growth trends, the tools promote a holistic approach for lowering the overall power budget for the datacenter and enable datacenter teams to operate within defined energy budget limits. Since many large data centers already operate near the limits of the local utility companies, energy management has become mission critical for any fast-growing datacenter.

 

Tip #4: Stay Cool

 

Holiday shopping can be a budget buster, and the credit card bills can be quite a shock in January. In the datacenter, rising energy costs and green initiatives similarly strain energy budgets. Seasonal demand, which peaks in both summer and the depths of winter, can mean more short-term outages, and big storms can force operations over to a disaster recovery site.

 

With the right energy management tools, datacenter and facilities teams can come together to maximize the overall energy efficiency of the datacenter and its environmental control systems (humidity control, cooling, etc.). For example, holistic energy management solutions can identify ghost servers: systems that are idle yet still consuming power. Hot spots can be located and workloads shifted so that less cooling is required and equipment life is extended. The average datacenter sees savings of 15 to 20 percent on overall energy costs with the introduction of an energy management solution.

 

Tip #5: Read the Signs of the Times

 

During a blizzard, the local authorities direct the snowplows, police, and rescue teams to keep everyone safe. Signs and flashing lights remind everyone of the rules. In the datacenter, the walls may not be plastered with the rules, but government regulations and compliance guidelines are woven into the vital day-to-day business processes.

 

Based on historical trends, regulations will continue to increase, and datacenter managers should not expect any decrease in required compliance-related efforts. Growing public awareness of energy resources, and of the environmental impact of energy exploration and production, also encourages regulators.

 

Fortunately, the energy management tools and approaches that help improve efficiencies and lower costs also enable overall visibility and historical logging that supports audits and other compliance-related activities.

 

When “politically correct” behavior and cost savings go hand in hand, momentum builds quickly. This effect is both driving demand for and promoting great advances in energy management technology, which bodes well for datacenter managers since positive results always depend on having the right tools. And when it comes to IT Wonderlands, energy management can be the equivalent of the whole tool-shed.

By Mike Yang General Manager of QCT (Quanta Cloud Technology)

 


 

In my last blog, I talked about how the pursuit of greater efficiency is a critical tenet of business survival for any company that operates datacenters. These efficiencies are measured by metrics like total operational cost, dollar per watt, and total cost of ownership (TCO).

 

At QCT (Quanta Cloud Technology), we have been extremely successful in the hyperscale datacenter space by delivering high-performance, high-efficiency systems equipped with dual-socket Intel® Xeon® processor E5-2600 or quad-socket Intel® Xeon® processor E7-4800 CPUs. We have launched more than a dozen new server series supporting the Intel Xeon processor E5-2600 v3 product family that greatly improve efficiency for our customers.

 

That’s why we’re happy to announce today that we have become a time-to-market partner for the latest Intel® Xeon® processor E3-1200 v4 product family.

 

The Intel Xeon processor E3 is targeted at low-end servers and microservers, an emerging category of dense servers for web hosting and cloud implementations. Microservers usually have lower-power processors and are designed to handle large volumes of lightweight web or cloud transactions, such as search queries and social networking page renderings. QCT offers a range of server platforms, each designed to meet different workload requirements.

 

In addition to the hosting applications noted above, the latest Intel Xeon processor E3-1200 v4 product family is the first processor generation focused on media and graphics workloads, which our customers tell us is an emerging datacenter application.

 

Media service providers all face the same challenge: streaming ever-increasing volumes of content to a rapidly growing global market connected with billions of mobile devices. These customers need transcode solutions with cost-efficient, dense designs that can deliver high-definition video to an array of devices and mobile operating systems.

 

A datacenter graphics server based on the Intel Xeon processor E3-1200 v4 product family does just that. The new processor can support more transcoding jobs per node when compared with discrete graphics.

 

This is big news, made possible by Intel. Now you can support more concurrent media transcoding functions in parallel, lowering your total cost of ownership while enabling a better experience for user-generated media, on-demand viewing, live broadcasting or videoconferencing. Whether you host desktops and workstations remotely or deliver video in the cloud, the graphics performance of the Intel Xeon processor E3-1200 v4 product family can provide the rich visual experiences end users seek. At the same time, these customers will benefit from greater energy efficiency.

 

But the potential is bigger than optimal efficiency for video transcoding and streaming. I can envision using the Intel Xeon processor E3-1200 v4 product family for big data analytics. These new processors deliver great computing power to capture valuable metrics, gain insights, and perform data-intensive tasks like video search indexing, digital surveillance, and automated ads that react to user behavior.

 

We at QCT will be working closely—and quickly—with Intel and the Intel Xeon processor E3-1200 v4 product family to help customers build datacenters that are reliable and efficient, so customers can focus on core business growth and innovation of new software products.

 

The revolution is here. Let’s see how Intel and QCT transform the future datacenter together.

By Bill Rollender, AM PLM, Media, & CPU Technology, Intel

 

 

Everyone knows video streaming is popular, but who was expecting it to take up the lion’s share of network bandwidth? Cisco* reports that mobile video traffic exceeded 50 percent of total mobile data traffic for the first time in 2012 and forecasts it will increase to three-fourths by 2019.1 On the Internet side, Cisco expects IP video traffic to be 79 percent of all global consumer Internet traffic in 2018, up from 66 percent in 2013.2

 


 

Service provider dilemma

 

With video consuming an ever-larger share of network bandwidth, driven by the popularity of social media and mobile devices, service providers need more effective strategies for handling the additional media traffic without breaking the bank.

 

“Service providers have to drive down both the capital and operating costs with video delivery—without sacrificing quality, reliability, or scale,” said Robert Courteau, Executive V.P., Communications BU, Kontron*.3

 

Equipment manufacturer challenge

 

Exploding video traffic is placing unprecedented demands on equipment manufacturers to increase workload density and throughput of media servers. In general, they need new ways to increase performance while keeping power consumption in check.

 

Plus, media servers must help service providers balance user demand with operational factors, such as power, bandwidth, advanced traffic control, differing standards, and quality of service. From the core of the network to its edges, service providers are demanding equipment that satisfies their needs for growth, quality delivery, and diversified services.

 

Optimized content delivery platform

 

With mobile device battery life top of mind for users, the need for efficient video transcoding in the cloud has never been greater. Working with Intel, Kontron has created the optimal solution that streams content in formats suited to mobile devices while addressing the issues of energy efficiency, scalability, and cost for service providers.

 

The Kontron* SYMKLOUD* platform features up to 18 Intel® Xeon® E3 v4 series processors with integrated Intel® Iris™ Pro graphics that transcode video streams without consuming CPU cycles, so there’s plenty of headroom remaining for other applications, like video analytics. It also supports OpenFlow* for software-defined networking (SDN) and network functions virtualization (NFV) deployments.

 

According to Kontron, the Intel processor is the best-in-class solution for media optimization applications. This means smoother visual quality, spectacular HD media playback, and improved ability to decode and transcode simultaneous video streams.4

 

Higher throughput now available

 

Intel recently launched two Intel® Xeon® processors expressly designed to deliver an exceptionally large number of video transcoding channels per watt for demanding media processing applications. The Intel® Xeon® processor E3-1278L v4 and the Intel® Xeon® processor E3-1258L v4 integrate Intel® Iris™ graphics (i.e., on-processor graphics) to help minimize the CapEx and OpEx of equipment executing media applications.

 

Since the processor graphics is on-chip, it consumes less power than an add-in graphics card and delivers four to five times more media acceleration than software-only media processing.5

 

The 5th generation Intel Xeon processor E3-1278L v4 increases the number of H.264 transcoded streams from 12 to 18 for about a 50 percent improvement over 4th generation Intel® Core™ processor-based designs in the same thermal envelope.6

 

Transcoding reduces video traffic

 

The growing popularity of video streaming services, such as YouTube*, Hulu*, and Netflix*, and the proliferation of 4K high-definition content concerns many service providers. But they can reduce network bandwidth requirements for video content with media servers based on Intel® architecture that enable a range of low-power, high-density, and scalable solutions.

 

 

 

 

1 Source: “Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2014–2019,” February 3, 2015, http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white_paper_c11-520862.html

2 Source: “Cisco Visual Networking Index: Forecast and Methodology, 2013–2018,” June 10, 2014, http://www.cisco.com/c/en/us/solutions/collateral/service-provider/ip-ngn-ip-next-generation-network/white_paper_c11-481360.html.

3 Source: “Kontron and Genband to Showcase HD Video Delivery Reference Solution for Service Provider NFV Environments,” February 25, 2015, http://www.kontron.com/about-kontron/news-events/detail/genband.

4 Source: “Kontron Launches the Symkloud MS2900 Media Platform for Cloud Transcoding,” February 25, 2013, http://www.kontron.com/about-kontron/news-events/detail/kontron-launches-the-symkloud-ms2900-media-platform-for-cloud-transcoding.

5 Source: AnandTech, “Intel Iris Pro 5200 Graphics Review: Core i7-4950HQ Tested,” June 1, 2013, http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/18.

6 Source: Intel testing.

*Other names and brands may be claimed as the property of others.

Data centers everywhere are dealing with a flood of video traffic, and this deluge is only going to grow in the years to come. Consider numbers like these: online video streaming viewership jumped by 60 percent in 2014 alone,1 video delivery has now become the number one source of Internet traffic, and by 2018 video traffic is set to comprise 80 percent of the Internet’s traffic.2

 

And it’s not just YouTube videos and Netflix streaming that are causing problems for data center operators. Many organizations are also dealing with the demands of complex 3D design applications and massive data sets that are delivered from secure data centers and used by design teams scattered around the world.

 

To keep pace with current and future growth in these graphics-intensive workloads, data center operators are looking to optimize their data center computing solutions specifically to handle an ever-growing influx of graphics-intensive traffic.

 

That’s the idea behind the new Intel® Xeon® processor E3-1200 v4 family with integrated Intel® Iris™ Pro graphics P6300, Intel’s most advanced graphics platform. This next-generation processor, unveiled today at the Computex 2015 conference, also features Intel® Quick Sync Video, which accelerates portions of video transcoding software by running them in hardware.

 

This makes the Intel Xeon processor E3-1200 v4 family an ideal solution for streaming high volumes of HD video. It offers up to 1.4 times the transcoding performance of the Intel® Xeon® processor E3-1200 v3 family and can handle up to 4300 simultaneous HD video streams per rack.

 

The new Intel Xeon processor E3-1200 v4 family is also a great processing and graphics solution for organizations that need to deliver complex 3D applications and large datasets to remote workstations. It supports up to 1.8 times the 3D graphics performance of the previous-generation Intel Xeon processor E3 v3 family.

 

I’m pleased to say that the new platform already has a lot of momentum with our OEM partners. Companies designing systems around the Intel Xeon Processor E3-1200 v4 family include Cisco, HP, Kontron, Servers Direct, Supermicro, and QCT (Quanta Cloud Technology).

 

Early adopters of the Iris Pro graphics-enabled solution include iStreamPlanet, which streams live video to a wide range of user devices via its cloud delivery platform. In fact, they just announced a new 1080p/60 fps service offering:

 

“We’re excited to be among the first to take advantage of Intel’s new Xeon processors with integrated graphics that provide the transcode power to drive higher levels of live video quality, up to 1080p/60 fps, with price to performance gains that allow us to reach an even broader market.” --- Mio Babic, CEO, iStreamPlanet

 

The Intel Xeon processor E3-1200 v4 product family also includes Intel® Graphics Virtualization Technology for direct assignment (Intel® GVT-d). Intel GVT-d directly assigns a processor’s capabilities to a single user to improve the quality of remote desktop applications.

 

Looking ahead, the future is certain to bring an ever-growing flood of video traffic, along with ever-larger 3D design files. That’s going to make technologies like the Intel Xeon processor E3-1200 v4 family and Iris Pro graphics P6300 all the more essential.

 

For a closer look at this new data center graphics powerhouse, visit intel.com/XeonE3

 

 

 

[1] WSJ: “TV Viewing Slips as Streaming Booms, Nielsen Report Shows.” Dec. 3, 2014.

[2] Sandvine report. 2014.

[3] Measured 1080p30 20MB streams: E3-1286L v3=10, E3-1285L v4=14.

[4] Measured 3DMark® 11: E3-1286L v3=10, E3-1285L v4=14.

For cloud, media, and communications service providers, video delivery is now an essential service offering—and a rather challenging proposition.

 

In a world with a proliferation of viewing devices—from TVs to laptops to smart phones—video delivery becomes much more complex. To successfully deliver high-quality content to end users, service providers must find ways to quickly and efficiently transcode video from one compressed format to another. To add another wrinkle, many service providers now want to move transcoding to the cloud, to capitalize on cloud economics.

 

That’s the idea behind innovative Intel technology-based solutions showcased at the recent Streaming Media East conference in New York. Event participants had the opportunity to gain a close-up look at the advantages of deploying virtualized transcoding workflows in private or public clouds, with the processing work handled by Intel® architecture.

 

I had the good fortune to join iStreamPlanet for a presentation that explained how cloud workflows can be used to ingest, transcode, protect, package, stream, and analyze media on-demand or live to multiscreen devices. We showed how these cloud-based services can help communications providers and large media companies simplify equipment design and reduce development costs, while gaining the easy scalability of a cloud-based solution.

 

iStreamPlanet offers cloud-based video-workflow products and services for live event and linear streaming channels. With its Aventus cloud- and software-based live video streaming solution, the company is breaking new ground in the business of live streaming. Organizations that are capitalizing on iStreamPlanet technology include companies like NBC Sports Group as well as other premium content owners, aggregators, and distributors.

 

In the Intel booth Vantrix showcased a software-defined solution that enables service providers to spread the work of video transcoding across many systems to make everything go a lot faster. With the company’s solution, transcoding workloads that might otherwise take up to an hour to run can potentially be run in just seconds.

 

While they meet different needs, solutions from iStreamPlanet and Vantrix share a common foundation: the Intel® Xeon® processor E3-1200 product family with integrated graphics processing capabilities. By making graphics a core part of the processor, Intel is able to deliver a dense, cost-effective solution that is ideal for video transcoding, cloud-based or otherwise.

 

The Intel Xeon processor E3-1200 product family supports Intel® Quick Sync Video technology. This groundbreaking technology enables hardware-accelerated transcoding to deliver better performance than transcoding on the CPU—all without sacrificing quality.

 

Want to make this story even better? To get a transcoding solution up and running quickly, organizations can use the Intel® Media Server Studio, which provides development tools and libraries for developing, debugging, and deploying media solutions on Intel-based servers.

 

With offerings like Intel Media Server Studio and Intel Quick Sync Video Technology, Intel is enabling a broad ecosystem that is developing innovative solutions that deliver video faster, while capitalizing on the cost advantages of cloud economics.

 

For a closer look at the Intel Xeon processor E3-1200 product family with integrated graphics, visit www.intel.com/XeonE3.

In enterprise IT and service provider environments these days, you’re likely to hear lots of discussion about software-defined infrastructure. In one way or another, everybody now seems to understand that IT is moving into the era of SDI.

 

There are good reasons for this transformation, of course. SDI architectures enable new levels of IT agility and efficiency. When everything is managed and orchestrated in software, IT resources, including compute, storage, and networking, can be provisioned on demand and automated to meet service-level agreements and the demands of a dynamic business.

 

For most organizations, the question isn’t, “Should we move to SDI?” It’s, “How do we get there?” In a previous post, I explored this topic in terms of a high road that uses prepackaged SDI solutions, a low road that relies on build-it-yourself strategies, and a middle road that blends the two approaches together.

 

In this post, I will offer up a maturity-model framework for evaluating where you are in your journey to SDI. This maturity model has five stages in the progression from traditional hard-wired architecture to software-defined infrastructure. Let’s walk through these stages.

 

Standardized

 

At this stage of maturity, the IT organization has standardized and consolidated servers, storage systems, and networking devices. Standardization is an essential building block for all that follows. Most organizations are already here.

 

Virtualized

 

By now, most organizations have leveraged virtualization in their server environments. While enabling a high level of consolidation and greater utilization of physical resources, server virtualization also accelerates service deployment and facilitates workload optimization. The next step is to virtualize storage and networking resources to achieve similar gains.

 

Automated

 

At this stage, IT resources are pooled and provisioned in an automated manner. In a step toward a cloud-like model, automation tools enable the creation of self-service provisioning portals—for example, to allow a development and test team to provision its own infrastructure and to move closer to a frictionless IT organization.

 

Orchestrated

 

At this higher stage of IT maturity, an orchestration engine optimizes the allocation of data center resources. It collects hardware platform telemetry and uses that information to place each application on the best-suited servers: those with features that accelerate the workload, located in approved locations, for optimal performance and the assigned level of trust. The orchestration engine acts as an IT watchdog that spots performance issues, takes remedial action, and then learns from those events to continue meeting or exceeding the customer’s needs.

 

SLA Managed

 

At this ultimate stage—the stage of the real-time enterprise—an organization uses IT service management software to maintain targeted service levels for each application in a holistic manner. Resources are automatically assigned to applications to maintain SLA compliance without manual intervention. The SDI environment makes sure the application gets the infrastructure it needs for optimal performance and compliance with the policies that govern it.
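One way to make the five stages concrete is to treat them as a ladder in which each stage presupposes the ones below it. The sketch below is a simplified self-assessment along those lines, not a formal instrument; the capability names are invented placeholders for the signals a real assessment would weigh.

```python
from enum import IntEnum

class SdiMaturity(IntEnum):
    STANDARDIZED = 1
    VIRTUALIZED = 2
    AUTOMATED = 3
    ORCHESTRATED = 4
    SLA_MANAGED = 5

def assess(capabilities):
    """Return the highest stage whose prerequisites are all met,
    assuming each stage builds on the ones below it."""
    ladder = [
        (SdiMaturity.STANDARDIZED, "consolidated_hardware"),
        (SdiMaturity.VIRTUALIZED, "virtualized_compute"),
        (SdiMaturity.AUTOMATED, "self_service_provisioning"),
        (SdiMaturity.ORCHESTRATED, "telemetry_driven_placement"),
        (SdiMaturity.SLA_MANAGED, "automatic_sla_remediation"),
    ]
    stage = None
    for level, requirement in ladder:
        if not capabilities.get(requirement, False):
            break  # a missing prerequisite caps the maturity level
        stage = level
    return stage

org = {"consolidated_hardware": True, "virtualized_compute": True,
       "self_service_provisioning": True}
print(assess(org).name)  # AUTOMATED
```

The strict ordering reflects the model's premise: an organization cannot meaningfully orchestrate resources it has not yet pooled and automated.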

 

In subsequent posts, I will take a closer look at the Automated, Orchestrated, and SLA Managed stages. For now, the key is to understand where your organization falls in the SDI maturity model and what challenges need to be solved in order to take this journey. This understanding lays the groundwork for the development of strategies that move your data center closer to SDI—and the data center of the future.

Every disruptive technology in the data center forces IT teams to rethink the related practices and approaches. Virtualization, for example, led to new resource provisioning practices and service delivery models.

 

Cloud technologies and services are driving similar change. Data center managers have many choices for service delivery, and workloads can be more easily shifted between the available compute resources distributed across both private and public data centers.

 

Among the benefits stemming from this agility are new approaches for lowering data center energy costs, which have many organizations considering cloud alternatives.

 

Shifting Workloads to Lower Energy Costs

 

Every data center service and resource has an associated power and cooling cost. Energy, therefore, should be a factor in capacity planning and service deployment decisions. But many companies do not leverage all of the energy-related data available to them, and without it, making sense of the information generated by servers, power distribution units, airflow and cooling equipment, and other smart devices is challenging.

 

That’s why holistic energy management is essential to optimizing power usage across the data center. IT and facilities teams can rely on user-friendly consoles, such as graphical thermal and power maps of the data center, to gain a complete picture of the patterns that correlate workloads and activity levels with power consumption and dissipated heat. Specific services and workloads can also be profiled, and logged data builds a historical database for establishing and analyzing temperature patterns. A single cohesive view of energy consumption also reduces reliance on less accurate theoretical models, manufacturer specifications, or manual measurements that are time-consuming and quickly out of date.

 

A Case for Cloud Computing

 

This makes the case for cloud computing as a means to manage energy costs. Knowing how workload shifting will decrease the energy requirements for one site and increase them for another makes it possible to factor in the different utility rates and implement the most energy-efficient scheduling. Within a private cloud, workloads can be mapped to available resources at the location with the lowest energy rates at the time of the service request. Public cloud services can be considered, with the cost comparison taking into account the change to the in-house energy costs.
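In its simplest form, the scheduling decision described above reduces to comparing projected energy cost per site. The sketch below illustrates the idea with made-up site names and utility rates; a production scheduler would also weigh latency, data locality, and capacity headroom.

```python
def cheapest_site(rates_per_kwh, workload_kwh):
    """Pick the site with the lowest energy cost for a workload's
    projected consumption, returning the site and all per-site costs."""
    costs = {site: rate * workload_kwh for site, rate in rates_per_kwh.items()}
    return min(costs, key=costs.get), costs

# Hypothetical utility rates ($/kWh) at three private-cloud locations.
rates = {"oregon": 0.055, "virginia": 0.082, "frankfurt": 0.19}
site, costs = cheapest_site(rates, workload_kwh=1200)
print(site)                       # oregon
print(round(costs["oregon"], 2))  # 66.0
```

The same comparison extends to public cloud offers by adding the provider's price alongside the change in in-house energy cost.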

 

From a technology standpoint, any company can achieve this level of visibility and use it to take advantage of the cheapest energy rates across its data center sites. Almost every data center is tied to at least one other site for disaster recovery, and distributed data centers are common for a variety of reasons. Add to this scenario all of the domestic and offshore regions where Infrastructure-as-a-Service is booming, and businesses have the opportunity to tap into global compute resources that leverage lower-cost power or operate in areas where infrastructure providers can pass through cost savings from government subsidies.

 

Other Benefits of Fine-Grained Visibility

 

For the workloads that remain in the company’s data centers, increased visibility also arms data center managers with knowledge that can drive down the associated energy costs. Energy management solutions, especially those that include at-a-glance dashboards, make it easy to identify idle servers. Since these servers still draw approximately 60 percent of their maximum power requirements, identifying them can help adjust server provisioning and workload balancing to drive up utilization.
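A minimal version of that dashboard check might look like the following sketch. The 60 percent idle-draw approximation follows the figure quoted above; the fleet records, server names, and threshold are illustrative.

```python
def find_ghost_servers(servers, idle_threshold=0.05):
    """Flag servers that are essentially idle yet still draw roughly
    60 percent of their maximum power, estimating the wasted watts."""
    ghosts = []
    for name, utilization, max_watts in servers:
        if utilization < idle_threshold:
            wasted = 0.60 * max_watts  # approximate idle draw
            ghosts.append((name, wasted))
    return ghosts

# (name, average utilization, nameplate max watts) per server.
fleet = [("web-01", 0.42, 450), ("batch-07", 0.01, 500), ("db-03", 0.67, 600)]
print(find_ghost_servers(fleet))  # [('batch-07', 300.0)]
```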

 

Hot spots can also be identified. Knowing which servers or racks are consistently running hot can allow adjustments to the airflow handlers, cooling systems, or workloads to bring the temperature down before any equipment is damaged or services disrupted.

 

Visibility of the thermal patterns can be put to use for adjusting the ambient temperature in a data center. Every degree that temperature is raised equates to a significant reduction in cooling costs. Therefore, many data centers operate at higher ambient temperatures today, especially since modern data center equipment providers warrant equipment for operation at the higher temperatures.

 

Some of the same energy management solutions that boost visibility also provide a range of control features. Thresholds can be set to trigger notification and corrective actions in the event of power spikes, and can even help identify the systems that will be at greatest risk in the event of a spike. Those servers operating near their power and temperature limits can be proactively adjusted, and configured with built-in protection such as power capping.

 

Power capping can also provide a foundation for priority-based energy allocations. The capability protects mission-critical services, and can also extend battery life during outages. Based on knowledge extracted from historical power data, capping can be implemented in tandem with dynamic adjustments to server performance. Lowering clock speeds can be an effective way to lower energy consumption, and can yield measurable energy savings while minimizing or eliminating any discernable degradation of service levels.
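Threshold-and-cap logic of the kind described here can be sketched in a few lines. The node names, wattages, and action labels below are illustrative, not an actual vendor API.

```python
def apply_power_policy(node_watts, cap_watts, alert_watts):
    """Classify each node against an alert threshold and a hard cap,
    returning the corrective action a controller might take."""
    actions = {}
    for node, watts in node_watts.items():
        if watts >= cap_watts:
            actions[node] = "cap"     # clamp via frequency/power capping
        elif watts >= alert_watts:
            actions[node] = "notify"  # spike warning; rebalance candidate
        else:
            actions[node] = "ok"
    return actions

power_readings = {"n1": 180, "n2": 245, "n3": 310}
print(apply_power_policy(power_readings, cap_watts=300, alert_watts=240))
# {'n1': 'ok', 'n2': 'notify', 'n3': 'cap'}
```

Priority-based allocation follows the same pattern, with lower caps assigned to non-critical nodes first.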

 

Documented use cases for real-time feedback and control features such as thresholds and power capping prove that fine-grained energy management can yield significant cost reductions. Typical savings of 15 to 20 percent of the utility budget have been measured in numerous data centers that have introduced energy and temperature monitoring and control.

 

Understand and Utilize Energy Profiles

 

As the next step in the journey that began with virtualization, cloud computing is delivering on its promises: more data center agility, centralized management that lowers operating expenses, and cost-effective support for fast-changing businesses.

 

With an intelligent energy management platform, the cloud also positions data center managers to more cost-effectively assign workloads to leverage lower utility rates in various locations. As energy prices remain at historically high levels, with no relief in sight, this provides a very compelling incentive for building out internal clouds or starting to move some services out to public clouds.

 

Every increase in data center agility, whether from earlier advances such as virtualization or the latest cloud innovations, emphasizes the need to understand and utilize energy profiles within the data center. Ignoring the energy component of the overall cost can hide a significant operating expense from the decision-making process.

The industry continues to advance the iWARP specification for RDMA over Ethernet, first ratified by the Internet Engineering Task Force (IETF) in 2007.

 

This article in Network World, “iWARP Update Advances RDMA over Ethernet for Data Center and Cloud Networks,” which I co-authored with Wael Noureddine of Chelsio Communications, describes two new extensions that have been added to help developers of RDMA software by aligning iWARP more tightly with the RDMA technologies based on the InfiniBand network and transport, i.e., InfiniBand itself and RoCE. By bringing these technologies into alignment, we move closer to the goal of the OpenFabrics Alliance: that the application developer need not concern herself with which of these is the underlying network technology -- RDMA will "just work" on all of them.

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others.

The international melting pot of Vancouver, BC provides a perfect backdrop for the OpenStack Summit, a semi-annual get-together of the developer community driving the future of open source in the data center. After all, it takes a melting pot of community engagement to build “the open source operating system for the cloud”. I came to Vancouver to get the latest from that community. This week’s conference has provided an excellent state of the union: where OpenStack stands on delivering its vision to be the operating system for the cloud, how the industry and user communities are innovating on top of OpenStack to achieve the ruggedness required for enterprise and telco deployments, and where gaps still exist between industry vision and deployment reality. That state of the union is delivered across Summit keynotes, hundreds of track sessions, demos, and endless meetings, meetups, and other social receptions.


 

Intel’s investment in OpenStack reflects the importance of open source software innovation in delivering our vision of Software Defined Infrastructure. Our work extends from our core engagement as a leader in the OpenStack Foundation, to projects that ensure software takes full advantage of Intel platform features for higher levels of security, reliability, and performance, to collaborations that help the demos of today become the mainstream deployments of tomorrow.

 

So what’s new from Intel this week? Today, Intel announced Clear Containers, a project associated with Intel Clear Linux designed to ensure that container-based environments leverage Intel virtualization and security features to both improve speed of deployment and enable a hardware root of trust for container workloads. We also announced the beta delivery of Cloud Integration Technology 3.0, our latest software aimed at delivering workload attestation across cloud environments, and showcased demos ranging from trusted VMs to intelligent workload scheduling to NFV workloads on trusted cloud architecture.


To learn more about Intel’s engagement in the OpenStack community, please check out a conversation with Jonathan Donaldson as well as learn about Intel’s leadership on driving diversity in the data center as seen through the eyes of some leading OpenStack engineers.


Check back tomorrow to hear more about the latest ecosystem and user OpenStack innovation as well as my perspectives on some of the challenges ahead for industry prioritization.  I’d love to hear from you about your perspective on OpenStack and open source in the data center. Continue the conversation here, or reach out @techallyson.

Cloud computing offers what every business wants: the ability to respond instantly to business needs. It also offers what every business fears: loss of control and, potentially, loss of the data and processes that enable the business to work. Our announcement at the OpenStack Summit of Intel® Cloud Integrity Technology 3.0 puts much of that control and assurance back in the hands of enterprises and government agencies that rely on the cloud.

 

Through server virtualization and cloud management software like OpenStack, cloud computing lets you instantly, even automatically, spin up virtual machines and application instances as needed. In hybrid clouds, you can supplement capacity in your own data centers by "bursting" capacity from public cloud service providers to meet unanticipated demand. But this flexibility also brings risk and uncertainty. Where are the application instances actually running? Are they running on trusted servers whose BIOS, operating systems, hypervisors, and configurations have not been tampered with? To assure security, control, and compliance, you must be sure applications run in a trusted environment. That's what Intel Cloud Integrity Technology lets you do.

 

Intel Cloud Integrity Technology 3.0 is software that builds on security features of Intel® Xeon® processors to let you verify that applications in the cloud run on trusted servers and virtual machines whose configurations have not been altered. Working with OpenStack, it ensures that when VMs are booted or migrated to new hardware, the integrity of virtualized and non-virtualized Intel x86 servers and workloads is verified remotely using Intel® Trusted Execution Technology (TXT) and Trusted Platform Module (TPM) technology on Intel Xeon processors. If this "remote attestation" finds discrepancies in the server, BIOS, or VM—suggesting the system may have been compromised by a cyber attack—the boot process can be halted. Otherwise, the application instance is launched in a verified, trusted environment spanning the hardware and the workload.
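The launch-time gate can be illustrated with a deliberately simplified sketch. All names here are hypothetical, and local SHA-256 hashes stand in for the real mechanism, which verifies TXT/TPM quotes against an attestation service rather than hashing blobs in place:

```python
# Hypothetical sketch of a launch-time attestation gate.
# In a real deployment, "measurements" come from TPM quotes verified
# by a remote attestation service, not from local hashing.

import hashlib

# Whitelist of known-good measurements for each platform component.
known_good = {
    "bios": hashlib.sha256(b"bios-1.2").hexdigest(),
    "hypervisor": hashlib.sha256(b"kvm-4.1").hexdigest(),
}

def measure(blob):
    # Stand-in for a hardware-rooted measurement.
    return hashlib.sha256(blob).hexdigest()

def attest(platform):
    """True only if every measured component matches its whitelist entry."""
    return all(
        measure(blob) == known_good[name]
        for name, blob in platform.items()
    )

def launch_vm(platform):
    # The scheduler refuses to launch on a host that fails attestation.
    if not attest(platform):
        raise RuntimeError("attestation failed: halting boot")
    return "VM launched on trusted host"

trusted = {"bios": b"bios-1.2", "hypervisor": b"kvm-4.1"}
tampered = {"bios": b"bios-1.2", "hypervisor": b"rootkit"}
print(launch_vm(trusted))
```

The point of the sketch is the control flow: launch proceeds only when every component in the chain matches a known-good measurement, and halts otherwise.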

 

In addition to assuring the integrity of the workload, Cloud Integrity Technology 3.0 also enables confidentiality by encrypting the workload prior to instantiation and storing it securely using OpenStack Glance. An included key management system that you deploy on premises gives the tenant complete ownership and control of the keys used to encrypt and decrypt the workload.

 

Cloud Integrity Technology 3.0 builds on earlier releases to assure a full chain of trust from bare metal up through VMs. It also provides location controls to ensure workloads can only be instantiated in specific data centers or clouds. This helps address the regulatory compliance requirements of some industries (like PCI and HIPAA) and geographical restrictions imposed by some countries.

 

What we announced at OpenStack Summit is a beta availability version of Intel Cloud Integrity Technology 3.0. We'll be working to integrate with an initial set of cloud service providers and security vendor partners before we make the software generally available. And we'll submit extensions to OpenStack for Cloud Integrity Technology 3.0 later this year.

 

Cloud computing is letting businesses slash time to market for new products and services and respond quickly to competitors and market shifts. But to deliver the benefits promised, cloud service providers must assure tenants their workloads are running on trusted platforms and provide the visibility and control they need for business continuity and compliance.

 

Intel Xeon processors and Cloud Integrity Technology are enabling that. And with version 3.0, we're enabling it across the stack from the hardware through the workload. We're continuing to extend Cloud Integrity Technology to storage and networking workloads as well: storage controllers, SDN controllers, and virtual network functions like switches, evolved packet core elements, and security appliances. It's all about giving enterprises the tools they need to capture the full potential of cloud computing.

By Tony Dempsey


I’m here attending the OpenStack Summit in Vancouver, BC and wanted to find out more about OPNFV, a cross industry initiative to develop a reference architecture for operators to use as a reference for their NFV deployments. Intel is a leading contributor to OPNFV, and I was keen to find out more, so I attended a special event being held as part of the conference.

 

Heather Kirksey (OPNFV Director) kicked off today’s event by describing what OPNFV is all about, including the history of why OPNFV was formed and an overview of the areas OPNFV is focused on. OPNFV is a carrier-grade, integrated open source platform intended to accelerate the introduction of new NFV products and services. The initiative grew out of the ETSI NFV ISG, and its initial focus is on the NFVI layer.

 

OPNFV’s first release will be called Arno (release names are themed on rivers) and will include OpenStack, OpenDaylight, and Open vSwitch.  No release date is available just yet, but it is thought to be soon. Notably, Arno is expected to be used in lab environments initially, rather than in commercial deployments. High Availability (HA) will be part of the first release (supported on the control and deployment side). The plan is to make OpenStack itself Telco-Grade rather than create a separate Telco-Grade version of OpenStack. AT&T gave an example of how they plan to use the initial Arno release: bring it into their lab, add additional elements, and test for performance and security. They see this release very much as a means to uncover gaps in open source projects, help identify fixes, and upstream those fixes. OPNFV is committed to working with the upstream communities to ensure a good relationship.  Down the road it might be possible for OPNFV releases to be deployed by service providers, but currently this is a development tool.

 

An overview of OPNFV’s Continuous Integration (CI) activities was given, along with a demo. The aim of the CI activity is to give fast feedback to developers in order to improve the rate at which software is developed. Chris Price (TSC Chair) spoke about requirements for the projects and working with upstream communities. According to Chris, OPNFV’s focus is working with the open source projects to define the issues, understand which open source community can likely solve the problem, work with that community to find a solution, and then upstream that solution. Mark Shuttleworth (founder of Canonical) gave an auto-scaling demo showing a live vIMS core (from Metaswitch) with CSCF auto-scaling running on top of Arno.

 

I will be on the lookout for more OPNFV news throughout the Summit to share. In the meantime, check out Intel Network Builders for more information on Intel’s support of OPNFV and solutions delivery from the networking ecosystem.

By Suzi Jewett, Diversity & Inclusion Manager, Data Center Group, Intel

 

I have the fantastic job of driving diversity and inclusion strategy for the Data Center Group at Intel.  For me it is the perfect opportunity to align my skills, passions, and business imperatives in a full-time role.  I have always had the skills and passions, but it was not until recently that the business imperative grew within the company to the point that we needed a full-time person in this role, and many similar roles throughout Intel.  As a female mechanical engineer I have always known I am one of the few, and at times that was awkward, but even I didn’t know the business impact of not having diverse teams.


Over the last 2-3 years, the evidence on the bottom-line business results of having diverse people on teams and in leadership positions has become clear and overwhelming: we can no longer be okay with flat or dwindling representation of diverse people on our teams.  We also know that all employees have more passion for their work and are able to bring their whole selves to work when we have an inclusive environment.  Therefore, we will not achieve the business imperatives we need to unless we embrace diverse backgrounds, experiences, and thoughts in our culture and in our every decision.

 

Within the Data Center Group one area that we recognize as well below where we need it to be is female participation in open source technologies. So, I decided that we should host a networking event for women at the OpenStack Summit this year and really start making our mark in increasing the number of women in the field.

 

Today I had my first opportunity to interact with people working in OpenStack, at the Women of OpenStack event. We had a beautiful cruise around the Vancouver Harbor and then chatted the night away at Black + Blue Steakhouse. About 125 women attended, along with a handful of male allies (yeah!). The event was put on by the OpenStack Foundation and sponsored by Intel and IBM. The excitement of the women there and the non-stop conversation were energizing to be a part of, and it was obvious that the women loved having some kindred spirits to talk tech and talk life with. I was able to learn more about how OpenStack works and why it’s important, and to see the passion of everyone in the room to work together to make it better. I learned that many of the companies design features together, meeting weekly and assigning ownership to divvy up the work between the companies to complete feature delivery to the code. Being new to open source software, I was amazed that this is even possible, and excited to see the opportunities to really have diversity in our teams, because collaborative design can bring in a vast amount of diversity and create a better end product.

 


 

A month or so ago I got asked to help create a video to be used today to highlight the work Intel is doing in OpenStack and the importance to Intel and the industry of having women as contributors. The video was shown tonight along with a great video from IBM and got lots of applause and support throughout the venue as different Intel women appeared to talk about their experiences. Our Intel ‘stars’ were a hit and it was great to have them be recognized for their technical contributions to the code and leadership efforts for Women of OpenStack. What’s even more exciting is that this video will play at a keynote this week for all 5000 attendees to highlight what Intel is doing to foster inclusiveness and diversity in OpenStack!

 

By Mike Pearce, Ph.D. Intel Developer Evangelist for the IDZ Server Community

 

 

On May 5, 2015, Intel Corporation announced the release of its highly anticipated Intel® Xeon® processor E7 v3 family.  A key focus of the new processor family is accelerating business insight and optimizing business operations—in healthcare, financial, enterprise data center, and telecommunications environments—through real-time analytics. The new Xeon processor is a game-changer for organizations seeking better decision-making, improved operational efficiency, and a competitive edge.

 

The Intel Xeon processor E7 v3 family’s performance, memory capacity, and advanced reliability now make mainstream adoption of real-time analytics possible. The rise of the digital service economy and the recognized potential of "big data" open new opportunities for organizations to process, analyze, and extract real-time insights. The Intel Xeon processor E7 v3 family tames the large volumes of data accumulated by cloud-based services, social media networks, and intelligent sensors, and enables data analytics insights, aided by optimized software solutions.

 

A key enhancement to the new processor family is its increased memory capacity – the industry’s largest per socket1 - enabling entire datasets to be analyzed directly in high-performance, low-latency memory rather than traditional disk-based storage. For software solutions running on and/or optimized for the new Xeon processor family, this means businesses can now obtain real-time analytics to accelerate decision-making—such as analyzing and reacting to complex global sales data in minutes, not hours.  Retailers can personalize a customer’s shopping experience based on real-time activity, so they can capitalize on opportunities to up-sell and cross-sell.  Healthcare organizations can instantly monitor clinical data from electronic health records and other medical systems to improve treatment plans and patient outcomes.

 

By automatically analyzing very large amounts of data streaming in from various sources (e.g., utility monitors, global weather readings, and transportation systems data, among others), organizations can deliver real-time, business-critical services to optimize operations and unleash new business opportunities. With the latest Xeon processors, businesses can expect improved performance from their applications, and realize greater ROI from their software investments.

 

 

Real Time Analytics: Intelligence Begins with Intel

 

Today, organizations like IBM, SAS, and Software AG are placing increased emphasis on business-intelligence (BI) strategies. The ability to extract insights from data is something customers expect from their software to maintain a competitive edge.  Below are just a few examples of how these firms are able to use the new Intel Xeon processor E7 v3 family to meet and exceed customer expectations.

 

Intel and IBM have collaborated closely on a hardware/software big data analytics combination that can accommodate any size workload. IBM DB2* with BLU Acceleration is a next-generation database technology and a game-changer for in-memory computing. When run on servers with Intel’s latest processors, IBM DB2 with BLU Acceleration optimizes CPU cache and system memory to deliver breakthrough performance for speed-of-thought analytics. Notably, the same workload runs 246 times faster3 on the latest processor with IBM DB2 with BLU Acceleration than on the previous generation running IBM DB2 10.1 on the Intel Xeon processor E7-4870.

 

By running IBM DB2 with BLU Acceleration on servers powered by the new generation of Intel processors, users can quickly and easily transform a torrent of data into valuable, contextualized business insights. Complex queries that once took hours or days to yield insights can now be analyzed as fast as the data is gathered.  See how to capture and capitalize on business intelligence with Intel and IBM.

 

From a performance perspective, Apama* streaming analytics has proven to be equally impressive. Apama (a division of Software AG) provides a complex event processing engine that looks at streams of incoming data, then filters, analyzes, and takes automated action on that fast-moving big data. Benchmarking tests have shown huge performance gains with the newest Intel Xeon processors: 59 percent higher throughput with Apama running on a server powered by the Intel Xeon processor E7 v3 family compared to the previous-generation processor.4

 

Drawing on this level of processing power, the Apama platform can tap the value hidden in streaming data to uncover critical events and trends in real time. Users can take real-time action on customer behaviors, instantly identify unusual behavior or possible fraud, and rapidly detect faulty market trades, among other real-world applications. For more information, watch the video on Driving Big Data Insight from Software AG. This infographic shows Apama performance gains achieved when running its software on the newest Intel Xeon processors.

 

SAS applications provide a unified and scalable platform for predictive modeling, data mining, text analytics, forecasting, and other advanced analytics and business intelligence solutions. Running SAS applications on the latest Xeon processors provides an advanced platform that can help increase performance and headroom, while dramatically reducing infrastructure cost and complexity. It also helps make analytics more approachable for end customers. This video illustrates how the combination of SAS and Intel® technologies delivers the performance and scale to enable self-service tools for analytics, with optimized support for new, transformative applications. Further, by combining SAS* Analytics 9.4 with the Intel Xeon processor E7 v3 family and the Intel® Solid-State Drive Data Center Family for PCIe*, customers can experience throughput gains of up to 72 percent. 5

 

The new Intel Xeon processor E7 v3 family’s ability to drive new levels of application performance also extends to healthcare. To accelerate Epic* EMR’s data-driven healthcare workloads and deliver reliable, affordable performance and scalability for other healthcare applications, the company needed a robust, high-throughput foundation for data-intensive computing. Epic’s engineers benchmark-tested a new generation of key technologies, including a high-performance data platform from InterSystems*, new virtualization tools from VMware*, and the Intel Xeon processor E7 v3 family. The result was a 60 percent increase in database scalability,6, 7 a level of performance that can keep pace with rising data access demands in the healthcare enterprise while creating a more reliable, cost-effective, and agile data center. With this kind of performance improvement, healthcare organizations can deliver increasingly sophisticated analytics and turn clinical data into actionable insight to improve treatment plans and, ultimately, patient outcomes.

 

These are only a handful of the optimized software solutions that, when powered by the latest generation of Intel processors, enable tremendous business benefits and competitive advantage. With its improved performance, memory capacity, and scalability, the Intel Xeon processor E7 v3 family delivers higher socket counts, heightened security, increased data center efficiency, and the critical reliability to handle any workload across a range of industries, so that your data center can bring your business’s best ideas to life. To learn more, visit our software solutions page and take a look at our Enabled Applications Marketing Guide.

 

 

 

 

 

 

1 Intel Xeon processor E7 v3 family provides the largest memory footprint of 1.5 TB per socket compared to up to 1TB per socket delivered by alternative architectures, based on published specs.

2 Up to 6x business processing application performance improvement claim based on SAP* OLTP internal in-memory workload measuring transactions per minute (tpm) on SuSE* Linux* Enterprise Server 11 SP3. Configurations: 1) Baseline 1.0: 4S Intel® Xeon® processor E7-4890 v2, 512 GB memory, SAP HANA* 1 SPS08. 2) Up to 6x more tpm: 4S Intel® Xeon® processor E7-8890 v3, 512 GB memory, SAP HANA* 1 SPS09, which includes 1.8x improvement from general software tuning, 1.5x generational scaling, and an additional boost of 2.2x for enabling Intel TSX.

3 Software and workloads used in the performance test may have been optimized for performance only on Intel® microprocessors. Previous generation baseline configuration: SuSE Linux Enterprise Server 11 SP3 x86-64, IBM DB2* 10.1 + 4-socket Intel® Xeon® processor E7-4870 using IBM Gen3 XIV FC SAN solution completing the queries in about 3.58 hours.  ‘New Generation’ new configuration: Red Hat* Enterprise LINUX 6.5, IBM DB2 10.5 with BLU Acceleration + 4-socket Intel® Xeon® processor E7-8890 v3 using tables in-memory (1 TB total) completing the same queries in about 52.3 seconds.  For more complete information visit http://www.intel.com/performance/datacenter

4 One server was powered by a four-socket Intel® Xeon® processor E7-8890 v3 and another server with a four-socket Intel Xeon processor E7-4890 v2. Each server was configured with 512 GB DDR4 DRAM, Red Hat Enterprise Linux 6.5*, and Apama 5.2*. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

5 Up to 1.72x generational claim based on SAS* Mixed Analytics workload measuring sessions per hour using SAS* Business Analytics 9.4 M2 on Red Hat* Enterprise Linux* 7. Configurations: 1) Baseline: 4S Intel® Xeon® processor E7-4890 v2, 512 GB DDR3-1066 memory, 16x 800 GB Intel® Solid-State Drive Data Center S3700, scoring 0.11 sessions/hour. 2) Up to 1.72x more sessions per hour: 4S Intel® Xeon® processor E7-8890 v3, 512 GB DDR4-1600 memory, 4x 2.0 TB Intel® Solid-State Drive Data Center P3700 + 8x 800 GB Intel® Solid-State Drive Data Center S3700, scoring 0.19 sessions/hour.

6 Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to www.intel.com/performance

7 Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase.

By David Fair, Unified Networking Marketing Manager, Intel Networking Division

 

Certainly one of the miracles of technology is that Ethernet continues to be a fast-growing technology 40 years after its initial definition.  That was May 23, 1973, when Bob Metcalfe wrote his memo to his Xerox PARC managers proposing “Ethernet.”  To put things in perspective, 1973 was the year a signed ceasefire ended the Vietnam War.  The U.S. Supreme Court issued its Roe v. Wade decision. Pink Floyd released “Dark Side of the Moon.”

 

In New York City, Motorola made the first handheld mobile phone call (and, no, it would not fit in your pocket).  1973 was four years before the first Apple II computer became available, and eight years before the launch of the first IBM PC. In 1973, all consumer music was analog: vinyl LPs and tape.  It would be nine more years before consumer digital audio arrived in the form of the compact disc—which, ironically, has long since been eclipsed by Ethernet packets as the primary way digital audio gets to consumers.

 

motophone.jpg

 

The key reason for Ethernet’s longevity, IMHO, is its uncanny, Darwinian ability to evolve to adapt to ever-changing technology landscapes.  A tome could be written about the many technological challenges to Ethernet and its evolutionary response, but I want to focus here on just one of these: the emergence of multi-core processors in the first decade of this century.

 

The problem Bob Metcalfe was trying to solve was how to get packets of data from computer to computer and, of course, to Xerox laser printers.  But multi-core challenges that paradigm: Ethernet’s job, as Bob defined it, is done when data gets to a computer’s processor, before it reaches the correct core in that processor waiting to consume that data.

 

Intel developed a technology to help address that problem, and we call it Intel® Ethernet Flow Director.  We implemented it in all of Intel’s most current 10GbE and 40GbE controllers. What Intel® Ethernet Flow Director does, in a nutshell, is establish an affinity between a flow of Ethernet traffic and the specific core in a processor waiting to consume that traffic.
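On Linux, flow-to-queue affinity of this kind is typically configured through ethtool's n-tuple filters. The commands below are a sketch only: the interface name, port, and queue number are assumptions, and the exact feature set depends on the NIC and driver.

```shell
# Enable n-tuple flow steering filters on the NIC (interface name assumed).
ethtool -K eth0 ntuple on

# Steer inbound Memcached traffic (TCP port 11211) to RX queue 4,
# whose interrupt would be affinitized to the core running Memcached.
ethtool -N eth0 flow-type tcp4 dst-port 11211 action 4

# List the currently programmed RX classification rules.
ethtool -u eth0
```

The idea matches the affinity concept described above: packets for a given flow land on a queue serviced by the same core that consumes the data, avoiding cross-core traffic.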

 

I encourage you to watch a two and a half minute video explanation of how Intel® Ethernet Flow Director works.  If that, as I hope, whets your appetite to learn more about this Intel technology, we also have a white paper that delves into deeper details with an illustration of what Intel® Ethernet Flow Director does for a “network stress test” application like Memcached.  I hope you find both the video and white paper enjoyable and illuminating.

 

Intel, the Intel logo, and Intel Ethernet Flow Director are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others.

In today’s world, engineering teams can be located just about anywhere in the world, and the engineers themselves can work from just about any location, including home offices. This geographic dispersion creates a dilemma for corporations that need to arm engineers with tools that make them more productive while simultaneously protecting valuable intellectual property—and doing it all in an affordable manner.

 

Those goals are at the heart of hosted workstations that leverage new combinations of technologies from Intel and Citrix*. These solutions, unveiled this week at the Citrix Synergy 2015 show in Orlando, allow engineers to work with demanding 3D graphics applications from virtually anywhere in the world, with all data and applications hosted in a secure data center. Remote users can work from the same data set, with no need for high-volume data transfers, while enjoying the benefits of fast, clear graphics running on a dense, cost-effective infrastructure.

 

These solutions are in the spotlight at Citrix Synergy. Event participants had the opportunity to see demos of remote workstations capitalizing on the capabilities of the Intel® Xeon® processor E3-1200 product family and Citrix XenApp*, XenServer*, XenDesktop*, and HDX 3D Pro* software.

 

Show participants also had a chance to see demos of graphics passthrough with Intel® GVT-d in Citrix XenServer* 6.5, running Autodesk* Inventor*, SOLIDWORKS*, and Autodesk Revit* software. Other highlights included a technology preview of Intel GVT-g with Citrix HDX 3D Pro running Autodesk AutoCAD*, Adobe* Photoshop*, and Google* Earth.

 

Intel GVT-d and Intel GVT-g are two of the variants of Intel® Graphics Virtualization Technology. Intel GVT-d allows direct assignment of an entire GPU’s capabilities to a single user—it passes all of the native driver capabilities through the hypervisor. Intel GVT-g allows multiple concurrent users to share the resources of a single GPU.

 

The new remote workstation solutions showcased at Citrix Synergy build on a long, collaborative relationship between engineers at Intel and Citrix. Our teams have worked together for many years to help our mutual customers deliver a seamless mobile and remote workspace experience to a distributed workforce. Users and enterprises both benefit from the secure and cost-effective delivery of desktops, apps, and data from the data center to the latest Intel Architecture-based endpoints.

 

For a closer look at the Intel Xeon processor E3-1200 product family and hosted workstation infrastructure, visit intel.com/workstation.

 

 

Intel, the Intel logo, Intel inside, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Citrix, the Citrix logo, XenDesktop, XenApp, XenServer, and HDX are trademarks of Citrix Systems, Inc. and/or one of its subsidiaries, and may be registered in the U.S. and other countries. * Other names and brands may be claimed as the property of others.
