The role of IT organizations is to deliver value by increasing employee productivity, driving business efficiency, facilitating business growth, and delivering IT efficiency and continuity.  During economic downturns, IT is often called upon to step up and deliver even more value.  This is exactly what Windows 7 on the latest Intel Core-based mobile platforms delivers.  This is why I am so excited that we are well down the path to full deployment of the new OS, with our enterprise release just around the corner.


Windows 7, along with the performance of the latest Intel-based mobile clients, enables us to deliver on our overall client strategy: providing productivity and flexibility to our employees while reducing TCO.  Our testing has shown that Windows 7 improves productivity with higher application responsiveness, a better user interface, and improved stability.  The new OS also makes clients easier to manage, helping to drive lower TCO through reduced service desk calls.  Finally, the enhanced security and application control features complement Intel vPro technology and give us better data protection.  To ensure we capture this value as quickly as possible, we are preparing for an aggressive enterprise adoption of Windows 7 coupled with our continued PC refresh strategy.


I hope you find this video pertinent, and I encourage you to respond and share your ideas on what your company is doing to drive employee productivity or how you are taking advantage of Windows 7 and the new Core-based platforms.


Thank you,
Diane Bryant, Intel CIO


Cloud Computing & the Psychology of Mine

Legacy Thinking in the Evolving Datacenter

The 1957 Warner Brothers* cartoon “Ali Baba Bunny” shows a scene where an elated Daffy Duck bounds about a pile of riches and gold in Ali Baba’s cave exclaiming, “Mine, Mine… It’s all Mine!” Daffy Duck, cartoons, Ali Baba… what do these have to do with the evolving datacenter and cloud computing?

The answer to this question is ‘everything’! Albeit exaggerated, Daffy’s exclamation is not far from the thinking of the typical application owner in today’s datacenter. The operating system (OS), application, servers, network connections, support, and perhaps racks are all the stovepipe property of the application owner. “Mine, Mine… It’s all Mine!” For most IT workloads, this means a singularly purposed stack of servers, over-provisioned by 50-70% for peak load and conservatively sized at 2-4x capacity for growth over time. The result of this practice is an entire datacenter running at 10-15% utilization, held in reserve against unforeseen load spikes or faster-than-expected application adoption. Given that a server consumes roughly 65% of its power budget when running at 0% utilization, the problem of waste is self-evident.
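
To make that waste concrete, here is a back-of-the-envelope sketch. The 400 W peak draw per server is an assumption chosen only for illustration; the utilization and idle-power figures come from the paragraph above.

```python
# Back-of-the-envelope power math using the figures quoted above.
# The 400 W peak draw per server is a hypothetical value.

PEAK_WATTS = 400            # hypothetical peak power draw per server
IDLE_POWER_FRACTION = 0.65  # ~65% of the power budget is burned at 0% load

def power_draw(utilization: float) -> float:
    """Approximate draw, linear between the idle floor and peak."""
    return PEAK_WATTS * (IDLE_POWER_FRACTION
                         + (1 - IDLE_POWER_FRACTION) * utilization)

for u in (0.0, 0.125, 0.60):
    print(f"{u:5.0%} utilization -> {power_draw(u):5.1f} W "
          f"({power_draw(u) / power_draw(0.60):.0%} of the draw at 60%)")

# A host idling at the 10-15% utilization described above draws about
# 80% of the power of a host doing real work at 60% utilization.
```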

Enter server virtualization, the modern hypervisor or virtual machine monitor (VMM), and the eventual ubiquity of cloud computing. Although features vary between VMware*, Microsoft*, Xen*, and other flavors of virtualization, all abstract the guest OS and application stack from the underlying hardware and thereby make workloads portable.

This workload portability, combined with those abysmal utilization rates, allows the consolidation of multiple OS-App stacks onto single physical servers, and the subdivision of ever-larger resources such as the 4-socket Intel Xeon 7500 series platform, which surpasses the compute capacity of mid-90s supercomputers. Virtualization is a tool that helps reclaim datacenter space, reduce costs, and simplify the provisioning and re-provisioning of OS-App stacks. However, much like a hammer, virtualization requires a functioning intelligence to wield it, and it can create more management overhead if one refuses to break the paradigm of ‘mine’…
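
The consolidation math follows directly from those utilization numbers. In the rough sketch below, the 100-host stovepipe fleet and the 60% utilization target are assumptions, not measurements:

```python
import math

# Rough consolidation math. The 100-host fleet and the 60% target
# (headroom for spikes and failover) are assumptions for illustration.

STOVEPIPE_HOSTS = 100
AVG_UTILIZATION = 0.125    # midpoint of the 10-15% figure above
TARGET_UTILIZATION = 0.60  # leave headroom rather than run hosts flat out

hosts_needed = math.ceil(STOVEPIPE_HOSTS * AVG_UTILIZATION / TARGET_UTILIZATION)
print(f"{STOVEPIPE_HOSTS} stovepipe hosts -> {hosts_needed} consolidated hosts "
      f"(roughly {STOVEPIPE_HOSTS / hosts_needed:.1f}:1)")
```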

A portion of this intelligence lies with the application owner. In the past, the application owner had to sequester dedicated resources and over-provision to ensure availability and accountability. Although this thinking still holds to a degree, current infrastructure is much more fungible than the static compute resources of 10 or even 5 years ago. The last eight months working on the Datacenter 2.0 project, a joint Intel IT and Intel Architecture Group (IAG) effort, brought this thinking to the forefront as every Proof of Concept (PoC) owner repeatedly asked for dedicated resources within the project’s experimental ‘mini-cloud’. Time and time again, end users asked for isolated and dedicated servers, network, and storage, demonstrating a fundamental distrust of the cloud’s ability to meet their expectations. Interestingly, most of the PoC owners cited performance as the leading reason for requesting dedicated resources, yet were unable to articulate specific requirements such as network bandwidth consumption, memory usage, or disk I/O operations.

The author initially shared this skepticism, as virtualization and ‘the cloud’ have some as-yet immature features. For broad adoption, the cloud compute model must demonstrate both the ability to secure and isolate workloads and the ability to actively respond to demands across all four resource vectors: compute, memory, disk I/O, and network I/O. Current solutions respond readily to memory and compute utilization; however, most hypervisors are blind to disk and network bottlenecks. In addition, current operating systems lack mechanisms for on-the-fly increases or decreases in the number of CPUs and the amount of memory available to the OS. Once the active measurement, response, trend analysis, security, and OS flexibility issues are resolved, virtualization and cloud computing are poised to revolutionize the way IT deploys applications. However, this is the easy piece, as it is purely technical and a matter of inevitable technology maturation.
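
As a minimal sketch of what watching all four vectors on a single host might look like, here is a sampling routine built on the third-party psutil library; the one-second sampling window is an arbitrary choice.

```python
# Minimal sketch: sample all four resource vectors on a single host
# using the third-party psutil library (pip install psutil). The
# one-second sampling window is an arbitrary choice.

import psutil

def sample_vectors(interval: float = 1.0) -> dict:
    disk0 = psutil.disk_io_counters()
    net0 = psutil.net_io_counters()
    cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval` seconds
    disk1 = psutil.disk_io_counters()
    net1 = psutil.net_io_counters()
    return {
        "cpu_percent": cpu,
        "memory_percent": psutil.virtual_memory().percent,
        "disk_read_B_per_s": (disk1.read_bytes - disk0.read_bytes) / interval,
        "disk_write_B_per_s": (disk1.write_bytes - disk0.write_bytes) / interval,
        "net_sent_B_per_s": (net1.bytes_sent - net0.bytes_sent) / interval,
        "net_recv_B_per_s": (net1.bytes_recv - net0.bytes_recv) / interval,
    }

if __name__ == "__main__":
    for name, value in sample_vectors().items():
        print(f"{name:>20}: {value:,.1f}")
```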

The more difficult piece of this puzzle is the change in thinking, the paradigm shift, that end users and application owners must make. This change happens when the question asked becomes “is my application available?” instead of “is the server up?”, and when application owners think in terms of meeting service-level agreements and application response-time requirements instead of application uptime. After much testing and demonstration, end users will eventually become comfortable with the idea that the cloud can adapt to the needs of their workload regardless of the demand vector.
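
A small sketch of that shift in the question being asked: probe the application itself and judge it against a response-time SLA, rather than pinging the server. The endpoint URL and the 500 ms threshold below are hypothetical.

```python
# Sketch of asking "is my application available?" rather than "is the
# server up?": time an HTTP request and judge it against a response-time
# SLA. The endpoint URL and the 500 ms threshold are hypothetical.

import time
import urllib.request

APP_URL = "http://app.example.internal/healthcheck"  # hypothetical endpoint
SLA_SECONDS = 0.5                                    # hypothetical SLA

def sla_met(url: str = APP_URL, sla: float = SLA_SECONDS) -> bool:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=sla) as response:
            ok = response.status == 200
    except OSError:  # URLError, timeouts, and connection failures
        ok = False
    elapsed = time.monotonic() - start
    print(f"ok={ok} response_time={elapsed * 1000:.0f} ms "
          f"sla_met={ok and elapsed <= sla}")
    return ok and elapsed <= sla

if __name__ == "__main__":
    sla_met()
```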

Although not a panacea, cloud computing promises flexibility, efficiency, demand-based resourcing, and an ability to observe and manage the resources consumed by application workloads like never before. As this compute model matures, our responsibility as engineers and architects is to foster credibility, deploy reliable solutions, and push the industry to mature those underdeveloped security and demand-vector response features.

Christian D. Black, MCSE

Technologist/Systems Engineer

Intel IT – Strategy, Architecture, & Innovation
