
Intel IT has delivered significant business value with our enterprise private cloud. We have saved $9M USD, net of investment, to date from efficiencies gained through our private cloud. Over 64% of our Office and Enterprise environment is virtualized, and 80% of new business services are delivered through our enterprise private cloud. Most importantly, the time it takes to provision new IT services has decreased dramatically, from 3 months to 45 minutes.

 

The next step in our enterprise cloud vision is a hybrid cloud usage model that combines our internal private cloud with always-active, secure external clouds. But for that model to work, we need to protect data according to Intel information security policies. As part of that effort, and in parallel, Intel IT is actively seeking ways to implement data anonymization techniques that enhance the security of Intel's data in the public cloud while still allowing the data to be analyzed and used.

 

We realize that a 100% secure cloud infrastructure is unrealistic. So instead of hoping that cloud infrastructures will be totally secure, we can prepare for potential security breaches in the cloud by proactively anonymizing data, making it worthless to others while still allowing Intel IT to process it in a useful way. Data anonymization is an important tool in our continuing pursuit of secure cloud computing.

 

So what is data anonymization? Data anonymization is the process of obscuring published data to prevent the identification of key information. It can enhance the security of data stored in public clouds by rendering the data worthless to anyone who steals it, while still allowing us to conduct useful analytics and reporting. This can be accomplished in a number of ways. For example, "shifting" adds a fixed offset to numerical values, while "truncation" shortens data. Another technique is to add fictitious data records to obscure patterns and relationships. Other data anonymization techniques include hashing and permutation, to name a few.
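
To make these techniques concrete, here is a minimal sketch in Python (illustrative only, not Intel IT's production tooling) that applies hashing, value shifting, and truncation to a record before it leaves the private cloud. The field names, salt, and offset are invented for the example; in practice the secret values would stay on premises.

  import hashlib

  SHIFT_OFFSET = 1337           # fixed offset, kept secret inside the private cloud
  SALT = "per-dataset-salt"     # hashing salt, also kept internal

  def hash_field(value):
      # Replace an identifier with a salted one-way hash.
      return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

  def shift_value(amount):
      # Add a fixed offset; differences and trends are preserved for analytics.
      return amount + SHIFT_OFFSET

  def truncate(value, keep=3):
      # Shorten data, e.g. keep only a ZIP-code prefix.
      return value[:keep]

  record = {"employee_id": "E123456", "salary": 98000.0, "zip": "97124"}
  anonymized = {
      "employee_id": hash_field(record["employee_id"]),
      "salary": shift_value(record["salary"]),
      "zip": truncate(record["zip"]),
  }
  print(anonymized)

Because the salt and offset never leave the private cloud, records stolen from the public cloud are worthless on their own, yet we can still run aggregate analytics externally and reverse the shift internally when exact values are needed.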

 

We conducted a proof of concept (PoC) that showed data anonymization is a viable technique for enhancing the security of cloud computing. It can simplify demilitarized zone design and security provisioning, enabling more secure cloud computing. It can also help alleviate some of the potential legal problems encountered by U.S. companies that store data associated with customers living in the EU.

 

@ajayc47

Many enterprises run two separate, parallel networks in their data centers. Ethernet-based LANs connect servers and clients to each other and to the Internet. Fibre Channel-based networks connect servers to SANs and block storage arrays for storing data.

 

In late 2010 we determined that our existing 1GbE network infrastructure was no longer adequate to meet our rapidly growing business requirements and the resource demands of our increasingly virtualized environment. Furthermore, achieving the full benefits of the performance gains of the latest Intel processors and clustering technologies required network performance upgrades.

 

Our solution, based on the extensive test results documented in this FCoE white paper, was to implement a unified network infrastructure using FCoE through dual-port 10Gb Intel Ethernet X520 server adapters. As a result, we were able to reduce total network costs per server rack by more than 50%, among other benefits described in the paper.

 

@ajayc47

The density of our data centers has increased since 2004 as we expanded our rate of virtualization, proactively refreshed our servers and storage with the latest generation of Intel Xeon processors, and consolidated the number of data centers. As a result, we had to innovate and optimize the efficiency and reliability of our data center facilities to accommodate cabinet power densities that grew from 15 kW to 22.5 kW and now to 30 kW. We focused on optimizing 4 key areas: air and thermal management, architecture, electrical, and harvesting stranded capacity. Read this data center facilities paper to learn and see (in 3D color) how our facilities changed as we increased the density of our data centers.

 

For low-density cabinets up to 15 kW, we use traditional computer room air conditioning (CRAC) units in the return airstream without any air segregation, as illustrated below:

[Figure 1: Traditional CRAC cooling without air segregation]

For 22.5 kW cabinets, we use a hot aisle enclosure (HAE) with the CRAC unit in the return airstream, as illustrated below:

[Figure 2: Hot aisle enclosure with CRAC unit in the return airstream]

For 30 kW cabinets, we use a flooded supply air design with passive chimneys, as well as HAEs, as shown below:

[Figure 3: Flooded supply air design with passive chimneys]

For some facilities with a raised metal floor (RMF), we still use a flooded air design by raising the room air handlers' supply path above the RMF, as shown below:

[Figure 4: Flooded air design with supply path raised above the RMF]

Follow me on Twitter at @ajayc47.

Like many of our enterprise IT peers, we are challenged with rapid growth in storage demand. In 2011 alone, our storage capacity grew 53% over 2010, to 38.2 PB, and the continued build-out of our private cloud could further increase demand. Clearly we could not let costs increase linearly with demand. Through a variety of techniques described in this data storage solutions paper, including thin provisioning, tiering, storage refresh, using SSDs, and increasing utilization, we have been able to support significant capacity and performance improvements while saving $9.2M.
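
As a rough illustration of why thin provisioning helps (a simplified model with invented numbers, not our actual capacity tooling), physical capacity only has to cover what is actually written, not what is allocated:

  # Simplified thin-provisioning model: volumes are allocated a logical
  # size up front, but physical disk is consumed only as data is written.
  volumes = [
      {"allocated_tb": 10, "written_tb": 3},
      {"allocated_tb": 20, "written_tb": 7},
      {"allocated_tb": 5,  "written_tb": 4},
  ]

  logical = sum(v["allocated_tb"] for v in volumes)   # what applications see
  physical = sum(v["written_tb"] for v in volumes)    # what we must actually buy
  print(f"Logical: {logical} TB, physical: {physical} TB "
        f"({100 * (1 - physical / logical):.0f}% of purchases deferred)")

In this toy case 35 TB of allocated capacity needs only 14 TB of physical disk. Combined with tiering, which places hot data on SSDs and cold data on cheaper disk, this is how capacity growth can outpace spending.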

 

These techniques, combined with deploying new Intel® Xeon® processor-based storage technologies, allow us to meet steep storage demand growth cost-efficiently without compromising quality of service in our virtualized, multi-tenant computing environment.

 

@ajayc47

We recently formulated a new data center strategy to help us deliver improved IT services at lower cost. At the center of our strategy is a new decision-making model called the "Model of Record." This model is based on an approach Intel uses in its highly regarded, world-class manufacturing environment. We benchmark each of our data centers against a best-achievable model, which allows us to address the gaps that deliver the greatest improvements in velocity, quality, efficiency, and capacity.

 

In this paper we also describe a new set of key performance indicators and a new unit-costing model that better assess and identify where to deploy new technologies to deliver the greatest return on investment. The 3 key goals we are striving toward are: 80% effective resource utilization across our environment, 10% annual improvement in cost efficiency, and meeting a tiered service-level agreement model.
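
As a sketch of what a unit-costing model can look like (the formula and figures here are hypothetical, not the model from the paper), dividing a site's fully loaded cost by the units of service it delivers makes sites comparable and highlights where investment pays off most:

  # Hypothetical unit-costing comparison across two data center sites.
  sites = {
      "Site A": {"annual_cost_usd": 4_000_000, "os_instances": 5_000},
      "Site B": {"annual_cost_usd": 2_500_000, "os_instances": 2_000},
  }

  for name, s in sites.items():
      unit_cost = s["annual_cost_usd"] / s["os_instances"]
      print(f"{name}: ${unit_cost:,.0f} per OS instance per year")
  # Site B's higher unit cost ($1,250 vs. $800) makes it the better
  # candidate for refresh or consolidation, all else being equal.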

 

Learn more about our new data center strategy in this white paper.

 

@ajayc47

Server virtualization is the first step to achieving the business benefits of cloud computing.  Virtualization enables real business value such as server consolidation, higher compute capacity utilization, data center space/power/cooling efficiencies, and operational agility.

 

However, security concerns are often cited as the #1 obstacle to deploying cloud capabilities in the enterprise. Virtualization aggregates risk when application components and services with varying risk profiles are consolidated onto a single physical server platform. The aggregation arises from the added potential for compromise of the hypervisor layer, which in turn could compromise all the shared physical resources the hypervisor controls, such as memory and data, as well as the other virtual machines on that server. Concerns about security initially prevented virtualization of several categories of applications, including Internet-facing applications used to communicate with customers and consumers.

 

Intel IT more than tripled the number of virtualized applications in our Office and Enterprise environments in 2010, from 12% to 42%, and we are currently over 50%. Achieving our goal of 75% required us to overcome the security challenges of virtualizing externally facing applications.

 

I am pleased to announce the release of Intel IT's first cloud security white paper, which describes how we neutralized these security risks to expand server virtualization into the DMZ, allowing us to further accrue the business benefits of virtualization. Through the comprehensive approaches described in this paper, we have been able to ensure that our virtualized environment is as secure as our physical environment. In fact, with some of the additional security measures deployed there, one could argue that our virtual environment is even more secure. Read this cloud security paper to learn more and decide for yourself.

 

@ajayc47

As I engage with my IT and non-IT peers alike, I'm often asked, "So, what does a private cloud really look like?"

 

Well, here are 2 new 3D representations that illustrate the current and near-future core foundation of Intel IT's private cloud:

 

[Figure: Current cloud architecture]

[Figure: Future cloud architecture]

5 Key Points to Consider:

1. Two-socket Intel® Xeon® processor-based servers are the foundation of our data center and private cloud because of their versatility and cost efficiency.

2. We have begun shifting from rack-mount to blade servers within our private cloud to enable a converged network fabric while reducing hardware TCO by about 27 percent.

3. To continue to improve overall system performance, we have invested in IA-based storage and networking solutions, including 10GbE, SSDs, and Intel® Xeon® processor-based storage solutions.

4. Our cloud foundation clusters comprise up to 16 physical hosts per pod with a blended 15:1 virtual-to-physical consolidation ratio (a quick worked example follows this list).

5. Proactively refreshing with the latest Intel® Xeon® processor-based servers will help us further drive up utilization toward our goal of 80% through multi-tenancy while maintaining QoS.
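
To put point 4 in perspective, here is a quick back-of-the-envelope calculation (pod size and consolidation ratio come from the list above; the rest is arithmetic):

  # Capacity of one cloud foundation pod, using the figures above.
  hosts_per_pod = 16          # physical hosts per pod
  consolidation_ratio = 15    # blended virtual-to-physical ratio

  vms_per_pod = hosts_per_pod * consolidation_ratio
  print(f"One pod supports roughly {vms_per_pod} virtual machines")   # ~240
  print(f"That replaces up to {vms_per_pod} physical servers with {hosts_per_pod}")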

 

 

@ajayc47

Intel IT began our cloud roadmap implementation back in 2006, starting with our grid computing environment in Silicon Design. Since then we have expanded that early roll-out to our Office and Enterprise environments. As we plan forward, we anticipate new bottlenecks in overall system performance shifting away from compute to storage and networking I/O. This is mostly driven by the increased capabilities of the latest-generation Xeon processors, which have enabled more, denser VMs.

 

My observation from talking to peers who manage similar IT enterprises is that they tend to focus on capacity management in terms of CPU, memory, and storage capacity without adequate focus on I/O capacity. Properly monitoring and controlling network and storage I/O bottlenecks is paramount to maintaining overall system performance.
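
As a minimal illustration of watching I/O alongside CPU and memory, here is a sketch using the third-party psutil library (the interval is arbitrary, and real cloud monitoring would feed a management system rather than print):

  import time
  import psutil  # third-party: pip install psutil

  INTERVAL_S = 5

  disk_before = psutil.disk_io_counters()
  net_before = psutil.net_io_counters()
  time.sleep(INTERVAL_S)
  disk_after = psutil.disk_io_counters()
  net_after = psutil.net_io_counters()

  disk_mb_s = (disk_after.read_bytes + disk_after.write_bytes
               - disk_before.read_bytes - disk_before.write_bytes) / INTERVAL_S / 1e6
  net_mb_s = (net_after.bytes_sent + net_after.bytes_recv
              - net_before.bytes_sent - net_before.bytes_recv) / INTERVAL_S / 1e6

  print(f"Disk I/O: {disk_mb_s:.1f} MB/s, network I/O: {net_mb_s:.1f} MB/s")

Tracking these rates per host over time, against the known limits of the storage array and network fabric, is what surfaces an I/O bottleneck before it shows up as poor VM performance.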

 

Check out this cloud podcast interview that addresses the technologies Intel IT has on its cloud roadmap.

 

[Figure: Intel IT cloud roadmap]

 

Ajay

Cloud computing is real inside Intel IT. In fact, it is one of our top 3 priorities this year (the other two are "consumerization" and security). At Intel, our private cloud infrastructure-as-a-service implementation is important because it means we can deliver a highly available computing environment that provides secure services and data on demand to authenticated users and devices from a shared, elastic, and multi-tenant infrastructure. Discover the top 5 business drivers that led to the implementation of our private cloud strategy in my recent blog post on Data Center Knowledge.

Virtualization of our infrastructure is a vital part of our eight-year data center strategy, which is on track to create $650M of business value for Intel by 2014. Currently 53% of our Office and Enterprise environment is virtualized. We are accelerating our virtualization efforts as we continue to build out our enterprise private cloud architecture for higher efficiency and agility. We tripled our rate of virtualization during 2010 and have set an internal goal of 75% virtualized.

 

Along the way we realized that not all cores are created equal, and the differences are perceptible in virtualization. Here's why higher-performing cores matter:

  1. For demanding and business-critical VM workloads, where throughput and responsiveness matter
  2. For dynamic VMs whose compute demands fluctuate throughout the day and could exceed the headroom of a lesser-performing core
  3. To reduce software licensing costs, as some vendors charge on a per-core basis and higher-performing cores require fewer software instances
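
A quick hypothetical example of the licensing point (license price, core counts, and per-core throughput are all invented for illustration):

  # Per-core licensing cost for the same total throughput on two core types.
  license_per_core_usd = 2_000
  workload_units = 960  # arbitrary measure of required throughput

  for label, units_per_core in [("lower-performing", 10), ("higher-performing", 16)]:
      cores = -(-workload_units // units_per_core)  # ceiling division
      print(f"{label}: {cores} cores -> ${cores * license_per_core_usd:,} in licenses")

In this toy case the higher-performing cores cover the same workload with 60 cores instead of 96, cutting the per-core license bill by more than a third.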

 

This video illustrates nicely why server core performance matters in virtualization.

 

Ajay

In this IT@Intel Executive Insights on Intel IT's cloud computing strategy, we share how cloud computing is affecting our overall data center strategy. As for many of our enterprise IT peers, cloud computing is a key area of innovation for Intel and one of Intel IT's top 3 objectives this year. Cloud computing is changing the way we inside Intel IT look at our architecture: from the client technology our employees will use to access business services and data, to the data center infrastructure necessary to support those services. We are adopting cloud computing as part of our data center strategy to provide cost-effective, highly agile back-end services that boost both employee and business productivity.

 

The IT@Intel Executive Insights provide a short two-page summary of the insights and steps taken by the Intel IT team on core topics.

  • Page 1 provides a short overview of the topic and a summary of the key considerations that the Intel IT organization used while developing our strategy.
  • Page 2 provides summaries and links to supporting content for our IT peers to learn more from Intel IT generated white papers, videos, blogs and more.

 

Ajay

Prior to 1998, Intel used RISC systems in several of our mission-critical environments.  As part of a multiyear strategy, Intel IT began migrating Intel’s most mission-critical computing environments from proprietary systems to Intel® architecture. These environments included silicon design, manufacturing, and global enterprise resource planning (ERP).  Intel’s business results depend on these environments and so we approached the migrations with careful planning.

 

We ultimately chose Intel® Itanium® processor-based solutions for 24x7 reliability, large scale-up databases, and HP-UX-based application stacks. Intel® Xeon® processor-based solutions were chosen for highly reliable, scale-out applications running on industry-standard OS-based application stacks.

 

Migrating our infrastructure from expensive proprietary architectures to Intel architecture solutions was a strategy we used within the Intel IT data center environment to improve infrastructure efficiency, solution performance, and business agility. The migration from RISC to IA has allowed us to create business value by reducing total cost of ownership and improving performance while maintaining the reliability, availability, and scalability (RAS) requirements for these environments. Some key milestones during our migration journey include:

  • All new RISC investment ended in 2004
  • Intel IT successfully implemented a decentralized ERP environment that is based on Intel Xeon processor-based servers and supports more than 10,000 active users, after moving off a previously scale-up RISC-based infrastructure
  • Intel's Manufacturing Execution Systems, which track all material, routes, and production steps for our manufacturing environments, were migrated from RISC (VAX/Alpha) to Xeon (application servers) and Itanium (database servers)
  • Our Yield Analysis systems in Fab/Sort were migrated to Itanium-based servers running HP-UX
  • Factory Scheduling has been moved from a scale-up SPARC-based solution to a scale-out Xeon-based solution running an industry-standard OS
  • Manufacturing Assembly Test is currently being moved from Alpha to a combined solution architecture of Xeon (application stack) and Itanium (database)

 

As a result of our meticulous planning process and the best practices we developed along the way, the migrations turned out to be less difficult, and the benefits greater, than we originally anticipated. Some of the more significant benefits of the RISC migration to Intel's business can be summarized as follows:

  • Saved $1.4B in capital spending from 1998-2004
  • Data center density improved 2X to 5X, and overall we eliminated 20,000 sq ft of data center footprint
  • Reduced server and solution acquisition costs in both hardware and software
  • 2X average performance improvement of IA over RISC

 

Check out this Mission-Critical paper that documents and shares Intel IT's RISC migration best practices and how we ultimately achieved significant business value with little or no interruption to our core business results.

 

Ajay

Intel IT manages 91 data centers that are home to about 75,000 servers supporting our core Silicon Design, Manufacturing, and Office/Enterprise/Services business functions. In 1998 we embarked on a multi-year strategy to migrate our RISC systems to Intel architecture, which saved Intel over $1.4B. I will be publishing a white paper on this topic in the next few days. In 2006 we began the next major phase of our data center strategy by implementing proactive server refresh, consolidating data centers, and investing in HPC. These actions have put us on track to deliver $650M of value for Intel's business by 2014. All of this was accomplished while our compute, storage, and networking demands are growing rapidly each year (45%, 35%, and 53% YoY, respectively).

Through a variety of proactive investments and innovations, including the adoption of the latest generation of Intel® Xeon® processors and the deployment of advanced storage, networking, and facilities innovations, we have increased the performance of our data centers by 2.5x while reducing our capital investments by 65% over the last four years, even as the number of data centers was reduced from 150 to 91. In addition, we have reduced the number of servers from 100k to 75k, primarily driven by: (1) our proactive 4-year server refresh strategy, (2) accelerating the use and deployment of virtualization as we build an enterprise private cloud computing environment, and (3) targeted investments in manufacturing infrastructure aimed at improving factory automation and efficiencies.

While these investments have delivered tremendous IT efficiency and scale, we have realized that horizontal component optimization of servers, network, storage, and applications by itself is not enough. We saw both opportunity and necessity in customizing solutions for the lines of business we support.

We group our data center infrastructure environment into five verticals representing our main computing solution areas (referred to as DOMES): Design, Office, Manufacturing, Enterprise, and Services. Silicon Design computing requires the most servers -- about 70% -- with the remaining 30% of our servers supporting Office, Enterprise, Manufacturing, and Services applications.

We deploy unique solutions for each of these areas to deliver unique business value to Intel:

  • Design: Large, shared distributed grid computing solution complemented with the use and deployment of High Performance Computing solutions to support quicker design and validation of increasingly complex and powerful Intel® processors and technologies
  • Office/Enterprise: In transition from a traditional enterprise infrastructure to an enterprise private cloud, built on a highly virtualized infrastructure with on-demand self-service capability, improving cost efficiency and business agility.
  • Manufacturing: Deployment of dedicated, highly reliable Data Center infrastructure for Intel’s manufacturing facilities with a focus on IT innovation that improves factory automation and supply chain planning efficiency.
  • Services: A new approach started in 2009 to support Intel’s new external service businesses such as the online Intel AppUpSM center for netbook applications.

We optimized our overall server storage strategy around the key metrics of reliability, availability, performance, and scalability. Achieving results in these areas increases efficiency and reduces capital and operational costs. We are challenged with managing 25 petabytes (PB) of primary and backup storage in our design computing, office, and enterprise environments, growing at an average of 35% YoY. A variety of factors will stimulate future growth, such as the increasing complexity of silicon designs needed to continue delivering on the promise of Moore's Law, the growth of enterprise transactions, cross-site data sharing and collaboration, regulatory and legal compliance, and the ongoing need for retention.
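
To see what 35% year-over-year growth implies, here is a simple compound-growth projection from the 25 PB figure above (the five-year horizon is arbitrary):

  # Compound growth: capacity after each year at 35% YoY, starting from 25 PB.
  capacity_pb = 25.0
  growth = 0.35

  for year in range(1, 6):
      capacity_pb *= 1 + growth
      print(f"Year {year}: {capacity_pb:.1f} PB")
  # Year 5: ~112 PB -- more than 4x the starting point, which is why
  # capacity cannot simply be purchased linearly with demand.

This compounding is what drives the thin provisioning, tiering, and utilization work described earlier.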

 

Our storage landscape is mapped to multiple computing areas: silicon design, office, manufacturing, and enterprise. We choose storage and backup and recovery solutions based on the application use models for these respective areas:

 

  • Design computing:  Our silicon design computing primarily relies on network attached storage (NAS) for file-based data sharing. In addition to NAS, we use parallel storage for our high-performance computing (HPC) needs. We have more than 8 PB of NAS storage capacity and 1 PB of parallel storage in our design computing environment. We use slightly less than 1 PB of storage area network (SAN) storage in design computing, primarily to serve database and post-silicon validation needs.
  • Office, Enterprise, and Manufacturing:  We rely primarily on SAN storage for block-level data sharing, with more than 8 PB of capacity. Limited NAS storage is used for file-based data sharing. For both NAS and SAN, storage is served in a three-level tier model (Tiers 1, 2, and 3) based on required performance, reliability, availability, and cost of various solutions offered in respective areas.
  • Backup, archive, and restore:  These are major operations in data management. We use both disk and tape for our backups. Tape is used for archive functions to facilitate long-term offsite data storage for disaster recovery. The tapes remain offline, which saves significant energy and offers a cost-effective solution. Our disk-based backups serve specific needs whenever faster backup and recovery are required. Our virtual tape library provides disk-to-disk backup for faster backup and recovery needs, especially in the office and enterprise computing areas.

 

I'd love to hear your server storage challenges and how you are addressing them.

 

Ajay

A: Intel

 

Because

~ 600 phones require 1 server

and

~ 122 tablets require 1 server

 

Find out more by watching this video about the new Intel Xeon E7 processor.
