I am very pleased to announce two new compelling IT@Intel white papers that further demonstrate Intel IT's industry leadership in cloud computing, virtualization, and data center network architecture.

 

Virtualizing Mission-Critical Applications

In 2010, Intel IT more than tripled the proportion of virtualized servers in our Office and Enterprise environment, from 12% to 42%.  To create the infrastructure for a private enterprise cloud, Intel IT has a goal of virtualizing up to 75 percent of our office and enterprise computing environments.  To achieve this, Intel IT will need to virtualize mission-critical applications, which is challenging due to rigorous performance, availability, and other requirements.  Learn how Intel IT is overcoming these challenges and how we plan to virtualize the first of many mission-critical applications in 2011 as part of our cloud environment.

 

Upgrading Data Center Network Architecture to 10 Gigabit Ethernet

Data center trends like server virtualization and consolidation, along with rapid growth in design computing, have put a strain on Intel’s network.  While high-performance Intel processors and clustering technologies are rapidly improving server performance, the network has become the limiting factor in supporting faster throughput.  Learn how Intel IT upgraded our data center network architecture to 10GbE to accommodate these increasing demands.

 

I'd love to hear from you on what you think of the papers or if you have any questions.

 

Ajay

24 months of Intel SSDs…. What we’ve learned about MLC in the enterprise…

 

The Enterprise Integration Center (EIC) private cloud lab (a joint Intel IT and Intel Architecture Group program) has been working with Intel SSDs (solid-state disks) for the last two years in a number of configurations, ranging from individual boot/swap volumes for servers to ultra-performance, software-based iSCSI mini-SANs. So, what have we learned about performance, tuning, and use cases?

 

There are plenty of industry resources and comparisons available at any number of trusted review sites, but most of these revolve around client usage, not server/datacenter uses. From my contact with industry, most engineers seem to think that using an SSD in the datacenter requires an SLC NAND device (Single-Level Cell; the Intel X25-E product) due to endurance requirements. For those new to NAND characteristics, endurance (usable lifetime) is determined by writes to the NAND device, as block-erase cycles stress and degrade the ability of the flash cells to be read back. Basically, SLC devices last through more block-erase cycles than their less expensive and larger capacity MLC cousins (Multi-Level Cell; the Intel X25-M product). The assumption that ‘only SLC will do’ for the enterprise raises the $/GB cost flag and mires discussion. Endurance is the number one “but those won’t work for my use-case” argument.

 

The EIC cloud lab has some good news here: lower cost MLC, or consumer grade, devices can do just as well, especially in RAID arrays. To get the best out of these MLC devices, though, we have to employ a few techniques that allow the drive and its components to function more efficiently. These techniques manipulate the three vectors in MLC (space, speed, and endurance) by altering the usable size of the disk.

 

Assume I have a 160 GB X25-M MLC drive; this device is spec’ed at 250 MB/s read and 100 MB/s write (sequential) and has a lifetime of around 4-5 years in a ‘consumer’ use case (laptop/desktop). If I were to use this same device as a repository for a database transaction log (lots of writes), the lifetime would shorten significantly (maybe to as little as a year). There are specific formulas to determine endurance and speed, some of which are unavailable to the public, but Principal Engineer Tony Roug wraps up the case for MLC in the enterprise quite well in his presentation from the Fall 2010 Storage and Networking World.
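To make the scaling concrete, here is a back-of-the-envelope sketch in Python (my own illustration, not one of the non-public formulas mentioned above). The baseline write volume, the transaction-log write volume, and the write-amplification figure are assumptions for illustration only.

    # A minimal sketch of the relationship described above: lifetime is roughly
    # inversely proportional to the volume of writes the workload generates.
    # All constants are illustrative assumptions, not Intel specifications.

    BASELINE_YEARS = 4.5        # the post's ballpark for a 'consumer' use case
    BASELINE_GB_PER_DAY = 20.0  # assumed consumer write volume

    def estimated_lifetime_years(gb_written_per_day, write_amplification=1.0):
        """Scale the baseline lifetime by relative write volume."""
        effective_writes = gb_written_per_day * write_amplification
        return BASELINE_YEARS * BASELINE_GB_PER_DAY / effective_writes

    # A busy database transaction log writes far more data, and its small
    # random writes raise write amplification, so the lifetime collapses:
    print(estimated_lifetime_years(60, write_amplification=1.5))  # ~1 year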

 

Back to the trade-offs (space, speed, and endurance): my 160 GB MLC drive won’t work for my database transaction log because the workload is too write intensive… What I can do about this is take the 160 GB drive and modify it to use only 75% (120 GB) of the available capacity. Reducing the ‘user’ available space gives the wear-leveling algorithm in the drive more working room and increases both the speed (write speed, as reads are unaffected by this) and the endurance of the drive, but it also increases the $/GB as you have less available space.
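For the arithmetic-minded, a quick sketch of that trade-off follows (my own illustration of the convention, not a vendor formula). One wrinkle worth noting: spare area is conventionally expressed relative to the user-visible capacity, so hiding 25% of the physical space actually yields about 33% effective over-provisioning.

    # Over-provisioning arithmetic for the 160 GB example above.

    PHYSICAL_GB = 160
    USER_GB = 120  # only 75% of the physical capacity exposed to the OS

    spare_gb = PHYSICAL_GB - USER_GB
    op_ratio = spare_gb / USER_GB         # spare relative to user space
    cost_penalty = PHYSICAL_GB / USER_GB  # you pay for 160 GB, use 120 GB

    print("Spare area: %d GB (%.0f%% over-provisioning)" % (spare_gb, op_ratio * 100))
    print("$/GB multiplier vs. full capacity: %.2fx" % cost_penalty)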

 

With the ‘user’ space reduced to 120 GB (‘over-provisioned’ is the official term), that same 160 GB drive is now capable of 250 MB/s read and 125 MB/s write (sequential) and has a lifetime of 8-10 years in the ‘consumer’ use case. Not terribly appealing to the average end-user who just spent $350 on an SSD, as they lost 25% of the capacity, but in the performance and enterprise space this is huge. Once modified, my ‘consumer grade’ MLC drive gets roughly 75-80% of the speed and endurance of the X25-E SLC drive with 4x the space at about the same ‘unit cost’ per drive. Since the drive is 4x larger than the SLC part, will likely last as long as a standard hard disk once over-provisioned, has great throughput at 125-250 MB/s, and can reach 100-400x the IO operations of a standard hard drive, we can now begin the discussion around which particular enterprise applications benefit from Intel MLC SSDs.

 

For the enterprise, once we overcome the endurance hurdle, the value discussion can begin. For the performance enthusiast at home, this same technique allows a boost in disk write throughput, higher benchmark scores, and of course more FPS (frames per second) in whatever game they are thoroughly stressing their over-clocked water-cooled super-system with at the moment.

 

BKMs (Best Known Methods) for enterprise and use-case evaluation… AKA: The technical bits…

 

  • Get to know the IO characterization (reads/writes) of the target application and use case
  • Baseline the application before any SSD upgrades with standard disks, collecting throughput and utilization metrics (see the sketch after this list)
  • Knock a maximum of 25% off the top of any MLC drive you’re using in the datacenter
    • More than 25% has diminishing value
    • Use either an LBA tool, a RAID controller, or a partitioning tool after a fresh low-level format
    • That % can be smaller based on the write intensity of the target application - fewer writes = a smaller % off the top, on a case-by-case basis
  • SAS/SATA RAID controller settings
    • Activate the on-drive cache - OK to do on SSDs
    • Use a stripe size of 256 KB if possible to match the erase-block size of the drive
    • The read/write on-controller DRAM cache should be on and battery backed
  • Make sure any drive-to-controller channel relationship in SAS controllers stays at 1:1
    • Avoids reducing drive speed from 3.0 Gbps to 1.5 Gbps
  • Avoid using SATA drives behind SAS expanders
    • Again… avoids reducing drive speed from 3.0 Gbps to 1.5 Gbps
  • SSDs are 5V devices, so make sure the 5V rail in the power supplies has a high enough rating to handle the power-on of X number of SSDs
    • Only necessary if you’re putting 16+ drives in any particular chassis
  • Baseline the application after the SSD upgrade, collecting throughput and utilization metrics, to determine the performance increase
    • Look for higher IOPS and application throughput, but also look for higher CPU utilization numbers now that you have eliminated the disk bottleneck from your system
    • There will likely be a new bottleneck in another component such as network, memory, etc… look for that as a target for your next improvement
  • Last but not least, when testing an application you’ll need to ‘season’ your SSDs for a while before you see completely consistent results
    • For benchmarks, fill the drive completely twice and then run the target test 2-3 times before taking final measurements
    • For applications, run the app for a few days to a week before taking final performance measurements
    • Remember, a freshly low-level-formatted SSD doesn’t have to perform a block-erase cycle before writing, so it will look faster than it will at steady state
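As a starting point for the two baselining BKMs above, here is a minimal Linux-only sketch (an illustration, not a tool we ship) that samples /proc/diskstats twice and reports IOPS and throughput for one device. Production baselining would layer iostat, fio, or your platform's equivalent on top of application-level metrics.

    # Sample /proc/diskstats twice and report IOPS and MB/s for one device.
    # Field layout follows the kernel's Documentation/iostats.txt; sector
    # counts are in 512-byte units. Run while the target application is loaded.

    import time

    def snapshot(device):
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    return (int(fields[3]), int(fields[5]),   # reads, sectors read
                            int(fields[7]), int(fields[9]))   # writes, sectors written
        raise ValueError("device %r not found" % device)

    def baseline(device, seconds=10.0):
        r0, sr0, w0, sw0 = snapshot(device)
        time.sleep(seconds)
        r1, sr1, w1, sw1 = snapshot(device)
        print("read IOPS: %8.1f   read MB/s: %6.1f" %
              ((r1 - r0) / seconds, (sr1 - sr0) * 512 / seconds / 1e6))
        print("write IOPS: %7.1f   write MB/s: %5.1f" %
              ((w1 - w0) / seconds, (sw1 - sw0) * 512 / seconds / 1e6))

    baseline("sda")  # device name is an example; pick your data disk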

 

Well, that’s it in a fairly large nutshell… We see the use of MLC disks in enterprise use cases growing now that the underlying techniques for increasing endurance are better understood. In addition, as Intel’s product lines and individual device capacities expand… so can the enterprise use cases of these amazing solid-state disks. The question left to answer is, “In your datacenter, are there applications and end-users you can accelerate using lower cost MLC-based Intel SSDs?”

 

- Chris


Got RSS?

Posted by ssrini4 Jan 28, 2011

Well, we did, just last week. With the implementation of our Enterprise RSS solution, we have provided all Intel employees with a web-based reader for RSS content, which will soon be integrated with the intranet portal. Cool features include the ability to subscribe to content from unspecified sources based on keywords, the ability to share feeds with team members (no more forwarding hyperlinks over email!), and a really easy way to tag and categorize by topics.
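To give a flavor of the keyword-based subscription idea, here is a toy sketch using the public feedparser library. The feed URL and keywords are hypothetical, and this is in no way our enterprise RSS product - just the concept in miniature.

    # Scan a set of RSS/Atom feeds and keep only entries that mention
    # watched keywords -- a toy version of keyword-based subscriptions.

    import feedparser

    FEEDS = ["https://example.com/it-news.rss"]   # hypothetical feed URL
    KEYWORDS = {"virtualization", "cloud", "ssd"}

    def matching_entries(feed_urls, keywords):
        """Yield entries whose title or summary mentions any watched keyword."""
        for url in feed_urls:
            for entry in feedparser.parse(url).entries:
                text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
                if any(kw in text for kw in keywords):
                    yield entry

    for entry in matching_entries(FEEDS, KEYWORDS):
        print(entry.get("title"), "->", entry.get("link"))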

 

To be honest, the decision to implement this solution wasn't easy. Instinctively, having an enterprise RSS capability (vs. standalone tools, or using email) felt like a good idea, because it promised to extend our social computing platform to the area of information sharing. Also, it offered the opportunity to reduce information overload -- with an enterprise RSS capability, employees will no longer be bombarded with long newsletters covering every topic under the sun. They can pick and choose what they want to read, without having to sift through a pile. However, an environmental scan revealed that there were more naysayers than advocates. (Just search for "R.I.P Enterprise RSS" to see what I mean). There were very few success stories and even fewer case studies of successful implementations in large corporations such as ours. Why then did we go for it?

 

Short answer - we believe that the single biggest barrier to adoption is the lack of a holistic corporate communication strategy. Few intranets are content-rich; fewer still are RSS-friendly. Employee communications, in most organizations, still happen through one-way downloads using legacy tools and methods -- lengthy newsletters, email blasts and crowded intranet homepages. An enterprise RSS reader offers a paradigm shift in this area. It offers the ability to publish content without readers even being aware that the information they are viewing comes from different back-end feeds. Readers can apply their own filters and pull only the content they are interested in. The value is immense, but it calls for big changes in the way corporations communicate. Sadly, we didn't see this recognized by any of the analysts who had written this product category off.

 

So we decided to take the plunge (remember, Risk-taking is a cherished Intel Value!), and went live with the solution last week. There is a lot of work to be done in re-designing our communication model. The good news is that our corporate communication team is among the strongest believers in this capability - they get it, and are eager to put the technology to good use. If we succeed, we will have cracked the solution to the problem of information overload - so watch this space for a white paper. If we don't, we will still share what we learn -- and keep looking for the right answer :-)


Rescuing a Troubled Project

Posted by jghmesa Jan 18, 2011

The holidays are over, and I’ve finished my immediate tasks of closing out last year’s program results and setting up processes and establishing metrics for the New Year.  Time to start blogging again!  For my first blog: this is the time when many programs and projects are reviewed, formally or informally, for last year’s results.  It might also be the time that a program and/or project is formally considered ‘in trouble’ or ‘failed’.  Perhaps I can offer advice on ‘rescuing a troubled project’.

 

No one is impervious to having a troubled project. Any project can fail. Even the most seasoned and skilled project manager may, at one time or another, find themselves at the helm of a troubled project.  Having a project in trouble does not necessarily signal that the Project Manager is doing a poor job. Projects can go off course for a variety of reasons; some reasons are outside the span of control of the Project Manager. Let’s consider together the common causes of project failure and some prudent steps to get a project back on course.

 

If you poll a group of seasoned project professionals with the question, “What are the chief causes of Troubled Projects?” you are likely to receive a variety of responses, though quite possibly there will be some commonly attributed causes. At the macro level, projects generally fall into trouble for one or more of three reasons:

1) Poor Planning

2) Misaligned Expectations

3) Ineffective Risk Management.

 

Let’s elaborate on each of these points:

Poor Planning: Planning is a foundation of project management. Planning is not limited to the development of the “Project Plan”. Having a well-defined Project Plan, with realistic estimates and work packages covering each activity necessary to achieve the project objectives, does not inoculate a project from falling into trouble.  Proper planning includes identifying all project stakeholders, understanding their attitudes, influence levels, and communication needs, and ensuring the plan covers these needs. Additionally, proper planning for your project should include defining, gathering and properly documenting all of the project requirements. Vague or open-ended project requirements are a recipe for trouble in most situations, unless your organization has mature processes or uses time boxing for requirements, such as in Agile. Failure to capture all requirements and gain absolute clarity on them can lead to too much change during Execution and potentially derail the project. It is not good behavior to debate with your key stakeholders what the requirements “really meant” after the work has been performed.

 

For an example of aligning attitudes and expectations to avoid your project getting into trouble, imagine you work in a functional organization, and you know that project priorities and your Project Manager authority over team members are low. A functional manager assigns “his top person” to fill a key project role. Whilst this may “sound OK”, remember the type of organization you work in: this “top person” is likely to have objectives that compete with the project, and it is probable that high-priority functional tasks will take priority over tasks for your project. So, instead of ignoring this risk or hoping for the best, set up the resourcing to succeed by planning the work appropriately. This is one of many planning elements that can cause a project to veer into trouble.

 

Misaligned Expectations: A stakeholder’s expectations often change through a project’s life. Indeed, stakeholders themselves, including the sponsor, often change. Most project teams capture their stakeholders’ expectations at the start, and devise means of prioritizing and deciding when conflicting expectations exist. However, do you continue to pay attention to changing needs and changing stakeholders? Projects that fail to identify and respond to stakeholder changes (e.g. when new people come on board, and/or the organization needs to change direction) are prone to sway into trouble.

Have you worked on or known of a project where key stakeholders have suggested changes very late in the project (what could be called “constructive feedback” on what has been built)? Late changes, or the potential for them, can signal trouble quite quickly. A project should have a natural cycle that allows stakeholders’ “constructive feedback” and input into the requirements early, and tapers off as the project progresses through Execution. If you have properly planned, managed and captured stakeholder expectations, and have good communications in place, the level of “feedback for changes” should be minimal and controllable.

 

Ineffective Risk Management: Risk management should underpin all project activities. Remember that risks can be positive (opportunities) as well as negative; however, there is no such thing as “positive trouble”. All trouble is bad. Risk management is not just about maintaining a Risk Register. It is about considering all risks, and devising ways - as a team - to categorize risks, respond to them, agree on these responses and put actions into place to track them. Risks relate to all aspects of projects - schedule, budget, safety, quality and everything else. Ineffective risk management comes about when the project fails to carry out these activities properly. Trouble on projects can arise from the “unknown unknowns”. Therefore, management and contingency reserve planning should be included in your risk response planning.

 

Having highlighted some of the things that can cause a project to be ‘in trouble’, what steps can a project manager take to steer a project back on course if in this position?  Depending on the type of organization you work in, and the authority granted to you, the exact tasks will vary.

 

Below are a few “corrective actions” that can span most types of organizations:

  1. Early detection. First, try to prevent it from straying into trouble. Projects do not normally fall immediately into trouble; they “take a path towards it”. Having a system and routines in place to provide early detection is the key to limiting the impact when projects begin to display telltale signs of trouble. A project manager must be willing to “sound the alarm bell” and know that they have the support of the project’s key stakeholders to implement early corrective actions. However, many factors can prevent such early warning signs being recognized or heeded, which may be the subject of a future article.
  2. Accept responsibility. The Project Manager and others must accept the responsibility for the project being off course (within their extent to control it). The Project Manager must also take responsibility for getting the project back on track – with the help of the right stakeholders. If the Project Manager cannot do this, management needs to work out how to help the Project Manager overcome the problems, perhaps with the help of a Risk Response Team that works alongside the main project team.
  3. Be Flexible and open to feedback. Every project has a unique set of stakeholders and project team members. What may have worked well for you in previous projects may not work best for your current project. Be willing to solicit feedback from your team and adapt the workings of your project as needed.
  4. Be willing to re-contract or re-baseline. This is especially true if expectations have been missed. Consider the steps and processes used to identify, prioritize and agree on a collective set of project expectations. If needed, conduct a thorough review and be willing to go back to “square one” and revisit the business case for the project; ask “does it still align to strategic objectives?” and “is the project still worth undertaking?” Expectations change and stakeholders change. Be willing to review expectations in your stakeholder routines and embrace changes via change controls if needed.

In conclusion, only a few aspects of troubled projects have been covered in this blog. If you work on many projects in your career, it is likely that you will be, or have been, involved in a poorly performing project at some point. Key to limiting the damage is knowing how to spot the signs and to “stop the rot” early if you can. If it does happen to you, try stepping back and looking for the root causes of the problem (knowing that this can take time to do), don’t fall prey to rash reactions, and determine solid ways to address the problem proactively. Denial can be a powerful force preventing you from acting.  Keep close communication with your project stakeholders, be open about things, and if you have to implement a mitigation plan, make sure you keep track of actions, and as positive progress starts to occur, let them know how things are shaping up - hopefully for the better.  ...JGH

Three things happened in 2010 that nudged Intel’s ‘Social Computing’ program to reposition itself as an ‘Enterprise 2.0’ program.

 

First – the adoption of our social computing tools grew across corporate use cases in an unprecedented way. While employees’ personal blogs and special interest communities still continue to be very popular, and rightfully so, we also saw some new kids on the block make their way to the ‘Top 10’ leader board. These were programs that needed to assemble stakeholders and individuals across business units and physical locations for a many-way dialog, and chose blogs and forums over traditional web tools for their collaboration needs. The total number of forums doubled through the course of the year, bringing the social computing platform into prime time.

 

Second – We made our first inroads into enabling secure collaboration by implementing data encryption, and a tighter permissions model. A secure wiki solution was piloted late last year, which enabled our design engineers to use wikis for restricted content, in a compliant manner. This solution is poised to scale to all enterprise users in 2011.

 

Finally – we got ratification for our strategy to embed ‘social’ capabilities into line-of-business applications, starting with our intranet portal. When this kicks in a few weeks from now, our users will have a seamless experience in accessing information from varied sources without having to navigate to different sites.


With these key shifts, it became imperative to consider some policy changes required to improve workforce productivity by enabling easy access to the right information  as well as to the right experts within the corporation. One such change within Intel was brought about by the decision to enable communities to certify and rate content, as well as recognize content contributors for their expertise. The vision is to enable a self-regulated community that helps great ideas to bubble up from across the corporation - without shackles of process, hierarchy or other organizational filters that typically impede free exchange of thought.

 

In his acclaimed book 'The Long Tail', author Chris Anderson comments on the phenomenon of 'a large number of individuals having a huge impact on culture and results' (versus top-down impact of a few big players), which is largely characteristic of the internet economy.  In the organizational context, the 'long tail' fosters cognitive diversity -- with many eyeballs on a problem, companies can tap into a wide range of perspectives and break away from the beaten path of operating in a certain established way. Enterprise 2.0 technologies offer the right platform for such collaborative innovation - and I am excited about our opportunity to catalyze a groundswell.

 

I am also very eager to learn how E2.0 technology adoption is picking up momentum in your organization. What are some of the trends? Is this technology making the transition from 'nice to have' to 'business important'? Do share your perspective.

 

Footnote: Sudha Srinivasan is the Engineering Manager for E2.0 capabilities at Intel. Her team, which is part of Intel's IT division, is responsible for providing our employees and teams across the world with solutions for internal blogging, professional networking, wiki, RSS, self-serve video, etc. Sudha has been with Intel for 7 years, and is based in Bangalore, India.

I have been with Intel many years, and beyond the fact that they let me do what I love (information security strategy), Intel is a company with an incredible social and humanitarian conscience.

 

In early 2010, shortly after the devastating earthquake in Haiti, I volunteered to help with Intel’s efforts to assist Non-Governmental Organizations (NGOs) like the Red Cross, World Vision, NetHope and others who rallied humanitarian aid to the people of Haiti.  Intel, as seems to be normal practice, began fundraising and matching employee contributions to various NGOs.  Worldwide, Intel employees donated tremendous amounts of money to the cause.

 

But the Intel story does not end there.   Unbeknownst to most, Intel has another tradition when it comes to disasters.  With a world-class Information Technology team, Intel also contributes behind the scenes.  We work with NGOs to develop and deploy technology systems which can expedite aid, empower agents in the field, and create a force-multiplier effect to improve disaster response services.

 

Specifically for Haiti, Intel worked to prepare and deploy donated laptops to volunteers rushing to Port-au-Prince.  When the United Nations asked World Vision to lead the Emergency Telecommunications NGO community in Haiti, World Vision proposed a unified NGO datacenter at the epicenter, and immediately reached out to Intel for professional guidance and support.  The Intel team worked to develop and donate a virtualized data center server cluster with the latest Intel chips, providing the capability to consolidate traditional servers at a 20:1 ratio, thus greatly reducing power, cooling, and space requirements, which were all in short supply after the earthquake.  Intel employees volunteered their expertise in a race against time to create viable solutions.  NGOs were shocked at the speed, level of contribution, and resulting capabilities.

 

It is not just Intel who contributes in non-traditional ways.  Our partners across the technology industry also stepped up and worked together.  Hardware, OS, and software companies collaborated in unprecedented ways with one unified goal: helping victims in Haiti. 

 

Recently, Intel was recognized by World Vision for our efforts.  I was fortunate enough to accept a plaque of appreciation on behalf of the entire Intel team who worked relentlessly day and night to enable World Vision and other NGOs with technology solutions. (picture: Matthew Rosenquist - Intel Corp, Grace Davis - Intel Corp, and Lou August - World Vision)

 

Support for such humanitarian causes runs through the veins of the corporation, from top to bottom. It is just another reason I am proud to work at Intel as part of an employee community who are willing to toil tirelessly to help the world.

I work inside the Intel IT organization and my job is to share our key learnings with other IT professionals.  The purpose of this blog is to kick-off an information exchange between IT professionals.


Most people around the world understand that the best way to collect information that helps us be successful in our professional or personal lives is through information sharing (witness social media).


And the best source of this information often comes from our peers - the people who do the same jobs we do.  Why?  Because our peers are doing the same things we are, and they might know a better way to do something, or maybe they are willing to share a horror story about a mistake they made that can help us avoid repeating it. The best practices and the painful lessons learned in this information exchange are extremely valuable.


Have you ever bought something off Amazon* or any other on-line retailer?  Did you read the user ratings and comments - both the good and bad - of people who had already bought the product or service before making your decision?   I do.


Peer-to-peer information sharing is a best practice for IT professionals as well - and the implications of making a mistake are larger - you may not be able to return that investment later if you don't like it.


As IT professionals, we all share a common bond and purpose --> to use Information Technology to support, enable and grow our respective businesses.  The end game? For us, the IT organization at Intel, it's delivering the value of IT to Intel and creating a competitive advantage for our business.


To start our discussion, I want to share the 2010-2011 Intel IT Annual Performance Report (APR).  This is the 10th year the Intel IT organization has created this report, which describes the methods, techniques, investments, and strategies that we have already deployed, while also covering future trends, challenges and opportunities.


What you will find inside the Intel IT APR includes:

 

  • Letter and Video from Diane Bryant, Intel's CIO
  • Key IT initiatives and the impact they have on our business
  • Sections covering the hottest IT trends, including Consumerization, Cloud Computing and Enterprise Security
  • Projects that saved money and resources in 2010
  • Projects that improved productivity of our employees
  • Projects that have accelerated the speed, agility and efficiency of core business processes

 

There are several versions of the Intel IT Annual Performance Report available - take your pick

 

  • (Coming soon) Download Standard PDF - the regular, ready to print version great for reading cover to cover
  • Download Interactive PDF version - an easy to navigate version to bounce quickly from topic to topic  
  • (Coming soon) Interactive Tool - a rich media interactive experience to dive deeper and explore tools we use


While I encourage you to explore the resources above, what I’d like most is for you to join me and my fellow Intel IT professionals in some focused discussions in this community on the topics below.

 

Chris


Explore all things Intel IT at intel.com/IT

Malcolm Harkins, our CISO, recently published a blog, “Clear Focus on Risk Leads to Laptop Security”, discussing how the odds of having your laptop stolen or go missing are one in ten! This represents not only costs in terms of hardware; the cost of lost IP and confidential data is nearly incalculable. Seventy percent of companies in a recent study admit to doing nothing to protect their laptops and data. No encryption.  No back-up.  No antitheft technologies. Is that because they misperceive the risk? The misperception of risk can blindside you.

 

See how Intel IT approaches this idea in a presentation given by Malcolm, The Misperception of Risk, and in this blog by my colleague Chris Peters on the same topic: Information Security Best Practices from Intel IT. What do you think of this approach?  Malcolm and his team have driven down Intel’s number of wayward laptops to less than 1 percent, about 700 computers a year.  That’s 5 to 10 times fewer than any of the companies in the study.

One of the core roles of any IT organization is to support employees and help improve their productivity.   Improving employee productivity has two fundamental advantages for the business.  First, it gets assigned jobs done faster (a more efficient business) and frees up employees’ cycles, increasing their ability to do more (enabling business growth and innovation). Additionally, improving employee productivity enables a better, more satisfying work environment, improving employee retention (business stability), not to mention user satisfaction with IT.

 

Personally speaking, when I feel like technology and solutions limit my capabilities, I get frustrated ... and want to work less, not more.

When I feel enabled by technology, I get more work accomplished in less time and want to do more.

 

OK ... you get it ... productivity is good for everyone - the employee and the business.  However, the challenge for any IT organization is that "you can lead the horse to water, but you can't make the horse drink".  As IT rolls out new technology to employees, we can't force employees to use the new solutions, and we often don't have the time or resources to teach them how to use technology best.  I am extremely busy with my own day, often not able to consume the wide variety of new services coming from IT.  This is a classic product marketing problem: how does IT create awareness and accelerate adoption of new IT solutions inside your business?

 

Within the Intel IT organization, we have a team that is dedicated to helping educate Intel employees in their adoption and use of technology and services.  For the past five years, the Intel IT Products & Services team has published a bi-weekly, internal, corporate-wide newsletter (we call it "Digital Edge") focused on the technology and solutions available from Intel IT. Over time, readership of these newsletters has grown to 70% - demonstrating clear and sustained value in our approach.

 

The Intel IT Products and Services team creates and publishes these articles for Intel employees to educate them on a variety of information technology subjects. Our goal is to help employees improve productivity, take advantage of new IT services and raise awareness on other IT topics of interest. Over the past year, we have had requests to share this IT best practice with other IT organizations, so let me introduce to you IT@Intel Technology Tips.

 

Here are the first few you can start looking at:

 

 

We will continue to publish these Tech Tips from Intel IT on www.intel.com/IT

 

How does your IT organization market new services to your fellow employees? Please share your best practices.

 

Chris, Intel IT

More than 2,000 data center professionals attended the annual 2010 Gartner Data Center Conference in Las Vegas. Some of the year’s most significant issues and trends discussed were: the ever-expanding need for storage, data center design, what’s working in virtualization, public versus private cloud computing, how to retain a skilled IT workforce, energy efficiency, legacy migration, converged fabrics, and the recession’s impact on IT investment.  It was also interesting to see some of the new products and services on the conference show floor, including those from Intel, which was a sponsor and had a booth.

 

The conference also had several interactive features like live audience polling.  For example, in the opening session, Conference Chair Mike Chuba took the pulse of IT budgets and discovered the following: most attendees’ budgets will stay flat or even increase by as much as 10% in 2011. A third of respondents (33%) said they expect their budgets to stay even, while nearly a quarter (24%) anticipated an increase of at least 6%, and half of those were looking at a 10% rise. When asked to identify their biggest data center challenge, 23% of the audience zeroed in on power and cooling, while 16% cited the need for a cloud strategy as one of their burning issues.

 

Some of my key takeaways by focus area:

 

Cloud computing

Cloud computing is coming to the enterprise from all directions, and quickly. Cloud computing isn’t defined by one product or technology alone. Gartner believes it’s a style of computing that conforms to these key characteristics: delivery of IT-enabled capabilities “as a service” to consumers, delivery of services in a highly scalable and elastic fashion, and the use of Internet technologies and techniques to develop and deliver services.

 

In another cloud poll, 55% of polled attendees cited agility/speed as the main driver in moving to private clouds, while 21% said it was cost.  Intel has been citing agility as one of our key drivers also.

 

A lot of other themes expressed throughout the conference were consistent with Intel's approach to cloud computing, such as:

  • Cloud computing services can be delivered at many layers: infrastructure as a service (IaaS); platform as a service (PaaS); software as a service (SaaS); business process as a service (BPaaS).
  • Some IT services are destined for cloud computing; others are not. The latter include those IT services that are business differentiators.
  • When developing a cloud-computing strategy, ask the following questions:
    • Do you need a flexible amount of capacity?
    • Do you need management services?
    • Do you need a pool of capacity that can be dynamically reallocated between applications?
    • Can the application be Internet-facing?
    • Are you using a popular application framework? Etc.
  • The decision to leverage cloud computing is a business one, and it should address the following concerns:
    • Does it generate a new business opportunity or just reduce costs?
    • Is there a business case for investment?
    • Should the initial effort be public or private?
    • Can it meet service-level agreements and security requirements at a lower cost? Etc.

 

Cloud computing doesn't come without risks attached.  Some of the top concerns about implementing the cloud included: security and privacy, performance, and the nascency of the ecosystem.  Specifically on security, some of the fears were around data compromise risk (encryption is a partial solution), data loss risk, data portability (standards are still immature), and cloud hacking (being highly distributed and virtualized doesn't make hacking any harder...).  Virtual machines are inherently less secure than physical servers.  For example, compromise of a hypervisor affects all hosted workloads, there's a lack of visibility and controls on internal VM-to-VM communications, and there are risks from combining workloads of different trust levels (e.g., DMZ workloads, PCI, ERP (SOX), etc.).  Today, some of the most important virtualized security controls are virtual firewalls, virtual IDS/IPS, and virtual anti-virus without agents in each VM.

 

Virtualization

Virtualization is neither a single technology, nor a set of technologies buried in infrastructure. Rather, it’s now an obligatory function that has important ramifications for business use of IT, for business itself, and for all aspects of IT - from architecture to development and production. It’s not just about servers; it’s about everything. Virtualization enables IT to become more scalable and flexible while using fewer resources.  Virtualization is a continuing process, not a project to be finished.

 

More than half of attendees said their annual growth rate for storage capacity is at least 50%, and 30% of attendees expect an annual growth rate of between 50% and 70%.  With respect to servers, 38% have more than 10 sites that house servers. However, 55% of polled attendees indicated that their organizations will use fewer sites over the next three years.

 

Energy and green IT

Today, power consumption is as critical to an organization as performance. New construction as well as retrofits tend to focus on efficiency and reuse. And with growing EPA and EU involvement, the power issue is moving up the food chain.

 

Do you agree with the trends above, or has your experience been different?  I'd love to hear from you!

 

Ajay
