Rob@Intel

Teaching Consumerization

Posted by Rob@Intel Apr 27, 2011

Do you remember how you learned to be creative? Some people think that schools should teach free thinking, creativity and innovation, but this is almost impossible. There is a great TED talk I would encourage you to watch on this.

 

So what does any of this have to do with consumerization? Well, here is the key point: years ago we could tell an employee in detail what they could and could not do; now we have to teach principles and let them apply those principles to their day-to-day work. Not only do we have to trust that our employees can follow these principles, but do we even know what we are asking of them?

 

Intel recently took on Will.i.am as creative director; he comes to us with a massive social media following, which makes him a valuable resource to our company. We want our employees to help spread good messages, and we value those who have strong social media followings. So what should we tell employees? Blog? Don’t blog? This is just one example of where we need to think differently about security policy.

 

Alan Ross said, “Users are the stewards of information.” That means we need to work out a way for employees to make a choice about what they can and cannot communicate when they are on their own. Communicating too much or too little means the company is not tuned for performance. So we need to start working out how we can build policies on principles that we can enforce. But how can you enforce something you can’t measure? Even if you knew what you wanted, this would be hard.

 

Many security professionals have built their careers by authoring very exact and well-defined security policies; now consumerization arrives and the policy is invalid, or worse still, it is not updated as a new “common practice” grips the company.

 

We are beginning to see documents with phrases like “avoided where possible”, backed up by greater employee training. We don’t really know where this is all going, but companies have some really direct questions to consider.

 

So will you train your staff, or will YouTube do it for you? Are you looking at the next generation of workers and comprehending that they may not know what email is? They may think that posting their most intimate experiences on Facebook is normal behaviour, and that important messages are communicated as tweets because hiding them in a document is silly.

 

Protecting data is going to change over the next few years; that’s not because of the technology, it’s because people are different.

 

If you enjoyed the first video, Sir Ken Robinson adds a bit more here!

 

Rob

Every year, the Intel IT team hosts an internal Global IT Leaders Conference to discuss our strategy and better align our activities, initiatives and goals to support Intel's business.

 

This year's event was attended by about 300 of Intel's global senior IT leadership team.  While this conference took place in January 2011 and I'm behind in sharing my insights as I did last year (2010 Intel IT Leaders Conference), I figure better late than never.

 

One of the most interesting presentations I attended was a Q&A session with Intel's CFO, Stacy Smith.  I had the chance to ask him what he wanted and expected from IT - trying to extract his view on the role of IT.

 

Stacy summarized his viewpoint in a single statement that has stuck with me for the last three months.

"The Most Important is the Least Sexy"

 

Stacy elaborated on four key points behind this statement (my summary - not his exact words), articulating some critical roles of IT at Intel:

 

1) Keep the Business Running (IT continuity and reliability) - Intel ships 1M units per day to our customers, and Intel's factory and supply chain are dependent on the IT organization and our solutions - even the smallest hiccup in IT continuity can have an immediate and direct impact on Intel's business.

 

2) Protect Intel's IP (IT and Enterprise Security) - Our business success is becoming increasingly dependent on our ability to collaborate with our design partners and customers.  To do this we must be able to protect the IP of all companies, in both directions, during the B2B exchange. It is a critical imperative that we enable this collaboration in a secure way, minimizing risk.

 

3) Influence Product Definition (add business value outside of IT) - As the IT organization inside Intel, our business leaders want our opinions on how we use technologies to extract business value, and they want our insights into the requirements for future products and technology on Intel's roadmap. And given the emerging trends around IT consumerization, the insights from Intel IT are important not only for enterprise-oriented products but also for the experiences employees will have running an expanded set of solutions and services on employee-owned devices.

 

4) Create Value Through Acquisitions (IT-facilitated business growth) - Acquisitions are an important way that corporations grow, develop new competitive capabilities and create shareholder value. Stacy articulated a strategic need for Intel IT to create (rather than destroy) value through the successful integration of new resources, capabilities and employees - not only in IT solution integration but also in helping retain and integrate the cultural aspects of the way people work and innovate.

 

As I have talked with my peers over the past several years, we often discuss the fact that a large portion of IT's annual budget (typically 60-70%) is spent on "run" or "mandatory" investments rather than being aimed at business "growth" or "transform" investments.  Stacy's insights this year have given me a new appreciation for the value and importance of KTBR (keeping the business running). Thanks Stacy.

 

I leave you with a single question: What does your CFO want from your IT organization?


If you don't know - I highly recommend asking ... and be sure to share your insights below.

 

Chris Peters, Intel IT

Ilene

RFID and IT Sustainability?

Posted by Ilene Apr 26, 2011

What do RFID and IT sustainability have to do with one another? Intel IT is using RFID to track our assets in order to help us lower our carbon footprint. I asked my colleague Rob to share the details behind using RFID and some of the success we have had:

 

Asset Accuracy with Location Based Services


Is there sustainability value to be gained from upgrading your data centers to track assets with a Location Based Service (LBS) such as RFID? The answer is a resounding "YES". As we have begun proliferating RFID within our DCs and increasing our tracking accuracy to 100%, we are now able to use the visibility that LBS provides to add solid numbers that can be used for accurate capacity planning. Prior to implementing RFID, our audit system was a very manual process and a difficult burden on employees. They would print out documents and walk the floor to validate the location of a sample set of assets within the data centers. 100% validation was rarely done, and only in spot cases where accuracy levels were suspected to be below an 80% "target". Also important to note is that our audits were book-to-floor only, and did not identify or correct any EOL (end-of-life) assets that were still plugged in and pulling power.

 

After revamping our DCs and introducing Location Based Service tracking, we have transformed our process from a simplistic tracking mechanism into a powerful visibility resource that provides us with direct, actionable information. We can now do 100% floor-to-book and book-to-floor validation without burdening employees. To give some examples of the numbers we pulled right after install that are relevant to planning DC power:

  • 17% of our assets were documented in an incorrect DC and are now correct.
  • 28% of the assets we expected to see in the DC weren't there, so their association was removed.
  • 2% of our assets were expected to be retired, but were still found to be in the DC pulling power.

Obviously, variability like this used to make any actual planning difficult. Our DC planners had been forced to use general plan numbers instead of actual numbers. But now that we have upgraded to an LBS methodology for collecting actual numbers, we are managing DC planning (including power planning) much more efficiently.
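To illustrate the kind of reconciliation LBS enables, here is a minimal sketch (hypothetical asset records and field names, not our production system) of a book-to-floor and floor-to-book comparison between an inventory database and an RFID scan:

```python
# Hypothetical sketch: reconcile asset inventory ("book") against an RFID scan ("floor").
# The asset IDs and data structures are illustrative only.

book = {  # asset_id -> data center the inventory DB says it is in
    "srv-001": "DC-A", "srv-002": "DC-A", "srv-003": "DC-B", "srv-004": "DC-A",
}
retired = {"srv-004"}            # assets the book says are end-of-life
floor = {                        # asset_id -> data center where an RFID reader saw it
    "srv-001": "DC-A", "srv-003": "DC-A", "srv-004": "DC-A",
}

wrong_dc   = [a for a, dc in floor.items() if a in book and book[a] != dc]
missing    = [a for a in book if a not in floor and a not in retired]
still_live = [a for a in retired if a in floor]   # EOL assets still racked and pulling power

print(f"documented in wrong DC: {wrong_dc}")      # -> ['srv-003']
print(f"in book but not on floor: {missing}")     # -> ['srv-002']
print(f"retired but still racked: {still_live}")  # -> ['srv-004']
```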

 

Location Based Services is definitely a growth opportunity for IT for a multitude of reasons, and I’m very interested in collaborating with other companies that are doing any work in this space.
---- Rob Colby - Intel LBS/RFID Architect.

In support of the Intel Xeon E7 processor launch on April 5th, Intel IT developed several pieces of content that capture our assessment of the new Xeon E7 processor. Intel IT’s key findings include:

  • Our Intel IT data center strategy is on track to deliver $650M of value by 2014 – a large driver is adopting the latest generation of Intel Xeon processors, including the new E7.
  • The results of our E7 testing show up to a 35% performance improvement over the prior-generation Intel Xeon 7500, and
  • we also see up to a 7.81x improvement over 4S dual-core Xeon 7100 servers (~4 years old and targeted to be refreshed).
  • We continue to use 4-socket Intel Xeon-based servers for specialized design, ERP, and mission-critical workloads that demand more CPUs, memory, drives and multiple network connections.
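As a back-of-the-envelope illustration of what refresh math like this implies (only the 7.81x ratio comes from our testing; the fleet size and utilization target are assumptions), consolidation estimates fall out directly:

```python
# Hypothetical refresh arithmetic: how many 4-year-old 4S Xeon 7100 servers
# could one new Xeon E7 server replace, given the measured throughput ratio?
speedup = 7.81     # measured improvement over the 4S dual-core Xeon 7100 (from our tests)
old_servers = 100  # assumed size of the aging install base
headroom = 0.80    # assumed target utilization on the new machines

new_servers = old_servers / (speedup * headroom)
print(f"{old_servers} old servers -> ~{new_servers:.0f} new servers")  # ~16
```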

 

Read our latest whitepaper and watch our latest video to learn more:

  • Whitepaper – “Accelerating Silicon Design with Intel Xeon E7 Series” – link
  • Video – Results of Intel IT’s testing of Westmere EX / Value of 4S – video

 

Additionally, our popular Xeon Server Refresh tool and the Intel® Server Sizing Tool are updated with the latest E7 and E3 SKUs.

 

Ajay

My job with Intel IT just keeps getting better; I have the opportunity to share new information, new approaches and new ideas from our own IT organization.  Many of the stories I share are my colleagues’ projects, which are large and can take many months or even years to implement, like our Rethinking Information Security to Improve Business Agility paper that outlines a radical five-year redesign of our security architecture.

 

That’s business as usual, and it’s fun and fast-paced.  However, as Earth Month begins, many of us, including me, think about the impacts we can make as individuals too. I thought it would be appropriate to share this story from my colleague Karen, who manages Intel IT Sustainability communications and messaging for our employees. Karen’s story is about just how powerful the combination of talent and passion can be, even when it’s to the power of one!

 

Below, in Karen’s words, is the story of the difference one person can make for IT sustainability…


===============================================================

“Think globally, act locally.”  It’s a pretty familiar mantra for guiding environmentally responsible behavior.  Just recently our IT Sustainability team here at Intel got a quick refresher course on just how well it can actually work, courtesy of one of our program team co-workers.

 

As a team, we’d been discussing helping employees become aware of their individual energy footprints (computer/monitor energy consumption, carbon emissions, paper usage, etc.).  We were thinking globally and asking great questions like “What’s the best way to report progress?” and “Will employees feel like we are ‘snooping’ over the network on their computer or printer use habits?”… You get the picture.

 

One of our team members, Randy, was inspired to literally act locally - specifically, to see how much information the local machine could report to its user about energy use.  He went to work on it, and what he came back with was an awesome desktop “gadget” that a user can self-install. It provides visibility, via a simple score, into energy behavior (how often the machine goes to sleep, how many jobs are sent through the local print queue, etc.) and offers tips and tricks to improve energy use and reduce waste.
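A minimal sketch of what such a score might look like, in the spirit of the gadget (the weights and inputs are assumptions for illustration, not Intel’s actual code):

```python
# Hypothetical "green score": rate local energy behavior from two observable
# habits - how often the machine sleeps when idle, and local print volume.

def green_score(sleep_events_per_day: float, pages_printed_per_day: float) -> int:
    """Return a 0-100 score; the weights below are illustrative assumptions."""
    sleep_component = min(sleep_events_per_day / 4.0, 1.0) * 60          # reward frequent sleep
    print_component = max(1.0 - pages_printed_per_day / 20.0, 0.0) * 40  # penalize heavy printing
    return round(sleep_component + print_component)

print(green_score(sleep_events_per_day=3, pages_printed_per_day=5))  # -> 75
```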

 

Did it happen overnight?  Of course not; we did take the gadget through all the required due diligence and testing, but it was still pretty quick, and everyone we worked with as we moved through the normal process was impressed by how simple and effective the concept was.

 

Did it make a difference?  Yes! Today it’s officially known as the “Green Gadget”, and it is up on our internal “gadget store” for user “pull” install.  And yes, users are installing it and talking or blogging about it and the impact it has.  I can say personally that since I installed it on my own machine, maintaining a good “score” has been really important to me; I was surprisingly reluctant to print out a series of test images for a major event recently because I wanted to keep my good energy score!   Other users of the gadget have reported making similar adjustments to their habits in order to manage their scores; it is driving positive change.

 

Employees take their impacts on energy use seriously here, and the Green Gadget is a great example of how one person successfully used his programming skill, technical curiosity and passion for conservation to take local action that is literally driving thought, conversation and change globally within Intel.

DonAtwood

Retro-fitting data centers

Posted by DonAtwood Apr 19, 2011

Is it worth the capital investment to retrofit data centers for sustainability?  The answer is YES.  Consider this: for every watt of power your IT equipment consumes within the DC, the real total consumption is more than 2 W once utility power losses and cooling are added (a quick arithmetic sketch after the list below makes this concrete).  Let’s face it: if an investment does not provide a financial return, it’s often a hard sell to management.  It’s my experience working at a mega global company that the story and the financial justification are possible, but what you pick and how you deliver the message are important.  In our 90+ DCs worldwide, we have every conceivable design, age and efficiency level, but every one of our DCs has low-hanging fruit. The easy answer to the question is: find your low-hanging fruit and pick it; keep it simple and go after the big wins with little investment or risk to your company.  Here are my top 5 “low-hanging fruit” opportunities for energy conservation via retrofit in the DC:

 

1) Control the air – stop 99% of the air mixing with blanking panels, row/rack realignment, walls, air barriers or whatever it takes (typically delivers >15% cooling efficiency).

2) Measure and tune – stop overcooling the room and make sure you are running the right amount of cooling for your needs (many DCs routinely overcool).

3) Free cooling – outside air is great in the right regions, and wet-side economizers for pre-cooling in other locations can save you a ton of cooling (no pun intended).

4) Increase electrical efficiency – replace transformers/breakers to bring 415 V power to the DC (saves about 7% in efficiency loss).

5) Buy and run high-efficiency IT equipment (this matters).
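To make the “more than 2 W per IT watt” point concrete, here is a minimal arithmetic sketch (the overhead and loss figures are assumed for illustration and vary widely by site):

```python
# Hypothetical DC power arithmetic: total wall power drawn per watt of IT load.
it_load_w = 1.0          # one watt consumed by the IT equipment itself
cooling_overhead = 0.7   # assumed watts of cooling per IT watt
electrical_losses = 0.35 # assumed UPS/transformer/distribution losses per IT watt

total_w = it_load_w + cooling_overhead + electrical_losses
pue = total_w / it_load_w
print(f"total draw per IT watt: {total_w:.2f} W (PUE ~ {pue:.2f})")  # -> 2.05 W (PUE ~ 2.05)
```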

Don Atwood – Intel DC Architect

In a progressive leap forward, the US Justice Department has received approval to take control of a seized botnet command structure for the purpose of sending instructions to undermine the infection at the client.  This is the first time US agencies have ever directed efforts to directly clean the remote-controlled ‘bots’.

In a press release, Shawn Henry, executive assistant director of the FBI’s Criminal, Cyber, Response and Services Branch, stated “These actions to mitigate the threat posed by the Coreflood botnet are the first of their kind in the United States and reflect our commitment to being creative and proactive in making the internet more secure”.

This is a game-changing tactic for computer security!  Until recently, when law enforcement agencies 'took down' botnets, the takedown was limited to the handful of command-and-control servers.  Although temporarily effective in stopping the organized use of the bots, it left the mass of infected systems intact.  In some previous cases new command servers were established shortly thereafter, and the waiting drones picked up where they had left off.  Past enforcement efforts against the herders were largely ineffective and amounted to a temporary disruption of the botnet's malicious services.  But now, with the ability to reach out to the infected systems and 'kill' the malware present on the PCs, the bot army can be dissolved.  So even if new command servers are established, they have nothing to control. This is great news for the owners of all those infected systems, most of whom don't even know their home PC is contributing to the botnet problem.

 

This policy is not without controversy.  The mere thought of a government reaching out to privately owned and managed computers can make some people nervous.  If unchallenged, this will set a legal precedent, and more countries will likely follow suit.  But politics aside, strictly from a security perspective this is a new and potentially very effective weapon in the war against botnets.

I predicted just this type of activity in my year-end blog, Security Predictions for 2011 and Beyond.  Attackers are being targeted with more ferocity by governments, service providers and organizations worldwide.  Another recent example this year was the Rustock botnet takedown.


Last week I presented at the client summit in London. This is an event that allows Intel’s IT group to share with our customers how we do things here. We hope, of course, that our customers can be successful not only by looking at what we have done but also by learning from our mistakes.

 

One question I was asked concerned application development for mobile devices. Today there are lots of companies writing their own clients for mobile devices which allow a direct connection to a company server.  The advantage is that it’s very quick and easy to develop an application, because the vendor has done most of the work for you.

 

The disadvantage is that they may not have the level of security you need, and now you have a different way of connecting to back-end databases, for example, than you do for email, intranet, SCADA etc. In short: lots of different solutions, each with its own authentication, tunnel and encryption, means that standardisation is really hard.

 

So why do we need to standardise?

In the PC world we are always chasing standardisation as a way of reducing costs; however, on a mobile platform the device is mainly driven by users (who are just not standard... how inconsiderate).

 

So are we starting to need a development lifecycle that puts applications into production much more quickly? After all, we may only need an application for two years before the platform has moved on and the users have a different device.

 

Maybe we need an application development lifecycle that treats the decommissioning of the application as the starting point for development. This would certainly make sense from a security perspective, as old applications are more prone to security holes. But would moving from a standards base to accepting quick and cheap development that’s only around for two years make sense?

 

We like standards because they mean fewer support calls, but in the consumerized world there are fewer support calls anyway.

 

I think we need to revisit the whole development lifecycle and ask whether we are doing what we have always done, or whether our background experience is helping us move forward. Now that would be an interesting debate!

We are ramping up our internal software practices community for IT developers. The question in the title recently came up, and I thought, "Why not share it with the larger IT crowd?" What does it take to write really outstanding guidance that not only helps close the gaps on current projects but goes so far as to answer questions developers didn't know they had?

 

As we've gone through and looked at the software development lifecycle (SDLC), we discovered that developers have navigated each phase doing the best they can. Often they have documented processes and tips along the way. Most often they rely on their learnings from previous projects and apply those to the next one. We hope to leverage each lesson in a growing set of processes, references and samples in order to accelerate our software application processes internally.

 

That doesn't mean we don't buy or configure software. Rather, we understand that there is a challenge in creating truly amazing software that not only performs at the edge of the envelope but also makes the customer say wow!

 

With that in mind, the outline below is being shared with each team member building guidance on mobile-device-specific or client-server technologies.

 

  1. Development installation
  2. Design
  3. Development
    • Source code management
    • Naming conventions
    • Support (build for supportability)
      • Help build
      • Monitoring
    • Best practices
    • Guidance
    • Code samples
    • Communities (developers)
  4. Test
  5. Build
  6. Deploy
  7. Support
      • Help scripts
      • Monitoring wire-up
      • Communities (users)

     

As we work with a specific capability or technology, we fill in the gaps and identify our next work. Although to an outsider this may appear simple enough - just fill in the blanks - it is not. When looking at large enterprises (100k+ employees) with hundreds of developers and an increasing number of development languages, technologies and platforms to deploy to, it becomes complex quickly.

     

Good thing those developers have been doing this work already. We just need to put it on paper and place some governance around it.

Our story continues. How is your story going?

Hello World,

     

I have been so busy with my normal work that I haven’t had time to share what we are doing in a while. So, on a recent flight home from Washington, D.C., where we just held our first IT@Intel Cloud Summit, I thought I would spend a little time sharing where we are at, and where we are going.

     

First of all, 2010 was a busy year for all of us working on introducing the cloud to the Office and Enterprise environment at Intel.  We took on some tough challenges, and pulled most of them off.   Here is a recap…

     

1.) Pervasive Virtualization – our Cloud foundation is moving forward fast. We went from 18% of our environment virtualized at the end of 2009 to beating our goal of 37% by the end of 2010, and we are now at around 45%. We are starting to hit some of the tougher workloads, but we continue to move at a rapid pace here.

2.) Elastic Capacity and Measured Services – we made some pretty great strides in ensuring all of our cloud components have instrumentation, and in getting that data into our data layer so we can consume it.  Our Ops team is now starting to use the massive amount of data (from guests, to hosts, to storage) to look in aggregate at what is happening in our Cloud, as well as to dig into the specifics where we are exceeding thresholds.  We also run our massive DB - an ETL of around 40M records a day - on a VM, just to make sure we walk the talk.

3.) End-to-End Service Monitoring – we made a decision to tightly couple our Cloud work with our move to a true ITIL Service Management environment. This isn’t a simple task and we have lots more work to do here, but I think most of the peers I talk to in the industry agree that ITIL with Cloud is a great way to combine the discipline of an enterprise IT shop with the dynamic nature of on-demand capacity.  We have completed end-to-end service monitoring for a few entire services, and we are going to make this the norm as we continue through 2011, eventually creating the service models automatically when self-service happens.

4.) On-Demand Self-Service – we took an extremely manual environment and made it automated, and we didn’t do it in a pristine greenfield environment; we did this across our entire Office and Enterprise environment.  This means that across essentially all of our data centers and all of our virtual infrastructure, we can serve out infrastructure services on demand to entitled users.  We took on a goal of under 3 hours, and we are doing a pretty good job of hitting it consistently.  This year we are going after the last piece of the environment, our DMZ and secure enclaves, and our teams are busy working through the business process automation, as well as new connectors to automate some very laborious manual tasks.

     

Now, nothing any of us do in IT is simple, and everything has challenges…  a few retrospective points I would like to share:

     

1.) Know your workloads – with the data we are pulling from all of our OS instances, we can see what the workloads are doing to the most important components (CPU, memory, network, storage, and I/O).  In fact, we have so much data that sometimes it is tough to find the right data.  However, with this data you can pick the top 2-3 counters per component and make sure you are optimizing the OS instance as it moves to the multi-tenant environment (a minimal sketch of this counter sampling follows the list below).  I like to think of what we are doing as moving families out of the suburbs and into high-rise, extremely efficient leased apartments.  Since we control the city, we can make these decisions, but as we do this we need to be careful to make sure we have enough square footage to let the family thrive; if we give them too little space, or we don’t allow them to cool their apartment, we could end up with angry tenants.  Also, no one wants a rock band living next door, so we have to make sure those noisy neighbors keep the noise down, or give them a room away from the rest of the tenants.

2.) Know your environment thresholds – most IT shops work in silos, and many of the silos make decisions about their specific component that may not comprehend the entire IT ecosystem; this can be as simple as how large a subnet range is, or how many spindles are provided to handle a handful of DB VMs.  In my Design background we would go in and break our infrastructure as a practice (of course, not while we were using it); we would then understand specifically how and why we were able to break it, and set a threshold.  This threshold also serves as a challenge - meaning, how do you take a 2x or even 10x goal to lift the threshold as you take on more business and as the business grows?  If you don’t know how to break your environment, then when (or if) it does break, you will struggle to figure out how to get it back to normal.

     

3.) Don’t underestimate the cultural shift required to move from a manual environment to an automated one – our factories and our design environment work extremely well because of the large investments we make in automation.  This isn’t the case for most traditional IT shops I talk to, and neither was it for ours.  We made huge strides in bringing automation to this environment, but we still have a long way to go.  This isn’t just a technical challenge, either; you need to help your organization and workers understand that just because we are automating their work, it doesn’t mean they are going away.  When I started at Intel, one of the most valuable pieces of advice I got was to always seek to engineer myself out of a job.  This didn’t mean I was getting laid off; it meant that I could then apply my skills to a higher-level task.  We are constantly under headcount in IT, especially those of us that are a cost center and not a profit center - however, there is no shortage of valuable work we can do in IT to improve business services and make evolutionary changes that help the bottom line and the top line.  Also, make automation a part of everyone’s job…  a script with good documentation in it is always better than documentation with a pointer to a script.

4.) Many years of manual environments means that automation will hit walls – when someone takes a document and uses it to set something up in one data center, it is almost a given that someone in another data center is going to follow that doc slightly differently.  Configuration drift leads to some tough challenges, and automation will quickly find these problems and point them out to you - usually with a big red X.  Fortunately, we phased in the automation, so we were able to see a lot of the problems before we turned on self-service globally.  Now that we have self-service, we see configuration and performance issues almost immediately.
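Returning to point 1 with a concrete example: here is a minimal counter-sampling sketch (using the third-party psutil library; the counter choices are illustrative assumptions, not our production tooling):

```python
# Hypothetical workload profiler: sample a few key counters per component
# (CPU, memory, disk, network) to characterize an OS instance before moving
# it into a multi-tenant environment. Requires the third-party psutil package.
import psutil

def sample_counters() -> dict:
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1.0),  # CPU busy over a 1s window
        "mem_percent": psutil.virtual_memory().percent,   # RAM in use
        "disk_read_mb": disk.read_bytes / 2**20,          # cumulative disk reads
        "disk_write_mb": disk.write_bytes / 2**20,        # cumulative disk writes
        "net_sent_mb": net.bytes_sent / 2**20,            # cumulative network out
        "net_recv_mb": net.bytes_recv / 2**20,            # cumulative network in
    }

if __name__ == "__main__":
    for name, value in sample_counters().items():
        print(f"{name:14s} {value:10.1f}")
```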

     

I am about to land home in Portland, and the captain just said it is cloudy with a chance of rain…  We have a long path still ahead of us as we continue to enable new businesses at Intel, as well as rapid growth of existing business.  The last year of work took us a big leap forward, and I am excited about the coming year.

     

Where are you with your cloud efforts?  How have you handled the challenges, and were yours similar or different?

     

Until next time,

-Das

Intel IT Cloud Engineering Lead

In some of my other blogs, you may have read about the Intel IT Sustainability Program Office and some of the work that Intel IT is doing to lower our carbon footprint. In honor of Earth Day 2011, I asked my colleague Bill, Program Manager for IT Sustainability, to give his perspective on Intel IT sustainability: where we are finding energy savings and where our investments have paid off. Below are Bill’s thoughts! HAPPY EARTH DAY!

=======

What is your footprint? As an individual, I have a house and a yard, I drive to work every day, we recycle as much as we can, and we have a hybrid plus another vehicle for the heavier lifting. I would love to put solar panels on the roof, but kids of college age are deferring that investment so far. I think I am balanced: managing our footprint and striving to do more.

     

For an IT organization, what is its footprint, how do we change it, and what else can IT do to improve the corporation's footprint? A couple of years ago Intel IT took a strategic step and established an IT Sustainability Program Office. I get asked frequently how it is going; my first thought is: great - we have been hitting our goals, getting the organization engaged, and we have decreased our footprint 10%.

     

We started by defining just what an IT sustainability footprint is - CO2? electricity? water? waste? - and where we have leverage. The corporation already had mature processes and a culture of reuse, recycling and conservation. As we looked closer, we concluded that electricity, and its associated CO2 emissions, is IT's biggest opportunity.

     

The first bump in the road was measuring the footprint and sensing change. It was pretty easy to do some back-of-the-envelope calculations of our footprint, but we needed to sense change in a standard way. We have very few metering systems that isolate IT consumption: tens of thousands of distributed assets with no way of measuring their actual energy consumption. What we do have is an IT inventory database tracking assets and their locations.  We developed a model that uses the inventory data to estimate an asset's energy usage (i.e., power x hours in a year x percent of time on, etc.); sum it up and, bingo, an estimate of energy usage, which correlated pretty well with the meters we do have. As the inventory changes, we sense energy change.
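Here is a minimal sketch of that estimation model (the asset counts, wattages and duty cycles below are made-up illustrations; the real model has many more factors):

```python
# Hypothetical inventory-based energy model: estimate annual kWh per asset class
# as power draw x hours in a year x fraction of time powered on.
HOURS_PER_YEAR = 24 * 365

inventory = [
    # (asset class, count, typical draw in watts, fraction of time on) - illustrative
    ("server",   500, 350, 1.00),
    ("desktop", 2000, 120, 0.45),
    ("laptop",  3000,  40, 0.30),
]

total_kwh = 0.0
for asset_class, count, watts, on_fraction in inventory:
    kwh = count * watts * HOURS_PER_YEAR * on_fraction / 1000.0
    total_kwh += kwh
    print(f"{asset_class:8s} ~{kwh:12,.0f} kWh/year")
print(f"total    ~{total_kwh:12,.0f} kWh/year")
```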

     

Servers are our highest energy consumers, followed by DC facilities overhead, network and storage. A surprise was desktops and laptops being less than common perception: their power draw per system is lower, they tend to have some power management, and for laptops users have a natural incentive to turn them off to conserve battery life. With the model we are able to estimate change nicely for our environment.

     

As with most IT organizations, ours is constantly focused on satisfying our customers' needs and improving our efficiency, balancing cost with performance.  Over the past two years our efficiency goals have been aligning nicely with sustainability.  Server refresh, network port consolidation and server virtualization are all having the net effect of reducing our energy footprint and avoiding construction of new data centers, while increasing our computing capacity.  With few new data centers being built, we had less opportunity to design in energy efficiency, so the data center engineering team started looking at low-cost retrofit opportunities such as tile management, blanking panels, hot aisle/cold aisle containment work, variable-speed fan retrofits, etc., adding up to a 10% improvement over two years.

     

Now to my second thought: how is it going? There is so much more opportunity. How can we apply IT better to enable human behavior, manage building consumption, the supply chain, and other potential applications of IT capabilities? Where is the innovation going to come from? Technology efficiencies will keep coming - we will continue to see power improvements in servers, storage, clients, network, etc. - and we need to adopt them within business-environment boundary conditions. We need some breakthroughs in applying IT outside IT's four walls, and breakthroughs in valuing and financing energy-efficiency work; I hear there is payback on those solar panels I want, I just can't write the check. Building engineers and IT experts work in different circles, so we may not see the breakthroughs their combined skills could deliver.

     

What worries me: Jevons paradox (which I just learned of recently) - with increased efficiency in using a resource, consumption tends to increase. I remember a factory manager twenty years ago saying, "I am not going to give automation any more Ethernet; they'll just use it all and ask for more." Is there a killer app on the way that everyone has to have, needing much more compute capacity and outpacing Moore's law? We all start talking to our computers Star Trek style, CPUs reading our thoughts, hopefully running at just a few watts.

What’s wrong with high efficiency in the data center today?  When we talk about high efficiency, going green, and sustainability, the real intent is to reduce carbon footprint and consume less energy overall, but it seems “we” as an industry often get sucked into chasing a number for good public relations and/or bragging rights instead of doing what might be the right thing to do despite the net “PUE” result.  I’m working on a fantastic project right now that could technically deliver the lowest PUE in the world if I wanted to chase the number, but because I define “high efficiency” as (IT + mechanical + electrical = highest efficiency opportunity), my PUE number looks worse although my total power consumption is much better (a PUE of 1.11 rather than the 1.02 I could chase).  It’s time for the data center industry to step up and step away from our current industry-best measurement, because it can cause the wrong actions in the chase for good numbers.  Just my two cents.
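A quick worked example of the point (the IT loads are made up for illustration; only the two PUE figures come from the paragraph above):

```python
# Hypothetical comparison: efficient IT gear with a "worse" PUE can still
# draw less total power than wasteful IT gear with a "better" PUE.
# PUE = total facility power / IT power, so it says nothing about IT efficiency itself.

it_a, pue_a = 800.0, 1.11   # design A: efficient IT equipment, higher PUE
it_b, pue_b = 1000.0, 1.02  # design B: less efficient IT equipment, headline PUE

total_a = it_a * pue_a      # 888 kW
total_b = it_b * pue_b      # 1020 kW
print(f"A: PUE {pue_a:.2f}, total {total_a:.0f} kW")
print(f"B: PUE {pue_b:.2f}, total {total_b:.0f} kW")  # B "wins" on PUE but burns more energy
```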

     

Don Atwood – Intel DC Architect

Warija

Wiki, wiki, wah, wah

Posted by Warija Apr 5, 2011

In my earlier blog, "Enterprise 2.0 to the rescue of project managers", we discussed how Enterprise 2.0/social computing can be leveraged for project management. Here I would like to discuss a specific tool - the wiki - and not just for project management needs.

     

In Intel IT, we use wikis for many activities. They act as a content mashup tool - a centralized location for all related information, while the information itself may be stored somewhere else. We use wikis for strategy creation, project management, agenda sharing and storing meeting minutes. If you get a chance, please visit http://wikipatterns.com, which shows how wikis can be used for different needs. It is a good source on how we can leverage wikis and structure our information accordingly.

     

Although we have not been consciously applying wiki patterns, I can see that some of the patterns we are using include:

     

  • Overview Pages
  • One Wiki space per Group
  • Scaffold
  • Agenda
  • and many more…

     

In your organization, do you use a wiki? If yes, do you consciously select a wiki pattern for modeling your business process? Do you use a wiki for project management? We would like to hear from you about whether and how a wiki has made a difference to your workflow.

     

Since I was introduced to the wiki, there has been no looking back. I have been saying "wiki, wiki, wah, wah" ever since (similar to what Shakira sang for the World Cup - "Waka, waka, yeah, yeah").
