Energy Use in the Office PoC (phase 2)

 

It’s been a while since I’ve talked about Energy Use in the Office.  The small PoC we ran early this summer produced some pretty interesting results, but given its size and time constraints, it was unclear how the data we obtained would scale up.  So, building on the results of the first phase, we are planning a second phase of this PoC on a much larger scale: we are involving about 1,000 users, and the second phase will not be subject to the limiting time constraints that characterized the first.  During this second phase, we will focus on user awareness and enforced energy profile settings.  We are also building a real-time energy-awareness user interface that PoC participants will be able to access with web browsers, as well as view on large screens in the building’s lobby and cafeteria.  I’ll keep you up to date as the project progresses.

 

Making IT Real!

 

By the way, the second video in the “Making IT Real!” series has been released.  If you haven’t already seen it, you can watch it here, and in case you missed the first video, you can see it here.

 

 

 

 

-Mike Breton

IT Technology Evangelist

Russell C Thomas delivers a great post on How to Value Digital Assets.  It covers many of the basics and, more importantly, gives good direction to take while spotlighting common pitfalls in the valuation journey.


“This tutorial article presents one method aimed at helping line-of-business managers (“business owners” of digital assets) make economically rational decisions.  It’s somewhat simplistic, but it does take some time and effort.  Yet it should be feasible for most organizations if you really care about getting good answers.  Warning: No simple spreadsheet formulas will do the job.  Resist the temptation to put together magic valuation formulas based on traffic, unique visits, etc.”

 

Definitely a good read for anyone wondering where to start the valuation process.  I especially like the Three Principles section.  He makes a logical separation between assets which provide direct revenue (Class 1) and those which are in a support function (Class 2).

 

As a follow-on, I believe some other aspects could be covered under the Class 2 section, including liability avoidance, direct efficiency gains, life safety, and regulatory compliance.  In certain cases we must apply a different valuation method than the one explained: management may be willing to replace or upgrade an asset, but such investments typically must show a positive ROI, which means the asset provides far more value than its replacement or repair cost.

 

Years ago I had a stimulating conversation with the late (and some would say infamous) Dr. Bill Hancock.  Bill had trudged through the information security swamps for decades and had unique insight into the valuation of vulnerable systems, particularly single points of critical failure.  He recounted his experience evaluating an airline’s security and discovering a minor system that was largely ignored: a weight-and-balance server.  Apparently, when planes take off, the distribution of weight must be calculated to ensure they don’t become giant ‘lawn darts’ (Bill’s colorful description) at the end of the airfield.  A data integrity compromise of this system could have catastrophic consequences, up to the end of the business.  Who would fly on an airline that had several take-off crashes in a single day?  That would likely be the critical factor causing the airline to no longer exist as a viable business.  Although this was a support system, its integral value was far beyond the cost of the equipment, software, and support.

 

Secondly, the blog is written with the assumption that the assets are already in place.  Thus, in a perfect world, a proper ROI justification has already been made to support the decision to acquire and land these assets.  But what if the decision to purchase (or not) is the objective?  The Class 2 method then becomes circular: the value is whatever management is willing to spend?  How do they know?

 

Overall it is a great blog.  I think it would be helpful if the author could give an example for a medium-sized enterprise, with particular focus on Class 2 areas (specifically security or safety assets).  Hopefully he is willing to post such details.

No.  Just the people who use them.


Passwords of reasonable strength (8 characters or more, mixing upper/lower case and special characters), coupled with timely expiration, are secure.  Passphrases with comparable measures are equally secure.  The systems and users are currently the weakest links in the security chain.

[Image: Security Chain.jpg]


The interfaces and tools with which we input passwords may be vulnerable.  This includes, but is not limited to, key-loggers, sniffers, input redirections, etc.  But it is with the user where the most significant weakness exists.  Users can be duped into divulging their passwords (by phone, web, chat, email, etc.) and in many cases make them available in other ways (a sticky note under the keyboard).


A recent Newsweek article covered the topic of building a better password:

"...a short but hard-to-remember string like "J4fS<2" can be broken by what is called a brute-force attack (in which a computer attempts "a," then "ab," then "abc," and so on) in 219 years, while a long but easy-to-remember phrase like "du-bi-du-bi-dub" will stand for 531,855,448,467 years. (Two hundred nineteen years is actually very good, but the lesson remains: simpler can be stronger.) The idea of passphrases isn't new. But no one has ever told you about it, because over the years, complexity-mandating a mix of letters, numbers, and punctuation that AT&T researcher William Cheswick derides as "eye-of-newt, witches'-brew password fascism"-somehow became the sole determinant of password strength."

 


The difference between passwords that can be cracked in two hundred years versus a billion years is immaterial if users are forced to change passwords every few months.  The bad guys just don’t have the time to crack the password before it is changed or the data is sufficiently aged to no longer be of value.

To undermine cracking attempts, we force users to use 'strong' passwords so that dictionary attacks are fruitless and threat agents must resort to a laborious brute-force attack, trying massive numbers of combinations in order to succeed.  All passwords can be cracked via brute force, but it takes time.  It becomes an exercise in how many attempts can be made over a given period.  The faster the process, the more combinations can be tried, and therefore the shorter the time to discover the one that works.  The length and the set of possible characters determine the number of combinations.
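To make that arithmetic concrete, here is a minimal sketch (my own illustration, not from the article quoted above) of how length and character set drive the numbers; the guess rate is an assumption, and real attack speeds vary enormously:

```python
# Rough sketch (my own illustration, not from the article): estimate how long a
# brute-force search would take for different password policies.  The guess
# rate below is an assumed figure; real attack speeds vary enormously.

def brute_force_years(charset_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case time, in years, to try every combination."""
    combinations = charset_size ** length
    seconds = combinations / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

if __name__ == "__main__":
    RATE = 1e9  # assumed one billion guesses per second, purely illustrative

    # 8-character password drawn from ~94 printable characters (upper/lower/digits/specials)
    print(f"8-char complex password:   {brute_force_years(94, 8, RATE):,.2f} years")

    # 20-character passphrase drawn from ~27 characters (lowercase letters plus hyphen)
    print(f"20-char simple passphrase: {brute_force_years(27, 20, RATE):,.0f} years")
```

The point it illustrates matches the Newsweek quote: a longer, simpler passphrase beats a short, complex password by orders of magnitude.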

Undermining the strength of a password is not the biggest concern.  It is far more likely for a password to be sniffed on the network, captured on a system, or duped from a user, rather than be cracked.

The most significant vulnerability is with the users and the systems where passwords are entered and stored.  There is no practical benefit to further abusing users with new diabolical password schemes.  We should pay less attention to stronger and better password formats and instead invest in better behavioral controls, user education, and the strengthening of systems and interfaces.

With a painful taste of irony, it was recently reported that the Ministry of Defence's (MoD) manual explaining how to prevent leaks was itself leaked.

Source: The Telegraph (telegraph.co.uk)

 

"The Defense Manual of Security is intended to help MoD, armed forces and intelligence personnel maintain information security in the face of hackers, journalists, foreign spies and others.  But the 2,400-page restricted document has found its way on to Wikileaks, a website that publishes anonymous leaks of sensitive information from organizations including governments, corporations and religions."

 

Is this a fluke or is the world suffering from abhorrent information security practices, culture, and capabilities? 

 

YES, the world is terrible at securing data!  Yes, you and I are part of the problem!  Yes, it can be fixed, but that is unlikely unless dramatic steps are taken!

To hear my full rant and opinions, check out my blog/video "It is Time for a Data Security Revolution!"

Is data security really that bad?  What do you think?  Don't be shy.  YOUR data is at risk too.

 

 

 

It is Time for a Data Security Revolution!

Information technology has lagged behind society’s skyrocketing need to manage and secure data.  Information is growing exponentially and our demands for control and oversight continue to develop rapidly.  Efforts to create or improve current paradigms are fractured and have failed to reach the tipping point of the maturity cycle necessary to catch up.  We have failed.  It is time we shed our entrenched, archaic ways and leap forward to revolutionize how data is protected and managed.  The confluence of changes in our culture’s expectations of data demands that we succeed.  A revolution in data security is coming; we can either lead it or be trampled by it.

The problem

The world is demanding more control, security, oversight, and awareness of where our data is and how it is being used.  This includes information generated and processed at work, as well as our own personal information, including financial, health, and privacy data.  As a society, we are just starting down the road of exploring data loss prevention issues, privacy expectations, digital rights management, and electronic discovery requirements.  Additionally, we are just beginning to understand the vast, hidden, and expanding world of data breaches, identity theft, user profiling, and online victimization.  Intellectual property controls are more important than ever to businesses in the information age, and the social networking phenomenon is opening our eyes to the need for better security and management of individuals’ data and the systems which control it.

Yet current behaviors, tools, and infrastructure are vastly insufficient for what we need today, and the gap is widening toward a critical failure point for what will be needed a decade from now.  As fast as technology evolves, it simply cannot keep pace within the confines of current structures.  We will be left with a snarl of vague and unrealistic regulations, unsatisfied community demands, incompatible point solutions, tools which can’t scale, and an entire generation of information victims.  A radical change is needed!

[Image: Information2.jpg]

The storm is brewing

A confluence of conditions is manifesting to create a perfect storm for radical change.  Consider the following social and technical shifts, which will change people’s opinions:

·        Data exposures are becoming public, showing the terrible depth of the problem

·        The number of data victims, for identity theft and online crimes, is increasing as are the losses

·        Data, system, and privacy regulations are emerging across the world with complex variations, creating severe challenges for global compliance, interpretation, and compatibility

·        Social media users are realizing the honeymoon is ending, their data is exposed, and being used in ways they never intended

·        Malware is reaching epic proportions, and the trend is shifting toward capturing victims’ data

·        Individual opportunists, organized criminals, and nation states are actively working to control systems, data, and networks

·        Surveillance, profiling, and filtering controls are becoming mainstream to target or seek control of user data

·        The sheer number of people and businesses on the internet is reaching a critical mass that determines how the world communicates, and is the engine driving exponential growth in the amount of data being generated

 

This problem may be complex in the details, but it is simple in principle.  Basically, we manage data poorly.  If I create a document today and email it to a co-worker, I essentially surrender almost all control.  In a week’s time, I will have virtually no idea who has seen it, how many copies exist, how long it will stay buried on storage devices, or what modifications have been made to it.  I have no way to update the copies, control access, or revoke the files.  Chances are good that after a year I will lose it myself or forget the document’s contents.  It is terribly inefficient and represents poor overall management of data.

 

This situation presents both a technical and a behavioral problem.  The personal computer revolution bestowed the tools to easily create and store data.  The pervasiveness of the internet established an unprecedented ability to share and disseminate information.  The natural limitations of the pencil-and-paper generation supported modest but adequate physical management solutions: creation, distribution, and control were tangible and restricted to local resources.  Our newfound ability to generate and distribute information has not been coupled with equitable management solutions.  Caught in the euphoria of new freedoms, we ignored the capabilities to control and secure.  The shortcomings of technology have been tolerated because demand from society has been apathetic and disjointed.  We have failed as consumers to recognize the importance of our data and to realize how easily it should be managed.

It’s the 21st century; do you know where your data is?

Today, data is easily created, lost, transferred, edited, stolen, abused and destroyed with very few mechanisms to prevent, detect, or respond. 

Consider the following:

·        We don’t track who creates files and who owns them

·        Rarely do we consider if files should be secured or how

·        We don’t take steps to determine who should access, view, or edit files and where they can be stored

·        Destroying data after it is no longer useful is a foreign concept, as is who should be responsible and when

·        We don’t understand who, at any given time, has possession of our data and how to effectively recall it

·        We have little insight into data content.  We rely on short and sometimes cryptic filenames for clues, but we don’t comprehend contents in a meaningful way

·        Sharing data is mostly ad-hoc for specific files or locations, with little thought of content or other security factors which should be considered

 

In summary, we are poor custodians of data.  In fact, people keep better track of the clothes in their closet than the information assets they create every day.  I would wager you know where your clothes are, which are clean and which are soiled, and you have designated places for both.  You regularly maintain your wardrobe by cleaning, pressing, matching, folding and storing clothes in an organized manner.  Items are added, minor repairs made, and eventually clothes are purged when they no longer fit, are outdated, or simply not needed.  You plan and may budget when new clothes are required.  Depending on your age and habits, you may even have your name on them for ownership identification.  You organize your closet for easy searching and you know which articles have been loaned out and to whom.  For important items you would likely detect if they went missing and probably have a good idea of likely suspects, as you know and control who has access.  So why do we do such a good job at managing our clothes, yet such a miserable job at managing our data?

 

People have not yet put the mental pieces together, but they will.  When they do, they will demand technology deliver a solution.  Revolt will be at hand.

Current efforts

A number of current initiatives have been struggling to gain modest traction but will always lack the ability to deliver a complete solution.  Digital Rights Management (DRM) is well known in online music circles, focusing on file-based locks.  Data Loss Prevention (DLP) is a collection of practices and tools which can scan, classify, and block inappropriate transmission of data.

Structures like Role Based Access Controls (RBAC), Mandatory Access Controls (MAC), Discretionary Access Controls (DAC), and Lattice Based Access Controls (LBAC) have attempted for years to establish controls within homogeneous and small environments, but rarely work as intended in large, mixed environments like modern networks.  A variety of secure data repositories have emerged, which do a stellar job protecting a few critical items akin to a vault, but are largely inaccessible, inconvenient, and not scalable.

 

A quick summary of current solutions highlights why they are not scalable, will fail to provide a complete solution, and will likely never be widely adopted.  Each of these has its place and function, but overall they will not deliver what is needed: a comprehensive capability to manage data security.

1.      Vault solutions:  Secure some files in a locked system or repository and provide access via custom interface applications.  Not scalable for vast amounts of data, poor accessibility, high level of permissions management needed, inconvenient to use, and the trend to use proprietary software will keep the price tag high

2.      Scan and classify DLP systems:  Can apply controls on both clients and networks but rely on rules which are complex and a nightmare to maintain.  Ultimately this is why they just get ignored.  Sustaining accuracy is not practical in environments which change and grow rapidly

3.      Scan and alert/intervene DLP systems:  Similar to scan and classify DLP systems, with the added benefit of intervention.  Blocking suspect traffic and communications is a double-edged sword which requires high overhead to ensure it does not interfere with legitimate business.  These suffer from the same drawbacks as their cousins.

4.      Employee policies:  Policies which rely on manual intervention are hit or miss.  For simple straightforward decisions they can be quite effective.  For complex data decisions, changing environments, and potentially vague situations they fail miserably.  People simply don’t act consistently when faced with complex decisions

5.      System policy (MAC, DAC, and LBAC) solutions:  System-based solutions which can work well while data stays on the system but fail when collaboration across systems and users is required.  They simply lack the applicability, scalability, and compatibility needed across a network with varied uses and complex combinations of collaboration and security.

6.      Group/role access policies (RBAC): The natural evolution of the MAC, DAC, and LBAC concepts; can work great for small groups and data sets in an environment which does not change often.  As the number of users and the volume of data grow, the administration increases and ultimately does not scale efficiently.

7.      File lockdown systems (DRM): Locking down files with digital rights (DRM) can work in situations needing a simple access control.  Allowing a file to be opened or not, for example.  But it does not work well when a multitude of access options are needed and other controls are required.  Compatibility also poses a problem when sharing such files across systems.

8.      Secure critical files and data solutions:  File encryption is the major player in this field.  Target only the most critical data and files, and focus on protecting those.  Not scalable with the increasing amount of data organizations are processing and the shift of data across a much broader user and system landscape.  Works great for handfuls of people with a small number of files needing protection.  Those days are gone.

9.      System data protection solutions:  Since file encryption has too much overhead to scale, just encrypt the entire system and network.  Works great for lost laptops but does little once the user has logged in and everything is easily accessible.  Network encryption only protects against sniffing.  A good evolution, but not nirvana; it is a one-trick pony for confidentiality.

10.  Do little to nothing and hope for the best.  Don’t laugh.  You might be surprised with how many financial, health, educational, and governmental systems followed this model for most of the past decade. 

 

The list goes on.  This is not comprehensive, but does give a taste of some stovepipe solutions which are struggling to evolve even slightly and will never leap forward on their own to meet what will be demanded.

Overview of solution

How do we succeed?  We combine some of these technologies, integrate them into the base computing infrastructure, and ease the necessary user behaviors into the fabric of how people create, use, share, and destroy data.  The solution must combine an object-oriented definition structure with network-based management controls.

 

 

Four core aspects for identification, security, and management of files

Data objects must carry specific characteristics to enable the computing environment to effectively and efficiently manage security.  Although discrete parameters may differ based upon data type and parent organization, these aspects represent the necessary structures which work together to enable automation and to define security practices.  Additionally, the characteristics themselves must be secured and compartmentalized.

[Image: Data Security Aspects.jpg]

1.    Confidentiality Designation – The level of sensitivity and confidentiality of the data.  This has implications for required controls for data at rest, in use, and in transit.  It can also define requirements for where the data can be stored and who can access it.  Examples might be Top Secret, Secret, Business Confidential, Personal, and Public.  Classifications have implications for the Access and Handling aspects.

2.    Access Rights and Permissions – Who has the ability to access, edit, store, copy, transfer, etc. the data objects.  DRM and RBAC technologies and DLP principles are a good start.  The object must securely contain the concepts of ownership and of those trusted to use the data in different ways, including to open, edit, destroy, move, copy, and transmit it.

3.    Content Synopsis, Tags, and Keywords – Identifying content supports indexing and understanding relationships between files.  It facilitates scanning and auditing against policy as well as automation for determining access, classification, and secure handling requirements.

4.    Secure Handling – Secure handling parameters determine retention, backup, destruction, storage, usage, and transport requirements.  These can be set by a default policy and updated based upon the other aspects.  Data Lifecycle Management (DLM) practices provide a good foundation.

 

These four aspects cooperate and influence each other.  If, for example, file content changes to include secret information, the classification may automatically bump to a secret designation, the secure handling settings will force persistent encryption, and the access rights will shrink to a smaller community.
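As a thought experiment only (my own sketch, not a defined standard or product), the four aspects could be modeled as metadata carried with each data object; the class names, classification levels, and re-evaluation rule below are illustrative assumptions:

```python
# Illustrative sketch only: one possible way to model the four aspects as
# metadata carried with a data object.  Names, levels, and the re-evaluation
# rule are assumptions for the example, not part of any defined standard.
from dataclasses import dataclass, field

LEVELS = ["Public", "Personal", "Business Confidential", "Secret", "Top Secret"]

@dataclass
class DataObjectAspects:
    confidentiality: str = "Business Confidential"              # 1. Confidentiality Designation
    access: dict = field(default_factory=dict)                  # 2. Access Rights and Permissions
    tags: set = field(default_factory=set)                      # 3. Content Synopsis, Tags, Keywords
    handling: dict = field(default_factory=lambda: {            # 4. Secure Handling
        "encrypt_at_rest": False,
        "retention_days": 365,
    })

    def content_changed(self, detected_tags: set) -> None:
        """Example of the aspects influencing each other when content changes."""
        self.tags |= detected_tags
        if "secret" in detected_tags and LEVELS.index(self.confidentiality) < LEVELS.index("Secret"):
            self.confidentiality = "Secret"                      # bump the classification
            self.handling["encrypt_at_rest"] = True              # force persistent encryption
            # shrink the audience to users explicitly cleared for secret data
            self.access = {user: rights for user, rights in self.access.items()
                           if rights.get("cleared_secret")}
```

In this toy model, a scan that suddenly detects secret content bumps the classification, forces encryption on, and trims the access list, which is exactly the interplay described above.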

 

 

 

Cookbook of requirements:

This is the wakeup call for firmware, operating system, application, and security solution providers.  Changing how people manage data, from creation to deletion, will require the major players to work together on standards and Application Programming Interfaces (APIs).  We are not just altering one piece or bolting on additional security; we must change the fundamentals of the very infrastructure we use to manipulate data.

Some inroads have begun.  DLP and DRM systems have established expertise in some preventative, detective and responsive functions.  Social media is leading the way in many respects with tagging, sharing, collaboration and most importantly tracking and metrics.  On the most modern sites, an author can post a video and track how often it is watched, by whom, and if they are using it in other mash-ups.  A great deal of data can be gathered and if analyzed correctly, transformed into usable intelligence.  Social media is the looking glass for what is to come.

 

These requirements are critical for success:

·        Must apply system-wide, embedded seamlessly in hardware, operating systems, and applications.  It must include all data which is created, viewed, modified, transported, or deleted by users

·        Must span across users, client systems, and into the backend infrastructure

·        It must be holistic in nature and apply  from creation to deletion (birth to death) for data and files

·        Must possess default security for creation, storage, transit, and when in use

·        Support, at a minimum, basic functions of DLP, DRM, meta-data, content tagging, RBAC, client agents, data tracking, and control repositories

·        Maintain a centralized structure for metrics, audits, maintenance, discovery, and reporting

·        Distributed and centralized hybrid system supporting comprehensive scanning, indexing and auditing

·        Enable data tracking, verification, auditing, and ownership administration

·        End-user involvement and empowerment, to directly access and manage control systems and distributed data

·        System interoperability across separately controlled domains and networks

·        Establish end-user ease of use, manageability, and scalability at all integration points:

o  Straightforward setup with additional modular extensibilities

o  Default settings based upon role for confidentiality and handling

o  User interface validation of parameters, and extra owner options, when saving, editing, moving or transmitting files

o  Default access rights based upon groups, tags/keywords, and storage location (for example, inherited rights based upon storage location or of like files)

o  Escalation and resolution options when actions are prohibited by the system

 

 

Vision of Success

We have the intellect to succeed.  We can create a new paradigm which meets the needs of legal, privacy, and security, and most importantly the maturing expectations of everyday people.

 

Keys to strategic success:

·        Make the capability embedded, easy to use, and secure by default.  Minimize impact and overhead to the users

·        Champion behavioral changes of users and administrators, show the value

·        Drive client Operating Systems and Applications to conform and support standards

·        Leverage security tools to extend services and controls

·        Establish back-end infrastructure support via standards

·        Foster competition to drive affordability, scalability, support and continuous improvement

 

 

Key capabilities for value and functionality

·        Automated intelligent determination of initial core file aspects, with validation by users during file management requests (save, transmit, copy, etc.)

·        Automated security controls applied and enforced based upon file aspects and derived control requirements

·        Automated data cleanup, archival, and destruction based upon file aspects and settings

·        Data owners can easily search and organize their files both local and across the network

·        Data owners can easily take control to manage access, confidentiality settings, change file handling parameters, and revoke files across the network

·        Administration can conduct broad electronic discovery searches for files and data content, generate operational metrics, and gain an understanding of where sensitive data is located and how it is being used

·        Automated security alerting and logging to assist with detection of unacceptable actions, resolution to events, and predictive information to facilitate the establishment of future preventative controls

 

 

 

 

Example Use Cases

[Image: Data Security Mock App1.jpg]

New document creation

Capturing the meta-attributes at the point of creation is a critical step.  In the mock-up, an email is created and a default set of icons appears in the toolbar, showing the status of the four aspects.  These default settings align to the Confidentiality Designation, Access Permissions, Content Synopsis, and Secure Handling settings configurable by the organization or user.  They establish base parameters but change dynamically as content is added.

 

As text is added, the system matches the content against criteria which change the classification, associate the file with a current project, add content tags, and modify access permissions automatically.  The icons change in appearance to show how the data will be treated.  The user can intercede manually by clicking the icons, which opens the user interface showing more options and configurations.

[Image: Data Security Mock App2.jpg]
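Behind a mock-up like this would sit some kind of content-matching rule set.  The sketch below is purely hypothetical (keywords, levels, and group names are invented for illustration, not the PoC's actual criteria) but shows the shape of the automation implied:

```python
# Hypothetical content-matching rules; keywords, levels, and group names are
# invented for illustration and are not the mock-up's actual criteria.
LEVELS = ["Public", "Personal", "Business Confidential", "Secret", "Top Secret"]

RULES = [
    # (keyword found in text, classification, project tag, allowed group)
    ("project falcon", "Secret",                "falcon", "falcon-team"),
    ("salary",         "Business Confidential", "hr",     "hr-staff"),
]

def scan_content(text: str) -> dict:
    """Derive aspect updates from the document text as it is typed."""
    updates = {"classification": "Personal", "tags": set(), "access_groups": set()}
    lowered = text.lower()
    for keyword, classification, tag, group in RULES:
        if keyword in lowered:
            # keep the most restrictive classification seen so far
            if LEVELS.index(classification) > LEVELS.index(updates["classification"]):
                updates["classification"] = classification
            updates["tags"].add(tag)
            updates["access_groups"].add(group)
    return updates

# e.g. scan_content("Draft budget for Project Falcon") returns a Secret
# classification, the 'falcon' tag, and access limited to the falcon-team group.
```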

 

 

Saving, moving, deleting or transmitting data

A modified window appears whenever users attempt to save, move, delete, or transmit data.  This confirms settings and, if needed, solicits any additional data necessary to complete the transaction.

 

 

End state vision

·        From creation to destruction, data is automatically classified, secured, and under the control of the owner

·        Additional capabilities extend to allow complex management, sharing, security, and tracking

·        End users are empowered to easily organize and revoke their data, control access, and know where it resides

·        Through leveraging technology, data files are treated like assets and security is efficiently managed across user domains

Conclusion:

Change is coming.  The underlying community, regulatory, and behavioral factors are present and becoming more prevalent.  The information technology and security industries must escape the façade and false hope of small improvements and truly revolutionize how data is secured and managed.  This can only be accomplished with aligned industry partnership, a realization of necessity, commitment to user efficiency, common technical standards, and most importantly a shared strategy.  It is possible.  Now is the time to think, discuss, and plan.

As I started my transition into a new job within Intel IT a few months ago, I discovered that one of our internal IT strategic imperatives was “Partnership”.  I have to admit that at first I dismissed this as simply one of many standard business leadership terms that any organization could choose to operate on (I hope Diane Bryant, Intel CIO, is not reading this).  However, I’m learning how critical partnerships are for a high-functioning and value-driven IT organization, both within the IT organization and between IT and the business groups it supports.

 

 

With much of the focus these days on the lack of capital budgets limiting IT investment and innovation, I’m learning that a larger underlying barrier for IT organizations seeking to enhance and maximize value inside their businesses centers around the themes of trust, alignment, and ultimately, partnership.  Organizational silos inside any business create natural barriers to innovation.  Some silos exist naturally and others are self-imposed.

 

 

Let’s look inside a typical IT organization, where you are likely to find three functional areas: Architecture, Engineering, and Operations.  These functions exist naturally inside most IT organizations.  Recently, I had an opportunity to talk about the inner workings of these functions inside an IT organization with Gregg Wyant, Intel IT CTO and Chief Architect.  These groups are designed to fulfill unique roles in the IT organization and to create expertise in their functional areas, maximizing effectiveness within their chartered goals (chart below).  However, if partnership (or at least an understanding of these different roles and goals) doesn’t exist across these groups, the credibility of the IT organization can be at risk and the value IT delivers to the business undermined.

 

[Image: IT2ITpartnership.jpg]

 

Imagine if the architecture group creates a vision that cannot be implemented by engineering or is cost-prohibitive in the manpower or solutions needed to implement it operationally.  IT’s costs would rise dramatically and/or the architecture design efforts would simply be wasted.  Or imagine if IT never challenged the status quo operational processes and just continued to operate “the way it has always been done”.  If this happened, we would never improve business processes.  Obviously there is a balance required here, and partnership across these disciplines can help an organization operate at a higher level of delivered business value and IT efficiency.  After completing a recent job coverage rotation himself, Gregg articulated to me the importance of IT-to-IT partnership across these disciplines and of cross-functional job rotations within IT.  The benefits help an IT organization maximize operational cost savings and service levels and react quickly to changing business and technical conditions, while balancing and prioritizing investments for the good of the overall business rather than optimizing any one individual discipline or organization.

 

 

If we look outside the walls of the IT organization, we can also see how silos can negatively affect the business – this brings me to the subject of Server Huggers. 

 

 

A Server Hugger is someone who currently has, or is demanding from IT, a physical server (or many servers) dedicated to their business function or department: they want to touch it, know it is theirs, and know that they don’t have to share it with anyone else (either in IT or another business unit).  Server Huggers can be individuals or business groups.  And in a world where most servers still run at an average of 5-10% utilization, it is easy to see how these silo-oriented “server huggers” can create inefficiency in the business.  To deploy virtualization (or accelerate the rate of virtualization adoption) inside any business, the business teams and IT often need to break down this siloed approach and find ways to deliver required or higher service levels while running on shared, virtualized hardware resources.

 

 

This was at the heart of a discussion I recently had around Intel IT’s strategy to accelerate virtualization inside our Office and Enterprise computing environments.  The first step in executing this strategy is to identify the target servers, document who owns them (if IT doesn’t – and in many cases we don’t), size the new environment, and convince the business owners that virtualizing is OK.  With demonstrated proof-of-concept virtualization ratios of up to 20:1 using the latest Intel Xeon 5500 based servers, our opportunity for savings is dramatic if we can rid our organization of server hugger behavior.  With top-down support from IT management and an environment of partnership already established with our business customers, I believe we have a clear path to success.
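As a back-of-the-envelope illustration of why that matters (my own made-up numbers; only the 20:1 ratio comes from the paragraph above):

```python
# Back-of-the-envelope sketch.  Only the 20:1 consolidation ratio comes from
# the paragraph above; the server count and per-server cost are assumptions.
physical_servers = 1000      # hypothetical count of lightly utilized servers
consolidation    = 20        # demonstrated PoC ratio, up to 20:1
cost_per_server  = 7000.0    # assumed annual power/cooling/support cost per server, USD

hosts_needed    = -(-physical_servers // consolidation)   # ceiling division -> 50 hosts
servers_retired = physical_servers - hosts_needed
annual_savings  = servers_retired * cost_per_server

print(f"{hosts_needed} virtualization hosts could replace {physical_servers} servers")
print(f"~${annual_savings:,.0f} in avoided annual cost (illustrative numbers only)")
```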

 

 

Partnerships inside Intel IT can be seen in how we create and measure business value with our business partners, how our own IT organization encourages IT rotation and how we strategically align our IT planning efforts with our business plans. 

 

 

It is clear to me that our Intel IT Strategic Imperative of Partnership is much more than management lip-service … it is at the heart of our IT operational philosophy … and for good reason.

 

Goodbye Silos!  Goodbye Server Huggers!  … we have no use for you anymore.

 

 

Chris Peters, Intel IT

Engage Intel experts in IT to IT discussions inside the IT@Intel community

Follow me on Twitter


More on the Cash Machines...

Posted by jimmywai Oct 11, 2009

Watch Diane Bryant, Intel CIO, talk about the cash machines in data centers in this press briefing. Haven't heard about the amazing cash machines for your data centers yet?! Better check it out now: Installing Cash Machines in your Data Center

IT@Intel is producing a series of four videos to highlight various Intel IT sustainability projects and the Intel IT experts who work on them.  The videos will be published on Intel’s IT@Intel site as well as on the IT@Intel Playlist within Intel’s YouTube channel.  I was privileged to be featured in the first video, which covers some of my personal expertise in home control and energy management as well as how I’m now using that experience to conduct proofs of concept in the office environment for Intel IT.  Here’s a link to the first video, and stay tuned for future videos in the series.

 

Here’s the first video in the series.

 

You can also check the IT@Intel Playlist on Intel’s YouTube Channel for this video series as well as other IT@Intel videos.

 

Feel free to ask if you have any questions about the first video.

 

-Mike Breton

At a recent event our CIO, Diane Bryant, talked about our continued plan to replace old servers in our Data Centers (http://www.tgdaily.com/content/view/44213/135/). Here is a summary of her key points:

  • Not replacing servers could have cost Intel $19 million due to high maintenance and cooling costs
  • Our plan to refresh old servers with Nehalem-based servers will save Intel $250 million over 8 years

 

If you are an IT manager looking at where you can find extra dollars in your IT budget to invest in new technology, new innovation, and new competitive capability for your organization, this must be good news for you! Moreover, if you do nothing, you are opening a hole in your IT budget.

 

Here are a recent white paper and a video we published discussing our server refresh strategy and how we are getting the cost benefit Diane Bryant shared:

Realizing Data Center Savings with an Accelerated Server Refresh Strategy

 

We have also developed a Server Refresh ROI estimator so you can calculate the amount of savings you can get from these cash machines:

http://www.intel.com/go/xeonestimator
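The estimator does the real math; just to show the shape of the calculation, here is a deliberately simplified sketch with hypothetical inputs (the $19 million and $250 million figures above come from Intel's own analysis, not from this toy example):

```python
# Deliberately simplified refresh-ROI sketch.  All inputs are hypothetical
# placeholders; use the Server Refresh ROI estimator linked above for real numbers.
old_servers          = 500
consolidation_ratio  = 8        # assumed old servers replaced per new server
old_annual_cost_each = 4000.0   # assumed annual power/cooling/maintenance per old server, USD
new_annual_cost_each = 3000.0   # assumed annual cost per new server, USD
new_server_price     = 6000.0   # assumed purchase price per new server, USD
years                = 4

new_servers   = -(-old_servers // consolidation_ratio)    # ceiling division
capital_cost  = new_servers * new_server_price
annual_saving = old_servers * old_annual_cost_each - new_servers * new_annual_cost_each
net_benefit   = annual_saving * years - capital_cost

print(f"Replace {old_servers} old servers with {new_servers} new ones; "
      f"net benefit over {years} years is roughly ${net_benefit:,.0f}")
```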

 

If you ain't satisfied, here is a video showing you how to use the estimator!

 

Go and install those cash machines into your data centers now! 8-)

Reading the news today (http://news.cnet.com/8301-13577_3-10368956-36.html), I saw a survey showing that 54% of workplaces block social networks completely. I'm glad to be in a company that is among the 10% which allow social-network use at work, so I can stay connected with my external partners and industry peers. It seems the debate on whether social media is an effective business tool or a productivity drain is still going on.

 

At Intel, we are embracing social media as a means to transform collaboration. We see the opportunity outweighing the potential risk. We are deploying a social media platform for our employees. You can find out more about our social media strategy from our recent white paper (Developing an Enterprise Social Computing Strategy) and the blogs from Laurie Buczek (Why Intel is investing in Social Computing and Intel's Enterprise Social Computing Strategy Revealed).

 

Personally, I think social media is going to repeat the history of email and instant messaging (IM) at work. A few years ago, there were skeptics about IM at work. Our CIO at that time, John Johnson, took the risk and deployed IM in Intel. Today, it's a productivity tool that I cannot live without. This morning I was troubleshooting a problem over IM with a colleague 16 hours away who was waiting to board a plane. I frequently talk to my colleagues around the world. They could be anywhere: in the office, at home, or on the road, when I need to connect with them. Whenever they pop up online, I can get hold of them. Without IM, life would be much more difficult and less productive.

 

I have been participating in an IT pilot program testing out Windows 7 in our environment. We have a Windows 7 group set up in our social media platform where we share BKMs (best known methods) and help each other. I got workarounds from the forum for issues I ran into with the beta version of the operating system. I also contribute my findings and solutions back to the group. Together we are creating a rich knowledge base for the Windows 7 program team. The pilot users around the world are helping each other and saving each of us a lot of time learning about the new OS, troubleshooting, and finding workarounds. This is an excellent success story for social media at work. (Find out about our Windows 7 experience here: The Value of PC Refresh with Microsoft Windows 7*)

 

What is your view of social media at work? Is your company putting up a strategy to adopt the technology?

I just read this paper authored by some of Intel's IT experts in the area of client management.  As an employee of Intel, I am now a huge fan of these rock stars.  Why?  Because they were able, through proactive IT management practices, to reduce blue screens across Intel's employee base by over 50% in the last year (Q2'08 to Q3'09).  There are now 3,000 fewer laptop blue screens than there were a year ago; that is a huge productivity advantage for Intel workers.

 

[Image: blue scree reduction q2'08 - q3'09.JPG]

Issue tracking, Pareto analysis, and the use of new management capabilities and technologies like Intel vPro Technology were at the center of these improvements.

 

Read about how Refael Mizrahi, Shachaf Levi, and Jeff Kilford made my life as an Intel employee a whole lot easier by Improving Client Stability with Proactive Problem Management.  You Rock!

 

Chris
