
I am a strong advocate of programs which establish and support security savvy end-users.  Security ‘common sense’ is neither common nor intuitive, yet plays a significant role in the protection of entire computing environments.  It is an important and frugal way of improving the overall security posture of an organization.  Although ongoing investment in behavioral security programs is valuable, it can be difficult to measure and justify.


The purpose of security awareness campaigns is to change or reinforce the community's behaviors in a manner that improves the overall state of cyber-defense.  Users can be an organization's best asset or its worst enemy.  Some would say the value is limited to social engineering attacks, but consider that well-informed users purposely stay aware of ever-changing security issues and are more apt to apply patches and updates, take precautions with new technologies, and use common security sense when dealing with less trustworthy aspects of computing than uninformed or careless users are.  In this manner, such behaviors extend well beyond the obvious social engineering attacks.


How should the value be measured?  The obvious approach may not be the best.  Take, for example, an employee training program instituted to improve general awareness, identify dangerous situations, recommend good practices, and communicate how to get assistance.  Typical metrics for training programs focus on saturation and recollection: they measure how many people, or what percentage, have completed the course.  More ambitious metrics may actually test absorption, administering a test at the end of the class and scoring users' knowledge.  These metrics track the progress of the project and have their place, but neither actually measures the security value.


Avoid investing in the wrong measures.  To estimate the value, as a factor of reducing security risk, the impact of what is being taught must be measured.  Did the number of successful attacks or the average loss per incident decrease?


If a behavioral security program succeeds, it changes the actions of users to be more secure.  The end result will manifest in a number of measurable ways:

  1. A reduction in the number of successful attacks.  Attack attempts may not decrease, but due to better decisions on the part of users, fewer will succeed.

  2. A reduction in losses for those attacks which do succeed.  Security-smart users play a key role in detecting and rapidly responding to attacks, which can reduce the overall losses experienced.  Measuring the average loss per incident is a good tactic for recognizing this underlying value.

  3. A change in the type of attacks targeted at the organization.  When attackers find a winning method, they stick with it.  When those methods become ineffective, attackers specifically targeting the organization must adapt.  A security aware workforce can change the game dynamic by forcing threats to evolve their attack methods.  These adaptations can be measured and more importantly, communicated to users in a continuous feedback cycle to keep them informed of emerging attack vectors, thereby sustaining the security value proposition.
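The first two outcomes above can be tracked with very simple arithmetic over incident records. Here is a minimal sketch, assuming a hypothetical incident log with `succeeded` and `loss` fields (the field names and numbers are illustrative, not from any real system):

```python
# Sketch: compute the two metrics suggested above (attack success rate and
# average loss per successful incident) before and after an awareness campaign.
from statistics import mean

def campaign_metrics(incidents):
    """incidents: list of dicts with 'succeeded' (bool) and 'loss' (float, USD)."""
    attempts = len(incidents)
    successes = [i for i in incidents if i["succeeded"]]
    return {
        "attempts": attempts,
        "success_rate": len(successes) / attempts if attempts else 0.0,
        "avg_loss_per_incident": mean(i["loss"] for i in successes) if successes else 0.0,
    }

# Made-up incident data for illustration:
before = [{"succeeded": True, "loss": 12000.0}, {"succeeded": True, "loss": 8000.0},
          {"succeeded": False, "loss": 0.0}]
after = [{"succeeded": False, "loss": 0.0}, {"succeeded": True, "loss": 3000.0},
         {"succeeded": False, "loss": 0.0}]

# Attempts stay constant, but the success rate and average loss both drop:
print(campaign_metrics(before))
print(campaign_metrics(after))
```

Note that the attempt count is the same in both periods, which is exactly the point: value shows up in the success rate and loss figures, not in the volume of attacks.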

A widespread behavioral security awareness campaign, to establish and maintain security-experienced users, is both valuable and important.  It is not a silver bullet, but it is one of the most powerful components of a defense-in-depth security strategy, as users can play a role in prediction, prevention, detection, and response.  Security-savvy users are the core of behavioral security.  Measure the true success and value by understanding their impact.

Recently, I've helped to define some transformation rules for the raw data as collected by our storage management system (see my previous post).


With this system, we collect tens of thousands of "facts" about our storage per day. These facts have to be analyzed in some way to support business-specific decisions.  They can't be processed on demand every time, as doing so would significantly delay the analyst's report.


To make it successful:

  • Someone has to understand the business topic (storage, in our case) to define what type of decisions should be made. For example: should we order new storage capacity based on current utilization, staleness, and trend?
  • Someone has to understand the limitations and capabilities of the existing collection system - what types of data are collected, how and how often they are collected, how data quality can be assessed, etc. For example, the data is collected once a day, and usage is collected in megabytes, per filesystem.
  • And now one has to define how to bridge the gap between these two views within our BI infrastructure. The analysis is not always straightforward. The transformation rules should result in a significant reduction of the original data set - but this reduction should not be too drastic. Which details are unrelated to the specific business question and can be dropped?
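To make the "reduction without losing the answer" trade-off concrete, here is an illustrative sketch of one such transformation rule. It assumes a raw fact schema of (date, organization, filesystem, used MB) - an assumption for the example, not our actual schema - and reduces daily per-filesystem facts to a per-organization monthly peak, dropping the filesystem detail that a capacity-ordering question does not need:

```python
# Sketch of a transformation rule: reduce raw daily per-filesystem usage
# facts to a per-organization monthly summary suitable for capacity decisions.
from collections import defaultdict

def monthly_org_usage(facts):
    """facts: iterable of (date 'YYYY-MM-DD', org, filesystem, used_mb).
    Returns {(org, 'YYYY-MM'): peak daily total used_mb in that month}."""
    daily = defaultdict(int)                     # (org, month, day) -> total MB
    for date, org, fs, used_mb in facts:
        month, day = date[:7], date[8:]
        daily[(org, month, day)] += used_mb      # sum across filesystems; fs detail dropped
    summary = defaultdict(int)
    for (org, month, day), total in daily.items():
        summary[(org, month)] = max(summary[(org, month)], total)  # keep the monthly peak
    return dict(summary)

# Made-up facts for illustration:
facts = [("2009-12-01", "cpu_design", "/fs1", 500),
         ("2009-12-01", "cpu_design", "/fs2", 300),
         ("2009-12-02", "cpu_design", "/fs1", 900)]
print(monthly_org_usage(facts))  # {('cpu_design', '2009-12'): 900}
```

The rule answers "how much does this organization peak at per month?" with a few rows instead of thousands of facts - but it can no longer answer per-filesystem questions, which is exactly the kind of trade-off the bullet above describes.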

Now, as there are always multiple business questions which could be asked - should the transformation model strive to be as extensive as possible, to address all of them, including those yet to be defined? Or would it be easier to maintain multiple separate transformation models - one per question? The latter may require extra transformation resources - but the actual reports would be much faster. And, ideally - how could one provide "ad hoc" query capability on a data set as granular as possible, which would still complete in some reasonable period of time?
Unfortunately, it's not always possible to find someone who can equally represent all the views above - in that case, defining the transformation rules may become extremely painful and require too many iterations.


I wonder if others have similar experience - do you?

Diane Bryant, our CIO, recently did an interview with ZDNet Asia. She covered a wide variety of topics, including her job at Intel and hot topics among her CIO peers. Here are some key points covered; or head over to the article to read the full interview.


  • CIO's job at Intel IT

  • Increased dependency between business and IT

  • Tough economy... meant we needed to be much more disciplined in presenting ROI to CFO to get funding

  • Justified the replacement of 20,000 servers by saving Intel US$19 million

  • Popular topics among CIOs: consumerization, videoconferencing, data center efficiency

  • BYOC - Bring Your Own Computer to work

  • Top project for 2010: Windows 7 deployment


How did you see 2009 and what is your top project for 2010? Please share your thoughts below.




I don't want it! I want to bring my own Mac to work!


Consumerization, client virtualization and dynamic virtual client are buzzwords that I have increasingly heard, both in internal discussions here in Intel IT and in sharing with IT professionals in peer organizations. All of these seem to suggest that client virtualization is going mainstream in 2010, and it is also on Gartner's top-ten list, as Chris noted in What is IT Thinking?. Perhaps it's not a coincidence that we have published two whitepapers (see below) on this topic within the last few weeks, and our CIO, Diane Bryant, just shared her view of BYOC (Bring Your Own Computer) in a recent interview.


So here is my wish for 2010: I wish my Mac were my single primary PC for work, home and travel!


What's your PC wish for 2010?


Let users access their applications and information from any device, anywhere, anytime: Enabling Device-Independent Mobility with Dynamic Virtual Clients
Integrating Mac into our Windows environment: Using Virtualization to Integrate Mac OS X* into PC-centric Environments

Our R&D infrastructure depends heavily on storage, which exceeds 10 PB today.
The vast majority of this data is kept on centralized file servers accessible via the Network File System (NFS).
We implement a global name space to provide a unified view of all this storage from thousands of compute servers - so the same NFS filesystem gets mounted under the same path on any compute server.
Overall, we are dealing with tens of thousands of such filesystems across the company. This allows us to better control capacity allocation, ease load balancing and improve allocation scalability across multiple file servers. The name space decouples the physical location of a filesystem from the logical path used to access the data, so the data can be migrated between file servers.
In the last few years, we've developed an application which provides self-service capabilities to our design community to manage this large storage capacity. Instead of requests to the HelpDesk, customers can instantly provision additional storage capacity, reclaim unused space, get precise reports about usage, and more.
It's possible to define total allocation limits for various organizational units, so one project can't consume all configured storage capacity.
Policies allow us to define self-healing scenarios. For example, the application can enforce automatic reclamation of an entire filesystem, or part of it, based on various policies.
This system is designed in such a way that we can utilize different storage solutions from various vendors, and still use the same interface for storage administration.
This application is integrated with our internal batch scheduling system, as well as with various design flows. As an example, a running batch job may find that it is running out of disk space and allocate additional capacity on demand through this application.
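The quota-enforced, on-demand allocation flow described above can be sketched in a few lines. The function and organization names here are purely illustrative - the real self-service API is internal and not described in the post:

```python
# Hypothetical sketch of on-demand storage allocation with per-organization
# limits, as a batch job running low on disk space might invoke it.

QUOTA_MB = {"cpu_design": 1_000_000}       # configured allocation limit per org
ALLOCATED_MB = {"cpu_design": 950_000}     # capacity already granted

def allocate_storage(org, request_mb):
    """Grant extra capacity only if the org stays within its configured limit."""
    if ALLOCATED_MB.get(org, 0) + request_mb > QUOTA_MB.get(org, 0):
        return False                        # over quota: caller must reclaim space or wait
    ALLOCATED_MB[org] = ALLOCATED_MB.get(org, 0) + request_mb
    return True

# A batch job asks for 40 GB more, twice; the second request would exceed the limit:
print(allocate_storage("cpu_design", 40_000))   # True
print(allocate_storage("cpu_design", 40_000))   # False
```

The key property is that the self-service path never lets one project consume all configured capacity - the quota check happens on every request, not just at initial provisioning.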


Are you dealing with similar challenges? How do you manage your storage?


Till the next post,
   Gregory Touretsky

On Wed 12/16, I will join Matt Brooks from Dell IT to talk about several topics surrounding the proactive use of technology to aid in the creation of business value.  We will talk about what IT is facing in 2010 - what business wants from IT - what technologies are shaping our strategies - and what lessons we can learn from the past that will help us tomorrow. You can read the full abstract below.


Mark Your Calendars. We hope that you join us Wednesday 12/16 from 11-12 PST.  Register Here



Date: Wednesday, December 16, 2009
11:00 AM PT / 2:00 PM ET
1 hour




2010 promises to be a watershed year for business, with the economy turning around and a plethora of powerful technologies ready to be exploited:

  • Thinking about cloud computing for collaboration or business intelligence?
  • How about deploying virtualization for availability?
  • Need to roll out applications for a mobile workforce or customers with mobile devices?
  • Or perhaps you're looking to consolidate platforms and cut energy consumption in the data center?

If your New Year's resolution calls for improving how your IT infrastructure meets its service level agreements, join us for an interactive webinar on developing a server refresh strategy that supports business agility while reducing energy usage.


Experts from Dell and Intel will discuss their experiences consolidating and virtualizing on the latest generation of servers. Gain insight into why refreshing servers sooner rather than later generates improved ROI, productivity and performance across the enterprise.


We'll address hot technologies like cloud computing and explore how to best deploy them through virtualized servers and storage. Dell and Intel will shed light on the cost benefits of refreshing servers and why it made sense for their internal departments to upgrade.


Learn why it is a smart financial decision to start refreshing servers early in the year before business demands and workloads escalate. Don't miss this opportunity to gain insights into enabling your organization to become a more efficient enterprise.

I spent the past couple of months working as the Program Manager of our internal Social Computing Program. Laurie Buczek just wrote "All I Want For Christmas is my E2.0". In her post, she reviews 2009 and the state of the program. I want to talk about my experiences filling in for the roughly two months she was enjoying time away from Intel. My normal job is managing a number of program managers and systems analysts for Productivity and Collaboration for our internal users. I was asked to try something different, for a couple of reasons. First, I have a passion for social computing and feel that it has strong use cases inside an enterprise. I have been blogging internally for just over three years. I have learned so much from my posts that I decided to continue my passion externally. Second, I needed a break from being a people manager. Not that I don't enjoy managing people, but I have been doing it for over 26 years. Stepping away from it for a short time really helped me recharge my batteries.  I saw this as a huge opportunity for me (and hopefully for the program team).


My temporary coverage time is now over. When I complete anything, I like to reflect on the experience, and below are some of my key learnings:

  • There are some extremely passionate people pushing for E2.0 capabilities within the company. Besides the program team, we have a very strong subset of users who are very passionate. They not only ask for capabilities but are very willing to help. Our internal community managers are a group of volunteers who not only want to evangelize usage but also help administer the guidelines. And they are not the only passionate ones: we have many users who are stretching the limits, adding requirements and helping to develop strong use cases. Without these folks, I think the program would be just another set of tools and/or capabilities that would be available, but not used.
  • Lots of folks want to participate, but just need some help to get started. I spent much of my time meeting with individuals and teams, discussing what E2.0 is and how it could potentially help. I wish I had a nickel for every time I heard "What is Social Computing?" or "How do I get started?"  Not having participated in many of those sessions in the past, I was somewhat surprised by the lack of understanding.
  • Please help to improve communications and collaboration across the organization. Many of the teams within Intel are dispersed around the globe. Many I met with face the challenge of spanning all geographies, where meeting together is very difficult. Leaders were looking for ways to get their message out to the organization. E2.0 capabilities can help with those issues and concerns, but it takes some effort and change in order to be effective. As for communications - just about every team already has email blast communications, newsletters and web sites. I would ask the simple question - "Are those effective today - really?" Adding E2.0 capabilities is just another avenue; if the message is not getting through - look at the message. I enjoyed looking for and watching the ah-ha moments!
  • People just want to belong. New employees come into a company the size of Intel and are overwhelmed by its size, globally dispersed teams and all the new things they need to learn. There is time to ramp up, but speed of ramp is watched as well. If you are lucky, you may know some folks or your team is very willing to share. For the most part, you are left to fend for yourself. Making effective connections is key. E2.0 capabilities have that ability! For me, I have been with Intel for 27+ years - I have a network, just not a digitally connected one. I need to get reconnected to my network officially, so that the folks who work for me or know me can use my network.


My temporary assignment lived up to its expectations: challenging, plenty of discussions and demonstrations, keeping the focus on releasing more capabilities, and meeting new people. The hunger among internal end users for E2.0 capabilities is large. Demand for additional demonstrations is still high. Our internal Social Computing program has gained traction, but there is still much to do. For me, I will continue to participate as one of those extremely passionate people who can improve communications and collaboration within Intel.

    I attended the IGT2009 Cloud Summit last week. There were 50 speakers, a few panels and several workshops.
    Ric Telford from IBM made an interesting point about Cloud Scale Economics, which supports our findings regarding higher TCO with an external cloud (see my previous post). He mentioned that even if an enterprise is somewhat less efficient than the cloud provider, the comparison is still between its own internal cost and the provider's price, which includes the provider's margin. If our compute capacity is comparable to the cloud vendor's, and we are efficient enough, we may still prefer internal infrastructure to the external one.
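A toy calculation makes the point concrete: the enterprise compares its internal cost against the provider's price, which includes the provider's margin, so internal infrastructure can win even when it is somewhat less efficient. All numbers below are made up for illustration:

```python
# Illustration of the cost-vs-price comparison described above.
provider_cost_per_core_hour = 0.08     # what it costs the cloud vendor (hypothetical)
provider_margin = 0.30                 # vendor's markup (hypothetical)
provider_price = provider_cost_per_core_hour * (1 + provider_margin)

internal_inefficiency = 0.20           # we are 20% less efficient than the vendor
internal_cost = provider_cost_per_core_hour * (1 + internal_inefficiency)

# Internal infrastructure wins whenever our inefficiency is smaller than
# the provider's margin:
print(round(provider_price, 4))        # 0.104
print(round(internal_cost, 4))         # 0.096
print(internal_cost < provider_price)  # True
```

In other words, as long as the efficiency gap stays below the provider's margin, the comparison favors internal capacity - which is exactly why a comparable-scale enterprise may keep workloads in-house.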
    Still, the appearance of public cloud solutions such as Amazon Web Services and others, with a clear price model, certainly provides a great benchmark for internal IT shops in various companies, and results in increased focus on internal efficiency.
    Other talks from Sun, NYSE, Amazon, Microsoft and many others were also interesting. I liked the statement made by Liam Lynch, eBay Chief Security Strategist: "Cloud is overhyped in the short term, but underhyped in the long term".
    Another interesting idea was a comparison of the commonly used platforms of today (Windows/Linux/...) with those of tomorrow (MS Azure/AWS/GoogleApps). Which is going to be the dominant one?

Do you think the proliferation of cloud concepts will change our lives in 10 years?  How?


Till the next post,

    Gregory Touretsky

I am fresh off sabbatical and back in the trenches implementing 2.0 technologies within our enterprise. This year has been crazy busy.  It was our big year of deploying the first phases of our multi-phased approach.  So how did we do?  Well… the good news is that I don't think I will get coal in my Christmas stocking; however, I am only sitting on ½ a leg of a three-legged stool.  We have done a ton of work, but we still have a long way to go.  As I laid out in Intel's Enterprise Social Computing Strategy Revealed, Intel has been dabbling internally with web 2.0 since 2004.  We made a concerted decision to take the momentum and learning from the grassroots efforts and drive a globally deployed framework for social computing inside Intel.  It is no small task.  Not only do we have to evaluate and deploy solutions, but we also have to address governance and security concerns, provide quantifiable ROI, capture use cases, and tackle transition change management one person and one team at a time.  Here are my reflections on 2009:


Got Community?  We successfully completed phase one and deployed the new community framework that now includes a blog, forum, groups (a.k.a. communities) and limited professional networking.  We also entered into pilot with our new enterprise wiki.  As we close the year, we will be upgrading the community platform to resolve some usability challenges, give the search a huge boost and enrich the current capabilities.  We are also expanding the wiki pilot while we continue to work on a migration strategy to get over 200+ wikis rolled over.  Lastly, we should have a vendor recommendation for a new on demand video solution (think internal You Tube).  I am happy to say that the individual pieces are coming together.


We built it and people are coming:  We have been shocked and awed by the adoption & usage rate of the new technologies.  Since the launch of the first phase in March 2009, we have tripled and maintained traffic; 12% of the workforce has enhanced their profile; and we have over 800 groups (communities).  In a survey conducted after 3 months, we found that 84% of the group owners felt they had been successful in achieving their business objectives using the new platform.  About 57% of the groups formed were trying to improve communication & collaboration amongst a globally dispersed team - and 53% felt they achieved improvement!


Stranded on an island:  A critical underpinning for success is to integrate the various 2.0 capabilities together and also integrate the pieces into our “traditional” office computing solutions.  It is what I call integrated into the workflow or how people get work done.  We completed integration into Enterprise Search, but we haven’t begun the complex task of unifying profiles, tagging, search, activity streams amongst the 2.0 tools, let alone, insertion into the office computing tools.  Without the “in flow” abilities, we will continue to see end user confusion about which “tool” to use and reach an adoption limit.  If we strand the 2.0 technologies on an island, they will be out of sight and out of mind.  It is a top priority to fix in 2010.


Clean up on Aisle 12:  As I mentioned above, we have a lot of grassroots efforts.  One of the worst is wikis gone wild.  We have at least 200 unique instances of various wikis deployed inside Intel.  Now that we are getting the new global solution up and running, we have a daunting task of migration.  Unfortunately, we have found that a "clean" migration doesn't exist, and business customers aren't willing to resource the move themselves.  It appears that IT might have to pay a "tax" in 2010 and undertake some migration work to consolidate the various niche solutions.


Where’s the ROI?  My entire year has been a quest to find quantifiable ROI. I swear I have nightmares with the "Where's the Beef" lady cackling out a "Where's the ROI?!"  In the fall, we ended a joint effort with finance to look under every stone and quantify what we could.  Finance agreed - it ain’t easy.  Where we did quickly find quantifiable business value was during an ideation proof of concept: ideas that are discovered and turned into action have produced a dollarized return of business value.  Where we are finding it tougher is in quantifying improvements in team collaboration, communication, individual productivity and the softer side of enterprise 2.0.  We aren’t off the hook, but there is now a better understanding of the challenges of ROI for enterprise 2.0.


Governance:  We spent a lot of time working through governance, legal and security concerns.  A joint steering committee was formed with HR and IT.  We have updated internal policies and tackled risk assessments for various features of 2.0 technologies.  We have a solid foundation of governance for social computing which is critical to ensure we are doing the right things right.


That is my 2009 in a nutshell.  While I am still singing “all I want for Christmas is my E2.0”, I am happy to say that we are beginning to reap the benefits of a TON of hard work.  I have a whole team of highly dedicated people on my program that deserve all the credit for our success to date.  Now on to 2010…..Happy New Year!


As part of our commitment to IT sustainability, Intel IT is reducing office power consumption by evaluating and implementing enabling technologies, changing business practices, and educating employees to change usage behaviors. Reducing PC energy use is a major component of our efforts to reduce office energy consumption. We have learned that raising employee awareness can be highly effective in reducing office energy use.



Check out all the details in a recent white paper I co-authored, Reducing Energy Use in Offices to Increase IT Sustainability.



Let me know if you have any questions.



-Mike Breton

I'm working with my colleagues in The Server Room to organize an Intel Data Center Live Chat on Open Port. This is the first IT@Intel Live Chat on Open Port. I'm very excited about it. Come and join us!


In the Live Chat, we will be focusing on industry hot topics in the data center, including IT Data Center Strategies and Server Refresh.  Our IT experts have recently published a white paper on Intel IT's data center strategies and our approach to improving efficiency.  They will attend this Live Chat to discuss our strategy, respond to your questions and listen to your views.


The Live Chat will be hosted at 10am - 12pm PST on December 8th, 2009 at The Server Room.


Please join us for the Live Chat and ask questions directly of our IT experts. At the Live Chat, we will give away a free copy of the "Energy Efficiency for Information Technology" book to all attendees. Details about the book giveaway will be announced at the chat session.


Mark your calendar now, or download the event into your calendar directly.


Speakers at the Live Chat
Shesha Krishnapura, Senior Principal Engineer, Intel IT
Shesha is a senior principal engineer in Intel Platform and Design Capability Engineering group responsible for the development of High Performance Computing (HPC) solutions for Design, Tapeout and optimal platform foundation for Enterprise Computing. Additionally, Shesha is leading the enablement of Intel Architecture (IA) based solutions in Electronic Design Automation (EDA) industry and driving EDA application vendors to bring just-in-time IA optimized solutions for silicon design engineers.


Ananth Sankaranarayanan, Technical Program Manager, Intel IT
Ananth is a Technical Program Manager within Intel IT Core Systems Engineering. He is responsible for High Performance Computing (HPC) and Platform Engineering Programs within Intel. Ananth chairs the Technical Review Committee to recommend Servers, Storage, Backup/Recovery and Virtualization solutions for Intel Silicon Design, Office, Manufacturing and Enterprise computing environments.


Brad Ellison, Data Center Architect, Intel IT
baellison is the Data Center Architect for the IT-OPS Data Center Services team at Intel.  Brad is a former Board member of the Data Center Institute and is a charter member of the Infrastructure Executive Board Data Center Operations Council as well as ICEX Knowledge Exchange Program Data Center Excellence Practice.


Chris Peters, Strategic Marketing Manager, Intel IT
ChrisPeters is a Strategic Marketing Manager with Intel IT’s Industry Engagement Group.  Utilizing an integral knowledge of Intel IT operations, Chris works closely with strategic IT decision makers world-wide to help them optimize their IT infrastructure to deliver better business value.


Ed Groden, Product Marketing Manager
egroden is a Product Marketing Manager in the Server Product Group at Intel.  Ed has been with Intel for over 9 years, and has been involved with the Intel Xeon® 5500 processor since the inception of the program.  He works closely with OEM accounts world-wide to help integrate Intel technologies across their server and workstation products.


Ken Lloyd, Sales Engineer
K_Lloyd is a sales engineer working with the Intel and fellow-traveler field in support of enterprise solutions built on Intel products and technologies. Ken has a broad background in information technology, systems integration, and solutions architecture.


Dave Hill, Product Marketing Engineer
dave_hill is a 15-year veteran at Intel, currently in the Data Center Group working with customers and server vendors on how to optimize their servers for the best energy-efficient performance.  He holds a bachelor's degree in electrical engineering and has held various positions at Intel in test engineering, technical marketing and product marketing.
