
One of the points emphasized in project management is to document key learnings from a project as an ongoing exercise. At Intel, we hold post-implementation reviews (PIRs) after each project, where we document what went well and what we could have done better, among other things. The goal is to learn from our work and apply those learnings to our new projects.

Is this process really working? Although the practice of holding PIRs is well established, learning from them seems to be missing. How many of us search for old projects and review these learnings before we start new ones?

In the external world, services such as digg* are examples of social bookmarking tools, which help people find related information using tags. What if we did the same within the enterprise? Project managers and team members could start tagging their projects with relevant labels. Then, when a new project is spawned, employees could search for related projects by tag and reuse whatever is applicable.

Technology-wise, this is an easy solution to implement using existing Enterprise 2.0 tools. The challenge is user discipline: unless tagging projects becomes a habit, it is very difficult to make social bookmarking a reliable way to find old projects. Though we have started embracing some Enterprise 2.0 tools in our project management, social bookmarking does not yet have a place.

We would be interested in hearing from you. Do you use social bookmarking in your project management activities? If so, how difficult was it to get employees to adopt the practice?

*Names and brands may be claimed as the property of others.

Whether you run a small or large enterprise, finding the proper level of governance can be a bit daunting. In this context, governance refers to the level of decision management needed to keep the enterprise running as close to optimal as possible.


Despite that intent, many people see governance as a burden on the way business is run. An unnecessary step.


I can only speak for myself, but most of the governance we do is needed to keep the enterprise running safely and with minimal problems. I would argue that without some level of control our company would fall into chaos: our systems would not work well together, we would spend more money on rework, and people would be far less productive.


In a word, we would "fail".


The challenge is finding the sweet spot: enough governance to ensure you make the best decisions, without so much that you bring the system to a grinding halt.


Do you draw up a master plan and compare every decision against it? If you lack the time (or people) for the master blueprint, do you instead focus on key items and hope for the best? Perhaps you use outside specialists and industry experts to influence where you go?


What if you are the expert that others look to for guidance?


These are all part of the larger balancing act of managing the architectural constructs of enterprise IT. You need to put in place the right talent, decide on the master plan (and adjust it often) and hold everyone accountable to it. Measure and identify your successes while adjusting for your failures. Use specialists whether inside or outside your company; recognize talent regardless of where it is.


Be flexible, be innovative and above all put eyes on it. It is only with those eyes that you will find you are making the best decisions on how you run your enterprise.

Increasingly, organizations are conducting more and more of their business online. In fact, I was reading some McKinsey research today, Measuring the Net's growth dividend, which reports that in mature countries the internet is responsible for up to 20% of GDP (gross domestic product).


Intel actively reaches out to consumers, enterprise customers and business partners worldwide using web sites and external social media platforms to conduct business and drive collaborative innovation - and keeping this environment safe falls to the Intel IT organization.


Intel’s business groups use hundreds of Web sites and third-party solutions—including social media platforms—to communicate and conduct business with customers and business partners. Collectively, these externally facing Intel-branded solutions are known as Intel’s external presence.


Until 2006, these web sites proliferated rapidly in response to business needs, without centralized oversight. Given this growth, we established the Intel Secure External Presence (ISEP) program inside our IT organization to manage the risk associated with Intel’s external presence.


By January 2011, we had completed the ISEP security review process for more than 750 new projects and we conduct daily vulnerability scans on all of our externally facing web sites—more than 450 in total—to maintain a high compliance level against a vulnerability assessment standard.


Overall, ISEP has effectively helped secure Intel-branded externally facing web sites and solutions, resulting in a significant reduction in risk for Intel's external presence. This enterprise security white paper shares the history, evolution, and next-stage challenges of the program, which ensures the Intel IT organization is able to secure our web-based solutions as a means to enable our business growth.



Intel IT manages 91 data centers that are home to about 75,000 servers supporting our core Silicon Design, Manufacturing, and Office/Enterprise/Services business functions.  In 1998 we embarked on a multi-year strategy to migrate our RISC systems to Intel architecture, which saved Intel over $1.4B; I will be publishing a white paper on this topic in the next few days.  Beginning in 2006, we started the next major phase of our data center strategy by implementing proactive server refresh, data center consolidation, and investment in HPC.  These actions have put us on track to deliver $650M of value for Intel's business by 2014.  All of this was accomplished even as our compute, storage, and networking demands grow rapidly each year (45%, 35%, and 53% year over year, respectively).


Through a variety of proactive investments and innovations, including adopting the latest generation of Intel® Xeon® processors and deploying advanced storage, networking, and facilities innovations, we have increased the performance of our data centers by 2.5x while reducing our capital investments by 65% over the last four years, even as the number of data centers was reduced from 150 to 91. In addition, we have reduced the number of servers from 100,000 to 75,000, driven primarily by: (1) our proactive four-year server refresh strategy, (2) accelerating the use and deployment of virtualization as we build an enterprise private cloud computing environment, and (3) targeted investments in manufacturing infrastructure aimed at improving factory automation and efficiency.


While these investments have delivered tremendous IT efficiency and scale, we have realized that horizontal component optimization of servers, network, storage and applications by themselves are not enough.  We saw both opportunity and necessity to customize solutions to optimize the lines of business we support.


We group our data center infrastructure into five verticals that represent our main computing solution areas, referred to as DOMES: Design, Office, Manufacturing, Enterprise, and Services.  Silicon design computing requires the most servers -- about 70% -- with the remaining 30% supporting Office, Enterprise, Manufacturing, and Services applications.


We deploy unique solutions for each of these areas to deliver unique business value to Intel:

  • Design: Large, shared distributed grid computing solution complemented with the use and deployment of High Performance Computing solutions to support quicker design and validation of increasingly complex and powerful Intel® processors and technologies
  • Office/Enterprise: In transition from a traditional enterprise infrastructure to implementation of an enterprise private cloud, built on a highly virtualized infrastructure and on-demand self-service capability improving cost efficiency and business agility.
  • Manufacturing: Deployment of dedicated, highly reliable Data Center infrastructure for Intel’s manufacturing facilities with a focus on IT innovation that improves factory automation and supply chain planning efficiency.
  • Services: A new approach started in 2009 to support Intel’s new external service businesses such as the online Intel AppUpSM center for netbook applications.

Please take a moment and re-read this blog title. Internalize it. Ask yourself why and how?

I'll wait. <pause for effect>


This may seem like a silly exercise, but I wanted you to take some time to keep yourself from filling your mind with the reasons why you disagree with the statement, excuses like:

  1. We will be using the same workforce
  2. We don't have anyone with mobile experience
  3. Mobile development is just another language in their toolbox
  4. Any developer can work in the mobile space


Let me explain a little more why I think mobile developers are different from classic enterprise developers. I'm not saying that the physical person is different; rather, the style of approach and the framing they bring to a project are definitely different. They need to be different, or it would be the same. That's in addition to the points in Be smart about moving your enterprise into the mobile space about moving content onto a mobile device. Due to its nature, mobile development is different for the following reasons.



Enterprise developers are spoiled. Most applications land in an environment that is isolated by firewalls, corporate malware protection, and multiple layers of validation and authentication. Even when some innocuous item explodes into a many-armed monster on the inside, the damage is short-lived and minimal in impact. We write very few internal validation routines and depend greatly on an ever-present infrastructure. In the mobile space this does not exist: these devices are in the wild, and oftentimes they even belong to the employee, who manages their own application stack on the device.


What needs to change?

  • Form-level validation before submission
    • Re-check submission on the server side
  • Multiple levels of encrypted authentication
  • Use of encryption devices/containers
  • Manage encryption keys - do not hard code anything
  • Understand pattern recognition; protect when an exploit is detected
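A minimal sketch of the first two items in the list, the form-level check and the server-side re-check: the same rules run on the client for fast feedback and are re-run on the server, which never trusts the device. The field names and formats here are illustrative assumptions, not any real schema.

```python
import re

# Illustrative validation rules; the field names and formats are assumptions.
RULES = {
    "employee_id": re.compile(r"^\d{8}$"),
    "email":       re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def validate(form: dict) -> list:
    """Return the names of fields that fail validation."""
    errors = []
    for field, pattern in RULES.items():
        if not pattern.match(form.get(field, "")):
            errors.append(field)
    return errors

def handle_submission(form: dict) -> str:
    # Server side: re-run the SAME checks the client form ran,
    # since a device in the wild can bypass client-side validation.
    errors = validate(form)
    if errors:
        return "rejected: " + ", ".join(sorted(errors))
    return "accepted"
```

The point of sharing one rule set is that the client check is a convenience, while the server check is the actual control.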


Know your data

We like to reuse and connect different data stores in order to deliver all the data that will be needed. In the enterprise, there are different layers of data protection through classification. In the wild, outside the protection of our enclaves and firewalls, we want a higher level of control over what is transported and exposed in order to protect our intellectual property. Data will be needed for mobile applications, but you need to be very smart and decisive about what you push out there.


What needs to change?

  • Establish a parallel data classification for mobile
  • Determine where your threshold is to expose
  • Put in place controls if going above the threshold (encryption, VPN, etc.)
  • Remove data at rest to avoid loss (sensitive data)
  • Minimize cache
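The threshold idea above can be sketched as a simple gate: data at or below a mobile exposure threshold ships as-is, anything above it requires extra controls. The class names, their ordering, and the threshold are assumptions for illustration, not Intel's actual classification scheme.

```python
# Assumed classification ladder, lowest sensitivity first.
CLASSES = ["public", "internal", "confidential", "restricted"]
MOBILE_THRESHOLD = "internal"   # assumed highest class allowed without extras

def controls_required(classification: str) -> list:
    """Return the extra controls needed before pushing this data to mobile."""
    if CLASSES.index(classification) <= CLASSES.index(MOBILE_THRESHOLD):
        return []                              # at/below threshold: no extras
    controls = ["encryption", "vpn"]           # above threshold: protect transport
    if classification == "restricted":
        controls.append("no-data-at-rest")     # most sensitive: never persist
    return controls
```

A gate like this makes the "determine your threshold" decision explicit and auditable instead of leaving it to each application team.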


Screen size

Ten years ago, developers working on the web or most desktop applications were building for a very small screen. As monitors became cheaper and the underlying hardware/software stack matured, screen resolutions grew. As a result, most enterprise applications that target a desktop or laptop client (web or native) do so with a screen size that exceeds most, if not all, mobile devices.


What needs to change?

  • Auto-sizing GUI
  • Segregate content
  • One function, one screen


Overly complex navigation

We are creatures of habit, and over the last decade our designs have been driven by the needs of our customers. With habit comes a tendency to implement overly complex and bulky navigation. Too many choices means too much to display on a mobile device screen. We must learn to simplify. We need to compartmentalize what we deliver.


What needs to change?

  • Reduce displayed navigation options (horizontal)
  • Reduce the number of options in drop-downs (vertical)
  • Only provide those which will be needed for the immediate function/feature


Bloated applications

The mobile space is one of limited horsepower and limited real estate. With that in mind, you also need to understand how you migrate over (see Be smart about moving your enterprise into the mobile space). Break your solution down into discrete options based on your consumers' use cases. Minimize how much "extra" you place into the application because of assumed need or false opportunity.


What needs to change?

  • Create use-cases; validate there is opportunity in the mobile device
  • Partition based on functionality
  • Target the right consumers with the right application at the right time
  • Keep out fluff
  • Validate with consumers (before delivery)



Jack-of-all-trades applications

The Jack-of-all-trades is a nice person to meet, but not necessarily a nice application to deliver. If you accept that bloated applications have no place on mobile devices, you can then think about targeting certain features, functions, and groups, and focus.


What needs to change?

  • Review "Bloated applications" above
  • Focus, focus, focus


Large datasets

When you deliver a native application into the mobile space, you get the flexibility of working with some local storage. But each device is different, and you shouldn't treat your consumers' devices as your storage space. Think of it this way: do you want someone using all of your available storage simply to hold content you may never use or see value in?


What needs to change?

  • Consider keeping content in the cloud somewhere
  • Cache content to minimize content delays for the user
  • For the enterprise space, consider a content delivery network (CDN) as a faster middle ground to accelerate cache delivery
  • Deliver a picture instead of a large dataset
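The "keep it in the cloud, cache it on the device" item can be sketched as a tiny time-to-live cache. `fetch` stands in for the real cloud or CDN call; the TTL value is an illustrative assumption.

```python
import time

class TTLCache:
    """Minimal on-device cache: serve fresh entries locally, refetch stale ones."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}                          # key -> (expiry_time, value)

    def get(self, key, fetch):
        now = time.monotonic()
        entry = self.store.get(key)
        if entry and entry[0] > now:
            return entry[1]                      # fresh: no network round trip
        value = fetch(key)                       # stale or missing: go to cloud
        self.store[key] = (now + self.ttl, value)
        return value
```

The device only ever holds a small, expiring window of content; the authoritative dataset stays server-side.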


Complex processing

Although many of our devices now have multi-core capabilities, they still lack the full computing power of our desktop and server systems. Distributed computing tenets state you should utilize many systems to perform tasks faster, giving the perception of speed. However, avoid doing this inside the mobile space. Using off-device computing services can help your consumers experience a fast application regardless of device and regardless of the size of the payload that is being processed.


What needs to change?

  • Off-device processing (through services implemented in server systems)
  • Stage pre-processed content (reports, summaries, images)
  • Offer preferences for on-device processing
  • Use idle-time (on device) for processing, if the consumer selects this in preferences
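One way to sketch the off-device rule plus the user-preference item above: small payloads run locally, larger ones are dispatched to a server-side service, and a preference flag can force on-device processing. The size cutoff and the stand-in processing functions are assumptions for illustration.

```python
PAYLOAD_CUTOFF = 1024  # bytes; assumed comfort limit for on-device work

def process_local(data: bytes) -> int:
    return len(data)   # stand-in for a cheap on-device computation

def process_remote(data: bytes) -> int:
    # In practice this would POST the payload to a server-side service
    # and return its result; same stand-in computation here.
    return len(data)

def process(data: bytes, prefer_on_device: bool = False) -> str:
    # Honor the user's on-device preference, else route by payload size.
    if prefer_on_device or len(data) <= PAYLOAD_CUTOFF:
        return f"local:{process_local(data)}"
    return f"remote:{process_remote(data)}"
```

The consumer sees a consistently fast application because the device never attempts work beyond its means unless explicitly asked to.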



Distribution methods

Typical internal application stores may not have high levels of signing and packet checking to ensure that no malware has modified the content. Internally we fall under the umbrella of a protected environment (see Security, above); mobile distribution, by contrast, happens in open environments.


What needs to change?

  • Sign code
  • Use a verification method maintained separately from the signing mechanism
  • Use distribution methods that perform code verification
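A minimal sketch of sign-then-verify for a distributed package. Real app stores use asymmetric code signing (for example X.509 certificates); a keyed HMAC stands in here so the example stays self-contained, and the key and package bytes are illustrative.

```python
import hmac
import hashlib

# Held by the distribution infrastructure; per the list above,
# never hard-code keys inside the shipped application itself.
SIGNING_KEY = b"distribution-key"

def sign_package(package: bytes) -> str:
    """Produce a signature over the package contents at publish time."""
    return hmac.new(SIGNING_KEY, package, hashlib.sha256).hexdigest()

def verify_package(package: bytes, signature: str) -> bool:
    """Independent verification step: recompute and compare in constant time."""
    expected = hmac.new(SIGNING_KEY, package, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Any tampering with the package bytes after signing makes verification fail, which is the property the distribution method needs to enforce.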


Data entry

We use applications to create and manage our data, and keyboards are integral to that data entry. Our training and ergonomics allow for effective keyboard use that accelerates data entry: two hands, nice and comfortable, rapid typing. Our mobile devices do not have that capability, even those with keyboards.


What needs to change?

  • Know that data entry should be minimized
  • Use other mechanisms/sensors (camera, barcode, location) to facilitate data capture
  • Present options for selecting vs. asking for specifics through data entry


Personal safety

When someone's laptop and some data are stolen, we don't often worry that the thief can track the owner back home and cause harm. Of course, that depends greatly on the level of data the individual stores, as well as how the enterprise solution protects it. On a mobile device, in addition to data that may be similar to our PCs', we have sensor data. A motivated person could easily assemble a map of your usual activities and try to cause harm.


What needs to change?

  • All data should be protected
  • Personal information needs


How is your mobile strategy shaping up?
Do you have any learnings you would be willing to share?

This is a very exciting time to be alive and developing content for consumers (internal and external to your company).

"Measure what is measurable and make measurable what is not" - Galileo Galilei


The Industry Consortium for Advancement of Security on the Internet (ICASI) has released a framework for standardizing the reporting of computer security vulnerabilities.  Although thousands of vulnerabilities are discovered every year, their descriptions lack the consistency necessary for automatic processing, prioritization, and cataloging.  Vendors, researchers, and security firms use different or proprietary formats when describing vulnerabilities.  The new framework expresses the data in XML, which is easily read and manipulated by computers.  If widely adopted, it will aid in processing and give the industry a better picture of the threat landscape.

ICASI is a consortium of some of the big players, including Cisco, IBM, Juniper Networks, Microsoft, Nokia, Oracle, Red Hat, and Intel.   The Common Vulnerability Reporting Framework (CVRF) is free, so let's get our industry aligned!
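To see why machine-readable XML helps, here is a minimal sketch of filtering a vulnerability report programmatically. The element and attribute names below are a simplified illustration, not the actual CVRF schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified vulnerability report (not real CVRF markup).
DOC = """<vulnerabilityReport>
  <vulnerability id="CVE-2011-0001" severity="high"/>
  <vulnerability id="CVE-2011-0002" severity="low"/>
</vulnerabilityReport>"""

def high_severity_ids(xml_text: str) -> list:
    """Return the IDs of all high-severity entries in the report."""
    root = ET.fromstring(xml_text)
    return [v.get("id") for v in root.iter("vulnerability")
            if v.get("severity") == "high"]
```

With every vendor emitting the same structure, this kind of prioritization query could run across the whole industry's advisories instead of being rewritten per proprietary format.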

It seems you can't go anywhere these days without hearing about cloud computing and how this new paradigm shift will change the use of the Internet in the coming years. You also hear that one of the biggest concerns is the security and privacy of information passed around in this new way of using the Internet. But could information security actually be better in cloud computing? For cloud-based architectures involving Software as a Service (SaaS), the answer will most commonly begin with "it depends." As always, it mostly depends on the decisions made while (hopefully) adhering to application security development processes during the SDLC. The conclusion that an external cloud-based service is better for a computing solution should be reached only after careful analysis of all options; external cloud architecture should not be predetermined. In other words, cloud-based computing should not be forced, but considered as an option in the design and architecture phases of a solution.

"Cloud" type services have actually been in use for some time; more recently, the focus has shifted to how these services can be better defined and made beneficial for service providers and consumers. For years, organizations have hosted services like web, e-commerce, and email with service providers, to name only a few. Additionally, routers and DNS services have been in use since the beginning of the Internet, sending our email and web traffic from customers to partners without SLAs for every path each bit traverses. Where data security is concerned, capabilities like PKI trust models and encryption technology have been added on to keep data secure over insecure environments. Much will be the same as we move to cloud-based architectures, but the best part for security is that many related concerns can be raised at the beginning, at the design and architecture levels, addressing security ahead of time rather than adding it on top of existing solutions.


With cloud-based architectures among the options for providing a solution, misconceptions are common as manufacturers market products for the cloud. The benefits presented will include reduced total cost of ownership, lower initial deployment costs, disaster recovery services, security control capabilities like system patch management and updates, and scalability as the need for more throughput arises. These benefits will be especially great for organizations deploying solutions with minimal internal capability to provide these services. Having a strategy and plan that includes cloud computing could yield the most lucrative benefits. For more information on the direction of cloud computing at Intel, you can review the Enterprise Private Cloud Architecture and Implementation Roadmap or the cloud security topics on Intel's IT Center.


The shift to cloud computing should not change the need for security requirements baked in from the start. The hope is that security and privacy concerns can be at the forefront of requirements for any solution deployed on a public or private cloud-based architecture. On one hand, service providers will be reaching out for business; on the other, companies will be carefully evaluating whether to take the leap. For a larger organization, moving to cloud-centric computing will most likely require decoupling many existing solutions for careful scrutiny and an understanding of the threat landscape. This could even bring to light needed mitigations for threats that may not have been considered before. The challenge of the cloud is not just technical; there are also legal agreements and trust ramifications to consider. Organizations should consider a private cloud before migrating to a public cloud (service provider), so that the evaluation of security ramifications can mature over time and only what makes sense moves to the public cloud. That evaluation provides more opportunity to put security at the forefront of the technology, or the decision to use public cloud architectures can be avoided altogether. This is not to say that every solution that becomes more cloud-centric will be more secure, but many mitigations for common threats will likely be offered proactively as standards by external cloud service providers.


Cloud computing will change the physical boundaries of data and require moving that data between trusted partners securely and reliably. This will require constantly evaluating encryption and trust models to ensure the latest security capabilities are being used properly, and it may be enhanced by choosing the right service provider in the external cloud.  It will be important that service providers employ cloud architects who understand technologies like Intel® Trusted Execution Technology (Intel® TXT) and the impact of the Intel® AES New Instructions (AES-NI) integrated into the latest Intel® Xeon® processors for accelerated encryption and decryption.  Cloud computing consumers will soon have greater access to the latest security and performance technology because of the shared costs of cloud-based architectures. Additionally, technology must continue to advance in its capability to protect data, which may be implemented more easily by a service provider that specializes in protecting data in the external (public) cloud. So, if risk and security concerns are at the forefront of discussions about moving to a cloud-based architecture, information security in the cloud could indeed be better.

We optimized our overall server storage strategy around the key metrics of reliability, availability, performance, and scalability. Achieving results in these areas increases efficiency and reduces capital and operational costs. We are challenged with managing 25 petabytes (PB) of primary and backup storage in our design computing, office, and enterprise environments, growing at an average of 35% year over year.  A variety of factors will stimulate future growth, such as the increasing complexity of silicon designs needed to continue delivering on the promise of Moore's Law, the growth of enterprise transactions, cross-site data sharing and collaboration, regulatory and legal compliance, and the ongoing need for retention.


Our storage landscape is mapped to multiple computing areas: silicon design, office, manufacturing, and enterprise. We choose storage and backup and recovery solutions based on the application use models for these respective areas:


  • Design computing:  Our silicon design computing primarily relies on network attached storage (NAS) for file-based data sharing. In addition to NAS, we use parallel storage for our high-performance computing (HPC) needs. We have more than 8 PB of NAS storage capacity and 1 PB of parallel storage in our design computing environment. We use slightly less than 1 PB of storage area network (SAN) storage in design computing, primarily to serve database and post-silicon validation needs.
  • Office, Enterprise, and Manufacturing:  We rely primarily on SAN storage for block-level data sharing, with more than 8 PB of capacity. Limited NAS storage is used for file-based data sharing. For both NAS and SAN, storage is served in a three-level tier model (Tiers 1, 2, and 3) based on required performance, reliability, availability, and cost of various solutions offered in respective areas.
  • Backup, archive, and restore:  Backup, archive, and restore are major operations in data management. We use both disk and tape for our backups. Tape is used for archive functions to facilitate long-term offsite data storage for disaster recovery; the tapes remain offline, which saves significant energy and offers a cost-effective solution. Our disk-based backups serve specific needs whenever faster backup and recovery are required. Our virtual tape library serves disk-to-disk backup for faster backup and recovery needs, especially in the office and enterprise computing areas.
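The three-level tier model above can be sketched as a selection rule driven by required performance and availability. The cutoff values below are illustrative assumptions, not Intel's actual service-level definitions.

```python
def select_tier(iops_needed: int, availability_pct: float) -> int:
    """Map an application's requirements to a storage tier (1 = highest)."""
    # Assumed cutoffs for illustration only.
    if availability_pct >= 99.99 or iops_needed > 10_000:
        return 1   # highest performance/availability, highest cost
    if availability_pct >= 99.9 or iops_needed > 1_000:
        return 2   # balanced middle tier
    return 3       # capacity-oriented, lowest cost
```

Encoding the tier criteria this way keeps placement decisions consistent across the NAS and SAN environments instead of being negotiated case by case.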


I'd love to hear your server storage challenges and how you are addressing them.



Mobilizing enterprise applications is a key approach for modern companies wanting to increase the flexibility of their workforce. We have our own strategy, Mobilizing our enterprise applications, being implemented for internal customers. Chris wrote 10 Reasons to Embrace Enterprise Mobility? I only need ONE, an article on the number one reason companies go mobile: employee productivity. I'm not here to address that question, since we are already going more mobile within our enterprise.


I want to address something I've seen internally, and read about externally, regarding "what to move onto mobile devices." Many leave it up to whoever has the money, or to an application owner's unilateral decision, with little or no customer interaction. To help us make these decisions more intelligently, I've come up with a few questions and statements (with underlying processes). I hope you find them useful.


What should I move to a mobile device? (things to consider)

    1. This is not an all or nothing exercise
      • Many look at the daunting task of taking a process-intensive, multi-tiered, multi-capable application mobile as a very high hurdle. This should not be the case. Consider breaking the application up into the parts that should be deployed (see below).
    2. Understand your consumers
      • When looking at the TAM (total available market), you need to consider who your customers are and what they are using before targeting that platform.
    3. Focus on tasks versus applications
      • Applications are made up of tasks (some call them features). Many tasks have no business being inside the context of a mobile application. Be smart about what you consider deploying and focus on those. Some things to consider:
        • Is it appropriate for the screen size of the target device?
        • Does it leverage the available sensors and instrumentation (GPS, camera, position, microphone, speakers, touchscreen)?
        • Does it gain value from being on a mobile device (e.g., performing inventory in multiple locations, or needing to take and include a photo)?
    4. Data control
      • My first manager at Intel told me something that has stuck with me for 16 years, "Don't put something in e-mail you don't want people to read on CNN." With that thought always in the back of my mind, I've always tried to control what I state or release publicly in hopes of limiting the need to fight fires later. Even though there are several solid controls out there for encryption and authentication, losing control of your data on a mobile device should be a real concern. We've put in place a tiered approach to what data is allowed to be mobilized and so far this has proven very helpful.
    5. Who owns the device
      • With ownership comes responsibility and a certain level of control. A corporately owned device can have significant hardware and software features to enable remote wipe, full disk encryption or other controlling mechanisms to reduce risk from loss or over-the-air exposure.
      • We have a BYO (bring your own) strategy inside the company on top of our company-provided devices. Having a strategy helps someone who wants to get e-mail access on their smartphone know what to do. It also helps provide guard rails for the deployment of content you want higher levels of control over.
    6. Content is king, reuse is better
      • When it comes to developing software, the more you can reuse, the faster your development cycles are. That doesn't mean you won't have to create content to attract eyes, instead you can reuse core items and leverage the work done by your co-workers. What makes an enterprise great is the ability to leverage the talent all around you and give a consistent experience to the consumers. Reuse helps both those areas.
    7. Source ownership
      • This will seem obvious, but I'll say it anyway: if you own the code, it will be easier to develop a mobile solution. That said, you may be looking at an existing OTS (off-the-shelf) product you use internally that suddenly has a mobile offering. Just know that the considerations above (data, consumers, tasks, device ownership) need to be factored into that decision.
    8. Legacy does not mean no mobile
      • Through the adoption of a services infrastructure, you can take a legacy application and freshen it up with use-cases in the mobile space. Exposing content and business process through a web service will enable you to move to mobile for those who need it.
    9. Know where you are going
      • In Mobilizing our enterprise applications, I covered the three paths we are taking toward mobile. Whether going with a browser-based, native, or virtual instance of your application, you need to be fully aware of the pros and cons that come with that path. As long as you know the reasons, and they aren't simply "because our boss said so," each subsequent application decision is built on a confident foundation.
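Item 8 above, exposing a legacy application through a web service, can be sketched with the standard library. The `legacy_lookup` function, the `/orders/...` route, and the port are hypothetical stand-ins for real legacy business logic.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

def legacy_lookup(order_id: str) -> dict:
    """Stand-in for a call into the legacy application's business logic."""
    return {"order_id": order_id, "status": "shipped"}   # assumed result shape

class LegacyGateway(BaseHTTPRequestHandler):
    """Thin HTTP wrapper so mobile clients can reach the legacy function."""
    def do_GET(self):
        order_id = self.path.rsplit("/", 1)[-1]          # e.g. GET /orders/42
        body = json.dumps(legacy_lookup(order_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run (illustrative):
# HTTPServer(("localhost", 8080), LegacyGateway).serve_forever()
```

The legacy code itself stays untouched; only the thin service layer is new, which is what makes the browser-based and native mobile paths reachable without a rewrite.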


Our story will be different from yours, but I hope you can take something from it and reuse it as you discover your path.

How are you doing on your mobile strategy?

As I promised last week, here is part three of five video blogs from Intel CISO Malcolm Harkins. In this short blog, Malcolm talks about why Intel IT has undertaken a radical new five-year redesign of our information security architecture. Malcolm says that compromise is inevitable under almost any compute model; find out why...


If you haven't had a chance to catch the first two, check them out:
--Malcolm and security and the cloud
--Malcolm talks about how embracing social computing can reduce risk


Many, if not most, IT organizations have been asked to support new consumer devices like smartphones, tablets, or personal PCs inside the enterprise environment, a trend often referred to as "IT consumerization".  Intel IT is no different, and after a long 12+ month journey of research and evaluation, we launched a BYO smartphone program in January 2010 that has been very successful.


However, our journey has just begun.  Intel IT is shifting focus to deliver certain services to any device. By taking advantage of a combination of technologies and trends—such as ubiquitous Internet connectivity, virtualization, and cloud computing—we have an opportunity to redefine the way we provide services to meet changing user requirements. We call this vision the Compute Continuum, and we have a program to chart the path to delivering these capabilities.

The end state of this compute continuum journey is a more dynamic service delivery model in which IT services can be envisioned across a range of devices, including TVs, home PCs, netbooks, tablets, and other non-traditional devices like in-automobile or in-plane displays. The usage models for these services (better, more flexible collaboration with the people we work and live with) are not that far-fetched, and they are what brought me to the title of this blog - Planes, Trains and Automobiles.


How many of you remember the 1987 movie Planes, Trains and Automobiles, starring Steve Martin and John Candy?  The story line is that Steve Martin's character is trying to get home to his family for the holiday but keeps hitting travel challenges and ends up stuck with John Candy's character (an annoying traveling salesman) who "helps" him get home.


Steve Martin's character was seriously inconvenienced (losing productivity and his ability to get home) because he did not have access to the services (phone, internet, alternative transportation companies) he needed to adjust his plans quickly and on the fly.  If he had had access to scheduling services across a range of devices and locations, he would have had a much easier time getting home, collaborating with colleagues, friends, and family, and simply adapting to the environment around him.


So why does this matter to business? We believe that by delivering employees a rich, seamless, and more personal experience across multiple devices - one where they can move from device to device and location to location while retaining access to the information, services, and people they need - we can help them get their jobs done most efficiently and with the highest degree of productivity.


As a result, we envision a future capability that could involve delivering IT services to our employees on a display or new type of computing device accessible inside planes, trains, and automobiles - securely enabled by desktop virtualization, cloud computing, and a whole lot of IT innovation.


You can read more about Intel IT's vision of the compute continuum in this whitepaper about preparing IT for the compute continuum.


I welcome your comments below.



A: Intel



~ 600 phones require 1 server


~ 122 tablets require 1 server


Find out more by watching this video about the new Intel Xeon E7 processor.
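For rough capacity planning, the ratios above translate directly into a back-of-the-envelope calculation. The fleet sizes in this sketch are invented purely for illustration:

```python
import math

# Sizing ratios quoted above: ~600 phones or ~122 tablets per server.
PHONES_PER_SERVER = 600
TABLETS_PER_SERVER = 122

def servers_needed(phones, tablets):
    """Round each workload up to whole servers and total them."""
    return (math.ceil(phones / PHONES_PER_SERVER)
            + math.ceil(tablets / TABLETS_PER_SERVER))

# A hypothetical mixed fleet of 3000 phones and 500 tablets.
print(servers_needed(3000, 500))  # → 10
```

Real sizing of course depends on the workload mix, but the quoted ratios give a quick first-order estimate.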

In a recent Dark Reading article, a number of experts gave their perspectives on where the focus should be in order to prioritize security effort.


Focusing on attacks and not vulnerabilities can help companies prioritize their defensive efforts, says Dino Dai Zovi, a well-known independent security researcher.



Security consultant Daniel Guido stated, "We can step back and study these things that are coming after us, and we can build more informed defenses that are more effective against those particular threats and that are less costly than not having done this process to begin with."

The industry has traditionally focused on vulnerabilities as the primary way to prioritize security efforts.  Momentum is gaining to move away from this practice and put more focus on the attacks themselves as well as the threat agents who initiate them.  I have to say I am in the "know your enemy and know yourself..." camp.  What can I say, I am a fan of Sun Tzu's "Art of War".  When trying to interdict the enemy, I believe it is far more important to know what is likely, versus what is theoretically possible.


I say let Occam's razor, the law of economy, the path of least resistance, and common sense rule.  Given a large number of paths to success, people tend to choose the most convenient, least risky, and most cost-effective options.  The others are ignored.  The sheer volume of vulnerabilities is overwhelming.  History shows only a small number are regularly exploited.  In large or complex environments, knowing and attempting to close every possible vulnerability is an expensive and never-ending exercise in futility.  Better to make informed decisions based upon what is likely.  Understanding vulnerabilities is a valuable and necessary exercise as part of the decision process, but it does not deliver optimal security prioritization on its own.


I refer back to an older Fortune Cookie Security Advice blog:


In information security, like in sports, knowing your adversary is far more important than knowing the condition of the field.


I think the industry is starting to delineate between the threat agents, the 'attackers', and the methods they use, the 'attacks', to exploit known vulnerabilities.  It may be why I am getting more and more inquiries about the Threat Agent Risk Assessment (TARA) whitepaper I published back in 2010.


The underlying concept for the Threat Agent Risk Assessment (TARA) methodology is to narrow down the focus by taking into consideration the people behind the attacks.  Knowing your attacker, their objectives, and the likely methods they will employ, gives a tremendously powerful picture of what should be prioritized, based upon known vulnerabilities, controls, and exposures.
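To make the idea concrete - and to be clear, this is only an illustrative sketch, not the published TARA methodology; every agent name, method, and likelihood score below is invented - prioritizing by the people behind the attacks might look something like this:

```python
# Narrow the field of attack methods by starting from the threat
# agents most likely to act, rather than from the vulnerability list.
threat_agents = [
    {"name": "organized crime", "likelihood": 0.8,
     "methods": {"phishing", "malware"}},
    {"name": "disgruntled insider", "likelihood": 0.4,
     "methods": {"data theft"}},
    {"name": "nation state", "likelihood": 0.1,
     "methods": {"zero-day", "phishing"}},
]

def prioritized_methods(agents, cutoff=0.3):
    """Collect attack methods favored by the most likely agents,
    ranked by the highest agent likelihood behind each method."""
    methods = {}
    for agent in agents:
        if agent["likelihood"] >= cutoff:
            for m in agent["methods"]:
                methods[m] = max(methods.get(m, 0), agent["likelihood"])
    return sorted(methods, key=methods.get, reverse=True)

print(prioritized_methods(threat_agents))
```

With the unlikely agent filtered out, the expensive zero-day defenses drop off the priority list, and effort concentrates on the methods the likely agents actually favor - which is the whole point of starting from the attacker rather than the vulnerability.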

Cloud Computing is a key area of innovation for Intel and one of Intel IT's Top 3 Objectives.  We are adopting cloud computing as part of our data center strategy to provide cost-effective, highly agile back-end services that boost both employee and business productivity.   In 2010, Intel more than tripled our rate of virtualization from 12% to 42% and reduced the time to provision new infrastructure services to 3 hours from 90 days by implementing an On Demand Self Service portal as part of our enterprise private cloud.


Cloud Computing is changing the way we inside Intel IT look at our architecture: from the client technology our employees will use to access business services and data to the data center infrastructure necessary to support those services.  As a result, we adopted a multi-year enterprise cloud computing strategy in 2009 and are actively implementing an enterprise private cloud architecture to support Intel’s business.


Learn more by hearing straight from the people of Intel’s IT organization. In this episode of the podcast, we examine the enterprise cloud computing initiative at Intel with Das Kamhout and Ajay Chandramouly.



We have come so far in understanding, measuring, and communicating basic information security factors.  Yet the challenge continues, as a recent news story shows.  A police chief assured the community that data from a stolen police laptop was secure:


"The police chief said he's been advised that it's unlikely anyone could access personal information stored on the stolen laptop because the battery is so old it barely functions without a companion power cord."

For the record, just because you cannot start a computer does not mean the data it contains is secure.  Data residing in nonvolatile memory, which remains intact even after the power is turned off, must be secured in ways that ensure it cannot be accessed by other means.  Encryption, device destruction, and data sanitization are standard methods that have proven to secure data when done correctly.  Additionally, beyond the potential data exposure, the actual configuration of a lost device, in both hardware and software, may expose ways for an unauthorized external computer to gain access to the secured network. 

A word to the wise: any device that stores data should be addressed before it is abandoned, sold, or reused outside of your control.  This includes PCs, printers, network gear, hard drives, and USB sticks.   Data destruction is important.  Knowing how data can be exposed is the first step in avoiding unfortunate data loss situations. 
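As a toy illustration of the idea - and only that: a single file-level overwrite is not a certified sanitization method, since flash media in particular may remap blocks, so a vetted wiping tool or full-disk encryption is the real answer - overwriting before deleting looks like this:

```python
import os
import tempfile

def overwrite_and_delete(path):
    """Zero out a file's contents on disk before removing it, so the
    bytes are not left behind for simple recovery tools to find."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)   # replace contents with zeros
        f.flush()
        os.fsync(f.fileno())      # push the overwrite to the device
    os.remove(path)

# Demo on a throwaway temp file holding pretend personal data.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"personal data")
overwrite_and_delete(path)
print(os.path.exists(path))  # → False
```

Simply deleting the file would have removed only the directory entry; the overwrite is what destroys the data itself, which is the distinction the police chief's statement missed.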

Back in March, I posted a blog on the first of five very short (~1 minute) vblogs with our CISO, Malcolm Harkins. Malcolm has some very distinct ideas on some hot industry topics, like cloud computing, IT consumerization, social computing, etc. The first vblog gave Malcolm's perspective on security and cloud computing; the second one, embedded below, gives Malcolm's unique point of view on security and social computing. In this vblog, Malcolm suggests we embrace social media and social computing to reduce risk.




Next week I'll post how Intel IT is embarking on a radical five-year redesign of Intel's information security architecture, so keep comin' back!

What do cloud computing, information security and IT consumerization have in common? Among other things, they are all covered in three new audio podcasts from Intel IT. These podcasts are great for downloading and listening to on your favorite device. Each shares Intel IT’s perspective from the subject matter experts doing the work.


Check out:


Looking into the Cloud featuring Das Kamhout and Ajay Chandramouly. In this cloud computing podcast, Das and Ajay examine the enterprise cloud computing initiative at Intel. See how easy provisioning a server can be, and find out how a worldwide technology corporation can increase productivity and save in a big way by deploying a private cloud.


Rethinking Information Security featuring Malcolm Harkins, Intel CISO and Alan Ross, Principal Engineer. Malcolm and Alan discuss how our radical new security architecture is enabling new usage models like cloud computing, IT consumerization and social computing.


Consumerization of IT featuring Ed Jimison and Ron Miller. Learn how Intel IT enabled user-owned devices into the IT infrastructure safely and efficiently.


I’m on Sabbatical!

Posted by jghmesa May 2, 2011

One of the great perks of working at Intel in the US is that every seven years you get eight weeks of paid Sabbatical.  I have delayed mine for nearly three years, but effective Wednesday, May 4th through Wednesday, July 13th, I am on Sabbatical plus two weeks of vacation - ten weeks total.  I am experienced at this, as it will be my 4th Sabbatical.  My wife and I have a laundry list of things to do, and of course I’ve made a 70-day project plan.  We have some vacations, spring cleaning, and volunteer work to catch up on.  I plan to get to the gym daily, see all the cool sci-fi summer movies, catch up on reading, and spend a few hours weekly in the pool. 


I leave feeling great, as our Cloud Program reached 50% virtualization last week - a significant program milestone, both actual and psychological.  In short, we’ve gotten past all the start-up surprises and weird anomalies, the team is mature and executing at a predictable rate, and covering for me is a career growth opportunity for another PM.


When I come back, I’ll blog as to my Sabbatical experience and my PAS (Performance Against Schedule) as to how well I accomplished what I planned.  One data point that will rate outstanding is ‘Quality’ as there is no such thing as a ‘bad’ Sabbatical.


Have a great summer…





As I crawl closer to the 100-blog mark, I wanted to visually understand the word frequency of the topics I have discussed.

Word Frequency - Communities Security 1.jpg


Thanks to the cool applet that generated the following word frequency data visualization image for my blogs.

This is pure eye candy, but I noticed that over the years I have focused heavily on words such as Systems, Company, and of course Security.  What I find interesting is the diverse set of topics in the next grouping down: fun topics like attackers, botnets, people, infections, access, and hardware.  In the future, I want to focus more on the latter and frame such topics in a strategic way.
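For the curious, the same frequency-counting exercise can be reproduced in a few lines of Python; the sample text here is obviously made up:

```python
from collections import Counter
import re

# Tally word frequency across a body of blog text and list the
# heaviest hitters, the same data behind a word-cloud visualization.
text = """Security systems protect the company. Company security
depends on people, and people depend on secure systems."""

words = re.findall(r"[a-z]+", text.lower())
common = Counter(words).most_common(3)
print(common)
```

Feeding in the full blog archive instead of a toy string gives exactly the kind of ranked list the visualization above is drawn from.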


Drop me a line if you have thoughts or topic requests.


To view all my information security rants, blogs, videos, and whitepapers:
