
There is no Royal Road to understanding and achieving information security


Everyone wants information security to be easy.  Wouldn’t it be nice if it were simple enough to fit snugly inside a fortune cookie?  Well, although I don’t promote such foolish nonsense, I do on occasion pass on readily digestible nuggets to reinforce security principles and get people thinking about how security applies to their environment.


The key to fortune cookie advice is ‘common sense’ in the context of security.  It must be simple, succinct, and make sense to everyone, while conveying important security aspects.

Fortune Cookie advice for July, 2009:




There is no Royal Road to understanding and achieving information security



Taking a line of thought from Euclid, there is no easy route to understanding the ever-changing complexities of information security.

We exist in an era where information security is both exciting and complex. 



The rapid evolution of information technology, the increasing number of targets, and the explosive development of creative attacker tools all contribute to a dynamic environment where a continual struggle between aggressors and defenders shifts the balance on a daily basis.  Only through hard work can security professionals pursue an optimal level of security, one which balances cost against the effectiveness of controls and the impact of attacks.  Achieving information security is an exercise in hard work, diligence, consistency, and the flexibility to adapt technology and behaviors to meet the challenge.










For the last 18 months, Intel has invested significant effort to develop a full strategy and implementation roadmap for social computing within the enterprise.  I am pleased to announce the release of a white paper, Developing an Enterprise Social Computing Strategy, which I co-authored with Malcolm Harkins, Chief Information Security Officer.  The paper details our approach to embracing collaborative technologies while mitigating legal, HR, and governance issues.  Here are some key areas you will find detailed in the paper:


  • The business focus for social computing (also refer to: Why Intel is investing in Social Computing)
  • A collaborative approach across IT, HR, and Information Security
  • Intel's integrated architecture
  • Intel's approach to determine early use cases, business value and vendor/solution evaluations
  • Results of a security risk assessment
  • Phased implementation plan
  • Initial results after 3-1/2 months into deployment & adoption


There are a lot of key takeaways within this paper.  The biggest one that I hope you will walk away with is:  Enterprise 2.0 is a challenging effort.  Yes, there are risks.  But Intel hasn't discovered any new risks introduced with 2.0 technologies that don't already exist with 1.0.  We believe the opportunities outweigh the risks.  In fact, we are convinced that inaction carries much greater risks: that the enterprise will not realize the benefits that social computing can deliver, and that employees will increasingly turn to external, unsecured tools for communication.  IT has a leadership opportunity to get ahead of emerging platforms and deliver them, at a fraction of the cost of "standard" collaborative infrastructure, enabling the business to stay one step ahead of the competition.


I hope you enjoy the paper.  I welcome your perspectives and would enjoy learning about the strategies that are yielding success for you.

In June, I updated you on a small proof of concept studying Energy use in the Office.  The first phase of that PoC is now complete and although detailed results will be included in a paper we’ll be publishing later this year, I thought I’d share a few data points with you now.


If you remember from my last post, after establishing a baseline, we split the PoC users into three groups to test different energy-saving techniques.


The awareness group, to whom we simply provided information on how much energy they were using, what it costs, and some energy-saving tips, reduced their energy usage by an average of 22%.


The power management group, for whom we used a third-party tool to deploy and enforce client power management settings that put their systems into standby after an idle period, reduced their energy usage by an average of 10%.


The smart strip group, to whom we provided USB-triggered power strips that power off devices in their office when their laptop is out of its docking station, encountered technical issues resulting in no change to their energy use.


While the savings found during the study are compelling, we did run into several issues, both technical and related to the small size of the PoC, that could skew the numbers in either direction.  We are now planning to repeat the study on a much larger scale, focusing on awareness and power management profiles, to see if the original findings scale.
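As a back-of-the-envelope illustration of how we report the savings figures, percent reduction is simply the trial period's usage compared against the baseline period.  The readings below are made up for illustration, not the PoC's actual data:

```python
def percent_reduction(baseline_kwh, trial_kwh):
    """Percent energy reduction of a trial period versus a baseline period."""
    return 100.0 * (baseline_kwh - trial_kwh) / baseline_kwh

# Hypothetical weekly readings (kWh) for one PoC group -- illustrative only.
baseline_weeks = [12.4, 11.9, 12.1, 12.6]
trial_weeks = [9.6, 9.4, 9.8, 9.5]

avg_reduction = percent_reduction(sum(baseline_weeks), sum(trial_weeks))
print(f"Average reduction: {avg_reduction:.1f}%")
```

With these hypothetical numbers the reduction lands near the 22% we saw from the awareness group, but again, the real analysis will be detailed in the paper.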


Please let me know if you have any questions or if you are doing or have done anything similar in your enterprise.





Risk metrics are the heart and soul of information security indicators.  An increasing proliferation of tools and assessments has emerged, attempting to quantify states of information security.  Given the nature of what we are trying to measure, this is arguably one of the toughest challenges in the metrics space.  The recent trend is for different bodies to develop and publish their own standards, which creates confusion regarding accuracy and applicability.  Why all the turmoil, competing models, and misalignment?  The sad story is (cue the somber violins) we just have not figured out how to measure information security risks very well.


I have seen and applied many different methods, audits, and evaluations with varying degrees of success and disappointment.  I have come to the following three basic conclusions:

  1. Current tools and methods lack maturity in this area, for both accuracy and comprehensiveness (and yes, I am guilty of contributing to the pool)
  2. No silver bullet exists.  A unified method, which provides a predictive overarching and detailed risk analysis, is unlikely.  Different approaches have their applicability.  Choose wisely 
  3. There is no replacement for a security professional’s brain.  Everything from the selection of the analysis method and the gathering of relevant data to the interpretation of the results requires a seasoned security professional.  There is no substitute that can handle the ambiguity, chaos, and relational dependencies affecting the outcome

An example will help express some of the challenges.  The OCTAVE methodology, created by Carnegie Mellon University some years ago, is a battle-tested veteran in this role.  It is a qualitative-to-quantitative device which leverages the expertise of key people to give a numerical value of risk in their respective areas.  Because of personal bias and fears, the need to allow flexible ways of answering questions, and the varying degrees of base knowledge among the experts, results can vary greatly without even factoring in the changes occurring in the threat landscape.
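To make the qualitative-to-quantitative idea concrete, here is a minimal sketch of the core step: mapping expert ratings onto a numeric scale and aggregating them, while tracking how much the experts disagree.  This is an illustration of the general technique, not OCTAVE's actual worksheets or scales:

```python
from statistics import mean, stdev

# Illustrative rating scale -- not OCTAVE's actual values.
RATING_SCALE = {"low": 1, "medium": 2, "high": 3}

def risk_score(expert_ratings):
    """Convert qualitative expert ratings for one asset into a numeric
    risk score (mean) plus a disagreement measure (standard deviation)."""
    values = [RATING_SCALE[r] for r in expert_ratings]
    return mean(values), stdev(values)

# Three experts rate the same asset differently -- personal bias in action.
score, disagreement = risk_score(["high", "medium", "high"])
print(f"score={score:.2f}, disagreement={disagreement:.2f}")
```

Even this toy version shows the problem: the output looks precise, but a single biased or under-informed expert shifts the score, and the disagreement number is the only hint that the "quantitative" result rests on shaky qualitative ground.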


Let me be clear, I am a fan and a longtime supporter.  However, it has its limitations.  I have developed several assessments based upon the model in a large environment.  As long as the limitations are accepted, it is applied where it leverages its strengths, and the process is rolled out properly, the results can be very valuable.


But don’t confuse value with precision.  I have observed the accuracy to be +/- 40% in complex organizations.  I believe this is largely due to multiple tiers of qualitative-to-quantitative analysis and the bias introduced at each level.  Credible sources have reported a better +/- 20% accuracy for smaller implementations.  Although these numbers sound terrible, they are very good compared to other methods.  I have great respect for the chaps at Carnegie Mellon University who created the methodology.  Groups within our company have used a modified form of this approach, with advanced structures tailored to our computing ecosystem, for years with great success.  The low accuracy rate is not a poor reflection on the CMU model; rather, it is a stark insight into how immature we are in this field.
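A rough sketch of why tiers widen the error band: if each tier of qualitative-to-quantitative translation can skew the estimate by some relative amount, and the skews stack, the worst-case error grows faster than linearly.  This simplified model (real bias is not strictly multiplicative, and the per-tier figure here is an assumption for illustration) shows the flavor:

```python
def compounded_error(per_tier_bias, tiers):
    """Worst-case relative error when each analysis tier can skew the
    estimate by up to +/- per_tier_bias (expressed as a fraction)."""
    return (1 + per_tier_bias) ** tiers - 1

# Two tiers, each with up to +/-20% bias, compound to roughly +/-44% --
# in the same ballpark as the +/-40% observed in complex organizations.
print(f"{compounded_error(0.20, 2):.0%}")
```

The point is not the exact arithmetic, but that every added layer of translation between expert judgment and a number gives bias another chance to creep in.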


So this is a sad story, but one which is not over.  A cadre of very bright people is working to tackle this problem.  In the short term, I expect to see many more methods, theories, templates, and standards emerge for specific situations.  In the end, I doubt we will ever have a unified way to measure security risks, but I hold high hopes that the best will be culled to a small number which can be applied to most situations and deliver reasonable metrics.
