Rob@Intel

What’s your number?

Posted by Rob@Intel Nov 23, 2011

I was looking in a book store (one that tends to stock the way-out-there books) and came across a book on Numerology. Now for those that don't know, and I have to admit I did not, Numerology is about how numbers affect our lives. Supposedly, using numbers you can predict everything from whether a marriage will work out to whether you would lend someone a toothbrush. Now before you ask, I really have no idea if this is a valid form of science, but then I guess the whole point of science is that you don't ignore something just because you can't understand it.

 

I often get asked to rate the security of a device so that we can put, for example, all smart phones in order from most secure to least secure. In reality this is hard to do; it's a bit like being asked to take all the vehicles on a road and put them in order from best to worst. We can take one aspect of security and compare it, but when you put it all together it's like asking which is better, a Ferrari or a 40ft articulated lorry.

 

So that calls into question how we do our risk assessments: should the output of a risk assessment allow for comparison of devices? Should the output be a score or a level? Well, I don't want a reputation for just reading books, but it seems that risk management has changed and maybe the security world has not caught up.

 

Some risk management background: most organisations that I know use the standard risk assessment methodology of Threat, Vulnerability & Consequence (TVC). This seems to have come from the business definition of risk, and the finance organisations used much the same process for looking at financial and business-related risks. The idea with the TVC approach is to quantify the ratings, then apply simple maths to come out with a risk score. That became a standard and filtered into security. There have been some problems with the TVC model, and companies that have seen it go wrong have done studies into how it fails.
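
To make the "simple maths" concrete, here is a minimal sketch of a TVC-style calculation. The 1-5 rating scales, the multiplication, and the level thresholds are illustrative assumptions rather than any standard, and they show exactly where the subjectivity creeps in: two assessors feeding different ratings into the same formula get different scores.

```python
# Illustrative TVC-style risk score: rate Threat, Vulnerability, and
# Consequence on an assumed 1-5 scale, multiply, then bucket into a level.

def tvc_risk_score(threat: int, vulnerability: int, consequence: int) -> int:
    """Product of the three 1-5 ratings (range 1-125)."""
    for rating in (threat, vulnerability, consequence):
        if not 1 <= rating <= 5:
            raise ValueError("ratings are assumed to be on a 1-5 scale")
    return threat * vulnerability * consequence

def risk_level(score: int) -> str:
    """Bucket the score into a coarse level (thresholds are arbitrary)."""
    if score >= 60:
        return "High"
    if score >= 20:
        return "Medium"
    return "Low"

score = tvc_risk_score(threat=4, vulnerability=3, consequence=5)
print(score, risk_level(score))  # 60 High
```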

 

In many cases TVC fails because the risks are not well communicated or the assumptions are wrong; in my experience that's also why different security professionals risk-assessing the same thing get different results. This sort of makes sense, because you're trying to quantify something that's not quantifiable: risk is a thought process, and how you think about a risk will depend on your character and outlook on life.

 

Where does this all lead? Well, a good risk assessment starts with profiling the audience that will be reading it, so the end communication is optimal. Organisations with standard risk assessment formats are kidding themselves that this is any sort of saving. Risk assessors must not be afraid to say they don't know, rather than making an assumption and burying it in the detail of the assessment.

 

The last point… trust an expert.

We live in a world that's driven by being able to out-argue or prove your point. Sometimes the best experts are those that know something is not right but can't say why. Sometimes this is called gut feeling, but there is science to back it up. Have a look at Jonah Lehrer's book "How We Decide" if you want to be convinced that trusting an expert is often right, even if they can't justify themselves.

 

Rob

Server virtualization is the first step to achieving the business benefits of cloud computing.  Virtualization enables real business value such as server consolidation, higher compute capacity utilization, data center space/power/cooling efficiencies, and operational agility.

 

However, security concerns are often cited as the #1 obstacle to deploying cloud capabilities in the enterprise.  Virtualization brings with it an aggregation of risk when application components and services of varying risk profiles are consolidated onto a single physical server platform.  The aggregation of risk arises from the added potential for compromise of the hypervisor layer, which in turn could compromise all the shared physical resources of the server it controls, such as memory and data, as well as the other virtual machines on that server.  Concerns about security initially prevented virtualization of several categories of applications, including Internet-facing applications used to communicate with customers and consumers.

 

In 2010, Intel IT more than tripled the share of virtualized applications inside our Office and Enterprise environments, from 12% to 42%, and we are currently over 50%. Achieving our goal of 75% requires us to overcome the security challenges of virtualizing externally facing applications.

 

I am pleased to announce the release of Intel IT's first cloud security white paper, which describes how we neutralized these security risks to expand server virtualization into the DMZ, thereby allowing us to further accrue the business benefits of virtualization.  Through the comprehensive approaches described in this paper, we have been able to ensure that our virtualized environment is as secure as our physical environment.  In fact, with some of the additional security measures deployed in our virtual environment, one could argue it is even more secure.  Read this cloud security paper to learn more and decide for yourself.

 

@ajayc47

A recent security survey indicated half of those surveyed have a documented cyber security strategy.
I don't believe it!  More specifically, I don't believe the organizations with a "documented strategy" have something which represents a realistic and comprehensive cyber-security strategy: a valued long-term plan providing guidance on how to keep the organization on the healthy side of security.  If I had to guess, I would say the actual number meeting such criteria is less than 1%.

 

Why the disparity and why should anyone care?

 

The numbers are opinions with no criteria.  The word "strategy" is being interpreted differently by the respondents.  Does a plan to keep firewalls and client anti-malware current constitute a strategy?  I think not.  That is a tactic, a common-sense practice in response to immediate threats.  A strategy, by definition, is a forward-looking, long-term plan to achieve a goal; in this case, maintaining a level of security which controls losses to an acceptable degree.

 

Thinking you have a plan when you do not is dangerous.  If system administrators and management believe they have a cyber security strategy, they are less likely to allocate and focus resources on understanding long-term needs.  It becomes easy to ignore, hoping the status quo is sufficient, and then be surprised when it is not.

 

So I have created a quick 5-question test to validate the existence of a realistic cyber-security strategy.  Pass my test and I will concede you have a strategy.  Fail, and you must admit you don't.  Deal?

 

 

Question #1:  Does your strategy identify the threat agents who will be attacking your organization over the next 3 to 5 years?
Without knowing the attackers, defenders remain in the dark, forced to protect against threats both real and imagined.  The first step in any realistic strategy is to know who the opposition is, both today and in the future, and thereby understand their capabilities, objectives, and likely methods.

Question #2:  Does your strategy articulate how you will likely be attacked by those threat agents?
Understanding the computing ecosystem, where it is less secure, and how specific threat agents will attack over time is imperative to a strategy.  Does the strategy talk about generic worms, viruses, and system patching, or does it take into account likely exploit paths, the ones which align to the common methods of the pervasive threat agents identified in Question #1?

Question #3:  What impacts and losses are estimated from these attacks, given the expected defenses?
Strategy is about planning.  Planning security is about finding the right balance between spending on controls and the losses prevented, ultimately getting to a comfy place where the residual losses are acceptable for the cost of security.  Without knowing the likely losses, even at a generic level, it is impossible to plan forward.
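
As a hedged illustration of the estimate Question #3 asks for, here is a minimal sketch in the spirit of an annualized loss expectancy calculation: expected annual loss is the sum, over attack scenarios, of event frequency times loss per event, given the expected defenses. Every scenario, frequency, and dollar figure below is invented for illustration, and the acceptable-loss comparison anticipates Question #4.

```python
# Invented scenarios: (name, expected events/year after defenses, USD/event)
scenarios = [
    ("malware outbreak",      2.0,  50_000),
    ("data theft by insider", 0.2, 400_000),
    ("targeted web attack",   0.5, 150_000),
]

expected_annual_loss = sum(freq * loss for _, freq, loss in scenarios)
acceptable_loss = 250_000  # the accepted level of loss Question #4 demands

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print("Within acceptable range" if expected_annual_loss <= acceptable_loss
      else "Over the acceptable range: grow spending or accept more risk")
```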

Question #4:  How do your budget and efforts align to managing those losses?  Do they fall within the range of acceptable levels of loss?
If you don't have an accepted level of loss identified, you fail this question.  Impervious security, where no losses occur, either does not exist or is far too costly to employ.  Such a system is not practical, so some losses must be accepted.  Knowing this range is important to planning, as it will trigger either growth or contraction in security spending.

Question #5:  Who is responsible for the care and maintenance of the security strategy? 
Given the radical and chaotic nature of security threats, vulnerabilities, and impacts, a strategy must continually flex, adapt, and be updated.  Without crisp ownership, most strategies rapidly become stale and worthless.  Many such plans are rolled under some organization with little real stewardship; the manager of the group becomes the nominal "owner", yet is so far removed that it is an invalid answer.  If you cannot name a single person who knows of and agrees to actively manage the planning, you fail the question.

 

 

So do you have a viable documented cyber security strategy?  If not, don't be too disheartened.  You are in good company.  These are tough questions.  Most organizations struggle with cyber security strategy.  It is the norm, not the exception.

 

We are still at the beginning of this endeavor.  Rushing to claim maturity is not the path of enlightenment.  Let's be realistic and recognize where we are and where we need to go.  Over time the community must move to a mature model where organizations benefit from pragmatic cyber security strategies.  To get there we must see our shortcomings and fight the good fight.

 

Reference: Ernst & Young, "Into the Cloud, Out of the Fog" business briefing

Intel’s internal employee NewsWire interviewed Matthew Rosenquist, an IT information security strategist who recently completed a six-month rotation working in the security product side of the Intel Architecture Group (IAG), specifically in the PC Client Group (PCCG).

(If you missed them, see part 1 and part 2 of this series.)

 

Q. What have you learned over the last six months?
I had the invaluable opportunity to learn how Intel plans and designs products, and how IT can help IAG be successful. IT can help in many ways, so it's important to understand and continually align to IAG's evolving needs as we move to $50 billion and beyond.

Q. What did your IAG peers learn from you?
They learned that IT expertise is valuable. With Intel being a leader in the IT industry, we're experts in our diverse computing environments and tied into many different organizations and customers worldwide. Intel IT has insights regarding the impact, effectiveness, and operational considerations of future IAG features. In addition to expertise, IT's a great proving ground for new technologies. When it comes to information security, IT can help answer the difficult question, "Will new security technology be meaningful?" It expands the conversation from, "Can we build it?" to "Should we build it?"

Q. What was the best part of your rotation?
The best part has been the people, exposure to new ways of thinking, and the learning opportunities of the business. We're a huge company, but the teamwork and cooperation are not lost. Intel employees worldwide continue to work together with a passion for technology, reaching for lofty goals. As a transplant from IT, I was embraced as part of the team. Learning and contributing simultaneously amongst teams of brilliant people to drive corporate objectives has been intoxicating!

Q. Was there anything particularly difficult?
Profit/loss groups like IAG operate in a completely different manner. Their goals, execution, budget considerations, prioritizations, and so forth are processed and viewed in ways far different from IT's. IT is about efficiency and effectiveness. IAG is about execution to revenue and margins. Within IT, our world is largely the systems and data of our company, deeply vast and complex. Within IAG, the challenges scale to all Intel® architecture systems worldwide. The challenges are significantly broader in scope, but less deep in the specifics of how products are employed.

Q. In the end, how did it turn out?
Simply outstanding! Through the rotation program, both IAG and IT benefited from cross exposure and learning. Building close ties between these organizations has shown clear benefits in identifying where and how IT can contribute to IAG deliverables. Individually, it was a huge opportunity for personal and career growth.

Q. Any final thoughts?
IT management has the foresight to see the value of rotation programs. My experience over the past six months reinforces those benefits.  Taking on new challenges is part of being an employee at Intel. If we aren't pushing ourselves, we aren't being true to the values (those words on the back of our badges). This rotation has allowed me to push outside of my comfort zone and into a whole new environment. It's been difficult, but rewarding.

– Matthew Rosenquist, Intel information security strategist

DARPA, the Defense Advanced Research Projects Agency, and its Department of Defense partners reached into the private sector this week in an unprecedented way through the first Cyber Colloquium, in search of talent and ideas.  What was remarkable was both how they have purposely streamlined the process to target a tremendous untapped talent pool and, more shockingly, the technology they are looking to develop: OFFENSIVE cyber capabilities.

 

So serious is this new research direction that they are going after new talent which, although rich in skills, usually falls far outside the conservative circles they normally engage.  DARPA and the defense agencies as a whole are configured to work with large, professional companies.  Unfortunately, 'weaponizing' computer systems into offensive options for military branches is a skill not commonly found in such traditional contract partners.  Those relationships have evolved into very formal and bureaucratic affairs, taking tremendous time, resources, and oversight.  This is crushing to timelines and to potential engagements with smaller groups not familiar with the process or lacking the patience and resources to traverse the gauntlet of submission, review, and selection.  The smart folks at DARPA have realized this and instituted a rapid evaluation process, 'Cyber Fast Track', tailored to individuals, small teams, and non-traditional defense contract organizations, with a turnaround averaging 5 days from submission to approval and kickoff of actual project work.

 

Research and innovation is their business, so it is no great surprise they are adapting their talent-mining efforts to stay in touch with the best resources.  After all, this is the agency that fathered the Internet.  Streamlining the process to close the gap between DARPA project teams and external innovation talent is to be expected.  The reduction of stifling bureaucracy is impressive, but it is not the real story of this first-of-its-kind Cyber Colloquium.

 

Presenting to a select and vetted audience, DARPA repeatedly made clear the new direction in research for the US defense community: DARPA is looking to develop offensive cyber capabilities.  This is an overt request for ideas, research, and prototypes to arm military branches not only to defend their own compute environments, but to attack adversaries.  The US is likely not the first nation to seek offensive cyber weapons, but this is a radical new direction for DARPA, driven by policy makers.

 

We are truly at a momentous point in history.  Will computers become the weapons of choice in the future?  We may be at the start of an escalating arms race, for the proliferation of offensive capabilities, on a scale not witnessed since the Cold War.  Unlike those in history, this one may expand to include, involve, and impact the computer systems in the hands of everyday people around the world.

jghmesa

My Green Home Project…

Posted by jghmesa Nov 4, 2011

As I mentioned in my prior blog, we purchased and remodeled a home, and have for the most part moved into it.  We had three priorities in selecting what to do in the remodel.  As it was an older home, our first priority was doing what had to be done to modernize it.  The second was doing what we wanted per our tastes and lifestyle, and the third was to make our home as energy efficient as practical and cost beneficial.

 

Our remodel started with a lot of learning and planning.  For me it was a project, and my wife and I played dual roles as project manager and team member depending on what we were deciding upon.  For the most part we knew what we wanted to do, and as I said in my last blog, my wife and I continually revised the scope of items based on what we learned; the third priority of energy efficiency (and conservation) made for a more rounded discussion and changed our decision on several items.

 

As our new home was an older one, we knew that, other than the existing wood flooring (which we liked), everything was subject to change.  The kitchen, looking like the cover of a 1970s Better Homes and Gardens magazine, needed a complete remodel, so in selecting appliances we looked for the most energy efficient.  They all now carry the 'Energy Star' label, an international standard for energy-efficient consumer products.

 

We replaced all exterior single-pane windows with triple-pane ones: three pieces of glass with argon gas in between, which comes close to matching the insulation a structural wall provides.

 

The existing air conditioner was a ten-year-old SEER 10 rated unit.  'SEER' stands for 'Seasonal Energy Efficiency Ratio', and the bottom line is the higher the SEER rating, the less it's going to cost you to run the unit.  Currently SEER 13 is the minimum efficiency sold; we purchased a SEER 16 unit which, although you can go up to SEER 21, seemed the most cost effective and beneficial for our needs.  In most homes the AC/heat pump uses the most energy annually, so it's a good first consideration.  Currently in the United States there are also some tax incentives that help.
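
For a rough sense of why the mid-range unit won out, here is a back-of-the-envelope sketch. SEER is roughly the BTUs of cooling delivered per watt-hour of electricity over a season, so annual cost falls as SEER rises, with diminishing returns. The unit size, run hours, and electric rate below are assumptions, not figures from our remodel.

```python
BTU_PER_HOUR = 36_000    # assumed 3-ton unit
COOLING_HOURS = 2_000    # assumed annual cooling hours
USD_PER_KWH = 0.12       # assumed electric rate

def annual_cost(seer: float) -> float:
    kwh = BTU_PER_HOUR * COOLING_HOURS / (seer * 1_000)
    return kwh * USD_PER_KWH

for seer in (10, 13, 16, 21):
    print(f"SEER {seer}: ~${annual_cost(seer):,.0f}/year")
# Going from SEER 10 to 16 saves ~$324/year here; 16 to 21 saves only
# ~$129 more, which is why the mid-range unit can be the sweet spot.
```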

 

We also added ceiling fans in every room and replaced all lighting with CFLs (compact fluorescent lights), which provide the same brightness for about 20% of the power (a 13-watt CFL bulb gives about the same light as a 65-watt incandescent bulb).

 

We replaced all the electrical switches and outlets, from the old ivory-colored ones to the bright white decorator style.  In doing so we sealed above the junction box in the wall with spray foam, put insulation on the cover, sealed around the cover, and placed plug stops in all the unused outlets (the same ones parents with infant children use to childproof a home).

 

Our existing water heater was fairly new, so we had the unit serviced, turned down the operating temperature, installed heat traps, and insulated the piping and the outer tank with a blanket.  In most homes the water heater uses the second most energy annually, so it's best to optimize it.

 

Lastly, for energy savings, we re-insulated the attic to R-39.  The 'R' number is a measure of thermal resistance: the higher the R-value, the more slowly heat flows through the insulation.  Higher 'R' is always better.
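
As a rough illustration of what the R-value buys, here is a sketch using the steady-state approximation Q = area × ΔT / R (BTU per hour, with R in ft²·°F·hr/BTU). The attic area and temperature difference are assumptions.

```python
ATTIC_AREA_SQFT = 1_500
DELTA_T_F = 40  # e.g. a 115 F attic above a 75 F ceiling in summer

def heat_flow_btu_per_hr(r_value: float) -> float:
    return ATTIC_AREA_SQFT * DELTA_T_F / r_value

for r in (11, 19, 39):
    print(f"R-{r}: ~{heat_flow_btu_per_hr(r):,.0f} BTU/hr through the attic")
# Roughly doubling R halves the heat flow, so higher R is always better,
# though each step up saves less in absolute terms than the one before.
```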

 

Keep in mind we repainted, modified bathrooms and closets, and did a lot of other aesthetic work, but for the most part we applied good energy-savings intelligence to everything we did.

 

As a result, based on information provided by the prior home owner and the utility company, I've calculated a reduction in average daily usage from 60.0 to 43.5 kWh (kilowatt-hours).  Additionally, I changed my on-peak/off-peak billing plan with the utility company to better fit our schedules.  In summary, the 'energy efficiency' spending was $13,000 USD, or about one third of the total cost.  I calculate I save ~$850 USD per year on my electric bill, and taking into account inflation and expected utility cost increases, that's about a 13-year payback.
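
Here is a minimal sketch of that payback arithmetic. A simple division gives $13,000 / $850 ≈ 15 years; the assumed 3% annual utility rate escalation is what pulls it down toward 13.

```python
COST_USD = 13_000          # the energy-efficiency share of the remodel
FIRST_YEAR_SAVINGS = 850   # calculated annual electric-bill savings
ESCALATION = 0.03          # assumed annual utility rate increase

saved, year = 0.0, 0
while saved < COST_USD:
    year += 1
    saved += FIRST_YEAR_SAVINGS * (1 + ESCALATION) ** (year - 1)
print(f"Payback in about {year} years")  # 13 with these assumptions
```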

 

This was good, but we were not done.  Having reduced our electrical utility cost from the bottom up, it was next to look at reducing our utility bill from the top down.  To that end, we investigated a home solar power system and purchased a 20-year lease, paid in full (PIF).  The solar power system will reduce our daily draw from the grid by an average of 21 kWh, bringing our daily consumption down to ~24 kWh.  This equates to another $1,060 USD per year saved, with about a 6-year payback.

 

Though this personal project is about complete, I've learned a lot about ways to reduce my utility bill with this green home project, and I keep looking for new products and methods to further increase my savings.  An unexpected benefit is my wife's interest and newfound expertise, which she shares with her friends.  This keeps me constantly in answer mode, but it's good to pass on knowledge and experience for the benefit of others.  I hope this blog also contributes…

 

…JGH

Hello Again,

 

Today I am going to share my Top 10 areas of focus for 2012.  Some of these are stretch goals, but I would like to share ideas and see what others are doing in these regards.  Again, we are primarily focused on an internal enterprise private cloud for TCO, security, and performance reasons.  However, we do intend to use external capacity for specific use cases; we can discuss that later…

 

First of all, as discussed in my last blog, we are going after three big business goals:

Business Goals 

1.) Achieve 80% Effective Utilization (CapEx Reduction)

2.) Velocity Increases at a Cadence for Service Provisioning and Maintenance activities (Agility and lower OpEx through more automation)

3.) Zero Business Impact (Resiliency)

 

Top 10 for IaaS and PaaS - we can cover SaaS in a future discussion...

1.)  Cloud Bursting automatically, first from one Intel Data Center, then to a second Intel Data Center, then to the public cloud, all through controllable policy.

2.)  Automated Sourcing at Provision and Runtime - as a consumer enters our portal or calls our APIs, the automation and business logic decide, based on the business requirements entered, security classification, capacity available, and workload characteristics, what type of services to provide and where (public, private, or hybrid cloud); see the placement sketch after this list.  Workloads are dynamically migrated to higher performance infrastructure (and back) as demands change through the app's life cycle.  The end result is a dynamic infrastructure based on a hybrid cloud that adapts to consumer needs automatically.

3.)  Automated End-to-End Service Monitoring - as Automated Sourcing occurs, all components are dynamically added immediately (as fast as provisioning time) to an end-to-end service model representing health, utilization, and usage of the deployed service.  Dynamic changes to the environment are handled through automation (add/remove nodes, etc.).  Key service level objectives and QoS metrics are exposed to the consumer (e.g., availability, performance, configuration compliance, associated service requests), providing the consumer a view of how the service is performing against SLA for their precise instance.

4.)  Automated component-based recovery - as specific components in the end-to-end service fail, automated remediation is completed, resulting in 95% of situations being rectified through destroy/create concepts and/or other immediate remediation solutions; the net effect is zero business impact.

5.)  Automated deployment of resilient services - nodes and components are deployed and managed through automation in a way that allows for 100% uptime (zero business impact), through methods such as affinity, striping across multiple points of failure, and active/active deployment across multiple data centers and disaster zones.  All of this is driven by a portal choice on the application's resiliency requirements, with minimal complexity.  Applications built through PaaS are always built with active/active resiliency and with Design for Failure elements enabled.

6.)  All aspects of the solution are available through open APIs and a rich but simple UI, or an API layer that allows for usage of different service providers or platform solutions, enabling write-once methods with backwards compatibility at the application layer.  Features are exposed via a control panel that permits the cloud consumer to manipulate backup schedules, patch parameters, alerting thresholds, and other key elements for supporting a production service.  Integrated dashboard views are available for different participants: operations, end-user, and management.

7.)  Security - security assurance allowing for trusted computing for the compute and data components of cloud hosting environments.  Levels of trust are available through programmatic queries and the UI, with configurable settings to establish the level of trust where security standards do not yet exist; this configuration could include logical segmentation, physical segmentation, and authorized user roles, as well as elements such as encryption of data at rest, in motion, or in memory.

8.)  Exposure of scale-out data services (relational, structured, unstructured, file shares) through APIs, with replication between all necessary locations based on the placement of nodes supporting the application.

9.)  PaaS layer for both Java and .NET applications - with associated IDE, manageability, data, and compute services exposed at the PaaS layer instead of IaaS.  The PaaS layer should automatically enable key design elements such as automated elasticity, automated deployment of resilient services, secure code on a trusted platform, and client awareness.

10.) Select and choose web services for consumption, with appropriate interfaces exposed based on the portal choices about the business solution being developed - encouraging reuse of existing web service stores in both the public and private cloud, and providing a community of mash-ups for specific business processes, with the associated IaaS and PaaS underlying technology exposed as needed for the use case described in the portal.  The net effect is the ability to take an innovative idea to a production service in < 1 day.
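
To make item 2's automated sourcing concrete, here is a minimal sketch of a placement decision, as promised above. The security classifications, capacity threshold, and locations are illustrative assumptions, not Intel IT's actual policy.

```python
# Hypothetical placement logic for item 2 (Automated Sourcing): pick a
# hosting location from portal inputs. All names and thresholds are invented.

from dataclasses import dataclass

@dataclass
class WorkloadRequest:
    security_class: str      # e.g. "public", "internal", "restricted"
    peak_cores: int          # capacity the workload may demand
    latency_sensitive: bool  # whether it must stay near its users

PRIVATE_CAPACITY_CORES = 512  # assumed free capacity in the private cloud

def place(req: WorkloadRequest) -> str:
    # Restricted data never leaves the private cloud in this sketch.
    if req.security_class == "restricted":
        return "private: primary data center"
    # Latency-sensitive work stays close if capacity allows.
    if req.latency_sensitive and req.peak_cores <= PRIVATE_CAPACITY_CORES:
        return "private: nearest data center"
    # Otherwise burst to a second site, then to the public cloud (item 1).
    if req.peak_cores <= PRIVATE_CAPACITY_CORES:
        return "private: secondary data center"
    return "public cloud (burst)"

print(place(WorkloadRequest("internal", 2_000, latency_sensitive=False)))
# -> public cloud (burst)
```

The same decision function could be re-evaluated at runtime, which is what would make the dynamic migration described in item 2 possible.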

 

Are you doing similar things in your cloud environment?  Are you doing these today in your private cloud?  As usual, I'd love to hear your thoughts on where you are going, as many of us are on this same journey to transform how IT is used to help things happen faster, better, and cheaper.

-Das
Intel IT Cloud Lead

IT organizations are constantly seeking ways to add more strategic value for their organizations, and this holds true inside Intel IT.  In my blog earlier this year, "What does the CFO want from the CIO", I discussed that one of the value-add benefits Intel's CFO wants from IT is to see more product influence from the Intel IT organization.  After hearing about this goal, I spent some time learning how Intel IT influences Intel's product direction.

 

One of the key ways we work to meet this expectation is through a formal program (we call it IT2IAG - IT to Intel Architecture Group) that actively partners with Intel product engineers, architects, and Intel Labs.  Interestingly, the motto for this program is "We don't make Intel products, we help make Intel products better!"  As the IT department inside Intel - an in-house user of Intel technology and a Global 500 IT shop - Intel IT has an ability and an obligation to contribute to the definition and development of Intel platforms and solutions.

 

Intel's CIO initiated the IT2IAG program in 2008 with the goal of tapping into Intel IT's enterprise expertise and creating an innovation-focused environment to help improve Intel's enterprise roadmap.  We share our lessons learned and key insights internally through regular engineer-to-engineer collaboration sessions and formal internal executive reviews (called Five by Five meetings).  Since 2008, this program has coordinated nearly 150 projects and has another 50 in progress today.

 

These projects span six focus areas, ranging from product definition to testing to deployment practices:

 

  • Usage Models: Identify how technologies and products would likely be used in the enterprise, and the associated value and benefits of those usages
  • Product Requirements: Identify required features, capabilities, functionality, and supportability with an emphasis on Enterprise and IT value
  • Technology Evaluations: Conduct limited technology and product testing in a pre-production controlled environment
  • Proof of Concepts: Conduct in-depth technology and product testing in a production environment with full support
  • Deployed Capability Assessments: Drive new feature development, requirements, and usage models for early adopted technology and provide defect detection and reporting
  • Strategic Discussions: Participate in discussions to help set the strategy for a particular technology, product or platform.

 

We also have chosen to share many of the key lessons learned and best practices gained through the IT2IAG program externally with industry peers through the IT@Intel program - using a variety of forums including face to face meetings, industry events, content publication and social media.  By engaging our peers in these open discussions we also gain valuable feedback on how other IT shops are using technology to enhance business value in their organizations.

 

Some of the IT2IAG projects led to the publication of these papers this past year:

 

 

Visit our website for more Intel IT best practices or if you would like to get regular updates about Intel IT, join the Intel IT Center program.

 

What non-traditional programs or processes does your IT group use to create value unique to your business?

 

Chris

In my last blog, Part 1, I provided some details of ways to improve information security when working with a low budget. One main area of focus was ensuring sound security policy and integrating security awareness training into other processes within an organization. There are many other opportunities to integrate information security best practices that increase awareness and build on the organization's information security posture. Here are a couple more ideas:

 

  • Find ways to integrate information security risk assessments into existing processes so as to identify risks at early stages of product or solution development. This allows the organization to evaluate the best mitigating controls early, when they are far cheaper than adding them at deployment.  When defining the budget for a new solution or product rollout, the required security management, technical, and physical controls should be considered ahead of time so that there are no surprises after implementation.
  • Evaluate the organization's purchasing process. If technical controls are required by a security policy or risk assessment and purchases are made from a project's budget, there may be an opportunity to justify funds for deploying the security control at an organizational level. It may be just a checkpoint during the procurement phase to evaluate whether there are several different deployments of similar solutions; if so, there may not be the consistency needed to ensure quality standards are met. Additionally, negotiating licenses or hardware with the vendor at a larger scale might save a significant amount of money. One other benefit of discussing security with purchasing representatives is the relationships that can be developed with the information security group, which can help significantly in understanding how business justification of costs works within the organization.

 

While working to integrate security within other processes, security staff should be aware of common misperceptions, such as being seen as a "road block" or as painting a picture that the "sky is falling". A positive attitude helps encourage open discussion of risk and acknowledgement of good catches. I'm sure there are other ideas for improving the security posture of an organization on a tight budget that others may want to share.
