For the last couple of years, as I have worked on capturing and communicating the value of server refresh, I have been asked about the sensitivity of the analysis for different degrees of consolidation. For me, the tradeoffs of how you consolidate and their relationship to the business benefits have been intuitive, and I could easily explain these concepts to the people I talked with. But, as they say, a picture is worth a thousand words.

 

It is obvious that the more you consolidate, the more you save and the faster the ROI arrives; equally obvious is the fact that if you replace servers 1:1, there is no TCO benefit, but there is a maximum performance gain. The challenge was explaining what happens if you do only a little consolidation and how that changes the benefit equation.
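To make the tradeoff concrete, here is a minimal sketch of the arithmetic. The server counts, per-server cost, and performance multiplier below are hypothetical placeholders for illustration, not figures from the Estimator:

```python
# Toy model of the consolidation tradeoff described above. All numbers are
# illustrative assumptions, NOT output from the Intel Xeon Server Refresh
# Savings Estimator.

OLD_SERVERS = 100     # aging single-core servers in the fleet (assumed)
ANNUAL_COST = 2_500   # assumed annual operating cost per server, old or new
PERF_GAIN = 8         # assumed per-server performance of new vs. old hardware

for ratio in (1, 2, 4, 8):                # old servers replaced per new server
    new_servers = OLD_SERVERS // ratio
    # Fewer boxes to power, cool, and support drives the TCO savings.
    savings = (OLD_SERVERS - new_servers) * ANNUAL_COST
    # Fleet compute capacity relative to the old fleet.
    capacity = new_servers * PERF_GAIN / OLD_SERVERS
    print(f"{ratio}:1  {OLD_SERVERS} -> {new_servers} servers  "
          f"savings ${savings:,}/yr  relative capacity {capacity:.1f}x")
```

At 1:1 the fleet keeps its size, so operational savings are zero but capacity is maximized; at 8:1 the savings are maximized while capacity roughly matches the old fleet. Everything in between slides along that curve.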

 

Personally, I have struggled to capture this relationship visually. However, a colleague and I were talking today, and he captured it well, giving me one of those "Ah ha" moments that I just had to share.

 

Let me explain this chart a little.  Each scenario shows the relative performance improvement and relative operational cost savings (TCO) of consolidating older single-core Intel Xeon 3.8 GHz servers purchased in 2005 or 2006 onto a new Intel Xeon processor 5600 series server released in March 2010.  The calculations were made using the Intel Xeon Server Refresh Savings Estimator, which was developed with key learnings from the Intel IT department.

 

[Chart: roi_sensitivity_chart_small.jpg – relative performance improvement and relative TCO savings for each consolidation scenario]

 

Because every situation is different and because your cost assumptions will be different, I invite you to customize your own server refresh scenario and discover the benefits.  You can change over 50 cost variables, refresh with or without virtualization, and change the beginning and ending server configurations in the tool to get the answer that is right for your organization.

 

I hope that what you find helps you realize the same thing that the Intel IT finance team did last year ... that "deferring our server refresh was something we could not afford to do."

 

Chris

If you are responsible for the security of a product in development, you may be wondering what options there are for security/penetration testing. First, you may need to know what is involved in security testing and, more specifically, what a penetration test is. A penetration test, or pentest, is a method of assessing the security of a computer product, system, or network by simulating an attack from a malicious source, which can be defined from a threat model. In most cases, a pentest is performed to validate security before shipment of the product, with the overall purpose of providing assurance that the security objectives and mitigations are correctly implemented, thus limiting the potential risk of exposure.

 

One important concept for anyone responsible for deciding who performs a penetration test is the difference between white box, black box, and grey box penetration testing. The difference is in the amount of knowledge of the infrastructure or product to be tested.

  • White Box Testing – provides the testers with complete knowledge of the product or infrastructure to be tested. Information provided to the testers often includes architectural specifications, source code, and infrastructure details such as network diagrams and IP addressing information.
  • Black Box Testing – assumes no prior knowledge of the product or infrastructure to be tested. The testers must try to figure out the inner workings of the product or infrastructure based on analysis of packaged documentation, shipped assemblies, and inputs and outputs.
  • Grey Box Testing – covers the several variations in between, where the testers receive partial knowledge. Penetration tests may also be described as "full disclosure", "partial disclosure" or "blind" tests based on the amount of information provided to the testing party.

 

So, who should perform a Penetration Test?

 

Many organizations are training software developers and architects to improve secure development practices. Some product development teams have begun using static source code analysis tools to find insecure function calls, nonexistent or improper input validation, and less secure coding practices. To be sure, increased security knowledge within a development team will definitely improve security as it is integrated into the Software Development Life Cycle (SDLC), and it raises awareness of potential security issues during requirements gathering, architecture, design, and coding of a product. In many cases, though, the investment in security training and tools for developers is expected to pay off to the point that penetration testing before shipment might be deemed unnecessary. Although these best practices can really improve security during development, they should not nullify the need for the formal security testing that a penetration test provides.
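As an illustration of the kind of issue such static analysis tools catch, consider the sketch below. The function names are hypothetical and the snippet shows the general pattern rather than any one scanner's rule set; a tool such as Bandit for Python, for example, flags subprocess calls that combine shell=True with unvalidated input:

```python
import subprocess

def ping_host_insecure(host: str) -> None:
    # A static analyzer flags this: shell=True with unvalidated user input
    # allows command injection, e.g., host = "example.com; rm -rf /"
    subprocess.run(f"ping -c 1 {host}", shell=True)

def ping_host_safer(host: str) -> None:
    # Validate the input against a conservative allow-list first...
    if not host or not all(c.isalnum() or c in ".-" for c in host):
        raise ValueError(f"invalid host name: {host!r}")
    # ...then pass an argument list so no shell ever parses the string.
    subprocess.run(["ping", "-c", "1", host], check=True)
```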

 

It is also important to note that a vulnerability found at any stage of development, or during a penetration test, may not have a known exploit. This means that a buffer overrun discovered during security testing should not have to be exploited to prove that the code is insecure. Writing exploits to prove code is insecure may be out of scope for a security test because of the time it can consume. Any potential security issue identified during a penetration test should be evaluated based on risk and, if necessary, mitigated using secure coding practices.

 

Internal testing within the product development team:

 

Pros - An advantage of this type of testing is that some security-related issues can show up early in the development cycle, allowing adequate time for changes at the architecture or design phase. The developers can perform the security-related tests with the white box approach, as they have the most complete knowledge of the product. If there are talented individuals within the group, sharing their knowledge can be a great benefit to others. Additionally, if the corporation has purchased security-related tools, those tools can be leveraged.

 

Cons - The testers may lack experience, and if an attempt is made to find an experienced security tester, the resource may not be available. Another likely con for developers testing the security of their own code is that the mitigations or security objectives might not be tested as effectively, due to a lack of the objectivity that is important for any testing of a product. The Quality Assurance (QA) team could get involved, but it is important to note that a QA testing team is responsible for ensuring functionality in the deliverables and is not usually specialized in this area.

 

Insource - Testing within the same company using an internal dedicated security validation team:

 

An internal team may already be in place in some organizations and can be engaged to completely manage the security testing for a product. There are positives and negatives to this strategy as well.

 

Pros – This option allows the security testers and developers to share knowledge of the product and communicate directly with each other about architecture, design, and implementation, so the security validation team can take a full white box approach if desired. Because the testers may be permanent employees, fewer legal agreements are needed than with a completely outsourced solution when there are IP concerns. This option can also achieve a higher level of objectivity than a development group performing its own security tests. The biggest benefit is that it allows the most open and direct communication and collaboration between the security testing team and the developers.

 

Cons – Expert security testers are limited resources these days, so it may be necessary to pull in less experienced resources who can request support from those who are more experienced. If a formal certification is required for the product, this option may not be suitable, as the organization would be self-certifying the security of its own product.

 

Outsource - Security Penetration Testing using an external dedicated security validation team:

 

In some cases, it is good to engage a 3rd party for penetration testing of a product. This option completely outsources the service of penetration testing to an external company that specializes in it.

 

Pros – The external security testers specialize in this type of work and may be dedicated to security testing. An outside source can provide the most objectivity. If needed, external security testing can provide certification to a specific level of security assurance. One way to get the most from this type of testing is to create a collaborative security test environment and work side by side, so that knowledge is shared between the development team and the security testing team.

 

Cons – This is most likely the costliest solution. Additionally, Intellectual Property (IP) protection is an issue, and legal oversight with Non-Disclosure Agreements (NDAs) is needed to lower the risk of IP disclosure. This can also limit the amount of knowledge that can be provided to the 3rd party, forcing a more black box or grey box approach to security testing ... though a black/grey box approach may be the intent (a pro) in some cases. Location and access to the testing environment may be limited for the customer, so validating the testing environment may be difficult, giving less assurance that tests are run against the most stable product(s) or version(s). There may also be less communication and collaboration between the testing team and the development team if the product is sent out for testing without expertise to assist the security testers in fully understanding the product.

 

To summarize, penetration testing is a valuable approach to validating security during development and before shipment of a product. If the risk associated with a product is valued high enough, it may be necessary to use all of the proposed options in some way, so the suggestions provided here can be combined to ensure the best security solutions are planned in from the start. Even though some security testing can be performed at different stages in the development of a product, a pentest is most commonly planned for and scheduled before shipment. But it is important to emphasize that security can never be appropriately and cost-effectively "tested in" to a product after development.

I just finished reading this whitepaper published by my peers (Alan Ross, Bob Stoddard) inside Intel IT.  The paper demonstrated how Intel power management technologies can be used to reduce power consumption in a virtualized data center.  The tests they ran were conducted with synthetic workloads to model a mixture of computational and I/O-intensive database workloads running in a virtualized environment.

 

The study set out to evaluate whether capping server power would impact performance, a key concern for any IT department looking to reduce power by limiting server power consumption, even temporarily. Using technologies called Intel® Intelligent Power Node Manager and Intel® Data Center Manager (Intel® DCM), running on servers based on the Nehalem architecture (Intel® Xeon® processor 5500 series), the team was able to show the ability to reduce power consumption by up to 20% without impacting run time.
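For readers new to power capping, the sketch below shows the general shape of the feedback loop involved. To be clear, this is a conceptual illustration only, not the Node Manager or DCM interface; read_power_watts and set_pstate_limit are hypothetical stand-ins for the platform's power telemetry and processor P-state controls:

```python
import time

POWER_CAP_W = 250       # assumed per-server power budget in watts
POLL_INTERVAL_S = 1.0   # how often to sample measured power

def enforce_cap(read_power_watts, set_pstate_limit, max_pstate=10):
    """Throttle processor P-states when measured power exceeds the cap,
    and relax the limit again when there is headroom."""
    pstate = 0                                    # 0 = full speed
    while True:
        power = read_power_watts()
        if power > POWER_CAP_W and pstate < max_pstate:
            pstate += 1                           # step down frequency/voltage
        elif power < POWER_CAP_W * 0.9 and pstate > 0:
            pstate -= 1                           # restore performance headroom
        set_pstate_limit(pstate)
        time.sleep(POLL_INTERVAL_S)
```

Because many workloads rarely run at peak, a cap like this can trim power draw while the throttling goes largely unnoticed, which is consistent with the up-to-20% result the team reported.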

 

Consequently, the Intel IT team sees several potential use models for these new power management technologies, including better data center capacity management, increased power savings, and improved business continuity.

 

 

Chris

On Friday, March 12, I will be presenting on the topic of Data Center Efficiency along with Chris Auger, Manager of Dell Enterprise Technology, and Ziff Davis's Aaron Goldberg.

 

A quick excerpt from the agenda is captured here:

The single biggest conundrum for IT is that of having to spend the bulk of the budget just “maintaining” existing systems, and having precious little to invest in the game breaking new applications that will really drive your company.

 

My involvement and focus will be to share how Intel IT operates and innovates across an enterprise network of 97 data centers worldwide to create business efficiencies and establish a foundation for business growth and agility.

 

To register for the webinar, go here

 

Hope to see you March 12, 2010, at 12pm EST / 9am PST.

(if you can't make it, drop me a note here)

 

Chris

It’s been a few months since I last blogged on energy use in the office.  If you remember from previous blogs, we ran a small proof of concept last summer studying different energy-saving techniques in the office environment and found that simply providing individuals with Awareness of their usage, plus some simple tips and tricks to reduce it, was the most effective.  We’ve been planning a follow-on PoC, on a much larger scale, for some time now, and I’m very excited to tell you that it is finally about to start.

 

In case this is new to you, here are a few links to previous postings to bring you up to date:

 

Metering the Office Environment

 

Awareness of Energy Use in the Office

 

Update on Energy use in the Office Proof of Concept

 

Energy Use in the Office Proof of Concept Results

 

Reducing Energy Use in Offices to Increase IT Sustainability (White Paper)

 

The new PoC will focus on Awareness only and will use a third-party tool to "soft meter" individuals’ PCs.  Participants will have access to their own information via a secure web portal and will be able to compare their usage both to others on their floor and to the entire PoC population at large.  We’re also installing two larger kiosk-type displays in a lobby and a café, where group- and building-level rollups of usage information can be seen as well.  Maybe there’ll be some friendly competition between floors to see who can reduce the most.
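If you're curious what "soft metering" might look like under the hood, here is a minimal sketch of the general idea: estimate energy from CPU utilization samples rather than from a physical meter. The wattage constants and the use of psutil are my own illustrative assumptions, not a description of the third-party tool we're using:

```python
# Rough sketch of "soft metering" a PC: interpolate power draw between an
# assumed idle and busy wattage based on sampled CPU utilization.
import psutil  # pip install psutil
import time

IDLE_WATTS = 30.0   # assumed PC power draw at idle
BUSY_WATTS = 90.0   # assumed power draw at 100% CPU
SAMPLE_S = 5.0      # sampling interval in seconds

def estimate_energy_wh(duration_s: float) -> float:
    """Accumulate estimated watt-hours over the given duration."""
    energy_wh = 0.0
    elapsed = 0.0
    while elapsed < duration_s:
        # cpu_percent(interval=...) blocks for the interval and returns
        # average utilization over it, as a percentage.
        util = psutil.cpu_percent(interval=SAMPLE_S) / 100.0
        watts = IDLE_WATTS + (BUSY_WATTS - IDLE_WATTS) * util
        energy_wh += watts * SAMPLE_S / 3600.0   # W * s -> Wh
        elapsed += SAMPLE_S
    return energy_wh

print(f"Estimated use: {estimate_energy_wh(60):.2f} Wh in the last minute")
```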

 

We should start collecting baseline data in about two to three weeks, with the PoC formally launching about 30 days thereafter.  I’ll be back soon with status updates.

 

In the meantime, here’s my question for you: Has your company done anything like this?  If so, what was that experience like?  If not, how do you feel about energy / PC usage data being collected (assuming only you have access to your individual information)?

 

-Mike Breton

IT Technology Evangelist

Increasing employee productivity is a core focus for any IT organization, including Intel IT.  From the selection of PC standards and the deployment of new technologies to the efficient operation of the service desk, all of these operational choices can have a dramatic impact on creating or hindering employee productivity.

 

A Flexible Standard: For the majority of our employees at Intel, the IT department has chosen a Mobile Business PC (sometimes called a rich mobile client) as our employee standard.  We have found that a platform consisting of a high-performance laptop with wireless support, remote management capability (Intel vPro), and advanced technologies (SSDs) provides a solution that fits the needs of the majority of our diverse workforce, keeping them computing even on the road or from home.  A recent employee study showed us that 80% of our employees work away from their assigned desk, meaning that local compute performance is vital for both work/life flexibility and business efficiency.  This whitepaper illustrates the key learnings from our migration from a desktop-based fleet to a mobile fleet, reducing TCO by 67% in the process.

 

Accelerating the Pace of Business: Helping our employees find, share, and use information in real time to make decisions is another aspect that can dramatically impact employee productivity. By deploying unified messaging solutions, social computing tools, and video conferencing, we are able to reduce Intel's operating costs while streamlining information sharing and collaboration. For sales employees who work remotely most of the time (our road warriors), access to information via corporate handheld devices keeps them in touch with their customers in real time, and now more employees are asking for access to corporate services on their own personal devices.  Intel recently enabled employees to sign up for a BYO smartphone program in which email, calendar, and contacts are made available on certain authorized personal devices.  This program is less than a month old (launched January 2010), and over 3,000 people have already signed up.

 

Improving User Experience: We deploy user-based IT tools to proactively manage employee PC environments, with maintenance processes that reduce the IT load and give valuable performance back to our employees.  In addition, by monitoring and fixing the root causes of PC blue screens proactively, we have cut the frequency of blue screens by more than 50%, further boosting productivity by eliminating problems before they occur. These techniques and technologies have pushed employee satisfaction with our service desk above 90 percent, with 80 percent of user issues resolved on the first call.

 

Explore our latest IT Annual Performance Report where we discuss more about the tools, techniques and technologies that we are using to help boost employee productivity and create business efficiencies inside Intel.

Chris
