Like most other companies these days, Intel is facing a growing demand for computing resources, and our computing costs are going up along with that demand. This prompted us to take a hard look at our data center strategy to see where we could make it more efficient.

 

Therefore, we’ve launched into a significant undertaking: we’ve started the process to End of Life (EOL) or consolidate our data centers down to just eight strategic locations. This effort is planned to take us eight years, but we’re working to pull it in sooner. The initiative enables us to reduce costs, improve server and storage utilization, create higher-density and more energy-efficient data centers, and keep pace with our company’s rapid rate of innovation. The effort could deliver up to $750M in Net Present Value. View my video blog for more information.



I would be interested to hear your comments and questions, and to have you share what your company is doing to drive efficiency within your data centers.

Want to get serious about information security? It is time for a Defense in Depth strategy. The key is interlocking Prediction, Prevention, Detection, and Response capabilities. Since no single solution provides comprehensive security, the way to achieve optimal security bliss is to apply a Defense in Depth approach of complementary capabilities that protect your computing environment and the data within it. This strategy is highly effective at providing security assurance; it is also cost efficient, scalable to large organizations, adaptive to changing threats, and proven to work.

 

The concept is straightforward. Establish a system of capabilities and services which align to attackers, their objectives, and the methods they are most likely to attempt. Couple this with an understanding that they will sometimes succeed, and embed the fact that at every turn there exists a learning opportunity to improve the system.

 

 

[Image: Defense in Depth model diagram (ISDefDepth01.jpg)]

 

 

Prediction:

Security threats are about opposition. These threat agents are living, breathing opponents who are creative, knowledgeable, motivated, and have personal objectives in mind. These agents utilize available methods and resources to achieve whatever goals they seek by leveraging vulnerabilities in people, computing systems, and communication networks. In total, this represents a massive potential target landscape to be protected, edge to edge. Good luck.

 

 

The reality is you can't protect against everything and everyone. It is cost prohibitive and in most cases impossible anyway. Although the truly paranoid may disagree, not everyone is interested in attacking you, and within the realm of possible attack methods, it is more than likely only a few would be employed. The "path of least resistance" rule applies here.

 

 

A common pitfall is to rely exclusively on vulnerability assessments to determine where to focus. Although vulnerability assessments are valuable, they are misleading if they are the only source for Prediction. Understanding your opponent is fundamentally different from being aware of the weaknesses inherent in your environment. Rely on the latter alone and you will expend effort on areas which will never be targeted for exploit, leaving fewer resources for the areas actually under siege.

 

 

The best security professionals understand the relationship between attacks and the environment they protect. They marshal their resources to intercept the most likely attack vectors for the greatest effect. Prediction is the first step in the efficient use of security resources. Knowing why your organization would be attacked, the likely targets, and the 'easy' ways which tantalize attackers provides the insight necessary to prevent such incidents.

 

 

 

Prevention:

This is where the magic happens. Preventing or deterring attacks is where everyone wants to be. Given the insights of Prediction, which include the incorporation of industry best-known methods, you can put forth a front line of defense representing the bulk of your cost efficiency. The purpose is to render ineffective the most likely methods the attackers will employ and to deny the attackers their objectives.

 

 

Prevention can take many forms, both technical and behavioral. Here are some examples, but don't take this as a complete list or even a recommendation, as selecting the right prevention solutions is specific to the environment and organization. Policy, security awareness, web proxies, and email filters are examples that intercept people-based attacks. Computing systems can be protected with anti-virus, system hardening, compartmentalization, authorization and authentication controls, host firewalls, and timely patching, to name a few. Communication network attacks are prevented mostly with high-speed automated technical solutions such as firewalls and proxies, as well as secure device configurations and a good network architecture plan.

 

 

At its best, a solid prevention plan will eliminate the threat agents' easy attacks and protect the critical assets most sought by the attackers. Doing a good job here translates into the biggest bang for the security buck.

 

 

"Two types of victims exist: Those with something of value and those who are easy targets. Therefore, don't be an easy target and protect your valuables."
     

Detection and Monitoring: ( ...when the security drums fail - video)

Unfortunately, at some point a number of attacks will succeed. Although it is most efficient to deter or prevent attacks, ignoring those that do get through the front-line defenses is ill advised. Security incidents and intruders must be promptly identified, cornered, and squashed like bugs. The first step is the ability to rapidly ascertain when the Prevention defenses have been breached and to track the actions of the buggers. Detection and monitoring capabilities sound the alarms and direct the Response resources to the source. Speed and accuracy are most important in detection. However, it must be designed to look in the right areas, as it is cost prohibitive to watch everything. Again, Prediction can play a role in deciding what to watch as well as how to monitor.

 

Response & Recovery:

How an organization responds to successful attacks largely determines the residual losses finally realized. When an event occurs, having the right processes, people, tools, and capabilities in place to contain it is critical. Time is on the side of the attacker. The goal of the security professional is to eradicate the security problem and restore the environment to normal operations. This may range from minor efforts to catastrophic recovery. The earlier the Detection capabilities alert the organization, the easier it is to corral the issues and recover. The savviest attackers are stealthy. They want plenty of time to work on achieving their objectives, and they dig deep like an infected tick. The longer they are inside, the more damage they can cause and the more difficult they become to eradicate.

 

Don't be caught without proper Response and Recovery capabilities. An inability to restore the organization to a safe and normal state translates to hemorrhaging money, time, resources, productivity, and maybe worse.

 

 

Continuous Improvement:

Information security is a continuous process. Key learnings from every event can improve individual areas as well as feed the Prediction services, giving a better understanding for the next time around. Defense in Depth can successfully be managed centrally or in a distributed model, as long as the overall strategy remains intact and the interactions drive continuous improvement.

 

 

 

 

 

If you are ready to take the Defense in Depth plunge, you will be rewarded. Interlocking your strategy in a coherent manner gives better insights to reach and maintain your optimal level of security.

 

The Problem of Measuring Information Security

Getting a Return on IT Security Investment

Information Security Defense In Depth Whitepaper is Now Available

Enough fluff, smoke, and flash: get to the point.

Why have security?

 

At the end of the day, it is all about loss. If you don't like experiencing loss then you must do something to avoid, minimize, or control it. Welcome to the world of Security.

 

Let's first get something out of the way. If you are seeking to eliminate all loss, I admire your enthusiasm, but you are out of your mind. Totally eliminating loss would be wildly expensive and in most cases impossible. How much would it cost to eliminate all auto theft in the world? Much more than is feasible, as just about any solution you propose would have some weakness and require additional measures, which in total would exponentially increase the cost as you near 100% effectiveness. It would become more cost effective to find a better replacement for cars, and destroy them all, rather than prevent all future thefts. Optimal security is not about 100% protection, but rather a balance of spending, prevention, and acceptable losses.

 

 

 

The Profile of Loss

Back to reality. Security is about preventing loss, and some would argue managing loss or the risk-of-loss. Well, it is splitting hairs, but I would agree with both as they are one and the same. When we talk about loss, it encompasses all the tangible costs and impacts as well as the intangibles of missed opportunities, reputation, and goodwill. Only a few types of loss can easily be measured, and most cannot easily be mentally grasped, much less quantified.

 

Security strives to prevent the 'loss' of reputation, financial assets, customer goodwill, operations uptime, computing resources, personnel productivity, intellectual property, liability protection, and the list goes on. Some of these are obvious, such as a worm which brings your operations to a grinding halt for two days. Others are not as obvious. Losing Personally Identifiable Information (PII) of customers would open the liability of lawsuits, potentially incur governmental fines, tarnish the corporate reputation, sour customer goodwill, and invoke long-term recovery costs. Failure to meet Sarbanes-Oxley requirements may result in a CFO indictment and the associated difficulties of finding a temporary replacement while your executive spends an extended vacation in a federal penitentiary. A single security incident can inflict many different types of losses, which in turn may vary wildly in overall impact.

 

 

The Evolving Security Landscape

All security programs exist in an evolving state. The enemies get smarter, move faster, and grow. The technology by which information flows rapidly changes. The very organization being protected and the assets within evolve over time. Regulations, customer expectations, experts' recommendations, and industry best-known-methods morph on a continual basis at a dizzying rate. The effectiveness and efficiency of security varies due to these external drivers as well as internal reasons.

 

 

So what does security look like over time? What are the key indicators? Here is my perspective. An organization will experience loss, period. If people are involved and any type of value is inherent, loss is expected. No surprise here. To get a better insight, let's apply the Greed Principle.

 

 

Greed Principle

From a security perspective, greed is a double-edged sword: bad because it drives people to break the rules for their own benefit, good because it gives security continuing opportunities to catch them. The Greed Principle simply states, "Losses will increase if unchecked." This principle manifests itself in many different ways, but basically, if someone is successful at finding a way of stealing $10 from you, they will continue unless something intervenes. In fact, they will increase the amount they steal over time. If it worked for $10, why not try $15, and so on? Because greed is a strong emotional driver for the bad guys, it provides more and more opportunities for the good guys to detect them. Hence greed being both good and bad.

 

 

The greed cycle may be disrupted. Intervention may be in the form of additional controls, prevention, deterrence, social pressure, or direct interdiction just to name a few. Many different mechanisms can influence an attacker. Ultimately, unless something changes, greed guarantees losses will increase over time.

 

 

Instituting a decent security program is a surefire way to disrupt the unchecked losses. Even a completely mindless security measure can have a great impact. Ever wonder why sales associates say 'hello' to you when you enter a boutique shop? Even if they don't have time to help you directly, they will make eye contact, greet you with a smile, and say hello. Is this for better customer service? Yes, that is one side benefit, but the primary function is to reduce shoplifting. Most small stores don't have the money to maintain a security staff, and shoplifting can be a major problem (last I checked, retail prices are ~15% higher to cover the costs of security and residual losses). The simple recognition of someone entering a store has been shown to dramatically reduce the chances they will steal. In larger retailers, where they have a security staff, you may not get such a greeting (unless you wander into a predatory commission sales area).

 

 

 

 

The Security Maturity Model

The initial landing of a security program will affect the losses from attacks. But there is a price, namely the cost of security. Security spending bubbles before stabilizing in the maturity phase, where it becomes more effective by lowering losses and more efficient by optimizing spending. Management usually has a firm hand in the reduction of spending, as they play an important part in keeping tension in the system.

 

 

So what do you get for your money? The amount of loss which did not occur, because of the influence of security, is the Loss Prevented. The more loss prevented, the better. But it is relative, as the cost of security plays into the efficiency calculation. Basically, (Loss Prevented) - (Cost of Security) is one measure of value. A negative number is mostly unfavorable, indicating you are spending more on security than you are preventing. I wouldn't recommend that model unless what is being protected is irreplaceable (life safety, unique items, etc.).
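To make the arithmetic concrete, here is a minimal sketch in Python with invented numbers; it simply applies the (Loss Prevented) - (Cost of Security) measure described above and keeps residual loss as a separate gauge.

```python
# All figures below are invented, purely to illustrate the calculation.
loss_prevented = 4_000_000    # estimated loss that did not occur because of security ($)
cost_of_security = 1_500_000  # what was spent on the security program ($)
residual_loss = 250_000       # loss the organization still experienced ($)

net_value = loss_prevented - cost_of_security
print(f"Net value of security: ${net_value:,}")              # positive -> preventing more than you spend
print(f"Residual loss (is it acceptable?): ${residual_loss:,}")
```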

 

 

Lastly, one other factor must be discussed. Sadly, the organization will still experience loss, regardless of how much you spend on security. This is Residual Loss. Nobody really likes to talk about this ugly fact of life, but it is important: it is the gauge by which the organization determines what is acceptable.

 

 

Reasonable Expectations

Every security program must continually evolve to align to a changing landscape of attackers, methods, and alterations in the environment being protected. Over the long run, a good security program will get better and cost less.

 

 

I have rattled the 'optimal security' saber before in previous blogs and it continues to hold true: Optimally, an organization should spend the amount of money on security which prevents enough loss to bring the residual losses to an acceptable level. Only management can decide exactly where the sweet spot exists for any given moment.

Bob_Duffy

Intel® X38 Express Chipset

Posted by Bob_Duffy Oct 22, 2007

The buzz has started on Intel's X38 Express chipset, which makes use of next-gen PCI Express 2.0 connectivity.

 

 

Geoff Gasior from The Tech Report takes a look at how the X38 chipset stacks up.

"...the X38 takes a major step beyond the P35 with its 32 PCI Express 2.0 lanes, which make the X38 the first chipset to offer second-generation PCI Express, ensuring plenty of bandwidth for future graphics cards. The X38's full 32 lanes also make it the first Intel chipset capable of supporting dual-x16 CrossFire configurations.

The X38 has other perks, too, such as support for DDR3 speeds up to 1333MHz. DDR3 memory modules have quickly scaled to 1333MHz and beyond, making support for faster memory an attractive feature. However, DDR3 still carries a hefty premium, and we suspect most enthusiasts will prefer to stick with DDR2-based X38 implementations for now. "

 

Tech Report puts together an impressive report running a number of tests on the first X38 boards from Asus (Asus P5E3 Deluxe WiFi-AP @n) and Gigabyte (Gigabyte GA-X38-DQ6). Check out the full report and let us know what you think.

Within enterprise and large networks we are seeing a diverse set of users and computers, and keeping the network secure is becoming a challenging job.

 

In response, within the corporate network, Intel IT initiated the on-connect authentication (OCA) program, locking down and enabling security on network access ports using the 802.1x standard and port security. 802.1x has been around for a long time, but it has recently picked up momentum, and for a big network it is not an easy job to deploy and maintain. In a two-site pilot deployment, we gained insights, formulated best known practices, and developed automated tools and a strategy for an efficient global rollout to lock down every single access port at Intel. I hope you find our experience useful, and I would also like to hear about yours.

 


Update: My white paper is now posted. Check it out and let me know your thoughts: Securing the Corporate Network at the Network Edge

Tune in 6:30 Monday 10/22/07

 

Chat live

 

The Social Media Club of Silicon Valley will be at Intel Headquarters on Monday, October 22 from 6:00 to 8:30 p.m. I will be one of many cool cats discussing Social Media and the Enterprise. If you can't attend, watch the live webcast here.

 

The panel will be led by Shel Israel, who co-authored, with Robert Scoble, a book on how blogs are changing the way businesses talk with customers.

 

Panel members will include:

 

 

Also on hand will be some smart folks from Bay Area NBC affiliate KNTV-TV, along with some familiar voices from this web site (Open Port), to do a bit of show and tell.

 

Register to attend the event here and add it to your Upcoming events listing here.

 

If you can't make it in person, come back to this post to watch and post your questions live.

Do you love an application and want to share it with the world? Well then go to Cool Software and give it... well a "Digg" to borrow a term from another site.

 

Cool software, from the ISN guys, allows the online community to post information about software applications they think are awesome. The more people who vote for an application, the cooler the application is.  What a great idea... wish I thought of it!

 

 

For the week so far, the top votes go to:

 

  • GoogleEarth 32 votes (Got to agree, pretty neat, I used GoogleEarth to virtually remodel my Family Room)

  • deliGoo 20 votes (Delicious Search Engine)

  • We+ 19 votes (social media platform)

 

So if you're a Visio nut, love your NeoPets screensaver or are simply addicted to vampire biting friends on Facebook, head over to coolsw.intel.com to make it cool.  Hmmm maybe they can add an uncool feature?

Bob_Duffy

Skulltrail - 8 cores of fun!

Posted by Bob_Duffy Oct 14, 2007


 

Sneak peek at a Skulltrail system using two 45nm Quad Core Xeon processors (Harpertown) running at 4GHz.

 

From Channel Intel.

Do not wait for an alarm or failure: give your data center a "health check" using a simple handheld infrared (IR) gun. This tool can provide early warning of electrical breaker overload, CRAC unit calibration issues, server air supply stratification, and the source of CRAC short-cycling. See the image below and use the number references as a legend. The cost of the tool is between $100 and $500; the higher-priced guns are recommended for their additional features.

 

1. Check temperature range of breakers

Check the panel cover for ambient temperature, then the breaker temperature range. Look for outliers, hot and cold. A hot breaker could indicate a loose wire or an overloaded circuit.

2. Check under floor for poor air flow

Floor tile temperature is a quick check for restricted air flow or areas beyond the CRAC's range.

3. Check actual temperature of delivered air (Supply air)

Concrete in front of CRAC should be around 55 degrees Fahrenheit.

4. Server intake temperature at the bottom of the rack frame

Compare the rack frame at the first server position with the temperature at the top of the rack; the difference shows air temperature stratification or rack heating from conductive heat loads. A range of 6 degrees is good. If it is more than 10 degrees, look for hot air mixing from above or behind the servers. A maximum intake air temperature greater than 90 degrees is a great risk to the server platform. (A simple way to turn these readings into pass/investigate flags is sketched after this list.)

5. Server intake (supply) temperature at the top of the rack frame

Plus 6 to 10 degrees is the range from good to poor. (See the note in item 4 above.)

6. Incoming air (return air) temperature off the sheet metal frame

Temperature in the center of the CRAC filter bank is a good indication of the actual ambient mixed air returned to the CRAC. Compare this temperature with the CRAC thermal readout for an indication of short-cycling or a bad CRAC temperature sensor.
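As a rough illustration (not part of the original checklist), here is a small Python sketch that turns a few of the readings above into pass/investigate flags. The thresholds come from items 3 and 4; the sample readings and the 5-degree supply-air tolerance are my own assumptions.

```python
# Thresholds are taken from the checklist above; readings are made-up examples.
def check_rack_intake(bottom_f, top_f):
    """Flag rack intake stratification from bottom-of-rack and top-of-rack readings."""
    spread = abs(top_f - bottom_f)
    if max(bottom_f, top_f) > 90:
        return "RISK: intake above 90F threatens the server platform"
    if spread > 10:
        return "INVESTIGATE: more than 10F of stratification - look for hot air mixing"
    if spread <= 6:
        return "OK: stratification within 6F"
    return "MARGINAL: 6-10F spread, keep watching"

def check_supply_air(concrete_temp_f, target_f=55, tolerance_f=5):
    """Compare CRAC supply air against the ~55F guideline (tolerance is an assumption)."""
    return "OK" if abs(concrete_temp_f - target_f) <= tolerance_f else "INVESTIGATE"

print(check_rack_intake(bottom_f=68, top_f=81))   # 13F spread -> INVESTIGATE
print(check_supply_air(concrete_temp_f=62))       # 7F off target -> INVESTIGATE
```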

 

 

 

 

See previous Blogs at

Data Center Toolbox for Power and Cooling

Data Center Toolbox "Watts per Square Foot of What"?

See published articles at

http://searchdatacenter.techtarget.com/originalContent/0,289142,sid80_gci1275008,00.html

http://www.cio.com.au/index.php/id;537667845;fp;4;fpid;51245

http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9028098&pageNumber=1

 

 

 

Please comment on and rate this Blog.

 

 

New topics coming soon:

"Generic Data Center Racking, Cost and Space Benifits"

"Data Center Layer One and Structured Cabling Designs, Without Costly Patch Panel Installations"

"Server Power Cord Management"

"Humidity Management to "Humidify or Not Humidify"

 

Disclaimer

 

The opinions, suggestions, management practices, room capacities, equipment placement, infrastructure capacity, power and cooling ratios are strictly the opinion and observations of the author and presenter.

The statements, conclusions, opinions, and practices shown or discussed do not in any way represent the endorsement or approval for use by Intel Corporation.

Use of any design practices or equipment discussed or identified in this presentation is at the risk of the user and should be reviewed by your own engineering staff or consultants prior to use.

 

 

Matt Rosenquist, Information Security Strategist at Intel, says that measuring success in the security industry is difficult, since there isn't a perfect tool for measuring what doesn't happen. In this podcast, Matt talks about how Intel approaches security. How is measuring security programs any different than measuring other IT or production programs? The heart of the problem is in trying to measure what does not occur. Security initiatives strive to prevent loss, so in effect they try to make something not happen or to lessen its impact. And if something does not occur, how can you measure it?

 

 

Discuss this topic and more with Matt in his recent blogs:

The Problem of Measuring Information Security

Managing the Effort to Measure Security

Practical Aspects of Measuring Security

 

First let me say I'm not on the inside track with Moorestown. I'm an outside observer with my own perspective on this product, but I have to say... I think this will be HUGE. There was lots of talk about the Moorestown platform at IDF this year, and I've heard many refer to it as the iPhone killer, or the next-generation iPhone. The game changers are the size, the processing power, and the WiMax capabilities. This is much more than anything in the market now. It can be almost anything you want it to be, and what you want it to do might be more about what devices it talks to. Here's my personal speculation on potential uses for Moorestown.

 

Harmony Remote Killer: This one is easy. Unlike the iPhone, with this kind of device you should be able to add and download applications and configure it to do pretty much what you want it to do. It's the size of a remote. It has Bluetooth and WiMax. It should be able to talk to all of your AV stuff and replace your most advanced universal remotes.

 

GameBoy/PSP Killer: This will run on Intel's next-generation 45nm chips. It should far exceed anything any handheld game system can do today. You could host games on the fly with people near you or host over the Internet. I actually believe this could be an Xbox killer. It will have the horsepower, and it will be ultimately connected. It just needs peripherals like a dock or wireless connectivity to a large display and keyboard. Drop it on your coffee table, turn on your wall-mounted LCD, pick up a wireless controller, and you are gaming.

 

Desktop Killer: Yes, a desktop killer. Again, it should have the horsepower. It will have high-speed connections and a full-blown browser. More and more apps are moving to the web. There's a lot of talk about the death of the application, as applications can be run in the browser. Drop it on your desk, have it detect and sync with your wireless keyboard, mouse, and monitor, and you are working. Also, more IT shops are starting to see the value of OS and application streaming technology, where you can pull down the apps you need when you need them. Edit a spreadsheet, crop a photo, do a CAD design; all apps come from the network when you need them, wherever you are.

 

Storage may only be an issue for the few things you need locally. With WiMax, songs, videos, and applications could all be available at your fingertips, whether you have them stored on your PC, DVR, or with a service provider. You could ultimately have any data or any application on a powerful mobile device on your hip, in your pocket, or in your purse.

 

My perspective is that Moorestown is shaping up to be the ubiquitous everything device. I discussed this idea two years ago with an Intel engineer during a school fundraiser. I claimed that if Intel could create a device the size of a cell phone with the processing power of a PC, you would not need any other device other than peripherals. I was new, I was in marketing, and he thought I was nuts. And he pretty much told me so, citing that he didn't see how Intel would profit from it. A couple of weeks later I saw him again, and he was anxious to tell me he had just seen a presentation that discussed exactly what I was talking about. I'd like to think this is Moorestown... and personally I can't wait!!

The old axiom "work expands to fill time" seems to parallel a truth about client computing: capabilities expand to consume available resources. As an Enterprise Services Architect responsible for Intel's IT client architecture, I have seen firsthand how IT shops, software vendors, and users manage to throw everything but the kitchen sink into client systems, only to be surprised by the resulting hit on performance. Let's face it: at most companies, client performance is an afterthought that becomes important when people start screaming. Making matters worse, everybody wants to dictate what needs to be on the client, including business, productivity, communication and collaboration applications, security & manageability agents, connectivity managers, backup clients, personal user applications, and more recently virtualization applications.

 

It's time to break free of reactive performance management "tiger teams" and get serious about addressing client performance in a more proactive way. Intel IT has begun to take a much more comprehensive view of client performance and has instituted a new framework for client performance management. Here are ten key learnings that you may be able to use at your company.

 

 

1. Form a client performance virtual team (v-team).

This step is first on the list for a reason. Client performance cannot be approached from only one perspective, otherwise it wouldn't be such a difficult problem. Form a virtual team consisting of at least one person from each of the following areas: client platform engineering, security & manageability engineering, client support (helpdesk), release management, human factors engineering, and enterprise architecture. To keep us on task, our client performance v-team has the mission "to drive an intentional approach to client performance management".

 

 

2. Develop a process and identify tools to measure performance, establish benchmarks, and set performance targets.

This is a discipline that needs to be baked into your client engineering processes. In order to determine what performance should be expected for any generation of client in your environment, you first need a baseline for comparison. The best time to develop this baseline is when your next-generation client is ready for deployment and performance has been optimized. Industry and/or third-party benchmarking tools can be used to form part of the "performance profile" for your latest release. The choice of benchmarking tool(s) is less important than that you pick at least one and begin documenting the baseline performance of your clients. Data collected in a pilot under real-world conditions will also be useful in the baseline profile. The goal is to end up with something that can be used in the future as a yardstick for performance troubleshooting on the same generation of clients and for setting targets for the next generation client.
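As one way to picture the output of this step, here is a minimal Python sketch of a recorded "performance profile"; the build name, metric names, and values are all hypothetical, and your benchmarking tools would supply the real numbers.

```python
# Hypothetical baseline profile for one client build generation.
import json
from datetime import date

baseline_profile = {
    "client_build": "client-gen5",            # hypothetical build identifier
    "captured_on": date.today().isoformat(),
    "benchmarks": {                           # scores from whatever tools you standardized on
        "boot_time_s": 48.0,
        "office_suite_score": 1250,
        "disk_io_mb_s": 85.0,
    },
}

with open("client_baseline_profile.json", "w") as f:
    json.dump(baseline_profile, f, indent=2)
```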

 

 

3. Develop a process and identify tools for tracking and reporting platform performance on the installed base of clients.

This is related to, but different from, the last item, and it is more difficult to pull off without further impacting performance. Benchmarks and performance targets are important, but those are established in lab conditions or in controlled tests and created only occasionally. Here we need to actually instrument the client and provide supporting infrastructure to collect data from our live clients. To do this, you are probably going to need some sort of agent running on the client. The results from this data collection and aggregation ought to correlate with the feedback you are getting from the helpdesk regarding their top performance issues. One related idea we are considering is to deploy a tool that allows the user to check a performance status indicator on their desktop and/or push a button that sends a snapshot of their system status to the helpdesk when they are experiencing a performance issue.
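To illustrate the "push a button, send a snapshot" idea (this is not Intel IT's actual tool), here is a minimal Python sketch that gathers a quick system snapshot. It assumes the third-party psutil package is installed, and it writes to a local JSON file as a stand-in for whatever the helpdesk tooling would ingest.

```python
import json
import platform
import time

import psutil  # third-party package, assumed to be installed on the client

snapshot = {
    "host": platform.node(),
    "os": platform.platform(),
    "taken_at": time.strftime("%Y-%m-%d %H:%M:%S"),
    "cpu_percent": psutil.cpu_percent(interval=1),      # CPU load sampled over one second
    "memory_percent": psutil.virtual_memory().percent,  # RAM in use
    "process_count": len(psutil.pids()),                # crude proxy for how loaded the box is
}

# Stand-in for sending the snapshot to the helpdesk system.
with open("perf_snapshot.json", "w") as f:
    json.dump(snapshot, f, indent=2)
```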

 

4. Dedicate resources to forward engineering of performance enhancing capabilities.

There are a number of emerging technologies and capabilities that can actually deliver improved performance on the client, mostly under the general heading of quality of service (QoS). For example, there are some third-party products that will help with resource and process prioritization. There are also new capabilities baked into Vista that can be leveraged to improve performance, including I/O prioritization and client-side policy-based network QoS.

 

 

5. Establish and maintain a strategic client performance capability roadmap.

A strategic capability roadmap for client performance will help by defining targets and setting context for engineering activities. Many of these capabilities are discussed above, including those that enable performance management and those that enhance performance. Such a roadmap can also be used to drive application vendors to improve the performance of their applications.

 

6. Continuously validate platform performance against established benchmarks.

Modify your client release management process so that a comparative analysis can be done between your performance benchmarks/targets and the actual performance of the new client platform. You'll need to ensure that the benchmarking methodology you developed earlier can be exactly reused by the QA team.
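Continuing the hypothetical profile file from the earlier sketch, the comparative analysis could be as simple as the following; the 10% tolerance and the new results are invented for illustration, not recommended targets.

```python
# Compare a new build's results against the recorded baseline profile.
import json

TOLERANCE = 0.10  # allow up to a 10% regression before flagging (arbitrary example)

with open("client_baseline_profile.json") as f:
    baseline = json.load(f)["benchmarks"]

new_results = {"boot_time_s": 55.0, "office_suite_score": 1300, "disk_io_mb_s": 84.0}

for name, base in baseline.items():
    new = new_results[name]
    # Boot time: lower is better. Other metrics: higher is better.
    if name == "boot_time_s":
        regressed = new > base * (1 + TOLERANCE)
    else:
        regressed = new < base * (1 - TOLERANCE)
    print(f"{name}: baseline={base}, new={new}, {'REGRESSION' if regressed else 'ok'}")
```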

 

7. Institute an ongoing continuous improvement process.

Establish a rhythm for taking what's learned about performance from PCs in the wild and incorporating the fixes, BKMs, enhancements, and optimizations into the client build engineering process for future releases.

 

 

8. Don't play chicken with your PC refresh cycle!

You know how people still do things that they know they're going to regret? Exactly. If you know what the right PC refresh cadence is for your environment, don't mess with it! The school of hard knocks has taught us that when we stretch the lifetime of our PC fleet, we end up paying for it in the end with spikes in helpdesk call volumes, above-average failure rates, and complaints of performance degradation. The temptation is great to put off spending for a quarter or two, or to hold off to intercept a new version of an OS or a new hardware platform, but this is a losing strategy that will probably land you in tiger team **** and end up costing more in the long run.

 

 

9. Map out the client ecosystem and figure out where you can eliminate redundancy.

Take a fresh look at all the capabilities and products that are either installed on your client or that exist in the infrastructure that impact the client. This way you can identify where you may have redundancy that can be streamlined. For example, do you have two or more manageability agents with overlapping functions? Can you live with one even if it means giving up a feature or two? Don't forget infrastructure services that impact clients from afar. "Agentless" performance data collectors, for example, still have a performance impact on the client.

 

 

10. Establish client integration standards.

Assuming you have a healthy governance process, standards can act as both sword and shield to protect your client platform from being overrun by the barbarians. Like your city's building codes, these policies and related guidance can set a bar for application owners and service providers so they understand what is required and expected of them before they try to land anything on the client. Some IT shops have developed a "minimum security specification" that stakes out the absolute bottom line security controls that must be implemented in a given solution. Consider establishing a "minimum performance specification" to help educate application developers & vendors about performance optimization on your clients.

 

 

As the industry moves towards the next big leap, virtualization, I can't help wondering: will this be a security professional's dream or nightmare?

 

Disruptive technology:

I generalize virtualization as the necessary separation and compartmentalization of resources so things can be moved, consolidated, and managed better, across a wide swath of hardware platforms, users, and networks. It is a "disruptive technology" (not a bad term) which represents a fundamental change in how computer systems will operate, communicate, and be designed. It is a leap forward and represents greater agility, more functionality, and lower costs. The interesting security question is, what are we leaping into?

 

In the virtualization world you can name your poison... er, pleasure: server, client, hardware, operating system, software, and even data portability virtualization all exist or are in development. I am not going to differentiate or explain the differences. Instead I am taking the strategic point of view. All these areas will be developed and instituted in some fashion. The details are far from being worked out. From a security perspective, it is the big picture that is important at the moment.

 

History has shown that, in technology, the attackers have the advantage of 'initiative' over the defenders. Basically, the attackers innovate and security then responds. But will this hold true for virtualization?

 

The Security Dream:

Virtualization holds the promise of security paradise by making systems more robust, hardened, and simpler, and by enabling new capabilities that make security more effective and cost efficient.

  • Virtualization allows a much greater consolidation of hardware resources. Multiple OSes, applications, and databases on a platform equate to fewer platforms to protect. Consolidation and portability for efficiency's sake may result in less network traffic to monitor, scan, and secure

  • Virtualization allows for effective security sandboxes to be employed for un-trusted or questionable applications and processes

  • Segregation of resources for applications, processes, OS's, and users means a compromise in one will be easier to contain due to compartmentalization. This makes it tougher for an attacker to break a weak link and begin to elevate their control over a system

  • Application restoration is a snap, and full system restoration becomes easier when a client does bite the dust

  • Systems and applications can be designed to operate with multiple environments of trust: very secure, secure, marginally secure, and not-so-trusting secure, all on one box (or the informal version: I trust you with my sister secure, I trust you with my wallet secure, I trust you as far as I can throw you secure, and I trust you will steal from me the first chance you get secure)

  • Virtualization will drive standardization of application design and data types making them easier to secure

  • Failover systems become less painful to design and implement at many different levels

  • System upgrades become seamless as jobs can be moved temporarily to other systems and then returned without disruption

  • Virtualization and other supporting technologies will drive advances in real-time security state monitoring, potentially across the enterprise and deeply into applications, OS's, data, and users

  • My personal favorite is that eventually we will have the ability to monitor for suspicious activities from a trusted person, versus just looking at applications or data. Think insider threats. This will be the first significant advance in a long time for this problem

 

The Security Nightmare:

Virtualization may be the very bane of security for decades to come by circumventing every type of security technology and enabling new capabilities for attackers to do real damage, thus forcing an entire redesign and reinvestment of security.

  • At the highest level, virtualization offers pure stealth to an attacker. Currently, malware must hide, lie dormant, or be very quiet in order not to be detected. This limits what the bad guys can do. They must trade capabilities and impact for stealth. Not so with virtualization. Malware could have the best of both worlds

  • Total Control - it's mine, you can't find me, and if you do, you can't make me leave! I can see everything, I can control everything, and I can do anything! Mine, mine, mine! Control can extend well beyond a single system and permeate across the virtual domains, with the persistence requiring an entire group of machines be burned down and rebuilt with great care

  • Now for the sledgehammer effect. Virtualization technology will undermine every current type of security control (the short list):

    • Anti-Virus, HIPS/HIDS, and Host Firewalls - Cannot detect or monitor an attacker's activities in a higher plane of control, making them ineffective while still giving the illusion of security

    • Patching - By controlling virtual instances, and more importantly creating false ones, an attacker can have patches installed on fake instances, leaving the real one vulnerable and under the intruder's control

    • Security scanning, used to check the system's state of security, can be fooled into reporting back that all is fine when it is not

    • Encryption - At the right level, an attacker will be able to see before encryption, after decryption, and have your keys to decrypt at their whim

    • Security monitoring devices and agents can also be deceived, by showing them what they expect to see and nothing else

    • User privacy will be compromised at many different levels, opening up the risks of aggregation across multiple data sources

    • Adware/Spam filters can be subverted

    • Secure channels can be monitored by attackers and setup between compromised systems

    • Security forensics may become a nightmare for many years due to the complexities inherent to virtualization and the fact that a high level compromise invalidates the integrity of logs

    • Even NIDS/NIPS and network firewalls become less effective. Hardware consolidation translates to less traffic on the backbone network and more traffic between systems on a platform and within a local subnet. This gives less information to these network monitoring devices and lowers the chances they will detect malicious activity

  • The very same ‘sandbox' which can be used to isolate risky activities can be employed against security applications and processes, limiting their ability to control and protect the system

  • Virtualization adds more complexity, and therefore risks more confusion, when it comes to system management, especially patching and system scanning. Keeping track of who owns what is bad enough today. But at least if you track down a server owner, you can normally get a quick decision on when to patch and reboot. In the future, the server owner may not know who owns the virtual instances running on their machine. So how does one coordinate downtime, patching, or other change control issues? These delays may extend the window of vulnerability, giving attackers more options and targets

  • Fewer systems but more diversity and ambiguity give attackers places to hide and more opportunities to find a vulnerability

  • Virtualization portability will drive the standardization of application design and data types, making them predictable and easier to locate and compromise

  • Very complex designs which continually change are extremely difficult to restore and recover. Additionally, cascading failures can occur, bringing down multiple systems that in a stovepipe environment would be more insulated

 

Take the High Ground - Sun Tzu, "The Art of War"

The ultimate sweet spot for any computer attacker is to gain the deepest level of control, which in turn can control all other virtual instances. This is the proverbial high ground, which can see and control everything, yet not be seen if it does not want to be. Attackers are already making great advances and have shown the initial ability to take the high ground. Defenders are quick on their heels, finding ways of detecting and defending this vital area.

 

Who can make the final determination in this battle? Intel and other hardware designers, of course! You can't get any deeper than the hardware. Embedded security controls will be the key to victory. But here is the twist. You may have assumed I meant victory for the glorious and honorable path of security. You are wrong. It is just the key to victory, period. Security and administrative controls are just functions with great power. Whoever controls those functions will be the victor.

 

Sometimes, the computer industry is its own worst enemy. Infighting over standards, rushing products to market, designing security as a bolt-on afterthought, ill-designed security solutions, and so on may cause temporary self-destruction. Even when a security function is developed, there is no guarantee it will be embraced by the industry or the consumer. It will take a small army of very smart people across the hardware, OS, application, and security services to design robust controls which present the value proposition necessary for widespread adoption.

 

In the end, the age-old battle between attackers and defenders will continue to rage on. Virtualization is simply the next battlefield, a new landscape in which these players will innovate, respond, jockey for position, and struggle for dominance. The rules and possibilities have yet to be defined. All we know about computer security will be thrown on its side, and everything we do now will need to be rebuilt from the ground up. Virtualization is a brave new world, sure to bring both dreams and nightmares.

Things You Need to Operate a Successful Data Center Infrastructure.

This is number 2 in a series of Toolbox topics.

 

If you have spent more than three months in data center operations, someone has asked you, "What is your watts per square foot (W/sq.ft) data center design?"

 

Odds are your room design is somewhere between 40 watts per sq.ft and 100 watts per sq.ft. This value is most likely based on the room envelope, the wall-to-wall area including staging, telecom, tape storage, PDUs (Power Distribution Units), and CRAC units (Computer Room Air Conditioners); see the diagram below. Although this is the correct answer from the architect's perspective and for the electrical and mechanical capacity construction designs, it causes great confusion in the industry. What we really want to describe and reference is the area or space the work is being performed in. In other words, where the POWER (heat) is delivered and COOLING (heat removal) is required. To better understand this concept and use this knowledge to communicate with others, please review the drawing below; it shows the possible interpretations of a watts per square foot data center design (a worked example with assumed numbers also follows the definitions below). Note, as you go through the exercise, that I started out with a 50 W/sq.ft room and, by re-evaluating my environment, I created a room design at 130 W/sq.ft without spending a dime! The point is: do not be confused by the facts. You may have a 50 W/sq.ft room, but you can produce 130 W/sq.ft of capacity.



 

Data Center Math

Watts Per Square Foot Of What?


  • Room Envelope = Gross raised floor sq.ft. This is the wall-to-wall space of the entire room, including ramps, tape storage, PDUs, CRACs, and staging area

  • Production Area = Servers plus support equipment (traditional layout). This area is represented in blue and is the actual recommended access space (48 in. front, 36 in. rear) PLUS the direct support equipment (CRACs) that needs to be near the heat loads

  • Equipment Footprint or Work Cell = Racks + required access space (~16 sq.ft. per rack). This is the recommended access space (48 in. front, 36 in. rear) plus the average rack size (24 x 40 in.)

  • Server Rack Load = The actual electrical load of the installed server base in kW (kilowatts)
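Here is the worked example promised above, with made-up but self-consistent numbers that reproduce the 50 -> 130 W/sq.ft shift: the delivered power does not change, only the area you divide by does.

```python
# Assumed figures for illustration only.
total_load_w = 500_000        # 500 kW of installed server load
room_envelope_sqft = 10_000   # wall-to-wall, incl. ramps, staging, PDUs, CRACs
production_area_sqft = 3_850  # servers plus direct support equipment

print(f"Room envelope:   {total_load_w / room_envelope_sqft:.0f} W/sq.ft")    # ~50
print(f"Production area: {total_load_w / production_area_sqft:.0f} W/sq.ft")  # ~130
```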

 

Please see my earlier blog Data Center Toolbox for Power and Cooling. Please comment on and rate this Blog. New topics coming soon:

  • "Use of a Hand Held IR (Infra Red) Gun for a Data Center Health Check"

  • "Generic Data Center Racking, Cost and Space Benifits"

  • "Data Center Layer One and Structured Cabling Designs, Without Costly Patch Panel Installations"

  • "Server Power Cord Management"

 

Disclaimer

  • The opinions, suggestions, management practices, room capacities, equipment placement, infrastructure capacity, power and cooling ratios are strictly the opinion and observations of the author and presenter.

  • The statements, conclusions, opinions, and practices shown or discussed do not in any way represent the endorsement or approval for use by Intel Corporation.

  • Use of any design practices or equipment discussed or identified in this presentation is at the risk of the user and should be reviewed by your own engineering staff or consultants prior to use.

 
