Great video on a bunch of us getting together & talking.

 

Enjoy.

Things you need to operate a successful Data Center infrastructure.

 

This is the first in a series of Toolbox topics; others include:

"Watts per Sq. Ft. of What"

"Use of a Hand Held IR (Infrared) Gun for a Data Center Health Check"

"Generic Data Center Racking, Cost and Space Benefits"

"Data Center Layer One and Structured Cabling Designs, Without Costly Patch Panel Installations"

 

As a data center operations manager, you are responsible for the stability of the physical infrastructure of your environment. Often this requires support from maintenance and/or engineering staff to provide you with capacity and room loading calculations. To do your job efficiently without being reliant on others, you need a few tools to Help You Help Yourself. The first in this series is:

 

• Data Center Math

• Power and Thermal Measurement

 

Watts (W) = Volts (V) x Amps (A)

 

kW (kilowatts) = (Volts x Amps) / 1,000 (this is electrical heat)

 

British Thermal Unit (BTU): a measure of heat

 

One watt of power requires 3.412 BTU/hr of cooling

 

12,000 BTU/hr = one ton of cooling

 

Example:

120 Volts x 160 Amps = 19,200 Watts = 19.2 kW

 

 

19,200 W x 3.412 BTU/hr per watt = 65,510 BTU/hr ≈ 5.5 tons of cooling required

One ton of cooling = 12,000 BTU/hr
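For anyone who wants to script their own capacity checks, here is a minimal sketch of the math above (the constants are the standard conversions: 3.412 BTU/hr per watt, 12,000 BTU/hr per ton):

```python
# A minimal sketch of the "Data Center Math" above.
BTU_PER_WATT = 3.412   # 1 watt of load produces ~3.412 BTU/hr of heat
BTU_PER_TON = 12_000   # one ton of cooling removes 12,000 BTU/hr

def watts(volts: float, amps: float) -> float:
    """Watts = Volts x Amps."""
    return volts * amps

def cooling_tons(load_watts: float) -> float:
    """Tons of cooling required to remove the heat from a given load."""
    return load_watts * BTU_PER_WATT / BTU_PER_TON

load = watts(120, 160)                        # 19,200 W
print(f"{load / 1000:.1f} kW")                # 19.2 kW
print(f"{load * BTU_PER_WATT:,.0f} BTU/hr")   # ~65,510 BTU/hr
print(f"{cooling_tons(load):.1f} tons")       # ~5.5 tons of cooling
```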

 

 

Power Basics:

Reduce all loads, including cooling, to watts as the common unit of measurement. If you use watts as the common unit, you do not need amps or voltage when determining capacities.

 

 

 

Power Rough Rules of Thumb

• One average rack of 2U to 8U servers (40U total) uses ~5,000 watts

• One disk-type storage bay (24 inches) is ~5,000 watts

• One network equipment rack of ~30 to 40U of switches requires 5,000 W to 6,000 W

• The average server landing power requirement with redundant network and redundant disk storage is 400 watts per server

• The average server landing power requirement with a single network switch and single storage connectivity is 300 watts per server

• The average 1U server rack with 40 servers per rack ranges between 7,500 W and 9,000 W depending on utilization

• One blade center is 3,600 W to 4,000 W

 

Cooling Rough Rules of Thumb

• One blade center @ 3,600 watts requires 1 ton of cooling

• One rack of 2U through 8U servers (40U total) requires 1.5 tons of cooling

• Industry-standard rack doors can restrict up to 40% of the airflow

• Using relative humidity set points of 50% plus or minus 20% will reduce alarms and operating cost

• Supply air temperature at the server intake can be as high as 80 degrees Fahrenheit without issues
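As a sketch only, the rules of thumb above can be rolled into a quick room estimator. The per-rack wattages below are the rough figures from this post, not vendor data:

```python
# Quick room estimator built from the rules of thumb in this post.
RACK_WATTS = {
    "2U-8U servers (40U)": 5000,
    "disk storage bay (24 in)": 5000,
    "network (30-40U of switches)": 6000,
    "1U servers (40/rack)": 9000,   # high end of the 7,500-9,000 W range
    "blade center": 4000,           # high end of the 3,600-4,000 W range
}

def room_estimate(racks: dict) -> tuple:
    """Return (total watts, tons of cooling) for a {rack type: count} map."""
    total_w = sum(RACK_WATTS[kind] * count for kind, count in racks.items())
    tons = total_w * 3.412 / 12_000
    return total_w, tons

w, tons = room_estimate({"1U servers (40/rack)": 10, "blade center": 4})
print(f"{w / 1000:.0f} kW, ~{tons:.1f} tons of cooling")  # 106 kW, ~30.1 tons
```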

 

 

If this information is useful, please comment.

 

 

*Disclaimer

The opinions, suggestions, management practices, room capacities, equipment placement, infrastructure capacity, and power and cooling ratios are strictly the opinions and observations of the author and presenter.

The statements, conclusions, opinions, and practices shown or discussed do not in any way represent the endorsement or approval for use by Intel Corporation.

Use of any design practices or equipment discussed or identified in this presentation is at the risk of the user and should be reviewed by your own engineering staff or consultants prior to use.

 

 

Note, this conversation occurred in the SecurityMetrics email discussion group and is a repost of select dialogue. Thanks to all the contributors who granted me permission to post their comments.

 

Will the recent data breach settlement by TJX be a landmark case, setting the precedent for future lawsuits?

 

http://www.boston.com/business/globe/articles/2007/09/22/tjxoffers_deal_to_end_data_breach_suit/

 

This lawsuit focused on the 45.7 million credit and debit card numbers that were stolen from TJX by hackers. The company will settle the case by offering $30 store vouchers, which equates to valuing the customer's time at $10 per hour. TJX will hold a "customer appreciation" 15%-off sale and will also offer credit monitoring and identity theft insurance to some customers. The total costs to TJX for this incident are around $256 million.

 

The Math of Liability Settlements

 

The discussion group was alight over the paltry $30 restitution per customer.

 

Dan Geer shed some light on the numbers by citing a legal precedent for liability and doing the math.

 

Given P = the probability of loss
     L = the amount of said loss
     B = the cost of adequate precautions
     Then Liability whenever B < PL
     So, taking data from the published FTC study[2] of 2003 where they said that 4.6% of the US population had had an identity theft problem and that in solving it the affected had expended 300 million hours and 5 billion dollars, and using the then Federal minimum wage, we'd thus have:
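(The arithmetic itself did not survive the repost. The sketch below reproduces the quoted $30.11 figure, assuming the 2003 federal minimum wage of $5.15/hr and the FTC survey's roughly 10 million victims; both are assumptions on my part.)

```python
# Reconstruction of the elided arithmetic. Assumptions: $5.15/hr (2003
# federal minimum wage); ~10M victims, i.e. 4.6% of the adult population.
P = 0.046                    # probability a consumer is hit in a given year
hours, wage = 300e6, 5.15    # hours spent resolving theft, valued at min wage
out_of_pocket = 5e9          # dollars expended, per the FTC study
victims = 10e6

L = (out_of_pocket + hours * wage) / victims   # loss per victim: ~$654.50
print(f"PL = ${P * L:.2f}/yr/consumer")        # PL = $30.11/yr/consumer
```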

 

This leads to the question of whether $30.11/yr/consumer is enough to prevent identity theft, as defined by the FTC, and if it is, then liability would ensue.
     This is close enough, excluding increases in minimum wage, to the $30 figure in the press report to make me wonder if the TJX folks have been reading the same stuff I've been reading.

Impacts on Stock Price

 

The TJX stock has seemingly not been adversely affected.

 

Bill Frank noted:

 

I just looked up TJX stock price. It's within two points of its all-time high at $30.16. It surely dipped when the story was new, but it seems to have completely recovered.
     For one of the worst security breaches of all time, it does not look like there will be any permanent damage (to TJX).

Matthew Rosenquist:

 

Sadly, this does not surprise me. Until the disdain of such breaches becomes personally embraced by the general populace, such incidents probably will not have a significant impact. I think it will be a slow curve as society begins to alter its perspective on how data-loss events affect 'others' and begins to comprehend that it very well could and does affect them. And that they are empowered to prevent being victimized, through the simple choice of where to spend their money and to whom they choose to expose their PII/PHI and financial records. Only then will it change spending habits, investing choices, and ultimately begin a cascade effect in the economy directly surrounding organizations which allow, through ignorance or indifference, such losses.
     Today is a sad day, but tomorrow will be a little better as the pain will continue to grow and slowly manifest change in the herd.
     After some posts recommending more governmental regulations I threw out a couple of points:
     1. I believe the free market system, with its inherent checks and balances, will prevail. But the key is fixating on the real issue: money. Follow the money.... How much will this cost the TJX consumer? How much more will they need to pay for the mismanagement by TJX officers? This is the real metric (IMHO). This will determine the velocity by which the curve will occur (see my previous ranting on this thread).
     2. The math (disclaimer: will someone with a bigger brain check my numbers, which are ballpark anyways - just for illustration purposes):
     TJX estimates total losses for the security incident: $256M
     TJX estimated Sales Revenue: $18,000M
     TJX estimated Sales Net Profit: $738M (I chose to use Net instead of Gross, but use whatever you believe is right)
     TJX estimated profit margin: ~4%

In order to recoup the $256M in net profit, they would need to sell an additional $6,400M in product ($256M / 4%), or INCREASE prices by ~25% without selling more. For those TJX customers: are you okay with eventually paying ~25% more for the same products, due to the poor management practices of the retailer? (Yes, it is the decision of management to decide how much they want to recoup, but you get the point.)
...yes, these are rough numbers, for discussion purposes only. The point is somebody has to pay. It will be the customers. Let's have a bright person do the math and show the customers what they are going to have to eat as part of the cost of doing business with TJX (substitute the name of any organization that allows a data breach).
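For reference, a sketch of the ballpark arithmetic as stated in the post (figures as quoted; the exact price-increase percentage depends on assumptions, a point the thread itself picks up below):

```python
# Ballpark figures as quoted in the post above, in $M.
loss, revenue, net_profit = 256, 18_000, 738

margin = net_profit / revenue      # ~4.1%, rounded to ~4% in the post
extra_sales = loss / 0.04          # $6,400M of additional product, as stated
print(f"margin ~{margin:.1%}; ~${extra_sales:,.0f}M extra sales to recoup ${loss}M")
```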

Bill Frank:

Matthew, the only metric that really counts is the stock price.
     I see your math if the point is to recoup the money lost. But too often the stock price ignores one-off events. The point is that the stock price has recovered even though they lost $250 million because the incident is seen as a one-time event that will not have any effect on earnings going forward.

Matthew Rosenquist:

Bill, you make a good point. My contention is that due to a lack of realistic and understandable metrics, neither the consumer nor the investor has sufficient data to comprehend the future ramifications, hence the propensity to classify these issues as one-time events. Which, time will prove, they are not. Basically, the customer and investor do not know how to react. They are pensive due to a lack of understanding and experience. We are all on a path of learning. Empowering people with insights, understanding, and a strategic view is the role of metrics. In this case, I see the true power of metrics as a tool to help escalate the learning curve. I believe sometime in the future such a breach will cause significant backlash by the consumer and reflect in the stock price. We are just not there yet.

Anton Chuvakin:

I feel that there is something very wrong with this math... just not sure what exactly. My guess is that if you increase your prices by 25% in this business, you'd be gone within a quarter (see narrow margins, cutthroat competition, etc.). So they probably won't. Can somebody then explain who pays?

Matthew Rosenquist:

Yes, there is something wrong, but I use it for illustrative purposes only. The missing link is the decision by management on how much loss they are willing to accept. If they choose to eat the entire $256M, then they do not need to raise prices at all. On the other end of the spectrum, if they want all $256M back, then they have to raise prices. An increase by ~25% for one year would come close, although realistically, they would spread out the pain over several years so as to be only a slight increase over a longer period of time.
     The key is what management decides, either consciously or unconsciously, to be an Acceptable Loss.
     Note: I grabbed the company's financial data, including the margin figures, from yahoo.com/finance

Susan Bradley:

But isn't the free market system working now? The one that has Russian/Asian hackers/spammers/phishers sneaking into our servers and causing breaches is working quite nicely.
     Look at the free market system of software (and I'm not talking Microsoft here). Show me an accounting application that natively has encryption surrounding the PII data in it. Granted, I hang in the SMB space, but do you guys in enterprise see movement up there, or am I just not looking in the right places for vendors making changes in reaction to PII losses?
     If the free market system was working... then why does my Bank of America have computer terminals that look like DOS on their desktops? Of course, then again, why am I still banking with them and not moving to Wells Fargo, where they were at least running Win2k last I looked? Aren't I guilty of not shopping for the most secure bank when BoA lost a little PII here and there? I haven't taken my business elsewhere as a result. Shouldn't I?
     I myself am guilty of this "bare minimum" view, as I was on a virtual committee for the 'minimum' security standards for all sized entities organized by CISecurity.org, and I couldn't (wouldn't) push for two-factor authentication being a de facto standard, since I didn't feel the industry was mature enough for it to be a standard yet for SMBs.
     So while the free market industry for the spammers, phishers, etc. seems to be quite robust, are the applications responding to the free market of checks and balances?

Matthew Rosenquist:

I believe the system is working, albeit not as fast as we all would like. As proof, we have dramatic changes and tension in the system. Neither side (good guys/bad guys) is completely winning but both are rapidly changing and evolving. The information security industry has skyrocketed in the past 5 years. So has cyber crime. In this dance each side is looking for advantages and continually adapting to their respective opposition. Change is afoot. Other areas of cyber security are much farther on the maturity curve than privacy and data breach security.
Security will continually seek to mitigate losses in the most cost-efficient manner. In doing so, the industry will change, as will the expectations of security. In the end, we are not trying to make everything impervious to attack; instead we are seeking to achieve and maintain the optimal level of security, which balances the cost of security with the loss prevented to reach an acceptable level of loss. This is a wildly gyrating target, as new vulnerabilities and threats emerge and environments constantly change. Adaptation comes in small steps. I doubt we will wake up tomorrow to find every application using encryption. The cost is just too high, and we would be overshooting the optimal level of security. Eventually, however, the most critical applications will use encryption.

 

What are the risks of company employees embracing new social media applications, such as Facebook, Myspace, IM, Twitter, etc., at work?

 

I recently had a great discussion with Josh Bancroft, an Intel software engineer deeply entrenched in the social media world (truth be known, Josh has been a champion in this area for a while, and Intel owes much of our social media maturity to Josh and others like him).  Josh recently started a blog on this topic and is getting some great responses.  Check it out!

 

 

 

Here is my position: 

 

Corporations institute security mitigations to control and manage risks to the corporate network, systems, data, reputation, customer goodwill, liability protection, etc.  Many of these new social applications expose employees to a new set of social engineering threats.  Connecting to these services from company machines across corporate networks exposes potentially critical assets as well. 

 

The benefits of these tools are undeniably great, but should corporations embrace such potentially risky communication channels?  If so, how?

 

Anytime an employee makes a connection through the corporate firewall to an external internet location, the risk meter goes up.  Email is a perfect example.  Uncontrolled email would be a huge risk: without spam and malware filters, a corporate network connected to the Internet would surely be overwhelmed.  Organizations have instituted such security controls to manage the risk to an acceptable level.  But with the rapid introduction of new social tools, designed to traverse proven security controls, how should companies manage the new risks?

 

What is worse, these social platforms may be used by savvy attackers to profile targets and directly go after one of the traditionally weak links in any security program: the human element.  Employees can be swayed to download malware and divulge sensitive information, which can lead to tremendous compromises of corporate assets.

 

What to do, what to do.  With my security hat firmly bolted on, I say employees must comply for the greater good, which means balancing function with security.  Normally, corporate information security policies are in place to control what is allowable.  Policies are formal means for management to determine the acceptable level of risks, thereby defining the function/security balance. 

 

So how do we get beneficial social interfaces integrated into the corporate computing landscape?  Well, it really is a senior management decision to accept the risks.  Such an effort usually begins with a risk assessment to determine where on the risk spectrum it falls and what cost-effective security mitigations could be applied.  If senior management is willing to accept the residual risks, then it is time to move forward.  With the sheer number of new social interfaces being introduced, it is unlikely all would be embraced.  Some, if not many, users may be unhappy, but this is the cost of effective, efficient security assurance in the corporate setting.

 

But what if the end users collectively ignore these policies?  What responsibility does security management have to ensure due care and due diligence are maintained?   Security must consistently follow its rules of engagement.  It is tough enough to keep the environment secure without employees subverting policies.  I recommend detection and enforcement, as well as collaborating with the end users to determine if a middle ground can be found to meet the business need while maintaining the integrity of security.  We are all in this together.  We will succeed or fail together.

MatthewRosenquist

Security in a Box

Posted by MatthewRosenquist Sep 24, 2007

Are you looking for that special gizmo in a box which will provide your organization a warm blanket of security?  Buy it, plug it in, and voilà!  You are now secure.  Fold up the tents and walk away; the job is done.  Well, keep looking.  Regardless of what some security vendors peddle to uninformed IT managers, it simply does not exist.

 

 

 

Security is an ongoing process of diligence.  The simple fact is, as long as the environment being protected changes, and the threats to that environment look for ways to take advantage, security must also adapt.  No one product sufficiently spans the current and potential spectrum of attack vectors, nor does any one solution cover all aspects of technology and behavior which may be exploited.

 

 

The booming growth of security products over the past few years can partly be attributed to organizations dumping money into the market.  A common mistake of many senior IT managers was to invest bags of money under the false belief it was a one-time expenditure, as if security could be purchased in a box, installed, and the issue resolved.  Especially in IT departments, people new to the realm of security apply IT thinking to the 'problem' of security, expecting to find an engineering 'fix' so life can move on.  I can't blame them really, as most technology-minded people deal with obstacles rather than opponents.  An obstacle can be overcome.  Engineers are great at going over, under, around, or through obstacles.  Find the right technology, gadget, toy, process, or application and the problem is solved, so the diligent IT person can move on to the next obstacle.

 

 

 

Opponents, not obstacles

Well, security is not about obstacles; it is about opponents.  Every security threat can be traced back to a person.  That person, if malicious, has an agenda and an objective.  Put an obstacle in their way and they will find a way to counter it or go around it in pursuit of their objective.  In fact, the behavior of attackers is usually predictable, as they follow the 'path of least resistance' to achieve their objective.

 

 

If you treat an opponent like an obstacle, you will be fighting a never-ending series of losing battles.  One hole is plugged, and the opponent simply adjusts and comes at you from another direction.  It can degrade into a battle of attrition.  The defense in this manner can only hope they 'fix' enough things to make the attacker move on to another target.  However, the cost of each 'fix' is much greater than the cost for the attacker to adapt.  For a dedicated attacker, the odds are in their favor, unless the target is willing to spend an inordinate amount of time and resources to continually fight the 'obstacle' battle in hopes that eventually the attackers will tire or find an easier target.

 

 

I plan on going more in depth on this Attacker -> Methods -> Objective model in another blog, and may go into greater depth in a whitepaper, time permitting.   Traditional IT thinking, when applied to security, is an endless treadmill consuming time and resources.

 

 

 

Feel the Pain

Be careful what you wish for.  If senior management maintains a simplistic view of security, then many problems are sure to follow.  Time to bring on the pain.  Choosing to adopt the deceptively straightforward 'obstacle' defense is an unpleasant education in futility, as new issues quickly replace ones just remedied.  It is both costly and frustrating.  Losses begin to tally and security spending increases as the organization is stuck in a routine of responding to each new type of attack.  Management can get very aggravated at the continuing expense and interruption of such a poor strategy.  From the perspective at the top, it is easy to blame the security staff; it is not obvious that the lack of a comprehensive security strategy is the real culprit.

 

 

In this cycle, it is a safe bet management will not comprehend the strategic need to identify an optimal balance of security. Such viewpoints tend to distill the situation to a binary state: either the company is secure or it is not. Trying to argue a gradient or any other perspective may fall on deaf ears. Expect the commitment to be limited to short-term security expenditures, with no allowance for much in the way of sustaining costs or the future additional costs necessary to mitigate new threats. Budget discussions can be frustrating, with management expecting a dramatic decrease in future security spending while those in the trenches are struggling just to maintain effectiveness against new types of attacks. The lure of an easy solution or product is very tempting, but it is nothing more than a mirage which distracts leaders and reinforces an overly simplistic way of thinking, leading the organization down a path of inadequate preparedness for the sustaining needs of the future.

 

 

Conversely, if an organization maintains the perspective that an 'opposition' exists, then an entirely different game is played, one which can be won or at least managed efficiently.  The organization can implement a thorough defense-in-depth strategy which starts with Prediction.  Predicting the opposition's objectives, capabilities, and most likely methods is the first step in applying a cost-effective structure to Prevent, Detect, and Respond to attacks.

 

 

 

Cost of the Magic Box

If your organization is looking for the magic security box, then it is suffering from the 'obstacle' way of thinking.  This will be costly.  The security programs implemented under this way of thinking will most likely be rigid and have a short effective shelf life.  Many security initiatives will be in response to successful attacks and will be rushed into production.  Stacking an increasing number of independent solutions weighs heavily on the computing infrastructure, complicating the very environment it is trying to protect, and sets in motion steadily increasing sustaining and support costs, with no end in sight.  Bleak, to say the least.

 

 

Management perception and strategy are very important aspects when evaluating the value of security programs.  Security is not a snapshot in time.  Sure, buying a flashy product may fix a specific problem which cropped up, but the long-term costs must be factored in.  Will this product ever reach End-of-Life?  Is there a different product which not only closes this gap in security but also provides broader protection against future issues?  What are the real operating and sustaining costs?  Will the product be maintained by the vendor and continually upgraded to address new threats?

 

 

 

The bottom line

When measuring security, it is important to understand the threats and solutions, as well as the organization to which everything will be applied.  With all other factors equal, the value of a security product is greatly different in an organization with a comprehensive defense-in-depth strategy versus an organization with a haphazard strategy of non-integrated solutions.  No one product or service does it all.  The attackers are dynamic and will adapt to an organization's defenses.  Understanding the concept of 'opposition', even embracing the idea, will thrust your organization ahead in this game.

 

Practical Aspects of Measuring Security

Security in a Box

The Four Dirty Questions of Measuring Information Security

Managing the Effort to Measure Security


 

SNEAK PEEK: Uday Keshavdas, Intel consumer marketing manager, shows off a new BenQ mobile internet device running Linux.  MIDs are getting cooler and cooler.

 

Puneet Gupta, CEO of ConnectBeam, discusses his social bookmarking network appliance.

 

This is a pretty cool thing. Robert Scoble from Podtech has been discussing this idea as something called Starfish, where a person's contributions to blogs, wiki entries, Facebook, Twitter, etc., can be aggregated in a way that puts the content and search in context with the author.

 

ConnectBeam has created something similar in a turnkey appliance, allowing users to bookmark, tag, and attribute the source author as a tag, so you have a LinkedIn-like connection between the content and the contributor.

 

From an enterprise perspective, this seems pretty handy and a simple way to add social media tools to your business. Because it uses bookmarking and tagging, it's not necessary to integrate search across your systems, as long as you get bookmarking adoption. You also get to see how employees are contributing to content.

 

Check out ConnectBeam.

Intel's CIO, John “JJ” Johnson, took the stage at Intel IDF 2007 to make the case that CIOs must take a holistic look at their environment.  Looking at emerging technologies and innovations while continuously assessing their impact on the competitiveness of the enterprise allows the CIO to leverage technology as a true change agent.



So do emerging technologies play into the needs of the enterprise?  JJ made the argument that technologies IT has in play today may not be competitive 3 years from now.  It is important for the CIO to understand which emerging technologies are going to impact how IT capabilities play in the environment.  For Intel IT the emerging technologies on the “watch list” are: Virtualization; Mobility; Social Media; Manageability.  What emerging technologies are on your IT watch list?

Today is Day One - the official opening of Intel IDF 2007 in San Francisco. The Open Port Community Managers (me, Bob, and Josh) gathered, anxious and ready for a busy blogging day. I am blogging from the IT@Intel perspective - the IT spin.

   

Kicking off the morning was a mention by Pat Gelsinger of Open Port and the prevalence of social media at IDF. Now we're talking! Pat introduced the theme of IDF, which is celebrating the last decade: the innovation and technology that has literally changed the world. Then Otellini hit the stage...



 

Otellini opened with the keynote "Extreme to Mainstream: extremes in technology, product, and usage."  He focused on how we need to come together as an industry to drive technology from inception to widespread adoption. He showed a picture of what an "ordinary" worker looks like today - WiFi-enabled Centrino notebook, Bluetooth, MP3 player - whereas 5 years ago this would have been bleeding edge.  How did we get here?  Intel's relentless pursuit of Moore's Law to bring new technology to the market year after year.  Over the years, Intel has also pushed communication technology to the point where hot spots are now all over the world.  Intel has also been integral in moving memory to the technology that supports more mobile technology. Lastly, power: in new silicon technology, Intel has reduced the power consumption while increasing the compute power.  Together, all this is the basis of the digital world.

 

At the core of the digital world is Intel's silicon processor technology, and at the heart of this is the transistor. Recently, Intel broke through with new technology that provides a 20% performance increase while reducing power by a factor of 10.  This is the magic of the 45nm technology. Intel now has the capability to add more technologies integrated directly into the chip.  Each die holds 1.8B transistors! On November 12th, Intel will launch Penryn plus several of the first 45nm SKUs for servers and high-end desktops. Penryn takes the same quad-core technology in the 7300 series and moves it to 45nm.  This also encompasses new package technology that is 60% smaller, to fit into reduced form factors and extract costs from the design.  Coming next is Nehalem: very modular, it represents the scalable next-generation Intel Architecture.  Nehalem allows Intel to dynamically change features such as cache.  End users can also have dynamic capabilities such as turning off threads, cache, etc. Multi-tasking will be taken to a whole new level.

 

Otellini then spoke about the next mainstream: extreme mobility. Basic mobility is now prolific; we expect to be connected everywhere. However, mobility has a lot of room to improve - we need new devices, services, and products.  Where does Intel come in?  Ultra-mobile and ultra-low-power silicon that will power devices that allow you to connect anywhere, anytime.  Intel is also driving the network to support the always-connected, with WiMax moving into the mainstream.  Fixed-to-mobile WiMax trials are speeding up, and Intel is investing to make WiFi ubiquitous.  Intel is building an integrated WiFi/WiMax module that will be available for notebooks next year, code-named Echo Peak. Will ultra-mobile devices plus WiMax change the face of the internet?  What is possible? Paul stated that Intel's job is to make what is possible, probable.

 

So what does all this mean to IT? 

Penryn and Nehalem have the potential to take data centers to a dramatic new future: consolidated, virtualized, and green. If we look at the results that Rob Carpenter saw with the newly released Quad-Core 7300 series, the new 45nm technology should blow the lid off of the server performance vector without adding more heat. The possibility of doing more with less could actually become a reality in the age of shrinking IT budgets and the need for smarter use of data center capacity.

 

The extreme potential of mobility could literally change the way that workers work and companies communicate externally.  IT could finally have the tools to support our business customers in a global economy.  Business in emerging markets could explode.  If your company hasn't made the mobile leap, it's coming. One of the key reasons Intel IT made a significant move towards mobility is the global nature of our business and the need for business continuity, no matter what the environmental conditions may be.  Not too long ago, flooding in India created what could have been a major challenge in keeping the business running for Intel.  Fortunately, our move to mobile platforms allowed our business customers in India to work from alternate locations, which resulted in minimal impact to our everyday bottom line. Now I know what you are thinking: mobile equals increased security threats.  I would argue that it isn't so much the technology as it is people.  Technology may increase the speed, but the masterminds who actually "throw the bombs" are people.  Read and listen to Tim Casey's latest audio blog, Power Tools in Information Risk Management, to see how Intel IT proactively analyzes and reduces our security risks.

 

Stay tuned...more coming soon.

Missed the event and didn't get an opportunity to ask the panel a question?

 

 

Watch the Intel IDF 2007 Social Media panel & ask your questions here on this post.

Watch and submit your questions here at noon!  Video of live event below

 

Missed the event and didn't get to ask a question? Ask Gordon your questions here by commenting on this post.

Intel's fall IDF 2007 keynote doors open
...and in come the masses, ready for nuggets of what is to come this week in San Francisco. As the masses assemble, laptops abound in the room. I boot up and type in my hard drive password, to hear a very pleasant voice over the loudspeaker letting us know that there will be forward-looking statements and that the results of what we will hear and see may vary. Wow, a disclaimer sandwiched in between Red Hot Chili Peppers and the Hard & Soft YouTube video. Now I feel ready to soak in the keynote.

Start the montage of IDF keynotes in history, showcasing the evolution of technology over the years and the continued road map aligned with Moore's Law. Pat kicks us off and lets us know what to expect, with sessions from industry luminaries like Gordon Moore, and social media pervasive in the event... ah, a slide of Open Port... cool. Thanks, Jeff Demain. The opening video shows an extreme sports clip with a cliff jump while the jumper uses a mobile PC.

Paul comes to the stage and states his theme is "extreme to mainstream," explaining that 40 years of innovation have helped create a mainstream market for technology. The idea: with Moore's Law the innovations keep coming, and technology and a digital life have become pervasive in our mainstream lives. Three key points were made about Intel's position in the marketplace: unparalleled silicon processor technology; Intel Architecture; market creation.

This is where Paul gets geeky on us. He explains gate leakage, where the processor loses energy as transistors get smaller. Intel has been able to get gate leakage down by a factor of 10. Paul explains Intel is making great headway on the next generation: the world's first 32nm is on its way in 2 years. Starting this year, Penryn is the heart of next-generation platforms. Penryn quad core, with a 410 million transistor die, launches November 12. Intel is in production on 45nm today, with 700-plus designs from the ecosystem.

The next platforms will be for consumers, where we will see extreme mobility, entertainment, problem solving, and inclusion. Just as laptops have become mainstream to allow mobility in computing, WiMax is the next-generation wireless technology that will allow for ubiquitous connectivity. Paul announced that Lenovo, Acer, Panasonic & Toshiba will include WiMax in products next year. A new seamless global network is on the horizon. Projected WiMax use: 150 million in 2008; 750 million in 201; 1.3 billion in 2012.

Now some silliness: a "live feed" from the Zion, Utah desert showing use of WiMax from MIDs and laptops, from watching Slingbox videos to ordering pizza. It shows that full PC capabilities with full broadband will be accessible anywhere.

Continuing on the consumer market, Paul explains we are in a new market with a consumer internet that uses social networks, user-generated content, 3D graphics, and games. Games are now mainstream, and Intel is leading the way with new processors to provide the processing power games require. Charles Wirth from XtremeSystems shows how he can push Intel systems: with a cooling system (-160F) on an unmodified quad-core system, he demonstrated he could break 3 world records in 2 minutes... well, 2 minutes and 8 seconds - 3 world records established.

On to graphics cards. Paul explains that Intel is the number one supplier of integrated graphics, but we've been behind the curve in our technology for graphics. Paul announced 65nm graphics in early '08; 2009 will introduce 45nm with graphics integrated into the CPU; 2010 brings 10x performance with 32nm.

Jeff Yates from Havok comes to the stage (in his first official Intel appearance) and explains how the Havok physics engine is being developed to work across multiple cores; he appears to be hungry for Larrabee. Pandemic Studios shows off gaming on quad core... lots of things blowing up real good with quad-core.

To sum up: Intel has innovated, Intel promises to continue to innovate, and we can all expect to see more Intel products in our mainstream lives.

Here at IDF, Intel is offering training on how to use Intel AMT.  Matt Royer explains how they are making this happen.

 

 

We're out at IDF - if you find us, we'll get you a free shirt from OPEN PORT.

Sanjay Sharma from Intel gives us a quick look at Harpertown and 45nm

 

For expert views and opinions on 45nm, visit The Server Room.


 

Measuring security must be done in a manner which benefits the organization. Yes, it is difficult to obtain data, determine key factors, calculate value estimations, analyze results, conduct sanity checks, and translate the information for the intended audience. Yes, even the most expeditious professional can be consumed for weeks, months, and even years due to the complexities, the lack of data, and the sheer desire to make it a little more accurate. But this exercise has a purpose and a window of applicability. Taking six months to conduct an ROI analysis for a project which management wants integrated in 4 months is a waste. Every request is different, and the resulting analysis should flex to meet its intended purpose.

 


I know what you are going to say: "You can have it fast, cheap, or accurate; just pick two". This is very true and must be taken into account when tackling the ugly job of measuring security. In the example of the 4-month project, setting an expectation of a 1-week ROI analysis with ballpark accuracy may be entirely acceptable to management. They get what they need to make a go/no-go decision, and the analyst does not waste effort on overkill.


Beware the frustration inherent in trying to achieve accuracy to the second decimal place (or any other ridiculously granular measure). It is a mirage you will never grasp. Methods of measuring information security value are still in their infancy. No silver bullet exists which delivers precise results and applies to all situations. Know the situational limitations and align the analysis with the business decision being made.


Understanding what is needed is the first step of any security measurement endeavor. Having discussions early on regarding the scale of accuracy, how the output will be formatted (dollars, MTTR, compliance to regulations, etc.), and a timeline for completion will set clear expectations and avoid the "bring me a rock" situations.


My advice is to apply the Security Judo mantra:

 

"Exert the minimum amount of enercy necessary to achieve the security business objective"kungfu.jpg

 

Principles of good planning and project management apply to measuring security. Don't go overboard and calculate the exact strength of a hurricane if management only wants to know if they should take an afternoon pleasure cruise.

 

Practical Aspects of Measuring Security

Watch the live webcast and chat from Open Port. COMMENT NOW to submit your questions for the panel.

 

Recorded video of live webcast


 

About this panel: Social Media Panel - coming to IDF

In this audiocast, information security analyst Tim Casey talks about three tools used to help manage risks to sensitive information: risk assessments, risk modeling, and standardized threat agent characterizations. Along with other tools and methods, these three play an important part in managing Intel’s information security profile.

 



 

I love tools. I have a whole garage full of them. Big ones, small ones, ones with wicked sharp edges, ones for removing tiny splinters from fingers, and a few really heavy ones. My wife always wants me to clean some out, but how can I handle all the things that need fixing without a full tool complement? I especially like the power tools. Nothing says “massive amounts of impressive work” like the shouting-loud whir of a 3/4HP tool tearing through a piece of metal.

 

It occurred to me recently (while power-driving 3" nails into a joist for a new support) that in my work in information security at Intel, I need power tools there, too. Information security used to mostly mean adding passwords to accounts and stamping sensitive print-outs “Secret”; essentially, we could get by with just some simple security hand tools. Now we are dealing with increasingly complex environments and increasingly sophisticated attackers, so we need better and better tools to keep our information safe. Network scanners, intrusion-detection devices, and the like are essential, but we also need tools that help us understand the big picture when it comes to overall information security risk. These risk management “power tools” help anticipate problems and concentrate limited security resources where they are needed most.  The three I use most often are risk assessment, risk modeling, and our new Threat Agent Taxonomy & Library.
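As a purely hypothetical illustration (not the actual Threat Agent Library schema, which is described in the white paper below), a standardized agent characterization might be structured something like this:

```python
from dataclasses import dataclass

# Hypothetical sketch of what a standardized threat agent entry might hold;
# see the white paper below for the real Threat Agent Library.
@dataclass
class ThreatAgent:
    name: str          # e.g. "Disgruntled insider"
    intent: str        # hostile or non-hostile
    access: str        # internal or external
    skill_level: str   # e.g. none / minimal / operational / adept
    resources: str     # e.g. individual / club / organization / government
    objective: str     # what the agent is ultimately after

insider = ThreatAgent("Disgruntled insider", "hostile", "internal",
                      "operational", "individual", "damage or data theft")
```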

 

For more in-depth information, check out my new white paper Threat Agent Library Helps Identify Information Security Risks.

 

These are very useful tools, but as I mentioned, there are plenty more. I’m curious as to how much others use these techniques, as well as what other risk management tools or methods you are using. Are they home-grown or off-the-shelf? Are there any special adaptations you needed to make for your environment? Put on your safety goggles and let me know what infosec power tools you are using.

Here's a quick video of Todd talking about blogging on the vPro Expert Center.   Check it out. 

 

Bob_Duffy

Pat Discusses IDF

Posted by Bob_Duffy Sep 13, 2007

Pat Gelsinger gives us a sneak peek at IDF 2007

 

see more Intel videos at www.youtube.com/channelintel

Recently, a colleague and I spoke to a group of IT administrators in Washington, DC.  We left our car in a self-park lot in which the attendants had everyone leave their keys in their cars, in lieu of keeping them on a valet "key board".  They seemed to be depending on reasonably honest customers (we were in a secure area past a government checkpoint) and their own memories to ensure no cars were "lost".  We returned to find that the parking lot attendants had completely rearranged the vehicles.  Since ours was a rental car, it was hard to describe and therefore hard to find.  (By this point you're probably thinking that I've posted to the wrong board or that Intel pays me by the word, but bear with me.)

 

It took a rather lengthy iterative search, but we eventually found the car.  As we walked, my colleague and I joked about this as "parking lot virtualization".  Our vehicle was moved from one slot to another to better fulfill the changing needs of the parking environment over time.  This struck a chord with us, having just been discussing some of the challenges with virtualization.

 

In the data center, most virtualization suites allow an administrator to manually move a workload from one host to another.  This is a very powerful concept - instead of having to negotiate for a 3:00am Sunday morning maintenance window to do preventative hardware maintenance, we can move all of the workloads to another physical machine, perform maintenance during normal working hours, and eventually move the workload back to its original location. We can also migrate workloads from a less powerful machine to a newer machine for performance or in order to retire hardware.

 

Combining this capability with the ability to host multiple workloads on a single piece of hardware, the data center can quickly become very complex.  Without a robust database to map workload to physical machine (and vice-versa) or an automated update mechanism to adjust these mappings after a move, we can easily lose track of our services.  These mappings are needed in order to answer questions like "host/rack/row/room x went down - what services need to be restarted?"
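A toy sketch of the mapping involved (hypothetical service and host names); the point is that a migration is only "done" once the mapping is updated:

```python
# Toy sketch of the workload-to-host mapping problem (hypothetical names).
host_of = {"mailq": "host17", "payroll-db": "host17", "wiki": "host22"}
rack_of = {"host17": "rack3", "host22": "rack9"}

def services_on_rack(rack: str) -> list:
    """Rack x went down - what services need to be restarted?"""
    return [svc for svc, h in host_of.items() if rack_of[h] == rack]

def migrate(svc: str, new_host: str) -> None:
    """A live migration is only complete once the CMDB mapping is updated."""
    host_of[svc] = new_host

print(services_on_rack("rack3"))   # ['mailq', 'payroll-db']
migrate("mailq", "host22")
print(services_on_rack("rack3"))   # ['payroll-db']
```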

 

My colleague noted that ITIL has mature, well-defined mechanisms to deal with many of these types of events.  Change orders, maintenance escalations, and configuration databases were all designed with these business processes in mind, albeit at a much slower (and more manual) pace.  It would defeat much of the benefit of virtualization if one had to get a signed piece of paper, email approval, or file a trouble ticket in order to offload a workload in response to a failed CPU fan.  Instead, we should use policy to anticipate and enact these types of responses.  The discipline and rigor of change management is critical within the virtualized data center, but it must be directly encapsulated by our tools in order to be effective.  In essence, the CMDB needs to be dynamically updated in order to maintain fidelity to the Data Center's logical state at any given instant.

 

For those of you who have deployed virtual machines in large-scale production, what techniques have been most successful for managing the chaos of moving services and images?  Are you using a glue layer for your legacy CMDB and other management tools, or are you finding it easier to throw them out and depend on the tools provided by your virtualization stack?

As Bob Duffy mentioned in Social Media panel blogcasted live for Office 2.0... IDF livecast next, IT@Intel is hosting and doing a live stream of a social media panel, "Social Media: IT Friend or Foe?", on September 18th during IDF 2007. We are privileged to have Tom Foremski from Silicon Valley Watcher moderating the panel of IT web 2.0 mavens inside Intel. Who are the mavens, you ask?

Jeff Moriarty, one of Intel IT's key creators of Intel's internal social media strategy.

Eleanor Wynn, an expert on social networking technologies who conducts social research for Intel IT

John G. Miner, a methodologist who has studied the disruptive effects that web 2.0 has on the classic IT processes

Don Conant, senior legal representative addressing the legal & ethical challenges as IT pushes further into web 2.0

 

In addition to Intel's IT experts, Pete Kaminski, CTO & co-founder of Socialtext, the first wiki company and a leading provider of Enterprise 2.0, will be a voice on the panel.

 

Come discuss your thoughts on how social media is a friend of IT, a foe, or both.  See you on the 18th in San Francisco. P.S.  If you haven't registered yet and want to save some money, check out the discount codes I mentioned in Getting Ready for IDF.

Veodia streamed all panel discussions during Office 2.0 in SF last week. Cool service; you can sign up for a free trial and start video blogging.

 

Here's a session of the panel discussion Intel joined with WebEx, SAP, and Leverage Software at Office 2.0 to discuss online communities and our strategy with Open Port. Coolest part: during my introduction I asked how many people were blogging... half the room raised their hands. (Submit a name to start the video.)

Taking it to the next level at IDF!  On September 18th at 3pm we use Ustream to embed a live stream with chat within this post.

Measuring security is very much a practical matter. It is important for an organization to understand the efficiency, effectiveness, and overall value in order to make decisions which lead to an optimal level of security.

 

 

 

 

History tells a tale

The industry has been witness to a recurring pattern. As companies begin to focus on security concerns, the need to measure and understand the value proposition becomes increasingly important to making good business decisions. Many organizations jump into security based upon fear, uncertainty, and doubt (FUD) without the benefit of security value measurements. In classic knee-jerk reaction, some companies initially poured money into security programs, and only when the dust settled did they begin to ask about the actual value and cost effectiveness of sustaining operations. As reality sets in, they begin to ask: did this make a difference? Did I do too much? Why is the sustaining cost so high?

 

 

 

The maturity cycle takes over, and the tough questions lead to the understanding that they are not seeking a state of perfect security, but rather a balance. Having sufficient security to ensure zero negative impact from threats would be wildly expensive and most likely impossible. Too little security can allow unacceptable business impact and losses. So there must be a sweet spot. This is where security metrics come into play: to help find the right balance and help leaders make the right decisions to attain it.

 

 

 

 

What is value?

We all know what value is, right? A quick check in the Encarta Dictionary returns: "the worth, importance, or usefulness of something to somebody". It is not limited to dollars, rate of return, or some other finite indicator. In reality, it can be the absence of discomfort, compliance with regulation, satisfaction of key people, uptime, the ability to seize opportunities, something tied to emotions, etc. Those who only seek to put a dollar sign on security value are missing the boat. Don't get caught in that tar pit. It will limit your visibility and undermine the accuracy of any analysis.

 

 

 

 

Who are these people and what are they asking for?

It may seem, to those in the security world, that everybody wants to know the value. But it is more complex than that. Everybody wants it expressed in a different way: their way. Talk to a finance analyst and they will demand NPV (Net Present Value) or IRR (Internal Rate of Return) numbers. The friendly business analyst will prefer BV (Business Value). The efficiency manager will be firm on CB and CE (Cost Benefit/Efficiency) ratios, while the product and service managers hold to the trusty ROI (Return On Investment) model. Savvy senior managers know to ask for overall ROSI (Return On Security Investment) numbers, while mid-level operations folks live and die by the MTTR (Mean Time To Repair) and MTBF (Mean Time Between Failures) metrics. The list goes on: auditors, compliance, corporate purchasing, etc. each have their preferred vernacular. Even security researchers tend to lean towards their expertise. It is easy to recognize those who have an economics, mathematics, or operations background, as they express their ideas in ways relative to those disciplines.
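For reference, the textbook forms of two of these metrics (standard formulas with made-up inputs; for security programs, producing defensible inputs is the hard part, not the arithmetic):

```python
# Textbook forms of two of the metrics named above, with hypothetical inputs.

def npv(rate: float, cashflows: list) -> float:
    """Net Present Value of a series of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def rosi(loss_prevented: float, cost: float) -> float:
    """Return On Security Investment: net benefit relative to cost."""
    return (loss_prevented - cost) / cost

# Hypothetical program: $100k up front, $60k/yr of loss prevented.
print(f"NPV  = ${npv(0.08, [-100_000, 60_000, 60_000, 60_000]):,.0f}")
print(f"ROSI = {rosi(loss_prevented=180_000, cost=100_000):.0%}")
```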

 

 

 

My advice is to ignore these people and their fancy acronyms. Express value in the most applicable and accurate way possible for the circumstance. It is hard enough just to do that! Keep it practical, keep it real.

 

Practical Aspects of Measuring Security

The iPhone goes Web 2.0 and shows that it can be used to support business applications.  At Office 2.0 in San Francisco, each attendee has been given an iPhone in a huge geekish experiment to go completely paperless.

 

Watch the video below

 

 

With the app you can find attendees, chat, upload photos to an event site, and check out the schedule of events.  Source code is available from Etelos.

Hi All, the time is coming for our IDF (Intel Developer Forum) in San Francisco. One of the courses that Kenny Chan and I will be teaching is: Pro Platform Interoperability and Integration – Are You Pro Ready?  The course is focused on what end users, ISVs, and developers can do to make interoperability and integration a snap.

 

I will also be joining up with Bob Duffy and Ken Kaplan to help work the technology for the IT Panel on Social Media.  I think this session is going to be a great forum for discussing what Intel is doing on social media, both internally and externally.  I have high expectations for this session.

 

If anybody from the Open Port / Intel(r) vPro(tm) Expert Center is going to be at IDF, please let me know.   We were kicking around having an unconference during the venue to get together with fellow bloggers and talk anything from social media to client manageability technology.

 

Also, Bob Duffy and I will be hitting the road this week to SF to join up with the Office 2.0 conference.  We're both pretty jazzed about the experience and about hearing from the presenters and breakout sessions. If you haven't checked it out: http://www.o2con.com/index.jspa

 

If you happen to be at this venue.. look us up..  we'll be on the panel on Friday.  I will also be bringing my HD Camera to record the journey.. Stay tuned for video.. 

 

Cheers..

In my experience over the years calculating security value and providing consulting to others doing the same, I have noticed the same 4 questions tend to rear their ugly heads. Requests by senior managers, finance analysts, business value analysts, and project and program managers all fall into one or more of these types of inquiries. And when I say they are ugly, oh, they are.

 

In most cases the parties seeking information are in some phase of the decision cycle:

 

Should I spend money on security? - This is a business decision based upon compelling drivers, usually loss of some kind, including non-compliance with regulatory requirements (which could send a C-level officer to spend an extended vacation at Club Fed) or risk of a catastrophic blunder sufficient to crater the organization. The business aspects must include how many coins are in the coffers, the amount of loss (both realized and unrealized) on the table, and whether money could be better spent elsewhere (opportunity costs).

 

How much should I spend? - A value decision considering what the organization is willing to accept in losses, what can be spent on security, and the amount of loss which could be prevented. Optimally, there exists a point at any given time at which management is willing to spend a certain amount on security, which prevents enough loss to bring the residual losses to an acceptable level.

 

 

What should I spend it on? - An exercise in comparative analysis of available options which drives down overall costs while increasing the losses prevented, maintaining the optimal level of security and residual loss.

 

 

 

 

 

 

On to the ugly questions (feel free to share your experiences):

 

 

Ugly Question #1: How do I select the security product/program with the best value?

This is typically asked by senior management or by a product/service manager seeking to identify the best solution among a pool of several competing initiatives. As an example, they might be looking to purchase an Intrusion Prevention System (IPS) and searching for the best of breed. Conversely, they may be looking to establish or improve a security capability (example: data protection) and trying to determine the best approach among multiple solutions (encryption, IPS, document tracking, data destruction policy, etc.) across multiple vendors.

 

 

The challenge is to be able to compare which solution will best achieve the optimal level of security. This is a function of security cost, losses prevented (effectiveness), and acceptance of residual loss. To simply go for the cheapest, most effective, or fastest to adopt is more often than not the wrong long-term answer (...and security is a long-term proposition).

 

 

Ugly Question #2: What is the value of this security product/program?

This is asked by management and project managers when a solution is in the proposal stage, by the sustaining operations folks once it has been implemented into the environment, and by management during times when the organization is looking for opportunities to cut costs. As value is a dynamic concept, it can radically change based upon business, legal, and social aspects as well as the normal fluctuations in the threat landscape. The first step here is to identify what types of value were intended to be provided and the appropriate metrics to measure those aspects.

 

As an example, management may be seeking to protect the organization's image and limit liability from the loss of Personally Identifiable Information (PII) through the implementation of a hard drive encryption program for company laptops. The metrics may be as simple as determining the saturation of the program and whether encryption is sufficient to protect from liability in the geographies they do business in. In this manner you can estimate the amount of coverage for which liability and image concerns are abated.

 

You might think: wait, that is not a dollar figure! Where is the value? Well, in this case, management may be looking for the establishment of a capability. Either we are protected from this threat or we are not. The same stratagem could apply to compliance with HIPAA or other regulations. To attempt to quantify a dollar figure in this example would be overkill and may detract from what is intended. Realistically, a dollar savings cannot reasonably be calculated, no matter what kind of magic hat you possess. I have seen some attempts, by people with the best intent, to do this very calculation. But not knowing if or when or to what extent a loss may occur, and unable to truly measure the potential losses due to the large number of unknown variables with an astronomical range of potential damage, these assessments are pure folly (but really fun to poke holes in). Half the battle in measuring the value of security is knowing what limitations exist regarding the granularity of what can realistically be measured and validated.

 

 

Ugly Question #3: How do I compare the value between security and non-security initiatives?

This one bites. Really. It is almost impossible to do, anyone can challenge the results, and if you get this wrong, everybody hates you. This comes up when senior management must decide where to spend hard-earned budgetary dollars. It becomes an "us versus them" battle between security and some other group. Each party wants the money, and the infighting can get downright dirty. So what is a manager to do? Just tap your friendly neighborhood security analyst to calculate the value (just as long as it is not me), then compare against the value of the non-security program. Easy, right?

 

 

I wish. Security programs rarely have the benefit of real dollar justification attached. Unless you are in the security products/services industry, security does not generate revenue; it is just overhead. More on that in a different blog. Non-security programs have the edge here. A marketing program may generate XX dollars; an operations efficiency program may save YY downtime or be able to cut ZZ heads from the budget. These strong arguments bark loudly to management. Security value will retort with a whimper: maybe a risk reduction of xx%, or at best a loss prevented of yy dollars. Did I mention that calculating even these values takes more time, requires more assumptions, and in most cases can never be validated, as compared to the non-security programs? Pure ugliness. Alas, it is not impossible. I have seen the fight won (i.e., management given accurate and comparable data to make the best decision), but beware: the deck is stacked against security.

 

 

Ugly Question #4: How much should my organization spend on security?

This is the big-daddy of questions, posed by senior management or, if the organization is large enough, by a divisional head. Although I plan on discussing this in greater detail in another blog and whitepaper, the path to take is to identify the optimal level of security.

 

Every organization is different, with ever-changing business needs and drivers. What one company desires from its security program and is willing to spend will differ from its neighbor. The willingness to accept different levels of loss also varies greatly. But there are common perspectives which are shared to a great degree by all organizations. As an example, in most instances we don't want to spend more on security than we get in return (typically in the loss prevented).

 

 

If we look at an organization individually and imagine an increasing line of spending, for each point on that line there is an amount of residual loss which will be experienced (in theory, trending down to some degree as security spending goes up) and therefore an amount of loss prevented for each point as well. At a strategic level, these three lines give us what is needed to answer this ugly question.

 

 

How much should be spent? Optimally, an organization should spend the amount of money on security which prevents enough loss to bring the residual losses to an acceptable level. What I have found is that the target exists somewhere between the low point of a diminishing rate of return and the high crossover point where the spending exceeds the loss prevented. Only management can decide exactly where the sweet spot exists at any given moment.
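A sketch of that picture with a made-up diminishing-returns curve (purely illustrative; real curves come from your own spend and loss data):

```python
# Illustrative only: a hypothetical diminishing-returns curve for loss
# prevented as a function of security spend. The sweet-spot logic is the point.
import math

TOTAL_EXPOSURE = 1_000_000  # $/yr of loss if we spent nothing (made up)

def loss_prevented(spend: float) -> float:
    """Diminishing returns: each dollar prevents less than the one before."""
    return TOTAL_EXPOSURE * (1 - math.exp(-spend / 200_000))

for spend in range(0, 1_000_001, 100_000):
    prevented = loss_prevented(spend)
    residual = TOTAL_EXPOSURE - prevented
    flag = "  <- spend exceeds loss prevented" if spend > prevented else ""
    print(f"spend ${spend:>9,}: prevented ${prevented:>9,.0f}, "
          f"residual ${residual:>9,.0f}{flag}")
```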

 

 

Now your turn. What ugly question has been thrown in your direction?

 

Practical Aspects of Measuring Security
