I have recently been blogging about IT Leadership and Management in this IT Community.  One of the principles I've been highlighting has been partnership.

 

Partnership is a powerful concept that helps teams perform better, creates better value for customers and is the foundation of teamwork.  Partnership also happens across the industry, within industries and between job functions.  For professionals, peer-to-peer networking is extremely valuable because it enables personal development and career growth.  Learning from others through open sharing of successes, failures, challenges and methodologies enables faster innovation and value creation.

 

I was recently introduced to the CIO Leadership Network, where Diane Bryant (Intel CIO) participates as an active member.  The CIO Leadership Network is an exclusive, CIO-only community where senior IT leaders meet to discuss the challenges they face in their profession and share best practices and key learnings as they lead their organizations.

 

Diane was recently featured inside the CIO Leadership Network with a discussion covering "Intel CIO's Top Two Priorities for 2010," in which she talks about how internal business leaders graded the Intel IT organization's performance and how that feedback is shaping its actions and investments moving forward.

 

 

Chris

 

January 20, 2010 (Computerworld) No one should be surprised that the big action in the CPU market this year will be in the mobile and low-power processor segments. Rapid growth in the power-saving all-in-one and small-form-factor desktop PC markets, continued strong demand for portable computers, and new usage models (digital photo and video editing, casual gaming, watching high-definition movies and so on) will all ignite demand for powerful new processors that consume less energy than previous generations did.

 

 

What's more, a new category of small portable computer is springing up between smartphones and netbooks: the smartbook. Smartbooks are designed to maintain 3G connections to the Internet and deliver a full day's use on a single battery charge, like smartphones, but they're also designed to run productivity applications (usually via the cloud) and feature much larger screens and keyboards, like netbooks. And while Intel Corp. pretty much owns the netbook market with its Atom processor, it could face a strong challenge on the smartbook front from ARM Holdings PLC with its extremely low-power Cortex-A8 and Cortex-A9 processors and their successors.

 

 

All this emphasis on mobile devices is not to say the desktop processor market will stagnate; in fact, Intel announced no fewer than seven new desktop CPUs at this year's Consumer Electronics Show, and Advanced Micro Devices Inc. and Intel are expected to introduce their first six-core desktop CPUs this year.

 

 

Here's a broad look at the road maps from the major chip makers, including their overall strategies and promised technologies for the coming year, as well as a peek at what they might offer in 2011.

 

 

Desktop processors

 

 

Quadcore processors will enter the mainstream this year as AMD and Intel whack down prices to gain market share. You can already find four AMD quad-core CPUs -- the Phenom X4 9850, 9750 and 9150e and the Athlon II X4 620 -- street-priced at less than $100.

 

 

At CES, Intel introduced an entirely new series of dual-core processors that were produced using its new 32-nanometer manufacturing process. Moreover, the first six-core desktop CPUs will be introduced this year, perhaps as early as the second quarter, but they will be aimed squarely at the enthusiast market.

 

 

At the other end of the spectrum, Intel will continue to dominate the market for ultra-low-power desktop CPUs. AMD is completely out of the picture there, but Via Technologies Inc. has some interesting products to offer.

 

Standard desktop CPUs

 

 

AMD will continue to rely on its K10 microarchitecture and won't ship any 32nm processors in bulk until 2011. As a result, the company's official desktop road map reveals very few CPU introductions this year. That will force it to compete with Intel largely on price in most market segments, since it can't challenge its rival on performance. AMD is, however, preparing to introduce a six-core desktop CPU -- code-named Thuban -- sometime in 2010.

 

 

[Image: AMD's desktop CPU road map]

 

Thuban is derived from the company's existing six-core Opteron server CPU and will have an integrated DDR3 memory controller. AMD says the chip will be backward-compatible with existing AM3 and AM2+ motherboards. Rumor has it that the CPU will be outfitted with 3MB of L2 cache and 6MB of L3 cache, but clock speeds will likely be slower than current AMD quadcores because of the thermal output of the two additional cores.

 

"Thuban is coming," said AMD spokesman Damon Muzny, "but we haven't disclosed specifications on the six-core desktop processors yet."

Intel continues to execute its "tick-tock" strategy, alternating between a new microarchitecture (last year's Nehalem being the tock) and a new manufacturing process (the new 32nm Westmere process being the tick). At CES, Intel introduced seven new dual-core desktop processors (four Core i5 CPUs, two members of the new entry-level Core i3 series, and a new Pentium) manufactured using the 32nm process. Previously code-named Clarkdale, the new chips support hyperthreading, so that multithreaded applications are presented with two physical and two virtual cores.

 

[Image: Intel's new Clarkdale processor]

The Pentium G6950, the Core i3-530 and 540, and the Core i5-650, 660, 661 and 670 all feature integrated Intel HD Graphics in the same chip package (but not on the same die). Intel maintains that its new integrated graphics offering is good enough for both mainstream gaming (with support for DirectX 10) and Blu-ray video decoding. It supports DVI, dual simultaneous HDMI 1.3a and DisplayPort; it's also capable of streaming encrypted Dolby TrueHD and DTS-HD Master Audio soundtracks.

 

Intel's existing quadcore desktop processors -- everything in the Core i7 series and the upper end of the Core i5 series -- will continue to be manufactured using the older 45nm process. Intel does, however, have a six-core Westmere chip on its official road map. Code-named Gulftown, the chip will supposedly reach the market sometime in the first quarter -- well in advance of AMD's six-core offering -- as part of Intel's Extreme Edition family. Intel has not yet disclosed branding, but rumor has it the chip will be officially labeled the Core i7-980X.

 

[Image: Intel's early 2010 desktop CPU road map]

Low-power desktop CPUs

At the other end of the power spectrum, Intel in late December announced two new low-power 45nm processors for entry-level desktop PCs: the single-core Atom D410 and dual-core Atom D510. Intel expects to see these chips used in all-in-one and small-form-factor PCs. The big news here is that Intel has moved the memory controller into the CPU, as it has done with its Nehalem architecture. This design change reduces the overall chip count from three to two, which lowers design and manufacturing costs as well as power and cooling requirements.

 

The Atom D410 has 512KB of L2 cache and the D510 has 1MB of L2 cache. Both processors run at 1.66 GHz, have a 667-MHz front-side bus (FSB), and support hyperthreading.

 

Unlike Intel, AMD won't have any ultra-low-power offerings this year. "AMD needs to enter this low-power market, but it has been too preoccupied," says Tom Halfhill, senior analyst at In-Stat's "Microprocessor Report" newsletter. "With any luck, AMD will be ready for a rebound in 2010."

 

Via Technologies -- which, according to Halfhill, pioneered the concept of simplified, low-power x86 processors -- does have a promising alternative to Intel's Atom. The company began mass-producing its Nano 3000 series of CPUs in December 2009. The Nano 3300 runs at 1.2 GHz with an 800-MHz FSB, while the Nano 3200 runs at 1.4 GHz, also with an 800-MHz FSB. Both chips are manufactured using a 65nm process, but they offer a number of features that Intel's Atom-series processors do not, including full support for Blu-ray video.

 

In addition, the processors in the Nano 3000 series support either 800-MHz dual-channel DDR2 memory or 1,066-MHz dual-channel DDR3 memory, while the Atom is limited to 800-MHz single-channel DDR2. And where the Nano 3000 series supports a full range of video interfaces (including LVDS, DisplayPort and HDMI), the Atom D410 and D510 are limited to LVDS and VGA.

 

For all that, Halfhill predicts, "Via will be lucky to nibble a few crumbs of market share. It's too bad, because Via makes some good x86 processors."

Mobile processors

Intel should notch the most mobile design wins this year, thanks to its ultra-low-power Atom processor and its Arrandale series processors, the latter of which integrate both a dual-core CPU and GPU in the same package. AMD's graphics division, on the other hand, should earn a lot of business in the desktop-replacement notebook market, because it's currently the only company that has a mobile graphics processor that's capable of supporting Microsoft's DirectX 11. In the handheld and smartbook market, ARM Holdings' Cortex-A8/A9 processors should gain significant traction.

Full-size laptop CPUs

AMD will continue to trail Intel on the mobile CPU front in 2010; in fact, the company has just two new mobile processors on its public road map for this year. AMD's first quadcore mobile CPU, code-named Champlain, will have 2MB of cache (512KB for each core) and support for DDR3 memory. AMD also plans to offer Champlain in dual-core trim.

 

According to AMD's road map, Champlain will be the foundation for its Danube platform for mainstream desktop replacement and thin-and-light notebooks. Danube will feature DirectX 10.1 integrated graphics with an option for a DirectX 11 discrete graphics processor.

 

AMD's second new mobile offering, code-named Geneva, will be a dual-core processor with 2MB of cache and DDR3 memory support. Geneva will form the basis of AMD's Nile platform for ultrathin notebooks and will feature DirectX 10.1 integrated graphics, with optional support for a DirectX 11 discrete GPU. AMD hasn't released any additional details about Champlain and Geneva since briefing analysts on the new chips in November.

 

[Image: AMD's mobile CPU road map]

Intel's 2010 mobile CPU offerings include the products announced immediately prior to CES: five new Core i7 chips, four new Core i5 models and two new Core i3 offerings. Intel will continue to use its older 45nm manufacturing process to build its high-end Core i7 mobile quadcore CPUs, but the new Core i3 and Core i5 dual-core chips (previously code-named Arrandale) will all use the 32nm Westmere process. These chips will have a graphics processor integrated in the same package as the CPU.

 

Each of the new chips features Intel's Turbo Boost technology (a feature inherent in the Nehalem microarchitecture), which enables them to dynamically vary their core operating frequency based on demand as long as they're running below their power, current and temperature limits. The Core i3 and Core i5 processors can dynamically vary the frequency of their integrated graphics cores in a similar fashion.

 

[Image: An enlarged 32nm Westmere die]

What's more, the new mobile processors can dynamically trade thermal budgets between the CPU core and the graphics core (a feature not supported on their desktop counterparts). If the computer is running a CPU-intensive application, for example, the processor will dial back the GPU to let the CPU run faster and hotter; likewise, if the computer is running a graphics-intensive application, the processor will dial back the CPU to give the GPU more thermal headroom.

 

Intel's new mobile processors will use the same graphics core as their desktop counterparts, so they'll offer all the same features, including support for DVI, dual simultaneous HDMI 1.3a, and DisplayPort interfaces, Blu-ray video decoding, and Dolby TrueHD and DTS-HD Master Audio soundtracks.

 

[Image: Intel's early 2010 mobile CPU road map]

Netbook CPUs

No vendor seems prepared to challenge Intel on the netbook front this year -- AMD has nothing to offer, and Via's new Nano 3000-series CPUs are aimed at the desktop and thin-and-light markets. And even Intel itself has announced only one new Atom processor for this market segment.

 

The Atom N450 is a single-core processor with 512KB of L2 cache. It runs at 1.66 GHz with a 667-MHz front-side bus, and it supports hyperthreading. As with the desktop-oriented Atom processors, the big news with the N450 is the integration of the memory controller into the CPU, which reduces the platform chip count from three to two. (Computerworld will be comparing four N450-based netbooks in an upcoming review.)

 

Smartbook CPUs

The outlook is quite different for smartbooks -- but offering any predictions about the smartbook market is nothing more than rank speculation, because this class of machine barely exists today. Smartbooks are expected to be smaller, lighter and cheaper than netbooks, and subsidies from cell-phone providers could even render them "free" -- provided you sign a long-term data-plan contract, of course.

 

It's widely speculated that ARM's Cortex-A8 and Cortex-A9 processors will become the CPUs of choice for the first generation of smartbooks. ARM doesn't build its own processors; instead it licenses its designs to other manufacturers who incorporate the designs into their own platforms. Cortex chips can currently be found in Freescale's i.MX515, Nvidia's Tegra series, Qualcomm's Snapdragon series and Texas Instruments' OMAP 3 series.

 

Designing a smartbook based on an ARM processor will entail trade-offs, according to some industry analysts. "ARM-based smartbooks can't run the desktop version of Windows," says Halfhill. "Instead, they will run Windows Mobile or GNU/Linux. My opinion is that most users will prefer a netbook that runs standard Windows apps, but others disagree. Apple could nudge the market in the ARM direction by introducing an iPhone-compatible smartbook."

Looking further out

AMD hopes to begin sampling its first 32nm CPUs later this year and to start shipping in bulk in 2011. The company expects to offer both a new high-end desktop microarchitecture, code-named Bulldozer, and a new low-power mobile microarchitecture, code-named Bobcat.

 

A single Bulldozer core will appear to the operating system as two cores, similar to Intel's hyperthreading scheme. The difference is that Bulldozer's two cores are implemented almost entirely in hardware.

 

[Image: A diagram of a Bulldozer chip]

AMD's first Bulldozer CPU, code-named Zambezi, will feature four to eight cores, which will appear to the operating system as eight to sixteen cores. Zambezi will be paired with an upcoming discrete graphics chip to form AMD's Scorpius platform for the enthusiast desktop market.

 

AMD also expects to finally ship its much-touted Fusion processor, which will be the first chip to combine a CPU and a GPU on a single die. (Intel's Arrandale and Clarkdale CPUs feature two dies in a single package.)

 

AMD calls its Fusion product an "accelerated processing unit" (APU). The first, code-named Llano, will combine up to four CPU cores with a DirectX 11-compatible graphics processor. Llano will be aimed at both the mainstream desktop market (as a component in AMD's Lynx platform) and the desktop-replacement and thin-and-light notebook markets (as a component in AMD's Sabine platform).

 

AMD's Bobcat microarchitecture will finally give the company products that can compete with Intel's Atom processor in the netbook market. Not much is known about Bobcat at this time, but AMD has revealed that two Bobcat cores will be used in its low-power APU, code-named Ontario. Ontario will be aimed at the ultrathin and netbook markets (as a component in AMD's Brazos platform).

 

Intel won't be standing still either, and it has already announced that it intends to introduce a new microarchitecture (the next "tock" in its ongoing execution strategy), code-named Sandy Bridge, later this year. Intel has not released much official information about Sandy Bridge, other than to say that it will use the 32nm manufacturing process introduced with Westmere and that it will feature a graphics core on the same die as the processor core -- which makes it sound a lot like AMD's Fusion. It's been widely reported in the enthusiast press and on tech-rumor Web sites, however, that Sandy Bridge will include four CPU cores.

 

Via Technologies declined to provide a longer-term road map for its CPU business, but the company is likely to continue to plug along in its niche markets. ARM Holdings also declined to comment on future products, but at CES, several of the company's licensees announced new products based on its existing CPU architectures. Marvell Technology Group Ltd. announced the first quadcore CPU based on the ARM instruction set, for example, and Nvidia Corp. announced that its next-generation Tegra system-on-a-chip (SoC) would feature a dual-core ARM Cortex-A9 CPU with a clock speed as high as 1 GHz.

AMD won't pose much of a threat to Intel's dominance in either the desktop or notebook CPU markets in 2010, but neither company has a strong portfolio when it comes to smartbooks and other ultramobile devices: Intel sold its handheld mobile CPU division to Marvell in 2006, and AMD sold its handheld business to Qualcomm Inc. in early 2009. And that leaves ARM in a very strong position for at least the next year or so.


 

In a bright sign for recession-battered Silicon Valley, Santa Clara chipmaker Intel has just handed out its biggest employee bonuses since the dot-com era, reflecting the company’s vastly improved finances.

 

The extra pay was provided in two larger-than-usual bonus categories, as well as in a third surprise “thank-you” bonus, the first such bonus Intel had given in several years, according to spokeswoman Gail Dundas.

 

She said the company’s generosity stemmed largely from a big resurgence of business at Intel, which last week reported it earned $2.3 billion in the fourth quarter of 2009. That’s an 875 percent increase from the same period a year ago and the company’s biggest quarterly profit since the fourth quarter of 2005.

 

Intel routinely offers two types of bonuses in January. In one of those, the company this time gave its employees 12.4 extra days of pay. Dundas said the last time that bonus was bigger was in 2000, when Intel paid its workers an extra 13.5 days’ salary.

 

Intel also routinely pays another type of bonus in January based on each employee’s job level and performance multiplied by a number that reflects how well the company did that year. This time, Dundas said the Intel multiplier was 3.92, the highest it has been since 2000. In other words, an employee whose job level and performance warranted a $1,000 bonus actually received $3,920 using the latest multiplier, versus $2,660 in 2008, when the multiplier was lower.

 

On top of those two bonuses, “Intel U.S. employees each received a surprise $1,000 bonus in December as a thank you for the business results delivered for the company,” Dundas said. Intel employees in some other countries received $500, she added, noting that the last time Intel paid a thank-you bonus was in 2005.

 

Dundas was vague about the total amount of the bonuses the company paid its worldwide work force of 79,800, as well as the range of compensation it gave to various employees, saying Intel doesn’t normally disclose that information.

 

Aaron Boyd, research manager at Equilar, a Redwood City compensation consulting firm, said he is unaware of other Silicon Valley companies offering new or increased bonuses like Intel, which he noted “has really bounded back from where they were a year ago when things weren’t so rosy.”

 

But given that Intel, the world’s biggest chipmaker, is widely viewed as a barometer for the tech industry, he said, other companies eventually might follow its example if their businesses also improve.

 

Economic prospects do seem to be brightening for many local firms, according to Carl Guardino, president and CEO of the Silicon Valley Leadership Group, who consults regularly with the area’s business executives.

 

Even in sluggish economic times when companies aren’t hiring, they often feel a need to pay bonuses to keep the top talent they have, Guardino said, adding that his organization also recently awarded most of its employees bonuses. “We wanted to send a message to our own team: ‘We believe in you,’” he said.

It is wintertime in the US, and most of us are thinking about staying warm.

 

 

However, many IT professionals, especially the facilities teams, are constantly thinking about keeping the data centers cool.

 

 

What if IT started to think like everyone else and allowed the data center to warm up some? What risks would that bring? Intel IT has been testing and evaluating our ability to adjust the temperature of our data centers, and the findings are interesting.

 

 

Alon Brauner (Regional Data Center Operations Manager, Intel IT) talks about his experience on this project in this video on IT Sustainability.  Alon has found that turning up the temperature a little in the data centers (like turning your lights off at home) is saving Intel IT money while keeping the data centers and the equipment within specification.

 

 

For more IT Sustainability best practices and lessons learned from Intel IT, visit our website (keyword: sustainability) or start with the Intel IT Sustainability Strategy.

 

 

Chris

    Serving a large number of powerful compute servers, many of our file servers struggle under the high load and may occasionally become a bottleneck.
    One of the most common access patterns in our batch environment is related to validation, where some dataset under test is accessed again and again by thousands of batch jobs. Such a dataset never changes after it is created, and after some period of time it becomes irrelevant, as a new version of the same dataset is released and all tests point to that new version.
    To accommodate such workloads, we've developed a caching mechanism that is tightly integrated with the actual testing environment. Every time a test lands on a compute server, it checks whether the relevant dataset is already cached. If it is, the test runs against the cached copy on local disk. If it is not, the test copies the dataset to the local disk, either from a file server or from one of the peer compute servers, and registers the new location in a central directory service. This solution results in a significant reduction of load on the central file servers. The cache manager also takes care of cleaning up old data. A simplified sketch of this flow is shown below.
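
Here is a minimal sketch of that check-copy-register flow. All paths, the rsync-based peer copy and the in-memory directory-service stand-in are hypothetical; the actual internal implementation is not described in detail in this post.

```python
import os
import shutil
import socket
import subprocess

CACHE_ROOT = "/local/cache"          # hypothetical per-host cache directory
FILE_SERVER_ROOT = "/nfs/datasets"   # hypothetical central file server mount

# Stand-in for the central directory service; in practice this would be a
# shared database or small network service visible to all compute servers.
peer_locations = {}                  # dataset_id -> list of hosts holding a copy


def fetch_dataset(dataset_id):
    """Return a local path for the dataset, populating the cache if needed."""
    local_path = os.path.join(CACHE_ROOT, dataset_id)

    if os.path.isdir(local_path):
        # Already cached on this compute server: run the test against it.
        return local_path

    peers = peer_locations.get(dataset_id)
    if peers:
        # Copy from a peer compute server to keep load off the file servers.
        subprocess.run(
            ["rsync", "-a", f"{peers[0]}:{local_path}/", local_path],
            check=True,
        )
    else:
        # No peer has it yet: fall back to the central file server.
        shutil.copytree(os.path.join(FILE_SERVER_ROOT, dataset_id), local_path)

    # Register this host as an additional source for future jobs.
    peer_locations.setdefault(dataset_id, []).append(socket.gethostname())
    return local_path
```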

 

Do you have issues with file server performance? How do you solve them?

 

Till the next post,
   Gregory Touretsky

I recently did an interview with IT Business Edge around the BYOC concept in the Corporate IT space.  http://www.itbusinessedge.com/cm/community/features/interviews/blog/bring-your-own-computer-intel-calls-for-split-personality/?cs=38256&utm_source=itbe&utm_medium=email&utm_campaign=MII&nr=MII-TOP

 

This is an area we have been researching for a couple of years now, and I have spent the last six months talking to numerous IT shops, big and small, about consumerization, virtualization and BYOC.  BYOC is very attractive to IT shops for numerous reasons:

  • Gives them the opportunity to get out of the platform management business

  • Can lead to lower capital costs, or in some cases IT can get out of capitalizing these assets altogether

  • IT is now viewed as an enabler/partner to end users rather than a roadblock

 

These are just some of the benefits, but there are just as many concerns as positives:

  • Managing security on non-corporate-owned devices

  • Funding and allocation models are cloudy in most cases

  • Which operating systems and device types to support

  • Users are increasing the number of devices they wish to use to perform "work" related tasks

 

We are looking at our user segments in a completely new way.  We are no longer looking for the "one size fits all" solution; instead we are refining our user segments into smaller categories and looking for niche use cases that deliver positive ROI right away and can grow to a larger population of users as the model matures.

 

BYOC isn't something that is ready for corporate prime time yet, but it also isn't that far off.  Start looking at the architecture and services you are delivering internally now and begin to think of these future models.  As the consumerization influence grows, more employees are going to want to use the latest devices they have bought themselves.  We just need to outline how to make it safe, seamless and practical.

 

I welcome everyone else's thoughts on the subject.

Wyse Technology is putting Intel’s Atom processor into a mobile thin client device.

 

Wyse officials said the X90cw thin client is aimed at mobile workers looking to take advantage of the benefits of virtualization and cloud computing.

 

Wyse, which announced the device Jan. 13, is demonstrating the X90cw at the British Education and Training Technology show in London.

 

Also at the event, Wyse unveiled its TCX Suite 4.0, which offers features designed to improve the end user experience on virtual clients.

 

The X90cw brings the user experience of mobile virtual clients to the same level as traditional laptops, according to Ricardo Antuna, vice president of product management, business development and alliances at Wyse. It also offers the same security as other thin clients, given that the data and other key components reside on central servers, not the device itself.

 

“The Wyse X90cw is now the new wave of compact, lightweight, high-performance Internet devices that assures the end user experience is as good or better than a comparable PC, but the security of a virtual client means that it’s now more dangerous to misplace a smartphone than it is a computer,” Antuna said in a statement.

 

The device weighs 3.2 pounds and offers an 11.6-inch widescreen display. It runs Microsoft’s Windows Embedded Standard 2009 operating system and is optimized to work with such virtualization platforms as Citrix Systems’ XenApp and XenDesktop, Microsoft’s Terminal Server and Hyper-V, and VMware’s View.

 

It offers such options as a built-in Webcam, integrated wireless capabilities, Bluetooth 2.1 and support for 3G cards.

 

The X90cw is available now starting at $699.

 

Wyse’s TCX offerings are designed to heighten the end user experience on virtual PCs. Version 4.0 brings all existing TCX solutions into a single suite of offerings and is optimized to work in Terminal Services, XenApp and XenDesktop, and View environments.

 

The suite also supports Microsoft’s Windows 7 and Windows Server 2008 R2 environments.

 

With its Collaborative Processing Architecture, TCX Suite 4.0 also can divide workloads between the server and client.

 

eWeek

Late in December, I read a thought-provoking blog post titled The Most Important Job of the CIO by Isaac Sacolick that has had me thinking about the subject for nearly a month now. Isaac affirms that the most important job/skill of a successful CIO, or other IT leader, is negotiation. While I agree that negotiation is important, the very premise of negotiation, in my opinion, puts IT and the CIO at odds with the business.

 

In my experience, the concept of partnership is a more critical skill set / job for the CIO. Partnership implies mutual goals, a shared vision, aligned priorities and a common interest in succeeding together - the true win/win.  This partnership unleashes innovation and streamlines IT investments and business transformation projects that are truly in the best interest of the business, and it builds IT’s credibility inside the organization.

 

If we place too much focus on winning the negotiation and jeopardize the partnership, the risk is that we spend less time innovating and creating business value. Within the Intel IT organization, our CIO, Diane Bryant, has fostered a culture of partnership that is driving some very cool results for the business. When I first joined IT, I dismissed the message of “partnerships”; since then, however, I’ve been learning about the importance of partnerships first-hand.

 

To reinforce my point, I’d like to share a recent Q&A in which I heard Diane Bryant address the role of the CIO with her peers:

 

Q: Where do you see the role of the CIO going?  What should every one of us IT leaders be focusing on to prepare for the future?

 

A (Diane): I just received the ‘09 survey results from Intel’s business leaders grading my organization’s effectiveness in meeting their needs.  There were two themes common across all groups, spanning manufacturing, sales and marketing, finance, human resources, and the P&L organizations.  They all stated their dependency on IT is growing.  And they stated they want a greater strategic partnership with IT.  This reflects opportunity and challenge for IT’s future.  The challenge is we must run an operation that is always available. The business depends on us.  The opportunity is to further cement IT’s role as a “value center”.  IT is no longer a “cost center”.  There isn’t an element of the business that doesn’t benefit from the integration and automation through information technology.  As I tell my organization:  We are successful when IT is a clear competitive advantage for Intel.  We achieve that objective through improving employee productivity, improving business efficiency (bottom line value), supporting business growth (top line value), and in delivering IT efficiencies and continuity.   And as a CIO, my job is to build the strategic relationships with the Intel executive staff to ensure that IT is top of mind when they are developing their business strategies.

 

For more from Intel IT, visit our web site at www.intel.com/IT

 

Chris (twitter)

When it comes to computer security, one of the most important aspects of creating a secure solution or product is to focus on security from the start; it cannot be tested in. Many organizations make extensive use of security testing products to help find security vulnerabilities that exist in currently deployed products. But how did those vulnerabilities get there in the first place? It may have been that computer security was not a consideration from the start.

 

 

There are many different terms and processes being developed in organizations throughout the world to ensure security is integrated into the systems development life cycle (SDLC) -- often under the label of a security development lifecycle (SDL) -- but my focus here is to summarize what I think is important and why. The challenge is that implementations of information security processes commonly differ from one organization to another, which makes this one topic that is difficult to learn in an information security course. The differences are due to various factors that include (1) risk tolerance, (2) security culture and (3) the capability maturity levels of information security processes within an organization.

 

 

Here's a scenario to consider: You’ve just completed and delivered the latest business requirements document for a new product and you think things will be getting back to a normal 45-hour work schedule... That's when your manager strolls into your office and says you will now be responsible for security during the development of this new product, and you have no idea what is being asked of you. Understanding the three factors described above is very important in this scenario. One example of a secure culture would be an organization that has already established an information classification standard. This will assist in determining a level of risk for the “information” being processed by the computer application or solution being implemented. If such a standard is non-existent, you may have to get advice from a risk consultant or research what is being done in the industry to place a value on the type of information needing to be protected. Information security processes for an entire organization can follow the recommendations and concepts documented in the ISO 27000 series of standards. Obviously, if the processing involves handling or storing information that is considered personally identifiable information (PII), that is a good indicator of where there is value to a potential attacker. Note that information can still be considered private, in the sense that a person may not wish for it to become publicly known, without being personally identifiable. Another risk could be the disclosure of intellectual property, which is always more difficult to value but fits into the category of risk tolerance for the organization.

 

 

Most importantly, security is a process, and not a one-size-fits-all process at that. Some of the things to consider when developing a process to integrate security:

 

Threat Modeling:

One of the first steps in integrating security is threat modeling. This process involves creating a list of reasons an attacker would want to find vulnerabilities in your product or solution, and then defining the possible attack vectors -- the ways in which that attacker could succeed. Understanding the business drivers and usage models is important in this process and can help identify the areas of most concern. Estimating the value of a threat materializing begins the definition of risk, which includes the possibility of someone gaining access to a valuable asset of the types of information described above. Most importantly, we come out of the threat modeling process with a list of threats rated based on the information being handled by the new system. A feature being implemented may already look too risky at this point, but that can be confirmed only after completing the next phase of defining security requirements.
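
As a rough illustration of the output of this step -- not a prescribed format, and using entirely invented threats -- a rated threat list can be as simple as likelihood multiplied by impact, sorted highest first:

```python
from dataclasses import dataclass


@dataclass
class Threat:
    description: str    # what the attacker is trying to achieve
    attack_vector: str  # how the attacker could succeed
    likelihood: int     # 1 (rare) .. 5 (almost certain)
    impact: int         # 1 (negligible) .. 5 (severe loss)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact


# Hypothetical entries produced during threat modeling.
threats = [
    Threat("Steal PII from the customer database", "SQL injection in the web front end", 4, 5),
    Threat("Leak intellectual property", "Compromised developer workstation", 2, 4),
    Threat("Deface the public site", "Unpatched third-party component", 3, 2),
]

# The outcome of this phase: threats ranked by risk, highest first.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"risk={t.risk:2d}  {t.description} [{t.attack_vector}]")
```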

 

Defining Security Requirements:

With the threat model complete, we can create the security requirements, which may evolve throughout the development of the product. Change should be considered an expectation, so this process can work within an agile methodology. The threat model and security requirements can be updated throughout the lifecycle so that, as changes are approved for the current or next iteration, those changes are properly evaluated for security-related risks. This phase also allows for the creation of a defense-in-depth security strategy, as the design can take into account additional security mitigations and the levels of access granted to different functions.

 

Checkpoints are a good practice and are intended to ensure security mitigations are appropriate and that the defined process is being followed throughout the development cycle. These checkpoints should be done several times before implementation. A qualified representative should be assigned to own information security requirements and can be the focal point for all questions, testing, and follow-up.

 

Security Testing:

Also called penetration testing. Although this process is often the first consideration for security, it should be one of the last. Before deployment, this phase can be performed internally, externally (i.e., by a third party) or both. Security testing should be done to verify that all the security requirements have been implemented appropriately. If time permits, consider testing security in areas outside of what has been defined as a threat; the results may be surprising. It is possible that the threat modeling did not capture all possible threats and, therefore, new security requirements might be needed at the last minute. This should not be considered a failure, or a sign that the work performed in previous steps was not valuable. On the contrary, it is only through evaluating what went well and what didn’t that improvement can occur and maturity levels increase.

 

 

The security process in an organization of any size should be one targeted for constant improvement. Processes for integrating security into the SDLC can provide a valuable contribution to a defense-in-depth security strategy. Moreover, many estimates throughout the industry have shown that security issues are much less costly to an organization when they are found during development rather than after deployment.  For this reason, a security process defined and integrated early in the development cycle is very important.

The role of IT organizations is to deliver value by increasing employee productivity, driving business efficiency, facilitating business growth, or by delivering IT efficiency and continuity.  During economic downturns, IT is often called upon to step up and deliver even more value.  This is exactly what Windows 7 with the latest Intel Core-based mobile platforms delivers.  This is why I am so excited that we are well down the path to full deployment of the new OS with our enterprise release just around the corner.

 

Windows 7, along with the performance from the latest Intel-based mobile clients, enables us to deliver on our overall client strategy of providing productivity and flexibility to our employees while reducing TCO.  Our testing has shown that Windows 7 improves productivity with higher application responsiveness, a better user interface, and improved stability.  The new OS also enables clients to be managed more easily, helping to drive lower TCO through reduced service desk calls. Finally, the enhanced security and application control features are complementary with Intel vPro technology and give us better data protection.  To ensure we capture this value as quickly as possible, we are preparing for an aggressive enterprise adoption of Windows 7 coupled with our continued PC refresh strategy.

 

I hope you find this video pertinent and I encourage you to respond and share your ideas on what your company is doing to drive employee productivity or how you are taking advantage of Windows 7 and the new Core-based platforms.

 

Thank you,
Diane Bryant, Intel CIO

 

Cloud Computing & the Psychology of Mine

Legacy Thinking in the Evolving Datacenter

The 1957 Warner Brothers* cartoon “Ali Baba Bunny” shows a scene where an elated Daffy Duck bounds about a pile of riches and gold in Ali Baba’s cave exclaiming, “Mine, Mine.. It’s all Mine!” Daffy Duck, cartoons, Ali Baba… what do these have to do with the evolving datacenter and cloud computing?

The answer to this question is ‘everything’! Albeit exaggerated, Daffy’s exclamation is not far from the thinking of the typical application owner in today’s datacenter. The operating system (OS), application, servers, network connections, support, and perhaps racks are all the stovepipe property of the application owner. “Mine, Mine… It’s all Mine!” For most IT workloads, that means a singularly purposed stack of servers, 50-70% over-provisioned for peak load and conservatively sized at 2-4x capacity for growth over time. The result of this practice is an entire datacenter running at 10-15% utilization in case of unforeseen load spikes or faster-than-expected application adoption. Given that a server consumes 65% of its power budget when running at 0% utilization, the problem of waste is self-evident.

Enter server virtualization, the modern Hypervisor or VMM, and the eventual ubiquity of cloud computing. Although variations in features exist between VMware*, Microsoft*, Xen*, and other flavors of virtualization, all achieve abstraction of the guest OS and application stack from the underlying hardware and workload portability.

This workload portability, combined with abysmal utilization rates, allows consolidation of multiple OS-App stacks onto single physical servers, and the division of ever-larger resources such as the 4-socket Intel Xeon 7500 series platform, which surpasses the compute capacity of mid-90s supercomputers. Virtualization is a tool that helps reclaim datacenter space, reduce costs, and simplify the provisioning and re-provisioning of OS-App stacks. However, much like a hammer, virtualization requires a functioning intelligence to wield it, and it could result in more management overhead if one refuses to break the paradigm of ‘mine’...

A portion of this intelligence lies with the application owner. In the past, the application owner had to sequester dedicated resources and over-provision to ensure availability and accountability. Although this thinking is still true to a degree, current infrastructure is much more fungible than the static compute resources of 10 or even 5 years ago. The last eight months working on the Datacenter 2.0 project, a joint Intel IT and Intel Architecture Group (IAG) effort, brought this thinking to the forefront as every Proof of Concept (PoC) owner repeatedly asked for dedicated resources within the project’s experimental ‘mini-cloud’. Time and time again, end users asked for isolated and dedicated servers, network, and storage, demonstrating a fundamental distrust of the cloud’s ability to meet their expectations. Interestingly, most of the PoC owners cited performance as the leading reason for requesting dedicated resources, yet they were unable to articulate specific requirements such as network bandwidth consumption, memory usage, or disk I/O operations.

The author initially shared this skepticism, as virtualization and ‘the cloud’ have some as-yet immature features. For broad adoption, the cloud compute model must demonstrate both the ability to secure and isolate workloads and the ability to actively respond to demands across all four resource vectors: compute, memory, disk I/O, and network I/O. Current solutions easily respond to memory and compute utilization; however, most hypervisors are blind to disk and network bottlenecks. In addition, current operating systems lack the mechanisms for on-the-fly increases or decreases in the number of CPUs and the amount of memory available to the OS. Once the active measurement, response, trend analysis, security, and OS flexibility issues are resolved, virtualization and cloud compute are poised to revolutionize the way IT deploys applications. However, this is the easy piece, as it is purely technical and a matter of inevitable technology maturation.

The more difficult piece of this puzzle is the change in thinking and the paradigm shift that end users and application owners must make. This change in thinking happens when the question asked becomes, “is my application available?” instead of, “is the server up?” and when application owners think in terms of meeting service level agreements and application response time requirements instead of application uptime. After much testing and demonstration, end users will eventually become comfortable with the idea that the cloud can adapt to the needs of their workload regardless of the demand vector.

Although not a panacea, cloud computing promises flexibility, efficiency, demand-based resourcing, and an ability to observe and manage the resources consumed by application workloads like never before. As this compute model matures, our responsibility as engineers and architects is to foster credibility, deploy reliable solutions, and push the industry to mature those underdeveloped security and demand-vector response features.

Christian D. Black, MCSE

Technologist/Systems Engineer

Intel IT – Strategy, Architecture, & Innovation

Download the whitepaper:  Prioritizing Information Security Risks with Threat Agent Risk Assessment

 

Intel IT has developed a threat agent risk assessment (TARA) methodology that distills the immense number of possible information security attacks into a digest of only those exposures most likely to occur. This methodology identifies threat agents that are pursuing objectives which are reasonably attainable and could cause unsatisfactory losses to Intel.

 

It would be prohibitively expensive and impractical to defend every possible vulnerability. By using a predictive methodology to prioritize specific areas of concern, we can both proactively target the most critical exposures and efficiently apply our resources for maximum results.  The TARA methodology identifies which threat agents pose the greatest risk, what they want to accomplish, and the likely methods they will employ. These methods are cross-referenced with existing vulnerabilities and controls to pinpoint the areas that are most exposed. Our security strategy then focuses on these areas to minimize efforts while maximizing effect.
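
The cross-referencing step can be pictured with a toy example. The data and structure below are invented purely for illustration; they are not the actual TARA library, dataset or algorithm described in the whitepaper.

```python
# Hypothetical threat agents and the attack methods they favor.
threat_agents = {
    "organized crime": ["phishing", "malware on endpoints", "web app exploitation"],
    "disgruntled insider": ["data exfiltration via removable media", "privilege abuse"],
}

# Known weaknesses in the environment, and those already covered by controls.
known_vulnerabilities = {
    "phishing",
    "web app exploitation",
    "data exfiltration via removable media",
    "privilege abuse",
}
controlled = {"privilege abuse", "phishing"}

# Exposures: likely methods that hit a known vulnerability with no control in place.
for agent, methods in threat_agents.items():
    exposures = [m for m in methods if m in known_vulnerabilities and m not in controlled]
    if exposures:
        print(f"{agent}: prioritize {', '.join(exposures)}")
```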


Download the whitepaper and share your thoughts, criticisms, and ideas.

 

 

Other security whitepapers:
Whitepaper - Measuring the Return on IT Security Investments

Information Security Defense In Depth Whitepaper is Now Available

Threat Agent Library Helps Identify Information Security Risks

   Our IT infrastructure is complicated and includes thousands of compute and file servers, a distributed batch environment, network components, and more. A large number of projects utilize this infrastructure in parallel.

   We obviously have monitoring systems in place that track the behavior of individual components, such as critical servers and the network. However, these monitoring solutions can't address every potential service degradation we may run into.
   To be able to intercept such unexpected issues before our internal customers begin to suffer, we are trying to introduce some kind of user experience monitoring, or black-box monitoring.


  Some solutions in this area exist on the market for ERP or DB systems. However, I'm talking here about open systems we use in our R&D environment.

  For example, we monitor the responsiveness of our NFS environment. Instead of looking only at specific metrics of the file servers (network I/O, CPU utilization, etc. - which we still collect for later analysis), we also monitor the entire stack.
  We copy the same file from every file server every X minutes, using two clients residing in different subnets. We measure the time it takes to copy this file and compare it with the baseline. Every time we exceed the predefined threshold, we launch automatic data gathering to see what has happened with the affected file server, network, batch infrastructure, and so on. This data is analyzed immediately, or at a later time, to make the appropriate operational decisions. (A simplified sketch of such a probe appears below.)
  Naturally, the amount of data we are collecting is exploding, so data mining may provide some interesting insights.
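
A minimal sketch of such a probe loop, with invented server names, paths, baselines and thresholds (the real probes and the automatic data-gathering hooks are internal):

```python
import subprocess
import time

FILE_SERVERS = ["fs01", "fs02", "fs03"]              # hypothetical server names
PROBE_FILE = "probe/testfile.dat"                    # same probe file present on every server
BASELINES = {"fs01": 2.0, "fs02": 2.5, "fs03": 2.0}  # seconds, from past measurements
THRESHOLD_FACTOR = 3.0                               # alert when a copy takes 3x the baseline


def probe(server: str) -> float:
    """Copy the probe file from one file server and return the elapsed time."""
    start = time.monotonic()
    subprocess.run(
        ["cp", f"/nfs/{server}/{PROBE_FILE}", "/tmp/probe.dat"],
        check=True,
    )
    return time.monotonic() - start


def run_probes() -> None:
    for server in FILE_SERVERS:
        elapsed = probe(server)
        if elapsed > BASELINES[server] * THRESHOLD_FACTOR:
            # In the real system this is where automatic data gathering on the
            # affected file server, network and batch infrastructure kicks in.
            print(f"ALERT {server}: copy took {elapsed:.1f}s "
                  f"(baseline {BASELINES[server]:.1f}s)")


if __name__ == "__main__":
    run_probes()
```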

What data mining techniques/solutions do you use, if at all, for your IT-related data analysis?

 

Till the next post,

   Gregory Touretsky

The Christmas 2009 incident, when a bomber attempted to detonate explosives sewn into his undergarments while in the passenger cabin of a commercial airliner, could have resulted in a horrific catastrophe.  Although near tragic, it is another example of how security-minded people were quick to respond and interrupt the attack.  The media has focused on how the device malfunctioned, but paid little tribute to the passengers who rose up, acted quickly, and subdued the assailant.  Given that his primary plan failed, he likely would not have stopped in his mission to do great harm.  The passengers essentially stopped his ‘Plan B’ and deserve credit.

 

Americans will never again be subdued the way they were aboard the aircraft during the infamous 9/11 attacks.  We have learned a very important lesson.  Being aggressive in assuring security, in the face of an incident, is imperative.  Security-knowledgeable people will remain aware and act quickly when situations arise that require intervention to restore security.

 

These lessons translate well to the information security realm:
1. Security-savvy users are an incredibly valuable component of a defense-in-depth strategy (Information Security Defense In Depth Whitepaper is Now Available)
2. Rapid and aggressive response is important to reduce loss and restore the environment to an acceptable level of risk
3. We as administrators and users must continually learn, adapt, and evolve in response to security risks.  The attackers continuously adapt; we must too.

 

 

What security lessons have you learned recently?
