Download Now

 

As a leader in multi-platform managed broadcast solutions, iStreamPlanet helps companies turn digital content into sustainable revenue streams. Working with cloud innovator Switch Communications, iStreamPlanet deploys a robust infrastructure-as-a-service (IaaS) cloud that uses Intel technologies as the foundation of its server, network, and storage solutions. iStreamPlanet executives say the Intel® Xeon® processor E5 family will help them give their customers a strategic advantage and deliver more compelling media experiences to more consumers at a lower cost.


“Out of the gate, we saw a 20 percent improvement in how quickly we can digitize content for distribution,” explained Mio Babic, CEO of iStreamPlanet. “We’re talking about thousands and thousands of hours of content, so to digitize it 20 percent faster or with 20 percent less resources translates to significant savings.”


To learn more, download the new iStreamPlanet business success story. As always, you can find many more like this one on the Intel.com Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

Designed for industry-leading performance and maximum energy efficiency, the Intel® Xeon® processor 5600 series powers versatile one-way and two-way, 64-bit, multi-core servers and workstations that are ideal for a wide range of infrastructure, cloud, high-density, and high-performance computing (HPC) applications. Learn how four companies are putting it to work in these new business success stories:

  • Healthy Outcomes for Cerner: RISC migration to Intel Xeon processors 5600 and 7500 series improves uptime, performance, and savings for Cerner's mission-critical healthcare applications.
  • Mindspeed Technologies Moves to a Platform for Growth: Standardizing on Intel Xeon processors 5600 series helps Mindspeed Technologies consolidate, reduce costs, and support continued business expansion.
  • Versant Boosts Performance: A high-performance database developer completes tests around 80 percent faster with standardized IT based on Intel Xeon processors 5600 series.
  • Virtual World Comes to Life Quicker for Virtalis: A leading virtual reality company delivers a high-performance solution based on Intel Xeon processor 5680, reducing lead times on customized workstations by approximately 25 percent.

As always, you can find many more business success stories like these on the Intel.com Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

I occasionally get questions from customers.

 

One recently was, ‘Can Intel Xeon Processors handle a 20TB Oracle database?’

 

We get this question occasionally, and on its face it doesn’t quite make sense. I understand the basis of the question: the customer is concerned about whether Xeon can tackle a very large database. Is the real question, ‘Can Xeon read in a lot of data and process it efficiently and quickly?’ We can easily show in benchmark tests that the Xeon E7 family of processors does this faster than most proprietary RISC processors.

 

(Chart: TPC-H 1,000 GB benchmark results for the Intel Xeon processor E7 family. Higher is better.)

 

 

Where the question falls apart is in the premise: can a 64-bit Xeon address 20TB? If a 64-bit RISC processor can address 20TB, then a 64-bit Xeon can as well. No database is going to read 20TB of data at a time, and besides, an Oracle database is going to have a lot of space that is either empty or not used. (For instance, is there really 20TB of data, or is it really 12TB or less?) But the concern of the customer usually goes deeper. So let’s break this issue down.

 

What is the number of users? This is a useful question. For instance, is it a data warehouse with only a handful of users? Or is it a highly transactional database with thousands of users? In either scenario Xeon is great. (In 2008 and 2009 I was a DBA for Oracle on a benchmark they were running of a 10TB medical database with between 10,000 and 20,000 users. The Xeon processors for that benchmark were several generations old by today’s standards.)

 

Another question that may really be being asked is: ‘What is the largest data file I can create for my 20TB database?’ What I’ve found behind this question is a concern about the manageability of the database, given the number of datafiles that would need to be created to get to 20TB. (For that benchmark three years ago, it took me all weekend to build a 10TB database with 1GB datafiles. I had them spread out, but there were an awful lot of them. Today, with much faster I/O, creating a 20TB database will take much less time.)
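
Datafile sprawl is also less of an issue than it used to be. Oracle’s bigfile tablespaces (introduced in Oracle 10g) let a single datafile grow to tens of terabytes, so a 20TB tablespace no longer has to mean thousands of files. A minimal sketch, with an illustrative name and path:

-- One bigfile tablespace with a single large datafile replaces
-- thousands of small datafiles (name, path, and sizes are illustrative).
CREATE BIGFILE TABLESPACE warehouse_ts
  DATAFILE '/u01/oradata/orcl/warehouse_ts01.dbf' SIZE 2T
  AUTOEXTEND ON NEXT 100G MAXSIZE 20T;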

 

Another concern raised by the original question is memory addressability. For large databases, the thinking is that the datasets being processed in memory are very large. Can Xeon address as much memory as a proprietary RISC processor? In other words, can Xeon scale up? Do platforms sporting a Xeon E7 processor have the same memory capacity as servers with a proprietary RISC processor? We can easily demonstrate that Xeon fits the bill: a diverse range of platforms from various vendors supports 2TB to 6TB of RAM.

 

Another concern raised by the question might be about concurrent processing. With a 20TB database, a lot of the processing may use Oracle’s parallel query feature. The Xeon E7 family, with its multiple cores and Hyper-Threading Technology, can easily handle significant parallel processing. For perspective, I started running Oracle’s Parallel Query Option (PQO) in 1996, when the feature first came out, on a 24-processor Sequent server built on Pentium processors.
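
Parallel query is easy to exercise on a modern Xeon system. A minimal sketch (the table name and degree of parallelism are illustrative):

-- Ask the optimizer to scan a large table with 16 parallel slaves.
SELECT /*+ PARALLEL(f, 16) */ COUNT(*)
FROM   sales_fact f;

-- Or set a default degree of parallelism on the table itself.
ALTER TABLE sales_fact PARALLEL 16;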

 

I imagine there are additional ways to break this question down, but overall the question, “Can the Xeon E7 processor run a 20TB database?”, deserves an answer that addresses the real issues. The simple answer is a resounding YES!

Please note: A version of this blog appeared on InformationWeek.com in the Cloud Section as an Intel sponsored post.

 

 

 

 

Before we jump into discussing cloud security frameworks, I’d like to thank all who responded to my first blog on InformationWeek.com through Twitter or LinkedIn. It’s rewarding to know that you found my initial blog on cloud security frameworks worthy of comment. I hope you continue to find my ideas interesting.

 

Now let’s consider today’s topic. While attending the University of Southern California, I was introduced to the concept of systems engineering and management. The premise of this discipline is disarmingly simple. First, the boundaries of any system are defined—sometimes erroneously—by the collective perspective of those participating in the effort. Second, the more complex the effort, the greater the interactions and the more difficult the solution. Finally, if you try to focus on a single technology or business component of that system to the exclusion of others, the success and effectiveness of the effort will likely suffer.

 

In theory, this approach makes sense. But from a more realistic perspective, businesses, technologists, and technology vendors often decide to focus on a single element of a solution and—perhaps intentionally—ignore or overlook proposing solutions from an end-to-end perspective.

 

I wrote about the potential impact of this approach in a blog titled Cloud Lessons and LeMans. The key takeaway was that to build a workable cloud solution framework, you must understand and react to considerations larger than IT and the data center. In many respects, cloud security requires exactly the same considerations.

 

Organizational Behavior

 

A typical IT organization has a stratification of skills, responsibilities, and associated budgets. These are generally structured along platform, operations, and increasingly, lines of business.

 

Stratification is an inherent byproduct of organizational dynamics and how the success of each group is measured (and, in turn, compensated). In this environment, each group becomes detached from the needs of other groups and tends to define trust and risk based on its own needs.

 

The cloud is a community of players made up of many diverse groups.  These can include cloud service providers, telco service providers, and perhaps thousands of end users running any number of platforms. If you look at it this way, you begin to understand that the business problems associated with cloud security are significantly harder to resolve than the technical challenges.

 

Are Security Breaches Linear?

 

So let’s say security breaches are linear in nature (subject to discussion). How does a typical organization defend itself? In a blog written by Billy Cox that discusses security air gaps to separate systems, one might envision this defense as a string of very strong fortifications, erected around your platforms or lines of business, which are purpose-built to keep the bad guys out. I like to call this approach the Fort Knox Syndrome. (While I wish I could claim this term as my own, that honor goes to Ed Gerck, PhD, in a paper titled End-To-End IT Security that was originally published in 2002 and later republished in 2009 by Network Middleware Applications (NMA), Inc.)

 

Otherwise known as the United States Bullion Depository, Fort Knox is a fortified vault in Kentucky that can hold 4,577 metric tons (147.2 million troy oz.) of gold bullion. As you might imagine, security in and around the building and its grounds is impressive.

 

Given the stratification of skills, responsibilities, and budgets described earlier, it shouldn’t come as a great surprise that for most organizations, security means building the equivalent of a Fort Knox-type fortification around their platforms and, by default, their application portfolio.

 

Figure 1 shows how this might look at a platform level.

 


Figure 1. Typical Enterprise Security Platform

 

 

Although the figure is a bit busy, it shows how the Fort Knox Syndrome works in many enterprises today. Each component is protected by its own firewall (represented by the red line surrounding each of the blue ovals). Within each component of the framework, nobody is really concerned about how their firewall impacts any other component of the system. This reflects some of the group-based detachment I mentioned earlier. Each component of the model demands some level of security compliance and ultimately has the right to determine who will—or will not—play within their domain.

 

The small cylinders in the figure—which represent identity, policy, and compliance—are the enforcers. Think of the identity cylinder as a simple device authentication capability. The policy cylinder represents a set of rules defining who can have access, the conditions, and under what criteria a device or its user is granted access. The compliance cylinder enforces policies such as maintenance of patch levels, firewall uptime, anti-virus definitions, and configuration vulnerability checks throughout the infrastructure. In a centralized IT shop today, it’s likely the data center component of this framework drives compliance of the associated elements.

 

But even with this simple model, problems are plentiful. When was the last time your organization experienced some type of security glitch because one component was updated and perhaps not fully tested against the umbrella security framework? I think it’s safe to conclude that the more federated your framework becomes (via a cloud ecosystem), the more problems the Fort Knox model generates.

 

Please join me as I explore the topic of cloud security across upcoming blogs. For now, and reserving the right to add or modify these topics as we move forward, here are the areas I’ll address in the coming months:

 

  1. Current state security
  2. Security as a factor of cost
  3. Business issues surrounding security
  4. Evaluating new-world security model investments
  5. Security, data architecture, and big data
  6. Security in Depth (E2E Frameworks)

 

I’m interested in your feedback on today’s blog in general and, specifically, how your enterprise is approaching E2E security and E2E cloud security. Do you consider the two topics as separate but equal or as one and the same discussion?

Download Now

 

South Africa’s RTT provides logistics services to clients in industries that demand highly specialized supply-chain solutions, including pharmaceuticals and consumer packaged goods. Every day, the company delivers more than 70,000 shipments—over one million kilograms of freight.

 

With new facilities and expanding branches, RTT has a growing customer base that demands the most advanced technology.  It developed a proprietary logistics system to support its mission-critical requirements, based on HP ProLiant* DL980 G7 servers with the Intel® Xeon® processor E7 family. The new system has cut RTT’s hardware costs by 25% and improved performance.

 

“We’re able to grow quickly into new regions because we have the underlying technology infrastructure to support this growth,” says Hemal Kalianji, RTT's CIO. “We are constantly on a drive to improve our systems to be leading edge and maintain our competitive position.”

 

For all the details, download our new RTT business success story. As always, you can find many more like this on the Intel.com Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

 

*Other names and brands may be claimed as the property of others.

The Intel® Xeon® processor E5 family makes IT more efficient, productive, and secure for enterprises of all types and sizes. Download these new real-life business success stories to see how it can help you meet your toughest IT challenges:

You can find more stories like these here. As always, there are many more business success stories on the Intel.com Business Success Stories for IT Managers page and the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

Download

 

To learn more about today’s most important security strategies, download “Better Security Drives Innovation,” a new white paper that explores:

  • The high-level changes enterprises and their security officers face.
  • Key considerations including people, devices, and data rating.
  • Scenarios following the information lifecycle to implement security policies in the organization.
  • Technologies to better secure the information technology system.
  • Changes in the global environment.
  • Tips, tricks, recommendations, techniques, and useful technologies.
  • How we can move from building firewalls to instilling security behaviors into each employee.

 

Download it here.  And to learn about more enterprise IT solutions, visit the Intel.com IT Center.

Download Now


The Spanish region of Castilla-La Mancha was eager to enhance the services it offered to local citizens, even with a tightening budget. This meant carefully planning any new resource investments for maximum return. Although the region's population is widely spread out, 95 percent have access to broadband Internet, so the regional government decided to focus on developing its online capabilities to both enhance the quality of service and reduce operating costs.


The solution was a shared cloud platform built on a virtual computing environment (VCE) from Intel, Cisco, and others. The consolidated architecture included 16 Cisco B200 M2* blade servers powered by two Intel® Xeon® processors 5650, each supporting 70 virtual machines. The virtualization-friendly features of Intel® technology optimized the performance of the new system in real time.


“Having industry experts from both Intel and Cisco on hand to share their expertise and consultancy was tremendously helpful,” said Pedro-Jesus Rodriguez Gonzalez, head of IT and Internet for Castilla-La Mancha.


For all the details, download our new Castilla-La Mancha business success story. As always, you can find many more like this on the Intel.com Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

 

*Other names and brands may be claimed as the property of others.

Download Now


IT infrastructure plays a key role in an enterprise built on e-commerce, and the gaming industry is no exception. The management team at Wayi International Digital Entertainment Co. Ltd., a leading game company in Taiwan, believes that powerful, efficient servers are essential to deliver quality IT services. Wayi deploys servers based only on Intel® Xeon® processors, including the Intel Xeon processor E7 family, with features like Machine Check Architecture Recovery (MCA Recovery), which has a proven high level of reliability that results in fewer malfunctions. This reliability is imperative for key players in an industry where the competition is fierce.


“A huge number of manufacturers work with Intel,” said Xie An, Wayi’s CIO. “When we were looking for dual-core CPUs [for two-way servers], we had the luxury of choosing from a wide array of brands and models. That provided us a lot of flexibility when we were building our system.”


To learn more, download the new Wayi business success story. As always, you can find many more like this one on the Intel.com Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

Please note: This blog originally appeared on Data Center POST.

 

Power and thermal management in the data center are likely to be among the top five priorities in 2012 for data center managers. More detailed awareness and control of data center energy resources can ultimately help to contain energy costs, which are estimated to comprise approximately 70 percent of data center operational costs, according to the Forrester Research Report: “Updated Q3 2011: Power and Cooling Heat Up the Data Center.”

 

Without proper power and thermal management, overprovisioning of power/cooling resources in data centers is common, and has led to rising costs and underutilized space and equipment. According to a McKinsey study, $24.7 billion is wasted each year on energy and cooling for unused servers. It is estimated that in data centers, as much as 30 percent of the servers are “dead”—using less than 15 percent of their compute capacity, but consuming 70 percent of their rated energy capacity.

 

A nascent technology category, Data Center Infrastructure Management (DCIM), addresses the power/thermal usage information needs of facilities managers who are looking to operate their data centers more efficiently and cost effectively. Highlighting the growing demand in a December 2011 report, the industry analyst firm 451 Research estimates that the DCIM market will grow by a factor of five between 2011 and 2015.

 

DCIM tools, when fed by real-time power and thermal consumption data, apply to many use cases in the data center, including the following (the third of which is sketched after the list):

 

  • Measurement of energy usage by device;
  • Capacity planning;
  • Identification of dead and under-utilized servers;
  • Identification of power/thermal failures;
  • Improvement of thermal profiles; and
  • Power continuity during brownouts/outages.
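
To make the dead-server use case concrete, here is a minimal sketch of the kind of query a DCIM analytics layer might run. The schema is hypothetical (a power_samples table with one row per server per sample interval); the 15 percent threshold comes from the “dead” server definition above:

-- Flag servers averaging under 15% CPU utilization over the last 30 days,
-- along with how much of their rated power they still draw.
SELECT server_id,
       AVG(cpu_utilization_pct)                  AS avg_cpu_pct,
       AVG(power_draw_watts / rated_watts) * 100 AS avg_power_pct
FROM   power_samples
WHERE  sample_time > SYSDATE - 30
GROUP  BY server_id
HAVING AVG(cpu_utilization_pct) < 15;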

 

With its many use cases, DCIM is clearly a priority for today’s data centers. There are challenges to achieving real-time power and thermal management, however. Until recently, much of the power and cooling data available was not real-time data, but modeled or estimated using face plate or manufacturers’ estimations of peak power usage. This estimated data has been found to deviate from actual usage by as much as 40 percent. Weather forecasting is probably the only field that can deviate this far from actual conditions and still be considered successful!

 

A second challenge has been the lack of standards governing power data and the resulting difficulty in aggregating data from multiple, proprietary systems. Traditionally, data center managers have collected the data and manually aggregated it using spreadsheets or home-grown aggregation systems. More recently, some DCIM vendors have begun offering energy/thermal management platforms and analytics that are cross-stack, and therefore simplify data collection and analytics.

 

At last count, there were as many as 80 vendors touting DCIM capabilities, potentially overwhelming IT and data center managers faced with the daunting task of sorting the facts from marketing hype. Therefore, a first step in purchase decision making is to clearly identify how many and which of the DCIM use cases have the greatest potential for increasing efficiency and adding value to your individual data center. This knowledge may then be used as a litmus test when evaluating the information available from DCIM vendors, partners, ISVs and others.


Single-vendor data centers will do best to implement the DCIM tools integrated by the preferred server manufacturer. Managers of data centers with equipment made by multiple vendors will want to look for vendor-agnostic solutions that aggregate power data across their multiple systems.

 

Data centers with large numbers of legacy systems (equipment built prior to 2007) will have different requirements than data centers with newer, ENERGY STAR® rated equipment. Data centers facing energy audits or other government-regulated energy restrictions may use DCIM tools to better grasp their actual power consumption and plan more strategically. Data centers nearing capacity may deploy DCIM to help identify under-utilized servers and thus increase their existing capacity through more efficient usage. Data centers in areas subject to energy brownouts may use DCIM to provide continuous, albeit reduced, power to ensure enterprise-critical servers are still processing as less critical ones are shut down during the outage.

 

With so many market drivers, DCIM is poised to introduce important technological advancements in 2012 and beyond, and will rightfully rank among the top priorities in the data center.

I’ve been attending Intel’s Developer Forum (IDF) in Beijing this week. At the beginning of the week, we also held our second PRC “Day in the Cloud” event. This particular post will be on the Day in the Cloud event. Look for another post shortly on IDF and the state of the cloud in China.

 

 


 

In this “Day in the Cloud” event, we had 30+ members of the press and blogger community in attendance. Reflecting the state of the China market, the press and bloggers here are quite savvy and ask good, pointed questions. The overall level of sophistication is clearly advancing year over year. With great leaders such as Taobao, Ali, Tencent, and others, that should not be a big surprise.

 

One fundamental observation is that while the press is quite savvy, they are dealing with a reader base that spans the range from newcomers to cloud computing all the way to experts. As a result, they are definitely interested in the ‘what’, but tend to dig into the ‘why’ and ‘so what’ to really understand why something is relevant. In China, we find a large number of end customers that never had traditional IT (in the US sense) and so have the chance to move directly to more efficient structures (aka cloud). These customers are ready and willing to adopt new techniques and represent a leading-edge audience for these press and bloggers.

 

We have also seen a distinct difference in the specific content from our China partners in the Intel® Cloud Builders program. Our PRC partners speak specifically to the PRC versions of the usage models and also in the context of the PRC ecosystem partners. It is great to see this diversity and innovation coming from all segments of the market.

You have 60 UNIX servers from one vendor running a consumer-facing application, the finest money could buy four years ago. They have performed adequately and the vendor has provided good support, but now the bill is due, as the vendor has been increasing the cost of support. Upgrades are expensive forklift changes with no promise of reduced costs. That’s the hard place. The rock is that the increasing cost of keeping this consumer-facing application going is leading marketing to impose a fee on the consumers who use it. Consumers are up in arms and setting up petitions on change.org. The news channels are featuring this new fee as an example of the squeeze on consumers.

 

This is the big squeeze that many enterprises find themselves in. How do I reduce the total cost of ownership (TCO) of an application while providing the new services that consumers have come to expect for free? And if you don’t do it, the competition will.

In fact, a ZDNet report on IT budget priorities in 2012 describes exactly this divide between IT budget winners and IT budget losers:

 

… these results paint a picture of a two-speed IT market in which some organizations are pushing ahead aggressively with transformative projects based on new technology or new delivery methods while others are bunkering down and looking inside for cost cutting opportunities. The danger for the latter group is they may find it increasingly hard to compete against leaner, more agile, more modern and more automated competitors.

 

You can get ahead in this game. ‘New technology and new delivery methods’ is another way of saying that these budget winners are moving to servers based on low-cost Intel Xeon processors. These processors bring the qualities you have relied on in proprietary UNIX processors into a low-cost, high-performance family.

 

You’ll have to make sacrifices. It will mean some work, and those expensive servers will have to find a new home. Maybe there is a ‘rescue’ organization in your community that will take them off your hands and find them a good new home.

The hardest part will be going from a sole-source vendor to having a wide range of vendors to choose from. Gosh, you can choose from IBM, Dell, HP, Bull, Cisco, Oracle, SGI, SuperMicro, and Lenovo, just to name a few. You also have a choice of rock-solid enterprise operating systems, from Linux variants such as Oracle Linux, Red Hat, and SUSE to non-Linux operating systems such as Windows and Solaris. Now you can be sure to get the lowest cost possible, and you are no longer locked into one vendor.


 

Now you have options but you need a plan to get there.  With this many servers hosting one application the effort can seem daunting.

Break the effort down. What are the easy parts to migrate? For instance, commercial off-the-shelf (COTS) products that have both a UNIX version and a Linux version, and that don’t involve extensive customization, are the low-hanging fruit. Databases that can be migrated using a tool from the database vendor can be next; Oracle Streams for Oracle databases and Replication Server for Sybase are examples of these tools. In this manner a lot of the application can be easily converted.

 

So now the user-facing layer has been changed, and the backend layer consisting of databases has been changed. The middleware layer can be complex: it may include logic servers with lots of custom code, or, with a big ERP package, it can be all COTS software. This middleware can be a big challenge. But by now you have experience in migration and can approach it as a veteran.

 

One word of caution here: I keep hearing stories from veterans of this IT modernization effort about things that get missed during migration. Usually missed are the undocumented or poorly documented data feeds that were added in a hurry to or from some application. You can try to avoid this by running a rehearsal migration before the production one and having the application tested as if in UAT or with a test harness.

What am I talking about here? See my white paper on migration methodologies on our website.
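
As one concrete rehearsal-stage check, you can diff object counts per schema between the old and new databases. A minimal sketch, assuming a database link back to the legacy system (legacy_unix is an illustrative name); run it in both directions to catch mismatches on either side:

-- Schemas whose object counts on the legacy system don't match the target.
SELECT owner, COUNT(*) AS objects
FROM   dba_objects@legacy_unix
GROUP  BY owner
MINUS
SELECT owner, COUNT(*) AS objects
FROM   dba_objects
GROUP  BY owner;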

 

Let me end this with a story about how the Intel Xeon processor family significantly helped a university meet its data processing requirements. The University of Colorado had to address rapidly rising compute costs, so it decided to migrate its mission-critical IT environment from legacy UNIX servers running RISC processors to Intel® Xeon® processors. The results were staggering. The data center footprint dropped to 1/20th (5 percent) of its former size. Power consumption dropped to 1/10th (10 percent) of its pre-migration level. ERP performance jumped by 400 to 600 percent. In Year 1 alone, the university saved $600,000. While your results may vary, you can read the white paper on this achievement on the Intel.com web site.

 

Do you find yourself caught in the squeeze I discuss here?  What are some of the steps you have taken?

When I last discussed network technologies, I said the launch of the Intel® Ethernet Controller X540 ushered in the age of 10 Gigabit Ethernet (10GbE) LAN on motherboard (LOM). That might sound a bit grandiose, but to networking and IT folks who have been anticipating 10GbE LOM for several years, this is an important milestone.

 

LOM integration is one of the keys to bringing a new generation of Ethernet to the masses, because it means customers no longer need to buy an add-in adapter to get a faster network connection. This, of course, leads to greater and accelerated adoption of the new technology, placing it on track to eventually overtake its predecessor. We saw this play out with Fast Ethernet (100 megabits per second) and Gigabit Ethernet (GbE), and we’ll see the same thing with 10GbE.

 

But if 10GbE LOM is so important, why did it take us so long to get here? And what do the Intel Ethernet Controller X540 and 10GBASE-T bring to the show that wasn't here before?

 

Prior to the launch of the Intel Ethernet Controller X540, 10GBASE-T solutions required two chips: a media access controller (MAC) and a physical layer controller (PHY). Adapters based on these two-chip designs were notoriously power hungry, with a single-port card consuming nearly the 25-watt maximum allowed by the PCI Express* specification. These early products were also expensive, costing around $1,000 per port. With power requirements and costs like those, no server vendor was going to include 10GBASE-T LOM. Newer generations of 10GBASE-T products retained two-chip designs and power needs that, while lower, still weren’t suitable for LOM.

 


A first-generation 10GBASE-T adapter. Note the cooling fan.

 

 

The Intel Ethernet Controller X540 is the first 10GBASE-T product to fully integrate the MAC and PHY in a single-chip package. As a result, it’s the first 10GBASE-T controller that has the proper cost, power, and size characteristics for LOM implementation. Each of its two ports draws a quarter of the power required by first-generation 10GBASE-T adapters, and its 25mm x 25mm package is cost-effective and requires minimal board real estate. Add advanced I/O virtualization, storage over Ethernet (including NFS, iSCSI, and Fibre Channel over Ethernet), and support for I/O enhancements on the new Intel® Xeon® processor E5 family, and you can see why we’re excited about this product.

 


The Intel Ethernet Controller X540


 

But that’s just part of the story. Let’s talk about 10GBASE-T, the 10GbE standard supported by the Intel Ethernet Controller X540; it’s going to play a major role in the growth of 10GbE.

 

Last year I described the various 10GbE interface standards. They all have their strong points, but limitations such as reach or cost have prevented each from achieving mainstream status. 10GBASE-T hits a sweet spot, making it a logical choice for broad 10GbE deployments:

 

  • 10GBASE-T supports the twisted-pair copper cabling and RJ-45 connectors used in most data centers today, meaning expensive “rip and replace” infrastructure upgrades aren’t necessary.
  • It’s compatible with existing GbE equipment, providing a simple upgrade path to 10GbE. You can connect 10GBASE-T-equipped servers to your current GbE network, and they’ll connect at GbE speeds. When you’re ready to upgrade to 10GbE, you can replace your GbE switch with a 10GBASE-T switch, and your servers will connect at the higher speed.
  • It supports distances of up to 100 meters, giving it the flexibility required for various data center deployment models, including top of rack, where servers connect to a switch in the same rack, and middle of row or end of row, where servers connect to switches some distance away.

 

So with 10GBASE-T LOM, are we seeing the start of something big? It sure looks that way. Crehan Research projects 10GbE adapter and LOM port shipments will rise to nearly 40 million in 2015, compared to 5.25 million in 2011.[1] And 10GBASE-T will account for over half of those 40 million ports.

 

Impressive numbers, aren’t they? We certainly think so.

 

2012 is going to be a big year for 10GbE. 10GBASE-T LOM is getting us off to a great start, and you can expect to see a new generation of 10GBASE-T switches to connect servers to the network as the ecosystem continues to grow.

 

If you’d like to learn more about advancements in 10GBASE-T product design, check out this article in EE Times (see page 38), penned by Intel architect and technical chair for The Ethernet Alliance 10GBASE-T Subcommittee, Dave Chalupsky.  Or listen to Brian Johnson discuss the latest Intel Ethernet Technologies on a recent episode of Intel Chip Chat below.

 

A Consolidated Fabric for the Data Center – Intel® Chip Chat episode 178

 

 

For the latest and greatest, follow us on Twitter: @IntelEthernet.

 

 

[1] Crehan Research Server-class Adapter & LOM/Controllers Long-range Forecast, 1/31/2012

Download Now 


As part of a program to continuously increase computing power and reduce energy consumption at its green data center in Grenoble, France, Business & Decision Group recently trialed the benefits of consolidating storage and network traffic onto a high-bandwidth 10 Gigabit Ethernet (10GbE) fabric. The reduction in equipment and cabling provided by 10GbE simplifies network management and maintenance, consumes less energy, and, from a customer perspective, enables faster cloud availability. Business & Decision Group is now consolidating its entire data center network onto Intel® Ethernet 10 Gigabit Server Adapters.


“A simplified network reduces management complexity, while less cabling and fewer network adapters reduce the number of potential points of failure, helping to minimize maintenance and associated costs,” said Gerald Dulac, Green DC project manager for Business & Decision Group.


For all the details, download our new Business & Decision Group business success story.  As always, you can find many more like this on the Intel.com Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

Download Now

 

Driven by Moore’s Law, CPU architectures have advanced rapidly over the last decade. We’ve moved from discussing server performance purely in terms of GHz to a discussion where parallelism and the number of cores and software threads have become more important to achieving optimal application performance.

 

As virtualization has become more pervasive, two frequent questions are:

  1. What’s the best way to use all available CPU resources?
  2. How can we use benchmarks to determine the optimal configuration, instead of taking the simplistic view of counting GHz and cores, particularly when comparing different architecture generations?


A new white paper looks into these challenges and also addresses re-evaluating the virtual machine (VM) footprint during migration. Download it here.

Download Now


Having offered a hosted application service to its customers for nearly 10 years, Germany-based CANCOM was excited by the potential enhancements cloud computing could offer. It recognized that with increased virtualization capabilities, its Application Hosting Platform* (AHP*) could become the AHP Private Cloud*, an entirely new proposition for its customers.


Eager to ensure its new offering was based on the most appropriate platform, CANCOM deployed HP ProLiant* DL380 G6 servers powered by the latest generation of Intel® Xeon® processors – beginning with the Intel Xeon processor 5500 series and soon upgrading to the Intel Xeon processor 5600 series when it was subsequently released.


“We have a longstanding relationship with Intel. We’re a Gold member of the Intel® Technology Provider Program and almost all of our IT environment is based on Intel® architecture,” comments Pavlovits. “So, it was a simple decision for us to make use of this technology in this solution. We knew that only Intel would be able to deliver the energy efficiency, performance, and long-term stability that we required. In addition to this, virtualization was to play a crucial role in this new platform, so a processor with the power to support the consolidation of multiple applications and heavy workloads was essential.”


For all the details, download our new CANCOM business success story. As always, you can find many more like this on the Intel.com Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

 


*Other names and brands may be claimed as the property of others.

Download Now 

 

Taiwan’s National Chi Nan University (NCNU) needed an easy way to distribute and manage software for its 6,500 staff members and students. NCNU began to roll out VMware View 4* for desktops on its virtual private cloud, deploying the VMware vSphere* virtualization platform. After running various compatibility tests and performance benchmarks, the team discovered that servers running on the Intel® Xeon® processor 5600 series delivered the best results. They then started offering cloud computing services hosted on these servers.


“With VMware View 4* cloud desktop system supported by Intel Xeon processors, all types of personal computers at Chi Nan University can now be used to run complex software titles,” explained Hong Zhengxin, NCNU’s Data and Network Center director. “We’ve also decreased maintenance costs, reduced our utility bills, and managed our software assets centrally, providing increased flexibility. This has added value for our users and helped us realize cost reductions in terms of management.”


For all the details, download our new NCNU business success story. As always, you can find many more like this on the Intel.com Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

 


*Other names and brands may be claimed as the property of others.

Download Now

 
UK-based Menzies Aviation wanted to protect its airline customers from operational disruption and raise efficiency by boosting the security and versatility of its IT infrastructure. The firm deployed virtualized Dell PowerEdge* R710 servers with Intel® Xeon® processors 5600 series, replicated across two data centers at separate locations.


Menzies was able to ensure business continuity with its mirrored, virtualized solution—with around 97 percent virtualization of operational systems and four times faster deployment. This virtual solution is about 60 percent cheaper than it would have been to purchase physical servers.


“We considered virtualization as a way to consolidate the environment and put an end to server sprawl,” explains Stephen Koller, executive vice president of IT for Menzies. “In addition, we wanted to improve flexibility with fast, cost-effective server deployment, while keeping downtime to a minimum. We also needed to implement an effective disaster recovery solution.”


For all the details, download our new Menzies Aviation business success story. As always, you can find many more like this on the Intel.com Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

 

*Other names and brands may be claimed as the property of others.

There is something wonderful when approaching the use of technology from a broader humanistic perspective, and our latest Digital Nibbles interview with Amber Case did not disappoint on this front. Amber is an anthropologist by training and a cultural wunderkind who is focusing her work on the anthropological implications of technology on society.

 

We discussed some of her main premises, including that all of the changes of the past 30 years were predicted by scientists in small circles 50 years ago; that the main shift technology represents to humanity is extending tool-based advancements from the physical world to the mental world; that a sound approach to predicting the future of technology fits squarely between utopian and dystopian viewpoints; and that where you sit on the socioeconomic “scale” will determine how well the future of technology works out for you.

 

Lots to ponder in our first conversation, and I’m hoping that additional conversations will take us further into the realm of cyborg anthropology.

 

To learn more about Amber, be sure to check out the blog of my co-host Reuven Cohen, aka @ruv.

 

Take a listen, and let me know what you think.

 

You can listen on SoundCloud, iTunes or the DigitalNibbles site.

 

To learn more about the show and keep up with who will be on next follow @DigitalNibbles on Twitter.

Please note: This blog originally appeared on Data Center Knowledge as an Industry Perspective.

 

 

As data centers have grown over the years, server power consumption has taken center stage in the IT theater. Electricity to power servers is now the biggest operational cost in the data center, and one of the biggest headaches for budget managers.

 

So how do you contain server power consumption? I suggest you begin by looking at inefficient servers—the elephant in your data center. Old and inefficient servers not only consume more power than newer servers, but they do less work. That means you’re paying more to get less.

 

At Intel, we’ve had a laser focus on this issue for many years now, and the new Intel® Xeon® processor E5 family continues this focus. It addresses the efficiency problem on two key fronts: processor performance and power management.

 

To increase server performance, the Intel architecture builds Intel® Hyper-Threading Technology into the processor. In simple terms, Hyper-Threading overlays instruction paths to double the number of threads each core can run, delivering a lot more throughput for the same amount of energy.

 

For further gains in power efficiency, the processor includes a turbo feature that allows energy to be focused where it is most needed. If a job running on one core needs more power, it can make use of the extra power headroom available on other cores to accelerate processing.

 

Other automated power management features in the new processor family include Intel Power Tuning Technology and Intel Intelligent Power Technology. Power Tuning uses on-board sensors to give you greater control over power and thermal levels across the system. Intelligent Power Technology automatically regulates power consumption.

 

With capabilities like these, the newest Intel Xeon processor product families deliver up to 70 percent more performance per watt than previous generations.[i],[ii] These gains help you flip the inefficiency ratio that comes with older servers. Rather than paying more to get less, you pay less to get more.

 

 


 


 

[i] Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

 

 

[ii] Source: Performance comparison using SPECfp*_rate_base2006 benchmark result at the same TDP. Baseline score of 271 on prior generation 2S Intel Xeon processor X5690 based on best publication to www.spec.org using Intel Compiler 12.1 as of 17 January 2012. For details, please see:  http://www.spec.org/cpu2006/results/res2012q1/cpu2006-20111219-19195.html.  New score of 466 based on Intel internal measured estimates using an Intel Canoe Pass platform with two Intel Xeon processor E5-2680, Turbo Enabled, EIST Enabled, Hyper-Threading Enabled, 64 GB RAM, Intel Compiler 12.1, THP disabled, Red Hat* Enterprise Linux Server 6.1.

In Part I of this blog post on information security, I investigated the performance of Oracle TDE with hardware acceleration using Intel AES-NI and saw significant performance gains. However, I sidestepped the question of what happens when the data is cached. As we found in Part I, Oracle does not cache the data at all, instead choosing to read from disk and decrypt every time the same query is run. Why does Oracle not cache the data after the first time the query is run?
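
(A quick recap for anyone who missed Part I: these tests use tablespace-level TDE, which is declared when the tablespace is created and requires an open wallet. A minimal sketch, with illustrative names, paths, sizes, and algorithm choice:)

-- Open the wallet holding the master key, then create an encrypted tablespace.
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_password";

CREATE TABLESPACE secure_data
  DATAFILE '/u01/oradata/orcl/secure_data01.dbf' SIZE 1G
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);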

 

Running a trace on the query and looking in the trace file helps answer the question by showing that the operation used to read the data was identified as a direct path read. This will be familiar to Oracle DBAs with parallel query experience. It means that the data is read directly into the user session's PGA (bypassing the SGA), instead of using the more familiar db file scattered read, where the data could potentially be cached but placed at the end of the LRU list and aged out more quickly if space in the buffer cache is required. Why does Oracle use a direct path read for a non-parallel query? The answer lies in Note 793845.1 from My Oracle Support, which says:

 

"There have been changes in 11g in the heuristics to choose between direct path reads or reads through buffer cache for serial table scans. In 10g, serial table scans for “large” tables used to go through cache (by default) which is not the case anymore. In 11g, this decision to read via direct path or through cache is based on the size of the table, buffer cache size and various other stats."

 

This makes a great deal of sense. Given the massive gains in platform bandwidth and latency, direct path read can be as fast as a db file scattered read and also improves scalability: there is no need to acquire a cache buffers chains latch to scan data buffered in memory to prevent that data being changed while it is in the process of being scanned. It is also beneficial in a RAC clustered environment, where other nodes may be interested in the contents of the local buffer cache. As the My Oracle Support note mentions, the decision whether or not to cache the data depends on a number of factors. One of the most important is whether the table exceeds a size threshold determined by the hidden parameter _small_table_threshold. Instead of using this parameter, however, I granted the user a higher privilege and then used event 10949, "Disable autotune direct path read for serial full table scan", to modify the default behaviour and observe its impact on clear text and encrypted data as follows:

 

SQL> alter session set events '10949 trace name context forever, level 1';

 

After doing this, re-running the same query on the clear text data still does physical reads, but takes slightly longer as the data is cached.

 

Elapsed: 00:00:42.53

 

Subsequent runs show that we have returned almost to previous performance, although not outperforming the direct path read.

 

Elapsed: 00:00:29.55

 

Now the data is cached in the buffer cache in the SGA and not read from disk.

 

Statistics
----------------------------------------------------------
...
0 physical reads
...
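
One way to confirm that the table's blocks are now resident in the buffer cache is to query V$BH directly. A minimal sketch (MY_TABLE stands in for the test table, which isn't named in this post):

-- Count buffer cache blocks belonging to the test table.
SELECT COUNT(*) AS cached_blocks
FROM   v$bh b
JOIN   dba_objects o ON o.data_object_id = b.objd
WHERE  o.object_name = 'MY_TABLE'
AND    b.status <> 'free';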

Tracing also showed that autotune direct path read was disabled and db file scattered read was being used.

 

I then tried setting the same event for the encrypted data with hardware acceleration disabled. Running the query on the first occasion took slightly longer than before as the data was read from disk, decrypted, and cached in the buffer cache.

 

Elapsed: 00:02:25.97

 

However, on subsequent runs the difference was dramatic. The data was cached in clear text in memory, and the query therefore ran considerably quicker.

 

Elapsed: 00:00:29.30

 

Similarly, with AES-NI enabled the initial read from disk and decryption took a similar length of time as before.

 

Elapsed: 00:00:45.30

 

Once the data was cached no decryption was required.

 

Elapsed: 00:00:29.49

 

To recap, the following are results when the query is cached in the SGA:

 

Clear Text Query cached = 00:00:29.55
Software only encryption cached as clear text = 00:00:29.30
AES-NI accelerated cached as clear text = 00:00:29.49 

 

In other words, the result is exactly the same, and entirely as expected from the TDE FAQ for cached data. Once the data is in the buffer cache it is in clear text and should therefore take a similar time to read irrespective of whether the tablespace is encrypted.

 

To summarize Oracle encryption performance for queries that use full table scans: once the data is cached in memory (for tablespace as opposed to column encryption), it is in clear text, and therefore hardware acceleration will not be used after the first read from disk. However, as we saw in Part I, it would be wrong to assume that just because we size the buffer cache adequately a table will necessarily be cached. Additionally, at this release of Oracle (11.2.0.2), whether the data is encrypted does not impact how full table scans are implemented. The alternative is manual intervention, using events and unsupported underscore parameters to modify Oracle's behaviour. In these simple tests I have tested only a single user, without considering the implications of scalability or clustering. If you do modify Oracle's behaviour, you will need to retest that your assumptions are correct for each and every Oracle release.

 

What we want from Oracle database encryption, as the name TDE implies, is for it to be transparent: no need to modify our practices at all just because we want the data to be encrypted. What these simple tests show is that the best way to achieve this is by using Intel Xeon processors with AES-NI for Oracle database encryption acceleration.

Please note: A version of this blog appeared on InformationWeek.com in the Cloud Section as an Intel sponsored post.

 

 

 

 

If you’ve read any of my previous blogs about the eight fundamental truths of enterprise cloud strategy, you may remember that I sometimes allude to my passion for anything with wings or wheels. Anyone who’s gone through basic flight training has had to learn how to recover from a stall. A stall is the point where, due to any number of factors, the wings of the aircraft lose lift. Every aircraft has a stall speed. In basic terms, when you stall, your aircraft becomes immediately vulnerable to the laws of gravity—in the worst way. Depending on a number of variables, it’s possible (in fact, likely) that one wing will stall before the other, sending you into a diving spin. Recovery procedures are pretty basic for a spin instigated by a stall (trust me when I tell you they’re drilled into a pilot’s subconscious). Remembering to execute all the steps simultaneously, basic stall recovery involves:

 

 

 

  1. Throttle back
  2. Nose down
  3. Opposite rudder to the spin direction
  4. Once the spin stops, nose and throttle up to regain control

 

Unfortunately, no two stalls are exactly the same, so results may vary. Further, each aircraft exhibits slightly different stall characteristics, so these basic recovery procedures may become a bit more involved depending on altitude, the nature of the spin (flat or inverted, for example), and how your aircraft responds while in the stall. In fact, some aircraft, once they enter a stall, are exceedingly difficult if not impossible to pull out of the resulting spin; you can do everything right in your recovery attempt and still end up having a very bad day. Thus, what on the surface is a relatively straightforward recovery process suddenly becomes much more complex. (This is one of the reasons you hope to have an experienced command pilot in the left seat on your flight to Sheboygan.)

 

A viable cloud security framework is similar. On the surface, it seems pretty basic and should mirror whatever security framework(s) your company has built around your legacy business systems. The problem I see, though, with this simple idea is that not many companies have approached security as an integrated, end-to-end (E2E) framework. An E2E framework begins in your data center, extends to end-user devices, and includes networks, software, staff, attitudes, management, and implementation.

 

How Do You Define Security?

 

Let’s start with something really basic. How does your company define security? When I was last in Santa Clara, as part of a discussion on security frameworks, a colleague asked me this exact question. After thinking about it, I replied that for me, security equals threat assessment as it relates to risk as a factor of cost (perhaps reflecting some of what I learned in the Air Force).

 

Shaking his head, my colleague said that in his opinion, security equals insurance. I thought about his answer and how it differed from mine. Perhaps it was a higher-level description? By default, the term “insurance” implies a measure of agreed-upon value. While I think the concept is sound, I’m not sure many companies would be able to put a hard dollar value on security framework breaches. How much is your IP worth? What’s the value of a hacked email system? Maybe the cost of a security breach is more qualitative than quantitative? I’d greatly appreciate your feedback on this point.

 

Next, I recalled another Intel colleague who defined security as a way to provide a safe and secure experience for users. What makes these definitions so interesting is that they’re all correct in their original context. Yet, given the community orientation of the cloud, you can understand why this also creates some significant challenges.

 

How can you possibly prioritize and maximize the value of enterprise security investments if you can’t agree on something as fundamental as your organization’s E2E security definition? Now you see why I opened this blog with an analogy to recovering an aircraft after it stalls. While it would be nice to believe that one approach to recovery (security) always works, factors outside your influence simply don’t allow that to happen. So, whatever your goal for recovering from a stall, you need to have a solid framework to connect whatever scenarios you might encounter, from takeoff to landing (E2E), as events unfold.   Even then, there will be instances where you are powerless to avoid the effects of gravity (security breach).

 

Please join me as I explore the topic of cloud security across upcoming blogs. For now, and reserving the right to add or modify these topics as we move forward, here are the areas I’ll address in the coming months:

 

  1. Current state security
  2. Security as a factor of cost
  3. Business issues surrounding security
  4. Evaluating new-world security model investments
  5. Security, data architecture, and big data
  6. Security in Depth (E2E Frameworks)

 

I’m interested in your feedback on today’s blog in general and, specifically, how your enterprise is approaching E2E security and E2E cloud security.  Do you consider the two topics as separate but equal or as one and the same discussion?   Please contact me through Twitter.
