Many of us frequently use catalogs to learn about, compare, and purchase items.  What would happen if we extended the catalog concept to cloud services as well?  Can and should something as abstract as an internet-based service be cataloged and put into normalized terms so that comparisons and purchases are easier?

 

According to the Open Data Center Alliance (a unified voice of more than 280 global IT leaders), the answer is yes.  They've outlined the need for a self-describing services catalog that enables cloud subscribers to parse through the features, capabilities, and pricing across multiple cloud offerings, increasing transparency and easing deployment of cloud services.
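To make the idea concrete, here is a purely illustrative sketch of what a normalized, self-describing catalog entry might look like; the fields, providers, and prices below are invented for this example and are not the ODCA's actual schema:

```python
# Hypothetical, simplified catalog entry -- NOT the ODCA's actual schema.
# Normalizing offerings into common fields is what makes side-by-side comparison easy.
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    provider: str
    service: str
    vcpus: int
    memory_gb: int
    availability_sla: float      # e.g. 0.999 = "three nines"
    price_per_hour_usd: float

offers = [
    CatalogEntry("Provider A", "Standard VM", 2, 4, 0.999, 0.12),
    CatalogEntry("Provider B", "Standard VM", 2, 4, 0.9995, 0.15),
]

# Sort like-for-like offerings by price.
for o in sorted(offers, key=lambda o: o.price_per_hour_usd):
    print(f"{o.provider}: {o.vcpus} vCPU / {o.memory_gb} GB, "
          f"SLA {o.availability_sla:.2%}, ${o.price_per_hour_usd}/hr")
```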

 

You may ask, won't this generalize and/or commoditize cloud services?  I personally don't think so; in fact, I think it will do the opposite.  If there is an easy way to compare common features and capabilities across cloud providers, then there is also an easy way to highlight innovation and differentiation among them.

 

To learn more about how a services catalog can demystify the cloud, check out my latest blog on datacenterknowledge.com and download the Open Data Center Alliance's Service Catalog usage model to read the full requirements.

 

Contact and follow me on Twitter for the latest in cloud industry news, Intel's cloud strategy, and my thoughts on the industry as we move toward the cloud: @RaejeanneS.

 

I look forward to your feedback.

Start your week right with the Data Center Download here in the Server Room!  This post wraps up everything going on with the Server Room, the Cloud Builder Forum, and our Data Center Experts around the web.  This is your chance to catch up on all of the blogs, podcasts, webcasts, and interesting items shared via Twitter from the previous week.

 

Here’s our wrap-up of the weeks of  June 20th – June 28th:

 

 

In  the blogs:

 

Bruno Domingues explained Database virtualization – latency in the cloud

 

Pauline Nist reviewed HP Discover: Itanium to tablets with everything in between!

 

Winston Saunders shared the Open Compute Project: Facebook’s idea for the future data center, and A Top-Down Look at Server Utilization Effectiveness

 

Robert Deutsche retook the High Ground in the Cloud Discussion

 

Justin Van Buren gave us 5 Questions to Ask Intel at GigaOm Structure (June 22-23, 2011)

 

Raejeanne Skillern described Unifying the voice of IT as we head into the cloud

 

Brian Yoshinaka gave us a 10 Gigabit Ethernet Update: Another Serving of Alphabet Soup

 

Lawrence Chiu told us the one mission critical RAS feature you should look for

 

John Hengeveld had a guest feature on Inside HPC and shared A Look Back at ISC’11

 

Broadcast across the web:

 

Last week was a big week for supercomputing and HPC on Intel Chip Chat.  It may have had something to do with ISC 2011. ;-)  Last week, you got twice the dose, starting with FZ Juelich’s Norbert Eicher, who elaborated on the efficiency and portability that the Intel MIC architecture delivers.  Mr. Eicher also discussed the collaborative work FZ Juelich is doing with Intel and other HPC and cluster computing organizations throughout Europe.

 

In the second dose, Allyson Klein chats with Michael Showerman, Technical Program Manager for the National Center for Supercomputing Applications.  Mr. Showerman explains the Science of Computing & Intel MIC.  They continue to chat about the science of technical computing and the scalability and portability of the Intel MIC architecture.

 

For your Conversations in the Cloud, last week Dave Nicholson of EMC took some time to discuss Solutions for the Cloud with EMC and the Open Data Center Alliance (ODCA).  Mr. Nicholson is the Director of the newly created EMC Solutions Group. In this podcast, he talks further about the journey to the cloud and what it means for your IT infrastructure.

 

In a webcast with Gproxy and Intel, Gproxy’s Rafael Tucat and Leo Tuzzo and Intel’s Priya Abani discuss a Balanced Compute Model for the Cloud.  They review the Gproxy Intel® Cloud Builders reference architecture, focusing on the Client Device Score (CLIDES) by Gproxy, which ranks a client device’s capabilities (CPU, CPU load, connection type, bandwidth, and screen resolution) to help optimize the user experience based on those capabilities.
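As a rough illustration of how such a score could work (the weights, thresholds, and scale below are invented for this sketch and are not Gproxy's actual CLIDES formula):

```python
# Hypothetical weighted device score -- illustrative only, not the real CLIDES algorithm.
def device_score(cpu_ghz, cpu_load, connection, bandwidth_mbps, resolution_px):
    # Normalize each attribute to a 0..1 range; thresholds are arbitrary assumptions.
    conn_quality = {"wired": 1.0, "wifi": 0.7, "3g": 0.4}.get(connection, 0.5)
    score = (0.30 * min(cpu_ghz / 3.0, 1.0) +
             0.20 * (1.0 - cpu_load) +                     # lower current load scores higher
             0.15 * conn_quality +
             0.20 * min(bandwidth_mbps / 50.0, 1.0) +
             0.15 * min(resolution_px / (1920 * 1080), 1.0))
    return round(score * 100)                              # 0..100, higher = more capable client

print(device_score(cpu_ghz=2.4, cpu_load=0.35, connection="wifi",
                   bandwidth_mbps=20, resolution_px=1366 * 768))
```

A cloud service could then use a score like this to decide how much work to push down to the client device versus keep on the server.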

 

On  YouTube:


Why Greater Performance per-Core Matters for  Server Virtualization         

 

Benefits and Challenges of IT Consumerization

 

HPC @ Intel 2011

 

In  the social stream:


@RaejeanneS shares that Red Hat Shows Cloud  Leadership w Expansion of Comprehensive Cloud Comp Solutions Portfolio http://bit.ly/jpBqhH

 

@WinstonOnEnergy shared Intel Cloud on a Chip http://t.co/LTWAY7O

 

@WinstonOnEnergy shared a major datacenter announcement from northern Virginia http://t.co/9vt1FTd

 

@WinstonOnEnergy said: Leading the way to Exascale - Kirk Skaugen at ISC http://t.co/7fROiRH

In my previous blog I described the methodology for running a Proof of Concept when migrating a production database that underlies a mission-critical application.  This blog covers the analysis of the results of the Proof of Concept.  In other words, you’ve done the PoC; now what?
Flickr Image: Pat Guiney

 

 

 

The Proof of Concept has three primary objectives:

  • To ‘prove’ that the application can run in the new environment
  • To see how the application performs in the new environment
  • To specify the steps necessary to conduct the actual migration.

 

The proof that the application can run in the new environment seems pretty straightforward.  Sure, if it runs after being ported, it runs; check that one off the list.  However, when we start the PoC, there’s no guarantee that it actually will run.  We use the UAT procedures - be they a regression test harness or a select team of users - to bang on the application and ensure that we have ported and hooked up everything.

 

As mentioned before, these tests are run frequently, usually after ‘flashing back’ the application to the initial starting point.  Once this is done, all of the components and setup steps need to be carefully documented; these steps will be repeated in the subsequent phases of the entire migration.

 

The steps taken for the proof of concept can be seen as covering a continuum.  Some steps, such as porting the data, will need to be repeated in each phase.  Others, like rewriting shell scripts or editing the C++ code, will not: once those ports are done to the customer’s satisfaction, they can be set aside and reused in each subsequent phase.  Still other steps fall in between, such as setting the environment parameters the application needs.  For each subsequent phase we need to reset those parameters, but we don’t need to work out how to set them again; documenting the settings is all that needs to be done.
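One lightweight way to do that documentation, sketched below under the assumption of a Linux host and a hand-picked list of parameters that matter to your application, is to snapshot the values into a file that travels with the PoC notes:

```python
# Sketch: record the kernel parameters tuned for the PoC so later phases can
# re-apply them without rediscovering the values. The parameter list is an example.
import json
import pathlib

PARAMS = ["kernel.shmmax", "kernel.shmall", "fs.file-max", "vm.swappiness"]

snapshot = {}
for name in PARAMS:
    path = pathlib.Path("/proc/sys") / name.replace(".", "/")
    if path.exists():
        snapshot[name] = path.read_text().strip()

pathlib.Path("poc_kernel_settings.json").write_text(json.dumps(snapshot, indent=2))
print(json.dumps(snapshot, indent=2))
```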

 

So now that we know the application will run, we look at how it runs in the new environment.  This is where environment tuning parameters are adjusted, and where code may need to be rewritten.

 

Some proofs of concept start off with the application performing significantly worse on the new target than in the original environment.  An application can perform differently when hosted on a different platform; for instance, an application that was CPU bound on the old platform could start having I/O queues or memory usage issues.  If you’ve been measuring application performance by looking at queues on the original host, you’ll likely miss this.  The only queues you really should be looking at are the CPU, I/O, and network queues on the PoC hardware, which simulates the eventual target hardware.

 

I once got an application running that had been ported from a mainframe to a relational database.  Performance was terrible, even though on paper the new environment would significantly outperform the old host.  We looked at tuning parameters, and we were optimal.  We looked at performance reports from the operating system, and it was OK.  We looked at the performance reports from the database and, sure enough, the CPUs were barely running but the I/O to disk was pegged.  The customer was looking to us and the application vendor for a fix.  I looked at the code and found that the application vendor had written VSAM-style data access methods in SQL!  In other words, instead of using relational set theory to winnow the data, his application read each row in sequence through the entire table.  The PoC stopped right there, and the customer kicked the application vendor out.
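To see why that access pattern hurts, here is a minimal sketch of the difference, using SQLite purely as a self-contained stand-in for the real database:

```python
# Contrast 'VSAM-style' row-at-a-time access with a set-based SQL query.
# SQLite is used here only as a convenient, self-contained stand-in.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, total REAL)")
conn.executemany("INSERT INTO orders (region, total) VALUES (?, ?)",
                 [("WEST" if i % 2 else "EAST", float(i)) for i in range(100000)])

# Row-at-a-time: drag every row back to the application and filter it there.
slow_sum = 0.0
for _id, region, total in conn.execute("SELECT id, region, total FROM orders"):
    if region == "WEST":
        slow_sum += total

# Set-based: let the database winnow the data in a single statement.
fast_sum = conn.execute("SELECT SUM(total) FROM orders WHERE region = 'WEST'").fetchone()[0]

assert slow_sum == fast_sum  # same answer, wildly different amount of data movement
```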

 

As in the story, we need to observe carefully how the application is performing.  We can use the output from the testing harness to tell us how long each tested task took and compare that to the SLAs for the application.  We can look at the tools built into the operating systems, like Perfmon on Windows or the ‘stat’ tools (vmstat, iostat, etc.) on Linux.  (Let’s not neglect the application as the primary source of performance problems, but we’ll discuss that later.)

 

The data from the tools needs to be analyzed.  The performance times measured by the test harness are the most obvious, since it usually reports response time.  If a task took more than a second to complete but requires sub-second response time, we know there is a problem to look into.  Is the application just running poorly?  Is the hardware inadequate?  Does the operating system or underlying software need tuning?  Perhaps the application architecture needs adjusting.  Our performance monitoring tools will provide clues.  We’re looking for bottlenecks, and bottlenecks are found where data travels: in I/O or, in the case of a database, in logging the database changes.  Look for queues in network and storage I/O, and for CPU queues at the operating system level.  At the database level, we are looking at where ‘waits’ are occurring; for instance, in Oracle we look first at waits in the single-threaded operation of writing to the log files, as well as anything affecting the logging process.
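As one small example of turning those clues into numbers, the sketch below samples vmstat on a Linux host and flags anything that looks like a CPU or I/O queue; the thresholds are illustrative assumptions, not universal rules:

```python
# Sketch: sample vmstat and flag possible CPU or I/O bottlenecks (Linux only).
import os
import subprocess

def vmstat_samples(interval=1, count=5):
    out = subprocess.run(["vmstat", str(interval), str(count)],
                         capture_output=True, text=True, check=True).stdout
    lines = out.splitlines()
    header = lines[1].split()                      # column names: r, b, ..., us, sy, id, wa, ...
    r_col, wa_col = header.index("r"), header.index("wa")
    return [(int(row[r_col]), int(row[wa_col]))
            for row in (line.split() for line in lines[2:])]

samples = vmstat_samples()
avg_runq = sum(r for r, _ in samples) / len(samples)
avg_iowait = sum(w for _, w in samples) / len(samples)

if avg_runq > os.cpu_count():
    print(f"CPU queueing: average run queue {avg_runq:.1f} exceeds {os.cpu_count()} cores")
if avg_iowait > 20:
    print(f"Possible I/O bottleneck: average %iowait is {avg_iowait:.0f}")
```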

 

Now we can more precisely determine the capacity requirements of the target platform and project with greater confidence the characteristics of the platform we will be porting to.  For an application requiring bare metal this is critical; for an application that can be hosted in a virtual machine, this process defines the initial setup requirements.  Remember, the new platform will need to handle the projected growth as well as the unexpected black swan.
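A back-of-the-envelope version of that projection might look like the sketch below; the growth rate, horizon, and headroom factor are placeholders to replace with your own numbers:

```python
# Toy capacity projection: PoC peak, projected growth, plus headroom for the unexpected.
# All input values are illustrative placeholders.
poc_peak_tps = 1200            # peak transactions/sec measured during the PoC
annual_growth = 0.25           # projected yearly growth
planning_horizon_years = 3
black_swan_headroom = 1.5      # extra margin for the spike nobody predicted

required_tps = (poc_peak_tps * (1 + annual_growth) ** planning_horizon_years
                * black_swan_headroom)
print(f"Size the target platform for roughly {required_tps:,.0f} transactions/sec")
```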

 

Now the PoC is over.  The results have been converted to foils to be presented to management for planning the conversion.  The documentation we wrote is now the recipe for the next step, the migration rehearsal.

Nowadays, cloud computing is the hottest topic in IT. There is a massive amount of discussion about how to build it, how to consume it, and how cloud computing will change our lives. The potential is very real. However, there are distinct challenges and physical limitations to deal with in order to transform it into a mature, widely adopted technology.

 

Economies of scale are the basis of the cloud computing business model: how can one use computational capability efficiently to dramatically reduce cost? Accommodating the peaks and valleys of different workloads in a multi-tenant, multi-purpose platform is only one aspect. Killing latencies and bottlenecks is probably the hardest part of cloud infrastructure. A well-balanced environment not only pays back in customer experience, but also increases profitability by making better use of the available computational resources.

 

A big concern in a highly transactional environment is the database. The I/O requirements are not only about throughput (IOPS) but also about latency. Even in a non-virtualized environment, there are tough decisions to be made. A single storage array without high availability, or synchronous replication between two arrays, either in the same site or geographically dispersed? Fibre Channel or 10GbE to connect to storage? Page size, link aggregation, and so on (plus virtualization in this equation) make these decisions even harder.

 

In fact, a virtual environment cannot provide the same level of performance as bare metal. But how big is the impact for a high-volume OLTP system that requires high IOPS?

 

In several tests conducted with multiple VMMs available on the market, some are penalized by virtual CPU count, whereas others struggle with virtual disk technology. Yet all share the same constraint: high waits on read latches and on I/O throughput to redo.  The following tables present these differences, comparing the virtualized and native databases on the same hardware and configuration:

 

[Table: wait-event comparison between virtualized and native databases]

 

In this case, the average wait for log file sync jumped from 1 ms on the native platform to 6 ms with virtualization!

 

Making transactions longer means that you may experience higher lock rates in the database, and also that your application server and web server will hold threads for longer periods. In this scenario, for example, if you use the default values of most application servers and web servers on the market, you will probably end up showing your users the “Server is too busy” message (a.k.a. error 500).

 

[Screenshot: “Server is too busy” error page]

 

For instance, your web server will experience lower CPU consumption. The default configuration in most web servers allocates 25 threads per processor (you can change this value). When a user requests a transaction that requires database access, the time spent on the round trip to the database means that the user’s worker thread sits in a wait state, unable to take another request until the current transaction completes. In the end, with a large number of users and low CPU consumption, users still end up being denied service.
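To put rough numbers on that, here is a Little's Law sketch using the 25-threads-per-processor default mentioned above and the 1 ms versus 6 ms commit latencies from the comparison; the request rate, application time, and round-trip count are invented for illustration:

```python
# Rough Little's Law estimate of worker-thread demand as database latency grows.
# Inputs are illustrative; substitute your own measured rates and service times.
def busy_threads(requests_per_sec, app_time_s, db_round_trips, db_latency_s):
    time_per_request = app_time_s + db_round_trips * db_latency_s
    return requests_per_sec * time_per_request          # Little's Law: L = lambda * W

pool_size = 4 * 25        # 4 processors x 25 worker threads (a common default)
rate = 800                # incoming requests per second

for label, db_latency in [("native (~1 ms commit)", 0.001),
                          ("virtualized (~6 ms commit)", 0.006)]:
    need = busy_threads(rate, app_time_s=0.010, db_round_trips=20, db_latency_s=db_latency)
    status = "OK" if need <= pool_size else "pool exhausted -> 'Server is too busy'"
    print(f"{label}: ~{need:.0f} busy threads of {pool_size} ({status})")
```

With these made-up numbers the same request rate that the native database handles comfortably exhausts the thread pool once each database round trip takes six times longer.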

 

Usually, for a highly transactional OLTP environment, database virtualization is not a good idea from a computational resource utilization standpoint. However, if you decide to use it for any reason, spend some time tuning the application end to end in order to optimize these “latencies”.

 

 

Best Regards!

Now that I’ve recovered from HP Discover 2011, it’s time to  talk about the good, the bad and the ugly.

 

First, HP (along with their key sponsors Intel, SAP, and Microsoft) did an excellent job of putting on a great show that pulled together all of HP in one conference for the first time ever.  It was a class act, and it gave them the opportunity to show off new products, from the high-end 32-socket Itanium Integrity to the entry-level webOS tablet to be launched in July.  HP set up a great Blogger Lounge on the main show floor, providing a convenient central place for connecting (along with all the comforts of home).  I had a chance to do a quick video interview with Thomas Jones @niketown588.

 

Kirk Skaugen delivered a great Intel keynote on Monday evening. He strongly confirmed Intel's commitment to Itanium, along with highlighting Intel’s Ultrabook effort to drive a next generation of lightweight, ultra-portable laptops with the attributes of a tablet and the performance of a PC.

 

As part of his keynote, Martin Fink further reinforced the Itanium  commitment from HP when he demonstrated a running Poulson prototype system (on  target for delivery in 2012), along with announcing the availability of Virtual  Partitions on the entire Integrity portfolio.  Of course, the big news in Martin’s keynote was the official HP letter to  Oracle threatening legal action regarding Oracle’s decision to end  Itanium support, followed by the more recent decision to file the lawsuit.

 

Finally, in order to avoid any post show letdown, HP waited  until the Monday after Discover to announce the latest round of corporate  reshuffling, which implemented many of the changes that were first rumored  around the time of the annual HP shareholder  meeting in March.

 

Last but not least, the break snacks at HP Discover were awesome, with  more healthy choices than I’ve ever seen in Las Vegas or anywhere!

I had the privilege of attending Facebook’s Open Compute Project Summit in Palo Alto on June 17, 2011. Attendance appeared excellent; I met folks from silent-mode start-ups to Google, and from Adobe to ZT Systems. Although I didn’t count heads, Facebook showed the logos of roughly 100 companies invited to the event. Impressive.

 

The idea behind Facebook's Open Compute Project is to move traditional hardware development toward a model followed by open source software development. The OCP's key tenets are efficiency, economy, environment, and openness. The Forum began the process of building a community, united by the same vision, to develop specs and accelerate innovation.

 

An exciting element of the Open Compute Project highlighted by both Tom Furlong and Amir Michael was how Facebook’s approach has really overturned the usual paradigm on efficiency. Generally, higher efficiency is associated with higher cost. For Facebook, just the opposite is true. Done right, “Efficiency is Profitable,” as the posters on the walls of the Forum highlighted. A good proof point is the current OCP platform. Not only is the platform economical, but efficiency is designed in, with 94 percent efficient PSUs, a chassis that is pounds lighter, and a layout that cuts routine service tasks like HDD, DIMM, and PSU swaps from multiple minutes to under a minute.  So efficiency really covers the whole lifecycle, from construction to installation to operation, all the way to maintenance. And every step is cheaper.  There's more detail in the introduction video below:

 

 

 

Presentations of customer requirements from Rackspace and Goldman Sachs had many common elements. Both are addressing the issues with scale-out and looking to contain costs. Goldman Sachs highlighted that in scale-out the only real differentiation left to vendors was the quality of service and support. Serviceability was highlighted by Rackspace as a key selection criterion due to their SLA requirements.

 

I enjoyed sitting in on the Networking hardware discussion. The key problem highlighted was the mismatch of the pace of innovation in the network layer compared to what is being done in software and hardware. The comparison between the very limited capabilities today in the monitoring, control, and data layers of the network versus what can be done in Linux where features and capabilities can be stripped down and optimized for a particular use, highlighted the opportunities. A particular use case was responding to a dynamic app stack. Facebook can have a weekend hackathon and completely alter the behavior of the software requirements for the network. Rackspace highlighted that someone could swipe a credit card, fire up a bunch of machines, and completely alter the dynamics of what is running in their data center. Both need affordable flexibility to understand and customize network solutions quickly.

 

There were also some disagreements about requirements. For instance, in server spec design, what command set should a management solution support? It seems that linkage to the Open Data Center Alliance (ODCA) could foster some productive synergy; for example, the end-user-defined usage models that ODCA recently released and the infrastructure building blocks that OCP released look very complementary. In terms of hardware requirements, which feature sets get included in which reference designs might also lead either to divergence or to branches growing off a main server spec.

 

Overall I’m very excited about the opportunities presented by the Open Compute Project. It appears to be an opportunity to innovate and create new models which better meet the needs of the end users.

There’s a perfect storm coming your way. It’s fast approaching your enterprise (and your IT organization). And as it passes, life as you know it will forever change.

 

Driven by a desire to reduce costs and respond quicker to market changes, cloud computing (private, public or hybrid) will evolve over the next few years, changing the organizational and ecosystem dynamics of the enterprise, IT and data center staffing. Skill requirements and historical technical boundaries that have been static for 30 years will forever be altered.

 

Vendors are offering solutions to this challenge—many of them based on software or hardware products the vendors sell. What seems fundamentally different about the current storm is that vendors are approaching your enterprise business partners first and perhaps taking advantage of the historical skepticism between a typical IT organization and their business brethren.

 

What can you do to instill some common business and planning sense into this process and perhaps get IT and the DC in front of this encroaching change?  Good question, Bob. Unfortunately, most IT/DC organizations are not particularly well suited (in fact, most are almost defenseless) to stem this charge; a robust and enterprise-focused cloud solution takes much more than a blueprint or a template-based architecture.

 

While I do not profess to be anything today but a principal architect at Intel, my boot level business experience both inside and outside of IT gives me a unique perspective to understand both the technical and business challenges you’re facing—and those your organization will likely face as it gets pulled further into this storm.   I will be discussing these issues in a series of articles for Data Center Knowledge, a daily news and analysis website for the data center industry.

 

My current blog post identifies the components that make up the foundation for a robust and scalable cloud solutions framework. By taking an architect’s approach to the enterprise business challenge, I’m intending to offer perspectives that balance business with technology in a format that’s refreshing and not focused on product.

 

You can read more and join in the discussion at Data Center Knowledge.

 

I’ll be posting new blogs about every other week, so please be sure to come back and share your own point of view.

GigaOM's Structure is this week (June 22-23) and Intel is a headline sponsor. While at the show, we would love it if you asked Intel some questions. To get the ball rolling, here are five topics that we can address while at the show.

 

 

 

1) What is an “Open Cloud” and how does my organization take advantage of it?

 

Jason Waxman, Intel’s GM for High Density Computing in our Data Center Group, will be conducting a fireside chat on Thursday June 23rd at 2:30 pm.  To give you a preview, he will talk about newly released cloud use model specifications that top enterprise IT organizations released to the industry via the Open Data Center Alliance, and how IT organizations can take advantage of these use models for the benefit of your organization.

 

For those who are not attending the event, you can watch the webcast live at http://event.gigaom.com/structure/video/.

 

 

 

2) What is Intel’s low power Micro Server strategy?

 

In addition to talking about the open cloud,  Jason will also address Intel’s point of view on microservers and how they are ideal for workloads where many low-power, dense servers may be more efficient than fewer, more robust servers.

 

3) What is a hardware root of trust with Intel Trusted Execution Technology and how does it help with my cloud’s security, audit, and compliance issues?

 

To answer this question, you can attend the Intel workshop with Intel’s Billy Cox on Wednesday, June 22, 9:50 am.  To give you a preview of what he will talk about, listen to Terremark’s Chief Strategy Officer Marvin Wheeler, Cisco’s Director of Cloud and Virtualization Solutions Chris Hoff, Hytrust’s CTO Hemma Prafullchandra and RSA’s Senior Technology Strategist Dennis Moreau in the video below.

 

 

 

 

4) Do you have any reference architectures that help address implementation of unified networking with 10G Ethernet solutions in my data center?

 

At the same workshop referenced above, Billy will discuss the Intel Cloud Builders program and how an IT architect can use it to implement unified networking solutions (with all the associated benefits) in your datacenter. For a preview, watch the video below from NetApp on how to use one of the Intel Cloud Builders Reference Architectures to implement an Intel 10G Ethernet solution in your datacenter.

 

 

 

 

5) How did Intel’s own IT department implement a private cloud and go about virtualizing its applications enterprise-wide?

 

As part of Billy's workshop, we will also have Das Kamhout from Intel’s own IT department talk about implementing a private cloud across more than 75,000 servers. To give you a preview of his content, please view the video below.

 

 

 

We look forward to seeing you there.  To stay up to date with Intel while at the event, we will be posting updates at www.intel.com/go/cloudnews, or you can follow me on Twitter at @jlvb2006. Additionally, we will have many Intel experts at our booth at all times - please come by with your questions and say hi!

Start your week right with the Data Center Download here in the Server Room!  This post wraps up everything going on with the Server Room, the Cloud Builder Forum, and our Data Center Experts around the web.  This is your chance to catch up on all of the blogs, podcasts, webcasts, and interesting items shared via Twitter from the previous week.  This one’s a big download (nearly two weeks’ worth!) so you'd better grab some coffee ;-)

 

Here’s our wrap-up of the weeks of June 6th – June 17th:

 

 

In the blogs:


Billy Cox reviewed his post on Usage Models and Technology that appeared on Data Center Knowledge
Emily Hutson discussed Smart Investment for Small Business: Intel Xeon Processor E3 Family-Based Servers
Chelsea Janes gave us her notes on From Cloud Computing to Cocktails: Intel @ HP Discover 2011
Barton George gave us his POV from the Dell blog: On the ground at Cloud Expo
Ken Lloyd took a look at the human side of Unix/Mainframe migration
Wally Pereira explained the RISC to IA migration – Rising to the Proof of Concept
Bruno Domingues gave us a Holistic Overview of Security for Cloud Computing
Jennifer Sanati showed us the First Server Solutions with Microsoft and Intel
Megan McQueen told us about the Power Management and Cloud Security Demos at Day in the Cloud

 

Broadcast across the web:


While Intel  Conversations in the Cloud took a break, Intel Chip Chat shared a bit extra…  :-)

 

To start things off, Intel’s Raejeanne Skillern discussed the Open Data Center Usage Models.  As the development of cloud computing and data center technology advances, open standards become more necessary.  Raejeanne discusses the work Intel does with the Open Data Center Alliance to drive standards and accelerate transitions to new technologies. For more of what Raejeanne has to share, check out her Industry Perspective on Data Center Knowledge.

 

Want to know more about the latest news from the ODCA?  You can start by listening to the next episode released, featuring Adrian Kunzle, Managing Director and Global Head of Engineering and Architecture for JPMorgan Chase and Open Data Center Alliance Steering Committee representative.  In this episode, Mr. Kunzle discusses the Customer Requirements for Clouds delivered by the Open Data Center Alliance: how those customer requirements for cloud were created and the impact they are expected to have in the market. For more information and news, check in with the Open Data Center Alliance.

 

As for the Intel Chip Chat trifecta, Lenovo’s Andrew Jeffries discusses how you can Improve IT Manageability for Small Business with Lenovo servers enabled by Intel Technologies. In this episode, Andrew discusses technologies enabling the extension and automation of remote management tools to small and medium businesses with Intel  Xeon E3 technology in Lenovo servers. For more, dig into the info on Intel Xeon  E3, and the new Lenovo Servers.

 

For your viewing:


Enabling  Client-Aware Computing with Lenovo Secure Cloud Access

 

Business  & Decision - Intel® Xeon® processor 5600 series

 

Research@Intel  2011: Cloud Zone

 

Open  Data Center Alliance: Usage Model Requirements

 

Interop  2011 - Day One Interop  2011 - Day Two

 

Unified  Networking Benefits with Intel Ethernet 10 Gigabit

 

Intel  Cloud Builders: NetApp* Unified Networking & Storage Reference Architecture

 

Intel®  Cloud Builders Reference Architecture VMware vCloud™ Director Demo

 

Securing  the Cloud with Intel Trusted Execution Technology Usage Models

 

A Tour  of Intel IT's Data Center at Russia

 

SAP  HANA - A collaboration between SAP & Intel

 

 

 

Remember, to get the latest news and events, follow us on Twitter @IntelXeon and on Facebook.

On June 7th, 2011, the Open Data Center Alliance made some significant industry announcements.  In just seven months since its formation, the organization has quadrupled its membership, created eight usage models that define IT requirements for some of the most pressing challenges in building and deploying clouds, and announced working relationships with leading standards organizations and solution vendors.  I wrote a blog on what this means to the industry, cloud adoption, and Intel on Data Center Knowledge.  Please read through and let me know what opportunities and potential you see from this major industry milestone.

 

To hear more of my thoughts on this announcement, you can also listen to my podcast at: http://intel.ly/lE1rae or follow me on Twitter @RaejeanneS.

Back in March 2008, my colleague Ben Hacker wrote up a blog post that compared and contrasted the 10 Gigabit Ethernet (10GbE) interface standards. It was a geeky dive into the world of fiber and copper cabling, media access controllers, physical layers, and other esoteric minutiae. It was also a tremendously popular edition of Ben’s blog and continues to get hits today.

 

So here we are three years later. How have things shaken out since that post? Which interfaces are most widely deployed and why? Ben has left the Ethernet arena for the world of Thunderbolt™ Technology, so I’ve penned this follow-up to bring you up to date. I’ll warn you in advance, though – this is a long read.

 

Still here? Let's go.

 

In “10 Gigabit Ethernet – Alphabet Soup Never Tasted So Good!” Ben examined six 10GbE interface standards: 10GBASE-KX4, 10GBASE-SR, 10GBASE-LR, 10GBASE-LRM, 10GBASE-CX4, and 10GBASE-T. I won’t go into the nuts and bolts of each of these standards; you can read Ben’s post if you’re looking for that info. I will, however, take a look at how widely each of these standards is deployed and how they’re being used.

 

10GBASE-KX4/10GBASE-KR

These standards support low-power 10GbE connectivity over very short distances, making them ideal for blade servers, where the Ethernet controller connects to another component on the blade. Early implementations of 10GbE in blade servers used 10GBASE-KX4, but most new designs use 10GBASE-KR due to its simpler design requirements.

 

Today, most blade servers ship with 10GbE connections, typically on “mezzanine” adapter cards. Dell’Oro Group estimates that mezzanine adapters accounted for nearly a quarter of the 10GbE adapters shipped in 2010, and projects they’ll maintain a significant share of 10GbE adapter shipments in the future.

 

10GBASE-CX4

10GBASE-CX4 was deployed mostly by early adopters of 10GbE in HPC environments, but shipments today are very low. The required cables are bulky and expensive, and the rise of SFP+ Direct Attach with its compact interface, less expensive cables, and compatibility with SFP+ switches (we’ll get to this later) have left 10GBASE-CX4 an evolutionary dead end. Dell’Oro Group estimates that 10GBASE-CX4 port shipments made up less than two percent of total 10GbE shipments in 2010, which is consistent with what Intel saw for our CX4 products.

 

The SFP+ Family: 10GBASE-SR, 10GBASE-LR, 10GBASE-LRM, SFP+ Direct Attach

This is where things get more interesting. All of these standards use the SFP+ interface, which allows network administrators to choose different media for different needs. The Intel® Ethernet Server Adapter X520 family, for example, supports “pluggable” optics modules, meaning a single adapter can be configured for 10GBASE-SR or 10GBASE-LR by simply plugging the right optics module into the adapter’s SFP+ cage. That same cage also accepts SFP+ Direct Attach Twinax copper cables. This flexibility is the reason SFP+ shipments have taken off, and Dell’Oro Group and Crehan Research agree that SFP+ adapters lead 10GbE adapter shipments today.

 

10GBASE-SR

“SR” stands for “short reach,” but that might seem like a bit of a misnomer; 10GBASE-SR has a maximum reach of 300 meters using OM3 multi-mode fiber, making it capable of connecting devices across most data centers. A server equipped with 10GBASE-SR ports is usually connected to a switch in a different rack or in another part of the data center. 10GBASE-SR’s low latency and relatively low power requirements make it a good solution for latency-sensitive applications, such as high-performance compute clusters. It’s also a common backbone fabric between switches.

 

For 2011, Dell’Oro Group projects SFP+ fiber ports will be little more than a quarter of the total 10GbE adapter ports shipped. Of those ports, the vast majority (likely more than 95 percent) will be 10GBASE-SR.

 

10GBASE-LR

10GBASE-LR is 10GBASE-SR’s longer-reaching sibling. “LR” stands for “long reach” or “long range.” 10GBASE-LR uses single-mode fiber and can reach distances of up to 10km, though there have been reports of much longer distances with no data loss. 10GBASE-LR is typically used to connect switches and servers across campuses and between buildings. Given their specific uses and higher costs, it’s not surprising that shipments of 10GBASE-LR adapters are much lower than shipments of 10GBASE-SR adapters. My team tells me adapters with LR optics modules account for less than one percent of Intel’s 10GbE SFP+ adapter sales. It’s an important one percent, though, as no other 10GbE interface standard provides the same reach.

 

10GBASE-LRM

This standard specifies support for 10GbE over older multimode fiber (up to 220m), allowing IT departments to milk older cabling. I’m not aware of any server adapters that support this standard, but there may be some out there. Some switch vendors ship 10GBASE-LRM modules, but support for this standard will likely fade away before long.

 

SFP+ Direct Attach

SFP+ Direct Attach uses the same SFP+ cages as 10GBASE-SR and LR but without active optical modules to drive the signal. Instead, a passive copper Twinax cable plugs into the SFP+ housing, resulting in a low-power, short-distance, and low latency 10GbE connection. Supported distances for passive cables range from five to seven meters, which is more than enough to connect a switch to any server in the same rack. SFP+ Direct Attach also supports active copper cables, which support greater distances while sacrificing a small amount of power and latency efficiency.

 

A common deployment model for 10GbE in the data center has a "top-of-rack" switch connecting to servers in the rack using SFP+ Direct Attach cables and 10GBASE-SR ports connecting to end-of-row switches that aggregate traffic from multiple racks.

 

This model has turned out to be tremendously popular thanks to the lower costs of SFP+ Direct Attach adapters and cables. In fact, Dell'Oro estimates Direct Attach adapter shipments overtook SFP+ fiber adapter shipments in 2010 and will outsell them by more than 2.5:1 in 2011.

 

10GBASE-T

Last, let’s take a look at 10GBASE-T. This is 10GbE over the twisted-pair cabling that’s deployed widely in nearly every data center today. It uses the familiar RJ-45 connection that plugs into almost every server, desktop, and laptop today.

 

RJ-45 Cable End

RJ-45: Look familiar?

Alternate title: Finally, a picture to break up the text

 

 

In Ben’s post, he mentioned that 10GBASE-T requires more power relative to other 10GbE interfaces. Over the last few years, however, manufacturing process improvements and more efficient designs have helped reduce power needs to the point where Intel’s upcoming 10GBASE-T controller, codename Twinville, will support two 10GbE ports at less than half the power of our current dual-port 10GBASE-T adapter.

 

This lower power requirement, along with a steady decrease in costs over the past few years, means we’re now at a point where 10GBASE-T is ready for LOM integration on mainstream servers – mainstream servers that you’ll see in the second half of this year.

 

I’m planning to write about 10GBASE-T in detail next month, but in the meantime, let me give you some of its high-level benefits:

  • It’s compatible with existing Gigabit Ethernet network equipment, making migration easy. SFP+ Direct Attach is not backward-compatible with GbE switches.
  • It’s cost-effective. List price for a dual-port Intel Ethernet 10GBASE-T adapter is significantly lower than the list price for an Intel Ethernet SFP+ Direct Attach adapter. Plus, copper cabling is less expensive than fiber.
  • It’s flexible. Up to 100 meters of reach make it an ideal choice for wide deployment in the data center.

 

We at Intel believe 10GBASE-T will grow to become the dominant 10GbE interface in the future for those reasons. Crehan Research agrees, projecting that 10GBASE-T port shipments will overtake SFP+ shipments in 2013-2014.

 

If you’re interested in learning about what it takes to develop and test a 10GBASE-T controller, check out this Tom's Hardware photo tour of Intel’s 10 Gigabit “X-Lab." It's another long read, but at least there are lots of pictures.

 

In the three years that have passed since Ben’s post, a number of factors have driven folks to adopt 10GbE. More powerful processors have enabled IT to achieve greater consolidation, data center architects are looking to simplify their networks, and more powerful applications are demanding greater network bandwidth. There’s much more to the story than I can cover here, but if you are one of the many folks who read that first article and have been wondering what has happened since then, I hope you found this post useful.

 

 

Follow us on Twitter for the latest updates: @IntelEthernet

"The computer says 'No'," I was told as I was turned away from a tram ride at a nice ski resort that had recently upgraded its ticketing to an advanced, automated system.  The system could track everything from a person's season pass status to the roster of all the people taking the base to peak tram ride at any given time. I later found out that that everything was fine except that one of the resort’s systems went down, which made me think - what if the computer had said 'No' midway up on my ride?

 

Similarly, enterprises today demand reliability in their datacenters for operations ranging from customer-facing CRM for call centers to the backend databases cranking out account settlements. Reliability and availability are essential to our perception of quality, yet many of us still value power and performance most when choosing servers for our datacenters.

 

Yet most computer circuits are susceptible to what are called soft errors. These are non-permanent data or operation errors caused by environmental alpha particles, cosmic-ray radiation, or thermal neutrons. Computers work with binary signals, and these energetic particles can flip a signal from '1' to '0' or '0' to '1' in submicron circuits, resulting in errors that can sometimes be observed in calculations. While engineers strive to minimize these errors by adding checking and correction circuits, there is another important feature we should look for in mission-critical servers: error prevention.

 

An error prevented is one that never has to be detected, corrected, logged, and recovered from. A high-end server processor such as the Itanium processor 9300 series based on the EPIC architecture is conceived with error prevention as a design goal. It makes extensive use of soft-error hardened and resistant latches and registers (memory elements) that are 100 times and 80 times more resilient respectively than their non-hardened versions. In fact, over 99 percent of all latches in the system interconnect functional areas, the highways within the Itanium 9300 processor, use the soft-error resilient latches.

 

There are many RAS features to consider in a mission-critical processor, such as advanced machine check architecture (MCA), physical (electrically isolated) partitioning, and Cache Safe Technology. The following whitepaper, link here, is a good start for those interested in these advanced features. Yet it’s also important not to overlook the role error prevention plays in improving reliability and availability in silent ways. Combined with a mission-critical system design and a hardened operating system, it means companies will be much less likely to encounter a catastrophic event they cannot recover from, which equates to savings on their bottom line.

 

As a final thought: by preventing more soft errors, the best RAS feature becomes the one you seldom notice, and it means all your computers will more often correctly say 'Yes'.

 

Till next time!

Billy Cox

Usage Models and Technology

Posted by Billy Cox Jun 17, 2011

As engineers we are fascinated by technology and can ramble and gesticulate for hours on end. But, unless we are buying the drinks, our poor IT customer just gets lost. In reality, the IT customer is having to map the technology discussion onto the problems they are facing as a business - something that is really hard.

 

That's why, when the Open Data Center Alliance (ODCA) announced their Usage Model Roadmap, we (technologists) should take notice. As I wrote in an article in Data Center Knowledge, when a bunch of users get together and tell us what they want, and in a form that we can digest, that is big news.

 

The ODCA usage models are an interesting list and touch on things I hear everyday from customers. Things like VM Interoperability allowing for true interoperability across hypervisors. Or, Carbon Footprint looking at the power required for a workload and mapping it back to the specific source of power. Or, IO Control requiring specific policy based bandwidth controls on a per VM basis.

 

Seems like the shoe might be on the other foot now: we technologists now have to map these user pain points back to the technology. Maybe that's what we get paid to do anyway?

Download now 

 

To support the much-anticipated multiplayer launch of Activision’s Call of Duty: Black Ops* game, hosting provider GameServers needed to substantially expand its data center capacity, delivering outstanding performance for millions of new players while controlling power, cooling, and real estate costs. The company decided to build a cloud environment stretching across 25 data centers with new servers based on the Intel® Xeon® processor 5600 series. The new servers increased processing density by 150 percent, enabling GameServers to provide an exceptional gaming experience while keeping customer pricing low.


“It was clear that Intel Xeon processor 5600 series could deliver the best performance and greatest density of all the processors we tested,” explained Anthony Quon, chief operating officer for GameServers.


To learn more, download our new GameServers business success story.  As always, you can find this one, and many others, in the Intel.com Reference Room and IT Center.

 


*Other names and brands may be claimed as the property of others.

Times are tough and budgets are tight these days. Gas costs more than $4 a gallon (I remember when it was under a dollar!). Small business owners, like most folks, are watching their bottom line and counting their pennies. Not necessarily the best time to invest in your IT infrastructure, you might think. Well, according to the fine folks at IDC research, the nearly 8 million SMBs in the United States will spend more than $125 billion on IT in 2011 – and that’s $5 billion more than they spent in 2010.


 

So now that we know small business owners ARE spending money on their infrastructure – how can they spend it wisely? We think that upgrading storage/network/backup capacity with a new Intel Xeon processor E3 family-based server system is a smart start. Platforms based on this new generation of single socket processors are up to 5.9x faster and 6.5x more energy-efficient than a 4 year old desktop-based system. Plus, they have more I/O ports and more memory capacity than a desktop to support the storage and networking needs of a growing business – and they are validated to run on server-class operating systems and applications.


 

And don’t get me started on the value of ECC memory, which is only supported by Intel® Xeon® processors. ECC stands for Error Correcting Code memory, which detects and corrects almost all memory errors before they have a chance to corrupt data or crash your system. Haven’t we all suffered the loss of an important file – not to mention an entire hard drive? For only a slightly larger investment than the purchase of a new desktop system, you can choose a “real server” based on the Intel Xeon Processor E3 family. We’re talking a couple hundred bucks here, folks…less than 25 cents a day over the life of your system…and isn’t your valuable data worth a little extra money to keep safe?


 

Here's another short video "starring" me and Cori Driver where we put a humorous spin on the value of a "real server" for small businesses - enjoy!


Download Now

 

The South African National Biodiversity Institute (SANBI) wanted to deliver greener IT as part of a program to reduce the organization’s carbon footprint. It also planned to simplify IT management, increase performance, and improve services to clients.


SANBI chose a virtualized solution featuring Dell PowerEdge* blade servers based on the Intel® Xeon® processor 5600 series—and cut its carbon footprint by using around 76 percent fewer servers. It also reduced IT outages by around 80 percent and enabled employees to raise productivity by using more reliable systems. IT also cut management time by around 40 percent. Plus, SANBI can expand more easily with highly scalable storage.


To learn more, download our new SANBI business success story. As always, you can find this one, and many others, in the Intel.com Reference Room and IT Center.

 

 

*Other names and brands may be claimed as the property of others.

Last week, Intel, partners, and customers joined forces to educate attendees at HP Discover 2011 in Las Vegas, June 6-10.  The week was filled with dynamic events showcasing new HP products and technologies, and the show floor was packed with 154 exhibitors showing and telling why they partner with HP.

 

Monday night kicked off the week with over three hours of informative and innovative keynotes. Ten thousand anxious attendees filled the keynote hall, which featured four large screens above a wrap-around stage. The night was filled with speeches from Léo Apotheker, HP President and CEO; Kirk Skaugen, Intel VP and General Manager of the Data Center Group; Bill McDermott, SAP Co-CEO; B. Kevin Turner, Microsoft COO; and innovative guest speaker Don Tapscott, Moxie Software Chairman. For a hint of comedic uplift, comedian Jake Johannsen was also featured during the keynotes.

 

Monday night’s DISCOVER ZONE reception, sponsored by Intel, was truly an event that no one missed.  The atmosphere was set with glowing red and blue lights, cozy cloud-like chairs, blue Intel-tinis, tasty hors d'oeuvres, and people clustered around their HP notebooks discussing ideas and innovations. Throughout the evening, thousands of attendees buzzed around and networked with one another as the air filled with conversation about new technologies.  People explored exhibits from HP and partners such as Intel, SAP, Microsoft, VMware, Brocade, and Emulex, and there were over 800 breakout sessions available as well.  Social media teams were speedy with their thumbs, tweeting and blogging the messages discussed between partner companies and the pitches at the demo stations. A dark ambiance with backdrops of neon colors and dynamic shapes filled the convention center.

 

Many of the exhibits were active, but Intel’s booth was full of life. Demos included the Intel Cloud Builders eBook, the Security and Compliance in the Cloud demo paired with VMware, the HP Slate 500 tablet, Intel® Ethernet networking for ProLiant and Integrity, HP Client Automation with Intel® vPro, Intel® Xeon® and solid-state technology, and Intel’s high performance computing demo with a race car test drive.

 

People hovered over the demos, asking intuitive questions and were eager to learn more.

 

Intel’s Mary Beth Ruecker and Preston Atkinson staffed the Intel® Ethernet networking for ProLiant and Integrity demo. Mary Beth said that, with all of the booth attention, they had some good conversations: “This night we got bombarded with people.  Good level of engagement.  People know that Intel sells networking products, and lots of them use our 10GbE products. Many are happy that Intel has that offering now.”

 

Billy Cox, Intel’s Cloud Software Strategist, staffed many demos at the Intel booth, but commented specifically on the Security and Compliance in the Cloud demo, saying that the most interested industries are healthcare and financial services. “Most people want to know, ‘Don’t I trust all my servers? Has it been changed, and how do I know? With this migration, how do I use it?’ … They use it for an audit report. You want to be able to prove how things are running, and this proves it.”

Marcel Saraiva’s HP Slate 500 demo was very popular.  Marcel was able to meet and show off the Slate to fellow Brazilians like Ken Peck Da Vita and Jessian Cavalcanti, as they chatted in their native language and used the Slate as if it were their own.

 

The eventful night was packed with old co-workers reuniting, attendees learning new technologies, and social media teams meeting each other as they tweeted every moment and took notes for later blogs. The ambiance was perfect to start the week of HP Discover 2011.

barton808

On the ground at Cloud Expo

Posted by barton808 Jun 15, 2011

Last week thousands of cloud enthusiasts and professionals converged on New York City for Cloud Expo East. There was a lot going on, but I was able to conduct video interviews with several industry leaders to talk about their recent cloud computing announcements and share their thoughts on the industry as a whole.

 

 

 

  • I also caught  up with Dustin Kirkland, who manages the system integration team for  Ubuntu. He provided some insight into utilizing OpenStack, talked about their  work with Eucalyptus, and gave a little insight into Ensemble and Orchestra solutions.

 

 

  • Josh Fraser, VP of business development for RightScale, talked about the company’s  myCloud solution and their recent work with Zynga.  The Zynga engagement is particularly interesting as RightScale manages across  Zynga’s public and private clouds.

 

Besides interviewing others, my Dell colleagues and I gave 11 presentations at Cloud Expo. My talk focused on the revolutionary approach to cloud computing and how it is setting a new bar for IT efficiency.

 

One final item of note: Dell commissioned a survey of IT professionals at the show, and the results show that there continue to be divergent opinions on cloud.  For example, 47% viewed cloud as an extension of existing trends toward virtualization, while 37% felt that it was more of a significant IT shift.

 

Check my blog and come back to this Forum for  more cloud-related updates from next week’s Structure in San Francisco.

I have been on a theme as of late with posts related to legacy migration.  The majority of the focus has been the performance of Xeon vs. legacy SPARC and Power, and the stability/availability of today's Xeon solutions.  And Wally has been discussing the process of data migration. This post is going to look at the softer side of migration: the people.

 

Every IT manager I have discussed migration with has made it a central point to mention the people challenges in legacy migration.  So, putting on my OD (Organization Development) hat, I want to share a BKM (best known method) I witnessed that delivered a smooth and flawless migration.

 

I see two primary soft barriers to migrating off legacy platforms:

    1. Desire to Win - I root for "My OS/Platform"
    2. Fear to Lose - My job depends on the legacy OS

 

The most successful migrations must deal with both issues early.

 

The first issue is not unlike cheering for "my team".  People want to be right.  If the migration is perceived as 'us and them', they will typically pick 'us'.  This belief creates a cognitive bias whereby information that challenges the superiority of their 'belief' is doubted and only legacy-positive information is accepted.  It is difficult to win this discussion using just facts and data; changing a belief system takes time.

 

While the first issue is mostly perceptual, the second can be profoundly real.  Every company today has a mature staff supporting x86 platforms, many with both Windows and Linux teams.  If my value and expertise are in Solaris or AIX, the loss of those environments would make me redundant.  The challenge here is to capture the knowledge, wisdom, and experience of these senior IT professionals without sacrificing their value.  Disgruntled IT professionals seldom deliver successful projects.

 

The best migration story I have ever witnessed was really the result of one person who perceived these challenges and addressed them elegantly.  His understanding of the challenges was matched by his ability to perceive the trend early (circa 2006) and build a long-term plan that would optimize the migration journey.

 

He was in a position to alter the roles of the Unix admins, and in 2007 he had them begin managing a set of Linux servers.  He also gave them Linux desktops and made ample training and development opportunities available.  The key here was that this was not done as a convert-or-die scenario; it was presented as a skill-expansion opportunity.  These are geeky IT pros, like us, and given a new set of toys they dug in and found out how they worked.

 

By 2009, the group that would stereotypically be the harshest critic of legacy migration was actively coming to the manager to discuss the performance and cost advantages of Linux-Xeon platforms.  He had created his own advocates from the very group that could have been most resistant.

 

In 2009 he kicked off the first migration projects.  They were a resounding success.  The critics he did not anticipate were the business groups that didn't believe Xeon could be as good as their legacy platforms.  Fortunately, they trusted the admins they had worked with for years.  The pilot convinced even the most hesitant that life was better on Xeon (better performance, lower cost).  By the end of 2011, all legacy platforms will have been replaced.

 

I think what I admire most about this story is the manager's combination of vision and patience: one person's ability to read the tea leaves and put the pieces in place to make BOTH people and technology successful.

Running a Proof of Concept (PoC) may seem like a slam dunk.  You know your application, you know your users, and it should be easy to test the application on new hardware.  All I need to do is copy it to the new hardware and we’re set to go, right?  But many PoCs have failed when that was the extent of the high-level planning.

 

How do we avoid PoC failure?  How do we get it done without analysis paralysis?

 

Many RISC-to-IA methodologies call for elaborate analysis tasks prior to starting the PoC.  This is often a path to failure, as the analysis becomes the focus rather than the opportunity to reduce costs by migrating an application from an expensive RISC server to a commodity IA-based server.  All that analysis takes the excitement and fun out of the process.  People get bored without results, and interest turns elsewhere.  This is especially true today, when business conditions demand immediate response to changing conditions.

 

But we don’t want to jump straight into the PoC either.  So let’s look for the middle path, somewhere between analysis paralysis and an unexamined response to management directives.  This middle path includes a measure of planning and a careful execution methodology that assumes ‘there is more here than meets the eye.’  I haven’t done a PoC that didn’t uncover something new about the application: an aspect that the staff supporting it either didn’t know about or didn’t think was important.

 

Here’s my offering for a middle path.  For a database application migration, the following steps (at the 10,000-foot level) should be taken; a small sketch of how I capture the first step follows the list.

 

  • Document the critical characteristics of the application.
    • Storage requirements
    • CPU requirements
    • Memory requirements
    • Network requirements
    • SLA requirements
      • These are the performance targets that you have to meet.

 

  • External application linkages

 

  • Identify the data source(s) for the application that you’ll use
    • Could be a database backup
    • Source code for custom aspects of the application
    • Download and stage the IA port for the application (if applicable)

 

  • Size and acquire the target hardware and configure it in the lab
    • Connect it to the dev-test or lab subnet
    • Tune the operating system for the application

 

  • Size and acquire the appropriate storage for the application
    • Sometimes this can be high-capacity SSDs in a PCIe slot, or a SAN/NAS array of hard disks.

 

  • In parallel select the PoC team from the team that supports the application and get them a break from their ‘day jobs’.

    • This is critical.  The staff needs to have skin in the game.  Without the staff commitment you won’t get answers to your questions.

 

  • Identify your regression test harness.
    • Be sure you have the ancillary hardware to run the test harness at the levels required.  (You need servers to simulate users for the test harness)
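To keep the first step honest, here is a minimal Python sketch of a structured checklist for the application's critical characteristics.  The field names and example values are mine, not part of any formal methodology; capture whatever your application actually needs.

    from dataclasses import dataclass, field

    # A sketch of the "document the critical characteristics" step.  The goal is
    # simply to make missing information visible before hardware is ordered.
    @dataclass
    class PocProfile:
        app_name: str
        storage_gb: int
        cpu_cores: int
        memory_gb: int
        network_gbps: float
        sla_targets: dict = field(default_factory=dict)    # e.g. {"p95_response_ms": 500}
        external_links: list = field(default_factory=list)  # upstream/downstream systems
        data_sources: list = field(default_factory=list)    # backups, source code, IA ports

        def missing(self) -> list:
            """Flag anything still undocumented."""
            gaps = []
            if not self.sla_targets:
                gaps.append("SLA targets")
            if not self.data_sources:
                gaps.append("data sources")
            if not self.external_links:
                gaps.append("external application linkages")
            return gaps

    profile = PocProfile("order_db", storage_gb=2000, cpu_cores=16,
                         memory_gb=128, network_gbps=10)
    print(profile.missing())   # ['SLA targets', 'data sources', 'external application linkages']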

 

Now you’re ready to set up the application on the server.

Set up the IA port of the application on your new server.  If you’re moving a database, set it up first and verify it runs on the target server.  Once the application server is set up, back it up; it will be easier to restore this clean configuration after each test than to re-install it.

Once you’ve completed the backup, begin loading the application from the designated source server.  This can be an active QA or UAT server or a backup of the application.  With an Oracle database this can be done using an RMAN restore, a Data Pump import, or Oracle Streams.  With Sybase, you can use Replication Server.

 

For Oracle Data Pump or Export Import the following graphically presents the process:

 

[Figure: Migration methodologies, Oracle Data Pump / Export-Import process]

 

For the Oracle Streams process the following demonstrates the configuration graphically:

[Figure: Migration methodologies, Oracle Streams configuration]

 

Regardless of the tool, you want to determine which one is least likely to cause errors and can move the data within the maintenance window for the production application.  The PoC is your chance to determine the best methodology for the data transfer.  (Again, the data has to be transferred as character data across the wire because the endian format has to be corrected: RISC processors store data in big-endian byte order and IA processors store it in little-endian byte order, and currently the conversion can only be done when the data is represented as character data.)
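To make the endian point concrete, here is a minimal Python sketch.  The 32-bit value is arbitrary; the point is only that the same integer has a different byte layout on big-endian and little-endian platforms, while its character representation does not.

    import struct

    value = 0x11223344  # an arbitrary 32-bit integer

    big = struct.pack(">I", value)     # byte layout on a big-endian RISC platform
    little = struct.pack("<I", value)  # byte layout on a little-endian IA platform

    print(big.hex(), little.hex())     # 11223344 44332211

    # Reading big-endian bytes as if they were little-endian silently yields the
    # wrong number, which is why raw datafiles can't simply be copied across:
    print(hex(struct.unpack("<I", big)[0]))   # 0x44332211

    # Character data side-steps the problem: the text "287454020" is the same
    # byte sequence on both platforms.
    print(str(value).encode("ascii"))         # b'287454020'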

Now that you have restored the application, it needs to be hooked up to the web servers for testing, and to the data feeds if necessary.  Back up again at this point, or use Flashback to create a restore point.
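If you go the Flashback route on an Oracle target, a guaranteed restore point is one way to mark this point.  Here is a rough sketch using the cx_Oracle driver; the connection details and restore point name are hypothetical, and it assumes a fast recovery area is already configured.

    import cx_Oracle  # assumes the cx_Oracle driver and Oracle client libraries are installed

    # Hypothetical connection details; the SQL itself is standard Oracle syntax.
    conn = cx_Oracle.connect("system", "password", "poc-target/ORCLPDB")
    cur = conn.cursor()

    # A guaranteed restore point requires a configured fast recovery area.
    cur.execute("CREATE RESTORE POINT before_poc_run GUARANTEE FLASHBACK DATABASE")

    # Confirm it exists before firing up the test harness.
    cur.execute("SELECT name, time FROM v$restore_point")
    for name, created in cur:
        print(name, created)

    # Rolling back (FLASHBACK DATABASE TO RESTORE POINT before_poc_run) has to be
    # done with the database mounted, typically from RMAN or SQL*Plus, so it is
    # not shown here.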

 

After the restore point is set, fire up the test harness and see what happens.  Also try to get some users to run their functions in test mode.  What do the users think?

 

Even if you carefully followed the setup of the application on the source server, the performance could be bad, or it could scream.  One company (unnamed) got such good performance at this point that they shut down the PoC and began implementing the application in production.  For them the application was a web server, and the PoC was really more of a pilot than the kind of PoC I’m describing here.  If you are moving a production, mission-critical server, DO NOT skip the subsequent steps of the methodology.  Higher performance may be tempting to implement right away, but the next steps are there to ensure the production application is migrated seamlessly, without a serious interruption to the business.

 

After the testing is done, it’s time to clean up and document the steps and results.  Cleaning up includes restoring the database on the target server to the original installation, before the application was ported.  The documentation includes not only the performance numbers from the tests but also the steps needed to get the application ready for testing.  An important part of the documentation is determining which setup steps can be run in parallel.  For example, although Data Pump already runs in parallel internally, you may be able to run multiple Data Pump jobs at once, each moving a different Oracle schema, or one focused on the largest database table while the others move the smaller tables.  (I don’t want to get lost in the weeds here, but this is what I’ve done with Oracle Export and Import.)  The point is to find what on the critical path can be run in parallel so that the application can be migrated more quickly.
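As a sketch of what ‘multiple Data Pump jobs in parallel’ can look like, here is one way to drive several impdp sessions at once from Python.  The schema names, directory object, and credentials are placeholders for illustration, not a recommendation for your environment.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical schema list and Data Pump parameters; adjust the directory
    # object, dump file names, and credentials for your own environment.
    SCHEMAS = ["SALES", "FINANCE", "HR"]

    def import_schema(schema: str) -> int:
        cmd = [
            "impdp", "system/password@target_db",
            f"schemas={schema}",
            "directory=POC_DPUMP_DIR",
            f"dumpfile={schema.lower()}_%U.dmp",
            f"logfile=impdp_{schema.lower()}.log",
            "parallel=2",   # intra-job parallelism on top of the per-schema jobs
        ]
        return subprocess.run(cmd).returncode

    # Run one impdp job per schema concurrently and record which ones failed.
    with ThreadPoolExecutor(max_workers=len(SCHEMAS)) as pool:
        results = dict(zip(SCHEMAS, pool.map(import_schema, SCHEMAS)))

    failed = [s for s, rc in results.items() if rc != 0]
    print("Failed schemas:", failed or "none")

Timing each job separately in the PoC write-up is what tells you which schema actually sits on the critical path.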

 

The final step is a team meeting to review the experiences from the PoC.  What could have been done more easily?  What could have been done more quickly?  What new components of the application did we find?  All of this feeds into the project plan for the next step of the migration: the rehearsal, where you practice the actual migration of the production application.

 

This has been a long write-up, but it is very important.  The next entries will cover the next steps in the migration process.

Security is a key element of cloud computing.  In a Forrester Research survey on cloud computing adoption, 30% of respondents cited security as a concern, and another 25% cited data privacy, which is directly related: integrity threats to the infrastructure can expose users’ data and the availability of the whole environment.  Security is not an option for a cloud environment; it’s a requirement.  Without it, users will not have confidence in the cloud, and like any other institution that loses trust, it will fail.

 

Security strategy should be applied in depth, from bottom to top, and during the design process, not as something you add at the end.  It should inherit most of the old-school techniques, such as a secure SDLC for application development, authentication/authorization protocols, and safeguarding of credentials, plus new and revised measures focused on the nature of cloud architecture.

 

New attack vectors targeting virtualized environments are emerging, such as:

 

  • Hyperjacking: hypervisor stack jacking, which involves installing a rogue hypervisor that can take complete control of a server.  Regular security measures are ineffective because the OS is not even aware that the machine has been compromised.  This kind of attack is still in its infancy, but proofs of concept such as Blue Pill and SubVirt demonstrate the stealth potential and the damage.  A countermeasure against this kind of attack is the adoption of Trusted Compute Pools;

 

  • VM Jumping/Guest Hopping: this attack leverages vulnerabilities in the hypervisor that allow malware to defeat VM protections and gain access to lower levels (i.e., the host).  The driver for these attacks is that a hypervisor has to provide at least the “illusion” of ring 0 for a guest operating system to run in.  The countermeasure against this kind of attack is twofold:
    1. Harden the VMs, keeping OS and application patches up to date so that malware cannot exploit known, unpatched vulnerabilities;
    2. Segmentation can also reduce the damage from a zero-day attack, where malware exploits unknown vulnerabilities or ones for which no patch is available.  Placing applications with like security postures together, isolated from applications and systems at higher or lower security levels, limits the damage and can be accomplished with Trusted Compute Pools;

 

  • VM live migration and networking: live migration of a VM between hosts copies the VM’s memory from one host to the other to allow transparent movement with minimal interruption.  The protocols used by most solutions on the market are not authenticated and are susceptible to man-in-the-middle (MITM) attacks, giving the attacker full access to OS/kernel memory and application state.

 

[Figure: MITM attack on VM live migration]

You can read more about cloud computing and virtualization vulnerabilities in many papers on the web.  Countermeasures against this attack include mutual authentication provided by Trusted Compute Pools, encryption, or isolation of the migration traffic on a secure network that is separated physically or virtually.
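To show what mutual authentication of a migration channel means in practice, here is a generic Python sketch of a mutually authenticated TLS connection.  The certificate files and port are hypothetical, and this illustrates the principle only; it is not any hypervisor vendor’s actual live-migration protocol.

    import socket
    import ssl

    CA_FILE = "datacenter_ca.pem"   # private CA that issues the hosts' certificates

    def server_context() -> ssl.SSLContext:
        # The destination host presents its certificate and requires one from the peer.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile="host_a.pem", keyfile="host_a.key")
        ctx.load_verify_locations(cafile=CA_FILE)
        ctx.verify_mode = ssl.CERT_REQUIRED   # reject peers without a valid client certificate
        return ctx

    def client_context() -> ssl.SSLContext:
        # The source host verifies the destination and presents its own certificate.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)   # server verification is on by default
        ctx.load_cert_chain(certfile="host_b.pem", keyfile="host_b.key")
        ctx.load_verify_locations(cafile=CA_FILE)
        return ctx

    def open_migration_channel(peer_host: str, port: int = 49152) -> ssl.SSLSocket:
        """Connect to the destination host over a mutually authenticated, encrypted channel."""
        raw = socket.create_connection((peer_host, port))
        return client_context().wrap_socket(raw, server_hostname=peer_host)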

 

This is not intended to be a full list.  In a multi-tenant environment, updating and patching the full stack are also subjects that should be considered in the design and in day-to-day operations.

 

 

Best Regards!

Small businesses today face many IT challenges: limited manpower, tight budget constraints, security concerns, lack of technical expertise, and a rapidly evolving IT environment. At the core of this IT is your server, supporting the functions that are central to your business. On the front end, servers run productivity applications such as email, file/print, and database. On the back end, servers run various security, manageability, and backup applications to keep the server, and your business, up and running reliably.

 

But what makes a server? Aside from the underlying hardware, the right operating system is crucial for enabling shared resources and maximizing productivity. Microsoft Windows Small Business Server 2011 Essentials (SBS 2011 Essentials) is one of the many options to consider for your first server purchase. This operating system provides an easy-to-use solution to protect your data with advanced, automated backup features. SBS 2011 Essentials helps you organize and access your business information from virtually anywhere. And, the operating system is designed to give you quick and easy connectivity to the cloud and a wide range of online services. Windows Small Business Server 2011 Essentials offers the best of both worlds: on-premise server capability and a simple gateway to the cloud.

 

A server running Windows Small Business Server 2011 Essentials powered by Intel Xeon processor E3 family is an excellent solution for small businesses. Intel Xeon processor E3 family delivers features designed to enhance your employees’ experience through dramatically improved performance of up to 30% over prior generation servers and up to 6X that of an older desktop.

 

Because your server runs many applications critical to your business, it is important that it is up and running 24x7. With support for Error Correcting Code (ECC) memory, Intel Xeon-based servers can detect and correct memory errors, preventing system crashes and potential data corruption. Security also continues to be a real concern, and Intel Xeon processor E3 family features Intel AES-NI, advanced instructions designed to speed up the encryption process and minimize compute resources, making encryption feasible where it previously was not.

 

Plus, with Intel Rapid Storage Technology, data redundancy is possible through software-based RAID allowing you to quickly recover in the event of a hard drive failure. And last but not least, Intel Active Management Technology gives service providers or your in-house IT staff the ability to remotely diagnose and repair your server, saving you time and ensuring swift recovery from system downtime.

 

Come and see all of this in action! Intel, Microsoft, and Acer are conducting a unique 3-city road show giving you a first-hand view of the new Acer AC100 server running Windows Small Business Server 2011 Essentials powered by the Intel Xeon processor E3-1200 product family. Experts from Intel, Microsoft, and Acer will be there to introduce you to options and give you advice on things to consider when purchasing your server. And don’t forget to take advantage of this opportunity to get hands-on experience and network with local business partners. Plus, one attendee from each event will receive a new Acer AC100 server at no cost. Go ahead and register, but don’t wait long because these free and informative sessions are filling up fast!

 

[Image: Windows Small Business Server 2011 Essentials, Intel Xeon, and Acer]

The recent Day in the Cloud event showcased reference architecture demos from members of the Intel Cloud Builders cloud services ecosystem that addressed a variety of solutions relating to cloud computing infrastructure.

 

Intel’s Alan Priestly recently blogged from the event and highlighted a couple of particularly relevant subjects: policy-based power management and security in the cloud.

 

Policy-based power management provides more control over server and data center power consumption based on specific, pre-determined policies. Alan calls out the newly published reference architecture from Dell and JouleX as an example of how to integrate power management solutions that help data center operators manage power consumption based on utility pricing and demand, helping to mitigate costs.

 

Security in the cloud is one of the key topics whenever cloud computing initiatives are discussed. The recent reference architecture from HyTrust and VMware provides some insight into trusted compute pools: sets of servers capable of supporting a trusted boot process at the hardware level.

 

We’ll bring you more cloud insight from Alan in the future, along with the latest on the Day in the Cloud events.

Download now


Leading cloud computing provider Savvis needed to build an infrastructure for a new offering while expanding its existing environments to support continued customer growth. By selecting new servers based on the Intel® Xeon® processor 5600 series, Savvis is providing customers with improved application performance for all of its cloud offerings while increasing server density up to 50 percent compared with previous deployments. That density helps Savvis manage power, cooling, and real estate costs and retain a competitive edge by keeping customer pricing low.


“With larger core counts and support for greater memory capacity than previous processors, the Intel Xeon processor 5600 series lets us double the number of virtual machines on each physical server,”  explains Reed Smith, director of product management for cloud computing at Savvis. “We can support continued customer growth while conserving data center power, cooling, and real estate.”


To learn more, download our new Savvis business success story. As always, you can find this one, and many more, in the Intel.com Reference Room and IT Center.

Download now

 

To increase efficiency and centralize operations, Denmark’s Grundfos Group wanted to standardize on a single IT platform across its 200 sites. The existing infrastructures included proprietary systems, which were complex and expensive to run.


The project team created a standards-based infrastructure featuring rack and blade servers from Dell based on the Intel® Xeon® processor 5600 series. This new infrastructure has made it quicker and easier to roll out new environments, and Grundfos Group estimates the consolidated platform will save more than DKK 1 million in energy costs, lowering carbon emissions by around 700 tons in Denmark and 1,600 tons globally. It has also enabled the company to reduce its IT costs by around 20 percent per end user. Finally, the new infrastructure has enabled Grundfos Group to provide consistent customer support across more than 45 countries.


“We quickly decided to adopt a standards-based rather than a proprietary solution because we gained the same high performance but with lower maintenance costs,” explains Karsten Sørensen, group vice president for Grundfos Group.


To learn more, read our new Grundfos Group business success story. As always, you can find this one, and many more, in the Intel.com Reference Room and IT Center.

As a principal architect in Enterprise Solution Sales (ESS), I understand the business challenges that are likely to slow down adoption of various elements of the cloud computing ecosystem. Those are the issues I’ll be discussing in my posts featured on Data Center Knowledge (DCK), a daily news and analysis website for the data center industry.

 

My first post establishes context for the discussions to follow and briefly touches on the need for a formal cloud solutions framework with a promise to reveal a detailed cloud transformation/maturity framework strategy as the topic develops. I believe my architect-based candidness about the cloud is different from anything else you’re hearing in the industry. By taking an architect’s approach to the enterprise business challenge, I’m hoping to offer a new perspective that’s refreshing (and maybe even a bit controversial).

 

Please join in the discussion. My posts will appear about every other week on DCK.  Go check out my first post on cloud computing: Let's Get Real About the Cloud...


How big is my Sparc?

Posted by K_Lloyd Jun 1, 2011

Normally I post some "opinion" or "interesting reference".  Consider this post an open request.

 

Per my earlier posts on Sparc (Sparc Arrest), lots of folks are migrating off legacy Sparc to current x86.  Every couple of weeks an OEM, reseller, or sometimes a customer pings me with a question about system sizing.  This typically isn't a head-to-head bake-off of the latest Sparc versus the latest Intel server processors.  It's more like: "How big of a server do I need to replace my SunFire V490 UltraSparc? It is about five or six years old..."

 

It would be great to just key this into an evergreen super performance tool, if such a thing existed.  Does it exist?  If so, please tell me.

 

The reality I have found is that historical performance publications are a sparse matrix, made sparser by the fact that for many years Sun published very few benchmarks, especially my favorite generic indicator of enterprise performance, SPECint_rate_base.  With this reality, each request becomes a combination of archaeology and documented assumptions... not my favorite process.

 

In an Intel deck I found the historical SAP SD Sparc benchmark results below.  Maybe not the perfect comparison tool, but at least it is something.

With this and what I can find online via searches and benchmark sites I can usually construct a supportable response.

 

As for the results I deliver, it is almost always a relatively small Intel Xeon server replacing a relatively large legacy Sparc box.  The savings in power, administration, licensing, and so on are large, and ROI can often be measured in months.  The biggest challenge is usually organizational, but that will be a topic for another post.

 

 

SAP SD Sparc Historical Performance Data (PDFs)

 

  • 4 processors / 32 cores / 256 threads, UltraSPARC T2 Plus, 1.4 GHz, 8 KB(D) + 16 KB(I) L1 cache per core, 4 MB L2 cache per processor, 128 GB main memory. Benchmark users: 7,520 SD (Sales & Distribution); average dialog response time: 1.99 seconds; fully processed order line items/hour: 753,000; dialog steps/hour: 2,259,000; SAPS: 37,650; average DB request time (dia/upd): 0.098 sec / 0.278 sec; CPU utilization of central server: 99%; operating system, central server: Solaris 10; RDBMS: Oracle 10g; SAP ECC release: 6.0. Certification number 2008058.

  • 4 processors / 32 cores / 64 threads, Intel Xeon processor X7560, 2.26 GHz, 64 KB L1 and 256 KB L2 cache per core, 24 MB L3 cache per processor, 256 GB main memory. SAP SD benchmark users: 10,450; average dialog response time: 0.98 seconds; fully processed order line items/hour: 1,142,330; dialog steps/hour: 3,427,000; SAPS: 57,120; average DB request time (dialog/update): 0.021 sec / 0.017 sec; CPU utilization of central server: 99%; operating system, central server: Windows Server 2008 Enterprise Edition; RDBMS: DB2 9.7; SAP Business Suite software: SAP enhancement package 4 for SAP ERP 6.0. Certification number 2010012.

  • 16 processors / 32 cores / 64 threads, SPARC64 VI, 2.4 GHz, 256 KB L1 cache per core, 6 MB L2 cache per processor, 256 GB main memory. Benchmark users: 7,300 SD (Sales & Distribution); average dialog response time: 1.98 seconds; fully processed order line items/hour: 731,330; dialog steps/hour: 2,194,000; SAPS: 36,570; average DB request time (dia/upd): 0.018 sec / 0.041 sec; CPU utilization of central server: 99%; operating system, central server: Solaris 10; RDBMS: Oracle 10g; SAP ECC release: 6.0.

  • 24 processors / 48 cores / 48 threads, UltraSPARC IV+, 1950 MHz, 128 KB(D) + 128 KB(I) L1 cache, 2 MB L2 cache on-chip, 32 MB L3 cache off-chip, 96 GB main memory. Benchmark users: 6,160 SD (Sales & Distribution); average dialog response time: 1.99 seconds; fully processed order line items/hour: 616,330; dialog steps/hour: 1,849,000; SAPS: 30,820; average DB request time (dia/upd): 0.018 sec / 0.033 sec; CPU utilization of central server: 99%; operating system, central server: Solaris 10; RDBMS: Oracle 10g; SAP ECC release: 6.0.

  • 4 processors / 8 cores / 8 threads, UltraSPARC IV+, 1800 MHz, 128 KB(D) + 128 KB(I) L1 cache, 2 MB L2 cache on-chip, 32 MB L3 cache off-chip, 32 GB main memory. Benchmark users: 1,200 SD (Sales & Distribution); average dialog response time: 1.86 seconds; fully processed order line items/hour: 121,330; dialog steps/hour: 364,000; SAPS: 6,070; average DB request time (dia/upd): 0.044 sec / 0.035 sec; CPU utilization of central server: 97%; operating system, central server: Solaris 10; RDBMS: MaxDB 7.5; SAP ECC release: 5.0.

  • 104-way SMP, UltraSPARC III, 1200 MHz, 8 MB L2 cache, 576 GB main memory. Benchmark users: 8,000 SD (Sales & Distribution); average dialog response time: 1.81 seconds; fully processed order line items/hour: 813,000; dialog steps/hour: 2,439,000; SAPS: 40,650; average DB request time (dia/upd): 0.067 sec / 0.045 sec; CPU utilization of central server: 97%; operating system, central server: Solaris 9; RDBMS: Oracle 9i; R/3 release: 4.6C; total disk space: 1,818 GB.

  • 36-way SMP, UltraSPARC IV with Chip Multi-Threading technology (CMT), 1200 MHz, 192 KB L1 cache, 16 MB L2 cache, 288 GB main memory. Benchmark users: 5,050 SD (Sales & Distribution); average dialog response time: 1.72 seconds; fully processed order line items/hour: 517,330; dialog steps/hour: 1,552,000; SAPS: 25,870; average DB request time (dia/upd): 0.050 sec / 0.056 sec; CPU utilization of central server: 98%; operating system, central server: Solaris 9; RDBMS: Oracle 9i; SAP R/3 release: 4.70; total disk space: 3,816 GB.

  • 72-way SMP, UltraSPARC IV, 1200 MHz, 128 KB(D) + 64 KB(I) L1 cache, 16 MB L2 cache, 576 GB main memory. Benchmark users: 10,175 SD (Sales & Distribution); average dialog response time: 1.95 seconds; fully processed order line items/hour: 1,021,330; dialog steps/hour: 3,064,000; SAPS: 51,070; average DB request time (dia/upd): 0.060 sec / 0.074 sec; CPU utilization of central server: 98%; operating system, central server: Solaris 9; RDBMS: Oracle 9i; SAP R/3 release: 4.70; total disk space: 3,816 GB.
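As a rough illustration of how I turn numbers like these into a sizing answer, here is a small Python sketch that compares legacy SAPS figures from the list above against the 4-socket Xeon X7560 result, with some headroom built in.  SAPS is only a proxy; memory footprint, I/O profile, and licensing still need their own analysis, so treat the output as a starting point, not an answer.

    # SAPS figures taken from the benchmark results listed above.
    LEGACY_SAPS = {
        "UltraSPARC T2 Plus, 4 sockets":  37_650,
        "SPARC64 VI, 16 sockets":         36_570,
        "UltraSPARC IV+, 24 sockets":     30_820,
        "UltraSPARC IV+, 4 sockets":       6_070,
    }
    XEON_X7560_4S_SAPS = 57_120   # the 4-socket Intel Xeon X7560 result above

    def servers_needed(legacy_saps: int, target_saps: int, headroom: float = 0.65) -> int:
        """How many target servers cover the legacy load while staying ~65% busy."""
        usable = int(target_saps * headroom)
        return max(1, -(-legacy_saps // usable))   # ceiling division

    for system, saps in LEGACY_SAPS.items():
        count = servers_needed(saps, XEON_X7560_4S_SAPS)
        print(f"{system:32s} -> {count} x 4-socket Xeon X7560 class server(s)")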
