bryceolson

Big Servers are Back!

Posted by bryceolson May 28, 2008

One trend that is really starting to take shape in the server industry is that big servers are back! That doesn't mean big servers ever disappeared from the map. Historically, bigger servers with four or more processor sockets have made up 7-8% of the server market by volume, and they have always been used for scalable, data-demanding enterprise applications that IT values for their performance, headroom, and reliability. What we're seeing now is a greater shift in popularity toward these servers as IT invests more and more in this direction.

 

So, why is that? Well, check out this video and then let me know whether you agree or disagree. After you watch it, I'd also be curious to hear which buying criteria you value most when you go big.


Join me, industry leaders, and IT professionals for a discussion of this topic on the Ars Technica web forum.

 

There is plenty of evidence supporting both sides of this question. Maybe ... just maybe ... new server technology can help turn today's IT burdens into tomorrow's business benefits.


Share your opinion or tell us about your experience.


Part four of three

 

Hopefully, if you are watching this, you have already seen the first three installments I did on surviving a data center crisis. A quick recap: the premise (a.k.a. the crisis) is that you are running out of capacity.

 

According to Green Tech World (TMC, 2007), "81% of IT managers will exceed capacity for power or space in the next 5 years."


In the first three video segments I spoke about three complementary approaches that, taken together, could give you as much as 50X the data center capacity in your existing power and space.
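
The reason the three approaches add up to so much is that their gains multiply rather than add. To make the arithmetic concrete (my numbers here are purely illustrative, not from the videos): a 5X performance gain from a server refresh, times a 5X consolidation ratio from virtualization, times 2X more usable capacity from higher density works out to 5 x 5 x 2 = 50X the work in the same power and space footprint.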


Summarizing:


Data Center Crisis - How to Survive... Refresh with today's advanced, high-performing servers

Data Center Crisis - Part 2 - Using Virtualization... Virtualize and Consolidate

Data Center Crisis - Part 3 - Getting Dense... Use every Watt


Today I want to address two follow-up questions:


     One: Where do I go next when I have used up all this new capacity?
     Two: Who can help me get there?

The answers, it turns out, are related.

 

Moving outside the box is the fourth strategy, and like the others, it can be used at any time, in combination with the other three.


Stepping outside the box:


Moving outside the box allows the IT manager to move work that can be run efficiently elsewhere (things like email) outside the data center, and to focus on the highest-business-value or least-movable work inside.

 

As to who can help you get there: the system integrator and IT outsourcer community offers support for all four strategies I have outlined.


My recommendation is to examine your situation and your growth projections, then create a plan that uses all four strategies to preclude the major capital expense of data center construction. Avoiding that $10 million to $50 million capital hit should be a very compelling proposal.


Have you ever asked yourself that question when you are bombarded with marketing messages from multiple companies about why you should choose their products over a competitor's? As a non-engineer in an engineer-centric company, I have certainly thought about this several times and asked myself a very simple question: why should I choose one architecture over another?

 

I suppose the best place to start is at the beginning, by trying to decipher the acronym soup of RISC, x86, and so on. I decided to use my ‘old friend’ Wikipedia (http://www.wikipedia.org/) to help with this process. What I found was another alphabet soup that I could have researched for hours, but I will try to simplify it below. I attach my detailed definition findings at the end of this blog.

 

Simply put, RISC (pronounced "risk") is a CPU design that uses simplified instructions which execute very fast, thus providing higher performance. x86 is a generic term that refers to the instruction set of another CPU architecture. So basically, both RISC and x86 are instruction sets tied to a CPU architecture.
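
To make the difference concrete, here is a tiny sketch of my own (illustrative only; real compiler output varies): the same C++ statement can become a single x86 instruction that operates directly on memory, while a classic load-store RISC design such as MIPS uses separate load, modify, and store instructions.

    int counter = 0;

    void bump() {
        counter += 1;
        // A typical x86 compiler can emit one instruction that reads,
        // modifies, and writes memory in place, roughly:
        //     add dword ptr [counter], 1
        // A load-store RISC machine (MIPS shown) only touches memory
        // through loads and stores, so the same statement becomes:
        //     lw    $t0, counter
        //     addiu $t0, $t0, 1
        //     sw    $t0, counter
    }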

 

So which one should I choose?

Call me old-fashioned, but as a business guy, it always comes down to three basic tenets when I make a decision:

1) I like choice: the ability to pick and choose among multiple suppliers to get the best deal for my needs (and the ability to change suppliers without major obstacles).

2) Performance is really important. Higher performance means I get my work done more quickly, which reduces overall cost, improves time to revenue, and ultimately improves the productivity of my business.

3) System cost and total cost of ownership are key decision points in today’s era, which is vastly different from the ‘dot-com’ boom. It is all about managing the bottom line through good decisions around CAPEX and OPEX spending.

 

I applied my decision criteria and quickly found that there is not a lot of choice from a hardware and operating system perspective with RISC architectures. In fact, it looks like quite the opposite of choice, which always concerns me; call me pro-choice if you like, but I want the ability to move between suppliers! On the other hand, I found x86 to offer lots of choice, with many hardware vendors and a range of operating systems from Windows to Linux and Solaris.

 

With choice out of the way, I moved on to performance for my business and looked at published results from many hardware vendors on websites like http://www.spec.org. What I found was that Intel-based systems had a lot of leading results against architectures like SPARC from Sun or Fujitsu and POWER from IBM.

 

I then looked at price and (being an ex-accountant in my past career) nearly jumped for joy when I saw how low x86 system prices were compared to comparable RISC systems.

 

This analysis helped me understand the landscape better and simplified my decision making.

 

Here is a short video with a little more detail. I would be interested in your thoughts; have you had any similar experiences you would like to share?


S_Poulin

Virtualization - Who Cares?

Posted by S_Poulin May 13, 2008

I have visited a number of customers recently. The discussions are usually straightforward: I give them a download on our current products, tell them about things we are doing in the future, and along the way ask them some questions about trends they are seeing in their businesses. It will come as no surprise that enterprises are trying to keep up with their current requirements while also squeezing enough out of flat or dwindling budgets to do something new. Many are turning to virtualization as a way to do more.

 

So who cares? CFOs care. I went out to visit a leading Fortune 500 company based on the West Coast of the US. Keep in mind I am planning to discuss our server platforms, why I believe they lead on performance and power, and all of the great new virtualization features we have recently introduced or will introduce in the future. Before we get started, they proudly walk me through their new datacenter, and I stop in front of a rack that has two servers in it. Two 2U, two-processor servers. It is right next to another rack that has four servers in it. I inquire as to why both racks are only partially full, and I receive a response that one is owned by Finance and one is owned by a business unit; IT just manages them. You can look at this two ways. The glass-half-empty way: they are wasting an incredible amount of datacenter space and they are hopeless. The glass-half-full way: this is a great opportunity to deliver real value to this company's bottom line by first convincing them that physical consolidation (fill up the racks) is important, then showing them a path toward application consolidation, and finally sharing a vision of datacenter virtualization that includes compute, storage, and networking. Their CFO will care.

 

IT employees care. One theme that is coming through loud and clear is that people who drive some form of virtualization are usually considered innovators or leading-edge thinkers within their company. I have heard the term "IT Hero" used for someone who has delivered on a high-ROI project, usually these days through the use of virtualization. I have met a number of IT folks at conferences and during visits, and it is uncanny how many are digging for more product information and how eager they are to hear about the new features we're putting into CPUs, chipsets, and networking devices. A quick search of YouTube found this case study (here) that sums up the sorts of things I have heard.

 

It is also increasingly important that all of this works well with the software, VMM, and OS vendors' product offerings. I know we are working closely with all of the ecosystem players, because if we come out with an amazing new feature in our components, it would be wasted if the VMM, OS, or software didn't take advantage of it. There is some interesting banter (here) about some of the pros and cons of virtualization. We are busy working on features that improve the performance and simplify the experience end users have when they virtualize. Why do you care about virtualization? What are you doing today that you couldn't do a year or two ago, made possible by virtualization-related technology?

As part of the Sun Microsystems and Intel alliance, the two companies have collaborated to bring open source Threading Building Blocks (TBB) support to the Solaris Operating System (OS) and the Sun Studio software toolchain. Check out the Sun blog for additional information. Click the video below for a short interview with Deepanker Bairagi, Principal Engineer for Sun Studio.


Software parallelism can unleash the processing power that newer multi-core architectures provide, including the Quad-Core Intel® Xeon® processors. For developers, multithreading offers a software parallelism model, but many existing solutions require a lot of low-level coding. Threading Building Blocks provides a richer approach to expressing parallelism in a C++ program: higher-level, task-based parallelism that abstracts away platform details and threading mechanisms for performance and scalability.
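
As a small illustration of that task-based style, here is a minimal sketch using the open source TBB API (my own example, not code from the Sun port): a parallel loop that scales an array with no explicit thread management.

    #include <vector>
    #include <tbb/task_scheduler_init.h>
    #include <tbb/parallel_for.h>
    #include <tbb/blocked_range.h>

    // Body object: TBB invokes operator() on sub-ranges, possibly in parallel.
    struct Scale {
        float* data;
        float factor;
        Scale(float* d, float f) : data(d), factor(f) {}
        void operator()(const tbb::blocked_range<size_t>& r) const {
            for (size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= factor;
        }
    };

    int main() {
        tbb::task_scheduler_init init;   // start the TBB runtime
        std::vector<float> v(1000000, 1.0f);
        // TBB splits [0, v.size()) into chunks and schedules them as tasks
        // across the available cores; no threads are created by hand.
        tbb::parallel_for(tbb::blocked_range<size_t>(0, v.size()),
                          Scale(&v[0], 2.0f));
        return 0;
    }

The developer states what can run in parallel; the TBB scheduler decides how many threads to use and how to balance the work across cores.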

 

The Solaris OS is able to take advantage of multicore architectures, including Intel architecture, with features such as lightweight processes (LWPs), load balancing across cores, and processor affinities. Sun Studio software offers a complete integrated toolchain for the Solaris and Linux platforms, including parallelizing compilers, performance and thread analysis tools, memory and code debuggers, a NetBeans-based integrated development environment, and more.
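
For instance, here is a minimal sketch (my own example, using the standard Solaris processor_bind(2) interface rather than anything specific to Sun Studio) of pinning the calling LWP to a particular processor:

    #include <sys/types.h>
    #include <sys/processor.h>
    #include <sys/procset.h>
    #include <cstdio>

    int main() {
        // Bind the calling LWP to processor 0. With no binding in place,
        // the Solaris dispatcher load-balances LWPs across cores itself.
        if (processor_bind(P_LWPID, P_MYID, 0, NULL) != 0) {
            std::perror("processor_bind");
            return 1;
        }
        std::printf("bound to processor 0\n");
        return 0;
    }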

 

With Threading Building Blocks added, developers on the Solaris platform now have a fully loaded toolbox that simplifies the development of optimized multithreaded applications for multi-core Intel processors. Click here to learn more about Threading Building Blocks and optimizing performance for multi-core processors.

 

I would like to hear from the community: how do you see this impacting the next generation of software development for Solaris running on Intel architecture?

whlea

Reference Room up and running

Posted by whlea May 1, 2008

Hi all, I just found out about this new site; check it out here: http://www.intel.com/references/
