A lot has been written in the HPC field about how it doesn’t matter how many cores you have in a processor if you can’t keep them busy. Well, I will pile on with my own proof point and opinion.  One of the key factors in keeping processor cores busy is how fast you can move data in and out of them.  In some cases, customers maximize total application performance by optimizing around performance per core.  This also often yields the best performance return on your application licensing costs (if you happen to license your application on a per-core, per-process, or software-token basis).  The performance-per-core optimization works by turning off some of the cores in a processor, which leaves more memory bandwidth for the remaining active cores, and then spreading your workload across more processor sockets or servers while keeping the total number of cores constant.  So why would you turn off cores in one of the best processors on the market today?

Consider the example of the Nehalem-EP processor running a memory-bandwidth-intensive energy application (refer to the figure below).  The base case (relative elapsed time = 1.0) corresponds to running the application on two dual-socket servers with all cores active (4 cores per processor).  Moving to the right, four dual-socket servers with only 2 of 4 cores active per processor, keeping the total number of cores constant, delivered roughly a 30% improvement in application elapsed time.  In other words, moving from two two-socket servers with all cores active to four two-socket servers with half the cores active improved application performance by about 30%.  If your software license costs are on a per-core or per-process basis, you just used the same number of software resources while achieving 30% faster results.  Results will vary by application, but a number of applications in the energy and CAE fields exhibit this type of scaling behavior.

 

[Figure: Relative elapsed time - two dual-socket servers with all cores active vs. four dual-socket servers with half the cores active.]
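To make the licensing math concrete, here is a minimal sketch in Python. The ~30% improvement is the result described above; the core count and the idea of measuring throughput per licensed core are illustrative assumptions, not data from the benchmark run.

```python
# Illustrative only: same total licensed core count, spread across more
# half-populated servers, using the ~30% elapsed-time gain described above.

total_cores = 16   # licensed cores stay constant in both configurations

configs = {
    "2 servers, all cores active":  {"servers": 2, "relative_elapsed_time": 1.0},
    "4 servers, half cores active": {"servers": 4, "relative_elapsed_time": 0.7},
}

for name, cfg in configs.items():
    throughput = 1.0 / cfg["relative_elapsed_time"]   # relative jobs per unit time
    per_core = throughput / total_cores               # return per licensed core
    print(f"{name}: relative throughput {throughput:.2f}, "
          f"per licensed core {per_core:.3f}")
```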

With upcoming multi-core products on the horizon, and not just from Intel, it is important to keep in mind what HPC folks have known all along: it is about the optimal balance of performance, and I would add, about extracting the maximum performance from your software licensing costs.  With the Intel® Xeon® processor 5500 series (Nehalem-EP) in the market today and the upcoming Westmere-EP processor, we aim to give HPC users just that.

There is more and more buzz around low-latency Ethernet these days, coincident, no doubt, with the growth of the 10 Gigabit Ethernet market.  Two events are worth highlighting if you want to hear the latest and greatest…

 

  • February 2nd:  The Ethernet Alliance is hosting a TEF (Technology Exploration Forum) next week where there will be a panel on low-latency Ethernet.  I’ll be talking about iWARP over TCP and participating in a panel discussing why and where we need low-latency Ethernet.

  • March 14-17th: The OpenFabrics Alliance is having its annual workshop in Sonoma where low-latency 10 Gigabit Ethernet will be included in many of the sessions.  Expect good presentations and discussions as Enterprise Data Center and Cloud are key focus areas for this year’s workshop.

Get ready, there is a big train a-coming. OK, you may ask what on earth this has to do with servers.

While thinking about what to share in my blog post, this classic song just kept popping into my head and would not leave.

 

When something pops into my head, I get curious, so I decided to revisit the famous lyrics from this song.

 

The lyrics really resonated with me, and in a strange, bizarre way I can relate some of them to what is going to happen in the server market.

 

So, what is that train that is a-coming? That train is the new Nehalem-EX processor for high-end computing, which will forever change the shape and dynamics of the high-end computing server industry.

 

'People get ready' - I have written in previous blogs about Nehalem-EX, how it will compare to RISC architectures, and how it will change the face of mission critical computing as we know it today. Now it is time to get ready, and Customers are getting ready. I have lost track of the number of Customer calls I have been on over the last few months sharing early material on what Nehalem-EX will bring. Whether it is performance projections versus RISC, a deep dive into the 20+ advanced RAS capabilities, or a discussion of the 15+ new 8-socket and above OEM designs, there is certainly a lot of excitement building among Customers who previously would not have considered Xeon for their mission critical workloads. At times it nearly feels like Customers are working through a mental checklist of the features they were waiting for Xeon to deliver. I was on a call with one Customer who just wanted to know about the new memory RAS capabilities and the ability to deal with failing DIMMs without a reboot. When we answered the question, you could sense an intake of breath and a sense of excitement that we had something coming which was at the top of their needs list.

Now that Nehalem-EX will deliver the exceptional performance and the additional reliability and availability features that are critical for customers, all within an economic value proposition that is a fraction of the cost of an equivalent RISC solution, there is no reason not to get ready.

 

'Just get on board' - There are many reasons why people say they are not willing to move from RISC to Xeon architectures. Most of those reasons can easily be addressed to allay any fears or concerns you might have about moving. I wanted to share a few of the common objections I frequently hear.

 

“Xeon is not reliable.” The good news is that we have been road testing our reliability story with both industry analysts and real end customers who are thinking of moving from RISC. Customers had some specific things they wanted to see in Xeon, and they are now getting them with Nehalem-EX: 20+ advanced RAS features, with RAS delivered through system and software resiliency, not just the microprocessor.

“Xeon does not scale.” Xeon has always scaled pretty well; there has just been limited choice out there. Now there is plenty of choice, with 15 new 8-socket and above designs expected from multiple OEMs.

“My applications will not run.” Most packaged applications run in x86 environments. Very few application vendors do not support a version of their application running in a Xeon based environment.

“Migration costs are too high.” Yes, there are costs, but the TCO and ROI advantages outweigh them for most packaged applications.

 

'There's room for all' - Having headroom to grow and scale your mission critical workloads to meet peak demand periods is essential. Being limited by the size of an OEM platform, I/O constraints, or memory limitations has been among the reasons Customers chose RISC architectures in the past. The good news is that those reasons, and many others, are now being removed with the Nehalem-EX product line. As discussed above, there are 15+ new 8-socket and above designs coming from many OEMs. This will provide plenty of choice and deliver platforms capable of providing the room your mission critical solutions require.

 

So get ready, there is a big train a-coming that is going to change the face of mission critical computing forever. Events like this do not occur frequently in our industry, so prepare to get on board that train.

A: It depends.

No, seriously, as much as it sounds like a copout, that is actually the correct answer.

 

I still get asked this question several times a week.  After a deep breath, I carefully say “it depends”, but then try and explain my position.  Part of the problem comes from looking at virtualization as something you might do to a server, instead of looking at it as part of how you manage all your servers. 

 

In the olden days, say four years ago, it was a pretty simple question in that the options were limited.  Processors had a single core, and you had a choice: do I go with two processors or four?  Four-processor systems had more room for memory, but they also had more processors.

 

Today things are more flexible and more fluid, and this trend is only increasing.  Processors have multiple cores, and the options are vast.  Intel is introducing new processors into the Xeon family soon: the Nehalem-EX and the Westmere-EP.  Both benefit from the architectural advantages that came with the introduction of the Nehalem architecture, and both have multiple cores.  So how do we pick the right virtualization server – the “best” virtualization server?

 

There will be a lot of dials we can turn.   A given server could have two, four, or eight processors.  Each processor could have four, six, or eight cores.  Different servers will have different memory capacities and I/O capabilities. 

To make a good choice you will need to understand which resources constrain the addition of more VMs to your servers.  Understanding your workload is the key.

 

As you load virtual machines onto your platform what barriers do you run into first – memory? CPU? Disk I/O? Network I/O?  Something else?
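One way to find that first barrier is to divide each server resource by the per-VM demand and see which ratio is smallest. The sketch below uses made-up numbers for a hypothetical server and workload; swap in your own measurements.

```python
# Rough capacity estimate: which resource limits VM count first on a candidate server?
# All numbers below are illustrative assumptions, not measurements.

server = {"cores": 16, "memory_gb": 144, "disk_iops": 20000, "net_mbps": 10000}
per_vm = {"cores": 1.5, "memory_gb": 6, "disk_iops": 800, "net_mbps": 250}

limits = {res: server[res] / per_vm[res] for res in server}
binding = min(limits, key=limits.get)

print({res: round(vms, 1) for res, vms in limits.items()})
print(f"First barrier: {binding}, at roughly {int(limits[binding])} VMs per server")
```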

 

Choosing the right server will require understanding your workload, and selecting the hardware that best addresses your virtualization constraints, without breaking your licensing or budget.

 

There is no magic answer – the right server for your VMs depends on what your VMs do.  Are they web front ends, SharePoint servers, databases, or ERP modules?  The right answer depends on understanding your workload.  Or, as I said before, it depends…

Managing a data center is a complex task, and there are a number of advances in sources of data and points of control - what we call instrumentation - that are available to help run a more efficient enterprise.  The animation posted here discusses how instrumentation delivered on Intel Xeon processor based servers, including Intel Intelligent Power Node Manager, can be used to improve the energy efficiency of your data center.

 

 

Every day, Intel® technology and platforms help companies solve business problems and challenges. Here are a few of the growing number of stories and reasons for choosing Intel processors and technology.

Winning: Western Digital – Storage technology leader

Western Digital’s performance demands kept rising. With the economy pushing IT to cut costs, Minh Vu, director of data systems and operations, says his IT shop has found a way to meet those demands and deliver even greater ROI: continue its server refresh on the Intel® Xeon® processor 5500 series. The economics of the new processor are so compelling, Vu says, that “Even in hard times, we don’t have to hold off on server replacement. With every new server I buy, we come out ahead.”

 

Read about it here

 

The results:

  • Increased application performance. Performance-sensitive engineering applications run 20-25 percent faster than on Western Digital’s Intel Xeon processor 5400 series-based platforms.

  • More virtual machines. Western Digital can put 33 percent more virtual machines on an Intel Xeon processor 5500 series-based server than on its Intel Xeon processor 5400 series-based systems.

  • New server deployment cost drops by over one-third. Counting total hardware, memory, software, maintenance and other operational costs, Vu says the cost of deploying a new server has fallen by one-third.

 

Winning: The Schwan Food Company

The Schwan Food Company (Schwan) has grown to become a multibillion-dollar enterprise that relies on IT to sell its frozen foods from traditional delivery trucks, in grocery-store freezers, online, and in the foodservice industry. In 2001, the company began virtualizing its IT infrastructure to improve response time and cut costs, while migrating mission-critical applications from a mainframe to Intel® processor–based servers. Over the last two years, continued virtualization with the Intel® Xeon® processor 5500 and 7400 series has helped the company reduce power and cooling by 13 percent, decrease server real estate by 30 percent, and extend virtualization to new applications across the enterprise.

 

Read about it here

 

The results:

  • Improved responsiveness. With a virtualized environment, the IT group has accelerated new application deployment from two weeks to one day.

  • Decreased power consumption and footprint. By consolidating applications on fewer physical servers, the IT group has cut power consumption by 13 percent and reduced the hardware footprint by 30 percent.

  • Extended virtualization. The performance of Intel processors has helped the IT group extend virtualization to new applications across the enterprise.

 

Winning: Mappy

 

Mappy, a fully owned subsidiary of the PagesJaunes Group, is a key player in the European online map market segment. The maps are supplemented with an array of detailed local knowledge, such as weather reports, the cheapest petrol stations, hotels, and even where to park your car (e.g., in a remote London suburb). Its website is one of the top 25 most visited sites in France, with approximately 8 million unique visitors each month. The site is absolutely central to Mappy’s business. Mappy wanted to launch new functionality and new services on its website and also release new APIs into the marketplace. As a result, it needed to ensure maximum performance from its back-office IT infrastructure.

 

Read about it here.

 

The results:

  • New platform. Gains a new high-performing, stable platform to relaunch its business-critical website and new business services

  • Data centre gains. Requires fewer servers and less data centre space, and lowers energy costs while maintaining the same TCO as with a larger number of IBM servers

  • Reduced energy. Electricity consumption fell from 55 kVA (kilovolt-amperes) to 40 kVA

 

Winning: Clark County, Nevada

Economic hard times pose a double challenge for state and county governments, increasing the demand for services at a time when revenues are falling. In Clark County, Nevada—which covers an area the size of New Jersey and encompasses the world-renowned Las Vegas Strip—the IT department has found a winning strategy for dealing with the challenges. The county’s IT leaders say that virtualization and consolidation strategies using the latest Intel® Xeon® processors help ensure that Clark County’s IT infrastructure is sustained through the tough financial times, by both helping to reduce costs and keeping up with expanding automation.

 

Read about it here

 

The results:

  • Increased capacity and density. By replacing older servers with servers based on the latest Intel Xeon processors, Clark County has achieved 10:1 consolidation rates while gaining the performance and headroom to support new applications.

  • Greater IT efficiency. Consolidation, virtualization, and standardization allow the county’s IT department to better meet the increasing automation needs of the agencies it supports.

  • Significant cost savings. Greater processor performance and energy efficiency are generating significant savings on server total cost of ownership (TCO) as well as postponing the need to expand the data center. Consolidating server workloads onto fewer, higher-performing Intel® processors has significantly reduced software costs.

While desktop PCs are optimized for gaming and business applications, workstations are purpose-built systems designed to deliver the high-performance experience you expect when working with engineering, digital media, financial services, and scientific applications.

With workstation performance powered by up to two Intel® Xeon® processor 5500 series CPUs, you can play more "what ifs" (e.g., analyzing your ideas for form, fit, or function) and potentially arrive at a more optimal design in less time.  Workstations based on the Intel Xeon processor deliver the intelligent performance you need to elevate your personal innovation engine.

With Intel® Xeon® processor based workstations you have the opportunity to support more memory than traditional business desktop systems. That means the digital canvas you use to create tomorrow’s ideas is larger and potentially more capable than a traditional desktop’s.  Now you can compute and visualize larger data sets.  Where you were once forced to work with smaller data sets, with a workstation you can now work with entire assemblies and more efficiently understand larger-scale trends.

Intel® Xeon® processor based workstations also support error-correcting code (ECC) memory.  That means you can be confident that a single-bit memory error will not cause a “blue screen” failure.  Imagine working for some time on a large assembly model when suddenly you hit a memory error.  Without ECC memory you would have to start all over again, and that is neither fun nor productive.

So what does a workstation do that a desktop system cannot?

Workstations:

·     Deliver the compute and memory capacities to work with large-scale manufacturing, financial services, digital media, and scientific application models.

·     Enable users to compute and visualize large-scale trends and details that would easily have been missed had they been forced to view the identical data broken into multiple smaller subsets and then re-assembled as sections of the whole.

·     Provide ECC-based memory solutions. As users work with larger, more complex data sets, memory must become less of a breaking point.  ECC memory detects and corrects errors introduced during storage or transmission of data and adds significantly to overall system reliability (a small sketch of the idea follows this list).
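To show what “detects and corrects” means in practice, here is a toy Hamming(7,4) example in Python. It illustrates the principle of single-bit correction only; it is not the SECDED code that actual server DIMMs implement.

```python
# Toy Hamming(7,4) code: encodes 4 data bits with 3 parity bits so that any
# single flipped bit can be located and corrected. Real ECC DIMMs use a wider
# SECDED code, but the principle is the same.

def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(codeword):
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_pos = s1 + 2 * s2 + 4 * s3      # 1-based position of the bad bit, 0 if none
    if error_pos:
        c[error_pos - 1] ^= 1             # flip it back
    return [c[2], c[4], c[5], c[6]]       # recovered data bits

stored = encode(1, 0, 1, 1)
stored[5] ^= 1                            # simulate a single-bit memory error
assert correct(stored) == [1, 0, 1, 1]    # the error is detected and corrected
```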

Before we dive into the discussion of PLM and cloud computing, let’s set the stage by first examining product development and how it is changing.

Product development at most companies is no longer the invention of new ideas within a large, vertically integrated R&D department.  It is now more likely to be accomplished by networks of geographically dispersed companies and consultants that offer the skills and expertise needed to improve a product’s competitive advantage or time to market.

Firms are now rethinking their traditional approaches to innovation and are seeking more collaborative forms of product development.  Enter cloud computing, a blanket term for anything that involves the use or delivery of a service over the internet hosted by a second party.  Private clouds are simply proprietary computing infrastructures that provide hosted services to a limited number of people behind an organization’s firewall.  Using that definition, cloud-based PLM is simply employing the internet to provide a comprehensive information infrastructure that coordinates all aspects of a product from initial concept to eventual retirement.

Now that we have that out of the way, let’s look at the potential barriers to PLM and cloud computing.  The obvious barriers are security, availability, and predictability.

Cloud-based PLM needs to be like an electric utility.  Just as there is electricity to light your home when you need it, cloud-based PLM needs to always be there, offering ready and secure access to your data whenever you need it.  It also needs to offer predictable service levels, with no ebbs and flows based on competing traffic or cloud use.  Security remains paramount too.  Enterprise-based PLM prevents unauthorized access from the outside. However, in a virtual world a physical perimeter no longer exists, so businesses must assume that any data transferred may be intercepted.  Encryption technologies, long used by militaries and governments to enable secret communication, are now being used to help secure PLM data by transforming high-value information with algorithms that make it unreadable to anyone except those possessing the key. However, if the encryption keys themselves are lost or compromised, then the data is effectively lost or compromised. For this reason, cloud-based PLM security remains a top concern.
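As a minimal sketch of the encryption idea (and of why key handling matters), here is an example using the Python cryptography package's Fernet recipe. The PLM record shown is a made-up payload for illustration.

```python
# Illustration only: protect a PLM record with symmetric encryption.
# Losing or leaking the key has exactly the consequences described above.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # must itself be stored and protected separately
cipher = Fernet(key)

plm_record = b'{"part": "bracket-7", "rev": "C", "status": "released"}'  # made-up payload
token = cipher.encrypt(plm_record)           # safe to transmit or store in the cloud

assert cipher.decrypt(token) == plm_record   # only a key holder can recover the data
```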

Cloud based PLM and Intel technologies.

Top concerns in cloud based computing remain security, predictability and availability.  Let’s look at how Intel server based technologies can help you overcome these potential barriers.

  • Intel® Trusted Execution Technology (Intel® TXT): Intel® Trusted Execution Technology for safer computing, formerly code named LaGrande Technology, is a versatile set of hardware extensions to Intel® processors and chipsets that enhance the digital office platform with security capabilities such as measured launch and protected execution. Intel Trusted Execution Technology provides hardware-based mechanisms that help protect against software-based attacks and protects the confidentiality and integrity of data stored or created on the client PC. It does this by enabling an environment where applications can run within their own space, protected from all other software on the system. These capabilities provide the protection mechanisms, rooted in hardware, that are necessary to provide trust in the application's execution environment. In turn, this can help to protect vital data and processes from being compromised by malicious software running on the platform. To learn more about Intel® TXT visit these URLs: Trusted Execution Technology Overview (PDF 83KB), Trusted Execution Technology Architectural Overview (PDF 184KB)
  • Intel® Virtualization Technology (Intel® VT): In an IDC report titled Optimizing Hardware for x86 Server Virtualization, IDC stated that “virtualization offers a myriad of benefits to an enterprise.” Beyond the obvious CAPEX and OPEX savings from server virtualization and consolidation, the high availability, fault tolerance, disaster recovery, and workload balancing offered by this technology can help deliver a resilient cloud based solution.
  • The next generation of virtualization brings advanced management tools for greater automation and orchestration of the datacenter and promises to further reduce operational costs and improve service levels. These new technologies will help you rapidly save, copy, and provision a virtual machine, enabling zero-downtime maintenance and supporting new "go live" initiatives.  Dynamic sharing of idle resources across server platforms will improve performance and utilization while eliminating stovepipes.  Seamless failover when a virtual server component fails will lead to higher system availability.  Net/net, with Intel virtualization technologies your PLM cloud computing solution will deliver predictable service levels.  Here is a cute video on virtualization I am sure you will enjoy - Intel® Virtualization Technology for servers.  To learn more about Intel® VT visit this URL: http://www.intel.com/technology/virtualization/technology.htm.

 

  • Intel® Virtualization Technology (Intel® VT) FlexMigration - Flexible migration technologies enable cloud service providers to easily move workloads across multiple generations of processors without disrupting services. Performing live migrations from a newer-generation processor with a newer instruction set to an older-generation processor with an older instruction set carries the risk of unexpected behavior in the guest. In 2007 Intel helped solve this problem by developing Intel® Virtualization Technology (Intel® VT) FlexMigration. By allowing virtual machine monitor (VMM) software to report a consistent set of available instructions to guest software running within a hypervisor, this technology broadens the live-migration compatibility pool across multiple generations of Intel Xeon processors in the data center. This reduces the challenges to IT in deploying new generations of hardware, enabling faster utilization of servers with new performance capabilities as they become available. (A small sketch of the compatibility-pool idea appears after the I/O bullets below.)

 

  • Accelerating I/O performance and enabling more efficient migration - Virtualization solutions are inherently challenged in the area of network I/O because the guests on a host server all share the same I/O resources. Moreover, many I/O resources are emulated in software for consistency and decision-making (e.g., network packet routing from the shared I/O resource is often done in software).

Intel improves availability through a number of technologies that accelerate I/O performance. This enhances the ability to deploy I/O intensive workloads (beyond simple consolidation) and increases efficiency in Virtualization 2.0 usage models such as load balancing, high availability, and disaster recovery (all of which extensively rely on data transfer over the network). Intel’s I/O technologies for improving data transfer include:

    • Intel® Virtualization Technology (Intel® VT) for Connectivity (Intel® VT-c) provides unique I/O innovations like Virtual Machine Device Queues (VMDq), which offloads routine I/O tasks to network silicon to free up more CPU cycles for applications and delivers over 2x throughput gains on 10 GbE.
    • Intel® Virtualization Technology (Intel® VT) for Directed I/O (Intel® VT-d) delivers scalable I/O performance through direct assignment (e.g., assigning a network interface card to a guest) and enables single root input/output virtualization (IOV) for sharing devices natively with multiple guest systems.
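Conceptually, the compatibility pool FlexMigration enables behaves like the intersection of the hosts' instruction-set feature lists: guests are shown only what every host in the pool supports. The Python sketch below is just that analogy, with hypothetical host names and feature sets; it is not how a VMM actually implements the feature masking.

```python
# Analogy only: a migration pool can safely expose to guests just those CPU
# features that every host in the pool supports (the intersection).
hosts = {
    "rack1-nehalem-ep":  {"sse2", "sse3", "ssse3", "sse4_1", "sse4_2"},
    "rack2-westmere-ep": {"sse2", "sse3", "ssse3", "sse4_1", "sse4_2", "aes", "pclmul"},
}

pool_baseline = set.intersection(*hosts.values())
print(sorted(pool_baseline))   # guests written to this baseline can migrate anywhere in the pool
```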

One last comment on PLM and Cloud Computing - Privacy and Policy

Intel recognizes that many aspects of successful policy implementation depend on software and hardware development from third-party providers whose implementations are outside Intel’s direct control. Intel believes adherence to these or equivalent policies is critical to delivering the full benefits of Intel Trusted Execution Technology and other complementary security technologies, and will vigorously encourage our fellow travelers in the industry to internalize and implement these policies. For details on these policies visit http://www.intel.com/technology/security.

Like many businesses, when Intel IT initially embarked on virtualizing our data center environment, we expected it would deliver tremendous value, but we did not know exactly how it would affect our operations, infrastructure investments, platform standards, and financial ROI... now we do.

 

Two of my peers in Intel IT, Bill Sunderland and Steve Anderson, published a paper last week that discusses the key learnings from our last four years and where we are going in the next few.

 

Some interesting tidbits I took away were:

 

  • Break-even financial ROI occurred at ratios of less than 4 VMs per server
  • Implementing our virtualization strategy required investments across all aspects of the data center, including server, storage and network
  • Intel IT averages 10:1 consolidation for servers and 15:1 for storage, with opportunity for even higher density moving forward
  • Appropriate server sizing depends on a variety of variables, including risk tolerance, flexibility, cost and financial goals

 

The bottom-line conclusion of the implementing-virtualization white paper is that Intel IT will accelerate our virtualization efforts to more environments and at a faster pace in 2010, continuing to utilize the latest generation of 2S and 4S Intel Xeon based servers.

 

To see how virtualization fits within our larger data center operations strategy, read more about our Intel IT Data Center Solutions

 

Chris

The ability for a server system to deliver more than one million I/Os per second (IOPS) on standard server components is here today.  Last week, Intel and Microsoft hosted a webcast to discuss these latest iSCSI performance results.  The latest Intel hardware for CPUs and I/O, along with Microsoft Windows Server 2008, can deliver capability that only a year ago seemed a distant possibility.  All Intel NICs ship (and have shipped for some time) with iSCSI support built in, and Microsoft's native initiator support, along with the increase in server CPU performance, has made mainstream, ultra-high-performance network storage possible.

 

The key takeaway is that performance storage over the LAN through a native OS storage stack provides simplicity, consistency, and scalability while still maintaining very good performance.  In fact, as discussed on the webcast, one million IOPS is more than any application really needs today, but tomorrow may be here sooner than we think.  As storage growth continues, the ability to standardize the datacenter on a common, low-cost interconnect with a consistent manageability structure is increasingly valuable to IT.  The knock on iSCSI storage has always been that Ethernet isn't the ideal protocol from a performance perspective, but iSCSI seems to be rising to the challenge at hand.

 

I should probably delve into the simplicity angle here a bit more, because in addition to having this iSCSI capability on standard server components, the native stack management and the convergence of both storage and LAN onto a single interface dramatically reduce the complexity (and cost) of deployment.  Additionally, virtualization is a key driver of storage growth, and with iSCSI, virtualization performance can be enhanced.  Because it's based on a standard network infrastructure, the benefits I've highlighted before with respect to Virtual Machine Device Queuing (VMDq) apply directly to iSCSI traffic as well.  With iSCSI you get mainstream OS support natively from Microsoft, standard server components available from many vendors based on Intel CPUs and networking, the ability to converge your storage and LAN networks to reduce cost and complexity, and a complete solution that supports all the latest virtualization capabilities.

 

If you don't like the sound of that, I'm not sure what else to say.  One million IOPS usages may be here faster than we think; luckily hardware and software performance is ready for the challenge!

 

And if this piqued your interest, register for the Intel/Microsoft webcast and watch the archived version.  For those of you who need it, there is a little background on storage over Ethernet in an old blog of mine covering iSCSI and FCoE.

 

Ben Hacker


Server Upgrade Gets an A+

Posted by E_Hitzke Jan 18, 2010

I went to Georgia (the US state, not the country) for the first time last week. The state’s reputation for great BBQ and fried food is well deserved, so I was not disappointed! But I wasn’t there just to eat well. I went to visit the Brookwood School and learn more about their server upgrade, along with a video crew to capture their upgrade of 180 PCs and 4 servers. I’ll be sharing the footage soon, but in the meantime, I wanted to share some key learnings while the experience is still fresh (and fried) in my mind.

 

 

Familiar pain points get a failing grade
The school’s resident technology coordinator (and calculus teacher), Keith Massey, described the situation before the upgrade, and it was not pretty:


- Slow response times from the email and database servers
- Corrupted Operating System
- Servers would shut down and wouldn’t start again
- Constant risk of losing critical data

 

Server management also provided some challenges:


- No dedicated IT staff. Responsibility for IT was shared between several department heads. This team of individuals was saddled with the job of maintaining the network along with their normal jobs of running the school.
- Management of the existing system of servers was complex, often requiring specialized skill sets for different brands of servers, and an increasing amount of time was spent “dealing” with each individual physical server.
- No remote support. There was no single source of out-of-band management capability for existing servers.  If work needed to be performed, it had to be done onsite, which was a big waste of the staff’s limited time.
- Growing needs for IT support. As the servers aged, the demand on the IT staff had grown exponentially, stealing time from their regular jobs.

 

Multiple Choice: what should the school do?
(a) Upgrade the Server Performance
(b) Virtualize the Servers
(c)  Adopt Managed Services
(d) All of the above


Jason Bellflowers, CEO of Virtual World Technologies, an Intel® Channel Partner and a managed services provider, conducted a complete network evaluation and performance analysis. Besides the day-to-day issues, he realized some servers were underutilized while others were dangerously overtaxed.

Bellflowers recommended virtualizing the servers to allocate the workload among virtual servers. He chose the latest Intel® Xeon® processor 5500 series for its reliability, intelligent performance and flexible virtualization, and the Intel® Modular Server for its ease of management and built-in expandability for future growth. The server upgrade was done practically overnight and was transparent to the end users. And now Virtual World is managing Brookwood’s PCs and servers, which has lifted a huge burden off the shoulders of the school’s staff.

 

Study these benefits and earn better ROI
The testimony to the success of Brookwood’s server upgrade came from Mike Notaro, the school’s headmaster, “I don’t hear about the servers anymore.” He appreciates this, because as he says, “We are in the business of education, not in the business of IT!”
With the upgrade:
- Teachers, faculty and staff can focus on educating students instead of fixing IT issues
- Users benefited from dramatically improved server response times
- The virtualized server solution cost 30% less than a 1:1 server replacement
- They saved on the electric bill

 

Your homework: Do a network assessment
Contact a trusted Intel solution provider today to have your IT situation assessed: http://premierlocator.intel.com

They can help you identify the right solution for your needs. And don’t forget, investing in Intel-based servers is smart and the advantages will add up.

 

[Photo: Mr. Notaro at his desk during the video shoot.]

If you missed it last week at CES, Intel launched a new line of Intel Core processors and had Mia Hamm show how Turbo Boost works through a very cool exercise demonstration.  Check out one of the write-ups here (among many others on the web):  http://scoop.intel.com/2010/01/exercise-your-core-mia-hamm-core-street-teams-niketown-run-more.php

 

It reminds me of another blog from last March when we launched the Xeon® 5500 platform that was the first line of CPUs to support Turbo Boost (http://communities.intel.com/community/openportit/server/blog/2009/03/30/a-heart-to-heart-talk-on-turbo-boost)

 

…but hey, it doesn’t matter if you’re running, playing soccer, or working out on an exercise machine – since it’s the beginning of the year, make a New Year’s resolution to have your servers get some more exercise with Turbo Boost – they’ll thank you for it.

It is winter-time in the US and most of us are thinking about staying warm.

 

However, many IT professionals, especially facilities teams, are constantly thinking about keeping their data centers cool.

 

What if IT started to think like everyone else and we allowed the data center to warm up a bit? What risks would that bring? Intel IT has been testing and evaluating our ability to adjust the temperature of our data centers, and the findings are interesting.

 

Alon Brauner (Regional Data Center Operations Manager, Intel IT) talks about his experience on this project in this video on IT sustainability.  Alon has found that turning up the temperature a little in the data centers (like turning your lights off at home) is saving Intel IT money while keeping the data centers and the equipment within specification.

 

For more IT Sustainability best practices and lessons learned from Intel IT, visit our website (keyword: sustainability) or start with the Intel IT Sustainability Strategy

 

Chris

Customers have spoken, and Intel has listened - there are now three different methods for setting up the BMC (Baseboard Management Controller) on an Intel Xeon 5500 series server platform, simplifying installation for these servers.

 

 

 


Intel® Server Board    IPMI 2.x Compliant    Intel Intelligent Power Node Manager
S5500HCV               X
S5500BC                X
S5500HV                X
S5500WB                X                     X
S5520HC                X                     X
S5520UR                X                     X

 

  • ipmitool - this is the most common tool used by system administrators to set up their BMCs en masse.  Many end users have scripts in place to deploy and configure the BMC no matter which platform is being used.  The iBMC is IPMI 2.x compliant and will also accept open-ipmi commands for configuration.  Shown below is a Windows-based version of ipmitool performing a simple chassis status query on the S5520UR platform. (A short scripting sketch follows this list.)

[Screenshot: ipmitool performing a chassis status query on the S5520UR platform.]

 

  • BIOS configuration - this is new as of BIOS40 on the 5500 series platforms, and a welcome change.  For many of you who often use the BIOS to configure platform technologies, you can now also adjust the manageability settings of the BMC in the BIOS.

[Screenshot: BIOS Server Management BMC setup screen.]

 

  • Intel Deployment Assistant CD - this setup CD has been around for some time, and many users like the refreshed interface.  It's a simple bootable ISO that allows for configuration of BIOS, manageability, RAID and even OS preparation.  You can also save profiles to speed your deployment process across multiple servers.

[Screenshot: Intel Deployment Assistant configuration interface.]
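For the scripted approach mentioned in the first bullet, here is a minimal sketch that drives ipmitool from Python over the LAN interface. The addresses, credentials, and the SNMP community change are placeholders; adapt the channel number and settings to your own boards.

```python
# Sketch only: drive ipmitool from Python to check and configure a batch of BMCs,
# assuming ipmitool is installed and the BMCs are reachable over LAN channel 1.
import subprocess

BMC_HOSTS = ["192.168.10.21", "192.168.10.22"]   # placeholder BMC addresses
USER, PASSWORD = "admin", "changeme"             # placeholder credentials

def ipmi(host, *args):
    """Run one ipmitool command against a remote BMC and return its output."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", USER, "-P", PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

for host in BMC_HOSTS:
    print(ipmi(host, "chassis", "status"))   # same query as the screenshot above
    print(ipmi(host, "lan", "print", "1"))   # show current LAN channel 1 settings
    ipmi(host, "lan", "set", "1", "snmp", "public")   # example setting change (placeholder value)
```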

 

Let us know how your BMC setup process works - how do you do setup?  Do you have any recommendations or tips?  Thanks for reading!

The migration seminars we hosted in different cities late last year received very positive feedback, and we had requests from customers to run them again.  Thank you to those who attended.  In response to that popularity, we are going to Atlanta…again…and will host another RISC/UNIX migration seminar together with friends from Red Hat and Dell.

 

Register today! 

Come and learn about the steps to take to migrate a UNIX/RISC environment to RHEL/Intel.

 

Date:  January 20, 2010

 

Location: 

Seasons 52
By the Perimeter Mall
90 Perimeter Center West
Dunwoody, GA 30346
Phone: (770) 671-0052
http://www.seasons52.com/locations/perimeter2.asp

 

Get the information you need to start planning your migration from RISC/UNIX to Red Hat/Dell/Intel and lower your TCO:

  • Proven best practices to ensure smooth migration, planning and implementation
  • An illustrative roadmap showing the estimated timeframe and costs for your migration
  • An effective migration plan for you and your IT staff

 

 

Registration link in case above hyperlink does not work:  https://inquiries.redhat.com/go/redhat/AtlantaLuncheon2

I wanted to drop by the Server Room (it has been a while!) to let everyone reading know that this coming Thursday, January 14th, Intel and Microsoft will host a webcast on server connectivity to iSCSI SANs and announce a breakthrough in iSCSI networking performance.


We’ll discuss how:


  • Every server ships ready for immediate iSCSI connectivity, with iSCSI support embedded in Intel® Ethernet Server Adapters and the Microsoft server operating system, in both physical boot and virtualized environments.
  • Intel and Microsoft collaborated on the latest releases - Windows Server 2008 R2, the Intel® Xeon® 5500 processor server platform, and Intel® Ethernet 10GbE Server Adapters - to deliver breakthrough performance and new I/O and iSCSI features.
  • Intel and Microsoft are ensuring that native iSCSI products can scale to meet the demands of cost conscious medium size businesses and large Enterprise class data centers.
  • Intel® Ethernet Server Adapters provide hardware acceleration for native Windows Server network and storage protocols to deliver great iSCSI performance, while allowing IT administrators to use native, trusted, fully compatible OS protocols.  This is critical because for any protocol or technology to succeed on Ethernet, it must adopt the plug-and-play, economies-of-scale characteristics of the ubiquitous Ethernet network.  Native iSCSI in the OS and Intel adapters does just this, which has led to the tremendous growth iSCSI has experienced over the last 3 years.
  • Intel and Microsoft are addressing the challenges of server virtualization network connectivity with Server 2008 R2 to deliver near-native performance and features.


Please join Jordan Plawner, Intel Senior Product Planner for Storage Networking and Suzanne Morgan, Microsoft Senior Program Manager, Windows Storage, to hear the big news. 


Register for the webcast now!


After the webcast I'll be back to blog a quick summary, but I strongly encourage you to tune in... some pretty amazing developments here!


-- Ben Hacker


What happened to vConsolidate?

Posted by gwagnon Jan 12, 2010

As one of the 'owners' of vConsolidate for nearly the last 3 years, I would like to clarify just what we (Intel) did with the benchmark, as a little insight for the folks who wrote and read this article:  Does anyone care about virtualization performance in 2010?

 

In early 2009, we stopped development and maintenance of the benchmark.  It is, however, still used by several companies for their own internal testing and evaluation of server configurations.  We no longer support external publication of vConsolidate benchmark results.

 

Prior to this, it was available to customers via direct contact with me and my team.  I have a list (that I cannot share, sorry) of companies (OEMs, ISVs, finance, tech, medical, auto, insurance, and others) who requested access over the last few years and used vConsolidate in their own test lab environments.  That was, after all, a primary goal of the benchmark: internal evaluation of server systems for a virtualization environment.  Its goal from the start was not to become an industry-standard benchmark; rather, it was always designed and maintained essentially as a test tool, although, for lack of any other solution, it crossed that boundary on some occasions (with approvals, of course).

 

In terms of virtualization solutions, VMmark came about at roughly the same time and covered roughly the same test scenarios as vConsolidate.  The most often discussed comparison is that VMmark focuses on VMware environments, whereas vConsolidate allowed for multiple hypervisor configurations.  One key difference that most people do not often call out is that VMware made a benchmark for publicly evaluating server configurations, while we made vConsolidate as a test tool for internal labs to evaluate server configurations.  Customers needed something to test with in non-VMware environments, so vConsolidate was updated slightly, given a GUI, and offered as a tool they could use.  But it was still primarily for internal evaluation.

 

Why did we stop development, maintenance, and essentially offering vConsolidate as a virtualization test solution?  Well, the short answer is that SPEC has its own virtualization benchmark forthcoming that will supersede what vConsolidate offers.  We fed a lot of what we learned from our efforts on vConsolidate to the SPEC committee; as members, we leverage our experience.  Ultimately, we decided that supporting an industry accepted and developed benchmark is better for customers.

 

As for the question of whether anyone cares about virtualization benchmarks in 2010: I guarantee the answer is yes.

 

There you have it, some insights.

On January 7, Intel, Fujitsu, HP, Lenovo, NEC and other Intel channel providers introduced a new entry-level workstation that broadens workstation appeal to users who may have been purchasing consumer and business desktop computers.  This new workstation, based on Intel® Core™ i5 processors or the Intel® Xeon® processor 3400 series and the Intel® 3450 chipset, delivers a workstation experience at near-desktop prices.

If you are employing your workstation for 2D drafting and detailing then this new workstation with integrated Intel® HD Graphics may be an ideal and economical solution. Together with industry-leading 2D CAD applications like AutoCAD LT, this new workstation provides a comprehensive set of tools that will help you accurately and efficiently create, document and share drawings.  It will also give you access to a workstation platform built around the efficiency, power and reliability demanded of a professional workstation product.

Designing in 3D is here to stay, so why not use a workstation that is designed and optimized to deliver a basic 3D design experience that engineers demand? Why not provide management with an effective and affordable system that allows you to quickly and accurately manipulate part models at an interactive pace?

It’s all possible—today. Intel® Core™ i5 processors or Intel® Xeon® processor 3400 series combined with the Intel® 3450 chipset deliver real workstation features that provide a stable and reliable platform to create your designs on.

“The design goal of this category was to deliver features, functions and performance of an entry workstation at a more affordable price point,” said Tony Neal Graves, General Manager of workstation technology in the Intel Architecture Group.  “This new workstation category is intended to help SMB and Enterprise businesses extend the workstation product promise to more of their high value knowledge workers who create digital assets for their company.”

“Entry-level workstations with the new Intel Core i5 processors and Intel HD Graphics are optimal for drafting applications,” said Guri Stark, vice president of AutoCAD and Platform Products at Autodesk.  “Our AutoCAD LT users now have access to affordable workstation technology that will help them accurately and productively create, document, and share their drawings.”

 

"With its new integrated HD graphics technology, Intel Core I5 workstations give Adobe professional video users an opportunity to quickly and effectively create dynamic content” said Dave Helmly, North American TechSales Manager for Adobe professional Dynamic Media products.  “And, at a new compelling price point, the workstation will help Adobe users make their workstation more capable to keep up with their creativity.”

 
