The ecosystem is growing...

 

Sean Maloney's keynote presentation at IDF 2009 highlighted Intel Node Manager.  Here is the video from his keynote, which shows customers from Baidu, BMW, Oracle, and Telefonica who have been working with Intel on Intel Intelligent Power Node Manager.

 

 

Check out the final slide showcasing the OEM/ODM/Console providers and customers using Intel Intelligent Power Node Manager.

The webinar the Red Hat - Intel team delivered on Sep 23rd is now available for download for those who missed listening in real time.

96% of the webinar guests rated the content good or better,

91% thought the content met expectations, and

in general, the audience requested more technical "how-to's" on UNIX/RISC to RHEL/Intel migration.

Enjoy...

Really good case study for a leading Turkish bank that used the Xeon ROI tool to justify their server refresh with Xeon 5500 and 7400 platforms:  http://communities.intel.com/docs/DOC-4114#cf

bdgowda

The One Million IOPS game

Posted by bdgowda Sep 26, 2009

A few months back I saw a press release on Reuters from Fusion-io and HP claiming to hit 1 million IOPS with a combination of five 320GB ioDrive Duos and six 160GB ioDrives in an HP ProLiant DL785 G5, a 4-socket server with 4 cores per socket, for a total of 16 cores. My first reaction was: wow, that is amazing; a million IOPS is something any DBA running a high-performance database would love to get their hands on. But when I did a quick search on the Internet for how affordable the solution would be, I was horrified to see a cost close to a couple of Mercedes E-Class sedans. Although the performance was stellar, the cost and the 2KB chunk size made me ask which application does 2KB reads/writes anyway; the default Windows allocation unit is 4KB.

As time went by I got busy with other work, until our NAND Storage Group told us they were coming up with a PCIe-based product concept to show a real 1 million IOPS at the 4KB block size that real-world applications actually use. This triggered the thought: what does it take to achieve 1 million IOPS using generally available off-the-shelf components? I hit my lab desk to figure it out.


Basically, getting a million IOPS depends on three things:

1. Blazing fast storage drives.
2. Server hardware with enough PCIe slots and good processors.
3. Host bus adapters capable of handling the significant number of IOPS.


Setup:

Intel Solid State Drives were my choice; a lot has been discussed and written about the performance of Intel SSDs, so it was an easy choice to make. I selected Intel X25-M 160GB MLC drives built on the 34nm process. These drives are rated for 35K random 4KB read IOPS and seemed like a perfect fit for my testing.

Then I started searching for the right dual-socket server. The Intel® Server System SR2625URLX, with five PCIe 2.0 x8 slots, provided enough slots to connect the HBAs. The server was configured with two Intel Xeon W5580 processors running at 3.2GHz and 12GB of memory.

The search for the HBA ended when LSI showed their 9210-8i series (code-named Falcon), which is rated to perform 300K IOPS. These are entry-level HBAs that can connect up to eight drives to their eight internal ports.

Finally, I had to house the SSDs somewhere in a nice-looking container, and a container was necessary to provide power connectivity to the drives. I zeroed in on the Super Micro 2U SuperChassis 216 SAS/SATA HD BAY. It came with dual power supplies and no board inside, but it let me simply plug the drives into the backplane without worrying about powering them. The other interesting thing about this chassis is that it comes with six individual connectors on the backplane, so each connector handles only four drives. This is very different from active backplanes, which route the signal across all the drives connected to them; it allowed me to connect just 4 drives per port on the HBA. I also had to get a 4-slot disk enclosure (just some unnamed brand from a local shop), so in total I had the capability to connect 28 drives.

With all the hardware in place, I installed Windows Server 2008 Enterprise edition and Iometer (an open source tool to test I/O performance). Two HBAs were fully populated, using all 8 ports on each, while the other 3 HBAs were populated with only 4 ports each. The drives were left without a partition. Iometer was configured with two manager processes and 19 worker threads, 11 on one manager and 8 on the other. 4KB random reads were selected, with sector alignment set to 4KB. Iometer was set to fetch the last update on the result screen.
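
Before getting to the results, here is a quick back-of-the-envelope check on the drive math (a sketch of my own, assuming each drive delivers roughly its rated 35K IOPS; it is not part of the test procedure):

```python
# Back-of-the-envelope drive math: each X25-M is rated ~35K
# random 4KB read IOPS, so how close can N drives get to 1M?
DRIVE_RATED_IOPS = 35_000
TARGET_IOPS = 1_000_000

for drives in (24, 28):
    aggregate = drives * DRIVE_RATED_IOPS
    print(f"{drives} drives -> {aggregate:,} theoretical IOPS "
          f"({aggregate / TARGET_IOPS:.0%} of target)")

# 24 drives -> 840,000 theoretical IOPS (84% of target)
# 28 drives -> 980,000 theoretical IOPS (98% of target)
# Crossing 1M with 28 drives means the drives delivered slightly
# above their conservative 35K rating.
```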

 

 

 

[Image: chart.gif]

 

[Image: clip_image002.gif]

 


Result:

Once the test started with 24 drives, I felt I was a few thousand IOPS short of reaching 1M, so I found the 4-bay enclosure to connect another 4 SSDs, taking the total number of SSDs to 28. The result was a million sustained IOPS from the server, with an average latency of 0.88 ms and 80-85% CPU utilization.  Please see the pictures for a more pictorial representation of the setup.

Conclusion:

Recently we demonstrated this setup at Intel Developer Forum 2009 in San Francisco, where it grabbed the attention of many visitors because it is something an IT organization can realistically achieve without a large initial investment. The good thing about this setup is the availability of the parts and equipment on the open market. At Intel we wanted to get the thought started that high-performance storage is possible without robbing a ton of money from your IT department's budget. Once storage admins get an idea of what is possible, the industry will take a more innovative approach to expanding and trying out new setups using off-the-shelf components.

Next Steps:

I will be spending some time getting this setup running with a RAID configuration, and possibly using a real-world application to drive the storage. That needs a lot of CPU resources, and I have in mind one upcoming platform from Intel that will let me do it. I'll follow up with the results of those experiments.

 

-Bhaskar Gowda.

…on my way to a customer meeting, and the thought dawns on me: why is the car I'm getting into a relatively new, clean 2008 compact car and not a 1966 Chevy Impala, which probably has enough steel to dramatically distort the earth's local magnetic field? Well, the reasons are fairly simple:

 

 

  • Newer cars are more reliable and require less maintenance - cars in the shop don’t make the rental car agency money, and don’t make customers happy if they break down
  • Newer cars are typically more fuel efficient - that '66 Impala's gas mileage might be quoted in gallons per mile :-)
  • Newer cars typically fall under a manufacturer warranty

 

 

Servers aren't much different from rental cars.  It's all about keeping your business running smoothly, minimizing your operating costs, and keeping your customers happy.  While I'm guessing not many of today's data centers have the server equivalent of a '66 Impala in them, there are probably a bunch ready to be removed from the rental car fleet.

 

 

Think about it on your next business trip, and check out the benefits of refreshing servers that are only 3 or 4 years old with the Xeon® ROI estimator tool (link:  www.intel.com/go/xeonestimator).

See this video from IDF 2009, San Francisco.

 

Sean Maloney demonstrates new features coming with the next generation Intel Xeon processor for 4S+ server configurations, Nehalem-EX.  Sean focuses on the unique scalability and RAS capabilities newly introduced into the platform.

 

Paul Otellini on Monday said it is the democratization of data.  With these capabilities, Intel Xeon processor based servers are ever more relevant to any type of workload a data center supports.  The economics of standards-based Intel architecture platforms will in effect provide another choice for data center operators running the most demanding and mission-critical workloads, where expensive, legacy proprietary architectures like RISC are no longer the sole choice.  This choice proposition is very powerful, as cost reduction is the foremost concern to be tackled by data center operators and IT managers.

 

The Nehalem architecture brought performance and efficiency.  Nehalem-EX will bring, on top of that, RAS capabilities and an increased variety of OEM system designs.  In addition, ISVs will be ready to reflect the hardware features in their software products.  It is a game changer, a turning point for the industry, where Intel is providing data centers with the opportunity to standardize ALL workloads, including the most mission critical, on Intel Xeon processor based infrastructure.

 

 

In my previous post Fall IDF: Is Italian Pasta the Actual Inspiration for Server Virtualization? I talked about the evolution of Server I/O virtualization. I mentioned a few demos and invited you to check them out. But I didn’t give any details about the demos…

 

Well, it's the 3rd and last day of IDF today and the demos have been running for 2 days now. You have one more day to check them out! So let me describe them quickly.

 

Dell has been a great partner for these demos. We're showing 2 demos together in booths 709 and 711 in the Virtualization Community, using Dell's R710 servers, based on the Xeon 5500 platform, with the Intel 82599 (Niantic) and Virtual Machine Direct Connect (VMDc). The 1st demo is with VMware and their Network Plug-In Architecture (NPA) technology.

 

[Image: VMware demo]

 

 

When you visit the demo, check out the great CPU utilization as well as VMotion* among heterogeneous server configurations!

 

The 2nd demo is delivered with Citrix, showing scalable direct assignment by using XenServer with VT-d and SR-IOV support. The overall performance is really great and live relocation of virtual machines is working nicely.

 

[Image: Citrix demo]

 

 

 

Another demo (booth 707) is delivered by Red Hat, featuring RHEL 5.4 with KVM (shipping SW with VT-d and SR-IOV support!), Neterion with their 10GbE NIC, all running on an Intel Xeon 5500 server. Look out for the performance value shown for scalable direct assignment!

 

[Image: Neterion demo]

 

And finally, a storage (not LAN) demo! Using the same combination of VMM and server, in booth 517, LSI is showing the value of scalable direct assignment for a RAID controller. The performance boost is fantastic!

 

[Image: LSI demo]

Check out these demos at the IDF Showcase... I’d love to hear your impressions!

As the Intel Developer Forum (IDF) showcase manager for Intel's server group, I am always interested in how IDF attendees respond to our demo showcase.

 

Yesterday (Day 2 of IDF), after lunch, I went to the Advanced Technology Zone (ATZ) to observe the showcase floor.  The ATZ is located in the public hallway of the venue and contains demos of the latest Intel technologies.  This year, we have 2 Nehalem-EX 4-socket server demos in the ATZ.

 

As soon as I walked into the ATZ, I saw 2 Asian attendees standing in the corner discussing one of our Nehalem-EX 4-socket demos.  As an Asian myself, I walked up to them to say hello.  They were reporters from Vietnam TV, the national broadcaster, which has an education channel covering topics including technology and innovation.  According to the reporters, the channel broadcasts programs 24 hours per day.  They have been covering new Intel technologies at IDF since the Spring 2008 IDF in Shanghai.

 

The demos they were interested in were the Nehalem-EX 4-socket demos.  They wanted to see this 8-core, 2.3-billion-transistor platform and the applications that leverage this 32-core machine.  This is an amazing new Intel platform targeted for release in early 2010.  OEMs can design 2-socket to 8-socket Nehalem-EX platforms gluelessly, and higher configurations with their own node controllers.  Currently, we are expecting 15 systems with 8-socket-and-above configurations from 8 OEMs to come to market at launch.

 

The 4-socket system supports up to 1 terabyte of main memory.  With Intel® Hyper-Threading Technology, it can present up to 64 logical CPUs.  It also contains many reliability features, such as Machine Check Architecture (MCA) recovery, to recover from memory errors without an OS blue screen or crash.  These features address mission-critical business continuity and performance requirements.

 

One of those mission-critical environments is the New York Stock Exchange (NYSE) trading solution provided by NYSE Technologies.  This environment is showcased at this year's IDF: with Nehalem-EX scalability, NYSE Technologies demonstrates trading-in-a-box capability.  This new trading solution consolidates multiple tiers of servers into a single Nehalem-EX 4-socket system, enabling the ultra-low latency that is highly desirable in stock market trading.  NO NETWORK HOPS; NO I/O INTERRUPTS; NO WASTED CLOCK CYCLES.

 

These 2 Vietnam TV reporters conducted interviews and video capturing on the 2 Nehalem-EX demos for over 30 minutes.  Judging from their engaging questions and their eye-opening facial expressions (Asian audiences are usually quite reserved in their public expressions), it seemed they realized they were witnessing the history of computing opening a new chapter with the Nehalem-EX platform.  This type of excitement brings joy to me as a program manager after almost 5 months of hard work planning the IDF showcase.

 


 

About Hugh Mercer: I am a sales development manager in Intel’s Enterprise Solution Sales group. One of my responsibilities is working with Intel’s Server Platforms Group to identify, develop and highlight success stories around Intel’s server platforms and technologies.

Every day, Intel® technology and platforms help companies solve business problems and challenges. Here are a few of the growing number of stories and reasons for choosing Intel processors and technology.

 

Winning: Rheinisch-Westfälische Technische Hochschule (RWTH)

Leading German university turns to Intel® Xeon® processor 5500 series for high-performance computing

Read about it here

The results:

  • Implemented a small server farm. The Intel Xeon processor series performed more powerfully than RISC architectures.

  • 2010 scale-out. In 2010, the university plans to implement some 400 more systems with over 20,000 cores, powered by the upcoming Intel Xeon processors code-named Nehalem-EX.

 

Winning: Alyotech

Alyotech turns to Intel® Xeon® processor 5500 series to deliver insightful design improvements

Read about it here

The results:

  • Alyotech benchmarked the new processor, based on the 45nm hi-k next-generation Intel® Core™ microarchitecture, and increased performance by 65 percent over the previous generation's dual-core servers.

Winning: Atos Origin

Intel® Xeon® processor 5500 series helps Atos Origin lower total cost of ownership of its data centre environment.

Read about it here.

The results:

  • Atos Origin compared the performance of the Intel Xeon processor 5500 series with four cores to that of the previous generation with just two cores. It found, on average, 2.4x greater transaction throughput running a web server, 1.75x running a database server and 1.25x running an email server.

 

Winning: Business and Decision Group

Business and Decision Group powers forward with huge virtualization project underpinned by the Intel® Xeon® processor 5500 series.

Read about it here

The results:

  • Early results showed that with the Intel Xeon processor 5500 series they could achieve virtualization rates of 20:1, with a processor load slightly below 55 percent.
  • Power consumption was reduced by approximately 30 percent compared to the previous generation of processors.

 

 

Winning: Onkosh.com

Intel® Xeon® processor 5500 series boosts performance of unique Arabic search engine Onkosh.com

Read about it here

The results:

  • Onkosh.com has already witnessed a performance increase of around 20%. This increase was made possible by the new microarchitecture with Intel Turbo Boost.

  • Onkosh.com is now able to grow about 300% in terms of its ability to crawl and parse new Arabic content automatically discovered on the World Wide Web.

 

Winning: BMW

Migration to Intel® Xeon® processor 5500 series lowers total cost of ownership and increases flexibility

Read about it here

The results:

  • BMW Group is deploying Dell PowerEdge* servers powered by the Intel® Xeon® processor 5500 series, which will replace a RISC-based infrastructure that has much higher costs, lower performance and less flexibility.

  • This allowed BMW Group to increase workload to more than 80 percent and to significantly decrease the total cost of ownership (TCO).

 

Winning: Société d'Exploitation des Transports de l'Agglomération Orléanaise (SETAO)

SETAO turns to Intel® Xeon® processor 5500 series to strengthen and build on its service offerings.

Read about it here

The results:

  • "Thanks to the Intel® Xeon® processor 5500 series and the VMware hypervisor, SETAO is now able to provide mainframe-class quality of service and ensure easy deployment of new virtual machines and applications while reducing total cost of ownership." - Olivier Parcollet, Chief Technology Officer, SETAO

  • SETAO estimated that it could save approximately 40 percent on energy costs due to the higher server consolidation ratio and better CPU energy consumption management.

From North to South America, from Europe to Asia Pacific and Japan, every end user and service provider I spoke to is interested in the cloud computing concept, but the degree of readiness for adopting cloud computing varies - some are limited to just the concept, others are already putting it to good use!

As usual, the US market is leading the way: a few enterprises, such as Eli Lilly and Johnson & Johnson, are already using the public cloud (Amazon EC2) for their research and development environments. In addition, cloud service providers such as RightScale and Amazon recently announced enterprise-focused virtual private clouds, expanding beyond public cloud offerings to exclusive private cloud services that address the needs of enterprises and pave the way for wide-scale adoption of cloud computing!

There is no doubt that cloud computing is here to stay, and it will impact all aspects of computing, from application and infrastructure architecture, development and deployment, to the enterprise IT operational environment. But a lot more needs to be done to convince the majority of end users to trust cloud service providers with their computing and data: security, data governance, end-to-end intelligent management and monitoring, and high availability & reliability, to name a few!

Join us today to discuss the business models, architectures and challenges of delivering and consuming cloud services in the “Understanding the Cloud: Business and Usage Models, Architectures and Implementations” session (PDCS002).

Take a look at this video segment from Paul Otellini's keynote today at IDF. Very interesting on where the technology is headed and what consumers wish technology could do for them. Very cool stuff...

 

 

I love it when simple concepts formulate a new best practice or technology direction. There's a certain artistry to it, and for whatever reason it gives me hope that not everything these days needs to be uber complex to be innovative.  That's why I was very excited when Mike Patterson, resident data center efficiency genius at Intel, told me about a new collaboration with LBNL, IBM, HP, and Emerson called ACE - or Adaptive Cooling Environment.  ACE drives major improvements to data center facility efficiency... using technology that exists in data centers across the world today.  And because its foundation is industry-standard technologies, there's a good chance that the vast majority of data center managers can use it across their server installations soon.  Find out more about ACE and what Mike had to say about the collaboration here.

I am an IDF veteran...I've attended too many IDFs to remember and I have developed a couple of truisms about the event.

 

#1 Geeks will be on hand.  As one of them, it's kind of like spending a week with my people.  People who get more excited about, say, the latest I/O bus than Milan's latest fashions.

 

#2 The Geeks on hand will not be disappointed.  If there is one thing Intel can be counted on to do this week, it's showcasing some pretty cool technology across all of the market segments we operate in.  We'll also bring some very big-name industry leaders on stage to talk about what they're doing to create our collective future.

 

In the past couple of weeks I've been travelling across the world working on a few last-minute items for the event, and I happened to be in Munich for the Intel/T-Systems Data Center 2020 Test Lab opening event.  This was a big announcement by T-Systems (the enterprise arm of Deutsche Telekom and sister organization to T-Mobile) and Intel.  The Data Center 2020 Test Lab was developed by the two companies with the express purpose of developing data center best practices for tomorrow's data center requirements.  The initial focus of the lab will center on data center efficiency best practices, specifically cooling efficiency.  And this is a fantastic thing that the industry and our customers will all benefit from... but this is not what was most exciting to me about the test lab...

 

I visited the test lab the day before the opening to get a preview and was given a tour by Herr Meier from T-Systems.  As we walked towards the lab I noticed a glint in his eye that told me he was very proud of what he was about to share with me... and soon I realized why.  If you could imagine the most ideal data center configuration for the Data Center 2020 project, you'd be close to what Herr Meier showed me.  The data center itself is small, but the features of the facility highlight the degree of data collection that can be measured and analyzed.  Features included: depressurized air simulating 10K feet of altitude, a ceiling that raises and lowers, a smoke system to track air circulation, cold aisle containment cabinets, a water-cooled rack for cross comparison, and floor tiles that allow acute control of airflow.  And enough sensors to measure every micro adjustment in efficiency gains across a multitude of testing variables.  If this sounds nearly as cool to you as it was to me, I'd encourage you to attend the joint T-Systems Intel session at IDF this week in the Eco-Technology track and see the T-Systems demo in the Eco-Tech community.  And to learn more (in English soon, too) visit www.datacenter2020.de

 

See you all at IDF!

I have a confession to make… Last year was my first IDF. Ever! I had no idea then that this year I would end up being responsible for a whole track and sponsoring the Virtualization Community zone. I was lucky that Jake took ownership of the community zone. He assembled a great line-up of demos from a variety of companies. It should be great - go see!

 

But this blog is about the Enterprise Cloud track. I set out to make it represent a theme, rather than a collection of loosely related sessions. In my view, this required a mix of depths - an overview session to explain the concepts alongside deep technical sessions. I also thought it would be a great opportunity to gather some industry leaders beyond Intel to talk about the Enterprise Cloud vision and the opportunities it presents for the developer community.

 

“What is this guy talking about?” you must be asking yourself. “What is Enterprise Cloud? Not more hype?!” Well, I think of Enterprise Cloud as a very real vision of the place where actual IT needs meet the aspirations of the Cloud Computing hype.

 

The Cloud hype is based on some pretty impressive efficiencies that several companies are said to have achieved. These companies did so by designing custom applications to run in their data centers. In some of these cases the data centers and the hardware in them were even custom architected and designed to run these applications. IT wants to gain similar efficiencies. But IT can’t throw away all the legacy applications…

 

In comes Enterprise Cloud, where IT evolves to gain the efficiencies, without losing the legacy investments…

 

In the Enterprise Cloud track we’ll cover some of the key technologies that are required for this to happen.

 

We’ll start with an overview (session ECTS001) on Tuesday at 10:15, where Dylan and I will cover key technology areas - virtualization and performance, data center efficiency, the evolution of I/O, and security - and why they are critical for the evolution of IT. What will follow are several in-depth sessions that cover those very topics:

 

·         ECTS002 – will focus on Intel® Trusted Execution Technology and explain how applications can be protected in the Enterprise Cloud environment. Check this out in Jim's blog

 

·         ECTS003 – will cover enhancements for encryption processing in upcoming CPUs. Leslie gives a really great overview in her blog

 

·         ECTS004 – will talk about technologies to improve data center efficiency. David covers one of those technologies here; check out his other blogs as well.

 

·         ECTS005 – is an in-depth review of Intel’s technologies for virtualization, and will be presented by Intel Fellow Rich Uhlig.

 

·         ECTS006 – will discuss the evolution of I/O, which is necessary to enable IT to gain the desired efficiencies. RK gives an excellent preview in his blog here

 

·         We also have a Q&A session on Tuesday evening (ECTQ001) to allow an open unscripted conversation with all the track presenters who will be around on Tuesday.

 

·         Finally, we have a VERY exciting panel (ECTP001, on Tuesday at 5pm). Jake Smith from Intel will lead a discussion with some true industry thought leaders from Cisco, Citrix, Microsoft, Sun, and VMware. The theme of the panel is “Enterprise Cloud – technologies, usages, and opportunities for the developer community”. This should be an exhilarating hour!

 

 

Along with a couple of labs, this should be a great track. See you at IDF… it all starts tomorrow!!!

First there was the Multi-Billion Dollar Automobile “Cash for Clunkers” program that I wrote about back in early August.   Then in late August we started reading more about the planned $300M state-run rebate programs for consumer purchases of new ENERGY STAR® qualified home appliances.  Appliance categories eligible for rebates include: central air conditioners, heat pumps (air source and geothermal), boilers, furnaces (oil and gas), room air conditioners, clothes washers, dishwashers, freezers, refrigerators, and water heaters.

 

The government wants to make cars and homes more energy efficient, while helping to support the nation’s economic recovery.  But what about making Data Centers more efficient?

A couple of years ago the US Environmental Protection Agency reported that the energy consumption associated with data centers had doubled between 2000 and 2006, reaching some 60 billion kWh in 2006, roughly 1.5% of entire US energy use, and the EPA expects this to double again by 2010.  The same authors of that report previously calculated that US servers currently use the same amount of electricity as all the color TVs in the country combined.

So this got me thinking: which industries have done the most to increase output per energy unit, and which products offer the most attractive paybacks when you invest in them?  The findings were interesting, to say the least.  Let's first look at the sectors creating more energy-efficient products over the last 30 years*.

  • Autos – 1978 (14.3 MPG), 2008 (20 MPG): Energy Efficiency gains = 40%

  • Airlines – 1978 (22.8 Revenue passenger MPG), 2008 (50.4): Energy Efficiency gains =  121%

  • Agriculture – 1978 (0.63 units of output per unit of energy use), 2008 (1.46): Energy Efficiency gains = 132%

  • Steel Mfg – 1978 (63 lbs of steel per MBtu), 2008 (167 lbs): Energy Efficiency gains = 167%

  • Lighting – 1978 (Incandescent light bulb – 13 lumens per watt), 2008 (Compact Fluorescent Bulb – 57 lumens per watt): Energy Efficiency gains = 339%

  • Computer Systems – 1978 (1,400 instructions per second per watt), 2008 (40,000,000 instructions per second per watt): Energy Efficiency gains = 2,857,000%

*Source:  “A Smarter Shade of Green,” ACEEE Report for the Technology CEO Council, 2008.
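
For the curious, here is how those efficiency gains are computed (a quick sketch of my own using the figures above; small differences from the quoted percentages come from rounding in the source report):

```python
# Efficiency gain = new output-per-energy / old - 1,
# using the 1978 vs. 2008 figures quoted above.
sectors = {
    "Autos (MPG)": (14.3, 20.0),
    "Airlines (revenue passenger MPG)": (22.8, 50.4),
    "Agriculture (output per energy unit)": (0.63, 1.46),
    "Steel (lbs per MBtu)": (63, 167),
    "Lighting (lumens per watt)": (13, 57),
    "Computers (instructions/sec/watt)": (1_400, 40_000_000),
}

for name, (y1978, y2008) in sectors.items():
    gain = (y2008 / y1978 - 1) * 100
    print(f"{name}: {gain:,.0f}% gain")
```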

Next let’s look at some big ticket energy efficient products that offer the most attractive paybacks on their investments.  (Note: Buying a hybrid automobile wouldn’t make this list below in terms of rapid payback, hence not included.)

The IT industry far exceeds others at increasing output per energy unit, and Intel servers also offer a faster payback on investment than other energy-efficient products (including ENERGY STAR products).  Yet there is no government stimulus package to help encourage these energy-efficiency purchases.  Simply put, this is the most energy-efficient investment that the government won't help you make.

I would be curious to hear what you think.

bryce

ENERGY STAR compliance is becoming an increasingly important factor for servers and workstations.

Accurate interpretation of the ENERGY STAR specification can be quite challenging with the ever-expanding scope of rules and regulations that govern platform and component-level design considerations.

Intel recently developed a semi-automated test tool to greatly simplify ENERGY STAR testing of servers and workstations. This tool has the following capabilities:

1. Ability to identify the relevant ENERGY STAR criteria per configuration

2. Automated assignment of ENERGY STAR test conditions and system testing

3. Preparation of system test data for analysis and ENERGY STAR submission
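
As a rough conceptual illustration of capability 1, mapping a configuration to its applicable criteria is essentially a rules lookup. The sketch below is hypothetical; none of the category names or criteria come from the actual tool or the ENERGY STAR specification:

```python
# Hypothetical sketch: pick the applicable ENERGY STAR criteria for
# a configuration. Names are illustrative only -- consult the actual
# specification for the real categories and limits.

def applicable_criteria(config: dict) -> list[str]:
    criteria = ["power supply efficiency at rated load points"]
    if config["type"] == "server":
        criteria.append(f"idle power limit for {config['sockets']}-socket server")
        if config["redundant_psu"]:
            criteria.append("redundant power supply requirements")
    else:  # workstation
        criteria.append("typical energy consumption (TEC) budget")
    return criteria

print(applicable_criteria(
    {"type": "server", "sockets": 2, "redundant_psu": True}))
```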

Feel free to stop by my booth at IDF in the Eco-Technology Community for a hands-on demonstration and to learn more.

 

Jennifer

K_Lloyd

Data Center Security

Posted by K_Lloyd Sep 19, 2009

Even the name is something of a misnomer.  Not that there isn’t a lot of physical security around most data centers.  The doors are locked and not even regular employees have access.  This is necessary, and if someone gained physical access they could really mess things up.  But this is not where the big risk typically occurs.

 

The growing challenge is data security – i.e. protection from threats that come across the wire.  With ubiquitous networks, and data moving everywhere, protecting the crown jewels is a full time job.  Hackers, malware, employee abuse, and other threats can lead to data exposure that is potentially devastating, and almost undoubtedly embarrassing for the IT manager.

 

Gartner recently declared IT security the number one worry of Fortune 1000 companies. This is not surprising when a report from Symantec shows exponential growth in internet security threats.

 

There is no silver bullet, and there is no system that can never be defeated.  We need to do the best we can with the tools we have.  Doing anything less could be seen as negligent.

 

Like security in the physical world, data security is a combination of business process and technology.  Neither can be effective alone.  Business processes must make clear which roles govern data access, data stewardship, data ownership, and data disposal.

 

<sidebar>Data disposal is going to be one of the biggest challenges to the promises of cloud computing.  If we consider a hosted app like “gmail” to be part of the cloud, then we either must accept privacy policies like “all data belongs to the host” or try to stick to using internal systems. </sidebar>

 

The other half of the security solution is technology.  Intel, and others, are delivering new technologies to the server to assist with security enforcement.  New string-accelerator functions dramatically speed content scans for malicious data.  Technologies like Execute Disable and SM range registers provide improved protection against buffer and cache attacks.  The next generation of Intel server processors will introduce new features that can validate that code is unaltered and remove much of the overhead from encryption.

 

Security cannot be an occasional focus any longer.  Every security manager will need to be up to date on the state of technology and tools, and have the social skills to drive good data practices into the work environment.

Meet the Rock Stars of PCI, PCI Express and USB Initiatives at IDF.

 

The real Ajay Bhatt and other Intel Rock Stars will be on hand to take your questions about PCIe and USB.  You might even get him to sign an Ajay Bhatt t-shirt if you ask a question that’s not too hard for him to answer!

 

  • This event is on Tuesday Sept 22nd at 6pm in room 2004, level 2.

  • For the real scoop on what’s going on in PCIe today, consider attending the following sessions on Thursday, Sept 24th in room 2003, starting at 11.10am.

 

 

 

11.10-Noon: TCIS006: PCI Express* 3.0 Technology: Device Architecture optimizations on Intel Platforms

 

This session is for developers with advanced knowledge of PCI Express* Technology and related usage models. This session will help developers comprehend platform implementation challenges and what is needed to build the ecosystem for successful product deployment.

Topics include:

 

  • Overview of PCI Express* (PCIe*) 2.1 and 3.0 technology protocol extensions as well as the power/performance benefits and applicability across various market segments
  • Implementation considerations for a selected set of PCIe Technology protocol extensions aimed at Intel based platforms
  • Software development required for these features.

 

 

1.40-2.30pm: TCIS007: PCI Express* 3.0 Technology: PHY Implementation Considerations on Intel Platforms

 

This session is intended for developers with advanced knowledge of PCI Express* Technology.

This session will help developers appreciate, understand, and address implementation challenges related to the encoding scheme.

Topics include:

 

  • An overview of Intel’s analysis of the logical layer enhancements required for the PCI Express* 3.0 technology operating at 8.0 GT/s
  • Implementation challenges and considerations for Intel based platforms.

 

 

2.40-3.30pm: TCIS008: PCI Express* 3.0 Technology: Electrical Requirements for Designing ASICs on Intel Platforms

 

This session will focus on the electrical and mechanical elements of designing PCIe with topics to include:

 

  • Overview of silicon and motherboard/add-in card design features required to support typical PCI Express* Technology one connector and two connector topologies at 8.0 GT/s
  • Analysis of studies for deeper understanding of transmit and receive equalization schemes, measurement, and testing
  • Process for determining form factor specific electrical requirements, PCB design guidelines, and add-in card and motherboard test methodologies.

 

 

3.40-4.10pm: TCIQ002: Q&A and panel on PCI Express

 

Get answers to any lingering questions you have on PCIe at this open session where no PCIe question will go unanswered!

We seem to have an insatiable appetite for all kinds of computing equipment.  I remember my parents carrying a cell phone the size of a football.  I thought they were way cool. Today, my 7 year old cousin has a BlackBerry; I'm not sure how I feel about that.  I suspect that she is more tech savvy than I am.

 

Needless to say, the use and proliferation of electronic products has grown substantially over the past two decades, changing the way and the speed in which we communicate and how we get information and entertainment.  According to the Consumer Electronics Association (CEA), Americans own approximately 24 electronic products per household. So I did a quick inventory at my own place and came up with 12. True, I fell short, but I am a household of one and not a techie.  With all this electronic stuff out there, ever wonder what happens to it? Does it end up in a landfill?   Can you donate or recycle it? The answer is not as straightforward as you might think.

 

Relative to a few years ago, it is easier to recycle.  Some OEMs offer free recycling.  In the EU, the Waste Electrical and Electronic Equipment (WEEE) Directive provides direction regarding recycling options.  19 US states have also passed laws that mandate recycling.  But we have a long way to go, as only 15-17% of the equipment we no longer want is recycled.  So I pose the question: how do we work as an industry to increase recycling? Can we design computing equipment to help with recycling? What are HP and Dell doing?  What can we learn from them?  If you are curious, come to the LCA panel discussion at IDF and hear first-hand from the experts. And no, my 7 year old cousin will not be there.

Last month, Intel added another high-performing, low-power SKU to the Xeon 5500 lineup with the Intel Xeon L5530 processor (2.40 GHz, 60W TDP).  As with the L5506 (2.13 GHz) and L5520 (2.26 GHz) SKUs that were launched in March, the L5530 delivers the same performance as its 80W counterpart (E5530), but at 25% lower CPU power.

 

With space being a valuable asset in power-constrained data centers (IDC estimates data center construction costs at an average of $1,000/sq ft and $40,000/rack), the Xeon L5530 delivers even more performance in the same 60W CPU power envelope to help get the most out of each rack. Here’s the tale of the tape:

 

  • 66% more performance than the previous-generation Xeon L5420 SKU
  • 45% more performance than the Xeon L5506 SKU

(performance numbers based on SPEC_int_rate2006*, see http://www.spec.org/cpu2006/results/ for more details)
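
As a quick sanity check on those power claims (a back-of-the-envelope sketch of my own, treating performance as equal per the SPEC results cited above):

```python
# E5530 (80W TDP) vs. L5530 (60W TDP) at equal performance:
# the L5530 cuts CPU power by 25% and so raises performance
# per CPU-watt by about a third.
e5530_tdp_w, l5530_tdp_w = 80, 60

power_cut = (e5530_tdp_w - l5530_tdp_w) / e5530_tdp_w
perf_per_watt_gain = e5530_tdp_w / l5530_tdp_w - 1

print(f"CPU power reduction: {power_cut:.0%}")               # 25%
print(f"Perf-per-CPU-watt gain: {perf_per_watt_gain:.0%}")   # 33%
```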

 

Want to find out more about the Xeon L5530 and the rest of the 5500 lineup? Check out:  http://www.intel.com/p/en_US/products/server/processor/xeon5000

 

I love food! Since I was a kid I’ve loved noodles, especially Italian pasta. I used to think that Spaghetti was the general name for Italian noodles. Learning how to twist Spaghetti on a fork gave a great sense of achievement and joy.

[Image: bowl of spaghetti]

Many years later my wife and I travelled to Rome. Naturally - both of us really love food - we spent a lot of time seeking out restaurants and checking out new food. One of the wonderful dishes we had was Pappardelle (on the left) with duck ragù. It was my 1st encounter with Pappardelle - a very wide form of pasta. You get only a few Pappardelle on your plate, but it's still the same amount of pasta. I found it less practical to twist the Pappardelle around my fork, so I cut them into smaller pieces to eat them.

[Image: Pappardelle]

 

I thought about this recently when I looked at the back of a virtualized server. Looks similar, no?

 

[Images: back of a server cabled with 1GbE; bowl of spaghetti]

A typical virtualized server has 8-10 1 Gigabit Ethernet (1GbE) ports and 2 Fibre Channel ports. This makes for a lot of cabling and many add-in cards. It translates to a lot of cost, power, and complexity (and thus reliability risk) for an IT shop. As a result, there’s a lot of buzz around high-speed networks, specifically 10GbE. That technology presents the opportunity to consolidate all these 1GbE ports into a significantly smaller number of higher-bandwidth (i.e. 10GbE) ports. It makes for a much tidier server.

[Image: server with 10GbE]
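
A rough sketch of the cabling and bandwidth math behind that consolidation (my own illustration, using the port counts above):

```python
# Typical virtualized server today vs. a consolidated 10GbE design.
legacy = {"1GbE": 10, "Fibre Channel": 2}   # ports -> ~12 cables
converged = {"10GbE": 2}                    # LAN and SAN share the links

legacy_lan_gbps = legacy["1GbE"] * 1
converged_gbps = converged["10GbE"] * 10

print(f"cables: {sum(legacy.values())} -> {sum(converged.values())}")
print(f"bandwidth: {legacy_lan_gbps} Gb/s LAN (+ FC) -> "
      f"{converged_gbps} Gb/s shared LAN/SAN")
```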

 

 

Kind of like substituting Pappardelle for Spaghetti.

In that case, iSCSI or FCoE (Fibre Channel over Ethernet) could be used for the SAN connection, still using the same high-speed ports. Standards like Data Center Bridging (DCB) could add a lossless character to the 10GbE link to make it friendlier to FCoE.

 

Few new solutions, though, come without new challenges. The common way for VMs to share I/O devices in today’s environment is through mediation by the hypervisor, using emulation or para-virtualization. That reduces the effective I/O bandwidth. It also becomes a fairly significant overhead for the server in its own right, reducing the server capacity available for application processing, and it adds latency. With the growing trend in IT to treat virtualization as the default deployment mode for any application, these issues become quite limiting.

 

We at Intel have thought that the best way to overcome these issues is by using “direct assignment”. Using Intel® VT-d technology (launched with the Xeon 5500 platform), a VM can be assigned a dedicated I/O device. This nearly eliminates the overhead of the hypervisor mediation I mentioned above. A side benefit is that it increases VM-to-VM isolation and security. But assigning an individual I/O device to one VM is not very scalable…

 

This is where the PCI-SIG’s SR-IOV (Single Root I/O Virtualization) standard comes into play. This standard allows a single I/O device to present itself as multiple virtual devices. With SR-IOV, each virtual device can be assigned to a VM, adding scalability to the direct assignment model and effectively allowing the physical I/O device to be shared, yet with greater security and reliability.

 

Another challenge with the direct assignment model is related to live migration. Hypervisors have typically assumed the SW mediated IOV model. As a result, hypervisors need to be modified to adapt their live migration solutions to direct assignment.

These technologies span many different components of the server platform. Intel® VT-d is necessary, so the Xeon 5500 (or a later platform) must be used. SR-IOV-capable I/O devices - NICs or storage controllers - are required. The BIOS must be modified, as well as the hypervisor software. This is pretty heavy lifting.

 

So you can only imagine how excited I am to be able to showcase 4 different SR-IOV demos at IDF next week! The demos involve 2 server vendors, 3 VMM vendors – 3 different vendors implementing 3 different hypervisor architectures, and 3 different IHVs representing 2 different I/O technologies. We show the performance improvements, as well as VM live-migration. It works!

 

 

Come and see it (Booths 517, 707, 709, and 711 in the IDF showcase)!

Sean Varley and I will be jointly presenting a session on ‘I/O innovations optimized for enterprise cloud’ at Intel Developer Forum (IDF). We are focusing on the I/O challenges in virtualization-based enterprise cloud infrastructure and how current innovations, collaborations and technologies solve some of those challenges efficiently.

 

I have written in the past that we view the evolution of virtualization as having distinct phases. IT begins with basic consolidation, what we call virtualization 1.0, and then wants to extract more efficiency through flexible resource utilization and automation, which we term virtualization 2.0. The next phase beyond flexible resource management is the deployment and management of scalable applications on a dynamic infrastructure, which we can relate to as the enterprise cloud, or virtualization 3.0.

 

In our view, the requirements of virtualization 2.0 make virtualization 1.0 better, and similarly the requirements of virtualization 3.0 make the 2.0 phase better. This means some of the challenges and solutions we discuss for the enterprise cloud will make today's IT data center much more efficient.

 

So what are the I/O challenges for an enterprise cloud built on virtualization? There are many. The enterprise cloud model means being able to deploy a workload on available infrastructure (provided security and compliance needs are met) in a flexible manner. This can lead to large-scale consolidation, given new server capacity and performance. More VMs on a single server means more pressure on the I/O. So we need a balanced platform solution that maps I/O capability to the CPU performance increases.

 

Flexible resource utilization should mean that even the I/O hardware resources are flexible. However, the typical I/O architecture in a server is very rigid today. IT typically configures a virtualized server with a bunch of HBAs for storage I/O and eight or ten ports of Ethernet for network traffic, plus the separate cabling associated with each. That means we cannot reallocate resources as needed in a flexible manner: the I/O hardware limits the extent of true flexibility. How do we get around the rigidity of the I/O architecture? And how do we reduce the complexity of the fabric and the power consumption? We perhaps need a unified, converged, high-speed I/O fabric.

 

Is it sufficient to have a converged fabric, or do we need more? Of course we need more: what about QoS and SLAs for the I/O traffic? How about the scalability of the I/O fabric, and security features to isolate the traffic between two VMs? All of these are important as well in a multi-tenant enterprise environment.

 

In our session we explain how Intel, in its products and through standards work, has been targeting solutions for these challenges and delivering them to market with the ecosystem. To learn more, make sure you attend IDF session ECTS006. And for those who cannot attend, look for a post-IDF blog from me and Sean where we will succinctly describe how we solve these challenges with Intel technology solutions.

 

RK Hiremane & Sean Varley

I finally finished the content for the talk I am scheduled to deliver next week at IDF on Sept 22 (TCIS001, 10:15 am, Room 2004).  The content covers examples of optimizing for multi-core using our software tools to accelerate performance and, more importantly, the seamless use of the same software base with minimal or no changes on next-generation architectures (what we call scaling performance forward). Personally, I am excited about the potential of multi-core optimizations with today’s architectures. When I was a graduate student in parallel computing from 1988-1994, it was extremely difficult to take any algorithm and map it to the parallel architectures of the day, since most algorithms were not very efficient once you took communication delays into account.

 

The key is to get total delivered performance at the application level, not the kernel level. Given today's better-balanced architectures and the availability of multi-core, memory bandwidth, software tools that work, and faster interconnects, the number of algorithms that can be parallelized and actually benefit from accelerated performance (total delivered application time) is huge; pretty much every industry vertical is taking advantage of multi-core architectures, software tools, and clusters.

 

 

John Gustafson, from Intel Labs, an industry HPC veteran, is my co-author, and I am thrilled to have him speak about balanced computing. Wes Shimanek, a colleague of mine at Intel, introduced me to John, and after listening to his explanation of balanced computing and his views on what works and what doesn't, we immediately knew that John's expertise would be greatly valued by the IDF audience and invited him to be part of the talk. John graciously accepted, and I hope that folks interested in computing architectures, especially in the HPC world, will make time to come listen to John's talk.

 

 

I will also give you a high-level view of the challenges that drive our products and briefly introduce the various aspects of our strategy. I will be followed by 3 more talks that cover the key aspects of what we do at Intel in HPC: Software Tools for Scaling Application Performance Forward (TCIS002), Delivering More to HPC than Just Performance (TCIS003), and Intel® Cluster Ready (TCIS009).

 

I am looking forward to IDF next week. See you all at the developer forum!

 

Nash Palaniswamy (Intel)

JGreene

IDF: Something for Everyone

Posted by JGreene Sep 16, 2009

 

It has been a couple of years since I’ve had the opportunity and pleasure of attending an IDF, but I remember the experience well.  While I had been in the technology industry for many years and was familiar with major tradeshows like Comdex, Interop, CeBit, etc, I recall being amazed that a single company could be the catalyst for such a huge event.  But as I experienced it, it made more sense: after all, Intel sells a very broad line of products to a huge array of customers.  And our products are among the most technologically advanced and complex in the world—yet they are only critical components to solutions that require a wide range of complementary parts—system boards, test tools, compilers, software, BIOS and integrators—to name just a few.  And IDF is the critical venue to galvanize this huge and surprisingly efficient cadre of fellow travelers that will help build upon and deliver our technologies to the world.  It is where we educate, communicate and differentiate, and it is a great showcase for Intel.

 

This year, I’m excited to be able to participate.  As I wrote a few weeks ago, I’m looking forward to using this showcase to help establish Intel’s focus on server security. We’ve got a couple of key new features - Intel® Trusted Execution Technology (TXT) and the Advanced Encryption Standard new instructions (AES-NI) for encryption processing - that promise to make secure processing for servers more complete and efficient.  You can get a glimpse of what Leslie Xu and Michael Kounavis will cover for AES-NI. I’ll be working with Mahesh Natu and some friends in the fellow-traveler community to help introduce TXT for servers. Like many others, we’ll be using this opportunity to conduct training for developers (session ECTS002), show the technology in action in a really cool Server Zone demo (Booth #517), and generally help build awareness for TXT and security.  I’m really looking forward to the demo.  It is one thing to offer a cool feature, but it is a whole new level of anticipation when one can so clearly visualize how this technology can be deployed to make users’ environments better.

 

I know that we’re eager to share our enthusiasm and engage the developers and customers that will make our technologies a success.  I’m also keen to get to see other great things coming out of Intel and our fellow travelers. What are you eager to see and hear about at IDF?

Given the increasing costs of powering and cooling data centers, IT and facility managers of enterprise and cloud data centers want to maximize capacity utilization and reduce the total cost of operations. However, this has been challenging given the lack of fine-grained instrumentation and visibility into server- and rack-level power consumption, resulting in over-provisioning of racks and costly expansions despite unused capacity.

 

 

With the Intel® Xeon® 5500 series, we introduced Intel® Intelligent Power Node Manager, which provides server-level power monitoring and policy-based power capping. Since power and cooling constraints exist at the rack, row, room and PDU level, we also released Intel® Data Center Manager to enable ISVs to realize Node Manager benefits at the data center level with reduced investment.

 

 

The Intel® DCM software development kit (SDK) provides power and thermal monitoring and management for servers, racks and groups of servers in data centers using Intel® Xeon® instrumentation. Management console vendors (ISVs) and system integrators (SIs) can integrate Intel® DCM into their console or command-line applications and provide high-value power management features to IT organizations.

 

IT organizations facing power and cooling challenges can benefit from:

 

  • Increased rack density within space, power and cooling constraints through power control
  • Reduced capital costs by right-sizing power and cooling infrastructure based on actual power
  • Reduced operations costs by eliminating worst-case head-room during provisioning

 

Intel® DCM features include:

 

  • Built-in policy based heuristics engine maintains group power and dynamically responds to changing loads
  • Designed as an SDK to integrate into existing management software products
  • Scales to support thousands of nodes in large data centers
  • Manages across system vendors through use of standard IPMI and DCMI Power Management support
  • Supports integration with Smart PDU meters for servers without Node Manager to provide a unified view
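
To make the "policy-based" idea concrete, here is a minimal sketch of group power-capping logic (a hypothetical illustration in Python; the function name and the proportional-scaling policy are my assumptions, not the Intel® DCM API):

```python
# Hypothetical sketch of group power capping: keep a rack's total
# draw under its budget by distributing per-node caps. Not the
# Intel DCM API -- just an illustration of the concept.

def rebalance_caps(nodes: dict[str, float], rack_budget_w: float) -> dict[str, float]:
    """nodes maps node id -> measured power draw in watts."""
    total = sum(nodes.values())
    if total <= rack_budget_w:
        return {}                      # under budget: no caps needed
    # Over budget: scale every node's cap proportionally to its draw.
    scale = rack_budget_w / total
    return {node: draw * scale for node, draw in nodes.items()}

caps = rebalance_caps({"n1": 310.0, "n2": 280.0, "n3": 295.0},
                      rack_budget_w=800.0)
print(caps)   # per-node caps in watts
```

A real policy engine would layer priorities, hysteresis, and per-node minimums on top of this, but the core loop is the same: measure, compare to budget, redistribute caps.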

 

If you are at IDF in San Francisco Sep 22-24, 2009, stop by the following sessions to learn more about Intel® DCM, see proof-of-concepts, and meet end users who have benefited from it.

 

  • PDCS003 - Cloud Power Management with Intel® Micro-architecture (Nehalem) Processor-based Platforms
  • ECTS004 - Improving Data Center Efficiency with Intel® Xeon® Processor Based Instrumentation

 

There are also several demos in the showcase area featuring these technologies from multiple partners.

 

I will be at IDF co-presenting PDCS003 - stop by after the class with your questions and thoughts about Intel® DCM.
See you at IDF.

 

--Susmita

If you would like to learn about a new power supply technology for reducing server energy usage, there is an upcoming IDF session that may interest you. The title of the session is “Cold Redundancy - A New Power Supply Technology for Reducing System Energy Usage”. As you can probably guess from the title, we are calling this new technology “cold redundancy”. There has been a lot of research into ways of reducing the input power of a server in an idle state. After all, an idle server is just a very expensive space heater, sitting there doing nothing other than consuming energy and producing lots of heat. This is important because utilization studies have shown that servers can sit idle a considerable percentage of the time. So anything that reduces the input power of an idle system will have a very significant effect on the overall yearly energy usage - and ultimately save on operating costs.

 

This presentation will describe, and demonstrate, the cold redundancy technology we have been working on here at Intel® to reduce system idle power.

 

One great thing about this new technology is that everything is kept inside the power supply. No changes to system software are needed, so the only additional requirement is using a power supply that has cold redundancy technology inside it. This will make it easy to integrate into systems in the future because it could become a “plug ‘n play” power supply upgrade option.

 

Since cold redundancy is a power supply technology, I’ll cover some basic concepts to get things started.

 

There are two different types of power supplies used in computers: redundant and non-redundant.

 

A non-redundant supply has only a single module, which provides all the power needed to keep the system operating. This means there is no backup: if the power supply fails, the system shuts down until the supply is replaced. Desktop computers typically have a non-redundant supply in order to keep costs low.

 

Most servers, on the other hand, have a redundant type of power supply. That means there are extra (redundant) power supplies in the power subsystem, so if one supply fails the server continues working normally. This is for applications where maximum system uptime and reliability are worth the additional cost of the redundant supplies. In a case like this, though, the redundant supplies are not really needed until one of the supplies actually fails. The drawback of keeping the redundant supplies turned on until needed is that they still use a lot of power, which increases the system operating costs.

 

Cold redundancy reduces system idle input power by putting these redundant supplies into an almost-off (standby) condition, or “cold redundancy” mode, as we call it here at Intel®. Because of how cold redundancy works, the more redundant supplies there are in a power subsystem, the more effective it is and the more energy can be saved. The general idea of powering down redundant supplies is not new; the problem has always been turning the supplies back on fast enough that system operation is not affected in case of a failure. We have come up with a solution to this problem by developing cold redundancy technology. Cold redundancy can put the redundant supplies into a standby state to save energy at system idle while still turning them back on fast enough on a failure to keep the system operating normally. It really is the best of both worlds: saving energy while maintaining the same system uptime and reliability as conventional redundancy, where all the power supplies run all the time.
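
To see why more redundant modules mean more savings, here is a rough sketch with made-up wattages (illustrative assumptions only, not measurements from our demo):

```python
# Hypothetical idle-power comparison for a redundant power subsystem.
# Wattages are illustrative assumptions, not measured data.
ACTIVE_IDLE_W = 40   # assumed idle draw of a module left running
STANDBY_W = 2        # assumed draw of a module in cold standby

def subsystem_idle_power(total_modules: int, needed_at_idle: int,
                         cold_redundancy: bool) -> int:
    redundant = total_modules - needed_at_idle
    if cold_redundancy:
        return needed_at_idle * ACTIVE_IDLE_W + redundant * STANDBY_W
    return total_modules * ACTIVE_IDLE_W

for total in (2, 4):  # e.g. 1+1 vs. 1+3, one module carrying the idle load
    before = subsystem_idle_power(total, 1, cold_redundancy=False)
    after = subsystem_idle_power(total, 1, cold_redundancy=True)
    print(f"{total} modules: {before} W -> {after} W at idle")
```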

 

If you are interested in learning more, the session number is ETMS001, and it will be presented at the fall Intel Developer Forum in San Francisco on September 22nd at 10:15 AM in room 2006 of the Moscone Center. The session will be a combination of lecture and live demonstrations. We decided that having a couple of demos during the lecture would make the concept more understandable and the presentation more interesting. One thing to keep in mind about this session is that we will not be discussing theoretical possibilities or projects planned years in the future, but real products that will be available soon.

 

The live demonstrations use a production-ready cold-redundancy-enabled power subsystem that is being integrated into a product Intel® plans to release in the 4th quarter of this year. It doesn’t get much more real than that. The demonstrations will show how the control logic works and what power and energy savings are actually possible. This will be done by measuring the AC input power to a four-module power subsystem while running the same output load profile with and without cold redundancy enabled. By comparing the two input power graphs, the advantages of implementing this new technology can be immediately seen and quantified. I think you will find this to be a very interesting and informative session, but then I’m probably biased just a little bit.

 

Hope to see you there,

Andy

 

Presenters:

Viktor Vogman – Power Architect

Andrew Watts – Test Automation

Since we started the Ask An Expert discussion thread in the Server Room a couple of years ago, I have found that the community often asks for guidance in selecting a server system type and processor number as IT professionals seek to make the best purchase for their needs.

 

 

As I responded to these threads, I realized the same questions were occurring over and over again. It struck me that a selection tool allowing the community to guide themselves through a few questions to narrow the options might be valuable.

 

 

Sometimes the world (ok Intel) moves too slowly for me.  My brainchild on this was something I wanted to have done about a year ago with the first 45nm quad-core processors (Xeon 5400).  However, our server and corporate marketing teams got a little distracted by the Xeon 5500 (Nehalem) processor launch.

 

 

However, after much delay I’m proud to introduce this simple, interactive Xeon Server processor selector tool that can help you choose which server system type and processor would be ideal for your application and business goals.  With Three Easy Steps, you can narrow your choices.         

 

  • Step 1: Identify the business environment, application type and primary purchase criteria
  • Step 2: Compare and Choose the processor family (7000, 5000, 3000)
  • Step 3: Compare and Choose the specific processor within that family

 

 

In this third step you can look at price, performance, power, and feature set across multiple CPUs to help you narrow your choices. Take a shortcut and look at the most popular CPUs, or expand your options and look at the whole range of offerings.
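For the programmatically inclined, the three-step narrowing is essentially a filter-and-sort over a catalog. Here is a toy Python sketch of the idea; the catalog entries and prices are illustrative placeholders, not data from the actual tool:

```python
# Toy sketch of the selector tool's three-step narrowing.
# Catalog entries and prices below are illustrative, not Intel data.

catalog = [
    {"cpu": "Xeon X5570", "family": "5000", "sockets": 2, "watts": 95,  "price": 1386},
    {"cpu": "Xeon E5520", "family": "5000", "sockets": 2, "watts": 80,  "price": 373},
    {"cpu": "Xeon X3470", "family": "3000", "sockets": 1, "watts": 95,  "price": 589},
    {"cpu": "Xeon X7460", "family": "7000", "sockets": 4, "watts": 130, "price": 2729},
]

# Step 1: capture the business environment and primary purchase criterion.
needs = {"sockets": 2, "criterion": "price"}

# Step 2: narrow to the processor family that fits the system type.
candidates = [c for c in catalog if c["sockets"] == needs["sockets"]]

# Step 3: rank the remaining processors by the chosen criterion.
candidates.sort(key=lambda c: c[needs["criterion"]])
for c in candidates:
    print(c["cpu"], c["price"])
```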

 

 

We also have a Workstation Selection Tool (that tool is what triggered the idea to create a server version).

 

Other IT and business value assessment tools from Intel include:

 

 

Chris

Follow me on twitter

 

Cryptography, encryption, identity theft, rootkit, malware - none of this sounds familiar? You're not alone. These are words associated with managing, communicating, and protecting information and data in business environments and in our personal lives. Rootkits and malware are nasty software that can get below the hypervisor and OS to infect your computer system. Cryptography is the science of secret codes: transforming data from ordinary readable form into unintelligible gibberish in order to provide confidentiality, integrity, and authentication for data protection, end-to-end protection, and access control. The challenge is that, historically, cryptography has been complex and computationally costly.

 

Why is cryptography hot in the marketplace today, especially in the enterprise? For starters, over 90 million consumers have been notified of potential security breaches involving personal information since 2005, according to the Privacy Rights Clearinghouse website. The rate is accelerating, and the attacks are more complex and harder to detect. There is a shift from attacks that infect millions of computers to attacks that target a few banks or government agencies holding sensitive financial and personally identifiable information. In today's highly virtualized computing environment, several virtual machines share the same hardware resources, and those resources need stronger protection because there are more eggs in one basket. Encryption provides defense in depth: even if a system is compromised and information is lost, the information can still be made unusable to the attacker through symmetric, asymmetric, and hash crypto schemes. Encryption also provides data protection that is increasingly important for compliance with HIPAA (health), SOX (US companies), and PCI (payment card industry) regulations.

 

Asymmetric cryptography involving a public and private key, symmetric cryptography with just one key, and hashes are all cryptography types. The Advanced Encryption Standard (AES) is a type of symmetric cryptography that has been adopted by the US government as well as other governments around the world. Three main enterprise AES usage models are secure transactions with SSL/HTTPS/FTP/SSH/IPsec, software full disk encryption (FDE), and application-level encryption in databases, mail servers, and the like.

 

As for AES-NI, it comprises 7 instructions that accelerate different sub-steps of the AES algorithm: 4 instructions perform the rounds, with dedicated variants for the last round, of the 10/12/14-round transformation that encrypts 128 bits of data from plaintext to ciphertext and vice versa; 1 instruction assists the inverse mix columns operation; and 1 instruction generates the next round key. The 7th instruction, CLMUL, performs packed carry-less multiplication in hardware. The benefits are a reduction in exposure to software side-channel attacks and a reduction in performance overhead.
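To show how those instructions map onto the flow of an AES-128 encryption, here is a structure-only Python sketch. The stand-in functions below are not real AES math (the real work happens inside instructions like AESENC, AESENCLAST, and AESKEYGENASSIST); only the round structure and instruction mapping are the point:

```python
# Structure-only sketch of an AES-128 encryption flow as it maps onto
# AES-NI. The round functions are placeholders, not real AES math.

def aesenc(state, round_key):        # models the AESENC instruction:
    return state ^ round_key         # SubBytes+ShiftRows+MixColumns+AddRoundKey

def aesenclast(state, round_key):    # models AESENCLAST: same, minus MixColumns
    return state ^ round_key

def aeskeygenassist(prev_key, rcon): # models AESKEYGENASSIST (key expansion)
    return (prev_key * 31 + rcon) & ((1 << 128) - 1)  # placeholder mixing only

def encrypt_block_aes128(plaintext, key):
    # Expand the 128-bit key into 11 round keys (1 initial + 10 rounds).
    round_keys = [key]
    for rcon in range(1, 11):
        round_keys.append(aeskeygenassist(round_keys[-1], rcon))

    state = plaintext ^ round_keys[0]       # initial AddRoundKey (a plain XOR)
    for rk in round_keys[1:10]:             # rounds 1-9 use AESENC
        state = aesenc(state, rk)
    return aesenclast(state, round_keys[10])  # round 10 uses AESENCLAST

print(hex(encrypt_block_aes128(0x00112233445566778899aabbccddeeff,
                               0x000102030405060708090a0b0c0d0e0f)))
```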

 

To find out more, please attend my class "Securing the Enterprise with AES-NI" with Michael Kounavis, and come see the "Westmere-EP Encrypting the Internet" demo at Fall IDF in San Francisco.

What if someone told you that more than 10% of the dollars you spend powering your servers does no useful computing work?  It sounds wasteful, but that 10% is spent spinning the air movers that remove the heat generated by power conversion and by powering the silicon and peripherals.

 

As a thermal and acoustic architect for servers, it’s my goal to reduce that 10%, but essentially all of the electrical energy going into a computer is converted to heat.  That heat must be removed to keep components within their temperature limits, ensuring data integrity and long-term reliability.

 

For years and years the focus was on improving performance, even if that 10% sometimes pushed up to 20% in extreme cases.  In today’s environment, performance requirements must be balanced against the power required to deliver that performance.  This change has driven a wide array of silicon features that can create that balance, but the overall server cooling design must adapt to take advantage of those features while using the most cooling-efficient thermal components.
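One reason cooling-efficient design pays off so handsomely: by the fan affinity laws, fan power scales roughly with the cube of fan speed, so even a modest speed reduction yields an outsized saving. A quick Python illustration, with an assumed baseline:

```python
# Fan affinity law: fan power scales roughly with the cube of fan speed.
# Slowing fans even slightly, when thermal headroom allows, saves real power.
# The baseline numbers below are illustrative assumptions.

baseline_rpm = 10000.0
baseline_fan_w = 60.0   # total fan power at full speed (assumption)

for pct in (100, 90, 80, 70, 60):
    rpm = baseline_rpm * pct / 100
    watts = baseline_fan_w * (rpm / baseline_rpm) ** 3
    print(f"{pct:3d}% speed -> {watts:5.1f} W "
          f"({100 * watts / baseline_fan_w:.0f}% of full-speed fan power)")
```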

 

The session ETMS002, ‘Server Cooling Design Optimization for Low Power Consumption’, at the upcoming IDF will demonstrate how servers are becoming more cooling-efficient while ensuring that performance can be maximized.  Cooling tradeoffs based on board layout, heat sink selection, and usage of silicon thermal management features will be discussed and quantified with regard to their impact on potential power savings.

Whether you are concerned with server design itself or with becoming more informed on purchasing decisions, this IDF session will enable you to understand the cooling and thermal management implementations that will save energy and reduce total cost of ownership.           


I wanted to share my excitement and some details about an upcoming demo at IDF 2009 (San Francisco, Sept 22-24, 2009) that will demonstrate advanced trading on Nehalem-EX.

 

Using the power of the 8-core Intel® Xeon® processor codenamed Nehalem-EX, NYSE Technologies will demonstrate a complete Smart Order Routing System in a single box with a total of 32 cores. The demonstration will process the entire OPRA feed and all North American equities feeds, apply rules to decide when to trade, and convert the order information into FIX format for delivery to the trading venue.

 

The demonstration will show NYSE Technologies’ Market Data Platform V5™ feed handlers processing raw market data at a rate of over 1.5 million updates per second; NYSE Technologies’ Data Fabric™ messaging platform will pass those messages via its Local Direct Memory transport to a mock Smart Order Routing program, which will use Data Fabric again to pass orders to a NYSE Technologies’ Market Access Gateway. Typically, these processing tasks are designed as a three-tier model with two latency-inducing network hops. Deploying this solution in a single server provides an order-of-magnitude reduction in latency.

 

Nash Palaniswamy (Intel)

Register and mark your calendar.  On Sep 23, 10am EDT (New York) / 14:00 GMT / 16:00 CEST (Paris), the Red Hat and Intel team will host another webinar, guiding you through the steps to migrate your enterprise workload from UNIX/RISC to RHEL/Intel.  The "why" and the economics of the migration are now quite evident; this webinar presents "how" a migration should be carried out.  The time is scheduled best for audiences in Europe, the Middle East, and Africa, but also works well for those on the east coast of the Americas.

In the meantime, Red Hat has written this migration whitepaper that walks you through the methodologies of a migration.

Happy migrating, and drive your data center costs down!

There has always been a Linux option for enterprise workloads.  But today, with greater uncertainty and greater pressure for cost reduction, that option is THE course to take.  But how?

Here are two whitepapers we developed with our friends in the industry, giving data center managers guidance on what to look for and what actions to take for a UNIX/RISC to Linux/Intel migration.

 

With Ziff Davis, Dell, Red Hat...  http://communities.intel.com/docs/DOC-3631

 

With Red Hat...  http://communities.intel.com/docs/DOC-3642

Also, visit http://www.redhat.com/intelligence/ for more information on the RISC migration program we run with Red Hat.

Are you hearing this clamor? Nope, this is not London Calling! It's your employees calling for more performance, your customers calling for faster response times, and your boss calling for more savings.

 

Have you been waiting to upgrade until your existing servers clash, I mean, crash? This economy has led to a lot of indecision, but when it comes to upgrading your servers, the benefits are pretty big no matter the size of your company.

 

Good news: the new Intel® Xeon® processor 5500 series-based servers will deliver just that and more.

 

• Save money. By spending money now, you can save in the long run. The latest Intel Xeon processor-based servers deliver more performance than previous generations. Small businesses can consolidate three older servers onto one new server and still have room to grow (1). And make sure to take advantage of government and manufacturer server incentives. All of that adds up to a return on your refresh investment in about a year. This tool can help you calculate your ROI (a rough sketch of the payback math appears after this list): www.intel.com/go/xeonestimator

 

• Be more competitive. You want to be ready when things rebound and rely on competitive IT equipment. The additional performance and improved reliability offered by updated servers means a more productive staff and faster response times for your customers.

 

• Avoid hidden costs. The other thing to consider with older servers is the expenses you don’t expect, like maintenance and downtime. You know - one day is fine, next day is black. To get your boss off your back and your business running smoothly, newer equipment now is a great idea.
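As promised above, here is a rough sketch of the payback math in Python. Every input is a placeholder assumption; use the estimator tool for real numbers:

```python
# Back-of-the-envelope server refresh payback. Every input here is an
# illustrative placeholder; the Xeon estimator tool gives real numbers.

old_servers = 3           # older servers consolidated onto one new server
old_power_w = 400         # average draw per old server (assumption)
new_power_w = 350         # draw of the replacement server (assumption)
energy_cost = 0.10        # $ per kWh (assumption)
maintenance_per_old = 800 # yearly support cost per old server (assumption)
new_server_cost = 4000    # purchase price of the new server (assumption)

HOURS_PER_YEAR = 8760
energy_savings = ((old_servers * old_power_w - new_power_w) / 1000
                  * HOURS_PER_YEAR * energy_cost)
maintenance_savings = old_servers * maintenance_per_old
yearly_savings = energy_savings + maintenance_savings

print(f"yearly savings: ${yearly_savings:,.0f}")
print(f"payback: {new_server_cost / yearly_savings:.1f} years")
```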

 

So, if fast ROI, savings, increased performance, improved productivity, and a new warranty sound like music to your ears, talk to your IT solutions provider (http://premierlocator.intel.com) about going with an Intel Xeon processor-based server.

 

And for more info, check out this new brochure:

Almost as good as the lyrics from The Clash

 

[1] Source: Intel Xeon Server Refresh Savings Estimator, Jul 09

ENERGY STAR® for computers version 5 went into effect in July 2009, superseding version 4 from 2007.  Version 5 has higher efficiency limits and introduces a total energy calculation for personal computers, while also expanding the scope of products to include thin client systems.  The EPA released ENERGY STAR for computer servers in May 2009, and it also plans energy efficiency programs to cover enterprise storage systems and networks, along with updates to its existing programs, in the next year or two.

 

 

Japan, Europe, Australia, Canada, and other countries have included the ENERGY STAR programs as part of their energy efficiency programs.  Japan has begun enhancing its Top Runner program for updates in a year or so.  Europe’s Energy Commission has an Energy Related Products (ErP) program, formerly Energy Using Products (EuP), that complements their ENERGY STAR program with mandatory power requirements.  The ErP program has an upcoming implementing measure (Lot 6) which sets a maximum off-power of 1W for personal computers effective January 2010; Lot 6 lowers off-power limits to 0.5W in January 2013.   The European Commission is reviewing additional "Lots" to cover other system power characteristics and investigating programs to cover other systems such as servers. Korea established its e-Standby program this year, 2009.  Australia and New Zealand have been developing their Minimum Energy Performance Standards (MEPS) as mandatory levels for personal computers. Other countries are following suit with their own energy efficiency programs.

 

 

In short, with the geopolitical focus on energy efficiency and greenhouse gases, many countries have instituted, or will begin instituting, power targets for computer systems. With multiple energy efficiency programs springing up, each with its own specific methods and targets, a common question is whether a product complies, and to which specification.  At this fall’s Intel Developer Forum in San Francisco, I will host a panel discussion, ECOP002: World Wide Information Communications Technology (ICT) Energy Efficiency Regulations: What Are They and Where Are They Headed? The panel includes Andrew Fanara from the US EPA, Rick Goss from the IT Industry Council, and Jan Viegand, a lead European Commission consultant.  The panel will provide an overview, plans, and discussion of some of the major energy efficiency programs for computer systems.

I found this video about how Intel IT converted a former high-volume manufacturing facility into a high-performance computing data center that is now on the Top500 list.   Watch Tom Greenbaum, Data Center Operations Manager for Intel IT, describe the retrofit and tour the new facility.

 

Some key facts highlighted in the video:

  • avoided several million dollars in facility costs
  • landed the traditional enterprise environment in a raised-floor, hot/cold-aisle design in one section of the facility
  • landed the HPC environment on the existing concrete slab floor, which enabled higher-density deployment of servers
  • 6 MW, 10,000-server capacity (4,700 today)
  • room to grow to support future data center consolidation

 

Chris

http://www.intel.com/sites/sitewide/pix/badges/xeon/xeon09_62_trans.gif I'm always looking for good ways to describe to end users how Intel Intelligent Power Node Manager relates to everyday activities.  Over the weekend, I was helping a buddy of mine move to a new home, and of course we rented a truck.  While we were driving, we noticed a cool gauge on the dash and a pretty simple sticker describing what it does:

keep-it-green.jpg

 

Keep it in the Green - what a simple concept!  Almost everyone can relate the gas pedal in a vehicle directly to gas mileage. If you have a lead foot, you burn more gas.  But people who want to conserve, and keep it green, use cruise control.

 

Well, Intel servers can also be managed to optimize the energy consumed by the platform.  Power-optimized servers using Xeon 5500 series processors (Nehalem) and the 5500 chipset in conjunction with Node Manager work like cruise control: you set your "speed" and the servers stay within that maximum.  It's all managed via P/T states using Intel Data Center Manager.

 

Of course, at times the RED ZONE is needed - work needs to get DONE - so you throttle up, kick in the Turbo Boost and release that power cap!  But there are also times when all that energy isn't needed - so you lift your foot off the gas pedal, and set your speed for the work that needs to be done. Intel Xeon based servers can transition to higher/lower power states using technologies like EIST, DBS, and Node Manager.
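Conceptually, the cruise control is a feedback loop: measure platform power, step the P-state down when you are over the cap, and step it back up when there is headroom. Here is a simplified Python sketch of that idea; it is not Node Manager's actual algorithm, and the sensor and P-state functions are hypothetical stand-ins:

```python
# Simplified power-cap feedback loop in the spirit of the cruise-control
# analogy. Conceptual sketch only; read_platform_power() and set_pstate()
# stand in for real platform instrumentation.

import random
import time

P_STATES = 8           # 0 = fastest/highest power, 7 = slowest/lowest power
power_cap_w = 250.0    # the "cruise control" setting (assumption)

def read_platform_power(pstate):
    # Stand-in for a real power sensor: deeper P-states draw less.
    return 300.0 - 25.0 * pstate + random.uniform(-5, 5)

def set_pstate(p):
    pass  # stand-in for the platform interface that requests a P-state

pstate = 0
for _ in range(20):
    power = read_platform_power(pstate)
    if power > power_cap_w and pstate < P_STATES - 1:
        pstate += 1            # over the cap: ease off the gas
    elif power < power_cap_w * 0.9 and pstate > 0:
        pstate -= 1            # headroom: speed back up
    set_pstate(pstate)
    print(f"power {power:6.1f} W -> P{pstate}")
    time.sleep(0.01)
```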

 

Be on the lookout for more on Intel and server power management at Intel Developer Forum 2009

 

Cloud Power Management with Intel® Microarchitecture (Nehalem) Processor-based Platforms

 

Check Twitter for more details @IDF and @IntelNews and search #IDF09

* disclaimer: giving credit where credit is due U-Haul owns that sticker and tagline!

A virtual workstation uses both virtualization hardware and software technologies that, when combined, provide end users with an uncompromised workstation experience.  It gives engineers and IT users concurrent access to key workstation hardware functions previously not available with traditional virtualization technologies. Through this approach, you get near-native access to key workstation services, such as those delivered by graphics cards or NICs, needed to run multiple high-performance applications regardless of the operating system they run on.  Best of all, with Intel® Virtualization Technology for Directed I/O, delivered by Parallels Workstation Extreme, you will be able to leverage this new virtual workstation capability in ways that improve workflows across operating systems while reducing IT management requirements.  This is a win/win for both you and IT.

Ok we have segregated compute resources between IT and the user.

What can you get with Intel’s VT-d technology?

How many times have you been faced with the need to run an application that runs on a prehistoric OS, or on a 32-bit OS when your entire environment is running 64-bit? Or maybe you need to run two different graphics-intensive workloads in Linux and Microsoft Windows® environments.  With Intel VT-d and Parallels™ Workstation Extreme software you may be able to do just that at near-native speeds.

What else can you do with a virtualized workstation?

Have you ever been in a situation where one application requires version X of an OS and another application you use daily requires version Y of the same OS?  With Intel VT-d and Parallels™ Workstation Extreme software you may be able to run both at near-native performance at the same time.  No rebooting, no dual booting, no emulation required.  Just fast, seamless answers to complex problems - across multiple segments like Oil & Gas, DCC, Manufacturing, and Research.

Ever hear of a digital workbench?

It is a tool that designers and engineers use to perform what many call digital prototyping or simulation-based engineering.  It is usually a set of tools combining Linux- and Microsoft Windows®-based applications to create and test ideas.  Think of a virtual wind tunnel where simulations are performed in a Linux environment while design and visualization are performed in a friendly Microsoft Windows® environment.  With Intel VT-d and Parallels Workstation Extreme you can do both at near-native speed.  That means interactive product development and engineering, which leads to potentially better designs in less time.

Do you need a virtualized workstation?

If you need to run applications in different OSes, diverse OS levels or types, or you need to visualize in different OSes, then the answer is probably yes.

To learn more about Intel Virtualization Technology please visit www.intel.com/go/workstation.

To see an online demo of Parallels™ Workstation Extreme software please visit http://www.parallels.com/products/extreme

Intel has launched entry one-socket servers based on the Intel® Xeon® processor 3400 series and Intel® 3400 series chipset. So what? Why should a small business refresh a desktop-based IT infrastructure with a server-based one, or refresh existing servers with a Xeon 3400 based server? How do these new servers help educators?

 

As background, there are a large number of small businesses today with no or limited IT that are either using desktops or running server operating systems on desktops for their day-to-day operations. These small businesses are frequently characterized as DTOS (Desktop-On-Side) users. While DTOS may be perceived as a low-cost way of meeting IT needs, this approach has several limitations that can cost much more in the longer term. Desktop solutions can’t keep up with the business forever, as they are not designed for continuous use or for supporting multiple users, putting a lid on employee productivity and business growth. Also, a desktop used as a server to support multiple clients can cost a business dearly in the event of downtime or the silent corruption of a financial transaction due to transient memory errors.

 

Intel® Xeon® processor 3400 series based servers provide more dependability than desktop systems by protecting critical data through differentiated features such as ECC (Error Correcting Code) memory and RAID 0/1/5/10 for server operating systems. ECC helps detect and correct single-bit memory errors, which eliminates the need to reboot a system to clear most errors.  This reduces business downtime and minimizes corruption of critical data. RAID 0/1/5/10 provides additional data protection by keeping a copy of data on multiple hard drives in the event a single drive should fail. The cost of downtime can quickly negate any perceived savings from using DTOS.
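To illustrate the principle behind ECC's single-bit correction, here is a toy Hamming(7,4) example in Python. Real ECC DIMMs use a wider SEC-DED code (72 bits carrying 64 data bits), but the mechanism is the same: recomputed parity bits form a "syndrome" that pinpoints the flipped bit so it can be corrected in place, with no reboot:

```python
# Toy illustration of single-bit error correction with a Hamming(7,4) code.
# Data bits sit at positions 3, 5, 6, 7; parity bits at positions 1, 2, 4.

def encode(nibble):
    """Encode 4 data bits into a 7-bit codeword (positions 1-7; 0 unused)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    word = [0] * 8
    word[3], word[5], word[6], word[7] = d
    word[1] = word[3] ^ word[5] ^ word[7]   # parity over positions 1,3,5,7
    word[2] = word[3] ^ word[6] ^ word[7]   # parity over positions 2,3,6,7
    word[4] = word[5] ^ word[6] ^ word[7]   # parity over positions 4,5,6,7
    return word

def correct(word):
    """Recompute parity; the syndrome is the position of a flipped bit."""
    s1 = word[1] ^ word[3] ^ word[5] ^ word[7]
    s2 = word[2] ^ word[3] ^ word[6] ^ word[7]
    s4 = word[4] ^ word[5] ^ word[6] ^ word[7]
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:
        word[syndrome] ^= 1                 # correct the bit in place
    return syndrome

word = encode(0b1011)
word[6] ^= 1                    # simulate a transient single-bit flip
print(f"corrected bit position: {correct(word)}")   # -> 6
```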

 

These servers, based on the new Nehalem microarchitecture, are designed to help small businesses grow by enabling up to 64 percent more sales transactions and up to 56 percent faster business response times. Intel® Turbo Boost Technology and Intel® Hyper-Threading Technology enable these servers to automatically adapt their performance to unique business needs. These improvements are significant enough that a small business replacing a 3-year-old desktop can see greater than 200% better performance. Xeon® 3400 processor-based servers also deliver new capabilities such as support for 6 RDIMMs (Registered Dual In-Line Memory Modules), not available on desktops or prior generations. Six DIMM slots provide headroom to keep employees productive as business needs grow, enabling 32GB of memory capacity (4x the prior generation), flexible memory configurations, and cost-effective future upgrades. The performance headroom enables a small business to support more simultaneous users.

 

Entry servers are also starting to find a place in classrooms and schools. As the use of computers in classrooms becomes widespread, there is a need for sharing content such as email, files, web content, and print services, similar to small business needs. These new servers improve education by enabling dependable classroom collaboration and making school administrative services more productive.

 

So to summarize, Intel Xeon processor 3400 series servers are ideal for small businesses stepping up to a first server, companies or government agencies requiring a dedicated server for a workgroup, or education departments that need a server to support multiple clients. These servers provide technologies designed to deliver 24/7 dependability to help improve productivity, and performance that automatically adapts to changing workloads, at an entry-level price point comparable to desktop systems. Any price premium is likely due to additional server features, many of which can lead to reduced operating costs in the long run.

 

Watch “First Server Video” on the benefits of using a server at www.youtube.com/watch?v=IebtI5cab0o

For more information on Intel® Xeon Processor 3400 series based platforms, please look at the attached product brief or go to http://www.intel.com/p/en_US/products/server/processor/xeon3000

New Server Security Technologies Are Coming & Why We Need Them

 

 

The other day I had the opportunity to talk with Jeff Casazza and James Green from Intel’s Server Platform Group.  The topic? Server security.  Our conversation focused on the introduction of some new security technologies that are on their way and why we need them.  During our discussion, I found myself thinking back to my days in the US Navy, where security was a core part of everything we did. The introduction of submarines transformed naval tactics, and the stealth fighter changed aviation tactics.

 

 

So, why does IT put so much emphasis on information security?  Because the cost of a data breach is extremely high.  Imagine if a breach of your IT systems resulted in losing employee social security numbers or customer information: the cost to recover that data (if possible) and the legal costs (penalties from regulatory agencies) are very, very high.   Jeff and James mentioned that business models are also exposed if these types of information escapes happen; a company’s brand, business, and employee relationships could be at risk given the trust and integrity that circulate throughout our business.

 

 

Security always ranks high in importance, especially when we feel at risk.  As I have transitioned into my new role inside Intel IT, I have found a significant focus on security solutions especially as new threats (for profit attacks), new usages (client / server virtualization, cloud computing) and new collaboration tools (social media) challenge our existing paradigms of information security.

 

 

During my discussion, I learned about two technology standards that Intel is implementing for servers that reduce security risks and address the changing nature of information security attacks happening today and expected tomorrow.

 

 

Stealth Fighters Attacking Your Data: The nature of security attacks has changed.  Previous-generation hackers launched broad, widespread attacks on corporations or the worldwide web, trying to disrupt business and gain notoriety through the ability to affect tens of thousands of people.  The newer generation of attackers seeks a smaller target: a single laptop or a single server.  These new for-profit attacks are aimed at both industrial (business) and government entities and need only a single penetration into your infrastructure to gather enough information to create a serious issue for your business.

 

 

Encryption: A solution to defend against the stealth-fighter point attack on your data is increased encryption of data.  Data encryption is not new.  Secure Sockets Layer (SSL) encryption for communication over the internet, hard disk encryption, and enterprise application encryption are all standard methods IT shops use to protect information.  Unfortunately, encryption is not free, and I’m not talking about purchase cost but rather compute cost: encryption is a compute-intensive process that consumes processing cycles. Intel is planning to introduce new instructions for the Advanced Encryption Standard (AES-NI) that are intended to dramatically improve the efficiency of encryption in a future version of its processor microarchitectures.

 

 

Submarines Seeking Your Data From Under Your Hypervisor: Much of the anti-virus and security protection on servers and client machines resides in, and runs through, the operating system, hypervisor, or application layer.   New malware and rootkits target systems at startup, before the hypervisor and/or OS boots up, undermining the protection you have at the higher levels of the application stack.

 

 

A new server technology from Intel, called Intel® Trusted Execution Technology (Intel TXT), works to ensure your system boots up into the secure, protected environment you have deployed through your software stack.  In doing this, TXT ensures that your anti-virus software “perimeter” is secure and has not been compromised by a rootkit “submarine”.  TXT has been available in client Intel® vPro™ processor technology-based platforms since 2007.

 

Tune into the upcoming Intel Developer Forum (www.intel.com/idf) to learn more about plans for securing your server’s data and many other technology innovations from Intel.

 

 

Chris

Have you ever had one of those MacGyver moments? You know - you have a problem to solve and a collection of items at your disposal, and if you can figure out how to use those items, you can save the day.

 

 

Earlier this year Intel, HP, IBM, Lawrence Berkeley Labs, Emerson Network Power, and Wunderlich-Malec collaborated on a California Energy Commission-sponsored, MacGyver-ish project called Advanced Cooling Environment (ACE) to solve an issue that many data centers face: overcooling beyond the needs of the servers and other IT equipment running inside the facility.  You see, IT equipment is designed to run within a temperature envelope. If the air coming into the server is warmer than the envelope, you run the risk of overheating. If the air coming into the server is colder than the envelope, you are spending too much money on cooling, which does nothing other than needlessly increase the cost of operating the data center and reduce its energy efficiency.

 

 

The team surveyed the items at its disposal and determined that it could link data from the servers’ front-panel temperature sensors (server instrumentation) to the control systems of the computer room air handlers (CRAHs - essentially air conditioners) via a standard data center management communication protocol.   The CRAHs could then dynamically adjust fan speed and air temperature to the requirements of the servers. The results: servers received air at the appropriate temperature, power costs for cooling went down, and the energy efficiency of the data center went up.  Problem solved... and they didn’t even use a paper clip or shoestring.  The real beauty of the project is that all of the items used are commercially available today, so you can instrument your own data center and improve the energy efficiency of your operation.
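In pseudocode terms, the control idea is a simple feedback loop. Here is a conceptual Python sketch; the sensor reads, CRAH command, and inlet temperature envelope are all hypothetical stand-ins for illustration:

```python
# Conceptual sketch of the ACE control idea: feed server inlet-temperature
# readings to the CRAH controls and cool only as much as the IT equipment
# needs. The functions and envelope below are illustrative stand-ins.

INLET_TARGET_C = 25.0   # keep the hottest inlet near the top of the envelope
INLET_MAX_C = 27.0      # assumed upper edge of the allowed inlet envelope

def read_inlet_temps():
    # Stand-in for polling front-panel sensors over the management network.
    return [22.4, 23.1, 24.8, 21.9]

def adjust_crah(setpoint_delta_c):
    pass  # stand-in for commanding the air handler (fan speed / supply temp)

hottest = max(read_inlet_temps())
if hottest > INLET_MAX_C:
    adjust_crah(-1.0)   # inlets too warm: supply colder air / more airflow
elif hottest < INLET_TARGET_C:
    adjust_crah(+0.5)   # overcooled: relax the cooling and save energy
print(f"hottest inlet: {hottest:.1f} C")
```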

 

 

To learn more about ACE at the upcoming Intel Developer Forum in San Francisco, check out “ECOS003 Advanced Cooling Environment (ACE) Technology: Controlling Data Center Cooling with Servers” or stop by the ACE demo in the Eco-Tech Zone of the Tech Showcase.

Are you looking to write the most cutting-edge SIMD code? Do you want to learn about Intel® AVX, Intel’s newest instruction set? Would you like to know what tools are available to develop and analyze AVX code for the newest architecture? If so, come to the ARCS002 and ARCQ001 sessions at the Fall 2009 Intel Developer Forum.

 

 

Last year Intel announced Intel® Advanced Vector Extensions (Intel® AVX) at the spring IDF.  Intel® AVX is a new 256-bit SIMD floating-point vector extension of Intel Architecture, targeted for the Sandy Bridge processor family in the 2010 timeframe and slated to appear in platforms ranging from notebooks to servers. Intel® AVX accelerates the trend toward FP-intensive computation in general-purpose applications like image, video, and audio processing, and in engineering applications such as 3D modeling and analysis, scientific simulation, and financial analytics. The enhancements in Intel® AVX allow for improved performance through larger vectors, new extensible syntax, and rich functionality, including the ability to better manage, rearrange, and sort data.

 

 

At this year’s fall IDF in San Francisco, Intel will conduct two sessions dedicated to Intel® AVX. The first presentation will include an overview of the AVX instructions, an in-depth discussion of the tools available today to develop AVX code, and an exploration of the performance of some AVX-based kernels. The second will be a chalk-talk/Q&A session on the same topic, where Intel’s senior microprocessor architects will be on hand to answer your questions about the Sandy Bridge architecture and the new Intel® AVX instruction set.

 

 

The Intel® AVX session will begin with a brief overview of the benefits and key features of Intel® AVX, such as wider registers and vector widths, new instructions like broadcast, maskmov, and permute, and the introduction of three- and four-operand instructions that allow for non-destructive destinations.
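To get a feel for what a 256-bit vector width means in practice, here is a small NumPy illustration of the lane-wise model: one AVX register holds eight single-precision floats, versus four for 128-bit SSE. NumPy is used here only to mimic the programming model; it does not itself guarantee AVX code generation on any given platform:

```python
# Lane-wise view of a 256-bit vector: eight float32 lanes per register.

import numpy as np

a = np.arange(8, dtype=np.float32)        # one 256-bit "register": 8 lanes
b = np.full(8, 2.0, dtype=np.float32)

print(a * b + 1.0)                        # one lane-wise operation per lane

# A broadcast fills every lane from one scalar (as AVX's VBROADCASTSS does):
scale = np.float32(3.5)
print(a * scale)

# A permute rearranges lanes within the register:
idx = np.array([7, 6, 5, 4, 3, 2, 1, 0])
print(a[idx])
```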

 

 

Are you ready to develop an AVX application today? If so, the next part of the presentation will give you the information you need to get a head start on Intel AVX software development. Intel engineers have been busy developing tools that can be used today to develop and analyze Intel® AVX applications. First and foremost, there is the Intel Professional Compiler Suite, version 11.1, which contains an Intel compiler, Intel Performance Primitives, and the Math Kernel Library. The tools in the compiler suite have been extended to support AVX. Intel also provides a software emulation tool called SDE that allows users to run AVX code on any Intel platform; SDE lets developers do functional validation of AVX code and also packages other tools like xed, a disassembler utility for AVX code.  If a developer is looking to extract more performance from AVX code, Intel provides a tool called the Intel Architecture Code Analyzer that can be used to analyze basic blocks for latency, throughput, execution port usage, and critical path information. Developers can use that information to modify, fine-tune, and optimize their applications for best performance.

 

 

The talk will then explore several AVX code kernels to demonstrate conversion of legacy SSE code to optimized AVX code, showcasing new AVX instructions and programming paradigms. The session will also discuss AVX tuning tips and best known methods that will aid developers writing AVX instructions for the Sandy Bridge architecture. We want you to begin your AVX experience armed with the right tools and methods to succeed.

 

 

We hope to see you at Fall IDF 2009 and our sessions, ARCS002 - Intel® Advanced Vector Extensions (Intel® AVX) - Intel’s Next Major Instruction Set Architecture Extensions and ARCQ001 - Q&A: Intel® Advanced Vector Extensions (Intel® AVX), on September 22 at 4:00pm and 6:00pm, respectively.

 

 

Presenters:

 

Pallavi Mehrotra, Senior Software Engineer, Intel Corp.
Richard Hubbard, Senior Software Engineer, Intel Corp.

For each of the last three years, Rich Uhlig, I, and the rest of our Intel colleagues focused on virtualization technologies have had the enviable task of participating in two of the technology industry's biggest events. It is always a pleasure to stretch one's abilities, work longer hours than you ever thought possible, work on great product introductions, develop new business models, and help redefine an industry while using these events to make your announcements. This week VMware's VMworld was held in San Francisco, with over 11,000 participants focused on virtualization technology. Intel VP and GM Doug Fisher delivered a keynote on "Transforming Flexible Computing", which nicely communicated the message Rich delivers in the attached video on the Intel Channel on YouTube. We also announced support for VMware View with Intel vPro technology alongside VMware's Jocelyn Goldfein. This culminates over two years' worth of work by our engineering and development teams on bringing together two of the virtualization industry's leading platforms.

 

This announcement is the beginning of an era of virtualization flexibility. Each day we see new usage models emerging, with virtualization finding new ways to give users more flexibility in the data center, on the handheld, and on their desktop form factors. As we approach IDF 2009, both Rich and I will be hosting courses on these emerging models and architectural directions. Rich will host a course on architecture, while I have the pleasure of hosting a panel with Simon Crosby, Mike Neil, Ed Bugnion, Lew Tucker, and Orran Krieger. It is quite a lineup. In addition, one of our colleagues, Charlton Barreto, has some breakthrough new usage models to demonstrate that we believe are outstanding. All of these will be available in the IDF Virtualization community for the third year in a row. I personally feel very fortunate to have the opportunity to work with such interesting and talented individuals every day. The conferences give us an opportunity to share our enthusiasm for technology and innovation and our commitment to excellence with the rest of the world. The feedback has been great, and it drives us to continue to innovate.

 

Come see us, tell us and push us to build technology that delivers value in the way you work, live and play. It is a challenge we embrace and we are thankful we have the opportunity to take action.

 

See you at IDF!

Intel's RK Hiremane and Sun's David Caplan discuss Xeon 5500 blade server virtualization ROI

 

Join experts from Intel, Sun Microsystems, and Ziff Davis Enterprise on August 20 for an informative eSeminar, where you will learn:

 

  • How Sun’s Network Express Module technology works
  • How easy it is to achieve high availability and near-instant failover
  • How to reduce network cabling by a factor of 10:1
  • How to simplify network and storage management.

If you have been around the workstation community for a while, you may be used to seeing numbers like 4, 8, 12, or 64GB.  Those are the old numbers – sorry.

When the Intel® Xeon® processor 5500 series debuted, it introduced systems with up to 3 memory channels as opposed to just two.  So how much memory do you need now?  To get peak performance from your workstation, you need to think in multiples of 3, not 2.  That means not 4GB and certainly not 3GB.  The numbers that work best are identified in the table below.

 

Total memory (GB) by DIMM size and number of DIMMs:

Number of DIMMs:        3     6     9    12    18
2GB DIMMs (total GB):   6    12    18    24    36
4GB DIMMs (total GB):  12    24    36    48    72
8GB DIMMs (total GB):  24    48    72    96   144

 

 

Ok that is a lot of choices.  Which one is most likely to deliver the best workstation experience when you have three memory channels and why?

As in the two-channel days, the best experience is achieved when you populate DIMM slots evenly.   The answer will vary depending on whether you have a single-processor workstation or a dual-processor digital workbench.

If you have a single-processor entry-level workstation, you will want to configure your system with 6, 12, or 18GB if you are using 2GB memory sticks.  If you are using 4GB sticks, think of 12, 24, and 36GB.  And if you are really thinking about rich memory configurations, focus on 24, 48, and 72GB.  This assumes you have up to nine DIMM slots.

If you are using a dual-processor digital workbench and you are mega-tasking through a number of complex tasks, you may want to consider the following sweet spots.

With 2GB memory sticks, think of 12, 24, or 36GB.  With 4GB memory sticks, think of 24, 48, and 72GB, and with 8GB memory sticks, think of 48, 96, and 144GB.  Again, the goal is to populate the memory channels with the same memory sizes and speeds; when you do that, you are most likely to see the best performance.
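If you would rather compute the sweet spots than memorize them, here is a small Python helper that lists balanced configurations (identical DIMMs, spread evenly across the three channels per socket). The slot counts are assumptions matching typical boards of this generation:

```python
# List balanced memory configurations for a triple-channel Xeon 5500
# platform: identical DIMMs, populated evenly across all channels.
# Slot counts below are assumptions, not a specific board's spec.

def balanced_configs(sockets, slots_per_socket, dimm_sizes_gb=(2, 4, 8)):
    channels = 3 * sockets                   # 3 memory channels per socket
    max_dimms = sockets * slots_per_socket
    configs = set()
    for size in dimm_sizes_gb:
        # Populate 1, 2, 3... DIMMs per channel, identically everywhere.
        for per_channel in range(1, max_dimms // channels + 1):
            dimms = per_channel * channels
            configs.add((dimms, size, dimms * size))
    return sorted(configs, key=lambda c: c[2])

for dimms, size, total in balanced_configs(sockets=2, slots_per_socket=6):
    print(f"{dimms:2d} x {size}GB = {total:3d}GB")
```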

If you are wondering which kind of workstation is best for the work you do, visit the Intel workstation technology page and use the workstation selection tool.  It can be found at www.intel.com/products/workstation/processors

At Intel, when we think of scaling performance forward, we think of one word: evolution, not revolution.

 

By evolution we mean developing high-performance computing solutions that offer the balance your applications require in order to deliver the best performance they can.  We do not maximize processor performance without matching it with the necessary memory capacity, bandwidth, and system I/O.  We match these important components of performance to ensure the data is where it needs to be, when it needs to be there, so it can be quickly and efficiently turned into actionable information.  We maximize your performance by minimizing your latencies.

 

Maximize your performance today, simplify your software development needs, and scale your performance forward as newer microarchitectures debut.

 

Seamless performance – bigger science – that is what we help you achieve faster than ever before.

 

To learn more about our approach to delivering highly effective HPC processors and software tools come to the Sun HPC Virtual Tradeshow on September 17th 2009 starting at 8am PDT.  In the virtual event attend the Intel presentation on “Accelerating Your Applications And Scaling It Forward” by Wes Shimanek & Dr. Nash Palaniswamy at 10:30am PDT.

Hi All,

 

You found the Intel Xeon Workstation Sweepstakes!

 

 

Click HERE to start the quiz and submit your entry today!

 

http://communities.intel.com/servlet/JiveServlet/download/2821-32-2552/URL%20card%20front_small.png

 

Good Luck to all.
