The Data Stack

6 posts authored by tstachura

Intel Ethernet Server Cluster Adapters support iWARP (RDMA over Ethernet), which enables high-performance, low-latency networking. But to take advantage of it, applications must be written to use an RDMA API. Many HPC applications are already enabled for RDMA, but most data center applications are not. A “hello world” programming example on the Intel site provides good guidance on how this can be done, and the OpenFabrics Alliance is now providing a unique opportunity to learn how to code to an RDMA interface by hosting an RDMA programming training class. It is definitely worth looking into if you are an application writer who is familiar with the Sockets interface and interested in how to take advantage of iWARP.
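For a flavor of what coding to an RDMA interface involves compared with plain Sockets, here is a minimal client-side sketch against the OpenFabrics librdmacm API (rdma_getaddrinfo, rdma_create_ep, rdma_post_send). To be clear, this is not the Intel “hello world” example itself; the server address, port number, and message buffer are placeholders made up for illustration, and error handling is pared down.

```c
/*
 * Minimal iWARP/RDMA client sketch using librdmacm (OpenFabrics
 * rdma_cm/rdma_verbs API).  Illustrative only: the address and port
 * below are placeholders, error handling is abbreviated, and a
 * matching server must have a receive buffer posted.
 *
 * Typical build: gcc rdma_hello.c -o rdma_hello -lrdmacm -libverbs
 */
#include <stdio.h>
#include <string.h>
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

int main(void)
{
    struct rdma_addrinfo hints, *res = NULL;
    struct ibv_qp_init_attr qp_attr;
    struct rdma_cm_id *id = NULL;
    struct ibv_mr *mr;
    struct ibv_wc wc;
    char msg[] = "hello, iWARP";
    int ret;

    /* Resolve the remote RDMA endpoint (placeholder host and port). */
    memset(&hints, 0, sizeof hints);
    hints.ai_port_space = RDMA_PS_TCP;
    if (rdma_getaddrinfo("192.168.1.10", "7471", &hints, &res)) {
        perror("rdma_getaddrinfo");
        return 1;
    }

    /* Create a connection identifier plus queue pair on the RNIC. */
    memset(&qp_attr, 0, sizeof qp_attr);
    qp_attr.cap.max_send_wr = qp_attr.cap.max_recv_wr = 1;
    qp_attr.cap.max_send_sge = qp_attr.cap.max_recv_sge = 1;
    qp_attr.sq_sig_all = 1;
    if (rdma_create_ep(&id, res, NULL, &qp_attr)) {
        perror("rdma_create_ep");
        return 1;
    }

    /* Register the buffer so the adapter can DMA it without copies. */
    mr = rdma_reg_msgs(id, msg, sizeof msg);
    if (!mr) {
        perror("rdma_reg_msgs");
        return 1;
    }

    if (rdma_connect(id, NULL)) {
        perror("rdma_connect");
        return 1;
    }

    /* Post the send work request, then reap its completion. */
    rdma_post_send(id, NULL, msg, sizeof msg, mr, 0);
    while ((ret = rdma_get_send_comp(id, &wc)) == 0)
        ;
    printf("send completed, status=%d\n", ret < 0 ? -1 : wc.status);

    rdma_disconnect(id);
    rdma_dereg_mr(mr);
    rdma_destroy_ep(id);
    rdma_freeaddrinfo(res);
    return 0;
}
```

The key difference from Sockets is visible here: buffers are registered with the adapter up front, and data movement happens through posted work requests and completions rather than read()/write() calls through the kernel.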

[Image: Intel Ethernet stop sign]

SC is NEXT WEEK in New Orleans! In the Intel booth, we’ll be showing an HPC in the Cloud demonstration (a very, very cool one that I can’t give out details on just yet) that uses 10GbE iWARP for the local cluster and 10GbE for the “cloud” cluster. We will also have a static display of our products, including our new single-chip dual-port 10GBASE-T device (codename: “Twinville”) developed for LOM (LAN on Motherboard). And there will be several in-booth presentations from our technology partners; find the schedule online. If you are attending SC, please stop by our booth (look for the Intel® Ethernet stop sign), and you can enter our contest to win a Google TV!

At IDF 2010 in San Francisco, we had the opportunity to demonstrate a nice use case for HPC in the Cloud: when you have tapped out your local cluster resources, provision the excess work to the Cloud. 10GbE iWARP was used for the local cluster as the performant, low-latency fabric. Mainstream 10GbE was used in the Cloud because it provides the dynamic virtualization and unified networking features required for virtual data centers. See the class I presented on HPC in the Cloud networking; it covers some additional usage models and some key features required for cloud networking.

[Image: HPC in the Cloud]

ISC 2010 (International Supercomputing Conference) just wrapped up with another update of the Top500 (www.top500.org), where the top 500 supercomputers in the world are ranked and listed twice a year. In terms of the fabric used for these supercomputers, InfiniBand continues to be the dominant choice for performance. But two new 10 Gigabit Ethernet (10GbE) clusters were added this time, and both use iWARP: #102 at Purdue University is showing 52.2 TF with 993 nodes, and #208 at HHMI (Howard Hughes Medical Institute) is delivering 35.8 TF with 500 nodes.


The HHMI cluster uses Intel’s NetEffect 10GbE NICs, which implement iWARP; learn more at www.intel.com/go/ethernet and click on the iWARP link. Also of note is the impressive Top-100 efficiency of the HHMI iWARP cluster, #84 at 84.1%, exceeding the efficiencies of many 40Gb QDR InfiniBand clusters and other custom fabric solutions. These are good examples of how Ethernet continues to evolve into a strong performer in the HPC world.
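For a back-of-the-envelope read on those numbers (my own arithmetic, using the standard Top500 definition of efficiency as measured Linpack performance Rmax over theoretical peak Rpeak):

```latex
% Top500 efficiency, and what 84.1% implies for the HHMI cluster
\[
  \eta = \frac{R_\mathrm{max}}{R_\mathrm{peak}}
  \;\Rightarrow\;
  R_\mathrm{peak} \approx \frac{35.8\ \mathrm{TF}}{0.841} \approx 42.6\ \mathrm{TF},
  \qquad
  \frac{35.8\ \mathrm{TF}}{500\ \mathrm{nodes}} \approx 71.6\ \mathrm{GF/node}.
\]
```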

There is more and more buzz around low-latency Ethernet these days, coincident, no doubt, with the growth of the 10 Gigabit Ethernet market.  Two events worth highlighting if you want to hear the latest and greatest…


  • February 2nd: The Ethernet Alliance is hosting a TEF (Technology Exploration Forum) next week that includes a panel on low-latency Ethernet. I’ll be talking about iWARP over TCP and joining the panel discussion on why and where we need low-latency Ethernet.

  • March 14-17th: The OpenFabrics Alliance is having its annual workshop in Sonoma where low-latency 10 Gigabit Ethernet will be included in many of the sessions.  Expect good presentations and discussions as Enterprise Data Center and Cloud are key focus areas for this year’s workshop.


10 Gigabit iWARP @ SC’09!

Posted by tstachura Nov 19, 2009

What is iWARP?  (Click here to find out)


Ethernet Tom here. I’ve recently joined the Intel Ethernet group, where I market Ethernet products in the HPC, financial, and cloud verticals.


And very relevant to HPC: I’ve been busy getting things lined up for Supercomputing ’09 and wanted to share what we have happening:

  • 3 demos:
    • Intel Booth:  Two 6-node clusters (96 total cores!) running NYSE’s Data Fabric* middleware – showing iWARP vs. non-iWARP
    • Supermicro Booth:  4-node cluster running Fluent
    • EA Booth:  4-node cluster running Linpack in a converged Ethernet environment
  • 5 presentations:
    • Data Transportation at iWARP Speed – Feargal O’Sullivan – NYSE Technologies
    • Memory Virtualization over iWARP for Radical Application Acceleration – Tom Matson – RNA Networks
    • A 10 Gigabit Ethernet iWARP Storage Appliance – Paul Grun – System Fabric Works
    • CD-adapco Benchmarking Performance using Intel iWARP – William Meigs – Intel Corporation
    • iWARP – What & Why? – Tom Stachura – Intel Corporation


If you are here today (final day of SC’09!), stop by and check it out.  If not, I’ll come back later with some video links.
