Sometimes you get so deep into something that you don't realize how crazy it is until you take a step back. Like most technology companies, Intel has an inherent love of acronyms. The cacophony of standards bodies, advanced technologies, and the intense rate of change in our industry makes abbreviation a necessity just to communicate clearly and briefly. However, while I am at least as much of a technophile as most of the folks in the technology jungle, even I sometimes run into an acronym wall. To help myself and others, I thought it might be a good idea to decode one of the newer sets of network technologies that I work closely with and to decipher some of the associated names and acronyms that come along with it.

 

10 Gigabit Ethernet: It's here, it's real, and it's growing fast.

 

Ethernet (IEEE 802.3) has evolved over the years from a new standard linking computers together at slow rates, moving from 10 Megabits per second (Mbps), to 100 Mbps, to 1 Gigabit per second (Gbps), and, a few years ago, to 10 Gbps of unidirectional throughput. Over time there have been several physical connection types for Ethernet. The most common is copper (with Cat 3/4/5/6/7 cabling as the physical medium), but fiber has also been prevalent, along with some more esoteric physical media types (such as BNC coax). Until very recently, the most common 10GbE adapter has been optical only, due to the difficulty of making 10GbE function properly over copper cabling.

 

But this post isn't meant to discuss the past; it's meant to decode the present and future of 10 Gigabit Ethernet and the variety of flavors that are available. Below I'll cover a number of acronyms for 10GbE IEEE standards that are often lumped together as '10 Gigabit' and discuss some of the differences and usages for each. After that, I'll also try to clear up some of the confusion about 'form factor' standards for optical modules (which are separate from IEEE) and some of the terms and technologies in that realm:

 

 

10GBase-T (aka: IEEE 802.3an):

 

 

This is a 10GbE standard for copper-based networking deployments. Networking silicon and adapters that follow this specification are designed to communicate over Cat 6a/7 copper cabling at lengths up to 100 meters (runs over standard Cat 6 are limited to shorter distances, around 55 meters). To enable this capability, a 10GbE MAC (Media Access Controller) and a PHY (physical layer device) designed for copper connections work in tandem.

 

 

10GBase-T is viewed as the holy grail for 10GbE because it works over the structured copper cabling infrastructure that is already the most widely deployed. In exchange for that flexibility, 10GBase-T trades off higher power and higher latency.

 

 

10GBase-KX4 (aka: IEEE 802.3ap):

 

 

This is actually a pair of backplane standards (10GBase-KX4, which uses four lanes, and 10GBase-KR, which uses a single serial lane) targeted at the use of 10GbE silicon in backplane applications, such as blade server designs. They are specifically designed for environments where lower power is required and short distances (up to only about 40 inches) are sufficient.

 

 

10GBase-SR (aka: IEEE 802.3ae):

 

 

This specification covers 10GbE over optical cabling at short range (SR = short reach) with multi-mode fiber. Depending on the kind of fiber, SR in this instance can mean anywhere from 26 to 82 meters on older (50/62.5 micron) fiber. On the latest fiber technology, SR can reach distances of 300 meters. To physically support the cable connection, any network silicon or adapter that supports 10GBase-SR needs a 10GbE MAC connected to an optics module designed for multi-mode fiber. (We'll discuss optics modules in more depth further down in this post.)

 

 

10GBase-SR is often the standard of choice inside datacenters where fiber is already deployed and widely used.

 

 

10GBase-LR (aka: IEEE 802.3ae):

 

 

LR is very similar to the SR specification except that it covers long-reach connections over single-mode fiber. Long reach in this spec is defined as 10 km, but distances beyond that (as much as 25 km) can often be achieved.

 

 

10GBase-LR is used sparingly and is really only deployed where very long distances are absolutely required.

 

 

10GBase-LRM (aka: IEEE 802.3aq):

 

 

LRM stands for Long Reach Multimode and allows distances of up to 220 meters on older standard (50/62.5 micron) multi-mode fiber.

 

 

10GBase-LRM is targeted for those customers who have older fiber already in place but need extra reach for their network.

 

 

10GBase-CX4 (aka: IEEE 802.3ak):

 

 

This 10GbE standard uses the same CX4 connector and cabling used in InfiniBand* networks. CX4 is a lower power standard that can be supported without a standalone PHY or optics module (the signals can be routed directly from a CX4-capable 10GbE MAC to the CX4 connector). Because of this physical specification, CX4-based 10 Gigabit provides lower latency than comparable 10GBase-T copper PHY solutions. With passive (copper) CX4 cables, the nominal distance you can expect between your 10GbE links is roughly 10-15 m. There are also amplified 'active' (but still copper) cables with nominal distances of up to about 30 m.

 

 

Below is an image of a standard CX4 based socket that would be on a 10GBase-CX4 NIC:

 

 

 

 

There are also what are referred to as 'active optical' cables for CX4, which have an optics module built into each cable termination while the cable body itself is fiber. This kind of active design increases cable reach and improves flexibility (fiber is thinner than copper pairs) but also increases cost. These active cables can extend reach up to 100 m.

 

 

Intel recently released its own series of active optical CX4 cables.

 

 

For short distances (such as inside a rack in the datacenter), CX4 offers one of the lowest cost ways to deploy 10GbE from switch to server. Because of its design, CX4 also achieves very low latencies.

 

 

</end of IEEE standards ramble>

 

 

Ok, so we've summarized the majority of the IEEE 10GbE standards. But the immediate question arises: "Why are there so many?" Is the IEEE standards body for 10GbE just throwing out standards for every possible niche application? The answer is no. For any new IEEE PHY interface standard to be approved, it must pass several criteria, including "distinct identity" and "broad market potential". While all of these standards certainly won't apply to any given institution's network, they all meet real market needs.

 

 

X2, XFP, SFP+... say what?

 

 

A final mystery that I've alluded to above has to do with the various optical module form factors that are available for 10GbE. XENPAK, X2, XPAK, XFP and SFP+ are standard optics module form factors used by both switch and NIC vendors in the industry. The modules that go along with 10GbE networking products are an interesting beast: they are not specified by IEEE, but are standardized by a group of industry participants through what is known as a Multi-Source Agreement (MSA).

 

 

XENPAK, XPAK and X2 are the older module standards originally used for 10GbE, followed by XFP, which shrunk the form factor of the actual module as well as the fiber cable pairs. SFP+ is a newer form factor that is now gaining momentum with switch and NIC vendors. An SFP+ optics module can use the same fiber pairs used with XFP (no new fiber cable needed), but the form factor of the cage in the switch or NIC, as well as the optics module itself, is smaller. The key advantage of SFP+ is that the new form factor can drive lower costs, lower thermals, and higher densities at the switch.

 

 

Here is an image of an older X2 optics module:

 

 

 

 

And here is a comparison of the size of XFP (right) relative to SFP+ (left):

 

 

 

 

The optics modules are driven by a low power interface from the 10GbE MAC: XAUI (for X2 modules), XFI (for XFP modules), and SFI (for SFP+ modules). These interfaces are generally supplied directly from the 10GbE MAC to the module cage. Each module MSA agrees on not only the form factor of the module itself but also the electrical specification of the driver interface the module accepts from the MAC.

 

 

The key thing I want to hammer home here is that the IEEE specification (such as 10GBase-SR) is separate from the module form factor used.

 

 

For example, you can have a short-reach optical NIC that uses X2, XFP, or SFP+. So asking for an "SFP+ NIC" isn't actually specific enough, because that could mean a lot of different things. You'd have to ask for a 10GBase-SR NIC with SFP+ optics.
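To make that distinction concrete, here is a minimal Python sketch. The part combinations below are purely illustrative (not an actual product list); the point is that the IEEE spec and the module form factor are independent axes, so you have to name both.

```python
# Illustrative sketch: the IEEE PHY spec and the module form factor are
# independent axes, so a port is only fully described by naming both.
# The combinations below are examples, not a product catalog.
from dataclasses import dataclass

@dataclass
class TenGigPort:
    ieee_spec: str      # e.g. "10GBase-SR", "10GBase-LR", "10GBase-CX4"
    form_factor: str    # e.g. "X2", "XFP", "SFP+", "CX4 connector"

ports = [
    TenGigPort("10GBase-SR", "X2"),
    TenGigPort("10GBase-SR", "XFP"),
    TenGigPort("10GBase-SR", "SFP+"),   # same IEEE spec, three different modules
    TenGigPort("10GBase-LR", "SFP+"),   # same module form factor, different spec
]

# Asking for an "SFP+ NIC" matches more than one of these, so you also have
# to name the IEEE spec before you order anything.
sfp_plus_only = [p for p in ports if p.form_factor == "SFP+"]
print(f"{len(sfp_plus_only)} different 'SFP+' ports in this tiny example")
```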

 

 

SFP+ Direct Attach:

 

 

Now that I've thoroughly confused everyone, I'll take it one step further. Not only can each module form factor be used with different IEEE MAC specifications, but a module cage doesn't even need to be used for a fiber connection at all. An interesting example of using an 'optics' module form factor for a non-optical design is SFP+ Direct Attach.

 

 

SFP+ Direct Attach is similar in concept to CX4 but provides a bit more flexibility. Normally, you may have a switch or NIC that is designed to support the addition of SFP+ optics modules for a 10GbE fiber connection. Direct Attach allows a passive twin-axial (two-pair copper) cable to be plugged directly into the SFP+ cage (in place of an optical module) to carry the serial signal from the MAC directly over the cable to another SFP+ enabled NIC or switch.

 

 

Again, the downside is that without either a standalone PHY or an optics module to send the signal over a long distance, a passive SFP+ Direct Attach cable has a reach in the ~10-15 m range. The real advantage of SFP+ Direct Attach over CX4 is that on the switch side the SFP+ module design allows higher density switches than CX4 can provide.

 

 

For a top-of-rack switch, SFP+ Direct Attach will likely provide excellent cost, power and latency characteristics and still have enough reach to be very feasible inside the rack.

 

 

10GbE - The Infrastructure is Ready!

 

 

I hope that I've lifted a little bit of the fog that surrounds the 10GbE market and the related technologies. The last thing I want to leave you with is the fact that 10GbE infrastructure is now starting to roll into the mainstream. CX4 switches are broadly available in the market today, and SFP+ designs for both optical modules and Direct Attach connections have been demonstrated and will be rolled out very soon by various vendors.

 

 

Intel, along with other vendors in the marketplace, is already selling a wide variety of NICs and silicon to meet the various form factor and standards-based market needs I listed above.

 

 

After years of anticipation, 10GbE is finally hitting its stride. Next stop... 100GbE...

 

 

 

Today, Intel launched 50W low power versions of the 45nm Quad-Core Xeon processors (the L5400 series).

The 2 new SKUs are listed below:

 

Quad-Core Xeon L5420 2.50 GHz, 12MB L2, 1333MHz

Quad-Core Xeon L5410 2.33 GHz, 12MB L2, 1333MHz

 

These products offer IT and business users 2 primary benefits:

 

  • 45nm 50W quad-core brings 25% improved performance over previous generation 65nm 50W quad-core processors

  • They also run 30W cooler than mainstream 80W quad-core processors while delivering the same performance at the same frequency (a rough rack-level power illustration follows below).
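To put that 30W-per-socket delta in perspective, here is a back-of-the-envelope illustration. The socket count and rack density are assumptions I've picked for the example, not measured data:

```python
# Back-of-the-envelope illustration of the 30W-per-socket TDP delta.
# Socket count and rack density are assumptions, not measured data.
tdp_mainstream_w = 80      # mainstream quad-core TDP from the post
tdp_low_power_w = 50       # L5400 series TDP from the post
sockets_per_server = 2     # assumption: typical dual-socket Xeon server
servers_per_rack = 40      # assumption: illustrative rack density

delta_per_server_w = (tdp_mainstream_w - tdp_low_power_w) * sockets_per_server
delta_per_rack_w = delta_per_server_w * servers_per_rack
print(f"CPU power headroom saved per rack: {delta_per_rack_w} W")  # -> 2400 W
```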

 

 

We have seen strong interest in these 50W quad-core products, and I'd like to hear from you: where would you use low-power quad-core, and why?

So what has Intel been doing with NAND-based Solid State Disks (SSDs) since my blog on our next generation broadband video streaming demo (http://communities.intel.com/openport/blogs/server/2007/11/14/)? Two things: 1) we're close to launching Intel's SATA-based SSD products, and 2) we've been engaging you to get more details on your usage models and value propositions. In the last few months, there have been a number of announcements for SSDs in server and enterprise storage applications (e.g. EMC: http://www.emc.com/about/news/press/us/2008/011408-1.htm), including a number of small startups offering solutions targeted at server deployments. Based on my discussions with you and on what's going on in the industry, here's my view of the value of SSDs in servers and how that maps to server usage models.

 

As a person who typically focuses on end users, I find SSDs interesting because they weren't designed to specifically solve an end-user server problem. As I said in my previous blog, "because we could", SSDs largely exist "because they can". They are what Clayton Christensen would call a disruptive technology. As SSDs are considered for server-based applications, I look at how SSDs as a technology can provide greater value when replacing server hard drives (HDDs) or server memory, and then build possible usage models from there.

 

When comparing SSDs to HDD usage in servers, I start with the following:

 

Performance: SSDs can have much better random access performance as measured by higher IOPS, higher throughput and lower read/write latency. SSDs typically achieve at least 10 times the IOPS of HDDs, at least 2-3 times better random access read rates, and on the order of 10 times lower read and write latency than HDDs. For random access performance, most SSDs blow the highest performing 15K RPM hard drives away.
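To show why that IOPS gap matters in practice, here is a rough consolidation calculation. The per-drive figures and the IOPS target are assumptions chosen only to make the arithmetic concrete, not benchmark results:

```python
# Rough illustration of why the IOPS gap matters for drive counts.
# All figures are assumptions for the sake of arithmetic, not benchmarks.
hdd_iops = 300            # assumed random IOPS for a 15K RPM drive
ssd_iops = 10 * hdd_iops  # the "at least 10x" rule of thumb from the post
target_iops = 30_000      # assumed IOPS requirement for an IO-bound app

hdds_needed = -(-target_iops // hdd_iops)   # ceiling division
ssds_needed = -(-target_iops // ssd_iops)
print(f"{hdds_needed} HDDs vs {ssds_needed} SSDs to reach {target_iops} IOPS")
# -> 100 HDDs vs 10 SSDs under these assumptions
```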

 

Power: SSDs use less power, especially when compared to a disk that is active (i.e. spinning). Given that for most server-based applications the hard disk is always active, this is especially significant. My general observation is that SSDs typically use less than 1/5th of the power of an active HDD. Here they look to be a key technology for making data centers more power efficient.

 

Cost: When comparing cost per Gigabyte, SSDs are higher priced. Given this, SSDs today are largely being considered for applications where storage IO is the bottleneck - where many hard drives can be replaced with just a few SSDs.

 

SSDs can be compared to DDR memory with the same three value vectors:

 

Performance: Unlike the SSD-to-HDD comparison, memory has higher throughput and lower latency than an SSD. When comparing SSDs to memory for server usages, the primary consideration looks to be latency. SSD reads and writes are on the order of hundreds of microseconds, while memory reads and writes are typically less than 100 nanoseconds. Even so, for some applications (e.g. video on demand streaming), hundreds of microseconds of latency looks to be acceptable.
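Here is a quick, illustrative latency-budget check for the streaming case. The bitrate and chunk size are assumptions I've chosen for the example; the latency figures come from the rough numbers above:

```python
# Why hundreds of microseconds can be acceptable for video streaming:
# compare SSD read latency against how often a stream actually needs
# new data. All figures are assumptions for illustration.
ssd_read_latency_s = 200e-6    # hundreds of microseconds, per the post
dram_read_latency_s = 100e-9   # <100 ns, per the post
bitrate_bps = 8_000_000        # assumed 8 Mbps video stream
chunk_bytes = 256 * 1024       # assumed 256 KiB read per request

chunk_interval_s = (chunk_bytes * 8) / bitrate_bps   # ~0.26 s between reads
print(f"SSD is ~{ssd_read_latency_s / dram_read_latency_s:.0f}x slower than DRAM,")
print(f"but its latency is only {ssd_read_latency_s / chunk_interval_s:.4%} "
      f"of the time between chunk reads")
```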

 

 

Power: As with the HDD comparison, when comparing active power usage, SSDs draw much less power than DDR memory as measured in watts per gigabyte. How much less depends on how the application uses memory, but generally SSDs look to consume about 1/10th of the power.

 

 

Cost: Unlike the HDD comparison, when comparing cost per gigabyte SSDs are significantly lower priced than DDR memory. Generally, I start with NAND-based SSDs as being half the price of DDR-based memory. Depending on the size of the SSD and the technology (whether Single Level Cell (SLC) or Multi Level Cell (MLC)), the difference can be much greater.

 

 

One final vector to look at is the reliability of SSDs compared to hard disk drives and memory. Going just by the published MTBF numbers, SSDs look to be better than HDDs and just as reliable as memory. One area that generates confusion is how the write cycle limitations of NAND technology affect the lifetime (as measured by MTBF) of SSDs for server applications. Getting into the details on this is a good subject for a future blog, but based on discussions with you, I haven't encountered a server application where the write cycle limitation is the deciding factor in a deployment for SLC SSDs (at least for how we expect Intel's SSDs to perform). For many server applications, it's not the deciding factor for MLC SSDs either.
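As a teaser for that future blog, here is a back-of-the-envelope endurance calculation. Every figure in it (capacity, cycle count, write amplification, daily write volume) is an assumption for illustration, not an Intel specification:

```python
# Back-of-the-envelope write-endurance check (all figures are assumptions,
# not specifications) showing why write cycles are rarely the limiting
# factor for typical server workloads.
capacity_gb = 64                 # assumed SLC SSD capacity
erase_cycles = 100_000           # commonly quoted SLC endurance (assumption)
write_amplification = 2          # assumption: controller overhead factor
host_writes_gb_per_day = 200     # assumed sustained daily write volume

total_writes_gb = capacity_gb * erase_cycles / write_amplification
lifetime_years = total_writes_gb / host_writes_gb_per_day / 365
print(f"~{lifetime_years:.0f} years of life at {host_writes_gb_per_day} GB/day")
# -> roughly 44 years under these assumptions
```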

 

 

Using these value vectors, here are my generalizations for the SSD value for enterprise and portal applications:

  • Use SSDs for the server boot device. When compared to HDDs, SSDs enable faster boot (typically 30%), consume lower power, and are more reliable.

  • Use SSDs for high throughput, high IOPS, low latency application storage. If storage IO is the application bottleneck, replacing HDDs with SSDs shifts the bottleneck back to CPU utilization. Example applications include video streaming, search query, and OLTP.

  • Use SSDs for building a high performance storage tier. Many applications have hot and cold (or long tail) data. By creating a storage tier, the solution cost of a deployment can be reduced significantly. Example applications include using SSDs to improve performance in a NAS or SAN (e.g. what EMC calls Tier 0) or to create a high performance direct attached storage (DAS) solution (e.g. an SSD-optimized server). A minimal tiering sketch follows this list.

  • Consider SSDs as a lower cost alternative to placing application data in memory. Many applications create memory-based databases to achieve low latency access times. These applications create custom data structures, use RAM disks, or rely on caching through the OS (e.g. swap). For many IO-bound applications, memory is typically being used as a buffer for disk data. The lower latency and higher throughput of SSDs promise to require less memory for buffering while maintaining the quality of service objectives of the application.
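And here is the minimal tiering sketch promised above. The threshold, object names, and access counts are all illustrative assumptions, just to show the shape of a hot/cold placement policy:

```python
# Minimal sketch of the hot/cold tiering idea: place frequently accessed
# ("hot") data on the SSD tier and everything else on HDDs.
# Thresholds, names, and counts are illustrative assumptions.
def pick_tier(accesses_per_day: int, hot_threshold: int = 100) -> str:
    """Very simple placement policy: hot data goes to the SSD tier."""
    return "ssd_tier" if accesses_per_day >= hot_threshold else "hdd_tier"

# Example: a handful of objects with assumed daily access counts
objects = {"index.db": 5000, "recent_orders": 800, "archive_2005": 2}
placement = {name: pick_tier(count) for name, count in objects.items()}
print(placement)
# -> {'index.db': 'ssd_tier', 'recent_orders': 'ssd_tier', 'archive_2005': 'hdd_tier'}
```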

 

Bottom line for servers today: SSDs look to be cost effective for applications where storage IO throughput and low latency are key. They move the application bottleneck from IO back to CPU utilization. Get back to me on whether you agree and what additional usage models you're finding.

Intel launched the Intel® Xeon® Robo Brawl Game on 2/20. In just three weeks, thousands of users registered and played the game. I have received a lot of positive comments about the game, including the following feedback from Ashley on Intel's Software Blog.

"Other players seem to have an average of 3.5 kills per game and about 19,000 points. With the exception of Decepticon who seemingly has 4.99 kills per game, which is pretty much impossible. Just wondering if people are following the rules or have somehow found a way to break them making it unfair for the rest of the players."

 

This comment prompted me and our game developers to do some digging. Our game developers discovered a handful of individuals who found a way to "game" the system. I won't go into detail, but I can tell you they were able to update their robots' configurations to allocate extra Intel Xeon points to their robots, allowing them to defeat other robots with just one hit. Our developers have created a patch that essentially flags anybody who logs in with a "manipulated" configuration. We are taking the necessary action to notify current players who have used manipulated configurations that their accounts will be terminated and that they will be disqualified from the contest. Going forward, we will continue to monitor the game and terminate the accounts of flagged players. Additionally, we will be doing a full audit of the top ranked players before prizes are awarded to ensure fair play by the winners.

I want to thank Ashley for helping us reinforce "fair play" in the game. Please review the updated Official Rules of the contest at http://www.robobrawl.com/.

 

Have you been playing the game? Please share with us your thoughts about the game.

 

Regards-

 

Pam Didner, Intel Xeon RoboBrawl Program Manager


On the Road to "IDF" - China

Posted by whlea Mar 10, 2008

 

Hi everyone, I'm hitting the road in a couple of weeks for my first trip to China, visiting the Spring Intel Developer Forum in Shanghai. While I'm really excited to visit China, I'm also looking forward to seeing all kinds of new technology in action. I'll be adding to this blog during the event and would like to get more ideas from this community before I head out.

 

 

Do you have any special interests that I could report back on? Let me know your ideas and I'll try to capture what's happening at IDF to share with the Server Room community.

 

 

 

I recently found this simple animation that breaks down the Xeon processor family into bite-sized chunks and explains which Xeon-based servers are best suited to meet common IT and business needs.

 

I shared it last week when traveling with customers in Taiwan and it was well received.

 

What do you think of this video?

 

 

Hi! Since this is my first post to the server room, I thought I would introduce myself and give you a bit of background on who I am.

 

My name is Matt Chorman, and I am a validation engineer at Intel where I get to work with EPSD (Enterprise Products and Services Division) servers. My job is to test a wide range of products, from single processor servers up to Itanium2 "Big Iron" boxes. I'm an open source guy, and have been working with Linux in the enterprise for nine years. During recent years, I have also been running performance testing and tuning on all the servers that come through our lab.

 

I work in platform validation (which is an engineering role), but I have a unique perspective on servers: I worked in IT for a number of years, culminating in a system administration role for a mid-sized finance company. I faced many challenges there that are becoming the calling card of IT everywhere:

 

  • Heat in the server room.

  • Too many servers for the space we were allocated.

  • Client management difficulties.

  • Power usage of the clients.

 

These problems were directly tied to the lack of efficiency in the software we were using. However, there is a tradeoff when it comes to looking at the options for improving the speed of your environment:

 

  • How much would it have cost to hire a team of programmers to re-write this custom software to be more efficient?

  • How much would it cost to simply purchase a newer (and much better performing) server to compensate for this?

  • Were there things that I could have done to make the servers we had run more efficiently?

  • Where do you draw the line between employee complaints about the speed of a certain server and the cost of upgrading (i.e. employee efficiency)?

 

These are the types of questions IT folks everywhere are faced with, and unfortunately there are no black-and-white answers. However, these are the questions I'll be exploring in my blog. I may not be able to offer you answers, but perhaps I can give you some ideas you can use in your own organization. Maybe you'll give me some answers that I can use in my own role!

 

Also, like all of us here, I'll be happy to answer any questions you might have for me.  If I don't have the answers, I can probably find someone who can answer them for you.

 

Cheers, and happy computing!
