Noise to signal? Isn’t it supposed to be signal to noise ratio?  When you’re talking 10 Gigabit BASE-T, you’re talking noise to signal.  10 Gigabit BASE-T has been described to me as “It’s not like looking for a needle in a haystack, it’s like looking for the right snowflake in a snow storm.  At night.” Where does all that noise come from? It comes from the cable next to the cable your data is trying to travel across, it comes from wireless signals, and it comes from outer space.  It even comes from the other wires in the same cable.  With all this noise, how does all that analog chaos get transformed into digital clarity of 1s and 0s?

 

Lots and lots of processing. In our current generation of 10 Gigabit BASE-T products we use PHYs with 5 channels of DSPs, plus plenty of dedicated analog processing silicon, to turn that snow storm of waveforms into nice, clear digital Ethernet frames. Far End Cross Talk (FEXT), disturbers, Near End Cross Talk (NEXT), 8/10 encoding: it all gets to be a jumble. FEXT is signal coupled from one channel onto a neighboring channel, measured at the far end of the cable. NEXT is the same coupling, except measured at the near end, on the same side as the transmitter. Disturbers are other noise sources outside of the cable.

The 8/10 signaling is how the data is encoded to help make sure there is a roughly even mix of 1s and 0s in the data stream. It also helps ensure the quality of the data by periodically transmitting a checksum that the link partner can use to validate what it has received. If your data stream is highly biased, like all 1s or all 0s, the signal processing can get “tone deaf,” losing its reference for what a 1 looks like and reading all the data as 0s. Unless the average voltage is centered, you can get signal droop, which makes it hard to tell a weak 1 from a strong 0. The extra bits do take up some space on the wire, but things like clock recovery make them worth the overhead. It just means that to reach your advertised speed, your baud rate must be higher. For example, a 64/66 10G serial signal has a clock rate of 10.3125 G/sec but transmits data at 10G/sec. Even then, you can’t transfer a full 10 gigabits of your own data per second; there will always be space on the wire taken up by other things (TCP/IP headers are a big one). But those extra pieces help keep 10G copper stable.
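To make that baud-rate arithmetic concrete, here is a minimal sketch in Python, assuming nothing beyond the encoding ratios mentioned above (the helper function name is just for illustration):

```python
# Back-of-the-envelope sketch (illustrative only) of encoding overhead:
# line rate = data rate * (coded bits / payload bits).

def line_rate_gbaud(data_rate_gbps, payload_bits, coded_bits):
    """Signaling rate needed on the wire once encoding overhead is added."""
    return data_rate_gbps * coded_bits / payload_bits

# 64b/66b: 10 Gb/s of data needs 10 * 66/64 = 10.3125 Gbaud of signaling.
print(line_rate_gbaud(10.0, 64, 66))   # 10.3125

# 8b/10b carries 25% overhead, so the same 10 Gb/s would need 12.5 Gbaud.
print(line_rate_gbaud(10.0, 8, 10))    # 12.5
```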

 

Time to dispel a couple of 10 Gigabit BASE-T rumors I’ve heard over the years.

 

First up, bending the cable. I was at a trade show and a gentleman came up and said to me, “That 10 Gigabit stuff is for the birds, you can’t even bend the cables without dropping link!” I was covering for another employee who was out sick, so I wasn’t warned of the rowdiness of this show. “I think you’re mistaken, sir,” I said. He smiled, reached behind my demo, and grabbed the cable. He bent it over on itself, making the straight run into an O shape. He pushed with his thumb and made it into more of an I shape. He let it go and walked over to the console screen, expecting error message after error message of disconnected link. To his shock (and my amusement), nothing happened. “Wow,” he mumbled. He took my card and left the show floor, all the while being heckled by his travel buddies. Unless you damage the cable, I’ve never seen bending a cable affect link. And I’m not gentle with my cables.

 

Second, cell phones. I have heard concerns about answering your cell phone in a datacenter and taking your whole network offline. There is a kernel of truth to this one, since the signal bands of cell phones and 10G cables do get dangerously close to one another. But it’s just another disturber source, albeit a powerful one. Modern cell phones and modern 10 Gigabit BASE-T are both designed to use as little power as needed to reach the end station, so you would have to be pretty close, with a very powerful cell phone, to put a ton of noise onto the wire. And we do test cell phone interference, but with good cables you shouldn’t see any issues. Use bad cables and things could get ugly. Good cables include shielding to protect the signal wires from disturbers. Those DSPs can only filter out so much; an investment in quality cables is an investment in the quality of your data.


I bet at least half of you will now go to your 10 Gigabit BASE-T installs and make a call from right next to your servers to see if it drops link. If you have Intel® Ethernet and good cables, I think you’ll have a phone call and link all at the same time.

 

Thanks for using Intel® Ethernet, and special thanks to Sam J for his help double checking my technical details.

Until recently, Ethernet grew to dominate enterprise and Internet networking applications without much consideration for packet latency. In applications like email or video delivery, an extra microsecond or two doesn’t have much impact. But many emerging datacenter applications need low latency networks. These include financial trading, real-time data analytics, and cloud-based high-performance computing. In addition, many web service applications can spawn a large amount of east-west traffic, where the latency of these many small transactions can build up to unacceptable levels.
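As a rough illustration of how that buildup happens, here is a toy calculation with assumed (not measured) numbers:

```python
# Hypothetical figures, purely to show how per-hop latency compounds when one
# user request fans out into many dependent east-west transactions.

hop_latency_us = 2.5        # assumed per-switch forwarding latency, in microseconds
hops_per_request = 4        # assumed path length across the fabric
dependent_requests = 20     # assumed number of serialized back-end calls per user action

network_latency_us = hop_latency_us * hops_per_request * dependent_requests
print(network_latency_us)   # 200 us of pure network delay before any server-side work
```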

 

The Intel® Ethernet switch product family can deliver the industry’s lowest layer 3 forwarding latencies while providing a large number of 10GbE and 40GbE interfaces. Intel’s latency numbers are less than half those of traditional layer 3 Ethernet switches. To deliver this performance, Intel uses several key technologies:

 

Cut-Through Switching: A switch in cut-through mode starts transmitting a data packet before it has completely received that packet. Compare this to store-and-forward switching, where a packet must be completely received by the switch before it is forwarded to its next destination. Store-and-forward switches simply can’t deliver low enough latency for the data center.
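To put rough numbers on the difference, here is a minimal sketch of the receive delay each approach pays per hop, assuming a 10GbE line rate and typical frame sizes:

```python
# A store-and-forward switch must take in the whole frame before forwarding,
# while a cut-through switch only needs enough of the header to decide where
# the frame goes. Frame and header sizes here are assumed for illustration.

LINE_RATE_BPS = 10e9   # 10GbE

def serialization_delay_ns(nbytes):
    return nbytes * 8 / LINE_RATE_BPS * 1e9

print(serialization_delay_ns(1518))   # ~1214 ns: full 1518-byte frame (store-and-forward)
print(serialization_delay_ns(64))     # ~51 ns: first 64 bytes, enough for a forwarding decision
```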

 

Terabit Crossbar Switch: In many cases, latency increases as a result of internal switch congestion. Packets get queued up behind others being sent through, causing delays that can be unacceptable for real-time applications. Intel uses a matrix-based crossbar switch that’s unique because of its capacity: 1 terabit per second. That amount of crossbar over-speed can greatly reduce internal switch congestion.
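A quick way to see what that over-speed buys, using an assumed port count (not a specification):

```python
# "Over-speed" here is the crossbar capacity relative to the aggregate bandwidth
# the ports can push into it. The port count is assumed for illustration.

crossbar_bps = 1e12            # 1 terabit per second, as described above
ports = 64                     # assumed configuration
port_rate_bps = 10e9           # 10GbE

overspeed = crossbar_bps / (ports * port_rate_bps)
print(overspeed)               # ~1.56x headroom over worst-case aggregate ingress
```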

 

Single Output Queued Shared Memory: Intel uses a proprietary SRAM technology that is fast enough to allow every input port to write into the same output queue simultaneously. Without this technology, there may be insufficient on-chip bandwidth for simultaneous output queue access, so chip architects have traditionally relied on combined input/output queued (CIOQ) memory architectures that build a set of virtual output queues into every switch input. This is a complex solution that is very difficult to scale to large port-count switches, and it adds blocking that increases packet latency through the chip.
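To see why the CIOQ approach scales poorly with port count, here is a toy comparison with an assumed switch radix:

```python
# In a CIOQ design each input keeps a virtual output queue (VOQ) for every output,
# so queue count grows with the square of the radix; a single output-queued design
# needs only one queue per output. The radix below is assumed for illustration.

ports = 64

cioq_voqs = ports * ports        # 4096 queues to build, arbitrate, and manage
output_queued = ports            # 64 queues

print(cioq_voqs, output_queued)
```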

 

High Packet-Rate Frame Processing: It doesn’t matter how fast you can enqueue and dequeue packets from shared memory if your frame forwarding pipeline can’t keep up. Intel employs special circuit technology that allows a single L2/L3/L4 frame processing pipeline to forward packets at full line rate even if they are minimum size packets arriving back-to-back on all ports simultaneously.  It does this while maintaining extremely low processing latencies under all forwarding conditions.
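For a sense of the packet rates involved, here is the standard worst-case arithmetic for 10GbE (the per-port figures are standard Ethernet math; the port count is assumed for illustration):

```python
# Minimum-size frames on 10GbE: 64-byte frame plus 8-byte preamble plus
# 12-byte inter-frame gap is the standard worst case for packet rate.

min_frame, preamble, ifg = 64, 8, 12
port_rate_bps = 10e9

bits_per_frame = (min_frame + preamble + ifg) * 8          # 672 bits on the wire per frame
pps_per_port = port_rate_bps / bits_per_frame
print(pps_per_port / 1e6)                                   # ~14.88 million packets per second

ports = 64                                                  # assumed
print(ports * pps_per_port / 1e6)                           # ~952 Mpps the pipeline must sustain
```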

 

By adding a high-performance frame processing pipeline, a terabit crossbar data path, and very fast packet memory to a low-latency cut-through switch, the Intel Ethernet Switch family chips deliver the latency that new mesh, cloud, and financial networking applications need.

http://www.youtube.com/embed/oYrlRWMbpO0

In a cloud environment, virtualization that stops at the server edge stops short of the potential of cloud computing. To operate a robust cloud environment, you need to not only virtualize but also unify your network and storage connections, so those resources can scale up and down with the needs of dynamic workloads. It’s this fully unified and virtualized combination that gives cloud greater flexibility and agility.

This is a key point made in a new Unified Networking video now appearing on YouTube. This lively animation, produced on behalf of Intel, provides a high-level look at the benefits of Unified Networking solutions that combine network and storage connections to enable better use of resources and better cloud performance. In animated graphics, the video shows that static network and storage connections can’t reallocate themselves to meet the dynamic needs of cloud workloads, so you end up wasting resources.

The video also provides a look at a data center built around Unified Networking over 10 Gigabit Ethernet. The Intel® Ethernet Converged Network Adapter used in this example provides more than just a wider pipe; it’s a smarter pipe. You don’t have to break a 10GbE pipe into smaller static connections when using Intel® Ethernet. It gives you the ability to dynamically reallocate bandwidth to provide the correct ratio of storage and network resources to meet the changing needs of individual workloads. The result? “You never have to over-engineer your data center to power your cloud,” the video tells us.

If you can spare 4 minutes to watch this video, you can potentially see a new way to look at Unified Networking and how it can help you avoid inefficient hardware allocation in your cloud environment.

 

 

 

For further details on this new way of looking at 10GbE connections, as outlined in the video, check out the Unified Networking content on our Cloud Usage Models site.

At Interop this week, we saw several examples of the industry moving toward the flat data center network and software defined networking. 

 

System latency important in flat networks

Flat data center networks require new, large core switches that absorb what were previously defined as the aggregation and core switch layers. Two such core switches were announced by Huawei* and Gnodal* at the show.

 

But one of the key ingredients in a flat data center network is the top-of-rack (ToR) switch that feeds these core switches. To reduce the cost and complexity of the high-bandwidth core switches, L3 forwarding and tunneling are being delegated to the ToR switches. This is forcing core switch companies to think about the performance that customers will experience across their entire data center fabric.

 

For example, Gnodal’s new core switch provides very low latency, so why surround it with ToR switches that have high L3 latency? That’s why the Intel® Ethernet FM6000 series drew so much attention from attendees at Interop: it provides the industry’s lowest L3 cut-through latency along with features such as advanced load balancing and network address translation (NAT). As a proof point, we ran a low-latency NAT demonstration using our FM6000 series along with our new Seacliff Trail ToR switch reference platform.

 

SDN is the new network OS

Software defined networking (SDN) was another hot topic at the show. As you may know, it promises to bring something like a standard OS for networking applications to run on. Our FM6000 series was highlighted in two SDN demos at Interop.

 

One was hosted by NEC*, which won “Best of Interop,” and the other was at the InteropNet OpenFlow Lab. It was clear from the comments I received that many end customers are interested in SDN but are waiting for the standards to become more mature.

 

A nice feature of our FM6000 series is the ability to divide the frame-processing pipeline and packet memory into separate partitions, one for SDN forwarding and one for traditional layer-3 forwarding. This allows network administrators to experiment with new SDN technology without disrupting normal network operations.

 

The trends on display at Interop confirm that Intel is on the leading edge of flat networking and SDN for future networks.

The Wired Blog had a chance to sit down with new Wired Ethernet General Manager Dawn Moore to ask her a few questions about being in charge of Intel® Ethernet. Links in the article are inserted by the editor.

--------------------------------------------------

Wired:  You’re the new GM of Wired Ethernet.  What’s your first couple of months been like?

Dawn:  As GM you’re expected to know and grow your business.  The best way I know to get familiar with all aspects of the business is to visit your key customers, listen to their feedback and be available during times of transition.  I knew a lot of our partners when I ran our NIC business, but meeting key embedded stakeholders was very insightful.  I’ve gotten a lot of frequent flier miles and met a lot of great people.  It’s been a rollercoaster, but an exciting one.

 

Wired:  Speaking of exciting, what made you want the GM position?

Dawn:  The people and the products.  Intel has such an array of talent that it’s easy to take them for granted.  We also have an amazing portfolio of products, from 1 Gigabit products like the I350 to world firsts like the X540.    The amazing thing is that I think we’ve just scratched the surface with our 10 Gigabit line and we’ve been doing 10 Gigabit for almost 10 years.

 

Wired:  What have you learned on your visits?

Dawn:  While we aren’t perfect, our customers really appreciate our support.  We stand by our products.  We post our datasheets and spec updates for the public to download without registration.  We have an array of testing facilities like the industry-leading X-Lab.  I think there are ways to keep getting better at it, and I see social media as a way to grow our support without breaking the bank.  I think people take Intel® Ethernet seriously, which is a nice change.  There are still a lot of places where I go and people say “Intel does Ethernet now?”  We founded it back in the day and have been doing it for 30 years.  Some of our competitors haven’t even been around as a company for half that long.

 

Wired:  Intel seems to have acquired a lot of networking technologies lately. How does this affect Wired Ethernet?

Dawn:  Intel has always invested in technologies that bring data to and from the CPU.  Intel will always do what it needs to do to keep that processor running at its maximum potential so you can get a great return on your CPU investment.  We’ve done some housekeeping inside the company that lets all these new teams work together to really innovate while letting the mature businesses keep executing.  Wired Ethernet is now the big sibling to all these new businesses, and we can learn as much from them as they do from us.  But keeping Wired Ethernet as a separate core business means having the best of both worlds: execution without distractions, and inter-team creativity to make the next generation even more amazing.  This sets the stage for a collaborative environment that helps Intel continue to bring compelling value propositions to our customers.

 

Wired:  So what can you tell us about those next generation products?

Dawn:  All your readers got their NDAs signed? <laughs>

 

Wired:  Probably not.  What key lessons from your old position are you taking with you as you move up to GM?

Dawn:  People first, and never be afraid to keep innovating.  During my time as NIC leader, we moved from doing a dozen or so new cards a year to over 50 new designs a year.  We did this just by looking at our values and being willing to try things a new way.  By looking at your values you know what you should never change, like our commitment to quality, and what you can change, like some of our processes.  Process should never be used as a weapon against those who want to innovate.  I’m blessed to be working with a talented group of people, so a lot of my time as NIC leader went to pointing the direction, removing roadblocks, and letting the team execute.

 

Wired:  So once the rollercoaster slows down can you come back for another visit?

Dawn:  I’ve been a supporter of the Wired blog since it was founded almost three years ago, so I’d be happy to visit again later in the year.

 

Wired:  Thanks for your time.

Dawn:  My pleasure.

We’re just days away from Interop, where Intel will put an emphasis on data center switching with a new switch design, a load balancing demonstration and an expanded live software defined networking (SDN) demo.

 

Interop runs from May 8-10 in Las Vegas, and is probably the premier enterprise networking convention today.

 

Stop by booth #543 to see the new Seacliff Trail reference platform designed for top-of-rack data center applications.  The 1RU-high switch has 48 SFP+ 10GbE ports and four QSFP 40GbE ports, and delivers less than 400ns L3 latency.  It supports all of the critical data center protocols and services, and it’s also interesting because its all-Intel design includes both an Intel® 82599 10 Gigabit Ethernet controller and a Crystal Forest-based control plane processor.
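For scale, a quick tally of the front-panel bandwidth those port counts add up to (the full-duplex doubling is just the usual convention for quoting switching capacity, assumed here):

```python
# Arithmetic on the port configuration quoted above.

sfp_plus_ports, sfp_plus_gbps = 48, 10
qsfp_ports, qsfp_gbps = 4, 40

front_panel_gbps = sfp_plus_ports * sfp_plus_gbps + qsfp_ports * qsfp_gbps
print(front_panel_gbps)        # 640 Gb/s of front-panel bandwidth in 1RU
print(front_panel_gbps * 2)    # 1280 Gb/s counting both directions (full duplex)
```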

 

Also in the booth will be a great demo of low latency network address translation (NAT).  The Intel® Ethernet Switch FM6000 FlexPipe™ frame processing pipeline provides advanced NAT services with very low latency, which is essential in data center networks.

 

We’ve also contributed our Barcelona 10GbE ToR switch reference platform for an SDN demo that is part of the Interop Labs (iLabs). The demo shows how Barcelona performs when it is controlled by an OpenFlow software control plane.  This is similar to the demo we had in our booth at the recent Open Network Summit, but this time the switch is in a multivendor environment.

 

There is plenty to see at the Intel booth that points toward great solutions for your network both today and in the future.  I hope to see you there.
