
Wired Ethernet

114 Posts authored by: dougb
dougb

Intel® Ethernet and Security

Posted by dougb Apr 16, 2014

There has been a very famous security issue in the news lately.  Others have done a great job explaining it, so I won't even try.  Some people are concerned that Intel® Ethernet products might be at risk.  Coming to this blog is a great first step, but there is a much better place for you to look: the Intel® Product Security Center.  It lists all the advisories that Intel is currently tracking.  A great resource, it has clear version and update suggestions for products that have issues.  It even has a mailing list option so you can get updates when they come out.  The one in the news is listed as Multiple Intel Software Products and API Services impacted by CVE-2014-0160, and (spoiler alert!) Intel Ethernet doesn't have any products listed.  If we did have any security-related issues, you would find them there.  I strongly suggest you add the Intel® Product Security Center to your bookmarks and sign up for the e-mail.  Vigilance is the first step to better security, and Intel tries to make it easier for busy IT professionals to stay informed.

Intel is pleased to announce the Intel® Ethernet Server Adapter X520 Series for the Open Compute Project.

 

Available in both single and dual-port SKUs, these adapters deliver a proven, reliable solution for deploying high-bandwidth, low-cost 10GbE network connections.  Increased I/O performance with Intel® Data Direct I/O Technology (DDIO) and support for intelligent offloads make this adapter a perfect match for scaling performance on Intel® Xeon® processor E5/E7 based servers.

 

The best-selling Intel® Ethernet Converged Network Adapter X520 Series is known for its high performance, low latency, reliability, and flexibility.  The addition of the Intel® Ethernet Server Adapter X520 Series for Open Compute Project to the family delivers all the X520 capabilities in an Open Compute Project (OCP) form factor.  OCP is a Facebook* initiative to openly share custom data center designs to improve both cost and energy efficiency across the industry.  OCP uses a minimalist approach to system design, reducing complexity and cost and allowing data centers to scale out more effectively.  By publishing the designs and specifications of this low-power, low-cost hardware, the project can reduce the cost of infrastructure for businesses large and small.

 

For more information on the Intel® Ethernet Server Adapter X520 for Open Compute Project, visit:  http://www.intel.com/content/www/us/en/network-adapters/converged-network-adapters/server-adapter-x520-da1-da2-for-ocp-brief.html

 

For more general information on the Open Compute Project initiative, visit:  http://opencompute.org/

Intel is pleased to announce the availability of our latest Gigabit Ethernet server adapter: the Intel® Ethernet Server Adapter I210-T1. Based on the Intel® Ethernet Controller I210, this adapter facilitates efficient data center cooling with a low-power design and innovative power management features, including Energy Efficient Ethernet (EEE), DMA Coalescing, and a unique ventilated bracket.  This low-cost, single-port adapter supports PCIe 2.1 x1 and 10/100/1000 Ethernet in an ultra-compact footprint, and offers Audio Video Bridging (AVB) support for time-sensitive traffic in entry-level servers.

Intel® Ethernet Server Adapter I210-T1 Key Features:

  • Low cost/Low power single port
  • Energy efficient technologies – EEE and DMAC
  • Advanced features, including Audio Video Bridging and MCTP over PCIe and SMBus
  • Unique ventilated bracket for maximum airflow
  • Low-halogen and lead-free environmentally friendly design
  • Reliable and proven Gigabit Ethernet technology from Intel Corporation
  • Compact footprint (the size of a credit card!)
  • 10/100/1000 BASE-T(X) copper backwards compatibility
  • PCIe Gen 2.1 support (2.5GT/s)
  • Low profile and full height brackets with every board
  • Error Correcting Memory (ECC) protection
  • Broad OS Support (warning huge file!)

 

The Intel® Ethernet Server Adapter I210-T1 is a direct replacement for the current single-port 1GbE product, the Intel® PRO/1000 PT Server Adapter (EXPI9400PT), whose end of life will be announced in the next year.  The Intel® Ethernet Server Adapter I210-T1 is available at more than 20%* cost savings.

 

The new Intel® Ethernet Server Adapter I210-T1 is available NOW!

 

For those manufacturing their own hardware based on the Intel® 82574 Gigabit Ethernet Controller, a key step in creating a valid configuration is selecting the correct image based on the size of the EEPROM and the desired functionality.  If the device will be attached to a BMC via SMBus or NC-SI, a 32Kb EEPROM (or greater) must be used.  If not, the No-Management image should be used.  Table 25 in the 82574 datasheet outlines the minimum sizes for those two feature sets.  If for some reason you want to use the Management image on a system without a BMC, make sure the design has the NC-SI/SMBus pins properly handled.  Putting a Management image (which requires a 32Kb EEPROM) in a No-Management size EEPROM (<32Kb) is not a supported configuration and will cause the 82574 to exhibit non-deterministic and/or erratic behavior.   It’s like going past the end of an array: you end up in the weeds, and who knows what data is now being pointed to, data that will be used because the hardware thinks it’s okay.   Just to put these sizes into context, a 1Kb EEPROM is just 64 words.  You can recognize an image that small just by looking at a dump of it: 1Kb fits onto a small screen, while a 32Kb image will go on for a page or three.

 

All an OEM has to do is use the correct image (No-Management/Management) or use the correct size EEPROM to have a valid and supported configuration.  Unless the management functionality is required, the No-Management image should be used.

 

Here is the table from the datasheet for easy reference, and here is the datasheet itself (http://www.intel.com/content/www/us/en/ethernet-controllers/82574l-gbe-controller-datasheet.html?wapkw=82574+datasheet). Remember: if they disagree, the datasheet is always right and the blog is always wrong.

Table 25 -82574_datasheet.gif

You can tell if it is a management image or not by looking at Word 0x0F in the EEPROM.  Here is another snippet from the datasheet:

6.1.1.6_82574_datasheet.gif

Now be careful: Linux dumps the EEPROM in BYTEs while the manuals use WORDs.   So in the Linux dump, the last two bytes of the 0x0010 line make up word 0x0F, and because of byte swapping, the last byte of the 0x0010 line is the high byte of the word.  This is a non-management image:

Non-Managment.gif

Here is a management image:

Managment.gif

Yes, I have put a LOT of NICs in my poor system. Your eth number will vary.

 

So the questions to ask are: 1) What size is the EEPROM attached to my 82574?  2) What image is in it?  If the first answer is 32Kb or bigger, the second doesn't matter.  If the first is <32Kb, then the second is in play.  The HW vendor should know how to tell them apart.
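The byte-to-word swap described above can be sketched in a few lines of Python. This is illustrative only: the sample bytes are made up, and the sketch only shows how to recover a 16-bit word from a byte-wise dump (such as the output of `ethtool -e`); what the bits of word 0x0F actually mean is defined in section 6.1.1.6 of the 82574 datasheet.

```python
# Sketch: recover EEPROM word 0x0F from a byte-wise Linux dump.
# The dump is little-endian per 16-bit word, so for word N the byte
# at offset 2*N is the low byte and the byte at 2*N+1 is the high byte.

def eeprom_word(dump: bytes, word_index: int) -> int:
    """Return 16-bit word `word_index` from a byte-wise EEPROM dump."""
    lo = dump[2 * word_index]       # low byte comes first in the dump
    hi = dump[2 * word_index + 1]   # high byte comes second
    return (hi << 8) | lo

# Fake 32-word dump: all zeros except the two bytes that form word 0x0F.
# The value 0x1234 is purely illustrative, not a real image signature.
dump = bytearray(64)
dump[0x1E] = 0x34   # low byte of word 0x0F
dump[0x1F] = 0x12   # high byte of word 0x0F
print(hex(eeprom_word(bytes(dump), 0x0F)))  # 0x1234
```

The same swap applies to any word in the dump; only the datasheet can tell you which bit patterns mark a Management image.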

 

As always thanks for using Intel Ethernet.

Recently there were a few stories published, based on a blog post by an end-user, suggesting specific network packets may cause the Intel® 82574L Gigabit Ethernet Controller to become unresponsive until corrected by a full platform power cycle.

Intel was made aware of this issue in September 2012 by the blog's author.  Intel worked with the author as well as the original motherboard manufacturer to investigate and determine root cause. Intel root-caused the issue to the specific vendor’s motherboard design, where an incorrect EEPROM image was programmed during manufacturing.  We communicated the findings and recommended corrections to the motherboard manufacturer.

It is Intel’s belief that this is an implementation issue isolated to a specific manufacturer, not a design problem with the Intel 82574L Gigabit Ethernet controller.  Intel has not observed this issue with any implementations which follow Intel’s published design guidelines.  Intel recommends contacting your motherboard manufacturer if you have continued concerns or questions whether your products are impacted.

In 2008, we launched our first single-port PCI Express* (PCIe*) silicon designed directly for the embedded and entry-level markets.  The Intel® 82574 Gigabit Ethernet Controller was an immediate and lasting success.  You can find this controller on thousands of different motherboards offered by hundreds of vendors.  The Intel 82574 Gigabit Ethernet Controller is used in information kiosks, medical equipment, household appliances, rail cars, and many other applications.

But after four years, it is starting to show its age.

 

We spoke to our Intel 82574 Gigabit Ethernet Controller customers and with their input came up with our newest product:

I210_pic.jpg

The Intel® Ethernet Controller I210.

 

The Intel® Ethernet Controller I210 is a fully integrated MAC/PHY in a single low-power package that supports Embedded, High End Desktop (HEDT), Server, and MicroServer designs.  The device offers a fully integrated Gigabit Ethernet (GbE) media access controller (MAC), physical layer (PHY) ports (for the I210-AT, I210-IT, and I211-AT models), and an SGMII/SerDes port (for the I210-IS model) that can be connected to an external PHY or backplane.  The Intel® Ethernet Controller I211-AT is also available in the same low-power package and is targeted at HEDT and Embedded designs.

The following are key features of the Intel® Ethernet Controller I210-AT, I210-IS, and I210-IT:

  • Small Package: 9mm x 9mm
  • PCIe v2.1 Gen1 (2.5GT/s) x1, with iSVR (integrated Switching Voltage Regulator)
  • SGMII/SerDes (I210-IS Only)
  • Platform Power Efficiency
  • IEEE 802.3az Energy Efficient Ethernet (EEE)
  • Proxy: ECMA-393 & Windows Logo for proxy offload
  • DMA Coalescing
  • Converged Platform Power Management (CPPM) Support Ready (requires platform-level tuning)
  • LTR, OBFF
  • Advanced Features:
  • 0-70 °C ambient temp (I210-AT)
  • -40-85 °C industrial temp (I210-IT and I210-IS)
  • Audio-Video Bridging
  • IEEE 1588/802.1AS Precision Time Synchronization
  • IEEE 802.1Qav Traffic shaper (w/SW extensions)
  • Time based transmission
  • Jumbo Frames
  • Interrupt Moderation, VLAN support, IP checksum offload
  • Four Software Definable Pins (SDPs)
  • 4 Transmit and 4 Receive queues
  • RSS & MSI-X to lower CPU utilization in multi-core systems
  • Advanced Cable Diagnostics, Auto MDI-X
  • ECC – Error Correcting Memory in Packet buffers
  • Manageability:
    • NC-SI for greater bandwidth pass through
    • SMBus low-speed serial bus to pass network traffic
    • Flexible FW Architecture w/secure NVM update
    • MCTP over SMBus/PCIe
    • PXE and iSCSI Boot


The following are key features of  the Intel® Ethernet Controller I211-AT:

  • Small Package: 9mm x 9mm
  • PCIe v2.1 Gen1 (2.5GT/s) x1, with iSVR (integrated switching voltage regulator)
  • Integrated non-volatile memory (iNVM)
  • Platform Power Efficiency
  • IEEE 802.3az Energy Efficient Ethernet (EEE)
  • Proxy: ECMA-393 & Windows Logo for proxy offload
  • Advanced Features:
  • 0-70 °C ambient temp
  • IEEE 1588/802.1AS Precision Time Synchronization
  • Jumbo Frames
  • Interrupt Moderation, VLAN support, IP checksum offload
  • 2 Transmit and 2 Receive queues
  • RSS & MSI-X to lower CPU utilization in multi-core systems
  • Advanced Cable Diagnostics, Auto MDI-X
  • ECC – Error Correcting Memory in Packet buffer

 

The Intel Ethernet Controller I210 family can be used in server system configurations, such as rack mounted or pedestal servers, in an add-on NIC or LAN on Motherboard (LOM) design, in blade servers, and in various embedded platform applications.  The Intel Ethernet Controller I211-AT is also available for cost conscious customers looking for a reduced feature set and OS support.

Another NEW 10GBASE-T Server Adapter from Intel!


Intel is pleased to announce the availability of another new 10GBASE-T server adapter!  The Intel® Ethernet Converged Network Adapter X540-T1 (single port) joins the recently launched Intel® Ethernet Converged Network Adapter X540-T2 (dual port) as the latest innovation in Intel’s leadership to drive 10 Gigabit Ethernet into the broader server market.  This adapter family hosts Intel’s latest silicon, the Intel® Ethernet Controller X540, which is used by many OEMs as a single chip solution for LAN on Motherboard (LOM) to deliver 10GbE on the latest server platforms.


The MAC+PHY integration drives down both cost and power, enabling broad deployment of 10GbE everywhere in the datacenter.  BASE-T is the form factor that is well understood by the industry and is the easiest and most cost effective to deploy.  10GBASE-T is backward compatible with the customer’s existing network infrastructure, providing a smooth transition and natural migration to 10GbE.


Intel® Ethernet Converged Network Adapter X540-T1 & X540-T2 Key Features:

  • Low pricing - $200 less than Intel’s current solution!
  • 10GBASE-T Single & Dual Port
  • Passive heatsink
  • PCI Express* 2.1 (5GT/s)
  • Standard Cat 6a cabling and higher with RJ45 connectors
  • Energy efficient design – single chip solution with integrated MAC + PHY
  • Unified Networking delivering LAN, iSCSI, and FCoE in one low-cost CNA
  • Flexible I/O virtualization for port partitioning and quality of service (QoS) of up to 64 virtual functions per port
  • Backward compatibility with existing 1000BASE-T networks simplifies the transition to 10GbE

 

30% lower power

$200 less expensive

No fan!

 

The new Intel® Ethernet Converged Network Adapter X540-T1 will begin shipping by July 2, 2012.


Backlog orders are being accepted now!

Noise to signal? Isn’t it supposed to be signal to noise ratio?  When you’re talking 10 Gigabit BASE-T, you’re talking noise to signal.  10 Gigabit BASE-T has been described to me as “It’s not like looking for a needle in a haystack, it’s like looking for the right snowflake in a snow storm.  At night.” Where does all that noise come from? It comes from the cable next to the cable your data is trying to travel across, it comes from wireless signals, and it comes from outer space.  It even comes from the other wires in the same cable.  With all this noise, how does all that analog chaos get transformed into digital clarity of 1s and 0s?

 

Lots and lots of processing. In our current generation of 10 Gigabit BASE-T products we use PHYs that have 5 channels of DSPs, plus tons of dedicated analog processing silicon, to turn that snow storm of waveforms into nice clear digital Ethernet frames.  Far End Cross Talk (FEXT), disturbers, Near End Cross Talk (NEXT), 8/10 encoding; it all gets to be a jumble. FEXT is signal coupled from one channel to a neighboring channel, measured at the far end. NEXT is the same as FEXT, except that it is a measure of how much of that coupled signal is reflected back onto a neighboring channel on the same end. Disturbers are other noise sources outside of the cable.  The 8/10 signaling is how the data is encoded to help make sure there is a roughly even mix of 1s and 0s in the data stream. It also helps ensure the quality of the data by periodically transmitting a checksum that the link partner can use to validate the data it has received.   If your data stream is highly biased, like all 1s or all 0s, the signal processing can get “tone deaf”, skewing what it thinks is a 1 and reading all the data as 0s.  Unless the average voltage is centered, you can get signal droop, which makes it hard to tell a weak 1 from a strong 0.  The extra bits of data do take up some space on the wire, but things like clock recovery from the extra bits make it worth the overhead.  It just means that to get to your advertised speed, your baud rate must be higher. For example, a 64/66 10G serial signal has a clock rate of 10.3125 G/sec, which transmits data at 10G/sec.  Even then, you can’t transfer 10 Gigabits of payload data; there will always be space on the wire taken up by other things (TCP/IP headers are a big one).   But those extra pieces help keep 10G copper stable.
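The 64/66 overhead arithmetic above works out in one line. This is just a sketch of the ratio described in the paragraph, not anything specific to a particular PHY:

```python
# 64b/66b encoding carries every 64 payload bits in 66 bits on the
# wire, so the line (baud) rate must exceed the data rate by 66/64.
data_rate_gbps = 10.0
line_rate_gbps = data_rate_gbps * 66 / 64
print(line_rate_gbps)  # 10.3125
```

Run the same math in reverse and you can see why a 10.3125 G/sec clock delivers exactly 10 G/sec of encoded data.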

 

Time to dispel a couple of 10 Gigabit BASE-T rumors I’ve heard over the years.

 

First up, bending the cable. I was at a trade show and a gentleman comes up and says to me “That 10 Gigabit stuff is for the birds, you can’t even bend the cables without dropping link!”  I was covering for another employee who had taken sick, so I wasn’t warned of the rowdiness of this show.  “I think you’re mistaken, sir.”  I said.  He smiles and reaches behind my demo and grabs the cable.  He bends it over on itself, making the straight run into an O shape.  He pushed with his thumb and made it into more of an I shape.  He let it go and walked over to the console screen, expecting error message after error message of disconnected link.  To his shock, (and my amusement) nothing happened.  “Wow.”  He mumbled. He took my card and left the show floor, all the while being heckled by his travel buddies.  Unless you damage the cable, I’ve never seen a bent cable affect link.  And I’m not gentle with my cables.

 

Second, cell phones. I have heard concerns about answering your cell phone in a datacenter and taking your whole network offline.  There is some kernel of truth to this one since the signal bands of cell phones and 10G cables do get dangerously close to one another.  But it’s just another disturber source, albeit a powerful one. Modern cell phones and modern 10 Gigabit BASE-T are both designed to use as little power as needed to reach the end station so you would have to be pretty close with a very powerful cell phone to put a ton of noise onto the wire.  And we do test cell phone interference, but with good cables you shouldn’t see any issues.  Use bad cables and things could get ugly.  Good cables include shielding to protect signal wires from disturbers. Those DSPs can only filter out so much; an investment in quality cables is an investment in the quality of your data.


I bet at least half of you will now go to your 10 Gigabit BASE-T installs and make a call from right next to your servers to see if it drops link.  If you have Intel® Ethernet and good cables, I think you’ll have a phone call and link all at the same time.

 

Thanks for using Intel® Ethernet, and special thanks to Sam J for his help double checking my technical details.

The Wired Blog had a chance to sit down with new Wired Ethernet General Manager Dawn Moore to ask her a few questions about being in charge of Intel® Ethernet.  Links in the article are inserted by the editor.

--------------------------------------------------

Wired:  You’re the new GM of Wired Ethernet.  What’s your first couple of months been like?

Dawn:  As GM you’re expected to know and grow your business.  The best way I know to get familiar with all aspects of the business is to visit your key customers, listen to their feedback and be available during times of transition.  I knew a lot of our partners when I ran our NIC business, but meeting key embedded stakeholders was very insightful.  I’ve gotten a lot of frequent flier miles and met a lot of great people.  It’s been a rollercoaster, but an exciting one.

 

Wired:  Speaking of exciting what made you want the GM position?

Dawn:  The people and the products.  Intel has such an array of talent that it’s easy to take them for granted.  We also have an amazing portfolio of products, from 1 Gigabit products like the I350 to world firsts like the X540.    The amazing thing is that I think we’ve just scratched the surface with our 10 Gigabit line and we’ve been doing 10 Gigabit for almost 10 years.

 

Wired:  What have you learned on your visits?

Dawn:  While we aren’t perfect, our customers really appreciate our support.  We stand by our products.  We post our datasheets and spec updates for the public to download without registration.  We have an array of testing facilities like the industry leading X-Lab.  I think there are ways to keep getting better at it, and I see social media as a way to grow our support without breaking the bank.  I think people take Intel® Ethernet seriously, which is a nice change.  I know a lot of places where I go and people say “Intel does Ethernet now?”  We founded it back in the day and have been doing it for 30 years.  Some of our competitors haven’t even been around as a company for half that long.

 

Wired:  Intel seems to have acquired a lot of networking technologies lately. How does this affect Wired Ethernet?

Dawn:  Intel has always invested in technologies that bring data to and from the CPU.  Intel will always do what it needs to do to keep that processor running at its maximum potential so you can get a great return on your CPU investment.  We’ve done some housekeeping inside the company that lets all these new teams work together to really innovate while letting mature businesses keep executing.  Wired Ethernet is now the big sibling to all these new businesses, and we can learn as much from them as they do from us.  But keeping Wired Ethernet as a separate core business means having the best of both worlds:  execution without distractions, and inter-team creativity to make the next generation even more amazing.  This sets the stage for a collaborative environment that helps Intel continue to bring compelling value propositions to our customers.

 

Wired:  So what can you tell us about those next generation products?

Dawn:  All your readers got their NDAs signed? <laughs>

 

Wired:  Probably not.  What key lessons from your old position are you taking with you as you move up to GM?

Dawn:  People first, and never be afraid to keep innovating.  During my time as NIC leader, we moved from doing a dozen or so new cards a year to over 50 new designs a year.  We did this by just looking at our values and being willing to try things a new way.  By looking at your values you know what you should never change, like our commitment to quality, and what you can change, like some of our processes.  Process should never be used as a weapon against those who want to innovate.  I’m blessed to be working with a talented group of people, so much of my time as NIC leader was spent pointing the direction, removing roadblocks, and letting the team execute.

 

Wired:  So once the rollercoaster slows down can you come back for another visit?

Dawn:  I’ve been a supporter of the Wired blog since it was founded almost three years ago, so I’d be happy to visit again later in the year.

 

Wired:  Thanks for your time.

Dawn:  My pleasure.

I helped David film this short explaining how Intel® DDIO works.  The content is all David, the video production (or the lack thereof) is all mine.  David explains the value of this new technology.

 

 

http://www.youtube.com/watch?v=CvNzX8FGdKA

Arista Networks & Intel Present:

Hadoop for the Common Man:

A practical working guide to building a cost-effective 10GBASE-T foundation

Thursday, March 29, 2012
10:00a.m.-11:00a.m. PST (GMT -8)

Duration: 1 Hour

Register Now

Business analytics represent a significant IT challenge. Large volumes of high quality data across disparate systems, with computational based search, analysis and data summarizations are necessary for providing actionable data to business groups. Over the last several years many companies have begun deploying Hadoop clustering technologies in response to these challenges.

While these Hadoop clusters offer faster and more reliable data results, several of the high performance networking and compute technologies, often cited when deploying these clusters, have been cost prohibitive and operationally disruptive for the broader data center consumer.

This webinar will discuss key networking technology shifts that substantially lower the barriers to entry for deploying Hadoop clusters. Specific topics include advances in 10GBASE-T server connectivity, scalable switching, and I/O optimization within the newest generations of Intel® Xeon® processors. Additionally, this webinar will discuss how to get started with a real world, performance proven, cluster example. Intel and Arista Networks will lead these discussions and will end with a web based question and answer session.

Attendees of this Webinar will also learn about:

  • The need for deep switch packet buffers when moving data files
  • I/O optimization requirements for MapTask and ReduceTask operations
  • Best practices and distance limitations specific to 10GBASE-T cabling
  • Topology designs for east/west Hadoop traffic patterns
Featured Speakers

Mark Berly
Sr. Systems Engineer
Arista Networks

Mathew Eszenyi
Product Marketing Engineer
Intel


     Time, tide, and embedded waits for no one. It’s been a while since we updated our release package for Windows Embedded operating systems, and I’m happy to announce that it is live. Our partners at the EDC were kind enough to host the file again, so start your downloading!  Okay, hold onto that thought for a second and let me explain a couple of things first.


     1)      Windows Embedded Compact 7 support.  We have native NDIS6-based drivers for WEC7. The drivers for WEC7 do not match the usual driver naming convention.  We have placed several technologies into one binary, the e1i.  See the file “DriverSelectionGuide.txt” for details about which silicon is supported by the e1i. If your favorite isn’t listed, continue to use the CE6 drivers. For Windows Embedded Standard 7, WES7, please build the image using the provided builder, THEN install the normal Win7 drivers from our normal release media.

     2)     Embedded Teaming support. We have XP Embedded teaming support. Just use these INFs with the normal release media ANS binaries. If you run into issues, reproduce with normal XP first since XPe can be built without a lot of packages that can cause false failures. Again build the image first, then install ANS.

     3)     Legacy 10/100 products get a swan song. While they are still on this release media, they may not be for long. Officially we have ended their life, and that means they can end up off the media at any time. I’m working on getting the source for a few 10/100 drivers published, but that is a ways out yet. If you want to be a beta tester for it, send me a message. This is only for 10/100--don’t expect or ask for others yet, please. Gotta shake out the system and an EOL product line is great for that. Thanks in advance for your patience and super kudos for many happy 10/100 years.

     4)     When is the next Embedded release? Good question. If you use Embedded Windows with Intel® Ethernet, call your disti, Intel sales rep or local Intel field person and demand they keep the support coming. Without your inputs, it might be a while, and it was already too long this time.


     Okay that is enough for now.  Here is the link again. Thanks for your patience, and leave a note if you use the package and want to see it regularly updated.


Thanks for using Intel Ethernet.

dougb

Even more ECC updates

Posted by dougb Mar 16, 2012

ECC has many real "definitions" - error correcting circuits, error correcting code, or error correction code - but they all do the same thing: keep data intact within the chip memory.   ECC uses a special algorithm to encode information in a block of bits that contains sufficient detail to permit the recovery of a single-bit error in the safeguarded data.  This protocol will not only detect single-bit errors, but will transparently correct them on the fly.   Double-bit errors will be flagged as an error, and the device will try to get software’s attention about it.  Related to ECC is parity.  Parity tracks whether the total count of 1 bits is even or odd.  Should this parity change while the data is in the chip memory, it will be flagged as an error.  Since you can’t tell which bit went rogue, this is a poor man's protection. Also, if more than 1 bit changes, the parity check can miss it.

(Warning HTML table!)

 

Product                  Packet Buffer        Manageability           Datasheet Info
                         (in-band traffic)    (out-of-band traffic)
X540                     ECC                  ECC                     7.14.1
82599                    ECC                  ECC                     7.14.1
82598                    ECC                  ECC                     Look for DHER/PBUR
I350                     ECC                  ECC                     7.6
82580                    ECC                  ECC                     7.23
82575 and 82576          ECC                  ECC                     7.6
82571                    ECC                  Parity                  13.7
82573 / 82574 / 82583    None                 None                    n/a
82546                    None                 None                    n/a


Both ECC and parity have a basic limitation: if the error is large enough, the data can still look okay.  ECC is far more resistant to this.  We try to make sure bad things don't happen to your data, but they still might, and while the device will try to tell you when data does go bad, sometimes it won't notice.  That's why our lawyers care about articles like this.  Multiple-bit errors are very rare and will probably cause other problems for the machine.  Data integrity isn't made with a single safety net.  If you want to guarantee your data, use a multi-layered approach, since it’s unlikely that all the layers will fail at once.

Today we announce the arrival of our newest product:  the Intel® Ethernet Controller X540

 

What is it?

The Intel® Ethernet Controller X540 is the next generation of 10 Gigabit Ethernet (10GbE) controllers from Intel. Built on a 40 nm manufacturing process, the X540 is the first 40 nm, dual-port, integrated Media Access Controller/Physical Layer (MAC/PHY) single chip, designed for reduced power and package size and for use as a LAN on Motherboard (LOM) network controller. It also powers the Intel® Ethernet Converged Network Adapter X540.

 

Key Features

  • The world’s first fully integrated single chip 10GBASE-T Ethernet Controller specifically optimized to bring 10 GbE networking to server boards as a LOM.
  • Designed for LOM in mainstream rack and tower servers; supports up to 100 m over widely implemented Cat 6A cables
  • Backwards compatible with existing 1 GbE infrastructure, providing a seamless (or easy) upgrade path to 10GbE
  • Provides industry leading features for I/O Virtualization and Storage over Ethernet, including iSCSI and FCoE.
  • Low power: <12.5 W
  • Small package: 25 x 25 mm
  • Designed using the latest 40 nm PHY technology with industry-leading integrated Electromagnetic Interference/Radio Frequency Interference (EMI/RFI) filters.

 

Intel® Ethernet Unified Networking Principles

Intel has delivered high quality Ethernet products for over 30 years, and our Unified Networking solutions are built on the original principles that made us successful in Ethernet:

  • Open architecture integrates networking with the server, enabling Information Technology (IT) managers to reduce complexity and overhead while enabling a flexible and scalable data center network.
  • Intelligent offloads lower cost and power while delivering the application performance that customers expect.
  • Proven Ethernet unified networking is built on trusted Intel Ethernet technology, enabling customers to deploy Fiber Channel over Ethernet (FCoE) or Internet Small Computer System Interface (iSCSI) while maintaining the quality of their traditional Ethernet networks.

 

Intel’s unified networking solutions are enabled through a combination of standard Intel® Ethernet products along with trusted network protocols integrated in the operating systems. Thus, unified networking is available on every server either through LOM implementation or via an add-in Converged Network Adapter or Network Interface Card (NIC).

After the events of last year, kernel.org needed to redo their infrastructure.  Intel was proud to help out and provide all new Intel® Ethernet X520 adapters to the kernel.org team.  Team member John 'Warthog9' Hawley was kind enough to spend a few minutes answering some questions for us and I figured I’d share his answers.  Wired Blog questions in BOLD.  This interview was conducted over e-mail in December of 2011.

 

How does this donation help to improve kernel.org and the Linux* community?

 

It will help us in 2 ways:

 

(1) It will give us the ability to perform tasks much faster within our back-end infrastructure, which will translate directly to getting data out to our front-ends faster.  This will not only speed up the ability for end users to acquire kernel.org content, but it will also help to speed up Linux Kernel development as well.

 

(2) It will give us the opportunity to serve more content faster to our end users.  By allowing us to go beyond Gigabit uplinks to the Internet, we should be able to serve more users simultaneously and get high-speed users our content even faster.

 

About how many servers power kernel.org?

 

Overall we expect to have about 28 machines worldwide.

 

Had you already made the jump to 10 Gigabit Ethernet before now?

 

Until this recent donation we have been relying on GbE.

 

Do you use Virtualization?

 

Kernel.org has in the past not needed virtualization; however, in the rebuilding of the infrastructure we are going to be moving some of our less resource intensive services into virtualized environments.

We are quite happy using KVM/QEMU*, and it performs spectacularly for our needs!  We’re kernel.org after all! :)

 

What type of network attached storage do you use?

 

We will primarily be using NFS and iSCSI; however, we also move a lot of traffic using the rsync protocol.  Currently we have our own home-built SAN based on Red Hat* Enterprise Linux* 6, though we are looking at other providers and options.
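For a sense of what that rsync traffic looks like from a mirror's side, a typical pull of a release directory might be sketched as follows. The remote module path here is illustrative, not an exact kernel.org path:

```shell
# Mirror a kernel release directory from the public rsync service,
# deleting local files that have vanished upstream (-a archive mode,
# -v verbose, -z compress in transit)
rsync -avz --delete rsync://rsync.kernel.org/pub/linux/kernel/v3.0/ ./v3.0/
```

Compression and delta transfer are what make rsync attractive for this kind of bulk mirroring compared to plain HTTP or FTP fetches.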

 

Why did you choose Intel® Ethernet Adapters? (it's okay to be honest and say they were free, but if there was more than that, please say)

 

Free did have something to do with it, but that wasn't the only reason.

 

- Intel NICs have always been exceptional hardware, and that hardware coupled with Intel's great support of the community makes them rock solid under Linux, and probably the best NICs money can buy.

 

- They are also very interesting from the SR-IOV perspective.  For our virtual machine deployment we really wanted to be able to provide separate, distinct Ethernet interfaces to specific virtual machines.  SR-IOV gives us the ability to do that with higher throughput, simplifies our cabling, and lowers our need for Gigabit Ethernet ports at the switch.

 

What benefits have you seen since deploying Intel Ethernet Adapters?

 

We are still in the middle of deployment, but I foresee:

  • Network simplification
  • Faster throughput
  • Easier configuration/management
  • Better reliability

 

What advanced features of the Intel Ethernet Adapters are you using like Teaming/Channel Bonding, Virtualization enhancements (Intel® VT-c, VMDq, SR-IOV), and have you seen specific benefits from these features?

 

By moving to 10G we are trying to move away from teaming/channel bonding and make things generally simpler.  SR-IOV will definitely get used; VT-c and VMDq will likely get used, but I need to double-check how those work with KVM and QEMU.  SR-IOV lowers our needed port count and dramatically simplifies our networking with respect to virtualization.
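As an illustration of the SR-IOV setup described above, here is a rough sketch of how virtual functions (VFs) might be created on an ixgbe-based adapter for use with KVM guests. The interface name, VF count, and MAC address are hypothetical, and the exact steps depend on kernel and driver versions:

```shell
# Older method: ask the ixgbe PF driver for 4 VFs at module load time
modprobe ixgbe max_vfs=4

# Newer kernels expose the same control through sysfs instead
echo 4 > /sys/class/net/eth2/device/sriov_numvfs

# Verify that the VFs showed up on the PCI bus
lspci | grep -i "Virtual Function"

# Pin a MAC address to VF 0 so the guest sees a stable address
ip link set eth2 vf 0 mac 52:54:00:12:34:56
```

Each VF can then be passed through to a guest (for example via libvirt's hostdev device), giving the VM a distinct hardware-backed interface without consuming a physical switch port per guest.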

 

What are some things that Intel Ethernet Linux has been doing that makes them an Industry leader?

 

- Better support for virtualization; things like PCI pass-through are now basically required features, and not all NIC vendors have done the work on their hardware and software in the Linux Kernel to support this.

Intel, however, has typically been one of the first vendors to support these types of things, and I wish more vendors would follow their example.

 

- Intel manages their firmware blobs needed for the Intel NICs very, very well, and that makes systems administration and maintenance so much better and easier.  When you have an Intel NIC in a system, you know it will just come up and work.  You can't always say that with other vendors.  Most of the time they work, but if you deviate in some odd ways you stand a good chance of breaking the NIC and needing to find a different firmware blob that solves the problem.

 

What's the hardest part about working with Intel Ethernet Linux?  (Besides these questions.)

 

The biggest problem: they aren't the NIC that's embedded on HP or Dell's server motherboards.  Most folks (for a variety of reasons) only get the chance to use what's on the motherboard, and additional funding to replace something that will work most of the time is a lot harder to justify.

 

Other than that I've been very happy with Intel Ethernet under Linux, and have been for many years.

 

What is the hardest part about doing the work that you do for something as large and as public as Kernel.org?

 

Just getting people to understand the actual scope of kernel.org.  We are infrastructure, plain and simple, and when infrastructure works no one knows it's there, what it's doing, or how big it is.  When it breaks, well, that's when everyone notices.  We just try to keep people apprised of what we are doing, where we are expanding into, etc.

 

After running kernel.org and the challenges it has had, what types of things would you like to see out of Linux networking?

 

Generally speaking, not much; my biggest gripes generally come down to vendors who aren't fully behind and supporting Linux.  Intel is doing a great job here: they have an entire open source team, and let me be frank, Intel *GETS* open source at a very fundamental level, and it shows.  I can't say that of all the NIC chipset manufacturers I use now and have used in the past.

 

Thanks for taking the time to talk to me.


You’re welcome, thanks for helping kernel.org.
