
Intel is pleased to announce the availability of our latest Gigabit Ethernet server adapter: the Intel® Ethernet Server Adapter I210-T1. Based on the Intel® Ethernet Controller I210, this adapter facilitates efficient data center cooling with a low-power design and innovative power management features, including Energy Efficient Ethernet (EEE), DMA Coalescing, and a unique ventilated bracket. This low-cost, single-port adapter supports PCIe 2.1 x1 and 10/100/1000 Ethernet in an ultra-compact footprint, and offers Audio Video Bridging (AVB) support for time-sensitive traffic in entry-level servers.

Intel® Ethernet Server Adapter I210-T1 Key Features:

  • Low cost/Low power single port
  • Energy efficient technologies – EEE and DMAC
  • Advanced features, including Audio Video Bridging and MCTP over PCIe and SMBus
  • Unique ventilated bracket for maximum airflow
  • Low-halogen and lead-free environmentally friendly design
  • Reliable and proven Gigabit Ethernet technology from Intel Corporation
  • Compact footprint (the size of a credit card!)
  • 10/100/1000BASE-T(X) copper backward compatibility
  • PCIe Gen 2.1 support (2.5 GT/s)
  • Low-profile and full-height brackets included with every board
  • Error-correcting code (ECC) memory protection
  • Broad OS support (warning: huge file!)

 

The Intel® Ethernet Server Adapter I210-T1 is a direct replacement for the current single-port 1GbE product, the Intel® PRO/1000 PT Server Adapter (EXPI9400PT), whose end of life will be announced in the next year. The Intel® Ethernet Server Adapter I210-T1 is available at a cost savings of more than 20%*.

 

The new Intel® Ethernet Server Adapter I210-T1 is available NOW!

 

For those who are manufacturing their own hardware based on the Intel® 82574 Gigabit Ethernet Controller, a key step in creating a valid configuration is selecting the correct image based on the size of the EEPROM and the desired functionality. If the device will be attached to a BMC via SMBus or NC-SI, a 32Kb EEPROM (or larger) must be used. If not, the No-Management image should be used. Table 25 in the 82574 datasheet outlines the minimum sizes for those two feature sets. If for some reason you want to use the Management image on a system without a BMC, make sure the design handles the NC-SI/SMBus pins properly. Putting a Management image (which requires a 32Kb EEPROM) into a smaller, No-Management-size EEPROM (<32Kb) is not a supported configuration and will cause the 82574 to exhibit non-deterministic and/or erratic behavior. It's like reading past the end of an array: you end up in the weeds, and whatever data happens to be there gets used as if it were valid. To put these sizes into context, a 1Kb EEPROM is just 64 words. You can spot an image that small just by looking at a dump of it: 1Kb fits on a small screen, while a 32Kb image goes on for a page or three.
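As a quick sanity check on those numbers, here is a minimal sketch of the size arithmetic (in Python, purely for illustration): EEPROM sizes are quoted in kilobits (Kb), while the datasheet and dump tools count in 16-bit words.

```python
# Illustrative arithmetic only: convert an EEPROM size in kilobits (Kb)
# to 16-bit words, the unit the datasheet and dump tools use.
def eeprom_words(size_kbit: int) -> int:
    bits = size_kbit * 1024  # 1 Kb = 1024 bits
    return bits // 16        # one EEPROM word = 16 bits

print(eeprom_words(1))   # 64 words   -- fits on a small screen
print(eeprom_words(32))  # 2048 words -- a page or three of dump output
```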

 

All an OEM has to do is use the correct image (No-Management/Management), or use the correct size EEPROM, to have a valid and supported configuration. Unless the management functionality is required, the No-Management image should be used.

 

Here is the table from the datasheet for easy reference, and here is the datasheet itself (http://www.intel.com/content/www/us/en/ethernet-controllers/82574l-gbe-controller-datasheet.html?wapkw=82574+datasheet). Remember: if the blog and the datasheet disagree, the datasheet is always right.

[Image: Table 25 from the 82574 datasheet]

You can tell if it is a management image or not by looking at Word 0x0F in the EEPROM.  Here is another snippet from the datasheet:

[Image: Section 6.1.1.6 from the 82574 datasheet]

Now be careful: the EEPROM is dumped in Linux in BYTEs, while the manuals use WORDs. So in the Linux dump, the last two bytes of the 0x0010 line make up Word 0x0F, with byte swapping: the last byte of the 0x0010 line is the high byte of the word. This is a non-management image:

[Image: EEPROM dump of a non-management image]

Here is a management image:

[Image: EEPROM dump of a management image]

Yes, I have put a LOT of NICs in my poor system. Your eth number will vary.
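To make that byte swap concrete, here is a minimal sketch, assuming a Linux box with ethtool installed, root privileges, and a hypothetical interface name. It reads the two bytes of Word 0x0F and reassembles them the way the datasheet presents the word:

```python
#!/usr/bin/env python3
# Illustrative sketch (not Intel-provided tooling): read EEPROM Word 0x0F
# from an 82574 NIC via ethtool. Word 0x0F lives at byte offsets 0x1E-0x1F;
# ethtool dumps bytes while the datasheet uses words, so the second (last)
# dumped byte is the HIGH byte of the word.
import re
import subprocess

def eeprom_word_0x0f(ifname: str) -> int:
    # Restrict the dump to the two bytes we need (requires root).
    out = subprocess.run(
        ["ethtool", "-e", ifname, "offset", "0x1e", "length", "2"],
        capture_output=True, text=True, check=True,
    ).stdout
    # The data line looks like "0x001e:   aa bb"; grab the two hex bytes.
    lo, hi = (int(b, 16)
              for b in re.findall(r"\b[0-9a-fA-F]{2}\b", out.splitlines()[-1])[-2:])
    return (hi << 8) | lo  # byte swap: last dumped byte is the high byte

if __name__ == "__main__":
    word = eeprom_word_0x0f("eth0")  # your eth number will vary
    print(f"EEPROM Word 0x0F = 0x{word:04X}")
    # Compare the result against section 6.1.1.6 of the 82574 datasheet to
    # tell a Management image from a No-Management image.
```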

 

So the questions to ask are: 1) What size is the EEPROM attached to my 82574? And 2) what image is in it? If the first answer is 32Kb or bigger, the second doesn't matter. If the first is <32Kb, then the second is in play. The HW vendor should know how to tell the images apart.
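Expressed as code, the decision boils down to something like this (again an illustrative Python sketch; Table 25 in the datasheet remains the authoritative source for minimum sizes):

```python
def configuration_is_supported(eeprom_kbit: int, management_image: bool) -> bool:
    # A Management image requires a 32Kb-or-larger EEPROM; with 32Kb or
    # more, either image works, so the image question is moot.
    if eeprom_kbit >= 32:
        return True
    # Below 32Kb, only the No-Management image is a supported configuration.
    return not management_image

print(configuration_is_supported(32, True))   # True
print(configuration_is_supported(8, True))    # False: expect erratic behavior
print(configuration_is_supported(8, False))   # True
```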

 

As always thanks for using Intel Ethernet.

Recently there were a few stories published, based on a blog post by an end user, suggesting that specific network packets may cause the Intel® 82574L Gigabit Ethernet Controller to become unresponsive until corrected by a full platform power cycle.

Intel was made aware of this issue in September 2012 by the blog's author. Intel worked with the author as well as the original motherboard manufacturer to investigate and determine root cause. Intel root-caused the issue to the specific vendor's motherboard design, where an incorrect EEPROM image was programmed during manufacturing. We communicated the findings and recommended corrections to the motherboard manufacturer.

It is Intel's belief that this is an implementation issue isolated to a specific manufacturer, not a design problem with the Intel 82574L Gigabit Ethernet controller. Intel has not observed this issue with any implementations that follow Intel's published design guidelines. Intel recommends contacting your motherboard manufacturer if you have continued concerns or questions about whether your products are impacted.

I was at the Open Compute Project Summit last week, where news broke about expanded details of the Open Compute Project.

 

While OCP is focused on making servers more flexible, it also will have an impact on data center networking. If you want to know more about OCP, this InfoWorld article has some good details.

 

What I want to concentrate on are the networking aspects of the proposed new system. In a nutshell, the OCP initiative will result in new standards for interoperable components (like processor boards, power supplies, etc.) that allow more flexibility in server designs. So you could imagine a common processor slot, for instance, that allows a company to define a very granular level of processing power.

 

On the networking front, the OCP proposal envisions a board-level switch that has a network connection to each processor/microserver on the board, with another network connection of up to 100 Gbps based on Intel's silicon photonics technology. This provides a fast, very low-latency connection for up to 50 meters, easily reaching access switches.

 

So what happens to top-of-rack switches? Nothing … for now. First off, many commentators say that OCP equipment could be limited to large Internet and cloud service providers – like OCP founder Facebook. And thus, the TOR switch will remain in other data center networks indefinitely.

 

Even if that is not the case, there are still several years before OCP-based network servers will hit the market, as the Open Compute Project is still building out its ecosystem and talking to partners about the details of the various server components.

 

That leaves TOR switches, like our SeaCliff Trail 10G/40G top-of-rack switch reference design, as the chief building blocks for data center networks.

 

But this new architecture has a lot of promise, so companies concerned about a future transition to OCP switches should look again at their plans for deploying software-defined networking (SDN).

 

Because SDN moves network flow control and management out of the switches and onto a controller on a server, it can easily integrate SDN-enabled OCP systems into the network, allowing data centers to migrate to the new architecture at their own speed.
