
As you may know, Fulcrum Microsystems was acquired by Intel last September and is now the Intel Switch and Router Division (SRD).  Over the last several months we have been working hard to convert all of our switch documentation into the Intel format while, at the same time, building a new web page on the main Intel site to showcase this new Intel product line. We are happy to announce that this work is complete, and you can now direct your web browsers to the new Intel Ethernet switch web page. This page provides overview information on our switch silicon that is currently in production.


For those of you who have access to the old Fulcrum document library, or if you need deeper technical information on our Ethernet switch products, including pre-production silicon, please contact your local Intel sales representative. The old Fulcrum document library web portal will be disabled by the end of March 2012. We will continue to offer technical support through the old Fulcrum web site until later this year. Stay tuned for more information.

One of the key features that contribute to the dramatic performance of Intel’s Ethernet switch architecture is its output-queued shared memory technology. For many years now, academia has extolled the virtues of an output-queued architecture for high-performance switch chip designs. But it has been difficult for any company in the industry to achieve output queuing due to the high memory bandwidth required between the switch inputs and outputs.


Because of this, most vendors implement a combined input-output queued (CIOQ) architecture, which needs less core memory bandwidth, but requires extra features to avoid blocking.  For example, one way to minimize blocking is to provide virtual output queues at each ingress port, but for an N-port switch, this means N*N input queues and associated schedulers, which adds significant complexity.
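To make that scaling cost concrete, here is a small Python sketch (the port counts are illustrative, not tied to any specific product) comparing the queue counts the two architectures require:

```python
# Sketch: queue counts for an N-port switch under two architectures.
# A CIOQ design with virtual output queues (VOQs) needs one queue per
# (ingress, egress) pair -- N*N queues plus their schedulers -- while
# an output-queued shared-memory design needs one queue per output.

def voq_count(ports: int) -> int:
    """VOQs needed for a CIOQ switch: one per ingress/egress pair."""
    return ports * ports

def output_queue_count(ports: int) -> int:
    """Queues needed for an output-queued shared-memory switch."""
    return ports

for n in (24, 48, 64):
    print(f"{n}-port switch: {voq_count(n)} VOQs "
          f"vs {output_queue_count(n)} output queues")
```

The quadratic growth in queues and schedulers is the complexity the shared-memory approach avoids.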


With Intel® Ethernet switch silicon, a single shared memory array is combined with our crossbar switching technology to support a fully non-blocking, output-queued, shared memory architecture with extremely low cut-through latency.  It also provides superior multicast bandwidth and jitter for applications such as video distribution. We believe Intel is the only company that can deliver one of the highest-bandwidth Ethernet switch chips in the industry while maintaining less than 400 ns cut-through latency.

After the events of last year, kernel.org needed to redo its infrastructure.  Intel was proud to help out and provide all new Intel® Ethernet X520 adapters to the team.  Team member John 'Warthog9' Hawley was kind enough to spend a few minutes answering some questions for us, and I figured I’d share his answers.  Wired Blog questions are in bold.  This interview was conducted over e-mail in December of 2011.


How does this donation help to improve kernel.org and the Linux* community?


It will help us in two ways:


(1) It will give us the ability to perform tasks much faster within our back-end infrastructure, which will translate directly to getting data out to our front-ends faster.  This will not only speed up the ability for end users to acquire content, but it will also help to speed up Linux Kernel development as well.


(2) It will give us the opportunity to serve more content faster to our end users.  By allowing us to go beyond Gigabit uplinks to the Internet, we should be able to serve more users simultaneously and get our content to high-speed users even faster.


About how many servers power kernel.org?


Overall we expect to have about 28 machines worldwide.


Had you already made the jump to 10 Gigabit Ethernet before now?


Until this recent donation we have been relying on GbE.


Do you use Virtualization?

kernel.org has not needed virtualization in the past; however, in rebuilding the infrastructure we are going to be moving some of our less resource-intensive services into virtualized environments.

We are quite happy using KVM / QEMU*, and it performs spectacularly for our needs!  We’re kernel.org, after all! :)


What type of network attached storage do you use?


We will primarily be using NFS and iSCSI; however, we also move a lot of traffic using the rsync protocol.  We currently have our own home-built SAN based on Red Hat* Enterprise Linux* 6, but we are looking at other providers and options.


Why did you choose Intel® Ethernet Adapters? (it's okay to be honest and say they were free, but if there was more than that, please say)


Free did have something to do with it, but that wasn't the only reason.


- Intel NICs have always been exceptional hardware, and coupled with Intel's great support of the community, that makes them rock solid under Linux and probably the best NICs money can buy.


- They are also very interesting from the SR-IOV perspective.  For our virtual machine deployment we really wanted to be able to provide separate, distinct Ethernet interfaces to specific virtual machines.  SR-IOV gives us the ability to do that with higher throughput, and it simplifies our cabling and lowers our need for Gigabit Ethernet ports at the switch.


What benefits have you seen since deploying Intel Ethernet Adapters?


We are still in the middle of deployment, but I foresee:

- Network simplification

- Faster throughput

- Easier configuration/management

- Better reliability


Which advanced features of the Intel Ethernet Adapters are you using, such as Teaming/Channel Bonding or Virtualization enhancements (Intel® VT-c, VMDq, SR-IOV), and have you seen specific benefits from these features?


By moving to 10G we are attempting to move away from teaming/channel bonding and make things generally simpler.  SR-IOV will definitely get used; VT-c and VMDq will likely get used, but I need to double-check how those will work with KVM and QEMU.  SR-IOV lowers our needed port count and dramatically simplifies our networking with respect to virtualization.
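For reference, on Linux the VF count for an SR-IOV-capable port is typically set by writing to the device's `sriov_numvfs` sysfs attribute. A minimal sketch, assuming a stock sysfs layout (the PCI address below is made up, and you should verify the attribute path against your kernel and driver documentation):

```python
from pathlib import Path

def set_num_vfs(pci_addr: str, num_vfs: int,
                sysfs_root: str = "/sys/bus/pci/devices") -> None:
    """Enable num_vfs SR-IOV virtual functions on a PCI network device.

    The kernel rejects changing a nonzero VF count directly, so reset
    the count to 0 first if VFs are already enabled.
    """
    attr = Path(sysfs_root) / pci_addr / "sriov_numvfs"
    if attr.read_text().strip() not in ("", "0"):
        attr.write_text("0")          # disable existing VFs first
    attr.write_text(str(num_vfs))     # e.g. one VF per guest

# Example (hypothetical device address, requires root):
# set_num_vfs("0000:01:00.0", 4)
```

Each enabled VF then appears as its own PCI function that can be passed through to a guest.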


What are some things that the Intel Ethernet Linux team has been doing that make them an industry leader?


- Better support for virtualization: things like PCI pass-through are now basically required features, and not all NIC vendors have done the work on their hardware and software in the Linux Kernel to support this.

Intel, however, has typically been one of the first vendors to support these types of things, and I wish more vendors would follow their example.


- Intel manages their firmware blobs needed for the Intel NICs very, very well, and that makes systems administration and maintenance so much better and easier.  When you have an Intel NIC in a system, you know it will just come up and work.  You can't always say that with other vendors.  Most of the time they work, but if you deviate in some odd ways you stand a good chance of breaking the NIC and needing to find a different firmware blob that solves the problem.


What's the hardest part about working with Intel Ethernet Linux? (Besides these questions.)


The biggest problem: they aren't the NIC that's embedded on HP or Dell's server motherboards.  Most folks (for a variety of reasons) only get the chance to use what's on the motherboard, and additional funding to replace something that will work most of the time is a lot harder to justify.


Other than that I've been very happy with Intel Ethernet under Linux, and have been for many years.


What is the hardest part about doing the work that you do for something as large and as public as kernel.org?


Just getting people to understand the actual scope of kernel.org.  We are infrastructure, plain and simple, and when infrastructure works, no one knows it's there, what it's doing, or how big it is.  When it breaks, well, that's when everyone notices.  So it's a challenge just trying to keep people apprised of what we are doing, where we are expanding, etc.


After running kernel.org and the challenges it has had, what types of things would you like to see out of Linux networking?


Generally speaking, not much; my biggest gripes generally come down to vendors who aren't behind or supporting Linux 100%.  Intel is doing a great job here: they have an entire open source team, and let me be frank, Intel *GETS* open source at a very fundamental level and it shows.  I can't say that of all the NIC chipset manufacturers I use now and have used in the past.


Thanks for taking time to talk to me.

You’re welcome, and thanks for helping kernel.org.


Updated documents Feb 2012

Posted by dougb Feb 9, 2012

Our commitment to providing you the most up-to-date documentation often means that the datasheet or spec update you downloaded just last month is no longer current.  We try not to make a ton of changes, but errata are part of the complex world we live and create in, so we like to publish those as things come up.   These public statements of our quality help you decide if what you’re buying is really what you think it is.  It would be easy to bury these deep behind some paywall, but that’s not how we roll.  Trust is something you earn every day, and with trust comes a relationship that business can be built on.
This is a major listing, since this article summarizes the last several weeks of e-publishing.


You can find Controller info here:


And Intel® Ethernet Switch details are here:


Intel® 82546GB GbE Controller Specification Update

Intel® 82573 GbE Controller Specification Update, 3.3

Intel® 82574 GbE Controller Family Datasheet, 3.1

Intel® 82576 GbE Controller Specification Update, 2.84

Intel® 82580EB/82580DB GbE Controller Specification Update, 2.43

Intel® 82583V GbE Controller Datasheet, 2.4


Intel® Ethernet Controller I350 Specification Update, 2.04


Intel® 82566 GbE PHY Specification Update, 2.4

Intel® 82577 GbE PHY Specification Update, 1.7

Intel® 82578 GbE PHY Datasheet, 2.4

Intel® 82578 GbE PHY Specification Update, 1.5



Intel® Ethernet Switch FM1010 Datasheet, 2.0

Intel® Ethernet Switch FM2112 Datasheet, 2.2

Intel® Ethernet Switch FM2212 Datasheet, 1.1

Intel® Ethernet Switch FM2224 Datasheet, 2.3


Thanks for using Intel® Ethernet

If you’ve looked over an EEPROM (or NVM) recently, things have really changed since the days of the old 82557 10/100 products.   Back then, the whole image was 64 words, and most of them had no meaning.  Now we have a 128 KB NVM, and most of it is packed full of goodness.  What happened?


Features happened.


We added firmware code for the management engine into our devices and gave it the ability to be patched from sections in the EEPROM.  Keeping it in the EEPROM allows us to make continual quality updates without having to spin the silicon.  Spinning silicon is expensive, and we don’t want to pass that cost along.  But all these extra features do come at a cost: EEPROM size and EEPROM complexity.  The old days of being able to look at an EEPROM and know what is going on are long gone.  The EEPROM is now organized into sections, with pointers to each section.  All these sections, pointers, and checksums mean that doing a diff of an old image and a new image can be an exercise in futility.  With more ports per EEPROM (four on the I350) and each section independently configurable, you can end up with a lot of different images.

For our LOM customers, this means you should really leverage the development starter images.  If you need a custom image, please see your Intel representative for help developing one.  These images take time to develop and review, so please let us know as soon as possible, and leave some time in your schedule for us to make them.  We have a band of EEPROM experts go over each custom image, so try to give us a week's notice.
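As an illustration of why hand-inspection used to be feasible, here is a sketch of the classic software checksum used on many older Intel GbE EEPROMs: the 16-bit sum of words 0x00 through 0x3F must come out to 0xBABA, with word 0x3F holding the balancing checksum word. (The offsets and target value here follow older-generation parts and are an assumption for any specific device; check your device's datasheet, since modern sectioned images layer per-section pointers and checksums on top of this.)

```python
# Sketch of the classic Intel NIC software checksum: the 16-bit sum of
# EEPROM words 0x00..0x3F must equal 0xBABA, where word 0x3F is chosen
# to balance the sum.  Assumed offsets -- verify against the datasheet.

CHECKSUM_TARGET = 0xBABA
CHECKSUM_WORDS = 0x40          # words 0x00..0x3F participate

def compute_checksum_word(words):
    """Value for word 0x3F so the first 0x40 words sum to 0xBABA."""
    partial = sum(words[:CHECKSUM_WORDS - 1]) & 0xFFFF
    return (CHECKSUM_TARGET - partial) & 0xFFFF

def verify(words):
    """True if the image's first 0x40 words sum to the target."""
    return sum(words[:CHECKSUM_WORDS]) & 0xFFFF == CHECKSUM_TARGET

image = [0x1234] * 0x40                   # toy 64-word image
image[0x3F] = compute_checksum_word(image)
print(hex(image[0x3F]), verify(image))    # → 0x3fee True
```

A sectioned, pointer-based image adds its own per-section integrity data on top, which is exactly why a naive word-by-word diff of two images tells you so little.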


Finally, newer devices like the X540 do not use an EEPROM.  They use a Flash device, so there is a single NVM that holds the legacy EEPROM data from past devices, firmware for the PHY and Management Engine, and the Option ROM code.


And remember the law of EEPROM/NVM expansion: with each generation of products, the EEPROM/NVM will get even more complex.

Consider yourself warned.


Future-proof your designs by letting us help you.
