
Increase Your BMC Network Performance by Over 800%!

Your friendly neighborhood manageability nerd here announcing my latest whitepaper.


The NC-SI (Network Controller Sideband Interface) is a connection between a network controller and a BMC (Baseboard Management Controller).  This DMTF-defined interface is designed to provide 100Mbps full-duplex connectivity to the BMC.


Over a year ago I began researching why customers were complaining to me that NC-SI performance under some circumstances was horrible: in most reports only a few megabits per second.


After hundreds of tests and countless hours of research and tweaking, I began to understand what the problem was and how to go about providing a solution.
Figure 1: Network Performance Streaming to BMC


What I found was that the NC-SI interface worked exactly as it was supposed to.  The BMC, however, needs to take into account that it is using NC-SI, and that the NC-SI path does not have as much buffer space as the BMC itself has.


I tested not only Intel Network Controllers but also those of another well-known network controller manufacturer.  The results were nearly identical: very poor performance until I modified the BMC to be a bit more intelligent.

Making the changes detailed in the whitepaper, average performance when streaming data to the BMC went from less than 8Mbps to nearly 65Mbps.  The performance would be even better, but a BMC is not a super-powerful processor and it can’t yet handle 100Mbps data rates.

All my research, conclusions, and actual sample data are included in the whitepaper.  I also provide an overview of what NC-SI is for the uninitiated.


I hope you find it of use.  If you do, please comment so we know whether folks actually read these docs.

The paper is available here:

I work extensively with BMCs, or to be more precise, with the people who design and write the firmware that runs on them.  This work ranges from initial code to make sure the various interfaces (such as NC-SI or SMBus) are working properly to discussions of advanced platform manageability features.


A few years ago I whipped up a little utility to exercise the BMC’s Ethernet interface.   This simple utility sends basic IPMI/RMCP commands to an IP address, waits for a response, and sends another command once the response is received.  It does so repeatedly, as a little stress test for the sideband interface to the BMC.  It has proved useful over the years for those beginning BMC implementations.
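To illustrate the kind of request/response exchange such a utility performs, here is a minimal sketch (not the actual tool) that builds an RMCP/ASF Presence Ping, the simplest unauthenticated packet a BMC will answer on UDP port 623.  The packet layout follows the DMTF ASF 2.0 specification; the target IP and timeout are placeholder assumptions.

```python
import socket
import struct

def build_asf_ping(tag=0x01):
    """Build an RMCP ASF Presence Ping (per the DMTF ASF 2.0 spec)."""
    # RMCP header: version 0x06, reserved, sequence 0xFF (no ACK), class 0x06 (ASF)
    rmcp = struct.pack("BBBB", 0x06, 0x00, 0xFF, 0x06)
    # ASF message: IANA enterprise number 4542 (ASF), type 0x80 (Presence Ping),
    # message tag, reserved byte, zero data length
    asf = struct.pack(">IBBBB", 4542, 0x80, tag, 0x00, 0x00)
    return rmcp + asf

def ping_bmc(ip, timeout=2.0):
    """Send one ping and wait for the Presence Pong; returns True on a reply."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_asf_ping(), (ip, 623))
        data, _ = sock.recvfrom(1024)
        # A Presence Pong carries ASF message type 0x40 at offset 8
        return len(data) >= 9 and data[8] == 0x40
    except socket.timeout:
        return False
    finally:
        sock.close()
```

A stress loop like the one described above would simply call `ping_bmc` in a tight loop, sending the next packet as soon as the previous response arrives.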


I recently updated this utility as part of some work I’ve been doing for a paper I’m finishing up on NC-SI performance (I'll post soon on that one).   The utility, its brute-force code, and a Microsoft Windows binary are available on SourceForge:


It’s pretty simple: give it an IP address and, optionally, flags for what kind of packet you want to send to the BMC.


One of the updates I recently made was to display the network bandwidth being used.  What I discovered was that the standard request/response pattern used in IPMI uses a maximum of about 0.6Mbps, even at full speed.
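That low number follows directly from the lock-step nature of the exchange: only one small packet is in flight at a time, so throughput is bounded by the bytes per exchange divided by the round-trip time.  Here is a quick back-of-the-envelope sketch; the packet size and turnaround time are illustrative assumptions, not measured values.

```python
def request_response_throughput_mbps(bytes_per_exchange, rtt_seconds):
    """Upper bound on throughput when exactly one exchange is in flight."""
    bits = bytes_per_exchange * 8
    return bits / rtt_seconds / 1e6

# Assumed values: ~100 bytes on the wire per request+response pair,
# ~1.3 ms for the BMC to turn a command around.
print(request_response_throughput_mbps(100, 1.3e-3))  # ≈ 0.62 Mbps
```

Under those assumptions the ceiling lands right around the 0.6Mbps I observed; streaming tests avoid this limit by keeping many packets in flight.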


Anyhow, just in case somebody is interested, I thought I’d post a message letting folks know that this simple utility exists. I hope you find it useful.





It has been nearly three years since I wrote my first blog post, which was on setting up SR-IOV in Red Hat Xen (for reference, that post is here).


It has been a long and interesting road for SR-IOV since then. I’ve written several documents and made a few videos on the topic, available here.


One of the last major virtualization operating systems has now added support for SR-IOV: VMware now provides the capability to utilize SR-IOV.


My talented co-workers have written a very nice step-by-step guide (with lots and lots of pictures).  That paper is located here.


We hope you enjoy the latest addition to our growing collection of documents for virtualization
technologies available within Intel® Ethernet Devices.




- Patrick

The success of Arista Networks* is proof that there’s always room for an innovative startup, even in markets dominated by large players that execute well. But great innovation often requires market disruption to gain a foothold with customers.


In a recent interview with Network World, Arista President and CEO Jayshree Ullal talked about the market trends that helped Arista take off.  She said:


“Arista saw three disruptions in the market: a hardware disruption; a software disruption; and a customer buying disruption, which in my mind is the most important thing.”


Two of these trends are interesting to me because we’ve been participating in them.  First, the hardware disruption she mentions is the rise of merchant network switch silicon that has performance and features comparable to ASIC switches.


Our Intel® Ethernet switch family is pioneering these merchant switch devices.  We not only provide throughput that is equal to or better than that of an ASIC, but our layer 3 latency is the industry’s lowest as well.


With a merchant switch chip providing competitive throughput and features, Arista didn’t need to spend large resources on an ASIC development program, unlike some of its larger competitors.


Instead, they were able to differentiate themselves with software – the second disruption on Jayshree’s list.  Arista developed its own operating system – the Extensible Operating System – leveraging Linux as the foundation.


Our Intel Ethernet switch FM6000 series silicon contributes to software innovation through its programmable FlexPipe™ frame processing technology.  FlexPipe’s configurable microcode allows switch manufacturers to update features or support new standards even on systems that are already in the field.


In order for our customers to evaluate the advanced FM6000 series features, we also provide our Seacliff Trail reference design, which has a Crystal Forest-based control plane processor on board.  Crystal Forest can be used as a standard control plane processor, as an SDN controller host, or even to experiment with Intel’s Data Plane Development Kit (DPDK™).


It’s been great to have played a role in the market changes that have given Arista – and other companies – a chance to launch and to flourish.  Viva la Disruption!

2013 is the year that software-defined networking gets its early installations, having gone through product trials and technology evolution over the past several years. Now, as vendors roll out their SDN networking offerings and companies start the buying process, the question for many is: “How can I best use this technology in my network?”


Network use cases, in fact, were among the most requested items at the Gartner Data Center Summit my colleague Gary Lee attended in December 2012. To give a sense of what problems SDN can solve in your network, I am starting a series of blog posts on key SDN use cases, beginning with network virtualization.


First, let’s discuss server virtualization, which has helped to power cloud services by allowing data centers to scale computing power at a lower cost. Server virtualization dramatically reduces the cost of computing services and allows multiple customers to leverage a single server. Server virtualization and SDN are alike in being high-profile solutions that can have a dramatic impact on a data center.


Network virtualization logically divides a 10Gb Ethernet connection into multiple lower-speed connections so that each virtual machine in a server can have its own dedicated connection without requiring a separate NIC and cable.


In an IP network, this is done using virtual LANs (VLANs), but that doesn’t scale well across a heterogeneous network unless each vendor supports the same VLAN protocols. By replacing per-switch IP decision making with a central SDN controller for the entire network, virtualized network connections can be made more easily from one end of the network to the other.




How does this work? In the diagram, the three virtual networks are shown in their logical grouping, but below that we see how the actual switching infrastructure is organized to make this happen. Top-of-rack switches have connections to virtual machines located in the different servers. The connections through the physical infrastructure are guided by the SDN controller. The controller must map the logical network connections to the physical network across all of the switches. This is a very complex state-management task that SDN is particularly good at.
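To make the controller's mapping step concrete, here is a minimal sketch: given a physical switch topology and a logical connection between two VMs, find the chain of switches the controller must program. The topology, the VM and switch names, and the use of breadth-first search are illustrative assumptions, not any particular SDN controller's API.

```python
from collections import deque

# Assumed physical topology: two top-of-rack switches joined by one
# aggregation switch, expressed as an adjacency list.
TOPOLOGY = {
    "tor1": ["agg1"],
    "tor2": ["agg1"],
    "agg1": ["tor1", "tor2"],
}
# Each VM hangs off exactly one top-of-rack switch.
VM_LOCATION = {"vm_a": "tor1", "vm_b": "tor2"}

def physical_path(src_vm, dst_vm):
    """Breadth-first search for the switch path carrying a logical link."""
    start, goal = VM_LOCATION[src_vm], VM_LOCATION[dst_vm]
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path  # the switches the controller must program
        for nxt in TOPOLOGY[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no physical route exists

print(physical_path("vm_a", "vm_b"))  # ['tor1', 'agg1', 'tor2']
```

A real controller repeats this for every logical link in every virtual network, and must also update the mappings as VMs migrate, which is why the central state-management role matters.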


Going forward, as server connections increase from 10Gb Ethernet to 40Gb Ethernet, there will be even more headroom for virtualized network connections, making for dramatically more complex network designs. But SDN is intended to simplify that complexity, so that all cloud networks can use this technology to maximize their networking investment.
