
Back in March, I introduced you to the new I/O and networking-related advancements in Intel® Xeon® Processor E5 family-based server platforms: Intel® Integrated I/O, Intel® Data Direct I/O Technology (Intel® DDIO), and the Intel® Ethernet Controller X540. I’ve since provided a closer look at the Intel Ethernet Controller X540 and how integrating 10GBASE-T connectivity in server motherboards will help drive 10 Gigabit Ethernet adoption.

 

But what happens to network data inside the server, before or after it’s on the network? That’s where Intel Integrated I/O and Intel DDIO come in. Intel designed these architectural changes to the Intel® Xeon® processor specifically to improve I/O performance. They are significant advancements, and it’s worth taking a closer look to understand how they work and why they’re important.

 

Intel Integrated I/O, a feature of all Intel Xeon processor E5-based servers, comprises three elements:

  • An integrated I/O hub
  • Support for PCI Express* 3.0
  • Intel DDIO

 

The graphic below shows the architectural differences between a previous-generation Intel Xeon processor-based server and a server based on the Intel Xeon processor E5 family.

 

[Figure: IO_flow.gif — I/O data flow in a previous-generation Intel Xeon processor-based server (left) versus an Intel Xeon processor E5-based server with Intel Integrated I/O (right)]

 

In older systems, as shown on the left, a discrete I/O hub managed communications between the network adapter and the server processor. With Intel Integrated I/O, the I/O hub has been integrated into the CPU, making for a faster trip between the network adapter and the processor. Tests performed by Intel have shown a reduction in latency of up to 30 percent for these new systems.

 

The integrated I/O hub includes support for PCI Express (PCIe*) 3.0, the latest generation of the PCIe specification. The PCIe bus is the data pathway used by I/O devices, such as network adapters, to communicate with the rest of the system, and PCIe 3.0 delivers effectively twice the bandwidth of PCIe 2.0. That extra headroom will be important when quad-port 10GbE adapters start finding their way into servers in the coming months.
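To put rough numbers on "effectively twice," here's a quick back-of-the-envelope sketch in Python. The per-lane transfer rates and encoding overheads (PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b encoding) come from the PCIe specifications; the x8 slot and the assumption of four 10GbE ports running at line rate in one direction are just illustrative, not tied to any particular adapter.

```python
# Back-of-the-envelope PCIe bandwidth comparison (one direction only).
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding   -> 80% of the raw rate carries payload.
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~98.5% of the raw rate carries payload.

def lane_gbps(raw_gt_per_s, encoding_efficiency):
    """Effective payload bandwidth of a single PCIe lane, in Gb/s."""
    return raw_gt_per_s * encoding_efficiency

pcie2_lane = lane_gbps(5.0, 8 / 10)      # 4.0 Gb/s per lane
pcie3_lane = lane_gbps(8.0, 128 / 130)   # ~7.88 Gb/s per lane

lanes = 8  # a typical x8 slot for a 10GbE server adapter (illustrative)
pcie2_slot = pcie2_lane * lanes          # 32 Gb/s  (~4 GB/s)
pcie3_slot = pcie3_lane * lanes          # ~63 Gb/s (~7.9 GB/s)

quad_port_10gbe = 4 * 10.0               # 40 Gb/s of traffic in one direction

print(f"PCIe 2.0 x8 slot:        {pcie2_slot:.1f} Gb/s")
print(f"PCIe 3.0 x8 slot:        {pcie3_slot:.1f} Gb/s")
print(f"Quad-port 10GbE at line rate: {quad_port_10gbe:.1f} Gb/s")
# A quad-port 10GbE adapter at line rate would overrun a PCIe 2.0 x8 slot,
# but fits comfortably within PCIe 3.0 x8.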

 

The remaining component of Intel Integrated I/O is Intel DDIO. It’s another significant architectural change to the Intel Xeon processor, and perhaps the best way to explain its benefits is to again compare an Intel Xeon processor E5-based platform to a previous-generation system.

 

[Figure: io_copies.gif — data copies on a previous-generation server (left) versus an Intel Xeon processor E5-based server with Intel DDIO (right)]

In those older systems, as shown on the left, data coming into the system via the network adapter was copied into memory and then written to processor cache when the processor was ready to use it. Outbound data took a similar journey through memory before reaching the network adapter.

 

That model wasn't very efficient, but it originated when CPU caches were relatively small and network speeds weren't fast enough for those extra trips through memory to become a bottleneck. Today, however, we're in a much different place, with growing 10GbE adoption and Intel Xeon processors that support up to 20 MB of cache.

 

With Intel DDIO, Intel has re-architected the processor to allow Intel Ethernet adapters and controllers to talk directly to cache, eliminating all of those initial memory copies. (Once the cache is filled, least-recently-used data will be retired to system memory.) You can see the trips data takes in and out of the system on the right side of the graphic above. Much simpler, isn't it?
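To make the difference concrete, here's a deliberately simplified toy model, written in plain Python. It isn't anything you'd run on a server (Intel DDIO is transparent to software and requires no code changes); it just counts DRAM writes for inbound packets under the two delivery schemes, including the least-recently-used retirement mentioned above. The tiny cache size is an assumption chosen so eviction is visible in the demo.

```python
from collections import OrderedDict

CACHE_LINES = 4  # deliberately tiny so eviction shows up in a 10-packet demo

def deliver_packets(packet_ids, ddio_enabled):
    """Count DRAM writes for inbound packets under the two delivery models."""
    cache = OrderedDict()
    dram_writes = 0
    for pkt in packet_ids:
        if not ddio_enabled:
            # Pre-DDIO: the adapter's DMA write always lands in system memory first;
            # the CPU later pulls it into cache when it's ready to use it.
            dram_writes += 1
        # The data is (also) placed in cache; with DDIO this is the only copy made.
        cache[pkt] = True
        cache.move_to_end(pkt)
        if len(cache) > CACHE_LINES:
            cache.popitem(last=False)       # retire the least-recently-used entry
            if ddio_enabled:
                dram_writes += 1            # data reaches memory only on eviction
    return dram_writes

pkts = list(range(10))
print("DRAM writes without DDIO:", deliver_packets(pkts, ddio_enabled=False))  # 10
print("DRAM writes with DDIO:   ", deliver_packets(pkts, ddio_enabled=True))   # 6
```

In the toy run, every packet touches memory on the old path, while on the DDIO path only the packets evicted before use do; in practice, packets the CPU consumes promptly never make that trip at all.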

 

What does all of this mean for your servers? In a nutshell, better performance. In our labs, we’ve seen a 3x improvement in I/O bandwidth headroom within a single Intel Xeon processor compared to previous-generation systems. And with fewer trips to memory and a direct path to cache, you’ll also see lower power consumption and reduced latency. Specific performance improvements will vary based on application type.

 

If you’d like to learn more, we’ve put together some great resources:

 

For the latest and greatest, follow us on Twitter: @IntelEthernet
