If you’re a data center or network professional, you’ve probably heard of unified networking. If you’re not familiar with it, the concept is pretty simple: combine the traffic of multiple data center networks (LAN, storage, etc.) onto a single network or network fabric – in this case, 10 Gigabit Ethernet. The benefits are just as simple as the concept: a simpler network infrastructure, lower equipment and power costs, and a trusted, familiar fabric as its base.

The idea of Ethernet incorporating different types of network traffic isn’t new; it’s been happening for years. VoIP, streaming video, storage traffic – Ethernet has shown that it’s flexible and scalable enough to incorporate all of them and more. What’s been missing until recently was the dominant data center storage fabric: Fibre Channel.

Today, nearly every Enterprise IT department maintains separate LANs and Storage Area Networks (SANs), and the latter are most often Fibre Channel, though iSCSI is growing rapidly. A separate SAN infrastructure requires storage-specific server adapters, switches, cabling, and support staff. Fibre Channel over Ethernet (FCoE) allows Fibre Channel frames to be encapsulated in Ethernet frames and travel across a 10GbE infrastructure. This convergence can lead to big reductions in storage network equipment and its associated costs. There’s more to it than that, but that’s enough Unified Networking 101 for now.
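
For readers who want to picture that encapsulation, here’s a minimal, illustrative sketch in C of the frame layout FCoE implies. It isn’t taken from any vendor’s code, the field grouping is simplified, and a real implementation has to worry about byte order and structure packing; the one firm detail is that an entire Fibre Channel frame rides inside a standard Ethernet frame identified by EtherType 0x8906.

/* Illustrative only: simplified FCoE (FC-BB-5 style) frame layout. */
#include <stdint.h>

#define ETHERTYPE_FCOE 0x8906   /* EtherType that marks a frame as FCoE */

struct eth_hdr {                /* outer Ethernet header */
    uint8_t  dst_mac[6];        /* e.g., the FCoE forwarder (switch) */
    uint8_t  src_mac[6];        /* e.g., the server's CNA or Open FCoE port */
    uint16_t ethertype;         /* ETHERTYPE_FCOE */
};

struct fcoe_hdr {               /* 14-byte FCoE header */
    uint8_t  ver;               /* 4-bit version, remainder reserved */
    uint8_t  reserved[12];
    uint8_t  sof;               /* start-of-frame delimiter */
};

struct fc_hdr {                 /* the original 24-byte Fibre Channel header */
    uint8_t  r_ctl;
    uint8_t  d_id[3];           /* destination FC address */
    uint8_t  cs_ctl;
    uint8_t  s_id[3];           /* source FC address */
    uint8_t  type;
    uint8_t  f_ctl[3];
    uint8_t  seq_id;
    uint8_t  df_ctl;
    uint16_t seq_cnt;
    uint16_t ox_id;
    uint16_t rx_id;
    uint32_t parameter;
};

/* After fc_hdr come the FC payload and CRC, then an FCoE trailer carrying the
 * end-of-frame (EOF) delimiter and padding, and finally the Ethernet FCS. */

Because the Fibre Channel frame is carried unmodified, the storage side still sees native FC semantics; only the transport underneath changes.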

So with that out of the way, let’s take a look at some recent developments in the world of unified networking and see where things are headed.

In late January, Intel announced the availability of Open FCoE on the Intel Ethernet Server Adapter X520 family and its qualification by EMC and NetApp. Why was that important? Because it means standard, trusted Intel Ethernet adapters now deliver complete LAN and SAN unification, with FCoE and iSCSI support at no additional charge. Until then, FCoE support required expensive and complicated converged network adapters (CNAs). Open FCoE builds non-proprietary FCoE support into the operating system, much as is done for iSCSI today, so performance is optimized for multicore processors and will scale as processors and platforms get faster. It’s also a simpler approach that lets IT save money and ease management by standardizing on a single adapter or adapter family for all LAN and SAN connectivity.

And earlier today, Cisco announced several product and family updates that provide further evidence of Ethernet’s expanding reach in the data center:

  • FCoE support on the Nexus 7000 series switches and MDS fabric switches is important for a few reasons. First, it shows that FCoE and unified networking in general are pushing deeper into the network, beyond top-of-rack switches to the director-class switches that IT depends on to meet the high-availability requirements of mission-critical networks. Second, customers now have more choices for deploying unified networking. Today, many 10GbE unified networking deployments rely on a top-of-rack switch that connects to the servers in the rack. The port density of the Nexus 7000 means customers can bypass the top-of-rack switch altogether, simplifying their switching infrastructure by removing a layer of switches. And finally, these updates bring the functionality and scalability of Fibre Channel SANs to FCoE, which should help allay fears that FCoE isn’t ready for prime time.

  • Cisco has updated the Nexus 5000 switch family with fixed ports that support 10 Gigabit LAN traffic, 10GbE SAN traffic (iSCSI and FCoE), and native Fibre Channel connections. These “unified ports” will make it easier for IT to connect to FC SANs, as previous switches in this family required add-in modules to connect to those networks. These ports also provide an easy upgrade path for folks who are deploying 10GbE and plan to enable FCoE in the future.

  • The Nexus 3000 is a new family of switches aimed at environments where low-latency performance is critical, such as financial services and high-performance computing clusters. InfiniBand networks are often used for clustering, but with the rise of technologies such as iWARP (Internet Wide Area RDMA Protocol), Ethernet can deliver the same performance while providing a more flexible and better-understood network fabric.

It’s pretty clear that Intel, Cisco, and others are headed down the same “Everything Ethernet” path. And why wouldn’t we be? Time after time, Ethernet has shown that it can expand to incorporate new types of traffic. And with the roadmap to 40GbE and 100GbE already sketched out, Ethernet has plenty of headroom for growth. So stay tuned. We’ll have more to talk about in the coming months.