
This is the time of year when presents are exchanged all across the world.  Sometimes those presents are a home Ethernet network.  Sometimes wired, sometimes not.  Here is some guidance to help you stay safe while joining the wired world.  Nothing hard and fast, and clearly I’m not a trained safety or health professional, so if in doubt consult somebody who is.


  • Wired vs Wireless.  There are times when the choice is made for you.  Hand-held gaming products are all Wireless, but things like digital video recorders for television have both, with wireless being an after-purchase add-on on some models.  Don’t get me wrong, I love Wireless.  I just think Wired has some advantages, like speed.  Tapping into a wired network is also much harder than tapping into a wireless one.  I know the wires can be scary, but you’d be surprised how easy they are to run.  I wired my own house for 1 Gigabit Ethernet, and it wasn’t that bad.  The hard part was going into the crawl space (Spiders!) and figuring out how to go between the first and second floors (I went down in a closet so nobody can see), but it took less than a day to do.  Make sure you obey all local building codes; you don’t want to get into any trouble.  If you’re not sure what you’re doing, consult a professional.
  • Security.  If you go wireless, don’t name your Wireless Access Point (WAP) anything that can be tracked back to you.  Let’s say somebody wants to do something mean to you; having a WAP named “Doug’s House” would make it easier for them to do something via YOUR network.  Is ID 21358973 your house or your neighbor’s?  A small thing of course, but sometimes it is the small things that make people into victims.  Enforce passwords on all computers in your house, even the kids’ computers.  The sooner they know about security, the better they will be at it later.  Teach them that only a responsible caregiver, like mom and/or dad, ever gets their password, and even that might be age-dependent.  Consider using filter software to keep the more adult portions of the internet at bay.  Consult your local school district to see if it offers classes for adults on internet usage for teens and other internet topics.  The price of a safe internet family is eternal vigilance, but with a little bit of help and education you can keep things safe.  And fun.
  • Limits.  Computer usage can be addictive.  Negotiate limits on time and amount of usage and stick to them.  Limiting content to age-appropriate material can also help avoid other addictions related to the internet.  This is a family blog, so I won’t link to them, but that should give you an idea.  Also, computers can become a way to avoid making real-world friends.  Real-world social networks help prevent bullying and round out personalities.  Computers are good for keeping in contact with friends left behind after a move, but moderation is a good principle.  I got a great piece of advice from my junior-level programming class (don’t ask when ;)):  Computers are a way to make a living; they are not a way of life.  Use them as the tool they are.  Don’t make them into your friend.


Happy end of the year to everybody!


Thanks for making this year the Wired blog’s best so far!


Where's the queues?

Posted by dougb Dec 20, 2011

A food vendor in the US has started using its old slogan from the 80s again, prompting people to ask where the main point of the product is.  I’ve started to see some of the same questions around our Intel® Ethernet Controller I350 and its queues.  Our Intel® 82576 Gigabit Ethernet Controller had 16 queues per port, and as a virtualization product it really excelled.  Now our I350 part is out, and people are looking at it having only 8 queues per port and exclaiming “Where’s the queues!”

An easy reaction to have, but you have to look at the math and the typical usage models to see why we went in the direction we did.  The I350 is a quad-port product, which means its total number of available queues is the same as the dual-port 82576’s.  Spreading that load across four physical ports allows for more total throughput and efficiency.  A modern CPU can handle 1 Gigabit of traffic on a single queue with very little overhead, something that is very hard to do on a 10 Gigabit port.  Having extra queues on a 1 Gigabit port allows for oversubscription, meaning more applications or processes sharing the same maxed-out port; spreading the same number of queues across four physical ports avoids that oversubscription.  I’ve heard that some O/S vendors have been recommending one physical port per virtual machine!  I’d recommend one queue per VM.  One queue per CPU core has been enough at 1 Gigabit for a while.  So a single quad-port I350 covers 32 CPU cores, which is hard to do, or 32 VMs, which is easier to do.  But 32 VMs is a lot of them.  With the quad-port density you can scale quickly without having to add switch chips, which add latency.  Four quad-port I350 cards would give you 128 VMs or 128 cores, all in a modest number of slots.  With our Virtual Machine Device Queues (VMDq) and SR-IOV technologies you don’t have to do one queue per core or one queue per VM; you can share queues efficiently and effectively.  But with that many queues, why share?  All this applies to the other queue-using technologies as well.  With RSS, for example, the cores are the thing, and most machines don’t have 32 cores.  Yet.
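The queue math above is easy to sanity-check. Here is a rough sketch; the helper function is just illustrative arithmetic, and the per-port queue counts are the ones quoted in this post:

```python
# Back-of-the-envelope queue math for the configurations discussed above.
# Per-port queue counts come from the post; the helper is illustrative only.

def total_queues(ports_per_card: int, queues_per_port: int, cards: int = 1) -> int:
    """Total hardware queues available across one or more identical cards."""
    return cards * ports_per_card * queues_per_port

# 82576: dual port, 16 queues per port -> 32 queues total
q_82576 = total_queues(ports_per_card=2, queues_per_port=16)

# Quad-port I350: 8 queues per port -> the same 32 queues total
q_i350 = total_queues(ports_per_card=4, queues_per_port=8)

# Four quad-port I350 cards: 128 queues, i.e. 128 VMs at one queue per VM
q_four_cards = total_queues(ports_per_card=4, queues_per_port=8, cards=4)

print(q_82576, q_i350, q_four_cards)  # 32 32 128
```

At one queue per VM (or per core), the totals line up with the 32-VM and 128-VM figures in the text.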

We do make a dual-port I350, and there the total queue count does decrease, since two ports are not enough to make up for the lower queues-per-port count.  But that product is designed for design compatibility with our new 10 Gigabit BASE-T product, so it fills a different market role.  That new product has over a hundred queues, so the I350 and even the 82576 can’t keep up.  The dual-port I350 was designed to allow a 1 Gigabit LOM today that can be upgraded to a 10 Gigabit LOM without a redesign (if designed right to start with).  This dual-build strategy is the point of the dual-port I350, so in the interest of bringing the new features of the I350 to market, the queue tradeoff was made.  And leaving the dual-port I350 internally the same as the quad-port I350 allowed us to be timely in our product offerings.

Thanks for using Intel® Ethernet!


End of the Line

Posted by dougb Dec 14, 2011

Products come and sadly, products go.  We have a team that meets once a quarter to make sure that we are trimming products as they exit their usable life span.  We also have a couple of products that you should be trending away from for new designs, and they fall into this end-of-life bucket.


First up is an old warhorse, the Intel® 82547GI Gigabit Ethernet Controller.  This shouldn’t be a surprise.  The 82547GI only attached to the ICH5, and that stopped shipping some time ago.  It was the only CSA bus part ever made, and CSA showed the way to PCI Express, which is the bus of choice these days.  I remember working on the 82547GI during its silicon development; it came in like a lion and goes out like a lamb.


The Intel® 82578DM Gigabit Ethernet PHY and the Intel® 82552V Gigabit Ethernet PHY are the ones that you must not use for future designs.  We honor our 7-year embedded roadmap commitment, and since the 82578DM was on it, it will remain available through the end of the 7 years after its launch.  But since this is an immediate transition for these product lines, you’ll want to use other parts for your new designs.  Instead of the 82578DM, use the Intel® 82577LM Gigabit Ethernet PHY; and instead of the 82552V, use the Intel® 82574 Gigabit Ethernet Controller attached via a PCIe lane.


We are also ending some of our “leaded” products as RoHS compliance makes everything go “unleaded.”  Most everyone has already moved away, and we see very few “leaded” designs going forward.  If you’re still using a leaded part, the way you manufacture your product will need to change slightly.


We know an end-of-life announcement for any product can impact our customers and it isn’t something we head into lightly.  In order to be able to focus on our current and future products, sometimes we need to prune the tree.


Consult your local Intel field sales staff for more details on any of these changes.

Intel® Advanced Network Services (Intel® ANS) provides teaming and VLAN functionality for Intel® Ethernet network products running under various Microsoft Windows* operating systems.  The same role is filled by channel bonding in Linux*.  Unlike Linux, Windows provides no common teaming architecture in the O/S.  That might change in the next Windows edition; we will have to wait and see.  Until then it’s an each-vendor-has-at-it free-for-all.  But Intel ANS does provide some wiggle room for partners that have Intel add-in cards alongside 3rd-party LAN on Motherboard (LOM) implementations.  We call it the Multi-Vendor Teaming (MVT) mode of ANS.


While MVT might sound like a great thing, you need to know up front that an MVT team is not as good as an all-Intel team.  This is a natural outcome of the unique way each vendor implements and defines its products.  Because of this unevenness in implementations, we have to enforce lowest common denominator (LCD) rules for MVT.  Jumbo frames are out, as is anything else that isn’t a Microsoft standard with a standard OID.  Why are jumbos out?  Consider: does the size a vendor lists as supported include the CRC or not?  That difference can mean a packet gets rejected by the infrastructure for being bigger than the MTU.  Things like RSS and checksum offloads also typically end up being turned off for LCD reasons.
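The CRC ambiguity behind the jumbo-frame ban can be sketched numerically. This is a hypothetical illustration, not any vendor’s actual behavior: suppose two link partners both advertise a 9018-byte maximum frame size, but disagree on whether that figure includes the 4-byte CRC.

```python
# Hypothetical sketch of the jumbo-frame ambiguity described above:
# does an advertised maximum frame size include the 4-byte CRC?
CRC_LEN = 4

def fits(frame_len_on_wire: int, advertised_max: int, max_includes_crc: bool) -> bool:
    """Would a frame of this on-wire length (CRC included) be accepted?"""
    limit = advertised_max if max_includes_crc else advertised_max + CRC_LEN
    return frame_len_on_wire <= limit

# Both ends advertise "9018 bytes" but interpret it differently.
print(fits(9018, 9018, max_includes_crc=True))   # True  - accepted either way
print(fits(9018, 9018, max_includes_crc=False))  # True
# A 9022-byte frame exposes the mismatch: one end drops it, the other doesn't.
print(fits(9022, 9018, max_includes_crc=True))   # False
print(fits(9022, 9018, max_includes_crc=False))  # True
```

A 4-byte disagreement is all it takes for one side of a mixed-vendor team to drop frames the other side considers legal, which is why the LCD rules simply turn jumbos off.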


We do less testing with MVT than with our own native teams, mostly because we are unable to address defects in third-party products.  One day teaming may move up into the O/S-provided realm, like in Linux, but until then, you should know that if you add Intel adapters to a non-Intel LOM you can still make a team using Intel ANS.


Let me know if you have any MVT questions and thanks for using Intel® Ethernet!

There has been a lot of discussion recently about the importance of supporting east-west traffic in large data center networks. This is supported by a recent article in The Register that highlights marketing data from Cisco* showing not only a large portion of data traffic staying within the data center, but also a large percentage of workloads moving into large cloud data centers. Even today, a single web site click may spawn hundreds of server-to-server transactions that must complete in time to maintain a good user experience. All of these factors are fueling the drive to large, flat data center networks.


Traditional data center networks have been built using a hierarchy of access, aggregation and core networking gear. In many cases, these networks provide L2 forwarding at the edge, and L3 tunneling in the core. This means that the core must not only support high bandwidth, it must also support complex frame processing, making this a very expensive solution. In addition, these networks exhibit high latency and high latency variation depending on where the east-west traffic is flowing within the network.


The new data centers will employ flat networks consisting of ToR switches at the edge feeding multiple core switches. New products, such as the Intel® Ethernet Switch FM6000 family, can enable cost-effective ToR switch designs with advanced L3 tunneling features at the network edge instead of in the core. This frees the core switch designers to worry only about efficient bandwidth utilization, greatly reducing system cost. When Intel’s Ethernet controllers and switch silicon are used end-to-end in these large data centers, not only is cost reduced, but the high performance and low latency of these products enable efficient east-west data transfers even in the largest cloud data center networks.


Letting the Cables Speak

Posted by dougb Dec 2, 2011

Here at the wired blog we talk endlessly about our adapters and Ethernet silicon.  But there is more to a network than just an end node.  There is more to it than the link partner.  There is the cable that makes it all work together.  Without the cable you have nothing (we don’t do wireless here on the wired blog), and without a network you have a static end point.  A network is dynamic, and the cable brings that rich content to you.


I was lucky enough to spend some time with the guys at Panduit*, a worldwide leader in cable manufacturing, at their HQ near Chicago, Illinois.  They have been doing electrical interconnects for ages.  They got started with power lines and power interconnects; it makes sense that they would eventually head into Ethernet cables.  I learned a ton from them about the specifics of the language around the cable, like the fact that the jack is actually the female part that the cable plugs into, and the plug is what most people would call the RJ45, the end of the cable.  It reminds me of how specific language develops around an industry.  I was at Panduit to shoot a video (it’s still under construction), but once it’s done, hopefully you’ll learn about 10 Gigabit BASE-T like I did during my visit.  I considered myself an expert, but they taught me a thing or two about the interconnect, the cable.

I asked Tom K from Panduit a few questions about Ethernet cables and here is how it went:


Q1:  What is the most common cable problem?

Believe it or not, Panduit has found the most common problem to be cable management.  As mundane as that sounds, poor cable management can quickly lead to wasted time making moves, adds, and changes as well as drops in network performance.

For example, a patch field or cross connect may start out nice and neat, everything is groomed and looks great.  Over time, with the pressure to get operational changes done quickly, cable management drops off the list of priorities.  People have the best of intentions, thinking “I’ll get back to it later,” and before you know it, the patch field has shifted from being neatly dressed and identified to a disorganized sprawl.

The ironic part is that, in the long term, one is not saving any time at all.  The messier the patch field gets, the more time it takes to make changes, because one has to double- and triple-check that the right move is being made.  It also makes airflow through the cabinet a problem.  Finally, from an installation or permanent link standpoint, we often find that poorly managed cables coming out of the jack have too tight a bend radius, which can cause the link to fail.


Q2:  How much of an impact on a network’s performance do the wrong cables have?

In the context of moving towards 10GBASE-T, a lot.  As an example, let’s say that for whatever reason, a 10GBASE-T link is deployed over CAT5e cabling, rather than CAT6A cabling, which is designed to handle 10 Gig traffic.  The net effect of this decision would be a higher bit error rate than the 10GBASE-T standard calls for, as CAT5e cannot support the cabling requirements for 10GBASE-T.  There also would be more internal crosstalk, such as Near End Crosstalk (NEXT), Far End Crosstalk (FEXT), and Alien Crosstalk (AXT).  (NEXT is crosstalk between the cable pairs that occurs near the 10GBASE-T PHY, FEXT is crosstalk that occurs at the far end of the link away from the PHY, and AXT is crosstalk induced by signals from another cable lying next to the one in question.)  Overall, if the wrong cabling is used, the link will drop packets and overall network throughput and performance will be compromised.


Q3:  What are some tips on moving from CAT5e to CAT6a?

First tip would be to use proper installation techniques.  CAT5e and 1000BASE-T are a little more forgiving than 10GBASE-T, given the higher frequencies used in 10GBASE-T, and the impact of noise is more troublesome too.  An example would be un-twisting too much of the copper pair when terminating CAT6a in an RJ-45 jack.  For 1000BASE-T or CAT5e, excessive un-twist may not impact the performance of the link.  However, with the tighter tolerances of 10GBASE-T and CAT6a cabling, too much un-twist can upset the balance of the twisted pair to the point that crosstalk and external noise sources degrade the link, which in turn hurts throughput and overall network performance.  This problem can be avoided by using the proper tools from the connector manufacturer.

Second tip would be to stress the importance of link testing, or of using cables that are designed to reduce the time and types of testing required.  Testing the usual parameters, such as Near End Crosstalk (NEXT) and return loss, to name just two, is important. However, because of the frequencies involved and the modulation levels that are used for 10GBASE-T, people are most concerned with Alien Crosstalk (AXT). This effect can occur when twisted pair cables are laid down right next to each other in cable raceways, increasing the chance of signal coupling.  Testing for Alien Crosstalk can be a time-consuming, and therefore expensive, process.  Fortunately, Panduit’s standard CAT6a and CAT6a-SD (small diameter) cables use a patent-pending technology that suppresses Alien Crosstalk.  This means one does not need to perform AXT testing with Panduit’s CAT6a solutions, which saves the time and expense of testing for it.


Here is a link to the Tom’s Hardware* article that showed how we test for exactly the kind of stuff Tom is talking about, and more.


If you have questions for the Panduit guys, let me know in the comments.

Thanks for using Intel Ethernet.

(update:  There was an error in the link to Tom's Hardware)
