Ethernet Products
Determine ramifications of Intel® Ethernet products and technologies

I340 quad port NIC - maximum VLANs?

idata
Employee

Hi,

I've just been looking over the product brief here: http://www.intel.com/Assets/PDF/prodbrief/323205.pdf

I'd like to know the maximum number of definable VLANs for this card under Windows Server 2008 R2. On older cards, the limit has been 64 VLANs per port (or per team if using teaming). Is this still the limit? Is there a limit per card? And if using multiple cards, is there a limit per system?

My use case is to build a router/firewall with Microsoft Forefront Threat Management Gateway 2010 Enterprise Edition. This needs a network interface per protected network, and I need to put each protected network on its own VLAN. The more supported VLANs the better - I need to be able to scale to thousands of protected networks using as few servers as possible.

In ProSet/ANS, I would define multiple VLANs per team/port. These would appear to Windows as network interfaces, each of which could then be configured with an IP address and so on. Ideally I want to use teaming (for switch and NIC redundancy) and NLB (for TMG redundancy).

Thanks,

Aitor

Mark_H_Intel
Employee

Intel(R) Network Connections software allows configuration of up to 64 VLANs per port (or per team when ports are teamed). The 64-VLAN limit applies to any Intel(R) Ethernet adapter, because all adapters use the same software for configuring VLANs. The limit is per port (or team), not per system.

Mark H

idata
Employee

Hi Mark,

Thanks for your reply! Could you just clarify this for me? (I've got a bit of flu at the moment, and it's probably affecting my comprehension.)

On a quad-port card, then, the maximum would be 4 ports x 64 VLANs per port = 256 VLANs? And if you had, say, five such cards, the limit would be 5 x 4 x 64 = 1280 VLANs (or half that with two-port teams)?
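
To sanity-check my arithmetic, here's a quick sketch (Python purely for illustration; the 64-VLANs-per-interface figure is the ProSet/ANS limit described above, and the helper function is hypothetical):

VLANS_PER_INTERFACE = 64  # ProSet/ANS limit per port, or per team when teaming

def max_vlans(cards, ports_per_card=4, ports_per_team=1):
    # An "interface" is a port, or a team of ports when teaming is enabled.
    interfaces = (cards * ports_per_card) // ports_per_team
    return interfaces * VLANS_PER_INTERFACE

print(max_vlans(1))                    # 256  - one quad-port card, no teaming
print(max_vlans(5))                    # 1280 - five quad-port cards, no teaming
print(max_vlans(5, ports_per_team=2))  # 640  - five cards, two-port teams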

Finally, I know I didn't ask this before, but do you know if Windows NLB will play nicely with the Intel drivers? I want to build an HA firewall / router using Forefront TMG Enterprise Edition. This would use NLB for load balancing / failover so that I can have two servers. In Windows, this means that NLB would be bound to each Intel VLAN, and then ideally each VLAN would be running on a team, with physical ports connected to different physical switches, so that I have redundancy at the switch, NIC and server level.

Thanks,

Aitor

Mark_H_Intel
Employee

Hi Aitor,

You are absolutely right. You could configure 64 VLANs on each interface (port or team). The number of VLANs supported by the software is 64 x the number of interfaces.

And yes, our teams will still work when you are using Windows NLB. Make sure you are using the latest drivers and software. I do not know the specifics, but some of the older software had issues with Windows NLB.

I do not know how many quad port adapters you will be able to install in one system before you run up against system resource or performance issues. Be sure to look at the software release notes (readme.txt), especially the Quad Port Server Adapter Notes.

The following section was copied and pasted here from the release notes. Even though the particular adapter you are considering is not listed in this note, you might still run into a similar situation if you install too many adapters in one system.

System does not boot
--------------------

Your system may run out of I/O resources and fail to boot if you install
more than four quad port server adapters. Moving the adapters to different
slots or rebalancing resources in the system BIOS may resolve the issue.

This issue affects the following Adapters:

* Intel(R) Gigabit ET2 Quad Port Server Adapter
* Intel(R) Gigabit ET Quad Port Server Adapter
* Intel(R) Gigabit VT Quad Port Server Adapter
* Intel(R) PRO/1000 PF Quad Port Server Adapter
* Intel(R) PRO/1000 PT Quad Port LP Server Adapter

I hope this answers your questions, and I hope you are recovering from the flu.

Mark H

idata
Employee

Thanks Mark!

I guess I will have to test the specific combination of team > VLAN > NLB, as it's probably a relatively rare setup. In another situation I'm having problems with team > Hyper-V virtual NIC with a VLAN set > VM running NLB - but that's not on Intel network hardware or drivers.

It's good to know that in theory I can get that many VLANs in Windows. The four-adapter limit you mention might have something to do with a lack of PCIe lanes on certain motherboards/chipsets. My hope is that systems with lots of slots would be capable of more - e.g. one of my current systems has two IOH chips giving 72 PCIe lanes and seven PCIe x8 slots, so there is plenty of bandwidth for the adapters and everything else on the motherboard.
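
For what it's worth, the lane budget on that board looks comfortable on paper (a rough Python sketch; the slot and lane counts are just the figures quoted above):

TOTAL_LANES = 72                 # two IOH chips on the board described above
used = 7 * 8                     # seven PCIe x8 slots, fully populated
print(used, TOTAL_LANES - used)  # 56 lanes used, 16 left for onboard devices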

In the application I'm considering, and looking at the real-world bandwidth usage of my current setup (which needs to scale up/out), I think I'm only looking at an absolute peak of about 1 Gbit/s per 4000 VLANs. The average would be much lower than that. While an individual 1 GbE port on a quad-port card might briefly max out, it's very unlikely that all ports would need to run at full speed simultaneously. In fact, if it were possible in ProSet to create, say, 1024 VLANs per port/team, I could probably meet the total bandwidth requirement with a single quad-port card, plus another dual- or quad-port card for the outside network.
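
As a back-of-the-envelope check on those numbers (a hypothetical Python sketch; the 1 Gbit/s peak and the 4000 and 1024 VLAN counts are just my estimates above):

PEAK_BPS = 1_000_000_000  # ~1 Gbit/s aggregate peak across all protected networks
print(PEAK_BPS / 4000)    # 250000.0 -> ~0.25 Mbit/s average per VLAN at peak
print(PEAK_BPS / 1024)    # 976562.5 -> under 1 Mbit/s per VLAN even with 1024
                          # VLANs sharing one saturated 1 GbE port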

As for the CPU work of running a firewall/router: I don't have enough data, but I would imagine the current state of the art (the 8-way, 10-core, 20-thread Xeon system announced by HP) would be enough! It's not a valid comparison, but I run a software iSCSI target that can max out 10GbE on a quad-core E5520 without even hitting 50% CPU usage. I imagine routing, firewalling, AV scanning and so on mean much more CPU work per byte or per packet than iSCSI, but clearly there's plenty of headroom in today's systems.

cheers,

Aitor
