>>Are you saying you start dropping packets after 116kpps? Or that's just the cutoff where data starts using the second port in the trunk?
What I see happening is that when one of the interfaces in the trunk hits 116kpps, yes, it does start dropping traffic, even though the second interface in the trunk can still accept more. The network switch is configured to load balance by source MAC address, I believe, but because most of this traffic is coming from one server, it isn't being balanced very effectively.
>>I can't see the post to linuxquestions.org, it wants a login. Did you get any feedback from there yet?
I have not gotten any feedback anywhere yet. I have posted there, on Nabble.org, and on the developers mailing list.
Any help would be appreciated.
I think you've hit the nail on the head. Load balancing by MAC address won't do you any good if most of your traffic is coming from the same MAC address. Once that TCP conversation hits the per-port packet limit (and 116kpps sounds about right), you're going to get dropped packets.
I know very little about Linux bonding. Do you have any other options for load balancing, besides source MAC?
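For what it's worth, if the Linux bonding driver (rather than the switch) is doing the transmit hashing, one option worth exploring is a hash policy that looks beyond the source MAC. Here is a rough sketch, assuming a bond named bond0 running in balance-xor or 802.3ad mode (the two modes where xmit_hash_policy applies, per the kernel bonding docs); 802.3ad also requires LACP support on the switch, and this only affects the transmit side of the bond, since the switch's own hash policy still governs the return direction:

```shell
# Hash transmitted packets on IP addresses and ports (layer3+4) instead
# of MAC addresses, so multiple flows from a single server can still
# spread across the slave interfaces.
# The bond must be down before its mode can be changed.
ip link set bond0 down
echo 802.3ad  > /sys/class/net/bond0/bonding/mode
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
ip link set bond0 up

# Equivalently, as a module option at load time:
#   options bonding mode=802.3ad miimon=100 xmit_hash_policy=layer3+4
```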
I'm not sure I follow here. As this is just a packet-filtering firewall, not a stateful one, TCP has no relevance here. In this case the packets are actually all UDP, averaging about 256 bytes. I definitely know that devices have packets-per-second limits, but from reading on the internet about gigabit interfaces and tests that have been done, most people get at minimum 700,000 pps with good interface cards, and others max out around 800,000 pps. So that is my concern: they are getting that out of a single gigabit interface and I can't. I am happy to disable bonding and just use one gigabit interface on each card for testing, but I would imagine the bonding driver has little effect.
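The raw numbers above are nowhere near gigabit bandwidth, which supports the idea that this is a packets-per-second limit rather than a bandwidth one. A quick back-of-the-envelope check (assuming 256 bytes is the full Ethernet frame, and counting the standard 20 bytes of preamble plus inter-frame gap per frame on the wire):

```python
# Sanity-check the numbers in this thread.
FRAME_BYTES = 256          # average frame size reported above
OVERHEAD_BYTES = 20        # 8 B preamble + 12 B inter-frame gap
LINK_BPS = 1_000_000_000   # gigabit link

# Observed rate at which one trunk interface starts dropping.
observed_pps = 116_000
observed_mbps = observed_pps * FRAME_BYTES * 8 / 1e6
print(f"116 kpps of 256 B frames = {observed_mbps:.0f} Mbit/s")

# Theoretical line-rate pps ceiling for 256 B frames on gigabit.
line_rate_kpps = LINK_BPS / ((FRAME_BYTES + OVERHEAD_BYTES) * 8) / 1e3
print(f"line-rate ceiling for 256 B frames = {line_rate_kpps:.0f} kpps")
```

So 116kpps is only about a quarter of gigabit line rate for this frame size; the bottleneck is in how many packets per second the path can process, not link capacity.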
Let me know what you think.