After the events of last year, kernel.org needed to redo their infrastructure. Intel was proud to help out and provide all new Intel® Ethernet X520 adapters to the kernel.org team. Team member John 'Warthog9' Hawley was kind enough to spend a few minutes answering some questions for us and I figured I’d share his answers. Wired Blog questions in BOLD. This interview was conducted over e-mail in December of 2011.
How does this donation help to improve kernel.org and the Linux* community?
It will help us in two ways:
(1) It will give us the ability to perform tasks much faster within our back-end infrastructure, which will translate directly to getting data out to our front-ends faster. This will not only speed up the ability for end users to acquire kernel.org content, but it will also help to speed up Linux Kernel development as well.
(2) It will give us the opportunity to serve more content, faster, to our end users. By allowing us to go beyond Gigabit uplinks to the Internet, we should be able to serve more users simultaneously and get our content to high-speed users even faster.
About how many servers power kernel.org?
Overall we expect to have about 28 machines worldwide.
Had you already made the jump to 10 Gigabit Ethernet before now?
Until this recent donation we have been relying on GbE.
Do you use Virtualization?
Kernel.org has in the past not needed virtualization; however, in the rebuilding of the infrastructure we are going to be moving some of our less resource intensive services into virtualized environments.
We are quite happy using KVM / QEMU*, and this performs spectacularly for our needs! We’re kernel.org after all! :)
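For context, a minimal KVM/QEMU guest launch of the sort a deployment like this might use looks roughly like the following. This is a sketch only; the disk image path, memory size, and CPU count are placeholders, not details from the interview.

```shell
# Boot a guest from a disk image with KVM hardware acceleration.
# disk.img is a hypothetical path; virtio devices are used for
# better disk and network performance inside the guest.
qemu-system-x86_64 \
    -enable-kvm \
    -m 2048 -smp 2 \
    -drive file=disk.img,if=virtio \
    -net nic,model=virtio -net user
```

In practice a production host would more likely manage guests through libvirt rather than invoking QEMU by hand, but the underlying mechanism is the same.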
What type of network attached storage do you use?
We will primarily be using NFS and iSCSI; however, we also move a lot of traffic using the rsync protocol. We currently have our own home-built SAN based on Red Hat* Enterprise Linux* 6, but we are looking at other providers and options.
Why did you choose Intel® Ethernet Adapters? (it's okay to be honest and say they were free, but if there was more than that, please say)
Free did have something to do with it, but that wasn't the only reason.
- Intel NICs have always been exceptional hardware, and that hardware, coupled with Intel's great support of the community, makes them rock solid under Linux and probably the best NICs money can buy.
- They are also very interesting from the SR-IOV perspective. For our virtual machine deployment we really wanted to be able to provide separate, distinct Ethernet interfaces to specific virtual machines. SR-IOV gives us the ability to do that with higher throughput, while simplifying our cabling and reducing the number of Gigabit Ethernet ports we need at the switch.
What benefits have you seen since deploying Intel Ethernet Adapters?
We are still in the middle of deployment, but I foresee the benefits described above: faster back-end operations and faster content delivery to our end users.
What advanced features of the Intel Ethernet Adapters are you using like Teaming/Channel Bonding, Virtualization enhancements (Intel® VT-c, VMDq, SR-IOV), and have you seen specific benefits from these features?
By moving to 10G we are attempting to move away from teaming/channel bonding entirely and make things generally simpler. SR-IOV will definitely get used; VT-c and VMDq will likely get used, but I need to double check how those will work with KVM and QEMU. SR-IOV lowers our needed port count and dramatically simplifies our networking with respect to virtualization.
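As a rough sketch of how SR-IOV virtual functions (VFs) are enabled with the ixgbe driver used by X520 adapters: on kernels of this era the usual knob was the driver's max_vfs module parameter. The VF count below is hypothetical, and the exact PCI addresses depend entirely on the system.

```shell
# Load the ixgbe driver (used by the X520) with four SR-IOV
# virtual functions per port; max_vfs is the driver's module
# parameter for this.
modprobe ixgbe max_vfs=4

# Each VF appears as its own PCI device that can then be passed
# through to a KVM guest, giving the guest a distinct Ethernet
# interface without bonding or extra switch ports.
lspci | grep -i "virtual function"
```

A guest would then be given one of these VFs via PCI pass-through (for example through a libvirt hostdev entry), which is what lets each virtual machine have its own dedicated interface on the wire.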
What are some things that Intel Ethernet Linux has been doing that makes them an Industry leader?
- Better support for virtualization. Things like PCI pass-through are now basically required features, and not all NIC vendors have done the work in their hardware, and in the Linux kernel, to support this.
Intel, however, has typically been one of the first vendors to support these types of things, and I wish more vendors would follow their example.
- Intel manages their firmware blobs needed for the Intel NICs very, very well, and that makes systems administration and maintenance so much better and easier. When you have an Intel NIC in a system, you know it will just come up and work. You can't always say that with other vendors. Most of the time they work, but if you deviate in some odd ways you stand a good chance of breaking the NIC and needing to find a different firmware blob that solves the problem.
What's the hardest part about working with Intel Ethernet Linux? (Besides these questions.)
The biggest problem: they aren't the NIC that's embedded on HP or Dell's server motherboards. Most folks (for a variety of reasons) only get the chance to use what's on the motherboard, and additional funding to replace something that will work most of the time is a lot harder to justify.
Other than that I've been very happy with Intel Ethernet under Linux, and have been for many years.
What is the hardest part about doing the work that you do for something as large and as public as Kernel.org?
Just getting people to understand the actual scope of kernel.org. We are
infrastructure, plain and simple, and when infrastructure works no one knows it's there, what it's doing, or how big it is. When it breaks, well, that's when everyone notices. Just trying to keep people apprised of what we are doing, where we are expanding, and so on is a challenge in itself.
After running kernel.org and the challenges it has had, what types of things would you like to see out of Linux networking?
Generally speaking, not much. My biggest gripes generally come down to vendors who aren't fully behind or supporting Linux. Intel is doing a great job here: they have an entire open source team, and, let me be frank, Intel *GETS* open source at a very fundamental level, and it shows. I can't say that of all the NIC chipset manufacturers I use now and have used in the past.
Thanks for taking time to talk to me.
You’re welcome, thanks for helping kernel.org.