I have had my Intel S5520SC Workstation Board for two years now and I thought I would reminisce about the experience.
I hope some Intel system board designers read this - there are some lessons to be learned.
In 2006 I decided I was going to build my own computer. I started by making it a hobby to learn everything I could about building a computer. After about a year I had decided on the AMD dual processor workstation boards because they had a superior architecture with HyperTransport and NUMA, but I was waiting for AMD's next generation of processor, the K10, such as the Phenom FX. However, once it came out the reviews revealed disappointing performance, significantly less than AMD had been implying during development.
I had heard that Intel was working on something similar to HyperTransport, but the details were elusive. Eventually Intel announced QuickPath, so I became interested in what they might come up with. When Nehalem was announced it was so similar to the K10 that it was easy for me to understand the technology, so I waited for dual processor motherboards to emerge on the market.
There were a lot of Xeon 5500 based motherboards appearing in 2009, but most of them were server boards, and few were workstation boards like I wanted. Of the workstation boards available, some of the implementations just looked silly, but most simply did not fit my goals; each was missing one small detail or another. Eventually I stumbled across the Intel S5520SC, which was a very close match to my goals. The main drawback was that it was not an enthusiast class board with overclocking capabilities, but all the similar boards that did support overclocking were missing too many critical features.
Finally I purchased an S5520SC and built my system. It was relatively straightforward to get it working, albeit my first time assembling a system from scratch. After a few weeks the board just stopped working and I sent it back to Intel for a replacement. For the person who decided to implement that stupid RAID 5 key - there is no good way to remove it - you made my life a personal hell for about a day.
Once I got my replacement board up and running, it kept powering down after running for an hour or so. Eventually, after some telephone support from Intel, I found in the System Event Log (SEL) that the IOH was overheating and causing the system to shut down. It turns out the first S5520SC I bought had older firmware that did not sense the IOH overheating, so it burned out. The second S5520SC had newer firmware that was able to detect the problem and shut down the system before any serious damage was done.
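For anyone chasing a similar mystery shutdown: the SEL can be exported (for example with a BMC utility such as ipmitool's `sel elist`) and searched for temperature events. A minimal sketch of the kind of filtering involved - the sample lines below are made up for illustration, since the exact wording of SEL entries varies by board and firmware revision:

```python
# Sketch: search an exported System Event Log for thermal events.
# The sample lines are illustrative only - real SEL text varies
# by board and firmware revision.
sample_sel = [
    "1 | 05/12/2010 | 14:02:11 | Temperature IOH Therm Trip | Upper Critical going high | Asserted",
    "2 | 05/12/2010 | 14:02:12 | System ACPI Power State | S5/G2: soft-off | Asserted",
    "3 | 05/13/2010 | 09:15:40 | Temperature Baseboard Temp | Upper Non-critical going high | Deasserted",
]

def thermal_events(lines):
    """Return SEL lines that look like temperature-related events."""
    return [ln for ln in lines if "Temperature" in ln]

for event in thermal_events(sample_sel):
    print(event)
```

An IOH thermal-trip entry immediately followed by a soft-off event, as in the first two sample lines, is the pattern that pointed to the overheating IOH in my case.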
I have a custom SilverStone Temjin TJ07 chassis that was modified by CoolIT in Calgary, Canada, for their Peltier cooling system. I had an extra hole cut in the plexiglass window and installed a 120 mm fan right above the IOH. That seems to have solved the problem: the SEL shows the IOH temperature running at nominal levels, and I have had no similar problems since.
Basically the IOH needs a better heat sink, but because it sits right under the slots where the graphics boards go, cooling it is problematic. A better design would be to use a heat pipe like most of the enthusiast boards and dissipate the heat somewhere more convenient.
The next big problem I had was trying to get RAID 5 working. I was used to working with the Intel Matrix Storage Technology found in conventional desktop computers - it is simple to set up and flexible. However, the S5520SC uses the Intel Embedded Server RAID Technology (ESRT), which is a software application in the BMC. This is incredibly clumsy to set up from either the BIOS or the Deployment Toolkit CD. It was also very frustrating to find out that ESRT cannot support RAID 0 on one virtual disk and RAID 5 on another virtual disk the way MST can. Since the S5520SC has an ICH10, which implements hardware RAID and uses Matrix Storage Technology, I still cannot fathom why Intel chose to use ESRT, and no one has been able to explain any particular advantage it has over using the hardware RAID in the ICH10.
Another frustration is that the S5520SC has only 6 SATA ports, while most server/workstation chassis have room for six 3.5" disks. This means I have to give up one SATA port for my optical drive and cannot build a RAID system with 6 drives. Many boards from other vendors (who also use the ICH10) add one or more SATA ports for optical and other drives. One extra internal SATA and one extra external SATA is a very popular configuration.
The next big frustration was EFI support. It turns out that if you are using ESRT, you cannot enable EFI optimized boot, so you cannot boot an operating system in native EFI mode. This forced me to create two virtual disks, because I had 8 TB of RAID 5 and non-EFI Windows cannot boot from a GPT formatted disk. Consequently I had to create a smaller virtual disk with MBR to boot from, and the other virtual disk with GPT.
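The underlying limit is worth spelling out: an MBR partition table records partition sizes as 32-bit sector counts, so with the usual 512-byte sectors it tops out at 2 TiB, far short of an 8 TB array. A quick check of the arithmetic:

```python
# MBR partition tables use 32-bit LBA fields, so with 512-byte
# sectors the largest addressable partition is 2 TiB - which is
# why an 8 TB RAID 5 volume must be GPT.
SECTOR_SIZE = 512       # bytes per sector (the common case)
MAX_SECTORS = 2 ** 32   # 32-bit sector count in the MBR

max_mbr_bytes = SECTOR_SIZE * MAX_SECTORS
max_mbr_tib = max_mbr_bytes / 2 ** 40

print(max_mbr_tib)  # 2.0 (TiB)
```

So without EFI boot, the bootable virtual disk has to stay under 2 TiB, and the remaining capacity has to live on a separate GPT disk - exactly the split I ended up with.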
At this point I was beginning to feel like the S5520SC was an experiment or prototype that was never finished properly and was sent to market prematurely - there were just too many good intentions with incomplete implementations. This is still my feeling today.
My more recent problem has been that the S5520SC cannot support graphics cards that require more than 300 watts. This is not a physical limitation of the motherboard; it is a design constraint implemented in firmware by the design team - most likely for chassis cooling considerations and not power utilization considerations. It is especially frustrating that it is not possible to remedy this problem with a firmware update.
Given all the problems I have encountered, I would like to remind people that in most other aspects the S5520SC is a very competent piece of engineering, which is why I chose it over about a dozen other similar system boards.
Now what I really want...
Given two years of experience with the S5520SC here is a list of some of the things I really want in a workstation board:
- Get rid of all the stupid SATA and replace it with ThunderBolt. Connect disks, or even better, SSD devices via ThunderBolt. There should be at least 7 or 8 internal ThunderBolt ports, and 2 to 4 (or more) external ThunderBolt ports. You probably don't need 8 external ports the way most systems offer with USB, as you can daisy-chain ThunderBolt. You probably don't need as many internal ports either, as current SSD devices are not likely to saturate ThunderBolt bandwidth, so daisy-chaining is pragmatic. This is really a long term solution as there are no appropriate storage devices on the market yet - but maybe Intel will build some :-)
- I really have no use for USB or FireWire anymore given that ThunderBolt makes more sense. It should not be hard to design a ThunderBolt cable that supports a number of USB devices such as keyboard, mouse, printer, etc. However, we really need to be able to boot the system from a ThunderBolt device, and the EFI shell needs to be able to read ThunderBolt storage devices (e.g. thumb drives).
- Replace the ESRT with Matrix Storage Manager, or do some serious work to improve ESRT (it really sucks).
- Support EFI properly - no exceptions. While you are at it, please write some better EFI utilities that are more consistent and easy to use, with better UIs. Or document enough of the system specs so third parties can do it themselves.
- Don't build any serious decisions - like limiting the graphics card(s) to 300 watts - into firmware that is not field upgradable. Put more options in the FRUSDR so the system is more configurable and flexible.
- Overclocking. Please put some options in the BIOS so that enthusiast builders can have some fun with the system.