
    Questions on the Intel Modular Server




      Very close to buying a fully loaded Intel Modular Server with 6 x compute modules (2.4 GHz E5645, 64 GB RAM each), 6 x 300 GB 15k SAS drives, and 8 x 900 GB 10k SAS drives.


      I'm hoping to dedicate 3 x modules to Hyper-V, 2 x modules to Remote Desktop servers (around 200 users), and 1 x module for redundancy.


      Need advice on:


      1) How the drives could be divided up for this purpose. I'm hoping to virtualise using fixed VHDs for Exchange 2010 (1.8 TB storage), a couple of SQL servers (DBs no more than 10 GB in size), SCCM 2012, a print server, and a couple of legacy servers. For example, OS VHDs for 12 VMs (100 GB each) and the rest for data VHDs (a rough capacity tally follows this list).


      2) I understand that even with the mezzanine card add-on (giving four gigabit ports), I could only aggregate two, with the other two reserved for redundancy. On HP ProLiants I normally dedicate separate NICs to VMs (say, separate ports for each machine and one for management). How does a module running as a Hyper-V host cope with 4 VMs? Would performance suffer if all the VMs plus management have to share two NICs?


      3) The storage fabric being 3 Gb SAS, what impact does that have on overall disk performance?
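
      As a rough sanity check on capacity, here is a quick tally. The RAID levels below are purely my assumption for the sketch, not a recommendation:

          # Rough capacity tally for the planned layout. The RAID choices
          # are assumptions for illustration only; adjust to the real pools.
          GB = 1
          TB = 1000 * GB

          raw_15k = 6 * 300 * GB   # six 300 GB 15k SAS drives
          raw_10k = 8 * 900 * GB   # eight 900 GB 10k SAS drives

          # Assumed: RAID 10 on the 15k spindles, RAID 5 on the 10k spindles.
          usable_15k = raw_15k / 2             # RAID 10 keeps half of raw
          usable_10k = raw_10k * (8 - 1) / 8   # RAID 5 loses one drive to parity

          os_vhds  = 12 * 100 * GB   # twelve fixed 100 GB OS VHDs
          exchange = 1.8 * TB        # Exchange 2010 storage
          sql      = 2 * 10 * GB     # two small SQL Server databases

          print(f"usable 15k pool: {usable_15k:,.0f} GB")                 # 900 GB
          print(f"usable 10k pool: {usable_10k:,.0f} GB")                 # 6,300 GB
          print(f"committed:       {os_vhds + exchange + sql:,.0f} GB")   # 3,020 GB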


      Many Thanks


      I have also enclosed a diagram of the planned design.

        • 1. Re: Questions on the Intel Modular Server

          You divide the drives by spindle count, I think.  I would recommend at least two different storage pools.


          I haven't seen the NICs become a chokepoint for virtual machines, but you are right: there is no way to send and receive traffic on all four ports simultaneously.
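
          A quick back-of-the-envelope check of the worst case (the management reservation below is an assumed figure, not a measured one):

              # Worst-case share of two aggregated gigabit ports across four
              # VMs plus host management. The 100 Mbit/s reservation is an
              # assumption for the sketch.
              link_mbps = 2 * 1000        # two active gigabit ports
              mgmt_reserve_mbps = 100     # hypothetical management reservation
              vms = 4

              per_vm_mbps = (link_mbps - mgmt_reserve_mbps) / vms
              print(f"~{per_vm_mbps:.0f} Mbit/s per VM if all four saturate at once")

          That works out to roughly 475 Mbit/s per VM, and most workloads burst rather than saturate, so real contention is usually well below that worst case.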


          3 Gb SAS is standard and, like the NICs, usually isn't a bottleneck.

          • 2. Re: Questions on the Intel Modular Server

            I have the same question, and this is the response from Intel Support:


            "The connection bus from SAS disk to Storage Control Module (SCM) is not 3Gbps. It is a line from each of SCMs to each of SAS Expander and then from Expander to Backplane.


            The bottleneck is the connection from the SCM to each compute module. It is limited to 3 Gbps, so requests from one compute module to the drives are limited to those 3 Gbps. Even using external storage, you will be limited to those 3 Gbps per compute module.


            In any case, it is impossible to reach 500 MB/s for now. The necessary changes are under development, but there is no timeline for a fix yet.


            On the other hand, we have validated SSDs, because they have several advantages over HDDs: SSDs are more fail-safe, and since no vibration issues can occur, a PFA (predictive failure analysis) state will rarely occur on an SSD. An SSD will also be faster than an HDD even in the current configuration. In practice you will not always reach even those 3 Gbps, due to other factors, so the 3 Gbps link will not be the bottleneck all the time."
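
            For context, the arithmetic behind why 500 MB/s is out of reach on a 3 Gbps link is straightforward: SAS uses 8b/10b line coding, so only 80% of the raw bit rate carries data:

                # Theoretical ceiling of a 3 Gbps SAS link. With 8b/10b
                # encoding, 8 data bits travel in every 10 line bits.
                line_rate_bps = 3.0e9
                encoding_efficiency = 8 / 10
                ceiling_mb_per_s = line_rate_bps * encoding_efficiency / 8 / 1e6

                print(f"ceiling: {ceiling_mb_per_s:.0f} MB/s per compute module")  # 300 MB/s

            That ~300 MB/s ceiling, before any protocol overhead, is well short of 500 MB/s, which matches what Intel Support says above.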