Tricky question. With a modern OS that has NUMA support (Win 7 64-bit, Red Hat 4 and above), 2 DIMMs on each processor will IMHO be better than 4 DIMMs on one CPU.
Thanks for your reply! I started to think that no one would ever answer!
I actually tested it! And I must say that speed tests are fantastic!
So I think I shouldn't get another set of 64 GB just to fill 4 slots on each processor?
I'm using CentOS 6.4 64-bit, by the way!
Each processor has 4 separate memory channels (marked by the blue DIMM slots).
The more channels you use, the faster the memory can be accessed.
The memory performance difference between 2 channels and 4 channels is substantial. However, unless you are running a memory-intensive application or a memory benchmark, you may not notice it in normal operation, since I/O is where the system bottleneck most often occurs. (You can only click the mouse so fast on a web site.)
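If you want a rough number to compare the two configurations yourself, timing a bulk memory copy is the simplest test. Here is a minimal Python sketch (the function name and buffer size are my own choices, not from any benchmark suite); a proper tool like STREAM measures channel bandwidth far more carefully, but even this will show whether adding DIMMs moved the needle:

```python
import time

def copy_bandwidth(nbytes=256 * 1024 * 1024):
    """Time one bulk copy of a large buffer and return throughput in MB/s.

    The copy reads the source and writes the destination, so it moves
    2 * nbytes of data through the memory channels.
    """
    src = bytearray(nbytes)          # zero-filled, so the pages are touched up front
    t0 = time.perf_counter()
    dst = bytes(src)                 # one read pass + one write pass over the buffer
    elapsed = time.perf_counter() - t0
    return (len(dst) * 2 / (1024 * 1024)) / elapsed

print(f"{copy_bandwidth():.0f} MB/s")
```

Run it with 2 DIMMs per CPU, then again with 4, and compare the numbers; just remember an interpreted copy loop is a blunt instrument next to a real bandwidth benchmark.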
Stability-wise, the system does not care. Run 1 to 4 channels and the processor just adjusts to use what it has available.
It is best to keep the memory balanced between the processors.
If you have 24 DIMMs, great.
If you have 12, put 6 on each processor.
If you have 4, put 2 on each processor.
If you have 1 DIMM, hmmm, it will work. Put it on CPU 0 (but don't be so cheap, buy a second DIMM...).
Having 2 processors and making one do memory fetches across CPUs slows things way down.
The memory controller is inside the processor, so accessing memory local to the processor is just one request and response. To access memory on the 2nd processor, the first makes the request, it gets passed to the second, which makes a request to its own memory and gets a response. The second then has to reply to the first and send the data back. About 8 times as slow as it would be with local memory.
P.S. Another issue I have seen is people filling all the DIMM slots but only having 1 CPU. If you don't have a processor in the second socket, you cannot access any memory in the second processor's slots.