5 Replies Latest reply on Feb 20, 2018 4:10 AM by brudertl

    RAID 0 Performance Settings for RST




      I have a NUC6i7KYK... I recently installed two 512GB Samsung 960 Pro drives in a RAID 0 volume and re-installed Windows 10 Pro from scratch.  All seemed to go well.


      After Windows 10 installed, and updated, I also installed the latest Intel RST software.


      I ran Crystal Disk Mark on the system expecting to see significantly improved performance over the results I saw from my single Samsung 960 Pro setup, but I didn't.  As a matter of fact, every time I run CDM, it hangs my system.  It seems to hang when the testing switches from the READ tests to the first sequential WRITE test.


      Although I see a 1TB volume, I'm concerned something isn't set up properly.  Any time I run Crystal Disk Mark now, I need to power cycle my computer to get it out of the hung state.  Before the RAID 0 setup, CDM ran like a champ.


      Any suggestions for me?



        • 1. Re: RAID 0 Performance Settings for RST

          Yes, I have a suggestion: Don't waste your time using RAID; it is NOT going to improve your SSD throughput.




          • The transfer rate of a PCIe 3.0 lane is 8 GT/s. After 128b/130b encoding overhead, the theoretical throughput of a x4 PCIe 3.0 connection is 3.94 GB/s.
          • The 4 PCIe lanes routed to each of the two M.2 connectors come from the Platform Controller Hub (PCH) component.
          • The CPU communicates with the PCH via a DMI 3.0 link. This link is electrically equivalent to 4 PCIe 3.0 lanes; that is, its overall theoretical throughput is likewise 3.94 GB/s.
          • The throughput of the DMI 3.0 link is shared by the PCH across *many* interfaces. These include: the two x4 PCIe M.2 connectors, the x1 PCIe M.2 connector for WiFi, the x1 PCIe lane for the SD Card Controller, the x1 PCIe lane for the GbE Controller, a USB-C (USB 3.0) port, six USB 3.0 ports, two USB 2.0 ports, two SATA 6.0Gb/s lanes, the LPC Bus (connection to the Super I/O component), the SPI bus and the I2S connection to the Audio CODEC.
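          To put numbers on the bullets above, here is a quick sketch of the math (plain Python, just arithmetic; the constants are the standard PCIe 3.0 figures, not anything specific to the NUC):

```python
# Theoretical throughput of a PCIe 3.0 x4 link. The same math bounds the
# DMI 3.0 link, which is electrically equivalent to four PCIe 3.0 lanes.

GT_PER_SEC = 8.0        # PCIe 3.0 transfer rate per lane: 8 GT/s
ENCODING = 128 / 130    # PCIe 3.0 uses 128b/130b encoding (~1.5% overhead)
LANES = 4               # x4 link (each M.2 connector, or the DMI 3.0 link)
BITS_PER_BYTE = 8

throughput_gb_s = GT_PER_SEC * ENCODING * LANES / BITS_PER_BYTE
print(f"x4 PCIe 3.0 theoretical throughput: {throughput_gb_s:.2f} GB/s")
# -> 3.94 GB/s, and this one ceiling is shared by everything on the PCH
```

          So even a single 960 Pro can come close to saturating the DMI link by itself; striping two of them behind the same link cannot double anything.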


          Needless to say, with all of these devices and interfaces vying for the throughput of the DMI 3.0 link, there is little chance of getting the theoretical throughput of 3.94GB/s to one of the M.2 NVMe SSDs, let alone that required for overlapped operations to two M.2 NVMe SSDs. Bottom line, enabling RAID is a complete waste of time. It is NOT going to improve your throughput.


          Sorry, but this is reality...


          • 2. Re: RAID 0 Performance Settings for RST

            Thanks for the technical explanation.  That definitely helps me understand the limitations.


            My primary reason for the RAID 0 array was having a single 1TB volume to run Windows... I hate having two drives... Any performance improvement was a secondary benefit.


            Most concerning is why the disk tool is hanging my system... not so much the reported values.


            If I can expect to have system stability issues with this kind of setup, I'll convert back to two individual volumes in a heartbeat. 



            • 3. Re: RAID 0 Performance Settings for RST

              I MAY have found the stability issue... I was trying to tweak the RST settings and had changed the value shown below to DISABLED. Once I switched it back to ENABLED, my system seems 100% stable. But, as mentioned previously, if I can believe the Crystal Disk Mark scores, my system is SLOWER in a RAID 0 setup than as individual drives.  I'm leaning towards just going back to an individual setup.


              [Attached screenshot: 2018-02-19 16_37_11-Intel® Rapid Storage Technology.png]

              • 4. Re: RAID 0 Performance Settings for RST

                That's my recommendation.


                • 5. Re: RAID 0 Performance Settings for RST

                  Took your advice and rebuilt the system.  Thanks for all the feedback.