Let's just say the list of disappointed users is quite long. Intel has changed the specs a couple of times and, as I said, some vendors have changed the info in their motherboard manuals (in the online versions). ASRock, for one, has had to change it twice in recent months, and now the info says VROC supports only Intel SSDs. Intel's list of supported SSDs is really short, and on the Intel Optane series drives I was testing (as you can see in this thread), and on the drives ASRock tested, performance with VROC is worse than without it.
For me it's a really weird situation: VROC was introduced a couple of months ago, yet there are still barely any official documents about it, and there is still the same single product page, which is clearly confusing for many users.
Some of my contacts asked Intel about VROC but got barely any info, and if large companies can't get clear answers, what can end users do? I'm an Intel partner and I can't get any support, as the support department in my region hasn't existed since last month.
Would you be surprised to find that Intel's Facebook support is still supplying the Xeon FAQ to anyone who requests info on the VROC implementation on the HEDT X299 chipset?
I'm finding this entire fiasco rather pathetic from a world-leading tech giant. Is this how Intel panics and responds when its complacency and arrogance are shattered by a market contender?
I don't care that I may need a VROC key - IF it is reasonably priced; let's be real though - the $8 cost is not going to protect their server SKUs from ingress by the X299 SKUs.
Blocking non-Intel SSDs is just a total dick move - especially when their SSDs are not competitive with the performance of other consumer products; HEDT is for prosumers, and Intel has dropped the ball here massively.
It could have been handled better, for sure.
The way the motherboard manufacturers touted it from the X299 launch, it was like the second coming of Jesus.
However, it is new technology and it does "belong" to Intel.
Maybe there will be firmware updates at some point for the keys.
It has been frustrating for quite a while,
and I did get some incorrect Intel "support" at one point,
and I wound up wasting $100 on a key that likely will not work.
I AM, however, determined to make this work,
and with the recent release of the 900P drives, the performance should be stunning.
It's a pretty exciting time to be doing a HEDT build.
At least Intel's CPU sockets work, and the memory works,
and you can boot from VROC.
It is not a Prius trying to be a Shelby like some others.
Does anyone know specifically whether the onboard M.2 slots on X299 motherboards are wired so they can function as bootable CPU PCIe devices (in RAID with VROC or otherwise), or are they all run via the PCH and the DMI link?
I'd like to know if I could run a single or dual SSD that isn't going through the DMI link. The DIMM.2 cards that companies like ASUS offer are also a bit of a question mark for me.
How about the ones on the Z370 by comparison? (I figured with only 16 lanes they would be DMI/PCH linked.)
I am going through VROC headaches. Intel has not made this easy. The only way to make this work on X299 is to have a VROC key and Intel M.2 drives. I am using the ASUS Hyper M.2 X16, but only as a pass-through. I tried the Intel 600p drives without the VROC key - nightmare. I had to remove them, and now I have Samsung PM961 M.2 drives installed in the Hyper card. I will try again next week once I get the VROC key. I boot off a Crucial SSD and want the M.2 drives as a RAID volume through VROC; for now, I am using a Windows volume.
Yeah, the info I have out of Intel STILL conflicts with the fact that X299 will only accept Intel SSDs for VROC. It seems the info they are providing is only relevant to the Xeon platform.
Perhaps you can provide insight into the following, with question 4 being the most important:
1. Do the M.2 sockets on the motherboard allow direct CPU control of the SSD, or do they all go through the PCH lanes? I am thinking the answer is 'no' but wanted to confirm.
I see you are using the ASUS PCIe card, so I assume you have a BIOS that lets you bifurcate the PCIe slot into 4 groups of lanes? A mainboard that doesn't allow that would require a PCIe card with a PLX chip, like the new HighPoint SSD RAID card, from what I have read.
2. Does RAID 0 work (VROC enabled) with Intel drives? OK, from your testing it seems the answer is 'no' on X299 for it working well and being bootable...
3. Does a 'Standard' VROC key enable RAID 0 with VROC enabled, and bootable with Intel SSD?
4. Does a 'Standard' VROC key enable RAID 0 with VROC enabled, and bootable with Samsung SSD?
Intel isn't sure what its own system does, so I wouldn't be surprised if it does actually function once you get the VROC key.
5. Even with a BIOS-enabled array (VROC), is a driver still needed for installing Windows? I'm interested because I run scripted backups with programs like Acronis and WinPE recovery environments, so I have to be able to boot and 'see' my array from outside Windows.
From my own perspective, I am really just keen to get a BIOS-enabled RAID array rather than a software one, though with a decent CPU I don't see software RAID as a hassle if the array is direct to the CPU. Being bootable as an array may not be so important - I can use a small SSD just as an OS drive. I don't want my main array stuck behind the DMI bottleneck, though.
Looking at the Z370 and the Coffee Lake CPUs with only 16 CPU lanes, a GPU at x8 plus a 4-drive RAID array at x16 already overruns the budget - and that's before my 10GbE card, which needs x8 (none of the Z370 boards seem to have 10GbE onboard, though some of the HEDT boards do). I'm confirming whether my 10GbE card can run at wire speed at x4 if I'm only using a single port off it (I have an Intel X540-T2).
Answers from me:
1. Whether an M.2 socket is connected to VROC/CPU depends on the motherboard. For example, the ASRock X299 Taichi/XE has one M.2 socket connected to VROC/CPU (M2_1), while the X299 ITX has two. Another thing is that VMDs are either per PCIe slot or shared. You can't make a bootable RAID spanning more than one VMD.
2. VROC RAID works with Intel SSDs and is bootable as long as the RAID stays on one VMD.
3. The Standard key is the same as Premium but without the RAID 5 option, so the answer is yes. It's bootable as long as it uses one VMD.
4. I wasn't able to run VROC on any non-Intel SSD. Even on Intel SSDs, performance was lower than a striped volume made in Windows Disk Management (results in this thread).
5. I'm not sure about drivers for a bootable array; I haven't tested that.
Right now, if you want maximum performance, either use the ASUS Hyper M.2 X16 card in pass-through mode, or set up a striped volume in Windows Disk Management and use a small SSD as the bootable OS drive. In my tests VROC caused stability or performance issues; screenshots are in this thread.
I don't have the ASUS Hyper card so I'm not sure how it works, but other users say it runs in VROC pass-through mode. It should still use the VROC functionality in order to be connected to the CPU. If I'm right (correct me if I'm wrong), the ASUS card uses one VMD, since it sits in one PCIe slot. On ASRock motherboards you can set PCIe slots to VROC or AIC VROC mode (simply SSD or add-in-card mode), but it changed nothing for me; in both cases VROC worked the same way. I assume that for the ASUS Hyper card, AIC mode may change something.
The VROC key only locks/unlocks additional modes; it doesn't add anything else. If you don't have a key and use two Intel SSDs, you can still run RAID 0/1. What's more, it's possible to set up RAID 0/1 on non-Intel drives without a key, but then the array won't be bootable, and the Intel software says something about a 90-day free period. I have no idea what happens after those 90 days.
I have an ASUS Rampage VI Extreme, an ASUS Hyper M.2 X16, and 4x Intel 900P SSDs.
I can confirm woomack's findings: when VROC RAID is enabled, random performance is actually worse than a single drive. This has also been tested by Tom's Hardware and PC Perspective. When I use straight pass-through and then RAID the volume using Windows Disk Management, 4K performance goes back to the more normal 250 MB/s (at QD1) under CrystalDiskMark.
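As a side note on why a Windows striped volume can't help QD1 4K either: a 4 KiB request is smaller than a stripe, so at queue depth 1 it lands on exactly one member drive and there is nothing to parallelize. A minimal sketch (the 128 KiB stripe size and 4-drive count are assumptions for illustration, not VROC internals):

```python
# Sketch: which member drives a RAID 0 read must touch.
# Stripe size and drive count are illustrative assumptions.
STRIPE_SIZE = 128 * 1024   # 128 KiB, a common RAID 0 default
DRIVES = 4

def drives_touched(offset: int, length: int) -> set[int]:
    """Set of member drives serving a request of `length` bytes at byte `offset`."""
    first_stripe = offset // STRIPE_SIZE
    last_stripe = (offset + length - 1) // STRIPE_SIZE
    return {s % DRIVES for s in range(first_stripe, last_stripe + 1)}

# A single 4 KiB random read fits inside one stripe, so at QD1
# only one drive works at a time - no faster than a single SSD.
print(drives_touched(offset=1_000_000, length=4096))   # one drive
# A large sequential read spans many stripes and engages all drives,
# which is why sequential numbers scale while QD1 4K does not.
print(drives_touched(offset=0, length=1024 * 1024))    # all four drives
```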
A couple of things I haven't tested yet. First, actually plugging in a standard VROC key: it's on its way, so I'll find out in a few days, but given that woomack tested with his Premium key and got the same results, I doubt the key will make a difference. Second, whether it would be different under Linux. I suspect the Intel RSTe driver is to blame here; maybe their Linux driver will perform differently?
Any thoughts from Intel would be nice. It's such a shame that after buying all this gear, we can't get it to perform correctly.
Hi Birdman, I have almost the same setup as you, i.e. 4x 900P SSDs. But as woomack and I found out separately, VROC RAID 0 performance is subpar for random read/write: basically flat with, or even slower than, a single drive (35-40 MB/s under CrystalDiskMark).
May I ask if you have benched your setup in a 4-drive RAID scenario, presumably with the standard VROC key? How is your random read/write performance?
I have pretty much given up on VROC. I have an ASUS X299 board with their Hyper M.2 X16 card. When I tried Intel drives (600p) with the Intel-SSD VROC key, performance was about 30% less than using Samsung SM961 drives with the Hyper card in pass-through mode (no VROC key), configured as a regular striped volume in Windows. I may still try striping with Windows Storage Spaces, but in Windows 10 you have to use the Windows Server PowerShell commands to get it to stripe across drives. Very disappointing. I'm also having problems with PCIe lanes due to the way ASUS does their PCIe bus configuration, so I may be looking for another system board. This VROC is definitely not ready for prime time. Plus, the motherboard vendors have nothing in place to supply the correct VROC key, and their documentation does not explain which one you need.
May I ask, when you say 30% less performance using the 600p with VROC vs the Samsung drives, what performance metrics are you referring to? Did you bench your 600p RAID array for 4K random performance? What sort of levels were you hitting, if you did?
Appreciate your thoughts here, as I am determined to get to the bottom of this.
Sounds like the entire setup is rather half-arsed.
I can see the issue from the motherboard side, as they are required to accommodate all of the CPUs in the lineup... and that means different numbers of PCIe lanes. Unfortunately, companies like ASUS are so busy talking up and hyping their product that they bury any useful information about the actual plausible configurations.
The EVGA board and one of the Asrock ones looked pretty decent to me.
"pass-through" just means you are using CPU lanes and VROC is doing nothing? YOu have a windows based array?
In this situation, do you just inject an RST driver into the installation if required (e.g. in WinPE or the OS install) to have the array recognised? I've only done hardware RAID on NAS devices.
I think if I go this path it will be based on a single SSD (perhaps a small Optane) as an OS drive, and a Windows-based array off a Hyper M.2 or HighPoint card with 2-4 drives.
I need 4 lanes for a single 10GbE port on my X540-T2 (although Intel specifies 8 for both the 1- and 2-port cards), 8 for the GPU, and then 8-16 for the SSDs in RAID, possibly plus 4 for an OS SSD as well. That means 20-24 lanes, so the 6- or 8-core i9 is a fair option for me, whereas the Z370 with only 16 CPU lanes is not.
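That lane budget can be sanity-checked with quick arithmetic (the per-device lane counts below are the working assumptions from this post, not official requirements):

```python
# Lane-budget sketch using the counts discussed above; the per-device
# numbers are this post's working assumptions, not official requirements.
budget = {
    "10GbE NIC (single port, X540-T2)": 4,
    "GPU": 8,
    "NVMe RAID array (minimum)": 8,
    "OS SSD": 4,
}

total = sum(budget.values())
print(f"minimum CPU lanes needed: {total}")  # 24

# Coffee Lake/Z370 exposes only 16 CPU lanes, so the build doesn't fit;
# Skylake-X parts with 28 or 44 lanes do.
for lanes in (16, 28, 44):
    print(f"{lanes} CPU lanes: {'fits' if lanes >= total else 'does not fit'}")
```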
Zen+ is going to hit Q1 2018. Maybe that will be more sensible than x299.
Actually, it turns out the VROC key I received was the Premium one. I tried retesting with the 600p drives and it was totally unstable; I don't know why it worked the first time. I was able to confirm that the only "supported" VROC configuration with the X299 chipset is the Intel SSD key with Intel SSDs. I will have that key in the next couple of days. Just for kicks, I installed two Samsung 960 Pros in the motherboard M.2 slots and three SM961s in the ASUS Hyper card, created one big striped volume in Windows, and here are the stats from CrystalDiskMark. I will redo this with the Intel 600p drives in the next week or so. This whole VROC thing is not ready for prime time: very poorly documented and supported by Intel and the motherboard manufacturers (ASUS).
CrystalDiskMark 6.0.0 x64 (C) 2007-2017 hiyohiyo
Crystal Dew World : https://crystalmark.info/
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes
Sequential Read (Q= 32,T= 1) : 7348.278 MB/s
Sequential Write (Q= 32,T= 1) : 5992.115 MB/s
Random Read 4KiB (Q= 8,T= 8) : 1507.521 MB/s [ 368047.1 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 1359.872 MB/s [ 332000.0 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 399.803 MB/s [ 97608.2 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 358.517 MB/s [ 87528.6 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 38.173 MB/s [ 9319.6 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 153.987 MB/s [ 37594.5 IOPS]
Test : 1024 MiB [Z: 12.1% (578.0/4768.7 GiB)] (x5) [Interval=5 sec]
Date : 2017/12/20 22:57:16
OS : Windows 10 Professional [10.0 Build 16299] (x64)
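One way to sanity-check listings like this: the MB/s and IOPS columns are the same measurement in two units, since CrystalDiskMark counts MB as 1,000,000 bytes and these rows use 4 KiB (4096-byte) transfers. Checking the QD1 rows above:

```python
# Cross-check CrystalDiskMark's two columns: for 4 KiB transfers,
# IOPS = (MB/s * 1,000,000) / 4096, since CDM's MB = 1,000,000 bytes.
def iops_from_mbs(mb_per_s: float, block_bytes: int = 4096) -> float:
    return mb_per_s * 1_000_000 / block_bytes

# QD1 rows from the listing above:
print(round(iops_from_mbs(38.173), 1))   # 9319.6  -> matches Random Read 4KiB (Q=1,T=1)
print(round(iops_from_mbs(153.987), 1))  # 37594.5 -> matches Random Write 4KiB (Q=1,T=1)
```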