The answer all along has been very simple; however, Intel was unclear from the start.
Currently, VROC can ONLY be used with Intel drives.
I am in the process of a build with 4 900P Optane U.2-to-M.2 drives in RAID0.
Yes, the drives are expensive;
however, when finished, mine will CONSISTENTLY benchmark 10+ GB/s read / 8+ GB/s write.
That is something the "other drives"
(from the company responsible for current memory pricing, btw)
are incapable of accomplishing.
In addition, the Optane drives do not slow down when doing more than one thing at a time.
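Those throughput claims line up with simple RAID0 arithmetic, assuming the 900P's commonly cited ratings of roughly 2,500 MB/s sequential read and 2,000 MB/s sequential write per drive (those per-drive figures are assumptions from public spec sheets, not numbers from this thread; check your own drive's datasheet):

```python
# Rough RAID0 sequential-throughput ceiling for 4x Optane 900P.
# Per-drive ratings are assumptions based on public spec sheets.
DRIVES = 4
SEQ_READ_MBPS = 2500   # assumed per-drive sequential read
SEQ_WRITE_MBPS = 2000  # assumed per-drive sequential write

# RAID0 stripes data across all members, so the ideal ceiling scales linearly.
ideal_read = DRIVES * SEQ_READ_MBPS    # 10000 MB/s, i.e. about 10 GB/s
ideal_write = DRIVES * SEQ_WRITE_MBPS  # 8000 MB/s, i.e. about 8 GB/s
print(f"ideal read:  {ideal_read} MB/s")
print(f"ideal write: {ideal_write} MB/s")
```

Real benchmarks will land somewhat below that ideal ceiling, but it explains where the "10+/8+" figures come from.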
The cost for the key?
Take a deep breath and sit down gently, ok?
It's about 8 bucks.
Get a key and some Optane, if you want fast performance.
It's Intel's technology to sell, share or give away.
A whole 8 bucks, though...
Yes, I looked at the 900P and its performance is super impressive, but to get the size I want you need more than one and then VROC them, as you are planning to do; so I assume you know that works - does it?
The issue is not the key or the cost of the keys (any of the 3 keys). Also, I went ahead and bought the VROCISSDMOD for $8.96 at Mouser Electronics a while back when Intel directed me to them, so I already have one.
The issue is that each 480GB 900P takes up 1 PCIe slot in the machine, limiting what else I can do (GPUs in SLI, etc.).
There is a U.2 900P drive, but it has only 280GB, and the motherboard (Asus R6E) only has 1 U.2 port. I guess I could get some U.2 to M.2 adapters and convert the M.2 slots to U.2 that way, but it looks to be a mess (too close to the CPU)... The Hyper RAID card from ASUS would also be an option, and I could put the U.2 to M.2 adapters there. Would that work? It should, I think, but it also looks messy.
Now, if Intel released a larger size 900P, say 1TB, things would look better; I would immediately get 2 of them and gladly use 2 PCIe slots for them.
I am using an Asus Hyper M.2 x 16 AIC,
and 4 Optane U.2 to M.2 kits.
One slot, 16 lanes.
Will need to remove the fan from the Asus AIC, do a slight Dremel job,
and route the wires out the back.
Yes, it's expensive, but only 1 slot is used for just over 1 TB.
Or, you can use your onboard M.2 slots with that kit.
Newegg has them, same price as without U.2 to M.2.
It actually looks nice; the wires are braided, with a very professional look to them.
Really a nice kit, and the box is very tight, hard to open.
If you RAID them, that's 4 Spaceships!
Using the Hyper card, it leaves me with 1 slot open,
with lanes remaining. (Also have Asrock Thunderbolt 3 aic)
I was originally going to use 4 32GB Optane drives (1 slot, 8 lanes, as they are PCIe 3.0 x2),
but then the 900P was released.
I have seen a few reviews of this setup, and the speeds are truly ludicrous.
After seeing the PCper review on YouTube, I was convinced.
Still in the build process, but it is going to be incredible.
Also, you will need a fan to cool the U.2 drives.
This combination of VROC and Optane is a world changer.
I just wish Intel had been more clear from the beginning.
Cheers, and GO FAST!
This message was posted on behalf of Intel Corporation
We appreciate the feedback regarding the product, and you can be sure that I will forward your comments for future product improvement.
Please don't hesitate to contact us again if you require further assistance.
I can't say that my experience with VROC has been good. Actually, it was a total waste of money in my case, but at least I got 4x 16GB Optane drives for tests really cheap on sale.
I'm using an ASRock X299 motherboard, and it looks like performance with VROC enabled is worse than performance with VROC disabled and a striped volume set in Windows Disk Management.
Here you can see my results and settings using the Premium VROC key:
VROC / RAID0 on 4x Optane 16GB / default settings - really low 4KiB Q1T1 performance, which is pretty much the most important for daily work on a typical home/office PC
VROC / striped volume in Windows Disk Management on 4x Optane 16GB - really low sequential Q32T1 performance; VROC enabled but RAID disabled, and the random 4K results go up ...
VROC disabled / striped volume in Windows Disk Management on 4x Optane 16GB - the best results in everything once VROC mode is disabled
Simply put, once I enable VROC (no matter what PCIe settings, which slot, 2-4 drives, etc.), performance is worse than without VROC.
Not to mention that I have 2x Patriot Hellfire M.2 SSDs, and both run in RAID0/VROC mode without a VROC key (there is a 90-day limit in the RSTe software, but I don't know if that means the RAID stops working after that, or what?). Once I use the key, the drives are suddenly unsupported. I don't know if I even have to comment on that. Anyway, I can confirm that to use VROC with the hardware key, Intel SSDs are required.
Are you using the Asus Hyper M.2 x 16 card?
I am also going to use an Asrock board,
and Asus is trying to tell me "Asus boards only" for the Hyper card.
I am getting pretty close here, and don't know if I can handle any more setbacks.
Optane 16 and 32 GB modules use only 2 lanes each.
Your results are using a total of 8 lanes, not 16.
In theory, doubling the lane bandwidth may double your score.
It's not an issue of how many PCIe lanes are in use, as the drives in every case (all 3 screenshots) are running at PCIe x2. Every drive is connected to a separate PCIe 3.0 x8/x16 slot (32Gbps M.2 in the case of one drive). In use are the slots marked 2, 3, 5, and M.2_1, as this motherboard has one M.2 socket connected to a VMD controller too. 3 of the drives are connected via Alphacool MX2 PCIe adapters, which can use the full PCIe 3.0 x4 bandwidth, as tested with other drives.
A single 16GB drive has a maximum bandwidth of about 900MB/s, and 4 of them show 3700MB/s, so it's scaling well in sequential read; based on that, I can tell there is no issue with maximum bandwidth.
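A quick sanity check on those numbers supports this: the measured 4-drive result is essentially linear scaling, and the drives themselves, not the x2 links, are the bottleneck (the per-lane figure below is the standard PCIe 3.0 usable rate, not a number from this thread):

```python
# Sanity-check the sequential scaling using the numbers from this thread.
PER_DRIVE_MBPS = 900    # measured single-drive sequential read
FOUR_DRIVE_MBPS = 3700  # measured 4-drive striped sequential read

scaling = FOUR_DRIVE_MBPS / PER_DRIVE_MBPS  # roughly 4.1x across 4 drives
efficiency = scaling / 4                    # roughly 103% of linear

# PCIe 3.0 carries about 985 MB/s usable per lane (8 GT/s, 128b/130b
# encoding), so an x2 link tops out near 1970 MB/s: the ~900 MB/s drives
# are the bottleneck here, not the link width.
LANE_MBPS = 985
x2_ceiling = 2 * LANE_MBPS
print(f"scaling: {scaling:.2f}x, efficiency: {efficiency:.0%}")
print(f"PCIe 3.0 x2 ceiling per drive: {x2_ceiling} MB/s")
```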
The only difference I see between this setup and the ASUS card is that the ASUS card uses one PCIe slot and works on one VMD (controller), while most PCIe slots work as different controllers or share one. In theory, the only difference is that a RAID across different VMDs cannot be bootable. In my case, 2 slots share the same VMD (you can see that on the screenshot too).
That doesn't change the fact that in every screenshot the drive configuration is exactly the same and the drives are connected to exactly the same PCIe/VMD. As you can see, random 4K performance with VROC enabled is pathetic. I also have an ASUS X299 TUF Mark 2 motherboard; I checked the same thing there, on 2 drives, and the results were also worse once I enabled VROC.
It's hard not to notice the difference between ~35MB/s and ~220MB/s; 35MB/s is the performance of a typical SATA SSD.
Another view of this issue: when VROC is enabled, every additional drive causes random performance and deeper-queue results to drop; that is exactly where VROC is supposed to perform great, and what it was designed for as server/workstation storage. The results are close to: 2 drives in RAID0 = ~140MB/s random 4K, 3 drives = ~70MB/s, 4 drives = ~35MB/s. When I disable VROC and set a striped volume, random 4K performance on 2, 3, and 4 drives is about the same ... and it shouldn't be any different, since in RAID random bandwidth does not go up, so this is correct behaviour.
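The pattern in those numbers is striking: the random 4K result halves with every added drive, which is even worse than 1/n scaling. A tiny sketch makes the pattern explicit (the measured values come from this post; the halving model is just a descriptive fit, not an explanation of the cause):

```python
# Random-4K results reported with VROC enabled (MB/s, from the post above).
measured = {2: 140, 3: 70, 4: 35}

# The values halve with every added drive, i.e. worse than even 1/n scaling.
for n in (3, 4):
    assert measured[n] == measured[n - 1] / 2

# Descriptive fit: throughput is roughly 280 / 2**(n-1) MB/s for n drives.
for n, mbps in measured.items():
    predicted = 280 / 2 ** (n - 1)
    print(f"{n} drives: measured {mbps} MB/s, model {predicted:.0f} MB/s")
```

If this were ordinary RAID0 behaviour, Q1T1 random 4K should stay roughly flat as drives are added, which is exactly what the poster sees with VROC disabled.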
Results on 2 drives with VROC enabled are the same whether the drives use the same or different VMDs, so it's not an issue of different VMDs and some communication between them.
I'm using Win10 Pro x64 with the latest updates. The OS was reinstalled. The tests were made on new drives. I checked the drives in every possible PCIe configuration. I don't know if there is any new VROC key firmware or any new drivers for VROC; the Intel support page is empty, and the specification for VROC changes every couple of weeks. ASRock had to change their user manual because of that (I'm also in contact with motherboard vendors).
I am wondering if this is a limitation of the Optane modules.
Originally I was going to setup my build as you have,
along the lines of the tweaktown.com article. (World's fastest system drive)
There is something off there, maybe a setting somewhere.
Once the 900P drives were released, I decided to go that route.
With the cost increase, the build is a bit delayed, shall we say.
I found another build using the 900P drives (below), with screen shots.
Optane d0t weebly d0t c0m
The numbers on that one look pretty good with 3 900P drives.
One thing though, he is using a "Premium key". (WTH?)
At least the Hyper card does work, much to my relief,
as I have been reading lately it is for "Asus boards only",
which made no sense to me, and still doesn't.
Think of yourself as an innovator!
When Thomas Edison had 100 non-functional light bulb experiments,
he said he hadn't failed.
He had just found 100 ways not to build a light bulb.
Keep pushing, you'll get it.
I'm right behind you gathering resources.
Edit: Have you tried with the VROCISSDMOD key?
If you need one, I have a spare.
I have only Premium key.
I don't think it's an issue with the Optane drives or the motherboard or anything like that. As I said: exactly the same drives, in the same PCIe slots, at the same PCIe speed, with the same OS/drivers, and the VROC key installed all the time; the only difference is that in the first test VROC is enabled and the PCIe slots are configured to support VMD, while in the last test VROC is disabled and the array runs as a striped volume set in Windows Disk Management. So the issue is in the way VROC works, not in the drives or motherboard limits.
I'm using it mainly for tests/reviews, and I wouldn't care about VROC if not for pure curiosity. This is also why I purchased the 4x 16GB drives in a Black Friday sale. The Premium key cost me as much as the Standard one (long story), so I can live with that, as long as I can make everything work at least as fast as without the key. Right now the results without the VROC key are much better, and I can set a striped volume (so RAID0) without VROC using any drives. I tested it with the Patriot Hellfire drives and could pass 5GB/s sequential read on 2x 240GB drives (VROC disabled, or enabled without a key).
The only issue is random bandwidth, and I have no idea why performance gets worse with each additional drive; it's clearly scaling down.
On 3 drives I was getting this error (below) in the first tests after creating a new array, but it would disappear and never come back after I clicked "mark as normal". Also, no issues at all once I added a 4th drive. So on 3 drives, 1 or 2 randomly picked drives always had an error. The drives passed diagnostic tests, and for the last 3 days I have been running some programs 24/7 without any issues, so again it's not a drive issue but VROC itself.
If I'm right, then the ASUS Hyper x16 card is the only one that lets you set up a RAID on a single VMD/single card. I haven't seen any other adapter like it, and even the ASUS one is not available in the EU (I live in Poland). ASUS was supposed to release these cards over half a year ago.
BH Photo has the Asus cards in stock as of last night.
I am unsure if they ship overseas, or if there are duties involved.
B&H Photo Video website.
Get rid of spaces between name and add d0t c0m
I am a month or so away from getting to try all of this.
Already have a couple aspirin bottles ready to go.
Seriously, acquire the Intel basic key.
According to Intel, the only key that works "properly" on X299 with Intel drives,
is the one mentioned above.
The premium key might work, but I think that might be what is holding you back.
mouser d0t c0m has them here in the US; overseas, I don't know.
We have gone 12 rounds on that subject here.
AFAIK Intel said nothing about the other keys. One rep said only something about compatibility, namely that the basic key (VROCISSDMOD) is for Intel SSDs only. The point is that the official data says almost nothing about compatibility with anything, and even motherboard vendors can't get a clear answer from Intel about it. Consider that they had to change the user manuals for their motherboards because Intel kept changing the VROC specification during the year. I'm in contact with ASRock support, which is testing VROC on the available drives, so I guess they will say more soon.
If I'm right, the only difference between the keys is their firmware, which allows either Intel-only or Intel + 3rd-party drives. I don't think that's the reason why random performance, specifically, goes down as more drives are in use.
I don't want to invest more in this platform until I get some answers. The results without VROC are good enough for everything I do; the key just seems useless now. I can wait for new firmware/BIOS, and I have some other things to test anyway. I guess I will be back here when I find out more.
I just received a reply from ASRock support. Their engineers performed tests on 4x 32GB Optane SSDs, and their results are about the same as mine: when VROC is enabled, performance is much worse than when the array is configured via Windows Disk Management (for which you don't even need a RAID controller, not to mention VROC).
Looks like there is no point in setting up a VROC RAID if you don't have something like the ASUS Hyper x16 card, and that is one more cost. Pretty disappointing, considering that Intel wants additional money to enable VROC, and drive support is limited to barely a couple of SSD models.
I too am pulling my hair out. I have an ASUS X299 board and an i9 10-core CPU. I have the ASUS Hyper M.2 x16 PCIe card, and I also have the premium VROC key. I cannot get VROC to work with three PM961 M.2 drives. It looks like Intel restricted the X299 chipset to work only with Intel SSDs. Very disappointing. Had I known this (the Intel and ASUS docs are not clear), I would have gone with AMD; their equivalent is not restricted.