Hey, I started this in a discussion forum and thought better of it, so it's a blog post now. Please comment!
Original Post: SSD: Throw out your hard disks!
Wait, not all of them just yet! I'm officially jumping on the hype bandwagon: I've been exercising and testing Intel SSDs (read: long hours) for the last three months. The comments from several online hardware reviewers are flattering, but they don't tell the whole story, as they focus on single-disk client machines. (BTW, my Vista Ultimate Intel quad-core takes 20 seconds from POST complete to login with an Intel SSD.) But I'm a hardware guy at heart, and I need to know what implications these devices have for the datacenter and my apps. So we took the SSDs, dropped them into a number of different servers and controllers to really work them, and found out what breaks and where.

The results were fantastic: RAIDed SSDs beat out their 15k-RPM spinning cousins in sequential reads and almost best them in sequential writes. Ho hum, you say? Here's where the paradigm shifts: the more interesting story is random I/O. Somewhere between 6x and 12x the performance of traditional disk is where these cool operators land, depending on block size and queue depth, of course. And that's with massive throughput gains and latency reductions from a SATA device attached to a SAS controller, which downgrades the SATA link speed to 1.5 Gbps and imposes the overhead of the SATA Tunneling Protocol. Not bad for a hamstrung Olympian.
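If you want to get a feel for the sequential-vs-random difference yourself, here's a minimal sketch of the kind of microbenchmark behind numbers like these. It's a toy, not our test harness: it uses a small scratch file (a real run would use a raw device or a file far larger than RAM, opened with O_DIRECT, at various block sizes and queue depths), so on a warm page cache it mostly measures memory, not the disk.

```python
import os
import random
import tempfile
import time

BLOCK = 4096    # 4 KiB blocks, a common random-I/O test size
BLOCKS = 2048   # 8 MiB scratch file; real tests use many GiB

# Create a scratch file to read from. A serious benchmark would bypass
# the page cache (O_DIRECT) so it measures the device, not RAM.
fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(BLOCK * BLOCKS))
os.fsync(fd)

def read_pattern(offsets):
    """Time reading one BLOCK at each offset, in the order given."""
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(BLOCKS)]
shuffled = sequential[:]
random.shuffle(shuffled)   # same blocks, random order

t_seq = read_pattern(sequential)
t_rand = read_pattern(shuffled)
print(f"sequential: {t_seq:.4f}s  random: {t_rand:.4f}s")

os.close(fd)
os.unlink(path)
```

On a spinning disk with the cache out of the picture, the random pass falls off a cliff; on an SSD the two numbers stay in the same neighborhood, which is the whole story above in two lines of output.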
Backing away from the face-melting speed of SSD random I/O, what does this mean for computing? Yes fast, yes low latency, yes... We've been designing applications, firmware, and everything we know about I/O to get around the random stuff, ever since your first ST-506. We cache it, lay it down in stripes, defrag it religiously, anything to make it more sequential than random. What if that doesn't matter anymore? What if we don't have to engineer and program around the I/O bottleneck? Time to pull out the random I/O paradigm and watch it crumble. The great part about this shift is that it starts today and only gets better, first with SATA devices and then native SAS devices at 3 Gbps. Today NAND is limited by write cycles, needs time to charge cells (more time in a multi-level cell (MLC) device), and has block erase cycles and write amplification to contend with. 3-5 years from now... BOOM: phase-change memory replaces traditional NAND with more endurance (+100x), faster reads and writes (nanoseconds vs. microseconds), and single-bit alterability. Passengers, please sit back and place an extra fan on your Northbridge or array controller; we are entering a time when your I/O doesn't lag behind your CPU by a factor of 100x. What can this change, what can it improve, and what will this do for your business?
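Since write amplification came up: here's a back-of-the-envelope illustration of why it hurts. The page and block sizes below are typical NAND geometry I'm assuming for the example, not specs for any particular drive, and the scenario is the worst case where a tiny host write forces the FTL to rewrite an entire erase block.

```python
# Hedged sketch: worst-case write amplification when updating one page
# forces a full read-modify-erase-rewrite of its erase block.
PAGE = 4096              # bytes per NAND page (typical, assumed)
PAGES_PER_BLOCK = 128    # pages per erase block (typical, assumed)

host_write = PAGE        # the host updates a single 4 KiB page...
nand_write = PAGES_PER_BLOCK * PAGE   # ...but the FTL rewrites the block

wa = nand_write / host_write
print(f"write amplification: {wa:.0f}x")  # prints "write amplification: 128x"
```

Real controllers use spare area, garbage collection, and wear leveling to keep the factor far below that ceiling, but every x of amplification burns through those limited write cycles that much faster, which is exactly why the endurance bump from phase-change memory matters.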