26 Replies - Latest reply on Jan 7, 2011 2:55 AM by PeterUK

    An idiot's understanding of SSD limitations/trim/GC – gurus, please opine

    Snakeyeskm
      In the absence of TRIM/garbage collection, an SSD has the following problem (not present in spinners, which can simply overwrite old data in place):
      In a heavily used drive, the SSD has to find "invalid" pages (as marked by the OS) in which to write new information. Here we have two sets of problems.
      Firstly, the SSD cannot overwrite an "invalid" page; the page must be erased before it can be written again.
      Secondly, and more seriously, SSDs cannot erase just one or more pages but are limited to erasing an entire block (each block consisting of 128 pages of 4 KB each, i.e. 512 KB). As a result (barring TRIM/GC) the SSD has to read the entire block into its cache, merge the new data into the respective pages, and erase the block. It then has to write the entire "corrected" block back from on-board cache to the flash, even though it might only be changing one or two of the 128 pages within the block. This read-modify-erase-write cycle is what causes the delays in a heavily used, untrimmed SSD.
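      To make that concrete, here is a minimal sketch in Python (illustrative page/block sizes and a made-up function name only, not any real controller's firmware) of what a single-page update costs when only whole-block erases are possible:

        PAGE_SIZE = 4096             # bytes per page
        PAGES_PER_BLOCK = 128        # so one block = 512 KB
        BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK

        def rewrite_page_naive(block, page_index, new_data):
            """Update one 4 KB page when the hardware can only erase whole blocks."""
            assert len(block) == BLOCK_SIZE and len(new_data) == PAGE_SIZE
            cached = bytearray(block)                      # 1. read the whole 512 KB block into cache
            start = page_index * PAGE_SIZE
            cached[start:start + PAGE_SIZE] = new_data     # 2. merge the new page into the cached copy
            # 3. erase the whole block (slow), then...
            # 4. ...write all 128 pages back, even though only one page changed
            return bytes(cached)                           # 512 KB of flash writes for a 4 KB update

      So a single 4 KB host write can turn into a 512 KB flash write on a full, untrimmed drive.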
      TRIM, when executed correctly, promptly marks the aforementioned OS-identified invalid pages as stale inside the SSD. This allows the SSD's controller to carry out the time-consuming erase process described above before any new writes land on those pages (whether this happens immediately or during idle periods is open to question, but it is irrelevant as long as it happens reasonably quickly). Garbage collection is likewise designed to let the SSD controller perform a similar consolidate-and-erase function on its own, depending on the design of the controller.
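      Continuing the sketch above (again just an illustrative model, not any vendor's actual algorithm), TRIM hands the controller a list of stale pages, and an idle-time collector can then erase reclaimable blocks ahead of time so later host writes land on pre-erased pages:

        stale_pages = {}       # block_id -> set of page indexes the OS has marked invalid
        erased_blocks = set()  # blocks already erased and ready to accept new writes

        def on_trim(block_id, page_index):
            # TRIM: the OS tells the drive this page's contents are no longer needed.
            stale_pages.setdefault(block_id, set()).add(page_index)

        def idle_garbage_collect():
            # During idle time, erase any block whose pages are all stale.
            # (A real collector would also consolidate partially stale blocks
            # by copying the surviving pages elsewhere first.)
            for block_id, stale in list(stale_pages.items()):
                if len(stale) == PAGES_PER_BLOCK:
                    erased_blocks.add(block_id)
                    del stale_pages[block_id]

      With pre-erased blocks available, the write path no longer has to go through the read-modify-erase-write dance for every small update.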
      Obviously, with very heavily used SSDs, inefficient controllers, and/or an improper OS set-up, SSDs will lose performance and often exhibit stuttering. In such situations a secure erase followed by an image restore might be the only solution.
      Wear leveling does not directly affect these processes unless TRIM/GC cannot keep up with very heavy usage and the drive is saturated.
      Gurus, please opine, but be gentle. I am trying my best to understand these processes.
        • 1. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
          redux

          I'm not a guru, but I would not describe that as an idiot's understanding.

          • 3. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
            parsec

            Yes, quite accurate, hardly an "idiot's understanding" of SSDs. One statement did catch my eye, that being:

             

            "As a result (barring trim/GC) the SSD has to read the entire block into  its cache/flash and perform the delete/write process on the respective  pages."

             

            My interpretation of this may not be what you meant, and forgive my nit-picking over details, but TRIM and GC have nothing to do with the need to read a block into cache for an erase operation; that is simply innate to flash memory, its layout, or the controllers. I imagine you had something else in mind.

             

            This seeming requirement to work only with whole blocks during erase operations is a huge stumbling block for SSDs. Remove that constraint, and the NAND flood gates will open much wider. Even page-level erases would make a big difference. Wouldn't it be great if the new G3s had that as a feature! NOT trying to start a rumor here, but OMGosh, I'm sure you know what I mean!

             

            We certainly need more idiots like you, Snakeyeskm!

            • 4. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
              Snakeyeskm

              Appreciate the feedback, guys.

              parsec, I appreciate your point. What I meant was that it is precisely because of that cumbersome read/erase/write process that TRIM/GC are needed: they trigger the preemptive read/erase part of the process so that the block is ready for just the write. I hope that is consistent with your understanding.

               

              I hope you are right about the G3s. The whole game now is Write Amplification in SSDs, an area in which Intel has always had the lead. But that's a whole other topic. Thanks for the feedback and your many helpful posts scattered through this forum. I troll and I learn.

              • 5. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
                parsec

                               "I troll and I learn."

                 

                Don't we all.

                 

                I see what you are saying; yes, that is correct.

                 

                I was reading about Write Amplification (WA) and was under the impression, possibly false, that a WA factor of 1 was either the minimum or the optimal value, at least in theory. Of course, I don't have the WA equation handy at the moment.
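                (For reference, the usual definition is just the ratio of data physically written to the NAND to data the host asked to write. A back-of-the-envelope sketch with made-up numbers, not measured figures:)

                  def write_amplification(nand_bytes_written, host_bytes_written):
                      # WA = data actually written to the flash / data the host wrote
                      return nand_bytes_written / host_bytes_written

                  # Made-up example: the host writes 100 GB, but garbage collection
                  # overhead means 110 GB actually gets written to the NAND.
                  print(write_amplification(110, 100))   # 1.1

                (Without compression or deduplication the drive must write at least as much as the host sends, so WA cannot drop below 1; compression is what makes sub-1 figures possible.)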

                 

                Intel has claimed a WA factor of 1.1, which is excellent. Recently, I read that OCZ (I think it was) claimed to have achieved a WA factor of 0.5. That seemed odd to me, given what I wrote above. I highly doubt it is a false claim; it may have been achieved by a different philosophy in their GC algorithm.

                 

                I am wondering what others think about this?

                • 6. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
                  Snakeyeskm

                  I think SandForce makes that claim because they use a compression algorithm as part of their process. This allows them to reduce the amount of rewrites and potentially increases the life of the drives. Unfortunately, this also seems to create the Achilles' heel of the Vertex 2 SandForce drives when handling incompressible data. This is an area where the C300 has a clear advantage over the SandForce OCZ drives. I would welcome any further thoughts.
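                  A quick way to see why the type of data matters so much (a throwaway illustration using a generic compressor, not SandForce's actual, proprietary algorithm):

                    import os
                    import zlib

                    size = 1024 * 1024                       # 1 MB sample of each kind of data
                    zeros = bytes(size)                      # highly compressible data
                    random_data = os.urandom(size)           # incompressible data (like video, zip, rar)

                    print(len(zlib.compress(zeros)) / size)        # ~0.001: almost nothing reaches the NAND
                    print(len(zlib.compress(random_data)) / size)  # ~1.0: every byte still has to be written

                  With incompressible data the controller gets no reduction at all, so both the performance benefit and the write-amplification benefit evaporate.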

                  • 7. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
                    parsec

                    Yes, I am aware of the data-compression thing with the SandForce controllers/firmware. I wonder about that, and while I don't understand it, there are ramifications that could cause issues in other areas.

                     

                    For instance, that controller obviously must decompress all data when it is read for use by the OS, the user, etc. The compress/decompress process takes time but apparently does not hurt performance too much. I thought that doing backups of those drives, disk images, etc., would be odd if the data were sent from the SSD in the compressed state, but then they simply could not allow that to happen, since the data would be useless on any other drive. Storing data in a proprietary format like that is somewhat scary, but I suppose no more so than data stored in some RAID arrays.

                     

                    Regardless, it seems that the SandForce SSDs are not quite on the same playing field as other SSDs, due to the compression of data. I wonder if there are any issues caused by the data compression that I am not seeing.

                    • 8. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
                      PeterUK

                      The problem, I would think, with SSDs that do compression but don't handle incompressible data well is simply how fast the controller can do the compression, rather like a slow CPU?

                       

                      I think built-in compression on an SSD is not needed... and if anyone wanted compression on an SSD to improve Write Amplification or endurance, NTFS has a compression option, and with CPUs being as fast as they are it shouldn't affect performance.

                       

                      Of course, you don't want to use NTFS compression with an SSD that has built-in compression, nor for programs like Microsoft MSMQ that modify data through mapped sections in a compressed file, which can produce "dirty" pages faster than the mapped-page writer can write them. The other thing is that with NTFS compression you can select what gets compressed, but an SSD with built-in compression compresses everything.

                       

                      Could this be why Intel SSDs don't do compression?

                      • 9. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
                        DuckieHo

                        PeterUK wrote:

                         

                        The problem, I would think, with SSDs that do compression but don't handle incompressible data well is simply how fast the controller can do the compression, rather like a slow CPU?

                         

                        I think built-in compression on an SSD is not needed... and if anyone wanted compression on an SSD to improve Write Amplification or endurance, NTFS has a compression option, and with CPUs being as fast as they are it shouldn't affect performance.

                         

                        Of course, you don't want to use NTFS compression with an SSD that has built-in compression, nor for programs like Microsoft MSMQ that modify data through mapped sections in a compressed file, which can produce "dirty" pages faster than the mapped-page writer can write them. The other thing is that with NTFS compression you can select what gets compressed, but an SSD with built-in compression compresses everything.

                         

                        Could this be why Intel SSDs don't do compression?

                        The compression processing is done by the SandForce controller.  It does not rely on the host system CPU at all.

                         

                        The compression is not just for improving write amplification. It helps with performance, as there is less data to read/write. You can see the difference in compressible vs. incompressible benchmarks. I won't comment on how the compression algorithm actually works, as I doubt anyone outside of SandForce knows, and it is hard to run tests.

                         

                        I believe the concept of on-the-fly compression in the drive is a relatively novel idea.

                        • 10. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
                          PeterUK

                          The compression processing is done by the SandForce controller.  It does not rely on the host system CPU at all.

                          Yes, I know that. I meant a CPU-like limit for the SandForce controller itself... which is why I said “like a slow CPU”.

                          • 11. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
                            Snakeyeskm

                            If I understand it right, TRIM on the SF drives occurs in a very controlled manner and is designed to minimize write amplification. Unlike the more traditional (Indilinx) SSDs, where TRIM/garbage collection can be implemented aggressively, giving a rapid recovery of speed at the cost of increased write amplification, SandForce drives will sacrifice "as new" speeds in favor of a more measured and selective implementation of TRIM, with selected blocks being pushed through for erase and rewrite. There is also reasonable speculation that SF drives monitor the amount of writes during a given period and use that data to throttle write speed, to secure NAND longevity consistent with warranty periods. This combination of maximizing NAND longevity and minimizing write amplification does result in an overall decline in performance to a "settled state" that can be 15 to 20% below "as new". Furthermore, extensive use of incompressible data (video, music, zip and rar files, etc.) will slow these drives in both performance and their ability to minimize write amplification.

                             

                            Curiouser and curiouser.

                            • 12. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
                              DuckieHo

                              PeterUK wrote:

                               

                              The compression processing is done by the SandForce controller.  It does not rely on the host system CPU at all.

                              Yes, I know that. I meant a CPU-like limit for the SandForce controller itself... which is why I said “like a slow CPU”.

                              Even if there were a processor bottleneck, the gains are still noticeable. BTW, the SF-2xxx will be released next year and is supposed to hit 500 MB/s sequential...

                              • 13. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
                                PeterUK

                                Even if there were a processor bottleneck, the gains are still noticeable. BTW, the SF-2xxx will be released next year and is supposed to hit 500 MB/s sequential...

                                Very noticeable if you like writing ones and zeros.

                                ^ and by that I mean a file that, when written, consists of just ones or zeros, so it compresses very well.

                                 

                                Of course, there is a disadvantage (there are advantages too, I'm just posting the disadvantage) to doing compression on an SSD, which is that you are limited to the port speed: the data is decompressed at the SSD, so it can't be pushed out faster than the port. With NTFS compression, by contrast, you pull the data off the SSD still compressed and decompress it on the host afterwards.
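                                A rough back-of-the-envelope of that point (illustrative numbers only; real-world SATA overhead and compression ratios vary):

                                  PORT_SPEED = 300.0        # MB/s, rough usable rate of a 3 Gb/s SATA port
                                  COMPRESSION_RATIO = 0.5   # suppose the data compresses to half its size

                                  # Compression inside the SSD: the drive decompresses before the data
                                  # crosses the port, so reads can never exceed the port speed.
                                  drive_side_read = PORT_SPEED                      # ~300 MB/s ceiling

                                  # Compression on the host (NTFS style): compressed bytes cross the port
                                  # and the CPU decompresses them, so the effective uncompressed rate can
                                  # exceed the port speed.
                                  host_side_read = PORT_SPEED / COMPRESSION_RATIO   # ~600 MB/s effective

                                  print(drive_side_read, host_side_read)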

                                 


                                • 14. Re: An idiots understanding of SSD limitations/trim/GC – guru's please opine
                                  DuckieHo

                                  PeterUK wrote:

                                   

                                  Even if there were a processor bottleneck, the gains are still noticeable. BTW, the SF-2xxx will be released next year and is supposed to hit 500 MB/s sequential...

                                  Very noticeable if you like writing ones and zeros.

                                  ^ and by that I mean a file that, when written, consists of just ones or zeros, so it compresses very well.

                                   

                                  Of course, there is a disadvantage (there are advantages too, I'm just posting the disadvantage) to doing compression on an SSD, which is that you are limited to the port speed: the data is decompressed at the SSD, so it can't be pushed out faster than the port. With NTFS compression, by contrast, you pull the data off the SSD still compressed and decompress it on the host afterwards.

                                   


                                  0.44x write amplification with a Vista + Office 2007 install... that means the compression is probably in the 60-70% range: http://images.anandtech.com/reviews/storage/SandForce/SF-2000/durawrite.jpg
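                                  As a rough sanity check on that estimate (purely illustrative arithmetic; the 1.2x baseline below is an assumption, not a measured figure):

                                    measured_wa = 0.44   # SandForce's quoted figure for the Vista + Office install
                                    baseline_wa = 1.2    # assumed WA the same workload might incur WITHOUT compression

                                    # If GC overhead alone would have given ~1.2x amplification, compression
                                    # must be shrinking the written data to roughly this fraction:
                                    data_kept = measured_wa / baseline_wa   # ~0.37
                                    print(1 - data_kept)                    # ~0.63, i.e. compression in the 60-70% range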

                                   

                                  Don't forget that port bandwidth is really only an issue on sequential read/writes with relatively deep queue depths.  The SF-2xxx is going to be SATA 6Gb/s so port bandwidth won't be a problem.
