
    Intel DC P3700 400GB SSD: poor performance on a Linux system

    Murali_Mohan

      I am using an Intel DC P3700 400GB SSD. As per the Intel fact sheet, I tested the SSD on Windows using Iometer with a 64K request block size and an IO queue depth of 128, and measured close to 2800 MB/s sequential read performance.

      When I use the same SSD on a Linux system and benchmark it with the fio tool, I get a sequential read performance of only 812 MB/s. The parameters used for the fio benchmark job were direct=1, invalidate=1, ioengine=libaio, iodepth=128, rw=read, blocksize=64k.

      The Linux drivers are the mainline NVMe drivers (drivers/block/nvme-core.c and drivers/block/nvme-scsi.c, version 0.8). The queue depth is fixed at 128, and I am not able to change it using the nr_requests sysfs file for the SSD's block device. The driver decides q_depth in the nvme_dev_map function in nvme-core.c, at the following line of code:

            dev->q_depth = min_t(int, NVME_CAP_MQES(cap) + 1, NVME_Q_DEPTH);

       

      Increasing NVME_Q_DEPTH does not help in this case; the queue depth remains at 128.
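
      My reading of that min_t() is that the driver takes the smaller of the device-reported limit (NVME_CAP_MQES(cap) + 1, where the MQES field is zero-based) and the compile-time constant NVME_Q_DEPTH, so if the drive itself advertises at most 128 entries per queue, raising NVME_Q_DEPTH changes nothing. A minimal standalone sketch of that clamp, using a made-up CAP value whose MQES field is 127:

            #include <stdio.h>
            #include <stdint.h>

            /* MQES lives in bits 15:0 of the NVMe CAP register and is
             * zero-based, so the usable queue depth is MQES + 1. */
            #define NVME_CAP_MQES(cap)  ((cap) & 0xffff)
            #define NVME_Q_DEPTH        1024    /* the driver's compile-time ceiling */

            static int min_int(int a, int b)
            {
                    return a < b ? a : b;
            }

            int main(void)
            {
                    uint64_t cap = 0x7f;    /* hypothetical CAP value: MQES = 127 */
                    int q_depth = min_int((int)NVME_CAP_MQES(cap) + 1, NVME_Q_DEPTH);

                    printf("q_depth = %d\n", q_depth);  /* prints 128: the device limit wins */
                    return 0;
            }

      If that reading is correct, the 128 limit is a device-reported capability rather than a driver tunable, which leads to my questions below.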

      How can I increase the NVMe IO queue depth for this driver?

      Is it feasible to change the queue depth capability of the device by changing the Intel SSD's properties using the NVMe user utilities?

      Or do I need to correct the benchmarking procedure I used to test this SSD?
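
      For completeness, one variation I plan to try on the benchmarking side is spreading the load across several submitting jobs, in case a single libaio submitter cannot keep the drive's queues full (numjobs=4 is an arbitrary choice on my part):

            ; same job as above, but with multiple submitters
            [seqread]
            filename=/dev/nvme0n1
            direct=1
            invalidate=1
            ioengine=libaio
            iodepth=128
            rw=read
            blocksize=64k
            numjobs=4
            group_reporting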