5 Replies Latest reply on Jul 22, 2015 10:59 AM by jonathan_intel

    Intel DC SSD P3600 runs on PCIe 1.0 x4 although plugged into a PCIe 3.0 x16 slot

    humorist

      I just got a workstation with an Asus X99-E WS motherboard and a 2.5-inch form factor Intel DC SSD P3600 1.2 TB. The SSD negotiates only a PCIe 1.0 x4 link and achieves a maximum read speed of around 730 to 800 MB/s, depending on the benchmark. The PCIe 1.0 x4 link was confirmed using both lspci and the Intel® Solid-State Drive Data Center Tool. However, the supplier claims that the SSD performs as advertised on Windows 7, which I do not use in production.
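
      For reference, the two relevant link lines can be pulled out of lspci directly (06:00.0 is the device address on my system; the full dump is below):

      sudo lspci -vv -s 06:00.0 | grep -E 'LnkCap:|LnkSta:'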


      The drive is connected through a Hot-swap Backplane PCIe Combination Drive Cage Kit for P4000 Server Chassis FUP8X25S3NVDK (2.5in NVMe SSD), which is plugged into one of the seven PCIe 3.0 x16 slots on the motherboard (I have tried every slot so far). It is the only drive in the cage, and its LED on the cage blinks green when the OS boots. The only other PCIe card is an EVGA Nvidia GT 210 GPU with 512 MB RAM, a PCIe 2.0 x16 device.


      I have installed both Ubuntu 15.04 (kernel 3.19) and CentOS 7 (kernel 3.10), and both display the same behaviour. Following the Intel Linux NVMe Driver Reference Guide for Developers, I got these results:


      dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct   # O_DIRECT raw write; runs until the device is full and destroys its contents


      gives me a write rate of about 620 MB/s, and


      hdparm -tT --direct /dev/nvme0n1   # -T: cached reads, -t: device reads; --direct uses O_DIRECT for both


      gives 657 MB/s O_DIRECT cached reads and 664 MB/s O_DIRECT disk reads.
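
      Both numbers sit just under the ceiling of a Gen1 x4 link, which points at the link rather than the drive. A back-of-the-envelope check (my own arithmetic, ignoring protocol overhead): PCIe 1.0 signals at 2.5 GT/s per lane with 8b/10b encoding, about 250 MB/s usable per lane, while PCIe 3.0 signals at 8 GT/s with 128b/130b encoding, about 985 MB/s per lane.

      # rough per-link ceilings, before PCIe protocol overhead
      echo "PCIe 1.0 x4: $((4 * 250)) MB/s"   # ~1000 MB/s
      echo "PCIe 3.0 x4: $((4 * 985)) MB/s"   # ~3940 MB/s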


      The lspci output:


      [user@localhost ~]$ sudo lspci -vvv -s 6:0.0
      06:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) (prog-if 02 [NVM Express])
          Subsystem: Intel Corporation DC P3600 SSD [2.5" SFF]
          Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
          Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
          Latency: 0
          Interrupt: pin A routed to IRQ 40
          Region 0: Memory at fb410000 (64-bit, non-prefetchable) [size=16K]
          Expansion ROM at fb400000 [disabled] [size=64K]
          Capabilities: [40] Power Management version 3
              Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
              Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
          Capabilities: [50] MSI-X: Enable+ Count=32 Masked-
              Vector table: BAR=0 offset=00002000
              PBA: BAR=0 offset=00003000
          Capabilities: [60] Express (v2) Endpoint, MSI 00
              DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 <4us
                  ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
              DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                  RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
                  MaxPayload 256 bytes, MaxReadReq 512 bytes
              DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
              LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <4us, L1 <4us
                  ClockPM- Surprise- LLActRep- BwNot-
              LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-
                  ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
              LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
              DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported
              DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
              LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                   Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                   Compliance De-emphasis: -6dB
              LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1-
                   EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
          Capabilities: [100 v1] Advanced Error Reporting
              UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
              UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
              UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
              CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
              CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
              AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
          Capabilities: [150 v1] Virtual Channel
              Caps:   LPEVC=0 RefClk=100ns PATEntryBits=1
              Arb:    Fixed- WRR32- WRR64- WRR128-
              Ctrl:   ArbSelect=Fixed
              Status: InProgress-
              VC0:    Caps:   PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
                  Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
                  Ctrl:   Enable+ ID=0 ArbSelect=Fixed TC/VC=01
                  Status: NegoPending- InProgress-
          Capabilities: [180 v1] Power Budgeting <?>
          Capabilities: [190 v1] Alternative Routing-ID Interpretation (ARI)
              ARICap: MFVC- ACS-, Next Function: 0
              ARICtl: MFVC- ACS-, Function Group: 0
          Capabilities: [270 v1] Device Serial Number 55-cd-2e-40-4b-fa-80-bc
          Capabilities: [2a0 v1] #19
          Kernel driver in use: nvme


      The isdct output:


      [user@localhost ~]$ sudo isdct show -a -intelssd
      ls: cannot access /dev/sg*: No such file or directory
      - IntelSSD CVMD5130002L1P2HGN -
      AggregationThreshold: 0
      Aggregation Time: 0
      ArbitrationBurst: 0
      AsynchronousEventConfiguration: 0
      Bootloader: 8B1B012F
      DevicePath: /dev/nvme0n1
      DeviceStatus: Healthy
      EnduranceAnalyzer: 17.22 Years
      ErrorString:
      Firmware: 8DV10151
      FirmwareUpdateAvailable: Firmware is up to date as of this tool release.
      HighPriorityWeightArbitration: 0
      Index: 0
      IOCompletionQueuesRequested: 30
      IOSubmissionQueuesRequested: 30
      LBAFormat: 0
      LowPriorityWeightArbitration: 0
      ProductFamily: Intel SSD DC P3600 Series
      MaximumLBA: 2344225967
      MediumPriorityWeightArbitration: 0
      MetadataSetting: 0
      ModelNumber: INTEL SSDPE2ME012T4
      NativeMaxLBA: 2344225967
      NumErrorLogPageEntries: 63
      NumLBAFormats: 6
      NVMePowerState: 0
      PCILinkGenSpeed: 1
      PCILinkWidth: 4
      PhysicalSize: 1200243695616
      PowerGovernorMode: 0 (25W)
      ProtectionInformation: 0
      ProtectionInformationLocation: 0
      RAIDMember: False
      SectorSize: 512
      SerialNumber: CVMD5130002L1P2HGN
      SystemTrimEnabled:
      TempThreshold: 85 degree C
      TimeLimitedErrorRecovery: 0
      TrimSupported: True
      WriteAtomicityDisableNormal: 0


      Why doesn't the device run on PCIe 3.0 x4?

        • 1. Re: Intel DC SSD P3600 runs on PCIe 1.0 x4 although plugged into a PCIe 3.0 x16 slot
          jonathan_intel

          Hello humorist,


          These are the most relevant aspects we found after reviewing the information you have provided:


          - The lspci output shows that the Intel® SSD DC P3600 Series is capable of a PCIe Gen3 x4 link; however, the link status shows it is running at PCIe Gen1 x4. This suggests that another component on the bus is limiting the connection to Gen1, and that the SSD is training down to the link speed of the slowest device on the bus. You might want to check the link capabilities of the related devices (see the example after the excerpt below).


          LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <4us, L1 <4us

          ...

          LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
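
          For example, you can locate the port above the SSD and read its link registers (a sketch; the 00:03.0 root-port address is only a placeholder, not taken from your system):

          lspci -t                                  # shows the PCIe topology tree above 06:00.0
          sudo lspci -vv -s 00:03.0 | grep -E 'LnkCap:|LnkSta:'

          If the root port or an intermediate switch reports a Gen1-only LnkCap, or trains its LnkSta down to 2.5GT/s, that component is the limiting one.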


          - The Hot-swap Backplane PCIe Combination Drive Cage Kit for P4000 Server Chassis FUP8X25S3NVDK (2.5in NVMe SSD) is supported by the Intel server specialists; however, the information we found indicates that it is designed to be used with Intel® P4000 Server chassis and Intel® Server Boards, and even in that configuration the Intel motherboards require specific updates to operate properly with this device. Third-party systems may not be fully compatible with this backplane kit.


          - The specifications of the Asus X99-E WS* do not mention support for the drive cage kit either. We advise you to check with Asus whether they support this configuration. Asus may have another solution for attaching 2.5-inch NVMe storage devices with the SFF-8639 connector.


          We advise you to check with Asus Support whether your motherboard is fully compatible with the Hot-swap Backplane PCIe Combination Drive Cage Kit for P4000 Server Chassis FUP8X25S3NVDK. If it is, make sure the system has all applicable updates and configuration changes needed to make full use of it.


          Additionally, you could confirm whether this behavior is expected with the Hot-swap Backplane PCIe Combination Drive Cage Kit for P4000 Server Chassis FUP8X25S3NVDK. You can obtain assistance for the backplane kit by contacting the Intel Servers Support Community, or you can contact Intel Support to get help from a server support specialist in your region.

          • 2. Re: Intel DC SSD P3600 runs on PCIe 1.0 x4 although plugged into a PCIe 3.0 x16 slot
            humorist

            Hello jonathan_intel,


            Thank you for the detailed and informative reply. I am following your advice and am in the process of contacting both ASUS Support and Intel Server Support.


            The supplier of this workstation claims to have tested the SSD under Windows 7 64-bit and that it achieved the advertised performance. I cannot verify this claim (I only run Linux), but I have no reason to doubt it. Would that rule out any hardware issues and point to a software problem instead?

            • 3. Re: Intel DC SSD P3600 runs on PCIe 1.0 x4 although plugged into a PCIe 3.0 x16 slot
              jonathan_intel

              I would advise you to contact the support teams for the motherboard and the backplane kit to confirm or rule out any hardware issues. The SSD is detected and advertises the proper PCIe capabilities, so the bottleneck appears to be in a different component.


              As we understand it, Asus X99 motherboards support NVM Express devices; however, using the 2.5-inch drive versions requires a BIOS update and the ASUS Hyper Kit expansion card. I was not able to find any reference to Intel backplane kits being used with this type of system. You might want to check with Asus whether this applies to the Asus X99-E WS.
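
              If you end up applying a BIOS update, the running version can be verified from Linux with a standard tool (it reads the DMI/SMBIOS tables and is not specific to Asus):

              sudo dmidecode -s bios-version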


              ASUS Announces All X99 and Z97 Motherboards Support NVM Express Devices

              • 4. Re: Intel DC SSD P3600 runs on PCIe 1.0 x4 although plugged into a PCIe 3.0 x16 slot
                humorist

                Hi jonathan_intel,


                As you initially wrote, this was indeed a hardware compatibility issue. The backplane is not compatible with this motherboard. The Hyper Kit solution was also difficult to put together, as a special cable is required to connect the SSD to the Hyper Kit. This cable is bundled with neither the SSD nor the Hyper Kit, and it is not sold separately; it only comes bundled with the 2.5-inch form factor Intel SSD 750 Series. After a lot of emails and phone calls, our supplier was able to get one specially delivered for us.


                Thank you for the prompt and expert replies.

                • 5. Re: Intel DC SSD P3600 runs on PCIe 1.0 x4 although plugged into a PCIe 3.0 x16 slot
                  jonathan_intel

                  Hello humorist,


                  I am glad to know that the recommendations and analysis helped you resolve this issue, and that you are now able to get the full performance of your new Intel® SSD DC P3600.