15 Replies Latest reply on Nov 17, 2017 7:20 PM by r81984

    NUC CPU fan left turned on after ESXi shutdown


      In a nutshell, the only issue I have with the NUC systems is that the CPU fan stays on after an ESXi shutdown, even though the power button light turns off. To power the system back on with the button, I first have to hold the power button for about 5 seconds until the fan stops (forcing a complete power-off); only then can I turn the system on again with a normal press. This is annoying because if I forget to force the units down completely, I find them warm the next morning, and I don't have air conditioning running all the time. I've done a bit of research and tried a few things, but unfortunately I haven't found a solution yet. I have two NUC units with identical parts and the same ESXi build, and both behave the same way.
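For context, the shutdown sequence I'm describing is triggered roughly like this from the ESXi shell; this is a hedged sketch (the reason string is arbitrary, and whether the shutdown comes from SSH, the DCUI, or the host client shouldn't matter for the fan symptom):

```shell
# Put the host into maintenance mode, then request an OS-level power-off.
esxcli system maintenanceMode set --enable true
esxcli system shutdown poweroff --reason "planned shutdown"

# After this, ESXi reports the host as down and the power light goes off,
# but on my units the CPU fan keeps spinning. The only recovery I have
# found is holding the power button ~5 seconds until the fan stops.
```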


      Would appreciate your advice on what I should try.


      And as much as I'd like to get some support on the subject above, I'd also like to share on this thread my experience of running ESXi 6.5 on these NUC units.


      Hardware and firmware

      • AC-to-DC power supply: the stock NUC power adapter, connected to a battery-backed AC outlet on an APC BK650AS UPS
      • NUC model: NUC6i7KYK
      • BIOS version: 0049
      • Memory: 32 GB (2 x Kingston KVR21S15D8/16)
      • Storage: 2 x Intel SSD 600P Series, 1.02 TB
      • Intel SSD 600P Series firmware version: 121C
      • Boot drive: 1 x SanDisk Ultra Fit USB, model SDCZ43-016G-Z46, 16 GB


      Important hardware and firmware remarks

      • For those planning to run ESXi 6.5 (or similar), or a Linux operating system, on the NUC with the Intel SSD 600P, I highly recommend updating the SSD to firmware version 121C (or newer, if available)
      • There have been reported issues running Linux on the Intel SSD 600P; at the time of writing, only the Microsoft Windows versions listed on the 600P product page are 'officially' supported
      • That said, I have seen reports on this forum and others that firmware version 121C works well for certain Linux flavors, and, in my case, for ESXi - see also the important notes on the NVMe driver for ESXi below
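For reference, one way to confirm which firmware revision the drives are actually running from the ESXi shell is the storage device listing; a hedged sketch (the grep pattern is just a convenience, assuming the 600P shows "Intel" in its display name):

```shell
# List storage devices; the "Revision:" field of each NVMe device shows
# the drive firmware - it should read 121C after the update.
esxcli storage core device list | grep -B 2 -A 8 -i "Intel"
```

Note that applying the firmware itself is done with Intel's own update tooling outside ESXi, not with esxcli.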


      BIOS configuration - configured according to the article here by William Lam and the article here by Erik Bussink. In summary, I have the following BIOS settings. I'm listing them for reference only; I don't claim to know what each setting means



      • disabled - USB Legacy (Default: On)
      • disabled - Portable Device Charging Mode (Default: Charging Only)
      • unchanged - USB Ports (Ports 01-08 enabled)


      • disabled - Chipset SATA (Default AHCI & SMART Enabled)
      • disabled - HDD Activity LED (Default: On)
      • disabled - M.2 PCIe SSD LED (Default: On)


      • IGD Minimum Memory - 64 MB (Default)
      • IGD Aperture Size - 256 (Default)
      • IGD Primary Video Port - Auto (Default)

      BIOS\Devices\Onboard Devices

      • disabled - Audio (Default: On)
      • LAN (Default)
      • disabled - Thunderbolt Controller (Default is Enabled)
      • disabled - WLAN (Default: On)
      • disabled - Bluetooth (Default: On)
      • SD Card - Read/Write (Default was Read)
      • disabled - Enhanced Consumer IR (Default: On)
      • disabled - High Precision Event Timers (Default: On)
      • disabled - Num Lock (Default: On)


      • M.2 Slot 1 - Enabled
      • M.2 Slot 2 - Enabled
      • Fan Control Mode : Cool


      • disabled - Real-Time Performance Tuning (Default: On)
      • selected - Max Performance Enabled (Default: Balanced Enabled)
      • disabled - Intel Ready Mode Technology (Default: On)
      • disabled - Power Sense (Default: On)
      • After Power Failure: Power On (Default: Stay Off)


      VMware ESXi

      • Installed on the USB boot drive
      • Updated to ESXi 6.5.0 Update 01 Build 5969303 here
      • Two datastores on each NUC; each datastore fully occupies one 600P
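The update to Build 5969303 can be applied straight from VMware's online depot; a hedged sketch of that path (the image-profile name below is my assumption for Update 01 - list the depot's profiles first and pick the matching one):

```shell
# Allow the host to reach the online depot over HTTPS.
esxcli network firewall ruleset set -e true -r httpClient

# List the 6.5 image profiles available in VMware's depot.
esxcli software sources profile list \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  | grep -i "ESXi-6.5"

# Apply the chosen profile (placeholder name - verify against the list above),
# then reboot.
esxcli software profile update \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  -p ESXi-6.5.0-20170702001-standard
reboot
```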


      Very important remarks

      • Replace the stock NVMe driver that comes with ESXi with the one built specifically for Intel SSDs here
        • After the update, the storage adapter driver showed as intel-nvme instead of nvme
        • You'll see on the driver page that the 600P model isn't listed, but the driver has worked well for me so far
      • I applied the driver update after I had applied ESXi 6.5.0 Update 01
      • Prior to that, I had low-level formatted both 600P drives with HDAT2 (verify/write/verify); the tool is listed here
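The driver swap itself is a standard VIB install; a hedged sketch (the offline-bundle path/filename is a placeholder - use the actual zip downloaded from the driver page):

```shell
# Install the Intel NVMe driver offline bundle, then reboot the host.
esxcli software vib install -d /vmfs/volumes/datastore1/intel-nvme-offline-bundle.zip
reboot

# After the reboot, confirm the driver replaced the stock one:
esxcli software vib list | grep -i nvme
esxcli storage core adapter list
```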


      The catch is that without the intel-nvme adapter driver, working on the ESXi platform with the NUC system was a painful experience - unsurprisingly, since neither the system nor its parts are on the ESXi HCL

      • Copying virtual machine folders from one ESXi datastore to another datastore created on the other 600P in the same NUC would usually fail before reaching 20% completion
        • After the error, ESXi would lose connectivity to the datastore it was writing to
        • I had to reboot to recover from the situation
      • Similarly, copying virtual machine folders to/from the same datastore produced the same failure
      • Deploying virtual machines in thick disk mode (--diskMode=eagerZeroedThick) with the VMware ovftool also produced the same failure
        • That said, I had no issues deploying virtual machines in thin disk mode
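For comparison, the two ovftool deployments look roughly like this; a hedged sketch (VM name, datastore, OVA path, and host address are placeholders):

```shell
# Thin-provisioned deployment - this worked even with the stock nvme driver.
ovftool --diskMode=thin --datastore=datastore1 --name=testvm \
  ./appliance.ova vi://root@nuc-esxi-host/

# Eager-zeroed thick deployment - this failed part-way through without
# the intel-nvme driver, taking the target datastore offline.
ovftool --diskMode=eagerZeroedThick --datastore=datastore1 --name=testvm \
  ./appliance.ova vi://root@nuc-esxi-host/
```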

      After updating ESXi with the intel-nvme driver, I re-ran the test cases above, which had previously failed 100% of the time, and they all passed


      Edits: there are 3 items attached to each NUC unit

      • The DC power plug
      • The USB boot drive
      • Cat5e network cable

      No other items are connected beyond those listed above. In particular, I don't have any type of display attached to the units.


