3 Replies Latest reply on Aug 16, 2018 11:44 PM by Intel Corporation

    Optimization for PyTorch

    kl_divergence

      For optimisation and faster training of models, I checked the docs and Slack. All those snippets are for TensorFlow, Caffe, and Keras, so I removed the TensorFlow-specific flags and am now using this:

      os.environ["OMP_NUM_THREADS"] = "64"
      os.environ["KMP_BLOCKTIME"] = "0"
      os.environ["KMP_SETTINGS"] = "1"
      os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"
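      For reference, these variables only take effect if they are set before torch is imported, since the OpenMP runtime reads them at load time; PyTorch also exposes its own intra-op thread control. A minimal sketch (the thread count 64 mirrors the value above and is an assumption about the machine; it should normally match the physical core count):

      ```python
      import os

      # Set OpenMP/KMP tuning variables BEFORE importing torch so the
      # OpenMP runtime picks them up. The values mirror those used above
      # and are assumptions about this particular machine.
      os.environ["OMP_NUM_THREADS"] = "64"
      os.environ["KMP_BLOCKTIME"] = "0"
      os.environ["KMP_SETTINGS"] = "1"
      os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"

      try:
          import torch
          # PyTorch-native control of intra-op parallelism, equivalent in
          # effect to OMP_NUM_THREADS for CPU ops:
          torch.set_num_threads(int(os.environ["OMP_NUM_THREADS"]))
      except ImportError:
          pass  # sketch only: torch may not be installed in this environment
      ```

      Beyond thread tuning, a common PyTorch speedup is overlapping data loading with training by passing num_workers to torch.utils.data.DataLoader.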

       

      But this is not helping much to speed up the training of my model. Could you please suggest better optimisation techniques for PyTorch?