Commit History

| Author | SHA1 | Message | Commit Date |
|---|---|---|---|
| Max H. Gerlach | 1d217b5994 | Bump version to v0.28.1 (#3942) | 1 year ago |
| Max H. Gerlach | 19093eaebe | Bump version to 0.28.0 (#3910) | 1 year ago |
| romerojosh | 004fd0d93f | Update with_device functions in MXNet and PyTorch to skip unnecessary cudaSetDevice calls (#3912) | 1 year ago |
| Enrico Minack | 3a9bf1ba1f | Upgrade CI test frameworks and fix linking NCCL 2.12+ (#3846) | 1 year ago |
| Nicolas Castet | 2ef8ff92f5 | Re-enabling new keras optimizers (#3860) | 1 year ago |
| Pei-Lun Liao | 2af9000a24 | Force tf.logical_and in hvd allreduce condition running on CPU (#3885) | 1 year ago |
| Ata Fatahi | f356349204 | TF: Add get_local_and_global_gradients() to PartialDistributedGradientTape (#3859) | 1 year ago |
| Max H. Gerlach | 687eb29d25 | Build: Link CUDA runtime library statically by default (#3867) | 1 year ago |
| Max H. Gerlach | 2ecd714b08 | Fix ROCm build for recent Reducescatter changes (#3839) | 1 year ago |
| Max H. Gerlach | b46c5b7085 | Reducescatter: Allocate output tensors before enqueuing the operation (#3824) | 1 year ago |
| Max H. Gerlach | bfaca90d5c | Bump version to 0.27.0 (#3832) | 1 year ago |
| Lee Yang | 88ecd061a3 | support custom data loaders in TorchEstimator (#3787) | 1 year ago |
| Max H. Gerlach | 25e0f6476a | Add prescale_factor and postscale_factor for Reducescatter (#3815) | 1 year ago |
| Li Jiang | ede98b82ab | Support pinning to local rank GPU index in Spark estimators (#3737) | 1 year ago |
| Nicolas Castet | 305280def3 | Fix broken TF DistributedOptimizer with Keras 2.11+ (#3822) | 1 year ago |
| romerojosh | 85619c6851 | Several fixes for allreduce and grouped allreduce handling of tf.IndexedSlices. (#3813) | 1 year ago |
| romerojosh | f99dfbf84d | Handle tf.IndexedSlices types when scaling local gradients in TF. (#3786) | 1 year ago |
| Max H. Gerlach | b9d0e14801 | Pad fused GPU Allgathers for better memory alignment (#3727) | 2 years ago |
| Enrico Minack | 80e4894860 | Update CHANGELOG.md from release (#3740) | 2 years ago |
| Enrico Minack | 34604870ea | Bum version to 0.26.1 (#3745) | 2 years ago |
| Travis Addair | b601ec9a2c | Fixed packaging import to occur after install_requires (#3741) | 2 years ago |
| Enrico Minack | c638dcec97 | Bump version to 0.26.0 (#3731) | 2 years ago |
| chongxiaoc | 4e6eaa3780 | Keras: Support only legacy optimizers in Keras 2.11+ (#3725) | 2 years ago |
| Ata Fatahi | 4e82ad0144 | Support registering local variables for BroadcastGlobalVariablesCallback (#3703) | 2 years ago |
| Max H. Gerlach | 37a6d83dd9 | Fix reducescatter() and grouped_reducescatter() to raise clean exceptions for scalar inputs (#3699) | 2 years ago |
| Ata Fatahi | 4f723bb6a5 | TF: Add register_local_var to distributed optimizers and gradient aggregators (#3695) | 2 years ago |
| romerojosh | 25ed80371c | Enable use of native ncclAvg op for NCCL allreduces. (#3646) | 2 years ago |
| Max H. Gerlach | daf0f4111a | Build: Fix finding nvcc (if not in $PATH) with older versions of CMake (#3682) | 2 years ago |
| Max H. Gerlach | 81c4f9ab5f | Revert "TF: Add register_local_var to distributed optimizers and gradient aggregators (#3663)" (#3686) | 2 years ago |
| Ata Fatahi | dfe5b8ee27 | TF: Add register_local_var to distributed optimizers and gradient aggregators (#3663) | 2 years ago |