DeepSpeed is a deep learning optimization library that makes distributed training and inference simple and efficient.



Latest News

DeepSpeed is hiring, come join us!


DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.

10x Larger Models

10x Faster Training

Minimal Code Change

DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even on a single GPU:

  • Extreme scale: Using the current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters.
  • Extremely memory efficient: With just a single GPU, DeepSpeed's ZeRO-Offload can train models with over 10B parameters, 10x larger than the prior state of the art, democratizing multi-billion-parameter model training so that many deep learning scientists can explore bigger and better models (see the configuration sketch after this list).
  • Extremely long sequence length: DeepSpeed's sparse attention powers input sequences an order of magnitude longer and executes up to 6x faster compared with dense transformers.
  • Extremely communication efficient: 3D parallelism improves communication efficiency, allowing users to train multi-billion-parameter models 2–7x faster on clusters with limited network bandwidth. 1-bit Adam, 0/1 Adam, and 1-bit LAMB reduce communication volume by up to 26x while achieving convergence efficiency similar to Adam/LAMB, allowing for scaling to different types of GPU clusters and networks.
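
As a concrete illustration of the ZeRO-Offload bullet above, here is a minimal sketch of a DeepSpeed configuration enabling ZeRO stage 2 with optimizer-state offload to CPU memory and fp16 training. The keys follow the DeepSpeed configuration schema; the values are placeholder assumptions, not tuned settings.

# Illustrative DeepSpeed config (Python dict form; the same structure can
# also be written as a ds_config.json file). Values are placeholders.
ds_config = {
    "train_batch_size": 16,
    "fp16": {"enabled": True},                   # mixed-precision training
    "zero_optimization": {
        "stage": 2,                              # partition optimizer state + gradients
        "offload_optimizer": {"device": "cpu"},  # ZeRO-Offload: keep optimizer state in CPU memory
    },
}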

Early adopters of DeepSpeed have already produced Turing-NLG, a language model (LM) with over 17B parameters that established a new state of the art for LMs at the time.

DeepSpeed is an important part of Microsoft’s AI at Scale initiative to enable next-generation AI capabilities at scale; more information is available on the initiative’s page.

For further documentation, tutorials, and technical deep-dives, please see deepspeed.ai!

Build Pipeline Status

Description | Status
NVIDIA | nv-torch12-p40, nv-torch18-v100, nv-torch-latest-v100
AMD | amd
PyTorch Nightly | nv-torch-nightly-v100
Integrations | nv-transformers-v100, nv-lightning-v100
Misc | Formatting, pages-build-deployment, Documentation Status

Table of Contents

Section | Description
Why DeepSpeed? | DeepSpeed overview
Install | Installation details
Features | Feature list and overview
Further Reading | Documentation, tutorials, etc.
Contributing | Instructions for contributing
Publications | Publications related to DeepSpeed
Videos | Videos related to DeepSpeed

Why DeepSpeed?

Training advanced deep learning models is challenging. Beyond model design, model scientists also need to set up state-of-the-art training techniques such as distributed training, mixed precision, gradient accumulation, and checkpointing. Even then, scientists may not achieve the desired system performance and convergence rate. Large model sizes are even more challenging: a large model easily runs out of memory with pure data parallelism, and it is difficult to use model parallelism. DeepSpeed addresses these challenges to accelerate model development and training.
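
To make the minimal-code-change point concrete, below is a hedged sketch of the standard DeepSpeed training loop. The deepspeed.initialize call and the model_engine.backward/model_engine.step methods are the library's documented API; the toy model, data, and hyperparameters are invented for illustration, and a real run would typically be launched with the deepspeed launcher so that the distributed environment is set up.

import torch
import deepspeed

# Hypothetical stand-in for an existing PyTorch model.
model = torch.nn.Sequential(
    torch.nn.Linear(32, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
)

# deepspeed.initialize wraps the model in an engine that handles distributed
# data parallelism, mixed precision, and gradient accumulation per the config.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config={
        "train_batch_size": 8,
        "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    },
)

for step in range(10):
    # Random placeholder batch; a real job would draw from a dataloader.
    x = torch.randn(8, 32).to(model_engine.device)
    y = torch.randint(0, 2, (8,)).to(model_engine.device)
    loss = torch.nn.functional.cross_entropy(model_engine(x), y)
    model_engine.backward(loss)  # replaces loss.backward()
    model_engine.step()          # replaces optimizer.step()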

Installation

The quickest way to get started with DeepSpeed is via pip; this will install the latest release of DeepSpeed, which is not tied to specific PyTorch or CUDA versions. DeepSpeed includes several C++/CUDA extensions that we commonly refer to as our 'ops'. By default, all of these extensions/ops will be built just-in-time (JIT) using torch's JIT C++ extension loader, which relies on ninja to build and dynamically link them at runtime.

Note: PyTorch must be installed before installing DeepSpeed.

pip install deepspeed

After installation, you can validate your install and see which extensions/ops your machine is compatible with via the DeepSpeed environment report.

ds_report

If you would like to pre-install any of the DeepSpeed extensions/ops (instead of JIT compiling) or install pre-compiled ops via PyPI please see our advanced installation instructions.
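
For example, using the DS_BUILD_OPS environment variable described in those instructions, all compatible ops can be pre-built at install time instead of JIT-compiled (assuming a CUDA toolchain matching your torch build is available):

DS_BUILD_OPS=1 pip install deepspeed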

On Windows, you can build a wheel with the following steps; currently only inference mode is supported.

  1. Install PyTorch, such as PyTorch 1.8 + CUDA 11.1.
  2. Install Visual C++ build tools, such as the VS2019 C++ x64/x86 build tools.
  3. Launch a cmd console with Administrator privileges, which are required to create symlink folders.
  4. Run python setup.py bdist_wheel to build the wheel in the dist folder.

Features

Below we provide a brief feature list; see our detailed feature overview for descriptions and usage.

  • Distributed, mixed-precision training with ZeRO memory optimizations (ZeRO, ZeRO-Offload, ZeRO-Infinity)
  • 3D parallelism combining data, model, and pipeline parallelism
  • Sparse attention kernels for extremely long sequences
  • Communication-efficient optimizers: 1-bit Adam, 0/1 Adam, and 1-bit LAMB
  • Mixture-of-Experts (MoE) training and inference
  • Training techniques such as curriculum learning and progressive layer dropping

Further Reading

All DeepSpeed documentation can be found on our website: deepspeed.ai

Article | Description
DeepSpeed Features | DeepSpeed features
Getting Started | First steps with DeepSpeed
DeepSpeed JSON Configuration | Configuring DeepSpeed
API Documentation | Generated DeepSpeed API documentation
CIFAR-10 Tutorial | Getting started with CIFAR-10 and DeepSpeed
Megatron-LM Tutorial | Train GPT2 with DeepSpeed and Megatron-LM
BERT Pre-training Tutorial | Pre-train BERT with DeepSpeed
Learning Rate Range Test Tutorial | Faster training with large learning rates
1Cycle Tutorial | SOTA learning schedule in DeepSpeed

Contributing

DeepSpeed welcomes your contributions! Please see our contributing guide for more details on formatting, testing, etc.

Contributor License Agreement

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Publications

  1. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He. (2019) ZeRO: memory optimizations toward training trillion parameter models. arXiv:1910.02054 and in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '20).
  2. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. (2020) DeepSpeed: System Optimizations Enable Training Deep Learning Models with Over 100 Billion Parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '20, Tutorial).
  3. Minjia Zhang, Yuxiong He. (2020) Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping. arXiv:2010.13369 and NeurIPS 2020.
  4. Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, Minjia Zhang, Dong Li, Yuxiong He. (2021) ZeRO-Offload: Democratizing Billion-Scale Model Training. arXiv:2101.06840.
  5. Hanlin Tang, Shaoduo Gan, Ammar Ahmad Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He. (2021) 1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed. arXiv:2102.02888 and ICML 2021.
  6. Samyam Rajbhandari, Olatunji Ruwase, Jeff Rasley, Shaden Smith, Yuxiong He. (2021) ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning. arXiv:2104.07857.
  7. Conglong Li, Ammar Ahmad Awan, Hanlin Tang, Samyam Rajbhandari, Yuxiong He. (2021) 1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed. arXiv:2104.06069.
  8. Conglong Li, Minjia Zhang, Yuxiong He. (2021) Curriculum Learning: A Regularization Method for Efficient and Stable Billion-Scale GPT Model Pre-Training. arXiv:2108.06084.
  9. Yucheng Lu, Conglong Li, Minjia Zhang, Christopher De Sa, Yuxiong He. (2022) Maximizing Communication Efficiency for Large-scale Training via 0/1 Adam. arXiv:2202.06009.
  10. Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, Yuxiong He. (2022) DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale. arXiv:2201.05596.
  11. Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, Bryan Catanzaro. (2022) Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model. arXiv:2201.11990.

Videos

  1. DeepSpeed KDD 2020 Tutorial
    1. Overview
    2. ZeRO + large model training
    3. 17B T-NLG demo
    4. Fastest BERT training + RScan tuning
    5. DeepSpeed hands-on deep dive: part 1, part 2, part 3
    6. FAQ
  2. Microsoft Research Webinar
  3. DeepSpeed on AzureML
  4. Community Tutorials