DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.

10x Larger Models

10x Faster Training

Minimal Code Change

DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even on a single GPU:

  • Extreme scale: Using the current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters.
  • Extremely memory efficient: With just a single GPU, DeepSpeed's ZeRO-Offload can train models with over 10B parameters, 10x bigger than the state of the art, democratizing multi-billion-parameter model training so that many deep learning scientists can explore bigger and better models.
  • Extremely long sequence length: DeepSpeed's sparse attention powers input sequences an order of magnitude longer and delivers up to 6x faster execution compared with dense transformers.
  • Extremely communication efficient: 3D parallelism improves communication efficiency, allowing users to train multi-billion-parameter models 2–7x faster on clusters with limited network bandwidth. 1-bit Adam reduces communication volume by up to 5x while achieving convergence similar to Adam, enabling scaling to different types of GPU clusters and networks; a configuration sketch illustrating some of these options follows this list.
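
For a concrete sense of how these features are switched on, the sketch below builds a DeepSpeed configuration enabling FP16 mixed precision, ZeRO stage 2 with CPU offload (the ZeRO-Offload path), and 1-bit Adam. This is a minimal sketch assuming the config key names used around the v0.3.0 release; the batch size, learning rate, and freeze_step values are placeholders, not recommended settings.

```python
# Minimal configuration sketch (placeholder values, not tuned settings).
# Key names assume the DeepSpeed JSON config schema around v0.3.0; consult
# the DeepSpeed JSON Configuration docs for the authoritative schema.
import json

ds_config = {
    "train_batch_size": 32,
    "gradient_accumulation_steps": 1,
    "fp16": {
        "enabled": True                  # mixed-precision (FP16) training
    },
    "zero_optimization": {
        "stage": 2,                      # partition optimizer states and gradients
        "cpu_offload": True              # ZeRO-Offload: keep optimizer states and updates on CPU
    },
    "optimizer": {
        "type": "OneBitAdam",            # 1-bit Adam: communication-compressed Adam
        "params": {
            "lr": 1e-4,                  # placeholder learning rate
            "freeze_step": 1000          # placeholder warmup steps before compression starts
        }
    }
}

# DeepSpeed normally reads this from a JSON file passed to the launcher.
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

Because these features live in the configuration file rather than in model code, switching between ZeRO stages, offload, or optimizers does not require touching the training loop.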

Early adopters of DeepSpeed have already produced a language model (LM) with over 17B parameters called Turing-NLG, establishing a new SOTA in the LM category.

DeepSpeed is an important part of Microsoft’s new AI at Scale initiative to enable next-generation AI capabilities at scale, where you can find more information here.

For further documentation, tutorials, and technical deep-dives please see deepspeed.ai!

News

Table of Contents

Section – Description
Why DeepSpeed? – DeepSpeed overview
Features – DeepSpeed features
Further Reading – DeepSpeed documentation, tutorials, etc.
Contributing – Instructions for contributing to DeepSpeed
Publications – DeepSpeed publications

Why DeepSpeed?

Training advanced deep learning models is challenging. Beyond model design, model scientists also need to set up the state-of-the-art training techniques such as distributed training, mixed precision, gradient accumulation, and checkpointing. Yet still, scientists may not achieve the desired system performance and convergence rate. Large model sizes are even more challenging: a large model easily runs out of memory with pure data parallelism and it is difficult to use model parallelism. DeepSpeed addresses these challenges to accelerate model development and training.
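
As a rough illustration of how DeepSpeed packages these techniques behind a single engine, here is a minimal training-script sketch. It assumes the ds_config.json built earlier; the toy SimpleNet model and random dataset are purely illustrative, and exact initialize arguments may differ between DeepSpeed versions.

```python
# Minimal sketch of a DeepSpeed training script (illustrative toy model and data).
import argparse
import torch
import deepspeed
from torch.utils.data import TensorDataset


class SimpleNet(torch.nn.Module):
    """Hypothetical toy model used only for illustration."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(32, 2)

    def forward(self, x, y):
        return torch.nn.functional.cross_entropy(self.linear(x), y)


parser = argparse.ArgumentParser()
parser = deepspeed.add_config_arguments(parser)   # adds --deepspeed, --deepspeed_config, ...
args = parser.parse_args()

model = SimpleNet()
dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))

# The engine owns distributed setup, mixed precision, ZeRO partitioning,
# and gradient accumulation, all driven by the JSON config.
model_engine, optimizer, trainloader, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=model.parameters(),
    training_data=dataset)

for x, y in trainloader:
    x, y = x.to(model_engine.device), y.to(model_engine.device)
    if model_engine.fp16_enabled():    # cast inputs when FP16 is enabled in the config
        x = x.half()
    loss = model_engine(x, y)
    model_engine.backward(loss)        # replaces loss.backward()
    model_engine.step()                # replaces optimizer.step() and zero_grad()
```

Such a script is typically launched with the deepspeed launcher, for example `deepspeed train.py --deepspeed --deepspeed_config ds_config.json`, where train.py is whatever file the sketch above is saved as.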

Features

Below we provide a brief feature list; see our detailed feature overview for descriptions and usage.

Further Reading

All DeepSpeed documentation can be found on our website: deepspeed.ai

Article – Description
DeepSpeed Features – DeepSpeed features
Getting Started – First steps with DeepSpeed
DeepSpeed JSON Configuration – Configuring DeepSpeed
API Documentation – Generated DeepSpeed API documentation
CIFAR-10 Tutorial – Getting started with CIFAR-10 and DeepSpeed
Megatron-LM Tutorial – Train GPT2 with DeepSpeed and Megatron-LM
BERT Pre-training Tutorial – Pre-train BERT with DeepSpeed
Learning Rate Range Test Tutorial – Faster training with large learning rates
1Cycle Tutorial – SOTA learning schedule in DeepSpeed

Contributing

DeepSpeed welcomes your contributions! Please see our contributing guide for more details on formatting, testing, etc.

Contributor License Agreement

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Publications

  1. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He. (2019) ZeRO: Memory Optimizations Toward Training Trillion Parameter Models. arXiv:1910.02054