RLlib Examples
==============

This page is an index of examples for the various use cases and features of RLlib.

If any example is broken, or if you'd like to add an example to this page, feel free to raise an issue on our GitHub repository.

Tuned Examples
--------------

- `Tuned examples <https://github.com/ray-project/ray/blob/master/rllib/tuned_examples>`__:
   Collection of tuned hyperparameters by algorithm.
- `MuJoCo and Atari benchmarks <https://github.com/ray-project/rl-experiments>`__:
   Collection of reasonably optimized Atari and MuJoCo results.

Blog Posts
----------

- `Scaling Multi-Agent Reinforcement Learning <http://bair.berkeley.edu/blog/2018/12/12/rllib>`__:
   This blog post is a brief tutorial on multi-agent RL and its design in RLlib.
- `Functional RL with Keras and TensorFlow Eager <https://medium.com/riselab/functional-rl-with-keras-and-tensorflow-eager-7973f81d6345>`__:
   Exploration of a functional paradigm for implementing reinforcement learning (RL) algorithms.

Training Workflows
------------------

- `Custom training workflows <https://github.com/ray-project/ray/blob/master/rllib/examples/custom_train_fn.py>`__:
   Example of how to use Tune's support for custom training functions to implement custom training workflows (a minimal sketch follows this list).
- `Curriculum learning <rllib-training.html#example-curriculum-learning>`__:
   Example of how to adjust the configuration of an environment over time.
- `Custom metrics <https://github.com/ray-project/ray/blob/master/rllib/examples/custom_metrics_and_callbacks.py>`__:
   Example of how to output custom training metrics to TensorBoard.
- `Using rollout workers directly for control over the whole training workflow <https://github.com/ray-project/ray/blob/master/rllib/examples/rollout_worker_custom_workflow.py>`__:
   Example of how to use RLlib's lower-level building blocks to implement a fully customized training workflow.
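
For orientation, here is a minimal sketch of the custom-training-function pattern from the first example above: build a Trainer inside a plain Python function and hand results back to Tune. The algorithm (PPO), environment, and iteration count are illustrative assumptions, not the linked example's exact setup.

.. code-block:: python

    import ray
    from ray import tune
    from ray.rllib.agents.ppo import PPOTrainer


    def my_train_fn(config, reporter):
        # Build the Trainer manually instead of letting Tune construct it.
        trainer = PPOTrainer(config=config, env="CartPole-v0")
        for _ in range(10):
            result = trainer.train()
            reporter(**result)  # Report metrics back to Tune each iteration.
        trainer.stop()


    if __name__ == "__main__":
        ray.init()
        tune.run(my_train_fn, config={"num_workers": 0})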

Custom Envs and Models
----------------------

- `Local Unity3D multi-agent environment example <https://github.com/ray-project/ray/tree/master/rllib/examples/unity3d_env_local.py>`__:
   Example of how to set up an RLlib Trainer against a locally running Unity3D editor instance to
   learn any Unity3D game (including support for multi-agent setups).
   Use this example to try things out and watch the game and the learning progress live in the editor.
   If you provide a compiled game, this example can also run in a distributed fashion with ``num_workers > 0``.
   For a more heavyweight, distributed, cloud-based example, see ``Unity3D client/server`` below.
- `Registering a custom env and model <https://github.com/ray-project/ray/blob/master/rllib/examples/custom_env.py>`__:
   Example of defining and registering a gym env and model for use with RLlib (a minimal sketch follows this list).
- `Custom Keras model <https://github.com/ray-project/ray/blob/master/rllib/examples/custom_keras_model.py>`__:
   Example of using a custom Keras model.
- `Custom Keras RNN model <https://github.com/ray-project/ray/blob/master/rllib/examples/custom_rnn_model.py>`__:
   Example of using a custom Keras or PyTorch RNN model.
- `Registering a custom model with supervised loss <https://github.com/ray-project/ray/blob/master/rllib/examples/custom_loss.py>`__:
   Example of defining and registering a custom model with a supervised loss.
- `Subprocess environment <https://github.com/ray-project/ray/blob/master/rllib/tests/test_env_with_subprocess.py>`__:
   Example of how to ensure subprocesses spawned by envs are killed when RLlib exits.
- `Batch normalization <https://github.com/ray-project/ray/blob/master/rllib/examples/batch_norm_model.py>`__:
   Example of adding batch norm layers to a custom model.
- `Parametric actions <https://github.com/ray-project/ray/blob/master/rllib/examples/parametric_actions_cartpole.py>`__:
   Example of how to handle variable-length or parametric action spaces.
- `Eager execution <https://github.com/ray-project/ray/blob/master/rllib/examples/eager_execution.py>`__:
   Example of how to leverage TensorFlow eager execution to simplify debugging and the design of custom models and policies.
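
As a quick reference for the env- and model-registration examples above, here is a minimal sketch assuming a gym-style env and the ``TFModelV2`` API. ``MyEnv``, ``MyModel``, and all sizes and hyperparameters are hypothetical placeholders, not the code in the linked scripts.

.. code-block:: python

    import gym
    import numpy as np
    import tensorflow as tf

    import ray
    from ray import tune
    from ray.rllib.models import ModelCatalog
    from ray.rllib.models.tf.tf_modelv2 import TFModelV2


    class MyEnv(gym.Env):
        """Hypothetical toy env; replace with your own dynamics."""

        def __init__(self, env_config):
            self.action_space = gym.spaces.Discrete(2)
            self.observation_space = gym.spaces.Box(
                0.0, 1.0, shape=(1,), dtype=np.float32)

        def reset(self):
            self.steps = 0
            return np.zeros(1, dtype=np.float32)

        def step(self, action):
            self.steps += 1
            done = self.steps >= 10
            return np.zeros(1, dtype=np.float32), float(action), done, {}


    class MyModel(TFModelV2):
        """Tiny Keras policy-and-value network (hypothetical sizes)."""

        def __init__(self, obs_space, action_space, num_outputs,
                     model_config, name):
            super().__init__(obs_space, action_space, num_outputs,
                             model_config, name)
            inputs = tf.keras.layers.Input(shape=obs_space.shape)
            hidden = tf.keras.layers.Dense(16, activation="tanh")(inputs)
            logits = tf.keras.layers.Dense(num_outputs)(hidden)
            value = tf.keras.layers.Dense(1)(hidden)
            self.base_model = tf.keras.Model(inputs, [logits, value])
            self.register_variables(self.base_model.variables)

        def forward(self, input_dict, state, seq_lens):
            logits, self._value_out = self.base_model(input_dict["obs"])
            return logits, state

        def value_function(self):
            return tf.reshape(self._value_out, [-1])


    if __name__ == "__main__":
        ray.init()
        ModelCatalog.register_custom_model("my_model", MyModel)
        tune.run(
            "PPO",
            stop={"training_iteration": 2},
            config={
                "env": MyEnv,  # Env classes can be passed in directly.
                "model": {"custom_model": "my_model"},
                "num_workers": 0,
            },
        )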

Serving and Offline
-------------------

- :ref:`Serving RLlib models with Ray Serve <serve-rllib-tutorial>`:
   Example of using Ray Serve to serve RLlib models with an HTTP and JSON interface.
   **This is the recommended way to expose RLlib for online serving use cases.**
- `Unity3D client/server <https://github.com/ray-project/ray/tree/master/rllib/examples/serving/unity3d_server.py>`__:
   Example of how to set up n distributed Unity3D (compiled) games in the cloud that function as data-collecting
   clients against a central RLlib policy server learning how to play the game.
   The n distributed clients could themselves be servers for external/human players, allowing control
   to rest fully with the Unity entities instead of RLlib.
   Note: Uses Unity's MLAgents SDK (>=1.0) and supports all provided MLAgents example games and multi-agent setups.
- `CartPole client/server <https://github.com/ray-project/ray/tree/master/rllib/examples/serving/cartpole_server.py>`__:
   Example of online serving of predictions for a simple CartPole policy.
- `Saving experiences <https://github.com/ray-project/ray/blob/master/rllib/examples/saving_experiences.py>`__:
   Example of how to externally generate experience batches in RLlib-compatible format (a minimal sketch follows this list).
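
The experience-saving example reduces to the following pattern. This sketch writes random dummy transitions; the output directory and the minimal field set are assumptions (the linked script records additional fields such as action probabilities).

.. code-block:: python

    import numpy as np

    from ray.rllib.evaluation.sample_batch_builder import SampleBatchBuilder
    from ray.rllib.offline.json_writer import JsonWriter

    batch_builder = SampleBatchBuilder()
    writer = JsonWriter("/tmp/demo-out")  # Arbitrary output directory.

    for eps_id in range(2):
        obs = np.zeros(4, dtype=np.float32)
        for t in range(5):
            new_obs = np.random.rand(4).astype(np.float32)
            # Record one transition in RLlib's column format.
            batch_builder.add_values(
                t=t,
                eps_id=eps_id,
                obs=obs,
                actions=0,
                rewards=1.0,
                dones=(t == 4),
                new_obs=new_obs,
            )
            obs = new_obs
        # Flush the finished episode as one SampleBatch to JSON.
        writer.write(batch_builder.build_and_reset())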

Multi-Agent and Hierarchical
----------------------------

- `Rock-paper-scissors <https://github.com/ray-project/ray/blob/master/rllib/examples/rock_paper_scissors_multiagent.py>`__:
   Example of different heuristic and learned policies competing against each other in rock-paper-scissors.
- `Two-step game <https://github.com/ray-project/ray/blob/master/rllib/examples/two_step_game.py>`__:
   Example of the two-step game from the `QMIX paper <https://arxiv.org/pdf/1803.11485.pdf>`__.
- `PPO with centralized critic on two-step game <https://github.com/ray-project/ray/blob/master/rllib/examples/centralized_critic.py>`__:
   Example of customizing PPO to leverage a centralized value function.
- `Centralized critic in the env <https://github.com/ray-project/ray/blob/master/rllib/examples/centralized_critic_2.py>`__:
   A simpler method of implementing a centralized critic by augmenting agent observations with global information.
- `Hand-coded policy <https://github.com/ray-project/ray/blob/master/rllib/examples/multi_agent_custom_policy.py>`__:
   Example of running a custom hand-coded policy alongside trainable policies.
- `Weight sharing between policies <https://github.com/ray-project/ray/blob/master/rllib/examples/multi_agent_cartpole.py>`__:
   Example of how to define weight-sharing layers between two different policies (a minimal multi-agent config sketch follows this list).
- `Multiple trainers <https://github.com/ray-project/ray/blob/master/rllib/examples/multi_agent_two_trainers.py>`__:
   Example of alternating training between DQN and PPO trainers.
- `Hierarchical training <https://github.com/ray-project/ray/blob/master/rllib/examples/hierarchical_training.py>`__:
   Example of hierarchical training using the multi-agent API.
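
Most of the multi-agent examples above share one configuration pattern: declare the policies, then map agent IDs to policy IDs. Here is a minimal sketch, assuming the bundled ``MultiAgentCartPole`` example env; the policy IDs, agent count, and mapping rule are illustrative.

.. code-block:: python

    import gym

    import ray
    from ray import tune
    from ray.rllib.examples.env.multi_agent import MultiAgentCartPole
    from ray.tune.registry import register_env

    if __name__ == "__main__":
        ray.init()
        register_env("multi_agent_cartpole",
                     lambda _: MultiAgentCartPole({"num_agents": 2}))

        # All agents share CartPole's observation and action spaces.
        single_env = gym.make("CartPole-v0")
        obs_space = single_env.observation_space
        act_space = single_env.action_space

        tune.run(
            "PPO",
            stop={"training_iteration": 5},
            config={
                "env": "multi_agent_cartpole",
                "multiagent": {
                    # Each entry: (policy_cls or None for the default,
                    # obs_space, act_space, extra config overrides).
                    "policies": {
                        "ppo_0": (None, obs_space, act_space, {}),
                        "ppo_1": (None, obs_space, act_space, {}),
                    },
                    # Even agent ids train ppo_0, odd ids train ppo_1.
                    "policy_mapping_fn": lambda aid: "ppo_{}".format(aid % 2),
                },
            },
        )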

Community Examples
------------------

- `Arena AI <https://sites.google.com/view/arena-unity/home>`__:
   A General Evaluation Platform and Building Toolkit for Single/Multi-Agent Intelligence,
   with RLlib-generated baselines.
- `CARLA <https://github.com/layssi/Carla_Ray_Rlib>`__:
   Example of training autonomous vehicles with RLlib and the `CARLA <http://carla.org/>`__ simulator.
- `The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning <https://arxiv.org/pdf/2008.02616.pdf>`__:
   Using graph neural networks and RLlib to train multiple cooperative and adversarial agents to solve the
   "cover the area" problem, thereby learning how to best communicate (or, in the adversarial case, how to disturb communication).
- `Flatland <https://flatland.aicrowd.com/intro.html>`__:
   A dense traffic simulation environment with RLlib-generated baselines.
- `GFootball <https://github.com/google-research/football/blob/master/gfootball/examples/run_multiagent_rllib.py>`__:
   Example of setting up a multi-agent version of `GFootball <https://github.com/google-research>`__ with RLlib.
- `Neural MMO <https://jsuarez5341.github.io/neural-mmo/build/html/rst/userguide.html>`__:
   A multi-agent AI research environment inspired by Massively Multiplayer Online (MMO) role-playing games:
   self-contained worlds featuring thousands of agents per persistent macrocosm, diverse skilling systems,
   local and global economies, complex emergent social structures, and ad-hoc high-stakes single and team based conflict.
- `NeuroCuts <https://github.com/neurocuts/neurocuts>`__:
   Example of building packet classification trees using RLlib / multi-agent in a bandit-like setting.
- `NeuroVectorizer <https://github.com/ucb-bar/NeuroVectorizer>`__:
   Example of learning optimal LLVM vectorization compiler pragmas for loops in C and C++ code using RLlib.
- `Roboschool / SageMaker <https://github.com/awslabs/amazon-sagemaker-examples/tree/master/reinforcement_learning/rl_roboschool_ray>`__:
   Example of training robotic control policies in SageMaker with RLlib.
- `Sequential Social Dilemma Games <https://github.com/eugenevinitsky/sequential_social_dilemma_games>`__:
   Example of using the multi-agent API to model several `social dilemma games <https://arxiv.org/abs/1702.03037>`__.
- `StarCraft2 <https://github.com/oxwhirl/smac>`__:
   Example of training in StarCraft2 maps with RLlib / multi-agent.
- `Traffic Flow <https://berkeleyflow.readthedocs.io/en/latest/flow_setup.html>`__:
   Example of optimizing mixed-autonomy traffic simulations with RLlib / multi-agent.