RLlib: Industry-Grade Reinforcement Learning with TF and Torch
==============================================================

**RLlib** is an open-source library for reinforcement learning (RL), offering support for
production-level, highly distributed RL workloads, while maintaining
unified and simple APIs for a large variety of industry applications.

Whether you would like to train your agents in multi-agent setups,
purely from offline (historic) datasets, or using externally
connected simulators, RLlib offers simple solutions for your decision-making needs.

You **don't need** to be an **RL expert** to use RLlib, nor do you need to learn Ray or any
of its other libraries! If you either have your problem coded (in Python) as an
`RL environment `_
or own lots of pre-recorded, historic behavioral data to learn from, you will be
up and running in only a few days.

RLlib is already used in production by industry leaders in many different verticals, such as
`climate control `_,
`manufacturing and logistics `_,
`finance `_,
`gaming `_,
`automobile `_,
`robotics `_,
`boat design `_,
and many others.


Installation and Setup
----------------------

Install RLlib and run your first experiment on your laptop in seconds:

**TensorFlow:**

.. code-block:: bash

    $ conda create -n rllib python=3.8
    $ conda activate rllib
    $ pip install "ray[rllib]" tensorflow "gym[atari]" "gym[accept-rom-license]" atari_py
    $ # Run a test job:
    $ rllib train --run APPO --env CartPole-v0


**PyTorch:**

.. code-block:: bash

    $ conda create -n rllib python=3.8
    $ conda activate rllib
    $ pip install "ray[rllib]" torch "gym[atari]" "gym[accept-rom-license]" atari_py
    $ # Run a test job:
    $ rllib train --run APPO --env CartPole-v0 --torch


Quick First Experiment
----------------------

.. code-block:: python

    import gym
    from ray.rllib.agents.ppo import PPOTrainer


    # Define your problem using python and openAI's gym API:
    class ParrotEnv(gym.Env):
        """Environment in which an agent must learn to repeat the seen observations.

        Observations are float numbers indicating the to-be-repeated values,
        e.g. -1.0, 5.1, or 3.2.

        The action space is always the same as the observation space.

        Rewards are r=-abs(observation - action), for all steps.
        """

        def __init__(self, config):
            # Make the space (for actions and observations) configurable.
            self.action_space = config.get(
                "parrot_shriek_range", gym.spaces.Box(-1.0, 1.0, shape=(1, )))
            # Since actions should repeat observations, their spaces must be the
            # same.
            self.observation_space = self.action_space
            self.cur_obs = None
            self.episode_len = 0

        def reset(self):
            """Resets the episode and returns the initial observation of the new one.
            """
            # Reset the episode len.
            self.episode_len = 0
            # Sample a random number from our observation space.
            self.cur_obs = self.observation_space.sample()
            # Return initial observation.
            return self.cur_obs

        def step(self, action):
            """Takes a single step in the episode given `action`

            Returns:
                New observation, reward, done-flag, info-dict (empty).
            """
            # Set `done` flag after 10 steps.
            self.episode_len += 1
            done = self.episode_len >= 10
            # r = -abs(obs - action)
            reward = -sum(abs(self.cur_obs - action))
            # Set a new observation (random sample).
            self.cur_obs = self.observation_space.sample()
            return self.cur_obs, reward, done, {}


    # Create an RLlib Trainer instance to learn how to act in the above
    # environment.
    trainer = PPOTrainer(
        config={
            # Env class to use (here: our gym.Env sub-class from above).
            "env": ParrotEnv,
            # Config dict to be passed to our custom env's constructor.
            "env_config": {
                "parrot_shriek_range": gym.spaces.Box(-5.0, 5.0, (1, ))
            },
            # Parallelize environment rollouts.
            "num_workers": 3,
        })

    # Train for n iterations and report results (mean episode rewards).
    # Since we have to guess 10 times and the optimal reward is 0.0
    # (exact match between observation and action value),
    # we can expect to reach an optimal episode reward of 0.0.
    for i in range(5):
        results = trainer.train()
        print(f"Iter: {i}; avg. reward={results['episode_reward_mean']}")


After training, you may want to perform action computations (inference) in your environment.
Below is a minimal example of how to do this. Also
`check out our more detailed examples here `_
(in particular for `normal models `_,
`LSTMs `_,
and `attention nets `_).


.. code-block:: python

    # Perform inference (action computations) based on given env observations.
    # Note that we are using a slightly simpler env here (-3.0 to 3.0, instead
    # of -5.0 to 5.0!), however, this should still work as the agent has
    # (hopefully) learned to "just always repeat the observation!".
    env = ParrotEnv({"parrot_shriek_range": gym.spaces.Box(-3.0, 3.0, (1, ))})
    # Get the initial observation (some value between -3.0 and 3.0).
    obs = env.reset()
    done = False
    total_reward = 0.0
    # Play one episode.
    while not done:
        # Compute a single action, given the current observation
        # from the environment.
        action = trainer.compute_single_action(obs)
        # Apply the computed action in the environment.
        obs, reward, done, info = env.step(action)
        # Sum up rewards for reporting purposes.
        total_reward += reward
    # Report results.
    print(f"Shrieked for 1 episode; total-reward={total_reward}")


For a more detailed `"60 second" example, head to our main documentation `_.


Highlighted Features
--------------------

The following is a summary of RLlib's most striking features (for an in-depth overview,
check out our `documentation `_):

The most **popular deep-learning frameworks**: `PyTorch `_ and `TensorFlow
(tf1.x/2.x static-graph/eager/traced) `_.
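
As a minimal sketch (the ``"framework"`` config key and its values are the standard
trainer config; the env and algorithm chosen here are just illustrative), switching
backends requires no code changes beyond one config entry:

.. code-block:: python

    from ray.rllib.agents.ppo import PPOTrainer

    # Same algorithm, same code -- pick the deep-learning backend via config:
    # "tf" (TF1 static-graph), "tf2" (TF2 eager/traced), or "torch".
    trainer = PPOTrainer(config={"env": "CartPole-v0", "framework": "torch"})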

**Highly distributed learning**: Our RLlib algorithms (such as our "PPO" or "IMPALA")
allow you to set the ``num_workers`` config parameter, such that your workloads can run
on 100s of CPUs/nodes, thus parallelizing and speeding up learning.
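
For example (a sketch only; the worker count is arbitrary and assumes a Ray cluster
with enough CPUs):

.. code-block:: python

    from ray.rllib.agents.ppo import PPOTrainer

    trainer = PPOTrainer(config={
        "env": "CartPole-v0",
        # Spawn 64 rollout workers (Ray actors), each collecting samples in
        # parallel; scale this up to the size of your cluster.
        "num_workers": 64,
    })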

**Vectorized (batched) and remote (parallel) environments**: RLlib auto-vectorizes
your ``gym.Envs`` via the ``num_envs_per_worker`` config. Environment workers can
then batch and thus significantly speed up the action computing forward pass.
On top of that, RLlib offers the ``remote_worker_envs`` config to create
`single environments (within a vectorized one) as ray Actors `_,
thus parallelizing even the env stepping process.
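
A minimal sketch combining both settings (the numbers are illustrative):

.. code-block:: python

    from ray.rllib.agents.ppo import PPOTrainer

    trainer = PPOTrainer(config={
        "env": "CartPole-v0",
        "num_workers": 4,
        # Each rollout worker steps a vector of 8 env copies, so the policy's
        # forward pass computes actions for batches of 8 observations at once.
        "num_envs_per_worker": 8,
        # Additionally turn each sub-env into its own Ray actor, so even the
        # env.step() calls run in parallel (useful for slow/heavyweight envs).
        "remote_worker_envs": True,
    })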

| **Multi-agent RL** (MARL): Convert your (custom) ``gym.Envs`` into a multi-agent one
  via a few simple steps and start training your agents in any of the following fashions
  (see the config sketch below):
| 1) Cooperative with `shared `_ or
  `separate `_
  policies and/or value functions.
| 2) Adversarial scenarios using `self-play `_
  and `league-based training `_.
| 3) `Independent learning `_
  of neutral/co-existing agents.
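
Here is a minimal multi-agent config sketch, using the ``MultiAgentCartPole`` example
env that ships with RLlib (the policy names and the mapping function below are
illustrative assumptions, not a prescribed setup):

.. code-block:: python

    from ray.rllib.agents.ppo import PPOTrainer
    from ray.rllib.examples.env.multi_agent import MultiAgentCartPole

    trainer = PPOTrainer(config={
        # Two independent CartPole agents stepping in the same env.
        "env": MultiAgentCartPole,
        "env_config": {"num_agents": 2},
        "multiagent": {
            # Train two separate policies (same PPO hyperparameters here).
            "policies": {"policy_0", "policy_1"},
            # Map the env's agent IDs (ints 0 and 1) to policy IDs.
            "policy_mapping_fn": (
                lambda agent_id, *args, **kwargs: f"policy_{agent_id}"),
        },
    })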


**External simulators**: Don't have your simulation running as a gym.Env in python?
No problem! RLlib supports an external environment API and comes with a pluggable,
off-the-shelf
`client `_/
`server `_
setup that allows you to run 100s of independent simulators on the "outside"
(e.g. a Windows cloud) connecting to a central RLlib Policy-Server that learns
and serves actions. Alternatively, actions can be computed on the client side
to save on network traffic.
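
A rough client-side sketch, adapted from RLlib's CartPole serving example (it assumes
a policy server is already listening on ``localhost:9900``; the address and the
``inference_mode`` value are assumptions you would adjust for your setup):

.. code-block:: python

    import gym
    from ray.rllib.env.policy_client import PolicyClient

    # This code runs inside your (possibly non-Ray, non-Python-RL) simulator process.
    client = PolicyClient("http://localhost:9900", inference_mode="remote")

    env = gym.make("CartPole-v0")
    obs = env.reset()
    episode_id = client.start_episode(training_enabled=True)
    done = False
    while not done:
        # Ask the central policy server for an action ...
        action = client.get_action(episode_id, obs)
        # ... apply it locally and report the reward back for learning.
        obs, reward, done, info = env.step(action)
        client.log_returns(episode_id, reward)
    client.end_episode(episode_id, obs)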

**Offline RL and imitation learning/behavior cloning**: You don't have a simulator
for your particular problem, but tons of historic data recorded by a legacy (maybe
non-RL/ML) system? This branch of reinforcement learning is for you!
RLlib comes with several `offline RL `_
algorithms (*CQL*, *MARWIL*, and *DQfD*), allowing you to either purely
`behavior-clone `_
your existing system or learn how to further improve over it.
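
A minimal offline-training sketch (the data path is a placeholder; it should point to
a directory of RLlib-format JSON experience files, e.g. produced earlier via the
``"output"`` config option, and the chosen estimators are illustrative):

.. code-block:: python

    from ray.rllib.agents.marwil import MARWILTrainer

    trainer = MARWILTrainer(config={
        # The env is only used here to derive observation/action spaces.
        "env": "CartPole-v0",
        # Learn from previously logged experiences instead of a simulator.
        "input": "/tmp/cartpole-out",
        # Estimate policy performance from the offline data itself
        # (importance sampling / weighted importance sampling).
        "input_evaluation": ["is", "wis"],
    })
    results = trainer.train()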


In-Depth Documentation
----------------------

For an in-depth overview of RLlib and everything it has to offer, including
hands-on tutorials of important industry use cases and workflows, head over to
our `documentation pages `_.


Cite our Paper
--------------

If you've found RLlib useful for your research, please cite our `paper `_ as follows:

.. code-block::

    @inproceedings{liang2018rllib,
        Author = {Eric Liang and
                  Richard Liaw and
                  Robert Nishihara and
                  Philipp Moritz and
                  Roy Fox and
                  Ken Goldberg and
                  Joseph E. Gonzalez and
                  Michael I. Jordan and
                  Ion Stoica},
        Title = {{RLlib}: Abstractions for Distributed Reinforcement Learning},
        Booktitle = {International Conference on Machine Learning ({ICML})},
        Year = {2018}
    }