# Deep Q Networks (DQN)

Code in this package is adapted from https://github.com/openai/baselines/tree/master/baselines/deepq.

## Overview

DQN is a model-free, off-policy RL algorithm and one of the first deep RL algorithms developed. DQN uses a neural network as a function approximator for the Q-function in Q-learning. The algorithm minimizes the squared error (L2 loss) between the Q-value predictions and the Q-value targets, which are computed as 1-step TD targets. The paper introduces two important concepts: a target network and an experience replay buffer. The target network is a copy of the main Q-network and is used to compute the Q-value targets in the loss function; to stabilize training, it lags slightly behind the main Q-network. Meanwhile, the experience replay buffer stores the transitions encountered by the agent during training, and minibatches are sampled uniformly from it to compute gradient updates for the Q-network.
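
To make the loss concrete, here is a minimal PyTorch sketch of the 1-step TD update (an illustration, not RLlib's actual implementation; `q_net`, `target_net`, and the batch tensors are assumed placeholders):

```python
import torch
import torch.nn as nn

# Minimal sketch of the 1-step TD loss described above.
# `q_net` and `target_net` both map observations to per-action
# Q-values of shape [batch, num_actions].
def dqn_loss(q_net, target_net, obs, actions, rewards, next_obs, dones, gamma=0.99):
    # Q(s, a) for the actions that were actually taken.
    q_values = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # 1-step TD target: r + gamma * max_a' Q_target(s', a').
        next_q = target_net(next_obs).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q
    # Squared error between predictions and targets, averaged over the batch.
    return nn.functional.mse_loss(q_values, targets)
```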

## Supported DQN Algorithms

**Double DQN** - Whereas vanilla DQN learns a single Q-network, Double DQN learns two Q-networks, akin to double Q-learning. This addresses vanilla DQN's tendency toward overly optimistic Q-value estimates, which limits performance.
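
The target computation below is a hedged sketch of the double-Q idea (placeholder tensors, not RLlib's code): the online network selects the greedy next action, while the target network evaluates it, decoupling action selection from evaluation.

```python
import torch

# Double DQN target: action selection by the online net,
# action evaluation by the target net.
def double_dqn_targets(q_net, target_net, rewards, next_obs, dones, gamma=0.99):
    with torch.no_grad():
        best_actions = q_net(next_obs).argmax(dim=1, keepdim=True)
        next_q = target_net(next_obs).gather(1, best_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q
```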

**Dueling DQN** - Dueling DQN splits the Q-value function approximator into two networks: a state-value approximator and an advantage approximator.
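
A minimal sketch of such a dueling head (assumed layer sizes, not RLlib's model classes): the two streams are recombined as Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a'), where subtracting the mean advantage keeps the decomposition identifiable.

```python
import torch.nn as nn

class DuelingHead(nn.Module):
    """Recombines a value stream and an advantage stream into Q-values."""

    def __init__(self, feature_dim, num_actions):
        super().__init__()
        self.value = nn.Linear(feature_dim, 1)                # V(s)
        self.advantage = nn.Linear(feature_dim, num_actions)  # A(s, a)

    def forward(self, features):
        v = self.value(features)      # [batch, 1]
        a = self.advantage(features)  # [batch, num_actions]
        # Subtract the mean advantage so V and A are identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```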

**Distributional DQN** - Usually, the Q-network outputs a single predicted Q-value per state-action pair. Distributional DQN goes further by predicting the full distribution of returns for each state-action pair (in practice, a categorical distribution over a fixed set of return values). Capturing this distribution reflects the uncertainty in the Q-value and can improve the performance of DQN algorithms.
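
As a sketch of the categorical ("C51"-style) variant, scalar Q-values are recovered as the expectation over a fixed support of return values ("atoms"); the bounds and atom count below follow the C51 paper's defaults and are assumptions, not RLlib's exact settings:

```python
import torch

def expected_q(atom_probs, v_min=-10.0, v_max=10.0):
    """atom_probs: [batch, num_actions, num_atoms] softmax outputs."""
    num_atoms = atom_probs.shape[-1]
    atoms = torch.linspace(v_min, v_max, num_atoms)  # fixed return support
    # Q(s, a) = E[Z(s, a)] = sum_i p_i * z_i.
    return (atom_probs * atoms).sum(dim=-1)          # [batch, num_actions]
```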

**APEX-DQN** - Standard DQN algorithms use an experience replay buffer that is sampled uniformly when computing gradients. APEX introduces prioritized (weighted) replay, where transitions in the buffer are more or less likely to be sampled depending on their TD error.
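
A simplified sketch of the sampling rule (RLlib's actual buffer uses segment trees for efficient sampling; the constants here are conventional choices, not RLlib's defaults):

```python
import numpy as np

def sample_indices(td_errors, batch_size, alpha=0.6, eps=1e-6):
    # Priority of transition i: p_i = (|TD error_i| + eps) ** alpha.
    priorities = (np.abs(td_errors) + eps) ** alpha
    # Sample transition i with probability p_i / sum_j p_j.
    probs = priorities / priorities.sum()
    return np.random.choice(len(td_errors), size=batch_size, p=probs)
```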

**Rainbow** - Rainbow DQN, as the name suggests, aggregates many of the improvements discovered in DQN research. These include a multi-step distributional loss (extended from Distributional DQN), prioritized replay (inspired by APEX-DQN), double Q-networks (inspired by Double DQN), and dueling networks (inspired by Dueling DQN).
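
These components map onto flags of RLlib's `DQNConfig`; the sketch below shows one plausible Rainbow-style setup (flag names should be double-checked against your RLlib version):

```python
from ray.rllib.algorithms.dqn import DQNConfig

config = (
    DQNConfig()
    .environment("CartPole-v1")
    .training(
        n_step=3,          # multi-step returns
        noisy=True,        # NoisyNet exploration
        num_atoms=51,      # distributional (C51) head
        v_min=-10.0,
        v_max=10.0,
        double_q=True,     # Double DQN targets
        dueling=True,      # dueling value/advantage streams
        replay_buffer_config={
            # Prioritized replay, as popularized by APEX-DQN.
            "type": "MultiAgentPrioritizedReplayBuffer",
        },
    )
)
algo = config.build()
```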

## Documentation & Implementation:

1) Vanilla DQN (DQN).

**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#dqn)**

**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/algorithms/simple_q/simple_q.py)**

2) Double DQN.

**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#dqn)**

**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/algorithms/dqn/dqn.py)**

3) Dueling DQN.

**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#dqn)**

**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/algorithms/dqn/dqn.py)**

4) Distributional DQN.

**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#dqn)**

**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/algorithms/dqn/dqn.py)**

5) APEX DQN.

**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#dqn)**

**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/algorithms/apex_dqn/apex_dqn.py)**

6) Rainbow DQN.

**[Detailed Documentation](https://docs.ray.io/en/master/rllib-algorithms.html#dqn)**

**[Implementation](https://github.com/ray-project/ray/blob/master/rllib/algorithms/dqn/dqn.py)**