CRR is another offline RL algorithm based on Q-learning that can learn from an offline experience replay. The challenge in applying existing Q-learning algorithms to offline RL lies in the overestimation of the Q-function, as well as the lack of exploration beyond the observed data. The latter becomes increasingly important during bootstrapping in the Bellman equation, where the Q-function queried for the next state's Q-value(s) does not have support in the observed data. To mitigate these issues, CRR implements a simple yet powerful idea of "value-filtered regression": use a learned critic to filter out the non-promising transitions from the replay dataset.
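A minimal sketch of the value-filtered regression idea, not RLlib's actual implementation: the critic's advantage estimate decides how strongly each behavior transition contributes to the policy's regression loss. The function name and arguments (`q_values`, `baseline`, `log_probs`, `beta`) are illustrative assumptions.

```python
import numpy as np


def crr_policy_loss(q_values, baseline, log_probs, mode="binary", beta=1.0):
    """Illustrative CRR-style weighted behavior-cloning loss (not RLlib's code).

    advantage = Q(s, a) - V(s). Transitions the critic deems non-promising
    (advantage <= 0) are filtered out ("binary" mode) or exponentially
    down-weighted ("exp" mode).
    """
    advantage = q_values - baseline
    if mode == "binary":
        # Hard filter: keep only transitions with positive advantage.
        weights = (advantage > 0).astype(np.float64)
    else:
        # Soft filter: exponential weighting, clipped for stability.
        weights = np.minimum(np.exp(advantage / beta), 20.0)
    # Weighted negative log-likelihood of the behavior actions.
    return -np.mean(weights * log_probs)
```

With binary weighting this reduces to behavior cloning on only those dataset actions the critic scores above its value baseline, which keeps the bootstrapped Q-targets supported by the observed data.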
```bash
conda create -n rllib-crr python=3.10
conda activate rllib-crr
pip install -r requirements.txt
pip install -e '.[development]'
```
[CRR Example]()