| Name | Commit | Last commit message | Last updated |
| --- | --- | --- | --- |
| `documentation/` | `ebd56b57db` | [RLlib; documentation] "RLlib in 60sec" overhaul. (#20215) | 2 years ago |
| `env/` | `a931076f59` | [RLlib] Tf2 + eager-tracing same speed as framework=tf; Add more test coverage for tf2+tracing. (#19981) | 3 years ago |
| `export/` | `b213565783` | [RLlib] Fix failing test cases: Soft-deprecate ModelV2.from_batch (in favor of ModelV2.__call__). (#19693) | 3 years ago |
| `inference_and_serving/` | `143d23a278` | [RLlib] Issue 20062: Action inference examples missing (#20144) | 2 years ago |
| `models/` | `a931076f59` | [RLlib] Tf2 + eager-tracing same speed as framework=tf; Add more test coverage for tf2+tracing. (#19981) | 3 years ago |
| `policy/` | `61a1274619` | [RLlib] No Preprocessors (part 2). (#18468) | 3 years ago |
| `serving/` | `05a55a9335` | [RLlib] Issue 18668: Unity3D env client/server example not working (fix + add to test cases). (#18942) | 3 years ago |
| `simulators/` | `e735add268` | [RLlib] Integration with SUMO Simulator (#11710) | 4 years ago |
| `tune/` | `3408b60d2b` | [Release] Refactor User Tests (#20028) | 3 years ago |
| `__init__.py` | `5d7afe8092` | [rllib] Try moving RLlib to top level dir (#5324) | 5 years ago |
| `action_masking.py` | `ea4a22249c` | [RLlib] Add simple action-masking example script/env/model (tf and torch). (#18494) | 3 years ago |
| `attention_net.py` | `82465f9342` | [RLlib] Better PolicyServer example (w/ or w/o tune) and add printing out actual listen port address in log-level=INFO. (#18254) | 3 years ago |
| `attention_net_supervised.py` | `9eba1871bb` | [RLlib] Support easy `use_attention=True` flag for using the GTrXL model. (#11698) | 3 years ago |
| `autoregressive_action_dist.py` | `eab9c25856` | [RLlib] Better example scripts: Description --no-tune and --local-mode CLI options (autoregressive_action_dist.py) (#17705) | 3 years ago |
| `bare_metal_policy_with_custom_view_reqs.py` | `026bf01071` | [RLlib] Upgrade gym version to 0.21 and deprecate pendulum-v0. (#19535) | 3 years ago |
| `batch_norm_model.py` | `026bf01071` | [RLlib] Upgrade gym version to 0.21 and deprecate pendulum-v0. (#19535) | 3 years ago |
| `cartpole_lstm.py` | `53206dd440` | [RLlib] CQL BC loss fixes; PPO/PG/A2\|3C action normalization fixes (#16531) | 3 years ago |
| `centralized_critic.py` | `cf21c634a3` | [RLlib] Fix deprecated warning for torch_ops.py (soft-replaced by torch_utils.py). (#19982) | 3 years ago |
| `centralized_critic_2.py` | `be6db06485` | [RLlib] Re-do: Trainer: Support add and delete Policies. (#16569) | 3 years ago |
| `checkpoint_by_custom_criteria.py` | `d2c755ccef` | [RLlib] Examples scripts add argparse help and replace `--torch` with `--framework`. (#15832) | 3 years ago |
| `coin_game_env.py` | `be6db06485` | [RLlib] Re-do: Trainer: Support add and delete Policies. (#16569) | 3 years ago |
| `complex_struct_space.py` | `d2c755ccef` | [RLlib] Examples scripts add argparse help and replace `--torch` with `--framework`. (#15832) | 3 years ago |
| `curriculum_learning.py` | `d89fb82bfb` | [RLlib] Add simple curriculum learning API and example script. (#15740) | 3 years ago |
| `custom_env.py` | `8a72824c63` | [RLlib Testig] Split and unflake more CI tests (make sure all jobs are < 30min). (#18591) | 3 years ago |
| `custom_eval.py` | `d2c755ccef` | [RLlib] Examples scripts add argparse help and replace `--torch` with `--framework`. (#15832) | 3 years ago |
| `custom_experiment.py` | `53206dd440` | [RLlib] CQL BC loss fixes; PPO/PG/A2\|3C action normalization fixes (#16531) | 3 years ago |
| `custom_fast_model.py` | `5a313ba3d6` | [RLlib] Refactor: All tf static graph code should reside inside Policy class. (#17169) | 3 years ago |
| `custom_input_api.py` | `026bf01071` | [RLlib] Upgrade gym version to 0.21 and deprecate pendulum-v0. (#19535) | 3 years ago |
| `custom_keras_model.py` | `ed85f59194` | [RLlib] Unify all RLlib Trainer.train() -> results[info][learner][policy ID][learner_stats] and add structure tests. (#18879) | 3 years ago |
| `custom_logger.py` | `026bf01071` | [RLlib] Upgrade gym version to 0.21 and deprecate pendulum-v0. (#19535) | 3 years ago |
| `custom_loss.py` | `7eb1a29426` | [RLlib] Fix ModelV2 custom metrics for torch. (#16734) | 3 years ago |
| `custom_metrics_and_callbacks.py` | `9c73871da0` | [RLlib; Docs overhaul] Docstring cleanup: Evaluation (#19783) | 3 years ago |
| `custom_metrics_and_callbacks_legacy.py` | `c17169dc11` | [RLlib] Fix all example scripts to run on GPUs. (#11105) | 4 years ago |
| `custom_model_api.py` | `d2c755ccef` | [RLlib] Examples scripts add argparse help and replace `--torch` with `--framework`. (#15832) | 3 years ago |
| `custom_model_loss_and_metrics.py` | `ed85f59194` | [RLlib] Unify all RLlib Trainer.train() -> results[info][learner][policy ID][learner_stats] and add structure tests. (#18879) | 3 years ago |
| `custom_observation_filters.py` | `8a066474d4` | [RLlib] No Preprocessors; preparatory PR #1 (#18367) | 3 years ago |
| `custom_rnn_model.py` | `53206dd440` | [RLlib] CQL BC loss fixes; PPO/PG/A2\|3C action normalization fixes (#16531) | 3 years ago |
| `custom_tf_policy.py` | `b213565783` | [RLlib] Fix failing test cases: Soft-deprecate ModelV2.from_batch (in favor of ModelV2.__call__). (#19693) | 3 years ago |
| `custom_torch_policy.py` | `99ae7bae05` | [RLlib] JAXPolicy prep. PR #1. (#13077) | 3 years ago |
| `custom_train_fn.py` | `8a72824c63` | [RLlib Testig] Split and unflake more CI tests (make sure all jobs are < 30min). (#18591) | 3 years ago |
| `custom_vector_env.py` | `0d8fce8fd8` | [RLlib] Discussion 2294: Custom vector env example and fix. (#16083) | 3 years ago |
| `deterministic_training.py` | `ad87ddf93e` | [rllib] Add deterministic test to gpu (#19306) | 3 years ago |
| `dmlab_watermaze.py` | `60d4d5e1aa` | Remove future imports (#6724) | 4 years ago |
| `eager_execution.py` | `b213565783` | [RLlib] Fix failing test cases: Soft-deprecate ModelV2.from_batch (in favor of ModelV2.__call__). (#19693) | 3 years ago |
| `env_rendering_and_recording.py` | `026bf01071` | [RLlib] Upgrade gym version to 0.21 and deprecate pendulum-v0. (#19535) | 3 years ago |
| `fractional_gpus.py` | `7eb1a29426` | [RLlib] Fix ModelV2 custom metrics for torch. (#16734) | 3 years ago |
| `hierarchical_training.py` | `fd13bac9b3` | [RLlib] Add `worker` arg (optional) to `policy_mapping_fn`. (#18184) | 3 years ago |
| `iterated_prisoners_dilemma_env.py` | `be6db06485` | [RLlib] Re-do: Trainer: Support add and delete Policies. (#16569) | 3 years ago |
| `lstm_auto_wrapping.py` | `6f342a2221` | [RLlib] Preparatory PR for: Documentation on Model Building. (#13260) | 3 years ago |
| `mobilenet_v2_with_lstm.py` | `d2c755ccef` | [RLlib] Examples scripts add argparse help and replace `--torch` with `--framework`. (#15832) | 3 years ago |
| `multi_agent_cartpole.py` | `fd13bac9b3` | [RLlib] Add `worker` arg (optional) to `policy_mapping_fn`. (#18184) | 3 years ago |
| `multi_agent_custom_policy.py` | `649580d735` | [RLlib] Redo simplify multi agent config dict: Reverted b/c seemed to break test_typing (non RLlib test). (#17046) | 3 years ago |
| `multi_agent_independent_learning.py` | `649580d735` | [RLlib] Redo simplify multi agent config dict: Reverted b/c seemed to break test_typing (non RLlib test). (#17046) | 3 years ago |
| `multi_agent_parameter_sharing.py` | `649580d735` | [RLlib] Redo simplify multi agent config dict: Reverted b/c seemed to break test_typing (non RLlib test). (#17046) | 3 years ago |
| `multi_agent_two_trainers.py` | `fd13bac9b3` | [RLlib] Add `worker` arg (optional) to `policy_mapping_fn`. (#18184) | 3 years ago |
| `nested_action_spaces.py` | `59f796edf3` | [RLlib] Fix crash when using StochasticSampling exploration (most PG-style algos) w/ tf and numpy > 1.19.5 (#18366) | 3 years ago |
| `offline_rl.py` | `026bf01071` | [RLlib] Upgrade gym version to 0.21 and deprecate pendulum-v0. (#19535) | 3 years ago |
| `parallel_evaluation_and_training.py` | `56f142cac1` | [RLlib] Add support for evaluation_num_episodes=auto (run eval for as long as the parallel train step takes). (#18380) | 3 years ago |
| `parametric_actions_cartpole.py` | `d2c755ccef` | [RLlib] Examples scripts add argparse help and replace `--torch` with `--framework`. (#15832) | 3 years ago |
| `parametric_actions_cartpole_embeddings_learnt_by_model.py` | `a7f8dc9d77` | [RLlib] New and changed version of parametric actions cartpole example + small suggested update in policy_client.py (#15664) | 3 years ago |
| `partial_gpus.py` | `7eb1a29426` | [RLlib] Fix ModelV2 custom metrics for torch. (#16734) | 3 years ago |
| `preprocessing_disabled.py` | `61a1274619` | [RLlib] No Preprocessors (part 2). (#18468) | 3 years ago |
| `random_parametric_agent.py` | `99a0088233` | [RLlib] Unify the way we create local replay buffer for all agents (#19627) | 3 years ago |
| `recsim_with_slateq.py` | `d2c755ccef` | [RLlib] Examples scripts add argparse help and replace `--torch` with `--framework`. (#15832) | 3 years ago |
| `remote_envs_with_inference_done_on_main_node.py` | `8248ba531b` | [RLlib] Redo #17410: Example script: Remote worker envs with inference done on main node. (#17960) | 3 years ago |
| `remote_vector_env_with_custom_api.py` | `fd438d5630` | [RLlib] Issue 18104: Cannot set remote_worker_envs=True for non local-mode and MultiAgentEnv. (#19133) | 3 years ago |
| `restore_1_of_n_agents_from_checkpoint.py` | `fd13bac9b3` | [RLlib] Add `worker` arg (optional) to `policy_mapping_fn`. (#18184) | 3 years ago |
| `rnnsac_stateless_cartpole.py` | `fba8461663` | [RLlib] Add RNN-SAC agent (#16577) | 3 years ago |
| `rock_paper_scissors_multiagent.py` | `246787cdd9` | Revert "[RLlib] POC: `PGTrainer` class that works by sub-classing, not `trainer_template.py`. (#20055)" (#20284) | 2 years ago |
| `rollout_worker_custom_workflow.py` | `cdf70c2900` | [Tune] Remove legacy resources implementations in Runner and Executor. (#19773) | 2 years ago |
| `saving_experiences.py` | `5a788474aa` | [Core] First pass at privatizing non-public Python APIs. (#14607) | 3 years ago |
| `sb2rllib_rllib_example.py` | `489febc6b2` | [RLlib] Better example scripts: Description --no-tune and --local-mode CLI options (#17038) | 3 years ago |
| `sb2rllib_sb_example.py` | `55709bac7a` | [RLlib] Examples for training, saving, loading, testing an agent with SB & RLlib (#15897) | 3 years ago |
| `self_play_league_based_with_open_spiel.py` | `fd13bac9b3` | [RLlib] Add `worker` arg (optional) to `policy_mapping_fn`. (#18184) | 3 years ago |
| `self_play_with_open_spiel.py` | `fd13bac9b3` | [RLlib] Add `worker` arg (optional) to `policy_mapping_fn`. (#18184) | 3 years ago |
| `sumo_env_local.py` | `be6db06485` | [RLlib] Re-do: Trainer: Support add and delete Policies. (#16569) | 3 years ago |
| `trajectory_view_api.py` | `828f5d26b7` | [RLlib] Custom view requirements (e.g. for prev-n-obs) work with `compute_single_action` and `compute_actions_from_input_dict`. (#18921) | 3 years ago |
| `two_step_game.py` | `698b4eeed3` | [RLlib] POC: Separate losses for APPO/IMPALA. Enable TFPolicy to handle multiple optimizers/losses (like TorchPolicy). (#18669) | 3 years ago |
| `two_trainer_workflow.py` | `89fbfc00f8` | [RLlib] Some minor cleanups (buffer buffer_size -> capacity and others). (#19623) | 3 years ago |
| `unity3d_env_local.py` | `d2c755ccef` | [RLlib] Examples scripts add argparse help and replace `--torch` with `--framework`. (#15832) | 3 years ago |
| `vizdoom_with_attention_net.py` | `d2c755ccef` | [RLlib] Examples scripts add argparse help and replace `--torch` with `--framework`. (#15832) | 3 years ago |