| Name | Commit | Commit message | Last updated |
| --- | --- | --- | --- |
| bandit | 827ab91741 | [RLlib] Replace remaining mentions of "trainer" by "algorithm". (#36557) | 1 year ago |
| catalog | ab131bb8c2 | [RLlib] Early improvements to Catalogs and RL Modules docs + Catalogs improvements (#37245) | 1 year ago |
| connectors | 2ffd7e49bd | [rllib] Fix storage-path related tests (#38947) | 1 year ago |
| documentation | 2ffd7e49bd | [rllib] Fix storage-path related tests (#38947) | 1 year ago |
| env | 586f1b5139 | [RLlib] Fix MB-MPO bug. (#39654) | 1 year ago |
| export | 827ab91741 | [RLlib] Replace remaining mentions of "trainer" by "algorithm". (#36557) | 1 year ago |
| inference_and_serving | 2ffd7e49bd | [rllib] Fix storage-path related tests (#38947) | 1 year ago |
| learner | 2913e9b971 | [train] Legacy interface cleanup (`air.Checkpoint`, `LegacyExperimentAnalysis`) (#39289) | 1 year ago |
| models | ab131bb8c2 | [RLlib] Early improvements to Catalogs and RL Modules docs + Catalogs improvements (#37245) | 1 year ago |
| multi_agent_and_self_play | c17a44cdfa | Revert "Revert "[RLlib] AlphaStar: Parallelized, multi-agent/multi-GPU learni…" (#22153) | 2 years ago |
| policy | 960032a15f | [RLlib][RLModules] RNNs and RLModules (#32723) | 1 year ago |
| rl_module | 1c29b98c71 | [RLlib] Fix issues with action masking examples. (#38095) | 1 year ago |
| serving | 2ffd7e49bd | [rllib] Fix storage-path related tests (#38947) | 1 year ago |
| simulators | 0c69020432 | Revert "Simplify logging configuration. (#30863)" (#31858) | 1 year ago |
| tune | 054e00f8b8 | [AIR] Remove head node syncing as the default storage option (#37142) | 1 year ago |
| __init__.py | 5d7afe8092 | [rllib] Try moving RLlib to top level dir (#5324) | 5 years ago |
| action_masking.py | 1c29b98c71 | [RLlib] Fix issues with action masking examples. (#38095) | 1 year ago |
| attention_net.py | 9a2d5f443e | [RLlib][RLModule][build_base] Disabling RLModule API for LSTM / Attn (#33662) | 1 year ago |
| attention_net_supervised.py | 8e680c483c | [RLlib] gymnasium support (new `Env.reset()/step()/seed()/render()` APIs). (#28369) | 1 year ago |
| autoregressive_action_dist.py | 2df2428cdf | [RLlib] Put notices and error out on invalid ModelV2/Policy related configs for RL Modules (#37526) | 1 year ago |
| bare_metal_policy_with_custom_view_reqs.py | 2ed09c5445 | [RLlib] Move all config validation logic into AlgorithmConfig classes. (#29854) | 1 year ago |
| batch_norm_model.py | 2df2428cdf | [RLlib] Put notices and error out on invalid ModelV2/Policy related configs for RL Modules (#37526) | 1 year ago |
| cartpole_lstm.py | 960032a15f | [RLlib][RLModules] RNNs and RLModules (#32723) | 1 year ago |
| centralized_critic.py | 17596b03d1 | [RLlib][RLModule][build_base] Disabled Centralized Critic and Curiosity tests with RLModules (#33663) | 1 year ago |
| centralized_critic_2.py | 17596b03d1 | [RLlib][RLModule][build_base] Disabled Centralized Critic and Curiosity tests with RLModules (#33663) | 1 year ago |
| checkpoint_by_custom_criteria.py | a3ec4a936e | [RLlib] Enable `eager_tracing=True` by default. (#36556) | 1 year ago |
| coin_game_env.py | 223b39611e | [RLlib] Deprecate/cleanup: AlgorithmConfig["multiagent"] access and usage in tests and examples. (#35879) | 1 year ago |
| complex_struct_space.py | 5af66e66cc | [RLlib] AlgorithmConfigs: Broad rollout; Example scripts. (#29700) | 2 years ago |
| compute_adapted_gae_on_postprocess_trajectory.py | 8e680c483c | [RLlib] gymnasium support (new `Env.reset()/step()/seed()/render()` APIs). (#28369) | 1 year ago |
| curriculum_learning.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| custom_env.py | 2df2428cdf | [RLlib] Put notices and error out on invalid ModelV2/Policy related configs for RL Modules (#37526) | 1 year ago |
| custom_eval.py | d127273ec7 | [RLlib] Fix: Recovered eval worker should use eval-config's policy_mapping_fn and policy_to_train fn, not the main train workers' ones. (#33648) | 1 year ago |
| custom_experiment.py | 2ffd7e49bd | [rllib] Fix storage-path related tests (#38947) | 1 year ago |
| custom_fast_model.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| custom_input_api.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| custom_keras_model.py | 8e680c483c | [RLlib] gymnasium support (new `Env.reset()/step()/seed()/render()` APIs). (#28369) | 1 year ago |
| custom_logger.py | 827ab91741 | [RLlib] Replace remaining mentions of "trainer" by "algorithm". (#36557) | 1 year ago |
| custom_metrics_and_callbacks.py | 7c07659224 | Update episode_v2.py with last_info_for (#37382) | 1 year ago |
| custom_model_api.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| custom_model_loss_and_metrics.py | 2df2428cdf | [RLlib] Put notices and error out on invalid ModelV2/Policy related configs for RL Modules (#37526) | 1 year ago |
| custom_observation_filters.py | 5af66e66cc | [RLlib] AlgorithmConfigs: Broad rollout; Example scripts. (#29700) | 2 years ago |
| custom_recurrent_rnn_tokenizer.py | 960032a15f | [RLlib][RLModules] RNNs and RLModules (#32723) | 1 year ago |
| custom_rnn_model.py | 9a2d5f443e | [RLlib][RLModule][build_base] Disabling RLModule API for LSTM / Attn (#33662) | 1 year ago |
| custom_tf_policy.py | 2ed09c5445 | [RLlib] Move all config validation logic into AlgorithmConfig classes. (#29854) | 1 year ago |
| custom_torch_policy.py | 2ed09c5445 | [RLlib] Move all config validation logic into AlgorithmConfig classes. (#29854) | 1 year ago |
| custom_train_fn.py | 2ffd7e49bd | [rllib] Fix storage-path related tests (#38947) | 1 year ago |
| custom_vector_env.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| deterministic_training.py | 5b5d83cef9 | [RLlib] Fix rest of PPO RL Modules tests (#35672) | 1 year ago |
| dmlab_watermaze.py | 7f1bacc7dc | [CI] Format Python code with Black (#21975) | 2 years ago |
| eager_execution.py | a3ec4a936e | [RLlib] Enable `eager_tracing=True` by default. (#36556) | 1 year ago |
| env_rendering_and_recording.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| fractional_gpus.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| hierarchical_training.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| iterated_prisoners_dilemma_env.py | 223b39611e | [RLlib] Deprecate/cleanup: AlgorithmConfig["multiagent"] access and usage in tests and examples. (#35879) | 1 year ago |
| lstm_auto_wrapping.py | 72fefc3a40 | [RLlib] AlgorithmConfig: Replace more of the old-style config dicts across codebase. (#29799) | 2 years ago |
| mobilenet_v2_with_lstm.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| multi-agent-leela-chess-zero.py | 56b79117af | [RLlib] Contribution of LeelaChessZero algorithm for playing chess in a MultiAgent env. (#31480) | 1 year ago |
| multi_agent_cartpole.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| multi_agent_custom_policy.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| multi_agent_different_spaces_for_agents.py | 960032a15f | [RLlib][RLModules] RNNs and RLModules (#32723) | 1 year ago |
| multi_agent_independent_learning.py | 12ff13dda1 | [RLlib] Fix waterworld example and test (#32117) | 1 year ago |
| multi_agent_parameter_sharing.py | 223b39611e | [RLlib] Deprecate/cleanup: AlgorithmConfig["multiagent"] access and usage in tests and examples. (#35879) | 1 year ago |
| multi_agent_two_trainers.py | 960032a15f | [RLlib][RLModules] RNNs and RLModules (#32723) | 1 year ago |
| nested_action_spaces.py | 89ac80d883 | [RLlib] Issue 39421: MultiDiscrete action spaces not supported on new stack. (#39534) | 1 year ago |
| offline_rl.py | 794cfd9725 | [RLlib] `AlgorithmConfig.overrides()` to replace `multiagent->policies->config` and `evaluation_config` dicts. (#30879) | 1 year ago |
| parallel_evaluation_and_training.py | a3ec4a936e | [RLlib] Enable `eager_tracing=True` by default. (#36556) | 1 year ago |
| parametric_actions_cartpole.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| parametric_actions_cartpole_embeddings_learnt_by_model.py | 05eea3844a | [RLlib] Fix `env_check` for parametric actions (with action mask). (#34790) | 1 year ago |
| partial_gpus.py | 0c74ecad12 | [Lint] Cleanup incorrectly formatted strings (Part 1: RLLib). (#23128) | 2 years ago |
| preprocessing_disabled.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| random_parametric_agent.py | 2ed09c5445 | [RLlib] Move all config validation logic into AlgorithmConfig classes. (#29854) | 1 year ago |
| re3_exploration.py | e715a8b761 | [RLlib] AlgorithmConfig: Replace more occurrences of old config dicts; Make all Algorithms use the non-dict lookup for config properties. (#30096) | 1 year ago |
| recommender_system_with_recsim_and_slateq.py | 32c73a319d | [RLlib] Issue 39031: SlateQ example script bug. (#39550) | 1 year ago |
| remote_base_env_with_custom_api.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| remote_envs_with_inference_done_on_main_node.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| replay_buffer_api.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| restore_1_of_n_agents_from_checkpoint.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| rnnsac_stateless_cartpole.py | c477124a09 | [air] fix one rllib example that still uses `log_dir` (#39067) | 1 year ago |
| rock_paper_scissors_multiagent.py | 1fbb143950 | [RLlib] Issue 39453: PettingZoo wrappers should use correct multi-agent dict action- and observation spaces. (#39459) | 1 year ago |
| rollout_worker_custom_workflow.py | 9ed2fa250a | [tune] Deprecate `tune.report`, `tune.checkpoint_dir`, `checkpoint_dir`, and `reporter` (#39093) | 1 year ago |
| saving_experiences.py | 8e680c483c | [RLlib] gymnasium support (new `Env.reset()/step()/seed()/render()` APIs). (#28369) | 1 year ago |
| sb2rllib_rllib_example.py | 8e680c483c | [RLlib] gymnasium support (new `Env.reset()/step()/seed()/render()` APIs). (#28369) | 1 year ago |
| sb2rllib_sb_example.py | 8e680c483c | [RLlib] gymnasium support (new `Env.reset()/step()/seed()/render()` APIs). (#28369) | 1 year ago |
| self_play_league_based_with_open_spiel.py | 827ab91741 | [RLlib] Replace remaining mentions of "trainer" by "algorithm". (#36557) | 1 year ago |
| self_play_with_open_spiel.py | 827ab91741 | [RLlib] Replace remaining mentions of "trainer" by "algorithm". (#36557) | 1 year ago |
| sumo_env_local.py | 827ab91741 | [RLlib] Replace remaining mentions of "trainer" by "algorithm". (#36557) | 1 year ago |
| trajectory_view_api.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| trajectory_view_api_rlm.py | 960032a15f | [RLlib][RLModules] RNNs and RLModules (#32723) | 1 year ago |
| two_step_game.py | 8d2dc9a399 | [RLlib] Change default framework from tf to torch (#33604) | 1 year ago |
| two_trainer_workflow.py | ba04d01ee0 | [RLlib][RLModule] Disabled RLModule in Two trainer workflow example (#33727) | 1 year ago |
| unity3d_env_local.py | 827ab91741 | [RLlib] Replace remaining mentions of "trainer" by "algorithm". (#36557) | 1 year ago |
| vizdoom_with_attention_net.py | a3ec4a936e | [RLlib] Enable `eager_tracing=True` by default. (#36556) | 1 year ago |