.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png

.. image:: https://readthedocs.org/projects/ray/badge/?version=master
   :target: http://docs.ray.io/en/master/?badge=master

.. image:: https://img.shields.io/badge/Ray-Join%20Slack-blue
   :target: https://forms.gle/9TSdDYUgxYs8SA9e8

.. image:: https://img.shields.io/badge/Discuss-Ask%20Questions-blue
   :target: https://discuss.ray.io/

.. image:: https://img.shields.io/twitter/follow/raydistributed.svg?style=social&logo=twitter
   :target: https://twitter.com/raydistributed

|
**Ray provides a simple, universal API for building distributed applications.**

Ray is packaged with the following libraries for accelerating machine learning workloads:

- `Tune`_: Scalable Hyperparameter Tuning
- `RLlib`_: Scalable Reinforcement Learning
- `Train`_: Distributed Deep Learning (beta)
- `Datasets`_: Distributed Data Loading and Compute

As well as libraries for taking ML and distributed apps to production:

- `Serve`_: Scalable and Programmable Serving
- `Workflow`_: Fast, Durable Application Flows (alpha)

There are also many `community integrations <https://docs.ray.io/en/master/ray-libraries.html>`_ with Ray, including `Dask`_, `MARS`_, `Modin`_, `Horovod`_, `Hugging Face`_, `Scikit-learn`_, and others. Check out the `full list of Ray distributed libraries here <https://docs.ray.io/en/master/ray-libraries.html>`_.

Install Ray with: ``pip install ray``. For nightly wheels, see the
`Installation page <https://docs.ray.io/en/master/installation.html>`__.
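
Ray's libraries ship as optional pip extras; the commands below are a rough sketch of common installs (the ``default`` extra adds the dashboard and cluster launcher, and the library-specific extras are the ones used in the quick starts further down):

.. code-block:: bash

    # Core Ray only.
    $ pip install -U ray

    # Ray with the dashboard and cluster launcher.
    $ pip install -U "ray[default]"

    # Ray with a specific library, e.g. Tune, RLlib, or Serve.
    $ pip install -U "ray[tune]"
    $ pip install -U "ray[serve]"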
.. _`Modin`: https://github.com/modin-project/modin
.. _`Hugging Face`: https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer.hyperparameter_search
.. _`MARS`: https://docs.ray.io/en/latest/data/mars-on-ray.html
.. _`Dask`: https://docs.ray.io/en/latest/data/dask-on-ray.html
.. _`Horovod`: https://horovod.readthedocs.io/en/stable/ray_include.html
.. _`Scikit-learn`: https://docs.ray.io/en/master/joblib.html
.. _`Serve`: https://docs.ray.io/en/master/serve/index.html
.. _`Datasets`: https://docs.ray.io/en/master/data/dataset.html
.. _`Workflow`: https://docs.ray.io/en/master/workflows/concepts.html
.. _`Train`: https://docs.ray.io/en/master/train/train.html

Quick Start
-----------

Execute Python functions in parallel.

.. code-block:: python

    import ray
    ray.init()

    @ray.remote
    def f(x):
        return x * x

    futures = [f.remote(i) for i in range(4)]
    print(ray.get(futures))

To use Ray's actor model:

.. code-block:: python

    import ray
    ray.init()

    @ray.remote
    class Counter(object):
        def __init__(self):
            self.n = 0

        def increment(self):
            self.n += 1

        def read(self):
            return self.n

    counters = [Counter.remote() for i in range(4)]
    [c.increment.remote() for c in counters]
    futures = [c.read.remote() for c in counters]
    print(ray.get(futures))

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download `this configuration file <https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/aws/example-full.yaml>`__, and run:

``ray submit [CLUSTER.YAML] example.py --start``

Read more about `launching clusters <https://docs.ray.io/en/master/cluster/index.html>`_.
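
For reference, a minimal cluster-launcher session using the downloaded configuration file (here assumed to be saved as ``example-full.yaml``) looks roughly like this:

.. code-block:: bash

    # Start a cluster from the downloaded configuration file.
    $ ray up example-full.yaml

    # Run a local script on the cluster; --start launches the cluster
    # first if it is not already running.
    $ ray submit example-full.yaml example.py --start

    # Shut the cluster down when finished.
    $ ray down example-full.yaml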

Tune Quick Start
----------------

.. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png

`Tune`_ is a library for hyperparameter tuning at any scale.

- Launch a multi-node distributed hyperparameter sweep in less than 10 lines of code.
- Supports any deep learning framework, including PyTorch, `PyTorch Lightning <https://github.com/williamFalcon/pytorch-lightning>`_, TensorFlow, and Keras.
- Visualize results with `TensorBoard <https://www.tensorflow.org/tensorboard>`__.
- Choose among scalable SOTA algorithms such as `Population Based Training (PBT)`_, `Vizier's Median Stopping Rule`_, and `HyperBand/ASHA`_.
- Tune integrates with many optimization libraries such as `Facebook Ax <http://ax.dev>`_, `HyperOpt <https://github.com/hyperopt/hyperopt>`_, and `Bayesian Optimization <https://github.com/fmfn/BayesianOptimization>`_ and enables you to scale them transparently.

To run this example, you will need to install the following:

.. code-block:: bash

    $ pip install "ray[tune]"

This example runs a parallel grid search to optimize an example objective function.

.. code-block:: python

    from ray import tune
    from ray.air import session


    def objective(step, alpha, beta):
        return (0.1 + alpha * step / 100)**(-1) + beta * 0.1


    def training_function(config):
        # Hyperparameters
        alpha, beta = config["alpha"], config["beta"]
        for step in range(10):
            # Iterative training function - can be any arbitrary training procedure.
            intermediate_score = objective(step, alpha, beta)
            # Feed the score back to Tune.
            session.report({"mean_loss": intermediate_score})


    tuner = tune.Tuner(
        training_function,
        param_space={
            "alpha": tune.grid_search([0.001, 0.01, 0.1]),
            "beta": tune.choice([1, 2, 3])
        })
    results = tuner.fit()

    print("Best config: ", results.get_best_result(metric="mean_loss", mode="min").config)

    # Get a dataframe for analyzing trial results.
    df = results.get_dataframe()

If TensorBoard is installed, automatically visualize all trial results:

.. code-block:: bash

    tensorboard --logdir ~/ray_results

.. _`Tune`: https://docs.ray.io/en/master/tune.html
.. _`Population Based Training (PBT)`: https://docs.ray.io/en/master/tune/api_docs/schedulers.html#population-based-training-tune-schedulers-populationbasedtraining
.. _`Vizier's Median Stopping Rule`: https://docs.ray.io/en/master/tune/api_docs/schedulers.html#median-stopping-rule-tune-schedulers-medianstoppingrule
.. _`HyperBand/ASHA`: https://docs.ray.io/en/master/tune/api_docs/schedulers.html#asha-tune-schedulers-ashascheduler

RLlib Quick Start
-----------------

.. image:: https://github.com/ray-project/ray/raw/master/doc/source/rllib/images/rllib-logo.png

`RLlib`_ is an industry-grade library for reinforcement learning (RL), built on top of Ray.
It offers high scalability and unified APIs for a
`variety of industry- and research applications <https://www.anyscale.com/event-category/ray-summit>`_.

.. code-block:: bash

    $ pip install "ray[rllib]" tensorflow  # or torch

.. Do NOT edit the following code directly in this README! Instead, edit
   the ray/rllib/examples/documentation/rllib_on_ray_readme.py script and then
   copy the new code in here:

.. code-block:: python

    import gym
    from ray.rllib.algorithms.ppo import PPO


    # Define your problem using python and openAI's gym API:
    class SimpleCorridor(gym.Env):
        """Corridor in which an agent must learn to move right to reach the exit.

        ---------------------
        | S | 1 | 2 | 3 | G |   S=start; G=goal; corridor_length=5
        ---------------------

        Possible actions to choose from are: 0=left; 1=right
        Observations are floats indicating the current field index, e.g. 0.0 for
        starting position, 1.0 for the field next to the starting position, etc..
        Rewards are -0.1 for all steps, except when reaching the goal (+1.0).
        """

        def __init__(self, config):
            self.end_pos = config["corridor_length"]
            self.cur_pos = 0
            self.action_space = gym.spaces.Discrete(2)  # left and right
            self.observation_space = gym.spaces.Box(0.0, self.end_pos, shape=(1,))

        def reset(self):
            """Resets the episode and returns the initial observation of the new one."""
            self.cur_pos = 0
            # Return initial observation.
            return [self.cur_pos]

        def step(self, action):
            """Takes a single step in the episode given `action`.

            Returns:
                New observation, reward, done-flag, info-dict (empty).
            """
            # Walk left.
            if action == 0 and self.cur_pos > 0:
                self.cur_pos -= 1
            # Walk right.
            elif action == 1:
                self.cur_pos += 1
            # Set `done` flag when end of corridor (goal) reached.
            done = self.cur_pos >= self.end_pos
            # +1.0 when goal reached, otherwise -0.1.
            reward = 1.0 if done else -0.1
            return [self.cur_pos], reward, done, {}


    # Create an RLlib Trainer instance.
    trainer = PPO(
        config={
            # Env class to use (here: our gym.Env sub-class from above).
            "env": SimpleCorridor,
            # Config dict to be passed to our custom env's constructor.
            "env_config": {
                # Use corridor with 20 fields (including S and G).
                "corridor_length": 20
            },
            # Parallelize environment rollouts.
            "num_workers": 3,
        })

    # Train for n iterations and report results (mean episode rewards).
    # Since we have to move at least 19 times in the env to reach the goal and
    # each move gives us -0.1 reward (except the last move at the end: +1.0),
    # we can expect to reach an optimal episode reward of -0.1*18 + 1.0 = -0.8
    for i in range(5):
        results = trainer.train()
        print(f"Iter: {i}; avg. reward={results['episode_reward_mean']}")

After training, you may want to perform action computations (inference) in your environment.
Below is a minimal example of how to do this. Also
`check out our more detailed examples here <https://github.com/ray-project/ray/tree/master/rllib/examples/inference_and_serving>`_
(in particular for `normal models <https://github.com/ray-project/ray/blob/master/rllib/examples/inference_and_serving/policy_inference_after_training.py>`_,
`LSTMs <https://github.com/ray-project/ray/blob/master/rllib/examples/inference_and_serving/policy_inference_after_training_with_lstm.py>`_,
and `attention nets <https://github.com/ray-project/ray/blob/master/rllib/examples/inference_and_serving/policy_inference_after_training_with_attention.py>`_).

.. code-block:: python

    # Perform inference (action computations) based on given env observations.
    # Note that we are using a slightly different env here (len 10 instead of 20),
    # however, this should still work as the agent has (hopefully) learned
    # to "just always walk right!"
    env = SimpleCorridor({"corridor_length": 10})
    # Get the initial observation (should be: [0.0] for the starting position).
    obs = env.reset()
    done = False
    total_reward = 0.0
    # Play one episode.
    while not done:
        # Compute a single action, given the current observation
        # from the environment.
        action = trainer.compute_single_action(obs)
        # Apply the computed action in the environment.
        obs, reward, done, info = env.step(action)
        # Sum up rewards for reporting purposes.
        total_reward += reward
    # Report results.
    print(f"Played 1 episode; total-reward={total_reward}")

.. _`RLlib`: https://docs.ray.io/en/master/rllib/index.html

Ray Serve Quick Start
---------------------

.. image:: https://raw.githubusercontent.com/ray-project/ray/master/doc/source/serve/logo.svg
   :width: 400

`Ray Serve`_ is a scalable model-serving library built on Ray. It is:

- Framework Agnostic: Use the same toolkit to serve everything from deep
  learning models built with frameworks like PyTorch or Tensorflow & Keras
  to Scikit-Learn models or arbitrary business logic.
- Python First: Configure your model serving declaratively in pure Python,
  without needing YAML or JSON configs.
- Performance Oriented: Turn on batching, pipelining, and GPU acceleration to
  increase the throughput of your model.
- Composition Native: Allows you to create "model pipelines" by composing multiple
  models together to drive a single prediction.
- Horizontally Scalable: Serve can linearly scale as you add more machines. Enable
  your ML-powered service to handle growing traffic.

To run this example, you will need to install the following:

.. code-block:: bash

    $ pip install scikit-learn
    $ pip install "ray[serve]"

This example serves a scikit-learn gradient boosting classifier.

.. code-block:: python

    import requests

    from sklearn.datasets import load_iris
    from sklearn.ensemble import GradientBoostingClassifier

    from ray import serve

    serve.start()

    # Train model.
    iris_dataset = load_iris()
    model = GradientBoostingClassifier()
    model.fit(iris_dataset["data"], iris_dataset["target"])


    @serve.deployment(route_prefix="/iris")
    class BoostingModel:
        def __init__(self, model):
            self.model = model
            self.label_list = iris_dataset["target_names"].tolist()

        async def __call__(self, request):
            payload = (await request.json())["vector"]
            print(f"Received request with data {payload}")
            prediction = self.model.predict([payload])[0]
            human_name = self.label_list[prediction]
            return {"result": human_name}


    # Deploy model.
    BoostingModel.deploy(model)

    # Query it!
    sample_request_input = {"vector": [1.2, 1.0, 1.1, 0.9]}
    response = requests.get("http://localhost:8000/iris", json=sample_request_input)
    print(response.text)
    # Result:
    # {
    #     "result": "versicolor"
    # }
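
Outside of Python, the same deployment can be queried over HTTP; a rough curl equivalent of the request above, assuming Serve's default address of ``localhost:8000``, is:

.. code-block:: bash

    # Send the same feature vector as a JSON body to the /iris endpoint.
    $ curl -X GET "http://localhost:8000/iris" \
        -H "Content-Type: application/json" \
        -d '{"vector": [1.2, 1.0, 1.1, 0.9]}'
    # Expected output (if the model predicts the same class):
    # {"result": "versicolor"}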

.. _`Ray Serve`: https://docs.ray.io/en/master/serve/index.html

More Information
----------------

- `Documentation`_
- `Tutorial`_
- `Blog`_
- `Ray 1.0 Architecture whitepaper`_ **(new)**
- `Exoshuffle: large-scale data shuffle in Ray`_ **(new)**
- `RLlib paper`_
- `RLlib flow paper`_
- `Tune paper`_

*Older documents:*

- `Ray paper`_
- `Ray HotOS paper`_

.. _`Documentation`: http://docs.ray.io/en/master/index.html
.. _`Tutorial`: https://github.com/ray-project/tutorial
.. _`Blog`: https://medium.com/distributed-computing-with-ray
.. _`Ray 1.0 Architecture whitepaper`: https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview
.. _`Exoshuffle: large-scale data shuffle in Ray`: https://arxiv.org/abs/2203.05072
.. _`Ray paper`: https://arxiv.org/abs/1712.05889
.. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924
.. _`RLlib paper`: https://arxiv.org/abs/1712.09381
.. _`RLlib flow paper`: https://arxiv.org/abs/2011.12719
.. _`Tune paper`: https://arxiv.org/abs/1807.05118

Getting Involved
----------------

.. list-table::
   :widths: 25 50 25 25
   :header-rows: 1

   * - Platform
     - Purpose
     - Estimated Response Time
     - Support Level
   * - `Discourse Forum`_
     - For discussions about development and questions about usage.
     - < 1 day
     - Community
   * - `GitHub Issues`_
     - For reporting bugs and filing feature requests.
     - < 2 days
     - Ray OSS Team
   * - `Slack`_
     - For collaborating with other Ray users.
     - < 2 days
     - Community
   * - `StackOverflow`_
     - For asking questions about how to use Ray.
     - 3-5 days
     - Community
   * - `Meetup Group`_
     - For learning about Ray projects and best practices.
     - Monthly
     - Ray DevRel
   * - `Twitter`_
     - For staying up-to-date on new features.
     - Daily
     - Ray DevRel

.. _`Discourse Forum`: https://discuss.ray.io/
.. _`GitHub Issues`: https://github.com/ray-project/ray/issues
.. _`StackOverflow`: https://stackoverflow.com/questions/tagged/ray
.. _`Meetup Group`: https://www.meetup.com/Bay-Area-Ray-Meetup/
.. _`Twitter`: https://twitter.com/raydistributed
.. _`Slack`: https://forms.gle/9TSdDYUgxYs8SA9e8