
.. _cluster-FAQ:

===
FAQ
===

These are some frequently asked questions that we've seen pop up about using Ray clusters.
If you still have questions after reading this FAQ, please reach out on
`our Discourse <https://discuss.ray.io/>`__!

Do Ray clusters support multi-tenancy?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Yes, you can run multiple :ref:`jobs <jobs-overview>` from different users simultaneously in a Ray cluster,
but it's NOT recommended in production.
The reason is that Ray currently lacks some features needed for multi-tenancy in production:

* Ray doesn't provide strong resource isolation:
  Ray :ref:`resources <core-resources>` are logical and they don't limit the physical resources a task or actor can use while running.
  This means simultaneous jobs can interfere with each other, which makes them less reliable to run in production.
* Ray doesn't support priorities: all jobs, tasks, and actors have the same priority, so there is no way to prioritize important jobs under load.
* Ray doesn't support access control: jobs have full access to a Ray cluster and all of the resources within it.

On the other hand, you can run the same job multiple times on the same cluster to save the cluster startup time.

.. note::

  A Ray :ref:`namespace <namespaces-guide>` is just a logical grouping of jobs and named actors. Unlike a Kubernetes namespace, it doesn't provide any other multi-tenancy functions like resource quotas.
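
For reference, reusing a single cluster for several jobs can look like the sketch below with the Ray Jobs CLI; the address and script names are placeholders, not part of any recommended setup:

.. code:: bash

  # Sketch: several jobs submitted to one running cluster (address and
  # scripts are placeholders); each submission runs as a separate Ray job.
  ray job submit --address http://127.0.0.1:8265 --no-wait -- python job_a.py
  ray job submit --address http://127.0.0.1:8265 --no-wait -- python job_b.py

  # List the jobs on the cluster later.
  ray job list --address http://127.0.0.1:8265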

I have multiple Ray users. What's the right way to deploy Ray for them?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It's recommended to start a Ray cluster for each user so that their workloads are isolated.
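
As a rough sketch of that layout (hostnames are placeholders and we assume each user has their own set of machines), every user runs their own head node and attaches only their own workers to it:

.. code:: bash

  # Sketch: one isolated cluster per user; hostnames are placeholders.
  # User A, on their own machines:
  ray start --head --port 6379          # on user A's head machine
  ray start --address head-a:6379       # on each of user A's worker machines

  # User B, on a separate set of machines:
  ray start --head --port 6379          # on user B's head machine
  ray start --address head-b:6379       # on each of user B's worker machines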

What is the difference between ``--node-ip-address`` and ``--address``?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When starting a head node on a machine with more than one network address, you
may need to specify the externally-available address so worker nodes can
connect. This is done with:

.. code:: bash

  ray start --head --node-ip-address xx.xx.xx.xx --port nnnn

Then when starting a worker node, use this command to connect to the head node:

.. code:: bash

  ray start --address xx.xx.xx.xx:nnnn

What does a worker node failure to connect look like?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the worker node cannot connect to the head node, you should see this error:

.. code:: text

  Unable to connect to GCS at xx.xx.xx.xx:nnnn. Check that (1) Ray GCS with
  matching version started successfully at the specified address, and (2)
  there is no firewall setting preventing access.

The most likely cause is that the worker node cannot access the IP address
given. You can use ``ip route get xx.xx.xx.xx`` on the worker node to start
debugging routing issues.
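
A few quick checks you can run from the worker node are sketched below; ``xx.xx.xx.xx`` and ``nnnn`` stand for the head node's address and port, and the use of ``ping`` and ``nc`` assumes those tools are available on your nodes:

.. code:: bash

  # Sketch: run these on the worker node; xx.xx.xx.xx:nnnn is the head
  # node's address and GCS port from "ray start --head --port nnnn".
  ip route get xx.xx.xx.xx      # which route/interface the worker would use
  ping -c 3 xx.xx.xx.xx         # basic reachability of the head node
  nc -vz xx.xx.xx.xx nnnn       # can a TCP connection reach the GCS port?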

You may also see failures in the log like:

.. code:: text

  This node has an IP address of xx.xx.xx.xx, while we can not found the
  matched Raylet address. This maybe come from when you connect the Ray
  cluster with a different IP address or connect a container.

This can be caused by overloading the head node with too many simultaneous
connections. The solution is to start the worker nodes more slowly.
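
For example, a launch script can stagger the workers with a short delay; the host list, SSH access, and five-second pause below are assumptions about your setup, not required values:

.. code:: bash

  # Sketch: bring workers up one at a time so the head node's GCS isn't
  # hit by all connections at once. Hosts, delay, and SSH are placeholders.
  HEAD_ADDRESS=xx.xx.xx.xx:nnnn
  for host in worker1 worker2 worker3; do
    ssh "$host" "ray start --address=$HEAD_ADDRESS"
    sleep 5   # give the head node time to register this worker
  done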

I am having problems getting my SLURM cluster to work
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There seems to be a class of issues when starting Ray on SLURM clusters. While we
have not been able to pin down the exact causes (as of June 2023), work has
been done to mitigate some of the resource contention. Some of the issues
reported:

* Using a machine with a large number of CPUs and starting one worker per CPU
  together with OpenBLAS (as used in NumPy) may allocate too many threads. This
  is a `known OpenBLAS limitation`_ and can be mitigated by limiting OpenBLAS
  to one thread per process, as explained in the link and sketched after this list.
* Resource allocation is not what was expected: usually too many CPUs per node
  were allocated. Best practice is to inspect your SLURM allocation without
  starting Ray to verify that it is what you expect (see the sketch after this list).
  For more detailed information see :ref:`ray-slurm-deploy`.

.. _`known OpenBLAS limitation`: https://github.com/xianyi/OpenBLAS/wiki/faq#how-can-i-use-openblas-in-multi-threaded-applications
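
A minimal sketch of both mitigations inside a SLURM batch script follows; the environment variables cap BLAS threading, and the SLURM commands only report the allocation, so adapt them to your own job script:

.. code:: bash

  # Sketch for a SLURM batch script; adapt to your own setup.

  # Limit OpenBLAS (and OpenMP-backed BLAS builds) to one thread per
  # process before any Ray workers start.
  export OPENBLAS_NUM_THREADS=1
  export OMP_NUM_THREADS=1

  # Inspect the allocation before starting Ray to confirm it matches
  # what you requested.
  scontrol show job "$SLURM_JOB_ID" | grep -E "NumNodes|NumCPUs"
  srun --ntasks-per-node=1 bash -c 'echo "$(hostname): $(nproc) CPUs"'

  # Only then start Ray, e.g. "ray start --head ..." on the head node.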