
Tips for testing Ray programs
=============================

Ray programs can be a little tricky to test due to the nature of parallel programs. We've put together a list of tips and tricks for common testing practices for Ray programs.

.. contents::
    :local:
Tip 1: Fixing the resource quantity with ``ray.init(num_cpus=...)``
-------------------------------------------------------------------

By default, ``ray.init()`` detects the number of CPUs and GPUs on your local machine/cluster.

However, your testing environment may have a significantly lower number of resources. For example, the TravisCI build environment only has `2 cores <https://docs.travis-ci.com/user/reference/overview/>`_.

If tests are written to depend on ``ray.init()``, they may be implicitly written in a way that relies on a larger multi-core machine. This may easily result in tests exhibiting unexpected, flaky, or faulty behavior that is hard to reproduce.

To overcome this, you should override the detected resources by setting them in ``ray.init``, for example: ``ray.init(num_cpus=2)``.
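
For instance, here is a minimal sketch of a test that pins the CPU count so it behaves the same on a 2-core CI machine as on a 16-core workstation (the test and task names are illustrative, not from the original docs):

.. code-block:: python

    import ray


    def test_with_fixed_resources():
        # Pin resources so the test does not depend on the machine it runs on.
        ray.init(num_cpus=2)
        try:
            @ray.remote
            def f(x):
                return x

            assert ray.get(f.remote(1)) == 1
        finally:
            ray.shutdown()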
.. _local-mode-tips:

Tip 2: Use ``ray.init(local_mode=True)`` if possible
----------------------------------------------------
A test suite for a Ray program may take longer to run than other test suites. One common culprit for long test durations is the overhead from inter-process communication.

Ray provides a local mode for running Ray programs in a single process via ``ray.init(local_mode=True)``. This can be especially useful for testing since it allows you to reduce or remove inter-process communication.

However, there are some caveats with using local mode. You should not use it if:

1. Your application depends on setting environment variables per process.
2. Your application has recursive actor calls.
3. Your remote actor/task sets any sort of process-level global variables.
4. You are using async actors.

Also note that if you are using GPUs, you must set the ``CUDA_VISIBLE_DEVICES`` environment variable to a comma-separated list of your GPU device IDs.
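
If none of these caveats apply, switching to local mode is a one-argument change to ``ray.init`` (a minimal sketch; the test and task names are illustrative):

.. code-block:: python

    import ray


    def test_in_local_mode():
        # Everything runs in a single process, so there is no
        # inter-process communication overhead.
        ray.init(local_mode=True, num_cpus=2)
        try:
            @ray.remote
            def double(x):
                return 2 * x

            assert ray.get(double.remote(21)) == 42
        finally:
            ray.shutdown()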
Tip 3: Sharing the Ray cluster across tests if possible
--------------------------------------------------------

It is safest to start a new Ray cluster for each test:
.. code-block:: python

    import unittest

    import ray


    class RayTest(unittest.TestCase):
        def setUp(self):
            ray.init(num_cpus=4, num_gpus=0)

        def tearDown(self):
            ray.shutdown()
However, starting and stopping a Ray cluster can actually incur a non-trivial amount of latency. For example, on a typical MacBook Pro laptop, starting and stopping can take nearly 5 seconds:

.. code-block:: bash

    python -c 'import ray; ray.init(); ray.shutdown()'  3.93s user 1.23s system 116% cpu 4.420 total

Across 20 tests, this ends up being nearly 90 seconds of added overhead.
Reusing a Ray cluster across tests can provide significant speedups to your test suite. This reduces the overhead to a constant, amortized quantity:

.. code-block:: python

    import unittest

    import ray


    class RayClassTest(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            # Start it once for the entire test suite/module.
            ray.init(num_cpus=4, num_gpus=0)

        @classmethod
        def tearDownClass(cls):
            ray.shutdown()
Depending on your application, there are certain cases where it may be unsafe to reuse a Ray cluster across tests. For example:

1. If your application depends on setting environment variables per process.
2. If your remote actor/task sets any sort of process-level global variables.
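
If you use pytest rather than ``unittest``, a module-scoped fixture achieves the same amortization (a sketch; the fixture and test names are illustrative):

.. code-block:: python

    import pytest
    import ray


    @pytest.fixture(scope="module")
    def shared_ray():
        # Started once per test module, shut down after the last test runs.
        ray.init(num_cpus=4, num_gpus=0)
        yield
        ray.shutdown()


    def test_simple_task(shared_ray):
        @ray.remote
        def f(x):
            return x

        assert ray.get(f.remote(1)) == 1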
Tip 4: Create a mini-cluster with ``ray.cluster_utils.Cluster``
---------------------------------------------------------------

If writing an application for a cluster setting, you may want to mock a multi-node Ray cluster. This can be done with the ``ray.cluster_utils.Cluster`` utility.
.. code-block:: python

    from ray.cluster_utils import Cluster

    # Starts a head node for the cluster.
    cluster = Cluster(
        initialize_head=True,
        head_node_args={
            "num_cpus": 10,
        })
After starting a cluster, you can execute a typical Ray script in the same process:

.. code-block:: python

    import ray

    ray.init(address=cluster.address)

    @ray.remote
    def f(x):
        return x

    # The same total amount of work, split across different batch sizes.
    for _ in range(1):
        ray.get([f.remote(1) for _ in range(1000)])

    for _ in range(10):
        ray.get([f.remote(1) for _ in range(100)])

    for _ in range(100):
        ray.get([f.remote(1) for _ in range(10)])

    for _ in range(1000):
        ray.get([f.remote(1) for _ in range(1)])
You can also add multiple nodes, each with different resource quantities:

.. code-block:: python

    mock_node = cluster.add_node(num_cpus=10)

    assert ray.cluster_resources()["CPU"] == 20
You can also remove nodes, which is useful when testing failure-handling logic:

.. code-block:: python

    cluster.remove_node(mock_node)

    assert ray.cluster_resources()["CPU"] == 10
See the `Cluster Util <https://github.com/ray-project/ray/blob/master/python/ray/cluster_utils.py>`_ for more details.
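
Putting this together, a mini-cluster can be wrapped in a test fixture like the sketch below (the test body is illustrative, and this assumes ``Cluster.shutdown()`` tears down the nodes it started):

.. code-block:: python

    import unittest

    import ray
    from ray.cluster_utils import Cluster


    class MultiNodeTest(unittest.TestCase):
        def setUp(self):
            self.cluster = Cluster(
                initialize_head=True,
                head_node_args={"num_cpus": 2})
            self.worker = self.cluster.add_node(num_cpus=2)
            ray.init(address=self.cluster.address)

        def tearDown(self):
            ray.shutdown()
            self.cluster.shutdown()

        def test_tasks_survive_node_removal(self):
            # Simulate a node failure; tasks should still run on the head node.
            self.cluster.remove_node(self.worker)

            @ray.remote
            def f(x):
                return x

            assert ray.get(f.remote(1)) == 1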
Tip 5: Be careful when running tests in parallel
------------------------------------------------

Since Ray starts a variety of services, it is easy to trigger timeouts if too many services are started at once. Therefore, when using tools such as `pytest-xdist <https://pypi.org/project/pytest-xdist/>`_ that run multiple tests in parallel, keep in mind that this may introduce flakiness into the test environment.
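
One possible mitigation (not from the original docs) is to serialize Ray startup across workers, for example with the third-party ``filelock`` package, so that only one test process is starting Ray's services at any moment:

.. code-block:: python

    import ray
    from filelock import FileLock  # third-party: pip install filelock


    def init_ray_serialized(**ray_init_kwargs):
        # Hypothetical helper: only one pytest-xdist worker runs ray.init()
        # at a time, reducing the chance of service-startup timeouts.
        with FileLock("/tmp/ray_init.lock"):
            ray.init(**ray_init_kwargs)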