Ray Versions

Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.

ray-2.3.1

The Ray 2.3.1 patch release contains fixes for multiple components:

Ray Data Processing

Ray Serve

Ray Core

Dashboard

ray-2.3.0

Release Highlights

  • The streaming backend for Ray Datasets is in Developer Preview. It is designed to enable terabyte-scale ML inference and training workloads. Please contact us if you'd like to try it out on your workload, or you can find the preview guide here: https://docs.google.com/document/d/1BXd1cGexDnqHAIVoxTnV3BV0sklO9UXqPwSdHukExhY/edit
  • New Information Architecture (Beta): We’ve restructured the Ray dashboard to be organized around user personas and workflows instead of entities.
  • Ray-on-Spark is now available (Preview)!: You can launch Ray clusters on Databricks and Spark clusters and run Ray applications. Check out the documentation to learn more.
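
For the Ray-on-Spark preview, here is a minimal sketch of launching a Ray cluster from a Spark driver. It assumes a Databricks/Spark environment with Ray 2.3 installed; the setup_ray_cluster parameters follow the 2.3-era preview docs and may change in later releases.

```python
import ray
from ray.util.spark import setup_ray_cluster, shutdown_ray_cluster

# Launch Ray worker processes on the Spark cluster's executors.
setup_ray_cluster(num_worker_nodes=2)

ray.init()  # connect to the Spark-hosted Ray cluster started above
print(ray.cluster_resources())

ray.shutdown()
shutdown_ray_cluster()  # tear the Ray-on-Spark cluster back down
```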

Ray Libraries

Ray AIR

💫Enhancements:

  • Add set_preprocessor method to Checkpoint (#31721)
  • Rename Keras callback and its parameters to be more descriptive (#31627)
  • Deprecate MlflowTrainableMixin in favor of setup_mlflow() function (#31295)
  • W&B
    • Have train_loop_config logged as a config (#31901)
    • Allow users to exclude config values with WandbLoggerCallback (#31624)
    • Rename WandB save_checkpoints to upload_checkpoints (#31582)
    • Add hook to get project/group for W&B integration (#31035, #31643)
    • Use Ray actors instead of multiprocessing for WandbLoggerCallback (#30847)
    • Update WandbLoggerCallback example (#31625)
  • Predictor
    • Place predictor kwargs in object store (#30932)
    • Delegate BatchPredictor stage fusion to Datasets (#31585)
    • Rename DLPredictor.call_model tensor parameter to inputs (#30574)
    • Add use_gpu to HuggingFacePredictor (#30945)
  • Checkpoints
    • Various Checkpoint improvements (#30948)
    • Implement lazy checkpointing for same-node case (#29824)
    • Automatically strip "module." from state dict (#30705)
    • Allow user to pass model to TensorflowCheckpoint.get_model (#31203)

🔨 Fixes:

  • Fix and improve support for HDFS remote storage. (#31940)
  • Use specified Preprocessor configs when using stream API. (#31725)
  • Support nested Chain in BatchPredictor (#31407)

📖Documentation:

  • Restructure API References (#32535)
  • API Deprecations (#31777, #31867)
  • Various fixes to docstrings, documentation, and examples (#30782, #30791)

🏗 Architecture refactoring:

  • Use NodeAffinitySchedulingPolicy for scheduling (#32016)
  • Internal resource management refactor (#30777, #30016)

Ray Data Processing

🎉 New Features:

  • Lazy execution by default (#31286)
  • Introduce streaming execution backend (#31579)
  • Introduce DatasetIterator (#31470)
  • Add per-epoch preprocessor (#31739)
  • Add TorchVisionPreprocessor (#30578)
  • Persist Dataset statistics automatically to log file (#30557)

💫Enhancements:

  • Async batch fetching for map_batches (#31576)
  • Add informative progress bar names to map_batches (#31526)
  • Provide a size-in-bytes estimate for MongoDB blocks (#31930)
  • Add support for dynamic block splitting to actor pool (#31715)
  • Improve str/repr of Dataset to include execution plan (#31604)
  • Deal with nested Chain in BatchPredictor (#31407)
  • Allow MultiHotEncoder to encode arrays (#31365)
  • Allow specifying batch_size when reading Parquet files (#31165)
  • Add zero-copy batch API for ds.map_batches() (#30000); see the sketch after this list
  • Save text datasets in Arrow Table format (#30963)
  • Return ndarray dicts for single-column tabular datasets (#30448)
  • Execute randomize_block_order eagerly if it's the last stage for ds.schema() (#30804)
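
As a quick illustration of the zero-copy batch API mentioned above, here is a minimal sketch assuming Ray 2.3: with zero_copy_batch=True, Ray may hand the UDF read-only NumPy batches backed directly by shared memory, skipping a copy as long as the UDF does not mutate its input.

```python
import ray

ds = ray.data.range_table(1000)  # tabular dataset with a single "value" column

def add_one(batch):
    # Return a new array instead of mutating the (possibly read-only) input.
    return {"value": batch["value"] + 1}

ds = ds.map_batches(add_one, batch_format="numpy", zero_copy_batch=True)
print(ds.take(3))
```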

🔨 Fixes:

  • Don't drop first dataset when peeking DatasetPipeline (#31513)
  • Handle np.array(dtype=object) constructor for ragged ndarrays (#31670)
  • Emit warning when starting Dataset execution with no CPU resources available (#31574)
  • Fix a bug where input blocks were eagerly cleared (#31459)
  • Fix Imputer failing with categorical dtype (#31435)
  • Fix schema unification for Datasets with ragged Arrow arrays (#31076)
  • Fix Discretizers transforming ignored cols (#31404)
  • Fix to_tf when the input feature_columns is a list. (#31228)
  • Raise an error if the user calls Dataset.__iter__ (#30575)

📖Documentation:

  • Refactor Ray Data API documentation (#31204)
  • Add seealso to map-related methods (#30579)

Ray Train

🎉 New Features:

  • Add option for per-epoch preprocessor (#31739)

💫Enhancements:

  • Change default NCCL_SOCKET_IFNAME to blacklist veth (#31824)
  • Introduce DatasetIterator for bulk and streaming ingest (#31470)
  • Clarify which RunConfig is used when there are multiple places to specify it (#31959)
  • Change ScalingConfig to be optional for DataParallelTrainers if already in Tuner param_space (#30920)

🔨 Fixes:

  • Use specified Preprocessor configs when using stream API. (#31725)
  • Fix off-by-one AIR Trainer checkpoint ID indexing on restore (#31423)
  • Force GBDTTrainer to use distributed loading for Ray Datasets (#31079)
  • Fix bad case in ScalingConfig->RayParams (#30977)
  • Don't raise TuneError on fail_fast="raise" (#30817)
  • Report only once in SklearnTrainer (#30593)
  • Ensure GBDT PGFs match passed ScalingConfig (#30470)

📖Documentation:

  • Restructure API References (#32535)
  • Remove Ray Client references from Train docs/examples (#32321)
  • Various fixes to docstrings, documentation, and examples (#29463, #30492, #30543, #30571, #30782, #31692, #31735)

🏗 Architecture refactoring:

  • API Deprecations (#31763)

Ray Tune

💫Enhancements:

  • Improve trainable serialization error (#31070)
  • Add support for Nevergrad optimizer with extra parameters (#31015)
  • Add timeout for experiment checkpoint syncing to cloud (#30855)
  • Move validate_upload_dir to Syncer (#30869)
  • Enable experiment restore from moved cloud uri (#31669)
  • Save and restore stateful callbacks as part of experiment checkpoint (#31957)

🔨 Fixes:

  • Do not default to reuse_actors=True when mixins are used (#31999)
  • Only keep cached actors if search has not ended (#31974)
  • Fix best trial in ProgressReporter with nan (#31276)
  • Make ResultGrid return cloud checkpoints (#31437)
  • Wait for final experiment checkpoint sync to finish (#31131)
  • Fix CheckpointConfig validation for function trainables (#31255)
  • Fix checkpoint directory assignment for new checkpoints created after restoring a function trainable (#31231)
  • Fix AxSearch save and nan/inf result handling (#31147)
  • Fix AxSearch search space conversion for fixed list hyperparameters (#31088)
  • Restore searcher and scheduler properly on Tuner.restore (#30893)
  • Fix progress reporter sort_by_metric with nested metrics (#30906)
  • Don't raise TuneError on fail_fast="raise" (#30817)
  • Fix duplicate printing when trial is done (#30597)

📖Documentation:

  • Restructure API references (#32449)
  • Remove Ray Client references from Tune docs/examples (#32321)
  • Various fixes to docstrings, documentation, and examples (#29581, #30782, #30571, #31045, #31793, #32505)

🏗 Architecture refactoring:

  • Deprecate passing a custom trial executor (#31792)
  • Move signal handling into separate method (#31004)
  • Update staged resources in a fixed counter for faster lookup (#32087)
  • Rename overwrite_trainable argument in Tuner restore to trainable (#32059)

Ray Serve

🎉 New Features:

  • Add a Serve Python API to support multiple applications (#31589)

💫Enhancements:

  • Add exponential backoff when retrying replicas (#31436)
  • Enable Log Rotation on Serve (#31844)
  • Use tasks/futures for asyncio.wait (#31608)
  • Change target_num_ongoing_requests_per_replica to positive float (#31378)

🔨 Fixes:

  • Upgrade deprecated calls (#31839)
  • Change Gradio integration to take a builder function to avoid serialization issues (#31619)
  • Add initial health check before marking a replica as RUNNING (#31189)

📖Documentation:

  • Document end-to-end timeout in Serve (#31769)
  • Document Gradio visualization (#28310)

RLlib

🎉 New Features:

  • Gymnasium is now supported. (Notes)
  • Connectors are now activated by default (#31693, #30388, #31618, #31444, #31092)
  • Contribution of LeelaChessZero algorithm for playing chess in a MultiAgent env. (#31480)

💫Enhancements:

  • [RLlib] Error out if action_dict is empty in MultiAgentEnv. (#32129)
  • [RLlib] Upgrade tf eager code to no longer use experimental_relax_shapes (but reduce_retracing instead). (#29214)
  • [RLlib] Reduce SampleBatch counting complexity (#30936)
  • [RLlib] Use PyTorch vectorized max() and sum() in SampleBatch.__init__ when possible (#28388)
  • [RLlib] Support multi-gpu CQL for torch (tf already supported). (#31466)
  • [RLlib] Introduce IMPALA off_policyness test with GPU (#31485)
  • [RLlib] Properly serialize and restore StateBufferConnector states for policy stashing (#31372)
  • [RLlib] Clean up deprecated concat_samples calls (#31391)
  • [RLlib] Better support MultiBinary spaces by treating Tuples as superset of them in ComplexInputNet. (#28900)
  • [RLlib] Add backward compatibility to MeanStdFilter to restore from older checkpoints. (#30439)
  • [RLlib] Clean up some signatures for compute_actions. (#31241)
  • [RLlib] Simplify logging configuration. (#30863)
  • [RLlib] Remove native Keras Models. (#30986)
  • [RLlib] Convert PolicySpec to a readable format when converting to_dict(). (#31146)
  • [RLlib] Issue 30394: Add proper __str__() method to PolicyMap. (#31098)
  • [RLlib] Issue 30840: Option to only checkpoint policies that are trainable. (#31133)
  • [RLlib] Deprecate (delete) contrib folder. (#30992)
  • [RLlib] Better behavior if the user does not specify a stopping condition in the RLlib CLI. (#31078)
  • [RLlib] PolicyMap LRU cache enhancements: Swap out policies (instead of GC'ing and recreating) + use Ray object store (instead of file system). (#29513)
  • [RLlib] AlgorithmConfig.overrides() to replace multiagent->policies->config and evaluation_config dicts. (#30879)
  • [RLlib] deprecation_warning(.., error=True) should raise ValueError, not DeprecationWarning. (#30255)
  • [RLlib] Add gym.spaces.Text serialization. (#30794)
  • [RLlib] Convert MultiAgentBatch to SampleBatch in offline_rl.py. (#30668)
  • [RLlib; Tune] Make Algorithm.train() return Tune-style config dict (instead of AlgorithmConfig object). (#30591)

🔨 Fixes:

  • [RLlib] Fix waterworld example and test (#32117)
  • [RLlib] Change Waterworld v3 to v4 and reinstate indep. MARL test case w/ pettingzoo. (#31820)
  • [RLlib] Fix OPE checkpointing. Save method name in configuration dict. (#31778)
  • [RLlib] Fix worker state restoration. (#31644)
  • [RLlib] Replace ordinary pygame imports by try_import_..(). (#31332)
  • [RLlib] Remove crude VR checks in agent collector. (#31564)
  • [RLlib] Fixed the 'RestoreWeightsCallback' example script. (#31601)
  • [RLlib] Issue 28428: QMix not working w/ GPUs. (#31299)
  • [RLlib] Fix using yaml files with empty stopping conditions. (#31363)
  • [RLlib] Issue 31174: Move all checks into AlgorithmConfig.validate() (even simple ones) to avoid errors when using tune hyperopt objects. (#31396)
  • [RLlib] Fix tensorflow_probability imports. (#31331)
  • [RLlib] Issue 31323: BC/MARWIL/CQL do work with multi-GPU (but config validation prevents them from running in this mode). (#31393)
  • [RLlib] Issue 28849: DT fails with num_gpus=1. (#31297)
  • [RLlib] Fix PolicyMap.__del__() to also remove a deleted policy ID from the internal deque. (#31388)
  • [RLlib] Use get_model_v2() instead of get_model() with MADDPG. (#30905)
  • [RLlib] Policy mapping fn can not be called with keyword arguments. (#31141)
  • [RLlib] Issue 30213: Appending RolloutMetrics to sampler outputs should happen after(!) all callbacks (such that custom metrics for last obs are still included). (#31102)
  • [RLlib] Make convert_to_torch tensor adhere to docstring. (#31095)
  • [RLlib] Fix convert to torch tensor (#31023)
  • [RLlib] Issue 30221: random policy does not handle nested spaces. (#31025)
  • [RLlib] Fix crashing remote envs example (#30562)
  • [RLlib] Recursively look up the original space from obs_space (#30602)

📖Documentation:

  • [RLlib; docs] Change links and references in code and docs to "Farama foundation's gymnasium" (from "OpenAI gym"). (#32061)

Ray Core and Ray Clusters

Ray Core

🎉 New Features:

  • Task Events Backend: Ray aggregates all submitted task information to provide better observability (#31840, #31761, #31278, #31247, #31316, #30934, #30979, #31207, #30867, #30829, #31524, #32157). This backend powers features like the task state API, the advanced progress bar, and the Ray timeline.
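
As a hedged sketch of what the new backend exposes, the snippet below queries task state through the 2.3-era state API under ray.experimental.state (the same data also backs the `ray list tasks` and `ray summary tasks` CLI commands); module paths may differ in later releases.

```python
import ray
from ray.experimental.state.api import list_tasks

ray.init()

@ray.remote
def f():
    return 1

ray.get([f.remote() for _ in range(3)])

# Each entry describes one submitted task (name, state, node, etc.).
for task in list_tasks():
    print(task)
```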

💫Enhancements:

  • Remote generators now work for Ray actors and Ray Client (#31700, #31710).
  • Revamped the default scheduling strategy, improving worker startup performance by up to 8x for embarrassingly parallel workloads (#31934, #31868).
  • Cleaned up worker code and allowed workers to lazily bind to jobs (#31836, #31846, #30349, #31375).
  • A single Ray cluster can now scale up to 2000 nodes and 20k actors (#32131, #30131, #31939, #30166, #30460, #30563).
  • The out-of-memory prevention enhancement is now GA, with more robust worker-killing policies and a better user experience (#32217, #32361, #32219, #31768, #32107, #31976, #31272, #31509, #31230); a configuration sketch follows this list.
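
Here is the configuration sketch referenced above, assuming the documented OOM-monitor environment variables RAY_memory_usage_threshold and RAY_memory_monitor_refresh_ms (names per the 2.3-era docs); both must be set before the cluster starts.

```python
import os
import ray

# These must be set before the cluster starts (i.e., before ray.init()).
# Kill workers once node memory usage exceeds 90% of capacity.
os.environ["RAY_memory_usage_threshold"] = "0.9"
# Check memory usage every 250 ms; a value of 0 disables the monitor.
os.environ["RAY_memory_monitor_refresh_ms"] = "250"

ray.init()
```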

🔨 Fixes:

  • Improve garbage collection upon job termination (#32127, #31155)
  • Fix opencensus protobuf bug (#31632)
  • Support python 3.10 for runtime_env conda (#30970)
  • Fix crashes and memory leaks (#31640, #30476, #31488, #31917, #30761, #31018)

📖Documentation:

  • Deprecation (#31845, #31140, #31528)

Ray Clusters

🎉 New Features:

  • Ray-on-Spark is now available as Preview! (#28771, #31397, #31962)

💫Enhancements:

  • [observability] Better memory formatting for ray status and autoscaler (#32337)
  • [autoscaler] Add flag to disable periodic cluster status log. (#31869)

🔨 Fixes:

  • [observability][autoscaler] Ensure pending nodes is reset to 0 after scaling (#32085)
  • Make ~/.bashrc optional in cluster launcher commands (#32393)

📖Documentation:

  • Improvements to job submission
  • Remove references to Ray Client

Dashboard

🎉 New Features:

  • New Information Architecture (beta): We’ve restructured the Ray dashboard to be organized around user personas and workflows instead of entities. For developers, the jobs and actors tab will be most useful. For infrastructure engineers, the cluster tab may be more valuable.
  • Advanced progress bar: A task visualization that allows you to see the progress of all your Ray tasks.
  • Timeline view: We’ve added a button to download detailed timeline data about your Ray job. You can then open the downloaded data in the Perfetto open-source visualization tool.
  • More metadata tables. You can now see placement groups, tasks, actors, and other information related to your jobs.

📖Documentation:

  • We’ve restructured the documentation to make the dashboard documentation more prominent
  • We’ve improved the documentation around setting up Prometheus and Grafana for metrics.

Many thanks to all those who contributed to this release!

@minerharry, @scottsun94, @iycheng, @DmitriGekhtman, @jbedorf, @krfricke, @simonsays1980, @eltociear, @xwjiang2010, @ArturNiederfahrenhorst, @richardliaw, @avnishn, @WeichenXu123, @Capiru, @davidxia, @andreapiso, @amogkam, @sven1977, @scottjlee, @kylehh, @yhna940, @rickyyx, @sihanwang41, @n30111, @Yard1, @sriram-anyscale, @Emiyalzn, @simran-2797, @cadedaniel, @harelwa, @ijrsvt, @clarng, @pabloem, @bveeramani, @lukehsiao, @angelinalg, @dmatrix, @sijieamoy, @simon-mo, @jbesomi, @YQ-Wang, @larrylian, @c21, @AndreKuu, @maxpumperla, @architkulkarni, @wuisawesome, @justinvyu, @zhe-thoughts, @matthewdeng, @peytondmurray, @kevin85421, @tianyicui-tsy, @cassidylaidlaw, @gvspraveen, @scv119, @kyuyeonpooh, @Siraj-Qazi, @jovany-wang, @ericl, @shrekris-anyscale, @Catch-Bull, @jianoaix, @christy, @MisterLin1995, @kouroshHakha, @pcmoritz, @csko, @gjoliver, @clarkzinzow, @SongGuyang, @ckw017, @ddelange, @alanwguo, @Dhul-Husni, @Rohan138, @rkooo567, @fzyzcjy, @chaokunyang, @0x2b3bfa0, @zoltan-fedor, @Chong-Li, @crypdick, @jjyao, @emmyscode, @stephanie-wang, @starpit, @smorad, @nikitavemuri, @zcin, @tbukic, @ayushthe1, @mattip

ray-2.2.0

Release Highlights

Ray 2.2 is a stability-focused release, featuring stability improvements across many Ray components.

  • Ray Jobs API is now GA. The Ray Jobs API allows you to submit locally developed applications to a remote Ray Cluster for execution. It simplifies the experience of packaging, deploying, and managing a Ray application.
  • Ray Dashboard has received a number of improvements, such as the ability to see CPU flame graphs of your Ray workers and new metrics for memory usage.
  • The Out-Of-Memory (OOM) Monitor is now enabled by default. This will increase the stability of memory-intensive applications on top of Ray.
  • [Ray Data] We’ve heard numerous reports that Ray Data can run into out-of-memory or performance issues when files are too large. In this release, we’re enabling dynamic block splitting by default, which addresses these issues by avoiding holding too much data in memory.

Ray Libraries

Ray AIR

🎉 New Features:

  • Add a NumPy first path for Torch and TensorFlow Predictors (#28917)

💫Enhancements:

  • Suppress "NumPy array is not writable" error in torch conversion (#29808)
  • Add node rank and local world size info to session (#29919)

🔨 Fixes:

  • Fix MLflow database integrity error (#29794)
  • Fix ResourceChangingScheduler dropping PlacementGroupFactory args (#30304)
  • Fix bug passing 'raise' to FailureConfig (#30814)
  • Fix reserved CPU warning if no CPUs are used (#30598)

📖Documentation:

  • Fix examples and docs to specify batch_format in BatchMapper (#30438)

🏗 Architecture refactoring:

  • Deprecate Wandb mixin (#29828)
  • Deprecate Checkpoint.to_object_ref and Checkpoint.from_object_ref (#30365)

Ray Data Processing

🎉 New Features:

  • Support all PyArrow versions released by Apache Arrow (#29993, #29999)
  • Add select_columns() to select a subset of columns (#29081)
  • Add write_tfrecords() to write TFRecord files (#29448); see the sketch after this list
  • Support MongoDB data source (#28550)
  • Enable dynamic block splitting by default (#30284)
  • Add from_torch() to create dataset from Torch dataset (#29588)
  • Add from_tf() to create dataset from TensorFlow dataset (#29591)
  • Allow setting batch_size in BatchMapper (#29193)
  • Support read/write from/to local node file system (#29565)
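
Here is the sketch referenced above: a minimal example of select_columns() and write_tfrecords(), assuming Ray 2.2 with TensorFlow installed (write_tfrecords emits tf.train.Example records); /tmp/records is a placeholder output directory.

```python
import ray

ds = ray.data.from_items([{"a": i, "b": i * 2, "c": i * 3} for i in range(4)])

# Keep only a subset of columns.
ds = ds.select_columns(cols=["a", "b"])

# Write the result out as TFRecord files.
ds.write_tfrecords("/tmp/records")
```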

💫Enhancements:

  • Add include_paths in read_images() to return image file path (#30007)
  • Print out Dataset statistics automatically after execution (#29876)
  • Cast tensor extension type to opaque object dtype in to_pandas() and to_dask() (#29417)
  • Encode number of dimensions in variable-shaped tensor extension type (#29281)
  • Fuse AllToAllStage and OneToOneStage with compatible remote args (#29561)
  • Change read_tfrecords() output from Pandas to Arrow format (#30390)
  • Handle all Ray errors in task compute strategy (#30696)
  • Allow nested Chain preprocessors (#29706)
  • Warn user if missing columns and support str exclude in Concatenator (#29443)
  • Raise ValueError if preprocessor column doesn't exist (#29643)

🔨 Fixes:

  • Support custom resource with remote args for random_shuffle() (#29276)
  • Support custom resource with remote args for random_shuffle_each_window() (#29482)
  • Add PublicAPI annotation to preprocessors (#29434)
  • Tensor extension column concatenation fixes (#29479)
  • Fix iter_batches() to not return empty batch (#29638)
  • Change map_batches() to fetch input blocks on-demand (#29289)
  • Change take_all() to not accept limit argument (#29746)
  • Convert between block and batch correctly for map_groups() (#30172)
  • Fix stats() call causing Dataset schema to be unset (#29635)
  • Raise error when batch_format is not specified for BatchMapper (#30366)
  • Fix ndarray representation of single-element ragged tensor slices (#30514)

📖Documentation:

  • Improve map_batches() documentation about execution model and UDF pickle-ability requirement (#29233)
  • Improve to_tf() docstring (#29464)

Ray Train

🎉 New Features:

  • Added MosaicTrainer (#29237, #29620, #29919)

💫Enhancements:

  • Fast fail upon single worker failure (#29927)
  • Optimize checkpoint conversion logic (#29785)

🔨 Fixes:

  • Propagate DatasetContext to training workers (#29192)
  • Show correct error message on training failure (#29908)
  • Fix prepare_data_loader with enable_reproducibility (#30266)
  • Fix usage of NCCL_BLOCKING_WAIT (#29562)

📖Documentation:

  • Deduplicate Train examples (#29667)

🏗 Architecture refactoring:

  • Hard deprecate train.report (#29613)
  • Remove deprecated Train modules (#29960)
  • Deprecate old prepare_model DDP args (#30364)

Ray Tune

🎉 New Features:

  • Make Tuner.restore work with relative experiment paths (#30363)
  • Tuner.restore from a local directory that has moved (#29920)

💫Enhancements:

  • with_resources takes in a ScalingConfig (#30259)
  • Keep resource specifications when nesting with_resources in with_parameters (#29740)
  • Add trial_name_creator and trial_dirname_creator to TuneConfig (#30123)
  • Add option to not override the working directory (#29258)
  • Only convert a BaseTrainer to Trainable once in the Tuner (#30355)
  • Dynamically identify PyTorch Lightning Callback hooks (#30045)
  • Make remote_checkpoint_dir work with query strings (#30125)
  • Make cloud checkpointing retry configurable (#30111)
  • Sync experiment-checkpoints more often (#30187)
  • Update generate_id algorithm (#29900)

🔨 Fixes:

  • Catch SyncerCallback failure with dead node (#29438)
  • Do not warn in BayesOpt w/ Uniform sampler (#30350)
  • Fix ResourceChangingScheduler dropping PGF args (#30304)
  • Fix Jupyter output with Ray Client and Tuner (#29956)
  • Fix tests related to TUNE_ORIG_WORKING_DIR env variable (#30134)

📖Documentation:

  • Add user guide for analyzing results (using ResultGrid and Result) (#29072)
  • Tune checkpointing and Tuner restore docfix (#29411)
  • Fix and clean up PBT examples (#29060)
  • Fix TrialTerminationReporter in docs (#29254)

🏗 Architecture refactoring:

  • Remove hard deprecated SyncClient/Syncer (#30253)
  • Deprecate Wandb mixin, move to setup_wandb() function (#29828)

Ray Serve

🎉 New Features:

  • Guard for high latency requests (#29534)
  • Java API Support (blog)

💫Enhancements:

  • Serve K8s HA benchmarking (#30278)
  • Add method info for http metrics (#29918)

🔨 Fixes:

  • Fix log format error (#28760)
  • Inherit previous deployment num_replicas (#29686)
  • Polish serve run deploy message (#29897)
  • Remove calls to get_event_loop, which is deprecated in Python 3.10

RLlib

🎉 New Features:

  • Fault tolerant, elastic WorkerSets: An asynchronous Ray Actor manager class is now used inside all of RLlib’s Algorithms, adding fully flexible fault tolerance to rollout workers and workers used for evaluation. If one or more workers (which are Ray actors) fail, e.g. due to a spot instance going down, the RLlib Algorithm will now flexibly wait it out and periodically try to recreate the failed workers. In the meantime, only the remaining healthy workers are used for sampling and evaluation. (#29938, #30118, #30334, #30252, #29703, #30183, #30327, #29953)

💫Enhancements:

  • RLlib CLI: A new and enhanced RLlib command line interface (CLI) has been added, allowing you to automatically download example configuration files, use Python-based config files (defining an AlgorithmConfig object), achieve better interoperability between training and evaluation runs, and more. For a detailed overview of what has changed, check out the new CLI documentation. (#29204, #29459, #30526, #29661, #29972)
  • Checkpoint overhaul: Algorithm checkpoints and Policy checkpoints are now more cohesive and transparent. All checkpoints are now characterized by a directory (with files and possibly sub-directories), rather than a single pickle file; both the Algorithm and Policy classes now have a utility static method (from_checkpoint()) for directly instantiating instances from a checkpoint directory without knowing the original configuration used or any other information (having the checkpoint is sufficient); see the sketch after this list. For a detailed overview, see here. (#28812, #29772, #29370, #29520, #29328)
  • A new metric for APPO/IMPALA/PPO has been added that measures off-policy’ness: the difference between the number of gradient updates the sampler policy has received so far and the number the trained policy has received. (#29983)
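
Here is the checkpoint sketch referenced above, assuming Ray 2.2 and a previously saved Algorithm checkpoint; the path is a placeholder.

```python
from ray.rllib.algorithms.algorithm import Algorithm
from ray.rllib.policy.policy import Policy

# Rebuild a full Algorithm from a checkpoint directory, without needing
# the original configuration.
algo = Algorithm.from_checkpoint("/tmp/ppo_checkpoint")

# Or restore only the policies contained in that checkpoint.
policies = Policy.from_checkpoint("/tmp/ppo_checkpoint")
```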

🏗 Architecture refactoring:

  • AlgorithmConfig classes: All of RLlib’s Algorithms, RolloutWorkers, and other important classes now use AlgorithmConfig objects under the hood, instead of python config dicts. It is no longer recommended (though still supported) to create a new algorithm (or a Tune+RLlib experiment) using a python dict as configuration; see the sketch after this list. For more details on how to convert your scripts to the new AlgorithmConfig design, see here. (#29796, #30020, #29700, #29799, #30096, #29395, #29755, #30053, #29974, #29854, #29546, #30042, #29544, #30079, #30486, #30361)
  • Major progress was made on the new Connector API and making sure it can be used (tentatively) with the “config.rollouts(enable_connectors=True)” flag. Will be fully supported, across all of RLlib’s algorithms, in Ray 2.3. (#30307, #30434, #30459, #30308, #30332, #30320, #30383, #30457, #30446, #30024, #29064, #30398, #29385, #30481, #30241, #30285, #30423, #30288, #30313, #30220, #30159)
  • Progress was made on the upcoming RLModule/RLTrainer/RLOptimizer APIs. (#30135, #29600, #29599, #29449, #29642)
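
Here is the AlgorithmConfig sketch referenced above, assuming Ray 2.2 and PPO; CartPole-v1 is just an example environment.

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")
    .rollouts(num_rollout_workers=2)
    .framework("torch")
    .training(lr=5e-5, train_batch_size=4000)
)

algo = config.build()  # replaces passing a raw python config dict
print(algo.train()["episode_reward_mean"])
```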

🔨 Fixes:

  • Various bug fixes: #25925, #30279, #30478, #30461, #29867, #30099, #30185, #29222, #29227, #29494, #30257, #29798, #30176, #29648, #30331

Ray Core and Ray Clusters

Ray Core

💫Enhancements:

  • The Ray Jobs API has graduated from Beta to GA. This means Ray Jobs will maintain API backward compatibility.
  • Run Ray job entrypoint commands (“driver scripts”) on worker nodes by specifying entrypoint_num_cpus, entrypoint_num_gpus, or entrypoint_resources (#28564, #28203); see the sketch after this list.
  • (Beta) OpenAPI spec for Ray Jobs REST API (#30417)
  • Improved the Ray health-checking mechanism to reduce how often the GCS mistakenly marks raylets as failed when it is overloaded. (#29346, #29442, #29389, #29924)
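
Here is the Jobs API sketch referenced above, assuming a cluster whose dashboard listens at http://127.0.0.1:8265; my_script.py is a placeholder entrypoint.

```python
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("http://127.0.0.1:8265")

job_id = client.submit_job(
    entrypoint="python my_script.py",
    runtime_env={"pip": ["requests"]},
    # Reserve 1 CPU so the driver script can be scheduled on a worker node.
    entrypoint_num_cpus=1,
)
print(client.get_job_status(job_id))
```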

🔨 Fixes:

  • Various fixes for hanging / deadlocking (#29491, #29763, #30371, #30425)
  • Set OMP_NUM_THREADS to num_cpus required by task/actors by default (#30496)
  • Make workers non-reusable by default when GPUs are involved (#30061)

📖Documentation:

  • General improvements of Ray Core docs, including design patterns and tasks.

Ray Clusters

💫Enhancements:

  • Stability improvements for Ray Autoscaler / KubeRay Operator integration. (#29933 , #30281, #30502)

Dashboard

🎉 New Features:

  • Additional improvements to the default metrics dashboard. We now have actor, placement group, and per-component memory usage breakdowns. You can find details in the docs.
  • New profiling feature using py-spy under the hood. You can click buttons to see stack traces or CPU flame graphs of your workers.
  • Autoscaler and job events are available from the dashboard. You can also access the same data using ray list cluster-events.

🔨 Fixes:

  • Stability improvements to the dashboard
  • The dashboard now works on large-scale clusters! It has been tested with 250 nodes and 10K+ actors (which matches the Ray scalability envelope).
    • Smarter API fetching logic. We now wait for the previous request to finish before sending a new one when polling for new data.
    • Fix agent memory leak and high CPU usage.

💫Enhancements:

  • General improvements to the progress bar. You can now see progress bars for each task name if you drill into the job details.
  • More metadata is available in the jobs and actors tables.
  • There is now a feedback button embedded into the dashboard. Please submit any bug reports or suggestions!

Many thanks to all those who contributed to this release!

@shrekris-anyscale, @rickyyx, @scottjlee, @shogohida, @liuyang-my, @matthewdeng, @wjrforcyber, @linusbiostat, @clarkzinzow, @justinvyu, @zygi, @christy, @amogkam, @cool-RR, @jiaodong, @EvgeniiTitov, @jjyao, @ilee300a, @jianoaix, @rkooo567, @mattip, @maxpumperla, @ericl, @cadedaniel, @bveeramani, @rueian, @stephanie-wang, @lcipolina, @bparaj, @JoonHong-Kim, @avnishn, @tomsunelite, @larrylian, @alanwguo, @VishDev12, @c21, @dmatrix, @xwjiang2010, @thomasdesr, @tiangolo, @sokratisvas, @heyitsmui, @scv119, @pcmoritz, @bhavika, @yzs981130, @andraxin, @Chong-Li, @clarng, @acxz, @ckw017, @krfricke, @kouroshHakha, @sijieamoy, @iycheng, @gjoliver, @peytondmurray, @xcharleslin, @DmitriGekhtman, @andreichalapco, @vitrioil, @architkulkarni, @simon-mo, @ArturNiederfahrenhorst, @sihanwang41, @pabloem, @sven1977, @avivhaber, @wuisawesome, @jovany-wang, @Yard1

ray-2.1.0

Release Highlights

  • Ray AI Runtime (AIR)
    • Better support for Image-based workloads.
      • Ray Datasets read_images() API for loading data.
      • Numpy-based API for user-defined functions in Preprocessor.
    • Ability to read TFRecord input.
      • Ray Datasets read_tfrecords() API to read TFRecord files.
  • Ray Serve:
    • Add support for gRPC endpoint (alpha release). Instead of using an HTTP server, Ray Serve supports gRPC protocol and users can bring their own schema for their use case.
  • RLlib:
    • Introduce decision transformer (DT) algorithm.
    • New hook for callbacks with on_episode_created().
    • Learning rate schedule to SimpleQ and PG.
  • Ray Core:
    • Ray OOM prevention (alpha release).
    • Support dynamic generators as task return values.
  • Dashboard:
    • Time series metrics support.
    • Export configuration files can be used in Prometheus or Grafana instances.
    • New progress bar in job detail view.

Ray Libraries

Ray AIR

💫Enhancements:

  • Improve readability of training failure output (#27946, #28333, #29143)
  • Auto-enable GPU for Predictors (#26549)
  • Add ability to create TorchCheckpoint from state dict (#27970)
  • Add ability to create TensorflowCheckpoint from saved model/h5 format (#28474)
  • Add attribute to retrieve URI from Checkpoint (#28731)
  • Add all allowable types to WandB Callback (#28888)

🔨 Fixes:

  • Handle nested metrics properly as scoring attribute (#27715)
  • Fix serializability of Checkpoints (#28387, #28895, #28935)

📖Documentation:

  • Miscellaneous updates to documentation and examples (#28067, #28002, #28189, #28306, #28361, #28364, #28631, #28800)

🏗 Architecture refactoring:

  • Deprecate Checkpoint.to_object_ref and Checkpoint.from_object_ref (#28318)
  • Deprecate legacy train/tune functions in favor of Session (#28856)

Ray Data Processing

🎉 New Features:

  • Add read_images (#29177); see the sketch after this list
  • Add read_tfrecords (#28430)
  • Add NumPy batch format to Preprocessor and BatchMapper (#28418)
  • Ragged tensor extension type (#27625)
  • Add KBinsDiscretizer Preprocessor (#28389)
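
Here is the sketch referenced above: a minimal example of the two new readers, assuming Ray 2.1; the S3 paths are placeholders for a directory of images and a directory of TFRecord files.

```python
import ray

# Load a folder of images as a Dataset of ndarrays.
image_ds = ray.data.read_images("s3://bucket/images/")

# Load TFRecord files (tf.train.Example records) as tabular rows.
tfrecord_ds = ray.data.read_tfrecords("s3://bucket/tfrecords/")

print(image_ds.schema())
print(tfrecord_ds.schema())
```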

💫Enhancements:

  • Simplify to_tf interface (#29028)
  • Add metadata override and inference in Dataset.to_dask() (#28625)
  • Prune unused columns before aggregate (#28556)
  • Add Dataset.default_batch_format (#28434)
  • Add partitioning parameter to read_ functions (#28413)
  • Deprecate "native" batch format in favor of "default" (#28489)
  • Support None partition field name (#28417)
  • Re-enable Parquet sampling and add progress bar (#28021)
  • Cap the number of stats kept in StatsActor and purge in FIFO order if the limit is exceeded (#27964)
  • Customized serializer for Arrow JSON ParseOptions in read_json (#27911)
  • Optimize groupby/mapgroups performance (#27805)
  • Improve size estimation of image folder data source (#27219)
  • Use detached lifetime for stats actor (#25271)
  • Pin _StatsActor to the driver node (#27765)
  • Better error message for partition filtering if no file found (#27353)
  • Make Concatenator deterministic (#27575)
  • Change FeatureHasher input schema to expect token counts (#27523)
  • Avoid unnecessary reads when truncating a dataset with ds.limit() (#27343)
  • Hide tensor extension from UDFs (#27019)
  • Add repr to AIR classes (#27006)

🔨 Fixes:

  • Add upper bound to pyarrow version check (#29674) (#29744)
  • Fix map_groups to work with different output type (#29184)
  • Make read_csv not filter out files by default (#29032)
  • Check columns when adding rows to TableBlockBuilder (#29020)
  • Fix the peak memory usage calculation (#28419)
  • Change sampling to use same API as read Parquet (#28258)
  • Fix column assignment in Concatenator for Pandas 1.2. (#27531)
  • Do partition filtering in the reader constructor (#27156)
  • Fix split ownership (#27149)

📖Documentation:

  • Clarify dataset transformation. (#28482)
  • Update map_batches documentation (#28435)
  • Improve docstring and doctest for read_parquet (#28488)
  • Activate dataset doctests (#28395)
  • Document using a different separator for read_csv (#27850)
  • Convert custom datetime column when reading a CSV file (#27854)
  • Improve preprocessor documentation (#27215)
  • Improve limit() and take() docstrings (#27367)
  • Reorganize the tensor data support docs (#26952)
  • Fix nyc_taxi_basic_processing notebook (#26983)

Ray Train

🎉 New Features:

  • Add FullyShardedDataParallel support to TorchTrainer (#28096)

💫Enhancements:

  • Add rich notebook repr for DataParallelTrainer (#26335)
  • Fast fail if training loop raises an error on any worker (#28314)
  • Use torch.encode_data with HorovodTrainer when torch is imported (#28440)
  • Automatically set NCCL_SOCKET_IFNAME to use ethernet (#28633)
  • Don't add Trainer resources when running on Colab (#28822)
  • Support large checkpoints and other arguments (#28826)

🔨 Fixes:

  • Fix and improve HuggingFaceTrainer (#27875, #28154, #28170, #28308, #28052)
  • Maintain dtype info in LightGBMPredictor (#28673)
  • Fix prepare_model (#29104)
  • Fix train.torch.get_device() (#28659)

📖Documentation:

  • Clarify LGBM/XGB Trainer documentation (#28122)
  • Improve Hugging Face notebook example (#28121)
  • Update Train API reference and docs (#28192)
  • Mention FSDP in HuggingFaceTrainer docs (#28217)

🏗 Architecture refactoring:

  • Improve Trainer modularity for extensibility (#28650)

Ray Tune

🎉 New Features:

  • Add Tuner.get_results() to retrieve results after restore (#29083)
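
A hedged sketch of this workflow follows, assuming Ray 2.1; the experiment path is a placeholder for a previously run (or interrupted) experiment directory.

```python
from ray.tune import Tuner

# Re-attach to an existing experiment...
tuner = Tuner.restore("~/ray_results/my_experiment")

# ...and fetch its results without calling tuner.fit() again.
results = tuner.get_results()
print(results.get_best_result())
```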

💫Enhancements:

  • Exclude files in sync_dir_between_nodes, exclude temporary checkpoints (#27174)
  • Add rich notebook output for Tune progress updates (#26263)
  • Add logdir to W&B run config (#28454)
  • Improve readability for long column names in table output (#28764)
  • Add functionality to recover from latest available checkpoint (#29099)
  • Add retry logic for restoring trials (#29086)

🔨 Fixes:

  • Re-enable progress metric detection (#28130)
  • Add timeout to retry_fn to catch hanging syncs (#28155)
  • Correct PB2’s beta_t parameter implementation (#28342)
  • Ignore directory exists errors to tackle race conditions (#28401)
  • Correctly overwrite files on restore (#28404)
  • Disable pytorch-lightning multiprocessing per default (#28335)
  • Raise an error if scheduling an empty PlacementGroupFactory (#28445)
  • Fix trial cleanup after x seconds, set default to 600 (#28449)
  • Fix trial checkpoint syncing after recovery from other node (#28470)
  • Catch empty hyperopt search space, raise better Tuner error message (#28503)
  • Fix and optimize sample search algorithm quantization logic (#28187)
  • Support tune.with_resources for class methods (#28596)
  • Maintain consistent Trial/TrialRunner state when pausing and resuming trial with PBT (#28511)
  • Raise better error for incompatible gcsfs version (#28772)
  • Ensure that exploited in-memory checkpoint is used by trial with PBT (#28509)
  • Fix Tune checkpoint tracking for minimizing metrics (#29145)

📖Documentation:

  • Miscelleanous documentation fixes (#27117, #28131, #28210, #28400, #28068, #28809)
  • Add documentation around trial/experiment checkpoint (#28303)
  • Add basic parallel execution guide for Tune (#28677)
  • Add example PBT notebook (#28519)

🏗 Architecture refactoring:

  • Store SyncConfig and CheckpointConfig in Experiment and Trial (#29019)

Ray Serve

🎉 New Features:

  • Added gRPC direct ingress support [alpha version] (#28175)
  • Serve cli can provide kubernetes formatted output (#28918)
  • Serve cli can provide user config output without default value (#28313)

💫Enhancements:

  • Enrich more benchmarks:
    • Image object detection with a ResNet50 model, with image preprocessing (#29096)
    • gRPC vs HTTP inference performance (#28175)
  • Add health check metrics to reflect the replica health status (#29154)

🔨 Fixes:

  • Fix memory leak issues during inference (#29187)
  • Fix an unexpected warning about omitted HTTP options when using the Serve CLI to start Ray Serve (#28257)
  • Fix unexpected long poll exceptions (#28612)

📖Documentation:

  • Add e2e fault tolerance instructions (#28721)
  • Add Direct Ingress instructions (#29149)
  • A bunch of doc improvements on “dev workflow”, “custom resources”, “serve cli”, etc. (#29147, #28708, #28529, #28527)

RLlib

🎉 New Features:

  • Decision Transformer (DT) Algorithm added (#27890, #27889, #27872, #27829).
  • Callbacks now have a new hook on_episode_created(). (#28600)
  • Added learning rate schedule to SimpleQ and PG. (#28381)

💫Enhancements:

  • Soft target network updates are now supported by all off-policy algorithms (e.g. DQN, DDPG, etc.) (#28135)
  • Stop RLlib from "silently" selecting atari preprocessors. (#29011)
  • Improved offline RL and off-policy evaluation performance (#28837, #28834, #28593, #28420, #28136, #28013, #27356, #27161, #27451).
  • Escalated old deprecation warnings to errors (#28807, #28795, #28733, #28697).
  • Others: #27619, #27087.

🔨 Fixes:

  • Various bug fixes: #29077, #28811, #28637, #27785, #28703, #28422, #28405, #28358, #27540, #28325, #28357, #28334, #27090, #28133, #27981, #27980, #26666, #27390, #27791, #27741, #27424, #27544, #27459, #27572, #27255, #27304, #26629, #28166, #27864, #28938, #28845, #28588, #28202, #28201, #27806

📖Documentation:

  • Connectors. (#27528)
  • Training step API. (#27344)
  • Others: #28299, #27460

Ray Workflows

🔨 Fixes:

  • Fixed the object loss due to driver exit (#29092)
  • Replace the deprecated name option in steps with task_id (#28151)

Ray Core and Ray Clusters

Ray Core

🎉 New Features:

  • Ray OOM prevention feature alpha release! If your Ray jobs suffer from OOM issues, please give it a try.
  • Support dynamic generators as task return values. (#29082 #28864 #28291)
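
A minimal sketch of dynamic generator returns, assuming Ray 2.1: with num_returns="dynamic", a task may yield a number of values that is not known until runtime, and Ray materializes them as a generator of object refs.

```python
import ray

@ray.remote(num_returns="dynamic")
def split(n: int):
    for i in range(n):
        yield i  # each yielded value becomes its own object ref

# ray.get() on the task's ref returns an ObjectRefGenerator.
ref_generator = ray.get(split.remote(4))
print([ray.get(ref) for ref in ref_generator])  # -> [0, 1, 2, 3]
```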

💫Enhancements:

  • Fix spread scheduling imbalance issues (#28804 #28551)
  • Widen the range of allowed grpcio versions (#28623)
  • Support encrypted redis connection. (#29109)
  • Upgrade redis from 6.x to 7.0.5. (#28936)
  • Batch ScheduleAndDispatchTasks calls (#28740)

🔨 Fixes:

  • More robust spilled object deletion (#29014)
  • Fix the initialization/destruction order between reference_counter_ and node change subscription (#29108)
  • Suppress the logging error when Python exits and an actor is not deleted (#27300)
  • Mark run_function_on_all_workers as deprecated until we get rid of this (#29062)
  • Remove unused args for default_worker.py (#28177)
  • Don't include script directory in sys.path if it's started via python -m (#28140)
  • Handling edge cases of max_cpu_fraction argument (#27035)
  • Fix out-of-band deserialization of actor handle (#27700)
  • Allow reuse of cluster address if Ray is not running (#27666)
  • Fix an uncaught exception upon actor deallocation (#27637)
  • Support placement_group=None in PlacementGroupSchedulingStrategy (#27370)

📖Documentation:

  • Ray 2.0 white paper is published.
  • Revamp ray core docs (#29124 #29046 #28953 #28840 #28784 #28644 #28345 #28113 #27323 #27303)
  • Fix cluster docs (#28056 #27062)
  • CLI Reference Documentation Revamp (#27862)

Ray Clusters

💫Enhancements:

  • Distinguish Kubernetes deployment stacks (#28490)

📖Documentation:

  • State intent to remove legacy Ray Operator (#29178)
  • Improve KubeRay migration notes (#28672)
  • Add FAQ for cluster multi-tenancy support (#29279)

Dashboard

🎉 New Features:

  • Time series metrics are now built into the dashboard
  • Ray now exports some default configuration files which can be used for your Prometheus or Grafana instances. This includes default metrics which show common information important to your Ray application.
  • A new progress bar is shown in the job detail view, so you can see how far along your Ray job is.

🔨 Fixes:

  • Fix the Prometheus exporter producing a slightly incorrect format.
  • Fix several performance issues and memory leaks

📖Documentation:

  • Added additional documentation on the new time series and the metrics page

Many thanks to all those who contributed to this release!

@sihanwang41, @simon-mo, @avnishn, @MyeongKim, @markrogersjr, @christy, @xwjiang2010, @kouroshHakha, @zoltan-fedor, @wumuzi520, @alanwguo, @Yard1, @liuyang-my, @charlesjsun, @DevJake, @matteobettini, @jonathan-conder-sm, @mgerstgrasser, @guidj, @JiahaoYao, @Zyiqin-Miranda, @jvanheugten, @aallahyar, @SongGuyang, @clarng, @architkulkarni, @Rohan138, @heyitsmui, @mattip, @ArturNiederfahrenhorst, @maxpumperla, @vale981, @krfricke, @DmitriGekhtman, @amogkam, @richardliaw, @maldil, @zcin, @jianoaix, @cool-RR, @kira-lin, @gramhagen, @c21, @jiaodong, @sijieamoy, @tupui, @ericl, @anabranch, @se4ml, @suquark, @dmatrix, @jjyao, @clarkzinzow, @smorad, @rkooo567, @jovany-wang, @edoakes, @XiaodongLv, @klieret, @rozsasarpi, @scottsun94, @ijrsvt, @bveeramani, @chengscott, @jbedorf, @kevin85421, @nikitavemuri, @sven1977, @acxz, @stephanie-wang, @PaulFenton, @WangTaoTheTonic, @cadedaniel, @nthai, @wuisawesome, @rickyyx, @artemisart, @peytondmurray, @pingsutw, @olipinski, @davidxia, @stestagg, @yaxife, @scv119, @mwtian, @yuanchi2807, @ntlm1686, @shrekris-anyscale, @cassidylaidlaw, @gjoliver, @ckw017, @hakeemta, @ilee300a, @avivhaber, @matthewdeng, @afarid, @pcmoritz, @Chong-Li, @Catch-Bull, @justinvyu, @iycheng

ray-2.0.1

The Ray 2.0.1 patch release contains dependency upgrades and fixes for multiple components:

  • Upgrade grpcio version to 1.32 (#28025)
  • Upgrade redis version to 7.0.5 (#28936)
  • Fix segfault when using runtime environments (#28409)
  • Increase RPC timeout for dashboard (#28330)
  • Set correct path when using python -m (#28140)
  • [Autoscaler] Fix autoscaling for 0 CPU head node (#26813)
  • [Serve] Allow code in private remote Git URIs to be imported (#28250)
  • [Serve] Allow host and port in Serve config (#27026)
  • [RLlib] Evaluation supports asynchronous rollout (single slow eval worker will not block the overall evaluation progress). (#27390)
  • [Tune] Fix hang during checkpoint synchronization (#28155)
  • [Tune] Fix trial restoration from different IP (#28470)
  • [Tune] Fix custom synchronizer serialization (#28699)
  • [Workflows] Replace deprecated name option with task_id (#28151)

ray-2.0.0

Release Highlights

Ray 2.0 is an exciting release with enhancements to all libraries in the Ray ecosystem. With this major release, we take strides towards our goal of making distributed computing scalable, unified, and open.

Towards these goals, Ray 2.0 features new capabilities for unifying the machine learning (ML) ecosystem, improving Ray's production support, and making it easier than ever for ML practitioners to use Ray's libraries.

Highlights:

  • Ray AIR, a scalable and unified toolkit for ML applications, is now in Beta.
  • Ray now supports natively shuffling 100TB or more of data with the Ray Datasets library.
  • KubeRay, a toolkit for running Ray on Kubernetes, is now in Beta. This replaces the legacy Python-based Ray operator.
  • Ray Serve’s Deployment Graph API is a new and easier way to build, test, and deploy an inference graph of deployments. This is released as Beta in 2.0.

A migration guide for all the different libraries can be found here: Ray 2.0 Migration Guide.

Ray Libraries

Ray AIR

Ray AIR is now in beta. Ray AIR builds upon Ray’s libraries to enable end-to-end machine learning workflows and applications on Ray. You can install all dependencies needed for Ray AIR via pip install -U "ray[air]".

🎉 New Features:

  • Predictors:
    • BatchPredictors now have support for scalable inference on GPUs; see the sketch after this list.
    • All Predictors can now be constructed from pre-trained models, allowing you to easily scale batch inference with trained models from common ML frameworks.
    • ray.ml.predictors has been moved to the Ray Train namespace (ray.train).
  • Preprocessing: New preprocessors and API changes on Ray Datasets now make feature processing easier to do on AIR. See the Ray Data release notes for more details.
  • New features for Datasets/Train/Tune/Serve can be found in the corresponding library release notes for more details.
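
Here is the batch-inference sketch referenced above, a hedged example assuming the 2.0-era AIR APIs (TorchCheckpoint, TorchPredictor, BatchPredictor); the model is a toy torch module.

```python
import numpy as np
import torch
import ray
from ray.train.torch import TorchCheckpoint, TorchPredictor
from ray.train.batch_predictor import BatchPredictor

# Wrap a (pre-)trained model in a framework-specific checkpoint.
model = torch.nn.Linear(1, 1)
checkpoint = TorchCheckpoint.from_model(model)

# Scale inference over a Dataset; pass num_gpus_per_worker=1 for GPUs.
predictor = BatchPredictor.from_checkpoint(checkpoint, TorchPredictor)
ds = ray.data.from_numpy(np.arange(8, dtype=np.float32).reshape(-1, 1))
predictor.predict(ds).show()
```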

💫 Enhancements:

  • Major package refactoring is included in this release.
    • ray.ml is renamed to ray.air.
    • ray.ml.preprocessors have been moved to ray.data.
      • train_test_split is now a new method of ray.data.Dataset (#27065)
    • ray.ml.trainers have been moved to ray.train (#25570)
    • ray.ml.predictors has been moved to ray.train.
    • ray.ml.config has been moved to ray.air.config (#25712).
    • Checkpoints are now framework-specific -- meaning that each Trainer generates its own Framework-specific Checkpoint class. See Ray Train for more details.
    • ModelWrappers have been renamed to PredictorDeployments.
  • API stability annotations have been added (#25485)
  • Train/Tune now have the same reporting and checkpointing API -- see the Train notes for more details (#26303)
  • ScalingConfigs are now dataclasses, not dict types
  • Many AIR examples, benchmarks, and documentation pages were added in this release. The Ray AIR documentation will cover breadth of usage (end to end workflows across different libraries) while library-specific documentation will cover depth (specific features of a specific library).

🔨 Fixes:

  • Many documentation examples were previously untested. This release fixes those examples and adds them to the CI.
  • Predictors:
    • Torch/Tensorflow Predictors have correctness fixes (#25199, #25190, #25138, #25136)
    • Update KerasCallback to work with TensorflowPredictor (#26089)
    • Add streaming BatchPredictor support (#25693)
    • Add predict_pandas implementation (#25534)
    • Add _predict_arrow interface for Predictor (#25579)
    • Allow creating Predictor directly from a UDF (#26603)
    • Execute GPU inference in a separate stage in BatchPredictor (#26616, #27232, #27398)
    • Accessors for preprocessor in Predictor class (#26600)
    • [AIR] Predictor call_model API for unsupported output types (#26845)

Ray Data Processing

🎉 New Features:

  • Add ImageFolderDatasource (#24641)
  • Add the NumPy batch format for batch mapping and batch consumption (#24870)
  • Add iter_torch_batches() and iter_tf_batches() APIs (#26689); see the sketch after this list
  • Add local shuffling API to iterators (#26094)
  • Add drop_columns() API (#26200)
  • Add randomize_block_order() API (#25568)
  • Add random_sample() API (#24492)
  • Add support for len(Dataset) (#25152)
  • Add UDF passthrough args to map_batches() (#25613)
  • Add Concatenator preprocessor (#26526)
  • Change range_arrow() API to range_table() (#24704)
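
Here is the sketch referenced above for the new batch iteration APIs, assuming Ray 2.0; for tabular data the batches arrive as dicts of framework-native tensors.

```python
import ray

ds = ray.data.range_table(32)  # tabular dataset with a "value" column

# Iterate over the dataset as batches of torch tensors.
for batch in ds.iter_torch_batches(batch_size=8):
    print(batch["value"].shape)  # torch.Size([8])
```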

💫 Enhancements:

  • Autodetect dataset parallelism based on available resources and data size (#25883)
  • Use polars for sorting (#25454)
  • Support tensor columns in to_tf() and to_torch() (#24752)
  • Add explicit resource allocation option via a top-level scheduling strategy (#24438)
  • Spread actor pool actors evenly across the cluster by default (#25705)
  • Add ray_remote_args to read_text() (#23764)
  • Add max_epoch argument to iter_epochs() (#25263)
  • Add Pandas-native groupby and sorting (#26313)
  • Support push-based shuffle in groupby operations (#25910)
  • More aggressive memory releasing for Dataset and DatasetPipeline (#25461, #25820, #26902, #26650)
  • Automatically cast tensor columns on Pandas UDF outputs (#26924)
  • Better error messages when reading from S3 (#26619, #26669, #26789)
  • Make dataset splitting more efficient and stable (#26641, #26768, #26778)
  • Use sampling to estimate in-memory data size for Parquet data source (#26868)
  • De-experimentalized lazy execution mode (#26934)

🔨 Fixes:

  • Fix pipeline pre-repeat caching (#25265)
  • Fix stats construction for from_*() APIs (#25601)
  • Fixes label tensor squeezing in to_tf() (#25553)
  • Fix stage fusion between equivalent resource args (fixes BatchPredictor) (#25706)
  • Fix tensor extension string formatting (repr) (#25768)
  • Workaround for unserializable Arrow JSON ReadOptions (#25821)
  • Make ActorPoolStrategy kill pool of actors if exception is raised (#25803)
  • Fix max number of actors for default actor pool strategy (#26266)
  • Fix byte size calculation for non-trivial tensors (#25264)

Ray Train

Ray Train has received a major expansion of scope with Ray 2.0.

In particular, the Ray Train module now contains:

  1. Trainers
  2. Predictors
  3. Checkpoints

for many common ML frameworks, including PyTorch, TensorFlow, XGBoost, LightGBM, HuggingFace, and Scikit-Learn. These APIs help provide end-to-end usage of Ray libraries in Ray AIR workflows.

🎉 New Features:

  • The legacy Trainer API is now deprecated in favor of the new Ray AIR Trainers API. Trainers for PyTorch, TensorFlow, Horovod, XGBoost, and LightGBM are now in Beta. (#25570)
  • ML framework-specific Predictors have been moved into the ray.train namespace. This provides streamlined API for offline and online inference of Pytorch, Tensorflow, XGBoost models and more. (#25769 #26215, #26251, #26451, #26531, #26600, #26603, #26616, #26845)
  • ML framework-specific checkpoints are introduced. Checkpoints are consumed by Predictors to load model weights and information. (#26777, #25940, #26532, #26534)

💫 Enhancements:

  • Train and Tune now use the same reporting and checkpointing API (#24772, #25558)
  • Add tunable ScalingConfig dataclass (#25712)
  • Randomize block order by default to avoid hotspots (#25870)
  • Improve checkpoint configurability and extend results (#25943)
  • Improve prepare_data_loader to support multiple batch data types (#26386)
  • Discard returns of train loops in Trainers (#26448)
  • Clean up logs, reprs, warnings (#26259, #26906, #26988, #27228, #27519)

📖 Documentation:

  • Update documentation to use new Train API (#25735)
  • Update documentation to use session API (#26051, #26303)
  • Add Trainer user guide and update Trainer docs (#27570, #27644, #27685)
  • Add Predictor documentation (#25833)
  • Replace to_torch with iter_torch_batches (#27656)
  • Replace to_tf with iter_tf_batches (#27768)
  • Minor doc fixes (#25773, #27955)

🏗 Architecture refactoring:

  • Clean up ray.train package (#25566)
  • Mark Trainer interfaces as Deprecated (#25573)

🔨 Fixes:

  • An issue with GPU ID detection and assignment was fixed. (#26493)
  • Fix AMP for models with a custom __getstate__ method (#25335)
  • Fix transformers example for multi-gpu (#24832)
  • Fix ScalingConfig key validation (#25549)
  • Fix ResourceChangingScheduler integration (#26307)
  • Fix auto_transfer cuda device (#26819)
  • Fix BatchPredictor.predict_pipelined not working with GPU stage (#27398)
  • Remove rllib dependency from tensorflow_predictor (#27688)

Ray Tune

🎉 New Features:

  • The Tuner API is the new way of running Ray Tune experiments (#26987, #26961, #26931, #26884, #26930); see the sketch after this list.
  • Ray Tune and Ray Train now have the same API for reporting (#25558)
  • Introduce tune.with_resources() to specify function trainable resources (#26830)
  • Add Tune benchmark for AIR (#26763, #26564)
  • Allow Tuner().restore() from cloud URIs (#26963)
  • Add top-level imports for Tuner, TuneConfig, move CheckpointConfig (#26882)
  • Add resume experiment options to Tuner.restore() (#26826)
  • Add checkpoint_frequency/checkpoint_at_end arguments to CheckpointConfig (#26661)
  • Add more config arguments to Tuner (#26656)
  • Better error message for Tune nested tasks / actors (#25241)
  • Allow iterators in tune.grid_search (#25220)
  • Add get_dataframe() method to result grid, fix config flattening (#24686)
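
Here is the Tuner sketch referenced above, assuming Ray 2.0 with the new session-based reporting API; the objective function is a toy example.

```python
from ray import tune
from ray.air import session
from ray.tune import Tuner, TuneConfig

def objective(config):
    # Report a score derived from the sampled hyperparameter.
    session.report({"score": config["x"] ** 2})

tuner = Tuner(
    objective,
    param_space={"x": tune.uniform(0, 10)},
    tune_config=TuneConfig(num_samples=8, metric="score", mode="min"),
)
results = tuner.fit()
print(results.get_best_result().config)
```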

💫 Enhancements:

  • Expose number of errored/terminated trials in ResultGrid (#26655)
  • Remove fully_executed from Tune. (#25750)
  • Exclude in remote storage upload (#25544)
  • Add TempFileLock (#25408)
  • Add annotations/set scope for Tune classes (#25077)

📖 Documentation:

  • Improve Tune + Datasets documentation (#25389)
  • Tune examples better navigation, minor fixes (#24733)

🏗 Architecture refactoring:

  • Consolidate checkpoint manager 3: Ray Tune (#24430)
  • Clean up ray.tune scope (remove stale objects in all) (#26829)

🔨 Fixes:

  • Fix k8s release test + node-to-node syncing (#27365)
  • Fix Tune custom syncer example (#27253)
  • Fix tune_cloud_aws_durable_upload_rllib_* release tests (#27180)
  • Fix test_tune (#26721)
  • Larger head node for tune_scalability_network_overhead weekly test (#26742)
  • Fix tune-sklearn notebook example (#26470)
  • Fix reference to dataset_tune (#25402)
  • Fix Tune-Pytorch-CIFAR notebook example (#26474)
  • Fix documentation testing (#26409)
  • Fix set_tune_experiment (#26298)
  • Fix GRPC resource exhausted test for tune trainables (#24467)

Ray Serve

🎉 New Features:

  • We are excited to introduce you to the 2.0 API, centered around a multi-model composition API, an operational API, and production stability. (#26310,#26507,#26217,#25932,#26374,#26901,#27058,#24549,#24616,#27479,#27576,#27433,#24306,#25651,#26682,#26521,#27194,#27206,#26804,#25575,#26574)
    • Deployment Graph API is the new API for model composition. It provides a declarative layer on top of the 1.x deployment API to help you author performant inference pipelines easily; see the sketch after this list. (#27417,#27420,#24754,#24435,#24630,#26573,#27349,#24404,#25424,#24418,#27815,#27844,#25453,#24629)
    • We introduced a new K8s-native way to deploy Ray Serve, along with a brand new REST API to deploy, update, and configure Serve applications. (#25935,#27063,#24814,#26093,#25213,#26588,#25073,#27000,#27444,#26578,#26652,#25610,#25502,#26096,#24265,#26177,#25861,#25691,#24839,#27498,#27561,#25862,#26347)
    • Serve can now survive Ray GCS failure. This used to be a single point of failure in Ray Serve's architecture. Now, when the GCS goes down, Serve can continue to serve traffic. We recommend you try out this feature and give us feedback! (#25633,#26107,#27608,#27763,#27771,#25478,#25637,#27526,#27674,#26753,#26797,#24560,#26685,#26734,#25987,#25091,#24934)
  • Autoscaling has been promoted to stable. Additionally, we added a scale to zero support. (#25770,#25733,#24892,#26393)
  • The documentation has been revamped. Check them at rayserve.org (#24414,#26211,#25786,#25936,#26029,#25830,#24760,#24871,#25243,#25390,#25646,#24657,#24713,#25270,#25808,#24693,#24736,#24524,#24690,#25494)
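
Here is the deployment-graph sketch referenced above, a hedged example assuming the 2.0-era bind()/serve.run() API; the deployments and the doubling/adding logic are toy placeholders.

```python
import requests
from ray import serve

@serve.deployment
class Preprocessor:
    def process(self, x: float) -> float:
        return x * 2

@serve.deployment
class Model:
    def __init__(self, preprocessor):
        self.preprocessor = preprocessor  # handle to the bound Preprocessor

    async def __call__(self, request) -> float:
        x = float(await request.body())
        ref = await self.preprocessor.process.remote(x)  # returns an ObjectRef
        return (await ref) + 1

# Compose the graph declaratively, then deploy it.
app = Model.bind(Preprocessor.bind())
serve.run(app)

print(requests.post("http://127.0.0.1:8000/", data="3").text)  # -> 7.0
```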

💫 Enhancements:

  • Serve natively supports deploying predictor and checkpoints from Ray AI Runtime (#26026,#25003,#25537,#25609,#25962,#26494,#25688,#24512,#24417)
  • Serve now supports scaling Gradio application (#27560)
  • Java Client API, marking the complete alpha release Java API (#22726)
  • Improved out-of-the-box performance by using uvicorn with uvloop (#25027)

RLlib

🎉 New Features:

  • In 2.0, RLlib is introducing an object-oriented configuration API instead of using a python dict for algorithm configuration (#24332, #24374, #24375, #24376, #24433, #24576, #24650, #24577, #24339, #24687, #24775, #24584, #24583, #24853, #25028, #25059, #25065, #25066, #25067, #25256, #25255, #25278, #25279)
  • RLlib is introducing a Connectors API (alpha). Connectors are a new component that handles transformations on inputs and outputs of a given RL policy. (#25311, #25007, #25923, #25922, #25954, #26253, #26510, #26645, #26836, #26803, #26998, #27016)
  • New improvements to off-policy estimators, including a new Doubly-Robust Off-Policy Estimator implementation (#24384, #25107, #25056, #25899, #25911, #26279, #26893)
  • CRR Algorithm (#25459, #25667, #25905, #26142, #26304, #26770, #27161)
  • Feature importance evaluation for offline RL (#26412)
  • RE3 exploration algorithm TF2 framework support (#25221)
  • Unified replay Buffer API (#24212, #24156, #24473, #24506, #24866, #24683, #25841, #25560, #26428)

💫 Enhancements:

  • Improvements to RolloutWorker / Env fault tolerance (#24967, #26134, #26276, #26809)
  • Upgrade gym to 0.23 (#24171), Bump gym dep to 0.24 (#26190)
  • Agents has been renamed to Algorithms (#24511, #24516, #24739, #24797, #24841, #24896, #25014, #24579, #25314, #25346, #25366, #25539, #25869)
  • Execution Plan API is now deprecated. Training step function API is the new way of specifying RLlib algorithms (#23454, #24488, #2450, #24212, #24165, #24545, #24507, #25076, #25624, #25924, #25856, #25851, #27344, #24423)
  • Policy V2 subclassing implementation migration (#24742, #24746, #24914, #25117, #25203, #25078, #25254, #25384, #25585, #25871, #25956, #26054)
  • Allow passing **kwargs to action distribution. (#24692)
  • Deprecation: Replace remaining evaluation_num_episodes with evaluation_duration. (#26000)

🔨 Fixes:

  • Multi-GPU learner thread key error in MA-scenarios (#24382)
  • Add release learning tests for SlateQ (#24429)
  • APEX-DQN replay buffer config validation fix. (#24588)
  • Automatic sequencing in function timeslice_along_seq_lens_with_overlap (#24561)
  • Policy Server/Client metrics reporting fix (#24783)
  • Re-establish dashboard performance tests. (#24728)
  • Bandit tf2 fix (+ add tf2 to test cases). (#24908)
  • Fix estimated buffer size in replay buffers. (#24848)
  • Fix RNNSAC example failing on CI + fixes for recurrent models for other Q Learning Algos. (#24923)
  • Curiosity bug fix. (#24880)
  • Auto-infer different agents' spaces in multi-agent env. (#24649)
  • Fix the bug “WorkerSet.stop() will raise error if self._local_worker is None (e.g. in evaluation worker sets)”. (#25332)
  • Fix Policy global timesteps being off by init sample batch size. (#25349)
  • Disambiguate timestep fragment storage unit in replay buffers. (#25242)
  • Fix the bug where on GPU, sample_batch.to_device() only converts the device and does not convert float64 to float32. (#25460)
  • Fix faulty usage of get_filter_config in ComplexInputNetwork (#25493)
  • Custom resources per worker should get added to default_resource_request (#24463)
  • Better default values for training_intensity and target_network_update_freq for R2D2. (#25510)
  • Fix multi agent environment checks for observations that contain only some agents' obs each step. (#25506)
  • Fixes PyTorch grad clipping logic and adds grad clipping to QMIX. (#25584)
  • Discussion 6432: Automatic train_batch_size calculation fix. (#25621)
  • Added meaningful error for multi-agent failure of SampleCollector in case no agent steps in episode. (#25596)
  • Replace torch.range with torch.arange. (#25640)
  • Fix the bug where there is no gradient clipping in QMix. (#25656)
  • Fix sample batch concatenation. (#25572)
  • Fix action_sampler_fn call in TorchPolicyV2 (obs_batch instead of input_dict arg). (#25877)
  • Fixes logging of all of RLlib's Algorithm names as warning messages. (#25840)
  • IMPALA/APPO multi-agent mix-in-buffer fixes (plus MA learning tests). (#25848)
  • Move offline input into replay buffer using rollout ops in CQL. (#25629)
  • Include SampleBatch.T column in all collected batches. (#25926)
  • Add timeout to filter synchronization. (#25959)
  • SimpleQ PyTorch Multi GPU fix (#26109)
  • IMPALA and APPO metrics fixes; remove deprecated async_parallel_requests utility. (#26117)
  • Added 'episode.hist_data' to the 'atari_metrics' to ensure that users' custom metrics are kept in postprocessing when using Atari environments. (#25292)
  • Make the dataset and json readers batchable (#26055)
  • Fix Issue 25696: Output writers not working w/ multiple workers. (#25722)
  • Fix all the erroneous on_trainer_init warnings. (#26433)
  • In env check, step only expected agents. (#26425)
  • Make DQN update_target use only trainable variables. (#25226)
  • Fix FQE Policy call (#26671)
  • Make queue placement ops blocking (#26581)
  • Fix memory leak in APEX_DQN (#26691)
  • Fix MultiDiscrete not being one-hotted correctly (#26558)
  • Make IOContext optional for DatasetReader (#26694)
  • Make sure we step() after adding init_obs. (#26827)
  • Fix ModelCatalog for nested complex inputs (#25620)
  • Use compress observations where replay buffers and image obs are used in tuned examples (#26735)
  • Fix SampleBatch.split_by_episode to use dones if episode id is not available (#26492)
  • Fix torch None conversion in torch_utils.py::convert_to_torch_tensor. (#26863)
  • Unify gnorm mixin for tf and torch policies. (#26102)

Ray Workflows

🎉 New Features:

  • Support ray client (#26702)
  • Http event is supported (#26010)
  • Support retry_exceptions (#26913)
  • Support queuing in workflow (#24697)
  • Make status indexed (#24767)

🔨 Fixes:

  • Push logs to drivers correctly (#24490)
  • Make resume no side effect (#26918)
  • Make the max_retries aligned with ray (#26350)

🏗 Architecture refactoring:

  • Rewrite workflow execution engine (#25618)
  • Simplify the resume flow (#24594)
  • Deprecate step and use bind (#26232)
  • Deprecate virtual actor (#25394)
  • Refactor the exception processing (#26398)

Ray Core and Ray Clusters

Ray Core

🎉 New Features:

  • Ray State API is now in alpha. You can access live information about tasks, actors, objects, placement groups, etc. through the Ray CLI (summary / list / get) and Python SDK. See the Ray State API documentation for more information.
  • Support generators for tasks with multiple return values (#25247); see the sketch after this list
  • Support GCS fault tolerance. (#24764, #24813, #24887, #25131, #25126, #24747, #25789, #25975, #25994, #26405, #26421, #26919)
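
As referenced above, a task can now return multiple values from a generator: with a static num_returns, the task must yield exactly that many values, and the call returns one ObjectRef per value. A minimal sketch (the function body is illustrative):

```python
import ray

@ray.remote(num_returns=3)
def triples():
    # Yield exactly num_returns values; each becomes its own ObjectRef.
    for i in range(3):
        yield i * 3

ref0, ref1, ref2 = triples.remote()
assert ray.get([ref0, ref1, ref2]) == [0, 3, 6]
```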

💫 Enhancements:

  • Allow failing new tasks immediately while the actor is restarting (#22818)
  • Add more accurate worker exit (#24468)
  • Allow user to override global default for max_retries (#25189)
  • Export additional metrics for workers and Raylet memory (#25418)
  • Push message to driver when a Raylet dies (#25516)
  • Out of Disk prevention (#25370)
  • ray.init defaults to an existing Ray instance if there is one (#26678)
  • Reconstruct manually freed objects (#27567)

🔨 Fixes:

  • Fix a task cancel hanging bug (#24369)
  • Adjust worker OOM scores to prioritize the raylet during memory pressure (#24623)
  • Fix pull manager deadlock due to object reconstruction (#24791)
  • Fix bugs in data locality aware scheduling (#25092)
  • Fix node affinity strategy when resource is empty (#25344)
  • Fix object transfer resend protocol (#26349)

🏗 Architecture refactoring:

  • Raylet and GCS schedulers share the same code (#23829)
  • Remove multiple core workers in one process (#24147, #25159)

Ray Clusters

🎉 New Features:

  • The KubeRay operator is now the preferred tool to run Ray on Kubernetes.
    • Ray Autoscaler + KubeRay operator integration is now beta.

🔨 Fixes:

  • The previously deprecated fields head_node, worker_nodes, head_node_type, default_worker_node_type, autoscaling_mode, and target_utilization_fraction have been removed. Check out the migration guide to learn how to migrate to the new versions.

Ray Client

🎉 New Features:

  • Support for configuring request metadata for client gRPC (#24946)

💫 Enhancements:

  • Remove 2 GiB size limit on remote function arguments (#24555)

🔨 Fixes:

  • Fix excessive memory usage when submitting large remote arguments (#24477)

Dashboard

🎉 New Features:

  • The new dashboard UI is now the default dashboard. Please leave any feedback about the dashboard on GitHub Issues or Discourse! You can still go to the legacy dashboard UI by clicking “Back to legacy dashboard”.
  • New Dashboard UI now shows all Ray jobs. This includes jobs submitted via the job submission API and jobs launched from Python scripts via ray.init().
  • New Dashboard UI now shows worker nodes in the main node tab
  • New Dashboard UI now shows more information in the actors tab

Breaking changes:

  • The job submission list_jobs API endpoint, CLI command, and SDK function now return a list of jobs instead of a dictionary mapping id to job.
  • The Tune tab is no longer in the new dashboard UI. It is still available in the legacy dashboard UI but will be removed.
  • The memory tab is no longer in the new dashboard UI. It is still available in the legacy dashboard UI but will be removed.

🔨 Fixes:

  • We reduced the memory usage of the dashboard. We no longer cache logs, and we cache a maximum of 1000 actors. As a result of this change, node-level logs can no longer be accessed in the legacy dashboard.
  • Job status error messages now properly truncate logs to 10 lines. We also added a maximum of 20,000 characters to avoid passing too much data.

Many thanks to all those who contributed to this release!

@ujvl, @xwjiang2010, @EricCousineau-TRI, @ijrsvt, @waleedkadous, @captain-pool, @olipinski, @danielwen002, @amogkam, @bveeramani, @kouroshHakha, @jjyao, @larrylian, @goswamig, @hanming-lu, @edoakes, @nikitavemuri, @enori, @grechaw, @truelegion47, @alanwguo, @sychen52, @ArturNiederfahrenhorst, @pcmoritz, @mwtian, @vakker, @c21, @rberenguel, @mattip, @robertnishihara, @cool-RR, @iamhatesz, @ofey404, @raulchen, @nmatare, @peterghaddad, @n30111, @fkaleo, @Riatre, @zhe-thoughts, @lchu-ibm, @YoelShoshan, @Catch-Bull, @matthewdeng, @VishDev12, @valtab, @maxpumperla, @tomsunelite, @fwitter, @liuyang-my, @peytondmurray, @clarkzinzow, @VeronikaPolakova, @sven1977, @stephanie-wang, @emjames, @Nintorac, @suquark, @javi-redondo, @xiurobert, @smorad, @brucez-anyscale, @pdames, @jjyyxx, @dmatrix, @nakamasato, @richardliaw, @juliusfrost, @anabranch, @christy, @Rohan138, @cadedaniel, @simon-mo, @mavroudisv, @guidj, @rkooo567, @orcahmlee, @lixin-wei, @neigh80, @yuduber, @JiahaoYao, @simonsays1980, @gjoliver, @jimthompson5802, @lucasalavapena, @zcin, @clarng, @jbn, @DmitriGekhtman, @timgates42, @charlesjsun, @Yard1, @mgelbart, @wumuzi520, @sihanwang41, @ghost, @jovany-wang, @siavash119, @yuanchi2807, @tupui, @jianoaix, @sumanthratna, @code-review-doctor, @Chong-Li, @FedericoGarza, @ckw017, @Makan-Ar, @kfstorm, @flanaman, @WangTaoTheTonic, @franklsf95, @scv119, @kvaithin, @wuisawesome, @jiaodong, @mgerstgrasser, @tiangolo, @architkulkarni, @MyeongKim, @ericl, @SongGuyang, @avnishn, @chengscott, @shrekris-anyscale, @Alyetama, @iycheng, @rickyyx, @krfricke, @sijieamoy, @kimikuri, @czgdp1807, @michalsustr

ray-1.13.0

1 year ago

Highlights:

  • Python 3.10 support is now in alpha.
  • Ray usage stats collection is now on by default (guarded by an opt-out prompt).
  • Ray Tune can now synchronize Trial data from worker nodes via the object store (without rsync!)
  • Ray Workflow comes with a new API and is integrated with Ray DAG.

Ray Autoscaler

💫Enhancements:

  • CI tests for KubeRay autoscaler integration (#23365, #23383, #24195)
  • Stability enhancements for KubeRay autoscaler integration (#23428)

🔨 Fixes:

  • Improved GPU support in KubeRay autoscaler integration (#23383)
  • Resources scheduled with the node affinity strategy are not reported to the autoscaler (#24250)

Ray Client

💫Enhancements:

  • Add option to configure ray.get with >2 sec timeout (#22165)
  • Return None from internal KV for non-existent keys (#24058)

🔨 Fixes:

  • Fix deadlock by switching to SimpleQueue on Python 3.7 and newer in async dataclient (#23995)

Ray Core

🎉 New Features:

  • Ray usage stats collection is now on by default (guarded by an opt-out prompt)
  • Alpha support for Python 3.10 (on Linux and Mac)
  • Node affinity scheduling strategy (#23381); see the sketch after this list
  • Add metrics for disk and network I/O (#23546)
  • Improve exponential backoff when connecting to Redis (#24150)
  • Add the ability to inject a setup hook for customization of runtime_env on init (#24036)
  • Add a utility to check GCS / Ray cluster health (#23382)
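
As referenced above, the node affinity strategy pins a task or actor to a specific node. A minimal sketch, assuming node_id is passed as a hex string and that soft=False makes the placement a hard constraint:

```python
import ray
from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy

ray.init()

@ray.remote
def where_am_i():
    return ray.get_runtime_context().node_id.hex()

# Pin the task to the driver's node; soft=False fails rather than falling back.
strategy = NodeAffinitySchedulingStrategy(
    node_id=ray.get_runtime_context().node_id.hex(), soft=False
)
print(ray.get(where_am_i.options(scheduling_strategy=strategy).remote()))
```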

🔨 Fixes:

  • Fixed internal storage S3 bugs (#24167)
  • Ensure "get_if_exists" takes effect in the decorator. (#24287)
  • Reduce memory usage for Pubsub channels that do not require total memory cap (#23985)
  • Add memory buffer limit in publisher for each subscribed entity (#23707)
  • Use gRPC instead of socket for GCS client health check (#23939)
  • Trim size of Reference struct (#23853)
  • Enable debugging into pickle backend (#23854)

🏗 Architecture refactoring:

  • GCS storage interfaces unification (#24211)
  • Cleanup pickle5 version check (#23885)
  • Simplify options handling (#23882)
  • Moved function and actor importer away from pubsub (#24132)
  • Replace the legacy ResourceSet & SchedulingResources at Raylet (#23173)
  • Unification of AddSpilledUrl and UpdateObjectLocationBatch RPCs (#23872)
  • Save task spec in separate table (#22650)

Ray Datasets

🎉 New Features:

  • Performance improvement: the aggregation computation is vectorized (#23478)
  • Performance improvement: bulk parquet file reading is optimized with the fast metadata provider (#23179)
  • Performance improvement: more efficient move semantics for Datasets block processing (#24127)
  • Supports Datasets lineage serialization (aka out-of-band serialization) (#23821, #23931, #23932)
  • Supports native Tensor views in map processing for pure-tensor datasets (#24812)
  • Implemented push-based shuffle (#24281)

🔨 Fixes:

  • Documentation improvement: Getting Started page (#24860)
  • Documentation improvement: FAQ (#24932)
  • Documentation improvement: End to end examples (#24874)
  • Documentation improvement: Feature guide - Creating Datasets (#24831)
  • Documentation improvement: Feature guide - Saving Datasets (#24987)
  • Documentation improvement: Feature guide - Transforming Datasets (#25033)
  • Documentation improvement: Datasets APIs docstrings (#24949)
  • Performance: fixed block prefetching (#23952)
  • Fixed zip() for Pandas dataset (#23532)

🏗 Architecture refactoring:

  • Refactored LazyBlockList (#23624)
  • Added path-partitioning support for all content types (#23624)
  • Added fast metadata provider and refactored Parquet datasource (#24094)

RLlib

🎉 New Features:

  • Replay buffer API: First algorithms are using the new replay buffer API, allowing users to define and configure their own custom buffers or use RLlib’s built-in ones: SimpleQ, DQN (#24164, #22842, #23523, #23586)
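
A hypothetical sketch of configuring one of these built-in buffers; the replay_buffer_config key and the type/capacity names below are assumptions based on the release notes, not verified against this exact release.

```python
# Hypothetical sketch: pointing DQN at a built-in replay buffer via its config.
from ray.rllib.agents.dqn import DQNTrainer

config = {
    "env": "CartPole-v1",
    "replay_buffer_config": {
        "type": "MultiAgentReplayBuffer",  # assumed built-in buffer name
        "capacity": 50_000,                # assumed capacity key
    },
}
trainer = DQNTrainer(config=config)
print(trainer.train()["episode_reward_mean"])
```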

🏗 Architecture refactoring:

  • More algorithms moved into the training iteration function API (no longer using execution plans). Users can now more easily read, develop, and debug RLlib’s algorithms: A2C, APEX-DQN, CQL, DD-PPO, DQN, MARWIL + BC, PPO, QMIX, SAC, SimpleQ, SlateQ, Trainers defined in examples folder. (#22937, #23420, #23673, #24164, #24151, #23735, #24157, #23798, #23906, #24118, #22842, #24166, #23712). This will be fully completed and documented with Ray 2.0.
  • Make RolloutWorkers (optionally) recoverable after failure via the new recreate_failed_workers=True config flag. (#23739)
  • POC for new TrainerConfig objects (instead of python config dicts): PPOConfig (for PPOTrainer) and PGConfig (for PGTrainer). (#24295, #23491)
  • Hard-deprecate build_trainer() (trainer_templates.py): All custom Trainers should now sub-class from any existing Trainer class. (#23488)

💫Enhancements:

  • Add support for complex observations in CQL. (#23332)
  • Bandit support for tf2. (#22838)
  • Make actions sent by RLlib to the env immutable. (#24262)
  • Memory leak finding toolset using tracemalloc + CI memory leak tests. (#15412)
  • Enable DD-PPO to run on Windows. (#23673)

🔨 Fixes:

  • APPO eager fix (APPOTFPolicy gets wrapped as_eager() twice by mistake). (#24268)
  • CQL gets stuck when deprecated timesteps_per_iteration is used (use min_train_timesteps_per_reporting instead). (#24345)
  • SlateQ runs on GPU (torch). (#23464)
  • Other bug fixes: #24016, #22050, #23814, #24025, #23740, #23741, #24006, #24005, #24273, #22010, #24271, #23690, #24343, #23419, #23830, #24335, #24148, #21735, #24214, #23818, #24429

Ray Workflow

🎉 New Features:

  • Workflow step is deprecated (#23796, #23728, #23456, #24210)

🔨 Fixes:

  • Fix one bug where max_retries is not aligned with ray core’s max_retries. (#22903)

🏗 Architecture refactoring:

  • Integrate ray storage in workflow (#24120)

Tune

🎉 New Features:

  • Add RemoteTask based sync client (#23605) (rsync not required anymore!)
  • Chunk file transfers in cross-node checkpoint syncing (#23804)
  • Also interrupt training when SIGUSR1 received (#24015)
  • Enable reuse_actors by default for function trainables (#24040)
  • Enable AsyncHyperband to continue training for last trials after max_t (#24222)

💫Enhancements:

  • Improve testing (#23229)
  • Improve docstrings (#23375)
  • Improve documentation (#23477, #23924)
  • Simplify trial executor logic (#23396)
  • Make MLflowLoggerUtil copyable (#23333)
  • Use new Checkpoint interface internally (#22801)
  • Beautify Optional typehints (#23692)
  • Improve missing search dependency info (#23691)
  • Skip tmp checkpoints in analysis and read iteration from metadata (#23859)
  • Treat checkpoints with nan value as worst (#23862)
  • Clean up base ProgressReporter API (#24010)
  • De-clutter log outputs in trial runner (#24257)
  • Hyperopt searcher now supports tune.choice([[1,2],[3,4]]). (#24181)

🔨Fixes:

  • Optuna should ignore additional results after trial termination (#23495)
  • Fix PTL multi GPU link (#23589)
  • Improve Tune cloud release tests for durable storage (#23277)
  • Fix tensorflow distributed trainable docstring (#23590)
  • Simplify experiment tag formatting, clean directory names (#23672)
  • Don't include nan metrics for best checkpoint (#23820)
  • Fix syncing between nodes in placement groups (#23864)
  • Fix memory resources for head bundle (#23861)
  • Fix empty CSV headers on trial restart (#23860)
  • Fix checkpoint sorting with nan values (#23909)
  • Make Timeout stopper work after restoring in the future (#24217)
  • Small fixes to tune-distributed for new restore modes (#24220)

Train

Most distributed training enhancements will be captured in the new Ray AIR category!

🔨Fixes:

  • Copy resources_per_worker to avoid modifying user input
  • Fix train.torch.get_device() for fractional GPU or multiple GPU per worker case (#23763)
  • Fix multi node horovod bug (#22564)
  • Fully deprecate Ray SGD v1 (#24038)
  • Improvements to fault tolerance (#22511)
  • MLflow start run under correct experiment (#23662)
  • Raise helpful error when required backend isn't installed (#23583)
  • Warn pending deprecation for ray.train.Trainer and ray.tune DistributedTrainableCreators (#24056)

📖Documentation:

  • Add FAQ (#22757)

Ray AIR

🎉 New Features:

  • HuggingFaceTrainer & HuggingFacePredictor (#23615, #23876)
  • SklearnTrainer & SklearnPredictor (#23803, #23850)
  • HorovodTrainer (#23437)
  • RLTrainer & RLPredictor (#23465, #24172)
  • BatchMapper preprocessor (#23700)
  • Categorizer preprocessor (#24180)
  • BatchPredictor (#23808)

💫Enhancements:

  • Add Checkpoint.as_directory() for efficient checkpoint fs processing (#23908); see the sketch after this list
  • Add config to Result, extend ResultGrid.get_best_config (#23698)
  • Add Scaling Config validation (#23889)
  • Add tuner test. (#23364)
  • Move storage handling to pyarrow.fs.FileSystem (#23370)
  • Refactor _get_unique_value_indices (#24144)
  • Refactor most_frequent SimpleImputer (#23706)
  • Set name of Trainable to match Trainer (#23697)
  • Use checkpoint.as_directory() instead of cleaning up manually (#24113)
  • Improve file packing/unpacking (#23621)
  • Make Dataset ingest configurable (#24066)
  • Remove postprocess_checkpoint (#24297)
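
As referenced in the as_directory() item above, checkpoints can now be materialized on the local filesystem with automatic cleanup. A minimal sketch, assuming the alpha ray.ml namespace AIR used in this release (renamed to ray.air in Ray 2.0); the checkpoint contents are illustrative.

```python
from ray.ml.checkpoint import Checkpoint

ckpt = Checkpoint.from_dict({"weights": [1, 2, 3]})
with ckpt.as_directory() as checkpoint_dir:
    # Read checkpoint files directly from disk; the temporary directory
    # is cleaned up automatically when the block exits.
    print("checkpoint materialized at", checkpoint_dir)
```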

🔨Fixes:

  • Better exception handling (#23695)
  • Do not deepcopy RunConfig (#23499)
  • Reduce unnecessary stacktraces (#23475)
  • Tuner should use run_config from Trainer by default (#24079)
  • Use custom fsspec handler for GS (#24008)

📖Documentation:

  • Add distributed torch_geometric example (#23580)
  • GNN example cleanup (#24080)

Serve

🎉 New Features:

  • The Serve logging system was revamped! Access logs are now turned on by default. (#23558)
  • New Gradio notebook example for Ray Serve deployments (#23494)
  • Serve now includes full traceback in deployment update error message (#23752)

💫Enhancements:

  • Serve Deployment Graph was enhanced with performance fixes and structural clean up. (#24199, #24026, #24065, #23984)
  • End to end tutorial for deployment graph (#23512, #22771, #23536)
  • input_schema is now renamed to http_adapter for usability (#24353, #24191)
  • Progress towards a declarative REST API (#23232, #23481)
  • Code cleanup and refactoring (#24067, #23578, #23934, #23759)
  • Protobuf based controller API for cross language client (#23004)

🔨Fixes:

  • Handle None in ReplicaConfig's resource_dict (#23851)
  • Set "memory" to None in ray_actor_options by default (#23619)
  • Make serve.shutdown() shutdown remote Serve applications (#23476)
  • Ensure replica reconfigure runs after allocation check (#24052)
  • Allow cloudpickle serializable objects as init args/kwargs (#24034)
  • Use controller namespace when getting actors (#23896)

Dashboard

🔨Fixes:

  • Add toggle to enable showing node disk usage on K8s (#24416, #24440)
  • Add job submission id as field to job snapshot (#24303)

Thanks

Many thanks to all those who contributed to this release! @matthewdeng, @scv119, @xychu, @iycheng, @takeshi-yoshimura, @iasoon, @wumuzi520, @thetwotravelers, @maxpumperla, @krfricke, @jgiannuzzi, @kinalmehta, @avnishn, @dependabot[bot], @sven1977, @raulchen, @acxz, @stephanie-wang, @mgelbart, @xwjiang2010, @jon-chuang, @pdames, @ericl, @edoakes, @gjoseph92, @ddelange, @bkasper, @sriram-anyscale, @Zyiqin-Miranda, @rkooo567, @jbedorf, @architkulkarni, @osanseviero, @simonsays1980, @clarkzinzow, @DmitriGekhtman, @ashione, @smorad, @andenrx, @mattip, @bveeramani, @chaokunyang, @richardliaw, @larrylian, @Chong-Li, @fwitter, @shrekris-anyscale, @gjoliver, @simontindemans, @silky, @grypesc, @ijrsvt, @daikeshi, @kouroshHakha, @mwtian, @mesjou, @sihanwang41, @PavelCz, @czgdp1807, @jianoaix, @GuillaumeDesforges, @pcmoritz, @arsedler9, @n30111, @kira-lin, @ckw017, @max0x7ba, @Yard1, @XuehaiPan, @lchu-ibm, @HJasperson, @SongGuyang, @amogkam, @liuyang-my, @WangTaoTheTonic, @jovany-wang, @simon-mo, @dynamicwebpaige, @suquark, @ArturNiederfahrenhorst, @jjyao, @KepingYan, @jiaodong, @frosk1

ray-1.12.1

1 year ago

Patch release with the following fixes:

ray-1.11.1

2 years ago

Patch release including fixes for the following issues:

ray-1.12.0

2 years ago

Highlights

  • Ray AI Runtime (AIR), an open-source toolkit for building end-to-end ML applications on Ray, is now in Alpha. AIR is an effort to unify the experience of using different Ray libraries (Ray Data, Train, Tune, Serve, RLlib). You can find more information on the docs or on the public RFC.
    • Getting involved with Ray AIR. We’ll be holding office hours, development sprints, and other activities as we get closer to the Ray AIR Beta/GA release. Want to join us? Fill out this short form!
  • Ray usage data collection is now off by default. If you have any questions or concerns, please comment on the RFC.
  • New algorithms are added to RLlib: SlateQ & Bandits (for recommender systems use cases) and AlphaStar (multi-agent, multi-GPU w/ league-based self-play)
  • Ray Datasets: new lazy execution model with automatic task fusion and memory-optimizing move semantics; first-class support for Pandas DataFrame blocks; efficient random access datasets.

Ray Autoscaler

🎉 New Features

  • Support cache_stopped_nodes on Azure (#21747)
  • AWS Cloudwatch support (#21523)

💫 Enhancements

  • Improved documentation and standards around built-in autoscaler node providers. (#22236, #22237)
  • Improved KubeRay support (#22987, #22847, #22348, #22188)
  • Remove redis requirement (#22083)

🔨 Fixes

  • No longer print infeasible warnings for internal placement group resources. Placement groups which cannot be satisfied by the autoscaler still trigger warnings. (#22235)
  • Default AMIs per AWS region are updated/fixed. (#22506)
  • GCP node termination updated (#23101)
  • Retry legacy k8s operator on monitor failure (#22792)
  • Cap min and max workers for manually managed on-prem clusters (#21710)
  • Fix initialization artifacts (#22570)
  • Ensure initial scaleup with high upscaling_speed isn't limited. (#21953)

Ray Client

🎉 New Features:

  • ray.init has consistent return value in client mode and driver mode (#21355)

💫Enhancements:

  • Gets and puts are streamed to support arbitrary object sizes (#22100, #22327)

🔨 Fixes:

  • Fix ray client object ref releasing in wrong context (#22025)

Ray Core

🎉 New Features

  • RuntimeEnv:
    • Support setting timeout for runtime_env setup. (#23082)
    • Support setting pip_check and pip_version for runtime_env. (#22826, #23306)
    • env_vars will take effect when the pip install command is executed. (temporarily ineffective in conda) (#22730)
    • Support strongly-typed API ray.runtime.RuntimeEnv to define runtime env; see the sketch after this list. (#22522)
    • Introduce virtualenv to isolate the pip type runtime env. (#21801,#22309)
  • Raylet now shares fate with the dashboard agent, and the dashboard agent stays alive when it detects port conflicts. (#22382,#23024)
  • Enable dashboard in the minimal ray installation (#21896)
  • Add task and object reconstruction status to ray memory CLI tools (#22317)
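
As referenced in the runtime env items above, runtime environments can now be defined with a typed object instead of a raw dict. A minimal sketch; the import path shown (ray.runtime_env) is the one later releases expose, and the pip pin and env var values are illustrative.

```python
import ray
from ray.runtime_env import RuntimeEnv

# Build a typed runtime env instead of a raw dict.
env = RuntimeEnv(
    pip=["requests==2.26.0"],   # illustrative pinned dependency
    env_vars={"MY_FLAG": "1"},  # illustrative env var
)
ray.init(runtime_env=env)
```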

🔨 Fixes

  • Report only memory usage of pinned object copies to improve scaledown. (#22020)
  • Scheduler:
    • No spreading if a node is selected for lease request due to locality. (#22015)
    • Placement group scheduling: Non-STRICT_PACK PGs should be sorted by resource priority, size (#22762)
    • Round robin during spread scheduling (#21303)
  • Object store:
    • Increment ref count when creating an ObjectRef to prevent object from going out of scope (#22120)
    • Cleanup handling for nondeterministic object size during transfer (#22639)
    • Fix bug in fusion for spilled objects (#22571)
    • Handle IO worker failures correctly (#20752)
  • Improve ray stop behavior (#22159)
  • Avoid warning when receiving too many logs from a different job (#22102)
  • GCS resource manager bug fix and clean up. (#22462, #22459)
  • Release GIL when running parallel_memcopy() / memcpy() during serializations. (#22492)
  • Fix registering serializer before initializing Ray. (#23031)

🏗 Architecture refactoring

  • Ray distributed scheduler refactoring: (#21927, #21992, #22160, #22359, #22722, #22817, #22880, #22893, #22885, #22597, #22857, #23124)
  • Removed support for bootstrapping with Redis.

Ray Data Processing

🎉 New Features

  • Big Performance and Stability Improvements:
    • Add lazy execution mode with automatic stage fusion and optimized memory reclamation via block move semantics (#22233, #22374, #22373, #22476)
    • Support for random access datasets, providing efficient random access to rows via binary search (#22749)
    • Add automatic round-robin load balancing for reading and shuffle reduce tasks, obviating the need for the _spread_resource_prefix hack (#21303)
  • More Efficient Tabular Data Wrangling:
    • Add first-class support for Pandas blocks, removing expensive Arrow <-> Pandas conversion costs (#21894)
    • Expose TableRow API + minimize copies/type-conversions on row-based ops (#22305)
  • Groupby + Aggregations Improvements:
    • Support mapping over groupby groups (#22715)
    • Support ignoring nulls in aggregations (#20787)
  • Improved Dataset Windowing:
    • Support windowing a dataset by bytes instead of number of blocks (#22577)
    • Batch across windows in DatasetPipelines (#22830)
  • Better Text I/O:
    • Support streaming snappy compression for text files (#22486)
    • Allow for custom decoding error handling in read_text() (#21967)
    • Add option for dropping empty lines in read_text() (#22298)
  • New Operations:
    • Add add_column() utility for adding derived columns (#21967); see the sketch after this list
  • Support for metadata provider callback for read APIs (#22896)
  • Support configuring autoscaling actor pool size (#22574)
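
As referenced in the New Operations item above, add_column() derives a new column from existing ones; the mapping function receives a pandas batch and returns the new column. A minimal sketch with illustrative column names and data:

```python
import pandas as pd
import ray

ds = ray.data.from_pandas(pd.DataFrame({"a": [1, 2, 3]}))
# Derive "double_a" from "a"; the lambda receives a pandas DataFrame batch.
ds = ds.add_column("double_a", lambda df: df["a"] * 2)
print(ds.take())
```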

🔨 Fixes

  • Force lazy datasource materialization in order to respect DatasetPipeline stage boundaries (#21970)
  • Simplify lifetime of designated block owner actor, and don’t create it if dynamic block splitting is disabled (#22007)
  • Respect 0 CPU resource request when using manual resource-based load balancing (#22017)
  • Remove batch format ambiguity by always converting Arrow batches to Pandas when batch_format="native" is given (#21566)
  • Fix leaked stats actor handle due to closure capture reference counting bug (#22156)
  • Fix boolean tensor column representation and slicing (#22323)
  • Fix unhandled empty block edge case in shuffle (#22367)
  • Fix unserializable Arrow Partitioning spec (#22477)
  • Fix incorrect iter_epochs() batch format (#22550)
  • Fix infinite iter_epochs() loop on unconsumed epochs (#22572)
  • Fix infinite hang on split() when num_shards < num_rows (#22559)
  • Patch Parquet file fragment serialization to prevent metadata fetching (#22665)
  • Don’t reuse task workers for actors or GPU tasks (#22482)
  • Pin pipeline executor actors to driver node to allow for lineage-based fault tolerance for pipelines (#22715)
  • Always use non-empty blocks to determine schema (#22834)
  • API fix bash (#22886)
  • Make label_column optional for to_tf() so it can be used for inference (#22916)
  • Fix schema() for DatasetPipelines (#23032)
  • Fix equalized split when num_splits == num_blocks (#23191)

💫 Enhancements

  • Optimize Parquet metadata serialization via batching (#21963)
  • Optimize metadata read/write for Ray Client (#21939)
  • Add sanity checks for memory utilization (#22642)

🏗 Architecture refactoring

  • Use threadpool to submit DatasetPipeline stages (#22912)

RLlib

🎉 New Features

  • New “AlphaStar” algorithm: A parallelized, multi-agent/multi-GPU learning algorithm, implementing league-based self-play. (#21356, #21649)
  • SlateQ algorithm has been re-tested, upgraded (multi-GPU capable, TensorFlow version), and bug-fixed (added to weekly learning tests). (#22389, #23276, #22544, #22543, #23168, #21827, #22738)
  • Bandit algorithms: Moved into the agents folder as first-class citizens, given a TensorFlow version, and unified w/ other agents’ APIs. (#22821, #22028, #22427, #22465, #21949, #21773, #21932, #22421)
  • ReplayBuffer API (in progress): Allow users to customize and configure their own replay buffers and use these inside custom or built-in algorithms. (#22114, #22390, #21808)
  • Datasets support for RLlib: Dataset Reader/Writer and documentation. (#21808, #22239, #21948)

🔨 Fixes

  • Fixed memory leak in SimpleReplayBuffer. (#22678)
  • Fixed Unity3D built-in examples: Action bounds from -inf/inf to -1.0/1.0. (#22247)
  • Various bug fixes. (#22350, #22245, #22171, #21697, #21855, #22076, #22590, #22587, #22657, #22428, #23063, #22619, #22731, #22534, #22074, #22078, #22641, #22684, #22398, #21685)

🏗 Architecture refactoring

  • A3C: Moved into new training_iteration API (from execution_plan API). Led to a ~2.7x performance increase on an Atari + CNN + LSTM benchmark. (#22126, #22316)
  • Make multiagent->policies_to_train more flexible via callable option (alternative to providing a list of policy IDs). (#20735)

💫Enhancements:

  • Env pre-checking module now active by default. (#22191)
  • Callbacks: Added on_sub_environment_created and on_trainer_init callback options. (#21893, #22493)
  • RecSim environment wrappers: Ability to use Google’s RecSim for recommender systems more easily w/ RLlib algorithms (3 RLlib-ready example environments). (#22028, #21773, #22211)
  • MARWIL loss function enhancement (exploratory term for stddev). (#21493)

📖Documentation:

  • Docs enhancements: Setup-dev instructions; Ray datasets integration. (#22239)
  • Other doc enhancements and fixes. (#23160, #23226, #22496, #22489, #22380)

Ray Workflow

🎉 New Features:

  • Support skipping checkpointing.

🔨 Fixes:

  • Fix an issue where the event loop is not set.

Tune

🎉 New Features:

  • Expose new checkpoint interface to users (#22741)

🔨Fixes:

  • Cleanup incorrectly formatted strings (Part 2: Tune) (#23129)
  • Fix error handling for the fail_fast case. (#22982)
  • Remove Trainable.update_resources (#22471)
  • Bump flaml from 0.6.7 to 0.9.7 in /python/requirements/ml (#22071)
  • Fix analysis without registered trainable (#21475)
  • Update Lightning examples to support PTL 1.5 (#20562)
  • Fix WandbTrainableMixin config for rllib trainables (#22063)
  • [wandb] Use resume=False by default (#21892)

📖Documentation:

  • Tune docs overhaul (first part) (#22112)
  • Tune overhaul part II (#22656)
  • Note TPESampler performance issues in docs (#22545)
  • Hyperopt notebook (#22315)

Train

🎉 New Features

  • Integration with PyTorch profiler. Easily enable the PyTorch profiler with Ray Train to profile training and visualize stats in TensorBoard (#22345).
  • Automatic pipelining of host to device transfer. While training is happening on one batch of data, the next batch of data is concurrently being moved from CPU to GPU (#22716, #22974)
  • Automatic Mixed Precision. Easily enable PyTorch automatic mixed precision during training (#22227).

💫 Enhancements

  • Add utility function to enable reproducibility for PyTorch training (#22851)
  • Add initial support for metrics aggregation (#22099)
  • Add support for trainer.best_checkpoint and Trainer.load_checkpoint_path. You can now directly access the best in memory checkpoint, or load an arbitrary checkpoint path to memory. (#22306)

🔨 Fixes

  • Add a utility function to turn off TF autosharding (#21887)
  • Fix fault tolerance for Tensorflow training (#22508)
  • Train utility methods (train.report(), etc.) can now be called outside of a Train session (#21969)
  • Fix accuracy calculation for CIFAR example (#22292)
  • Better error message for placement group time out (#22845)

📖 Documentation

  • Update docs for ray.train.torch import (#22555)
  • Clarify shuffle documentation in prepare_data_loader (#22876)
  • Denote train.torch.get_device as a Public API (#22024)
  • Minor fixes on Ray Train user guide doc (#22379)

Serve

🎉 New Features

  • Deployment Graph API is now in alpha. It provides a way to build, test and deploy complex inference graphs composed of many deployments. (#23177, #23252, #23301, #22840, #22710, #22878, #23208, #23290, #23256, #23324, #23289, #23285, #22473, #23125, #23210)
  • New experimental REST API and CLI for creating and managing deployments. ( #22839, #22257, #23198, #23027, #22039, #22547, #22578, #22611, #22648, #22714, #22805, #22760, #22917, #23059, #23195, #23265, #23157, #22706, #23017, #23026, #23215)
  • New sets of HTTP adapters making it easy to build simple applications, as well as Ray AI Runtime model wrappers in alpha. (#22913, #22914, #22915, #22995)
  • New health_check API for end-to-end user-provided health checks; see the sketch below. (#22178, #22121, #22297)
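
A minimal sketch of the user-provided health check: Serve periodically calls a deployment's check_health() method and restarts replicas that raise. The deployment body and failure condition are illustrative, and the method name follows the Serve docs.

```python
from ray import serve

@serve.deployment
class ExternalService:
    def __init__(self):
        self.connected = True  # illustrative connection state

    def check_health(self):
        # Raising here marks this replica unhealthy so Serve can restart it.
        if not self.connected:
            raise RuntimeError("lost connection to backing service")

    def __call__(self, request):
        return "ok"

serve.start()
ExternalService.deploy()
```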

🔨 Fixes

  • Autoscaling algorithm will now relinquish most idle nodes when scaling down (#22669)
  • Serve can now manage Java replicas (#22628)
  • Added a hands-on self-contained MLflow and Ray Serve deployment example (#22192)
  • Added root_path setting to http_options (#21090)
  • Remove shard_key, http_method, and http_headers in ServeHandle (#21590)

Dashboard

🔨Fixes:

  • Update CPU and memory reporting in Kubernetes. (#21688)

Thanks

Many thanks to all those who contributed to this release! @edoakes, @pcmoritz, @jiaodong, @iycheng, @krfricke, @smorad, @kfstorm, @jjyyxx, @rodrigodelazcano, @scv119, @dmatrix, @avnishn, @fyrestone, @clarkzinzow, @wumuzi520, @gramhagen, @XuehaiPan, @iasoon, @birgerbr, @n30111, @tbabej, @Zyiqin-Miranda, @suquark, @pdames, @tupui, @ArturNiederfahrenhorst, @ashione, @ckw017, @siddgoel, @Catch-Bull, @vicyap, @spolcyn, @stephanie-wang, @mopga, @Chong-Li, @jjyao, @raulchen, @sven1977, @nikitavemuri, @jbedorf, @mattip, @bveeramani, @czgdp1807, @dependabot[bot], @Fabien-Couthouis, @willfrey, @mwtian, @SlowShip, @Yard1, @WangTaoTheTonic, @Wendi-anyscale, @kaushikb11, @kennethlien, @acxz, @DmitriGekhtman, @matthewdeng, @mraheja, @orcahmlee, @richardliaw, @dsctt, @yupbank, @Jeffwan, @gjoliver, @jovany-wang, @clay4444, @shrekris-anyscale, @jwyyy, @kyle-chen-uber, @simon-mo, @ericl, @amogkam, @jianoaix, @rkooo567, @maxpumperla, @architkulkarni, @chenk008, @xwjiang2010, @robertnishihara, @qicosmos, @sriram-anyscale, @SongGuyang, @jon-chuang, @wuisawesome, @valiantljk, @simonsays1980, @ijrsvt