Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
The Ray 2.3.1 patch release contains fixes for multiple components:
- `zip()` (https://github.com/ray-project/ray/pull/32795)
- `serve run` to use Ray Client instead of Ray Jobs (https://github.com/ray-project/ray/pull/32976)
- `max_concurrent_queries` being ignored when autoscaling (https://github.com/ray-project/ray/pull/32772 and https://github.com/ray-project/ray/pull/33022)
- `--block` (https://github.com/ray-project/ray/pull/32961)

💫Enhancements:
- `set_preprocessor` method to `Checkpoint` (#31721)
- `save_checkpoints` to `upload_checkpoints` (#31582)
- `WandbLoggerCallback` example (#31625)
- `DLPredictor.call_model` `tensor` parameter to `inputs` (#30574)
- `use_gpu` to `HuggingFacePredictor` (#30945)
- `Checkpoint` improvements (#30948)
- `TensorflowCheckpoint.get_model` (#31203)

🔨 Fixes:
📖Documentation:
🏗 Architecture refactoring:
🎉 New Features:
💫Enhancements:
- `ds.map_batches()` (#30000)

🔨 Fixes:
📖Documentation:
🎉 New Features:
💫Enhancements:
- `NCCL_SOCKET_IFNAME` to blacklist `veth` (#31824)
- `RunConfig` is used when there are multiple places to specify it (#31959)
- `ScalingConfig` to be optional for `DataParallelTrainer`s if already in Tuner `param_space` (#30920)

🔨 Fixes:
- `Preprocessor` configs when using stream API. (#31725)
- `fail_fast="raise"` (#30817)
- `SklearnTrainer` (#30593)

📖Documentation:
🏗 Architecture refactoring:
💫Enhancements:
- `validate_upload_dir` to Syncer (#30869)

🔨 Fixes:
- `AxSearch` save and nan/inf result handling (#31147)
- `AxSearch` search space conversion for fixed list hyperparameters (#31088)
- `Tuner.restore` (#30893)
- `sort_by_metric` with nested metrics (#30906)
- `fail_fast="raise"` (#30817)

📖Documentation:
🏗 Architecture refactoring:
- `overwrite_trainable` argument in Tuner restore to `trainable` (#32059)

🎉 New Features:
💫Enhancements:
🔨 Fixes:
📖Documentation:
🎉 New Features:
💫Enhancements:
- `experimental_relax_shapes` (but `reduce_retracing` instead). (#29214)
- `__str__()` method to PolicyMap. (#31098)
- `contrib` folder. (#30992)
- `AlgorithmConfig.overrides()` to replace `multiagent->policies->config` and `evaluation_config` dicts. (#30879)
- `deprecation_warning(.., error=True)` should raise `ValueError`, not `DeprecationWarning`. (#30255)
- `gym.spaces.Text` serialization. (#30794)
- `MultiAgentBatch` to `SampleBatch` in offline_rl.py. (#30668)
- `Algorithm.train()` return Tune-style config dict (instead of AlgorithmConfig object). (#30591)

🔨 Fixes:
- `try_import_..()`. (#31332)
- `tensorflow_probability` imports. (#31331)
- `PolicyMap.__del__()` to also remove a deleted policy ID from the internal deque. (#31388)
- `get_model_v2()` instead of `get_model()` with MADDPG. (#30905)

📖Documentation:
🎉 New Features:
💫Enhancements:
🔨 Fixes:
📖Documentation:
🎉 New Features:
💫Enhancements:
- `ray status` and autoscaler (#32337)

🔨 Fixes:
📖Documentation:
🎉 New Features:
📖Documentation:
Many thanks to all those who contributed to this release!
@minerharry, @scottsun94, @iycheng, @DmitriGekhtman, @jbedorf, @krfricke, @simonsays1980, @eltociear, @xwjiang2010, @ArturNiederfahrenhorst, @richardliaw, @avnishn, @WeichenXu123, @Capiru, @davidxia, @andreapiso, @amogkam, @sven1977, @scottjlee, @kylehh, @yhna940, @rickyyx, @sihanwang41, @n30111, @Yard1, @sriram-anyscale, @Emiyalzn, @simran-2797, @cadedaniel, @harelwa, @ijrsvt, @clarng, @pabloem, @bveeramani, @lukehsiao, @angelinalg, @dmatrix, @sijieamoy, @simon-mo, @jbesomi, @YQ-Wang, @larrylian, @c21, @AndreKuu, @maxpumperla, @architkulkarni, @wuisawesome, @justinvyu, @zhe-thoughts, @matthewdeng, @peytondmurray, @kevin85421, @tianyicui-tsy, @cassidylaidlaw, @gvspraveen, @scv119, @kyuyeonpooh, @Siraj-Qazi, @jovany-wang, @ericl, @shrekris-anyscale, @Catch-Bull, @jianoaix, @christy, @MisterLin1995, @kouroshHakha, @pcmoritz, @csko, @gjoliver, @clarkzinzow, @SongGuyang, @ckw017, @ddelange, @alanwguo, @Dhul-Husni, @Rohan138, @rkooo567, @fzyzcjy, @chaokunyang, @0x2b3bfa0, @zoltan-fedor, @Chong-Li, @crypdick, @jjyao, @emmyscode, @stephanie-wang, @starpit, @smorad, @nikitavemuri, @zcin, @tbukic, @ayushthe1, @mattip
Ray 2.2 is a stability-focused release, featuring stability improvements across many Ray components.
🎉 New Features:
💫Enhancements:
🔨 Fixes:
📖Documentation:
🏗 Architecture refactoring:
🎉 New Features:
- `select_columns()` to select a subset of columns (#29081)
- `write_tfrecords()` to write TFRecord files (#29448)
- `from_torch()` to create dataset from Torch dataset (#29588)
- `from_tf()` to create dataset from TensorFlow dataset (#29591)
- `batch_size` in `BatchMapper` (#29193)

💫Enhancements:
- `include_paths` in `read_images()` to return image file path (#30007)
- `to_pandas()` and `to_dask()` (#29417)
- `read_tfrecords()` output from Pandas to Arrow format (#30390)
- `str` exclude in `Concatenator` (#29443)

🔨 Fixes:
- `random_shuffle()` (#29276)
- `random_shuffle_each_window()` (#29482)
- `iter_batches()` to not return empty batch (#29638)
- `map_batches()` to fetch input blocks on-demand (#29289)
- `take_all()` to not accept limit argument (#29746)
- `map_groups()` (#30172)
- `stats()` call causing Dataset schema to be unset (#29635)
- `batch_format` is not specified for `BatchMapper` (#30366)

📖Documentation:
- `map_batches()` documentation about execution model and UDF pickle-ability requirement (#29233)
- `to_tf()` docstring (#29464)

🎉 New Features:
💫Enhancements:
🔨 Fixes:
📖Documentation:
🏗 Architecture refactoring:
🎉 New Features:
- `Tuner.restore` work with relative experiment paths (#30363)
- `Tuner.restore` from a local directory that has moved (#29920)

💫Enhancements:
- `with_resources` takes in a `ScalingConfig` (#30259)
- `with_resources` in `with_parameters` (#29740)
- `trial_name_creator` and `trial_dirname_creator` to `TuneConfig` (#30123)
- `BaseTrainer` to `Trainable` once in the Tuner (#30355)
- `remote_checkpoint_dir` work with query strings (#30125)

🔨 Fixes:
- `ResourceChangingScheduler` dropping PGF args (#30304)
- `Tuner` (#29956)
- `TUNE_ORIG_WORKING_DIR` env variable (#30134)

📖Documentation:
- `ResultGrid` and `Result` (#29072)

🏗 Architecture refactoring:
- `setup_wandb()` function (#29828)

🎉 New Features:
💫Enhancements:
🔨 Fixes:
🎉 New Features:
💫Enhancements:
- `from_checkpoint()` for directly instantiating instances from a checkpoint directory w/o knowing the original configuration used or any other information (having the checkpoint is sufficient). For a detailed overview, see here. (#28812, #29772, #29370, #29520, #29328)

🏗 Architecture refactoring:
🔨 Fixes:
📖Documentation:
🎉 New Features:
💫Enhancements:
- `entrypoint_num_cpus`, `entrypoint_num_gpus`, or `entrypoint_resources`. (#28564, #28203)

🔨 Fixes:
- `num_cpus` required by task/actors by default (#30496)

📖Documentation:
💫Enhancements:
🎉 New Features:
- `ray list cluster-events`.

🔨 Fixes:
💫Enhancements:
Many thanks to all those who contributed to this release!
@shrekris-anyscale, @rickyyx, @scottjlee, @shogohida, @liuyang-my, @matthewdeng, @wjrforcyber, @linusbiostat, @clarkzinzow, @justinvyu, @zygi, @christy, @amogkam, @cool-RR, @jiaodong, @EvgeniiTitov, @jjyao, @ilee300a, @jianoaix, @rkooo567, @mattip, @maxpumperla, @ericl, @cadedaniel, @bveeramani, @rueian, @stephanie-wang, @lcipolina, @bparaj, @JoonHong-Kim, @avnishn, @tomsunelite, @larrylian, @alanwguo, @VishDev12, @c21, @dmatrix, @xwjiang2010, @thomasdesr, @tiangolo, @sokratisvas, @heyitsmui, @scv119, @pcmoritz, @bhavika, @yzs981130, @andraxin, @Chong-Li, @clarng, @acxz, @ckw017, @krfricke, @kouroshHakha, @sijieamoy, @iycheng, @gjoliver, @peytondmurray, @xcharleslin, @DmitriGekhtman, @andreichalapco, @vitrioil, @architkulkarni, @simon-mo, @ArturNiederfahrenhorst, @sihanwang41, @pabloem, @sven1977, @avivhaber, @wuisawesome, @jovany-wang, @Yard1
- `read_images()` API for loading data.
- `read_tfrecords()` API to read TFRecord files.
- `on_episode_created()`.

💫Enhancements:
🔨 Fixes:
📖Documentation:
🏗 Architecture refactoring:
🎉 New Features:
- `BatchMapper` (#28418)

💫Enhancements:
- `Dataset.to_dask()` (#28625)
- `ds.limit()` (#27343)

🔨 Fixes:
📖Documentation:
- `limit()` and `take()` docstrings (#27367)

🎉 New Features:
💫Enhancements:
🔨 Fixes:
- `train.torch.get_device()` (#28659)

📖Documentation:
🏗 Architecture refactoring:
🎉 New Features:
- `Tuner.get_results()` to retrieve results after restore (#29083)

💫Enhancements:
🔨 Fixes:
📖Documentation:
🏗 Architecture refactoring:
🎉 New Features:
💫Enhancements:
🔨 Fixes:
📖Documentation:
🎉 New Features:
- `on_episode_created()`. (#28600)

💫Enhancements:
🔨 Fixes:
📖Documentation:
🔨 Fixes:
🎉 New Features:
💫Enhancements:
🔨 Fixes:
- `run_function_on_all_workers` as deprecated until we get rid of this (#29062)

📖Documentation:
💫Enhancements:
📖Documentation:
🎉 New Features:
🔨 Fixes:
📖Documentation:
Many thanks to all those who contributed to this release!
@sihanwang41, @simon-mo, @avnishn, @MyeongKim, @markrogersjr, @christy, @xwjiang2010, @kouroshHakha, @zoltan-fedor, @wumuzi520, @alanwguo, @Yard1, @liuyang-my, @charlesjsun, @DevJake, @matteobettini, @jonathan-conder-sm, @mgerstgrasser, @guidj, @JiahaoYao, @Zyiqin-Miranda, @jvanheugten, @aallahyar, @SongGuyang, @clarng, @architkulkarni, @Rohan138, @heyitsmui, @mattip, @ArturNiederfahrenhorst, @maxpumperla, @vale981, @krfricke, @DmitriGekhtman, @amogkam, @richardliaw, @maldil, @zcin, @jianoaix, @cool-RR, @kira-lin, @gramhagen, @c21, @jiaodong, @sijieamoy, @tupui, @ericl, @anabranch, @se4ml, @suquark, @dmatrix, @jjyao, @clarkzinzow, @smorad, @rkooo567, @jovany-wang, @edoakes, @XiaodongLv, @klieret, @rozsasarpi, @scottsun94, @ijrsvt, @bveeramani, @chengscott, @jbedorf, @kevin85421, @nikitavemuri, @sven1977, @acxz, @stephanie-wang, @PaulFenton, @WangTaoTheTonic, @cadedaniel, @nthai, @wuisawesome, @rickyyx, @artemisart, @peytondmurray, @pingsutw, @olipinski, @davidxia, @stestagg, @yaxife, @scv119, @mwtian, @yuanchi2807, @ntlm1686, @shrekris-anyscale, @cassidylaidlaw, @gjoliver, @ckw017, @hakeemta, @ilee300a, @avivhaber, @matthewdeng, @afarid, @pcmoritz, @Chong-Li, @Catch-Bull, @justinvyu, @iycheng
The Ray 2.0.1 patch release contains dependency upgrades and fixes for multiple components:
- `python -m` (#28140)
- `host` and `port` in Serve config (#27026)
- `name` option with `task_id` (#28151)

Ray 2.0 is an exciting release with enhancements to all libraries in the Ray ecosystem. With this major release, we take strides towards our goal of making distributed computing scalable, unified, and open.
Towards these goals, Ray 2.0 features new capabilities for unifying the machine learning (ML) ecosystem, improving Ray's production support, and making it easier than ever for ML practitioners to use Ray's libraries.
Highlights:
A migration guide for all the different libraries can be found here: Ray 2.0 Migration Guide.
Ray AIR is now in beta. Ray AIR builds upon Ray's libraries to enable end-to-end machine learning workflows and applications on Ray. You can install all dependencies needed for Ray AIR via `pip install -U "ray[air]"`.
🎉 New Features:
💫 Enhancements:
🔨 Fixes:
- `KerasCallback` to work with `TensorflowPredictor` (#26089)
- `predict_pandas` implementation (#25534)
- `_predict_arrow` interface for Predictor (#25579)
- `call_model` API for unsupported output types (#26845)

🎉 New Features:
💫 Enhancements:
🔨 Fixes:
Ray Train has received a major expansion of scope with Ray 2.0.
In particular, the Ray Train module now contains:
- Trainers for common ML frameworks including PyTorch, TensorFlow, XGBoost, LightGBM, HuggingFace, and Scikit-Learn. These APIs help provide end-to-end usage of Ray libraries in Ray AIR workflows.
🎉 New Features:
- `ray.train` namespace. This provides a streamlined API for offline and online inference of PyTorch, TensorFlow, XGBoost models and more. (#25769, #26215, #26251, #26451, #26531, #26600, #26603, #26616, #26845)

💫 Enhancements:
📖 Documentation:
🏗 Architecture refactoring:
🔨 Fixes:
- `__getstate__` method (#25335)

🎉 New Features:
- `get_dataframe()` method to result grid, fix config flattening (#24686)

💫 Enhancements:
- `TempFileLock` (#25408)

📖 Documentation:
🏗 Architecture refactoring:
🔨 Fixes:
- `dataset_tune` (#25402)
- `set_tune_experiment` (#26298)

🎉 New Features:
💫 Enhancements:
🎉 New Features:
💫 Enhancements:
- `evaluation_duration`. (#26000)

🔨 Fixes:
- `self._local_worker` is None (e.g. in evaluation worker sets). (#25332) (#25493)
- `target_network_update_freq` for R2D2. (#25510)
- `input_dict` arg. (#25877)
- `async_parallel_requests` utility. (#26117)
- `torch_utils.py::convert_to_torch_tensor`. (#26863)

🎉 New Features:
🔨 Fixes:
🏗 Architecture refactoring:
🎉 New Features:
💫 Enhancements:
🔨 Fixes:
🏗 Architecture refactoring:
🎉 New Features:
💫 Enhancements:
🔨 Fixes:
- `head_node`, `worker_nodes`, `head_node_type`, `default_worker_node_type`, `autoscaling_mode`, and `target_utilization_fraction` are removed. Check out the migration guide to learn how to migrate to the new versions.

🎉 New Features:
💫 Enhancements:
🔨 Fixes:
🎉 New Features:
Breaking changes:
🔨 Fixes:
Many thanks to all those who contributed to this release!
@ujvl, @xwjiang2010, @EricCousineau-TRI, @ijrsvt, @waleedkadous, @captain-pool, @olipinski, @danielwen002, @amogkam, @bveeramani, @kouroshHakha, @jjyao, @larrylian, @goswamig, @hanming-lu, @edoakes, @nikitavemuri, @enori, @grechaw, @truelegion47, @alanwguo, @sychen52, @ArturNiederfahrenhorst, @pcmoritz, @mwtian, @vakker, @c21, @rberenguel, @mattip, @robertnishihara, @cool-RR, @iamhatesz, @ofey404, @raulchen, @nmatare, @peterghaddad, @n30111, @fkaleo, @Riatre, @zhe-thoughts, @lchu-ibm, @YoelShoshan, @Catch-Bull, @matthewdeng, @VishDev12, @valtab, @maxpumperla, @tomsunelite, @fwitter, @liuyang-my, @peytondmurray, @clarkzinzow, @VeronikaPolakova, @sven1977, @stephanie-wang, @emjames, @Nintorac, @suquark, @javi-redondo, @xiurobert, @smorad, @brucez-anyscale, @pdames, @jjyyxx, @dmatrix, @nakamasato, @richardliaw, @juliusfrost, @anabranch, @christy, @Rohan138, @cadedaniel, @simon-mo, @mavroudisv, @guidj, @rkooo567, @orcahmlee, @lixin-wei, @neigh80, @yuduber, @JiahaoYao, @simonsays1980, @gjoliver, @jimthompson5802, @lucasalavapena, @zcin, @clarng, @jbn, @DmitriGekhtman, @timgates42, @charlesjsun, @Yard1, @mgelbart, @wumuzi520, @sihanwang41, @ghost, @jovany-wang, @siavash119, @yuanchi2807, @tupui, @jianoaix, @sumanthratna, @code-review-doctor, @Chong-Li, @FedericoGarza, @ckw017, @Makan-Ar, @kfstorm, @flanaman, @WangTaoTheTonic, @franklsf95, @scv119, @kvaithin, @wuisawesome, @jiaodong, @mgerstgrasser, @tiangolo, @architkulkarni, @MyeongKim, @ericl, @SongGuyang, @avnishn, @chengscott, @shrekris-anyscale, @Alyetama, @iycheng, @rickyyx, @krfricke, @sijieamoy, @kimikuri, @czgdp1807, @michalsustr
💫Enhancements:
🔨 Fixes:
💫Enhancements:
- `None` from internal KV for non-existent keys (#24058)

🔨 Fixes:
- `SimpleQueue` on Python 3.7 and newer in async `dataclient` (#23995)

🎉 New Features:
🔨 Fixes:
🏗 Architecture refactoring:
🎉 New Features:
🔨 Fixes:
🏗 Architecture refactoring:
🎉 New Features:
🏗 Architecture refactoring:
- `recreate_failed_workers=True` config flag. (#23739)
- `build_trainer()` (trainer_templates.py): All custom Trainers should now sub-class from any existing `Trainer` class. (#23488)

💫Enhancements:
🔨 Fixes:
- `as_eager()` twice by mistake. (#24268)
- `timesteps_per_iteration` is used (use `min_train_timesteps_per_reporting` instead). (#24345)

🎉 New Features:
🔨 Fixes:
🏗 Architecture refactoring:
🎉 New Features:
💫Enhancements:
- `MLflowLoggerUtil` copyable (#23333)

🔨Fixes:
Most distributed training enhancements will be captured in the new Ray AIR category!
🔨Fixes:
- `train.torch.get_device()` for fractional GPU or multiple GPU per worker case (#23763)
- `ray.train.Trainer` and `ray.tune` DistributedTrainableCreators (#24056)

📖Documentation:
🎉 New Features:
- `HuggingFaceTrainer` & `HuggingFacePredictor` (#23615, #23876)
- `SklearnTrainer` & `SklearnPredictor` (#23803, #23850)
- `HorovodTrainer` (#23437)
- `RLTrainer` & `RLPredictor` (#23465, #24172)
- `BatchMapper` preprocessor (#23700)
- `Categorizer` preprocessor (#24180)
- `BatchPredictor` (#23808)

💫Enhancements:
- `Checkpoint.as_directory()` for efficient checkpoint fs processing (#23908)
- `config` to `Result`, extend `ResultGrid.get_best_config` (#23698)
- `_get_unique_value_indices` (#24144)
- `most_frequent` `SimpleImputer` (#23706)

🔨Fixes:
- `run_config` from Trainer per default (#24079)

📖Documentation:
- `torch_geometric` example (#23580)

🎉 New Features:
💫Enhancements:
- `input_schema` is now renamed as `http_adapter` for usability (#24353, #24191)

🔨Fixes:
- `None` in `ReplicaConfig`'s `resource_dict` (#23851)
- `"memory"` to `None` in `ray_actor_options` by default (#23619)
- `serve.shutdown()` shutdown remote Serve applications (#23476)

🔨Fixes:
Many thanks to all those who contributed to this release! @matthewdeng, @scv119, @xychu, @iycheng, @takeshi-yoshimura, @iasoon, @wumuzi520, @thetwotravelers, @maxpumperla, @krfricke, @jgiannuzzi, @kinalmehta, @avnishn, @dependabot[bot], @sven1977, @raulchen, @acxz, @stephanie-wang, @mgelbart, @xwjiang2010, @jon-chuang, @pdames, @ericl, @edoakes, @gjoseph92, @ddelange, @bkasper, @sriram-anyscale, @Zyiqin-Miranda, @rkooo567, @jbedorf, @architkulkarni, @osanseviero, @simonsays1980, @clarkzinzow, @DmitriGekhtman, @ashione, @smorad, @andenrx, @mattip, @bveeramani, @chaokunyang, @richardliaw, @larrylian, @Chong-Li, @fwitter, @shrekris-anyscale, @gjoliver, @simontindemans, @silky, @grypesc, @ijrsvt, @daikeshi, @kouroshHakha, @mwtian, @mesjou, @sihanwang41, @PavelCz, @czgdp1807, @jianoaix, @GuillaumeDesforges, @pcmoritz, @arsedler9, @n30111, @kira-lin, @ckw017, @max0x7ba, @Yard1, @XuehaiPan, @lchu-ibm, @HJasperson, @SongGuyang, @amogkam, @liuyang-my, @WangTaoTheTonic, @jovany-wang, @simon-mo, @dynamicwebpaige, @suquark, @ArturNiederfahrenhorst, @jjyao, @KepingYan, @jiaodong, @frosk1
Patch release with the following fixes:
- `ray-ml` Docker images for CPU will start being built again after they were stopped in Ray 1.9 (https://github.com/ray-project/ray/pull/24266).

Patch release including fixes for the following issues:
- `working_dir` URLs in their runtime environment (https://github.com/ray-project/ray/pull/22018)
- `gym` not pinned, leading to version incompatibility issues (https://github.com/ray-project/ray/pull/23705)

🎉 New Features
💫 Enhancements
🔨 Fixes
🎉 New Features:
💫Enhancements:
🔨 Fixes:
🎉 New Features
🔨 Fixes
- `parallel_memcopy()` / `memcpy()` during serializations. (#22492)

🏗 Architecture refactoring
🎉 New Features
- `_spread_resource_prefix` hack (#21303)
- `TableRow` API + minimize copies/type-conversions on row-based ops (#22305)
- `DatasetPipeline`s (#22830)
- `read_text()` (#21967)
- `read_text()` (#22298)
- `add_column()` utility for adding derived columns (#21967)

🔨 Fixes
- `DatasetPipeline` stage boundaries (#21970)
- `batch_format="native"` is given (#21566)
- `iter_epochs()` batch format (#22550)
- `iter_epochs()` loop on unconsumed epochs (#22572)
- `split()` when `num_shards < num_rows` (#22559)
- `to_tf()` so it can be used for inference (#22916)
- `schema()` for `DatasetPipeline`s (#23032)
- `num_splits == num_blocks` (#23191)

💫 Enhancements
🏗 Architecture refactoring
- `DatasetPipeline` stages (#22912)

🎉 New Features
- `agents` folder as first-class citizens, TensorFlow-Version, unified w/ other agents' APIs. (#22821, #22028, #22427, #22465, #21949, #21773, #21932, #22421)

🔨 Fixes
🏗 Architecture refactoring
- `training_iteration` API (from `execution_plan` API). This led to a ~2.7x performance increase on an Atari + CNN + LSTM benchmark. (#22126, #22316)
- `multiagent->policies_to_train` more flexible via callable option (alternative to providing a list of policy IDs). (#20735)

💫Enhancements:
- `on_sub_environment_created` and `on_trainer_init` callback options. (#21893, #22493)

📖Documentation:
🎉 New Features:
🔨 Fixes:
🎉 New Features:
💫Enhancements:
🔨Fixes:
🏗 Refactoring:
📖Documentation:
🎉 New Features
💫 Enhancements
- `trainer.best_checkpoint` and `Trainer.load_checkpoint_path`. You can now directly access the best in-memory checkpoint, or load an arbitrary checkpoint path to memory. (#22306)

🔨 Fixes
- `train.report()`, etc. can now be called outside of a Train session (#21969)

📖 Documentation
- `prepare_data_loader` (#22876)
- `train.torch.get_device` as a Public API (#22024)

🎉 New Features
- `health_check` API for end-to-end user-provided health checks. (#22178, #22121, #22297)

🔨 Fixes
- `root_path` setting to `http_options` (#21090)
- `shard_key`, `http_method`, and `http_headers` in `ServeHandle` (#21590)

🔨Fixes:
Many thanks to all those who contributed to this release! @edoakes, @pcmoritz, @jiaodong, @iycheng, @krfricke, @smorad, @kfstorm, @jjyyxx, @rodrigodelazcano, @scv119, @dmatrix, @avnishn, @fyrestone, @clarkzinzow, @wumuzi520, @gramhagen, @XuehaiPan, @iasoon, @birgerbr, @n30111, @tbabej, @Zyiqin-Miranda, @suquark, @pdames, @tupui, @ArturNiederfahrenhorst, @ashione, @ckw017, @siddgoel, @Catch-Bull, @vicyap, @spolcyn, @stephanie-wang, @mopga, @Chong-Li, @jjyao, @raulchen, @sven1977, @nikitavemuri, @jbedorf, @mattip, @bveeramani, @czgdp1807, @dependabot[bot], @Fabien-Couthouis, @willfrey, @mwtian, @SlowShip, @Yard1, @WangTaoTheTonic, @Wendi-anyscale, @kaushikb11, @kennethlien, @acxz, @DmitriGekhtman, @matthewdeng, @mraheja, @orcahmlee, @richardliaw, @dsctt, @yupbank, @Jeffwan, @gjoliver, @jovany-wang, @clay4444, @shrekris-anyscale, @jwyyy, @kyle-chen-uber, @simon-mo, @ericl, @amogkam, @jianoaix, @rkooo567, @maxpumperla, @architkulkarni, @chenk008, @xwjiang2010, @robertnishihara, @qicosmos, @sriram-anyscale, @SongGuyang, @jon-chuang, @wuisawesome, @valiantljk, @simonsays1980, @ijrsvt