Datasets, Transforms and Models specific to Computer Vision
[datasets] gdown is now a required dependency for downloading datasets that are on Google Drive. This change was introduced in 0.17.1 and is repeated here for visibility (#8237)
[datasets] The StanfordCars dataset isn’t available for download anymore. Please follow these instructions to manually download it (#8309, #8324)
[transforms] to_grayscale and corresponding transform now always return 3 channels when num_output_channels=3 (#8229)
[datasets] Fix download URL of EMNIST dataset (#8350)
[datasets] Fix root path expansion in Kitti dataset (#8164)
[models] Fix default momentum value of BatchNorm2d in MaxViT from 0.99 to 0.01 (#8312)
[reference scripts] Fix CutMix and MixUp arguments (#8287)
[MPS, build] Link essential libraries in cmake (#8230)
[build] Fix build with ffmpeg 6.0 (#8096)
[transforms] New GrayscaleToRgb transform (#8247)
[transforms] New JPEG augmentation transform (#8316)
[datasets, io] Added pathlib.Path support to datasets and io utilities (#8196, #8200, #8314, #8321)
[datasets] Added allow_empty parameter to ImageFolder and related utils to support empty classes during image discovery (#8311)
[datasets] Raise proper error in CocoDetection when a slice is passed (#8227)
[io] Added support for EXIF orientation in JPEG and PNG decoders (#8303, #8279, #8342, #8302)
[io] Avoid unnecessary copies in io.VideoReader with the pyav backend (#8173)
[transforms] Allow SanitizeBoundingBoxes to sanitize more than labels (#8319)
[transforms] Add sanitize_bounding_boxes kernel/functional (#8308)
[transforms] Make perspective more numerically stable (#8249)
[transforms] Allow 2D numpy arrays as inputs for to_image (#8256)
[transforms] Speed-up rotate for 90, 180, 270 degrees (#8295)
[transforms] Enabled torch.compile on the affine transform (#8218)
[transforms] Avoid some graph breaks in transforms (#8171)
[utils] Add float support to draw_keypoints (#8276)
[utils] Add visibility parameter to draw_keypoints (#8225) (see the sketch after this list)
[utils] Add float support to draw_segmentation_masks (#8150)
[utils] Improve rendering of the overlapping sections of masks in draw_segmentation_masks (#8213)
[Docs] Various documentation improvements (#8341, #8332, #8198, #8318, #8202, #8246, #8208, #8231, #8300, #8197)
[code quality] Various code quality improvements (#8273, #8335, #8234, #8345, #8334, #8119, #8251, #8329, #8217, #8180, #8105, #8280, #8161, #8313)
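Below is a minimal sketch of the new visibility parameter of draw_keypoints, referenced above; the image and keypoint values are made up for illustration:
import torch
from torchvision.utils import draw_keypoints
# A dummy 3-channel uint8 image and one instance with 3 keypoints (x, y).
image = torch.zeros(3, 100, 100, dtype=torch.uint8)
keypoints = torch.tensor([[[10.0, 10.0], [50.0, 50.0], [90.0, 90.0]]])
# visibility has shape (num_instances, num_keypoints); keypoints marked
# False are skipped when drawing.
visibility = torch.tensor([[True, False, True]])
result = draw_keypoints(image, keypoints, visibility=visibility, colors="red", radius=3)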
We're grateful for our community, which helps us improve torchvision by submitting issues and PRs, and providing feedback and suggestions. The following persons have contributed patches for this release:
Adam Dangoor, Ahmad Sharif, ahmadsharif1, Andrey Talman, Anner, anthony-cabacungan, Arun Sathiya, Brizar, cdzhan, Danylo Baibak, Huy Do, Ivan Magazinnik, JavaZero, Johan Edstedt, Li-Huai (Allan) Lin, Mantas, Mark Harfouche, Mithra, Nicolas Hug, nihui, Philip Meier, RazaProdigy, Richard Barnes, Riza Velioglu, sam-watts, Santiago Castro, Sergii Dymchenko, Syed Raza, talcs, Thien Tran, TilmannR, Tobias Fischer, vfdev, Zhu Lin Ch'ng, Zoltán Böszörményi.
This is a patch release, which is compatible with PyTorch 2.2.2. There are no new features added.
This is a patch release, which is compatible with PyTorch 2.2.1.
Add gdown dependency to support downloading datasets from Google Drive (https://github.com/pytorch/vision/pull/8237)
Fix convert_bounding_box_format when passing string parameters (https://github.com/pytorch/vision/issues/8258)
The torchvision.transforms.v2
namespace was still in BETA stage until now. It is now stable! Whether you’re new to Torchvision transforms, or you’re already experienced with them, we encourage you to start with Getting started with transforms v2 in order to learn more about what can be done with the new v2 transforms.
Browse our main docs for general information and performance tips. The available transforms and functionals are listed in the API reference. Additional information and tutorials can also be found in our example gallery, e.g. Transforms v2: End-to-end object detection/segmentation example or How to write your own v2 transforms.
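For instance, here is a minimal sketch of the stable v2 API on a plain image tensor (the shapes and normalization constants are illustrative only):
import torch
from torchvision.transforms import v2
transforms = v2.Compose([
    v2.RandomResizedCrop(size=(224, 224), antialias=True),
    v2.RandomHorizontalFlip(p=0.5),
    v2.ToDtype(torch.float32, scale=True),
    v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = torch.randint(0, 256, (3, 256, 256), dtype=torch.uint8)
out = transforms(img)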
torch.compile() support
We are progressively adding support for torch.compile() to torchvision interfaces, reducing graph breaks and allowing dynamic shapes.
The torchvision ops (nms, [ps_]roi_align, [ps_]roi_pool and deform_conv_2d) are now compatible with torch.compile and dynamic shapes.
On the transforms side, the majority of low-level kernels (like resize_image() or crop_image()) should compile properly without graph breaks and with dynamic shapes. We are still addressing the remaining edge-cases, moving up towards full functional support and classes, and you should expect more progress on that front with the next release.
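For example, a minimal sketch of compiling the nms op (the boxes and scores are made up):
import torch
from torchvision.ops import nms
# Boxes in (x1, y1, x2, y2) format with one score per box.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 11.0, 11.0], [20.0, 20.0, 30.0, 30.0]])
scores = torch.tensor([0.9, 0.8, 0.7])
compiled_nms = torch.compile(nms)
keep = compiled_nms(boxes, scores, iou_threshold=0.5)  # indices of the kept boxes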
We changed the default of the antialias parameter from None to True, in all transforms that perform resizing. This change of default has been communicated in previous versions, and should drastically reduce the amount of bugs/surprises as it aligns the tensor backend with the PIL backend. Simply put: from now on, antialias is always applied when resizing (with bilinear or bicubic modes), whether you're using tensors or PIL images. This change only affects the tensor backend, as PIL always applies antialias anyway. (#7949)
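A minimal sketch of the new behavior on a made-up input; passing the parameter explicitly is now only needed to opt out on the tensor backend:
import torch
from torchvision.transforms import v2
img = torch.randint(0, 256, (3, 256, 256), dtype=torch.uint8)
resized = v2.Resize(size=(224, 224))(img)  # antialias=True is now the default
no_aa = v2.Resize(size=(224, 224), antialias=False)(img)  # explicit opt-out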
We removed the torchvision.transforms.functional_tensor.py and torchvision.transforms.functional_pil.py modules, as these had been deprecated for a while. Use the public functionals from torchvision.transforms.v2.functional instead. (#7953)
to_pil_image now provides the same output for equivalent numpy arrays and tensor inputs (#8097)
[datasets] Fix root path expansion in datasets.Kitti (#8165)
[transforms] allow sequence fill for v2 AA scripted (#7919)
[reference scripts] Fix quantized references (#8073)
[reference scripts] Fix IoUs reported in segmentation references (#7916)
[datasets] add Imagenette dataset (#8139)
[transforms] The v2 transforms are now officially stable and out of BETA stage (#8111)
[ops] The ops ([ps_]roi_align, [ps_]roi_pool, deform_conv_2d) are now compatible with torch.compile and dynamic shapes (#8061, #8049, #8062, #8063, #7942, #7944)
[models] Allow custom atrous_rates for deeplabv3_mobilenet_v3_large (#8019)
[transforms] allow float fill for integer images in F.pad (#7950)
[transforms] allow len 1 sequences for fill with PIL (#7928)
[transforms] allow size to be generic Sequence in Resize (#7999)
[datasets] Make root parameter optional for VisionDataset (#8124)
[transforms] Added support for tv tensors in torch.compile for functional ops (#8110)
[transforms] Reduced number of graphs for compiled resize (#8108)
[misc] Various fixes for S390x support (#8149)
[Docs] Various Documentation enhancements (#8007, #8014, #7940, #7989, #7993, #8114, #8117, #8121, #7978, #8002, #7957, #7907, #8000, #7963)
[Tests] Various test enhancements (#8032, #7927, #7933, #7934, #7935, #7939, #7946, #7943, #7968, #7967, #8033, #7975, #7954, #8001, #7962, #8003, #8011, #8012, #8013, #8023, #7973, #7970, #7976, #8037, #8052, #7982, #8145, #8148, #8144, #8058, #8057, #7961, #8132, #8133, #8160)
[Code Quality] Various code quality improvements (#8077, #8070, #8004, #8113, …)
We're grateful for our community, which helps us improve torchvision by submitting issues and PRs, and providing feedback and suggestions. The following persons have contributed patches for this release:
Aleksei Nikiforov, Alex Wei, Andrey Talman, Chunyuan WU, CptCaptain, Edward Z. Yang, Gu Wang, Haochen Yu, Huy Do, Jeff Daily, Josh Levy-Kramer, moto, Nicolas Hug, NVS Abhilash, Omkar Salpekar, Philip Meier, Sergii Dymchenko, Siddharth Singh, Thiago Crepaldi, Thomas Fritz, TilmannR, vfdev-5, Zeeshan Khan Suri.
This is a patch release, which is compatible with PyTorch 2.1.2. There are no new features added.
The new transforms in torchvision.transforms.v2 support image classification, segmentation, detection, and video tasks. They are now 10%-40% faster than before! This is mostly achieved thanks to 2X-4X improvements made to v2.Resize(), which now supports native uint8 tensors for Bilinear and Bicubic mode. Output results are also now closer to PIL's! Check out our performance recommendations to learn more.
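As an illustration, a minimal sketch of the faster uint8 path (the input is a made-up image tensor):
import torch
from torchvision.transforms import v2
# uint8 inputs now go through native uint8 bilinear/bicubic kernels,
# which is where most of the 2X-4X speed-up comes from.
img = torch.randint(0, 256, (3, 512, 512), dtype=torch.uint8)
out = v2.Resize(size=(224, 224), antialias=True)(img)
print(out.dtype)  # torch.uint8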
Additionally, torchvision now ships with libjpeg-turbo instead of libjpeg, which should significantly speed-up the jpeg decoding utilities (read_image, decode_jpeg), and avoid compatibility issues with PIL.
Long-awaited support for the CutMix and MixUp augmentations is now here! Check our tutorial to learn how to use them.
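In the meantime, here is a minimal sketch (the batch shapes and num_classes are made up):
import torch
from torchvision.transforms import v2
NUM_CLASSES = 10
# CutMix / MixUp operate on whole batches of images with integer labels.
cutmix_or_mixup = v2.RandomChoice([v2.CutMix(num_classes=NUM_CLASSES), v2.MixUp(num_classes=NUM_CLASSES)])
images = torch.randint(0, 256, (4, 3, 224, 224), dtype=torch.uint8)
labels = torch.randint(0, NUM_CLASSES, (4,))
images, labels = cutmix_or_mixup(images, labels)
# labels are now soft targets of shape (4, NUM_CLASSES)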
In the previous release 0.15 we BETA-released a new set of transforms in torchvision.transforms.v2 with native support for tasks like segmentation, detection, or videos. We have now stabilized the design decisions of these transforms and made further improvements in terms of speedups, usability, new transforms support, etc.
We're keeping the torchvision.transforms.v2 and torchvision.tv_tensors namespaces as BETA until 0.17 out of precaution, but we do not expect disruptive API changes in the future.
Whether you’re new to Torchvision transforms, or you’re already experienced with them, we encourage you to start with Getting started with transforms v2 in order to learn more about what can be done with the new v2 transforms.
Browse our main docs for general information and performance tips. The available transforms and functionals are listed in the API reference. Additional information and tutorials can also be found in our example gallery, e.g. Transforms v2: End-to-end object detection/segmentation example or How to write your own v2 transforms.
The nms and roi-align kernels (roi_align, roi_pool, ps_roi_align, ps_roi_pool) now support MPS. Thanks to Li-Huai (Allan) Lin for this contribution!
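A minimal sketch of running roi_align on MPS, guarded for machines without it (the feature map and box are made up):
import torch
from torchvision.ops import roi_align
device = "mps" if torch.backends.mps.is_available() else "cpu"
features = torch.rand(1, 256, 32, 32, device=device)
# One box in (batch_index, x1, y1, x2, y2) format.
boxes = torch.tensor([[0.0, 4.0, 4.0, 20.0, 20.0]], device=device)
pooled = roi_align(features, boxes, output_size=(7, 7), spatial_scale=1.0)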
All changes below happened in the transforms.v2 and datapoints namespaces, which were BETA and protected with a warning. We do not expect other disruptive changes to these APIs moving forward!
[transforms.v2] to_grayscale() is not deprecated anymore (#7707)
[transforms.v2] Renaming: torchvision.datapoints.Datapoint -> torchvision.tv_tensors.TVTensor (#7904, #7894)
[transforms.v2] Renaming: BoundingBox -> BoundingBoxes (#7778)
[transforms.v2] Renaming: BoundingBoxes.spatial_size -> BoundingBoxes.canvas_size (#7734)
[transforms.v2] All public methods on TVTensor classes (previously: Datapoint classes) were removed
[transforms.v2] transforms.v2.utils is now private (#7863)
[transforms.v2] Remove wrap_like class method and add tv_tensors.wrap() function (#7832)
[transforms.v2] Add support for MixUp and CutMix (#7731, #7784)
[transforms.v2] Add PermuteChannels transform (#7624)
[transforms.v2] Add ToPureTensor transform (#7823)
[ops] Add MPS kernels for nms and roi ops (#7643)
[io] Added support for CMYK images in decode_jpeg (#7741)
[io] Package torchvision with libjpeg-turbo instead of libjpeg (#7672, #7840)
[models] Downloaded weights are now sha256-validated (#7219)
[transforms.v2] Massive Resize speed-up by adding native uint8 support for bilinear and bicubic modes (#7557, #7668)
[transforms.v2] Enforce pickleability for v2 transforms and wrapped datasets (#7860)
[transforms.v2] Allow catch-all "others" key in fill dicts (#7779)
[transforms.v2] Allow passthrough for Resize (#7521)
[transforms.v2] Add scale option to ToDtype and remove ConvertDtype (#7759, #7862) (see the sketch after this list)
[transforms.v2] Improve UX for Compose (#7758)
[transforms.v2] Allow users to choose whether to return TVTensor subclasses or pure Tensor (#7825)
[transforms.v2] Remove import-time warning for v2 namespaces (#7853, #7897)
[transforms.v2] Speedup hsv2rgb (#7754)
[models] Add filter parameters to list_models() (#7718)
[models] Assert RAFT input resolution is 128 x 128 or higher (#7339)
[ops] Replaced gpuAtomicAdd by fastAtomicAdd (#7596)
[utils] Add GPU support for draw_segmentation_masks (#7684)
[ops] Add deterministic, pure-Python roi_align implementation (#7587)
[tv_tensors] Make TVTensors deepcopyable (#7701)
[datasets] Only return small set of targets by default from dataset wrapper (#7488)
[references] Added support for v2 transforms and tensors / tv_tensors backends (#7732, #7511, #7869, #7665, #7629, #7743, #7724, #7742)
[doc] A lot of documentation improvements (#7503, #7843, #7845, #7836, #7830, #7826, #7484, #7795, #7480, #7772, #7847, #7695, #7655, #7906, #7889, #7883, #7881, #7867, #7755, #7870, #7849, #7854, #7858, #7621, #7857, #7864, #7487, #7859, #7877, #7536, #7886, #7679, #7793, #7514, #7789, #7688, #7576, #7600, #7580, #7567, #7459, #7516, #7851, #7730, #7565, #7777)
[datasets] Fix split=None in MovingMNIST (#7449)
[io] Fix heap buffer overflow in decode_png (#7691)
[io] Fix blurry screen in video decoder (#7552)
[models] Fix weight download URLs for some models (#7898)
[models] Fix ShuffleNet ONNX export (#7686)
[models] Fix detection models with pytorch 2.0 (#7592, #7448)
[ops] Fix segfault in DeformConv2d when mask is None (#7632)
[transforms.v2] Stricter SanitizeBoundingBoxes labels_getter heuristic (#7880)
[transforms.v2] Make sure RandomPhotometricDistort transforms all images the same (#7442)
[transforms.v2] Fix v2.Lambda’s transformed types (#7566)
[transforms.v2] Don't call round() on float images for Resize (#7669)
[transforms.v2] Let SanitizeBoundingBoxes preserve output type (#7446)
[transforms.v2] Fixed int type support for sigma in GaussianBlur (#7887)
[transforms.v2] Fixed issue with jitted AutoAugment transforms (#7839)
[transforms] Fix Resize pass-through logic (#7519)
[utils] Fix color in draw_segmentation_masks (#7520)
[tests] Various test improvements / fixes (#7693, #7816, #7477, #7783, #7716, #7355, #7879, #7874, #7882, #7447, #7856, #7892, #7902, #7884, #7562, #7713, #7708, #7712, #7703, #7641, #7855, #7842, #7717, #7905, #7553, #7678, #7908, #7812, #7646, #7841, #7768, #7828, #7820, #7550, #7546, #7833, #7583, #7810, #7625, #7651)
[CI] Various CI improvements (#7485, #7417, #7526, #7834, #7622, #7611, #7872, #7628, #7499, #7616, #7475, #7639, #7498, #7467, #7466, #7441, #7524, #7648, #7640, #7551, #7479, #7634, #7645, #7578, #7572, #7571, #7591, #7470, #7574, #7569, #7435, #7635, #7590, #7589, #7582, #7656, #7900, #7815, #7555, #7694, #7558, #7533, #7547, #7505, #7502, #7540, #7573)
[Code Quality] Various code quality improvements (#7559, #7673, #7677, #7771, #7770, #7710, #7709, #7687, #7454, #7464, #7527, #7462, #7662, #7593, #7797, #7805, #7786, #7831, #7829, #7846, #7806, #7814, #7606, #7613, #7608, #7597, #7792, #7781, #7685, #7702, #7500, #7804, #7747, #7835, #7726, #7796)
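As referenced from the ToDtype item above, a minimal sketch of the new scale option that replaces ConvertDtype (the shapes are made up):
import torch
from torchvision.transforms import v2
img = torch.randint(0, 256, (3, 32, 32), dtype=torch.uint8)
# scale=True rescales values to the target dtype's range
# (uint8 [0, 255] -> float32 [0.0, 1.0]).
out = v2.ToDtype(torch.float32, scale=True)(img)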
We're grateful for our community, which helps us improve torchvision by submitting issues and PRs, and providing feedback and suggestions. The following persons have contributed patches for this release: Adam J. Stewart, Aditya Oke, Andrey Talman, Camilo De La Torre, Christoph Reich, Danylo Baibak, David Chiu, David Garcia, Dennis M. Pöpperl, Dhuige, Duc Mguyen, Edward Z. Yang, Eric Sauser, Fansure Grin, Huy Do, Illia Vysochyn, Johannes, Kai Wana, Kobrin Eli, kurtamohler, Li-Huai (Allan) Lin, Liron Ilouz, Masahiro Hiramori, Mateusz Guzek, Max Chuprov, Minh-Long Luu (刘明龙), Minliang Lin, mpearce25, Nicolas Granger, Nicolas Hug, Nikita Shulga, Omkar Salpekar, Paul Mulders, Philip Meier, ptrblck, puhuk, Radek Bartoň, Richard Barnes, Riza Velioglu, Sahil Goyal, Shu, Sim Sun, SvenDS9, Tommaso Bianconcini, Vadim Zubov, vfdev-5
This is a minor release, which is compatible with PyTorch 2.0.1 and contains some minor bug fixes.
TorchVision is extending its Transforms API! Here is what’s new:
The API is completely backward compatible with the previous one, and remains the same to assist the migration and adoption. We are now releasing this new API as Beta in the torchvision.transforms.v2 namespace, and we would love to get early feedback from you to improve its functionality. Please reach out to us if you have any questions or suggestions.
import torchvision.transforms.v2 as transforms
# Exactly the same interface as V1:
trans = transforms.Compose([
    transforms.ColorJitter(contrast=0.5),
    transforms.RandomRotation(30),
    transforms.CenterCrop(480),
])
imgs, bboxes, masks, labels = trans(imgs, bboxes, masks, labels)
You can read more about these new transforms in our docs, and you can also check out our examples:
Note that this API is still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes.
We added a Video SwinTransformer model, based on the Video Swin Transformer paper.
import torch
from torchvision.models.video import swin3d_t
video = torch.rand(1, 3, 32, 800, 600)
# or swin3d_b, swin3d_s
model = swin3d_t(weights="DEFAULT")
model.eval()
with torch.inference_mode():
    prediction = model(video)
print(prediction)
The model has the following accuracies on the Kinetics-400 dataset:
Model | Acc@1 | Acc@5 |
---|---|---|
swin3d_t | 77.7 | 93.5 |
swin3d_s | 79.5 | 94.1 |
swin3d_b | 79.4 | 94.4 |
We would like to thank oke-aditya for this contribution.
[models] Fixed a bug inside ops.MLP when backpropagating with dropout>0 by implicitly setting the inplace argument of nn.Dropout to False (#7209)
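A minimal sketch of the previously failing pattern (the sizes are made up):
import torch
from torchvision.ops import MLP
# With dropout > 0, backpropagation used to fail because dropout was
# applied in-place; nn.Dropout now receives inplace=False implicitly.
mlp = MLP(in_channels=32, hidden_channels=[64, 10], dropout=0.5)
x = torch.rand(8, 32)
mlp(x).sum().backward()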
[models, transforms] remove functionality scheduled for 0.15 after deprecation (#7176)
We removed deprecated functionalities according to the deprecation cycle: gen_bar_updater, and model_urls/quant_model_urls in models.
[transforms] Change default of antialias parameter from None to 'warn' (#7160)
For all transforms / functionals that have the antialias parameter, we changed its current default from None to "warn", a value that behaves exactly like None but raises a warning prompting users to explicitly set either True, False or None. In v0.17.0 we plan to remove "warn" and set the default to True.
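Setting the parameter explicitly silences the warning; a minimal sketch with a made-up input:
import torch
from torchvision import transforms
img = torch.randint(0, 256, (3, 256, 256), dtype=torch.uint8)
out = transforms.Resize((224, 224), antialias=True)(img)  # no warning raised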
[transforms] Deprecate functional_pil and functional_tensor and make them private (#7269)
Since v0.15.0, torchvision.transforms.functional_pil and torchvision.transforms.functional_tensor have become private and will be removed in v0.17.0. Please use torchvision.transforms.functional or torchvision.transforms.v2.functional instead.
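For example, a minimal sketch of the migration (the resize call stands in for any functional):
import torch
from torchvision.transforms.v2 import functional as F
# Previously: from torchvision.transforms import functional_tensor as F_t
img = torch.randint(0, 256, (3, 64, 64), dtype=torch.uint8)
out = F.resize(img, size=[32, 32], antialias=True)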
[transforms] Undeprecate PIL int constants for interpolation (#7241)
We restored the support for integer interpolation modes (Pillow constants), which had been deprecated since v0.13.0 (as PIL un-deprecated those as well).
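Both spellings are accepted again; a minimal sketch (the transform itself is illustrative):
from PIL import Image
from torchvision import transforms
# The integer Pillow constant and the InterpolationMode enum are equivalent:
resize_a = transforms.Resize((224, 224), interpolation=Image.BILINEAR)
resize_b = transforms.Resize((224, 224), interpolation=transforms.InterpolationMode.BILINEAR)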
[transforms] New transforms API (see highlight)
[models] Add Video SwinTransformer (see highlight) (#6521)
[transforms] introduce nearest-exact interpolation (#6754)
[transforms] add sequence fill support for ElasticTransform (#7141)
[transforms] perform out of bounds check for single values and two tuples in ColorJitter (#7133)
[datasets] Fixes use download of SBU dataset (#7046) (#7051)
[hub] Add video models to torchhub (#7083)
[hub] Expose maxvit and swin_v2 models to torchhub (#7078)
[io] suppress warning in VideoReader (#6976, #6971)
[io] Set pytorch vision decoder probesize for getting stream info based on the value from decode setting (#6900) (#6950)
[io] improve warning message for missing image extension (#7150)
[io] Read video from memory newapi (#6771)
[models] Allow dropout overwrites on EfficientNet (#7031)
[models] Don't use named args in MHA calls to allow applying pytorch forward hooks to VIT (#6956)
[onnx] Support exporting RoiAlign align=True to ONNX with opset 16 (#6685)
[ops] Handle invalid reduction values (#6675)
[datasets] Add MovingMNIST dataset (#7042)
Add torchvision maintainers guide (#7109)
[Documentation] Various doc improvements (#7041, #6947, #6690, #7142, #7156, #7025, #7048, #7074, #6936, #6694, #7161, #7164, #6912, #6854, #6926, #7065, #6813)
[CI] Various CI improvements (#6864, #6863, #6855, #6856, #6803, #6893, #6865, #6804, #6866, #6742, #7273, #6999, #6713, #6972, #6954, #6968, #6987, #7004, #7010, #7014, #6915, #6797, #6759, #7060, #6857, #7212, #7199, #7186, #7183, #7178, #7163, #7181, #6789, #7110, #7088, #6955, #6788, #6970)
[tests] Various tests improvements (#7020, #6939, #6658, #7216, #6996, #7363, #7379, #7218, #7286, #6901, #7059, #7202, #6708, #7013, #7206, #7204, #7233)
[datasets] fix MNIST byte flipping (#7081)
[models] properly support deepcopying and serialization of model weights (#7107)
[models] Use inplace=None as default in ops.MLP (#7209)
[models] Fix dropout issue in swin transformers (#7224)
[reference scripts] Fix quantized classif reference - missing args (#7072)
[models, tests] [FBcode->GH] Fix GRACE_HOPPER file internal discovery (#6719)
[transforms] Replace getbands() with get_image_num_channels() (#6941)
[transforms] Switch view() with reshape() on equalize (#6772)
[transforms] add sequence fill support for ElasticTransform (#7141)
[transforms] make RandomErasing scriptable for integer value (#7134)
[video] fix bug in output format for pyav (#6672)
[video, datasets] [bugfix] Fix the output format for VideoClips.subset (#6700)
[onnx] Fix dtype for NonMaxSuppression (#7056)
[datasets] Remove unused import (#7245)
[models] Fix error message typo (#6682)
[models] make weights deepcopyable (#6883)
[models] Fix missing f-string prefix in error message (#6684)
[onnx] [ONNX] Rephrase ONNX RoiAlign warning for aligned=True (#6704)
[onnx] [ONNX] misc improvements (#7249)
[ops] Raise kernel launch errors instead of just print error message in cuda ops (#7080)
[ops, tests] Remove torch.jit.fuser("fuser2") in test (#7069)
[tests] replace assert torch.allclose with torch.testing.assert_allclose (#6895)
[transforms] Remove old TODO about using _log_api_usage_once() (#7277)
[transforms] Fixed repr for ElasticTransform (#6758)
[transforms] Use is False for some antialias checks (#7234)
[datasets, models] Various type-hints improvements (#6844, #6929, #6843, #7087, #6735, #6845, #6846)
[all] switch to C++17 following the core library (#7116)
Most of these PRs (not all) relate to the transforms V2 work (#7122, #7120, #7113, #7270, #7037, #6665, #6944, #6919, #7033, #7138, #6718, #6068, #7194, #6997, #6647, #7279, #7232, #7225, #6663, #7235, #7236, #7275, #6791, #6786, #7203, #7009, #7278, #7238, #7230, #7118, #7119, #6876, #7190, #6995, #6879, #6904, #6921, #6905, #6977, #6714, #6924, #6984, #6631, #7276, #6757, #7227, #7197, #7170, #7228, #7246, #7255, #7254, #7253, #7248, #7256, #7257, #7252, #6724, #7215, #7260, #7261, #7244, #7271, #7231, #6738, #7268, #7258, #6933, #6891, #6890, #7012, #6896, #6881, #6880, #6877, #7045, #6858, #6830, #6935, #6938, #6914, #6907, #6897, #6903, #6859, #6835, #6837, #6807, #6776, #6784, #6795, #7135, #6930, #7153, #6762, #6681, #7139, #6831, #6826, #6821, #6819, #6820, #6805, #6811, #6783, #6978, #6667, #6741, #6763, #6774, #6748, #6749, #6722, #6756, #6712, #6733, #6736, #6874, #6767, #6902, #6847, #6851, #6777, #6770, #6800, #6812, #6702, #7223, #6906, #7226, #6860, #6934, #6726, #6730, #7196, #7211, #7229, #7177, #6923, #6949, #6913, #6775, #7091, #7136, #7154, #6833, #6824, #6785, #6710, #6653, #6751, #6503, #7266, #6729, #6989, #7002, #6892, #6888, #6894, #6988, #6940, #6942, #6945, #6983, #6773, #6832, #6834, #6828, #6801, #7084)
We're grateful for our community, which helps us improve torchvision by submitting issues and PRs, and providing feedback and suggestions. The following persons have contributed patches for this release:
Aditya Gandhamal, Aditya Oke, Aidyn-A, Akira Noda, Andrey Talman, Bowen Bao, Bruno Korbar, Chen Liu, cyy, David Berard, deepsghimire, Erjia Guan, F-G Fernandez, Jithun Nair, Joao Gomes, John Detloff, Justin Chu, Karan Desai, lezcano, mpearce25, Nghia, Nicolas Hug, Nikita Shulga, nps1ngh, Omkar Salpekar, Philip Meier, Robert Perrotta, RoiEX, Samantha Andow, Sergii Dymchenko, shunsuke yokokawa, Sim Sun, Toni Blaslov, toni057, Vasilis Vryniotis, vfdev-5, Vladislav Sovrasov, vsuryamurthy, Yosua Michael Maranatha, Yuxin Wu
This is a minor release, which is compatible with PyTorch 1.13.1. There are no new features added.