Data manipulation and transformation for audio signal processing, powered by PyTorch
This release supports Python 3.9.
Continuing from the previous release, torchaudio improves the audio I/O mechanism. In this release, we have four major updates.
Backend migration. We have migrated the default backend for audio I/O. The new default backend is “sox_io” (for Linux/macOS). The interface of the “soundfile” backend has also been changed to align with that of “sox_io”. Following the change of default backend, the legacy backend/interface has been marked as deprecated. The legacy backend/interface is still accessible, though using it is strongly discouraged. For details on the migration, please refer to #903.
File-like object support. We have added file-like object support to the I/O functions and sox_effects. You can perform the info, load, save, and apply_effects_file operations on file-like objects.
# Query audio metadata over HTTP
# Will only fetch the first few kB
import requests

with requests.get(URL, stream=True) as response:
    metadata = torchaudio.info(response.raw)

# Load audio from a TAR archive
# No need to extract the TAR file.
import tarfile

with tarfile.open(TAR_PATH, mode='r') as tarfile_:
    fileobj = tarfile_.extractfile(SAMPLE_TAR_ITEM)
    waveform, sample_rate = torchaudio.load(fileobj)

# Save to a bytes buffer
# Using BytesIO, you can perform in-memory encoding/decoding.
import io

buffer_ = io.BytesIO()
torchaudio.save(buffer_, waveform, sample_rate, format="wav")

# Apply effects (lowpass filter / resampling) while loading audio from S3
import boto3

client = boto3.client('s3')
response = client.get_object(Bucket=S3_BUCKET, Key=S3_KEY)
waveform, sample_rate = torchaudio.sox_effects.apply_effects_file(
    response['Body'], [["lowpass", "-1", "300"], ["rate", "8000"]])
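The BytesIO pattern above is not specific to torchaudio. As a dependency-free illustration of the same file-like-object idea, the stdlib wave module can round-trip a WAV file through an in-memory buffer:

```python
import io
import wave

# Encode: write one second of 16-bit mono silence into an in-memory buffer.
buffer_ = io.BytesIO()
with wave.open(buffer_, "wb") as writer:
    writer.setnchannels(1)
    writer.setsampwidth(2)                   # 16-bit signed PCM
    writer.setframerate(8000)
    writer.writeframes(b"\x00\x00" * 8000)   # 8000 frames of silence

# Decode: rewind and read it back -- no file on disk is involved.
buffer_.seek(0)
with wave.open(buffer_, "rb") as reader:
    num_frames = reader.getnframes()
    frame_rate = reader.getframerate()
```

Any object exposing the usual read/seek (or write) methods works in place of the buffer, which is exactly what makes the HTTP, TAR, and S3 examples above possible.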
[Beta] Codec application. Built upon the file-like object support, we added the functional.apply_codec function, which can degrade audio data by applying audio codecs supported by the “sox_io” backend, in an in-memory fashion.
# Apply MP3 codec
degraded = F.apply_codec(
    waveform, sample_rate, format="mp3", compression=-9)

# Apply GSM codec
degraded = F.apply_codec(waveform, sample_rate, format="gsm")
Encoding options. We have added encoding options to the save function of the new backends. You can now change the format and encoding with the format, encoding, and bits_per_sample options.
# Save without any encoding option.
# The function will pick the encoding which fits the provided data.
# For a Tensor of float32 type, that is 32-bit floating-point PCM.
torchaudio.save("data.wav", waveform, sample_rate)

# Save as 16-bit signed integer Linear PCM
# The resulting file occupies half the storage but loses precision
torchaudio.save(
    "data.wav", waveform, sample_rate, encoding="PCM_S", bits_per_sample=16)
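The factor-of-two storage claim in the comment can be sanity-checked without torchaudio. A stdlib-only sketch comparing WAV sizes at 16-bit versus 32-bit sample width (integer PCM here, since the wave module does not write float):

```python
import io
import wave

def wav_size(sample_width_bytes, num_frames=8000):
    # Size in bytes of a mono WAV holding num_frames zero-valued samples
    # at the given sample width (the wave module writes integer PCM only).
    buf = io.BytesIO()
    with wave.open(buf, "wb") as writer:
        writer.setnchannels(1)
        writer.setsampwidth(sample_width_bytes)
        writer.setframerate(8000)
        writer.writeframes(b"\x00" * (sample_width_bytes * num_frames))
    return buf.getbuffer().nbytes

size_16bit = wav_size(2)   # 2 bytes per sample
size_32bit = wav_size(4)   # 4 bytes per sample
# Apart from the fixed header, the 16-bit file holds half as much data.
```

The data chunk shrinks by exactly two bytes per sample; only the small fixed header keeps the ratio from being exactly 2:1.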
More format support to "sox_io"’s save function. We have added support for GSM, HTK, AMB, and AMR-NB formats to "sox_io"’s save function.
torchaudio was already using CMake to build its third-party dependencies. Now torchaudio also uses CMake to build its C++ extension. This opens the door to integrating torchaudio in non-Python environments (such as C++ applications and mobile). We will work on adding example applications and mobile integrations in upcoming releases.
This release introduces support for Python 3.9. There is no 0.7.1 release, and the following changes are compared to 0.7.0.
- download=True in CommonVoice (#1076)

torchaudio is expanding its support for models and end-to-end applications. Please file an issue on GitHub to provide feedback on them.
As you are likely already aware from the last release, we are currently in the process of making sox_io, which ships with new features such as TorchScript support and performance improvements, the new default backend. If you want to benefit from these features now, we encourage you to migrate. For more information, see issue #903.
- str.format to adopt changes in PyTorch, leading to improved error messages for TorchScript (#850)
- sox_utils.list_formats() for read and write (#811)
- VCTK_092 dataset (#812)
- sox_io backend (#871)
- soundfile backend to the one identical to sox_io backend (#922)
- soundfile compatibility backend (#922)
- torchaudio.compliance.kaldi.fbank (#947)
- pathlib.Path support to sox_io backend (#907)
- sox_io C++ implementation (#779)
- sox_io and sox_effects (#806)
- noise_shaping = True (#865)
- zip_safe = False to disable egg installation (#842)
- istft wrapper in favor of torch.istft (#841)
- SoxEffect and SoxEffectsChain (#787)
- sox backend (#904)
- soundfile (#922)
- load_wav functions (#905)

torchaudio now includes a new model module (with wav2letter included), new functionals (contrast, cvm, dcshift, overdrive, vad, phaser, flanger, biquad), datasets (GTZAN, CMU), and a new optional sox backend with support for TorchScript. torchaudio now also supports Windows, with the soundfile backend.
torchaudio requires Python 3.6 or more recent.
v1.5.1
torchaudio includes new transforms (e.g. Griffin-Lim and inverse Mel scale), new filters (e.g. all pass, fade, band pass/reject, band, treble, deemph, riaa), and datasets (LJ Speech and SpeechCommands).
torchaudio 0.4 improves on current transformations, datasets, and backend support.
We would like to thank again our contributors and the wider community for their significant contributions to this release. In particular we'd like to thank @keunwoochoi, @ksanjeevan, and all the other maintainers and contributors of torchaudio-contrib for their significant and valuable additions around augmentations (#285) and batching (#327).
- downsample, transform, target_transform, and return_dict are being deprecated.
- torchaudio.functional.detect_pitch_frequency (#313, #322)
- torchaudio.transforms: TimeStretch, FrequencyMasking, TimeMasking (#285, #333, #348)
- torchaudio.transforms.ComplexNorm (#285, #333)
- torchaudio.functional.compute_deltas (#268, #326)
- torchaudio.functional.gain and torchaudio.functional.dither (#319, #360). We welcome work to continue the effort to implement features available in SoX; see #260.
- equalizer_biquad (#315, #340), lowpass_biquad, highpass_biquad (#275), lfilter, and biquad (#275, #291, #326) in torchaudio.functional
- torchaudio.functional.mfcc (#228)
- MelScale and librosa (#294)
- torchaudio.compliance.kaldi.resample_waveform, where internal variables were not moved to the GPU when used (#277)
- istft, where the dtype and device of parameters were not created on the same device as the tensor provided by the user (#264)
- load_state_dict (#246)
- torchaudio.load to [-1, 1] (#283)

This release is to update the dependency to PyTorch 1.3.1.
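Several of the functionals mentioned in the changes above (lowpass_biquad, highpass_biquad, biquad, lfilter) are second-order IIR filters. As a pure-Python sketch of the difference equation such a biquad evaluates, here is a low-pass design using the widely used audio-EQ-cookbook coefficients (illustrative only, not torchaudio's implementation):

```python
import math

def lowpass_biquad(signal, sample_rate, cutoff_freq, q=0.707):
    # Audio-EQ-cookbook low-pass coefficients.
    w0 = 2.0 * math.pi * cutoff_freq / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    cos_w0 = math.cos(w0)
    b0 = (1.0 - cos_w0) / 2.0
    b1 = 1.0 - cos_w0
    b2 = (1.0 - cos_w0) / 2.0
    a0 = 1.0 + alpha
    a1 = -2.0 * cos_w0
    a2 = 1.0 - alpha
    # Direct-form I difference equation:
    # y[n] = (b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]) / a0
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in signal:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

The design passes DC with unit gain and attenuates content above the cutoff at roughly 12 dB per octave; lfilter generalizes the same recursion to arbitrary coefficient orders.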
torchaudio has been redesigned to be an extension of PyTorch and part of the domain APIs (DAPI) ecosystem. Domain specific libraries such as this one are kept separated in order to maintain a coherent environment for each of them. As such, torchaudio is an ML library that provides relevant signal processing functionality, but it is not a general signal processing library. The full rationale of this new standardization can be found in the README.md.
In light of these changes some transforms have been removed or have different argument names and conventions. See the section on backwards breaking changes for a migration guide.
We provide binaries via pip and conda. They require PyTorch 1.2.0 and newer. See https://pytorch.org/ for installation instructions.
We would like to thank our contributors and the wider community for their significant contributions to this release. We are happy to see an active community around torchaudio and are eager to further grow and support it.
In particular we'd like to thank @keunwoochoi, @ksanjeevan, and all the other maintainers and contributors of torchaudio-contrib for their significant and valuable additions around standardization and the support of complex numbers (https://github.com/pytorch/audio/pull/131, https://github.com/pytorch/audio/issues/110, https://github.com/keunwoochoi/torchaudio-contrib/issues/61, https://github.com/keunwoochoi/torchaudio-contrib/issues/36).
An implementation of basic transforms with a Kaldi-like interface.
We added the functions spectrogram, fbank, and resample_waveform (https://github.com/pytorch/audio/pull/119, https://github.com/pytorch/audio/pull/127, and https://github.com/pytorch/audio/pull/134). For more details see the documentation on torchaudio.compliance.kaldi which mirrors the arguments and outputs of Kaldi features.
As an example we can look at the sinc interpolation resampling similar to Kaldi’s implementation. In the figure below, the blue dots are the original signal and red dots are the downsampled signal with half the original frequency. The red dot elements are approximately every other original element.
specgram = torchaudio.compliance.kaldi.spectrogram(waveform, frame_length=...)
fbank = torchaudio.compliance.kaldi.fbank(waveform, num_mel_bins=...)
resampled_waveform = torchaudio.compliance.kaldi.resample_waveform(waveform, orig_freq=...)
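To make the resampling description concrete, here is a dependency-free sketch of windowed-sinc downsampling by a factor of two. This is illustrative only; torchaudio's Kaldi-compatible resampler differs in its windowing and implementation details:

```python
import math

def resample_half(signal, zeros=16):
    # Downsample by 2 using a Hann-windowed sinc low-pass filter.
    cutoff = 0.5  # new Nyquist frequency, relative to the original rate
    out = []
    for n in range(0, len(signal), 2):  # keep every other output position
        acc = 0.0
        for k in range(-zeros, zeros + 1):
            m = n + k
            if not 0 <= m < len(signal):
                continue  # truncate the filter at the signal boundaries
            t = k * cutoff
            sinc = 1.0 if k == 0 else math.sin(math.pi * t) / (math.pi * t)
            window = 0.5 * (1.0 + math.cos(math.pi * k / zeros))  # Hann
            acc += signal[m] * cutoff * sinc * window
        out.append(acc)
    return out
```

For a slowly varying input, each output sample lands approximately on every other input sample, matching the figure's description of the red dots against the blue ones.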
Constructing a signal from a spectrogram can be used in applications like source separation, or to generate audio signals to listen to. More specifically, torchaudio.functional.istft is the inverse of torch.stft. It has the same parameters (plus an additional optional length parameter) and returns the least-squares estimation of the original signal.
torch.manual_seed(0)
n_fft = 5
waveform = torch.rand(2, 5)
stft = torch.stft(waveform, n_fft=n_fft)
approx_waveform = torchaudio.functional.istft(stft, n_fft=n_fft, length=waveform.size(1))
>>> waveform
tensor([[0.4963, 0.7682, 0.0885, 0.1320, 0.3074],
[0.6341, 0.4901, 0.8964, 0.4556, 0.6323]])
>>> approx_waveform
tensor([[0.4963, 0.7682, 0.0885, 0.1320, 0.3074],
[0.6341, 0.4901, 0.8964, 0.4556, 0.6323]])
- Compose: Please use core abstractions such as nn.Sequential() or a for-loop over a list of transforms.
- SPECTROGRAM, F2M, and MEL have been removed. Please use Spectrogram, MelScale, and MelSpectrogram.
- LC2CL and BLC2CBL: While the LC layout might be common in signal processing, support for it is out of scope of this library, and transforms such as LC2CL only aid its proliferation. Please use transpose if you need this behavior.
- Scale, PadTrim, DownmixMono: Please use division in place of Scale, torch.nn.functional.pad/trim in place of PadTrim, and torch.mean on the channel dimension in place of DownmixMono.
- torchaudio.legacy has been removed. Please use torchaudio.load and torchaudio.save.
- Spectrogram used to be of dimension (channel, time, freq) and is now (channel, freq, time). Similarly for MelScale, MelSpectrogram, and MFCC, time is the last dimension. Please see our README for an explanation of the rationale behind these changes. Please use transpose to get the previous behavior.
- MuLawExpanding was renamed to MuLawDecoding as the inverse of MuLawEncoding (https://github.com/pytorch/audio/pull/159)
- SpectrogramToDB was renamed to AmplitudeToDB (https://github.com/pytorch/audio/pull/170). The input does not necessarily have to be a spectrogram, and as such it can be used in many more cases, as the name should reflect.
- Spectrogram, AmplitudeToDB, MelScale, MelSpectrogram, MFCC, MuLawEncoding, and MuLawDecoding (https://github.com/pytorch/audio/pull/118)
- Spectrogram, AmplitudeToDB, MelScale, MelSpectrogram, MFCC, MuLawEncoding, and MuLawDecoding (https://github.com/pytorch/audio/pull/118)
- test_transforms.py, where double tensors were compared with floats (https://github.com/pytorch/audio/pull/132)
- vctk.read_audio (issue https://github.com/pytorch/audio/issues/143), as there were issues with downsampling using SoxEffectsChain (https://github.com/pytorch/audio/pull/145)
- sox_close (https://github.com/pytorch/audio/pull/174)
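On the MuLawEncoding / MuLawDecoding pair mentioned above: the decoding transform inverts the encoding up to quantization error. A dependency-free sketch of μ-law companding for a single sample, assuming 256 quantization channels (torchaudio's transforms apply the same math elementwise to tensors):

```python
import math

def mu_law_encode(x, quantization_channels=256):
    # Compress a sample in [-1.0, 1.0] with the mu-law curve, then
    # quantize it to an integer in [0, quantization_channels - 1].
    mu = quantization_channels - 1
    compressed = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    return int((compressed + 1.0) / 2.0 * mu + 0.5)

def mu_law_decode(y, quantization_channels=256):
    # Invert the quantization, then expand with the inverse mu-law curve.
    mu = quantization_channels - 1
    compressed = 2.0 * (y / mu) - 1.0
    return math.copysign(math.expm1(abs(compressed) * math.log1p(mu)) / mu,
                         compressed)
```

The logarithmic compression spends more of the 256 levels near zero, where small-amplitude detail matters most, which is why decode(encode(x)) is closest to x for quiet samples.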