TS-RIR

Translating Synthetic RIRs to Real RIRs

Related Works

  1. IR-GAN: Room Impulse Response Generator for Far-field Speech Recognition (INTERSPEECH 2021)
  2. FAST-RIR: Fast Neural Diffuse Room Impulse Response Generator (ICASSP 2022)
  3. MESH2IR: Neural Acoustic Impulse Response Generator for Complex 3D Scenes (ACM Multimedia 2022)

NEWS: We have released MULTI-CHANNEL MULTI-SPEAKER MULTI-SPATIAL AUDIO CODEC. The official code of our network, M3-AUDIODEC, is available.

TS-RIR (Accepted to IEEE ASRU 2021)

This is the official implementation of TS-RIRGAN. We started our implementation from WaveGAN. TS-RIRGAN is a one-dimensional CycleGAN that takes synthetic RIRs as raw waveform audio and translates them into real RIRs. Our network architecture is shown below.

[Figure: TS-RIRGAN network architecture (Architecture-1.png)]

You can find more details about our implementation in our paper, TS-RIR: Translated synthetic room impulse responses for speech augmentation.
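At its core, the translation uses a CycleGAN objective on one-dimensional audio: two generators map between the synthetic and real RIR domains, and a cycle-consistency loss keeps a translated-and-back RIR close to its input. The sketch below illustrates only this loss in TensorFlow 1.x (the version pinned in the requirements); the generator stub, waveform length, and scope names are illustrative placeholders, not the actual TS-RIRGAN layers.

import tensorflow as tf

def generator(x, scope):
    # Hypothetical 1-D convolutional generator stub; the real TS-RIRGAN
    # generators are deeper WaveGAN-style networks defined in this repo.
    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
        h = tf.layers.conv1d(x, 64, kernel_size=25, padding='same',
                             activation=tf.nn.relu)
        return tf.layers.conv1d(h, 1, kernel_size=25, padding='same',
                                activation=tf.nn.tanh)

# Assumed fixed-length raw-waveform RIR batches (length is a placeholder).
syn = tf.placeholder(tf.float32, [None, 16384, 1])
real = tf.placeholder(tf.float32, [None, 16384, 1])

fake_real = generator(syn, 'G_syn2real')      # synthetic -> real domain
rec_syn = generator(fake_real, 'G_real2syn')  # and back again
fake_syn = generator(real, 'G_real2syn')
rec_real = generator(fake_syn, 'G_syn2real')

# Cycle-consistency: an RIR translated to the other domain and back
# should reconstruct the original waveform (L1 distance).
cycle_loss = (tf.reduce_mean(tf.abs(rec_syn - syn)) +
              tf.reduce_mean(tf.abs(rec_real - real)))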

Requirements

tensorflow-gpu==1.12.0
scipy==1.0.0
matplotlib==3.0.2
librosa==0.6.2
ffmpeg==4.2.1
cuda==9.0.176
cudnn==7.6.5
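Assuming a pip-based environment, the Python packages can be installed in one step; ffmpeg, CUDA, and cuDNN are system-level dependencies and must be installed separately:

pip install tensorflow-gpu==1.12.0 scipy==1.0.0 matplotlib==3.0.2 librosa==0.6.2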

Datasets

To train TS-RIRGAN to translate Synthetic RIRs to Real RIRs, download the RIRs from IRs_for_GAN and unzip the IRs_for_GAN directory inside the TS-RIR folder.

This folder contains Synthetic RIRs generated using the Geometric Acoustic Simulator (GAS) and Real RIRs from the BUT ReverbDB dataset.
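Based on the paths used in the commands below, the expected layout is:

TS-RIR/
  IRs_for_GAN/
    Real_IRs/
      train/          <- real RIRs from BUT ReverbDB (WAV files)
    Synthetic_IRs/
      train/          <- synthetic RIRs from GAS (WAV files)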

Translate Synthetic RIRs to Real RIRs using the trained model

Download all the MODEL FILES and move them to the generator folder. Recreate the dataset directory structure inside the generator folder. You can then convert Synthetic RIRs to Real RIRs by running the following command inside the generator folder.

export CUDA_VISIBLE_DEVICES=1
python3 generator.py --data1_dir ../IRs_for_GAN/Real_IRs/train --data1_first_slice --data1_pad_end --data1_fast_wav --data2_dir ../IRs_for_GAN/Synthetic_IRs/train --data2_first_slice --data2_pad_end --data2_fast_wav
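Once generation finishes, a quick sanity check on a translated RIR can be done with the scipy and matplotlib packages from the requirements. This is a minimal sketch; the filename below is a placeholder for wherever your run writes its output, and a mono WAV file is assumed:

import matplotlib
matplotlib.use('Agg')  # render without a display
import matplotlib.pyplot as plt
from scipy.io import wavfile

fs, rir = wavfile.read('translated_rir.wav')  # placeholder output path
print('sample rate: %d Hz, length: %d samples (%.2f s)'
      % (fs, len(rir), len(rir) / float(fs)))

plt.plot(rir)
plt.xlabel('sample')
plt.ylabel('amplitude')
plt.title('Translated RIR waveform')
plt.savefig('translated_rir.png')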

Training TS-RIRGAN

Run the following command to train TS-RIRGAN.

export CUDA_VISIBLE_DEVICES=0
python3 train_TSRIRgan.py train ./train --data1_dir ./IRs_for_GAN/Real_IRs/train --data1_first_slice --data1_pad_end --data1_fast_wav --data2_dir ./IRs_for_GAN/Synthetic_IRs/train --data2_first_slice --data2_pad_end --data2_fast_wav

To back up the model every hour (60 minutes), run the following command.

export CUDA_VISIBLE_DEVICES=1
python3 backup.py ./train 60

To monitor the training using TensorBoard, run the following command.

tensorboard --logdir=./train

Results

The figure below shows a Synthetic RIR generated using the Geometric Acoustic Simulator, the same Synthetic RIR translated to a Real RIR by our TS-RIRGAN, and a Real RIR from the BUT ReverbDB dataset. Note that there is no one-to-one correspondence between Synthetic RIRs and Real RIRs from BUT ReverbDB; the Real RIR is shown only so that the energy distribution of our translated RIR can be compared with that of a Real RIR.

[Figure: spectrogram comparison of synthetic, translated, and real RIRs (spectrogram.png)]
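One common way to make this energy comparison concrete is the Schroeder backward-integrated energy decay curve (EDC), from which a rough T60 can be read off. The sketch below is illustrative only; the filenames are placeholders and mono WAV files are assumed:

import numpy as np
from scipy.io import wavfile

def edc_db(rir):
    # Schroeder backward integration: energy remaining after each
    # sample, in dB relative to the total energy of the RIR.
    energy = np.cumsum((rir.astype(np.float64) ** 2)[::-1])[::-1]
    return 10.0 * np.log10(energy / energy[0] + 1e-12)

fs, synthetic = wavfile.read('synthetic_rir.wav')   # placeholder
_, translated = wavfile.read('translated_rir.wav')  # placeholder
_, real_rir = wavfile.read('real_rir.wav')          # placeholder

for name, rir in [('synthetic', synthetic),
                  ('translated', translated),
                  ('real', real_rir)]:
    curve = edc_db(rir)
    below = np.where(curve <= -60.0)[0]  # first sample 60 dB down
    t60 = below[0] / float(fs) if len(below) else float('nan')
    print('%-10s approx. T60: %.2f s' % (name, t60))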

Output

You can download the RIRs generated for our Kaldi Far-field Automatic Speech Recognition Experiments (a usage sketch follows the list below).

  • RIRs generated using the Geometric Acoustic Simulator (GAS). -- Output
  • Room equalization applied to Synthetic RIRs from GAS. -- Output
  • Room equalization first, then translation of the equalized Synthetic RIRs to Real RIRs. -- Output
  • Translation of Synthetic RIRs to Real RIRs only. -- Output
  • Translation of Synthetic RIRs to Real RIRs first, then room equalization of the translated RIRs. -- Output
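For reference, the sketch below shows the typical way such RIRs are used in far-field ASR augmentation: convolving clean speech with an RIR to simulate a reverberant far-field recording. The filenames are placeholders, and both files are assumed to be mono WAVs at the same sample rate:

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, speech = wavfile.read('clean_speech.wav')  # placeholder
_, rir = wavfile.read('translated_rir.wav')    # placeholder

# Convolve speech with the RIR and keep the original utterance length.
reverberant = fftconvolve(speech.astype(np.float64),
                          rir.astype(np.float64))[:len(speech)]

# Normalize to avoid clipping, then write 16-bit PCM output.
reverberant *= 32767.0 / (np.max(np.abs(reverberant)) + 1e-12)
wavfile.write('reverberant_speech.wav', fs, reverberant.astype(np.int16))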

Attribution

If you use this code in your research, please consider citing

@article{DBLP:journals/corr/abs-2103-16804,
  author    = {Anton Ratnarajah and
               Zhenyu Tang and
               Dinesh Manocha},
  title     = {{TS-RIR:} Translated synthetic room impulse responses for speech augmentation},
  journal   = {CoRR},
  volume    = {abs/2103.16804},
  year      = {2021}
}
@inproceedings{donahue2019wavegan,
  title={Adversarial Audio Synthesis},
  author={Donahue, Chris and McAuley, Julian and Puckette, Miller},
  booktitle={ICLR},
  year={2019}
}

If you use Sub-band Room Equalization, please consider citing

@inproceedings{9054454,
  author    = {Z. {Tang} and H. {Meng} and D. {Manocha}},
  booktitle = {ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title     = {Low-Frequency Compensated Synthetic Impulse Responses For Improved Far-Field Speech Recognition},
  year      = {2020},
  pages     = {6974-6978},
}

If you use Real RIRs from our dataset folder (IRs_for_GAN), please consider citing

@article{DBLP:journals/jstsp/SzokeSMPC19,
  author    = {Igor Sz{\"{o}}ke and
               Miroslav Sk{\'{a}}cel and
               Ladislav Mosner and
               Jakub Paliesek and
               Jan Honza Cernock{\'{y}}},
  title     = {Building and Evaluation of a Real Room Impulse Response Dataset},
  journal   = {{IEEE} J. Sel. Top. Signal Process.},
  volume    = {13},
  number    = {4},
  pages     = {863--876},
  year      = {2019}
}

If you use Synthetic RIRs from our dataset folder (IRs_for_GAN), please consider citing

@inproceedings{9052932,
  author    = {Z. {Tang} and L. {Chen} and B. {Wu} and D. {Yu} and D. {Manocha}},
  booktitle = {ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title     = {Improving Reverberant Speech Training Using Diffuse Acoustic Simulation},
  year      = {2020},
  pages     = {6969-6973},
}