A PyTorch implementation of the Time-domain Audio Separation Network (TasNet) with utterance-level Permutation Invariant Training (PIT) for speech separation, from the paper "TasNet: Time-domain Audio Separation Network for Real-time, Single-channel Speech Separation" by Yi Luo and Nima Mesgarani, published at ICASSP 2018.
| Method | Causal | SDRi (dB) | SI-SNRi (dB) | Config |
|---|---|---|---|---|
| TasNet-BLSTM (Paper) | No | 11.1 | 10.8 | |
| TasNet-BLSTM (Here) | No | 11.84 | 11.54 | L=40, N=500, hidden=500, layers=4, lr=1e-3, epochs=100, batch size=10 |
| TasNet-BLSTM (Here) | No | 11.77 | 11.46 | above + L2 (weight decay) 1e-4 |
| TasNet-BLSTM (Here) | No | 13.07 | 12.78 | above + L2 (weight decay) 1e-5 |
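
The SI-SNRi and SDRi columns report improvement over the unprocessed mixture. In terms of the illustrative `si_snr` above (again a sketch, not the repo's evaluation code):

```python
def si_snr_improvement(est, mixture, ref):
    """SI-SNRi: gain of the separated source over the raw mixture.

    est, mixture, ref: [B, T] waveforms for one speaker-matched source.
    """
    return si_snr(est, ref) - si_snr(mixture, ref)
```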
$ pip install -r requirements.txt
$ cd tools; make
If you already have mixture wsj0 data:
1. `$ cd egs/wsj0`, then modify the wsj0 mixture data path at the beginning of `run.sh` to your own path.
2. `$ bash run.sh`. That's all!

If you only have the original wsj0 data (sphere format):
1. `$ cd egs/wsj0`, then modify the three wsj0 data paths at the beginning of `run.sh` to your own paths (the Stage 0 part provides an example).
2. `$ bash run.sh`. That's all!

You can change a hyper-parameter with `$ bash run.sh --parameter_name parameter_value`, e.g. `$ bash run.sh --stage 3`. See the available parameter names in `egs/wsj0/run.sh` before the `. utils/parse_options.sh` line.
Workflow of `egs/wsj0/run.sh`:
# Set PATH and PYTHONPATH
$ cd egs/wsj0/; . ./path.sh
# Train:
$ train.py -h
# Evaluate performance:
$ evaluate.py -h
# Separate mixture audio:
$ separate.py -h
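
For orientation, separation amounts to running a mixture waveform through the trained network. The sketch below is illustrative only; the checkpoint name, loading call, and exact tensor shapes are assumptions, so see `separate.py` for the real interface:

```python
# Illustrative sketch: assumes a TasNet-like model whose forward maps a
# mixture waveform [B, T] to separated sources [B, C, T].
import soundfile as sf  # assumed available for wav I/O
import torch

mixture, rate = sf.read("mix.wav", dtype="float32")      # [T] float32 samples
mix = torch.from_numpy(mixture).unsqueeze(0)             # [1, T] batch of one

model = torch.load("final.pth.tar", map_location="cpu")  # hypothetical checkpoint
model.eval()
with torch.no_grad():
    sources = model(mix)                                 # [1, C, T]

for c, src in enumerate(sources.squeeze(0)):
    sf.write(f"spk{c + 1}.wav", src.numpy(), rate)       # one wav per speaker
```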
If you want to visualize your loss, you can use visdom:
1. Open a new terminal and run `$ visdom`.
2. Run `$ bash run.sh --visdom 1 --visdom_id "<any-string>"` or `$ train.py ... --visdom 1 --visdom_id "<any-string>"`.
3. Open `<your-remote-server-ip>:8097` in your browser, e.g. `127.0.0.1:8097`.
4. In the visdom page, choose `<any-string>` in the Environment menu to see your loss.

If you want to continue training from a checkpoint:
`$ bash run.sh --continue_from <model-path>`