A PyTorch implementation of Conv-TasNet described in "TasNet: Surpassing Ideal Time-Frequency Masking for Speech Separation" with Permutation Invariant Training (PIT).
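For orientation, here is a minimal PyTorch sketch of the training objective named above: SI-SNR maximized with utterance-level permutation invariant training. Function and variable names are mine for illustration, not this repo's API.

```python
import itertools

import torch


def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR in dB. est, ref: (batch, C, T)."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    # Project the estimate onto the reference to get the scale-invariant target.
    s_target = (est * ref).sum(-1, keepdim=True) * ref / (ref.pow(2).sum(-1, keepdim=True) + eps)
    e_noise = est - s_target
    return 10 * torch.log10(s_target.pow(2).sum(-1) / (e_noise.pow(2).sum(-1) + eps) + eps)


def pit_si_snr_loss(est, ref):
    """PIT loss: try every speaker permutation, keep the best, negate to minimize."""
    C = est.size(1)
    snrs = [si_snr(est[:, perm, :], ref).mean(dim=1)   # (batch,)
            for perm in itertools.permutations(range(C))]
    best, _ = torch.stack(snrs, dim=1).max(dim=1)      # best permutation per utterance
    return -best.mean()
```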
| From  | N   | L  | B   | H   | P | X | R | Norm | Causal | batch size | SI-SNRi (dB) | SDRi (dB) |
|:-----:|:---:|:--:|:---:|:---:|:-:|:-:|:-:|:----:|:------:|:----------:|:------------:|:---------:|
| Paper | 256 | 20 | 256 | 512 | 3 | 8 | 4 | gLN  | X      | -          | 14.6         | 15.0      |
| Here  | 256 | 20 | 256 | 512 | 3 | 8 | 4 | gLN  | X      | 3          | 15.5         | 15.7      |
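The column names follow the hyper-parameter notation of the Conv-TasNet paper. Spelled out as a plain config dict (my own annotation, not code from this repo), the "Here" row is:

```python
conv_tasnet_config = {
    "N": 256,        # number of encoder basis filters
    "L": 20,         # encoder filter length in samples (2.5 ms at 8 kHz)
    "B": 256,        # bottleneck / residual-path channels
    "H": 512,        # channels inside each convolutional block
    "P": 3,          # kernel size of the depthwise convolutions
    "X": 8,          # convolutional blocks per repeat
    "R": 4,          # number of repeats
    "norm": "gLN",   # global layer normalization
    "causal": False, # the gLN configuration is non-causal
}
```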
Install the Python dependencies, then build the data-preparation tools under tools/:

```bash
pip install -r requirements.txt
cd tools; make
```
If you already have mixture wsj0 data:
1. `$ cd egs/wsj0`, then modify the wsj0 data path `data` at the beginning of `run.sh` to your path.
2. `$ bash run.sh`, that's all!

If you only have the original wsj0 data (sphere format):
1. `$ cd egs/wsj0`, then modify the three wsj0 data paths at the beginning of `run.sh` to your paths.
2. Convert the sphere files to wav and generate the mixtures; the Stage 0 part of `run.sh` provides an example.
3. `$ bash run.sh`, that's all!

You can change any hyper-parameter with `$ bash run.sh --parameter_name parameter_value`, e.g., `$ bash run.sh --stage 3`. The available parameter names are listed in `egs/wsj0/run.sh` before the line `. utils/parse_options.sh`.
Workflow of `egs/wsj0/run.sh`:

```bash
# Set PATH and PYTHONPATH
$ cd egs/wsj0/; . ./path.sh
# Train:
$ train.py -h
# Evaluate performance:
$ evaluate.py -h
# Separate mixture audio:
$ separate.py -h
```
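Conceptually, the separation step loads a mixture waveform, runs the trained model, and writes one wav per estimated source. A hedged, self-contained sketch (the stand-in model and file names are illustrative, not this repo's `separate.py`):

```python
import numpy as np
import soundfile as sf
import torch
import torch.nn as nn

sample_rate = 8000
mixture = np.random.randn(4 * sample_rate).astype("float32")  # stand-in 4 s mixture

class StandInSeparator(nn.Module):
    """Placeholder for a trained Conv-TasNet: maps (1, T) -> (1, C, T)."""
    def forward(self, mix):
        return torch.stack([0.5 * mix, 0.5 * mix], dim=1)

model = StandInSeparator().eval()
mix = torch.from_numpy(mixture).unsqueeze(0)  # (1, T)
with torch.no_grad():
    est_sources = model(mix)                  # (1, 2, T)
for c, src in enumerate(est_sources[0]):
    sf.write(f"source{c + 1}.wav", src.numpy(), sample_rate)
```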
If you want to visualize your loss, you can use visdom (a minimal logging sketch follows these steps):
1. Open a terminal on your training machine and run `$ visdom`.
2. In another terminal, run `$ bash run.sh --visdom 1 --visdom_id "<any-string>"` or `$ train.py ... --visdom 1 --visdom_id "<any-string>"`.
3. Open the visdom page in your browser at `<your-remote-server-ip>:8097`, e.g., `127.0.0.1:8097` when running locally.
4. Choose `<any-string>` in the Environment menu to see your loss.
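Under the hood, logging a loss curve to visdom amounts to a few `line` calls; a minimal sketch, assuming the environment name matches the `--visdom_id` you passed:

```python
import numpy as np
import visdom

viz = visdom.Visdom(env="<any-string>")  # same string as --visdom_id
win = None
for epoch, loss in enumerate([1.2, 0.9, 0.7]):  # stand-in loss values
    if win is None:
        win = viz.line(X=np.array([epoch]), Y=np.array([loss]),
                       opts=dict(title="train loss"))
    else:
        viz.line(X=np.array([epoch]), Y=np.array([loss]),
                 win=win, update="append")
```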
To resume training from a saved model:
$ bash run.sh --continue_from <model-path>
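Resuming typically means restoring both model and optimizer state from the checkpoint; a minimal sketch, assuming standard PyTorch checkpointing (the repo's actual keys and format may differ):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 4)  # stand-in for the Conv-TasNet model
optimizer = torch.optim.Adam(model.parameters())

# Save a checkpoint during training...
torch.save({"epoch": 10,
            "state_dict": model.state_dict(),
            "optim_dict": optimizer.state_dict()}, "checkpoint.pth")

# ...and restore it to continue where training left off.
ckpt = torch.load("checkpoint.pth", map_location="cpu")
model.load_state_dict(ckpt["state_dict"])
optimizer.load_state_dict(ckpt["optim_dict"])
start_epoch = ckpt["epoch"] + 1
```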
To use multiple GPUs, pass a comma-separated gpu-id sequence, such as:
$ bash run.sh --id "0,1"
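A comma-separated id list like this usually translates into data parallelism over those devices; a hedged sketch with `nn.DataParallel` (the actual mechanism in `run.sh` may differ, e.g., setting `CUDA_VISIBLE_DEVICES`):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 4)  # stand-in for the Conv-TasNet model
device_ids = [int(i) for i in "0,1".split(",")]
if torch.cuda.is_available():
    # Replicate the model and split each batch across the listed GPUs.
    model = nn.DataParallel(model.cuda(device_ids[0]), device_ids=device_ids)
```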
If you run out of memory during training, reduce `batch_size` or use more GPUs:
$ bash run.sh --batch_size <lower-value>
If it happens during cross validation, reduce `cv_maxlen`:
$ bash run.sh --cv_maxlen <lower-value>