Adaptive and Focusing Neural Layers for Multi-Speaker Separation Problem
This repository contains the implementation of an Adaptive Layer and a Focus Layer for the multi-speaker separation problem. The Adaptive Layer is a sparse linear autoencoder that operates directly on raw audio files, replacing the use of spectrograms. This autoencoder is wrapped around the following state-of-the-art architectures:
These are compared to traditional STFT-based approaches with the following architectures:
The Focus Layer is under construction.
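The core idea of the Adaptive Layer can be illustrated with a minimal NumPy sketch: raw-audio windows are encoded with a learned linear basis and passed through a ReLU so the latent code is sparse and non-negative, much like a spectrogram magnitude. All names and dimensions below are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def adaptive_encode(frames, basis):
    """Encode raw-audio frames with a linear basis, then apply ReLU to
    encourage a sparse, non-negative latent code."""
    latent = frames @ basis           # (n_frames, n_filters)
    return np.maximum(latent, 0.0)    # ReLU sparsity

def adaptive_decode(latent, basis):
    """Decode latent codes back to raw audio with the transposed basis
    (tied weights, a common autoencoder choice; an assumption here)."""
    return latent @ basis.T

rng = np.random.default_rng(0)
window, n_filters = 512, 256          # n_filters mirrors --filters 256 below
basis = rng.standard_normal((window, n_filters)) / np.sqrt(window)

audio_frames = rng.standard_normal((4, window))   # four raw-audio windows
codes = adaptive_encode(audio_frames, basis)
recon = adaptive_decode(codes, basis)
print(codes.shape, recon.shape)       # (4, 256) (4, 512)
```

In training, the basis would be learned end-to-end together with the separation network, with a sparsity penalty on the codes.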
```shell
python -m experiments.training.pretraining --men --women --loss sdr+l2 --separation mask --learning_rate 0.001 --nb_speakers 2 --batch_size 4 --filters 256 --max_pool 256 --beta 0.0 --regularization 0.0 --overlap_coef 1.0 --no_random_picking
```
Architecture:
Coefficients:
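The flags in the example command above could be handled with an `argparse` parser along these lines. The flag names are copied from the command; the types, defaults, and help strings are assumptions, not the repository's actual parser.

```python
import argparse

def build_parser():
    # Hypothetical parser mirroring the flags of the example command.
    p = argparse.ArgumentParser(description="Pretraining for speaker separation")
    p.add_argument("--men", action="store_true", help="include male speakers")
    p.add_argument("--women", action="store_true", help="include female speakers")
    p.add_argument("--loss", default="sdr+l2")
    p.add_argument("--separation", default="mask")
    p.add_argument("--learning_rate", type=float, default=0.001)
    p.add_argument("--nb_speakers", type=int, default=2)
    p.add_argument("--batch_size", type=int, default=4)
    p.add_argument("--filters", type=int, default=256)
    p.add_argument("--max_pool", type=int, default=256)
    p.add_argument("--beta", type=float, default=0.0)
    p.add_argument("--regularization", type=float, default=0.0)
    p.add_argument("--overlap_coef", type=float, default=1.0)
    p.add_argument("--no_random_picking", action="store_true")
    return p

args = build_parser().parse_args(
    "--men --women --nb_speakers 2 --filters 256".split())
print(args.nb_speakers, args.filters)   # 2 256
```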
```
scipy==1.0.0
tqdm==4.19.4
SoundFile==0.9.0.post1
matplotlib==2.1.0
numpy==1.12.0
tensorflow_gpu==1.4.0
librosa==0.5.1
haikunator==2.1.0
h5py==2.7.0
scikit_learn==0.19.1
tensorflow==1.5.0rc1
```