SpeakerEmbeddingLossComparison

A Comparison of Metric Learning Loss Functions for End-to-End Speaker Verification

This is the companion repository for the paper A Comparison of Metric Learning Loss Functions for End-to-End Speaker Verification, published at the SLSP 2020 conference. It hosts our best model trained with additive angular margin loss, and contains instructions for reproducing our results and using the model.

Architecture

Our model combines SincNet, which extracts features directly from the raw waveform, with an x-vector network that produces the speaker embedding.
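
If it helps to visualize this pipeline, here is a minimal structural sketch in PyTorch. It is an illustrative assumption, not the pyannote implementation: a plain convolution stands in for SincNet's learnable sinc filters, and all layer sizes are placeholders (the actual model is specified in config.yml).

import torch
import torch.nn as nn

class XVectorSketch(nn.Module):
    def __init__(self, n_features=60, embedding_dim=512):
        super().__init__()
        # stand-in for SincNet: the real model applies learnable
        # band-pass sinc filters to the raw waveform
        self.feature_extractor = nn.Conv1d(1, n_features, kernel_size=251, stride=10)
        # frame-level TDNN layers (1-d convolutions over time), as in x-vector
        self.tdnn = nn.Sequential(
            nn.Conv1d(n_features, 512, kernel_size=5, dilation=1), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=3, dilation=2), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=3, dilation=3), nn.ReLU(),
            nn.Conv1d(512, 1500, kernel_size=1), nn.ReLU(),
        )
        # statistics pooling turns a variable-length sequence into a fixed-size vector
        self.segment = nn.Linear(2 * 1500, embedding_dim)

    def forward(self, waveform):  # (batch, 1, samples)
        frames = self.tdnn(self.feature_extractor(waveform))
        stats = torch.cat([frames.mean(dim=2), frames.std(dim=2)], dim=1)
        return self.segment(stats)  # (batch, embedding_dim)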

Training

You can train our model from scratch using the configuration file config.yml that we provide. All you need to do is run the following commands in your terminal:

$ export EXP=models/AAM # experiment directory containing config.yml
$ export PROTOCOL=VoxCeleb.SpeakerVerification.VoxCeleb2
$ pyannote-audio emb train --parallel=10 --gpu --to=1000 $EXP $PROTOCOL 

Note that you may need to adjust these parameters (such as --parallel and --to) to match your setup.

Evaluation

We provide a step-by-step guide to reproducing our equal error rates alongside their 95% confidence intervals. The guide first evaluates the pretrained model using raw cosine distances, and then improves the scores with adaptive s-norm score normalization.

If you want to reproduce our results, check out this notebook.
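
For reference, adaptive s-norm normalizes each raw trial score against the statistics of the cohort scores most similar to each side of the trial. Below is a minimal numpy sketch of the idea, assuming similarity scores (higher means more similar) and a placeholder top-k size; the exact cohort and parameters we used are in the notebook.

import numpy as np

def adaptive_s_norm(score, enrol_cohort_scores, test_cohort_scores, top_k=100):
    # keep only the k cohort scores most similar to each side of the trial
    e_top = np.sort(enrol_cohort_scores)[-top_k:]
    t_top = np.sort(test_cohort_scores)[-top_k:]
    # z-normalize the raw score against each side's statistics, then average
    return 0.5 * ((score - e_top.mean()) / e_top.std()
                  + (score - t_top.mean()) / t_top.std())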

Fine-tuning

You can fine-tune our model to your dataset with the following commands:

$ export WEIGHTS=models/AAM/train/VoxCeleb.SpeakerVerification.VoxCeleb2.train/weights/0560.pt
$ export EXP=<your_experiment_directory>
$ export PROTOCOL=<your_pyannote_database_protocol>
$ pyannote-audio emb train --pretrained $WEIGHTS --gpu --to=1000 $EXP $PROTOCOL

Inference in Python

The default pyannote model for speaker embedding on torch.hub is our AAM-loss model trained on variable-length audio chunks. If you want to use the model right away, you can do so easily in a Python script:

# load pretrained model from torch.hub
import torch
model = torch.hub.load('pyannote/pyannote-audio', 'emb')

# extract embeddings for whole files
# (each file yields one embedding per sliding window)
emb1 = model({'audio': '/path/to/file1.wav'})
emb2 = model({'audio': '/path/to/file2.wav'})

# average the cosine distance over all pairs of windows
from scipy.spatial.distance import cdist
import numpy as np
distance = np.mean(cdist(emb1, emb2, metric='cosine'))
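
To turn this distance into an accept/reject decision, compare it to a threshold tuned on a development set. The value below is a made-up placeholder, not a threshold from the paper:

# hypothetical threshold: tune this on your own development set
THRESHOLD = 0.5
print('same speaker' if distance < THRESHOLD else 'different speakers')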

You can also replace the call to torch.hub.load with a pyannote Pretrained instance pointing to the model in this repo:

from pyannote.audio.features import Pretrained
model = Pretrained(
    'models/AAM/train/VoxCeleb.SpeakerVerification.VoxCeleb2.train/validate_equal_error_rate/'
    'VoxCeleb.SpeakerVerification.VoxCeleb1_X.development', step=0.0333)

print(f'Embeddings of {model.sliding_window.duration:g}s duration and of dimension {model.dimension:d}, '
      f'extracted every {1000 * model.sliding_window.step:g}ms')

Some Compatibility Notes

This project depends on the pyannote-audio toolkit, so make sure you install it before running any code.

Under normal circumstances, everything should work with the newest version of pyannote. However, since pyannote is constantly evolving, compatibility issues may appear. To avoid them, you can install the exact version at this commit from the develop branch.
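
For example, pinning the installation to a specific commit looks like this, where <commit-sha> is a placeholder for the commit linked above:

$ pip install git+https://github.com/pyannote/pyannote-audio.git@<commit-sha>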

Citation

If our work has been useful to you, please cite our paper:

@InProceedings{10.1007/978-3-030-59430-5_11,
    author="Coria, Juan M.
    and Bredin, Herv{\'e}
    and Ghannay, Sahar
    and Rosset, Sophie",
    editor="Espinosa-Anke, Luis
    and Mart{\'i}n-Vide, Carlos
    and Spasi{\'{c}}, Irena",
    title="{A Comparison of Metric Learning Loss Functions for End-To-End Speaker Verification}",
    booktitle="Statistical Language and Speech Processing",
    year="2020",
    publisher="Springer International Publishing",
    address="Cham",
    pages="137--148",
    isbn="978-3-030-59430-5"
}