This repository implements Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis (SV2TTS) with a vocoder that works in real time. SV2TTS is a three-stage deep learning framework that creates a numerical representation of a voice from a few seconds of audio, then uses it to condition a text-to-speech model trained to generalize to new voices.
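To give a feel for the first stage of the framework, here is a toy sketch (not this repository's code) of the core idea: variable-length audio is reduced to one fixed-size speaker embedding, and clips of the same voice land closer together than clips of different voices. The frame vectors and the simple mean-pooling "encoder" below are illustrative placeholders, not the real model.

```python
import math

def embed(frames):
    """Toy encoder: average per-frame feature vectors into one
    fixed-size speaker embedding (a real encoder is a neural net)."""
    dim = len(frames[0])
    return [sum(f[i] for f in frames) / len(frames) for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Synthetic frame features: two clips of "voice A", one of "voice B".
voice_a1 = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]]
voice_a2 = [[1.0, 0.2], [1.2, 0.1]]
voice_b  = [[0.1, 1.0], [0.0, 0.9]]

e1, e2, e3 = embed(voice_a1), embed(voice_a2), embed(voice_b)
print(cosine(e1, e2) > cosine(e1, e3))  # prints True: same speaker scores higher
```

The synthesizer then conditions on such an embedding to produce speech in that voice.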
Read more about the procedure we followed and our findings here.
If you want to test the trained samples and see how the model performs on custom text, follow these instructions.
Pre-requisites:
Directions to install:
virtualenv env
cd env/scripts
activate
pip install -r requirements.txt
./run.sh
./test.sh
Directions to execute:
Test through interface
python app.py
Test through the Synthesize function
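The actual Synthesize function's module path and signature aren't shown in this README, so the following is only a hypothetical sketch of how such a helper is typically wired in SV2TTS-style pipelines. Every name here (`embed_speaker`, `synthesize`) and both stub bodies are placeholders, not this repository's API; consult the source for the real call.

```python
from typing import List

def embed_speaker(reference_wav: List[float]) -> List[float]:
    """Placeholder: a real encoder maps reference audio to a
    fixed-size speaker embedding."""
    n = max(len(reference_wav), 1)
    return [sum(reference_wav) / n]

def synthesize(text: str, speaker_embedding: List[float]) -> List[float]:
    """Placeholder: a real synthesizer + vocoder would return
    waveform samples of `text` spoken in the embedded voice."""
    return [0.0] * (len(text) * 100)  # dummy samples, one burst per char

# Typical usage pattern: embed a short reference clip, then condition
# synthesis on that embedding.
embedding = embed_speaker([0.1, -0.2, 0.05])
wav = synthesize("Hello from a cloned voice.", embedding)
print(len(wav) > 0)  # prints True: some audio samples came back
```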
Test using Docker
docker build -t smoketrees/voice:latest .
docker run -p 5000:5000 smoketrees/voice:latest
If you want to work with the source code and train your own models on a different dataset or language, check out the instructions mentioned here.
For more information about the tested samples and their results, see here.