I implement yet another text-to-speech model, dc-tts, introduced in Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention. My goal, however, is not just replicating the paper. Rather, I'd like to gain insights about various sound projects.
(Note that the API of `tf.contrib.layers.layer_norm` has changed since TensorFlow 1.3.)
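For reference, the core computation behind `tf.contrib.layers.layer_norm` can be sketched in plain NumPy: normalize each sample over its feature axis, then apply a learned scale and shift. This is a minimal sketch; the function name, default epsilon, and normalization axes here are assumptions and may differ from TensorFlow's exact defaults.

```python
import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-6):
    """Minimal layer normalization sketch: zero-mean, unit-variance
    per sample over the last axis, then scale (gamma) and shift (beta).
    eps guards against division by zero (TF's default may differ)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.array([[1.0, 2.0, 3.0]])
y = layer_norm(x)
print(y)  # each row now has zero mean and (near-)unit variance
```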
I train English models and a Korean model on four different speech datasets:
1. LJ Speech Dataset
2. Nick Offerman's Audiobooks
3. Kate Winslet's Audiobook
4. KSS Dataset
The LJ Speech Dataset has recently become a widely used benchmark for TTS because it is publicly available and contains 24 hours of reasonable-quality samples. Nick's and Kate's audiobooks are used in addition, to see whether the model can learn even from smaller, more variable speech data; they are 18 hours and 5 hours long, respectively. Finally, the KSS Dataset is a single-speaker Korean speech dataset of more than 12 hours.
Adjust hyperparameters in `hyperparams.py`. (If you want to do preprocessing, set `prepro` to True.)
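Hyperparameter files like this are typically plain Python modules of constants. A hedged sketch of what the relevant flag might look like; the field names other than `prepro` are illustrative assumptions, not the actual contents of `hyperparams.py`:

```python
# Hypothetical excerpt of hyperparams.py; names besides `prepro` are illustrative.
class Hyperparams:
    prepro = True           # if True, load precomputed features from disk
    data = "LJSpeech-1.1"   # dataset path (assumption)
    batch_size = 32         # training batch size (assumption)

hp = Hyperparams()
```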
Run `python train.py 1` to train Text2Mel. (If you set `prepro` to True, run `python prepro.py` first.)
Run `python train.py 2` to train SSRN.
You can run STEP 2 and STEP 3 at the same time if you have more than one GPU.
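One common way to train both networks in parallel is to pin each process to its own GPU via `CUDA_VISIBLE_DEVICES`. This is a sketch of such a launch, not a script from the repo; the log file names are made up:

```shell
# Train Text2Mel on GPU 0 and SSRN on GPU 1 in parallel (hypothetical launch).
CUDA_VISIBLE_DEVICES=0 python train.py 1 > text2mel.log 2>&1 &
CUDA_VISIBLE_DEVICES=1 python train.py 2 > ssrn.log 2>&1 &
wait  # block until both training runs finish
```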
I generate speech samples based on the Harvard Sentences, as the original paper does. They are already included in the repo.
Run `python synthesize.py` and check the generated files in the output directory.
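To quickly sanity-check the synthesized audio, the standard-library `wave` module can report a file's sample rate and duration. The output path in the usage comment is a hypothetical example, not the repo's actual directory layout:

```python
import wave

def wav_info(path):
    """Return (sample_rate_hz, duration_seconds) for a PCM WAV file."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        frames = w.getnframes()
    return rate, frames / float(rate)

# Example usage (path is hypothetical):
# print(wav_info("samples/1.wav"))
```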
| Dataset | Samples (training steps) |
| --- | --- |
| LJ | 50k, 200k, 310k, 800k |
| Nick | 40k, 170k, 300k, 800k |
| Kate | 40k, 160k, 300k, 800k |