Parakeet Versions

PAddle PARAllel text-to-speech toolKIT (supporting Tacotron2, Transformer TTS, FastSpeech2/FastPitch, SpeedySpeech, WaveFlow and Parallel WaveGAN)

v0.4.0

2 years ago

We add some features in v0.4.0, including:

  • Text Frontend
    • Rule-based Mandarin text frontend (a minimal sketch follows this list).
  • Acoustic Models
    • FastSpeech2/FastPitch for CSMSC and multi-speaker AISHELL-3
    • SpeedySpeech for CSMSC
  • Vocoders
    • Parallel WaveGAN for CSMSC
  • Others
    • An example of using MFA 1.x
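
To make the text-frontend item above concrete, here is a minimal, hypothetical sketch of what a rule-based Mandarin frontend does: it expands non-Chinese tokens (digits, in this toy example) into their spoken form before grapheme-to-phoneme conversion. The function names and the single rule are illustrative, not Parakeet's actual API.

```python
# Illustrative rule-based text normalization for a Mandarin frontend.
# Real frontends apply many rules (numbers, dates, units, English words).
import re

DIGITS = "零一二三四五六七八九"

def verbalize_digits(match: re.Match) -> str:
    """Read a digit string out digit by digit, e.g. '110' -> '一一零'."""
    return "".join(DIGITS[int(d)] for d in match.group(0))

def normalize(text: str) -> str:
    """Apply normalization rules; a real frontend has many more."""
    return re.sub(r"\d+", verbalize_digits, text)

print(normalize("请拨打110"))  # -> 请拨打一一零
```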

v0.3.1

2 years ago

Fixed a config key error.

v0.3.0

2 years ago
  1. An experiment on voice cloning in Chinese, based on "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis", is added; see the sketch after this list.
  2. Switched to VisualDL as the visualizer.
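
As a rough illustration of that voice-cloning recipe (not Parakeet's actual code): the SV2TTS approach runs a reference utterance through a speaker encoder to get a fixed-size embedding, then conditions the synthesizer by concatenating that embedding onto the text encoder outputs. A minimal NumPy sketch, with illustrative shapes and a random stand-in for the trained encoder:

```python
# SV2TTS-style conditioning: speaker embedding broadcast over time and
# concatenated to the text encoder outputs. Shapes and names are
# illustrative only.
import numpy as np

def speaker_embedding(ref_mels: np.ndarray, dim: int = 256) -> np.ndarray:
    """Stand-in for a trained speaker encoder: (frames, n_mels) -> (dim,)."""
    rng = np.random.default_rng(0)          # fake "trained" projection
    proj = rng.standard_normal((ref_mels.shape[1], dim))
    emb = ref_mels.mean(axis=0) @ proj      # pool over time, project
    return emb / np.linalg.norm(emb)        # L2-normalize, as in the paper

def condition(text_hidden: np.ndarray, emb: np.ndarray) -> np.ndarray:
    """Concatenate the speaker embedding to every encoder time step."""
    tiled = np.tile(emb, (text_hidden.shape[0], 1))
    return np.concatenate([text_hidden, tiled], axis=-1)

ref = np.random.rand(120, 80)               # reference utterance mels
hidden = np.random.rand(50, 512)            # encoder outputs for the text
print(condition(hidden, speaker_embedding(ref)).shape)  # (50, 768)
```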

v0.2.1

3 years ago

Fixed some bugs in multiprocess training.

v0.2.0

3 years ago

Experiments conducted with the LJSpeech dataset are extended from separate ones for acoustic models and vocoders to chained ones. Neural acoustic models and neural vocoders work together to form a simpler TTS pipeline:

  1. Transformer TTS + WaveFlow;
  2. Tacotron2 + WaveFlow.

Since the acoustic configurations for training the acoustic model and the vocoder are the same, chaining them is seamless.
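
A minimal sketch of such a chained pipeline, with placeholder classes standing in for the acoustic models (Transformer TTS, Tacotron2) and the vocoder (WaveFlow); the names and shapes below are illustrative, not Parakeet's actual API:

```python
# Chained TTS: acoustic model predicts a mel spectrogram from text,
# the vocoder turns that spectrogram into a waveform.
import numpy as np

class AcousticModel:
    def synthesize(self, text: str) -> np.ndarray:
        """Text -> mel spectrogram of shape (frames, n_mels)."""
        return np.random.rand(10 * len(text), 80)

class Vocoder:
    def infer(self, mel: np.ndarray) -> np.ndarray:
        """Mel spectrogram -> waveform samples (hop size 256 assumed)."""
        return np.random.rand(mel.shape[0] * 256)

def tts(text: str, am: AcousticModel, voc: Vocoder) -> np.ndarray:
    # Chaining is seamless because both models were trained with the
    # same acoustic configuration (n_mels, hop size, sample rate).
    return voc.infer(am.synthesize(text))

audio = tts("Hello world", AcousticModel(), Vocoder())
print(audio.shape)
```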

v0.1.0

3 years ago

Parakeet aims to provide a flexible, efficient, and state-of-the-art text-to-speech toolkit for the open-source community. It is built on the PaddlePaddle dynamic graph and includes many influential TTS models proposed by Baidu Research and other research groups. This is the first release of Parakeet.

In particular, it features the latest WaveFlow model proposed by Baidu Research.

  • WaveFlow can synthesize 22.05 kHz high-fidelity speech around 40x faster than real time on an Nvidia V100 GPU without engineered inference kernels, which is faster than WaveGlow and several orders of magnitude faster than WaveNet.
  • WaveFlow is a small-footprint flow-based model for raw audio. It has only 5.9M parameters, which is 15x smaller than WaveGlow (87.9M).
  • WaveFlow is directly trained with maximum likelihood, without the probability density distillation and auxiliary losses used in Parallel WaveNet and ClariNet, which simplifies the training pipeline and reduces the cost of development.
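
For illustration, the maximum-likelihood objective of a flow model reduces to the change-of-variables formula: log p(x) = log p(z) + log|det dz/dx|. A toy NumPy sketch of that loss with a single affine layer (WaveFlow itself stacks many invertible transforms; this only shows the shape of the objective, with illustrative names):

```python
# Toy flow negative log-likelihood: map audio x to latent z through an
# invertible affine transform and score z under a standard-normal prior.
# No distillation or auxiliary losses are needed.
import numpy as np

def flow_nll(x: np.ndarray, log_scale: float, shift: float) -> float:
    """NLL of x under the flow z = x * exp(log_scale) + shift."""
    z = x * np.exp(log_scale) + shift
    log_prior = -0.5 * (z**2 + np.log(2 * np.pi))  # log N(z; 0, 1)
    log_det = log_scale                            # per-sample log|dz/dx|
    return float(-(log_prior + log_det).mean())

x = np.random.randn(22050)  # one second of toy "audio"
print(flow_nll(x, log_scale=0.1, shift=0.0))
```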