Voice Synthesis

This repository is an implementation of Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis (SV2TTS) with a vocoder that works in real-time. SV2TTS is a three-stage deep learning framework that makes it possible to create a numerical representation of a voice from a few seconds of audio, and to use it to condition a text-to-speech model trained to generalize to new voices.
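As a rough illustration of the first stage (the speaker encoder), the toy sketch below averages per-frame audio features into a fixed-size, L2-normalized embedding vector. This is only a stand-in to show the idea of a "numerical representation of a voice": the real SV2TTS encoder is a trained neural network, and every name and number here is made up for illustration.

```python
import numpy as np

def toy_speaker_embedding(frames: np.ndarray) -> np.ndarray:
    """Illustrative stand-in for a speaker encoder: mean-pool the
    per-frame features and L2-normalize the result, yielding a
    fixed-size vector that characterizes the voice."""
    emb = frames.mean(axis=0)
    norm = np.linalg.norm(emb)
    return emb / norm if norm > 0 else emb

# Two clips of the same (synthetic) speaker yield similar embeddings.
rng = np.random.default_rng(0)
speaker_profile = rng.normal(size=40)          # fake "voice identity"
clip_a = speaker_profile + 0.1 * rng.normal(size=(100, 40))
clip_b = speaker_profile + 0.1 * rng.normal(size=(120, 40))
similarity = float(toy_speaker_embedding(clip_a) @ toy_speaker_embedding(clip_b))
```

In the real framework this embedding conditions the synthesizer, so new voices only need a few seconds of reference audio rather than retraining.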

Project README

Voice Cloning and Text to Speech Synthesis

A standalone service for cloning your own voice and synthesizing any English text in it.


Read more about the procedure we followed and our findings here.


Functionalities

  • Clone voices from a few audio samples
  • Synthesize custom text in the cloned voice
  • Speech-to-text input via microphone
  • REST API with a UI for testing the model

Instructions to run the trained models

If you want to test the trained models and see how they perform on custom text, follow these instructions.

  • Pre-requisites:

    • For Windows

      • Python (3.6 or 3.7 works best)
      • virtualenv

        If you don't have virtualenv, see here for installation instructions

      • Trained embeddings from here
    • For Linux

      • Bash shell for executing the scripts
  • Directions to install:

    • For Windows
      • Clone the repo
      • Set up virtualenv
        virtualenv env
        cd env/scripts
        activate
        
      • Install all required packages
        pip install -r requirements.txt
        
    • For Linux
      • Run the run.sh file to install the project
        ./run.sh
        

    After installing all the dependencies and environment prerequisites, run the script below to check that you are ready and good to go:
    ./test.sh
    
  • Directions to execute

    • Test through interface

      • Start the Python Flask server
        python app.py
        
      • Open localhost:5000 in your browser to test the model
    • Test through the Synthesize function

      • Follow the instructions given here
    • Test using Docker

      docker build -t smoketrees/voice:latest .
      docker run -p 5000:5000 smoketrees/voice:latest
      
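The Flask server presumably exposes an HTTP route that accepts text and returns synthesized audio. The stdlib-only sketch below shows the general shape of such an endpoint; the `/synthesize` route name and JSON fields are assumptions for illustration, not the actual API defined in app.py.

```python
import io
import json

def synthesis_app(environ, start_response):
    """Toy WSGI endpoint mimicking a hypothetical POST /synthesize route.

    It parses a JSON body with a "text" field and echoes back what would
    be synthesized; a real service would return audio bytes instead.
    """
    if environ.get("PATH_INFO") == "/synthesize":
        length = int(environ.get("CONTENT_LENGTH") or 0)
        body = json.loads(environ["wsgi.input"].read(length) or b"{}")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [json.dumps({"synthesizing": body.get("text", "")}).encode()]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# Exercise the app in-process, without starting a server.
payload = json.dumps({"text": "Hello world"}).encode()
environ = {
    "PATH_INFO": "/synthesize",
    "CONTENT_LENGTH": str(len(payload)),
    "wsgi.input": io.BytesIO(payload),
}
statuses = []
response = synthesis_app(environ, lambda status, headers: statuses.append(status))
```

Calling the app directly like this is also how such an endpoint can be unit-tested before wiring it to a real server or Docker container.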

Instructions to train your own models

If you want to work with the source code and train your own models on a different dataset or in a different language, check out the instructions mentioned here.

For more information about the samples tested and their results, see here.

Contributors

Open Source Agenda is not affiliated with "Voice Synthesis" Project. README Source: smoke-trees/Voice-synthesis
