TTS Generation WebUI (Bark, MusicGen + AudioGen, Tortoise, RVC, Vocos, Demucs, SeamlessM4T, MAGNeT, StyleTTS2, MMS)

One click installers

Download || Upgrading || Manual installation

Google Colab demo: Open In Colab

Note: Not all models support all platforms. For example, MusicGen and AudioGen are not yet supported on macOS.

Videos

  • How To Use TTS Voice Generation Web UI With AI Voice Cloning Technology (Bark AI Tutorial)
  • TTS Generation WebUI - A Tool for Text to Speech and Voice Cloning
  • Text to speech and voice cloning - TTS Generation WebUI

Screenshots

Screenshots of the React UI, MusicGen, RVC, and history tabs.

Examples

audio__bark__continued_generation__2023-05-04_16-07-49_long.webm

audio__bark__continued_generation__2023-05-04_16-09-21_long.webm

audio__bark__continued_generation__2023-05-04_16-10-55_long.webm

Extra Voices for Bark

Echo AI https://rsxdalv.github.io/bark-speaker-directory/

Bark Readme

README_Bark.md

Info about managing models, caches and system space for AI projects

https://github.com/rsxdalv/tts-generation-webui/discussions/186#discussioncomment-7291274

Changelog

Apr 6:

  • Add Vall-E-X generation demo tab.
  • Add MMS demo tab.
  • Add Maha TTS demo tab.
  • Add StyleTTS2 demo tab.

Apr 5:

  • Fix RVC installation bug.
  • Add basic UVR5 demo tab.

Apr 4:

  • Upgrade RVC to include RMVPE and FCPE. Remove the direct file input for models and indexes due to file duplication. Improve the React UI interface for RVC.

Mar 28:

  • Add GPU Info tab

Mar 27:

  • Add information about voice cloning to the voice clone tab

Mar 26:

  • Add Maha TTS demo notebook

Mar 22:

  • Vall-E X demo via notebook (#292)
  • Add React UI to Docker image
  • Add install disclaimer

Mar 16:

  • Upgrade vocos to 0.1.0

Mar 14:

  • StyleTTS2 Demo Notebook

Mar 13:

  • Add Experimental Pipeline (Bark / Tortoise / MusicGen / AudioGen / MAGNeT -> RVC / Demucs / Vocos) (#287)
  • Fix RVC bug with model reloading on each generation. For short inputs, this results in a visible speedup.

Mar 11:

  • Add Play as Audio and Save to Voices to bark (#286)
  • Change UX to show that files are deleted from favorites
  • Fix images for bark voices not showing
  • Fix audio playback in favorites

Mar 10:

  • Add Batching to React UI Magnet (#283)
  • Add audio to audio translation to SeamlessM4T (#284)

Mar 3:

  • Add MMS demo as a notebook
  • Add MultiBandDiffusion high VRAM disclaimer

Feb 21:

  • Fix Docker container builds and bug with Docker-Audiocraft

Jan 21:

  • Add CPU/M1 torch auto-repair script with each update. To disable, edit check_cuda.py and change FORCE_NO_REPAIR = True

Jan 16:

  • Upgrade MusicGen, adding support for stereo and large melody models
  • Add MAGNeT

Jan 15:

  • Upgraded Gradio to 3.48.0
    • Several visual bugs have appeared; if they are critical, please report them or downgrade Gradio.
    • Gradio: Suppress useless warnings
  • Suppress Triton warnings
  • Gradio-Bark: Fix "Use last generation as history" behavior, empty selection no longer errors
  • Improve extensions loader display
  • Upgrade transformers to 4.36.1 from 4.31.0
  • Add SeamlessM4T Demo

Jan 14:

  • React UI: Fix missing directory errors

Jan 13:

  • React UI: Fix missing npm build step from automatic install

Jan 12:

  • React UI: Fix names for audio actions
  • Gradio: Fix multiple API warnings
  • Integration - React UI is now launched alongside Gradio, with a link to open it

Jan 11:

  • React UI: Make the build work without any errors

Jan 9:

  • React UI
    • Fix 404 handler for Wavesurfer
    • Group Bark tabs together

Jan 8:

  • Release React UI

Oct 26:

  • Improve model selection UX for Musicgen

Sep 5:

  • Add voice mixing to Bark
  • Add v1 Burn in prompt to Bark (burn-in prompts are for directing the semantic model without spending time on generating the audio; the v1 works by generating the semantic tokens and then using them as a prompt for the semantic model)
  • Add generation length limiter to Bark

Aug 26:

  • Add Send to RVC, Demucs, Vocos buttons to Bark and Vocos

Aug 21:

  • Add torchvision install to colab for musicgen issue fix
  • Remove rvc_tab file logging

Aug 20:

  • Fix MBD by reinstalling hydra-core at the end of an update

Aug 18:

  • CI: Add a GitHub Action to automatically publish docker image.

Aug 16:

  • Add "name" to tortoise generation parameters

Aug 15:

  • Pin torch to 2.0.0 in all requirements.txt files
  • Bump audiocraft and bark versions
  • Remove Tortoise transformers fix from colab
  • Update Tortoise to 2.8.0

Aug 13:

  • Potentially big fix for new user installs that had issues with GPU not being supported

Aug 11:

  • Tortoise hotfix thanks to manmay-nakhashi
  • Add Tortoise option to change tokenizer

Aug 8:

  • Update AudioCraft, improving MultiBandDiffusion performance
  • Fix Tortoise parameter 'cond_free' mismatch with 'ultra_fast' preset

Aug 7:

  • add tortoise deepspeed fix to colab

Aug 6:

  • Fix audiogen + mbd error, add tortoise fix for colab

Aug 2:

  • Fix Model locations not showing after restart

July 24:

  • Change bark file format to include history hash: ...continued_generation... -> ...from_3ea0d063...

June 18:

  • Update to newest audiocraft, add longer generations

June 5:

  • Fix "Save to Favorites" button on bark generation page, clean up console (v4.1.1)
  • Add "Collections" tab for managing several different data sets and easier curration.

June 4:

  • Update to v4.1 - improved hash function, code improvements

June 3:

  • Update to v4 - new output structure, improved history view, codebase reorganization, improved metadata, output extensions support

May 21:

  • Update to v3 - voice clone demo

May 17:

  • Update to v2 - generate results as they appear, preview long prompt generations piece by piece, enable up to 9 outputs, UI tweaks

May 16:

  • Add gradio settings tab, fix gradio errors in console, improve logging.
  • Update History and Favorites with "use as voice" and "save voice" buttons
  • Add voices tab
  • Bark tab: Remove "or Use last generation as history"
  • Improve code organization

May 10:

  • Enable reusing history prompts from older generations. Save generations as .npz files. Add a convenient method of reusing any of the last 3 generations for the next prompts. Add a button for saving and collecting history prompts under /voices. https://github.com/rsxdalv/tts-generation-webui/pull/10

May 2 Update 2:

  • Added support for history recycling to continue longer prompts manually

May 2 Update 1:

  • Added support for v2 prompts

Before:

  • Added support for Tortoise TTS

Upgrading

In case of issues, feel free to contact the developers.

Upgrading from v5 to v6 installer

  • Download and run the new installer
  • Replace the "tts-generation-webui" directory in the newly installed directory
  • Run update_platform
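
As a sketch, assuming step 2 means carrying the tts-generation-webui directory from your previous install (with its outputs and settings) over the freshly installed one, and that update_platform is the platform script shipped with the installer (the paths and the script extension below are placeholders):

# new-install/ is the freshly installed v6 directory, old-install/ the previous one
mv new-install/tts-generation-webui new-install/tts-generation-webui.fresh
cp -r old-install/tts-generation-webui new-install/tts-generation-webui
cd new-install
./update_platform.sh    # or update_platform.bat on Windows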

Is there a more optimal way to do this?

Not exactly. The dependencies clash, especially between conda and Python (and the dependencies are already in a critical state, so moving them to conda is still a long way off). While it might be possible to simply replace the old installer with the new one and run the update, the resulting problems are unpredictable and unfixable. Updating the installer requires a lot of testing, so it is not done lightly.

Upgrading from v4 to v5 installer

  • Download and run the new installer
  • Replace the "tts-generation-webui" directory in the newly installed directory
  • Run update_platform

Manual installation

  • Install conda or another virtual environment

  • Highly recommended to use Python 3.10

  • Install git (conda install git)

  • Install ffmpeg (conda install -y -c pytorch ffmpeg)

  • Set up pytorch with CUDA or CPU (https://pytorch.org/audio/stable/build.windows.html#install-pytorch)

  • Clone the repo: git clone https://github.com/rsxdalv/tts-generation-webui.git

  • Install the root requirements.txt with pip install -r requirements.txt

  • Clone the repos in the ./models/ directory and install the requirements under them

  • Run using python server.py (inside the virtual environment); a combined sketch follows after this list

  • You may also need to install build tools (without the full Visual Studio): https://visualstudio.microsoft.com/visual-cpp-build-tools/
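
Put together, a manual install might look like the following sketch (the environment name and the CUDA 11.8 wheel index are assumptions; pick the PyTorch command that matches your platform from the link above):

conda create -n tts-generation-webui python=3.10
conda activate tts-generation-webui
conda install -y git
conda install -y -c pytorch ffmpeg
# Example PyTorch install for CUDA 11.8; use the selector at pytorch.org for your setup
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
git clone https://github.com/rsxdalv/tts-generation-webui.git
cd tts-generation-webui
pip install -r requirements.txt
# Each model repo cloned under ./models/ also has its own requirements.txt to install
python server.py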

React UI

  • Install Node.js (if not already installed with conda)
  • Install the React dependencies: npm install
  • Build the React UI: npm run build
  • Run the React UI: npm start
  • Also run the Python server: python server.py or the start_(platform) script (see the sketch below)
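
For example (a sketch; the react-ui directory name is an assumption, so use whichever folder in the repository contains the React package.json):

# Terminal 1: the Python backend
python server.py

# Terminal 2: the React UI (assumed to live in ./react-ui)
cd react-ui
npm install
npm run build
npm start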

Docker Setup

tts-generation-webui can also be run inside a Docker container. To get started, first build the Docker image while in the root directory:

docker build -t rsxdalv/tts-generation-webui .

Once the image has been built, it can be started with Docker Compose:

docker compose up -d

The container will take some time to generate the first output while models are downloaded in the background. The status of this download can be verified by checking the container logs:

docker logs tts-generation-webui

Ethical and Responsible Use

This technology is intended for enablement and creativity, not for harm.

By engaging with this AI model, you acknowledge and agree to abide by these guidelines, employing the AI model in a responsible, ethical, and legal manner.

  • Non-Malicious Intent: Do not use this AI model for malicious, harmful, or unlawful activities. It should only be used for lawful and ethical purposes that promote positive engagement, knowledge sharing, and constructive conversations.
  • No Impersonation: Do not use this AI model to impersonate or misrepresent yourself as someone else, including individuals, organizations, or entities. It should not be used to deceive, defraud, or manipulate others.
  • No Fraudulent Activities: This AI model must not be used for fraudulent purposes, such as financial scams, phishing attempts, or any form of deceitful practices aimed at acquiring sensitive information, monetary gain, or unauthorized access to systems.
  • Legal Compliance: Ensure that your use of this AI model complies with applicable laws, regulations, and policies regarding AI usage, data protection, privacy, intellectual property, and any other relevant legal obligations in your jurisdiction.
  • Acknowledgement: By engaging with this AI model, you acknowledge and agree to abide by these guidelines, using the AI model in a responsible, ethical, and legal manner.

License

Codebase and Dependencies

The codebase is licensed under MIT. However, it's important to note that when installing the dependencies, you will also be subject to their respective licenses. Although most of these licenses are permissive, there may be some that are not. Therefore, it's essential to understand that the permissive license only applies to the codebase itself, not the entire project.

That being said, the goal is to maintain MIT compatibility throughout the project. If you come across a dependency that is not compatible with the MIT license, please feel free to open an issue and bring it to our attention.

Known non-permissive dependencies:

  • encodec (CC BY-NC 4.0): newer versions are MIT, but need to be installed manually
  • diffq (CC BY-NC 4.0): optional in the future, not necessary to run, can be uninstalled, should be updated together with demucs
  • lameenc (GPL): future versions will make it LGPL, but need to be installed manually
  • unidecode (GPL): not mission critical, can be replaced with another library, issue: https://github.com/neonbjb/tortoise-tts/issues/494

Model Weights

Model weights have different licenses; please pay attention to the license of the model you are using.

Most notably:

  • Bark: CC BY-NC 4.0 (MIT but HuggingFace has not been updated yet)
  • Tortoise: Unknown (Apache-2.0 according to repo, but no license file in HuggingFace)
  • MusicGen: CC BY-NC 4.0
  • AudioGen: CC BY-NC 4.0

Compatibility / Errors

Audiocraft is currently only compatible with Linux and Windows. macOS support has not arrived yet, although it might be possible to install manually.

Torch being reinstalled

Due to limitations of the Python package manager (pip), torch can get reinstalled several times. This is a wide-ranging issue affecting both pip and torch.

Red messages in console

These messages:

---- requires ----, but you have ---- which is incompatible.

are completely normal. This is partly a limitation of pip and partly a consequence of this Web UI combining a lot of different AI projects. Since the projects are not always compatible with each other, they complain about the other projects being installed. This is expected, and despite the warnings/errors the projects will work together in the end. It's not clear if this situation will ever be resolvable, but that is the hope.

Configuration Guide

You can configure the interface through the "Settings" tab or, for advanced users, via the config.json file in the root directory (not recommended). Below is a detailed explanation of each setting:

Model Configuration

  • text_use_gpu (default: true): Determines whether the GPU should be used for text processing.
  • text_use_small (default: true): Determines whether a "small" or reduced version of the text model should be used.
  • coarse_use_gpu (default: true): Determines whether the GPU should be used for "coarse" processing.
  • coarse_use_small (default: true): Determines whether a "small" or reduced version of the "coarse" model should be used.
  • fine_use_gpu (default: true): Determines whether the GPU should be used for "fine" processing.
  • fine_use_small (default: true): Determines whether a "small" or reduced version of the "fine" model should be used.
  • codec_use_gpu (default: true): Determines whether the GPU should be used for codec processing.
  • load_models_on_startup (default: false): Determines whether the models should be loaded during application startup.
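
As a sketch, these settings might appear in config.json roughly as follows (the top-level "model" key and the exact nesting are assumptions; check the config.json generated by your installation before editing it by hand, and prefer the Settings tab):

{
  "model": {
    "text_use_gpu": true,
    "text_use_small": true,
    "coarse_use_gpu": true,
    "coarse_use_small": true,
    "fine_use_gpu": true,
    "fine_use_small": true,
    "codec_use_gpu": true,
    "load_models_on_startup": false
  }
}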

Gradio Interface Options

  • inline (default: false): Display inline in an iframe.
  • inbrowser (default: true): Automatically launch in a new tab.
  • share (default: false): Create a publicly shareable link.
  • debug (default: false): Block the main thread from running.
  • enable_queue (default: true): Serve inference requests through a queue.
  • max_threads (default: 40): Maximum number of total threads.
  • auth (default: null): Username and password required to access the interface, format: username:password.
  • auth_message (default: null): HTML message shown on the login page.
  • prevent_thread_lock (default: false): Block the main thread while the server is running.
  • show_error (default: false): Display errors in an alert modal.
  • server_name (default: 0.0.0.0): Make the app accessible on the local network.
  • server_port (default: null): Start the Gradio app on this port.
  • show_tips (default: false): Show tips about new Gradio features.
  • height (default: 500): Height in pixels of the iframe element.
  • width (default: 100%): Width of the iframe element.
  • favicon_path (default: null): Path to a file (.png, .gif, or .ico) to use as the favicon.
  • ssl_keyfile (default: null): Path to a file to use as the private key file for a local server running on HTTPS.
  • ssl_certfile (default: null): Path to a file to use as the signed certificate for HTTPS.
  • ssl_keyfile_password (default: null): Password to use with the SSL certificate for HTTPS.
  • ssl_verify (default: true): If false, skips certificate validation.
  • quiet (default: true): Suppress most print statements.
  • show_api (default: true): Show the API docs in the footer of the app.
  • file_directories (default: null): List of directories that Gradio is allowed to serve files from.
  • _frontend (default: true): Frontend.
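
Similarly, a sketch of the Gradio options as they might appear in config.json (the "gradio_interface_options" key name and nesting are assumptions; the file generated by your installation is authoritative):

{
  "gradio_interface_options": {
    "inline": false,
    "inbrowser": true,
    "share": false,
    "debug": false,
    "enable_queue": true,
    "max_threads": 40,
    "auth": null,
    "auth_message": null,
    "prevent_thread_lock": false,
    "show_error": false,
    "server_name": "0.0.0.0",
    "server_port": null,
    "show_tips": false,
    "height": 500,
    "width": "100%",
    "favicon_path": null,
    "ssl_keyfile": null,
    "ssl_certfile": null,
    "ssl_keyfile_password": null,
    "ssl_verify": true,
    "quiet": true,
    "show_api": true,
    "file_directories": null,
    "_frontend": true
  }
}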