# GPT-SoVITS-WebUI

A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.

[![madewithlove](https://img.shields.io/badge/made_with-%E2%9D%A4-red?style=for-the-badge&labelColor=orange)](https://github.com/RVC-Boss/GPT-SoVITS) [![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Boss/GPT-SoVITS/blob/main/colab_webui.ipynb) [![License](https://img.shields.io/badge/LICENSE-MIT-green.svg?style=for-the-badge)](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE) [![Huggingface](https://img.shields.io/badge/🤗%20-online%20demo-yellow.svg?style=for-the-badge)](https://huggingface.co/spaces/lj1995/GPT-SoVITS-v2) [![Discord](https://img.shields.io/discord/1198701940511617164?color=%23738ADB&label=Discord&style=for-the-badge)](https://discord.gg/dnrgs5GHfG)

**English** | [**中文简体**](./docs/cn/README.md) | [**日本語**](./docs/ja/README.md) | [**한국어**](./docs/ko/README.md) | [**Türkçe**](./docs/tr/README.md)

## Features

  1. Zero-shot TTS: Input a 5-second vocal sample and experience instant text-to-speech conversion.

  2. Few-shot TTS: Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.

  3. Cross-lingual Support: Inference in languages different from the training dataset, currently supporting English, Japanese, Korean, Cantonese and Chinese.

  4. WebUI Tools: Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.

Check out our demo video here!

Unseen speakers few-shot fine-tuning demo:

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

User guide: Simplified Chinese | English

## Installation

For users in China, you can click here to use AutoDL Cloud Docker to experience the full functionality online.

### Tested Environments

- Python 3.9, PyTorch 2.0.1, CUDA 11
- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
- Python 3.9, PyTorch 2.2.2, macOS 14.4.1 (Apple silicon)
- Python 3.9, PyTorch 2.2.2, CPU devices

Note: numba==0.56.4 requires Python < 3.11.
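
If you want to catch this constraint before an installation fails midway, a quick interpreter check can help. This is a minimal illustrative snippet, not part of the repo:

```python
import sys

# numba==0.56.4 does not support Python 3.11+, so fail early with a clear message.
if sys.version_info >= (3, 11):
    raise RuntimeError(
        f"Python {sys.version_info.major}.{sys.version_info.minor} detected; "
        "numba==0.56.4 requires Python < 3.11"
    )
```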

### Windows

If you are a Windows user (tested with win>=10), you can download the integrated package and double-click on go-webui.bat to start GPT-SoVITS-WebUI.

Users in China can download the package here.

### Linux

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh
```

### macOS

Note: Models trained with GPUs on Macs are of significantly lower quality than those trained on other devices, so we are temporarily using CPUs instead.

1. Install Xcode command-line tools by running `xcode-select --install`.
2. Install FFmpeg by running `brew install ffmpeg`.
3. Install the program by running the following commands:

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
pip install -r requirements.txt
```

### Install Manually

#### Install FFmpeg

##### Conda Users

```bash
conda install ffmpeg
```

##### Ubuntu/Debian Users

```bash
sudo apt install ffmpeg
sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'
```
##### Windows Users

Download and place ffmpeg.exe and ffprobe.exe in the GPT-SoVITS root.

Install Visual Studio 2017 (Korean TTS Only)

##### MacOS Users

```bash
brew install ffmpeg
```

#### Install Dependencies

```bash
pip install -r requirements.txt
```

### Using Docker

#### docker-compose.yaml configuration

1. Regarding image tags: due to rapid codebase updates and the slow process of packaging and testing images, please check Docker Hub for the latest packaged images and pick one that suits your situation, or build locally from the Dockerfile according to your own needs.
2. Environment variables:
   - is_half: controls half precision/full precision. This is typically the cause if the content under the directories 4-cnhubert/5-wav32k is not generated correctly during the "SSL extracting" step. Set it to True or False according to your actual situation. (A sketch of how such a flag is read follows this list.)
3. Volumes configuration: the application's root directory inside the container is set to /workspace. The default docker-compose.yaml lists some practical examples for uploading/downloading content.
4. shm_size: the default shared memory available to Docker Desktop on Windows is too small, which can cause abnormal operation. Adjust it according to your situation.
5. Under the deploy section, GPU-related settings should be adjusted cautiously according to your system and actual circumstances.
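
For illustration, this is roughly how such an environment flag is consumed on the Python side. A minimal sketch assuming the flag arrives as a plain string, not the project's actual config.py:

```python
import os

# is_half is passed into the container via `--env=is_half=...` (see the docker
# run example below); any value other than the string "True" disables half precision.
is_half = os.environ.get("is_half", "True") == "True"
print(f"Running with half precision: {is_half}")
```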

#### Running with docker compose

```bash
docker compose -f "docker-compose.yaml" up -d
```

#### Running with docker command

As above, modify the corresponding parameters based on your actual situation, then run the following command:

```bash
docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
```

## Pretrained Models

Users in China can download all these models here.

  1. Download pretrained models from GPT-SoVITS Models and place them in GPT_SoVITS/pretrained_models. (A scripted alternative is sketched after this list.)

  2. Download G2PW models from G2PWModel_1.1.zip, unzip and rename to G2PWModel, and then place it in GPT_SoVITS/text. (Chinese TTS only)

  3. For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from UVR5 Weights and place them in tools/uvr5/uvr5_weights.

  4. For Chinese ASR (additionally), download models from Damo ASR Model, Damo VAD Model, and Damo Punc Model and place them in tools/asr/models.

  5. For English or Japanese ASR (additionally), download models from Faster Whisper Large V3 and place them in tools/asr/models. Other models may achieve a similar effect with a smaller disk footprint.
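
If you prefer to script step 1, the files can be fetched with huggingface_hub. A minimal sketch; the repo id lj1995/GPT-SoVITS is an assumption, so verify it against the GPT-SoVITS Models link above:

```python
from huggingface_hub import snapshot_download

# Download the pretrained models into the directory the WebUI expects.
snapshot_download(
    repo_id="lj1995/GPT-SoVITS",   # assumed repo id; confirm before use
    local_dir="GPT_SoVITS/pretrained_models",
)
```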

## Dataset Format

The TTS annotation .list file format:

```
vocal_path|speaker_name|language|text
```

Language dictionary:

- 'zh': Chinese
- 'ja': Japanese
- 'en': English
- 'ko': Korean
- 'yue': Cantonese

Example:

```
D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
```
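
Before training, it can be worth validating an annotation file against this format. A minimal sketch; the helper below is hypothetical and not shipped with the repo:

```python
VALID_LANGS = {"zh", "ja", "en", "ko", "yue"}

def read_annotations(list_path: str):
    """Yield (vocal_path, speaker_name, language, text) tuples from a .list file."""
    with open(list_path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.rstrip("\n")
            if not line:
                continue
            # Split on the first three '|' only, so the text itself may contain '|'.
            fields = line.split("|", 3)
            if len(fields) != 4:
                raise ValueError(f"line {lineno}: expected 4 fields, got {len(fields)}")
            vocal_path, speaker, lang, text = fields
            if lang not in VALID_LANGS:
                raise ValueError(f"line {lineno}: unknown language code {lang!r}")
            yield vocal_path, speaker, lang, text
```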

## Finetune and Inference

### Open WebUI

#### Integrated Package Users

Double-click `go-webui.bat` or use `go-webui.ps1`. If you want to switch to V1, then double-click `go-webui-v1.bat` or use `go-webui-v1.ps1`.

#### Others

```bash
python webui.py <language(optional)>
```

If you want to switch to V1, then:

```bash
python webui.py v1 <language(optional)>
```

Or manually switch the version in the WebUI.

### Finetune

#### Path Auto-filling is now supported

1. Fill in the audio path.

2. Slice the audio into small chunks.

3. Denoise (optional).

4. Run ASR.

5. Proofread the ASR transcriptions.

6. Go to the next tab, then fine-tune the model.

### Open Inference WebUI

#### Integrated Package Users

Double-click `go-webui-v2.bat` or use `go-webui-v2.ps1`, then open the inference WebUI at `1-GPT-SoVITS-TTS/1C-inference`.

#### Others

```bash
python GPT_SoVITS/inference_webui.py <language(optional)>
```

or

```bash
python webui.py
```

then open the inference WebUI at `1-GPT-SoVITS-TTS/1C-inference`.

## V2 Release Notes

New features:

  1. Support for Korean and Cantonese.

  2. An optimized text frontend.

  3. Pretrained model extended from 2k hours to 5k hours.

  4. Improved synthesis quality for low-quality reference audio.

Use V2 from the V1 environment:

  1. Run `pip install -r requirements.txt` to update some packages.

  2. Clone the latest code from GitHub.

  3. Download the V2 pretrained models from Hugging Face and put them into `GPT_SoVITS\pretrained_models\gsv-v2final-pretrained`.

     Chinese V2 additional: G2PWModel_1.1.zip (download the G2PW models, unzip and rename to G2PWModel, then place it in `GPT_SoVITS/text`).

## Todo List

- [x] High priority:

  - Localization in Japanese and English.
  - User guide.
  - Japanese and English dataset fine-tune training.

- [ ] Features:

  - Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
  - TTS speaking-speed control.
  - Enhanced TTS emotion control.
  - Experiment with changing SoVITS token inputs to a probability distribution of GPT vocabs (transformer latent).
  - Improve the English and Japanese text frontend.
  - Develop tiny and larger-sized TTS models.
  - Colab scripts.
  - Try expanding the training dataset (2k hours -> 10k hours).
  - Better SoVITS base model (enhanced audio quality).
  - Model mixing.

## (Additional) Running from the Command Line

Use the command line to open the WebUI for UVR5:

```bash
python tools/uvr5/webui.py "<infer_device>" <is_half> <webui_port_uvr5>
```

This is how audio segmentation of the dataset is done from the command line:

```bash
python audio_slicer.py \
    --input_path "<path_to_original_audio_file_or_directory>" \
    --output_root "<directory_where_subdivided_audio_clips_will_be_saved>" \
    --threshold <volume_threshold> \
    --min_length <minimum_duration_of_each_subclip> \
    --min_interval <shortest_time_gap_between_adjacent_subclips> \
    --hop_size <step_size_for_computing_volume_curve>
```

This is how dataset ASR processing is done from the command line (Chinese only):

```bash
python tools/asr/funasr_asr.py -i <input> -o <output>
```

ASR processing is performed through Faster Whisper (ASR annotation for languages other than Chinese):

(No progress bars; GPU performance may cause delays.)

```bash
python ./tools/asr/fasterwhisper_asr.py -i <input> -o <output> -l <language> -p <precision>
```

A custom .list save path is supported.

## Credits

Special thanks to the following projects and contributors:


Thankful to @Naozumi520 for providing the Cantonese training set and for the guidance on Cantonese-related knowledge.

Thanks to all contributors for their efforts.