---
license: cc-by-nc-sa-4.0
language:
- en
- ar
- zh
- nl
- fr
- de
- it
- ja
- ko
- lt
- ru
- es
- pt
- be
- bn
- ka
- hu
- lv
- fa
- pl
- sw
- ta
- uk
pipeline_tag: text-to-speech
library_name: outetts
---
<div class="p-4 bg-gray-50 dark:bg-gray-800 rounded-lg shadow-sm mb-12">
<div class="text-center mb-4">
<h2 class="text-xl font-light text-gray-900 dark:text-white tracking-tight mt-0 mb-0">Oute AI</h2>
<div class="flex justify-center gap-6 mt-4">
<a href="https://www.outeai.com/" target="_blank" class="flex items-center gap-1 text-gray-700 dark:text-gray-300 text-m font-medium hover:text-gray-900 dark:hover:text-white transition-colors underline">
<svg width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
<circle cx="12" cy="12" r="10"></circle>
<path d="M2 12h20M12 2a15.3 15.3 0 0 1 4 10 15.3 15.3 0 0 1-4 10 15.3 15.3 0 0 1-4-10 15.3 15.3 0 0 1 4-10z"></path>
</svg>
outeai.com
</a>
<a href="https://discord.gg/vyBM87kAmf" target="_blank" class="flex items-center gap-1 text-gray-700 dark:text-gray-300 text-m font-medium hover:text-gray-900 dark:hover:text-white transition-colors underline">
<svg width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
<path d="M21 11.5a8.38 8.38 0 0 1-.9 3.8 8.5 8.5 0 0 1-7.6 4.7 8.38 8.38 0 0 1-3.8-.9L3 21l1.9-5.7a8.38 8.38 0 0 1-.9-3.8 8.5 8.5 0 0 1 4.7-7.6 8.38 8.38 0 0 1 3.8-.9h.5a8.48 8.48 0 0 1 8 8v.5z"></path>
</svg>
Discord
</a>
<a href="https://x.com/OuteAI" target="_blank" class="flex items-center gap-1 text-gray-700 dark:text-gray-300 text-m font-medium hover:text-gray-900 dark:hover:text-white transition-colors underline">
<svg width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
<path d="M23 3a10.9 10.9 0 0 1-3.14 1.53 4.48 4.48 0 0 0-7.86 3v1A10.66 10.66 0 0 1 3 4s-4 9 5 13a11.64 11.64 0 0 1-7 2c9 5 20 0 20-11.5a4.5 4.5 0 0 0-.08-.83A7.72 7.72 0 0 0 23 3z"></path>
</svg>
@OuteAI
</a>
</div>
</div>
<div class="grid grid-cols-3 sm:grid-cols-3 gap-2">
<a href="https://huggingface.co/OuteAI/Llama-OuteTTS-1.0-1B" target="_blank" class="bg-white dark:bg-gray-700 text-gray-800 dark:text-gray-100 text-sm font-medium py-2 px-3 rounded-md text-center hover:bg-gray-100 dark:hover:bg-gray-600 hover:border-gray-300 dark:hover:border-gray-500 border border-transparent transition-all">
Llama OuteTTS 1.0 1B
</a>
<a href="https://huggingface.co/OuteAI/Llama-OuteTTS-1.0-1B-GGUF" target="_blank" class="bg-white dark:bg-gray-700 text-gray-800 dark:text-gray-100 text-sm font-medium py-2 px-3 rounded-md text-center hover:bg-gray-100 dark:hover:bg-gray-600 hover:border-gray-300 dark:hover:border-gray-500 border border-transparent transition-all">
Llama OuteTTS 1.0 1B GGUF
</a>
<a href="https://huggingface.co/OuteAI/Llama-OuteTTS-1.0-1B-FP8" target="_blank" class="bg-white dark:bg-gray-700 text-gray-800 dark:text-gray-100 text-sm font-medium py-2 px-3 rounded-md text-center hover:bg-gray-100 dark:hover:bg-gray-600 hover:border-gray-300 dark:hover:border-gray-500 border border-transparent transition-all">
Llama OuteTTS 1.0 1B FP8
</a>
<a href="https://huggingface.co/OuteAI/Llama-OuteTTS-1.0-1B-EXL2-8bpw" target="_blank" class="bg-white dark:bg-gray-700 text-gray-800 dark:text-gray-100 text-sm font-medium py-2 px-3 rounded-md text-center hover:bg-gray-100 dark:hover:bg-gray-600 hover:border-gray-300 dark:hover:border-gray-500 border border-transparent transition-all">
Llama OuteTTS 1.0 1B 8bpw
</a>
<a href="https://github.com/edwko/OuteTTS" target="_blank" class="bg-white dark:bg-gray-700 text-gray-800 dark:text-gray-100 text-sm font-medium py-2 px-3 rounded-md text-center hover:bg-gray-100 dark:hover:bg-gray-600 hover:border-gray-300 dark:hover:border-gray-500 border border-transparent transition-all">
GitHub Library
</a>
</div>
</div>
> [!IMPORTANT]
> **Important Sampling Considerations**
>
> When using OuteTTS version 1.0, it is crucial to use the settings specified in the [Sampling Configuration](#sampling-configuration) section.
> The **repetition penalty implementation** is particularly important - this model requires penalization applied to a **64-token recent window**,
> rather than across the entire context window. Penalizing the entire context will cause the model to produce **broken or low-quality output**.
>
> To address this limitation, all necessary samplers and patches for all backends are set up automatically in the **outetts** library.
> If using a custom implementation, ensure you correctly implement these requirements.
# OuteTTS Version 1.0
This update brings significant improvements in speech synthesis and voice cloning—delivering a more powerful, accurate, and user-friendly experience in a compact size.
## What's New
### 1. Prompt Revamp & Dependency Removal
- **Automatic Word Alignment:** The model now performs word alignment internally. Simply input raw text—no pre-processing required—and the model handles the rest, streamlining your workflow. For optimal results, use normalized, readable text without newlines (light normalization is applied automatically by the outetts library).
- **Native Multilingual Text Support:** Direct support for native text across multiple languages eliminates the need for romanization.
- **Enhanced Metadata Integration:** The updated prompt system incorporates additional metadata (time, energy, spectral centroid, pitch) at both global and word levels, improving speaker flow and synthesis quality.
- **Special Tokens for Audio Codebooks:** New tokens for c1 (codebook 1) and c2 (codebook 2).
### 2. New Audio Encoder Model
- **DAC Encoder:** Integrates a DAC audio encoder from [ibm-research/DAC.speech.v1.0](https://huggingface.co/ibm-research/DAC.speech.v1.0), utilizing two codebooks for high quality audio reconstruction.
- **Performance Trade-off:** Improved audio fidelity increases the token generation rate from 75 to 150 tokens per second. This trade-off prioritizes quality, especially for multilingual applications.
### 3. Voice Cloning
- **One-Shot Voice Cloning:** The model typically needs only around **10 seconds** of reference audio to produce an accurate voice representation (see the sketch after this list).
- **Improved Accuracy:** Enhanced by the new encoder and additional training metadata, voice cloning is now more natural and precise.
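A minimal cloning sketch, reusing the outetts calls that also appear (commented out) in the Quick Start further down; `reference.wav` is a placeholder for your roughly 10-second sample:

```python
import outetts

# Same interface setup as in the Quick Start below.
interface = outetts.Interface(
    config=outetts.ModelConfig.auto_config(
        model=outetts.Models.VERSION_1_0_SIZE_1B,
        backend=outetts.Backend.LLAMACPP,
        quantization=outetts.LlamaCppQuantization.FP16,
    )
)

# Encode a ~10-second reference clip once...
speaker = interface.create_speaker("reference.wav")
# ...persist the profile, and reload it instantly on later runs.
interface.save_speaker(speaker, "speaker.json")
speaker = interface.load_speaker("speaker.json")
```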
### 4. Auto Text Alignment & Numerical Support
- **Automatic Text Alignment:** Aligns raw text at the word level, even for languages without clear boundaries (e.g., Japanese, Chinese), using insights from pre-processed training data.
- **Direct Numerical Input:** Built-in multilingual numerical support allows direct use of numbers in prompts—no textual conversion needed. (The model typically chooses the dominant language present. Mixing languages in a single prompt may lead to mistakes.)
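For instance, figures and dates can be left as digits in the prompt. A hypothetical call reusing the `interface` and `speaker` objects from the Quick Start below, with the remaining `GenerationConfig` fields assumed to take their library defaults:

```python
# Numbers are passed verbatim; no textual conversion needed.
output = interface.generate(
    config=outetts.GenerationConfig(
        text="Your total is 1,247.50 euros, due on March 3, 2025.",
        speaker=speaker,
    )
)
output.save("numbers.wav")
```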
### 5. Multilingual Capabilities
- **Supported Languages:** OuteTTS offers varying proficiency levels across languages, based on training data exposure.
- **High Training Data Languages:** These languages feature extensive training: **English, Arabic, Chinese, Dutch, French, German, Italian, Japanese, Korean, Lithuanian, Russian, Spanish**
- **Moderate Training Data Languages:** These languages received moderate training, offering good performance with occasional limitations: **Portuguese, Belarusian, Bengali, Georgian, Hungarian, Latvian, Persian/Farsi, Polish, Swahili, Tamil, Ukrainian**
- **Beyond Supported Languages:** The model can generate speech in untrained languages with varying success. Experiment with unlisted languages, though results may not be optimal.
## Video Showcase
<video width="1280" height="720" controls style="box-shadow: 0px 0px 20px 10px rgba(0, 0, 0, 0.05), 0px 1px 3px 10px rgba(255, 255, 255, 0.05);">
<source src="https://huggingface.co/OuteAI/Llama-OuteTTS-1.0-1B-GGUF/resolve/main/media/showcase.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
## OuteTTS Python Package v0.4.2
The new version adds **batched inference** to the latest OuteTTS release.
### ⚡ **Batched RTF Benchmarks**
Tested on an **NVIDIA L40S** GPU

## Quick Start Guide
Getting started with **OuteTTS** is simple:
### Installation
🔗 [Installation instructions](https://github.com/edwko/OuteTTS?tab=readme-ov-file#installation)
### Basic Usage
```python
import outetts

# Initialize the interface
interface = outetts.Interface(
    config=outetts.ModelConfig.auto_config(
        model=outetts.Models.VERSION_1_0_SIZE_1B,
        # For llama.cpp backend
        backend=outetts.Backend.LLAMACPP,
        quantization=outetts.LlamaCppQuantization.FP16
        # For transformers backend
        # backend=outetts.Backend.HF,
    )
)

# Load the default speaker profile
speaker = interface.load_default_speaker("EN-FEMALE-1-NEUTRAL")

# Or create your own speaker profiles in seconds and reuse them instantly
# speaker = interface.create_speaker("path/to/audio.wav")
# interface.save_speaker(speaker, "speaker.json")
# speaker = interface.load_speaker("speaker.json")

# Generate speech
output = interface.generate(
    config=outetts.GenerationConfig(
        text="Hello, how are you doing?",
        generation_type=outetts.GenerationType.CHUNKED,
        speaker=speaker,
        sampler_config=outetts.SamplerConfig(
            temperature=0.4
        ),
    )
)

# Save to file
output.save("output.wav")
```
### ⚡ Batch Setup
```python
from outetts import Interface, ModelConfig, GenerationConfig, Backend, GenerationType

if __name__ == "__main__":
    # Initialize the interface with a batch-capable backend
    interface = Interface(
        ModelConfig(
            model_path="OuteAI/Llama-OuteTTS-1.0-1B-FP8",
            tokenizer_path="OuteAI/Llama-OuteTTS-1.0-1B",
            backend=Backend.VLLM,
            # For EXL2, use backend=Backend.EXL2ASYNC and set exl2_cache_seq_multiply
            # to the same value as max_batch_size in GenerationConfig.
            # For LLAMACPP_ASYNC_SERVER, use backend=Backend.LLAMACPP_ASYNC_SERVER
            # and provide server_host in GenerationConfig.
        )
    )

    # Load your speaker profile
    speaker = interface.load_default_speaker("EN-FEMALE-1-NEUTRAL")  # Or load/create a custom speaker

    # Generate speech using BATCH type
    # Note: for EXL2ASYNC, VLLM, and LLAMACPP_ASYNC_SERVER, BATCH is selected automatically.
    output = interface.generate(
        GenerationConfig(
            text="This is a longer text that will be automatically split into chunks and processed in batches.",
            speaker=speaker,
            generation_type=GenerationType.BATCH,
            max_batch_size=32,        # Adjust based on your GPU memory and server capacity
            dac_decoding_chunk=2048,  # Adjust chunk size for DAC decoding
            # If using LLAMACPP_ASYNC_SERVER, add:
            # server_host="http://localhost:8000"  # Replace with your server address
        )
    )

    # Save to file
    output.save("output_batch.wav")
```
### More Configuration Options
For advanced settings and customization, visit the official repository:
[Interface Usage Documentation](https://github.com/edwko/OuteTTS/blob/main/docs/interface_usage.md)
## Usage Recommendations
### Speaker Reference
The model is designed to be used with a speaker reference. Without one, it generates random vocal characteristics, often leading to lower-quality outputs.
The model inherits the referenced speaker's emotion, style, and accent.
When generating speech in other languages with the same speaker, you may observe the model retaining the original accent.
### Multilingual Application
It is recommended to create a speaker profile in the language you intend to use. This helps achieve the best results in that specific language, including tone, accent, and linguistic features.
While the model supports cross-lingual speech, it still relies on the reference speaker. If the speaker has a distinct accent—such as British English—other languages may carry that accent as well.
### Optimal Audio Length
- **Best Performance:** Generate audio around **42 seconds** in a single run (approximately 8,192 tokens). It is recommended not to approach the limits of this window when generating; the best results usually stay under 7,000 tokens.
- **Context Reduction with Speaker Reference:** The reference consumes part of the same window, so a 10-second speaker reference reduces the effective generation length to approximately 32 seconds.
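As a rough budget check, a back-of-the-envelope sketch using the figures on this card (the helper is illustrative, not part of the outetts API):

```python
# Rough context-budget arithmetic from the figures above.
MAX_AUDIO_SECONDS = 42  # practical audio ceiling inside the 8,192-token window

def effective_generation_seconds(reference_seconds: float) -> float:
    """Audio seconds left for generation once the speaker reference
    has consumed its share of the context window."""
    return MAX_AUDIO_SECONDS - reference_seconds

print(effective_generation_seconds(10))  # -> 32, matching the estimate above
```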
### Temperature Setting Recommendations
Testing shows that a temperature of **0.4** is an ideal starting point for accuracy (with the sampling settings below). However, some voice references may benefit from higher temperatures for enhanced expressiveness or slightly lower temperatures for more precise voice replication.
### Verifying Speaker Encoding
If the cloned voice quality is subpar, check the encoded speaker sample.
```python
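# Round-trip the stored speaker codes through the DAC decoder so you can
# audition exactly what the model will condition on.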
interface.decode_and_save_speaker(speaker=your_speaker, path="speaker.wav")
```
The DAC audio reconstruction model is lossy, and samples with clipping, excessive loudness, or unusual vocal features may introduce encoding issues that impact output quality.
### Sampling Configuration
For optimal results with this TTS model, use the following sampling settings.
| Parameter | Value |
|-------------------|----------|
| Temperature | 0.4 |
| Repetition Penalty| 1.1 |
| **Repetition Range** | **64** |
| Top-k | 40 |
| Top-p | 0.9 |
| Min-p | 0.05 |
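In the outetts library these settings map onto `SamplerConfig`. The Quick Start above only demonstrates `temperature`, so treat the remaining field names in this sketch as assumptions to verify against your installed version:

```python
# Assumed SamplerConfig field names mirroring the table above; only
# `temperature` is confirmed elsewhere on this card.
sampler = outetts.SamplerConfig(
    temperature=0.4,
    repetition_penalty=1.1,
    repetition_range=64,  # penalize only the most recent 64 tokens
    top_k=40,
    top_p=0.9,
    min_p=0.05,
)
```

If you are wiring up a custom Hugging Face `transformers` pipeline instead, the 64-token recent-window repetition penalty can be approximated with a custom `LogitsProcessor`. This is an illustrative sketch of the technique, not the outetts library's own patch:

```python
import torch
from transformers import LogitsProcessor

class WindowedRepetitionPenalty(LogitsProcessor):
    """Penalizes only tokens seen in the last `window` generated positions,
    rather than the entire context (which breaks this model's output)."""

    def __init__(self, penalty: float = 1.1, window: int = 64):
        self.penalty = penalty
        self.window = window

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        recent = input_ids[:, -self.window:]      # token ids in the recent window
        picked = torch.gather(scores, 1, recent)  # their current logits
        # Standard repetition-penalty rule: shrink positive logits, grow negative ones.
        picked = torch.where(picked < 0, picked * self.penalty, picked / self.penalty)
        return scores.scatter(1, recent, picked)  # write the penalized logits back
```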
## Model Specifications
- **Training Data:** Trained on **~60k hours of audio**
- **Context Length:** Supports a maximum context window of **8,192 tokens**
### Training Parameters
#### **Pre-Training**
- **Optimizer:** AdamW
- **Batch Size:** 1 million tokens
- **Max Learning Rate:** 3e-4
- **Min Learning Rate:** 3e-5
- **Context Length:** 8192
#### **Fine-Tuning**
- **Optimizer:** AdamW
- **Max Learning Rate:** 1e-5
- **Min Learning Rate:** 5e-6
- **Data:** 10,000 diverse, high-quality examples
## License Information
- **Initial Llama3.2 Components:** [Llama 3.2 Community License Agreement](https://huggingface.co/meta-llama/Llama-3.2-1B/blob/main/LICENSE.txt)
- **Our Continued Pre-Training, Fine-Tuning, and Additional Components:** [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Acknowledgments
- Big thanks to **Hugging Face** for their continued resource support through their grant program!
- Audio encoding and decoding utilize [ibm-research/DAC.speech.v1.0](https://huggingface.co/ibm-research/DAC.speech.v1.0)
- OuteTTS is built with [Llama3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) as the base model, with continued pre-training and fine-tuning.
### Ethical Use Guidelines
This text-to-speech model is intended for legitimate applications that enhance accessibility, creativity, and communication;
prohibited uses include impersonation without consent, creation of deliberately misleading content,
generation of harmful or harassing material, distribution of synthetic audio without proper disclosure,
voice cloning without permission, and any uses that violate applicable laws, regulations, or copyrights.
|