---
language:
- ja
base_model:
- webbigdata/VoiceCore
tags:
- tts
- vllm
---

# VoiceCore_smoothquant

[webbigdata/VoiceCore](https://huggingface.co/webbigdata/VoiceCore)をvLLMで高速に動かすためにgptq(W4A16)量子化したモデルです  
詳細は[webbigdata/VoiceCore](https://huggingface.co/webbigdata/VoiceCore)のモデルカードを御覧ください  

This model was quantized with GPTQ (W4A16) so that [webbigdata/VoiceCore](https://huggingface.co/webbigdata/VoiceCore) can run at high speed on vLLM.  
See the [webbigdata/VoiceCore](https://huggingface.co/webbigdata/VoiceCore) model card for details.  


## Install/Setup

[vLLMはAMDのGPUでも動作する](https://docs.vllm.ai/en/v0.6.5/getting_started/amd-installation.html)そうですがチェックは出来ていません。  
Mac(CPU)でも動くようですが、[gguf版](https://huggingface.co/webbigdata/VoiceCore_gguf)を使った方が早いかもしれません  

vLLM seems to work with [AMD GPUs](https://docs.vllm.ai/en/v0.6.5/getting_started/amd-installation.html), but I haven't checked.  
It also seems to work on Mac (CPU), but the [gguf version](https://huggingface.co/webbigdata/VoiceCore_gguf) may be faster there.  

以下はLinuxのNvidia GPU版のセットアップ手順です  
Below are the setup instructions for Linux with an NVIDIA GPU.  

```
python3 -m venv VL
source VL/bin/activate
pip install vllm
pip install snac
pip install numpy==1.26.4
pip install transformers==4.53.2
```
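
As a quick sanity check (a minimal sketch, assuming a CUDA-capable GPU and the virtual environment set up above), you can confirm that the libraries import correctly before running the full sample:

```
# Minimal environment check; prints library versions and the detected GPU.
import torch
import vllm
import snac          # imported only to confirm the package is installed
import transformers

print("vLLM:", vllm.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```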

## Sample script
```
import torch
import scipy.io.wavfile as wavfile
from transformers import AutoTokenizer
from snac import SNAC
from vllm import LLM, SamplingParams

QUANTIZED_MODEL_PATH = "webbigdata/VoiceCore_gptq"
prompts = [
     "テストです",
     "ジーピーティーキュー、問題なく動いてますかね?あ~、笑い声が上手く表現できなくなっちゃってますかね、仕方ないか、えへへ"
]
chosen_voice = "matsukaze_male[neutral]"

print("Loading tokenizer and preparing inputs...")
tokenizer = AutoTokenizer.from_pretrained(QUANTIZED_MODEL_PATH)
prompts_ = [(f"{chosen_voice}: " + p) if chosen_voice else p for p in prompts]
start_token, end_tokens = [128259], [128009, 128260, 128261]
all_prompt_token_ids = []
for prompt in prompts_:
  input_ids = tokenizer.encode(prompt)
  final_token_ids = start_token + input_ids + end_tokens
  all_prompt_token_ids.append(final_token_ids)
print("Inputs prepared successfully.")

print(f"Loading SmoothQuant model with vLLM from: {QUANTIZED_MODEL_PATH}")
llm = LLM(
    model=QUANTIZED_MODEL_PATH,
    trust_remote_code=True,
    max_model_len=10000,    # メモリ不足になる場合は減らしてください If you run out of memory, reduce this value.
    #gpu_memory_utilization=0.9 # 最大GPUメモリの何割を使うか、適宜調整してください Fraction of total GPU memory vLLM may use; adjust as needed.
)
sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.90,
    repetition_penalty=1.1,
    max_tokens=8192, # max_tokens + input_prompt <= max_model_len
    stop_token_ids=[128258]
)
print("vLLM model loaded.")

print("Generating audio tokens with vLLM...")
outputs = llm.generate(prompt_token_ids=all_prompt_token_ids, sampling_params=sampling_params)
print("Generation complete.")

# GPUの方が早いがvLLMが大きくメモリを確保していると失敗するため、CPUでデコードする  Decoding on GPU is faster, but it can fail when vLLM has already reserved most of the GPU memory, so the SNAC decoder is kept on CPU.
print("Loading SNAC decoder to CPU...")
snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz")
snac_model.to("cpu") 
print("SNAC model loaded.")

print("Decoding tokens to audio...")
audio_start_token = 128257

def redistribute_codes(code_list):
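  # Each SNAC frame is 7 consecutive codes: 1 for layer 1, 2 for layer 2 and
  # 4 for layer 3. Position k within a frame carries an offset of k*4096, so
  # the offsets are subtracted to recover the raw codebook index for each layer.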
  layer_1, layer_2, layer_3 = [], [], []
  for i in range(len(code_list) // 7):
    layer_1.append(code_list[7*i])
    layer_2.append(code_list[7*i+1] - 4096)
    layer_3.append(code_list[7*i+2] - (2*4096))
    layer_3.append(code_list[7*i+3] - (3*4096))
    layer_2.append(code_list[7*i+4] - (4*4096))
    layer_3.append(code_list[7*i+5] - (5*4096))
    layer_3.append(code_list[7*i+6] - (6*4096))

  codes = [torch.tensor(layer).unsqueeze(0)
           for layer in [layer_1, layer_2, layer_3]]

  audio_hat = snac_model.decode(codes)
  return audio_hat

code_lists = []
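# Keep only the tokens generated after the last audio-start marker (128257),
# shift them by 128266 so they line up with SNAC codebook entries, and trim
# each sequence to a multiple of 7 (one SNAC frame = 7 codes).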
for output in outputs:
    generated_token_ids = output.outputs[0].token_ids
    generated_tensor = torch.tensor([generated_token_ids])
    token_indices = (generated_tensor == audio_start_token).nonzero(as_tuple=True)
    if len(token_indices[1]) > 0:
        cropped_tensor = generated_tensor[:, token_indices[1][-1].item() + 1:]
    else:
        cropped_tensor = generated_tensor

    masked_row = cropped_tensor.squeeze()
    row_length = masked_row.size(0)
    new_length = (row_length // 7) * 7
    trimmed_row = masked_row[:new_length]
    code_list = [t.item() - 128266 for t in trimmed_row]
    code_lists.append(code_list)

for i, code_list in enumerate(code_lists):
    if i >= len(prompts): break

    print(f"Processing audio for prompt: '{prompts[i]}'")
    samples = redistribute_codes(code_list)
    sample_np = samples.detach().squeeze().numpy()

    safe_prompt = "".join(c for c in prompts[i] if c.isalnum() or c in (' ', '_')).rstrip()
    filename = f"audio_final_{i}_{safe_prompt[:20].replace(' ', '_')}.wav"

    wavfile.write(filename, 24000, sample_np)
    print(f"Saved audio to: {filename}")
```
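
Assuming the script above is saved as, for example, `sample_tts.py` (a hypothetical filename), run it inside the virtual environment; one 24 kHz mono WAV file is written per prompt:

```
source VL/bin/activate
python sample_tts.py
# e.g. audio_final_0_....wav, audio_final_1_....wav
```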


## Streaming sample

vLLMをサーバーとして動作させてストリーミングでアクセスさせ、クライアントが逐次再生するデモです。  
品質は劣化してしまいますがRTX 4060くらいの性能をもつGPUなら疑似リアルタイム再生が実現できます。  
理想は雑音が生成されないタイミングで生成する事ですが、まだ実現出来ておらず、実証実験レベルとお考え下さい。  

This demo runs vLLM as a server, accesses it via streaming, and has the client play the audio back incrementally.  
Quality degrades somewhat, but a GPU of roughly RTX 4060 class can achieve pseudo real-time playback.  
Ideally, generation would be chunked at points that do not introduce noise; this is not implemented yet, so please treat it as a proof of concept.  

### Server side command
(Assumes a Linux server)
```
python3 -m vllm.entrypoints.openai.api_server --model VoiceCore_gptq --host 0.0.0.0 --port 8000 --max-model-len 9000
```
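
Once the server is up, you can confirm it is serving the model with the OpenAI-compatible models endpoint (run from the server machine; adjust host and port if you changed them):

```
curl http://localhost:8000/v1/models
```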
### Client side script
(Assumes a Windows client)  
Rewrite SERVER_URL to point at your server.  
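
The client needs a few extra packages on top of the ones installed earlier; this list is inferred from the imports in the script below, so adjust it to your environment:

```
pip install requests sounddevice numpy torch transformers snac
```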
```
import torch
from transformers import AutoTokenizer
from snac import SNAC
import requests
import json
import sounddevice as sd
import numpy as np
import queue
import threading

# --- Server settings and model preparation (unchanged from the sample above) ---
SERVER_URL = "http://192.168.1.16:8000/v1/completions"
TOKENIZER_PATH = "webbigdata/VoiceCore_gptq"
MODEL_NAME = "VoiceCore_gptq"

prompts = [
     "テストです",
     "ジーピーティーキュー、問題なく動いてますかね?圧縮しすぎると別人の声になっちゃう事があるんですよね、ふふふ"
]
chosen_voice = "matsukaze_male[neutral]"

print("Loading tokenizer...")
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_PATH)
start_token, end_tokens = [128259], [128009, 128260, 128261]

print("Loading SNAC decoder to CPU...")
snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz")
snac_model.to("cpu")
print("SNAC model loaded.")
audio_start_token = 128257

def redistribute_codes(code_list):
    if len(code_list) % 7 != 0: return torch.tensor([])
    layer_1, layer_2, layer_3 = [], [], []
    for i in range(len(code_list) // 7):
        layer_1.append(code_list[7*i])
        layer_2.append(code_list[7*i+1] - 4096)
        layer_3.append(code_list[7*i+2] - (2*4096))
        layer_3.append(code_list[7*i+3] - (3*4096))
        layer_2.append(code_list[7*i+4] - (4*4096))
        layer_3.append(code_list[7*i+5] - (5*4096))
        layer_3.append(code_list[7*i+6] - (6*4096))
    codes = [torch.tensor(layer).unsqueeze(0) for layer in [layer_1, layer_2, layer_3]]
    return snac_model.decode(codes)


def audio_playback_worker(q, stream):
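    # Pull decoded audio chunks (float32 numpy arrays) from the queue and write
    # them to the sounddevice output stream; a None sentinel ends playback.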
    while True:
        data = q.get()
        if data is None:
            break
        stream.write(data)

for i, prompt in enumerate(prompts):
    print("\n" + "="*50)
    print(f"Processing prompt ({i+1}/{len(prompts)}): '{prompt}'")
    print("="*50)

    prompt_ = (f"{chosen_voice}: " + prompt) if chosen_voice else prompt
    input_ids = tokenizer.encode(prompt_)
    final_token_ids = start_token + input_ids + end_tokens
    
    payload = {
        "model": MODEL_NAME, "prompt": final_token_ids,
        "max_tokens": 8192, "temperature": 0.6, "top_p": 0.90,
        "repetition_penalty": 1.1, "stop_token_ids": [128258],
        "stream": True
    }

    token_buffer = []
    found_audio_start = False
    CHUNK_SIZE = 28  # 4 SNAC frames (7 codes each) per decode/playback chunk
    
    audio_queue = queue.Queue()
    playback_stream = sd.OutputStream(samplerate=24000, channels=1, dtype='float32')
    playback_stream.start()
    
    playback_thread = threading.Thread(target=audio_playback_worker, args=(audio_queue, playback_stream))
    playback_thread.start()

    try:
        response = requests.post(SERVER_URL, headers={"Content-Type": "application/json"}, json=payload, stream=True)
        response.raise_for_status()

        for line in response.iter_lines():
            if line:
                decoded_line = line.decode('utf-8')
                if decoded_line.startswith('data: '):
                    content = decoded_line[6:]
                    if content == '[DONE]':
                        break
                    
                    try:
                        chunk = json.loads(content)
                        text_chunk = chunk['choices'][0]['text']
                        if text_chunk:
                            token_buffer.extend(tokenizer.encode(text_chunk, add_special_tokens=False))
                        
                        if not found_audio_start:
                            try:
                                start_index = token_buffer.index(audio_start_token)
                                token_buffer = token_buffer[start_index + 1:]
                                found_audio_start = True
                                print("Audio start token found. Starting playback...")
                            except ValueError:
                                continue
                        
                        while len(token_buffer) >= CHUNK_SIZE:
                            tokens_to_process = token_buffer[:CHUNK_SIZE]
                            token_buffer = token_buffer[CHUNK_SIZE:]
                            
                            code_list = [t - 128266 for t in tokens_to_process]
                            samples = redistribute_codes(code_list)
                            
                            if samples.numel() > 0:
                                sample_np = samples.detach().squeeze().numpy()
                                audio_queue.put(sample_np)

                    except Exception as e:
                        print(f"Error while processing a stream chunk: {e}")

        if found_audio_start and token_buffer:
            remaining_length = (len(token_buffer) // 7) * 7
            if remaining_length > 0:
                tokens_to_process = token_buffer[:remaining_length]
                code_list = [t - 128266 for t in tokens_to_process]
                samples = redistribute_codes(code_list)
                if samples.numel() > 0:
                    sample_np = samples.detach().squeeze().numpy()
                    audio_queue.put(sample_np)

    except requests.exceptions.RequestException as e:
        print(f"サーバーへのリクエストでエラーが発生しました: {e}")
    finally:
        audio_queue.put(None)
        playback_thread.join()
        playback_stream.stop()
        playback_stream.close()
        print("Playback finished for this prompt.")

print("\nAll processing complete!")
```