---
language:
  - ja
base_model:
  - webbigdata/VoiceCore
tags:
  - tts
  - vllm
---

# VoiceCore_gptq

This model is webbigdata/VoiceCore quantized with GPTQ (W4A16) so that it runs at high speed on vLLM.
See the webbigdata/VoiceCore model card for details.
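
The checkpoint loads like any other Hugging Face model with vLLM, which should pick up the GPTQ quantization settings from the checkpoint config. A minimal loading sketch (the full generation example is in the sample script below):

```python
from vllm import LLM

# vLLM detects the GPTQ (W4A16) quantization from the checkpoint config,
# so no extra quantization flags are needed here.
llm = LLM(model="webbigdata/VoiceCore_gptq", trust_remote_code=True, max_model_len=10000)
```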

## Install/Setup

vLLM reportedly also runs on AMD GPUs, but this has not been verified.
It also seems to work on Mac (CPU), but the gguf version may be faster there.

Below are setup instructions for Linux with an NVIDIA GPU.

```bash
python3 -m venv VL
source VL/bin/activate
pip install vllm
pip install snac
pip install numpy==1.26.4
pip install transformers==4.53.2
```
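
As an optional sanity check (a minimal sketch, not part of the original setup), you can confirm that the packages import and that a CUDA device is visible:

```python
# Optional: verify that the installed packages import and that a GPU is visible.
import torch
import transformers
import vllm

print("vLLM version:", vllm.__version__)
print("Transformers version:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```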

## Sample script

```python
import torch
import scipy.io.wavfile as wavfile
from transformers import AutoTokenizer
from snac import SNAC
from vllm import LLM, SamplingParams

QUANTIZED_MODEL_PATH = "webbigdata/VoiceCore_gptq"
prompts = [
     "テストです",
     "ジーピーティーキュー、問題なく動いてますかね?あ~、笑い声が上手く表現できなくなっちゃってますかね、仕方ないか、えへへ"
]
chosen_voice = "matsukaze_male[neutral]"

print("Loading tokenizer and preparing inputs...")
tokenizer = AutoTokenizer.from_pretrained(QUANTIZED_MODEL_PATH)
prompts_ = [(f"{chosen_voice}: " + p) if chosen_voice else p for p in prompts]
start_token, end_tokens = [128259], [128009, 128260, 128261]
all_prompt_token_ids = []
for prompt in prompts_:
  input_ids = tokenizer.encode(prompt)
  final_token_ids = start_token + input_ids + end_tokens
  all_prompt_token_ids.append(final_token_ids)
print("Inputs prepared successfully.")

print(f"Loading SmoothQuant model with vLLM from: {QUANTIZED_MODEL_PATH}")
llm = LLM(
    model=QUANTIZED_MODEL_PATH,
    trust_remote_code=True,
    max_model_len=10000,    # If you run out of memory, reduce this.
    #gpu_memory_utilization=0.9 # Fraction of the total GPU memory vLLM may use; adjust as needed.
)
sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.90,
    repetition_penalty=1.1,
    max_tokens=8192, # max_tokens + input_prompt <= max_model_len
    stop_token_ids=[128258]
)
print("vLLM model loaded.")

print("Generating audio tokens with vLLM...")
outputs = llm.generate(prompt_token_ids=all_prompt_token_ids, sampling_params=sampling_params)
print("Generation complete.")

# GPU decoding would be faster, but it can fail when vLLM has already reserved most of the GPU memory, so SNAC stays on the CPU.
print("Loading SNAC decoder to CPU...")
snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz")
snac_model.to("cpu") 
print("SNAC model loaded.")

print("Decoding tokens to audio...")
audio_start_token = 128257  # token id that marks the start of the audio token stream

def redistribute_codes(code_list):
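  # SNAC (24 kHz) decodes three code layers per frame. Each group of 7 generated
  # tokens is split below into 1 coarse, 2 medium and 4 fine codes, removing the
  # n*4096 offset that was added to position n during generation.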
  layer_1, layer_2, layer_3 = [], [], []
  for i in range(len(code_list) // 7):
    layer_1.append(code_list[7*i])
    layer_2.append(code_list[7*i+1] - 4096)
    layer_3.append(code_list[7*i+2] - (2*4096))
    layer_3.append(code_list[7*i+3] - (3*4096))
    layer_2.append(code_list[7*i+4] - (4*4096))
    layer_3.append(code_list[7*i+5] - (5*4096))
    layer_3.append(code_list[7*i+6] - (6*4096))

  codes = [torch.tensor(layer).unsqueeze(0)
           for layer in [layer_1, layer_2, layer_3]]

  audio_hat = snac_model.decode(codes)
  return audio_hat

code_lists = []
for output in outputs:
    generated_token_ids = output.outputs[0].token_ids
    generated_tensor = torch.tensor([generated_token_ids])
    token_indices = (generated_tensor == audio_start_token).nonzero(as_tuple=True)
    if len(token_indices[1]) > 0:
        cropped_tensor = generated_tensor[:, token_indices[1][-1].item() + 1:]
    else:
        cropped_tensor = generated_tensor

    masked_row = cropped_tensor.squeeze()
    row_length = masked_row.size(0)
    new_length = (row_length // 7) * 7
    trimmed_row = masked_row[:new_length]
    code_list = [t.item() - 128266 for t in trimmed_row]  # map token ids back to SNAC code ids
    code_lists.append(code_list)

for i, code_list in enumerate(code_lists):
    if i >= len(prompts): break

    print(f"Processing audio for prompt: '{prompts[i]}'")
    samples = redistribute_codes(code_list)
    sample_np = samples.detach().squeeze().numpy()

    safe_prompt = "".join(c for c in prompts[i] if c.isalnum() or c in (' ', '_')).rstrip()
    filename = f"audio_final_{i}_{safe_prompt[:20].replace(' ', '_')}.wav"

    wavfile.write(filename, 24000, sample_np)
    print(f"Saved audio to: {filename}")

## Streaming sample

This demo runs vLLM as a server, accesses it via streaming, and has the client play back the audio incrementally.
Quality degrades somewhat, but pseudo real-time playback can be achieved on a GPU with roughly RTX 4060-level performance.
Ideally, generation would be timed so that no audible noise is produced, but this has not been achieved yet, so please treat it as a proof of concept.

### Server side command

(This assumes a Linux server.)

```bash
python3 -m vllm.entrypoints.openai.api_server --model VoiceCore_gptq --host 0.0.0.0 --port 8000 --max-model-len 9000
```
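
Before starting the client, you can confirm the server is reachable by querying the OpenAI-compatible model list endpoint (a minimal sketch; replace the host and port with your server's address):

```python
import requests

# List the models served by the vLLM OpenAI-compatible server.
resp = requests.get("http://192.168.1.16:8000/v1/models", timeout=10)
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])
```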

### Client side script

(This assumes a Windows client; in addition to the packages above, the script uses requests, sounddevice and numpy.)
Rewrite SERVER_URL to point at your vLLM server.

```python
import torch
from transformers import AutoTokenizer
from snac import SNAC
import requests
import json
import sounddevice as sd
import numpy as np
import queue
import threading

# --- Server settings and model preparation ---
SERVER_URL = "http://192.168.1.16:8000/v1/completions"
TOKENIZER_PATH = "webbigdata/VoiceCore_gptq"
MODEL_NAME = "VoiceCore_gptq"

prompts = [
     "テストです",
     "ジーピーティーキュー、問題なく動いてますかね?圧縮しすぎると別人の声になっちゃう事があるんですよね、ふふふ"
]
chosen_voice = "matsukaze_male[neutral]"

print("Loading tokenizer...")
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_PATH)
start_token, end_tokens = [128259], [128009, 128260, 128261]

print("Loading SNAC decoder to CPU...")
snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz")
snac_model.to("cpu")
print("SNAC model loaded.")
audio_start_token = 128257

def redistribute_codes(code_list):
    if len(code_list) % 7 != 0: return torch.tensor([])
    layer_1, layer_2, layer_3 = [], [], []
    for i in range(len(code_list) // 7):
        layer_1.append(code_list[7*i])
        layer_2.append(code_list[7*i+1] - 4096)
        layer_3.append(code_list[7*i+2] - (2*4096))
        layer_3.append(code_list[7*i+3] - (3*4096))
        layer_2.append(code_list[7*i+4] - (4*4096))
        layer_3.append(code_list[7*i+5] - (5*4096))
        layer_3.append(code_list[7*i+6] - (6*4096))
    codes = [torch.tensor(layer).unsqueeze(0) for layer in [layer_1, layer_2, layer_3]]
    return snac_model.decode(codes)


def audio_playback_worker(q, stream):
    while True:
        data = q.get()
        if data is None:
            break
        stream.write(data)

for i, prompt in enumerate(prompts):
    print("\n" + "="*50)
    print(f"Processing prompt ({i+1}/{len(prompts)}): '{prompt}'")
    print("="*50)

    prompt_ = (f"{chosen_voice}: " + prompt) if chosen_voice else prompt
    input_ids = tokenizer.encode(prompt_)
    final_token_ids = start_token + input_ids + end_tokens
    
    payload = {
        "model": MODEL_NAME, "prompt": final_token_ids,
        "max_tokens": 8192, "temperature": 0.6, "top_p": 0.90,
        "repetition_penalty": 1.1, "stop_token_ids": [128258],
        "stream": True
    }

    token_buffer = []
    found_audio_start = False
    CHUNK_SIZE = 28  # 4 SNAC frames x 7 tokens per frame
    
    audio_queue = queue.Queue()
    playback_stream = sd.OutputStream(samplerate=24000, channels=1, dtype='float32')
    playback_stream.start()
    
    playback_thread = threading.Thread(target=audio_playback_worker, args=(audio_queue, playback_stream))
    playback_thread.start()

    try:
        response = requests.post(SERVER_URL, headers={"Content-Type": "application/json"}, json=payload, stream=True)
        response.raise_for_status()

        for line in response.iter_lines():
            if line:
                decoded_line = line.decode('utf-8')
                if decoded_line.startswith('data: '):
                    content = decoded_line[6:]
                    if content == '[DONE]':
                        break
                    
                    try:
                        chunk = json.loads(content)
                        text_chunk = chunk['choices'][0]['text']
                        if text_chunk:
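                            # The completions endpoint streams detokenized text,
                            # so re-encode the chunk back into token ids here.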
                            token_buffer.extend(tokenizer.encode(text_chunk, add_special_tokens=False))
                        
                        if not found_audio_start:
                            try:
                                start_index = token_buffer.index(audio_start_token)
                                token_buffer = token_buffer[start_index + 1:]
                                found_audio_start = True
                                print("Audio start token found. Starting playback...")
                            except ValueError:
                                continue
                        
                        while len(token_buffer) >= CHUNK_SIZE:
                            tokens_to_process = token_buffer[:CHUNK_SIZE]
                            token_buffer = token_buffer[CHUNK_SIZE:]
                            
                            code_list = [t - 128266 for t in tokens_to_process]
                            samples = redistribute_codes(code_list)
                            
                            if samples.numel() > 0:
                                sample_np = samples.detach().squeeze().numpy()
                                audio_queue.put(sample_np)

                    except Exception as e:
                        print(f"Error while processing a chunk: {e}")

        if found_audio_start and token_buffer:
            remaining_length = (len(token_buffer) // 7) * 7
            if remaining_length > 0:
                tokens_to_process = token_buffer[:remaining_length]
                code_list = [t - 128266 for t in tokens_to_process]
                samples = redistribute_codes(code_list)
                if samples.numel() > 0:
                    sample_np = samples.detach().squeeze().numpy()
                    audio_queue.put(sample_np)

    except requests.exceptions.RequestException as e:
        print(f"サーバーへのリクエストでエラーが発生しました: {e}")
    finally:
        audio_queue.put(None)
        playback_thread.join()
        playback_stream.stop()
        playback_stream.close()
        print("Playback finished for this prompt.")

print("\nAll processing complete!")