Whisper CTranslate2

Whisper large-v3-turbo is a distilled version of large-v3; the only difference is the number of decoder layers. The turbo model has only 4 decoder layers, compared to 32 in large-v3.

This repo contains the OpenAI whisper-large-v3-turbo checkpoint converted to the CTranslate2 format.

Command used to convert the safetensors checkpoint to a CTranslate2-format checkpoint:

```bash
ct2-transformers-converter --model sasikr2/whisper-large-v3-turbo --output_dir whisper-large-v3-turbo-ct2 --quantization float16
```

Hugging Face checkpoint: sasikr2/whisper-large-v3-turbo
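The converted checkpoint can be used with any CTranslate2-based Whisper frontend. As a minimal sketch (assuming the faster-whisper package is installed, the converted directory `whisper-large-v3-turbo-ct2` from the command above is available locally, and an example file `audio.mp3` exists), transcription might look like this:

```python
from faster_whisper import WhisperModel

# Load the directory produced by ct2-transformers-converter
# (path and device/compute settings here are assumptions).
model = WhisperModel(
    "whisper-large-v3-turbo-ct2",
    device="cuda",
    compute_type="float16",
)

# Transcribe an example audio file; segments is a generator of decoded chunks.
segments, info = model.transcribe("audio.mp3", beam_size=5)

print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```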

Parent source repo

Official GitHub page
