---
license: apache-2.0
quantized_by: Pomni
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
base_model:
- openai/whisper-tiny
pipeline_tag: automatic-speech-recognition
tags:
- whisper.cpp
- ggml
- whisper
- audio
- speech
- voice
---

# Whisper-Tiny quants
This is a repository of GGML quants for [whisper-tiny](https://huggingface.co/openai/whisper-tiny), for use with whisper.cpp.
If you are looking for a program to run this model with, then I would recommend EasyWhisper UI, as it is user-friendly, has a GUI, and will automate a lot of the hard stuff for you.
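If you prefer the command line, a typical whisper.cpp invocation looks roughly like the following. The paths and binary name are illustrative assumptions (recent releases build `whisper-cli`; older ones ship the binary as `main`):

```shell
# Transcribe a 16 kHz WAV file with the Q8_0 quant (example paths)
./build/bin/whisper-cli -m models/ggml-tiny-q8_0.bin -f audio.wav
```

whisper.cpp expects 16 kHz mono WAV input, so you may need to resample your audio (e.g. with ffmpeg) first.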
## List of Quants
Clicking on a link will download the corresponding quant instantly.
| Link | Quant | Size | Notes |
|---|---|---|---|
| GGML | F32 | 152 MB | Likely overkill. |
| GGML | F16 | 77.7 MB | Performs better than Q8_0 on noisy audio and music. |
| GGML | Q8_0 | 43.5 MB | Sweet spot; superficial quality loss at nearly double the speed. |
| GGML | Q6_K | 34.7 MB | |
| GGML | Q5_K | 29.9 MB | |
| GGML | Q5_1 | 32.2 MB | |
| GGML | Q5_0 | 29.9 MB | Last "good" quant; anything below loses quality rapidly. |
| GGML | Q4_K | 25.3 MB | Might not have lost too much quality, but I'm not sure. |
| GGML | Q4_1 | 27.6 MB | |
| GGML | Q4_0 | 25.3 MB | |
| GGML | Q3_K | 20.5 MB | |
| GGML | Q2_K | 16.8 MB | Completely nonsensical outputs. |
The F16 quant was taken from `ggerganov/whisper.cpp` (`ggml-tiny.bin`).
## Questions you may have
### Why do the "K-quants" not work for me?
My guess is that your GPU is too old to support them; I have gotten the same error on my GTX 1080. If you would like to run them regardless, you can try switching to CPU inference.
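For example, the whisper.cpp CLI can be forced onto the CPU with its `--no-gpu` flag. The binary name and paths below are illustrative (older releases call the binary `main` instead of `whisper-cli`):

```shell
# Fall back to CPU inference if the GPU rejects K-quants (example paths)
./build/bin/whisper-cli --no-gpu -m models/ggml-tiny-q5_k.bin -f audio.wav
```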
### Are the K-quants "S", "M", or "L"?
The quantizer I used was not specific about this, so I do not know either.
### What program did you use to make these quants?
I used whisper.cpp v1.7.6 on Windows x64, leveraging CUDA 12.4.0. For the F32 quant, I converted the original Hugging Face (H5) format model to GGML using the `models/convert-h5-to-ggml.py` script.
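As a sketch, the conversion and quantization workflow with whisper.cpp's bundled tools looks roughly like this. The directory paths are placeholder assumptions; the `quantize` tool and conversion script ship with the whisper.cpp repository:

```shell
# Convert the Hugging Face model to GGML (arguments: model dir, whisper repo dir, output dir)
python models/convert-h5-to-ggml.py /path/to/whisper-tiny/ /path/to/whisper/ models/

# Quantize the resulting GGML file, e.g. to Q8_0
./build/bin/quantize models/ggml-tiny.bin models/ggml-tiny-q8_0.bin q8_0
```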
### One or more of the quants are not working for me.
Open a new discussion about it in the community tab, and I will look into the issue.