QuantStack/HunyuanImage-2.1-Refiner-GGUF
README.md exists but content is empty.
Downloads last month: 5,700

Format: GGUF
Model size: 15B params
Architecture: hyvid
Available quantizations:

Bits    Quant    Size
2-bit   Q2_K     5.64 GB
3-bit   Q3_K_S   7.13 GB
3-bit   Q3_K_M   7.3 GB
4-bit   Q4_K_S   9.07 GB
4-bit   Q4_0     9.07 GB
4-bit   Q4_1     9.98 GB
4-bit   Q4_K_M   9.24 GB
5-bit   Q5_K_S   10.9 GB
5-bit   Q5_0     10.9 GB
5-bit   Q5_1     11.8 GB
5-bit   Q5_K_M   11.1 GB
6-bit   Q6_K     12.8 GB
8-bit   Q8_0     16.4 GB
16-bit  BF16     30.1 GB
16-bit  F16      30.1 GB
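The file sizes above follow roughly from the parameter count times the effective bits per weight of each quantization. A minimal sketch of that arithmetic, assuming the card's rounded "15B params" figure and approximate bits-per-weight values typical of llama.cpp-style quant schemes (not repo-specific data, so estimates land slightly below the listed sizes):

```python
# Sanity check: GGUF file size ≈ params × bits-per-weight / 8.
# PARAMS is the card's rounded "15B params"; bits-per-weight values
# below are approximate averages for mixed-precision quant schemes,
# assumed for illustration rather than read from this repo.
PARAMS = 15e9

def est_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Estimated file size in decimal gigabytes."""
    return params * bits_per_weight / 8 / 1e9

# F16 at 16 bpw comes out near the listed 30.1 GB; quantized
# variants estimate a bit low because the true param count
# is somewhat above the rounded 15B.
for name, bpw in [("Q4_K_M", 4.9), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{name}: ~{est_gb(bpw):.1f} GB")
```

This is only a back-of-the-envelope check; actual GGUF files add metadata and keep some tensors at higher precision, so listed sizes run a little above the estimate.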
Inference Providers: this model isn't deployed by any Inference Provider.
Model tree for QuantStack/HunyuanImage-2.1-Refiner-GGUF:
Base model: tencent/HunyuanImage-2.1 (4 quantized versions, including this model)
Collection including QuantStack/HunyuanImage-2.1-Refiner-GGUF:
HunyuanImage2.1 GGUFs (collection, 3 items, updated 2 days ago)