DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters Updated Jul 27, 2025 • 148
DavidAU/L3.1-MOE-8X8B-Dark-Planet-8D-Mirrored-Chaos-Uncensored-47B-GGUF Text Generation • 47B • Updated May 28, 2025 • 576 • 6
hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 Text Generation • 410B • Updated Sep 13, 2024 • 810 • 36
hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 Text Generation • 8B • Updated Aug 7, 2024 • 156k • 82
hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 Text Generation • 71B • Updated Aug 7, 2024 • 136k • 107
hugging-quants/Meta-Llama-3.1-405B-Instruct-GPTQ-INT4 Text Generation • 410B • Updated Aug 7, 2024 • 174 • 16
hugging-quants/Meta-Llama-3.1-405B-Instruct-BNB-NF4 Text Generation • 423B • Updated Sep 16, 2024 • 11 • 5
hugging-quants/Meta-Llama-3.1-8B-Instruct-BNB-NF4 Text Generation • 8B • Updated Aug 8, 2024 • 251 • 8
ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit Text Generation • 71B • Updated Jul 27, 2024 • 30 • 4
hugging-quants/Meta-Llama-3.1-70B-Instruct-GPTQ-INT4 Text Generation • 71B • Updated Aug 7, 2024 • 865 • 23
hugging-quants/Meta-Llama-3.1-8B-Instruct-GPTQ-INT4 Text Generation • 8B • Updated Aug 7, 2024 • 6.4k • 40
sunnyyy/openbuddy-llama3.1-8b-v22.1-131k-Q4_K_M-GGUF Text Generation • 8B • Updated Jul 25, 2024 • 16
azhiboedova/Meta-Llama-3.1-8B-Instruct-AQLM-2Bit-1x16 Text Generation • 2B • Updated Aug 28, 2024 • 13 • 13
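The repositories above are prequantized Llama 3.1 checkpoints in several formats: the AWQ-INT4, GPTQ-INT4, and bitsandbytes NF4 weights are meant to load directly through `transformers`, the GGUF files target llama.cpp-based runtimes, and the AQLM checkpoint needs the `aqlm` package. As a minimal sketch only (not an official recipe for any particular repo), the 8B AWQ-INT4 checkpoint from the list can typically be loaded as follows, assuming a recent `transformers` release, the `autoawq` kernels, and at least one CUDA GPU:

```python
# Minimal sketch, not an official recipe: assumes `torch`, `transformers`,
# and the `autoawq` kernels are installed and a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4"  # from the list above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ kernels run with fp16 activations
    device_map="auto",          # place layers on the available GPU(s)
)

# Llama 3.1 Instruct expects its chat template before generation.
messages = [{"role": "user", "content": "Summarize INT4 quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same pattern should apply to the GPTQ-INT4 and BNB-NF4 repos with the corresponding backend installed instead of `autoawq`; the 70B and 405B variants additionally require enough GPU memory for `device_map="auto"` to shard them across devices.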