W4A16 (4-bit weights, 16-bit activations), group size 128 GPTQ quantization of cyberagent/Mistral-Nemo-Japanese-Instruct-2408, produced with GPTQModel 1.7.2 using augmxnt/ultra-orca-boros-en-ja-v1 as the calibration set.
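A minimal sketch of how a quant like this can be produced with the GPTQModel library. The model and dataset IDs come from the card above; the exact calibration sample count, dataset field name, and output path are assumptions, and the `GPTQModel.load` / `quantize` / `save` calls follow GPTQModel's documented usage but are not taken from this repo.

```python
# Sketch: reproducing a W4A16 gs128 GPTQ quant with GPTQModel.
# Model/dataset IDs are from the model card; sample count, dataset
# field name, and output path are assumptions for illustration.

BITS = 4          # W4: 4-bit weights (activations stay 16-bit)
GROUP_SIZE = 128  # gs128: one quantization scale per 128 weights

def quantize_mistral_nemo_ja():
    # Heavy imports live inside the function so the sketch can be
    # read (and imported) without gptqmodel/datasets installed.
    from datasets import load_dataset
    from gptqmodel import GPTQModel, QuantizeConfig

    # Calibration texts; 512 samples and the "text" field are assumptions.
    calibration = load_dataset(
        "augmxnt/ultra-orca-boros-en-ja-v1", split="train"
    ).select(range(512))["text"]

    config = QuantizeConfig(bits=BITS, group_size=GROUP_SIZE)
    model = GPTQModel.load(
        "cyberagent/Mistral-Nemo-Japanese-Instruct-2408", config
    )
    model.quantize(calibration)
    model.save("Mistral-Nemo-Japanese-Instruct-2408-GPTQ-W4A16-gs128")

# Calling quantize_mistral_nemo_ja() requires a GPU and downloads
# the full BF16 checkpoint, so it is not invoked here.
```

The quantized output can then be loaded for inference with `GPTQModel.load(quant_path)` or served with any GPTQ-aware runtime.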

Format: Safetensors · Model size: 2.8B params · Tensor types: I32, BF16, FP16

Model tree for shisa-ai/Mistral-Nemo-Japanese-Instruct-2408-GPTQ-W4A16-gs128
