ShahzebKhoso/GPT-OSS-20B-AceReason-Math
#1777
by
saipangon - opened
Do something for this, blessed people :D!
I did something =)
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#GPT-OSS-20B-AceReason-Math-GGUF for quants to appear.
There's still no progress =(
There is if you check the status page:
```
-2000 13 si GPT-OSS-20B-AceReason-Math error/1 converting...
```
Here's the full error:
```
INFO:hf-to-gguf:Loading model: GPT-OSS-20B-AceReason-Math
INFO:hf-to-gguf:Model architecture: GptOssForCausalLM
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: indexing model part 'model-00001-of-00003.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00002-of-00003.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00003-of-00003.safetensors'
Traceback (most recent call last):
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 11291, in <module>
    main()
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 11268, in main
    model_instance = model_class(dir_model, output_type, fname_out,
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 772, in __init__
    super().__init__(*args, **kwargs)
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 163, in __init__
    self.dequant_model()
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 10013, in dequant_model
    return super().dequant_model()
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 472, in dequant_model
    raise NotImplementedError(f"Quant method is not yet supported: {quant_method!r}")
NotImplementedError: Quant method is not yet supported: 'bitsandbytes'
job finished, status 1
job-done<0 GPT-OSS-20B-AceReason-Math noquant 1>
error/1 converting...
```
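For context on the error: the converter inspects the `quantization_config` block of the checkpoint's `config.json` before dequantizing, and bails out when `quant_method` is one it can't handle (here `'bitsandbytes'`). A minimal sketch of that check (the function name and the example config values are mine for illustration, not taken from llama.cpp or the actual repo):

```python
def detect_quant_method(config):
    """Return the quant_method recorded in a model's config.json, or None.

    convert_hf_to_gguf.py reads this field to decide whether it can
    dequantize the checkpoint; 'bitsandbytes' is among the methods it
    currently rejects with NotImplementedError.
    """
    quant_cfg = config.get("quantization_config")
    if quant_cfg is None:
        return None  # plain full-precision checkpoint, nothing to dequantize
    return quant_cfg.get("quant_method")


# Example: a config shaped like one that would trigger the error above
# (illustrative values only).
config = {
    "architectures": ["GptOssForCausalLM"],
    "quantization_config": {"quant_method": "bitsandbytes", "load_in_4bit": True},
}
print(detect_quant_method(config))  # bitsandbytes
```

If that's the cause, the usual fix on the uploader's side is to dequantize the checkpoint back to full precision (recent transformers versions expose a `dequantize()` method on bitsandbytes-loaded models, if I recall correctly) and re-upload the plain weights before requesting GGUF quants.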
https://huggingface.co/ShahzebKhoso/GPT-OSS-20B-AceReason-Math