
Gpt-OSS-20B-BF16-Unquantized

Unquantized GGUF BF16 model weights for gpt-oss-20B (all MoE layers dequantized from MXFP4 to BF16). Happy quantizing! 😋

Format: GGUF
Model size: 20.9B params
Architecture: gpt-oss
Precision: 16-bit (BF16)