
Gpt-OSS-20B-MXFP4-GGUF

GGUF MXFP4_MOE quant of openai/gpt-oss-20b. This GGUF was quantized from a fully dequantized (upcast to F32) copy of the model, including the MoE layers, which are normally kept in MXFP4. This was done to help preserve, and even improve, the model's accuracy and precision after quantization.

File size: ~12.11 GB
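A minimal sketch of loading and running this GGUF locally with llama-cpp-python; the file name, context size, and GPU layer count are assumptions, so adjust them to your download and hardware.

```python
from llama_cpp import Llama

# Load the quantized GGUF (file name is hypothetical; use the actual file from this repo).
llm = Llama(
    model_path="gpt-oss-20b-mxfp4.gguf",
    n_ctx=8192,        # assumed context window; raise or lower to fit your RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

# Simple chat-style generation to sanity-check the quant.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what MXFP4 quantization is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The same file also works with the llama.cpp CLI and server builds that support the gpt-oss architecture and the MXFP4_MOE quant type.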

Model size: 20.9B params
Architecture: gpt-oss
Format: GGUF