---
base_model:
- NousResearch/DeepHermes-3-Llama-3-8B-Preview
---
This is a converted weight of the DeepHermes-3-Llama-3-8B-Preview model, quantized to Unsloth's 4-bit dynamic format using this Colab notebook.
## About this Conversion
This conversion uses Unsloth to load the model in 4-bit format and force-save it in the same 4-bit format.
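The load-and-force-save flow might look like the sketch below. This is an illustration only, assuming a GPU runtime (such as Colab) with `unsloth` installed; the output directory name is hypothetical, and the exact arguments used in the actual notebook may differ.

```python
# Sketch of the conversion: load in 4-bit via Unsloth, then force-save
# the weights in that same 4-bit format. Requires a CUDA GPU.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NousResearch/DeepHermes-3-Llama-3-8B-Preview",
    load_in_4bit=True,  # bitsandbytes handles the 4-bit quantization on load
)

# Force-save in 4-bit; "merged_4bit_forced" tells Unsloth to keep the
# quantized weights rather than de-quantizing back to 16-bit.
model.save_pretrained_merged(
    "DeepHermes-3-Llama-3-8B-Preview-bnb-4bit",  # hypothetical output dir
    tokenizer,
    save_method="merged_4bit_forced",
)
```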
### How 4-bit Quantization Works
- The actual 4-bit quantization is handled by bitsandbytes (bnb), which runs on top of PyTorch.
- Unsloth acts as a wrapper, simplifying and optimizing the process for better efficiency.
This allows for reduced memory usage and faster inference while keeping the model compact.
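In plain `transformers` terms, a 4-bit bitsandbytes load corresponds to a `BitsAndBytesConfig` like the one below. This is a config sketch for illustration; the specific settings Unsloth applies (NF4, double quantization, compute dtype) are assumptions based on common bnb defaults, not confirmed from the notebook.

```python
from transformers import BitsAndBytesConfig

# Assumed-typical bnb 4-bit settings; verify against the actual conversion.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # 4-bit NormalFloat quantization
    bnb_4bit_use_double_quant=True,     # also quantize the quantization constants
    bnb_4bit_compute_dtype="bfloat16",  # dtype used for matmuls at inference
)
```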