
Note

This quant was made to test extremely low bitrates. A 70B model is NOT usable at 1.6 bpw. A custom quantization strategy was used; for details, check the EXL3 repo.
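
For reference, the snippet below is a minimal sketch (assuming the `huggingface_hub` package and an illustrative local directory name) that estimates the weight footprint at 1.6 bpw and fetches the quant locally. Actually loading it requires an EXL3-capable backend such as the exllamav3 library referenced above; that API is not shown here.

```python
# Minimal sketch: estimate the 1.6 bpw footprint and download this quant.
from huggingface_hub import snapshot_download

# Rough footprint check: ~70e9 weights * 1.6 bits / 8 bits-per-byte ≈ 14 GB
# of weight data, before KV cache and activation overhead.
approx_weight_gb = 70e9 * 1.6 / 8 / 1e9
print(f"Approximate weight footprint: {approx_weight_gb:.1f} GB")

# Download the quantized weights to a local directory (path is illustrative).
local_path = snapshot_download(
    repo_id="SicariusSicariiStuff/Negative_LLAMA_70B_EXL3_1.6bpw",
    local_dir="Negative_LLAMA_70B_EXL3_1.6bpw",
)
print(f"Model files downloaded to: {local_path}")
```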

Format: Safetensors
Model size: 8.3B params
Tensor types: F16 · I16
