Alcoft/SmolLM2-1.7B-Instruct-GGUF
Tags: Text Generation · GGUF · English · Inference Endpoints · conversational
License: apache-2.0
README.md exists but content is empty.
Downloads last month: 140
Format: GGUF · Model size: 1.71B params · Architecture: llama
Quantization variants:
2-bit: Q2_K
3-bit: Q3_K_S, Q3_K_M, Q3_K_L
4-bit: Q4_K_S, Q4_K_M
5-bit: Q5_K_S, Q5_K_M
6-bit: Q6_K
8-bit: Q8_0
16-bit: F16, BF16
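Each variant above corresponds to a separate .gguf file in the repository, with lower-bit quants trading output quality for a smaller memory footprint. A minimal sketch of running one of these files locally with llama-cpp-python follows; the filename glob is an assumption, so check the repository's file list for the exact names.

```python
# Minimal sketch: run one GGUF quant of SmolLM2-1.7B-Instruct with llama-cpp-python.
# Assumptions: llama-cpp-python and huggingface_hub are installed, and the repo
# contains a single file matching the Q4_K_M glob below (filename is not confirmed).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Alcoft/SmolLM2-1.7B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # pick any variant from the list above
    n_ctx=4096,               # context window to allocate
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The same files should also work with the llama.cpp CLI and other GGUF-aware runtimes; llama-cpp-python is shown here only as one concrete option.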
Inference Providers (Text Generation): this model is not currently available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API because the repository has no library tag.
Model tree for Alcoft/SmolLM2-1.7B-Instruct-GGUF:
Base model: HuggingFaceTB/SmolLM2-1.7B
Quantized from: HuggingFaceTB/SmolLM2-1.7B-Instruct (this model is one of 72 quantized versions)
Included in collection: TAO71-AI Quants: SmolLM2 (3 items, updated Dec 2, 2024)