MISHANM/meta-Llama-3.2-3B-Instruct.gguf
This model is a GGUF version of meta-llama/Llama-3.2-3B-Instruct, optimized for use with the llama.cpp framework. It runs efficiently on CPUs and can be used for text-generation tasks such as chat and instruction following.
Model Details
- Language: English
- Tasks: Text generation
- Base Model: meta-llama/Llama-3.2-3B-Instruct
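Download the Model
Before building, fetch the GGUF file from the Hugging Face Hub. A minimal sketch using the huggingface-cli tool (assumes the huggingface_hub package is installed; this downloads the full repository snapshot into ./models):
huggingface-cli download MISHANM/meta-Llama-3.2-3B-Instruct.gguf --local-dir ./models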
Building and Running the Model
To build and run the model using llama.cpp, follow these steps:
Build llama.cpp Locally
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
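Optional: Build with GPU Offload
If an NVIDIA GPU is available, llama.cpp can also be built with CUDA support so that model layers can be offloaded to the GPU. A minimal sketch, assuming the CUDA toolkit is installed (the GGML_CUDA option name applies to recent llama.cpp versions; older releases used a different flag):
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release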
Run the Model
Navigate to the build directory and run inference with llama-cli. Note that the -m flag expects the path to the .gguf model file itself, and -n limits the number of generated tokens:
cd llama.cpp/build/bin
./llama-cli -m /path/to/model.gguf -p "Your prompt here" -n 128
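Besides one-shot prompts with llama-cli, the same build produces llama-server, which exposes an OpenAI-compatible HTTP API. A minimal sketch (the port and model path are placeholders):
./llama-server -m /path/to/model.gguf --port 8080
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Your prompt here"}], "max_tokens": 128}'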
Citation Information
@misc{MISHANM/meta-Llama-3.2-3B-Instruct.gguf,
  author    = {Mishan Maurya},
  title     = {Introducing MISHANM/meta-Llama-3.2-3B-Instruct.gguf GGUF Model},
  year      = {2025},
  publisher = {Hugging Face},
  journal   = {Hugging Face repository},
}