---
license: llama3.1
language: en
base_model: meta-llama/Llama-3.1-8B-Instruct
---
# MathBite/self_corrective_llama_3.1_8B

This is a fine-tuned version of `meta-llama/Llama-3.1-8B-Instruct` trained to detect and mitigate hallucinations in its own generated text.
## How to Use

Because this model uses a custom architecture, you **must** pass `trust_remote_code=True` when loading it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MathBite/self_corrective_llama_3.1_8B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True  # required: the model ships custom architecture code
)
```
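Once loaded, the model can be used like any other causal LM with the standard Llama 3.1 chat template. The snippet below is a minimal generation sketch continuing from the loading code above; the prompt and generation parameters are illustrative, not prescribed by this model card.

```python
# Minimal generation sketch (illustrative prompt and parameters),
# continuing from the `model` and `tokenizer` loaded above.
messages = [{"role": "user", "content": "Who wrote the play Hamlet?"}]

# Build input ids using the tokenizer's chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```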