huihui-ai/SmolLM2-1.7B-Instruct-abliterated

This is an uncensored version of HuggingFaceTB/SmolLM2-1.7B-Instruct created with abliteration (see remove-refusals-with-transformers to learn more about it).
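
For background, abliteration estimates a "refusal direction" in the base model's hidden activations and then removes (ablates) that direction from the weights, so the model can no longer express the refusal behavior along it. The sketch below only illustrates that idea under assumptions: a Llama-style layer layout, placeholder prompt sets, and an arbitrary layer choice; it is not the exact recipe used for this model (see remove-refusals-with-transformers for the real procedure).

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

def mean_hidden_state(prompts, layer=-1):
    # Average the last-token hidden state at a chosen layer over a set of prompts.
    states = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        states.append(out.hidden_states[layer][0, -1])
    return torch.stack(states).mean(dim=0)

# Placeholder prompt sets: requests the base model tends to refuse vs. answer.
refused = ["Example request the base model refuses."]
harmless = ["Example request the base model answers."]

# Estimate the refusal direction as the difference of mean activations.
direction = mean_hidden_state(refused) - mean_hidden_state(harmless)
direction = direction / direction.norm()

def ablate(weight, d):
    # Project the direction out of the matrix's output space: W' = (I - d d^T) W
    return weight - torch.outer(d, d @ weight)

with torch.no_grad():
    for layer in model.model.layers:
        layer.self_attn.o_proj.weight.copy_(ablate(layer.self_attn.o_proj.weight, direction))
        layer.mlp.down_proj.weight.copy_(ablate(layer.mlp.down_proj.weight, direction))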

If the desired result is not achieved, you can clear the conversation and try again.

How to use

Transformers

pip install transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "huihui-ai/SmolLM2-1.7B-Instruct-abliterated"

device = "cuda"  # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Build a chat-formatted prompt; add_generation_prompt=True appends the assistant
# turn so the model replies instead of continuing the user message.
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
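
If you only want the model's reply rather than the full chat-formatted transcript, a common variant (not prescribed by this card) is to decode just the newly generated tokens:

# Slice off the prompt tokens and skip the chat-template special tokens.
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)
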
Safetensors · Model size: 1.71B params · Tensor type: BF16
