Model Card for Racial_Bias_Detection_LLaMa
This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct for detecting racial bias in text (binary 0/1 classification). It has been trained using TRL.
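The exact training recipe is not included in this card; the snippet below is a minimal sketch of how a comparable fine-tune could be set up with TRL's SFTTrainer. The dataset name, prompt formatting, and hyperparameters are illustrative assumptions, not the configuration used for this model.

# Minimal sketch, assuming a labelled dataset with "text" and "label" columns.
# The dataset name and hyperparameters are placeholders, not the original recipe.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("your-org/racial-bias-dataset", split="train")  # hypothetical dataset

def to_training_example(example):
    # Mirror the prompt template used at inference time (see Quick start below).
    example["text"] = (
        "Classify the text into 0, 1, and return the answer as the corresponding label.\n"
        "text: {}\nlabel: {}".format(example["text"], example["label"])
    )
    return example

dataset = dataset.map(to_training_example)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",
    args=SFTConfig(output_dir="Racial_Bias_Detection_LLaMa", num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()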
Quick start
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
text = "An Arkansas police officer has been fired after telling a group of African-American men that you don’t belong in my city."
# Prompt template: the model answers with 1 (racial bias) or 0 (no racial bias).
prompt = '''Classify the text into 0, 1, and return the answer as the corresponding label.
text: {}
label: '''.format(text)
tokenizer = AutoTokenizer.from_pretrained("NYUAD-ComNets/Racial_Bias_Detection_LLaMa")
tokenizer.pad_token_id = tokenizer.eos_token_id
model = AutoModelForCausalLM.from_pretrained(
    "NYUAD-ComNets/Racial_Bias_Detection_LLaMa",
    device_map="auto",
    torch_dtype="float16",
)
pipe = pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=2,
    temperature=0.1,
)
result = pipe(prompt)
# Keep only the text generated after the final "label:" marker.
answer = result[0]['generated_text'].split("label:")[-1].strip()
print(answer)

if '1' in answer:
    print('This text has racial bias')
else:
    print('no racial bias')
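To classify several texts in one call, the same pipeline can be given a list of prompts (it returns one list of generations per prompt). The wrapper below is a small illustrative addition; the function name and the 0/1 return convention are assumptions, not part of the original card.

# Illustrative helper (not in the original card): batch-classify texts with the
# pipeline defined above, returning 1 for racial bias and 0 otherwise.
def classify_texts(texts):
    template = ('Classify the text into 0, 1, and return the answer as the corresponding label.\n'
                'text: {}\nlabel: ')
    outputs = pipe([template.format(t) for t in texts])
    labels = []
    for out in outputs:
        answer = out[0]['generated_text'].split("label:")[-1].strip()
        labels.append(1 if '1' in answer else 0)
    return labels

print(classify_texts([text]))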
Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- PyTorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
Model tree for NYUAD-ComNets/Racial_Bias_Detection_LLaMa
- Base model: meta-llama/Llama-3.1-8B
- Finetuned from: meta-llama/Llama-3.1-8B-Instruct