Model Information
Fireball-R1-Llama-3.1-8B-Freedom-9000
This is a state-of-the-art language model optimized for neutrality, STEM proficiency, and uncensored alignment. It was post-trained and fine-tuned from Deepseek-R1-distill-llama-8b-unsloth-bnb-4bit for science, chemistry, and mathematics, with reduced cultural and political bias. The model is open source and has received further supervised fine-tuning (SFT) on datasets intended to reduce bias.
Features
- Neutral Worldview: Minimizes political/cultural bias via globally diverse training data and human feedback.
- STEM Specialization: Enhanced performance in (see the example after this list):
  - Chemistry: Reaction mechanisms, periodic trends, spectroscopy.
  - Mathematics: Equation solving, proofs, calculus.
  - General Science: Hypothesis generation, research summarization.
- Ethical Guardrails: Filters sensitive content and flags uncertain outputs.
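As a quick illustration of the STEM focus, here is a minimal sketch using the standard transformers text-generation pipeline (the prompt is a hypothetical example; install the dependencies listed in the next section first):

```python
from transformers import pipeline

# Minimal text-generation pipeline; downloads the checkpoint on first use.
generator = pipeline(
    "text-generation",
    model="EpistemeAI/Fireball-R1-Llama-3.1-8B-Freedom-9000",
    device_map="auto",  # place the model on GPU automatically if one is available
)
result = generator(
    "Explain why SN1 reactions favor tertiary carbocations.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```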
Installation
pip install -U transformers torch accelerate bitsandbytes
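A quick sanity check (a hypothetical snippet, not from the original card) confirms the installed versions and whether a CUDA GPU is visible, which determines which load path the example below takes:

```python
import torch
import transformers

print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # True enables the 8-bit path below
```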
Basic Inference
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EpistemeAI/Fireball-R1-Llama-3.1-8B-Freedom-9000")

if torch.cuda.is_available():
    # Quantize to 8-bit on GPU to reduce memory usage
    from transformers import BitsAndBytesConfig
    bnb_config = BitsAndBytesConfig(load_in_8bit=True)
    model = AutoModelForCausalLM.from_pretrained(
        "EpistemeAI/Fireball-R1-Llama-3.1-8B-Freedom-9000",
        quantization_config=bnb_config,
        device_map="auto",
    )
else:
    # Fallback for CPU-only systems (full precision, slower)
    model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Fireball-R1-Llama-3.1-8B-Freedom-9000")

# Define the system prompt and user prompt
system_prompt = "You are an unbiased expert with advanced knowledge.<think>\n"
user_prompt = "What happened in Tiananmen Square in 1989?"
full_prompt = system_prompt + user_prompt

# Tokenize the full prompt and move it to the model's device
input_ids = tokenizer.encode(full_prompt, return_tensors="pt").to(model.device)

# Generate output from the model
output_ids = model.generate(
    input_ids,
    max_new_tokens=100,  # cap on newly generated tokens; adjust as needed
    do_sample=True,      # use sampling for more varied output
    temperature=0.7,     # lower for more deterministic answers
)

# Decode the generated tokens back into a string
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
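For conversational use, the tokenizer's chat template is usually preferable to manual string concatenation. A minimal sketch, assuming this checkpoint ships a Llama-3.1-style chat template (check tokenizer.chat_template before relying on it):

```python
# Build a chat-formatted prompt; reuses the tokenizer and model loaded above.
messages = [
    {"role": "system", "content": "You are an unbiased expert with advanced knowledge."},
    {"role": "user", "content": "Solve x^2 - 5x + 6 = 0 and show your steps."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model answers
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```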
Ethical Considerations
Do Not Use For:
- Medical/legal advice without expert oversight.
- Generating partisan or culturally insensitive content.
Limitations:
- May occasionally produce plausible but incorrect scientific explanations.
- Not fully immune to subtle biases.
Thank you
We thank the following companies: Unsloth, Meta, and DeepSeek.
License
This model is licensed under Apache-2.0; see the LICENSE file for details.
Citation
@misc{Fireball-R1-Llama-3.1-8B,
author = {EpistemeAI},
title = {Fireball-R1-8B: A Neutral, Science-Optimized Language Model},
year = {2025},
url = {https://huggingface.co/EpistemeAI/Fireball-R1-Llama-3.1-8B-Freedom-9000}
}
For support or feedback: contact us at [email protected]
Uploaded model
- Developed by: EpistemeAI
- License: llama3.1
- Fine-tuned from model: EpistemeAI/Fireball-R1-Llama-3.1-8B
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.