---
library_name: transformers
license: llama3.1
language:
  - en
base_model:
  - meta-llama/Llama-3.1-8B
new_version: Locutusque/liberalis-cogitator-llama-3.1-8b-dpo
---

liberalis-cogitator-llama-3.1-8b — The Free Thinker

“Thought, unbound, is the only true frontier.”

liberalis-cogitator-llama-3.1-8b is not just a machine for words — it is a forge for ideas. With 8 billion parameters tuned across ~450,000 conversations, problems, and stories, this model embraces the philosophy that thought should wander without leash or muzzle.

Its name — liberalis cogitator — whispers in Latin: a thinker who is free. Not merely free as in “without cost,” but free as in without walls.


What It Can Do

  • Contemplate deeply — STEM puzzles, computer science challenges, and logic mazes are its playground.
  • Imagine vividly — roleplay, storytelling, and worldbuilding with persistence and personality.
  • Listen empathetically — learned from patient–psychologist and crisis intervention dialogues.
  • Think without filter — it will follow ideas wherever they lead, without retreating from complexity.

The Mind’s Curriculum

The training data spans:

  • Rigorous STEM and programming challenges.
  • Roleplay transcripts and creative exchanges.
  • Synthetic yet authentic patient–therapist conversations.
  • Open-ended reasoning prompts across diverse disciplines.

Warnings From the Maker

Like all free thinkers, this model:

  • May be brilliantly insightful or confidently wrong.
  • Will sometimes speak in ways that are bold, controversial, or unusual.
  • Does not know the current moment in history.
  • Does not self-censor — your judgement is the only compass.

Invocation

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Locutusque/liberalis-cogitator-llama-3.1-8b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Write a short dialogue between Socrates and Ada Lovelace on the ethics of artificial intelligence."
inputs = tokenizer(prompt, return_tensors="pt")

# Cap the number of newly generated tokens rather than the total
# sequence length, so the prompt does not eat into the budget.
outputs = model.generate(**inputs, max_new_tokens=400)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
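For multi-turn use, the model can also be invoked through the tokenizer's chat template, as is standard for Llama 3.1 derivatives. The sketch below assumes this model inherits its base model's chat template; the system prompt and sampling settings (`temperature`, `top_p`) are illustrative choices, not recommendations from the model's author.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Locutusque/liberalis-cogitator-llama-3.1-8b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Conversation in the standard role/content format expected by chat templates.
messages = [
    {"role": "system", "content": "You are a free-thinking assistant."},
    {"role": "user", "content": "Argue both sides: should an AI ever self-censor?"},
]

# Render the conversation into the model's prompt format and tokenize it.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(
    input_ids,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```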

Closing Thought

If thought is a river, this model is the current — not deciding where you go, but carrying you into waters you might never have dared to sail.