Models
- Felladrin/Llama-160M-Chat-v1 (Text Generation, 0.2B params)
- Felladrin/Minueza-2-96M-Instruct-Variant-10 (Text Generation, 96M params)
- Felladrin/Smol-Llama-101M-Chat-v1 (Text Generation, 0.1B params)
- Felladrin/Minueza-32M-UltraChat (Text Generation, 32.8M params)
Victor Nogueira (Felladrin)
AI & ML interests: Models to run in the web browser
Recent Activity
- upvoted a collection about 9 hours ago: Open Coding Agents
- updated a Space 3 days ago: Felladrin/awesome-ai-web-search
- reacted to raincandy-u's post with 🔥 3 days ago:
🤗 Just released Rain-100M, an experimental ~97M-parameter Qwen3-style language model trained from random initialization.
- Repo: https://huggingface.co/raincandy-u/Rain-100M
- Data: FineWeb-Edu (https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu), ~3B tokens, English only
- Tokenizer: custom 16k BPE; context length 4096
- Architecture: 12 Transformer layers, hidden size 768, 12 attention heads, MLP dimension 2048, SiLU activation, bf16
Rain-100M is a raw base model (not instruction-tuned or safety-aligned), aimed at small-scale research, debugging training pipelines, and CPU/edge experiments. If you run evaluations, finetunes, or visualizations with it, I would be very interested in your results!
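A minimal sketch of how such a checkpoint could be loaded for a quick CPU smoke test, assuming the repo ships a standard Qwen3-style transformers config that the Auto classes can resolve; the prompt and sampling settings below are illustrative and not taken from the Rain-100M repo:

```python
# Minimal sketch: load Rain-100M and sample a continuation on CPU.
# Assumes a standard Qwen3-style transformers config in the repo;
# prompt and sampling settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "raincandy-u/Rain-100M"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)
model.eval()

# Sanity check: parameter count should land near the advertised ~97M.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")

# Raw base model: use plain text continuation, no chat template.
inputs = tokenizer("The water cycle begins when", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since the model is not instruction-tuned, completion-style prompts like the one above should behave better than chat-style instructions.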