Paper • SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model • arXiv:2502.02737
Space • The Ultra-Scale Playbook 🌌 • The ultimate guide to training LLMs on large GPU clusters
Post • AGENTS + FINETUNING! This week Hugging Face Learn has a whole pathway on fine-tuning for agentic applications. You can follow these two courses to level up your agent game beyond prompts:
1️⃣ New Supervised Fine-tuning unit in the NLP Course: https://huggingface.co/learn/nlp-course/en/chapter11/1
2️⃣ New Fine-tuning for agents bonus module in the Agents Course: https://huggingface.co/learn/agents-course/bonus-unit1/introduction
Fine-tuning squeezes more out of your model for your specific use case than any prompt can.
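For context on what the supervised fine-tuning unit covers, here is a minimal sketch using TRL's SFTTrainer; the model and dataset names and the training settings are illustrative assumptions, not the course's exact code.

```python
# Minimal supervised fine-tuning sketch with TRL's SFTTrainer.
# Model, dataset, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# A chat-style dataset with a "messages" column; SFTTrainer applies the
# tokenizer's chat template to format each conversation.
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M-Instruct",  # loaded from the Hub by name
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-sft", max_steps=100),
)
trainer.train()
```

Fine-tuning for agentic use follows the same recipe, only with a dataset of agent-style interactions instead of general chat data.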
Article • Introducing smolagents: simple agents that write actions in code • Dec 31, 2024
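As a quick illustration of what "write actions in code" means, here is a hedged sketch of a smolagents CodeAgent; the tool and default model choices are assumptions based on the library's early examples, not a verbatim excerpt from the article.

```python
# A CodeAgent expresses each action as a Python snippet that it executes,
# rather than emitting a JSON tool call. Tool and model choices are illustrative.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # a web-search tool the generated code can call
    model=HfApiModel(),              # a Hub-hosted model reached through the Inference API
)

# The agent loops: think, write a code action, run it, observe, and repeat until done.
print(agent.run("How many parameters does the largest SmolLM2 checkpoint have?"))
```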
Space • Chat With Janus-Pro-7B 🌍 • A unified multimodal understanding and generation model
Article • Train Custom Models on Hugging Face Spaces with AutoTrain SpaceRunner • By abhishek • May 9, 2024
Article • Training an Object Detection Model with AutoTrain • By abhishek • Jun 5, 2024