
Thomas Wolf PRO

thomwolf

AI & ML interests

NLP and open-source :-)

Organizations

Hugging Face, Natural Language Processing with Transformers, BigScience Workshop, Flax Community, datablations, Training Transformers Together, BigScience Data, Evaluation datasets, HuggingFaceBR4, Godot Engine Demos, OpenAssistant, Evaluation on the Hub, HuggingFaceM4, Simulation Environments Tests and Builds, (De)fusing, HuggingFaceGECLM, CodeParrot, BigCode, Hugging Face H4, CV as NLP, Explorer of Simulate alpha, BigCode Data, Hugging Face Extreme-Scale, Hugging Face H4 Community, Blog-explorers, GAIA, Hugging Face TB Research, Hugging Face Smol Cluster, Open LLM Leaderboard, TTS Eval (OLD), the circle of truth - war scene, Nanotron Research, LeRobot, Journalists on Hugging Face, NewTechKids, MLX Community, Hugging Face Assignments, HuggingFaceFW, TTS AGI, Social Post Explorers, dora-rs, HuggingFaceEval, HuggingFaceFW-Dev, Hugging Face Discord Community, DataComp, Data Agents, Hugging Face FineVideo, HuggingFace Science Team, Art, smol-explorers, Nerdy Face, Hugging Face Science, LeMaterial, open/ acc, Hugging Face Agents Course, Open R1

thomwolf's activity

reacted to clem's post with 🧠🔥❤️ 5 days ago
We crossed 1B+ tokens routed to our inference provider partners on HF, a feature we released just a few days ago.

Just getting started of course, but early users seem to like it, and we're always happy to partner with cool startups in the ecosystem.

Have you been using any of the integrations, and how can we make it better?

https://huggingface.co/blog/inference-providers
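
For reference, here is a minimal sketch of routing a request through an inference provider via huggingface_hub's InferenceClient; the provider name, model ID, and token placeholder below are illustrative assumptions, and the `provider` argument requires a recent huggingface_hub release.

```python
# Minimal sketch: route a chat completion through an inference provider on the Hub.
# Assumes a recent huggingface_hub release with the `provider` argument; the provider
# and model names are illustrative, and "hf_xxx" is a placeholder for your HF token.
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="together",   # one of the partner providers
    token="hf_xxx",        # your Hugging Face access token
)

response = client.chat_completion(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```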
reacted to merve's post with 🔥❤️👍 8 days ago
Your weekly recap of open AI is here, and it's packed with models! merve/feb-14-releases-67af876b404cc27c6d837767

👀 Multimodal
> OpenGVLab released InternVideo 2.5 Chat models, new video LMs with long context
> AIDC released Ovis2 model family along with Ovis dataset, new vision LMs in different sizes (1B, 2B, 4B, 8B, 16B, 34B), with video and OCR support
> ColQwenStella-2b is a multilingual visual retrieval model that is SOTA for its size
> Hoags-2B-Exp is a new multilingual vision LM with contextual reasoning and long-context video understanding

💬 LLMs
A lot of math models!
> The Open-R1 team released OpenR1-Math-220k, a large-scale math reasoning dataset, along with OpenR1-Qwen-7B, a Qwen2.5-Math fine-tune trained on the dataset
> Nomic AI released a new Nomic Embed multilingual retrieval model, an MoE with roughly 500M total parameters and 305M active parameters, outperforming other models of its size
> DeepScaleR-1.5B-Preview is a new DeepSeek-R1-Distill fine-tune using distributed RL on math
> LIMO is a new fine-tune of Qwen2.5-32B-Instruct on math

πŸ—£οΈ Audio
> Zonos-v0.1 is a new family of speech recognition models, which contains the model itself and embeddings

πŸ–ΌοΈ Vision and Image Generation
> We have ported DepthPro of Apple to transformers for your convenience!
> illustrious-xl-v1.0 is a new illustration generation model
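
Since DepthPro is now in transformers, here is a minimal sketch of running it through the depth-estimation pipeline; the checkpoint name "apple/DepthPro-hf" is an assumption, so check the model card for the exact ID and required transformers version.

```python
# Minimal sketch: monocular depth estimation with the transformers port of DepthPro.
# Assumes a transformers release that includes DepthPro and the "apple/DepthPro-hf"
# checkpoint name; verify both against the model card before running.
from transformers import pipeline
from PIL import Image
import requests

depth = pipeline("depth-estimation", model="apple/DepthPro-hf")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any RGB image works
image = Image.open(requests.get(url, stream=True).raw)

result = depth(image)
print(result["depth"])                   # PIL image of the predicted depth map
print(result["predicted_depth"].shape)   # raw depth tensor
```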
reacted to fuzzy-mittenz's post with 🔥 19 days ago
Our importance-matrix distillation of the new Qwen2.5-1M model is extremely efficient and remains very capable in many areas, so we are hoping to use it to research our small AGI character-creation process, which has shown emergent traits and increased functionality in constrained environments.
The method creates a role-play (RP) style interaction in a highly useful, tool-capable environment.
We have a basic method and are collecting data for a full analysis and refinement. The method exploits human language input to impart often abstract traits onto a model, employ characteristics of healthy human reasoning, and identify novel ways of increasing a model's overall functionality; traits observed so far include whistling, bouncing a ball, and repeating certain engagements.
Adding a semblance of human world interactions is so far the best way we have found to create a human-like LLM.
We have attached the paper and examples to the model we are testing this with. If you wish to use it with other models, please be cautious and enjoy yourself. Above all, please keep track of conversations and settings and submit them to the IntelligentEstate email; you will receive a recognition letter and ledger number for your contribution to the project.
Model= Israfel and Thoth IntelligentEstate/Israfel_Qwen2.6-iQ4_K_M-GGUF
reacted to mitkox's post with 🚀👍 26 days ago
llama.cpp is 26.8% faster than ollama.
I upgraded both and, using the same settings, ran the same DeepSeek R1 Distill 1.5B model on the same hardware. It's an apples-to-apples comparison.

Total duration:
llama.cpp 6.85 sec <- 26.8% faster
ollama 8.69 sec

Breakdown by phase:
Model loading
llama.cpp 241 ms <- 2x faster
ollama 553 ms

Prompt processing
llama.cpp 416.04 tokens/s with an eval time of 45.67 ms <- 10x faster
ollama 42.17 tokens/s with an eval time of 498 ms

Token generation
llama.cpp 137.79 tokens/s with an eval time of 6.62 sec <- 13% faster
ollama 122.07 tokens/s with an eval time of 7.64 sec

llama.cpp is LLM inference in C/C++; ollama adds abstraction layers and marketing.

Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.
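
If you want to run a rough comparison like this yourself, here is a sketch that times the OpenAI-compatible chat endpoints both servers expose; the ports are their usual defaults, while the model names and prompt are assumptions you should replace with whatever you actually loaded.

```python
# Rough sketch: time the same prompt against a local llama.cpp server and ollama.
# Assumes llama-server on :8080 and ollama on :11434 (their defaults), both exposing
# OpenAI-compatible /v1/chat/completions endpoints; model names below are placeholders.
import time
import requests

PROMPT = "Explain speculative decoding in two sentences."

ENDPOINTS = {
    "llama.cpp": ("http://localhost:8080/v1/chat/completions", "deepseek-r1-distill-qwen-1.5b"),
    "ollama": ("http://localhost:11434/v1/chat/completions", "deepseek-r1:1.5b"),
}

for name, (url, model) in ENDPOINTS.items():
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": PROMPT}],
        "max_tokens": 256,
        "temperature": 0.0,  # keep the two runs as comparable as possible
    }
    start = time.perf_counter()
    resp = requests.post(url, json=payload, timeout=300)
    elapsed = time.perf_counter() - start
    usage = resp.json().get("usage", {})
    print(f"{name}: {elapsed:.2f}s total, usage={usage}")
```

Wall-clock timing like this lumps together loading, prompt processing, and generation, so it approximates the "total duration" numbers above rather than the per-phase breakdown.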
reacted to lewtun's post with 🔥 about 2 months ago
I was initially pretty sceptical about Meta's Coconut paper [1] because the largest perf gains were reported on toy linguistic problems. However, these results on machine translation are pretty impressive!

https://x.com/casper_hansen_/status/1875872309996855343

Together with the recent PRIME method [2] for scaling RL, reasoning for open models is looking pretty exciting for 2025!

[1] Training Large Language Models to Reason in a Continuous Latent Space (2412.06769)
[2] https://huggingface.co/blog/ganqu/prime
reacted to julien-c's post with 😎🤝👍🤗❤️🔥 2 months ago
After some heated discussion 🔥, we've clarified our intent re: storage limits on the Hub

TL;DR:
- public storage is free, and (unless blatant abuse) unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible
- private storage is paid above a significant free tier (1TB if you have a paid account, 100GB otherwise)

docs: https://huggingface.co/docs/hub/storage-limits

We optimize our infrastructure continuously to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥

cc: @reach-vb @pierric @victor and the HF team
posted an update 3 months ago
We are proud to announce HuggingFaceFW/fineweb-2: a sparkling update to HuggingFaceFW/fineweb with 1000s of 🗣️ languages.

We applied the same data-driven approach that led to SOTA English performance in 🍷 FineWeb to thousands of languages.

🥂 FineWeb2 has 8TB of compressed text data and outperforms other multilingual datasets in our experiments.

The dataset is released under the permissive 📜 ODC-By 1.0 license, and the 💻 code to reproduce it and our evaluations is public.

We will announce a big community project very soon, and we are working on a 📝 blog post walking you through the entire dataset creation process. Stay tuned!

In the meantime, come ask us questions in our discussion space: HuggingFaceFW/discussion

H/t @guipenedo @hynky @lvwerra as well as @vsabolcec Bettina Messmer @negar-foroutan and @mjaggi
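
For anyone who wants to poke at the data right away, here is a minimal sketch that streams one language subset with the datasets library; the config name "fra_Latn" and the "text" column are assumptions based on the original FineWeb conventions, so check the dataset card for the exact subset names.

```python
# Minimal sketch: stream a single FineWeb-2 language subset without downloading all 8TB.
# The config name "fra_Latn" (French, Latin script) and the "text" column are assumptions;
# see the HuggingFaceFW/fineweb-2 dataset card for the actual subset names and schema.
from datasets import load_dataset

ds = load_dataset(
    "HuggingFaceFW/fineweb-2",
    name="fra_Latn",
    split="train",
    streaming=True,   # iterate lazily instead of materializing the full subset
)

for i, example in enumerate(ds):
    print(example["text"][:200])
    if i >= 2:
        break
```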
reacted to garrethlee's post with ❤️🔥 3 months ago
The latest o1 model from OpenAI is still unable to answer whether 9.11 > 9.9 correctly 🤔

A possible explanation? Tokenization - and our latest work investigates how it affects a model's ability to do math!

In this blog post, we discuss:
🔢 The different ways numbers are tokenized in modern LLMs (a tiny illustration is sketched below)
🧪 Our detailed approach to comparing these various methods
🥪 How we got a free boost in arithmetic performance by adding a few lines of code to the base Llama 3 tokenizer
👑 and a definitive, best tokenization method for math in LLMs!

Check out our work here: huggingface/number-tokenization-blog
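
As a tiny illustration of the first point, the sketch below just prints how one tokenizer splits a few numbers; GPT-2 is used only because its tokenizer is small and ungated, and is not the method from the blog post.

```python
# Tiny illustration: different tokenizers split the same numbers very differently,
# which changes what the model actually "sees" when it compares 9.9 and 9.11.
# GPT-2 is used because its tokenizer is ungated; swap in any checkpoint you have
# access to (e.g. a Llama 3 tokenizer) to compare splitting strategies.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

for text in ["9.9", "9.11", "12345"]:
    pieces = tok.tokenize(text)
    print(f"{text!r} -> {pieces}")
# GPT-2 breaks "9.11" into several sub-tokens, so a digit-level comparison like
# 9.9 vs 9.11 is less trivial for the model than it looks to us.
```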
posted an update 3 months ago