
Lysandre

lysandre

AI & ML interests

chief open-source officer @ hf


Organizations

Microsoft, Hugging Face, Google, Miscellaneous, spaCy, HF Internships, Spaces-explorers, Hugging Face Internal Testing Organization, Flax Community, Hugging Face Course, Hugging Face Helping Hand, Spaces Examples, Tools, HuggingFaceM4, team6, HF Canonical Model Maintainers, BigCode, Hugging Face H4, Hugging Face OSS Metrics, HuggingFace Doc Builds, peft-internal-testing, accelerate, huggingPartyParis, adept-hf-collab, OpenAI community, ALBERT community, T5 community, Facebook AI community, BERT community, DistilBERT community, Transformer-XL community, XLNet community, Mistral AI EAP, Hugging Face Assignments, hsramall, yorg, gg-tt, LLHF, SLLHF, lbhf, blhf, Meta Llama, kmhf, nltpt, Hugging Face Party @ PyTorch Conference, qrias, open/ acc, wut?, DDUF, kernels-community, GA VLM

lysandre's activity

posted an update 1 day ago
SmolVLM-2 and SigLIP-2 are now part of transformers in dedicated releases!

They're added on top of the v4.49.0 release, and can be installed from the following tags: v4.49.0-SmolVLM-2 and v4.49.0-SigLIP-2.

This marks a new beginning for the release process of transformers. For the past five years, we've been doing monthly releases featuring many models (v4.49.0, the latest release, features 9 new architectures).

Starting with SmolVLM-2 & SigLIP-2, we'll now additionally release tags supporting new models on a stable branch. These models are therefore directly available for use by installing from the tag itself. These tags will continue to be updated with fixes applied to these models.

Going forward, you can continue to expect software releases following semantic versioning: v4.50.0 will have ~10 new architectures compared to v4.49.0, as well as a myriad of new features, improvements and bug fixes. Accompanying these software releases, we'll release tags offering brand new models as fast as possible, to make them accessible to all immediately.
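
For reference, here is a minimal sketch of what installing from one of these tags and loading the model might look like; the checkpoint name and model class below are illustrative assumptions, not taken from this post.

```python
# Minimal sketch, assuming the v4.49.0-SmolVLM-2 tag is installed from GitHub, e.g.:
#   pip install git+https://github.com/huggingface/transformers@v4.49.0-SmolVLM-2
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"  # hypothetical checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)
```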
reacted to ArthurZ's post with 🔥 3 months ago
reacted to clem's post with ❤️🤗🚀🔥 4 months ago
This is no Woodstock AI but will be fun nonetheless haha. I’ll be hosting a live workshop with team members next week about the Enterprise Hugging Face hub.

1,000 spots available, first-come, first-served, with some surprises during the stream!

You can register and add to your calendar here: https://streamyard.com/watch/JS2jHsUP3NDM
reacted to Tonic's post with 👀 7 months ago
reacted to dvilasuero's post with 🚀🔥 8 months ago
Today is a huge day in Argilla’s history. We couldn’t be more excited to share this with the community: we’re joining Hugging Face!

We’re embracing a larger mission, becoming part of a brilliant and kind team with a shared vision about the future of AI.

Over the past year, we’ve been collaborating with Hugging Face on countless projects: becoming a launch partner of Docker Spaces, empowering the community to clean Alpaca translations into Spanish and other languages, launching argilla/notus-7b-v1 building on Zephyr’s learnings, running the Data is Better Together initiative with hundreds of community contributors, and releasing argilla/OpenHermesPreferences, one of the largest open preference-tuning datasets.

After more than 2,000 Slack messages and over 60 people collaborating for over a year, it already felt like we were part of the same team, pushing in the same direction. After a week of the smoothest transition you can imagine, we’re now the same team.

To those of you who’ve been following us, this won’t be a huge surprise, but it will be a big deal in the coming months. This acquisition means we’ll double down on empowering the community to build and collaborate on high-quality datasets, we’ll bring full support for multimodal datasets, and we’ll be in a better place to collaborate with the Open Source AI community. For enterprises, this means that the Enterprise Hub will unlock highly requested features like single sign-on and integration with Inference Endpoints.

As a founder, I am proud of the Argilla team. We're now part of something bigger and of a larger team, but with the same values, culture, and goals. Grateful to have shared this journey with my beloved co-founders Paco and Amélie.

Finally, huge thanks to the Chief Llama Officer @osanseviero for sparking this and being such a great partner during the acquisition process.

Would love to answer any questions you have so feel free to add them below!
reacted to DmitryRyumin's post with 🤗🔥 10 months ago
😀😲😐😡 New Research Alert - FER-YOLO-Mamba (Facial Expressions Recognition Collection)! 😡😥🥴😱
📄 Title: FER-YOLO-Mamba: Facial Expression Detection and Classification Based on Selective State Space 🔝

📝 Description: FER-YOLO-Mamba is a novel facial expression recognition model that combines the strengths of YOLO and Mamba technologies to efficiently recognize and localize facial expressions.

👥 Authors: Hui Ma, Sen Lei, Turgay Celik, and Heng-Chao Li

🔗 Paper: FER-YOLO-Mamba: Facial Expression Detection and Classification Based on Selective State Space (2405.01828)

📁 Repository: https://github.com/SwjtuMa/FER-YOLO-Mamba

🚀 Added to the Facial Expressions Recognition Collection: DmitryRyumin/facial-expressions-recognition-65f22574e0724601636ddaf7

🔥🔝 See also Facial_Expression_Recognition - ElenaRyumina/Facial_Expression_Recognition (App, co-authored by @DmitryRyumin) 😉

📚 More Papers: more cutting-edge research presented at other conferences in the DmitryRyumin/NewEraAI-Papers collection, curated by @DmitryRyumin

🔍 Keywords: #FERYOLOMamba #FER #YOLO #Mamba #FacialExpressionRecognition #EmotionRecognition #ComputerVision #DeepLearning #MachineLearning #Innovation
reacted to Undi95's post with 🤗❤️👍 10 months ago
Hey everyone,

Just wanted to shout out a massive thank you to all 2000 of you who've followed me on Hugging Face! 🎉 It's incredible to have such an awesome crew backing me up as I dive into all these LLM experiments.

Even though not all my models turn out perfect, I've found some real gems and methods along the way 💎. It's like digging for treasure – sometimes you find nothing, but sometimes you find a pearl, and sometimes you find a new method to try.

Your support and encouragement mean the world to me, and I'm really stoked to keep experimenting and learning. If you had told me some years ago that I would have so many people following me for what I do, I wouldn't have believed it. Here's to more discoveries and adventures ahead! 🚀

Also, big thanks once again, and a huge shoutout to @IkariDev for being there through this journey and supporting me. I'm excited for our future work together and hope we will continue to make people happy! 👏

I want to thank @Gryphe too, since my early work was heavily inspired by MythoMax and the RP/ERP vibe of it. If I'm here today it's probably because of you 😂

I was so close to forgetting @chargoddard and his amazing tool too! What would we do without mergekit in our lives? Thank you! 🙏

See y'all at 3k!
reacted to isidentical's post with ❤️ 10 months ago
reacted to trisfromgoogle's post with ❤️🔥🚀 11 months ago
Very excited to share the first two official Gemma variants from Google! Today at Google Cloud Next, we announced cutting-edge models for code and research!

First, google/codegemma-release-66152ac7b683e2667abdee11 - a new set of code-focused Gemma models at 2B and 7B, in both pretrained and instruction-tuned variants. These exhibit outstanding performance on academic benchmarks and (in my experience) real-life usage. Read more in the excellent Hugging Face blog: https://huggingface.co/blog/codegemma

Second, google/recurrentgemma-release-66152cbdd2d6619cb1665b7a, which is based on the outstanding Google DeepMind research in Griffin: https://arxiv.org/abs/2402.19427. RecurrentGemma is a research variant that enables higher throughput and vastly improved memory usage. We are excited about new architectures, especially in the lightweight Gemma sizes, where innovations like RecurrentGemma can scale modern AI to many more use cases.

For details on the launches of these models, check out our launch blog -- and please do not hesitate to send us feedback. We are excited to see what you build with CodeGemma and RecurrentGemma!

Huge thanks to the Hugging Face team for helping ensure that these models work flawlessly in the Hugging Face ecosystem at launch!
reacted to ArthurZ's post with 🤝 12 months ago
reacted to andrewyng's post with 🤯 12 months ago
DeepLearning.AI just announced a new short course: Open Source Models with Hugging Face 🤗, taught by Hugging Face's own Maria Khalusova, Marc Sun and Younes Belkada!

As many of you already know, Hugging Face has been a game changer by letting developers quickly grab any of hundreds of thousands of already-trained open source models to assemble into new applications. This course teaches you best practices for building this way, including how to search and choose among models.

You'll learn to use the Transformers library and walk through multiple models for text, audio, and image processing, including zero-shot image segmentation, zero-shot audio classification, and speech recognition. You'll also learn to use multimodal models for visual question answering, image search, and image captioning. Finally, you’ll learn how to demo what you build locally, on the cloud, or via an API using Gradio and Hugging Face Spaces.
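
To give a flavour of that workflow, here is a minimal sketch of a Transformers pipeline wrapped in a small Gradio demo; the speech-recognition checkpoint chosen below is an illustrative assumption, not one named in the course.

```python
# Minimal sketch: a Transformers pipeline served through a Gradio demo.
from transformers import pipeline
import gradio as gr

# Hypothetical choice of checkpoint for speech recognition.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

def transcribe(audio_path: str) -> str:
    # Run speech recognition on an uploaded audio file and return the transcript.
    return asr(audio_path)["text"]

demo = gr.Interface(fn=transcribe, inputs=gr.Audio(type="filepath"), outputs="text")

if __name__ == "__main__":
    demo.launch()  # runs locally; the same script can be hosted on a Space
```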

Thank you very much to Hugging Face's wonderful team for working with us on this.

You can sign up for the course here: https://www.deeplearning.ai/short-courses/open-source-models-hugging-face/