- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 26
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 13
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 42
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 22
Collections including paper arxiv:2407.01449
- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
  Paper • 2306.01116 • Published • 33
- HuggingFaceFW/fineweb
  Viewer • Updated • 25B • 367k • 1.98k
- tiiuae/falcon-refinedweb
  Viewer • Updated • 968M • 38.3k • 833
- cerebras/SlimPajama-627B
  Preview • Updated • 70.1k • 454
- CompCap: Improving Multimodal Large Language Models with Composite Captions
  Paper • 2412.05243 • Published • 19
- GraPE: A Generate-Plan-Edit Framework for Compositional T2I Synthesis
  Paper • 2412.06089 • Published • 4
- SILMM: Self-Improving Large Multimodal Models for Compositional Text-to-Image Generation
  Paper • 2412.05818 • Published
- FLAIR: VLM with Fine-grained Language-informed Image Representations
  Paper • 2412.03561 • Published • 1
- NVLM: Open Frontier-Class Multimodal LLMs
  Paper • 2409.11402 • Published • 73
- BRAVE: Broadening the visual encoding of vision-language models
  Paper • 2404.07204 • Published • 19
- Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models
  Paper • 2403.18814 • Published • 47
- Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models
  Paper • 2409.17146 • Published • 108
- MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
  Paper • 2407.21770 • Published • 22
- VILA^2: VILA Augmented VILA
  Paper • 2407.17453 • Published • 40
- The Synergy between Data and Multi-Modal Large Language Models: A Survey from Co-Development Perspective
  Paper • 2407.08583 • Published • 12
- Vision language models are blind
  Paper • 2407.06581 • Published • 83