AI & ML interests
Massive Text Embedding Benchmark
Recent Activity
Papers
- HUME: Measuring the Human-Model Performance Gap in Text Embedding Tasks
- Maintaining MTEB: Towards Long Term Usability and Reproducibility of Embedding Benchmarks
Organization Card
MTEB is a Python framework for evaluating embedding and retrieval systems for both text and images. It covers more than 1,000 languages and a diverse set of tasks, from classics such as classification and clustering to use-case-specialized tasks such as legal, code, or healthcare retrieval.
To get started with mteb, check out our documentation.
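As a quick illustration, here is a minimal evaluation sketch. It is only a sketch assuming a recent mteb release; the model and task names are placeholders that you can swap for your own.

```python
import mteb

# Load a model by name; here, a sentence-transformers model registered in mteb.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

# Select one or more tasks by name (benchmarks can be selected similarly).
tasks = mteb.get_tasks(tasks=["Banking77Classification"])

# Run the evaluation and write the results to disk.
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```

The run writes result files under the output folder, which can be loaded back for analysis (see Loading Results below).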
| Overview | |
|---|---|
| 📈 Leaderboard | The interactive leaderboard of the benchmark |
| Get Started | |
| 🏃 Get Started | Overview of how to use mteb |
| 🤖 Defining Models | How to use existing models and define custom ones |
| 📋 Selecting tasks | How to select tasks, benchmarks, splits, etc. |
| 🏭 Running Evaluation | How to run evaluations, including cache management, speeding up evaluations, etc. |
| 📊 Loading Results | How to load and work with existing model results |
| Overview | |
| 📋 Tasks | Overview of available tasks |
| 📐 Benchmarks | Overview of available benchmarks |
| 🤖 Models | Overview of available models |
| Contributing | |
| 🤖 Adding a model | How to submit a model to MTEB and to the leaderboard |
| 👩‍💻 Adding a dataset | How to add a new task/dataset to MTEB |
| 👩‍💻 Adding a benchmark | How to add a new benchmark to MTEB and to the leaderboard |
| 🤝 Contributing | How to contribute to MTEB and set it up for development |
This is a collection of MTEB papers (not exhaustive).
- MMTEB: Massive Multilingual Text Embedding Benchmark • Paper 2502.13595
- MTEB: Massive Text Embedding Benchmark • Paper 2210.07316
- MIEB: Massive Image Embedding Benchmark • Paper 2504.10471
- The Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text Embedding • Paper 2406.02396
A collection of items related to the MMTEB release.
Datasets (1,411)
- mteb/tu-berlin • 40.3k
- mteb/gld-v2 • 778k
- mteb/ARO-Visual-Relation • 23.9k
- mteb/ARO-Visual-Attribution • 28.7k
- mteb/ARO-Flickr-Order • 5k
- mteb/ARO-COCO-order • 25k
- mteb/oxford-flowers • 8.19k
- mteb/tatdqa_test_beir • 3.58k
- mteb/tabfquad_test_subsampled_beir • 630
- mteb/syntheticDocQA_healthcare_industry_test_beir • 1.16k