LIMIT-small

A retrieval dataset that exposes fundamental theoretical limitations of embedding-based retrieval models. Despite using simple queries like "Who likes Apples?", state-of-the-art embedding models achieve less than 20% recall@100 on the full LIMIT dataset and cannot solve LIMIT-small, even with only 46 documents.

Introduction

Vector embeddings have been tasked with an ever-increasing set of retrieval tasks over the years, with a nascent rise in using them for reasoning, instruction-following, coding, and more. These new benchmarks push embeddings to work for any query and any notion of relevance that could be given. While prior works have pointed out theoretical limitations of vector embeddings, there is a common assumption that these difficulties are exclusively due to unrealistic queries, and those that are not can be overcome with better training data and larger models. In this work, we demonstrate that we may encounter these theoretical limitations in realistic settings with extremely simple queries. We connect known results in learning theory, showing that the number of top-k subsets of documents capable of being returned as the result of some query is limited by the dimension of the embedding. We empirically show that this holds true even if we restrict to k=2, and directly optimize on the test set with free parameterized embeddings. We then create a realistic dataset called LIMIT that stress tests models based on these theoretical results, and observe that even state-of-the-art models fail on this dataset despite the simple nature of the task. Our work shows the limits of embedding models under the existing single vector paradigm and calls for future research to develop methods that can resolve this fundamental limitation.

Links

  • Paper: On the Theoretical Limitations of Embedding-Based Retrieval (https://arxiv.org/abs/2508.21038)

Dataset Details

Queries (1,000): Simple questions asking "Who likes [attribute]?"

  • Examples: "Who likes Quokkas?", "Who likes Joshua Trees?", "Who likes Disco Music?"

Corpus (46 documents): Short biographical texts describing people and their preferences

  • Format: "[Name] likes [attribute1] and [attribute2]."
  • Example: "Geneva Durben likes Quokkas and Apples."

Qrels (2,000): Each query has exactly 2 relevant documents (score=1); the 1,000 queries cover nearly all of the C(46,2) = 1,035 possible 2-document combinations from the 46-document corpus.
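
For reference, the pair count quoted above is just a binomial coefficient, which you can verify with the Python standard library:

import math

print(math.comb(46, 2))  # 1035 possible two-document subsets of the 46-document corpus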

Format

The dataset follows the standard MTEB format with three configurations (a quick sanity check follows the list):

  • default: Query-document relevance judgments (qrels), keys: corpus-id, query-id, score (1 for relevant)
  • queries: Query texts with IDs, keys: _id, text
  • corpus: Document texts with IDs, keys: _id, title (empty), and text
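
If you want to confirm that the qrels match this description, a short check like the one below works. It is a sketch under the assumption (following the list above) that the qrels live in the default configuration's test split; adjust the names if your copy differs:

from collections import Counter
from datasets import load_dataset

# Assumption: the qrels are stored in the "default" config under a "test" split.
qrels = load_dataset("orionweller/LIMIT-small", "default", split="test")

per_query = Counter(row["query-id"] for row in qrels)
print(len(per_query))                           # expect 1,000 distinct queries
print(set(per_query.values()))                  # expect {2}: two relevant docs per query
print(all(row["score"] == 1 for row in qrels))  # expect True: every judgment has score 1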

Purpose

Tests whether embedding models can represent all top-k combinations of relevant documents, based on theoretical results connecting embedding dimension to representational capacity. Despite the simple nature of queries, state-of-the-art models struggle due to fundamental dimensional limitations.
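
The free-embedding experiment mentioned in the Introduction above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' exact setup: it optimizes query and document vectors directly (no text encoder, so it upper-bounds what any trained model could achieve) and measures how many 2-document subsets can be made the exact top-2 under dot-product scoring. At small embedding dimensions the fraction typically stays below 100%, which is the limitation LIMIT is built around:

import itertools
import torch

n_docs, dim = 8, 2  # deliberately tiny; the paper sweeps much larger values
pairs = list(itertools.combinations(range(n_docs), 2))  # all C(8,2) = 28 target subsets

# free embeddings: one vector per document and one per query, trained directly
docs = torch.randn(n_docs, dim, requires_grad=True)
queries = torch.randn(len(pairs), dim, requires_grad=True)
opt = torch.optim.Adam([docs, queries], lr=0.05)

# one query per pair: its two documents are the only relevant ones
targets = torch.zeros(len(pairs), n_docs)
for q, (i, j) in enumerate(pairs):
    targets[q, i] = targets[q, j] = 1.0

for step in range(2000):
    scores = queries @ docs.T  # (num_queries, n_docs) dot-product scores
    # hinge loss: the worst relevant score should beat the best irrelevant score
    pos = scores.masked_fill(targets == 0, float("inf")).min(dim=1).values
    neg = scores.masked_fill(targets == 1, float("-inf")).max(dim=1).values
    loss = torch.relu(1.0 - (pos - neg)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    top2 = (queries @ docs.T).topk(2, dim=1).indices.sort(dim=1).values
    solved = (top2 == torch.tensor(pairs)).all(dim=1).float().mean().item()
print(f"fraction of top-2 subsets realized at dim={dim}: {solved:.1%}")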

Sample Usage

Loading with Hugging Face Datasets

You can load the data with the Hugging Face datasets library:

from datasets import load_dataset

corpus = load_dataset("orionweller/LIMIT-small", "corpus")
# other configurations: "queries" and "default" (the qrels, see Format above)

Evaluation with MTEB

Evaluation was done using the MTEB framework on the v2.0.0 branch (soon to be main). For example:

import mteb
from sentence_transformers import SentenceTransformer

model_name = "sentence-transformers/all-MiniLM-L6-v2"

# load the model through MTEB (falls back to SentenceTransformer(model_name)
# if the model has no MTEB-specific implementation) ...
model = mteb.get_model(model_name)
# ... or load it directly with Sentence Transformers:
# model = SentenceTransformer(model_name)

# select the desired tasks and evaluate
tasks = mteb.get_tasks(tasks=["LIMITSmallRetrieval"])  # or LIMITRetrieval for the full dataset
results = mteb.evaluate(model, tasks=tasks)
print(results)

Citation

@misc{weller2025theoreticallimit,
      title={On the Theoretical Limitations of Embedding-Based Retrieval}, 
      author={Orion Weller and Michael Boratko and Iftekhar Naim and Jinhyuk Lee},
      year={2025},
      eprint={2508.21038},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2508.21038}, 
}