Scheduled Commit
data/retrieval_individual-2cea24b1-39e9-480a-ba22-c617ea05c1fe.jsonl CHANGED
@@ -214,3 +214,5 @@
 {"tstamp": 1724187229.4814, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1724187228.5139, "finish": 1724187229.4814, "ip": "", "conv_id": "ef75907373cc4770aa1226b4b006d434", "model_name": "text-embedding-3-large", "prompt": "who sang the song i wanna be sedated", "output": [["who sang the song i wanna be sedated", "I Wanna Be Sedated\n\n\"I Wanna Be Sedated\" is a song by American punk rock band Ramones, originally released on the band's fourth studio album, Road to Ruin (1978), in September 1978. The B-side of the UK single \"She's the One\" was released on September 21, 1978. The song was later released as a single in the Netherlands in 1979, and in the U.S. in 1980 by RSO Records from the Times Square soundtrack album. It has since remained one of the band's best known songs."]], "corpus": "wikipedia"}
 {"tstamp": 1724187302.8781, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1724187300.3261, "finish": 1724187302.8781, "ip": "", "conv_id": "eb78d335ae7b444ba7efc4eeb895a293", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
 {"tstamp": 1724187302.8781, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1724187300.3261, "finish": 1724187302.8781, "ip": "", "conv_id": "e67a54895bd2417da9741be69bfe06bb", "model_name": "text-embedding-3-large", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
+{"tstamp": 1724189941.8226, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1724189939.5118, "finish": 1724189941.8226, "ip": "", "conv_id": "27eb2d739c34427b9175094b51c4c1ef", "model_name": "GritLM/GritLM-7B", "prompt": "Bank", "output": [["Bank", "Bank\n\nA bank is a financial institution that accepts deposits from the public and creates a demand deposit while simultaneously making loans. Lending activities can be directly performed by the bank or indirectly through capital markets.\nWhereas banks play an important role in financial stability and the economy of a country, most jurisdictions exercise a high degree of regulation over banks. Most countries have institutionalized a system known as fractional-reserve banking, under which banks hold liquid assets equal to only a portion of their current liabilities. In addition to other regulations intended to ensure liquidity, banks are generally subject to minimum capital requirements based on an international set of capital standards, the Basel Accords."]], "corpus": "wikipedia"}
+{"tstamp": 1724189941.8226, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1724189939.5118, "finish": 1724189941.8226, "ip": "", "conv_id": "435f252005154291aa265740ce89c224", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Bank", "output": [["Bank", "Bank\n\nA bank is a financial institution that accepts deposits from the public and creates a demand deposit while simultaneously making loans. Lending activities can be directly performed by the bank or indirectly through capital markets.\nWhereas banks play an important role in financial stability and the economy of a country, most jurisdictions exercise a high degree of regulation over banks. Most countries have institutionalized a system known as fractional-reserve banking, under which banks hold liquid assets equal to only a portion of their current liabilities. In addition to other regulations intended to ensure liquidity, banks are generally subject to minimum capital requirements based on an international set of capital standards, the Basel Accords."]], "corpus": "wikipedia"}
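Each appended line is one self-contained JSON retrieval record (timestamp, model, prompt, retrieved passage, corpus). As a minimal sketch, not part of the commit itself and assuming the file keeps the record layout shown in the diff, the log can be read back like this in Python:

```python
import json

# Path of the file changed in this commit.
path = "data/retrieval_individual-2cea24b1-39e9-480a-ba22-c617ea05c1fe.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "output" holds [query, retrieved_passage] pairs; records in this file carry one pair each.
        query, passage = record["output"][0]
        latency = record["finish"] - record["start"]
        print(f'{record["model_name"]} ({record["corpus"]}, {latency:.2f}s): '
              f'"{record["prompt"]}" -> {passage[:60]}...')
```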