Datasets: mteb /
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
Muennighoff committed
Commit f4b4e44 · verified · 1 parent: e39ddce

Scheduled Commit

data/retrieval_individual-2cea24b1-39e9-480a-ba22-c617ea05c1fe.jsonl CHANGED
@@ -183,3 +183,4 @@
  {"tstamp": 1724147350.4213, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1724147348.2289, "finish": 1724147350.4213, "ip": "", "conv_id": "b1e3a8689e72427ea133d86987f91a26", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Why did the machines in The Matrix keep humans around?", "output": [["Why did the machines in The Matrix keep humans around?", "Scifi Stackexchange\n\nQ: Do the Machines in \"The Matrix\" Adhere to the Three Laws of Robotics? In the \"real world\" of the Matrix, do the machines follow Asimov's Three Rules? Were they even programmed with them?\nEDIT: I realize that killing people goes against the 1st law, but by keeping people in the Matrix (which the machines view as a symbiosis) rather than simply killing off these \"annoying\" creatures, I wonder if the killings can be seen as \"preserving the greater good\" or something along those lines; and thus justify violating the laws.\n\nA: \nA robot may not injure a human being or, through inaction, allow a\n human being to come to harm.\nA robot must obey the orders given to it by human beings, except where\n such orders would conflict with the First Law.\nA robot must protect its own existence as long as such protection does\n not conflict with the First or Second Laws.\n\nNo, no, and no.\n"]], "corpus": "stackexchange"}
  {"tstamp": 1724150176.8905, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1724150174.6886, "finish": 1724150176.8905, "ip": "", "conv_id": "14abff6b4bf4468abd5725f738f67892", "model_name": "GritLM/GritLM-7B", "prompt": "Research on structured dropout as a way to improve transformer models during both training and inference.", "output": [["Research on structured dropout as a way to improve transformer models during both training and inference.", "Title: Reducing Transformer Depth on Demand with Structured Dropout\n\nAbstract: Overparameterized transformer networks have obtained state of the art results in various natural language processing tasks, such as machine translation, language modeling, and question answering. These models contain hundreds of millions of parameters, necessitating a large amount of computation and making them prone to overfitting. In this work, we explore LayerDrop, a form of structured dropout, which has a regularization effect during training and allows for efficient pruning at inference time. In particular, we show that it is possible to select sub-networks of any depth from one large network without having to finetune them and with limited impact on performance. We demonstrate the effectiveness of our approach by improving the state of the art on machine translation, language modeling, summarization, question answering, and language understanding benchmarks. Moreover, we show that our approach leads to small BERT-like models of higher quality compared to training from scratch or using distillation."]], "corpus": "arxiv"}
  {"tstamp": 1724150176.8905, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1724150174.6886, "finish": 1724150176.8905, "ip": "", "conv_id": "d0bfba6aabaf459ab1bfdc30a214e502", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Research on structured dropout as a way to improve transformer models during both training and inference.", "output": [["Research on structured dropout as a way to improve transformer models during both training and inference.", "Title: Reducing Transformer Depth on Demand with Structured Dropout\n\nAbstract: Overparameterized transformer networks have obtained state of the art results in various natural language processing tasks, such as machine translation, language modeling, and question answering. These models contain hundreds of millions of parameters, necessitating a large amount of computation and making them prone to overfitting. In this work, we explore LayerDrop, a form of structured dropout, which has a regularization effect during training and allows for efficient pruning at inference time. In particular, we show that it is possible to select sub-networks of any depth from one large network without having to finetune them and with limited impact on performance. We demonstrate the effectiveness of our approach by improving the state of the art on machine translation, language modeling, summarization, question answering, and language understanding benchmarks. Moreover, we show that our approach leads to small BERT-like models of higher quality compared to training from scratch or using distillation."]], "corpus": "arxiv"}
+ {"tstamp": 1724153001.4495, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1724153001.1784, "finish": 1724153001.4495, "ip": "", "conv_id": "ca0445fa533a465ba9069be9150646a2", "model_name": "GritLM/GritLM-7B", "prompt": "when did the olympics start", "output": [["when did the olympics start", "Ancient Olympic Games\n\nThe ancient Olympic Games (, ta Olympia) were a series of athletic competitions among representatives of city-states and were one of the Panhellenic Games of Ancient Greece. They were held at the Panhellenic religious sanctuary of Olympia, in honor of Zeus, and the Greeks gave them a mythological origin. The originating Olympic Games are traditionally dated to 776 BC. The games were held every four years, or Olympiad, which became a unit of time in historical chronologies. These Olympiads were referred to based on the winner of their stadion sprint, e.g., \"the third year of the eighteenth Olympiad, when Ladas of Argos won the stadion\". They continued to be celebrated when Greece came under Roman rule in the 2nd century BC. Their last recorded celebration was in AD 393, under the emperor Theodosius I, but archaeological evidence indicates that some games were still held after this date. The games likely came to an end under Theodosius II, possibly in connection with a fire that burned down the temple of the Olympian Zeus during his reign.\nDuring the celebration of the games, the Olympic truce (ekecheiría) was announced so that athletes and religious pilgrims could travel from their cities to the games in safety. The prizes for the victors were olive leaf wreaths or crowns. The games became a political tool used by city-states to assert dominance over their rival city states. Politicians would announce political alliances at the games, and in times of war, priests would offer sacrifices to the gods for victory. The games were also used to help spread Hellenistic culture throughout the Mediterranean. The Olympics also featured religious celebrations. The statue of Zeus at Olympia was counted as one of the seven wonders of the ancient world. Sculptors and poets would congregate each Olympiad to display their works of art to would-be patrons."]], "corpus": "wikipedia"}