Datasets: mteb /
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
Commit 0145dd8 (verified) · committed by Muennighoff · 1 parent: dd84bb4

Scheduled Commit

data/retrieval_individual-f36c9f97-e795-4522-8eb7-7dee254b42c7.jsonl CHANGED
@@ -94,3 +94,5 @@
  {"tstamp": 1723556195.4712, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1723556193.2601, "finish": 1723556195.4712, "ip": "", "conv_id": "9671663f29ea4c53ad024d71f252b226", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "corpus": "wikipedia"}
  {"tstamp": 1723556247.3917, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723556245.2004, "finish": 1723556247.3917, "ip": "", "conv_id": "08cae9772bca44bca79300503c7661b8", "model_name": "GritLM/GritLM-7B", "prompt": "A paper arguing that LLMs capture human-like aspects of meaning through conceptual role.", "output": [["A paper arguing that LLMs capture human-like aspects of meaning through conceptual role.", "Title: Meaning without reference in large language models\n\nAbstract: The widespread success of large language models (LLMs) has been met with skepticism that they possess anything like human concepts or meanings. Contrary to claims that LLMs possess no meaning whatsoever, we argue that they likely capture important aspects of meaning, and moreover work in a way that approximates a compelling account of human cognition in which meaning arises from conceptual role. Because conceptual role is defined by the relationships between internal representational states, meaning cannot be determined from a model's architecture, training data, or objective function, but only by examination of how its internal states relate to each other. This approach may clarify why and how LLMs are so successful and suggest how they can be made more human-like."]], "corpus": "arxiv"}
  {"tstamp": 1723556247.3917, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1723556245.2004, "finish": 1723556247.3917, "ip": "", "conv_id": "25a9f4c2df4f43a49788976ec2cb5c6d", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "A paper arguing that LLMs capture human-like aspects of meaning through conceptual role.", "output": [["A paper arguing that LLMs capture human-like aspects of meaning through conceptual role.", "Title: Meaning without reference in large language models\n\nAbstract: The widespread success of large language models (LLMs) has been met with skepticism that they possess anything like human concepts or meanings. Contrary to claims that LLMs possess no meaning whatsoever, we argue that they likely capture important aspects of meaning, and moreover work in a way that approximates a compelling account of human cognition in which meaning arises from conceptual role. Because conceptual role is defined by the relationships between internal representational states, meaning cannot be determined from a model's architecture, training data, or objective function, but only by examination of how its internal states relate to each other. This approach may clarify why and how LLMs are so successful and suggest how they can be made more human-like."]], "corpus": "arxiv"}
+ {"tstamp": 1723559917.5437, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1723559916.8638, "finish": 1723559917.5437, "ip": "", "conv_id": "8c2980c8e45147298499ed5d20c31c45", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "New technique for few-shot object detection that reduces object confusion through context integration.", "output": [["New technique for few-shot object detection that reduces object confusion through context integration.", "Title: Context-Transformer: Tackling Object Confusion for Few-Shot Detection\n\nAbstract: Few-shot object detection is a challenging but realistic scenario, where only a few annotated training images are available for training detectors. A popular approach to handle this problem is transfer learning, i.e., fine-tuning a detector pretrained on a source-domain benchmark. However, such transferred detector often fails to recognize new objects in the target domain, due to low data diversity of training samples. To tackle this problem, we propose a novel Context-Transformer within a concise deep transfer framework. Specifically, Context-Transformer can effectively leverage source-domain object knowledge as guidance, and automatically exploit contexts from only a few training images in the target domain. Subsequently, it can adaptively integrate these relational clues to enhance the discriminative power of detector, in order to reduce object confusion in few-shot scenarios. Moreover, Context-Transformer is flexibly embedded in the popular SSD-style detectors, which makes it a plug-and-play module for end-to-end few-shot learning. Finally, we evaluate Context-Transformer on the challenging settings of few-shot detection and incremental few-shot detection. The experimental results show that, our framework outperforms the recent state-of-the-art approaches."]], "corpus": "arxiv"}
+ {"tstamp": 1723559917.5437, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1723559916.8638, "finish": 1723559917.5437, "ip": "", "conv_id": "6377c93547f547a09cce60d5ebee969f", "model_name": "text-embedding-3-large", "prompt": "New technique for few-shot object detection that reduces object confusion through context integration.", "output": [["New technique for few-shot object detection that reduces object confusion through context integration.", "Title: Context-Transformer: Tackling Object Confusion for Few-Shot Detection\n\nAbstract: Few-shot object detection is a challenging but realistic scenario, where only a few annotated training images are available for training detectors. A popular approach to handle this problem is transfer learning, i.e., fine-tuning a detector pretrained on a source-domain benchmark. However, such transferred detector often fails to recognize new objects in the target domain, due to low data diversity of training samples. To tackle this problem, we propose a novel Context-Transformer within a concise deep transfer framework. Specifically, Context-Transformer can effectively leverage source-domain object knowledge as guidance, and automatically exploit contexts from only a few training images in the target domain. Subsequently, it can adaptively integrate these relational clues to enhance the discriminative power of detector, in order to reduce object confusion in few-shot scenarios. Moreover, Context-Transformer is flexibly embedded in the popular SSD-style detectors, which makes it a plug-and-play module for end-to-end few-shot learning. Finally, we evaluate Context-Transformer on the challenging settings of few-shot detection and incremental few-shot detection. The experimental results show that, our framework outperforms the recent state-of-the-art approaches."]], "corpus": "arxiv"}