Dataset: mteb/ · Modalities: Tabular, Text · Format: json · Libraries: Datasets, Dask
Muennighoff committed 358b051 (verified) · 1 parent: 2ec686d

Scheduled Commit
data/retrieval_individual-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl CHANGED
@@ -362,3 +362,5 @@
 {"tstamp": 1722494663.0505, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722494660.9176, "finish": 1722494663.0505, "ip": "", "conv_id": "c140fcc3b90146e69e7d999c7a3a65ca", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
 {"tstamp": 1722495138.4102, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722495135.7848, "finish": 1722495138.4102, "ip": "", "conv_id": "5c79b101146f4ecd84bd69216e4e1c1b", "model_name": "text-embedding-3-large", "prompt": "what medical LLMs is out there?", "output": [["what medical LLMs is out there?", "Title: LLMs-Healthcare : Current Applications and Challenges of Large Language Models in various Medical Specialties\n\nAbstract: We aim to present a comprehensive overview of the latest advancements in utilizing Large Language Models (LLMs) within the healthcare sector, emphasizing their transformative impact across various medical domains. LLMs have become pivotal in supporting healthcare, including physicians, healthcare providers, and patients. Our review provides insight into the applications of Large Language Models (LLMs) in healthcare, specifically focusing on diagnostic and treatment-related functionalities. We shed light on how LLMs are applied in cancer care, dermatology, dental care, neurodegenerative disorders, and mental health, highlighting their innovative contributions to medical diagnostics and patient care. Throughout our analysis, we explore the challenges and opportunities associated with integrating LLMs in healthcare, recognizing their potential across various medical specialties despite existing limitations. Additionally, we offer an overview of handling diverse data types within the medical field."]], "corpus": "arxiv"}
 {"tstamp": 1722495138.4102, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722495135.7848, "finish": 1722495138.4102, "ip": "", "conv_id": "fb33411d7900485285f5fa5039d64941", "model_name": "text-embedding-004", "prompt": "what medical LLMs is out there?", "output": [["what medical LLMs is out there?", "Title: Large language models in medicine: the potentials and pitfalls\n\nAbstract: Large language models (LLMs) have been applied to tasks in healthcare, ranging from medical exam questions to responding to patient questions. With increasing institutional partnerships between companies producing LLMs and healthcare systems, real world clinical application is coming closer to reality. As these models gain traction, it is essential for healthcare practitioners to understand what LLMs are, their development, their current and potential applications, and the associated pitfalls when utilized in medicine. This review and accompanying tutorial aim to give an overview of these topics to aid healthcare practitioners in understanding the rapidly changing landscape of LLMs as applied to medicine."]], "corpus": "arxiv"}
+ {"tstamp": 1722495416.6769, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722495416.3783, "finish": 1722495416.6769, "ip": "", "conv_id": "5cd756e6eb644210a86d912d562e254e", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}
+ {"tstamp": 1722495416.6769, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722495416.3783, "finish": 1722495416.6769, "ip": "", "conv_id": "a57a91150268410495416350d39d8ff7", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Chinese room\n\nSearle does not disagree that AI research can create machines that are capable of highly intelligent behavior. The Chinese room argument leaves open the possibility that a digital machine could be built that acts more intelligently than a person, but does not have a mind or intentionality in the same way that brains do.\nSearle's \"strong AI hypothesis\" should not be confused with \"strong AI\" as defined by Ray Kurzweil and other futurists, who use the term to describe machine intelligence that rivals or exceeds human intelligence -- that is, artificial general intelligence, human level AI or superintelligence. Kurzweil is referring primarily to the amount of intelligence displayed by the machine, whereas Searle's argument sets no limit on this. Searle argues that a superintelligent machine would not necessarily have a mind and consciousness.\nTuring test\nThe Chinese room implements a version of the Turing test. Alan Turing introduced the test in 1950 to help answer the question \"can machines think?\" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test."]], "corpus": "wikipedia"}