Scheduled Commit
data/retrieval_individual-1948e9c4-e613-4487-b12e-0e988013b994.jsonl
CHANGED
@@ -326,3 +326,5 @@
326 |
{"tstamp": 1722787284.9406, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722787282.6361, "finish": 1722787284.9406, "ip": "", "conv_id": "ff1b619d1c87430493c1da7fc9a3bf83", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "What is the difference between RAM and SSD?", "output": [["What is the difference between RAM and SSD?", "RAM drive\n\nA RAM drive (also called a RAM disk) is a block of random-access memory (primary storage or volatile memory) that a computer's software is treating as if the memory were a disk drive (secondary storage). RAM drives provide high-performance temporary storage for demanding tasks and protect non-volatile storage devices from wearing down, since RAM is not prone to wear from writing, unlike non-volatile flash memory. They are in a sense the reverse of virtual memory: RAM drive uses a volatile fast memory as if it's a nonvolatile slow memory. Virtual memory is the opposite.\nIt is sometimes referred to as a virtual RAM drive or software RAM drive to distinguish it from a hardware RAM drive that uses separate hardware containing RAM, which is a type of battery-backed solid-state drive.\nHistorically primary storage based mass storage devices were conceived to bridge the performance gap between internal memory and secondary storage devices. In the advent of solid-state devices this advantage lost most of its appeal. However, solid-state devices do suffer from wear from frequent writing. Primary memory writes do not so or in far lesser effect. So RAM devices do offer an advantage to store frequently changing data, like temporary or cached information."]], "corpus": "wikipedia"}
327 |
{"tstamp": 1722789234.3241, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722789232.1249, "finish": 1722789234.3241, "ip": "", "conv_id": "37d3e0c08a6c484eb5256df3bd9e64e6", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
328 |
{"tstamp": 1722789296.1676, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722789293.8463, "finish": 1722789296.1676, "ip": "", "conv_id": "8b1ce90e9a164eb5b10d788f15c773f6", "model_name": "text-embedding-004", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: Automated Commit Message Generation with Large Language Models: An Empirical Study and Beyond\n\nAbstract: Commit Message Generation (CMG) approaches aim to automatically generate commit messages based on given code diffs, which facilitate collaboration among developers and play a critical role in Open-Source Software (OSS). Very recently, Large Language Models (LLMs) have demonstrated extensive applicability in diverse code-related task. But few studies systematically explored their effectiveness using LLMs. This paper conducts the first comprehensive experiment to investigate how far we have been in applying LLM to generate high-quality commit messages. Motivated by a pilot analysis, we first clean the most widely-used CMG dataset following practitioners' criteria. Afterward, we re-evaluate diverse state-of-the-art CMG approaches and make comparisons with LLMs, demonstrating the superior performance of LLMs against state-of-the-art CMG approaches. Then, we further propose four manual metrics following the practice of OSS, including Accuracy, Integrity, Applicability, and Readability, and assess various LLMs accordingly. Results reveal that GPT-3.5 performs best overall, but different LLMs carry different advantages. To further boost LLMs' performance in the CMG task, we propose an Efficient Retrieval-based In-Context Learning (ICL) framework, namely ERICommiter, which leverages a two-step filtering to accelerate the retrieval efficiency and introduces semantic/lexical-based retrieval algorithm to construct the ICL examples. Extensive experiments demonstrate the substantial performance improvement of ERICommiter on various LLMs for code diffs of different programming languages. Meanwhile, ERICommiter also significantly reduces the retrieval time while keeping almost the same performance. Our research contributes to the understanding of LLMs' capabilities in the CMG field and provides valuable insights for practitioners seeking to leverage these tools in their workflows."]], "corpus": "arxiv"}
329 | +
{"tstamp": 1722790265.2531, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722790262.9791, "finish": 1722790265.2531, "ip": "", "conv_id": "df0c424f8864402795a77b2a4531f8c6", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "what is the song who are you by the who about", "output": [["what is the song who are you by the who about", "Who Are You (The Who song)\n\n\"Who Are You\" is the title track on the Who's 1978 album, Who Are You, the last album released by the group before Keith Moon's death in September 1978. It was written by Pete Townshend and released as a double-A-sided single with the John Entwistle composition \"Had Enough\", also featured on the album. The song was one of the band's biggest hits in North America, peaking at number 7 in Canada and at number 14 in the United States, and has become one of the band's signature tunes at their live shows. The piano on the track is played by Rod Argent.\nBackground\nThe lyrics of \"Who Are You\" were inspired by an incident Townshend experienced. After going out drinking with Steve Jones and Paul Cook of the Sex Pistols, Townshend was found in a \"Soho doorway\" by a policeman, who recognized him and said he would let him go if he could safely walk away."]], "corpus": "wikipedia"}
330 | +
{"tstamp": 1722790265.2531, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722790262.9791, "finish": 1722790265.2531, "ip": "", "conv_id": "33e4d55af2184e6296992ccf5d109c2a", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "what is the song who are you by the who about", "output": [["what is the song who are you by the who about", "Who Are You (The Who song)\n\nHowever, as explained by Townshend in his autobiography Who I Am, the last verse is about an early incident, that happened on the last North American leg of the 1971 tour: the day before the first concert in Charlotte, North Carolina, Townshend took the opportunity to visit the Meher Spiritual Center—a retreat owned by his guru Meher Baba—in nearby Myrtle Beach.\n\"Who Are You\" was released as a double-A side with the John Entwistle song, \"Had Enough,\" but \"Who Are You\" was the more popular song, reaching the Top 20 in both the United States and UK. The song has since been featured on multiple compilation albums.\nThe single mix contains an alternate acoustic guitar solo to the album mix.\nLyrics\nThe album version includes a third verse compared to the much shorter single. Additionally, a \"lost verse\" mix of the song was released on the 1996 reissue of Who Are You, with a completely different second verse: \"I used to check my reflection / Jumping with my cheap guitar / I must have lost my direction, 'cause I ended up a superstar / One-nighters in the boardroom / Petrify the human brain / You can learn from my mistakes, but you're posing in the glass again.\""]], "corpus": "wikipedia"}
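Each line in the file above is a standalone JSON record with fields such as `tstamp`, `model_name`, `prompt`, `output`, and `corpus`, where `output` holds (query, passage) pairs returned by the retrieval model. Below is a minimal sketch of how such records could be loaded with Python's standard library; it assumes the file has been fetched locally under the same relative path as in this repository.

```python
import json

# Hypothetical local copy of the JSONL file shown in the diff above.
path = "data/retrieval_individual-1948e9c4-e613-4487-b12e-0e988013b994.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Each record stores the model name, the user prompt, the corpus,
        # and the retrieved (query, passage) pairs under "output".
        print(record["model_name"], record["corpus"], record["prompt"])
        for query, passage in record["output"]:
            # Print a short preview of each retrieved passage.
            print("  ->", passage[:80])
```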