Scheduled Commit
data/retrieval_battle-c4bdd700-2c9c-4d9b-92d7-63fecd0c1575.jsonl
CHANGED
@@ -4,3 +4,6 @@
4 |
{"tstamp": 1722436384.9754, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "cde58fd409d44b1b8bd0c8490be0b24c", "0_model_name": "text-embedding-3-large", "0_prompt": "who played charlie barosky on sons of anarchy", "0_output": [["who played charlie barosky on sons of anarchy", "Sons of Anarchy\n\nQ'orianka Kilcher as Kerrianne Larkin-Telford (season 3), Chibs and Fiona's daughter, and Jimmy O's stepdaughter\nMonique Gabriela Curnen as Amelia Dominguez (season 3), the caretaker of Gemma's father Nate\nMichael Fairman as 'Lumpy' Feldstein (season 3), a boxing club owner in Charming\nBob McCracken as Brendan Roarke (seasons 4 & 6–7; guest season 3), a member of the True IRA council\nFrank Potter as Eric Miles (season 4; co-star season 3), a SAMCRO prospect\nWalter Wong as Chris 'V-Lin' Von Lin (seasons 4–6), a SAMCRO prospect\nDanny Trejo as Romero 'Romeo' Parada (season 4; special guest season 5), a high-ranking member of the Galindo Cartel, who is also a C.I.A. agent\nTimothy V. Murphy as Galen O'Shay (seasons 4–6), a leader within the True IRA\nMerle Dandridge as Rita Roosevelt (seasons 4–5), the wife of San Joaquin Sheriff Eli Roosevelt\nBenito Martinez as Luis Torres (season 4; special guest season 5), a member of the Galindo Cartel\nDavid Rees Snell as Agent Grad Nicholas (season 4), an ATF agent\nBilly Brown as August Marks (seasons 5–7), an Oakland drug kingpin and crime boss who works with Damon Pope\nReynaldo Gallegos as 'Fiasco' (seasons 5–7)\nHarold Perrineau as Damon Pope (season 5), an Oakland gangster who oversees the Niners"]], "0_corpus": "wikipedia", "1_conv_id": "738aa5d219da4ac0a96e0e194a92de30", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "who played charlie barosky on sons of anarchy", "1_output": [["who played charlie barosky on sons of anarchy", "Peter Weller\n\nIn 2013, he played the part of the villain in the remake of the \"Hookman\" episode of the original Hawaii Five-O. The part was originally played in 1973 by Jay J. Armes. In seasons six and seven of Sons of Anarchy, he plays the ruthless ex-cop Charles Barosky, a boss at the Stockton port and has dealings with the SAMCRO motorcycle club.\nIn 2017, Weller began appearing as Dr. Paul Vellek, a scientist, on The Last Ship.\nDirecting\nWeller has directed various projects for television, including episodes of Homicide: Life on the Street, Monk and three episodes of Odyssey 5. He directed a 1997 adaptation of Elmore Leonard's Gold Coast starring David Caruso and Marg Helgenberger after previously starring in a version of Leonard's thriller Cat Chaser (1989). He has directed eleven episodes of the series Sons of Anarchy, two in season 4, titled \"Dorylus\" and \"Hands\", three in season 5, titled \"Authority Vested\", \"Toad's Wild Ride\" and \"Darthy\", three in season 6, titled \"One One Six\", \"Los Fantasmas\" and \"Aon Rud Pearsanta\", and three in season 7, titled \"Some Strange Eruption\", \"What a Piece of Work Is Man\" and \"Suits of Woe\". He has directed one episode each season of Longmire since 2012."]], "1_corpus": "wikipedia"}
5 |
{"tstamp": 1722436760.2442, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "b3a4545fe4f0454ca94e9c4bf5d00ccc", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "Explain MTEB scores and their relevance in real-life applications.", "0_output": [["Explain MTEB scores and their relevance in real-life applications.", "Title: Dynamical Quantum Processes of Molecular Beams at Surfaces: Dissociative Adsorption of Hydrogen on Metal Surfaces\n\nAbstract: Due to the improvement of computer power and the development of efficient algorithms it is now possible to combine high-dimensional quantum dynamical calculations of the dissociative adsorption of molecular beams with reliable ab-initio potential energy surfaces (PES). In this brief review two recent examples of such studies of the systems H_2/Cu(111), where adsorption is hindered by a noticeable energy barrier, and H_2/Pd(100), where activated as well as non-activated paths to adsorption exist, are presented. The effect of lateral surface corrugations on the sticking probability in the tunneling and the classical regime and the role of additional parallel momentum are discussed in the context of the H_2/Cu(111) results. For the system H_2/Pd(100) it is shown that the initial decrease of the sticking probability with increasing kinetic energy, which is usually attributed to a precursor mechanism, can be explained by dynamical steering. In addition, the influence of rotation on the adsorption and desorption dynamics is examined."]], "0_corpus": "arxiv", "1_conv_id": "7d81e64c2cb04a4193cbf4ee3119ef85", "1_model_name": "text-embedding-3-large", "1_prompt": "Explain MTEB scores and their relevance in real-life applications.", "1_output": [["Explain MTEB scores and their relevance in real-life applications.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
6 |
{"tstamp": 1722436858.2756, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "c3e05bfd38a54334966b3fee56c50372", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "What is a landmark paper that captions images with sentiments?", "0_output": [["What is a landmark paper that captions images with sentiments?", "Title: SentiCap: Generating Image Descriptions with Sentiments\n\nAbstract: The recent progress on image recognition and language modeling is making automatic description of image content a reality. However, stylized, non-factual aspects of the written description are missing from the current systems. One such style is descriptions with emotions, which is commonplace in everyday communication, and influences decision-making and interpersonal relationships. We design a system to describe an image with emotions, and present a model that automatically generates captions with positive or negative sentiments. We propose a novel switching recurrent neural network with word-level regularization, which is able to produce emotional image captions using only 2000+ training sentences containing sentiments. We evaluate the captions with different automatic and crowd-sourcing metrics. Our model compares favourably in common quality metrics for image captioning. In 84.6% of cases the generated positive captions were judged as being at least as descriptive as the factual captions. Of these positive captions 88% were confirmed by the crowd-sourced workers as having the appropriate sentiment."]], "0_corpus": "arxiv", "1_conv_id": "2c277a48c058469a99f7d8e871e56114", "1_model_name": "text-embedding-004", "1_prompt": "What is a landmark paper that captions images with sentiments?", "1_output": [["What is a landmark paper that captions images with sentiments?", "Title: SentiCap: Generating Image Descriptions with Sentiments\n\nAbstract: The recent progress on image recognition and language modeling is making automatic description of image content a reality. However, stylized, non-factual aspects of the written description are missing from the current systems. One such style is descriptions with emotions, which is commonplace in everyday communication, and influences decision-making and interpersonal relationships. We design a system to describe an image with emotions, and present a model that automatically generates captions with positive or negative sentiments. We propose a novel switching recurrent neural network with word-level regularization, which is able to produce emotional image captions using only 2000+ training sentences containing sentiments. We evaluate the captions with different automatic and crowd-sourcing metrics. Our model compares favourably in common quality metrics for image captioning. In 84.6% of cases the generated positive captions were judged as being at least as descriptive as the factual captions. Of these positive captions 88% were confirmed by the crowd-sourced workers as having the appropriate sentiment."]], "1_corpus": "arxiv"}
7 | +
{"tstamp": 1722438411.1567, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "d78c6a77faa9427f879a69e311af8ff9", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "how to velvet beef", "0_output": [["how to velvet beef", "Beef aging\n\nBeef aging or ageing is a process of preparing beef for consumption by aging it, in order to break down the connective tissue within the meat."]], "0_corpus": "wikipedia", "1_conv_id": "cb6b594524cf4ab5b886defe195e06aa", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "how to velvet beef", "1_output": [["how to velvet beef", "Beef\n\nFrying Meat can be cooked in boiling oil, typically by shallow frying, although deep frying may be used, often for meat enrobed with breadcrumbs as in milanesas or finger steaks. Larger pieces such as steaks may be cooked this way, or meat may be cut smaller as in stir frying, typically an Asian way of cooking: cooking oil with flavorings such as garlic, ginger and onions is put in a very hot wok. Then small pieces of meat are added, followed by ingredients which cook more quickly, such as mixed vegetables. The dish is ready when the ingredients are 'just cooked'.\nMoist heat\nMoist heat cooking methods include braising, pot roasting, stewing and sous-vide. These techniques are often used for cuts of beef that are tougher, as these longer, lower-temperature cooking methods have time to dissolve connecting tissue which otherwise makes meat remain tough after cooking.\nStewing or simmering\nsimmering meat, whole or cut into bite-size pieces, in a water-based liquid with flavorings. This technique may be used as part of pressure cooking.\nBraising\ncooking meats, in a covered container, with small amounts of liquids (usually seasoned or flavored). Unlike stewing, braised meat is not fully immersed in liquid, and usually is browned before the oven step.\nSous-vide\nSous-vide, French for \"under vacuum\", is a method of cooking food sealed in airtight plastic bags in a water bath for a long time—72 hours is not unknown—at an accurately determined temperature much lower than normally used for other types of cooking. The intention is to maintain the integrity of ingredients and achieve very precise control of cooking. Although water is used in the method, only moisture in or added to the food bags is in contact with the food."]], "1_corpus": "wikipedia"}
8 | +
{"tstamp": 1722438494.5451, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "07e1aabf8adb4a288b137072ddca886d", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "What's the purpose of `if __name__ == '__main__'`?", "0_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: Why does it do this ? if - __name__ == '__main__' \nDuplicate of:\nWhat does if __name__== \"__main__\" do? \n\nConsider this code:\nif __name__ == '__main__':\n import pdb\n pdb.run(\"interact()\\n\")\n\nWhat does the following line mean?\nif(__name__=='__main__')\n\nI fainted.\n\nA: That is a check to see if you are directly running the script or if it is included in a library.\nWhen you run a python script like this:\npython myScript.py\n\nIt sends a parameter, telling you to run the programs first method, which is widely called \"main\", so when __name__ is __main__ you know that the program was executed from a command line or double clicked.\n"]], "0_corpus": "stackexchange", "1_conv_id": "8dbcfb951f86457b93498918f768052c", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "What's the purpose of `if __name__ == '__main__'`?", "1_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What does `if name == \"__main__\"` mean in Python? \nPossible Duplicate:\nWhat does <if name==“main”:> do? \n\nI have wrote scripts in Python for quite a while now and I study more of Python as I need it. When reading other people's code I meet if name == \"__main__\": construct quite often. \nWhat is it good for?\n\nA: Basically, \nThere's a distinction between the \"main\" script file and external files which were imported or referenced in another way. If the script is the \"main\" script then the special variable __name__ will equal \"__main__\".\nYou can use this to protect the \"execution\" code from the classes and variables the script has. This enables you to import a script and use classes and variables without actually running that script's main code if it has any.\nSee also: What does if name == “main”: do?\n"]], "1_corpus": "stackexchange"}
9 | +
{"tstamp": 1722438547.9969, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "20fcbcf3531d433693a2c8fd215d20fa", "0_model_name": "voyage-multilingual-2", "0_prompt": "Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "0_output": [["Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "Alpha Centauri\n\nIn 2009, computer simulations showed that a planet might have been able to form near the inner edge of Alpha Centauri B's habitable zone, which extends from from the star. Certain special assumptions, such as considering that the Alpha Centauri pair may have initially formed with a wider separation and later moved closer to each other (as might be possible if they formed in a dense star cluster), would permit an accretion-friendly environment farther from the star. Bodies around Alpha Centauri A would be able to orbit at slightly farther distances due to its stronger gravity. In addition, the lack of any brown dwarfs or gas giants in close orbits around Alpha Centauri make the likelihood of terrestrial planets greater than otherwise. A theoretical study indicates that a radial velocity analysis might detect a hypothetical planet of in Alpha Centauri B's habitable zone.\nRadial velocity measurements of Alpha Centauri B made with the High Accuracy Radial Velocity Planet Searcher spectrograph were sufficiently sensitive to detect a planet within the habitable zone of the star (i.e. with an orbital period P = 200 days), but no planets were detected.\nCurrent estimates place the probability of finding an Earth-like planet around Alpha Centauri at roughly 75%. The observational thresholds for planet detection in the habitable zones by the radial velocity method are currently (2017) estimated to be about for Alpha Centauri A, for Alpha Centauri B, and for Proxima Centauri.\nEarly computer-generated models of planetary formation predicted the existence of terrestrial planets around both Alpha Centauri A and B, but most recent numerical investigations have shown that the gravitational pull of the companion star renders the accretion of planets difficult. Despite these difficulties, given the similarities to the Sun in spectral types, star type, age and probable stability of the orbits, it has been suggested that this stellar system could hold one of the best possibilities for harbouring extraterrestrial life on a potential planet."]], "0_corpus": "wikipedia", "1_conv_id": "c47d106e79854ed2997f0d261f2f5c71", "1_model_name": "text-embedding-004", "1_prompt": "Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "1_output": [["Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "TRAPPIST-1e\n\nTRAPPIST-1e, also designated as 2MASS J23062928-0502285 e, is a rocky, close-to-Earth-sized exoplanet orbiting within the habitable zone around the ultracool dwarf star TRAPPIST-1, located away from Earth in the constellation of Aquarius. Astronomers used the transit method to find the exoplanet, a method that measures the dimming of a star when a planet crosses in front of it.\nThe exoplanet was one of seven discovered orbiting the star using observations from the Spitzer Space Telescope. Three of the seven (e, f, and g) are in the habitable zone or the \"goldilocks zone\". TRAPPIST-1e is similar to Earth's mass, radius, density, gravity, temperature, and stellar flux. 
It is also confirmed that TRAPPIST-1e lacks a cloud-free hydrogen-dominated atmosphere, meaning it is more likely to have a compact atmosphere like the terrestrial planets in the Solar System.\nIn November 2018, researchers determined that of the seven exoplanets in the multi-planetary system, TRAPPIST-1e has the best chance of being an Earth-like ocean planet, and the one most worthy of further study regarding habitability. According to the Habitable Exoplanets Catalog, TRAPPIST-1e is among the best potentially habitable exoplanets discovered.\nPhysical characteristics\nMass, radius, composition and temperature"]], "1_corpus": "wikipedia"}
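Each line in this battle log is a self-contained JSON record: the "type" field stores the verdict (rightvote, tievote, bothbadvote, and so on) and the "0_"/"1_" prefixed fields describe the two anonymized contenders. A minimal sketch, assuming standard-library Python and the file path shown above, of how one might tally those verdicts:

import json
from collections import Counter

# Path of the battle log from this commit.
path = "data/retrieval_battle-c4bdd700-2c9c-4d9b-92d7-63fecd0c1575.jsonl"

votes = Counter()
with open(path, encoding="utf-8") as f:
    for line in f:
        if line.strip():  # skip blank lines
            record = json.loads(line)
            votes[record["type"]] += 1  # e.g. rightvote, tievote, bothbadvote

for vote_type, count in votes.most_common():
    print(f"{vote_type}: {count}")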
data/retrieval_individual-c4bdd700-2c9c-4d9b-92d7-63fecd0c1575.jsonl
CHANGED
@@ -20,3 +20,9 @@
20 |
{"tstamp": 1722436845.5015, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722436845.0959, "finish": 1722436845.5015, "ip": "", "conv_id": "2c277a48c058469a99f7d8e871e56114", "model_name": "text-embedding-004", "prompt": "What is a landmark paper that captions images with sentiments?", "output": [["What is a landmark paper that captions images with sentiments?", "Title: SentiCap: Generating Image Descriptions with Sentiments\n\nAbstract: The recent progress on image recognition and language modeling is making automatic description of image content a reality. However, stylized, non-factual aspects of the written description are missing from the current systems. One such style is descriptions with emotions, which is commonplace in everyday communication, and influences decision-making and interpersonal relationships. We design a system to describe an image with emotions, and present a model that automatically generates captions with positive or negative sentiments. We propose a novel switching recurrent neural network with word-level regularization, which is able to produce emotional image captions using only 2000+ training sentences containing sentiments. We evaluate the captions with different automatic and crowd-sourcing metrics. Our model compares favourably in common quality metrics for image captioning. In 84.6% of cases the generated positive captions were judged as being at least as descriptive as the factual captions. Of these positive captions 88% were confirmed by the crowd-sourced workers as having the appropriate sentiment."]], "corpus": "arxiv"}
21 |
{"tstamp": 1722437856.3717, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722437855.6019, "finish": 1722437856.3717, "ip": "", "conv_id": "720939dccd164266b3656f02d8a865ea", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Looking for a comprehensive Chinese short-text conversation dataset for training neural dialogue models.", "output": [["Looking for a comprehensive Chinese short-text conversation dataset for training neural dialogue models.", "Title: A Large-Scale Chinese Short-Text Conversation Dataset\n\nAbstract: The advancements of neural dialogue generation models show promising results on modeling short-text conversations. However, training such models usually needs a large-scale high-quality dialogue corpus, which is hard to access. In this paper, we present a large-scale cleaned Chinese conversation dataset, LCCC, which contains a base version (6.8million dialogues) and a large version (12.0 million dialogues). The quality of our dataset is ensured by a rigorous data cleaning pipeline, which is built based on a set of rules and a classifier that is trained on manually annotated 110K dialogue pairs. We also release pre-training dialogue models which are trained on LCCC-base and LCCC-large respectively. The cleaned dataset and the pre-training models will facilitate the research of short-text conversation modeling. All the models and datasets are available at https://github.com/thu-coai/CDial-GPT."]], "corpus": "arxiv"}
22 |
{"tstamp": 1722437856.3717, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722437855.6019, "finish": 1722437856.3717, "ip": "", "conv_id": "7fce68dfaa284fc8be2fb939013f9b3b", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Looking for a comprehensive Chinese short-text conversation dataset for training neural dialogue models.", "output": [["Looking for a comprehensive Chinese short-text conversation dataset for training neural dialogue models.", "Title: A Large-Scale Chinese Short-Text Conversation Dataset\n\nAbstract: The advancements of neural dialogue generation models show promising results on modeling short-text conversations. However, training such models usually needs a large-scale high-quality dialogue corpus, which is hard to access. In this paper, we present a large-scale cleaned Chinese conversation dataset, LCCC, which contains a base version (6.8million dialogues) and a large version (12.0 million dialogues). The quality of our dataset is ensured by a rigorous data cleaning pipeline, which is built based on a set of rules and a classifier that is trained on manually annotated 110K dialogue pairs. We also release pre-training dialogue models which are trained on LCCC-base and LCCC-large respectively. The cleaned dataset and the pre-training models will facilitate the research of short-text conversation modeling. All the models and datasets are available at https://github.com/thu-coai/CDial-GPT."]], "corpus": "arxiv"}
23 | +
{"tstamp": 1722438398.9998, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722438398.535, "finish": 1722438398.9998, "ip": "", "conv_id": "d78c6a77faa9427f879a69e311af8ff9", "model_name": "GritLM/GritLM-7B", "prompt": "how to velvet beef", "output": [["how to velvet beef", "Beef aging\n\nBeef aging or ageing is a process of preparing beef for consumption by aging it, in order to break down the connective tissue within the meat."]], "corpus": "wikipedia"}
24 | +
{"tstamp": 1722438398.9998, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722438398.535, "finish": 1722438398.9998, "ip": "", "conv_id": "cb6b594524cf4ab5b886defe195e06aa", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "how to velvet beef", "output": [["how to velvet beef", "Beef\n\nFrying Meat can be cooked in boiling oil, typically by shallow frying, although deep frying may be used, often for meat enrobed with breadcrumbs as in milanesas or finger steaks. Larger pieces such as steaks may be cooked this way, or meat may be cut smaller as in stir frying, typically an Asian way of cooking: cooking oil with flavorings such as garlic, ginger and onions is put in a very hot wok. Then small pieces of meat are added, followed by ingredients which cook more quickly, such as mixed vegetables. The dish is ready when the ingredients are 'just cooked'.\nMoist heat\nMoist heat cooking methods include braising, pot roasting, stewing and sous-vide. These techniques are often used for cuts of beef that are tougher, as these longer, lower-temperature cooking methods have time to dissolve connecting tissue which otherwise makes meat remain tough after cooking.\nStewing or simmering\nsimmering meat, whole or cut into bite-size pieces, in a water-based liquid with flavorings. This technique may be used as part of pressure cooking.\nBraising\ncooking meats, in a covered container, with small amounts of liquids (usually seasoned or flavored). Unlike stewing, braised meat is not fully immersed in liquid, and usually is browned before the oven step.\nSous-vide\nSous-vide, French for \"under vacuum\", is a method of cooking food sealed in airtight plastic bags in a water bath for a long time—72 hours is not unknown—at an accurately determined temperature much lower than normally used for other types of cooking. The intention is to maintain the integrity of ingredients and achieve very precise control of cooking. Although water is used in the method, only moisture in or added to the food bags is in contact with the food."]], "corpus": "wikipedia"}
25 | +
{"tstamp": 1722438442.0224, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722438441.8561, "finish": 1722438442.0224, "ip": "", "conv_id": "07e1aabf8adb4a288b137072ddca886d", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "What's the purpose of `if __name__ == '__main__'`?", "output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: Why does it do this ? if - __name__ == '__main__' \nDuplicate of:\nWhat does if __name__== \"__main__\" do? \n\nConsider this code:\nif __name__ == '__main__':\n import pdb\n pdb.run(\"interact()\\n\")\n\nWhat does the following line mean?\nif(__name__=='__main__')\n\nI fainted.\n\nA: That is a check to see if you are directly running the script or if it is included in a library.\nWhen you run a python script like this:\npython myScript.py\n\nIt sends a parameter, telling you to run the programs first method, which is widely called \"main\", so when __name__ is __main__ you know that the program was executed from a command line or double clicked.\n"]], "corpus": "stackexchange"}
26 | +
{"tstamp": 1722438442.0224, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722438441.8561, "finish": 1722438442.0224, "ip": "", "conv_id": "8dbcfb951f86457b93498918f768052c", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "What's the purpose of `if __name__ == '__main__'`?", "output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What does `if name == \"__main__\"` mean in Python? \nPossible Duplicate:\nWhat does <if name==“main”:> do? \n\nI have wrote scripts in Python for quite a while now and I study more of Python as I need it. When reading other people's code I meet if name == \"__main__\": construct quite often. \nWhat is it good for?\n\nA: Basically, \nThere's a distinction between the \"main\" script file and external files which were imported or referenced in another way. If the script is the \"main\" script then the special variable __name__ will equal \"__main__\".\nYou can use this to protect the \"execution\" code from the classes and variables the script has. This enables you to import a script and use classes and variables without actually running that script's main code if it has any.\nSee also: What does if name == “main”: do?\n"]], "corpus": "stackexchange"}
27 | +
{"tstamp": 1722438514.9489, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722438514.4806, "finish": 1722438514.9489, "ip": "", "conv_id": "20fcbcf3531d433693a2c8fd215d20fa", "model_name": "voyage-multilingual-2", "prompt": "Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "output": [["Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "Alpha Centauri\n\nIn 2009, computer simulations showed that a planet might have been able to form near the inner edge of Alpha Centauri B's habitable zone, which extends from from the star. Certain special assumptions, such as considering that the Alpha Centauri pair may have initially formed with a wider separation and later moved closer to each other (as might be possible if they formed in a dense star cluster), would permit an accretion-friendly environment farther from the star. Bodies around Alpha Centauri A would be able to orbit at slightly farther distances due to its stronger gravity. In addition, the lack of any brown dwarfs or gas giants in close orbits around Alpha Centauri make the likelihood of terrestrial planets greater than otherwise. A theoretical study indicates that a radial velocity analysis might detect a hypothetical planet of in Alpha Centauri B's habitable zone.\nRadial velocity measurements of Alpha Centauri B made with the High Accuracy Radial Velocity Planet Searcher spectrograph were sufficiently sensitive to detect a planet within the habitable zone of the star (i.e. with an orbital period P = 200 days), but no planets were detected.\nCurrent estimates place the probability of finding an Earth-like planet around Alpha Centauri at roughly 75%. The observational thresholds for planet detection in the habitable zones by the radial velocity method are currently (2017) estimated to be about for Alpha Centauri A, for Alpha Centauri B, and for Proxima Centauri.\nEarly computer-generated models of planetary formation predicted the existence of terrestrial planets around both Alpha Centauri A and B, but most recent numerical investigations have shown that the gravitational pull of the companion star renders the accretion of planets difficult. Despite these difficulties, given the similarities to the Sun in spectral types, star type, age and probable stability of the orbits, it has been suggested that this stellar system could hold one of the best possibilities for harbouring extraterrestrial life on a potential planet."]], "corpus": "wikipedia"}
28 | +
{"tstamp": 1722438514.9489, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722438514.4806, "finish": 1722438514.9489, "ip": "", "conv_id": "c47d106e79854ed2997f0d261f2f5c71", "model_name": "text-embedding-004", "prompt": "Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "output": [["Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "TRAPPIST-1e\n\nTRAPPIST-1e, also designated as 2MASS J23062928-0502285 e, is a rocky, close-to-Earth-sized exoplanet orbiting within the habitable zone around the ultracool dwarf star TRAPPIST-1, located away from Earth in the constellation of Aquarius. Astronomers used the transit method to find the exoplanet, a method that measures the dimming of a star when a planet crosses in front of it.\nThe exoplanet was one of seven discovered orbiting the star using observations from the Spitzer Space Telescope. Three of the seven (e, f, and g) are in the habitable zone or the \"goldilocks zone\". TRAPPIST-1e is similar to Earth's mass, radius, density, gravity, temperature, and stellar flux. It is also confirmed that TRAPPIST-1e lacks a cloud-free hydrogen-dominated atmosphere, meaning it is more likely to have a compact atmosphere like the terrestrial planets in the Solar System.\nIn November 2018, researchers determined that of the seven exoplanets in the multi-planetary system, TRAPPIST-1e has the best chance of being an Earth-like ocean planet, and the one most worthy of further study regarding habitability. According to the Habitable Exoplanets Catalog, TRAPPIST-1e is among the best potentially habitable exoplanets discovered.\nPhysical characteristics\nMass, radius, composition and temperature"]], "corpus": "wikipedia"}
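The individual log records each model's side of a battle separately; its "conv_id" matches the "0_conv_id"/"1_conv_id" fields of the corresponding battle record (for example, conv_id 2c277a48c058469a99f7d8e871e56114 appears in both files above). A minimal sketch, under the same assumptions as the previous snippet, of joining the two logs on that id:

import json

battle_path = "data/retrieval_battle-c4bdd700-2c9c-4d9b-92d7-63fecd0c1575.jsonl"
individual_path = "data/retrieval_individual-c4bdd700-2c9c-4d9b-92d7-63fecd0c1575.jsonl"

def load_jsonl(path):
    # One JSON object per non-empty line.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Index the per-model records by their conversation id.
by_conv = {r["conv_id"]: r for r in load_jsonl(individual_path)}

for battle in load_jsonl(battle_path):
    for side in ("0", "1"):
        conv = by_conv.get(battle[f"{side}_conv_id"])  # may be absent in a partial dump
        if conv:
            print(battle["type"], battle[f"{side}_model_name"], conv["prompt"])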