{"tstamp": 1722436132.3737, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "9fc7884f010949148b0c0372ed6bdc60", "0_model_name": "voyage-multilingual-2", "0_prompt": "are rolls and biscuits the same?", "0_output": [["are rolls and biscuits the same?", "English Stackexchange\n\nQ: Is a ship's biscuit a biscuit or a cake? Cakes go hard when they are stale. Biscuits go soft when they are stale.\nSo, what about a ship's biscuit?\n\nA: From an American point of view, none of the terms biscuit, cake, or cookie is an appropriate descriptor for  ships biscuit, which is also known as hardtack and by other terms.  The image below is from wikipedia's article about hardtack.\n\nThe article says\n\nHardtack (or hard tack) is a simple type of cracker or biscuit, made from flour, water, and sometimes salt\n\nFrom that description and from the picture, cracker evidently is the most appropriate term.\nNote, the previously-linked biscuit article begins by distinguishing American usage (where biscuits are like small breadrolls except often leavened with soda, not yeast) from British usage (where biscuits are like cookies or possibly like crackers).\n"]], "0_corpus": "stackexchange", "1_conv_id": "229c16e4a1cb4ca78bcc68c20a11da00", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "are rolls and biscuits the same?", "1_output": [["are rolls and biscuits the same?", "Math Stackexchange\n\nQ: Annihilator for a tensor $T\\in\\wedge V^{\\ast}$ For $T\\in\\wedge^{k} V^{\\ast}$ the annihilator is set\n$$an(T)= \\{\\phi\\in V^{\\ast}\\mid \\phi\\wedge T=0\\}$$\nThen I need to prove that $dim(an(T))\\leq k$ and is equal iff $T$ is decomposable ($i.e.$, $T=\\phi_1\\wedge\\cdots \\wedge \\phi_k$ for $\\phi_j \\in V^{\\ast}$).\nI do not know how to attack this problem, suppose that has dimension greater that $k$ but I get no contradiction or at least do not know how to find it.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722436145.3122, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "fbbb80e6e1cd4354a3097f1d997a3063", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "is chief executive officer capitalized?", "0_output": [["is chief executive officer capitalized?", "English Stackexchange\n\nQ: When should titles like \"captain\" or \"admiral\" be capitalized? Please excuse the contrived example below, but I am wondering if this example is properly capitalised? I'm assuming that capitals are used when addressing someone directly, but lower case is used elsewhere.\nCan someone give some rules on this? \n\nThe captain walked into the room. \n  \"Hi, Captain,\" said the admiral.\n  \"Hi, Sir,\" replied the captain.\n  \"Call me Admiral.\"\n  \"Hi Admiral.\"\n  The captain and admiral stepped forward.\n  \"Prime Minister, this is Captain Jones,\" said Admiral Brown.\n  \"Pleased to meet you,\" said the prime minister.\n\n\nA: At least according to CMOS, there are a few rules that apply. (Your mileage may vary with AP or Oxford rules, I just happen to have CMOS within grabbing distance.)\n7.16 says that titles used in place of names in direct address are capitalized. (e.g., \"'Hi, Captain'\")\n7.17 says that titles used alone in place of a name are generally lowercased (e.g., \"replied the captain\").\n(Edit: Cited the wrong rule.)\n"]], "0_corpus": "stackexchange", "1_conv_id": "18f8939df9cf41d69bd7782b9f3819ce", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "is chief executive officer capitalized?", "1_output": [["is chief executive officer capitalized?", "English Stackexchange\n\nQ: Correct capitalization for \"of\" in an acronym definition For an acronym that includes the word \"of\", do you capitalize its usage within the definition (or expansion) of the acronym\nFor example, in the follow sentence:\nMy coworker Steve suffers from Complete Avoidance of Work Syndrome (CAOWS).\nShould of be written as Of?\n\nA: I wouldn't capitalize the o in \"of\".\nHere's a well known example.\nTOEFL: \n\nTest of English as a Foreign Language.\n\nTypically, prepositions and articles should not be capitalized.\nFor a comprehensive capitalization rule list, see here.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722436325.5236, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "24959d5ccc4c4f3ca75055b9cce2a838", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "who plays meredith quill in guardians of the galaxy 2", "0_output": [["who plays meredith quill in guardians of the galaxy 2", "List of Marvel Comics characters: Q\n\nMeredith Quill appears in films set in the Marvel Cinematic Universe (MCU), portrayed by Laura Haddock.\nFirst appearing in Guardians of the Galaxy (2014), she gives Peter Quill a mixtape of her favorite songs before dying of terminal cancer.\nMeredith appears in the prologue of Guardians of the Galaxy Vol. 2, in which it is revealed she pursued a relationship with Ego, who secretly gave her the tumor that killed her.\nMeredith appears in Guardians of the Galaxy: The Telltale Series, voiced by Courtenay Taylor. In a flashback, Peter Quill promises her not to use violence if he confronts bullies. Additionally, she previously encountered Yondu, who took in Peter on her advice. Amidst Peter's apparent death, he sees a vision of Meredith being revived by the Eternity Forge. Following Hala the Accuser's defeat, the player has the option to either have Mantis help Peter make contact with his mother's spirit or use the last of the Eternity Forge's energy to revive her.\nMeredith appears in Marvel's Guardians of the Galaxy, voiced by Mylène Dinh-Robic. In a flashback, she gifts Peter his father's guns for his 13th birthday before she is later killed by the Chitauri. Additionally, the Magus uses an illusion of her in a failed attempt to coax Peter into submission."]], "0_corpus": "wikipedia", "1_conv_id": "d94f94041f324178bfb62096f35d40b7", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "who plays meredith quill in guardians of the galaxy 2", "1_output": [["who plays meredith quill in guardians of the galaxy 2", "List of Marvel Comics characters: Q\n\nMeredith Quill appears in films set in the Marvel Cinematic Universe (MCU), portrayed by Laura Haddock.\nFirst appearing in Guardians of the Galaxy (2014), she gives Peter Quill a mixtape of her favorite songs before dying of terminal cancer.\nMeredith appears in the prologue of Guardians of the Galaxy Vol. 2, in which it is revealed she pursued a relationship with Ego, who secretly gave her the tumor that killed her.\nMeredith appears in Guardians of the Galaxy: The Telltale Series, voiced by Courtenay Taylor. In a flashback, Peter Quill promises her not to use violence if he confronts bullies. Additionally, she previously encountered Yondu, who took in Peter on her advice. Amidst Peter's apparent death, he sees a vision of Meredith being revived by the Eternity Forge. Following Hala the Accuser's defeat, the player has the option to either have Mantis help Peter make contact with his mother's spirit or use the last of the Eternity Forge's energy to revive her.\nMeredith appears in Marvel's Guardians of the Galaxy, voiced by Mylène Dinh-Robic. In a flashback, she gifts Peter his father's guns for his 13th birthday before she is later killed by the Chitauri. Additionally, the Magus uses an illusion of her in a failed attempt to coax Peter into submission."]], "1_corpus": "wikipedia"}
{"tstamp": 1722436384.9754, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "cde58fd409d44b1b8bd0c8490be0b24c", "0_model_name": "text-embedding-3-large", "0_prompt": "who played charlie barosky on sons of anarchy", "0_output": [["who played charlie barosky on sons of anarchy", "Sons of Anarchy\n\nQ'orianka Kilcher as Kerrianne Larkin-Telford (season 3), Chibs and Fiona's daughter, and Jimmy O's stepdaughter\nMonique Gabriela Curnen as Amelia Dominguez (season 3), the caretaker of Gemma's father Nate\nMichael Fairman as 'Lumpy' Feldstein (season 3), a boxing club owner in Charming\nBob McCracken as Brendan Roarke (seasons 4 & 6–7; guest season 3), a member of the True IRA council\nFrank Potter as Eric Miles (season 4; co-star season 3), a SAMCRO prospect\nWalter Wong as Chris 'V-Lin' Von Lin (seasons 4–6), a SAMCRO prospect\nDanny Trejo as Romero 'Romeo' Parada (season 4; special guest season 5), a high-ranking member of the Galindo Cartel, who is also a C.I.A. agent\nTimothy V. Murphy as Galen O'Shay (seasons 4–6), a leader within the True IRA\nMerle Dandridge as Rita Roosevelt (seasons 4–5), the wife of San Joaquin Sheriff Eli Roosevelt\nBenito Martinez as Luis Torres (season 4; special guest season 5), a member of the Galindo Cartel\nDavid Rees Snell as Agent Grad Nicholas (season 4), an ATF agent\nBilly Brown as August Marks (seasons 5–7), an Oakland drug kingpin and crime boss who works with Damon Pope\nReynaldo Gallegos as 'Fiasco' (seasons 5–7)\nHarold Perrineau as Damon Pope (season 5), an Oakland gangster who oversees the Niners"]], "0_corpus": "wikipedia", "1_conv_id": "738aa5d219da4ac0a96e0e194a92de30", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "who played charlie barosky on sons of anarchy", "1_output": [["who played charlie barosky on sons of anarchy", "Peter Weller\n\nIn 2013, he played the part of the villain in the remake of the \"Hookman\" episode of the original Hawaii Five-O. The part was originally played in 1973 by Jay J. Armes. In seasons six and seven of Sons of Anarchy, he plays the ruthless ex-cop Charles Barosky, a boss at the Stockton port and has dealings with the SAMCRO motorcycle club.\nIn 2017, Weller began appearing as Dr. Paul Vellek, a scientist, on The Last Ship.\nDirecting\nWeller has directed various projects for television, including episodes of Homicide: Life on the Street, Monk and three episodes of Odyssey 5. He directed a 1997 adaptation of Elmore Leonard's Gold Coast starring David Caruso and Marg Helgenberger after previously starring in a version of Leonard's thriller Cat Chaser (1989). He has directed eleven episodes of the series Sons of Anarchy, two in season 4, titled \"Dorylus\" and \"Hands\", three in season 5, titled \"Authority Vested\", \"Toad's Wild Ride\" and \"Darthy\", three in season 6, titled \"One One Six\", \"Los Fantasmas\" and \"Aon Rud Pearsanta\", and three in season 7, titled \"Some Strange Eruption\", \"What a Piece of Work Is Man\" and \"Suits of Woe\". He has directed one episode each season of Longmire since 2012."]], "1_corpus": "wikipedia"}
{"tstamp": 1722436760.2442, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "b3a4545fe4f0454ca94e9c4bf5d00ccc", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "Explain MTEB scores and their relevance in real-life applications.", "0_output": [["Explain MTEB scores and their relevance in real-life applications.", "Title: Dynamical Quantum Processes of Molecular Beams at Surfaces: Dissociative Adsorption of Hydrogen on Metal Surfaces\n\nAbstract: Due to the improvement of computer power and the development of efficient algorithms it is now possible to combine high-dimensional quantum dynamical calculations of the dissociative adsorption of molecular beams with reliable ab-initio potential energy surfaces (PES). In this brief review two recent examples of such studies of the systems H_2/Cu(111), where adsorption is hindered by a noticeable energy barrier, and H_2/Pd(100), where activated as well as non-activated paths to adsorption exist, are presented. The effect of lateral surface corrugations on the sticking probability in the tunneling and the classical regime and the role of additional parallel momentum are discussed in the context of the H_2/Cu(111) results. For the system H_2/Pd(100) it is shown that the initial decrease of the sticking probability with increasing kinetic energy, which is usually attributed to a precursor mechanism, can be explained by dynamical steering. In addition, the influence of rotation on the adsorption and desorption dynamics is examined."]], "0_corpus": "arxiv", "1_conv_id": "7d81e64c2cb04a4193cbf4ee3119ef85", "1_model_name": "text-embedding-3-large", "1_prompt": "Explain MTEB scores and their relevance in real-life applications.", "1_output": [["Explain MTEB scores and their relevance in real-life applications.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
{"tstamp": 1722436858.2756, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "c3e05bfd38a54334966b3fee56c50372", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "What is a landmark paper that captions images with sentiments?", "0_output": [["What is a landmark paper that captions images with sentiments?", "Title: SentiCap: Generating Image Descriptions with Sentiments\n\nAbstract: The recent progress on image recognition and language modeling is making automatic description of image content a reality. However, stylized, non-factual aspects of the written description are missing from the current systems. One such style is descriptions with emotions, which is commonplace in everyday communication, and influences decision-making and interpersonal relationships. We design a system to describe an image with emotions, and present a model that automatically generates captions with positive or negative sentiments. We propose a novel switching recurrent neural network with word-level regularization, which is able to produce emotional image captions using only 2000+ training sentences containing sentiments. We evaluate the captions with different automatic and crowd-sourcing metrics. Our model compares favourably in common quality metrics for image captioning. In 84.6% of cases the generated positive captions were judged as being at least as descriptive as the factual captions. Of these positive captions 88% were confirmed by the crowd-sourced workers as having the appropriate sentiment."]], "0_corpus": "arxiv", "1_conv_id": "2c277a48c058469a99f7d8e871e56114", "1_model_name": "text-embedding-004", "1_prompt": "What is a landmark paper that captions images with sentiments?", "1_output": [["What is a landmark paper that captions images with sentiments?", "Title: SentiCap: Generating Image Descriptions with Sentiments\n\nAbstract: The recent progress on image recognition and language modeling is making automatic description of image content a reality. However, stylized, non-factual aspects of the written description are missing from the current systems. One such style is descriptions with emotions, which is commonplace in everyday communication, and influences decision-making and interpersonal relationships. We design a system to describe an image with emotions, and present a model that automatically generates captions with positive or negative sentiments. We propose a novel switching recurrent neural network with word-level regularization, which is able to produce emotional image captions using only 2000+ training sentences containing sentiments. We evaluate the captions with different automatic and crowd-sourcing metrics. Our model compares favourably in common quality metrics for image captioning. In 84.6% of cases the generated positive captions were judged as being at least as descriptive as the factual captions. Of these positive captions 88% were confirmed by the crowd-sourced workers as having the appropriate sentiment."]], "1_corpus": "arxiv"}
{"tstamp": 1722438411.1567, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "d78c6a77faa9427f879a69e311af8ff9", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "how to velvet beef", "0_output": [["how to velvet beef", "Beef aging\n\nBeef aging or ageing is a process of preparing beef for consumption by aging it, in order to break down the connective tissue within the meat."]], "0_corpus": "wikipedia", "1_conv_id": "cb6b594524cf4ab5b886defe195e06aa", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "how to velvet beef", "1_output": [["how to velvet beef", "Beef\n\nFrying Meat can be cooked in boiling oil, typically by shallow frying, although deep frying may be used, often for meat enrobed with breadcrumbs as in milanesas or finger steaks. Larger pieces such as steaks may be cooked this way, or meat may be cut smaller as in stir frying, typically an Asian way of cooking: cooking oil with flavorings such as garlic, ginger and onions is put in a very hot wok. Then small pieces of meat are added, followed by ingredients which cook more quickly, such as mixed vegetables. The dish is ready when the ingredients are 'just cooked'.\nMoist heat\nMoist heat cooking methods include braising, pot roasting, stewing and sous-vide. These techniques are often used for cuts of beef that are tougher, as these longer, lower-temperature cooking methods have time to dissolve connecting tissue which otherwise makes meat remain tough after cooking.\nStewing or simmering\nsimmering meat, whole or cut into bite-size pieces, in a water-based liquid with flavorings. This technique may be used as part of pressure cooking.\nBraising\ncooking meats, in a covered container, with small amounts of liquids (usually seasoned or flavored). Unlike stewing, braised meat is not fully immersed in liquid, and usually is browned before the oven step.\nSous-vide\nSous-vide, French for \"under vacuum\", is a method of cooking food sealed in airtight plastic bags in a water bath for a long time—72 hours is not unknown—at an accurately determined temperature much lower than normally used for other types of cooking. The intention is to maintain the integrity of ingredients and achieve very precise control of cooking. Although water is used in the method, only moisture in or added to the food bags is in contact with the food."]], "1_corpus": "wikipedia"}
{"tstamp": 1722438494.5451, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "07e1aabf8adb4a288b137072ddca886d", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "What's the purpose of `if __name__ == '__main__'`?", "0_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: Why does it do this ? if - __name__ == '__main__' \nDuplicate of:\nWhat does if __name__== \"__main__\" do? \n\nConsider this code:\nif __name__ == '__main__':\n    import pdb\n    pdb.run(\"interact()\\n\")\n\nWhat does the following line mean?\nif(__name__=='__main__')\n\nI fainted.\n\nA: That is a check to see if you are directly running the script or if it is included in a library.\nWhen you run a python script like this:\npython myScript.py\n\nIt sends a parameter, telling you to run the programs first method, which is widely called \"main\", so when __name__ is __main__ you know that the program was executed from a command line or double clicked.\n"]], "0_corpus": "stackexchange", "1_conv_id": "8dbcfb951f86457b93498918f768052c", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "What's the purpose of `if __name__ == '__main__'`?", "1_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What does `if name == \"__main__\"` mean in Python? \nPossible Duplicate:\nWhat does <if name==“main”:> do? \n\nI have wrote scripts in Python for quite a while now and I study more of Python as I need it. When reading other people's code I meet if name == \"__main__\": construct quite often. \nWhat is it good for?\n\nA: Basically, \nThere's a distinction between the \"main\" script file and external files which were imported or referenced in another way. If the script is the \"main\" script then the special variable __name__ will equal \"__main__\".\nYou can use this to protect the \"execution\" code from the classes and variables the script has. This enables you to import a script and use classes and variables without actually running that script's main code if it has any.\nSee also: What does if name == “main”: do?\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722438547.9969, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "20fcbcf3531d433693a2c8fd215d20fa", "0_model_name": "voyage-multilingual-2", "0_prompt": "Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "0_output": [["Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "Alpha Centauri\n\nIn 2009, computer simulations showed that a planet might have been able to form near the inner edge of Alpha Centauri B's habitable zone, which extends from  from the star. Certain special assumptions, such as considering that the Alpha Centauri pair may have initially formed with a wider separation and later moved closer to each other (as might be possible if they formed in a dense star cluster), would permit an accretion-friendly environment farther from the star. Bodies around Alpha Centauri A would be able to orbit at slightly farther distances due to its stronger gravity. In addition, the lack of any brown dwarfs or gas giants in close orbits around Alpha Centauri make the likelihood of terrestrial planets greater than otherwise. A theoretical study indicates that a radial velocity analysis might detect a hypothetical planet of  in Alpha Centauri B's habitable zone.\nRadial velocity measurements of Alpha Centauri B made with the High Accuracy Radial Velocity Planet Searcher spectrograph were sufficiently sensitive to detect a  planet within the habitable zone of the star (i.e. with an orbital period P = 200 days), but no planets were detected.\nCurrent estimates place the probability of finding an Earth-like planet around Alpha Centauri at roughly 75%. The observational thresholds for planet detection in the habitable zones by the radial velocity method are currently (2017) estimated to be about  for Alpha Centauri A,  for Alpha Centauri B, and  for Proxima Centauri.\nEarly computer-generated models of planetary formation predicted the existence of terrestrial planets around both Alpha Centauri A and B, but most recent numerical investigations have shown that the gravitational pull of the companion star renders the accretion of planets difficult. Despite these difficulties, given the similarities to the Sun in spectral types, star type, age and probable stability of the orbits, it has been suggested that this stellar system could hold one of the best possibilities for harbouring extraterrestrial life on a potential planet."]], "0_corpus": "wikipedia", "1_conv_id": "c47d106e79854ed2997f0d261f2f5c71", "1_model_name": "text-embedding-004", "1_prompt": "Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "1_output": [["Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "TRAPPIST-1e\n\nTRAPPIST-1e, also designated as 2MASS J23062928-0502285 e, is a rocky, close-to-Earth-sized exoplanet orbiting within the habitable zone around the ultracool dwarf star TRAPPIST-1, located  away from Earth in the constellation of Aquarius. Astronomers used the transit method to find the exoplanet, a method that measures the dimming of a star when a planet crosses in front of it.\nThe exoplanet was one of seven discovered orbiting the star using observations from the Spitzer Space Telescope. Three of the seven (e, f, and g) are in the habitable zone or the \"goldilocks zone\". TRAPPIST-1e is similar to Earth's mass, radius, density, gravity, temperature, and stellar flux. It is also confirmed that TRAPPIST-1e lacks a cloud-free hydrogen-dominated atmosphere, meaning it is more likely to have a compact atmosphere like the terrestrial planets in the Solar System.\nIn November 2018, researchers determined that of the seven exoplanets in the multi-planetary system, TRAPPIST-1e has the best chance of being an Earth-like ocean planet, and the one most worthy of further study regarding habitability. According to the Habitable Exoplanets Catalog, TRAPPIST-1e is among the best potentially habitable exoplanets discovered.\nPhysical characteristics\nMass, radius, composition and temperature"]], "1_corpus": "wikipedia"}
{"tstamp": 1722439019.271, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "32a38d7fae954f8db724ef0a1306fe9f", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "what is the capital of france", "0_output": [["what is the capital of france", "List of capitals of France\n\nThis is a chronological list of capitals of France. The capital of France has been Paris since its liberation in 1944."]], "0_corpus": "wikipedia", "1_conv_id": "c389f4c4fb7541ac86e3ed02f2291042", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "what is the capital of france", "1_output": [["what is the capital of france", "List of capitals of France\n\nThis is a chronological list of capitals of France. The capital of France has been Paris since its liberation in 1944."]], "1_corpus": "wikipedia"}
{"tstamp": 1722439070.133, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "1260d56ada9a43e981ea9e442c8f5495", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "where was ismail kadare born", "0_output": [["where was ismail kadare born", "Ismail Kadare\n\nIsmail Kadare (; 28 January 1936 – 1 July 2024) was an Albanian novelist, poet, essayist, screenwriter, and playwright. He was a leading international literary figure and intellectual. He focused on poetry until the publication of his first novel, The General of the Dead Army, which made him famous internationally.\nKadare is regarded by some as one of the greatest writers and intellectuals of the 20th and 21st centuries, and as a universal voice against totalitarianism. Living in Albania during a time of strict censorship, he devised stratagems to outwit Communist censors who had banned three of his books, using devices such as parable, myth, fable, folk-tale, allegory, and legend, sprinkled with double-entendre, allusion, insinuation, satire, and coded messages. In 1990, to escape the Communist regime and its Sigurimi secret police, he defected to Paris. From the 1990s he was asked by both major political parties in Albania to become a consensual President of the country, but declined. In 1996, France made him a foreign associate of the , and in 2016, he was a Commandeur de la Légion d'Honneur recipient."]], "0_corpus": "wikipedia", "1_conv_id": "db3b40e3d9774a709df8f539dae94774", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "where was ismail kadare born", "1_output": [["where was ismail kadare born", "Ismail Kadare\n\nEarly life and education Ismail Kadare was born on 28 January 1936, in the Kingdom of Albania during the reign of King Zog I. He was born in Gjirokastër, an historic Ottoman Empire fortress mountain city of tall stone houses in southern Albania, a dozen miles from the border with Greece. He lived there on a crooked, narrow street known as Lunatics' Lane.\nIsmail's parents were Halit Kadare, a post office employee, and Hatixhe Dobi, a homemaker, who had married in 1933 when she was 17. On his mother's side, his great-grandfather was a Bejtexhi of the Bektashi Order, known as Hoxhë Dobi. Though he was born into a Muslim family, he was an atheist.\nThree years after Kadare was born, Italian Prime Minister Benito Mussolini's troops invaded Albania and ousted the king. Italian rule followed. He was nine years old when the Italian troops were withdrawn, and the communist-led People's Socialist Republic of Albania was established.\nKadare attended primary and secondary schools in Gjirokastër. He then studied Languages and Literature at the Faculty of History and Philology of the University of Tirana. In 1956, he received a teacher's diploma. He lived in Tirana until moving to France in 1990.\nLiterary career\nEarly\nAt age 11, Kadare read William Shakespeare's play Macbeth. He recalled years later: \"Because I did not yet understand that I could simply purchase it in a bookstore, I copied much of it by hand and took it home. My childhood imagination pushed me to feel like a co-author of the play.\""]], "1_corpus": "wikipedia"}
{"tstamp": 1722439202.6928, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "91b5c09508374591a1cc1ab11e7960d0", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "what the difference between mega charizard x and y?", "0_output": [["what the difference between mega charizard x and y?", "Gaming Stackexchange\n\nQ: What are the differences between Pokemon X and Pokemon Y? I want to know what are the differences between version X and Y, like pokemons, places, items, megaevolutions, etc.\n\nA: Pokemon Exclusive to Version X\n\n\n*\n\n*Mega Charizard X\n\n*Mega Mewtwo X\n\n*Staryu/Starmie\n\n*Pinsir\n\n*Houndour/Houndoom\n\n*Poochyena/Mightyena\n\n*Aron/Lairon/Aggron\n\n*Sawk\n\n*Swirlix/Slurpuff\n\n*Clauncher/Clawitzer\n\n*Xerneas\n\n\nPokemon Exclusive to Version Y\n\n\n*\n\n*Mega Charizard Y\n\n*Mega Mewtwo Y\n\n*Shellder/Cloyster\n\n*Heracross\n\n*Larvitar/Pupitar/Tyranitar\n\n*Electrike/Manectric\n\n*Purrloin/Liepard\n\n*Throh\n\n*Spritzee/Aromatisse\n\n*Skrelp/Dragalge\n\n*Yveltal\n\n\nSource - Serebii\nIt's also been noted that there are minor story differences:\n\nWithout giving away too much, both legendaries do end up having an impact on the story, so it's a matter of preference as to which one you want to see.\n\nSource: Shack News\nAs always the main thing you should consider when choosing between Pokemon X and Pokemon Y is which version exclusive Pokemon appeal to you the most.\n"]], "0_corpus": "stackexchange", "1_conv_id": "66440ef005a64e6ca18305711af5bb40", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "what the difference between mega charizard x and y?", "1_output": [["what the difference between mega charizard x and y?", "Gaming Stackexchange\n\nQ: What are the differences between Pokemon X and Pokemon Y? I want to know what are the differences between version X and Y, like pokemons, places, items, megaevolutions, etc.\n\nA: Pokemon Exclusive to Version X\n\n\n*\n\n*Mega Charizard X\n\n*Mega Mewtwo X\n\n*Staryu/Starmie\n\n*Pinsir\n\n*Houndour/Houndoom\n\n*Poochyena/Mightyena\n\n*Aron/Lairon/Aggron\n\n*Sawk\n\n*Swirlix/Slurpuff\n\n*Clauncher/Clawitzer\n\n*Xerneas\n\n\nPokemon Exclusive to Version Y\n\n\n*\n\n*Mega Charizard Y\n\n*Mega Mewtwo Y\n\n*Shellder/Cloyster\n\n*Heracross\n\n*Larvitar/Pupitar/Tyranitar\n\n*Electrike/Manectric\n\n*Purrloin/Liepard\n\n*Throh\n\n*Spritzee/Aromatisse\n\n*Skrelp/Dragalge\n\n*Yveltal\n\n\nSource - Serebii\nIt's also been noted that there are minor story differences:\n\nWithout giving away too much, both legendaries do end up having an impact on the story, so it's a matter of preference as to which one you want to see.\n\nSource: Shack News\nAs always the main thing you should consider when choosing between Pokemon X and Pokemon Y is which version exclusive Pokemon appeal to you the most.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722439222.3324, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "5478714bcf9c4b018c0f9baa922a7738", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "mcpe how to split items?", "0_output": [["mcpe how to split items?", "Gaming Stackexchange\n\nQ: How to divide item stack in MC PE? I have been struggling with this for a while and couldn’t find a clear answer online so I thought I’d post it since I couldn’t find a correct answer anywhere. \nHow can I divide a stack in my inventory without switching to classic mode? Unable to right click or hold shift since it is on a smartphone. Some answers I found said something about a coin icon on the left but I don’t see it. \n\nA: To divide a stack press and hold on it until a green bar starts filling right above the item. The amount the green bar fills is the amount that is being selected. If half the bar is filled then half the stack is selected. Release the hold when the green bar fills to your desired amount. Tap an empty space in your inventory to move the selected items to the new slot.\n"]], "0_corpus": "stackexchange", "1_conv_id": "e8b8f1b269f24c9abed6a72f410a46bb", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "mcpe how to split items?", "1_output": [["mcpe how to split items?", "Stackoverflow Stackexchange\n\nQ: split items in list How can I turn the following list\n['1','2','A,B,C,D','7','8']\n\ninto\n['1','2','A','B','C','D','7','8']\nin the most pythonic way?\nI have very unpythonic code that creates nested list, and then flatterens:\nsum ( [ word.split(',') for word in words ], [] )\n\n\nA: In [1]: from itertools import chain\n\nIn [2]: lis=['1','2','A,B,C,D','7','8']\n\n\nIn [5]: list(chain(*(x.split(',') for x in lis)))\nOut[5]: ['1', '2', 'A', 'B', 'C', 'D', '7', '8']\n\nto further reduce the unwanted split() calls:\nIn [7]: list(chain(*(x.split(',') if ',' in x else x for x in lis)))\nOut[7]: ['1', '2', 'A', 'B', 'C', 'D', '7', '8']\n\nusing map():\nIn [8]: list(chain(*map(lambda x:x.split(','),lis)))\nOut[8]: ['1', '2', 'A', 'B', 'C', 'D', '7', '8']\n\nIn [9]: list(chain(*map(lambda x:x.split(',') if ',' in x else x,lis)))\nOut[9]: ['1', '2', 'A', 'B', 'C', 'D', '7', '8']\n\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722439233.6545, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "40415139651e41369a555cbf1b059530", "0_model_name": "voyage-multilingual-2", "0_prompt": "Survey examining models like ELMo and BERT and their impact on contextual word representations.", "0_output": [["Survey examining models like ELMo and BERT and their impact on contextual word representations.", "Title: A Survey on Contextual Embeddings\n\nAbstract: Contextual embeddings, such as ELMo and BERT, move beyond global word representations like Word2Vec and achieve ground-breaking performance on a wide range of natural language processing tasks. Contextual embeddings assign each word a representation based on its context, thereby capturing uses of words across varied contexts and encoding knowledge that transfers across languages. In this survey, we review existing contextual embedding models, cross-lingual polyglot pre-training, the application of contextual embeddings in downstream tasks, model compression, and model analyses."]], "0_corpus": "arxiv", "1_conv_id": "352b6a0075ce4dad8b290e0ae9ac7048", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Survey examining models like ELMo and BERT and their impact on contextual word representations.", "1_output": [["Survey examining models like ELMo and BERT and their impact on contextual word representations.", "Title: A Survey on Contextual Embeddings\n\nAbstract: Contextual embeddings, such as ELMo and BERT, move beyond global word representations like Word2Vec and achieve ground-breaking performance on a wide range of natural language processing tasks. Contextual embeddings assign each word a representation based on its context, thereby capturing uses of words across varied contexts and encoding knowledge that transfers across languages. In this survey, we review existing contextual embedding models, cross-lingual polyglot pre-training, the application of contextual embeddings in downstream tasks, model compression, and model analyses."]], "1_corpus": "arxiv"}
{"tstamp": 1722439246.7796, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "0673117b825548babc52ab352f07e856", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "how much older is arwen than aragorn?", "0_output": [["how much older is arwen than aragorn?", "Scifi Stackexchange\n\nQ: How old was Lady Arwen, wife of Aragorn in the Lord of the Rings? Lady Arwen is possibly thousands of years old when The Lord of the Rings starts off. But exactly how old was she? What was the age difference between her and Aragorn? Considering the age difference, would not Aragorn seem to be a small kid in front of her?\n\nA: The Tale of Years (Appendix B) has this entry in the Third Age:\n\n241 Birth of Arwen Undómiel.\n\nSo Arwen was 2,777 years old in TA3018 when the War of the Ring happened.\nElrond makes exactly your point when Aragorn first falls in love with her:\n\n\"But as for Arwen the Fair, Lady of Imladris and of Lórien, Evenstar of her people, she is of lineage greater than yours, and she has lived in the world already so long that to her you are but as a yearling shoot beside a young birch of many summers. She is too far above you. And so, I think, it may well seem to her.\"\n\nBut she does fall in love with him anyway.\n"]], "0_corpus": "stackexchange", "1_conv_id": "fbc8618cec6047e6815c508db80bce51", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "how much older is arwen than aragorn?", "1_output": [["how much older is arwen than aragorn?", "Scifi Stackexchange\n\nQ: How old was Lady Arwen, wife of Aragorn in the Lord of the Rings? Lady Arwen is possibly thousands of years old when The Lord of the Rings starts off. But exactly how old was she? What was the age difference between her and Aragorn? Considering the age difference, would not Aragorn seem to be a small kid in front of her?\n\nA: The Tale of Years (Appendix B) has this entry in the Third Age:\n\n241 Birth of Arwen Undómiel.\n\nSo Arwen was 2,777 years old in TA3018 when the War of the Ring happened.\nElrond makes exactly your point when Aragorn first falls in love with her:\n\n\"But as for Arwen the Fair, Lady of Imladris and of Lórien, Evenstar of her people, she is of lineage greater than yours, and she has lived in the world already so long that to her you are but as a yearling shoot beside a young birch of many summers. She is too far above you. And so, I think, it may well seem to her.\"\n\nBut she does fall in love with him anyway.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722439271.9889, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "56f96663188a4324a3626ff5721d318d", "0_model_name": "voyage-multilingual-2", "0_prompt": "answer of do you mind?", "0_output": [["answer of do you mind?", "English Stackexchange\n\nQ: Answering the question: Do you mind if...? The following always puzzles me as a non-native speaker. \nWhen somebody asks the question \"Do you mind if...\", there seem to be two possible responses. \n\n\n*\n\n*\"Sure\" and \"No, not at all\", which both mean that the person doesn't mind.\n\n*\"Actually, I do mind\", which means that the person does mind.\n\n\nWhy is this so confusing? Especially, how come people reply \"Sure\" to this question, if that could be understood to mean that they for sure do mind?\n\nA: \"Sure\" isn't answering the question as asked; it's answering an implied question, namely: \"is it OK with you if...\".\n\"No, not at all\" is answering the question, taken literally.\n"]], "0_corpus": "stackexchange", "1_conv_id": "13b6abf21ddb4697b0e775567e0d2427", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "answer of do you mind?", "1_output": [["answer of do you mind?", "Ell Stackexchange\n\nQ: 'Would you mind' and 'Do you mind' Does the auxiliary verb 'do' work rather than the modal 'would' in the example below in the sense of permission:\n\nWould you mind speaking in English?\n\n\nA: \nWould you mind speaking in English?\n\nis a polite request.\n\nDo you mind speaking in English?\n\ncomes across more harshly, and implies you are being inconvenienced somehow by not hearing English. A bit softer than something like:\n\nDo you mind, I am talking on the telephone.\n\nthat you might say when someone tries to interrupt you.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722439308.3951, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "87e3074b66bc40b8a4e18100bf698de1", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "What's the difference between type 1 and type 2 diabetes?", "0_output": [["What's the difference between type 1 and type 2 diabetes?", "Type 2 diabetes\n\nType 2 diabetes makes up about 90% of cases of diabetes, with the other 10% due primarily to type 1 diabetes and gestational diabetes. In type 1 diabetes there is a lower total level of insulin to control blood glucose, due to an autoimmune induced loss of insulin-producing beta cells in the pancreas. Diagnosis of diabetes is by blood tests such as fasting plasma glucose, oral glucose tolerance test, or glycated hemoglobin (A1C).\nType 2 diabetes is largely preventable by staying at a normal weight, exercising regularly, and eating a healthy diet (high in fruits and vegetables and low in sugar and saturated fats). Treatment involves exercise and dietary changes. If blood sugar levels are not adequately lowered, the medication metformin is typically recommended. Many people may eventually also require insulin injections. In those on insulin, routinely checking blood sugar levels (such as through a continuous glucose monitor) is advised; however, this may not be needed in those who are not on insulin therapy. Bariatric surgery often improves diabetes in those who are obese.\nRates of type 2 diabetes have increased markedly since 1960 in parallel with obesity. As of 2015 there were approximately 392 million people diagnosed with the disease compared to around 30 million in 1985. Typically it begins in middle or older age, although rates of type 2 diabetes are increasing in young people. Type 2 diabetes is associated with a ten-year-shorter life expectancy. Diabetes was one of the first diseases ever described, dating back to an Egyptian manuscript from  BCE. The importance of insulin in the disease was determined in the 1920s."]], "0_corpus": "wikipedia", "1_conv_id": "d8744acb01ae4a82943648ca94d79516", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "What's the difference between type 1 and type 2 diabetes?", "1_output": [["What's the difference between type 1 and type 2 diabetes?", "Type 2 diabetes\n\nType 2 diabetes makes up about 90% of cases of diabetes, with the other 10% due primarily to type 1 diabetes and gestational diabetes. In type 1 diabetes there is a lower total level of insulin to control blood glucose, due to an autoimmune induced loss of insulin-producing beta cells in the pancreas. Diagnosis of diabetes is by blood tests such as fasting plasma glucose, oral glucose tolerance test, or glycated hemoglobin (A1C).\nType 2 diabetes is largely preventable by staying at a normal weight, exercising regularly, and eating a healthy diet (high in fruits and vegetables and low in sugar and saturated fats). Treatment involves exercise and dietary changes. If blood sugar levels are not adequately lowered, the medication metformin is typically recommended. Many people may eventually also require insulin injections. In those on insulin, routinely checking blood sugar levels (such as through a continuous glucose monitor) is advised; however, this may not be needed in those who are not on insulin therapy. Bariatric surgery often improves diabetes in those who are obese.\nRates of type 2 diabetes have increased markedly since 1960 in parallel with obesity. As of 2015 there were approximately 392 million people diagnosed with the disease compared to around 30 million in 1985. Typically it begins in middle or older age, although rates of type 2 diabetes are increasing in young people. Type 2 diabetes is associated with a ten-year-shorter life expectancy. Diabetes was one of the first diseases ever described, dating back to an Egyptian manuscript from  BCE. The importance of insulin in the disease was determined in the 1920s."]], "1_corpus": "wikipedia"}
{"tstamp": 1722439322.3316, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "fd1078f44978481e81ad953c74850d2d", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "how to determine read only domain controller?", "0_output": [["how to determine read only domain controller?", "Serverfault Stackexchange\n\nQ: Registry key to tell if a server is a Read Only Domain Controller Is there a registry key I can check to tell if a server is a read only domain controller?\n\nA: I've now got a code snippet to check if the EnableRSOForRODC key exists, and if it does, returns it value.\n1>NUL: 2>NUL: REG QUERY \"HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\DNS\\Parameters\" /v \"EnableRSOFor‌​RODC\" || @Goto NoRODCExists\nFOR /F \"tokens=3 skip=1 \" %%G IN ('REG QUERY \"HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\DNS\\Parameters\" /v \"EnableRSOFor‌​RODC\"') DO (Echo Read Only Domain Controller? %%G.)\n:NoRODCExists\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "f57c7e49da8d486d8013be860e652408", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "how to determine read only domain controller?", "1_output": [["how to determine read only domain controller?", "Stackoverflow Stackexchange\n\nQ: Why does gzip_disable make nginx crash? I've inspired my nginx configuration file from defunkt's conf file for unicorn but it seems that the line gzip_disable \"MSIE [1-6]\\.\"; makes everything crash. I get the error that this site is temporarily unavailable (served from nginx/html/50x.html). Commenting out the line makes everything work again, fiddling with the regexp doesn't change a thing.\nI'm running nginx v.1.0.10 and ubuntu 11.10.\nAny idea?\n\nA: It's already about few years when Nginx API changed for this directive, it should be:\ngzip_disable              \"msie6\";\n\nFull stack Nginx+Unicorn optimized configuration can be found on the gist.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722439330.0461, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "3ac2a4bac6a84180886180fa2fbdd044", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "14d6b080bb2849c1a8693559f7e256e9", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "1_corpus": "arxiv"}
{"tstamp": 1722439330.6484, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "b9180981787a4de48a889a11da7b12d4", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "can initial velocity be negative?", "0_output": [["can initial velocity be negative?", "Physics Stackexchange\n\nQ: Negative vectors (e.g. velocity) If you said someone had a velocity of $-12\\,{\\rm mph}$ and they were traveling north? Wouldn't it mean that they were traveling $12\\,{\\rm mph}$ south?\nThis is a quote from here:\n\nif something [object-x] moving to the right was taken to have positive momentum, then one should consider something [object-y] moving to the left to have negative momentum.\n\nBut isn't momentum a vector, so the direction should be specified separately to the number. What I mean is, object-y with $-1200\\,{\\rm kg\\, m/s}$ and object-x with 1200kg m/s should both have momentum in the same direction. But this can't be because as the quote says they are moving in opposite direction.\n\nA: That quote is abit misleading, momentum is a vector, however a vector is neither negative nor positive, only its components can have this characteristic. The two objects you are describing does not have the same momentum, but they have the same magnitdue of momentum (length of vector).\n"]], "0_corpus": "stackexchange", "1_conv_id": "e4d62363634b479291ca97b9c6289d27", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "can initial velocity be negative?", "1_output": [["can initial velocity be negative?", "Superuser Stackexchange\n\nQ: Windows 7 cmd.exe \"text mode\" with Alt+Enter? In XP, I can go to text mode in cmd.exe with alt+enter. This is when it goes full screen (no windows, no graphics - it uploads a raster font to the hardware). No longer works in Windows 7. What happened to text mode? Can I get it back in Windows 7 via some other way? \nIf not, to be able to mourn, what is the API change that makes it impossible and when did it ship?\nedit I've read Why doesn't the Windows command prompt window maximize to the full screen size? but that question is about maximizing the graphical mode console window. My question is different, as I'm asking about the different text mode (a different hw mode).\n\nA: I believe that the difference is that the WDDM (windows display driver model) drivers introduced with Vista and later have no full-screen support, where the XP display drivers does.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722439346.0319, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "efb7d875f2ee4d5f84010b406aa9b5c3", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "do shields count as armor skyrim?", "0_output": [["do shields count as armor skyrim?", "Stackoverflow Stackexchange\n\nQ: C# pass a null value as a generic object rather than a type for overloaded methods I'm working with some legacy C# code and below are two overloaded methods that I can't change:\nvoid PerformCheck(LegacyData data) {...}\nvoid PerformCheck(object data) {...}\n\nThere is some code that uses the above overloaded methods.  When that code passes anything but a LegacyData reference, the PerformCheck(object data) gets invoked, e.g. PerformCheck(\"Hello World\");  However, if null is passed, PerformCheck(LegacyData data) gets invoked.  Strangely the PerformCheck implementations are different depending on what is passed.  I would like the PerformCheck(null) to invoke the PerformCheck(object data) implementation instead.  How do I make this work?\n\nA: You can force the behavior by casting null to anything other than LegacyData.\nvar x = new Test();\nx.PerformCheck((object)null);\n\npublic class Test\n{\n    public void PerformCheck(LegacyData data) { Console.WriteLine(\"legacy\"); }\n    public void PerformCheck(object data) { Console.WriteLine(\"other\"); }\n}\n    \npublic class LegacyData {}\n\nThis outputs \"other\" as expected.\n"]], "0_corpus": "stackexchange", "1_conv_id": "4fa6fe8bd9644a39955c4d7f4dcc09a5", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "do shields count as armor skyrim?", "1_output": [["do shields count as armor skyrim?", "Gaming Stackexchange\n\nQ: In Skyrim, what does a full armor set entail? I.E., do I have to have a shield, or can I just have the helmet, armor, gauntlets, and boots?\n\nA: A full armor set is simply is entirely based on around worn pieces, such as helmet, armor, gauntlets, boots, etc. Shields are considered separate pieces of armor, hence why they are placed within the \"Block\" skill and not the \"Heavy/Light Armor\" skills.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722439356.8617, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "5296aeb678084e99a677d71b97c85d42", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "what is the difference between chkdsk /f and chkdsk /r?", "0_output": [["what is the difference between chkdsk /f and chkdsk /r?", "Superuser Stackexchange\n\nQ: What is the difference between chkdsk /f and chkdsk /r? \n/F              Fixes errors on the disk.\n/R              Locates bad sectors and recovers readable information (implies /F).\n\n(from dir /?)\nIt just sounds like the same thing to me.  I never know which one to use - is there even a difference?  Or is it maybe for backwards compatibility with previous versions of Windows where there was a difference?\n\nA: chkdsk /r does the same thing as chkdsk /f only it also checks for bad sectors on the disk and recovers any readable information. Running chkdsk /r implies that also chkdsk /f is run.\nchkdsk /f only checks for disk errors, not bad sectors.\nMicrosoft has a detailed page for chkdsk. The following is a snippet explaining /f and /r parameters.\n\nFixing disk errors:\nChkdsk corrects disk errors only if you specify the /f command-line\n  option. Chkdsk must be able to lock the drive to correct errors.\n  Because repairs usually change a disk's file allocation table and\n  sometimes cause a loss of data, chkdsk sends a confirmation message.\nFinding physical disk errors:\nUse the /r command-line option to find physical disk errors in the\n  file system.\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "26243994a6974ab099c38f50a8302a08", "1_model_name": "BM25", "1_prompt": "what is the difference between chkdsk /f and chkdsk /r?", "1_output": [["what is the difference between chkdsk /f and chkdsk /r?", "Superuser Stackexchange\n\nQ: What is the difference between chkdsk /f and chkdsk /r? \n/F              Fixes errors on the disk.\n/R              Locates bad sectors and recovers readable information (implies /F).\n\n(from dir /?)\nIt just sounds like the same thing to me.  I never know which one to use - is there even a difference?  Or is it maybe for backwards compatibility with previous versions of Windows where there was a difference?\n\nA: chkdsk /r does the same thing as chkdsk /f only it also checks for bad sectors on the disk and recovers any readable information. Running chkdsk /r implies that also chkdsk /f is run.\nchkdsk /f only checks for disk errors, not bad sectors.\nMicrosoft has a detailed page for chkdsk. The following is a snippet explaining /f and /r parameters.\n\nFixing disk errors:\nChkdsk corrects disk errors only if you specify the /f command-line\n  option. Chkdsk must be able to lock the drive to correct errors.\n  Because repairs usually change a disk's file allocation table and\n  sometimes cause a loss of data, chkdsk sends a confirmation message.\nFinding physical disk errors:\nUse the /r command-line option to find physical disk errors in the\n  file system.\n\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722439388.9707, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "10ed8e581f62414c872938cd8160de29", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "who does peggy carter marry?", "0_output": [["who does peggy carter marry?", "Scifi Stackexchange\n\nQ: What is the name of Peggy's husband in Captain America: The Winter Soldier? I'm having a hard time hearing the name of Peggy's husband she names. I'm not the greatest at spelling but it sounds like his name is something like: Emanhu Winter?\nHere's the scene if anyone can help, he's named at 1:40:\n\n\n\n\nA: She doesn't give a specific name. The phrase she utters at 1:40 is:\n\nHe saved over 1000 men, including the man who, would ahh, who would become my husband, as it turned out\n\nwhich I guess you hear as Emanhu Winter, because of the slight hesitation and repetition there in the line delivery.\n"]], "0_corpus": "stackexchange", "1_conv_id": "512b56cafcf548dc8e1ba766787e9dee", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "who does peggy carter marry?", "1_output": [["who does peggy carter marry?", "Scifi Stackexchange\n\nQ: When did Captain America and Peggy Carter dance in the comics? Sometime this year, I read a comic book or graphic novel in which Steve Rogers (Captain America) returned from the ice, and gets to dance at last with an old Peggy Carter.\nDoes anyone know which comic that was? I can't find it anymore.\nThanks for your help!\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722439938.8404, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "d9c71637975b4303bfbcbad02f1bd062", "0_model_name": "voyage-multilingual-2", "0_prompt": "A paper that proposes an improved Transformer model capable of learning long-term dependencies beyond fixed-length context in language modeling.", "0_output": [["A paper that proposes an improved Transformer model capable of learning long-term dependencies beyond fixed-length context in language modeling.", "Title: Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context\n\nAbstract: Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both Tensorflow and PyTorch."]], "0_corpus": "arxiv", "1_conv_id": "1802d7d52d704f829a4f4992ff41fc69", "1_model_name": "text-embedding-3-large", "1_prompt": "A paper that proposes an improved Transformer model capable of learning long-term dependencies beyond fixed-length context in language modeling.", "1_output": [["A paper that proposes an improved Transformer model capable of learning long-term dependencies beyond fixed-length context in language modeling.", "Title: Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context\n\nAbstract: Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both Tensorflow and PyTorch."]], "1_corpus": "arxiv"}
{"tstamp": 1722440194.3406, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "cef97f776ff242d6ad09eee2574b7864", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "what does it mean if a car is a fleet vehicle", "0_output": [["what does it mean if a car is a fleet vehicle", "Vehicle registration plates of Indonesia\n\nBlack on yellow: Commercial vehicle or public transportation such as buses, taxis, angkot, auto rickshaws and commercial trucks.\nWhite on red: Used by fire departments, government ambulances, government officials and other governmental vehicles administered under their respective local governments.\nBlack on Red: Vehicles belonging to foreign countries. Commonly used by foreign embassies or vehicles belonging to International organizations.\nBlack on White with Black Trim: Vehicles belonging to diplomatic corps of foreign countries. Commonly used by foreign embassies or vehicles belonging to international organizations.\nBlack on green: Free Zone vehicles i.e. Batam (see Indonesia–Malaysia–Singapore Growth Triangle).\nBlue on white: Vehicles belonging to foreign countries, mainly used before the vehicle has been registered.\nElectric vehicles\nThe Indonesian National Police has set a special license plate for electric vehicles with additional blue trim at the expiry date row in accordance with the regulations in the Decree of the Head of the Traffic Corps of the Indonesian National Police in 2020.\nHowever, when personal fossil-fueled vehicle plates have changed the color scheme, personal EV plates are still using the old one (white on black) with blue trim, added with the implementation of FE-Schrift since November 2022.\nWhite on Black with Blue Trim: for privately owned electric vehicles and rental electric vehicles"]], "0_corpus": "wikipedia", "1_conv_id": "8fb7fcd108be4239a41e863c284cde1a", "1_model_name": "text-embedding-004", "1_prompt": "what does it mean if a car is a fleet vehicle", "1_output": [["what does it mean if a car is a fleet vehicle", "Fleet vehicle\n\nA fleet vehicle is a vehicle owned or leased by a business, government agency, or other organization rather than by an individual or family. Typical examples include vehicles operated by car rental companies, taxicab companies, public utilities, public transport, and emergency services.\nMany businesses purchase or lease fleet vehicles to deliver goods to customers, as well as providing vehicles for sales representatives to travel to clients. In some jurisdictions and countries, fleet vehicles can also be privately owned by employees. These vehicles are often called the \"grey fleet\" and are used for work purposes. Fleet vehicles can be managed by a fleet manager or transport manager using fleet management software. Vehicles may be connected to a fleet telematics system by way of a Fleet Management System, also known as an FMS.\nFederal Vehicle Fleet\nIn the United States, \"Federal Vehicle Fleet\" refers to the federal government's vehicles."]], "1_corpus": "wikipedia"}