{"tstamp": 1722356200.3701, "task_type": "retrieval", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", ""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", ""]], "1_corpus": ""}
{"tstamp": 1722356865.2174, "task_type": "retrieval", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", ""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", ""]], "1_corpus": ""}
{"tstamp": 1722357180.6325, "task_type": "retrieval", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", ""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", "greatest domestic challenge\" in 30 years\" pre-revolutionary\""]], "1_corpus": ""}
{"tstamp": 1722357524.4041, "task_type": "retrieval", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", "Terijoki Government\""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", "Arab Winter\""]], "1_corpus": ""}
{"tstamp": 1722357588.6391, "task_type": "retrieval", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", ""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", "scholar's rock\" originates. Suseok regained popularity amongst nouveau riche businessmen in the 1980s during Korea's economic boom.\nThe art usually works on three scales: large installations of monumental shaped stones as ornamental gates; medium-sized shaped stones for landscape decoration within Korean gardens; and the smaller shaped stones for scholar's tables, the most important of these scales.\nChinese gongshi influenced the development of suseok in Korea.\nEvaluation\nEarly on, important sites within landscape were marked with shaped stones, similarly to distance markers on post roads. Burial sites were also given permanent marking by large scale tumuli or mounds, often surrounded by anthropomorphic shaped stones much akin to that of Inuit or First Nations' memory markers. The animistic belief of nature being alive, and large-scaled elements of nature having souls, has led to the continued use of massive sculpted stone in natural forms throughout Korean traditional entranceways, as the first-growth cedarwood traditionally used for gates is now rare.\nAs Confucian scholarship ascended into the golden age of the Joseon dynasty, scholar rocks became an essential fixture of the writing tables of the yangban class of scholars, and a brilliant example of Confucian art."]], "1_corpus": "wikipedia"}
{"tstamp": null, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "40adf2ecfae142f690cf15572d1c2a39", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "A paper with a dataset and methods for Chinese LFQA including real-time web search interaction", "0_output": [["A paper with a dataset and methods for Chinese LFQA including real-time web search interaction", "Title: WebCPM: Interactive Web Search for Chinese Long-form Question Answering\n\nAbstract: Long-form question answering (LFQA) aims at answering complex, open-ended questions with detailed, paragraph-length responses. The de facto paradigm of LFQA necessitates two procedures: information retrieval, which searches for relevant supporting facts, and information synthesis, which integrates these facts into a coherent answer. In this paper, we introduce WebCPM, the first Chinese LFQA dataset. One unique feature of WebCPM is that its information retrieval is based on interactive web search, which engages with a search engine in real time. Following WebGPT, we develop a web search interface. We recruit annotators to search for relevant information using our interface and then answer questions. Meanwhile, the web search behaviors of our annotators would be recorded. In total, we collect 5,500 high-quality question-answer pairs, together with 14,315 supporting facts and 121,330 web search actions. We fine-tune pre-trained language models to imitate human behaviors for web search and to generate answers based on the collected facts. Our LFQA pipeline, built on these fine-tuned models, generates answers that are no worse than human-written ones in 32.5% and 47.5% of the cases on our dataset and DuReader, respectively."]], "0_corpus": "arxiv", "1_conv_id": "5b7d23084cf84e698b8ae3ef7e10bdc3", "1_model_name": "embed-english-v3.0", "1_prompt": "A paper with a dataset and methods for Chinese LFQA including real-time web search interaction", "1_output": [["A paper with a dataset and methods for Chinese LFQA including real-time web search interaction", "Title: WebCPM: Interactive Web Search for Chinese Long-form Question Answering\n\nAbstract: Long-form question answering (LFQA) aims at answering complex, open-ended questions with detailed, paragraph-length responses. The de facto paradigm of LFQA necessitates two procedures: information retrieval, which searches for relevant supporting facts, and information synthesis, which integrates these facts into a coherent answer. In this paper, we introduce WebCPM, the first Chinese LFQA dataset. One unique feature of WebCPM is that its information retrieval is based on interactive web search, which engages with a search engine in real time. Following WebGPT, we develop a web search interface. We recruit annotators to search for relevant information using our interface and then answer questions. Meanwhile, the web search behaviors of our annotators would be recorded. In total, we collect 5,500 high-quality question-answer pairs, together with 14,315 supporting facts and 121,330 web search actions. We fine-tune pre-trained language models to imitate human behaviors for web search and to generate answers based on the collected facts. Our LFQA pipeline, built on these fine-tuned models, generates answers that are no worse than human-written ones in 32.5% and 47.5% of the cases on our dataset and DuReader, respectively."]], "1_corpus": "arxiv"}
{"tstamp": null, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "ba39a43bd1d845a29bb436a813e124f3", "0_model_name": "text-embedding-004", "0_prompt": "inpars", "0_output": [["inpars", "Title: Gamma-Ray Bursts observed by INTEGRAL\n\nAbstract: During the first six months of operations, six Gamma Ray Bursts (GRBs) have been detected in the field of view of the INTEGRAL instruments and localized by the INTEGRAL Burst Alert System (IBAS): a software for the automatic search of GRBs and the rapid distribution of their coordinates. I describe the current performances of IBAS and review the main results obtained so far. The coordinates of the latest burst localized by IBAS, GRB 031203, have been distributed within 20 s from the burst onset and with an uncertainty radius of only 2.7 arcmin."]], "0_corpus": "arxiv", "1_conv_id": "613710642ec9426ca02056c89cf5fbf7", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "inpars", "1_output": [["inpars", "Title: Matrix Model Fixed Point of Noncommutative Phi-Four\n\nAbstract: In this article we exhibit explicitly the matrix model ($\\theta=\\infty$) fixed point of phi-four theory on noncommutative spacetime with only two noncommuting directions using the Wilson renormalization group recursion formula and the 1/N expansion of the zero dimensional reduction and then calculate the mass critical exponent $\\nu$ and the anomalous dimension $\\eta$ in various dimensions."]], "1_corpus": "arxiv"}
{"tstamp": null, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "d192f963eb874f30877d73a20a3386d7", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "are meet or are met?", "0_output": [["are meet or are met?", "Stackoverflow Stackexchange\n\nQ: Get values from TempData into view I want to get value of TempData into View\nController\n[NonAction]\n public ActionResult RedirectToMessagePage(string title, string message)\n {\n TempData[\"] = new Models.SuccessModel { Title = title, Message = message };\n\n return RedirectToAction(\", \");\n }\n\nView\nif (TempData[\"] != null)\n {\n\n <div id=\">\n @{\n var errors =TempData.Values;\n }\n @if (errors != null && errors.Count() > 0)\n {\n <div style=\">\n @foreach (var error in errors)\n { \n @error \n }\n </div>\n }\n </div>\n }\n\nI want to get value of Message of TempData[\"] into view and want to display it.\nHow can I display it?\n\nA: under\nvar errors =TempData.Values;\n\nyou can cast TempData[\"] and get it's message\nvar message = (TempData[\"SuccessModel\"] as Models.SuccessModel).Message;\n\nand use this message variable.\n"]], "0_corpus": "stackexchange", "1_conv_id": "f3eef428ee4847acbdadd37fb9b77a6e", "1_model_name": "BM25", "1_prompt": "are meet or are met?", "1_output": [["are meet or are met?", "Ell Stackexchange\n\nQ: \"Meet my boss\" or \"meet with my boss\"? What is the difference between \"meet\" and \"meet with\"?\nFor example, which of these should I say?\n\n \n*\n \n*I met my boss yesterday \n \n*I met with my boss yesterday\n \n*I'm meeting my friends tonight \n \n*I'm meeting with my friends tonight \n \n\n\nA: \nI met my boss yesterday.\n\nWithout elaboration, this implies that you met your boss for the first time.\n\nI met my boss at Spago* yesterday.\n\nYou had lunch or dinner with your boss. (It was not your first encounter.)\n\nI met with my boss yesterday.\n\nYou got together with your boss for a specific purpose. It was likely to be a working meeting of some sort.\n\nI'm meeting my friends tonight.\n\nYou're going out to have fun with your friends.\n\nI'm meeting with my friends tonight.\n\nYou're getting together with friends for some shared purpose. It might be for fun, but the \"with\" is more likely to be used to suggest a special purpose to the meeting: planning, hobby, whatever.\n* Spago is an upscale restaurant, suitable for business luncheons or important dinners.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1722358096.4098, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "2412f917aa0241cfa42543f4dcf47ff5", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "When did american astronauts first walk on the moon", "0_output": [["When did american astronauts first walk on the moon", "Exploration of the Moon\n\nIn September 1968 the Soviet Union's Zond 5 sent tortoises on a circumlunar mission, followed by turtles aboard Zond 6 in November. On December 24, 1968, the crew of Apollo 8—Frank Borman, James Lovell and William Anders—became the first human beings to enter lunar orbit and see the far side of the Moon in person. Humans first landed on the Moon on July 20, 1969. The first humans to walk on the lunar surface were Neil Armstrong, commander of the U.S. mission Apollo 11 and his fellow astronaut Buzz Aldrin.\nThe first robot lunar rover to land on the Moon was the Soviet vessel Lunokhod 1 on November 17, 1970, as part of the Lunokhod programme. To date, the last human to stand on the Moon was Eugene Cernan, who as part of the Apollo 17 mission, walked on the Moon in December 1972.\nMoon rock samples were brought back to Earth by three Luna missions (Luna 16, 20, and 24) and the Apollo missions 11 through 17 (except Apollo 13, which aborted its planned lunar landing). Luna 24 in 1976 was the last Lunar mission by either the Soviet Union or the U.S. until Clementine in 1994. Focus shifted to probes to other planets, space stations, and the Shuttle program.\nBefore the \ the U.S. had pre-projects for scientific and military moonbases: the Lunex Project and Project Horizon. 
Besides crewed landings, the abandoned Soviet crewed lunar programs included the building of a multipurpose moonbase \", the first detailed project, complete with developed mockups of expedition vehicles and surface modules.\nAfter 1990"]], "0_corpus": "wikipedia", "1_conv_id": "4ca696c1ad1e44dd83714298585835db", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "When did american astronauts first walk on the moon", "1_output": [["When did american astronauts first walk on the moon", "Walking on the Moon\n\n\" is a reggae song by British rock band the Police, released as the second single from their second studio album, Reggatta de Blanc (1979). The song was written by the band's lead vocalist and bassist Sting. It went on to become the band's second hit in the UK.\nBackground\nSting has said that he wrote the song when he was drunk one night after a concert in Munich. The following morning, he remembered the song and wrote it down.\nIn his autobiography, Sting implies that the song was partially inspired by an early girlfriend:\nAccording to Sting, the song was originally recorded \" in early versions, but it was reworked. The riff, which is played on the bass, was described as \" and \" by Sting. Guitarist Andy Summers came up with the chord \" throughout the song.\n\" was released as the follow-up single to the British single \" in late 1979. The song was the Police's second number-one hit single in the United Kingdom. It also reached in Ireland and in Australia but did not chart in the United States."]], "1_corpus": "wikipedia"}
{"tstamp": 1722358118.4642, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "6e69d5cf6ca64ba795f4d1b52f734ef6", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "inpars", "0_output": [["inpars", "Title: ParPaRaw: Massively Parallel Parsing of Delimiter-Separated Raw Data\n\nAbstract: Parsing is essential for a wide range of use cases, such as stream processing, bulk loading, and in-situ querying of raw data. Yet, the compute-intense step often constitutes a major bottleneck in the data ingestion pipeline, since parsing of inputs that require more involved parsing rules is challenging to parallelise. This work proposes a massively parallel algorithm for parsing delimiter-separated data formats on GPUs. Other than the state-of-the-art, the proposed approach does not require an initial sequential pass over the input to determine a thread's parsing context. That is, how a thread, beginning somewhere in the middle of the input, should interpret a certain symbol (e.g., whether to interpret a comma as a delimiter or as part of a larger string enclosed in double-quotes). Instead of tailoring the approach to a single format, we are able to perform a massively parallel FSM simulation, which is more flexible and powerful, supporting more expressive parsing rules with general applicability. Achieving a parsing rate of as much as 14.2 GB/s, our experimental evaluation on a GPU with 3584 cores shows that the presented approach is able to scale to thousands of cores and beyond. With an end-to-end streaming approach, we are able to exploit the full-duplex capabilities of the PCIe bus and hide latency from data transfers. 
Considering the end-to-end performance, the algorithm parses 4.8 GB in as little as 0.44 seconds, including data transfers."]], "0_corpus": "arxiv", "1_conv_id": "f4c133383bd5449c9636e530f8d4fdeb", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "inpars", "1_output": [["inpars", "Title: Matrix Model Fixed Point of Noncommutative Phi-Four\n\nAbstract: In this article we exhibit explicitly the matrix model ($\\theta=\\infty$) fixed point of phi-four theory on noncommutative spacetime with only two noncommuting directions using the Wilson renormalization group recursion formula and the 1/N expansion of the zero dimensional reduction and then calculate the mass critical exponent $\\nu$ and the anomalous dimension $\\eta$ in various dimensions."]], "1_corpus": "arxiv"}
{"tstamp": null, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "78dfb52f27f64b61929bcde3dcc9504b", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "inpars", "0_output": [["inpars", "Title: Phonon Assisted Multimagnon Optical Absorption and Long Lived Two-Magnon States in Undoped Lamellar Copper Oxides\n\nAbstract: We calculate the effective charge for multimagnon infrared (IR) absorption assisted by phonons in the parent insulating compounds of cuprate superconductors and the spectra for two-magnon absorption using interacting spin-wave theory. Recent measured bands in the mid IR [Perkins et al. Phys. Rev. Lett. {\\bf 71} 1621 (1993)] are interpreted as involving one phonon plus a two-magnon virtual bound state, and one phonon plus higher multimagnon absorption processes. The virtual bound state consists of a narrow resonance occurring when the magnon pair has total momentum close to $(\\pi,0)$."]], "0_corpus": "arxiv", "1_conv_id": "59aba873f4524f3298bd6dc89efec421", "1_model_name": "BM25", "1_prompt": "inpars", "1_output": [["inpars", "Title: InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval\n\nAbstract: Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tpu"]], "1_corpus": "arxiv"}
{"tstamp": null, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "620b3c01c65e40488aebdee13adb9ab4", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "when does the stock market open pacific time", "0_output": [["when does the stock market open pacific time", "Tokyo Stock Exchange\n\nHours The exchange's normal trading sessions are from 9:00 a.m. to 11:30 a.m. and from 12:30 p.m. to 3:00 p.m. on all days of the week except Saturdays, Sundays and holidays declared by the Exchange in advance. The exchange is closed for the following holidays: New Year's Day, Coming of Age Day, National Foundation Day, Vernal Equinox Day, Shōwa Day, Constitution Memorial Day, Greenery Day, Children's Day, Marine Day, Respect for the Aged Day, Autumnal Equinox, Health and Sports Day, Culture Day, Labour Thanksgiving Day, and The Emperor's Birthday.\nStock market\nFirst, Second and other Sections until April 4, 2022\nUntil April 4, 2022, corporate shares are listed and traded at Tokyo Stock Exchange in five market sections: the First Section which started when Tokyo Stock Exchange was re-established in 1949 and includes mainly large companies; the Second Section which started in 1961 and includes mainly mid-sized companies; JASDAQ (established in 1991, acquired by Osaka Stock Exchange in 2010, and absorbed into TSE in 2013) and Mothers (Market of the high-growth and emerging stocks, established at TSE in 1999) which are both for emerging companies; and TOKYO PRO Market which was established in 2009 jointly with London Stock Exchange as an Alternative Investment Market (Tokyo AIM) for small companies.\nThere were a total of 3,821 companies listed in Tokyo Stock Exchange, as of March 31, 2022.\nPrime, Standard and Growth markets since April 4, 2022"]], "0_corpus": "wikipedia", "1_conv_id": "0d61a1651b204d21ad4de10b5702eaf5", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "when does the stock market open pacific time", "1_output": [["when does the stock market open pacific time", "Philadelphia Stock Exchange\n\nIn March 2020, the PHLX announced plans to temporarily move to all-electronic trading on March 23, 2020, due to the COVID-19 pandemic. Along with the NYSE and the BSE, the PHLX reopened on May 26, 2020.\nLocations\n1790-1834 – Merchants Coffee House (City Tavern), 138 South 2nd Street at Walnut Street\n1834-1876 – Merchants' Exchange Building, 143 South 3rd Streets at Dock Street\n1876-1888 – Girard Bank Building, 120 South 3rd Street, between Walnut and Chestnut Streets\n1888-1902 – Drexel Building, near Chestnut and South 5th Streets\n1902-1912 – Merchants' Exchange Building, 143 South 3rd Streets at Dock Street\n1913-1951 – 1411 Walnut Street, between South Broad and South 15th Streets\n1951-1966 – Central Penn Bank Building, 1401 Walnut Street, between South Broad and South 15th Streets\n1966-1981 – 120 South 17th Street, between Sansom Street and Stock Exchange Place (Ionic Street)\nJan-Feb 1969 – Drecker Building, Bala Cynwyd, Pennsylvania (trading floor only)\n1981-2017 – Market and South 19th Streets\n2017-now – FMC Tower at Cira Centre South, 2929 Walnut Street, between Schuylkill Expressway and South 30th Street\nHours\nThe exchange's normal trading sessions are from 9:30 am to 4:00 pm on all days of the week except Saturdays, Sundays and holidays declared by the exchange in advance."]], "1_corpus": "wikipedia"}
{"tstamp": 1722358305.208, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "bcf54224c20940c9be3f305f57cbea90", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "how many indian prime ministers were born in independent india", "0_output": [["how many indian prime ministers were born in independent india", "List of prime ministers of India\n\nSince 1947, India has had 14 prime ministers. Jawaharlal Nehru was India's first prime minister, serving as prime minister of the Dominion of India from 15 August 1947 until 26 January 1950, and thereafter of the Republic of India until his death in May 1964. (India conducted its first post-independence general elections in 1952). Earlier, Nehru had served as prime minister of the Interim Government of India during the British Raj from 2 September 1946 until 14 August 1947, his party, the Indian National Congress having won the 1946 Indian provincial elections. Nehru was succeeded by Lal Bahadur Shastri, whose 1 year 7-month term ended in his death in Tashkent, then in the USSR, where he had signed the Tashkent Declaration between India and Pakistan. Indira Gandhi, Nehru's daughter, succeeded Shastri in 1966 to become the country's first female prime minister. Eleven years later, her party the Indian National Congress lost the 1977 Indian general election to the Janata Party, whose leader Morarji Desai became the first non-Congress prime minister. After Desai resigned in 1979, his former associate Charan Singh briefly held office until the Congress won the 1980 Indian general election and Indira Gandhi returned as prime minister. Her second term as prime minister ended five years later on 31 October 1984, when she was assassinated by her bodyguards. Her son Rajiv Gandhi was sworn in as India's youngest premier. 
Members of the Nehru–Gandhi family have been prime minister for approximately 38 years."]], "0_corpus": "wikipedia", "1_conv_id": "b780dab7e4904028b9644772aa8f3c0c", "1_model_name": "text-embedding-004", "1_prompt": "how many indian prime ministers were born in independent india", "1_output": [["how many indian prime ministers were born in independent india", "List of prime ministers of India\n\nSince 1947, India has had 14 prime ministers. Jawaharlal Nehru was India's first prime minister, serving as prime minister of the Dominion of India from 15 August 1947 until 26 January 1950, and thereafter of the Republic of India until his death in May 1964. (India conducted its first post-independence general elections in 1952). Earlier, Nehru had served as prime minister of the Interim Government of India during the British Raj from 2 September 1946 until 14 August 1947, his party, the Indian National Congress having won the 1946 Indian provincial elections. Nehru was succeeded by Lal Bahadur Shastri, whose 1 year 7-month term ended in his death in Tashkent, then in the USSR, where he had signed the Tashkent Declaration between India and Pakistan. Indira Gandhi, Nehru's daughter, succeeded Shastri in 1966 to become the country's first female prime minister. Eleven years later, her party the Indian National Congress lost the 1977 Indian general election to the Janata Party, whose leader Morarji Desai became the first non-Congress prime minister. After Desai resigned in 1979, his former associate Charan Singh briefly held office until the Congress won the 1980 Indian general election and Indira Gandhi returned as prime minister. Her second term as prime minister ended five years later on 31 October 1984, when she was assassinated by her bodyguards. Her son Rajiv Gandhi was sworn in as India's youngest premier. Members of the Nehru–Gandhi family have been prime minister for approximately 38 years."]], "1_corpus": "wikipedia"}
{"tstamp": 1722358344.5843, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "db7e05b430d14921bc34cc60599755eb", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "what is the difference between phd and pdf?", "0_output": [["what is the difference between phd and pdf?", "Math Stackexchange\n\nQ: Difference between a PDF and PMF What is the difference between a probability density function and a probability mass function?\n\nA: One is for discrete variables, the other for continuous.\nThey are also interpreted differently. The PDF is a probability density. If f(x) is a PDF, f(x) doesn't tell you the probability of getting x (In fact, the probability of getting precisely x is 0).\nThe way you use a PDF is through integration. So, the probability of getting a value that is in a dx area around x is approximately f(x)*dx.\n"]], "0_corpus": "stackexchange", "1_conv_id": "0f78bc246a144b02832e702ee547b20a", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "what is the difference between phd and pdf?", "1_output": [["what is the difference between phd and pdf?", "Stackoverflow Stackexchange\n\nQ: Can't find kubeadm token after initializing master When i run kubeadm init it shows a token that I should use from the other hosts to connect to this host. Where is that token stored? I'm trying to figure out more ways to automate this by adding kubeadm inside of Terraform. \nThanks for any help!\n\nA: \n--token\nBy default, kubeadm init automatically generates the token used to\n initialise each new node. If you would like to manually specify this\n token, you can use the --token flag. The token must be of the format\n ‘<6 character string>.<16 character string>’.\n\nyou can see token value on master node using command:\ncat /etc/kubernetes/pki/tokens.csv\n\n"]], "1_corpus": "stackexchange"}
{"tstamp": null, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "fe1ce490872a499fb54784a3637c1683", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "are all achiral molecules meso?", "0_output": [["are all achiral molecules meso?", "Stackoverflow Stackexchange\n\nQ: Test If An NSString Contains a Letter I'm working on an iPhone project and I need to check if the user's input in a UITextfield contains a letter. More generally if an NSString contains a letter.\nI tried this with a giant if loop with the rangeofstring:@\".location == NSNotFound and then did OR rangeofstring:@\".location == NSNotFound\nand so on....\nBut:\n\n\n*\n\n*It doesn't seem to work\n\n*There has to be a simple line of code to check if the NSString contains letters.\n\n\nI have been searching this for hours... Can someone please answer this question???\n\nA: If you want to make sure the text has a certain letter in it (as opposed to just ANY letter), use the rangeOfString: message. For example, to ensure the text contains the letter \"Q\":\nNSString *string = @\"poQduu\";\n\n\nif ([string rangeOfString:@\"Q\"].location != NSNotFound) {\n DLog (@\"Yes, we have a Q at location %i\", [string rangeOfString:@\"Q\"].location );\n}\n\nAs others (Rob Napier) note, if you want to find ANY letter, use the rangeOfCharacterFromSet: message.\nif ([string rangeOfCharacterFromSet:[NSCharacterSet letterCharacterSet]].location != NSNotFound) ...\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "8090cc0cc0b946bc8818f151b0cfcdfd", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "are all achiral molecules meso?", "1_output": [["are all achiral molecules meso?", "Physics Stackexchange\n\nQ: Why can a solution show optical rotation? A solution, as a liquid, is rotationally isotropic, right? So, even if the molecules are chiral, because of the random orientation of the molecules, shouldn't the effect cancel out? \n\nA: Your claim:\nSo, even if the molecules are chiral, because of the random orientation of the molecules, should not the effect cancel out?\nis not correct.\nOne should think in a way that molecules have different dispersion for left and right circularly polarized light. Hence one of the circular polarization will be retarded with respect to another which will result in rotated linear polarization.\nIf molecules also have different absorption the exiting light will be elliptic\nEDIT:\nTo visualize the effect take a spring and rotate it - you will see - no matter from which side you look it will be still left (or right), in a sense of clock or -counterclockwise rotation of the spring.\nThus if you have only left springs - you will have more retardation for the left light.\n"]], "1_corpus": "stackexchange"}
{"tstamp": null, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "31f178365d4f46f68c5dc6cfef04f2c0", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "do tonics stack in bioshock 2?", "0_output": [["do tonics stack in bioshock 2?", "Gaming Stackexchange\n\nQ: Can multiple gene tonics stack in Bioshock 2? If I have a specific gene tonic in level 1 and level 2, is there any benefit to equipping both of them at the same time? Will I get the full 1+2 bonuses? Or will I get just somewhat over the 2nd bonus? Or is the level 1 tonic completely useless?\n\nA: The bonuses from the gene tonics stack, so if you equip a level 1 and a level 2 tonic, you will get the full bonuses from both.\n"]], "0_corpus": "stackexchange", "1_conv_id": "35ba2fd72c234df5816ca5c243d4e6de", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "do tonics stack in bioshock 2?", "1_output": [["do tonics stack in bioshock 2?", "Gaming Stackexchange\n\nQ: Can multiple gene tonics stack in Bioshock 2? If I have a specific gene tonic in level 1 and level 2, is there any benefit to equipping both of them at the same time? Will I get the full 1+2 bonuses? Or will I get just somewhat over the 2nd bonus? Or is the level 1 tonic completely useless?\n\nA: The bonuses from the gene tonics stack, so if you equip a level 1 and a level 2 tonic, you will get the full bonuses from both.\n"]], "1_corpus": "stackexchange"}
{"tstamp": null, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "051c9eaffd8c4a1f88d6a7b072e3a3da", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "bf3763ae7d234c86b92a4acbf172612e", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Like Sparrows on a Clothes Line: The Self-Organization of Random Number Sequences\n\nAbstract: We study sequences of random numbers {Z[1],Z[2],Z[3],...,Z[n]} -- which can be considered random walks with reflecting barriers -- and define their \" according to whether Z[i] > Z[i+1], (a down-movement), or Z[i] < Z[i+1] (up-movement). This paper examines the means, xi, to which the Zi converge, when a large number of sequences of the same type is considered. It is shown that these means organize themselves in such a way that, between two turning points of the sequence, they are equidistant from one another. We also show that m steps in one direction tend to offset one step in the other direction, as m -> infinity. Key words: random number sequence, self-organization, random walk, reflecting barriers."]], "1_corpus": "arxiv"}
|
tstamp:  | task_type: retrieval | type: leftvote | models:  | ip:
0_conv_id: 5f897a82cc1c4bb6877d9de92b903c30
0_model_name: mixedbread-ai/mxbai-embed-large-v1
0_prompt: Who was temujin's greatest rival before 1207?
0_output: Rise of Genghis Khan
The period of Temüjin's life from 1177 to 1191 is largely unknown except that Temüjin often fought the Taichiud, Salji'ut, Khadagin, and Tatars with mixed results. One of the clans following Temüjin eventually left and was defeated by the Taichiud, after which they joined Jamukha. During the 1180s there was a drought in Mongolia which increased conflict between the tribes, but Temüjin played only a limited part in these affairs.
Attack on the Merkit (1191)
The previous attack on the Merkit that resulted in the rescue of Börte may have been a separate campaign that occurred in 1191 and was confused in the sources. In 1191, Jamukha, Temüjin, and Toghrul and his brother Jakha Gambhu decided to attack the Merkit. However, Temüjin and Toghrul were late to the rendezvous area by three days, which greatly angered Jamukha. Together the 40,000-strong allied forces made a detour east to attack the enemy camp from the north-east. A fisherman spotted them and warned the Merkit of the impending attack, but the Merkit failed to make the necessary preparations and dispersed.
Battle of Dalan Baljut (1187/1193)
Temüjin and Jamukha stayed together for a while after the battle. At one point they sent an embassy to the Khadagin and Salji'ut to try to win them over against the Taichiud. In 1187 or 1193, Temüjin and Jamukha split when retainers of the two camps clashed over horse thievery. Temüjin took with him 41 tribal leaders and 10,000 men from Jamukha and was elected khan by his followers in the following days. The Taichiud became frightened by Temüjin's power and joined Jamukha.
0_corpus: wikipedia
1_conv_id: 8b89df9e59e0464fbb8397b16371fca3
1_model_name: sentence-transformers/all-MiniLM-L6-v2
1_prompt: Who was temujin's greatest rival before 1207?
1_output: Emperor Ōjin
Ōjin, also known as (alternatively spelled) or, was the 15th (possibly legendary) Emperor of Japan, according to the traditional order of succession. Both the Kojiki and the Nihon Shoki (collectively known as the Kiki) record events that took place during Ōjin's alleged lifetime. Ōjin is traditionally listed as the first emperor of the Kofun period, and is primarily known for being the controversial son of Empress Jingū. Historians have mixed views on his factual existence; if Ōjin was indeed a historical figure, it is assumed he reigned much later than he is attested. No firm dates can be assigned to Ōjin's life or reign, but he is traditionally considered to have reigned from 270 to 310. According to the Shinto religion and Buddhism in Japan, Emperor Ōjin is the divine spirit of the deity Hachiman (八幡神). While the location of Ōjin's grave (if any) is unknown, he is traditionally venerated at a memorial Shinto tomb. Modern historians have come to the conclusion that the title of … and the name … was used by later generations to describe this possibly legendary Emperor.
1_corpus: wikipedia |
|
tstamp:  | task_type: retrieval | type: leftvote | models:  | ip:
0_conv_id: b0d36170b64947a5b92f47145da1c137
0_model_name: voyage-multilingual-2
0_prompt: Which test was devised to determine whether robots can think?
0_output: Turing test
The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence" while working at the University of Manchester. It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, he chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words." Turing describes the new form of the problem in terms of a three-person game called the "imitation game", and his new question is: "Are there imaginable digital computers which would do well in the imitation game?" In the remainder of the paper he argued against the major objections to the proposition that "machines can think", without having to define terms such as "mind", "understanding", and "consciousness".
0_corpus:
1_conv_id:
1_model_name:
1_prompt:
1_output: … "nearly equal to human performance" …
1_corpus:  |
|