|
{"tstamp": 1722441607.5868, "task_type": "", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", ""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", ""]], "1_corpus": ""}
|
{"tstamp": 1722441638.7331, "task_type": "", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", "Cross Platform Multiplayer"]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", "Cross Platform Multiplayer"]], "1_corpus": ""}
|
{"tstamp": 1722441688.0512, "task_type": "", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", ""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", ""]], "1_corpus": ""}
|
{"tstamp": 1722441726.5986, "task_type": "", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", ""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", ""]], "1_corpus": ""}
|
{"tstamp": 1722442171.6617, "task_type": "", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", ""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", "I have therefore published the method, although I am aware that as yet it is very defective and imperfect; but it is hoped that also in the hands of other investigators it will turn out to be useful."]], "1_corpus": ""}
|
{"tstamp": 1722442236.114, "task_type": "", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", "showing political courage, enthusiasm and impact with social feeling"]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", "showing political courage, enthusiasm and impact with social feeling"]], "1_corpus": ""}
|
{"tstamp": 1722442304.08, "task_type": "", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", ""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", ""]], "1_corpus": ""}
|
{"tstamp": 1722443951.9871, "task_type": "", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", ""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", ""]], "1_corpus": ""}
|
{"tstamp": 1722444028.9194, "task_type": "", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", ""]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", ""]], "1_corpus": ""}
|
{"tstamp": 1722444509.5213, "task_type": "", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "", "0_output": [["", "kingdom of heaven"]], "0_corpus": "", "1_conv_id": "", "1_model_name": "", "1_prompt": "", "1_output": [["", "highest place\low places"]], "1_corpus": ""}
|
{"tstamp": 1722445209.7995, "task_type": "retrieval", "type": "", "models": ["", ""], "ip": "", "0_conv_id": "", "0_model_name": "", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Computing Machinery and Intelligence\n\nI propose to consider the question, 'Can machines think?' Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "0_corpus": "wikipedia", "1_conv_id": "25390507c1a14ef1b32bb1574a1c3f26", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "1_corpus": "wikipedia"}
|
{"tstamp": null, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "33d8228235394657b5ee7c61845ad49f", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "Who won the latest euros?", "0_output": [["Who won the latest euros?", "UEFA European Championship\n\nThe UEFA European Football Championship, less formally the European Championship and informally the Euro or Euros, is the primary association football tournament organised by the Union of European Football Associations (UEFA). The competition is contested by UEFA members' senior men's national teams, determining the continental champion of Europe. It is the second-most watched football tournament in the world after the FIFA World Cup; the Euro 2016 final was watched by a global audience of around 600 million. The competition has been held every four years since 1960, except for 2020, when it was postponed until 2021 due to the COVID-19 pandemic in Europe, but kept the name Euro 2020. Scheduled to be in the even-numbered year between FIFA World Cup tournaments, it was originally called the European Nations' Cup before changing to its current name in 1968. Since 1996, the individual events have been branded as \"UEFA Euro [year]\".\nBefore entering the tournament, all teams other than the host nations (which qualify automatically) compete in a qualifying process. Until 2016, the championship winners could compete in the following year's FIFA Confederations Cup, but were not obliged to do so. From the 2020 edition onwards, the winner competes in the CONMEBOL–UEFA Cup of Champions.\nThe seventeen European Championship tournaments have been won by ten national teams: Spain have won four titles, Germany have won three titles, Italy and France have won two titles, and the Soviet Union, Czechoslovakia, the Netherlands, Denmark, Greece and Portugal have won one title each. To date, Spain is the only team to have won consecutive titles, doing so in 2008 and 2012.\nThe most recent championship, held in Germany in 2024, was won by Spain, who lifted a record fourth European title after beating England 2–1 in the final at Olympiastadion in Berlin.\nHistory"]], "0_corpus": "wikipedia", "1_conv_id": "f03d04a064ae4d97aa465675ae705ebf", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Who won the latest euros?", "1_output": [["Who won the latest euros?", "UEFA Euro 2016\n\nThe 2016 UEFA European Football Championship, commonly referred to as UEFA Euro 2016 (stylised as UEFA EURO 2016) or simply Euro 2016, was the 15th UEFA European Championship, the quadrennial international men's football championship of Europe organised by UEFA. It was held in France from 10 June to 10 July 2016. Spain were the two-time defending champions, having won the 2008 and 2012 tournaments, but were eliminated in the round of 16 2-0 by Italy. Portugal won the tournament for the first time, following a 1–0 victory after extra time over the host team, France, in the final played at the Stade de France."]], "1_corpus": "wikipedia"}
|
{"tstamp": 1722445626.3012, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "8c2ce987427b42e0886f0713c13efbc9", "0_model_name": "BM25", "0_prompt": "when is us presidential elections", "0_output": [["when is us presidential elections", "2010 Ukrainian presidential election\n\nOn 21 December 2009, the Central Election Commission of Ukraine formed 113 foreign polling stations.\nVoters are permitted to vote at home during the presidential election.\nCosts\nThe Central Election Commission has estimated the budget of the holding of regular presidential elections in Ukraine at ₴1.5 billion, (approximately 200 million US dollars) with additional costs required by candidates to fund their campaigns.\nEach candidate is required to pay an election deposit of ₴2.5 million (Approximately US$300,000) The deposit will be refunded to the two highest polling candidates who progress to the second round of elections.\nOn 26 November, the Central Election Commission stated a total of ₴1.314 billion is required to hold the presidential election, including 192.2 million in 2009 and 1.122 billion in 2010.\nAssessments by political analysts show that each presidential candidate will have to spend at least US$150–200mn to promote himself; this includes buying story lines in the media, visual advertising, canvassing, printing political material and, work with electoral commissions.\nChairman of the Committee of Voters of Ukraine, Oleksandr Chernenko, also commented that presidential candidates will spend 1 billion US dollars on the election campaign\nThe cost of the run-off ballot is estimated to be US$119 million"]], "0_corpus": "wikipedia", "1_conv_id": "7919081d502b4943b3903c5a61983653", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "when is us presidential elections", "1_output": [["when is us presidential elections", "2024 United States presidential election\n\nThe 2024 United States presidential election will be the 60th quadrennial presidential election, set to be held on Tuesday, November 5, 2024. Voters will elect a president and vice president for a term of four years. The incumbent president, Joe Biden, a member of the Democratic Party, is running for re-election. His predecessor, Donald Trump, a member of the Republican Party, is running for re-election for a second, non-consecutive term, after losing to Biden in the 2020 presidential election. If both are nominated, this will mark the seventh presidential election rematch, and the first since 1956. In the run-up to the election, on May 30, 2024, Trump was convicted of 34 felonies related to falsifying business records, becoming the first president to be found guilty of a crime. On July 13, Trump was shot in an assassination attempt at one of his rallies.\nThe winner of this election is scheduled to be inaugurated on January 20, 2025. The presidential election will take place at the same time as elections for the U.S. Senate, House, gubernatorial, and state legislatures. Biden and Trump secured a majority of delegates for the Democratic and Republican parties, respectively, on March 12, though they have yet to be confirmed at their nominating conventions. Robert F. Kennedy Jr. 
emerged as the highest-polling third-party presidential candidate since Ross Perot in the 1992 election, running as an independent.\nLeading campaign issues are expected to be abortion, border security and immigration, healthcare, education, the economy, foreign policy, LGBT rights, climate change, and democracy.\nBackground"]], "1_corpus": "wikipedia"} |
|
{"tstamp": 1722445656.4983, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "d33c666d9a464315b179a286c0df9942", "0_model_name": "voyage-multilingual-2", "0_prompt": "Interpretable feature extraction in language models using autoencoders.", "0_output": [["Interpretable feature extraction in language models using autoencoders.", "Title: Sparse Autoencoders Find Highly Interpretable Features in Language Models\n\nAbstract: One of the roadblocks to a better understanding of neural networks' internals is \\textit{polysemanticity}, where neurons appear to activate in multiple, semantically distinct contexts. Polysemanticity prevents us from identifying concise, human-understandable explanations for what neural networks are doing internally. One hypothesised cause of polysemanticity is \\textit{superposition}, where neural networks represent more features than they have neurons by assigning features to an overcomplete set of directions in activation space, rather than to individual neurons. Here, we attempt to identify those directions, using sparse autoencoders to reconstruct the internal activations of a language model. These autoencoders learn sets of sparsely activating features that are more interpretable and monosemantic than directions identified by alternative approaches, where interpretability is measured by automated methods. Moreover, we show that with our learned set of features, we can pinpoint the features that are causally responsible for counterfactual behaviour on the indirect object identification task \\citep{wang2022interpretability} to a finer degree than previous decompositions. This work indicates that it is possible to resolve superposition in language models using a scalable, unsupervised method. Our method may serve as a foundation for future mechanistic interpretability work, which we hope will enable greater model transparency and steerability.0_corpusarxiv1_conv_ida07117052afe44569b36cea1bcc52e861_model_nameBM251_promptInterpretable feature extraction in language models using autoencoders.1_outputInterpretable feature extraction in language models using autoencoders.Title: Scaling and evaluating sparse autoencoders\n\nAbstract: Sparse autoencoders provide a promising unsupervised approach for extracting interpretable features from a language model by reconstructing activations from a sparse bottleneck layer. Since language models learn many concepts, autoencoders need to be very large to recover all relevant features. However, studying the properties of autoencoder scaling is difficult due to the need to balance reconstruction and sparsity objectives and the presence of dead latents. We propose using k-sparse autoencoders [Makhzani and Frey, 2013] to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier. Additionally, we find modifications that result in few dead latents, even at the largest scales we tried. Using these techniques, we find clean scaling laws with respect to autoencoder size and sparsity. We also introduce several new metrics for evaluating feature quality based on the recovery of hypothesized features, the explainability of activation patterns, and the sparsity of downstream effects. These metrics all generally improve with autoencoder size. To demonstrate the scalability of our approach, we train a 16 million latent autoencoder on GPT-4 activations for 40 billion tokens. 
We release training code and autoencoders for open-source models, as well as a visualizer.1_corpusarxiv |
|
{"tstamp": null, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "dc5c3730a60e46b58e211818a759810b", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "is pizza also called pie?", "0_output": [["is pizza also called pie?", "Ell Stackexchange\n\nQ: What is this food called in English? I think technically it's a fruit although they strongly resemble a vegetable:\n\nWhat do you call these?\nPersonally, I know them as ‘paprika’.\nOthers say these are ‘bell peppers’, and only when ground into spice it's called paprika.\nBesides grinding them into spice, you can also make soup of it or sauce. Or put slices of this on a pizza. I would call that paprika soup, or paprika sauce, or paprika slices.\nIs this perhaps a difference between British vs American vs 'international' English?\nP.S. regarding the fruit vs vegetable issue: not really related to the question here but I think from a botanical point of view it's a fruit, and from a culinary point of view it's a vegetable.\n\nA: As well as being called bell peppers in the other answers, they are sometimes also called Capsicum in some English speaking countries, as per Wikipedia Bell pepper. Capsicum is actually the Genus name of the plant.\n"]], "0_corpus": "stackexchange", "1_conv_id": "fba47ce32dad426d85b4fcd99186db7a", "1_model_name": "voyage-multilingual-2", "1_prompt": "is pizza also called pie?", "1_output": [["is pizza also called pie?", "English Stackexchange\n\nQ: How to name a part of a piechart Which term best suits to describe a part/slice/share/portion of a piechart, disregarding what the chart is about ?\n\nA: Since each \ is a sector of a circle, sector can be used IMO.\n"]], "1_corpus": "stackexchange"}
|
{"tstamp": null, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "2822b7b032984b2896bc9c1b5d9477a8", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "0_output": [["A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "Title: End-to-End Open-Domain Question Answering with BERTserini\n\nAbstract: We demonstrate an end-to-end question answering system that integrates BERT with the open-source Anserini information retrieval toolkit. In contrast to most question answering and reading comprehension models today, which operate over small amounts of input text, our system integrates best practices from IR with a BERT-based reader to identify answers from a large corpus of Wikipedia articles in an end-to-end fashion. We report large improvements over previous results on a standard benchmark test collection, showing that fine-tuning pretrained BERT with SQuAD is sufficient to achieve high accuracy in identifying answer spans."]], "0_corpus": "arxiv", "1_conv_id": "e4223ef5fa88473386fbe4e0ee1c3016", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "1_output": [["A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "Title: Detection of the Geminga pulsar with MAGIC hints at a power-law tail emission beyond 15 GeV\n\nAbstract: We report the detection of pulsed gamma-ray emission from the Geminga pulsar (PSR J0633+1746) between $15\\,$GeV and $75\\,$GeV. This is the first time a middle-aged pulsar has been detected up to these energies. Observations were carried out with the MAGIC telescopes between 2017 and 2019 using the low-energy threshold Sum-Trigger-II system. After quality selection cuts, $\\sim 80\\,$hours of observational data were used for this analysis. To compare with the emission at lower energies below the sensitivity range of MAGIC, $11$ years of Fermi-LAT data above $100\\,$MeV were also analysed. From the two pulses per rotation seen by Fermi-LAT, only the second one, P2, is detected in the MAGIC energy range, with a significance of $6.3\\,\\sigma$. The spectrum measured by MAGIC is well-represented by a simple power law of spectral index $\\Gamma= 5.62\\pm0.54$, which smoothly extends the Fermi-LAT spectrum. A joint fit to MAGIC and Fermi-LAT data rules out the existence of a sub-exponential cut-off in the combined energy range at the $3.6\\,\\sigma$ significance level. The power-law tail emission detected by MAGIC is interpreted as the transition from curvature radiation to Inverse Compton Scattering of particles accelerated in the northern outer gap."]], "1_corpus": "arxiv"}
|
{"tstamp": null, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "55af47c6ca9c4ac7a77ee1e484d2f648", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "0_output": [["Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "Title: Is Nothing Sacred? Vacuum Energy, Supersymmetry and Lorentz Breaking from Recoiling D branes\n\nAbstract: Classical superstring vacua have zero vacuum energy and are supersymmetric and Lorentz-invariant. We argue that all these properties may be destroyed when quantum aspects of the interactions between particles and non-perturbative vacuum fluctuations are considered. A toy calculation of string/D-brane interactions using a world-sheet approach indicates that quantum recoil effects - reflecting the gravitational back-reaction on space-time foam due to the propagation of energetic particles - induce non-zero vacuum energy that is linked to supersymmetry breaking and breaks Lorentz invariance. This model of space-time foam also suggests the appearance of microscopic event horizons."]], "0_corpus": "arxiv", "1_conv_id": "8b2315872e5949a7a35825b601d43977", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "1_output": [["Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "Title: Character-level Chinese-English Translation through ASCII Encoding\n\nAbstract: Character-level Neural Machine Translation (NMT) models have recently achieved impressive results on many language pairs. They mainly do well for Indo-European language pairs, where the languages share the same writing system. However, for translating between Chinese and English, the gap between the two different writing systems poses a major challenge because of a lack of systematic correspondence between the individual linguistic units. In this paper, we enable character-level NMT for Chinese, by breaking down Chinese characters into linguistic units similar to that of Indo-European languages. We use the Wubi encoding scheme, which preserves the original shape and semantic information of the characters, while also being reversible. We show promising results from training Wubi-based models on the character- and subword-level with recurrent as well as convolutional models."]], "1_corpus": "arxiv"}
|
{"tstamp": null, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "483dafda9ed54c9c80bc45216ad796db", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "what type of data is provided in a knowledge base", "0_output": [["what type of data is provided in a knowledge base", "W. Edwards Deming\n\nDeming continued to advise businesses large and small. From 1985 through 1989, Deming served as a consultant to Vernay Laboratories, a rubber manufacturing firm in Yellow Springs, Ohio, with fewer than 1,000 employees. He held several week-long seminars for employees and suppliers of the small company where his famous example \ spurred several major changes in Vernay's manufacturing processes.\nDeming joined the Graduate School of Business at Columbia University in 1988. In 1990, during his last year, he founded the W. Edwards Deming Center for Quality, Productivity, and Competitiveness at Columbia Business School to promote operational excellence in business through the development of research, best practices and strategic planning.\nIn 1990, Marshall Industries (NYSE:MI, 1984–1999) CEO Robert Rodin trained with the then 90-year-old Deming and his colleague Nida Backaitis. Marshall Industries' dramatic transformation and growth from $400 million to $1.8 billion in sales was chronicled in Deming's last book The New Economics, a Harvard Case Study, and Rodin's book, Free, Perfect and Now.\nIn 1993, Deming published his final book, The New Economics for Industry, Government, Education, which included the System of Profound Knowledge and the 14 Points for Management. It also contained educational concepts involving group-based teaching without grades, as well as management without individual merit or performance reviews.\nAcademic contributions\nThe philosophy of W. Edwards Deming has been summarized as follows:\nThe Deming System of Profound Knowledge"]], "0_corpus": "wikipedia", "1_conv_id": "3139be50f5064e0894343f6a22890c54", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "what type of data is provided in a knowledge base", "1_output": [["what type of data is provided in a knowledge base", "Knowledge base\n\nIn computer science, a knowledge base (KB) is a set of sentences, each sentence given in a knowledge representation language, with interfaces to tell new sentences and to ask questions about what is known, where either of these interfaces might use inference. It is a technology used to store complex structured data used by a computer system. The initial use of the term was in connection with expert systems, which were the first knowledge-based systems.\nOriginal usage of the term"]], "1_corpus": "wikipedia"}
|
{"tstamp": null, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "ed8435cf63824a24a3b2372e6608f29d", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "how to change share name in windows server 2008?", "0_output": [["how to change share name in windows server 2008?", "Superuser Stackexchange\n\nQ: How to change the administrator account name from command prompt in Windows Server 2008 Server Core? I'm trying to change the administrator account name on my virtual machine for lab work purposes. I'm running Windows Server 2008 Server Core, therefore I'm only using the standard command prompts.\nHow can I do this?\n\nA: If it domain user install Active Directory Administration module for PowerShell. \ncommand line:\npowershell Import-module ActiveDirectory\n\nrename domain user -\nPowershell:\nGet-ADUser -Identity 'Administrator' | Rename-ADObject -NewName 'Partisan'\n\ncommand line:\npowershell Get-ADUser -Identity 'Administrator' ^| Rename-ADObject -NewName 'Partisan'\n\nvar 2:\ndsquery user -name Administrator | dsmove -newname \"Partisan\"\n\nlocal administrator -\nPowershell:\nRename-LocalUser -UserName 'Administrator' -NewUserName 'Partisan'\n\ncommand line:\npowershell Rename-LocalUser -UserName 'Administrator' -NewUserName 'Partisan'\n\nvar2:\nwmic UserAccount where Name=\"Administrator\" call Rename Name=\"Partisan\"\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "939f7aa837ce48d9b9700ee6346622d4", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "how to change share name in windows server 2008?", "1_output": [["how to change share name in windows server 2008?", "Serverfault Stackexchange\n\nQ: How do you configure a \\\\domain\\sharedFolder in windows 2008? Background: \n\n\n*\n\n*I've got several Windows 2008 servers setup by someone else.\n\n*They are all on one domain, lets call this mydomain\n\n*Several boxes are accessing a shared folder by going to \\\\mydomain\\share1\n\n*I found the actual files backing the share on box1\n\n*I need to move the location of this share to a larger drive for space reasons\n\n\nI'm familiar with setting up and managing shares like \\\\box1\\share1, but where do you manage/configure shares that are domain-wide; accessed like \\\\mydomain\\share1 ?\n\nA: It sounds like you have a domain based DFS configured, which is managed by the DFS Management utility.\n"]], "1_corpus": "stackexchange"}
|
{"tstamp": 1722446198.8045, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "9e2f3e5d4eb04b5db030979a91bfe01e", "0_model_name": "text-embedding-3-large", "0_prompt": "How can a small visual-linguistic model be effectively trained using knowledge distillation?", "0_output": [["How can a small visual-linguistic model be effectively trained using knowledge distillation?", "Title: Compressing Visual-linguistic Model via Knowledge Distillation\n\nAbstract: Despite exciting progress in pre-training for visual-linguistic (VL) representations, very few aspire to a small VL model. In this paper, we study knowledge distillation (KD) to effectively compress a transformer-based large VL model into a small VL model. The major challenge arises from the inconsistent regional visual tokens extracted from different detectors of Teacher and Student, resulting in the misalignment of hidden representations and attention distributions. To address the problem, we retrain and adapt the Teacher by using the same region proposals from Student's detector while the features are from Teacher's own object detector. With aligned network inputs, the adapted Teacher is capable of transferring the knowledge through the intermediate representations. Specifically, we use the mean square error loss to mimic the attention distribution inside the transformer block and present a token-wise noise contrastive loss to align the hidden state by contrasting with negative representations stored in a sample queue. To this end, we show that our proposed distillation significantly improves the performance of small VL models on image captioning and visual question answering tasks. It reaches 120.8 in CIDEr score on COCO captioning, an improvement of 5.1 over its non-distilled counterpart; and an accuracy of 69.8 on VQA 2.0, a 0.8 gain from the baseline. Our extensive experiments and ablations confirm the effectiveness of VL distillation in both pre-training and fine-tuning stages."]], "0_corpus": "arxiv", "1_conv_id": "99d70563aab749489c924deb1dc45544", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "How can a small visual-linguistic model be effectively trained using knowledge distillation?", "1_output": [["How can a small visual-linguistic model be effectively trained using knowledge distillation?", "Title: Compressing Visual-linguistic Model via Knowledge Distillation\n\nAbstract: Despite exciting progress in pre-training for visual-linguistic (VL) representations, very few aspire to a small VL model. In this paper, we study knowledge distillation (KD) to effectively compress a transformer-based large VL model into a small VL model. The major challenge arises from the inconsistent regional visual tokens extracted from different detectors of Teacher and Student, resulting in the misalignment of hidden representations and attention distributions. To address the problem, we retrain and adapt the Teacher by using the same region proposals from Student's detector while the features are from Teacher's own object detector. With aligned network inputs, the adapted Teacher is capable of transferring the knowledge through the intermediate representations. Specifically, we use the mean square error loss to mimic the attention distribution inside the transformer block and present a token-wise noise contrastive loss to align the hidden state by contrasting with negative representations stored in a sample queue. 
To this end, we show that our proposed distillation significantly improves the performance of small VL models on image captioning and visual question answering tasks. It reaches 120.8 in CIDEr score on COCO captioning, an improvement of 5.1 over its non-distilled counterpart; and an accuracy of 69.8 on VQA 2.0, a 0.8 gain from the baseline. Our extensive experiments and ablations confirm the effectiveness of VL distillation in both pre-training and fine-tuning stages."]], "1_corpus": "arxiv"} |
|
{"tstamp": 1722447804.2737, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "a69e4fe18f394932a19bcd5b57c0266f", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "How can in-context learning emerge from pretraining documents with long-range coherence?", "0_output": [["How can in-context learning emerge from pretraining documents with long-range coherence?", "Title: Order Preservation in Limit Algebras\n\nAbstract: The matrix units of a digraph algebra, A, induce a relation, known as the diagonal order, on the projections in a masa in the algebra. Normalizing partial isometries in A act on these projections by conjugation; they are said to be order preserving when they respect the diagonal order. Order preserving embeddings, in turn, are those embeddings which carry order preserving normalizers to order preserving normalizers. This paper studies operator algebras which are direct limits of finite dimensional algebras with order preserving embeddings. We give a complete classification of direct limits of full triangular matrix algebras with order preserving embeddings. We also investigate the problem of characterizing algebras with order preserving embeddings."]], "0_corpus": "arxiv", "1_conv_id": "05b4950e4c6247cd99901948d031c52e", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "How can in-context learning emerge from pretraining documents with long-range coherence?", "1_output": [["How can in-context learning emerge from pretraining documents with long-range coherence?", "Title: An Explanation of In-context Learning as Implicit Bayesian Inference\n\nAbstract: Large language models (LMs) such as GPT-3 have the surprising ability to do in-context learning, where the model learns to do a downstream task simply by conditioning on a prompt consisting of input-output examples. The LM learns from these examples without being explicitly pretrained to learn. Thus, it is unclear what enables in-context learning. In this paper, we study how in-context learning can emerge when pretraining documents have long-range coherence. Here, the LM must infer a latent document-level concept to generate coherent next tokens during pretraining. At test time, in-context learning occurs when the LM also infers a shared latent concept between examples in a prompt. We prove when this occurs despite a distribution mismatch between prompts and pretraining data in a setting where the pretraining distribution is a mixture of HMMs. In contrast to messy large-scale datasets used to train LMs capable of in-context learning, we generate a small-scale synthetic dataset (GINC) where Transformers and LSTMs both exhibit in-context learning. Beyond the theory, experiments on GINC exhibit large-scale real-world phenomena including improved in-context performance with model scaling (despite the same pretraining loss), sensitivity to example order, and instances where zero-shot is better than few-shot in-context learning."]], "1_corpus": "arxiv"} |
|
{"tstamp": 1722448831.6873, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "7a79ffac6e0d41b3add4d6518d91cca8", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "What's the purpose of `if __name__ == '__main__'`?0_outputWhat's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: Why does it do this ? if - __name__ == '__main__' \nDuplicate of:\nWhat does if __name__== \"__main__\" do? \n\nConsider this code:\nif __name__ == '__main__':\n import pdb\n pdb.run(\"interact()\\n\")\n\nWhat does the following line mean?\nif(__name__=='__main__')\n\nI fainted.\n\nA: That is a check to see if you are directly running the script or if it is included in a library.\nWhen you run a python script like this:\npython myScript.py\n\nIt sends a parameter, telling you to run the programs first method, which is widely called \"main\", so when __name__ is __main__ you know that the program was executed from a command line or double clicked.\n"]], "0_corpus": "stackexchange", "1_conv_id": "4e34da23af094e5bae7f2ad3357c2fe9", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "What's the purpose of `if __name__ == '__main__'`?1_outputWhat's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: html dangerous tags to avoid while developing a chat application I am developing a chat application using PHP and jQuery... all messages sent by the chat pair are appended to a <div>. While doing this, I found that tags such as <script> can be entered by users. What tags should I avoid while appending the user's message?\n\nA: Avoid tags that will inject content into the page:\n\n\n*\n\n*applet\n\n*script\n\n*style\n\n*link\n\n*iframe\n\n\nOther tags may be necessary to remove, as well. Maybe you don't want to embed images. These are things that you will need to determine. At the bare minimum, allow <p> and <br />.\n"]], "1_corpus": "stackexchange"} |
|
{"tstamp": 1722451025.5473, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "a19bf7564c984e3ba9c90cb4ad0724c9", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "is harvard kennedy school part of harvard university", "0_output": [["is harvard kennedy school part of harvard university", "Harvard Kennedy School\n\nHKS courses Harvard Kennedy School maintains six academic divisions each headed by a faculty chair. In addition to offerings in the Harvard Kennedy School course listing, students are eligible to cross-register for courses at the other graduate and professional schools at Harvard and at the MIT Sloan School of Management, The Fletcher School of Law and Diplomacy at Tufts University, and the MIT School of Architecture and Planning. MPP coursework is focused on one of five areas, called a Policy Area of Concentration (PAC), and includes a year-long research seminar in their second year, which includes a master's thesis called a Policy Analysis Exercise.\nRankings\nHarvard Kennedy School has routinely ranked as the best, or among the best, of the world's public policy graduate schools. U.S. News & World Report ranks it the best graduate school for social policy, the best for health policy, and second best for public policy analysis. In 2015 rankings, Kennedy School is ranked first in the subcategory of health policy and second in the category of public policy analysis and social policy.\nKennedy's School's foreign affairs programs have consistently ranked at the top or near the top of Foreign Policy magazine's Inside the Ivory Tower survey, which lists the world's top twenty academic international relations programs at the undergraduate, Master's, and Ph.D. levels. In 2012, for example, the survey ranked Kennedy School first overall for doctoral and undergraduate programs and third overall in the Master's category.\nStudent organizations\nHarvard Kennedy School maintains a range of student activities, including interest-driven student caucuses, the student government (Kennedy School Student Government, known as KSSG), student-edited policy journals including Harvard Journal of Hispanic Policy, Kennedy School Review, the Journal of Middle Eastern Politics and Policy, a student newspaper (The Citizen), and a number of student athletic groups."]], "0_corpus": "wikipedia", "1_conv_id": "8ccc2c3a69d3474e967663216549eb5c", "1_model_name": "voyage-multilingual-2", "1_prompt": "is harvard kennedy school part of harvard university", "1_output": [["is harvard kennedy school part of harvard university", "Harvard Kennedy School\n\nIra Jackson (MPA '86), dean, Peter F. Drucker and Masatoshi Ito Graduate School of Management at Claremont Graduate University\nNancy Koehn (MPP), author, historian, and Harvard Business School business history professor\nMark Lilla (MPP '80), professor of humanities, Columbia University\nHollis Robbins (MPP '90), dean of humanities, University of Utah\nMark Schuster (MPP '88), dean and founding CEO, Kaiser Permanente Bernard J. Tyson School of Medicine\nStephen Joel Trachtenberg (MPA '66), former president, George Washington University\nWilliam E. 
Trueheart (MPA '73), former president, Bryant University\nJonathan Zittrain (MPA '95), professor of international law, Harvard Law School and co-founder of Harvard's Berkman Klein Center for Internet & Society\nArts\nWill Butler (MPA '17), musician and former member of Arcade Fire\nAshley Judd (MPA '10), actress and activist\nHill Harper (MPA '92), actor and author\nThor Steingraber (MPA '09), opera director\nDamian Woetzel (MPA '07), former principal dancer, New York City Ballet\nBusiness\nRune Bjerke (MPA '97), CEO, DNB ASA\nGregory C. Carr (MPP '86), founder, Boston Technology\nLeonard S. Coleman Jr. (MPA '75), former president of the National League\nBenjamin Fernandes (Exec. Ed'17), Tanzanian entrepreneur\nDebra L. Lee (MPP '80), President and CEO, Black Entertainment Television\nDaniel Mudd (MPA '86), former president and CEO of U.S. Fannie Mae"]], "1_corpus": "wikipedia"}
|
{"tstamp": null, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "92ab18d28fff48efba3b08078d47ad81", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "why does sx mean surgery?", "0_output": [["why does sx mean surgery?", "Stackoverflow Stackexchange\n\nQ: printf format for unsigned __int64 on Windows I need to print a ULONGLONG value (unsigned __int64). What format should i use in printf ?\nI found %llu in another question but they say it is for linux only.\nThanks for your help.\n\nA: I recommend you use PRIu64 format specified from a standard C library. It was designed to provide users with a format specifier for unsigned 64-bit integer across different architectures.\nHere is an example (in C, not C++):\n#include <stdint.h> /* For uint64_t */\n#include <inttypes.h> /* For PRIu64 */\n#include <stdio.h> /* For printf */\n#include <stdlib.h> /* For exit status */\n\nint main()\n{\n uint64_t n = 1986;\n printf(\ PRIu64 \, n);\n return EXIT_SUCCESS;\n}\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "81ae0eae1b3c462584fef0c6acb82a03", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "why does sx mean surgery?", "1_output": [["why does sx mean surgery?", "English Stackexchange\n\nQ: Is there a verb form of the word 'surgery'? Surgery as a noun is also performance of a surgical procedure. Surgical is an adjective. I couldn't find anywhere a verb for surgery. Like how you 'operate' for the noun operation. \n\nA: I would have said performing surgery. \n"]], "1_corpus": "stackexchange"}
|
|