Datasets: mteb

Muennighoff committed on
Commit 2cbe2f1 · verified · 1 Parent(s): 24015f2

Scheduled Commit

data/retrieval_battle-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl CHANGED
@@ -10,3 +10,7 @@
  {"tstamp": 1722444509.5213, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "17817cacbabd4dcf9fa57b72945df25a", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "What is the antipode of heaven?", "0_output": [["What is the antipode of heaven?", "Heaven in Christianity\n\nIn Christianity, heaven is traditionally the location of the throne of God and the angels of God, and in most forms of Christianity it is the abode of the righteous dead in the afterlife. In some Christian denominations it is understood as a temporary stage before the resurrection of the dead and the saints' return to the New Earth.\nIn the Book of Acts, the resurrected Jesus ascends to heaven where, as the Nicene Creed states, he now sits at the right hand of God and will return to earth in the Second Coming. According to Catholic, Eastern Orthodox, and Oriental Orthodox teaching, Mary, mother of Jesus, is said to have been assumed into heaven without the corruption of her earthly body; she is venerated as Queen of Heaven.\nIn the Christian Bible, concepts about Christian eschatology, the future \"kingdom of heaven\", and the resurrection of the dead are found, particularly in the book of Revelation and in 1 Corinthians 15.\nDescription\nThe Bible does not provide much information on what Heaven is supposed to be like. As a result, Christian theologians are usually not very specific when describing heaven.\nThe Book of Revelation states that the New Jerusalem will be transported from Heaven to Earth, rather than people from Earth going to Heaven. 
The description of the gates of New Jerusalem in inspired the idea of the Pearly gates, which is the informal name for the gateway to heaven according to some Christian denominations."]], "0_corpus": "wikipedia", "1_conv_id": "2d381decf1d1476fbbd0c1c804ed3088", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "What is the antipode of heaven?", "1_output": [["What is the antipode of heaven?", "Heaven\n\nHeaven, or the heavens, is a common religious cosmological or transcendent supernatural place where beings such as deities, angels, souls, saints, or venerated ancestors are said to originate, be enthroned, or reside. According to the beliefs of some religions, heavenly beings can descend to Earth or incarnate and earthly beings can ascend to Heaven in the afterlife or, in exceptional cases, enter Heaven without dying.\nHeaven is often described as a \"highest place\", the holiest place, a Paradise, in contrast to hell or the Underworld or the \"low places\" and universally or conditionally accessible by earthly beings according to various standards of divinity, goodness, piety, faith, or other virtues or right beliefs or simply divine will. Some believe in the possibility of a heaven on Earth in a world to come."]], "1_corpus": "wikipedia"}
  {"tstamp": 1722445209.7995, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "c4c4c875854a41f8a4f23c4dd4df5074", "0_model_name": "embed-english-v3.0", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. 
Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "0_corpus": "wikipedia", "1_conv_id": "25390507c1a14ef1b32bb1574a1c3f26", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "1_corpus": "wikipedia"}
  {"tstamp": 1722445580.5576, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "33d8228235394657b5ee7c61845ad49f", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "Who won the latest euros?", "0_output": [["Who won the latest euros?", "UEFA European Championship\n\nThe UEFA European Football Championship, less formally the European Championship and informally the Euro or Euros, is the primary association football tournament organised by the Union of European Football Associations (UEFA). The competition is contested by UEFA members' senior men's national teams, determining the continental champion of Europe. It is the second-most watched football tournament in the world after the FIFA World Cup; the Euro 2016 final was watched by a global audience of around 600 million. The competition has been held every four years since 1960, except for 2020, when it was postponed until 2021 due to the COVID-19 pandemic in Europe, but kept the name Euro 2020. Scheduled to be in the even-numbered year between FIFA World Cup tournaments, it was originally called the European Nations' Cup before changing to its current name in 1968. Since 1996, the individual events have been branded as \"UEFA Euro [year]\".\nBefore entering the tournament, all teams other than the host nations (which qualify automatically) compete in a qualifying process. Until 2016, the championship winners could compete in the following year's FIFA Confederations Cup, but were not obliged to do so. From the 2020 edition onwards, the winner competes in the CONMEBOL–UEFA Cup of Champions.\nThe seventeen European Championship tournaments have been won by ten national teams: Spain have won four titles, Germany have won three titles, Italy and France have won two titles, and the Soviet Union, Czechoslovakia, the Netherlands, Denmark, Greece and Portugal have won one title each. 
To date, Spain is the only team to have won consecutive titles, doing so in 2008 and 2012.\nThe most recent championship, held in Germany in 2024, was won by Spain, who lifted a record fourth European title after beating England 2–1 in the final at Olympiastadion in Berlin.\nHistory"]], "0_corpus": "wikipedia", "1_conv_id": "f03d04a064ae4d97aa465675ae705ebf", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Who won the latest euros?", "1_output": [["Who won the latest euros?", "UEFA Euro 2016\n\nThe 2016 UEFA European Football Championship, commonly referred to as UEFA Euro 2016 (stylised as UEFA EURO 2016) or simply Euro 2016, was the 15th UEFA European Championship, the quadrennial international men's football championship of Europe organised by UEFA. It was held in France from 10 June to 10 July 2016. Spain were the two-time defending champions, having won the 2008 and 2012 tournaments, but were eliminated in the round of 16 2-0 by Italy. Portugal won the tournament for the first time, following a 1–0 victory after extra time over the host team, France, in the final played at the Stade de France."]], "1_corpus": "wikipedia"}
+ {"tstamp": 1722445626.3012, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "8c2ce987427b42e0886f0713c13efbc9", "0_model_name": "BM25", "0_prompt": "when is us presidential elections", "0_output": [["when is us presidential elections", "2010 Ukrainian presidential election\n\nOn 21 December 2009, the Central Election Commission of Ukraine formed 113 foreign polling stations.\nVoters are permitted to vote at home during the presidential election.\nCosts\nThe Central Election Commission has estimated the budget of the holding of regular presidential elections in Ukraine at ₴1.5 billion, (approximately 200 million US dollars) with additional costs required by candidates to fund their campaigns.\nEach candidate is required to pay an election deposit of ₴2.5 million (Approximately US$300,000) The deposit will be refunded to the two highest polling candidates who progress to the second round of elections.\nOn 26 November, the Central Election Commission stated a total of ₴1.314 billion is required to hold the presidential election, including 192.2 million in 2009 and 1.122 billion in 2010.\nAssessments by political analysts show that each presidential candidate will have to spend at least US$150–200mn to promote himself; this includes buying story lines in the media, visual advertising, canvassing, printing political material and, work with electoral commissions.\nChairman of the Committee of Voters of Ukraine, Oleksandr Chernenko, also commented that presidential candidates will spend 1 billion US dollars on the election campaign\nThe cost of the run-off ballot is estimated to be US$119 million"]], "0_corpus": "wikipedia", "1_conv_id": "7919081d502b4943b3903c5a61983653", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "when is us presidential elections", "1_output": [["when is us presidential elections", "2024 United States presidential election\n\nThe 2024 United States presidential election will be the 
60th quadrennial presidential election, set to be held on Tuesday, November 5, 2024. Voters will elect a president and vice president for a term of four years. The incumbent president, Joe Biden, a member of the Democratic Party, is running for re-election. His predecessor, Donald Trump, a member of the Republican Party, is running for re-election for a second, non-consecutive term, after losing to Biden in the 2020 presidential election. If both are nominated, this will mark the seventh presidential election rematch, and the first since 1956. In the run-up to the election, on May 30, 2024, Trump was convicted of 34 felonies related to falsifying business records, becoming the first president to be found guilty of a crime. On July 13, Trump was shot in an assassination attempt at one of his rallies.\nThe winner of this election is scheduled to be inaugurated on January 20, 2025. The presidential election will take place at the same time as elections for the U.S. Senate, House, gubernatorial, and state legislatures. Biden and Trump secured a majority of delegates for the Democratic and Republican parties, respectively, on March 12, though they have yet to be confirmed at their nominating conventions. Robert F. Kennedy Jr. emerged as the highest-polling third-party presidential candidate since Ross Perot in the 1992 election, running as an independent.\nLeading campaign issues are expected to be abortion, border security and immigration, healthcare, education, the economy, foreign policy, LGBT rights, climate change, and democracy.\nBackground"]], "1_corpus": "wikipedia"}
+ {"tstamp": 1722445656.4983, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "d33c666d9a464315b179a286c0df9942", "0_model_name": "voyage-multilingual-2", "0_prompt": "Interpretable feature extraction in language models using autoencoders.", "0_output": [["Interpretable feature extraction in language models using autoencoders.", "Title: Sparse Autoencoders Find Highly Interpretable Features in Language Models\n\nAbstract: One of the roadblocks to a better understanding of neural networks' internals is \\textit{polysemanticity}, where neurons appear to activate in multiple, semantically distinct contexts. Polysemanticity prevents us from identifying concise, human-understandable explanations for what neural networks are doing internally. One hypothesised cause of polysemanticity is \\textit{superposition}, where neural networks represent more features than they have neurons by assigning features to an overcomplete set of directions in activation space, rather than to individual neurons. Here, we attempt to identify those directions, using sparse autoencoders to reconstruct the internal activations of a language model. These autoencoders learn sets of sparsely activating features that are more interpretable and monosemantic than directions identified by alternative approaches, where interpretability is measured by automated methods. Moreover, we show that with our learned set of features, we can pinpoint the features that are causally responsible for counterfactual behaviour on the indirect object identification task \\citep{wang2022interpretability} to a finer degree than previous decompositions. This work indicates that it is possible to resolve superposition in language models using a scalable, unsupervised method. 
Our method may serve as a foundation for future mechanistic interpretability work, which we hope will enable greater model transparency and steerability."]], "0_corpus": "arxiv", "1_conv_id": "a07117052afe44569b36cea1bcc52e86", "1_model_name": "BM25", "1_prompt": "Interpretable feature extraction in language models using autoencoders.", "1_output": [["Interpretable feature extraction in language models using autoencoders.", "Title: Scaling and evaluating sparse autoencoders\n\nAbstract: Sparse autoencoders provide a promising unsupervised approach for extracting interpretable features from a language model by reconstructing activations from a sparse bottleneck layer. Since language models learn many concepts, autoencoders need to be very large to recover all relevant features. However, studying the properties of autoencoder scaling is difficult due to the need to balance reconstruction and sparsity objectives and the presence of dead latents. We propose using k-sparse autoencoders [Makhzani and Frey, 2013] to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier. Additionally, we find modifications that result in few dead latents, even at the largest scales we tried. Using these techniques, we find clean scaling laws with respect to autoencoder size and sparsity. We also introduce several new metrics for evaluating feature quality based on the recovery of hypothesized features, the explainability of activation patterns, and the sparsity of downstream effects. These metrics all generally improve with autoencoder size. To demonstrate the scalability of our approach, we train a 16 million latent autoencoder on GPT-4 activations for 40 billion tokens. We release training code and autoencoders for open-source models, as well as a visualizer."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722445681.9899, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "dc5c3730a60e46b58e211818a759810b", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "is pizza also called pie?", "0_output": [["is pizza also called pie?", "Ell Stackexchange\n\nQ: What is this food called in English? I think technically it's a fruit although they strongly resemble a vegetable:\n\nWhat do you call these?\nPersonally, I know them as ‘paprika’.\nOthers say these are ‘bell peppers’, and only when ground into spice it's called paprika.\nBesides grinding them into spice, you can also make soup of it or sauce. Or put slices of this on a pizza. I would call that paprika soup, or paprika sauce, or paprika slices.\nIs this perhaps a difference between British vs American vs 'international' English?\nP.S. regarding the fruit vs vegetable issue: not really related to the question here but I think from a botanical point of view it's a fruit, and from a culinary point of view it's a vegetable.\n\nA: As well as being called bell peppers in the other answers, they are sometimes also called Capsicum in some English speaking countries, as per Wikipedia Bell pepper. Capsicum is actually the Genus name of the plant.\n"]], "0_corpus": "stackexchange", "1_conv_id": "fba47ce32dad426d85b4fcd99186db7a", "1_model_name": "voyage-multilingual-2", "1_prompt": "is pizza also called pie?", "1_output": [["is pizza also called pie?", "English Stackexchange\n\nQ: How to name a part of a piechart Which term best suits to describe a part/slice/share/portion of a piechart, disregarding what the chart is about ?\n\nA: Since each \"part\" is a sector of a circle, sector can be used IMO.\n"]], "1_corpus": "stackexchange"}
+ {"tstamp": 1722445702.9498, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "2822b7b032984b2896bc9c1b5d9477a8", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "0_output": [["A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "Title: End-to-End Open-Domain Question Answering with BERTserini\n\nAbstract: We demonstrate an end-to-end question answering system that integrates BERT with the open-source Anserini information retrieval toolkit. In contrast to most question answering and reading comprehension models today, which operate over small amounts of input text, our system integrates best practices from IR with a BERT-based reader to identify answers from a large corpus of Wikipedia articles in an end-to-end fashion. We report large improvements over previous results on a standard benchmark test collection, showing that fine-tuning pretrained BERT with SQuAD is sufficient to achieve high accuracy in identifying answer spans."]], "0_corpus": "arxiv", "1_conv_id": "e4223ef5fa88473386fbe4e0ee1c3016", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "1_output": [["A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "Title: Detection of the Geminga pulsar with MAGIC hints at a power-law tail emission beyond 15 GeV\n\nAbstract: We report the detection of pulsed gamma-ray emission from the Geminga pulsar (PSR J0633+1746) between $15\\,$GeV and $75\\,$GeV. This is the first time a middle-aged pulsar has been detected up to these energies. Observations were carried out with the MAGIC telescopes between 2017 and 2019 using the low-energy threshold Sum-Trigger-II system. 
After quality selection cuts, $\\sim 80\\,$hours of observational data were used for this analysis. To compare with the emission at lower energies below the sensitivity range of MAGIC, $11$ years of Fermi-LAT data above $100\\,$MeV were also analysed. From the two pulses per rotation seen by Fermi-LAT, only the second one, P2, is detected in the MAGIC energy range, with a significance of $6.3\\,\\sigma$. The spectrum measured by MAGIC is well-represented by a simple power law of spectral index $\\Gamma= 5.62\\pm0.54$, which smoothly extends the Fermi-LAT spectrum. A joint fit to MAGIC and Fermi-LAT data rules out the existence of a sub-exponential cut-off in the combined energy range at the $3.6\\,\\sigma$ significance level. The power-law tail emission detected by MAGIC is interpreted as the transition from curvature radiation to Inverse Compton Scattering of particles accelerated in the northern outer gap."]], "1_corpus": "arxiv"}
data/retrieval_individual-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl CHANGED
@@ -26,3 +26,13 @@
  {"tstamp": 1722445343.7295, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722445342.5281, "finish": 1722445343.7295, "ip": "", "conv_id": "4ab9f7d1179f49ff8c1b741dd7ab39ef", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "A study on improving traffic forecasting accuracy by combining future traffic volume with historical data.", "output": [["A study on improving traffic forecasting accuracy by combining future traffic volume with historical data.", "Title: Informed along the road: roadway capacity driven graph convolution network for network-wide traffic prediction\n\nAbstract: While deep learning has shown success in predicting traffic states, most methods treat it as a general prediction task without considering transportation aspects. Recently, graph neural networks have proven effective for this task, but few incorporate external factors that impact roadway capacity and traffic flow. This study introduces the Roadway Capacity Driven Graph Convolution Network (RCDGCN) model, which incorporates static and dynamic roadway capacity attributes in spatio-temporal settings to predict network-wide traffic states. The model was evaluated on two real-world datasets with different transportation factors: the ICM-495 highway network and an urban network in Manhattan, New York City. Results show RCDGCN outperformed baseline methods in forecasting accuracy. Analyses, including ablation experiments, weight analysis, and case studies, investigated the effect of capacity-related factors. The study demonstrates the potential of using RCDGCN for transportation system management."]], "corpus": "arxiv"}
  {"tstamp": 1722445564.3919, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722445564.0756, "finish": 1722445564.3919, "ip": "", "conv_id": "33d8228235394657b5ee7c61845ad49f", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "Who won the latest euros?", "output": [["Who won the latest euros?", "UEFA European Championship\n\nThe UEFA European Football Championship, less formally the European Championship and informally the Euro or Euros, is the primary association football tournament organised by the Union of European Football Associations (UEFA). The competition is contested by UEFA members' senior men's national teams, determining the continental champion of Europe. It is the second-most watched football tournament in the world after the FIFA World Cup; the Euro 2016 final was watched by a global audience of around 600 million. The competition has been held every four years since 1960, except for 2020, when it was postponed until 2021 due to the COVID-19 pandemic in Europe, but kept the name Euro 2020. Scheduled to be in the even-numbered year between FIFA World Cup tournaments, it was originally called the European Nations' Cup before changing to its current name in 1968. Since 1996, the individual events have been branded as \"UEFA Euro [year]\".\nBefore entering the tournament, all teams other than the host nations (which qualify automatically) compete in a qualifying process. Until 2016, the championship winners could compete in the following year's FIFA Confederations Cup, but were not obliged to do so. From the 2020 edition onwards, the winner competes in the CONMEBOL–UEFA Cup of Champions.\nThe seventeen European Championship tournaments have been won by ten national teams: Spain have won four titles, Germany have won three titles, Italy and France have won two titles, and the Soviet Union, Czechoslovakia, the Netherlands, Denmark, Greece and Portugal have won one title each. To date, Spain is the only team to have won consecutive titles, doing so in 2008 and 2012.\nThe most recent championship, held in Germany in 2024, was won by Spain, who lifted a record fourth European title after beating England 2–1 in the final at Olympiastadion in Berlin.\nHistory"]], "corpus": "wikipedia"}
  {"tstamp": 1722445564.3919, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722445564.0756, "finish": 1722445564.3919, "ip": "", "conv_id": "f03d04a064ae4d97aa465675ae705ebf", "model_name": "GritLM/GritLM-7B", "prompt": "Who won the latest euros?", "output": [["Who won the latest euros?", "UEFA Euro 2016\n\nThe 2016 UEFA European Football Championship, commonly referred to as UEFA Euro 2016 (stylised as UEFA EURO 2016) or simply Euro 2016, was the 15th UEFA European Championship, the quadrennial international men's football championship of Europe organised by UEFA. It was held in France from 10 June to 10 July 2016. Spain were the two-time defending champions, having won the 2008 and 2012 tournaments, but were eliminated in the round of 16 2-0 by Italy. Portugal won the tournament for the first time, following a 1–0 victory after extra time over the host team, France, in the final played at the Stade de France."]], "corpus": "wikipedia"}
+ {"tstamp": 1722445615.8123, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722445615.4259, "finish": 1722445615.8123, "ip": "", "conv_id": "8c2ce987427b42e0886f0713c13efbc9", "model_name": "BM25", "prompt": "when is us presidential elections", "output": [["when is us presidential elections", "2010 Ukrainian presidential election\n\nOn 21 December 2009, the Central Election Commission of Ukraine formed 113 foreign polling stations.\nVoters are permitted to vote at home during the presidential election.\nCosts\nThe Central Election Commission has estimated the budget of the holding of regular presidential elections in Ukraine at ₴1.5 billion, (approximately 200 million US dollars) with additional costs required by candidates to fund their campaigns.\nEach candidate is required to pay an election deposit of ₴2.5 million (Approximately US$300,000) The deposit will be refunded to the two highest polling candidates who progress to the second round of elections.\nOn 26 November, the Central Election Commission stated a total of ₴1.314 billion is required to hold the presidential election, including 192.2 million in 2009 and 1.122 billion in 2010.\nAssessments by political analysts show that each presidential candidate will have to spend at least US$150–200mn to promote himself; this includes buying story lines in the media, visual advertising, canvassing, printing political material and, work with electoral commissions.\nChairman of the Committee of Voters of Ukraine, Oleksandr Chernenko, also commented that presidential candidates will spend 1 billion US dollars on the election campaign\nThe cost of the run-off ballot is estimated to be US$119 million"]], "corpus": "wikipedia"}
+ {"tstamp": 1722445615.8123, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722445615.4259, "finish": 1722445615.8123, "ip": "", "conv_id": "7919081d502b4943b3903c5a61983653", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "when is us presidential elections", "output": [["when is us presidential elections", "2024 United States presidential election\n\nThe 2024 United States presidential election will be the 60th quadrennial presidential election, set to be held on Tuesday, November 5, 2024. Voters will elect a president and vice president for a term of four years. The incumbent president, Joe Biden, a member of the Democratic Party, is running for re-election. His predecessor, Donald Trump, a member of the Republican Party, is running for re-election for a second, non-consecutive term, after losing to Biden in the 2020 presidential election. If both are nominated, this will mark the seventh presidential election rematch, and the first since 1956. In the run-up to the election, on May 30, 2024, Trump was convicted of 34 felonies related to falsifying business records, becoming the first president to be found guilty of a crime. On July 13, Trump was shot in an assassination attempt at one of his rallies.\nThe winner of this election is scheduled to be inaugurated on January 20, 2025. The presidential election will take place at the same time as elections for the U.S. Senate, House, gubernatorial, and state legislatures. Biden and Trump secured a majority of delegates for the Democratic and Republican parties, respectively, on March 12, though they have yet to be confirmed at their nominating conventions. Robert F. Kennedy Jr. emerged as the highest-polling third-party presidential candidate since Ross Perot in the 1992 election, running as an independent.\nLeading campaign issues are expected to be abortion, border security and immigration, healthcare, education, the economy, foreign policy, LGBT rights, climate change, and democracy.\nBackground"]], "corpus": "wikipedia"}
+ {"tstamp": 1722445645.4829, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722445644.79, "finish": 1722445645.4829, "ip": "", "conv_id": "d33c666d9a464315b179a286c0df9942", "model_name": "voyage-multilingual-2", "prompt": "Interpretable feature extraction in language models using autoencoders.", "output": [["Interpretable feature extraction in language models using autoencoders.", "Title: Sparse Autoencoders Find Highly Interpretable Features in Language Models\n\nAbstract: One of the roadblocks to a better understanding of neural networks' internals is \\textit{polysemanticity}, where neurons appear to activate in multiple, semantically distinct contexts. Polysemanticity prevents us from identifying concise, human-understandable explanations for what neural networks are doing internally. One hypothesised cause of polysemanticity is \\textit{superposition}, where neural networks represent more features than they have neurons by assigning features to an overcomplete set of directions in activation space, rather than to individual neurons. Here, we attempt to identify those directions, using sparse autoencoders to reconstruct the internal activations of a language model. These autoencoders learn sets of sparsely activating features that are more interpretable and monosemantic than directions identified by alternative approaches, where interpretability is measured by automated methods. Moreover, we show that with our learned set of features, we can pinpoint the features that are causally responsible for counterfactual behaviour on the indirect object identification task \\citep{wang2022interpretability} to a finer degree than previous decompositions. This work indicates that it is possible to resolve superposition in language models using a scalable, unsupervised method. Our method may serve as a foundation for future mechanistic interpretability work, which we hope will enable greater model transparency and steerability."]], "corpus": "arxiv"}
+ {"tstamp": 1722445645.4829, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722445644.79, "finish": 1722445645.4829, "ip": "", "conv_id": "a07117052afe44569b36cea1bcc52e86", "model_name": "BM25", "prompt": "Interpretable feature extraction in language models using autoencoders.", "output": [["Interpretable feature extraction in language models using autoencoders.", "Title: Scaling and evaluating sparse autoencoders\n\nAbstract: Sparse autoencoders provide a promising unsupervised approach for extracting interpretable features from a language model by reconstructing activations from a sparse bottleneck layer. Since language models learn many concepts, autoencoders need to be very large to recover all relevant features. However, studying the properties of autoencoder scaling is difficult due to the need to balance reconstruction and sparsity objectives and the presence of dead latents. We propose using k-sparse autoencoders [Makhzani and Frey, 2013] to directly control sparsity, simplifying tuning and improving the reconstruction-sparsity frontier. Additionally, we find modifications that result in few dead latents, even at the largest scales we tried. Using these techniques, we find clean scaling laws with respect to autoencoder size and sparsity. We also introduce several new metrics for evaluating feature quality based on the recovery of hypothesized features, the explainability of activation patterns, and the sparsity of downstream effects. These metrics all generally improve with autoencoder size. To demonstrate the scalability of our approach, we train a 16 million latent autoencoder on GPT-4 activations for 40 billion tokens. We release training code and autoencoders for open-source models, as well as a visualizer."]], "corpus": "arxiv"}
+ {"tstamp": 1722445668.3212, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722445667.9697, "finish": 1722445668.3212, "ip": "", "conv_id": "dc5c3730a60e46b58e211818a759810b", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "is pizza also called pie?", "output": [["is pizza also called pie?", "Ell Stackexchange\n\nQ: What is this food called in English? I think technically it's a fruit although they strongly resemble a vegetable:\n\nWhat do you call these?\nPersonally, I know them as ‘paprika’.\nOthers say these are ‘bell peppers’, and only when ground into spice it's called paprika.\nBesides grinding them into spice, you can also make soup of it or sauce. Or put slices of this on a pizza. I would call that paprika soup, or paprika sauce, or paprika slices.\nIs this perhaps a difference between British vs American vs 'international' English?\nP.S. regarding the fruit vs vegetable issue: not really related to the question here but I think from a botanical point of view it's a fruit, and from a culinary point of view it's a vegetable.\n\nA: As well as being called bell peppers in the other answers, they are sometimes also called Capsicum in some English speaking countries, as per Wikipedia Bell pepper. Capsicum is actually the Genus name of the plant.\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1722445668.3212, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722445667.9697, "finish": 1722445668.3212, "ip": "", "conv_id": "fba47ce32dad426d85b4fcd99186db7a", "model_name": "voyage-multilingual-2", "prompt": "is pizza also called pie?", "output": [["is pizza also called pie?", "English Stackexchange\n\nQ: How to name a part of a piechart Which term best suits to describe a part/slice/share/portion of a piechart, disregarding what the chart is about ?\n\nA: Since each \"part\" is a sector of a circle, sector can be used IMO.\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1722445691.2185, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722445691.0107, "finish": 1722445691.2185, "ip": "", "conv_id": "2822b7b032984b2896bc9c1b5d9477a8", "model_name": "GritLM/GritLM-7B", "prompt": "A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "output": [["A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "Title: End-to-End Open-Domain Question Answering with BERTserini\n\nAbstract: We demonstrate an end-to-end question answering system that integrates BERT with the open-source Anserini information retrieval toolkit. In contrast to most question answering and reading comprehension models today, which operate over small amounts of input text, our system integrates best practices from IR with a BERT-based reader to identify answers from a large corpus of Wikipedia articles in an end-to-end fashion. We report large improvements over previous results on a standard benchmark test collection, showing that fine-tuning pretrained BERT with SQuAD is sufficient to achieve high accuracy in identifying answer spans."]], "corpus": "arxiv"}
+ {"tstamp": 1722445691.2185, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722445691.0107, "finish": 1722445691.2185, "ip": "", "conv_id": "e4223ef5fa88473386fbe4e0ee1c3016", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "output": [["A system integrating BERT and IR techniques to enhance open-domain QA performance on Wikipedia.", "Title: Detection of the Geminga pulsar with MAGIC hints at a power-law tail emission beyond 15 GeV\n\nAbstract: We report the detection of pulsed gamma-ray emission from the Geminga pulsar (PSR J0633+1746) between $15\\,$GeV and $75\\,$GeV. This is the first time a middle-aged pulsar has been detected up to these energies. Observations were carried out with the MAGIC telescopes between 2017 and 2019 using the low-energy threshold Sum-Trigger-II system. After quality selection cuts, $\\sim 80\\,$hours of observational data were used for this analysis. To compare with the emission at lower energies below the sensitivity range of MAGIC, $11$ years of Fermi-LAT data above $100\\,$MeV were also analysed. From the two pulses per rotation seen by Fermi-LAT, only the second one, P2, is detected in the MAGIC energy range, with a significance of $6.3\\,\\sigma$. The spectrum measured by MAGIC is well-represented by a simple power law of spectral index $\\Gamma= 5.62\\pm0.54$, which smoothly extends the Fermi-LAT spectrum. A joint fit to MAGIC and Fermi-LAT data rules out the existence of a sub-exponential cut-off in the combined energy range at the $3.6\\,\\sigma$ significance level. The power-law tail emission detected by MAGIC is interpreted as the transition from curvature radiation to Inverse Compton Scattering of particles accelerated in the northern outer gap."]], "corpus": "arxiv"}
+ {"tstamp": 1722445710.4902, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722445710.3004, "finish": 1722445710.4902, "ip": "", "conv_id": "55af47c6ca9c4ac7a77ee1e484d2f648", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "output": [["Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "Title: Is Nothing Sacred? Vacuum Energy, Supersymmetry and Lorentz Breaking from Recoiling D branes\n\nAbstract: Classical superstring vacua have zero vacuum energy and are supersymmetric and Lorentz-invariant. We argue that all these properties may be destroyed when quantum aspects of the interactions between particles and non-perturbative vacuum fluctuations are considered. A toy calculation of string/D-brane interactions using a world-sheet approach indicates that quantum recoil effects - reflecting the gravitational back-reaction on space-time foam due to the propagation of energetic particles - induce non-zero vacuum energy that is linked to supersymmetry breaking and breaks Lorentz invariance. This model of space-time foam also suggests the appearance of microscopic event horizons."]], "corpus": "arxiv"}
+ {"tstamp": 1722445710.4902, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722445710.3004, "finish": 1722445710.4902, "ip": "", "conv_id": "8b2315872e5949a7a35825b601d43977", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "output": [["Chinese text encoder with n-gram representations achieving state-of-the-art performance.", "Title: Character-level Chinese-English Translation through ASCII Encoding\n\nAbstract: Character-level Neural Machine Translation (NMT) models have recently achieved impressive results on many language pairs. They mainly do well for Indo-European language pairs, where the languages share the same writing system. However, for translating between Chinese and English, the gap between the two different writing systems poses a major challenge because of a lack of systematic correspondence between the individual linguistic units. In this paper, we enable character-level NMT for Chinese, by breaking down Chinese characters into linguistic units similar to that of Indo-European languages. We use the Wubi encoding scheme, which preserves the original shape and semantic information of the characters, while also being reversible. We show promising results from training Wubi-based models on the character- and subword-level with recurrent as well as convolutional models."]], "corpus": "arxiv"}