Dataset: mteb / · Modalities: Tabular, Text · Format: JSON · Libraries: Datasets, Dask
Muennighoff committed · Commit c866193 · verified · 1 parent: b0c45c6

Scheduled Commit
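The commit adds records to line-delimited JSON (JSONL) files. As a minimal sketch (not part of the commit; it assumes the in-repo paths below are available locally), the Hugging Face `datasets` JSON loader can read them directly:

```python
# Hypothetical usage sketch: load a JSONL file touched by this commit.
# The path is the in-repo path from the diff; adjust it if cloned elsewhere.
from datasets import load_dataset

battle = load_dataset(
    "json",
    data_files="data/retrieval_battle-3fa403cf-f7b9-48f4-8ec7-02929ab51cb5.jsonl",
    split="train",
)
print(battle.column_names)     # e.g. ["tstamp", "task_type", "type", ...]
print(battle[0]["task_type"])  # "retrieval"
```

Because the files are line-delimited JSON, pandas or Dask can read the same paths for larger-scale processing.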
data/retrieval_battle-3fa403cf-f7b9-48f4-8ec7-02929ab51cb5.jsonl CHANGED
@@ -3,3 +3,5 @@
  {"tstamp": 1722614262.0132, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "7bc72d4f16ae4e4896190189d05deb64", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "Methods to resolve superposition in neural networks for enhancing interpretability.", "0_output": [["Methods to resolve superposition in neural networks for enhancing interpretability.", "Title: Identifying Interpretable Visual Features in Artificial and Biological Neural Systems\n\nAbstract: Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features. However, many neurons exhibit $\\textit{mixed selectivity}$, i.e., they represent multiple unrelated features. A recent hypothesis proposes that features in deep networks may be represented in $\\textit{superposition}$, i.e., on non-orthogonal axes by multiple neurons, since the number of possible interpretable features in natural data is generally larger than the number of neurons in a given network. Accordingly, we should be able to find meaningful directions in activation space that are not aligned with individual neurons. Here, we propose (1) an automated method for quantifying visual interpretability that is validated against a large database of human psychophysics judgments of neuron interpretability, and (2) an approach for finding meaningful directions in network activation space. We leverage these methods to discover directions in convolutional neural networks that are more intuitively meaningful than individual neurons, as we confirm and investigate in a series of analyses. Moreover, we apply the same method to three recent datasets of visual neural responses in the brain and find that our conclusions largely transfer to real neural data, suggesting that superposition might be deployed by the brain. This also provides a link with disentanglement and raises fundamental questions about robust, efficient and factorized representations in both artificial and biological neural systems."]], "0_corpus": "arxiv", "1_conv_id": "a121ba5ad2284d10a26b846f00bb7fcf", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "Methods to resolve superposition in neural networks for enhancing interpretability.", "1_output": [["Methods to resolve superposition in neural networks for enhancing interpretability.", "Title: Nonlinear Perturbation Theory\n\nAbstract: An explicit perturbative solution to all orders is given for a general class of nonlinear differential equations. This solution is written as a sum indexed by rooted trees and uses the Green function of a linearization of the equations. The modifications due to the presence of zero-modes is considered. Possible divergence of the integrals can be avoided by using approximate Green functions."]], "1_corpus": "arxiv"}
  {"tstamp": 1722614312.8005, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "0e3fd6458f354e40bdf036b04b2c7350", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "A paper that applies Transformer architecture to operator learning problems in PDEs.", "0_output": [["A paper that applies Transformer architecture to operator learning problems in PDEs.", "Title: Scenarios of domain pattern formation in a reaction-diffusion system\n\nAbstract: We performed an extensive numerical study of a two-dimensional reaction-diffusion system of the activator-inhibitor type in which domain patterns can form. We showed that both multidomain and labyrinthine patterns may form spontaneously as a result of Turing instability. In the stable homogeneous system with the fast inhibitor one can excite both localized and extended patterns by applying a localized stimulus. Depending on the parameters and the excitation level of the system stripes, spots, wriggled stripes, or labyrinthine patterns form. The labyrinthine patterns may be both connected and disconnected. In the the stable homogeneous system with the slow inhibitor one can excite self-replicating spots, breathing patterns, autowaves and turbulence. The parameter regions in which different types of patterns are realized are explained on the basis of the asymptotic theory of instabilities for patterns with sharp interfaces developed by us in Phys. Rev. E. 53, 3101 (1996). The dynamics of the patterns observed in our simulations is very similar to that of the patterns forming in the ferrocyanide-iodate-sulfite reaction."]], "0_corpus": "arxiv", "1_conv_id": "2118993c2cfd41aa8de71a423df87b72", "1_model_name": "text-embedding-3-large", "1_prompt": "A paper that applies Transformer architecture to operator learning problems in PDEs.", "1_output": [["A paper that applies Transformer architecture to operator learning problems in PDEs.", "Title: Choose a Transformer: Fourier or Galerkin\n\nAbstract: In this paper, we apply the self-attention from the state-of-the-art Transformer in Attention Is All You Need for the first time to a data-driven operator learning problem related to partial differential equations. An effort is put together to explain the heuristics of, and to improve the efficacy of the attention mechanism. By employing the operator approximation theory in Hilbert spaces, it is demonstrated for the first time that the softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without softmax, the approximation capacity of a linearized Transformer variant can be proved to be comparable to a Petrov-Galerkin projection layer-wise, and the estimate is independent with respect to the sequence length. A new layer normalization scheme mimicking the Petrov-Galerkin projection is proposed to allow a scaling to propagate through attention layers, which helps the model achieve remarkable accuracy in operator learning tasks with unnormalized data. Finally, we present three operator learning experiments, including the viscid Burgers' equation, an interface Darcy flow, and an inverse interface coefficient identification problem. The newly proposed simple attention-based operator learner, Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts."]], "1_corpus": "arxiv"}
  {"tstamp": 1722614346.0652, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "8becc0806122486ca9af84480ad04ef1", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "0_output": [["Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "Title: Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks\n\nAbstract: Pretraining sentence encoders with language modeling and related unsupervised tasks has recently been shown to be very effective for language understanding tasks. By supplementing language model-style pretraining with further training on data-rich supervised tasks, such as natural language inference, we obtain additional performance improvements on the GLUE benchmark. Applying supplementary training on BERT (Devlin et al., 2018), we attain a GLUE score of 81.8---the state of the art (as of 02/24/2019) and a 1.4 point improvement over BERT. We also observe reduced variance across random restarts in this setting. Our approach yields similar improvements when applied to ELMo (Peters et al., 2018a) and Radford et al. (2018)'s model. In addition, the benefits of supplementary training are particularly pronounced in data-constrained regimes, as we show in experiments with artificially limited training data."]], "0_corpus": "arxiv", "1_conv_id": "79f665134b4b46e190d9a9387bd4667b", "1_model_name": "text-embedding-004", "1_prompt": "Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "1_output": [["Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "Title: Pre-Training a Language Model Without Human Language\n\nAbstract: In this paper, we study how the intrinsic nature of pre-training data contributes to the fine-tuned downstream performance. To this end, we pre-train different transformer-based masked language models on several corpora with certain features, and we fine-tune those language models on GLUE benchmarks. We find that models pre-trained on unstructured data beat those trained directly from scratch on downstream tasks. Our results also show that pre-training on structured data does not always make the model acquire ability that can be transferred to natural language downstream tasks. To our great astonishment, we uncover that pre-training on certain non-human language data gives GLUE performance close to performance pre-trained on another non-English language."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722614969.2646, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "7f1a9311a88f4b8e85d64832fbfc5c41", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "0_output": [["Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "Title: Learning Private Neural Language Modeling with Attentive Aggregation\n\nAbstract: Mobile keyboard suggestion is typically regarded as a word-level language modeling problem. Centralized machine learning technique requires massive user data collected to train on, which may impose privacy concerns for sensitive personal typing data of users. Federated learning (FL) provides a promising approach to learning private language modeling for intelligent personalized keyboard suggestion by training models in distributed clients rather than training in a central server. To obtain a global model for prediction, existing FL algorithms simply average the client models and ignore the importance of each client during model aggregation. Furthermore, there is no optimization for learning a well-generalized global model on the central server. To solve these problems, we propose a novel model aggregation with the attention mechanism considering the contribution of clients models to the global model, together with an optimization technique during server aggregation. Our proposed attentive aggregation method minimizes the weighted distance between the server model and client models through iterative parameters updating while attends the distance between the server model and client models. Through experiments on two popular language modeling datasets and a social media dataset, our proposed method outperforms its counterparts in terms of perplexity and communication cost in most settings of comparison."]], "0_corpus": "arxiv", "1_conv_id": "6393a25824bc47d2b265aaf82a72803d", "1_model_name": "text-embedding-3-large", "1_prompt": "Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "1_output": [["Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "Title: Federated Learning for Mobile Keyboard Prediction\n\nAbstract: We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones. Server-based training using stochastic gradient descent is compared with training on client devices using the Federated Averaging algorithm. The federated algorithm, which enables training on a higher-quality dataset for this use case, is shown to achieve better prediction recall. This work demonstrates the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers. The federated learning environment gives users greater control over the use of their data and simplifies the task of incorporating privacy by default with distributed training and aggregation across a population of client devices."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722614995.1738, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "e2afa09adcbd49efa8a47de8a0a6135f", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "the recipient of first jnanpith award was an author which language", "0_output": [["the recipient of first jnanpith award was an author which language", "Jnanpith Award\n\nFrom 1965 till 1981, the award was given to the authors for their \"most outstanding work\" and consisted of a citation plaque, a cash prize and a bronze replica of Saraswati, the Hindu goddess of knowledge and wisdom. The first recipient of the award was the Malayalam writer G. Sankara Kurup who received the award in 1965 for his collection of poems, Odakkuzhal (The Bamboo Flute), published in 1950. The rules were revised in subsequent years to consider only works published during the preceding twenty years, excluding the year for which the award was to be given and the cash prize was increased to from 1981.\n, the cash prize has been revised to and out of twenty-three eligible languages the award has been presented for works in sixteen languages: Hindi (eleven), Kannada (eight), Bengali and Malayalam (six each), Urdu (five) Gujarati, Marathi, Odia (four each), Assamese and Telugu (three each), Punjabi, Tamil, Konkani and Sanskrit (two each), English, Kashmiri and (one each). The award has been conferred upon fifty-eight writers including eight women authors. In 1976, Bengali novelist Ashapoorna Devi became the first woman to win the award and was honoured for the 1965 novel Prothom Protishruti (The First Promise), the first in a trilogy. The most recent recipients of the award are Sanskrit scholar Rambhadracharya and Urdu writer, Bollywood lyricist Gulzar jointly awarded for the year of 2023."]], "0_corpus": "wikipedia", "1_conv_id": "248bf37201d04c08a4f4bae15838c4ba", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "the recipient of first jnanpith award was an author which language", "1_output": [["the recipient of first jnanpith award was an author which language", "Jnanpith Award\n\nThe first Selection Board consisted of Kalelkar, Niharranjan Ray, Karan Singh, R. R. Diwakar, V. Raghavan, B. Gopal Reddy, Harekrushna Mahatab, Rama Jain, and Lakshmi Chandra Jain and was headed by Sampurnanand. Works that were published between 1921 and 1951 were considered for the first award. The nine language committees that were formed were to submit to the board nominations along with translations of the work into Hindi or English. The final round had four authors; Kazi Nazrul Islam (Bengali), D. V. Gundappa (Kannada), Viswanatha Satyanarayana (Telugu), and G. Sankara Kurup (Malayalam). On 19 November 1966, Kurup was presented with the citation, statue of Saraswati, and a cheque for prize of at a ceremony held at Vigyan Bhavan, Delhi. In his acceptance speech, Kurup appreciated the concept of the new award and thanked it for bringing \"integration of the diverse people of this land on a spiritual plane\".\nRules and selection process\nThe nominations for the award are received from various literary experts, teachers, critics, universities, and numerous literary and language associations. Every three years, an advisory committee is constituted for each of the languages. The language of the most recent recipient's work is not eligible for consideration for the next two years. Each committee consists of three literary critics and scholars of their respective languages. 
All the nominations are scrutinised by the committee and their recommendations are submitted to the Jnanpith Award Selection Board.\nThe Selection Board consists of between seven and eleven members of \"high repute and integrity\". Each member is part of the committee for a term of three years which can also be extended further for two more terms. The recommendations of all language advisory committees are evaluated by the board based on complete or partial translations of the selected writings of the proposed writers into Hindi or English. The recipient for a particular year is announced by the Selection Board, which has final authority in selection."]], "1_corpus": "wikipedia"}
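Each battle record above names the two competing models in "0_model_name" and "1_model_name" and stores the human judgment in "type" ("leftvote", "rightvote", or "tievote"). A minimal sketch, assuming the battle file is read from the local path in the file header, of tallying wins per model:

```python
# Sketch only: count votes per model from the battle JSONL records.
import json
from collections import Counter

wins = Counter()
with open("data/retrieval_battle-3fa403cf-f7b9-48f4-8ec7-02929ab51cb5.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        if rec["type"] == "leftvote":        # left model preferred
            wins[rec["0_model_name"]] += 1
        elif rec["type"] == "rightvote":     # right model preferred
            wins[rec["1_model_name"]] += 1
        elif rec["type"] == "tievote":       # tie credits both models
            wins[rec["0_model_name"]] += 1
            wins[rec["1_model_name"]] += 1

print(wins.most_common())
```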
data/retrieval_individual-1948e9c4-e613-4487-b12e-0e988013b994.jsonl ADDED
@@ -0,0 +1,2 @@
+ {"tstamp": 1722615062.4729, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722615061.4283, "finish": 1722615062.4729, "ip": "", "conv_id": "2496908d1f3749c48016b3b0ee6db257", "model_name": "GritLM/GritLM-7B", "prompt": "what is the system of government in japan", "output": [["what is the system of government in japan", "Japan\n\nGovernment and politics Japan is a unitary state and constitutional monarchy in which the power of the Emperor is limited to a ceremonial role. Executive power is instead wielded by the Prime Minister of Japan and his Cabinet, whose sovereignty is vested in the Japanese people. Naruhito is the Emperor of Japan, having succeeded his father Akihito upon his accession to the Chrysanthemum Throne in 2019.\nJapan's legislative organ is the National Diet, a bicameral parliament. It consists of a lower House of Representatives with 465 seats, elected by popular vote every four years or when dissolved, and an upper House of Councillors with 245 seats, whose popularly-elected members serve six-year terms. There is universal suffrage for adults over 18 years of age, with a secret ballot for all elected offices. The prime minister as the head of government has the power to appoint and dismiss Ministers of State, and is appointed by the emperor after being designated from among the members of the Diet. Fumio Kishida is Japan's prime minister; he took office after winning the 2021 Liberal Democratic Party leadership election. The right-wing big tent Liberal Democratic Party has been the dominant party in the country since the 1950s, often called the 1955 System."]], "corpus": "wikipedia"}
+ {"tstamp": 1722615062.4729, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722615061.4283, "finish": 1722615062.4729, "ip": "", "conv_id": "0d3f6e40fab24acfbaa53db54a6865da", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "what is the system of government in japan", "output": [["what is the system of government in japan", "Government of Japan\n\nThe Government of Japan consists of legislative, executive and judiciary branches and is based on popular sovereignty. The Government runs under the framework established by the Constitution of Japan, adopted in 1947. It is a unitary state, containing forty-seven administrative divisions, with the Emperor as its Head of State. His role is ceremonial and he has no powers related to the Government. Instead, it is the Cabinet, comprising the Ministers of State and the Prime Minister, that directs and controls the Government and the civil service. The Cabinet has the executive power and is formed by the Prime Minister, who is the Head of Government. The Prime Minister is nominated by the National Diet and appointed to office by the Emperor."]], "corpus": "wikipedia"}
data/retrieval_individual-3fa403cf-f7b9-48f4-8ec7-02929ab51cb5.jsonl CHANGED
@@ -18,3 +18,7 @@
  {"tstamp": 1722614319.4459, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722614319.0159, "finish": 1722614319.4459, "ip": "", "conv_id": "79f665134b4b46e190d9a9387bd4667b", "model_name": "text-embedding-004", "prompt": "Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "output": [["Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "Title: Pre-Training a Language Model Without Human Language\n\nAbstract: In this paper, we study how the intrinsic nature of pre-training data contributes to the fine-tuned downstream performance. To this end, we pre-train different transformer-based masked language models on several corpora with certain features, and we fine-tune those language models on GLUE benchmarks. We find that models pre-trained on unstructured data beat those trained directly from scratch on downstream tasks. Our results also show that pre-training on structured data does not always make the model acquire ability that can be transferred to natural language downstream tasks. To our great astonishment, we uncover that pre-training on certain non-human language data gives GLUE performance close to performance pre-trained on another non-English language."]], "corpus": "arxiv"}
  {"tstamp": 1722614353.3371, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722614352.8759, "finish": 1722614353.3371, "ip": "", "conv_id": "7f1a9311a88f4b8e85d64832fbfc5c41", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "output": [["Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "Title: Learning Private Neural Language Modeling with Attentive Aggregation\n\nAbstract: Mobile keyboard suggestion is typically regarded as a word-level language modeling problem. Centralized machine learning technique requires massive user data collected to train on, which may impose privacy concerns for sensitive personal typing data of users. Federated learning (FL) provides a promising approach to learning private language modeling for intelligent personalized keyboard suggestion by training models in distributed clients rather than training in a central server. To obtain a global model for prediction, existing FL algorithms simply average the client models and ignore the importance of each client during model aggregation. Furthermore, there is no optimization for learning a well-generalized global model on the central server. To solve these problems, we propose a novel model aggregation with the attention mechanism considering the contribution of clients models to the global model, together with an optimization technique during server aggregation. Our proposed attentive aggregation method minimizes the weighted distance between the server model and client models through iterative parameters updating while attends the distance between the server model and client models. Through experiments on two popular language modeling datasets and a social media dataset, our proposed method outperforms its counterparts in terms of perplexity and communication cost in most settings of comparison."]], "corpus": "arxiv"}
  {"tstamp": 1722614353.3371, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722614352.8759, "finish": 1722614353.3371, "ip": "", "conv_id": "6393a25824bc47d2b265aaf82a72803d", "model_name": "text-embedding-3-large", "prompt": "Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "output": [["Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "Title: Federated Learning for Mobile Keyboard Prediction\n\nAbstract: We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones. Server-based training using stochastic gradient descent is compared with training on client devices using the Federated Averaging algorithm. The federated algorithm, which enables training on a higher-quality dataset for this use case, is shown to achieve better prediction recall. This work demonstrates the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers. The federated learning environment gives users greater control over the use of their data and simplifies the task of incorporating privacy by default with distributed training and aggregation across a population of client devices."]], "corpus": "arxiv"}
+ {"tstamp": 1722614983.5672, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722614981.2904, "finish": 1722614983.5672, "ip": "", "conv_id": "e2afa09adcbd49efa8a47de8a0a6135f", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "the recipient of first jnanpith award was an author which language", "output": [["the recipient of first jnanpith award was an author which language", "Jnanpith Award\n\nFrom 1965 till 1981, the award was given to the authors for their \"most outstanding work\" and consisted of a citation plaque, a cash prize and a bronze replica of Saraswati, the Hindu goddess of knowledge and wisdom. The first recipient of the award was the Malayalam writer G. Sankara Kurup who received the award in 1965 for his collection of poems, Odakkuzhal (The Bamboo Flute), published in 1950. The rules were revised in subsequent years to consider only works published during the preceding twenty years, excluding the year for which the award was to be given and the cash prize was increased to from 1981.\n, the cash prize has been revised to and out of twenty-three eligible languages the award has been presented for works in sixteen languages: Hindi (eleven), Kannada (eight), Bengali and Malayalam (six each), Urdu (five) Gujarati, Marathi, Odia (four each), Assamese and Telugu (three each), Punjabi, Tamil, Konkani and Sanskrit (two each), English, Kashmiri and (one each). The award has been conferred upon fifty-eight writers including eight women authors. In 1976, Bengali novelist Ashapoorna Devi became the first woman to win the award and was honoured for the 1965 novel Prothom Protishruti (The First Promise), the first in a trilogy. The most recent recipients of the award are Sanskrit scholar Rambhadracharya and Urdu writer, Bollywood lyricist Gulzar jointly awarded for the year of 2023."]], "corpus": "wikipedia"}
+ {"tstamp": 1722614983.5672, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722614981.2904, "finish": 1722614983.5672, "ip": "", "conv_id": "248bf37201d04c08a4f4bae15838c4ba", "model_name": "GritLM/GritLM-7B", "prompt": "the recipient of first jnanpith award was an author which language", "output": [["the recipient of first jnanpith award was an author which language", "Jnanpith Award\n\nThe first Selection Board consisted of Kalelkar, Niharranjan Ray, Karan Singh, R. R. Diwakar, V. Raghavan, B. Gopal Reddy, Harekrushna Mahatab, Rama Jain, and Lakshmi Chandra Jain and was headed by Sampurnanand. Works that were published between 1921 and 1951 were considered for the first award. The nine language committees that were formed were to submit to the board nominations along with translations of the work into Hindi or English. The final round had four authors; Kazi Nazrul Islam (Bengali), D. V. Gundappa (Kannada), Viswanatha Satyanarayana (Telugu), and G. Sankara Kurup (Malayalam). On 19 November 1966, Kurup was presented with the citation, statue of Saraswati, and a cheque for prize of at a ceremony held at Vigyan Bhavan, Delhi. In his acceptance speech, Kurup appreciated the concept of the new award and thanked it for bringing \"integration of the diverse people of this land on a spiritual plane\".\nRules and selection process\nThe nominations for the award are received from various literary experts, teachers, critics, universities, and numerous literary and language associations. Every three years, an advisory committee is constituted for each of the languages. The language of the most recent recipient's work is not eligible for consideration for the next two years. Each committee consists of three literary critics and scholars of their respective languages. All the nominations are scrutinised by the committee and their recommendations are submitted to the Jnanpith Award Selection Board.\nThe Selection Board consists of between seven and eleven members of \"high repute and integrity\". Each member is part of the committee for a term of three years which can also be extended further for two more terms. The recommendations of all language advisory committees are evaluated by the board based on complete or partial translations of the selected writings of the proposed writers into Hindi or English. The recipient for a particular year is announced by the Selection Board, which has final authority in selection."]], "corpus": "wikipedia"}
+ {"tstamp": 1722615025.1982, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722615024.8322, "finish": 1722615025.1982, "ip": "", "conv_id": "75ad367e4e90412eb799882ccdc4442c", "model_name": "GritLM/GritLM-7B", "prompt": "Paper proposing a solution to memory bottlenecks in sequential recommender systems for edge devices.", "output": [["Paper proposing a solution to memory bottlenecks in sequential recommender systems for edge devices.", "Title: DIET: Customized Slimming for Incompatible Networks in Sequential Recommendation\n\nAbstract: Due to the continuously improving capabilities of mobile edges, recommender systems start to deploy models on edges to alleviate network congestion caused by frequent mobile requests. Several studies have leveraged the proximity of edge-side to real-time data, fine-tuning them to create edge-specific models. Despite their significant progress, these methods require substantial on-edge computational resources and frequent network transfers to keep the model up to date. The former may disrupt other processes on the edge to acquire computational resources, while the latter consumes network bandwidth, leading to a decrease in user satisfaction. In response to these challenges, we propose a customizeD slImming framework for incompatiblE neTworks(DIET). DIET deploys the same generic backbone (potentially incompatible for a specific edge) to all devices. To minimize frequent bandwidth usage and storage consumption in personalization, DIET tailors specific subnets for each edge based on its past interactions, learning to generate slimming subnets(diets) within incompatible networks for efficient transfer. It also takes the inter-layer relationships into account, empirically reducing inference time while obtaining more suitable diets. We further explore the repeated modules within networks and propose a more storage-efficient framework, DIETING, which utilizes a single layer of parameters to represent the entire network, achieving comparably excellent performance. The experiments across four state-of-the-art datasets and two widely used models demonstrate the superior accuracy in recommendation and efficiency in transmission and storage of our framework."]], "corpus": "arxiv"}
+ {"tstamp": 1722615025.1982, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722615024.8322, "finish": 1722615025.1982, "ip": "", "conv_id": "a1bfb2dc1be244d09c0a85dfc71e76c4", "model_name": "embed-english-v3.0", "prompt": "Paper proposing a solution to memory bottlenecks in sequential recommender systems for edge devices.", "output": [["Paper proposing a solution to memory bottlenecks in sequential recommender systems for edge devices.", "Title: Learning Elastic Embeddings for Customizing On-Device Recommenders\n\nAbstract: In today's context, deploying data-driven services like recommendation on edge devices instead of cloud servers becomes increasingly attractive due to privacy and network latency concerns. A common practice in building compact on-device recommender systems is to compress their embeddings which are normally the cause of excessive parameterization. However, despite the vast variety of devices and their associated memory constraints, existing memory-efficient recommender systems are only specialized for a fixed memory budget in every design and training life cycle, where a new model has to be retrained to obtain the optimal performance while adapting to a smaller/larger memory budget. In this paper, we present a novel lightweight recommendation paradigm that allows a well-trained recommender to be customized for arbitrary device-specific memory constraints without retraining. The core idea is to compose elastic embeddings for each item, where an elastic embedding is the concatenation of a set of embedding blocks that are carefully chosen by an automated search function. Correspondingly, we propose an innovative approach, namely recommendation with universally learned elastic embeddings (RULE). To ensure the expressiveness of all candidate embedding blocks, RULE enforces a diversity-driven regularization when learning different embedding blocks. Then, a performance estimator-based evolutionary search function is designed, allowing for efficient specialization of elastic embeddings under any memory constraint for on-device recommendation. Extensive experiments on real-world datasets reveal the superior performance of RULE under tight memory budgets."]], "corpus": "arxiv"}