Scheduled Commit
data/retrieval_battle-3fa403cf-f7b9-48f4-8ec7-02929ab51cb5.jsonl
CHANGED
@@ -2,3 +2,4 @@
{"tstamp": 1722614197.243, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "41055abd389145179f511e62a2ae4be8", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "what does it mean when you roll your thumbs", "0_output": [["what does it mean when you roll your thumbs", "Thumb twiddling\n\nThumb twiddling is an activity that is done with the hands of an individual whereby the fingers are interlocked and the thumbs circle around a common point, usually in the middle of the distance between the two thumbs.\nWhile it is an expression of at least a moderate amount of manual dexterity, thumb twiddling is frequently used as an example of a useless, time-wasting activity.\nMedical uses\nThumb twiddling can be used as a simple test for manual dexterity.\nContra-rotating thumbs\nContra-rotation involves moving the thumbs in opposing directions. While thumb twiddling comes naturally for almost everyone, it is extremely rare for people to be able to naturally contra-rotate their thumbs without spending a significant amount of time training the new technique. Typically, a person will get through a half rotation before both of the thumbs will inexplicably synchronize directions. A similar phenomenon occurs using similar finger or arm movements."]], "0_corpus": "wikipedia", "1_conv_id": "1226ec347e844c54b34f337a77d3f39c", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "what does it mean when you roll your thumbs", "1_output": [["what does it mean when you roll your thumbs", "Thumbscrew (torture)\n\n\"It has been very generally asserted,\" says Dr. Jamieson, \"that part of the cargo of the invincible Armada was a large assortment of thumbikens, which it was meant should be employed as powerful arguments for convincing the heretics.\" The country of the inquisition was certainly a fit quarter from whence to derive so congenial an instrument; but other accounts, as we have said, and these apparently unquestionable, assign it a later introduction... In the torturing of [William] Spence, Lord Fountainhall mentions the origin of the thumbikens, stating that this instrument \"was a new invention used among the colliers upon transgressors, and discovered by Generals Dalyell and Drummond, they having seen them used in Muscovy.\" The account which Bishop Burnet gives of the torturing of Spence confirms the then recent use of the thumbikens. ... This point we think is put beyond all doubt by the following act of the privy council in 1684, quoted in Wodrow's invaluable history: \"Whereas there is now a new invention and engine called the thumbikens ... the Lords of His Majesty's Council do therefore ordain, that when any person shall be put to the torture, that the boots and the thumbikens both be applied to them...\"\nIn 1612 the Baroque painter Orazio Gentileschi accused his colleague, Agostino Tassi, of raping his daughter, the painter Artemisia Gentileschi. During the five-month long trial, Artemisia was cross-examined under thumb-screw torture.\nAs late as the mid-18th century, the ex-slave Olaudah Equiano, in his autobiography The Interesting Narrative of the Life of Olaudah Equiano, documented the use of thumbscrews to torture slaves. During this period (mid-18th century), Thomas Clarkson carried thumbscrews with him to further his cause for the abolition of the slave trade and later emancipation of slaves in the British Empire. He hoped to, and did, inspire empathy with the display of this and other torture devices used on slaves. 
They were used on slave ships, as witnessed and described by Equiano and Ottobah Cugoano."]], "1_corpus": "wikipedia"}
{"tstamp": 1722614262.0132, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "7bc72d4f16ae4e4896190189d05deb64", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "Methods to resolve superposition in neural networks for enhancing interpretability.", "0_output": [["Methods to resolve superposition in neural networks for enhancing interpretability.", "Title: Identifying Interpretable Visual Features in Artificial and Biological Neural Systems\n\nAbstract: Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features. However, many neurons exhibit $\\textit{mixed selectivity}$, i.e., they represent multiple unrelated features. A recent hypothesis proposes that features in deep networks may be represented in $\\textit{superposition}$, i.e., on non-orthogonal axes by multiple neurons, since the number of possible interpretable features in natural data is generally larger than the number of neurons in a given network. Accordingly, we should be able to find meaningful directions in activation space that are not aligned with individual neurons. Here, we propose (1) an automated method for quantifying visual interpretability that is validated against a large database of human psychophysics judgments of neuron interpretability, and (2) an approach for finding meaningful directions in network activation space. We leverage these methods to discover directions in convolutional neural networks that are more intuitively meaningful than individual neurons, as we confirm and investigate in a series of analyses. Moreover, we apply the same method to three recent datasets of visual neural responses in the brain and find that our conclusions largely transfer to real neural data, suggesting that superposition might be deployed by the brain. This also provides a link with disentanglement and raises fundamental questions about robust, efficient and factorized representations in both artificial and biological neural systems."]], "0_corpus": "arxiv", "1_conv_id": "a121ba5ad2284d10a26b846f00bb7fcf", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "Methods to resolve superposition in neural networks for enhancing interpretability.", "1_output": [["Methods to resolve superposition in neural networks for enhancing interpretability.", "Title: Nonlinear Perturbation Theory\n\nAbstract: An explicit perturbative solution to all orders is given for a general class of nonlinear differential equations. This solution is written as a sum indexed by rooted trees and uses the Green function of a linearization of the equations. The modifications due to the presence of zero-modes is considered. Possible divergence of the integrals can be avoided by using approximate Green functions."]], "1_corpus": "arxiv"}
{"tstamp": 1722614312.8005, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "0e3fd6458f354e40bdf036b04b2c7350", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "A paper that applies Transformer architecture to operator learning problems in PDEs.", "0_output": [["A paper that applies Transformer architecture to operator learning problems in PDEs.", "Title: Scenarios of domain pattern formation in a reaction-diffusion system\n\nAbstract: We performed an extensive numerical study of a two-dimensional reaction-diffusion system of the activator-inhibitor type in which domain patterns can form. We showed that both multidomain and labyrinthine patterns may form spontaneously as a result of Turing instability. In the stable homogeneous system with the fast inhibitor one can excite both localized and extended patterns by applying a localized stimulus. Depending on the parameters and the excitation level of the system stripes, spots, wriggled stripes, or labyrinthine patterns form. The labyrinthine patterns may be both connected and disconnected. In the the stable homogeneous system with the slow inhibitor one can excite self-replicating spots, breathing patterns, autowaves and turbulence. The parameter regions in which different types of patterns are realized are explained on the basis of the asymptotic theory of instabilities for patterns with sharp interfaces developed by us in Phys. Rev. E. 53, 3101 (1996). The dynamics of the patterns observed in our simulations is very similar to that of the patterns forming in the ferrocyanide-iodate-sulfite reaction."]], "0_corpus": "arxiv", "1_conv_id": "2118993c2cfd41aa8de71a423df87b72", "1_model_name": "text-embedding-3-large", "1_prompt": "A paper that applies Transformer architecture to operator learning problems in PDEs.", "1_output": [["A paper that applies Transformer architecture to operator learning problems in PDEs.", "Title: Choose a Transformer: Fourier or Galerkin\n\nAbstract: In this paper, we apply the self-attention from the state-of-the-art Transformer in Attention Is All You Need for the first time to a data-driven operator learning problem related to partial differential equations. An effort is put together to explain the heuristics of, and to improve the efficacy of the attention mechanism. By employing the operator approximation theory in Hilbert spaces, it is demonstrated for the first time that the softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without softmax, the approximation capacity of a linearized Transformer variant can be proved to be comparable to a Petrov-Galerkin projection layer-wise, and the estimate is independent with respect to the sequence length. A new layer normalization scheme mimicking the Petrov-Galerkin projection is proposed to allow a scaling to propagate through attention layers, which helps the model achieve remarkable accuracy in operator learning tasks with unnormalized data. Finally, we present three operator learning experiments, including the viscid Burgers' equation, an interface Darcy flow, and an inverse interface coefficient identification problem. The newly proposed simple attention-based operator learner, Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts."]], "1_corpus": "arxiv"}
+{"tstamp": 1722614346.0652, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "8becc0806122486ca9af84480ad04ef1", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "0_output": [["Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "Title: Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks\n\nAbstract: Pretraining sentence encoders with language modeling and related unsupervised tasks has recently been shown to be very effective for language understanding tasks. By supplementing language model-style pretraining with further training on data-rich supervised tasks, such as natural language inference, we obtain additional performance improvements on the GLUE benchmark. Applying supplementary training on BERT (Devlin et al., 2018), we attain a GLUE score of 81.8---the state of the art (as of 02/24/2019) and a 1.4 point improvement over BERT. We also observe reduced variance across random restarts in this setting. Our approach yields similar improvements when applied to ELMo (Peters et al., 2018a) and Radford et al. (2018)'s model. In addition, the benefits of supplementary training are particularly pronounced in data-constrained regimes, as we show in experiments with artificially limited training data."]], "0_corpus": "arxiv", "1_conv_id": "79f665134b4b46e190d9a9387bd4667b", "1_model_name": "text-embedding-004", "1_prompt": "Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "1_output": [["Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "Title: Pre-Training a Language Model Without Human Language\n\nAbstract: In this paper, we study how the intrinsic nature of pre-training data contributes to the fine-tuned downstream performance. To this end, we pre-train different transformer-based masked language models on several corpora with certain features, and we fine-tune those language models on GLUE benchmarks. We find that models pre-trained on unstructured data beat those trained directly from scratch on downstream tasks. Our results also show that pre-training on structured data does not always make the model acquire ability that can be transferred to natural language downstream tasks. To our great astonishment, we uncover that pre-training on certain non-human language data gives GLUE performance close to performance pre-trained on another non-English language."]], "1_corpus": "arxiv"}
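Each line of this file is one self-contained JSON record of a side-by-side retrieval battle: an anonymized model pair, the shared prompt, each model's retrieved passage, and the vote ("leftvote" favors the 0_* model, "rightvote" the 1_* model). Below is a minimal sketch of how such records could be loaded and tallied with the Python standard library; the relative path and the fallback field handling are assumptions based only on the lines shown above, not part of this commit.

```python
import json
from collections import Counter

# Path as it appears in this commit; adjust for the local checkout (assumption).
PATH = "data/retrieval_battle-3fa403cf-f7b9-48f4-8ec7-02929ab51cb5.jsonl"

votes = Counter()
with open(PATH, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        rec = json.loads(line)  # one battle per line
        if rec.get("type") == "leftvote":
            votes[rec.get("0_model_name", "unknown")] += 1
        elif rec.get("type") == "rightvote":
            votes[rec.get("1_model_name", "unknown")] += 1
        # other vote types (e.g. ties) are simply skipped in this sketch

for model, n in votes.most_common():
    print(f"{model}: {n} wins")
```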
data/retrieval_individual-3fa403cf-f7b9-48f4-8ec7-02929ab51cb5.jsonl
CHANGED
@@ -16,3 +16,5 @@
{"tstamp": 1722614269.6557, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722614269.0773, "finish": 1722614269.6557, "ip": "", "conv_id": "2118993c2cfd41aa8de71a423df87b72", "model_name": "text-embedding-3-large", "prompt": "A paper that applies Transformer architecture to operator learning problems in PDEs.", "output": [["A paper that applies Transformer architecture to operator learning problems in PDEs.", "Title: Choose a Transformer: Fourier or Galerkin\n\nAbstract: In this paper, we apply the self-attention from the state-of-the-art Transformer in Attention Is All You Need for the first time to a data-driven operator learning problem related to partial differential equations. An effort is put together to explain the heuristics of, and to improve the efficacy of the attention mechanism. By employing the operator approximation theory in Hilbert spaces, it is demonstrated for the first time that the softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without softmax, the approximation capacity of a linearized Transformer variant can be proved to be comparable to a Petrov-Galerkin projection layer-wise, and the estimate is independent with respect to the sequence length. A new layer normalization scheme mimicking the Petrov-Galerkin projection is proposed to allow a scaling to propagate through attention layers, which helps the model achieve remarkable accuracy in operator learning tasks with unnormalized data. Finally, we present three operator learning experiments, including the viscid Burgers' equation, an interface Darcy flow, and an inverse interface coefficient identification problem. The newly proposed simple attention-based operator learner, Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts."]], "corpus": "arxiv"}
{"tstamp": 1722614319.4459, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722614319.0159, "finish": 1722614319.4459, "ip": "", "conv_id": "8becc0806122486ca9af84480ad04ef1", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "output": [["Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "Title: Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks\n\nAbstract: Pretraining sentence encoders with language modeling and related unsupervised tasks has recently been shown to be very effective for language understanding tasks. By supplementing language model-style pretraining with further training on data-rich supervised tasks, such as natural language inference, we obtain additional performance improvements on the GLUE benchmark. Applying supplementary training on BERT (Devlin et al., 2018), we attain a GLUE score of 81.8---the state of the art (as of 02/24/2019) and a 1.4 point improvement over BERT. We also observe reduced variance across random restarts in this setting. Our approach yields similar improvements when applied to ELMo (Peters et al., 2018a) and Radford et al. (2018)'s model. In addition, the benefits of supplementary training are particularly pronounced in data-constrained regimes, as we show in experiments with artificially limited training data."]], "corpus": "arxiv"}
{"tstamp": 1722614319.4459, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722614319.0159, "finish": 1722614319.4459, "ip": "", "conv_id": "79f665134b4b46e190d9a9387bd4667b", "model_name": "text-embedding-004", "prompt": "Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "output": [["Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "Title: Pre-Training a Language Model Without Human Language\n\nAbstract: In this paper, we study how the intrinsic nature of pre-training data contributes to the fine-tuned downstream performance. To this end, we pre-train different transformer-based masked language models on several corpora with certain features, and we fine-tune those language models on GLUE benchmarks. We find that models pre-trained on unstructured data beat those trained directly from scratch on downstream tasks. Our results also show that pre-training on structured data does not always make the model acquire ability that can be transferred to natural language downstream tasks. To our great astonishment, we uncover that pre-training on certain non-human language data gives GLUE performance close to performance pre-trained on another non-English language."]], "corpus": "arxiv"}
+{"tstamp": 1722614353.3371, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722614352.8759, "finish": 1722614353.3371, "ip": "", "conv_id": "7f1a9311a88f4b8e85d64832fbfc5c41", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "output": [["Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "Title: Learning Private Neural Language Modeling with Attentive Aggregation\n\nAbstract: Mobile keyboard suggestion is typically regarded as a word-level language modeling problem. Centralized machine learning technique requires massive user data collected to train on, which may impose privacy concerns for sensitive personal typing data of users. Federated learning (FL) provides a promising approach to learning private language modeling for intelligent personalized keyboard suggestion by training models in distributed clients rather than training in a central server. To obtain a global model for prediction, existing FL algorithms simply average the client models and ignore the importance of each client during model aggregation. Furthermore, there is no optimization for learning a well-generalized global model on the central server. To solve these problems, we propose a novel model aggregation with the attention mechanism considering the contribution of clients models to the global model, together with an optimization technique during server aggregation. Our proposed attentive aggregation method minimizes the weighted distance between the server model and client models through iterative parameters updating while attends the distance between the server model and client models. Through experiments on two popular language modeling datasets and a social media dataset, our proposed method outperforms its counterparts in terms of perplexity and communication cost in most settings of comparison."]], "corpus": "arxiv"}
+{"tstamp": 1722614353.3371, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722614352.8759, "finish": 1722614353.3371, "ip": "", "conv_id": "6393a25824bc47d2b265aaf82a72803d", "model_name": "text-embedding-3-large", "prompt": "Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "output": [["Innovative approach to private mobile keyboard suggestions using federated learning and attention mechanisms.", "Title: Federated Learning for Mobile Keyboard Prediction\n\nAbstract: We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones. Server-based training using stochastic gradient descent is compared with training on client devices using the Federated Averaging algorithm. The federated algorithm, which enables training on a higher-quality dataset for this use case, is shown to achieve better prediction recall. This work demonstrates the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers. The federated learning environment gives users greater control over the use of their data and simplifies the task of incorporating privacy by default with distributed training and aggregation across a population of client devices."]], "corpus": "arxiv"}
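Each line of this second file records a single model's retrieval turn: the model name, the prompt, and the retrieved passage stored in "output" as [query, passage] pairs. The sketch below groups these records by prompt so the retrievals of different models can be compared side by side; the path and the title extraction are illustrative assumptions, not part of this commit.

```python
import json
from collections import defaultdict

# Path as named in this commit; treat it as an assumption about the local layout.
PATH = "data/retrieval_individual-3fa403cf-f7b9-48f4-8ec7-02929ab51cb5.jsonl"

by_prompt = defaultdict(list)
with open(PATH, encoding="utf-8") as f:
    for line in f:
        if not line.strip():
            continue
        rec = json.loads(line)
        # "output" holds [query, retrieved_passage] pairs; the first line of the
        # passage is its title ("Title: ..." for arxiv, the page name for wikipedia).
        passage = rec["output"][0][1] if rec.get("output") else ""
        title = passage.split("\n", 1)[0]
        by_prompt[rec["prompt"]].append((rec["model_name"], title))

for prompt, hits in by_prompt.items():
    print(prompt)
    for model, title in hits:
        print(f"  {model}: {title}")
```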