Scheduled Commit
data/retrieval_battle-3fa403cf-f7b9-48f4-8ec7-02929ab51cb5.jsonl
CHANGED
@@ -1,3 +1,4 @@
{"tstamp": 1722613807.2412, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "d86e74446aa24ee79233a2005add1639", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "90fff105c0544fad8d91a822dbd3a2c0", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "1_corpus": "arxiv"}
{"tstamp": 1722614197.243, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "41055abd389145179f511e62a2ae4be8", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "what does it mean when you roll your thumbs", "0_output": [["what does it mean when you roll your thumbs", "Thumb twiddling\n\nThumb twiddling is an activity that is done with the hands of an individual whereby the fingers are interlocked and the thumbs circle around a common point, usually in the middle of the distance between the two thumbs.\nWhile it is an expression of at least a moderate amount of manual dexterity, thumb twiddling is frequently used as an example of a useless, time-wasting activity.\nMedical uses\nThumb twiddling can be used as a simple test for manual dexterity.\nContra-rotating thumbs\nContra-rotation involves moving the thumbs in opposing directions. While thumb twiddling comes naturally for almost everyone, it is extremely rare for people to be able to naturally contra-rotate their thumbs without spending a significant amount of time training the new technique. Typically, a person will get through a half rotation before both of the thumbs will inexplicably synchronize directions. A similar phenomenon occurs using similar finger or arm movements."]], "0_corpus": "wikipedia", "1_conv_id": "1226ec347e844c54b34f337a77d3f39c", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "what does it mean when you roll your thumbs", "1_output": [["what does it mean when you roll your thumbs", "Thumbscrew (torture)\n\n\"It has been very generally asserted,\" says Dr. Jamieson, \"that part of the cargo of the invincible Armada was a large assortment of thumbikens, which it was meant should be employed as powerful arguments for convincing the heretics.\" The country of the inquisition was certainly a fit quarter from whence to derive so congenial an instrument; but other accounts, as we have said, and these apparently unquestionable, assign it a later introduction... In the torturing of [William] Spence, Lord Fountainhall mentions the origin of the thumbikens, stating that this instrument \"was a new invention used among the colliers upon transgressors, and discovered by Generals Dalyell and Drummond, they having seen them used in Muscovy.\" The account which Bishop Burnet gives of the torturing of Spence confirms the then recent use of the thumbikens. ... This point we think is put beyond all doubt by the following act of the privy council in 1684, quoted in Wodrow's invaluable history: \"Whereas there is now a new invention and engine called the thumbikens ... the Lords of His Majesty's Council do therefore ordain, that when any person shall be put to the torture, that the boots and the thumbikens both be applied to them...\"\nIn 1612 the Baroque painter Orazio Gentileschi accused his colleague, Agostino Tassi, of raping his daughter, the painter Artemisia Gentileschi. During the five-month long trial, Artemisia was cross-examined under thumb-screw torture.\nAs late as the mid-18th century, the ex-slave Olaudah Equiano, in his autobiography The Interesting Narrative of the Life of Olaudah Equiano, documented the use of thumbscrews to torture slaves. During this period (mid-18th century), Thomas Clarkson carried thumbscrews with him to further his cause for the abolition of the slave trade and later emancipation of slaves in the British Empire. He hoped to, and did, inspire empathy with the display of this and other torture devices used on slaves. 
They were used on slave ships, as witnessed and described by Equiano and Ottobah Cugoano."]], "1_corpus": "wikipedia"}
{"tstamp": 1722614262.0132, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "7bc72d4f16ae4e4896190189d05deb64", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "Methods to resolve superposition in neural networks for enhancing interpretability.", "0_output": [["Methods to resolve superposition in neural networks for enhancing interpretability.", "Title: Identifying Interpretable Visual Features in Artificial and Biological Neural Systems\n\nAbstract: Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features. However, many neurons exhibit $\\textit{mixed selectivity}$, i.e., they represent multiple unrelated features. A recent hypothesis proposes that features in deep networks may be represented in $\\textit{superposition}$, i.e., on non-orthogonal axes by multiple neurons, since the number of possible interpretable features in natural data is generally larger than the number of neurons in a given network. Accordingly, we should be able to find meaningful directions in activation space that are not aligned with individual neurons. Here, we propose (1) an automated method for quantifying visual interpretability that is validated against a large database of human psychophysics judgments of neuron interpretability, and (2) an approach for finding meaningful directions in network activation space. We leverage these methods to discover directions in convolutional neural networks that are more intuitively meaningful than individual neurons, as we confirm and investigate in a series of analyses. Moreover, we apply the same method to three recent datasets of visual neural responses in the brain and find that our conclusions largely transfer to real neural data, suggesting that superposition might be deployed by the brain. This also provides a link with disentanglement and raises fundamental questions about robust, efficient and factorized representations in both artificial and biological neural systems."]], "0_corpus": "arxiv", "1_conv_id": "a121ba5ad2284d10a26b846f00bb7fcf", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "Methods to resolve superposition in neural networks for enhancing interpretability.", "1_output": [["Methods to resolve superposition in neural networks for enhancing interpretability.", "Title: Nonlinear Perturbation Theory\n\nAbstract: An explicit perturbative solution to all orders is given for a general class of nonlinear differential equations. This solution is written as a sum indexed by rooted trees and uses the Green function of a linearization of the equations. The modifications due to the presence of zero-modes is considered. Possible divergence of the integrals can be avoided by using approximate Green functions."]], "1_corpus": "arxiv"}
+{"tstamp": 1722614312.8005, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "0e3fd6458f354e40bdf036b04b2c7350", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "A paper that applies Transformer architecture to operator learning problems in PDEs.", "0_output": [["A paper that applies Transformer architecture to operator learning problems in PDEs.", "Title: Scenarios of domain pattern formation in a reaction-diffusion system\n\nAbstract: We performed an extensive numerical study of a two-dimensional reaction-diffusion system of the activator-inhibitor type in which domain patterns can form. We showed that both multidomain and labyrinthine patterns may form spontaneously as a result of Turing instability. In the stable homogeneous system with the fast inhibitor one can excite both localized and extended patterns by applying a localized stimulus. Depending on the parameters and the excitation level of the system stripes, spots, wriggled stripes, or labyrinthine patterns form. The labyrinthine patterns may be both connected and disconnected. In the the stable homogeneous system with the slow inhibitor one can excite self-replicating spots, breathing patterns, autowaves and turbulence. The parameter regions in which different types of patterns are realized are explained on the basis of the asymptotic theory of instabilities for patterns with sharp interfaces developed by us in Phys. Rev. E. 53, 3101 (1996). The dynamics of the patterns observed in our simulations is very similar to that of the patterns forming in the ferrocyanide-iodate-sulfite reaction."]], "0_corpus": "arxiv", "1_conv_id": "2118993c2cfd41aa8de71a423df87b72", "1_model_name": "text-embedding-3-large", "1_prompt": "A paper that applies Transformer architecture to operator learning problems in PDEs.", "1_output": [["A paper that applies Transformer architecture to operator learning problems in PDEs.", "Title: Choose a Transformer: Fourier or Galerkin\n\nAbstract: In this paper, we apply the self-attention from the state-of-the-art Transformer in Attention Is All You Need for the first time to a data-driven operator learning problem related to partial differential equations. An effort is put together to explain the heuristics of, and to improve the efficacy of the attention mechanism. By employing the operator approximation theory in Hilbert spaces, it is demonstrated for the first time that the softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without softmax, the approximation capacity of a linearized Transformer variant can be proved to be comparable to a Petrov-Galerkin projection layer-wise, and the estimate is independent with respect to the sequence length. A new layer normalization scheme mimicking the Petrov-Galerkin projection is proposed to allow a scaling to propagate through attention layers, which helps the model achieve remarkable accuracy in operator learning tasks with unnormalized data. Finally, we present three operator learning experiments, including the viscid Burgers' equation, an interface Darcy flow, and an inverse interface coefficient identification problem. The newly proposed simple attention-based operator learner, Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts."]], "1_corpus": "arxiv"}
data/retrieval_individual-3fa403cf-f7b9-48f4-8ec7-02929ab51cb5.jsonl
CHANGED
@@ -14,3 +14,5 @@
{"tstamp": 1722614204.2927, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722614204.1049, "finish": 1722614204.2927, "ip": "", "conv_id": "a121ba5ad2284d10a26b846f00bb7fcf", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "Methods to resolve superposition in neural networks for enhancing interpretability.", "output": [["Methods to resolve superposition in neural networks for enhancing interpretability.", "Title: Nonlinear Perturbation Theory\n\nAbstract: An explicit perturbative solution to all orders is given for a general class of nonlinear differential equations. This solution is written as a sum indexed by rooted trees and uses the Green function of a linearization of the equations. The modifications due to the presence of zero-modes is considered. Possible divergence of the integrals can be avoided by using approximate Green functions."]], "corpus": "arxiv"}
{"tstamp": 1722614269.6557, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722614269.0773, "finish": 1722614269.6557, "ip": "", "conv_id": "0e3fd6458f354e40bdf036b04b2c7350", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "A paper that applies Transformer architecture to operator learning problems in PDEs.", "output": [["A paper that applies Transformer architecture to operator learning problems in PDEs.", "Title: Scenarios of domain pattern formation in a reaction-diffusion system\n\nAbstract: We performed an extensive numerical study of a two-dimensional reaction-diffusion system of the activator-inhibitor type in which domain patterns can form. We showed that both multidomain and labyrinthine patterns may form spontaneously as a result of Turing instability. In the stable homogeneous system with the fast inhibitor one can excite both localized and extended patterns by applying a localized stimulus. Depending on the parameters and the excitation level of the system stripes, spots, wriggled stripes, or labyrinthine patterns form. The labyrinthine patterns may be both connected and disconnected. In the the stable homogeneous system with the slow inhibitor one can excite self-replicating spots, breathing patterns, autowaves and turbulence. The parameter regions in which different types of patterns are realized are explained on the basis of the asymptotic theory of instabilities for patterns with sharp interfaces developed by us in Phys. Rev. E. 53, 3101 (1996). The dynamics of the patterns observed in our simulations is very similar to that of the patterns forming in the ferrocyanide-iodate-sulfite reaction."]], "corpus": "arxiv"}
{"tstamp": 1722614269.6557, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1722614269.0773, "finish": 1722614269.6557, "ip": "", "conv_id": "2118993c2cfd41aa8de71a423df87b72", "model_name": "text-embedding-3-large", "prompt": "A paper that applies Transformer architecture to operator learning problems in PDEs.", "output": [["A paper that applies Transformer architecture to operator learning problems in PDEs.", "Title: Choose a Transformer: Fourier or Galerkin\n\nAbstract: In this paper, we apply the self-attention from the state-of-the-art Transformer in Attention Is All You Need for the first time to a data-driven operator learning problem related to partial differential equations. An effort is put together to explain the heuristics of, and to improve the efficacy of the attention mechanism. By employing the operator approximation theory in Hilbert spaces, it is demonstrated for the first time that the softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without softmax, the approximation capacity of a linearized Transformer variant can be proved to be comparable to a Petrov-Galerkin projection layer-wise, and the estimate is independent with respect to the sequence length. A new layer normalization scheme mimicking the Petrov-Galerkin projection is proposed to allow a scaling to propagate through attention layers, which helps the model achieve remarkable accuracy in operator learning tasks with unnormalized data. Finally, we present three operator learning experiments, including the viscid Burgers' equation, an interface Darcy flow, and an inverse interface coefficient identification problem. The newly proposed simple attention-based operator learner, Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts."]], "corpus": "arxiv"}
+{"tstamp": 1722614319.4459, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722614319.0159, "finish": 1722614319.4459, "ip": "", "conv_id": "8becc0806122486ca9af84480ad04ef1", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "output": [["Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "Title: Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks\n\nAbstract: Pretraining sentence encoders with language modeling and related unsupervised tasks has recently been shown to be very effective for language understanding tasks. By supplementing language model-style pretraining with further training on data-rich supervised tasks, such as natural language inference, we obtain additional performance improvements on the GLUE benchmark. Applying supplementary training on BERT (Devlin et al., 2018), we attain a GLUE score of 81.8---the state of the art (as of 02/24/2019) and a 1.4 point improvement over BERT. We also observe reduced variance across random restarts in this setting. Our approach yields similar improvements when applied to ELMo (Peters et al., 2018a) and Radford et al. (2018)'s model. In addition, the benefits of supplementary training are particularly pronounced in data-constrained regimes, as we show in experiments with artificially limited training data."]], "corpus": "arxiv"}
+{"tstamp": 1722614319.4459, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722614319.0159, "finish": 1722614319.4459, "ip": "", "conv_id": "79f665134b4b46e190d9a9387bd4667b", "model_name": "text-embedding-004", "prompt": "Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "output": [["Research paper discussing achieving higher performance on GLUE by supplementing language model pretraining with labeled data tasks.", "Title: Pre-Training a Language Model Without Human Language\n\nAbstract: In this paper, we study how the intrinsic nature of pre-training data contributes to the fine-tuned downstream performance. To this end, we pre-train different transformer-based masked language models on several corpora with certain features, and we fine-tune those language models on GLUE benchmarks. We find that models pre-trained on unstructured data beat those trained directly from scratch on downstream tasks. Our results also show that pre-training on structured data does not always make the model acquire ability that can be transferred to natural language downstream tasks. To our great astonishment, we uncover that pre-training on certain non-human language data gives GLUE performance close to performance pre-trained on another non-English language."]], "corpus": "arxiv"}