paper_url (stringlengths 35-81) | arxiv_id (stringlengths 6-35, ⌀) | nips_id (null) | openreview_id (stringlengths 9-93, ⌀) | title (stringlengths 1-1.02k, ⌀) | abstract (stringlengths 0-56.5k, ⌀) | short_abstract (stringlengths 0-1.95k, ⌀) | url_abs (stringlengths 16-996) | url_pdf (stringlengths 16-996, ⌀) | proceeding (stringlengths 7-1.03k, ⌀) | authors (listlengths 0-3.31k) | tasks (listlengths 0-147) | date (timestamp[ns], 1951-09-01 to 2222-12-22, ⌀) | conference_url_abs (stringlengths 16-199, ⌀) | conference_url_pdf (stringlengths 21-200, ⌀) | conference (stringlengths 2-47, ⌀) | reproduces_paper (stringclasses, 22 values) | methods (listlengths 0-7.5k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/a-question-answering-framework-for-plots
|
1806.04655
| null | null |
FigureNet: A Deep Learning model for Question-Answering on Scientific Plots
|
Deep Learning has managed to push boundaries in a wide variety of tasks. One
area of interest is to tackle problems in reasoning and understanding, with an
aim to emulate human intelligence. In this work, we describe a deep learning
model that addresses the reasoning task of question-answering on categorical
plots. We introduce a novel architecture FigureNet, that learns to identify
various plot elements, quantify the represented values and determine a relative
ordering of these statistical values. We test our model on the FigureQA dataset
which provides images and accompanying questions for scientific plots like bar
graphs and pie charts, augmented with rich annotations. Our approach
outperforms the state-of-the-art Relation Networks baseline by approximately
$7\%$ on this dataset, with a training time that is over an order of magnitude
smaller.
| null |
http://arxiv.org/abs/1806.04655v2
|
http://arxiv.org/pdf/1806.04655v2.pdf
| null |
[
"Revanth Reddy",
"Rahul Ramesh",
"Ameet Deshpande",
"Mitesh M. Khapra"
] |
[
"Deep Learning",
"Question Answering"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/disentangled-sequential-autoencoder
|
1803.02991
| null | null |
Disentangled Sequential Autoencoder
|
We present a VAE architecture for encoding and generating high dimensional
sequential data, such as video or audio. Our deep generative model learns a
latent representation of the data which is split into a static and dynamic
part, allowing us to approximately disentangle latent time-dependent features
(dynamics) from features which are preserved over time (content). This
architecture gives us partial control over generating content and dynamics by
conditioning on either one of these sets of features. In our experiments on
artificially generated cartoon video clips and voice recordings, we show that
we can convert the content of a given sequence into another one by such content
swapping. For audio, this allows us to convert a male speaker into a female
speaker and vice versa, while for video we can separately manipulate shapes and
dynamics. Furthermore, we give empirical evidence for the hypothesis that
stochastic RNNs as latent state models are more efficient at compressing and
generating long sequences than deterministic ones, which may be relevant for
applications in video compression.
|
This architecture gives us partial control over generating content and dynamics by conditioning on either one of these sets of features.
|
http://arxiv.org/abs/1803.02991v2
|
http://arxiv.org/pdf/1803.02991v2.pdf
|
ICML 2018 7
|
[
"Yingzhen Li",
"Stephan Mandt"
] |
[
"Video Compression"
] | 2018-03-08T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2147
|
http://proceedings.mlr.press/v80/yingzhen18a/yingzhen18a.pdf
|
disentangled-sequential-autoencoder-1
| null |
[
{
"code_snippet_url": "",
"description": "In today’s digital age, USD Coin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're trying to recover a lost USD Coin wallet, knowing where to get help is essential. That’s why the USD Coin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the USD Coin Customer Support Number +1-833-534-1729\r\nUSD Coin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. USD Coin Transaction Not Confirmed\r\nOne of the most common concerns is when a USD Coin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. USD Coin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A USD Coin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost USD Coin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost USD Coin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. USD Coin Deposit Not Received\r\nIf someone has sent you USD Coin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A USD Coin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. USD Coin Transaction Stuck or Pending\r\nSometimes your USD Coin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. USD Coin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word USD Coin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the USD Coin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and USD Coin tech.\r\n\r\n24/7 Availability: USD Coin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About USD Coin Support and Wallet Issues\r\nQ1: Can USD Coin support help me recover stolen BTC?\r\nA: While USD Coin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: USD Coin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not USD Coin’s official number (USD Coin is decentralized), it connects you to trained professionals experienced in resolving all major USD Coin issues.\r\n\r\nFinal Thoughts\r\nUSD Coin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the USD Coin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "USD Coin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "USD Coin Customer Service Number +1-833-534-1729",
"source_title": "Auto-Encoding Variational Bayes",
"source_url": "http://arxiv.org/abs/1312.6114v10"
}
] |
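The VAE method entry for this row mentions the ELBO and the reparameterization trick; the sketch below restates those two pieces in plain NumPy for a diagonal-Gaussian encoder. It is a minimal illustration, not code from the paper, and the names (`reparameterize`, `elbo`) and toy shapes are assumptions.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, diag(sigma^2)) as mu + sigma * eps, keeping the draw differentiable in mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def elbo(x, x_recon, mu, log_var):
    """Monte-Carlo ELBO estimate: reconstruction term minus KL(q(z|x) || N(0, I))."""
    recon = -np.sum((x - x_recon) ** 2, axis=-1)  # Gaussian log-likelihood up to a constant
    kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)
    return float(np.mean(recon - kl))

rng = np.random.default_rng(0)
mu, log_var = rng.standard_normal((4, 2)), rng.standard_normal((4, 2))
z = reparameterize(mu, log_var, rng)      # latent codes a decoder would consume
x = rng.standard_normal((4, 8))
x_recon = rng.standard_normal((4, 8))     # stand-in for decoder output
print(z.shape, elbo(x, x_recon, mu, log_var))
```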
https://paperswithcode.com/paper/learning-to-estimate-indoor-lighting-from-3d
|
1806.03994
| null | null |
Learning to Estimate Indoor Lighting from 3D Objects
|
In this work, we propose a step towards a more accurate prediction of the
environment light given a single picture of a known object. To achieve this, we
developed a deep learning method that is able to encode the latent space of
indoor lighting using few parameters and that is trained on a database of
environment maps. This latent space is then used to generate predictions of the
light that are both more realistic and accurate than previous methods. To
achieve this, our first contribution is a deep autoencoder which is capable of
learning the feature space that compactly models lighting. Our second
contribution is a convolutional neural network that predicts the light from a
single image of a known object. To train these networks, our third contribution
is a novel dataset that contains 21,000 HDR indoor environment maps. The
results indicate that the predictor can generate plausible lighting estimations
even from diffuse objects.
|
To achieve this, we developed a deep learning method that is able to encode the latent space of indoor lighting using few parameters and that is trained on a database of environment maps.
|
http://arxiv.org/abs/1806.03994v3
|
http://arxiv.org/pdf/1806.03994v3.pdf
|
3DV 2018 - International Conference on 3D Vision 2018 9
|
[
"Henrique Weber",
"Donald Prévost",
"Jean-François Lalonde"
] |
[] | 2018-06-11T00:00:00 | null | null |
learning-to-estimate-indoor-lighting-from-3d-1
| null |
[
{
"code_snippet_url": "",
"description": "In today’s digital age, Solana has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Solana transaction not confirmed, your Solana wallet not showing balance, or you're trying to recover a lost Solana wallet, knowing where to get help is essential. That’s why the Solana customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Solana Customer Support Number +1-833-534-1729\r\nSolana operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Solana Transaction Not Confirmed\r\nOne of the most common concerns is when a Solana transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Solana Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Solana wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Solana Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Solana wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Solana Deposit Not Received\r\nIf someone has sent you Solana but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Solana deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Solana Transaction Stuck or Pending\r\nSometimes your Solana transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Solana Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Solana wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Solana Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Solana tech.\r\n\r\n24/7 Availability: Solana doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Solana Support and Wallet Issues\r\nQ1: Can Solana support help me recover stolen BTC?\r\nA: While Solana transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Solana transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Solana’s official number (Solana is decentralized), it connects you to trained professionals experienced in resolving all major Solana issues.\r\n\r\nFinal Thoughts\r\nSolana is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Solana transaction not confirmed, your Solana wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Solana customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Solana Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Solana Customer Service Number +1-833-534-1729",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] |
https://paperswithcode.com/paper/boosting-black-box-variational-inference
|
1806.02185
| null | null |
Boosting Black Box Variational Inference
|
Approximating a probability density in a tractable manner is a central task
in Bayesian statistics. Variational Inference (VI) is a popular technique that
achieves tractability by choosing a relatively simple variational family.
Borrowing ideas from the classic boosting framework, recent approaches attempt
to \emph{boost} VI by replacing the selection of a single density with a
greedily constructed mixture of densities. In order to guarantee convergence,
previous works impose stringent assumptions that require significant effort for
practitioners. Specifically, they require a custom implementation of the greedy
step (called the LMO) for every probabilistic model with respect to an
unnatural variational family of truncated distributions. Our work fixes these
issues with novel theoretical and algorithmic insights. On the theoretical
side, we show that boosting VI satisfies a relaxed smoothness assumption which
is sufficient for the convergence of the functional Frank-Wolfe (FW) algorithm.
Furthermore, we rephrase the LMO problem and propose to maximize the Residual
ELBO (RELBO) which replaces the standard ELBO optimization in VI. These
theoretical enhancements allow for black box implementation of the boosting
subroutine. Finally, we present a stopping criterion drawn from the duality gap
in the classic FW analyses and exhaustive experiments to illustrate the
usefulness of our theoretical and algorithmic contributions.
|
Finally, we present a stopping criterion drawn from the duality gap in the classic FW analyses and exhaustive experiments to illustrate the usefulness of our theoretical and algorithmic contributions.
|
http://arxiv.org/abs/1806.02185v5
|
http://arxiv.org/pdf/1806.02185v5.pdf
|
NeurIPS 2018 12
|
[
"Francesco Locatello",
"Gideon Dresdner",
"Rajiv Khanna",
"Isabel Valera",
"Gunnar Rätsch"
] |
[
"Variational Inference"
] | 2018-06-06T00:00:00 |
http://papers.nips.cc/paper/7600-boosting-black-box-variational-inference
|
http://papers.nips.cc/paper/7600-boosting-black-box-variational-inference.pdf
|
boosting-black-box-variational-inference-1
| null |
[] |
https://paperswithcode.com/paper/adversarial-attacks-on-variational
|
1806.04646
| null | null |
Adversarial Attacks on Variational Autoencoders
|
Adversarial attacks are malicious inputs that derail machine-learning models.
We propose a scheme to attack autoencoders, as well as a quantitative
evaluation framework that correlates well with the qualitative assessment of
the attacks. We assess --- with statistically validated experiments --- the
resistance to attacks of three variational autoencoders (simple, convolutional,
and DRAW) in three datasets (MNIST, SVHN, CelebA), showing that both DRAW's
recurrence and attention mechanism lead to better resistance. As autoencoders
are proposed for compressing data --- a scenario in which their safety is
paramount --- we expect more attention will be given to adversarial attacks on
them.
|
Adversarial attacks are malicious inputs that derail machine-learning models.
|
http://arxiv.org/abs/1806.04646v1
|
http://arxiv.org/pdf/1806.04646v1.pdf
| null |
[
"George Gondim-Ribeiro",
"Pedro Tabacof",
"Eduardo Valle"
] |
[
"BIG-bench Machine Learning"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/accelerating-imitation-learning-with
|
1806.04642
| null | null |
Accelerating Imitation Learning with Predictive Models
|
Sample efficiency is critical in solving real-world reinforcement learning
problems, where agent-environment interactions can be costly. Imitation
learning from expert advice has proved to be an effective strategy for reducing
the number of interactions required to train a policy. Online imitation
learning, which interleaves policy evaluation and policy optimization, is a
particularly effective technique with provable performance guarantees. In this
work, we seek to further accelerate the convergence rate of online imitation
learning, thereby making it more sample efficient. We propose two model-based
algorithms inspired by Follow-the-Leader (FTL) with prediction: MoBIL-VI based
on solving variational inequalities and MoBIL-Prox based on stochastic
first-order updates. These two methods leverage a model to predict future
gradients to speed up policy learning. When the model oracle is learned online,
these algorithms can provably accelerate the best known convergence rate up to
an order. Our algorithms can be viewed as a generalization of stochastic
Mirror-Prox (Juditsky et al., 2011), and admit a simple constructive FTL-style
analysis of performance.
| null |
http://arxiv.org/abs/1806.04642v4
|
http://arxiv.org/pdf/1806.04642v4.pdf
| null |
[
"Ching-An Cheng",
"Xinyan Yan",
"Evangelos A. Theodorou",
"Byron Boots"
] |
[
"Imitation Learning",
"Reinforcement Learning"
] | 2018-06-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/unsupervised-meta-learning-for-reinforcement
|
1806.04640
| null |
H1eRBoC9FX
|
Unsupervised Meta-Learning for Reinforcement Learning
|
Meta-learning algorithms use past experience to learn to quickly solve new tasks. In the context of reinforcement learning, meta-learning algorithms acquire reinforcement learning procedures to solve new problems more efficiently by utilizing experience from prior tasks. The performance of meta-learning algorithms depends on the tasks available for meta-training: in the same way that supervised learning generalizes best to test points drawn from the same distribution as the training points, meta-learning methods generalize best to tasks from the same distribution as the meta-training tasks. In effect, meta-reinforcement learning offloads the design burden from algorithm design to task design. If we can automate the process of task design as well, we can devise a meta-learning algorithm that is truly automated. In this work, we take a step in this direction, proposing a family of unsupervised meta-learning algorithms for reinforcement learning. We motivate and describe a general recipe for unsupervised meta-reinforcement learning, and present an instantiation of this approach. Our conceptual and theoretical contributions consist of formulating the unsupervised meta-reinforcement learning problem and describing how task proposals based on mutual information can be used to train optimal meta-learners. Our experimental results indicate that unsupervised meta-reinforcement learning effectively acquires accelerated reinforcement learning procedures without the need for manual task design and these procedures exceed the performance of learning from scratch.
| null |
https://arxiv.org/abs/1806.04640v3
|
https://arxiv.org/pdf/1806.04640v3.pdf
|
ICLR 2020 1
|
[
"Abhishek Gupta",
"Benjamin Eysenbach",
"Chelsea Finn",
"Sergey Levine"
] |
[
"Meta-Learning",
"Meta Reinforcement Learning",
"Multi-Task Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-12T00:00:00 |
https://openreview.net/forum?id=H1eRBoC9FX
|
https://openreview.net/pdf?id=H1eRBoC9FX
|
unsupervised-meta-learning-for-reinforcement-1
| null |
[] |
https://paperswithcode.com/paper/measures-of-tractography-convergence
|
1806.04634
| null | null |
Measures of Tractography Convergence
|
In the present work, we use information theory to understand the empirical
convergence rate of tractography, a widely-used approach to reconstruct
anatomical fiber pathways in the living brain. Based on diffusion MRI data,
tractography is the starting point for many methods to study brain
connectivity. Of the available methods to perform tractography, most
reconstruct a finite set of streamlines, or 3D curves, representing probable
connections between anatomical regions, yet relatively little is known about
how the sampling of this set of streamlines affects downstream results, and how
exhaustive the sampling should be. Here we provide a method to measure the
information theoretic surprise (self-cross entropy) for tract sampling schema.
We then empirically assess four streamline methods. We demonstrate that the
relative information gain is very low after a moderate number of streamlines
have been generated for each tested method. The results give rise to several
guidelines for optimal sampling in brain connectivity analyses.
| null |
http://arxiv.org/abs/1806.04634v1
|
http://arxiv.org/pdf/1806.04634v1.pdf
| null |
[
"Daniel Moyer",
"Paul M. Thompson",
"Greg Ver Steeg"
] |
[
"Diffusion MRI"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/organizing-experience-a-deeper-look-at-replay
|
1806.04624
| null | null |
Organizing Experience: A Deeper Look at Replay Mechanisms for Sample-based Planning in Continuous State Domains
|
Model-based strategies for control are critical to obtain sample efficient
learning. Dyna is a planning paradigm that naturally interleaves learning and
planning, by simulating one-step experience to update the action-value
function. This elegant planning strategy has been mostly explored in the
tabular setting. The aim of this paper is to revisit sample-based planning, in
stochastic and continuous domains with learned models. We first highlight the
flexibility afforded by a model over Experience Replay (ER). Replay-based
methods can be seen as stochastic planning methods that repeatedly sample from
a buffer of recent agent-environment interactions and perform updates to
improve data efficiency. We show that a model, as opposed to a replay buffer,
is particularly useful for specifying which states to sample from during
planning, such as predecessor states that propagate information in reverse from
a state more quickly. We introduce a semi-parametric model learning approach,
called Reweighted Experience Models (REMs), that makes it simple to sample next
states or predecessors. We demonstrate that REM-Dyna exhibits similar
advantages over replay-based methods in learning in continuous state problems,
and that the performance gap grows when moving to stochastic domains, of
increasing size.
| null |
http://arxiv.org/abs/1806.04624v1
|
http://arxiv.org/pdf/1806.04624v1.pdf
| null |
[
"Yangchen Pan",
"Muhammad Zaheer",
"Adam White",
"Andrew Patterson",
"Martha White"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Experience Replay** is a replay memory technique used in reinforcement learning where we store the agent’s experiences at each time-step, $e\\_{t} = \\left(s\\_{t}, a\\_{t}, r\\_{t}, s\\_{t+1}\\right)$ in a data-set $D = e\\_{1}, \\cdots, e\\_{N}$ , pooled over many episodes into a replay memory. We then usually sample the memory randomly for a minibatch of experience, and use this to learn off-policy, as with Deep Q-Networks. This tackles the problem of autocorrelation leading to unstable training, by making the problem more like a supervised learning problem.\r\n\r\nImage Credit: [Hands-On Reinforcement Learning with Python, Sudharsan Ravichandiran](https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781788836524)",
"full_name": "Experience Replay",
"introduced_year": 1993,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Replay Memory",
"parent": null
},
"name": "Experience Replay",
"source_title": null,
"source_url": null
}
] |
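The Experience Replay entry above describes storing transitions $(s_t, a_t, r_t, s_{t+1})$ in a replay memory and sampling random minibatches for off-policy updates. Below is a minimal sketch of such a buffer; the class name, capacity, and toy transitions are illustrative assumptions rather than anything from the cited papers.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (s, a, r, s_next) transitions with uniform minibatch sampling."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted first

    def add(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size):
        batch = random.sample(list(self.buffer), batch_size)
        return tuple(zip(*batch))  # (states, actions, rewards, next_states)

buf = ReplayBuffer(capacity=100)
for t in range(50):
    buf.add(t, t % 4, 1.0, t + 1)  # toy integer transitions
states, actions, rewards, next_states = buf.sample(8)
print(states, actions)
```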
https://paperswithcode.com/paper/fast-forwarding-egocentric-videos-by
|
1806.04620
| null | null |
Fast forwarding Egocentric Videos by Listening and Watching
|
The remarkable technological advance in well-equipped wearable devices is
pushing an increasing production of long first-person videos. However, since
most of these videos have long and tedious parts, they are forgotten or never
seen. Despite a large number of techniques proposed to fast-forward these
videos by highlighting relevant moments, most of them are image based only.
Most of these techniques disregard other relevant sensors present in the
current devices such as high-definition microphones. In this work, we propose a
new approach to fast-forward videos using psychoacoustic metrics extracted from
the soundtrack. These metrics can be used to estimate the annoyance of a
segment allowing our method to emphasize moments of sound pleasantness. The
efficiency of our method is demonstrated through qualitative results and
quantitative results as far as speed-up and instability are concerned.
| null |
http://arxiv.org/abs/1806.04620v1
|
http://arxiv.org/pdf/1806.04620v1.pdf
| null |
[
"Vinicius S. Furlan",
"Ruzena Bajcsy",
"Erickson R. Nascimento"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/imperfect-segmentation-labels-how-much-do
|
1806.04618
| null | null |
Imperfect Segmentation Labels: How Much Do They Matter?
|
Labeled datasets for semantic segmentation are imperfect, especially in
medical imaging where borders are often subtle or ill-defined. Little work has
been done to analyze the effect that label errors have on the performance of
segmentation methodologies. Here we present a large-scale study of model
performance in the presence of varying types and degrees of error in training
data. We trained U-Net, SegNet, and FCN32 several times for liver segmentation
with 10 different modes of ground-truth perturbation. Our results show that for
each architecture, performance steadily declines with boundary-localized
errors, however, U-Net was significantly more robust to jagged boundary errors
than the other architectures. We also found that each architecture was very
robust to non-boundary-localized errors, suggesting that boundary-localized
errors are a fundamentally different and more challenging problem than random
label errors in a classification setting.
|
Labeled datasets for semantic segmentation are imperfect, especially in medical imaging where borders are often subtle or ill-defined.
|
http://arxiv.org/abs/1806.04618v3
|
http://arxiv.org/pdf/1806.04618v3.pdf
| null |
[
"Nicholas Heller",
"Joshua Dean",
"Nikolaos Papanikolopoulos"
] |
[
"Liver Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/yassouali/pytorch_segmentation/blob/8b8e3ee20a3aa733cb19fc158ad5d7773ed6da7f/models/segnet.py#L9",
"description": "**SegNet** is a semantic segmentation model. This core trainable segmentation architecture consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the\r\nVGG16 network. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature maps. Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to\r\nperform non-linear upsampling.",
"full_name": "SegNet",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "SegNet",
"source_title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation",
"source_url": "http://arxiv.org/abs/1511.00561v3"
}
] |
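The Batch Normalization entry in this row lists the four per-minibatch equations (mean, variance, normalization, scale-and-shift); the NumPy sketch below simply restates them for a 2-D activation matrix. It is a minimal illustration under assumed names (`batch_norm`, `gamma`, `beta`), not any library's implementation.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the minibatch, then apply the learnable scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)                    # per-feature minibatch mean
    var = x.var(axis=0)                    # per-feature minibatch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized activations
    return gamma * x_hat + beta            # y_i = gamma * x_hat_i + beta

x = np.random.default_rng(0).standard_normal((32, 4))   # minibatch of 32 examples, 4 features
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # roughly zero mean and unit std per feature
```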
https://paperswithcode.com/paper/deep-learning-to-detect-redundant-method
|
1806.04616
| null | null |
Deep Learning to Detect Redundant Method Comments
|
Comments in software are critical for maintenance and reuse. But apart from
prescriptive advice, there is little practical support or quantitative
understanding of what makes a comment useful. In this paper, we introduce the
task of identifying comments which are uninformative about the code they are
meant to document. To address this problem, we introduce the notion of comment
entailment from code, high entailment indicating that a comment's natural
language semantics can be inferred directly from the code. Although not all
entailed comments are low quality, comments that are too easily inferred, for
example, comments that restate the code, are widely discouraged by authorities
on software style. Based on this, we develop a tool called CRAIC which scores
method-level comments for redundancy. Highly redundant comments can then be
expanded or alternately removed by the developer. CRAIC uses deep language
models to exploit large software corpora without requiring expensive manual
annotations of entailment. We show that CRAIC can perform the comment
entailment task with good agreement with human judgements. Our findings also
have implications for documentation tools. For example, we find that common
tags in Javadoc are at least two times more predictable from code than
non-Javadoc sentences, suggesting that Javadoc tags are less informative than
more free-form comments
|
To address this problem, we introduce the notion of comment entailment from code, high entailment indicating that a comment's natural language semantics can be inferred directly from the code.
|
http://arxiv.org/abs/1806.04616v1
|
http://arxiv.org/pdf/1806.04616v1.pdf
| null |
[
"Annie Louis",
"Santanu Kumar Dash",
"Earl T. Barr",
"Charles Sutton"
] |
[
"Deep Learning"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/improving-regression-performance-with
|
1806.04613
| null | null |
Improving Regression Performance with Distributional Losses
|
There is growing evidence that converting targets to soft targets in
supervised learning can provide considerable gains in performance. Much of this
work has considered classification, converting hard zero-one values to soft
labels---such as by adding label noise, incorporating label ambiguity or using
distillation. In parallel, there is some evidence from a regression setting in
reinforcement learning that learning distributions can improve performance. In
this work, we investigate the reasons for this improvement, in a regression
setting. We introduce a novel distributional regression loss, and similarly
find it significantly improves prediction accuracy. We investigate several
common hypotheses, around reducing overfitting and improved representations. We
instead find evidence for an alternative hypothesis: this loss is easier to
optimize, with better behaved gradients, resulting in improved generalization.
We provide theoretical support for this alternative hypothesis, by
characterizing the norm of the gradients of this loss.
| null |
http://arxiv.org/abs/1806.04613v1
|
http://arxiv.org/pdf/1806.04613v1.pdf
|
ICML 2018 7
|
[
"Ehsan Imani",
"Martha White"
] |
[
"regression",
"Reinforcement Learning"
] | 2018-06-12T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2092
|
http://proceedings.mlr.press/v80/imani18a/imani18a.pdf
|
improving-regression-performance-with-1
| null |
[] |
https://paperswithcode.com/paper/a-novel-bayesian-approach-for-latent-variable
|
1806.04610
| null | null |
A Novel Bayesian Approach for Latent Variable Modeling from Mixed Data with Missing Values
|
We consider the problem of learning parameters of latent variable models from
mixed (continuous and ordinal) data with missing values. We propose a novel
Bayesian Gaussian copula factor (BGCF) approach that is consistent under
certain conditions and that is quite robust to the violations of these
conditions. In simulations, BGCF substantially outperforms two state-of-the-art
alternative approaches. An illustration on the `Holzinger & Swineford 1939'
dataset indicates that BGCF is favorable over the so-called robust maximum
likelihood (MLR) even if the data match the assumptions of MLR.
|
We consider the problem of learning parameters of latent variable models from mixed (continuous and ordinal) data with missing values.
|
http://arxiv.org/abs/1806.04610v1
|
http://arxiv.org/pdf/1806.04610v1.pdf
| null |
[
"Ruifei Cui",
"Ioan Gabriel Bucur",
"Perry Groot",
"Tom Heskes"
] |
[
"Missing Values"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/streaming-pca-and-subspace-tracking-the
|
1806.04609
| null | null |
Streaming PCA and Subspace Tracking: The Missing Data Case
|
For many modern applications in science and engineering, data are collected
in a streaming fashion carrying time-varying information, and practitioners
need to process them with a limited amount of memory and computational
resources in a timely manner for decision making. This often is coupled with
the missing data problem, such that only a small fraction of data attributes
are observed. These complications impose significant, and unconventional,
constraints on the problem of streaming Principal Component Analysis (PCA) and
subspace tracking, which is an essential building block for many inference
tasks in signal processing and machine learning. This survey article reviews a
variety of classical and recent algorithms for solving this problem with low
computational and memory complexities, particularly those applicable in the big
data regime with missing data. We illustrate that streaming PCA and subspace
tracking algorithms can be understood through algebraic and geometric
perspectives, and they need to be adjusted carefully to handle missing data.
Both asymptotic and non-asymptotic convergence guarantees are reviewed.
Finally, we benchmark the performance of several competitive algorithms in the
presence of missing data for both well-conditioned and ill-conditioned systems.
| null |
http://arxiv.org/abs/1806.04609v1
|
http://arxiv.org/pdf/1806.04609v1.pdf
| null |
[
"Laura Balzano",
"Yuejie Chi",
"Yue M. Lu"
] |
[
"Decision Making"
] | 2018-06-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
}
] |
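The PCA entry above notes that the principal components can be computed from an SVD of the centered design matrix; here is a minimal NumPy sketch of that computation. Function and variable names are illustrative, and the toy data is synthetic.

```python
import numpy as np

def pca(X, k):
    """Return the top-k PCA scores, principal directions, and explained variances via SVD."""
    Xc = X - X.mean(axis=0)                              # center each column (feature)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)    # SVD of the centered design matrix
    components = Vt[:k]                                  # top-k right singular vectors = principal directions
    explained_var = (S[:k] ** 2) / (X.shape[0] - 1)      # variance captured by each direction
    return Xc @ components.T, components, explained_var

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 5))  # correlated toy data
scores, directions, variances = pca(X, k=2)
print(scores.shape, variances)
```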
https://paperswithcode.com/paper/knowledge-distillation-by-on-the-fly-native
|
1806.04606
| null | null |
Knowledge Distillation by On-the-Fly Native Ensemble
|
Knowledge distillation is effective to train small and generalisable network
models for meeting the low-memory and fast running requirements. Existing
offline distillation methods rely on a strong pre-trained teacher, which
enables favourable knowledge discovery and transfer but requires a complex
two-phase training procedure. Online counterparts address this limitation at
the price of lacking a high-capacity teacher. In this work, we present an
On-the-fly Native Ensemble (ONE) strategy for one-stage online distillation.
Specifically, ONE trains only a single multi-branch network while
simultaneously establishing a strong teacher on-the-fly to enhance the
learning of the target network. Extensive evaluations show that ONE improves the
generalisation performance of a variety of deep neural networks more significantly
than alternative methods on four image classification datasets: CIFAR10,
CIFAR100, SVHN, and ImageNet, whilst retaining computational efficiency
advantages.
|
Knowledge distillation is effective to train small and generalisable network models for meeting the low-memory and fast running requirements.
|
http://arxiv.org/abs/1806.04606v2
|
http://arxiv.org/pdf/1806.04606v2.pdf
|
NeurIPS 2018 12
|
[
"Xu Lan",
"Xiatian Zhu",
"Shaogang Gong"
] |
[
"Computational Efficiency",
"image-classification",
"Image Classification",
"Knowledge Distillation"
] | 2018-06-12T00:00:00 |
http://papers.nips.cc/paper/7980-knowledge-distillation-by-on-the-fly-native-ensemble
|
http://papers.nips.cc/paper/7980-knowledge-distillation-by-on-the-fly-native-ensemble.pdf
|
knowledge-distillation-by-on-the-fly-native-1
| null |
[] |
https://paperswithcode.com/paper/multiview-two-task-recursive-attention-model
|
1806.04597
| null | null |
Multiview Two-Task Recursive Attention Model for Left Atrium and Atrial Scars Segmentation
|
Late Gadolinium Enhanced Cardiac MRI (LGE-CMRI) for detecting atrial scars in
atrial fibrillation (AF) patients has recently emerged as a promising technique
to stratify patients, guide ablation therapy and predict treatment success.
Visualisation and quantification of scar tissues require a segmentation of both
the left atrium (LA) and the high intensity scar regions from LGE-CMRI images.
These two segmentation tasks are challenging due to the cancelling of healthy
tissue signal, low signal-to-noise ratio and often limited image quality in
these patients. Most approaches require manual supervision and/or a second
bright-blood MRI acquisition for anatomical segmentation. Segmenting both the
LA anatomy and the scar tissues automatically from a single LGE-CMRI
acquisition is highly in demand. In this study, we proposed a novel fully
automated multiview two-task (MVTT) recursive attention model working directly
on LGE-CMRI images that combines a sequential learning and a dilated residual
learning to segment the LA (including attached pulmonary veins) and delineate
the atrial scars simultaneously via an innovative attention model. Compared to
other state-of-the-art methods, the proposed MVTT achieves a compelling
improvement, enabling the generation of a patient-specific anatomical and atrial scar
assessment model.
| null |
http://arxiv.org/abs/1806.04597v1
|
http://arxiv.org/pdf/1806.04597v1.pdf
| null |
[
"Jun Chen",
"Guang Yang",
"Zhifan Gao",
"Hao Ni",
"Elsa Angelini",
"Raad Mohiaddin",
"Tom Wong",
"Yanping Zhang",
"Xiuquan Du",
"Heye Zhang",
"Jennifer Keegan",
"David Firmin"
] |
[
"Anatomy",
"Segmentation"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/exponential-weights-on-the-hypercube-in
|
1806.04594
| null | null |
Exponential Weights on the Hypercube in Polynomial Time
|
We study a general online linear optimization problem (OLO). At each round, a subset of objects from a fixed universe of $n$ objects is chosen, and a linear cost associated with the chosen subset is incurred. To measure the performance of our algorithms, we use the notion of regret, which is the difference between the total cost incurred over all iterations and the cost of the best fixed subset in hindsight. We consider Full Information and Bandit feedback for this problem. This problem is equivalent to OLO on the $\{0,1\}^n$ hypercube. The Exp2 algorithm and its bandit variant are commonly used strategies for this problem. It was previously unknown whether it is possible to run Exp2 on the hypercube in polynomial time. In this paper, we present a polynomial time algorithm called PolyExp for OLO on the hypercube. We show that our algorithm is equivalent to Exp2 on $\{0,1\}^n$, as well as to the Online Mirror Descent (OMD), Follow The Regularized Leader (FTRL) and Follow The Perturbed Leader (FTPL) algorithms. We show that PolyExp achieves an expected regret bound that is a factor of $\sqrt{n}$ better than Exp2 in the full information setting under $L_\infty$ adversarial losses. Because of the equivalence of these algorithms, this implies an improvement on Exp2's regret bound in the full information setting. We also show matching regret lower bounds. Finally, we show how to use PolyExp on the $\{-1,+1\}^n$ hypercube, solving an open problem in Bubeck et al. (COLT 2012).
| null |
https://arxiv.org/abs/1806.04594v5
|
https://arxiv.org/pdf/1806.04594v5.pdf
| null |
[
"Sudeep Raja Putta",
"Abhishek Shetty"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/using-inherent-structures-to-design-lean-2
|
1806.04577
| null | null |
Using Inherent Structures to design Lean 2-layer RBMs
|
Understanding the representational power of Restricted Boltzmann Machines
(RBMs) with multiple layers is an ill-understood problem and is an area of
active research. Motivated from the approach of \emph{Inherent Structure
formalism} (Stillinger & Weber, 1982), extensively used in analysing Spin
Glasses, we propose a novel measure called \emph{Inherent Structure Capacity}
(ISC), which characterizes the representation capacity of a fixed architecture
RBM by the expected number of modes of distributions emanating from the RBM
with parameters drawn from a prior distribution. Though ISC is intractable, we
show that for a single-layer RBM architecture the ISC approaches a finite constant
as the number of hidden units is increased, and that to further improve the ISC one
needs to add a second layer. Furthermore, we introduce \emph{Lean} RBMs, which
are multi-layer RBMs where each layer can have at most $O(n)$ units, with $n$
being the number of visible units. We show that for every single-layer RBM with
$\Omega(n^{2+r})$ hidden units, where $r \ge 0$, there exists a two-layered \emph{lean}
RBM with $\Theta(n^2)$ parameters and the same ISC, establishing that 2-layer
RBMs can achieve the same representational power as single-layer RBMs while using
far fewer parameters. To the best of our knowledge, this is the first
result which quantitatively establishes the need for layering.
| null |
http://arxiv.org/abs/1806.04577v1
|
http://arxiv.org/pdf/1806.04577v1.pdf
|
ICML 2018 7
|
[
"Abhishek Bansal",
"Abhinav Anand",
"Chiranjib Bhattacharyya"
] |
[] | 2018-06-12T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2376
|
http://proceedings.mlr.press/v80/bansal18a/bansal18a.pdf
|
using-inherent-structures-to-design-lean-2-1
| null |
[] |
https://paperswithcode.com/paper/acting-thoughts-towards-a-mobile-robotic
|
1707.06633
| null | null |
Acting Thoughts: Towards a Mobile Robotic Service Assistant for Users with Limited Communication Skills
|
As autonomous service robots become more affordable and thus available also
for the general public, there is a growing need for user-friendly interfaces to
control the robotic system. Currently available control modalities typically
expect users to be able to express their desire through either touch, speech or
gesture commands. While this requirement is fulfilled for the majority of
users, paralyzed users may not be able to use such systems. In this paper, we
present a novel framework that allows these users to interact with a robotic
service assistant in a closed-loop fashion, using only thoughts. The
brain-computer interface (BCI) system is composed of several interacting
components, i.e., non-invasive neuronal signal recording and decoding,
high-level task planning, motion and manipulation planning as well as
environment perception. In various experiments, we demonstrate its
applicability and robustness in real world scenarios, considering
fetch-and-carry tasks and tasks involving human-robot interaction. As our
results demonstrate, our system is capable of adapting to frequent changes in
the environment and reliably completing given tasks within a reasonable amount
of time. Combined with high-level planning and autonomous robotic systems,
interesting new perspectives open up for non-invasive BCI-based human-robot
interactions.
| null |
http://arxiv.org/abs/1707.06633v4
|
http://arxiv.org/pdf/1707.06633v4.pdf
| null |
[
"Felix Burget",
"Lukas Dominique Josef Fiederer",
"Daniel Kuhner",
"Martin Völker",
"Johannes Aldinger",
"Robin Tibor Schirrmeister",
"Chau Do",
"Joschka Boedecker",
"Bernhard Nebel",
"Tonio Ball",
"Wolfram Burgard"
] |
[
"Brain Computer Interface",
"Task Planning"
] | 2017-07-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/structured-evolution-with-compact
|
1804.02395
| null | null |
Structured Evolution with Compact Architectures for Scalable Policy Optimization
|
We present a new method of blackbox optimization via gradient approximation
with the use of structured random orthogonal matrices, providing more accurate
estimators than baselines and with provable theoretical guarantees. We show
that this algorithm can be successfully applied to learn better quality compact
policies than those using standard gradient estimation techniques. The compact
policies we learn have several advantages over unstructured ones, including
faster training algorithms and faster inference. These benefits are important
when the policy is deployed on real hardware with limited resources. Further,
compact policies provide more scalable architectures for derivative-free
optimization (DFO) in high-dimensional spaces. We show that most robotics tasks
from the OpenAI Gym can be solved using neural networks with less than 300
parameters, with almost linear time complexity of the inference phase, with up
to 13x fewer parameters relative to the Evolution Strategies (ES) algorithm
introduced by Salimans et al. (2017). We do not need heuristics such as fitness
shaping to learn good quality policies, resulting in a simple and theoretically
motivated training mechanism.
| null |
http://arxiv.org/abs/1804.02395v2
|
http://arxiv.org/pdf/1804.02395v2.pdf
|
ICML 2018 7
|
[
"Krzysztof Choromanski",
"Mark Rowland",
"Vikas Sindhwani",
"Richard E. Turner",
"Adrian Weller"
] |
[
"OpenAI Gym",
"Text-to-Image Generation"
] | 2018-04-06T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1907
|
http://proceedings.mlr.press/v80/choromanski18a/choromanski18a.pdf
|
structured-evolution-with-compact-1
| null |
[] |
https://paperswithcode.com/paper/benchmarking-evolutionary-algorithms-for
|
1806.04563
| null | null |
Benchmarking Evolutionary Algorithms For Single Objective Real-valued Constrained Optimization - A Critical Review
|
Benchmarking plays an important role in the development of novel search
algorithms as well as for the assessment and comparison of contemporary
algorithmic ideas. This paper presents common principles that need to be taken
into account when considering benchmarking problems for constrained
optimization. Current benchmark environments for testing Evolutionary
Algorithms are reviewed in the light of these principles. Along this line,
the reader is provided with an overview of the available problem domains in the
field of constrained benchmarking. Hence, the review supports algorithm
developers with information about the merits and demerits of the available
frameworks.
| null |
http://arxiv.org/abs/1806.04563v2
|
http://arxiv.org/pdf/1806.04563v2.pdf
| null |
[
"Michael Hellwig",
"Hans-Georg Beyer"
] |
[
"Benchmarking",
"Evolutionary Algorithms"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-agent-deep-reinforcement-learning-with-1
|
1806.04562
| null | null |
Multi-Agent Deep Reinforcement Learning with Human Strategies
|
Deep learning has enabled traditional reinforcement learning methods to deal with high-dimensional problems. However, one of the disadvantages of deep reinforcement learning methods is the limited exploration capacity of learning agents. In this paper, we introduce an approach that integrates human strategies to increase the exploration capacity of multiple deep reinforcement learning agents. We also report the development of our own multi-agent environment called Multiple Tank Defence to simulate the proposed approach. The results show the significant performance improvement of multiple agents that have learned cooperatively with human strategies. This implies that there is a critical need for human intellect teamed with machines to solve complex problems. In addition, the success of this simulation indicates that our multi-agent environment can be used as a testbed platform to develop and validate other multi-agent control algorithms.
| null |
https://arxiv.org/abs/1806.04562v2
|
https://arxiv.org/pdf/1806.04562v2.pdf
| null |
[
"Thanh Nguyen",
"Ngoc Duy Nguyen",
"Saeid Nahavandi"
] |
[
"Deep Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-extension-of-averaged-operator-based
|
1806.04561
| null | null |
An Extension of Averaged-Operator-Based Algorithms
|
Many of the algorithms used to solve minimization problems with
sparsity-inducing regularizers are generic in the sense that they do not take
into account the sparsity of the solution in any particular way. However,
algorithms known as semismooth Newton are able to take advantage of this
sparsity to accelerate their convergence. We show how to extend these
algorithms in different directions, and study the convergence of the resulting
algorithms by showing that they are a particular case of an extension of the
well-known Krasnosel'ski\u{\i}--Mann scheme.
| null |
http://arxiv.org/abs/1806.04561v1
|
http://arxiv.org/pdf/1806.04561v1.pdf
| null |
[
"Miguel Simões",
"José Bioucas-Dias",
"Luis B. Almeida"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/transfer-learning-from-speaker-verification
|
1806.04558
| null | null |
Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis
|
Clone a voice in 5 seconds to generate arbitrary speech in real-time
|
Clone a voice in 5 seconds to generate arbitrary speech in real-time
|
http://arxiv.org/abs/1806.04558v4
|
http://arxiv.org/pdf/1806.04558v4.pdf
|
NeurIPS 2018 12
|
[
"Ye Jia",
"Yu Zhang",
"Ron J. Weiss",
"Quan Wang",
"Jonathan Shen",
"Fei Ren",
"Zhifeng Chen",
"Patrick Nguyen",
"Ruoming Pang",
"Ignacio Lopez Moreno",
"Yonghui Wu"
] |
[
"Speaker Verification",
"Speech Synthesis",
"text-to-speech",
"Text to Speech",
"Text-To-Speech Synthesis",
"Transfer Learning",
"Voice Cloning"
] | 2018-06-12T00:00:00 |
http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis
|
http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis.pdf
|
transfer-learning-from-speaker-verification-1
| null |
[] |
https://paperswithcode.com/paper/logistic-ensemble-models
|
1806.04555
| null | null |
Logistic Ensemble Models
|
Predictive models that are developed in a regulated industry or a regulated
application, like determination of credit worthiness, must be interpretable and
rational (e.g., meaningful improvements in basic credit behavior must result in
improved credit worthiness scores). Machine Learning technologies provide very
good performance with minimal analyst intervention, making them well suited to
a high volume analytic environment, but the majority are black box tools that
provide very limited insight or interpretability into key drivers of model
performance or predicted model output values. This paper presents a methodology
that blends one of the most popular predictive statistical modeling methods for
binary classification with a core model enhancement strategy found in machine
learning. The resulting prediction methodology provides solid performance, from
minimal analyst effort, while providing the interpretability and rationality
required in regulated industries, as well as in other environments where
interpretation of model parameters is required (e.g. businesses that require
interpretation of models, to take action on them).
| null |
http://arxiv.org/abs/1806.04555v1
|
http://arxiv.org/pdf/1806.04555v1.pdf
| null |
[
"Bob Vanderheyden",
"Jennifer Priestley"
] |
[
"BIG-bench Machine Learning",
"Binary Classification"
] | 2018-06-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "Interpretability",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.",
"name": "Image Models",
"parent": null
},
"name": "Interpretability",
"source_title": "CAM: Causal additive models, high-dimensional order search and penalized regression",
"source_url": "http://arxiv.org/abs/1310.1533v2"
}
] |
https://paperswithcode.com/paper/bitcoin-volatility-forecasting-with-a-glimpse
|
1802.04065
| null | null |
Bitcoin Volatility Forecasting with a Glimpse into Buy and Sell Orders
|
In this paper, we study the ability to make short-term predictions of the
exchange price fluctuations against the United States dollar for the Bitcoin
market. We use realized volatility data collected from one of the
largest Bitcoin digital trading offices in 2016 and 2017 as well as order
information. Experiments are performed to evaluate a variety of statistical and
machine learning approaches.
| null |
http://arxiv.org/abs/1802.04065v3
|
http://arxiv.org/pdf/1802.04065v3.pdf
| null |
[
"Tian Guo",
"Albert Bifet",
"Nino Antulov-Fantulin"
] |
[
"BIG-bench Machine Learning"
] | 2018-02-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/combining-model-free-q-ensembles-and-model
|
1806.04552
| null | null |
Combining Model-Free Q-Ensembles and Model-Based Approaches for Informed Exploration
|
Q-Ensembles are a model-free approach where input images are fed into
different Q-networks and exploration is driven by the assumption that
uncertainty is proportional to the variance of the output Q-values obtained.
They have been shown to perform relatively well compared to other exploration
strategies. Further, model-based approaches, such as encoder-decoder models
have been used successfully for next frame prediction given previous frames.
This paper proposes to integrate the model-free Q-ensembles and model-based
approaches with the hope of compounding the benefits of both and achieving
superior exploration as a result. Results show that a model-based trajectory
memory approach when combined with Q-ensembles produces superior performance
when compared to only using Q-ensembles.
| null |
http://arxiv.org/abs/1806.04552v1
|
http://arxiv.org/pdf/1806.04552v1.pdf
| null |
[
"Sreecharan Sankaranarayanan",
"Raghuram Mandyam Annasamy",
"Katia Sycara",
"Carolyn Penstein Rosé"
] |
[
"Decoder",
"model"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adversarial-risk-and-the-dangers-of
|
1802.05666
| null | null |
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
|
This paper investigates recently proposed approaches for defending against
adversarial examples and evaluating adversarial robustness. We motivate
'adversarial risk' as an objective for achieving models robust to worst-case
inputs. We then frame commonly used attacks and evaluation metrics as defining
a tractable surrogate objective to the true adversarial risk. This suggests
that models may optimize this surrogate rather than the true adversarial risk.
We formalize this notion as 'obscurity to an adversary,' and develop tools and
heuristics for identifying obscured models and designing transparent models. We
demonstrate that this is a significant problem in practice by repurposing
gradient-free optimization techniques into adversarial attacks, which we use to
decrease the accuracy of several recently proposed defenses to near zero. Our
hope is that our formulations and results will help researchers to develop more
powerful defenses.
| null |
http://arxiv.org/abs/1802.05666v2
|
http://arxiv.org/pdf/1802.05666v2.pdf
|
ICML 2018 7
|
[
"Jonathan Uesato",
"Brendan O'Donoghue",
"Aaron van den Oord",
"Pushmeet Kohli"
] |
[
"Adversarial Robustness"
] | 2018-02-15T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2138
|
http://proceedings.mlr.press/v80/uesato18a/uesato18a.pdf
|
adversarial-risk-and-the-dangers-of-1
| null |
[] |
https://paperswithcode.com/paper/deep-state-space-models-for-unconditional
|
1806.04550
| null | null |
Deep State Space Models for Unconditional Word Generation
|
Autoregressive feedback is considered a necessity for successful
unconditional text generation using stochastic sequence models. However, such
feedback is known to introduce systematic biases into the training process and
it obscures a principle of generation: committing to global information and
forgetting local nuances. We show that a non-autoregressive deep state space
model with a clear separation of global and local uncertainty can be built from
only two ingredients: An independent noise source and a deterministic
transition function. Recent advances on flow-based variational inference can be
used to train an evidence lower-bound without resorting to annealing, auxiliary
losses or similar measures. The result is a highly interpretable generative
model on par with comparable auto-regressive models on the task of word
generation.
| null |
http://arxiv.org/abs/1806.04550v2
|
http://arxiv.org/pdf/1806.04550v2.pdf
|
NeurIPS 2018 12
|
[
"Florian Schmidt",
"Thomas Hofmann"
] |
[
"State Space Models",
"Text Generation",
"Variational Inference"
] | 2018-06-12T00:00:00 |
http://papers.nips.cc/paper/7854-deep-state-space-models-for-unconditional-word-generation
|
http://papers.nips.cc/paper/7854-deep-state-space-models-for-unconditional-word-generation.pdf
|
deep-state-space-models-for-unconditional-1
| null |
[] |
https://paperswithcode.com/paper/early-seizure-detection-with-an-energy
|
1806.04549
| null | null |
Early Seizure Detection with an Energy-Efficient Convolutional Neural Network on an Implantable Microcontroller
|
Implantable, closed-loop devices for automated early detection and
stimulation of epileptic seizures are promising treatment options for patients
with severe epilepsy that cannot be treated with traditional means. Most
approaches for early seizure detection in the literature are, however, not
optimized for implementation on ultra-low power microcontrollers required for
long-term implantation. In this paper we present a convolutional neural network
for the early detection of seizures from intracranial EEG signals, designed
specifically for this purpose. In addition, we investigate approximations to
comply with hardware limits while preserving accuracy. We compare our approach
to three previously proposed convolutional neural networks and a feature-based
SVM classifier with respect to detection accuracy, latency and computational
needs. Evaluation is based on a comprehensive database with long-term EEG
recordings. The proposed method outperforms the other detectors with a median
sensitivity of 0.96, false detection rate of 10.1 per hour and median detection
delay of 3.7 seconds, while being the only approach suited to be realized on a
low power microcontroller due to its parsimonious use of computational and
memory resources.
|
Most approaches for early seizure detection in the literature are, however, not optimized for implementation on ultra-low power microcontrollers required for long-term implantation.
|
http://arxiv.org/abs/1806.04549v1
|
http://arxiv.org/pdf/1806.04549v1.pdf
| null |
[
"Maria Hügle",
"Simon Heller",
"Manuel Watter",
"Manuel Blum",
"Farrokh Manzouri",
"Matthias Dümpelmann",
"Andreas Schulze-Bonhage",
"Peter Woias",
"Joschka Boedecker"
] |
[
"EEG",
"Electroencephalogram (EEG)",
"Seizure Detection"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-deep-similarity-metric-for-3d-mr
|
1806.04548
| null | null |
Learning Deep Similarity Metric for 3D MR-TRUS Registration
|
Purpose: The fusion of transrectal ultrasound (TRUS) and magnetic resonance
(MR) images for guiding targeted prostate biopsy has significantly improved the
biopsy yield of aggressive cancers. A key component of MR-TRUS fusion is image
registration. However, it is very challenging to obtain a robust automatic
MR-TRUS registration due to the large appearance difference between the two
imaging modalities. The work presented in this paper aims to tackle this
problem by addressing two challenges: (i) the definition of a suitable
similarity metric and (ii) the determination of a suitable optimization
strategy.
Methods: This work proposes the use of a deep convolutional neural network to
learn a similarity metric for MR-TRUS registration. We also use a composite
optimization strategy that explores the solution space in order to search for a
suitable initialization for the second-order optimization of the learned
metric. Further, a multi-pass approach is used in order to smooth the metric
for optimization.
Results: The learned similarity metric outperforms the classical mutual
information and also the state-of-the-art MIND feature based methods. The
results indicate that the overall registration framework has a large capture
range. The proposed deep similarity metric based approach obtained a mean TRE
of 3.86mm (with an initial TRE of 16mm) for this challenging problem.
Conclusion: A similarity metric that is learned using a deep neural network
can be used to assess the quality of any given image registration and can be
used in conjunction with the aforementioned optimization framework to perform
automatic registration that is robust to poor initialization.
| null |
http://arxiv.org/abs/1806.04548v2
|
http://arxiv.org/pdf/1806.04548v2.pdf
| null |
[
"Grant Haskins",
"Jochen Kruecker",
"Uwe Kruger",
"Sheng Xu",
"Peter A. Pinto",
"Brad J. Wood",
"Pingkun Yan"
] |
[
"Image Registration"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/approximate-inference-with-wasserstein
|
1806.04542
| null | null |
Approximate inference with Wasserstein gradient flows
|
We present a novel approximate inference method for diffusion processes,
based on the Wasserstein gradient flow formulation of the diffusion. In this
formulation, the time-dependent density of the diffusion is derived as the
limit of implicit Euler steps that follow the gradients of a particular free
energy functional. Existing methods for computing Wasserstein gradient flows
rely on discretization of the domain of the diffusion, prohibiting their
application to domains in more than several dimensions. We propose instead a
discretization-free inference method that computes the Wasserstein gradient
flow directly in a space of continuous functions. We characterize approximation
properties of the proposed method and evaluate it on a nonlinear filtering
task, finding performance comparable to the state-of-the-art for filtering
diffusions.
| null |
http://arxiv.org/abs/1806.04542v1
|
http://arxiv.org/pdf/1806.04542v1.pdf
| null |
[
"Charlie Frogner",
"Tomaso Poggio"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adversarial-attacks-on-neural-networks-for
|
1805.07984
| null | null |
Adversarial Attacks on Neural Networks for Graph Data
|
Deep learning models for graphs have achieved strong performance for the task of node classification. Despite their proliferation, currently there is no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we introduce the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which focus on the training phase of a machine learning model. We generate adversarial perturbations targeting the node's features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain, we propose an efficient algorithm, Nettack, exploiting incremental computations. Our experimental study shows that the accuracy of node classification drops significantly even when performing only a few perturbations. Even more, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and likewise are successful even when only limited knowledge about the graph is given.
|
Even more, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and likewise are successful even when only limited knowledge about the graph is given.
|
https://arxiv.org/abs/1805.07984v4
|
https://arxiv.org/pdf/1805.07984v4.pdf
| null |
[
"Daniel Zügner",
"Amir Akbarnejad",
"Stephan Günnemann"
] |
[
"General Classification",
"Node Classification"
] | 2018-05-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/term-definitions-help-hypernymy-detection
|
1806.04532
| null | null |
Term Definitions Help Hypernymy Detection
|
Existing methods of hypernymy detection mainly rely on statistics over a big
corpus, either mining some co-occurring patterns like "animals such as cats" or
embedding words of interest into context-aware vectors. These approaches are
therefore limited by the availability of a large enough corpus that can cover
all terms of interest and provide sufficient contextual information to
represent their meaning. In this work, we propose a new paradigm, HyperDef, for
hypernymy detection -- expressing word meaning by encoding word definitions,
along with context driven representation. This has two main benefits: (i)
Definitional sentences express (sense-specific) corpus-independent meanings of
words, hence definition-driven approaches enable strong generalization -- once
trained, the model is expected to work well in open-domain testbeds; (ii)
Global context from a large corpus and definitions provide complementary
information for words. Consequently, our model, HyperDef, once trained on
task-agnostic data, achieves state-of-the-art results on multiple benchmarks.
| null |
http://arxiv.org/abs/1806.04532v1
|
http://arxiv.org/pdf/1806.04532v1.pdf
|
SEMEVAL 2018 6
|
[
"Wenpeng Yin",
"Dan Roth"
] |
[] | 2018-06-12T00:00:00 |
https://aclanthology.org/S18-2025
|
https://aclanthology.org/S18-2025.pdf
|
term-definitions-help-hypernymy-detection-1
| null |
[] |
https://paperswithcode.com/paper/online-parallel-portfolio-selection-with
|
1806.04528
| null | null |
Online Parallel Portfolio Selection with Heterogeneous Island Model
|
We present an online parallel portfolio selection algorithm based on the
island model commonly used for parallelization of evolutionary algorithms. In
our case each of the islands runs a different optimization algorithm. The
distributed computation is managed by a central planner which periodically
changes the running methods during the execution of the algorithm -- less
successful methods are removed while new instances of more successful methods
are added.
We compare different types of planners in the heterogeneous island model
among themselves and also to the traditional homogeneous model on a wide set of
problems. The tests include experiments with different representations of the
individuals and different duration of fitness function evaluations. The results
show that heterogeneous models are a more general and universal computational
tool compared to homogeneous models.
| null |
http://arxiv.org/abs/1806.04528v1
|
http://arxiv.org/pdf/1806.04528v1.pdf
| null |
[
"Štěpán Balcar",
"Martin Pilát"
] |
[
"Evolutionary Algorithms",
"model"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-to-automatically-generate-fill-in
|
1806.04524
| null | null |
Learning to Automatically Generate Fill-In-The-Blank Quizzes
|
In this paper we formalize the problem of automatic fill-in-the-blank question
generation using two standard NLP machine learning schemes, proposing concrete
deep learning models for each. We present an empirical study based on data
obtained from a language learning platform showing that both of our proposed
settings offer promising results.
| null |
http://arxiv.org/abs/1806.04524v1
|
http://arxiv.org/pdf/1806.04524v1.pdf
|
WS 2018 7
|
[
"Edison Marrese-Taylor",
"Ai Nakajima",
"Yutaka Matsuo",
"Ono Yuichi"
] |
[
"BIG-bench Machine Learning",
"Deep Learning",
"Question Generation",
"Question-Generation"
] | 2018-06-12T00:00:00 |
https://aclanthology.org/W18-3722
|
https://aclanthology.org/W18-3722.pdf
|
learning-to-automatically-generate-fill-in-1
| null |
[] |
https://paperswithcode.com/paper/recurrent-one-hop-predictions-for-reasoning
|
1806.04523
| null | null |
Recurrent One-Hop Predictions for Reasoning over Knowledge Graphs
|
Large scale knowledge graphs (KGs) such as Freebase are generally incomplete.
Reasoning over multi-hop (mh) KG paths is thus an important capability that is
needed for question answering or other NLP tasks that require knowledge about
the world. mh-KG reasoning includes diverse scenarios, e.g., given a head
entity and a relation path, predict the tail entity; or given two entities
connected by some relation paths, predict the unknown relation between them. We
present ROPs, recurrent one-hop predictors, that predict entities at each step
of mh-KB paths by using recurrent neural networks and vector representations of
entities and relations, with two benefits: (i) modeling mh-paths of arbitrary
lengths while updating the entity and relation representations by the training
signal at each step; (ii) handling different types of mh-KG reasoning in a
unified framework. Our models show state-of-the-art results for two important multi-hop
KG reasoning tasks: Knowledge Base Completion and Path Query Answering.
| null |
http://arxiv.org/abs/1806.04523v1
|
http://arxiv.org/pdf/1806.04523v1.pdf
|
COLING 2018 8
|
[
"Wenpeng Yin",
"Yadollah Yaghoobzadeh",
"Hinrich Schütze"
] |
[
"Knowledge Base Completion",
"Knowledge Graphs",
"Question Answering",
"Relation"
] | 2018-06-12T00:00:00 |
https://aclanthology.org/C18-1200
|
https://aclanthology.org/C18-1200.pdf
|
recurrent-one-hop-predictions-for-reasoning-2
| null |
[] |
https://paperswithcode.com/paper/meta-learning-for-stochastic-gradient-mcmc
|
1806.04522
| null |
HkeoOo09YX
|
Meta-Learning for Stochastic Gradient MCMC
|
Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has become
increasingly popular for simulating posterior samples in large-scale Bayesian
modeling. However, existing SG-MCMC schemes are not tailored to any specific
probabilistic model; even a simple modification of the underlying dynamical
system requires significant physical intuition. This paper presents the first
meta-learning algorithm that allows automated design of the underlying
continuous dynamics of an SG-MCMC sampler. The learned sampler generalizes
Hamiltonian dynamics with state-dependent drift and diffusion, enabling fast
traversal and efficient exploration of neural network energy landscapes.
Experiments validate the proposed approach on both Bayesian fully connected
neural network and Bayesian recurrent neural network tasks, showing that the
learned sampler out-performs generic, hand-designed SG-MCMC algorithms, and
generalizes to different datasets and larger architectures.
|
Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has become increasingly popular for simulating posterior samples in large-scale Bayesian modeling.
|
http://arxiv.org/abs/1806.04522v1
|
http://arxiv.org/pdf/1806.04522v1.pdf
|
ICLR 2019 5
|
[
"Wenbo Gong",
"Yingzhen Li",
"José Miguel Hernández-Lobato"
] |
[
"Efficient Exploration",
"Meta-Learning",
"Physical Intuition"
] | 2018-06-12T00:00:00 |
https://openreview.net/forum?id=HkeoOo09YX
|
https://openreview.net/pdf?id=HkeoOo09YX
|
meta-learning-for-stochastic-gradient-mcmc-1
| null |
[] |
https://paperswithcode.com/paper/efficient-first-order-algorithms-for-adaptive
|
1803.11262
| null | null |
Efficient First-Order Algorithms for Adaptive Signal Denoising
|
We consider the problem of discrete-time signal denoising, focusing on a
specific family of non-linear convolution-type estimators. Each such estimator
is associated with a time-invariant filter which is obtained adaptively, by
solving a certain convex optimization problem. Adaptive convolution-type
estimators were demonstrated to have favorable statistical properties. However,
the question of their computational complexity remains largely unexplored, and
in fact we are not aware of any publicly available implementation of these
estimators. Our first contribution is an efficient implementation of these
estimators via some known first-order proximal algorithms. Our second
contribution is a computational complexity analysis of the proposed procedures,
which takes into account their statistical nature and the related notion of
statistical accuracy. The proposed procedures and their analysis are
illustrated on a simulated data benchmark.
|
Our second contribution is a computational complexity analysis of the proposed procedures, which takes into account their statistical nature and the related notion of statistical accuracy.
|
http://arxiv.org/abs/1803.11262v3
|
http://arxiv.org/pdf/1803.11262v3.pdf
|
ICML 2018 7
|
[
"Dmitrii Ostrovskii",
"Zaid Harchaoui"
] |
[
"Denoising"
] | 2018-03-29T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2359
|
http://proceedings.mlr.press/v80/ostrovskii18a/ostrovskii18a.pdf
|
efficient-first-order-algorithms-for-adaptive-1
| null |
[] |
https://paperswithcode.com/paper/a-review-on-distance-based-time-series
|
1806.04509
| null | null |
A review on distance based time series classification
|
Time series classification is an increasing research topic due to the vast
amount of time series data that are being created over a wide variety of
fields. The particularity of the data makes it a challenging task and different
approaches have been taken, including the distance based approach. 1-NN has
been a widely used method within distance based time series classification due
to its simplicity while still offering good performance. However, its supremacy may be
attributed to being able to use specific distances for time series within the
classification process and not to the classifier itself. With the aim of
exploiting these distances within more complex classifiers, new approaches have
arisen in the past few years that are competitive with or outperform the 1-NN
based approaches. In some cases, these new methods use the distance measure to
transform the series into feature vectors, bridging the gap between time series
and traditional classifiers. In other cases, the distances are employed to
obtain a time series kernel and enable the use of kernel methods for time
series classification. One of the main challenges is that a kernel function
must be positive semi-definite, a matter that is also addressed within this
review. The presented review includes a taxonomy of all those methods that aim
to classify time series using a distance based approach, as well as a
discussion of the strengths and weaknesses of each method.
| null |
http://arxiv.org/abs/1806.04509v1
|
http://arxiv.org/pdf/1806.04509v1.pdf
| null |
[
"Amaia Abanda",
"Usue Mori",
"Jose A. Lozano"
] |
[
"Classification",
"General Classification",
"Time Series",
"Time Series Analysis",
"Time Series Classification"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-unusual-effectiveness-of-averaging-in-gan
|
1806.04498
| null |
SJgw_sRqFQ
|
The Unusual Effectiveness of Averaging in GAN Training
|
We examine two different techniques for parameter averaging in GAN training.
Moving Average (MA) computes the time-average of parameters, whereas
Exponential Moving Average (EMA) computes an exponentially discounted sum.
Whilst MA is known to lead to convergence in bilinear settings, we provide the
-- to our knowledge -- first theoretical arguments in support of EMA. We show
that EMA converges to limit cycles around the equilibrium with vanishing
amplitude as the discount parameter approaches one for simple bilinear games
and also enhances the stability of general GAN training. We establish
experimentally that both techniques are strikingly effective in the
non-convex-concave GAN setting as well. Both improve inception and FID scores
on different architectures and for different GAN objectives. We provide
comprehensive experimental results across a range of datasets -- mixture of
Gaussians, CIFAR-10, STL-10, CelebA and ImageNet -- to demonstrate its
effectiveness. We achieve state-of-the-art results on CIFAR-10 and produce
clean CelebA face images.\footnote{~The code is available at
\url{https://github.com/yasinyazici/EMA_GAN}}
|
We examine two different techniques for parameter averaging in GAN training.
|
http://arxiv.org/abs/1806.04498v2
|
http://arxiv.org/pdf/1806.04498v2.pdf
|
ICLR 2019 5
|
[
"Yasin Yazici",
"Chuan-Sheng Foo",
"Stefan Winkler",
"Kim-Hui Yap",
"Georgios Piliouras",
"Vijay Chandrasekhar"
] |
[] | 2018-06-12T00:00:00 |
https://openreview.net/forum?id=SJgw_sRqFQ
|
https://openreview.net/pdf?id=SJgw_sRqFQ
|
the-unusual-effectiveness-of-averaging-in-gan-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/a-virtual-environment-with-multi-robot
|
1806.04497
| null | null |
A Virtual Environment with Multi-Robot Navigation, Analytics, and Decision Support for Critical Incident Investigation
|
Accidents and attacks that involve chemical, biological, radiological/nuclear
or explosive (CBRNE) substances are rare, but can be of high consequence. Since
the investigation of such events is not anybody's routine work, a range of AI
techniques can reduce investigators' cognitive load and support
decision-making, including: planning the assessment of the scene; ongoing
evaluation and updating of risks; control of autonomous vehicles for collecting
images and sensor data; reviewing images/videos for items of interest;
identification of anomalies; and retrieval of relevant documentation. Because
of the rare and high-risk nature of these events, realistic simulations can
support the development and evaluation of AI-based tools. We have developed
realistic models of CBRNE scenarios and implemented an initial set of tools.
| null |
http://arxiv.org/abs/1806.04497v1
|
http://arxiv.org/pdf/1806.04497v1.pdf
| null |
[
"David L. Smyth",
"James Fennell",
"Sai Abinesh",
"Nazli B. Karimi",
"Frank G. Glavin",
"Ihsan Ullah",
"Brett Drury",
"Michael G. Madden"
] |
[
"Autonomous Vehicles",
"Decision Making",
"Retrieval",
"Robot Navigation"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/approximate-kernel-pca-using-random-features
|
1706.06296
| null | null |
Approximate Kernel PCA Using Random Features: Computational vs. Statistical Trade-off
|
Kernel methods are powerful learning methodologies that allow one to perform non-linear data analysis. Despite their popularity, they suffer from poor scalability in big data scenarios. Various approximation methods, including random feature approximation, have been proposed to alleviate the problem. However, the statistical consistency of most of these approximate kernel methods is not well understood, except for kernel ridge regression, wherein it has been shown that the random feature approximation is not only computationally efficient but also statistically consistent with a minimax optimal rate of convergence. In this paper, we investigate the efficacy of random feature approximation in the context of kernel principal component analysis (KPCA) by studying the trade-off between the computational and statistical behaviors of approximate KPCA. We show that approximate KPCA is both computationally and statistically efficient compared to KPCA in terms of the error associated with reconstructing a kernel function based on its projection onto the corresponding eigenspaces. The analysis hinges on Bernstein-type inequalities for the operator and Hilbert-Schmidt norms of self-adjoint Hilbert-Schmidt operator-valued U-statistics, which are of independent interest.
| null |
https://arxiv.org/abs/1706.06296v4
|
https://arxiv.org/pdf/1706.06296v4.pdf
| null |
[
"Bharath Sriperumbudur",
"Nicholas Sterge"
] |
[] | 2017-06-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/examining-the-use-of-neural-networks-for
|
1805.02294
| null | null |
Examining the Use of Neural Networks for Feature Extraction: A Comparative Analysis using Deep Learning, Support Vector Machines, and K-Nearest Neighbor Classifiers
|
Neural networks in many varieties are touted as very powerful machine
learning tools because of their ability to distill large amounts of information
from different forms of data, extracting complex features and enabling powerful
classification abilities. In this study, we use neural networks to extract
features from both images and numeric data and use these extracted features as
inputs for other machine learning models, namely support vector machines (SVMs)
and k-nearest neighbor classifiers (KNNs), in order to see if
neural-network-extracted features enhance the capabilities of these models. We
tested 7 different neural network architectures in this manner, 4 for images
and 3 for numeric data, training each for varying lengths of time and then
comparing the results of the neural network independently to those of an SVM
and KNN on the data, and finally comparing these results to models of SVM and
KNN trained using features extracted via the neural network architecture. This
process was repeated on 3 different image datasets and 2 different numeric
datasets. The results show that, in many cases, the features extracted using
the neural network significantly improve the capabilities of SVMs and KNNs
compared to running these algorithms on the raw features, and in some cases
also surpass the performance of the neural network alone. This in turn suggests
that it may be a reasonable practice to use neural networks as a means to
extract features for classification by other machine learning models for some
datasets.
| null |
http://arxiv.org/abs/1805.02294v2
|
http://arxiv.org/pdf/1805.02294v2.pdf
| null |
[
"Stephen Notley",
"Malik Magdon-Ismail"
] |
[
"BIG-bench Machine Learning",
"General Classification"
] | 2018-05-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/localized-multiple-kernel-learning-for
|
1805.07892
| null | null |
Localized Multiple Kernel Learning for Anomaly Detection: One-class Classification
|
Multi-kernel learning has been well explored in the recent past and has
exhibited promising outcomes for multi-class classification and regression
tasks. In this paper, we present a multiple kernel learning approach for the
One-class Classification (OCC) task and employ it for anomaly detection.
Recently, a basic multi-kernel approach, which is simply a convex combination of
different kernels with equal weights, has been proposed to solve the OCC problem.
This paper proposes a Localized Multiple Kernel learning approach for
Anomaly Detection (LMKAD) using OCC, where the weight for each kernel is
assigned locally. The proposed LMKAD approach adapts the weight for each kernel
using a gating function. The parameters of the gating function and one-class
classifier are optimized simultaneously through a two-step optimization
process. We present the empirical results of the performance of LMKAD on 25
benchmark datasets from various disciplines. This performance is evaluated
against existing Multi Kernel Anomaly Detection (MKAD) algorithm, and four
other existing kernel-based one-class classifiers to showcase the credibility
of our approach. Our algorithm achieves significantly better Gmean scores while
using fewer support vectors than MKAD. A Friedman test is
also performed to verify the statistical significance of the results claimed in
this paper.
|
In this paper, we present a multiple kernel learning approach for the One-class Classification (OCC) task and employ it for anomaly detection.
|
http://arxiv.org/abs/1805.07892v4
|
http://arxiv.org/pdf/1805.07892v4.pdf
| null |
[
"Chandan Gautam",
"Ramesh Balaji",
"K Sudharsan",
"Aruna Tiwari",
"Kapil Ahuja"
] |
[
"Anomaly Detection",
"Classification",
"General Classification",
"Multi-class Classification",
"One-Class Classification",
"One-class classifier"
] | 2018-05-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/improving-latent-variable-descriptiveness
|
1806.04480
| null | null |
Improving latent variable descriptiveness with AutoGen
|
Powerful generative models, particularly in Natural Language Modelling, are
commonly trained by maximizing a variational lower bound on the data log
likelihood. These models often suffer from poor use of their latent variable,
with ad-hoc annealing factors used to encourage retention of information in the
latent variable. We discuss an alternative and general approach to latent
variable modelling, based on an objective that combines the data log likelihood
as well as the likelihood of a perfect reconstruction through an autoencoder.
Tying these together ensures by design that the latent variable captures
information about the observations, whilst retaining the ability to generate
well. Interestingly, though this approach is a priori unrelated to VAEs, the
lower bound attained is identical to the standard VAE bound but with the
addition of a simple pre-factor; thus, providing a formal interpretation of the
commonly used, ad-hoc pre-factors in training VAEs.
| null |
http://arxiv.org/abs/1806.04480v1
|
http://arxiv.org/pdf/1806.04480v1.pdf
| null |
[
"Alex Mansbridge",
"Roberto Fierimonte",
"Ilya Feige",
"David Barber"
] |
[
"Language Modelling"
] | 2018-06-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "In today’s digital age, USD Coin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're trying to recover a lost USD Coin wallet, knowing where to get help is essential. That’s why the USD Coin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the USD Coin Customer Support Number +1-833-534-1729\r\nUSD Coin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. USD Coin Transaction Not Confirmed\r\nOne of the most common concerns is when a USD Coin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. USD Coin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A USD Coin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost USD Coin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost USD Coin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. USD Coin Deposit Not Received\r\nIf someone has sent you USD Coin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A USD Coin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. USD Coin Transaction Stuck or Pending\r\nSometimes your USD Coin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. USD Coin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word USD Coin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the USD Coin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and USD Coin tech.\r\n\r\n24/7 Availability: USD Coin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About USD Coin Support and Wallet Issues\r\nQ1: Can USD Coin support help me recover stolen BTC?\r\nA: While USD Coin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: USD Coin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not USD Coin’s official number (USD Coin is decentralized), it connects you to trained professionals experienced in resolving all major USD Coin issues.\r\n\r\nFinal Thoughts\r\nUSD Coin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the USD Coin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "USD Coin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "USD Coin Customer Service Number +1-833-534-1729",
"source_title": "Auto-Encoding Variational Bayes",
"source_url": "http://arxiv.org/abs/1312.6114v10"
}
] |
https://paperswithcode.com/paper/two-use-cases-of-machine-learning-for-sdn
|
1804.07433
| null | null |
Two Use Cases of Machine Learning for SDN-Enabled IP/Optical Networks: Traffic Matrix Prediction and Optical Path Performance Prediction
|
We describe two applications of machine learning in the context of IP/Optical
networks. The first one allows agile management of resources at a core
IP/Optical network by using machine learning for short-term and long-term
prediction of traffic flows and joint global optimization of IP and optical
layers using colorless/directionless (CD) flexible ROADMs. Multilayer
coordination allows for significant cost savings, flexible new services to meet
dynamic capacity needs, and improved robustness by being able to proactively
adapt to new traffic patterns and network conditions. The second application is
important as we migrate our metro networks to Open ROADM networks, to allow
physical routing without the need for detailed knowledge of optical parameters.
We discuss a proof-of-concept study, where detailed performance data for
wavelengths on a current flexible ROADM network is used for machine learning to
predict the optical performance of each wavelength. Both applications can be
efficiently implemented by using a SDN (Software Defined Network) controller.
| null |
http://arxiv.org/abs/1804.07433v2
|
http://arxiv.org/pdf/1804.07433v2.pdf
| null |
[
"Gagan Choudhury",
"David Lynch",
"Gaurav Thakur",
"Simon Tse"
] |
[
"BIG-bench Machine Learning",
"global-optimization",
"Management",
"Prediction"
] | 2018-04-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/slice-as-an-evolutionary-service-genetic
|
1802.04491
| null | null |
Slice as an Evolutionary Service: Genetic Optimization for Inter-Slice Resource Management in 5G Networks
|
In the context of Fifth Generation (5G) mobile networks, the concept of
"Slice as a Service" (SlaaS) promotes mobile network operators to flexibly
share infrastructures with mobile service providers and stakeholders. However,
it also creates an emerging demand for efficient online algorithms to
optimize the request-and-decision-based inter-slice resource management
strategy. Based on genetic algorithms, this paper presents a novel online
optimizer that efficiently approaches the ideal slicing strategy with
maximized long-term network utility. The proposed method encodes slicing
strategies into binary sequences to cope with the request-and-decision
mechanism. It requires no a priori knowledge about the traffic/utility models,
and therefore supports heterogeneous slices, while providing solid
effectiveness, good robustness against non-stationary service scenarios, and
high scalability.
| null |
http://arxiv.org/abs/1802.04491v3
|
http://arxiv.org/pdf/1802.04491v3.pdf
| null |
[
"Bin Han",
"Lianghai Ji",
"Hans D. Schotten"
] |
[
"Management"
] | 2018-02-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/trading-algorithms-with-learning-in-latent
|
1806.04472
| null | null |
Trading algorithms with learning in latent alpha models
|
Alpha signals for statistical arbitrage strategies are often driven by latent
factors. This paper analyses how to optimally trade with latent factors that
cause prices to jump and diffuse. Moreover, we account for the effect of the
trader's actions on quoted prices and the prices they receive from trading.
Under fairly general assumptions, we demonstrate how the trader can learn the
posterior distribution over the latent states, and explicitly solve the latent
optimal trading problem. We provide a verification theorem, and a methodology
for calibrating the model by deriving a variation of the
expectation-maximization algorithm. To illustrate the efficacy of the optimal
strategy, we demonstrate its performance through simulations and compare it to
strategies which ignore learning in the latent factors. We also provide
calibration results for a particular model using Intel Corporation stock as an
example.
| null |
http://arxiv.org/abs/1806.04472v1
|
http://arxiv.org/pdf/1806.04472v1.pdf
| null |
[
"Philippe Casgrain",
"Sebastian Jaimungal"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/autoregressive-convolutional-neural-networks
|
1703.04122
| null |
rJaE2alRW
|
Autoregressive Convolutional Neural Networks for Asynchronous Time Series
|
We propose Significance-Offset Convolutional Neural Network, a deep
convolutional network architecture for regression of multivariate asynchronous
time series. The model is inspired by standard autoregressive (AR) models and
gating mechanisms used in recurrent neural networks. It involves an AR-like
weighting system, where the final predictor is obtained as a weighted sum of
adjusted regressors, while the weights are data-dependent functions learnt
through a convolutional network. The architecture was designed for applications
on asynchronous time series and is evaluated on such datasets: a hedge fund
proprietary dataset of over 2 million quotes for a credit derivative index, an
artificially generated noisy autoregressive series and the UCI household
electricity consumption dataset. The proposed architecture achieves promising
results as compared to convolutional and recurrent neural networks.
|
We propose Significance-Offset Convolutional Neural Network, a deep convolutional network architecture for regression of multivariate asynchronous time series.
|
http://arxiv.org/abs/1703.04122v4
|
http://arxiv.org/pdf/1703.04122v4.pdf
|
ICML 2018 7
|
[
"Mikołaj Bińkowski",
"Gautier Marti",
"Philippe Donnat"
] |
[
"Time Series",
"Time Series Analysis"
] | 2017-03-12T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2310
|
http://proceedings.mlr.press/v80/binkowski18a/binkowski18a.pdf
|
autoregressive-convolutional-neural-networks-1
| null |
[] |
https://paperswithcode.com/paper/colwells-castle-defence-a-custom-game-using
|
1806.04471
| null | null |
Colwell's Castle Defence: A Custom Game Using Dynamic Difficulty Adjustment to Increase Player Enjoyment
|
Dynamic Difficulty Adjustment (DDA) is a mechanism used in video games that
automatically tailors the individual gaming experience to match an appropriate
difficulty setting. This is generally achieved by removing pre-defined
difficulty tiers such as Easy, Medium and Hard, and instead concentrating on
balancing the gameplay to match the challenge to the individual's abilities.
The work presented in this paper examines the implementation of DDA in a custom
survival game developed by the author, namely Colwell's Castle Defence. The
premise of this arcade-style game is to defend a castle from hordes of oncoming
enemies. The AI system that we developed adjusts the enemy spawn rate based on
the current performance of the player. Specifically, we read the Player Health
and Gate Health at the end of each level and then assign the player an
appropriate difficulty tier for the following level. We tested the impact of
our technique on thirty human players and concluded, based on questionnaire
feedback, that enabling the technique led to more enjoyable gameplay.
| null |
http://arxiv.org/abs/1806.04471v1
|
http://arxiv.org/pdf/1806.04471v1.pdf
| null |
[
"Anthony M. Colwell",
"Frank G. Glavin"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/design-challenges-and-misconceptions-in
|
1806.04470
| null | null |
Design Challenges and Misconceptions in Neural Sequence Labeling
|
We investigate the design challenges of constructing effective and efficient
neural sequence labeling systems, by reproducing twelve neural sequence
labeling models, which include most of the state-of-the-art structures, and
conduct a systematic model comparison on three benchmarks (i.e. NER, Chunking,
and POS tagging). Misconceptions and inconsistent conclusions in existing
literature are examined and clarified under statistical experiments. In the
comparison and analysis process, we reach several practical conclusions which
can be useful to practitioners.
|
We investigate the design challenges of constructing effective and efficient neural sequence labeling systems, by reproducing twelve neural sequence labeling models, which include most of the state-of-the-art structures, and conduct a systematic model comparison on three benchmarks (i. e. NER, Chunking, and POS tagging).
|
http://arxiv.org/abs/1806.04470v2
|
http://arxiv.org/pdf/1806.04470v2.pdf
|
COLING 2018 8
|
[
"Jie Yang",
"Shuailong Liang",
"Yue Zhang"
] |
[
"Chunking",
"Misconceptions",
"NER",
"POS",
"POS Tagging"
] | 2018-06-12T00:00:00 |
https://aclanthology.org/C18-1327
|
https://aclanthology.org/C18-1327.pdf
|
design-challenges-and-misconceptions-in-2
| null |
[] |
https://paperswithcode.com/paper/training-medical-image-analysis-systems-like
|
1805.10884
| null | null |
Training Medical Image Analysis Systems like Radiologists
|
The training of medical image analysis systems using machine learning
approaches follows a common script: collect and annotate a large dataset, train
the classifier on the training set, and test it on a hold-out test set. This
process bears no direct resemblance to radiologist training, which is based
on solving a series of tasks of increasing difficulty, where each task involves
the use of significantly smaller datasets than those used in machine learning.
In this paper, we propose a novel training approach inspired by how
radiologists are trained. In particular, we explore the use of meta-training
that models a classifier based on a series of tasks. Tasks are selected using
teacher-student curriculum learning, where each task consists of simple
classification problems containing small training sets. We hypothesize that our
proposed meta-training approach can be used to pre-train medical image analysis
models. This hypothesis is tested on the automatic breast screening
classification from DCE-MRI trained with weakly labeled datasets. The
classification performance achieved by our approach is shown to be the best in
the field for that application, compared to state-of-the-art baseline approaches:
DenseNet, multiple instance learning and multi-task learning.
| null |
http://arxiv.org/abs/1805.10884v3
|
http://arxiv.org/pdf/1805.10884v3.pdf
| null |
[
"Gabriel Maicas",
"Andrew P. Bradley",
"Jacinto C. Nascimento",
"Ian Reid",
"Gustavo Carneiro"
] |
[
"BIG-bench Machine Learning",
"Classification",
"General Classification",
"Medical Image Analysis",
"Multiple Instance Learning",
"Multi-Task Learning"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fusing-recency-into-neural-machine
|
1806.04466
| null | null |
Fusing Recency into Neural Machine Translation with an Inter-Sentence Gate Model
|
Neural machine translation (NMT) systems are usually trained on a large
amount of bilingual sentence pairs and translate one sentence at a time,
ignoring inter-sentence information. This may make the translation of a
sentence ambiguous or even inconsistent with the translations of neighboring
sentences. In order to handle this issue, we propose an inter-sentence gate
model that uses the same encoder to encode two adjacent sentences and controls
the amount of information flowing from the preceding sentence to the
translation of the current sentence with an inter-sentence gate. In this way,
our proposed model can capture the connection between sentences and fuse
recency from neighboring sentences into neural machine translation. On several
NIST Chinese-English translation tasks, our experiments demonstrate that the
proposed inter-sentence gate model achieves substantial improvements over the
baseline.
| null |
http://arxiv.org/abs/1806.04466v1
|
http://arxiv.org/pdf/1806.04466v1.pdf
|
COLING 2018 8
|
[
"Shaohui Kuang",
"Deyi Xiong"
] |
[
"Machine Translation",
"NMT",
"Sentence",
"Translation"
] | 2018-06-12T00:00:00 |
https://aclanthology.org/C18-1051
|
https://aclanthology.org/C18-1051.pdf
|
fusing-recency-into-neural-machine-2
| null |
[] |
https://paperswithcode.com/paper/gaussian-mixture-models-with-wasserstein
|
1806.04465
| null | null |
Gaussian mixture models with Wasserstein distance
|
Generative models with both discrete and continuous latent variables are
highly motivated by the structure of many real-world data sets. They present,
however, subtleties in training often manifesting in the discrete latent being
under-leveraged. In this paper, we show that such models are more amenable to
training when using the Optimal Transport framework of Wasserstein
Autoencoders. We find our discrete latent variable to be fully leveraged by the
model when trained, without any modifications to the objective function or
significant fine tuning. Our model generates comparable samples to other
approaches while using relatively simple neural networks, since the discrete
latent variable carries much of the descriptive burden. Furthermore, the
discrete latent provides significant control over generation.
| null |
http://arxiv.org/abs/1806.04465v1
|
http://arxiv.org/pdf/1806.04465v1.pdf
| null |
[
"Benoit Gaujac",
"Ilya Feige",
"David Barber"
] |
[
"Descriptive"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/discovery-and-recognition-of-motion
|
1709.10494
| null | null |
Discovery and recognition of motion primitives in human activities
|
We present a novel framework for the automatic discovery and recognition of
motion primitives in videos of human activities. Given the 3D pose of a human
in a video, human motion primitives are discovered by optimizing the `motion
flux', a quantity which captures the motion variation of a group of skeletal
joints. A normalization of the primitives is proposed in order to make them
invariant with respect to a subject's anatomical variations and the data sampling
rate. The discovered primitives are unknown and unlabeled and are
collected into classes in an unsupervised manner via a hierarchical non-parametric Bayes
mixture model. Once classes are determined and labeled they are further
analyzed for establishing models for recognizing discovered primitives. Each
primitive model is defined by a set of learned parameters.
Given new video data and given the estimated pose of the subject appearing on
the video, the motion is segmented into primitives, which are recognized with a
probability given by the parameters of the learned models.
Using our framework we build a publicly available dataset of human motion
primitives, using sequences taken from well-known motion capture datasets. We
expect that our framework, by providing an objective way for discovering and
categorizing human motion, will be a useful tool in numerous research fields
including video analysis, human inspired motion generation, learning by
demonstration, intuitive human-robot interaction, and human behavior analysis.
| null |
http://arxiv.org/abs/1709.10494v7
|
http://arxiv.org/pdf/1709.10494v7.pdf
| null |
[
"Marta Sanzari",
"Valsamis Ntouskos",
"Fiora Pirri"
] |
[
"Motion Generation"
] | 2017-09-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sparse-stochastic-zeroth-order-optimization
|
1806.04458
| null | null |
Sparse Stochastic Zeroth-Order Optimization with an Application to Bandit Structured Prediction
|
Stochastic zeroth-order (SZO), or gradient-free, optimization allows arbitrary functions to be optimized by relying only on function evaluations under parameter perturbations; however, the iteration complexity of SZO methods suffers from a factor proportional to the dimensionality of the perturbed function. We show that in scenarios with natural sparsity patterns, as in structured prediction applications, this factor can be reduced to the expected number of active features over input-output pairs. We give a general proof that applies sparse SZO optimization to Lipschitz-continuous, nonconvex, stochastic objectives, and present an experimental evaluation on linear bandit structured prediction tasks with sparse word-based feature representations that confirms our theoretical results.
| null |
https://arxiv.org/abs/1806.04458v3
|
https://arxiv.org/pdf/1806.04458v3.pdf
| null |
[
"Artem Sokolov",
"Julian Hitschler",
"Mayumi Ohta",
"Stefan Riezler"
] |
[
"Prediction",
"Structured Prediction"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/impersonation-modeling-persona-in-smart
|
1806.04456
| null | null |
Impersonation: Modeling Persona in Smart Responses to Email
|
In this paper, we present design, implementation, and effectiveness of
generating personalized suggestions for email replies. To personalize email
responses based on the user's style and personality, we model the user's persona
based on her past responses to emails. This model is added to the
language-based model created across users using past responses from all user
emails.
A user's model captures the typical responses of the user given a particular
context. The context includes the email received, recipient of the email, and
other external signals such as calendar activities, preferences, etc. The
context along with the user's personality (e.g., extrovert, formal, reserved, etc.)
is used to suggest responses. These responses can be a mixture of multiple
modes: email replies (textual), audio clips, etc. This helps in making
responses mimic the user as much as possible and helps the user to be more
productive while retaining her mark in the responses.
| null |
http://arxiv.org/abs/1806.04456v1
|
http://arxiv.org/pdf/1806.04456v1.pdf
| null |
[
"Rajeev Gupta",
"Ranganath Kondapally",
"Chakrapani Ravi Kiran"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/convergence-of-gradient-descent-on-separable
|
1803.01905
| null | null |
Convergence of Gradient Descent on Separable Data
|
We provide a detailed study on the implicit bias of gradient descent when
optimizing loss functions with strictly monotone tails, such as the logistic
loss, over separable datasets. We look at two basic questions: (a) what are the
conditions on the tail of the loss function under which gradient descent
converges in the direction of the $L_2$ maximum-margin separator? (b) how does
the rate of margin convergence depend on the tail of the loss function and the
choice of the step size? We show that for a large family of super-polynomial
tailed losses, gradient descent iterates on linear networks of any depth
converge in the direction of the $L_2$ maximum-margin solution, while this does not
hold for losses with heavier tails. Within this family, for simple linear
models we show that the optimal rate with a fixed step size is indeed obtained
for the commonly used exponentially tailed losses such as logistic loss.
However, with a fixed step size the optimal convergence rate is extremely slow
as $1/\log(t)$, as also proved in Soudry et al. (2018). For linear models with
exponential loss, we further prove that the convergence rate could be improved
to $\log (t) /\sqrt{t}$ by using aggressive step sizes that compensate for the
rapidly vanishing gradients. Numerical results suggest this method might be
useful for deep networks.
| null |
http://arxiv.org/abs/1803.01905v3
|
http://arxiv.org/pdf/1803.01905v3.pdf
| null |
[
"Mor Shpigel Nacson",
"Jason D. Lee",
"Suriya Gunasekar",
"Pedro H. P. Savarese",
"Nathan Srebro",
"Daniel Soudry"
] |
[] | 2018-03-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-ensemble-model-for-sentiment-analysis-of
|
1806.04450
| null | null |
An Ensemble Model for Sentiment Analysis of Hindi-English Code-Mixed Data
|
In multilingual societies like India, code-mixed social media texts comprise
the majority of the Internet. Detecting the sentiment of the code-mixed user
opinions plays a crucial role in understanding social, economic and political
trends. In this paper, we propose an ensemble of character-trigrams based LSTM
model and word-ngrams based Multinomial Naive Bayes (MNB) model to identify the
sentiments of Hindi-English (Hi-En) code-mixed data. The ensemble model
combines the strengths of rich sequential patterns from the LSTM model and
polarity of keywords from the probabilistic ngram model to identify sentiments
in sparse and inconsistent code-mixed data. Experiments on real-life user
code-mixed data reveal that our approach yields state-of-the-art results as
compared to several baselines and other deep learning based proposed methods.
| null |
http://arxiv.org/abs/1806.04450v1
|
http://arxiv.org/pdf/1806.04450v1.pdf
| null |
[
"Madan Gopal Jhanwar",
"Arpita Das"
] |
[
"Sentiment Analysis"
] | 2018-06-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/toxicblend-virtual-screening-of-toxic
|
1806.04449
| null | null |
ToxicBlend: Virtual Screening of Toxic Compounds with Ensemble Predictors
|
Timely assessment of compound toxicity is one of the biggest challenges
facing the pharmaceutical industry today. A significant proportion of compounds
identified as potential leads are ultimately discarded due to the toxicity they
induce. In this paper, we propose a novel machine learning approach for the
prediction of molecular activity on ToxCast targets. We combine extreme
gradient boosting with fully-connected and graph-convolutional neural network
architectures trained on QSAR physical molecular property descriptors, PubChem
molecular fingerprints, and SMILES sequences. Our ensemble predictor leverages
the strengths of each individual technique, significantly outperforming
existing state-of-the art models on the ToxCast and Tox21 toxicity-prediction
datasets. We provide free access to molecule toxicity prediction using our
model at http://www.owkin.com/toxicblend.
| null |
http://arxiv.org/abs/1806.04449v1
|
http://arxiv.org/pdf/1806.04449v1.pdf
| null |
[
"Mikhail Zaslavskiy",
"Simon Jégou",
"Eric W. Tramel",
"Gilles Wainrib"
] |
[
"Drug Discovery",
"Prediction"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/direct-estimation-of-pharmacokinetic
|
1804.02745
| null | null |
Direct Estimation of Pharmacokinetic Parameters from DCE-MRI using Deep CNN with Forward Physical Model Loss
|
Dynamic contrast-enhanced (DCE) MRI is an evolving imaging technique that
provides a quantitative measure of pharmacokinetic (PK) parameters in body
tissues, in which series of T1-weighted images are collected following the
administration of a paramagnetic contrast agent. Unfortunately, in many
applications, conventional clinical DCE-MRI suffers from low spatiotemporal
resolution and insufficient volume coverage. In this paper, we propose a novel
deep learning based approach to directly estimate the PK parameters from
undersampled DCE-MRI data. Specifically, we design a custom loss function where
we incorporate a forward physical model that relates the PK parameters to
corrupted image-time series obtained due to subsampling in k-space. This allows
the network to directly exploit the knowledge of true contrast agent kinetics
in the training phase, and hence provide more accurate restoration of PK
parameters. Experiments on clinical brain DCE datasets demonstrate the efficacy
of our approach in terms of fidelity of PK parameter reconstruction and
significantly faster parameter inference compared to a model-based iterative
reconstruction method.
| null |
http://arxiv.org/abs/1804.02745v2
|
http://arxiv.org/pdf/1804.02745v2.pdf
| null |
[
"Cagdas Ulas",
"Giles Tetteh",
"Michael J. Thrippleton",
"Paul A. Armitage",
"Stephen D. Makin",
"Joanna M. Wardlaw",
"Mike E. Davies",
"Bjoern H. Menze"
] |
[
"Time Series",
"Time Series Analysis"
] | 2018-04-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deepasl-kinetic-model-incorporated-loss-for
|
1804.02755
| null | null |
DeepASL: Kinetic Model Incorporated Loss for Denoising Arterial Spin Labeled MRI via Deep Residual Learning
|
Arterial spin labeling (ASL) allows quantification of the cerebral blood flow (CBF)
by magnetic labeling of the arterial blood water. ASL is increasingly used in
clinical studies due to its noninvasiveness, repeatability and benefits in
quantification. However, ASL suffers from an inherently low-signal-to-noise
ratio (SNR) requiring repeated measurements of control/spin-labeled (C/L) pairs
to achieve a reasonable image quality, which in return increases motion
sensitivity. This leads to clinically prolonged scanning times increasing the
risk of motion artifacts. Thus, there is an immense need of advanced imaging
and processing techniques in ASL. In this paper, we propose a novel deep
learning based approach to improve the perfusion-weighted image quality
obtained from a subset of all available pairwise C/L subtractions.
Specifically, we train a deep fully convolutional network (FCN) to learn a
mapping from a noisy perfusion-weighted image to its residual (subtraction) from
the clean image. Additionally, we incorporate the CBF estimation model in the
loss function during training, which enables the network to produce high
quality images while simultaneously enforcing the CBF estimates to be as close
as possible to the reference CBF values. Extensive experiments on synthetic and clinical ASL
datasets demonstrate the effectiveness of our method in terms of improved ASL
image quality, accurate CBF parameter estimation and considerably small
computation time during testing.
|
Arterial spin labeling (ASL) allows quantification of the cerebral blood flow (CBF) by magnetic labeling of the arterial blood water.
|
http://arxiv.org/abs/1804.02755v2
|
http://arxiv.org/pdf/1804.02755v2.pdf
| null |
[
"Cagdas Ulas",
"Giles Tetteh",
"Stephan Kaczmarz",
"Christine Preibisch",
"Bjoern H. Menze"
] |
[
"Denoising",
"parameter estimation"
] | 2018-04-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sequence-to-sequence-learning-for-task
|
1806.04441
| null | null |
Sequence-to-Sequence Learning for Task-oriented Dialogue with Dialogue State Representation
|
Classic pipeline models for task-oriented dialogue system require explicit
modeling the dialogue states and hand-crafted action spaces to query a
domain-specific knowledge base. Conversely, sequence-to-sequence models learn
to map dialogue history to the response in the current turn without explicit
knowledge base querying. In this work, we propose a novel framework that
leverages the advantages of classic pipeline and sequence-to-sequence models.
Our framework models a dialogue state as a fixed-size distributed
representation and uses this representation to query a knowledge base via an
attention mechanism. Experiment on Stanford Multi-turn Multi-domain
Task-oriented Dialogue Dataset shows that our framework significantly
outperforms other sequence-to-sequence based baseline models on both automatic
and human evaluation.
| null |
http://arxiv.org/abs/1806.04441v1
|
http://arxiv.org/pdf/1806.04441v1.pdf
|
COLING 2018 8
|
[
"Haoyang Wen",
"Yijia Liu",
"Wanxiang Che",
"Libo Qin",
"Ting Liu"
] |
[
"Task-Oriented Dialogue Systems"
] | 2018-06-12T00:00:00 |
https://aclanthology.org/C18-1320
|
https://aclanthology.org/C18-1320.pdf
|
sequence-to-sequence-learning-for-task-1
| null |
[] |
https://paperswithcode.com/paper/ml-fv-heartsuit-a-survey-on-the-application
|
1806.03600
| null | null |
ML + FV = $\heartsuit$? A Survey on the Application of Machine Learning to Formal Verification
|
Formal Verification (FV) and Machine Learning (ML) can seem incompatible due
to their opposite mathematical foundations and their use in real-life problems:
FV mostly relies on discrete mathematics and aims at ensuring correctness; ML
often relies on probabilistic models and consists of learning patterns from
training data. In this paper, we postulate that they are complementary in
practice, and explore how ML helps FV in its classical approaches: static
analysis, model-checking, theorem-proving, and SAT solving. We draw a landscape
of the current practice and catalog some of the most prominent uses of ML
inside FV tools, thus offering a new perspective on FV techniques that can help
researchers and practitioners to better locate the possible synergies. We
discuss lessons learned from our work, point to possible improvements and offer
visions for the future of the domain in the light of the science of software
and systems modeling.
| null |
http://arxiv.org/abs/1806.03600v2
|
http://arxiv.org/pdf/1806.03600v2.pdf
| null |
[
"Moussa Amrani",
"Levi Lúcio",
"Adrien Bibal"
] |
[
"Automated Theorem Proving",
"BIG-bench Machine Learning"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/analyzing-uncertainty-in-neural-machine
|
1803.00047
| null | null |
Analyzing Uncertainty in Neural Machine Translation
|
Machine translation is a popular test bed for research in neural
sequence-to-sequence models but despite much recent research, there is still a
lack of understanding of these models. Practitioners report performance
degradation with large beams, the under-estimation of rare words and a lack of
diversity in the final translations. Our study relates some of these issues to
the inherent uncertainty of the task, due to the existence of multiple valid
translations for a single source sentence, and to the extrinsic uncertainty
caused by noisy training data. We propose tools and metrics to assess how
uncertainty in the data is captured by the model distribution and how it
affects search strategies that generate translations. Our results show that
search works remarkably well but that models tend to spread too much
probability mass over the hypothesis space. Next, we propose tools to assess
model calibration and show how to easily fix some shortcomings of current
models. As part of this study, we release multiple human reference translations
for two popular benchmarks.
|
We propose tools and metrics to assess how uncertainty in the data is captured by the model distribution and how it affects search strategies that generate translations.
|
http://arxiv.org/abs/1803.00047v4
|
http://arxiv.org/pdf/1803.00047v4.pdf
|
ICML 2018 7
|
[
"Myle Ott",
"Michael Auli",
"David Grangier",
"Marc'Aurelio Ranzato"
] |
[
"Diversity",
"Machine Translation",
"Sentence",
"Translation",
"valid"
] | 2018-02-28T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2292
|
http://proceedings.mlr.press/v80/ott18a/ott18a.pdf
|
analyzing-uncertainty-in-neural-machine-1
| null |
[] |
https://paperswithcode.com/paper/ask-no-more-deciding-when-to-guess-in
|
1805.06960
| null | null |
Ask No More: Deciding when to guess in referential visual dialogue
|
Our goal is to explore how the abilities brought in by a dialogue manager can
be included in end-to-end visually grounded conversational agents. We make
initial steps towards this general goal by augmenting a task-oriented visual
dialogue model with a decision-making component that decides whether to ask a
follow-up question to identify a target referent in an image, or to stop the
conversation to make a guess. Our analyses show that adding a decision-making
component produces dialogues that are less repetitive and that include fewer
unnecessary questions, thus potentially leading to more efficient and less
unnatural interactions.
|
We make initial steps towards this general goal by augmenting a task-oriented visual dialogue model with a decision-making component that decides whether to ask a follow-up question to identify a target referent in an image, or to stop the conversation to make a guess.
|
http://arxiv.org/abs/1805.06960v2
|
http://arxiv.org/pdf/1805.06960v2.pdf
|
COLING 2018 8
|
[
"Ravi Shekhar",
"Tim Baumgartner",
"Aashish Venkatesh",
"Elia Bruni",
"Raffaella Bernardi",
"Raquel Fernandez"
] |
[
"Decision Making",
"Visual Dialog"
] | 2018-05-17T00:00:00 |
https://aclanthology.org/C18-1104
|
https://aclanthology.org/C18-1104.pdf
|
ask-no-more-deciding-when-to-guess-in-2
| null |
[] |
https://paperswithcode.com/paper/u-segnet-fully-convolutional-neural-network
|
1806.04429
| null | null |
U-SegNet: Fully Convolutional Neural Network based Automated Brain tissue segmentation Tool
|
Automated brain tissue segmentation into white matter (WM), gray matter (GM),
and cerebro-spinal fluid (CSF) from magnetic resonance images (MRI) is helpful
in the diagnosis of neuro-disorders such as epilepsy, Alzheimer's, multiple
sclerosis, etc. However, thin GM structures at the periphery of cortex and
smooth transitions on tissue boundaries such as between GM and WM, or WM and
CSF pose difficulty in building a reliable segmentation tool. This paper
proposes a Fully Convolutional Neural Network (FCN) tool, that is a hybrid of
two widely used deep learning segmentation architectures SegNet and U-Net, for
improved brain tissue segmentation. We propose a skip connection, inspired by
U-Net, in the SegNet architecture, to incorporate fine multiscale information
for better tissue boundary identification. We show that the proposed U-SegNet
architecture, improves segmentation performance, as measured by average dice
ratio, to 89.74% on the widely used IBSR dataset consisting of T-1 weighted MRI
volumes of 18 subjects.
| null |
http://arxiv.org/abs/1806.04429v1
|
http://arxiv.org/pdf/1806.04429v1.pdf
| null |
[
"Pulkit Kumar",
"Pravin Nagar",
"Chetan Arora",
"Anubha Gupta"
] |
[
"Segmentation"
] | 2018-06-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/yassouali/pytorch_segmentation/blob/8b8e3ee20a3aa733cb19fc158ad5d7773ed6da7f/models/segnet.py#L9",
"description": "**SegNet** is a semantic segmentation model. This core trainable segmentation architecture consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the\r\nVGG16 network. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature maps. Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to\r\nperform non-linear upsampling.",
"full_name": "SegNet",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "SegNet",
"source_title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation",
"source_url": "http://arxiv.org/abs/1511.00561v3"
}
] |
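The Batch Normalization entry in the method list above spells out the minibatch mean, variance, normalization, and scale-and-shift equations. As a minimal illustration (not a reference implementation; the batch size and feature dimension below are made up), the same four steps in NumPy:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Apply the four batch-norm equations to a (batch, features) array."""
    mu = x.mean(axis=0)                    # minibatch mean per feature
    var = x.var(axis=0)                    # minibatch variance per feature
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize
    return gamma * x_hat + beta            # scale and shift with learnable params

# toy usage: batch of 4 examples with 3 features each (illustrative shapes)
x = np.random.randn(4, 3)
y = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
```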
https://paperswithcode.com/paper/sample-dropout-for-audio-scene-classification
|
1806.04422
| null | null |
Sample Dropout for Audio Scene Classification Using Multi-Scale Dense Connected Convolutional Neural Network
|
Acoustic scene classification is an intricate problem for a machine. As an
emerging field of research, deep Convolutional Neural Networks (CNN) have
achieved convincing results. In this paper, we explore the use of a multi-scale
densely connected convolutional neural network (DenseNet) for the classification
task, with the goal of improving classification performance, since multi-scale
features can be extracted from the time-frequency representation of the audio
signal. On the other hand, most previous CNN-based audio scene classification
approaches aim to improve classification accuracy by employing different
regularization techniques, such as dropout of hidden units and data
augmentation, to reduce overfitting. It is widely known that outliers in the
training set have a strong negative influence on the trained model, and that
culling the outliers may improve classification performance, yet this is often
under-explored in previous studies. In this paper, inspired by silence removal
in speech signal processing, a novel sample dropout approach is proposed, which
aims to remove outliers from the training dataset. Using the DCASE 2017 audio
scene classification datasets, the experimental results demonstrate that the
proposed multi-scale DenseNet provides superior performance over the
traditional single-scale DenseNet, while the sample dropout method further
improves the classification robustness of the multi-scale DenseNet.
| null |
http://arxiv.org/abs/1806.04422v1
|
http://arxiv.org/pdf/1806.04422v1.pdf
| null |
[
"Dawei Feng",
"Kele Xu",
"Haibo Mi",
"Feifan Liao",
"Yan Zhou"
] |
[
"Acoustic Scene Classification",
"Classification",
"Data Augmentation",
"General Classification",
"Scene Classification"
] | 2018-06-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/densenet.py#L93",
"description": "A **Dense Block** is a module used in convolutional neural networks that connects *all layers* (with matching feature-map sizes) directly with each other. It was originally proposed as part of the [DenseNet](https://paperswithcode.com/method/densenet) architecture. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. In contrast to [ResNets](https://paperswithcode.com/method/resnet), we never combine features through summation before they are passed into a layer; instead, we combine features by concatenating them. Hence, the $\\ell^{th}$ layer has $\\ell$ inputs, consisting of the feature-maps of all preceding convolutional blocks. Its own feature-maps are passed on to all $L-\\ell$ subsequent layers. This introduces $\\frac{L(L+1)}{2}$ connections in an $L$-layer network, instead of just $L$, as in traditional architectures: \"dense connectivity\".",
"full_name": "Dense Block",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Dense Block",
"source_title": "Densely Connected Convolutional Networks",
"source_url": "http://arxiv.org/abs/1608.06993v5"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, XRP has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a XRP transaction not confirmed, your XRP wallet not showing balance, or you're trying to recover a lost XRP wallet, knowing where to get help is essential. That’s why the XRP customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the XRP Customer Support Number +1-833-534-1729\r\nXRP operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. XRP Transaction Not Confirmed\r\nOne of the most common concerns is when a XRP transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. XRP Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A XRP wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost XRP Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost XRP wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. XRP Deposit Not Received\r\nIf someone has sent you XRP but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A XRP deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. XRP Transaction Stuck or Pending\r\nSometimes your XRP transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. XRP Wallet Recovery Phrase Issue\r\nYour 12 or 24-word XRP wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the XRP Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and XRP tech.\r\n\r\n24/7 Availability: XRP doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About XRP Support and Wallet Issues\r\nQ1: Can XRP support help me recover stolen BTC?\r\nA: While XRP transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: XRP transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not XRP’s official number (XRP is decentralized), it connects you to trained professionals experienced in resolving all major XRP issues.\r\n\r\nFinal Thoughts\r\nXRP is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a XRP transaction not confirmed, your XRP wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the XRP customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "XRP Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "XRP Customer Service Number +1-833-534-1729",
"source_title": "Densely Connected Convolutional Networks",
"source_url": "http://arxiv.org/abs/1608.06993v5"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
}
] |
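The Dense Block and Concatenated Skip Connection entries above describe how each layer receives the concatenation of all preceding feature maps. A minimal PyTorch sketch of that wiring, with illustrative (not paper-specific) channel counts:

```python
import torch
import torch.nn as nn

class TinyDenseBlock(nn.Module):
    """Each layer receives the concatenation of all previous feature maps."""
    def __init__(self, in_channels=16, growth_rate=8, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate  # concatenated skip connections grow the input

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)
```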
https://paperswithcode.com/paper/using-chaos-in-grey-wolf-optimizer-and
|
1806.04419
| null | null |
Using Chaos in Grey Wolf Optimizer and Application to Prime Factorization
|
The Grey Wolf Optimizer (GWO) is a swarm intelligence meta-heuristic
algorithm inspired by the hunting behaviour and social hierarchy of grey wolves
in nature. This paper analyses the use of chaos theory in this algorithm to
improve its ability to escape local optima by replacing the key parameters with
chaotic variables. The optimal choice of chaotic maps is then used to apply the
Chaotic Grey Wolf Optimizer (CGWO) to the problem of factoring a large
semiprime into its prime factors. Assuming the numbers of digits of the factors
to be equal, this is a computationally difficult task upon which the RSA
cryptosystem relies. This work proposes the use of a new objective function to
solve the problem and uses the CGWO to optimize it and compute the factors. It
is shown that this function performs better than its predecessor for large
semiprimes and that the CGWO is an efficient algorithm for optimizing it.
| null |
http://arxiv.org/abs/1806.04419v1
|
http://arxiv.org/pdf/1806.04419v1.pdf
| null |
[
"Harshit Mehrotra",
"Dr. Saibal K. Pal"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
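The abstract above describes replacing key GWO parameters with chaotic variables. A minimal sketch of the idea, assuming the logistic map as the chaotic sequence and showing only the exploration coefficient A (the paper evaluates several maps and parameters; this is illustrative only):

```python
def logistic_map(x, mu=4.0):
    """One iteration of the logistic map, a common choice of chaotic map."""
    return mu * x * (1.0 - x)

def chaotic_A(a, chaos_state):
    """GWO coefficient A = 2*a*r - a, with the uniform draw r replaced by a chaotic value."""
    chaos_state = logistic_map(chaos_state)
    return 2.0 * a * chaos_state - a, chaos_state

a = 2.0        # GWO control parameter, normally decreased over iterations
state = 0.7    # arbitrary seed for the chaotic sequence (avoiding fixed points)
A, state = chaotic_A(a, state)
```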
https://paperswithcode.com/paper/quaternion-recurrent-neural-networks
|
1806.04418
| null |
ByMHvs0cFQ
|
Quaternion Recurrent Neural Networks
|
Recurrent neural networks (RNNs) are powerful architectures to model
sequential data, due to their capability to learn short and long-term
dependencies between the basic elements of a sequence. Nonetheless, popular
tasks such as speech or image recognition involve multi-dimensional input
features that are characterized by strong internal dependencies between the
dimensions of the input vector. We propose a novel quaternion recurrent neural
network (QRNN), along with a quaternion long short-term memory neural
network (QLSTM), which take into account both the external relations and these
internal structural dependencies with the quaternion algebra. Similarly to
capsules, quaternions allow the QRNN to code internal dependencies by composing
and processing multidimensional features as single entities, while the
recurrent operation reveals correlations between the elements composing the
sequence. We show that both the QRNN and the QLSTM achieve better performance
than RNNs and LSTMs in a realistic application of automatic speech recognition.
Finally, we show that the QRNN and QLSTM reduce the number of free parameters
needed by a maximum factor of 3.3x compared to real-valued RNNs and LSTMs, while
reaching better results, leading to a more compact representation of the relevant information.
|
Recurrent neural networks (RNNs) are powerful architectures to model sequential data, due to their capability to learn short and long-term dependencies between the basic elements of a sequence.
|
http://arxiv.org/abs/1806.04418v3
|
http://arxiv.org/pdf/1806.04418v3.pdf
|
ICLR 2019 5
|
[
"Titouan Parcollet",
"Mirco Ravanelli",
"Mohamed Morchid",
"Georges Linarès",
"Chiheb Trabelsi",
"Renato de Mori",
"Yoshua Bengio"
] |
[
"Automatic Speech Recognition",
"Automatic Speech Recognition (ASR)",
"speech-recognition",
"Speech Recognition"
] | 2018-06-12T00:00:00 |
https://openreview.net/forum?id=ByMHvs0cFQ
|
https://openreview.net/pdf?id=ByMHvs0cFQ
|
quaternion-recurrent-neural-networks-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
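The QRNN abstract above relies on the quaternion algebra to process multidimensional features as single entities. The core operation is the Hamilton product of two quaternions, sketched below in its standard form (this is the general definition, not the paper's full QRNN layer):

```python
import numpy as np

def hamilton_product(q1, q2):
    """Hamilton product of two quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# e.g. multiplying the unit quaternions i and j yields k = (0, 0, 0, 1)
print(hamilton_product(np.array([0, 1, 0, 0]), np.array([0, 0, 1, 0])))
```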
https://paperswithcode.com/paper/markerless-inside-out-tracking-for
|
1804.01708
| null | null |
Markerless Inside-Out Tracking for Interventional Applications
|
Tracking of rotation and translation of medical instruments plays a
substantial role in many modern interventions. Traditional external optical
tracking systems are often subject to line-of-sight issues, in particular when
the region of interest is difficult to access or the procedure allows only for
limited rigid body markers. The introduction of inside-out tracking systems
aims to overcome these issues. We propose a marker-less tracking system based
on visual SLAM to enable tracking of instruments in an interventional scenario.
To achieve this goal, we mount a miniature multi-modal (monocular, stereo,
active depth) vision system on the object of interest and relocalize its pose
within an adaptive map of the operating room. We compare state-of-the-art
algorithmic pipelines and apply the idea to transrectal 3D Ultrasound (TRUS)
compounding of the prostate. Obtained volumes are compared to reconstruction
using a commercial optical tracking system as well as a robotic manipulator.
Feature-based binocular SLAM is identified as the most promising method and is
tested extensively in a challenging clinical environment under severe occlusion
and for the use case of prostate US biopsies.
| null |
http://arxiv.org/abs/1804.01708v3
|
http://arxiv.org/pdf/1804.01708v3.pdf
| null |
[
"Benjamin Busam",
"Patrick Ruhkamp",
"Salvatore Virga",
"Beatrice Lentes",
"Julia Rackerseder",
"Nassir Navab",
"Christoph Hennersperger"
] |
[
"Translation"
] | 2018-04-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/enhancing-clinical-mri-perfusion-maps-with
|
1806.04413
| null | null |
Enhancing clinical MRI Perfusion maps with data-driven maps of complementary nature for lesion outcome prediction
|
Stroke is the second most common cause of death in developed countries, where
rapid clinical intervention can have a major impact on a patient's life. To
perform the revascularization procedure, the decision making of physicians
considers its risks and benefits based on multi-modal MRI and clinical
experience. Therefore, automatic prediction of the ischemic stroke lesion
outcome has the potential to assist the physician towards a better stroke
assessment and information about tissue outcome. Typically, automatic methods
consider the information of the standard kinetic models of diffusion and
perfusion MRI (e.g. Tmax, TTP, MTT, rCBF, rCBV) to perform lesion outcome
prediction. In this work, we propose a deep learning method to fuse this
information with an automated data selection of the raw 4D PWI image
information, followed by a data-driven deep-learning modeling of the underlying
blood flow hemodynamics. We demonstrate the ability of the proposed approach to
improve prediction of tissue at risk before therapy, as compared to only using
the standard clinical perfusion maps, hence suggesting the potential
benefits of the proposed data-driven raw perfusion data modelling approach.
| null |
http://arxiv.org/abs/1806.04413v1
|
http://arxiv.org/pdf/1806.04413v1.pdf
| null |
[
"Adriano Pinto",
"Sergio Pereira",
"Raphael Meier",
"Victor Alves",
"Roland Wiest",
"Carlos A. Silva",
"Mauricio Reyes"
] |
[
"Decision Making",
"Prediction"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-learning-for-electromyographic-hand
|
1801.07756
| null | null |
Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning
|
In recent years, deep learning algorithms have become increasingly
prominent for their unparalleled ability to automatically learn discriminant
features from large amounts of data. However, within the field of
electromyography-based gesture recognition, deep learning algorithms are seldom
employed as they require an unreasonable amount of effort from a single person
to generate tens of thousands of examples.
This work's hypothesis is that general, informative features can be learned
from the large amounts of data generated by aggregating the signals of multiple
users, thus reducing the recording burden while enhancing gesture recognition.
Consequently, this paper proposes applying transfer learning on aggregated data
from multiple users, while leveraging the capacity of deep learning algorithms
to learn discriminant features from large datasets. Two datasets comprised of
19 and 17 able-bodied participants respectively (the first one is employed for
pre-training) were recorded for this work, using the Myo Armband. A third Myo
Armband dataset was taken from the NinaPro database and is comprised of 10
able-bodied participants. Three different deep learning networks employing
three different modalities as input (raw EMG, Spectrograms and Continuous
Wavelet Transform (CWT)) are tested on the second and third dataset. The
proposed transfer learning scheme is shown to systematically and significantly
enhance the performance for all three networks on the two datasets, achieving
an offline accuracy of 98.31% for 7 gestures over 17 participants for the
CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw
EMG-based ConvNet. Finally, a use-case study employing eight able-bodied
participants suggests that real-time feedback allows users to adapt their
muscle activation strategy which reduces the degradation in accuracy normally
experienced over time.
|
Consequently, this paper proposes applying transfer learning on aggregated data from multiple users, while leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets.
|
http://arxiv.org/abs/1801.07756v5
|
http://arxiv.org/pdf/1801.07756v5.pdf
| null |
[
"Ulysse Côté-Allard",
"Cheikh Latyr Fall",
"Alexandre Drouin",
"Alexandre Campeau-Lecours",
"Clément Gosselin",
"Kyrre Glette",
"François Laviolette",
"Benoit Gosselin"
] |
[
"Deep Learning",
"EMG Gesture Recognition",
"General Classification",
"Gesture Recognition",
"Transfer Learning"
] | 2018-01-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/explaining-and-generalizing-back-translation
|
1806.04402
| null | null |
Explaining and Generalizing Back-Translation through Wake-Sleep
|
Back-translation has become a commonly employed heuristic for semi-supervised
neural machine translation. The technique is both straightforward to apply and
has led to state-of-the-art results. In this work, we offer a principled
interpretation of back-translation as approximate inference in a generative
model of bitext and show how the standard implementation of back-translation
corresponds to a single iteration of the wake-sleep algorithm in our proposed
model. Moreover, this interpretation suggests a natural iterative
generalization, which we demonstrate leads to further improvement of up to 1.6
BLEU.
| null |
http://arxiv.org/abs/1806.04402v1
|
http://arxiv.org/pdf/1806.04402v1.pdf
| null |
[
"Ryan Cotterell",
"Julia Kreutzer"
] |
[
"Machine Translation",
"Translation"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
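The standard back-translation heuristic referred to in the abstract above trains a reverse model, translates target-side monolingual text back into the source language, and mixes the synthetic pairs into the training bitext. A schematic sketch, where `train_nmt` and `translate` are hypothetical stand-ins rather than real library calls:

```python
# Schematic only: the two helpers below stand in for a real NMT toolkit.
def train_nmt(pairs):            # hypothetical: returns a "model" trained on (src, tgt) pairs
    return {"training_pairs": list(pairs)}

def translate(model, sentence):  # hypothetical: returns a translation of `sentence`
    return f"<translation of: {sentence}>"

def back_translate(parallel_pairs, target_monolingual):
    # 1) train a reverse (target -> source) model on the available bitext
    reverse_model = train_nmt([(tgt, src) for (src, tgt) in parallel_pairs])
    # 2) translate monolingual target sentences back into the source language
    synthetic_pairs = [(translate(reverse_model, tgt), tgt) for tgt in target_monolingual]
    # 3) train the forward model on real plus synthetic bitext
    return train_nmt(parallel_pairs + synthetic_pairs)
```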
https://paperswithcode.com/paper/quantum-classification-of-the-mnist-dataset
|
1805.08837
| null | null |
Quantum classification of the MNIST dataset with Slow Feature Analysis
|
Quantum machine learning carries the promise to revolutionize information and communication technologies. While a number of quantum algorithms with potential exponential speedups have been proposed already, it is quite difficult to provide convincing evidence that quantum computers with quantum memories will be in fact useful to solve real-world problems. Our work makes considerable progress towards this goal. We design quantum techniques for Dimensionality Reduction and for Classification, and combine them to provide an efficient and high accuracy quantum classifier that we test on the MNIST dataset. More precisely, we propose a quantum version of Slow Feature Analysis (QSFA), a dimensionality reduction technique that maps the dataset in a lower dimensional space where we can apply a novel quantum classification procedure, the Quantum Frobenius Distance (QFD). We simulate the quantum classifier (including errors) and show that it can provide classification of the MNIST handwritten digit dataset, a widely used dataset for benchmarking classification algorithms, with $98.5\%$ accuracy, similar to the classical case. The running time of the quantum classifier is polylogarithmic in the dimension and number of data points. We also provide evidence that the other parameters on which the running time depends (condition number, Frobenius norm, error threshold, etc.) scale favorably in practice, thus ascertaining the efficiency of our algorithm.
| null |
https://arxiv.org/abs/1805.08837v3
|
https://arxiv.org/pdf/1805.08837v3.pdf
| null |
[
"Iordanis Kerenidis",
"Alessandro Luongo"
] |
[
"Benchmarking",
"Classification",
"Dimensionality Reduction",
"General Classification",
"Quantum Machine Learning"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/attentive-cross-modal-paratope-prediction
|
1806.04398
| null | null |
Attentive cross-modal paratope prediction
|
Antibodies are a critical part of the immune system, having the function of
directly neutralising or tagging undesirable objects (the antigens) for future
destruction. Being able to predict which amino acids belong to the paratope,
the region on the antibody which binds to the antigen, can facilitate antibody
design and contribute to the development of personalised medicine. The
suitability of deep neural networks has recently been confirmed for this task,
with Parapred outperforming all prior physical models. Our contribution is
twofold: first, we significantly improve upon the computational efficiency of
Parapred by leveraging à trous convolutions and self-attention. Secondly, we
implement cross-modal attention by allowing the antibody residues to attend
over antigen residues. This leads to new state-of-the-art results on this task,
along with insightful interpretations.
| null |
http://arxiv.org/abs/1806.04398v1
|
http://arxiv.org/pdf/1806.04398v1.pdf
| null |
[
"Andreea Deac",
"Petar Veličković",
"Pietro Sormanni"
] |
[
"Antibody-antigen binding prediction",
"Computational Efficiency",
"Prediction"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/community-detection-in-partially-observable
|
1801.00132
| null | null |
Community Detection in Partially Observable Social Networks
|
The discovery of community structures in social networks has gained significant attention since it is a fundamental problem in understanding the networks' topology and functions. However, most social network data are collected from partially observable networks with both missing nodes and edges. In this paper, we address a new problem of detecting overlapping community structures in the context of such an incomplete network, where communities in the network are allowed to overlap since nodes belong to multiple communities at once. To solve this problem, we introduce KroMFac, a new framework that conducts community detection via regularized nonnegative matrix factorization (NMF) based on the Kronecker graph model. Specifically, from an inferred Kronecker generative parameter matrix, we first estimate the missing part of the network. As our major contribution to the proposed framework, to improve community detection accuracy, we then characterize and select influential nodes (which tend to have high degrees) by ranking, and add them to the existing graph. Finally, we uncover the community structures by solving the regularized NMF-aided optimization problem in terms of maximizing the likelihood of the underlying graph. Furthermore, adopting normalized mutual information (NMI), we empirically show superiority of our KroMFac approach over two baseline schemes by using both synthetic and real-world networks.
| null |
https://arxiv.org/abs/1801.00132v8
|
https://arxiv.org/pdf/1801.00132v8.pdf
| null |
[
"Cong Tran",
"Won-Yong Shin",
"Andreas Spitz"
] |
[
"Community Detection"
] | 2017-12-30T00:00:00 | null | null | null | null |
[] |
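KroMFac, described above, builds on regularized nonnegative matrix factorization. For orientation, a minimal sketch of classic multiplicative-update NMF (without the Kronecker-based network completion, node ranking, or regularization the paper adds):

```python
import numpy as np

def nmf(A, k, iters=200, eps=1e-9):
    """Factor a nonnegative adjacency-like matrix A ~= W @ H with W, H >= 0."""
    n, m = A.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((n, k)), rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # multiplicative updates keep
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # both factors nonnegative
    return W, H
```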
https://paperswithcode.com/paper/qiniu-submission-to-activitynet-challenge
|
1806.04391
| null | null |
Qiniu Submission to ActivityNet Challenge 2018
|
In this paper, we introduce our submissions for the tasks of trimmed activity
recognition (Kinetics) and trimmed event recognition (Moments in Time) for
the ActivityNet Challenge 2018. In the two tasks, non-local neural networks and
temporal segment networks are implemented as our base models. Multi-modal cues
such as RGB images, optical flow and acoustic signals have also been used in our
method. We also propose new non-local-based models to further improve
the recognition accuracy. The final submissions after ensembling the models
achieve 83.5% top-1 accuracy and 96.8% top-5 accuracy on the Kinetics
validation set, 35.81% top-1 accuracy and 62.59% top-5 accuracy on the MIT
validation set.
| null |
http://arxiv.org/abs/1806.04391v1
|
http://arxiv.org/pdf/1806.04391v1.pdf
| null |
[
"Xiaoteng Zhang",
"Yixin Bao",
"Feiyun Zhang",
"Kai Hu",
"Yicheng Wang",
"Liang Zhu",
"Qinzhu He",
"Yining Lin",
"Jie Shao",
"Yao Peng"
] |
[
"Activity Recognition",
"Optical Flow Estimation"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-weighted-kendall-and-high-order-kernels
|
1802.08526
| null | null |
The Weighted Kendall and High-order Kernels for Permutations
|
We propose new positive definite kernels for permutations. First we introduce
a weighted version of the Kendall kernel, which allows the contributions of
different item pairs in the permutations to be weighted unequally depending on
their ranks. Like the Kendall kernel, we show that the weighted version is invariant
to relabeling of items and can be computed efficiently in $O(n \ln(n))$
operations, where $n$ is the number of items in the permutation. Second, we
propose a supervised approach to learn the weights by jointly optimizing them
with the function estimated by a kernel machine. Third, while the Kendall
kernel considers pairwise comparison between items, we extend it by considering
higher-order comparisons among tuples of items and show that the supervised
approach of learning the weights can be systematically generalized to
higher-order permutation kernels.
|
We propose new positive definite kernels for permutations.
|
http://arxiv.org/abs/1802.08526v2
|
http://arxiv.org/pdf/1802.08526v2.pdf
|
ICML 2018 7
|
[
"Yunlong Jiao",
"Jean-Philippe Vert"
] |
[
"Vocal Bursts Intensity Prediction"
] | 2018-02-23T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2305
|
http://proceedings.mlr.press/v80/jiao18a/jiao18a.pdf
|
the-weighted-kendall-and-high-order-kernels-1
| null |
[] |
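For reference, the unweighted Kendall kernel underlying the abstract above counts concordant minus discordant item pairs, normalized by the number of pairs. A naive O(n^2) sketch (the paper's weighted variant and its O(n ln n) computation are not shown):

```python
from itertools import combinations

def kendall_kernel(x, y):
    """Naive O(n^2) Kendall kernel between two rankings of the same n items."""
    n = len(x)
    total = 0
    for i, j in combinations(range(n), 2):
        concordant = (x[i] - x[j]) * (y[i] - y[j]) > 0
        total += 1 if concordant else -1
    return total / (n * (n - 1) / 2)

# identical rankings give 1, fully reversed rankings give -1
print(kendall_kernel([1, 2, 3, 4], [1, 2, 3, 4]))   # 1.0
print(kendall_kernel([1, 2, 3, 4], [4, 3, 2, 1]))   # -1.0
```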
https://paperswithcode.com/paper/how-to-make-the-gradients-small
|
1801.02982
| null | null |
How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD
|
Stochastic gradient descent (SGD) gives an optimal convergence rate when minimizing convex stochastic objectives $f(x)$. However, in terms of making the gradients small, the original SGD does not give an optimal rate, even when $f(x)$ is convex. If $f(x)$ is convex, to find a point with gradient norm $\varepsilon$, we design an algorithm SGD3 with a near-optimal rate $\tilde{O}(\varepsilon^{-2})$, improving the best known rate $O(\varepsilon^{-8/3})$ of [18]. If $f(x)$ is nonconvex, to find its $\varepsilon$-approximate local minimum, we design an algorithm SGD5 with rate $\tilde{O}(\varepsilon^{-3.5})$, where previously SGD variants only achieve $\tilde{O}(\varepsilon^{-4})$ [6, 15, 33]. This is no slower than the best known stochastic version of Newton's method in all parameter regimes [30].
| null |
https://arxiv.org/abs/1801.02982v3
|
https://arxiv.org/pdf/1801.02982v3.pdf
|
NeurIPS 2018 12
|
[
"Zeyuan Allen-Zhu"
] |
[] | 2018-01-08T00:00:00 |
http://papers.nips.cc/paper/7392-how-to-make-the-gradients-small-stochastically-even-faster-convex-and-nonconvex-sgd
|
http://papers.nips.cc/paper/7392-how-to-make-the-gradients-small-stochastically-even-faster-convex-and-nonconvex-sgd.pdf
|
how-to-make-the-gradients-small-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
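The SGD entry above states the update rule $w\_{t+1} = w\_{t} - \eta\hat{\nabla}\_{w}L(w\_{t})$. A minimal sketch of that step on a toy convex objective (minibatch noise omitted for brevity; values are illustrative):

```python
import numpy as np

def sgd(grad_fn, w, lr=0.1, steps=100):
    """Plain gradient descent step: w <- w - lr * grad(w)."""
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

# toy convex objective f(w) = ||w||^2 / 2, whose gradient is w itself
w_star = sgd(lambda w: w, np.array([3.0, -2.0]))
print(np.linalg.norm(w_star))  # the gradient norm shrinks towards zero
```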
https://paperswithcode.com/paper/fast-rotational-sparse-coding
|
1806.04374
| null | null |
Fast Rotational Sparse Coding
|
We propose an algorithm for rotational sparse coding along with an efficient implementation using steerability. Sparse coding (also called dictionary learning) is an important technique in image processing, useful in inverse problems, compression, and analysis; however, the usual formulation fails to capture an important aspect of the structure of images: images are formed from building blocks, e.g., edges, lines, or points, that appear at different locations, orientations, and scales. The sparse coding problem can be reformulated to explicitly account for these transforms, at the cost of increased computation. In this work, we propose an algorithm for a rotational version of sparse coding that is based on K-SVD with additional rotation operations. We then propose a method to accelerate these rotations by learning the dictionary in a steerable basis. Our experiments on patch coding and texture classification demonstrate that the proposed algorithm is fast enough for practical use and compares favorably to standard sparse coding.
| null |
https://arxiv.org/abs/1806.04374v2
|
https://arxiv.org/pdf/1806.04374v2.pdf
| null |
[
"Michael T. McCann",
"Vincent Andrearczyk",
"Michael Unser",
"Adrien Depeursinge"
] |
[
"Dictionary Learning",
"Texture Classification"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/kernel-recursive-abc-point-estimation-with
|
1802.08404
| null | null |
Kernel Recursive ABC: Point Estimation with Intractable Likelihood
|
We propose a novel approach to parameter estimation for simulator-based
statistical models with intractable likelihood. Our proposed method involves
recursive application of kernel ABC and kernel herding to the same observed
data. We provide a theoretical explanation regarding why the approach works,
showing (for the population setting) that, under a certain assumption, point
estimates obtained with this method converge to the true parameter, as
recursion proceeds. We have conducted a variety of numerical experiments,
including parameter estimation for a real-world pedestrian flow simulator, and
show that in most cases our method outperforms existing approaches.
| null |
http://arxiv.org/abs/1802.08404v2
|
http://arxiv.org/pdf/1802.08404v2.pdf
|
ICML 2018 7
|
[
"Takafumi Kajihara",
"Motonobu Kanagawa",
"Keisuke Yamazaki",
"Kenji Fukumizu"
] |
[
"parameter estimation"
] | 2018-02-23T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2239
|
http://proceedings.mlr.press/v80/kajihara18a/kajihara18a.pdf
|
kernel-recursive-abc-point-estimation-with-1
| null |
[] |
https://paperswithcode.com/paper/initialize-globally-before-acting-locally
|
1806.04368
| null | null |
Initialize globally before acting locally: Enabling Landmark-free 3D US to MRI Registration
|
Registration of partial-view 3D US volumes with MRI data is influenced by
initialization. The standard of practice is using extrinsic or intrinsic
landmarks, which can be very tedious to obtain. To overcome the limitations of
registration initialization, we present a novel approach that is based on
Euclidean distance maps derived from easily obtainable coarse segmentations. We
evaluate our approach quantitatively on the publicly available RESECT dataset
and show that it is robust with respect to the overlap of the target area and
the initial position. Furthermore, our method provides initializations that
fall within the capture ranges of state-of-the-art nonlinear, deformable image
registration algorithms.
| null |
http://arxiv.org/abs/1806.04368v1
|
http://arxiv.org/pdf/1806.04368v1.pdf
| null |
[
"Julia Rackerseder",
"Maximilian Baust",
"Rüdiger Göbl",
"Nassir Navab",
"Christoph Hennersperger"
] |
[
"Image Registration",
"Position"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mean-field-multi-agent-reinforcement-learning
|
1802.05438
| null | null |
Mean Field Multi-Agent Reinforcement Learning
|
Existing multi-agent reinforcement learning methods are typically limited to a small number of agents. When the number of agents increases, learning becomes intractable due to the curse of dimensionality and the exponential growth of agent interactions. In this paper, we present \emph{Mean Field Reinforcement Learning}, where the interactions within the population of agents are approximated by those between a single agent and the average effect from the overall population or neighboring agents; the interplay between the two entities is mutually reinforced: the learning of the individual agent's optimal policy depends on the dynamics of the population, while the dynamics of the population change according to the collective patterns of the individual policies. We develop practical mean field Q-learning and mean field Actor-Critic algorithms and analyze the convergence of the solution to Nash equilibrium. Experiments on Gaussian squeeze, the Ising model, and battle games justify the learning effectiveness of our mean field approaches. In addition, we report the first result to solve the Ising model via model-free reinforcement learning methods.
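As a hedged sketch of the mean-field idea described above (not the authors' implementation), the snippet below performs one tabular mean-field Q-learning-style update in which an agent's Q-value is indexed by its own action and a discretised mean action of its neighbours; all quantities and the bucketing scheme are illustrative assumptions.

```python
# Hedged sketch: each agent's Q-value is conditioned on its own action and the
# *mean* action of its neighbours, so the joint-action space no longer grows
# exponentially with the number of agents.  Simplified tabular update only.
import numpy as np

n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions, n_actions))   # Q[state, action, mean-action bucket]

def mean_action_bucket(neighbour_actions):
    # discretise the empirical mean action of the neighbours into a bucket
    return int(round(np.mean(neighbour_actions)))

alpha, gamma = 0.1, 0.95
s, a = 2, 1
neighbour_actions = [0, 2, 2, 1]
a_bar = mean_action_bucket(neighbour_actions)
r, s_next = 1.0, 3
a_bar_next = a_bar                                # assume neighbours change slowly

# mean-field value of the next state: best own action given the mean action
target = r + gamma * Q[s_next, :, a_bar_next].max()
Q[s, a, a_bar] += alpha * (target - Q[s, a, a_bar])
print(Q[s, a, a_bar])
```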
|
Existing multi-agent reinforcement learning methods are typically limited to a small number of agents.
|
https://arxiv.org/abs/1802.05438v5
|
https://arxiv.org/pdf/1802.05438v5.pdf
|
ICML 2018 7
|
[
"Yaodong Yang",
"Rui Luo",
"Minne Li",
"Ming Zhou",
"Wei-Nan Zhang",
"Jun Wang"
] |
[
"Multi-agent Reinforcement Learning",
"Q-Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-02-15T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2458
|
http://proceedings.mlr.press/v80/yang18d/yang18d.pdf
|
mean-field-multi-agent-reinforcement-learning-1
| null |
[
{
"code_snippet_url": null,
"description": "**Q-Learning** is an off-policy temporal difference control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\max\\_{a}Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThe learned action-value function $Q$ directly approximates $q\\_{*}$, the optimal action-value function, independent of the policy being followed.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Q-Learning",
"introduced_year": 1984,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Off-Policy TD Control",
"parent": null
},
"name": "Q-Learning",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/drift-analysis
|
1712.00964
| null | null |
Drift Analysis
|
Drift analysis is one of the major tools for analysing evolutionary
algorithms and nature-inspired search heuristics. In this chapter we give an
introduction to drift analysis and give some examples of how to use it for the
analysis of evolutionary algorithms.
| null |
http://arxiv.org/abs/1712.00964v2
|
http://arxiv.org/pdf/1712.00964v2.pdf
| null |
[
"Johannes Lengler"
] |
[
"Evolutionary Algorithms"
] | 2017-12-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/msplit-lbi-realizing-feature-selection-and
|
1806.04360
| null | null |
MSplit LBI: Realizing Feature Selection and Dense Estimation Simultaneously in Few-shot and Zero-shot Learning
|
Learning a good embedding model that efficiently captures the representation
coefficients between two spaces/subspaces is a typical and general topic.
To solve this task, $L_{1}$ regularization is widely used for the pursuit of
feature selection and avoiding overfitting, and yet the sparse estimation of
features in $L_{1}$ regularization may cause the underfitting of training data.
$L_{2}$ regularization is also frequently used, but it is a biased estimator.
In this paper, we propose the idea that the features consist of three
orthogonal parts, \emph{namely} sparse strong signals, dense weak signals and
random noise, in which both strong and weak signals contribute to the fitting
of data. To facilitate this novel decomposition, we propose \emph{MSplit} LBI,
which for the first time realizes feature selection and dense estimation
simultaneously. We provide theoretical and simulation-based verification that
our method outperforms $L_{1}$ and $L_{2}$ regularization, and extensive
experimental
results show that our method achieves state-of-the-art performance in the
few-shot and zero-shot learning.
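To illustrate the motivation stated in the abstract (this is not the MSplit LBI algorithm), the hedged sketch below fits an $L_1$ (Lasso) and an $L_2$ (Ridge) model to data whose true coefficients mix a few strong signals with many weak ones, contrasting the sparse-but-underfitting and dense-but-biased behaviours; the data, penalties, and thresholds are illustrative assumptions.

```python
# Hedged illustration of the L1-vs-L2 motivation above, not MSplit LBI itself.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 200, 50
beta = np.zeros(p)
beta[:5] = 3.0                      # sparse strong signals
beta[5:] = 0.1                      # dense weak signals
X = rng.standard_normal((n, p))
y = X @ beta + 0.5 * rng.standard_normal(n)

lasso = Lasso(alpha=0.1).fit(X, y)  # L1: sparse, tends to drop the weak part
ridge = Ridge(alpha=10.0).fit(X, y) # L2: dense, shrinks the strong part

print("nonzero coefficients (lasso):", np.sum(np.abs(lasso.coef_) > 1e-8))
print("mean strong coef:", lasso.coef_[:5].mean(), ridge.coef_[:5].mean())
print("mean weak  coef:", lasso.coef_[5:].mean(), ridge.coef_[5:].mean())
```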
|
To solve this task, $L_{1}$ regularization is widely used for the pursuit of feature selection and avoiding overfitting, and yet the sparse estimation of features in $L_{1}$ regularization may cause the underfitting of training data.
|
http://arxiv.org/abs/1806.04360v1
|
http://arxiv.org/pdf/1806.04360v1.pdf
|
ICML 2018 7
|
[
"Bo Zhao",
"Xinwei Sun",
"Yanwei Fu",
"Yuan YAO",
"Yizhou Wang"
] |
[
"feature selection",
"Zero-Shot Learning"
] | 2018-06-12T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1897
|
http://proceedings.mlr.press/v80/zhao18c/zhao18c.pdf
|
msplit-lbi-realizing-feature-selection-and-1
| null |
[] |
https://paperswithcode.com/paper/multi-task-neural-models-for-translating
|
1806.04357
| null | null |
Multi-Task Neural Models for Translating Between Styles Within and Across Languages
|
Generating natural language requires conveying content in an appropriate
style. We explore two related tasks on generating text of varying formality:
monolingual formality transfer and formality-sensitive machine translation. We
propose to solve these tasks jointly using multi-task learning, and show that
our models achieve state-of-the-art performance for formality transfer and are
able to perform formality-sensitive translation without being explicitly
trained on style-annotated translation examples.
|
Generating natural language requires conveying content in an appropriate style.
|
http://arxiv.org/abs/1806.04357v1
|
http://arxiv.org/pdf/1806.04357v1.pdf
|
COLING 2018 8
|
[
"Xing Niu",
"Sudha Rao",
"Marine Carpuat"
] |
[
"Machine Translation",
"Multi-Task Learning",
"Translation"
] | 2018-06-12T00:00:00 |
https://aclanthology.org/C18-1086
|
https://aclanthology.org/C18-1086.pdf
|
multi-task-neural-models-for-translating-1
| null |
[] |
https://paperswithcode.com/paper/gradnorm-gradient-normalization-for-adaptive
|
1711.02257
| null |
H1bM1fZCW
|
GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks
|
Deep multitask networks, in which one neural network produces multiple
predictive outputs, can offer better speed and performance than their
single-task counterparts but are challenging to train properly. We present a
gradient normalization (GradNorm) algorithm that automatically balances
training in deep multitask models by dynamically tuning gradient magnitudes. We
show that for various network architectures, for both regression and
classification tasks, and on both synthetic and real datasets, GradNorm
improves accuracy and reduces overfitting across multiple tasks when compared
to single-task networks, static baselines, and other adaptive multitask loss
balancing techniques. GradNorm also matches or surpasses the performance of
exhaustive grid search methods, despite only involving a single asymmetry
hyperparameter $\alpha$. Thus, what was once a tedious search process that
incurred exponentially more compute for each task added can now be accomplished
within a few training runs, irrespective of the number of tasks. Ultimately, we
will demonstrate that gradient manipulation affords us great control over the
training dynamics of multitask networks and may be one of the keys to unlocking
the potential of multitask learning.
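As a hedged, simplified sketch of the GradNorm-style weight update described above, the snippet below nudges per-task loss weights so that each task's weighted gradient norm moves towards a common target driven by its relative training rate; the losses and gradient norms here are made-up values for illustration, whereas the paper computes them from live network gradients.

```python
# Hedged numpy sketch of a GradNorm-style loss-weight update (illustrative
# quantities only; the target is treated as a constant when differentiating).
import numpy as np

alpha = 1.5                      # asymmetry hyperparameter from the abstract
w = np.ones(3)                   # one loss weight per task
L0 = np.array([2.0, 1.0, 4.0])   # initial task losses (assumed)
L  = np.array([1.0, 0.8, 3.5])   # current task losses (assumed)
G  = np.array([0.5, 1.2, 0.9])   # per-task gradient norms w.r.t. shared weights

loss_ratio = L / L0
r = loss_ratio / loss_ratio.mean()        # relative inverse training rate
target = G.mean() * r ** alpha            # desired gradient norm per task

# gradient of sum_i |w_i * G_i - target_i| with respect to w_i
grad_w = np.sign(w * G - target) * G
w -= 0.025 * grad_w
w *= len(w) / w.sum()                     # renormalise so the weights sum to T
print(w)
```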
|
Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly.
|
http://arxiv.org/abs/1711.02257v4
|
http://arxiv.org/pdf/1711.02257v4.pdf
|
ICML 2018 7
|
[
"Zhao Chen",
"Vijay Badrinarayanan",
"Chen-Yu Lee",
"Andrew Rabinovich"
] |
[] | 2017-11-07T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2419
|
http://proceedings.mlr.press/v80/chen18a/chen18a.pdf
|
gradnorm-gradient-normalization-for-adaptive-1
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/exploiting-document-knowledge-for-aspect
|
1806.04346
| null | null |
Exploiting Document Knowledge for Aspect-level Sentiment Classification
|
Attention-based long short-term memory (LSTM) networks have proven to be
useful in aspect-level sentiment classification. However, due to the
difficulties in annotating aspect-level data, existing public datasets for this
task are all relatively small, which largely limits the effectiveness of those
neural models. In this paper, we explore two approaches that transfer knowledge
from document-level data, which is much less expensive to obtain, to improve
the performance of aspect-level sentiment classification. We demonstrate the
effectiveness of our approaches on 4 public datasets from SemEval 2014, 2015,
and 2016, and we show that attention-based LSTM benefits from document-level
knowledge in multiple ways.
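A hedged sketch of the generic attention-over-LSTM classifier family that the abstract builds on (not the authors' document-level transfer model): LSTM states are pooled with learned attention weights before a sentiment classifier. All sizes and names are illustrative assumptions.

```python
# Hedged sketch: attention pooling over LSTM hidden states for classification.
import torch
import torch.nn as nn

class AttentionLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=50, hidden=64, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.att = nn.Linear(hidden, 1)          # scores each time step
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, tokens):                   # tokens: (batch, seq_len)
        h, _ = self.lstm(self.emb(tokens))       # (batch, seq_len, hidden)
        weights = torch.softmax(self.att(h).squeeze(-1), dim=1)
        pooled = (weights.unsqueeze(-1) * h).sum(dim=1)
        return self.out(pooled)

model = AttentionLSTMClassifier()
logits = model(torch.randint(0, 1000, (4, 12)))  # batch of 4 toy sentences
print(logits.shape)                              # torch.Size([4, 3])
```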
|
Attention-based long short-term memory (LSTM) networks have proven to be useful in aspect-level sentiment classification.
|
http://arxiv.org/abs/1806.04346v1
|
http://arxiv.org/pdf/1806.04346v1.pdf
|
ACL 2018 7
|
[
"Ruidan He",
"Wee Sun Lee",
"Hwee Tou Ng",
"Daniel Dahlmeier"
] |
[
"Aspect-Based Sentiment Analysis (ABSA)",
"Classification",
"General Classification",
"Sentiment Analysis",
"Sentiment Classification"
] | 2018-06-12T00:00:00 |
https://aclanthology.org/P18-2092
|
https://aclanthology.org/P18-2092.pdf
|
exploiting-document-knowledge-for-aspect-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/optimizing-variational-quantum-circuits-using
|
1806.04344
| null | null |
Optimizing Variational Quantum Circuits using Evolution Strategies
|
This version was withdrawn by arXiv administrators because the submitter did
not have the right to agree to the license at the time of submission.
| null |
http://arxiv.org/abs/1806.04344v1
|
http://arxiv.org/pdf/1806.04344v1.pdf
| null |
[
"Johannes S. Otterbach"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/focused-hierarchical-rnns-for-conditional
|
1806.04342
| null | null |
Focused Hierarchical RNNs for Conditional Sequence Processing
|
Recurrent Neural Networks (RNNs) with attention mechanisms have obtained
state-of-the-art results for many sequence processing tasks. Most of these
models use a simple form of encoder with attention that looks over the entire
sequence and assigns a weight to each token independently. We present a
mechanism for focusing RNN encoders for sequence modelling tasks which allows
them to attend to key parts of the input as needed. We formulate this using a
multi-layer conditional sequence encoder that reads in one token at a time and
makes a discrete decision on whether the token is relevant to the context or
question being asked. The discrete gating mechanism takes in the context
embedding and the current hidden state as inputs and controls information flow
into the layer above. We train it using policy gradient methods. We evaluate
this method on several types of tasks with different attributes. First, we
evaluate the method on synthetic tasks which allow us to evaluate the model for
its generalization ability and probe the behavior of the gates in more
controlled settings. We then evaluate this approach on large scale Question
Answering tasks including the challenging MS MARCO and SearchQA tasks. Our
models show consistent improvements for both tasks over prior work and our
baselines. They also generalize significantly better on synthetic tasks
compared to the baselines.
| null |
http://arxiv.org/abs/1806.04342v1
|
http://arxiv.org/pdf/1806.04342v1.pdf
|
ICML 2018 7
|
[
"Nan Rosemary Ke",
"Konrad Zolna",
"Alessandro Sordoni",
"Zhouhan Lin",
"Adam Trischler",
"Yoshua Bengio",
"Joelle Pineau",
"Laurent Charlin",
"Chris Pal"
] |
[
"Open-Domain Question Answering",
"Policy Gradient Methods",
"Question Answering"
] | 2018-06-12T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2312
|
http://proceedings.mlr.press/v80/ke18a/ke18a.pdf
|
focused-hierarchical-rnns-for-conditional-1
| null |
[] |
https://paperswithcode.com/paper/when-will-gradient-methods-converge-to-max
|
1806.04339
| null |
Hygv0sC5F7
|
When Will Gradient Methods Converge to Max-margin Classifier under ReLU Models?
|
We study the implicit bias of gradient descent methods in solving a binary
classification problem over a linearly separable dataset. The classifier is
described by a nonlinear ReLU model and the objective function adopts the
exponential loss function. We first characterize the landscape of the loss
function and show that there can exist spurious asymptotic local minima besides
asymptotic global minima. We then show that gradient descent (GD) can converge
to either a global or a local max-margin direction, or may diverge from the
desired max-margin direction in a general context. For stochastic gradient
descent (SGD), we show that it converges in expectation to either the global or
the local max-margin direction if SGD converges. We further explore the
implicit bias of these algorithms in learning a multi-neuron network under
certain stationary conditions, and show that the learned classifier maximizes
the margins of each sample pattern partition under the ReLU activation.
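As a hedged illustration of the setting studied above, the sketch below runs (sub)gradient descent on the exponential loss of a single ReLU neuron over a linearly separable 2-D dataset and prints the normalised weight direction, which is the implicit-bias object the abstract discusses. The benign initialisation, step size, and toy data are illustrative assumptions chosen to avoid the spurious minima mentioned in the abstract.

```python
# Hedged sketch: gradient descent on the exponential loss of a ReLU model over
# a linearly separable 2-D dataset, tracking the normalised weight direction.
import numpy as np

rng = np.random.default_rng(1)
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(20, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(20, 2))
X = np.vstack([X_pos, X_neg])
y = np.array([1.0] * 20 + [-1.0] * 20)

def relu(z):
    return np.maximum(z, 0.0)

w = np.array([0.1, 0.1])   # benign initialisation pointing at the positive class
eta = 0.01
for _ in range(5000):
    z = X @ w
    # subgradient of sum_i exp(-y_i * relu(w . x_i))
    grad = -((np.exp(-y * relu(z)) * y * (z > 0)) @ X)
    w -= eta * grad

print("normalised direction:", w / np.linalg.norm(w))
```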
|
We study the implicit bias of gradient descent methods in solving a binary classification problem over a linearly separable dataset.
|
http://arxiv.org/abs/1806.04339v2
|
http://arxiv.org/pdf/1806.04339v2.pdf
|
ICLR 2019 5
|
[
"Tengyu Xu",
"Yi Zhou",
"Kaiyi Ji",
"Yingbin Liang"
] |
[
"Binary Classification"
] | 2018-06-12T00:00:00 |
https://openreview.net/forum?id=Hygv0sC5F7
|
https://openreview.net/pdf?id=Hygv0sC5F7
|
when-will-gradient-methods-converge-to-max-1
| null |
[
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/a-compromise-principle-in-deep-monocular
|
1708.08267
| null | null |
A Compromise Principle in Deep Monocular Depth Estimation
|
Monocular depth estimation, which plays a key role in understanding 3D scene
geometry, is fundamentally an ill-posed problem. Existing methods based on deep
convolutional neural networks (DCNNs) have examined this problem by learning
convolutional networks to estimate continuous depth maps from monocular images.
However, we find that training a network to predict a high spatial resolution
continuous depth map often suffers from poor local solutions. In this paper, we
hypothesize that achieving a compromise between spatial and depth resolutions
can improve network training. Based on this "compromise principle", we propose
a regression-classification cascaded network (RCCN), which consists of a
regression branch predicting a low spatial resolution continuous depth map and
a classification branch predicting a high spatial resolution discrete depth
map. The two branches form a cascaded structure allowing the classification and
regression branches to benefit from each other. By leveraging large-scale raw
training datasets and some data augmentation strategies, our network achieves
top or state-of-the-art results on the NYU Depth V2, KITTI, and Make3D
benchmarks.
| null |
http://arxiv.org/abs/1708.08267v2
|
http://arxiv.org/pdf/1708.08267v2.pdf
| null |
[
"Huan Fu",
"Mingming Gong",
"Chaohui Wang",
"DaCheng Tao"
] |
[
"Classification",
"Data Augmentation",
"Depth Estimation",
"General Classification",
"Monocular Depth Estimation",
"regression"
] | 2017-08-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fast-and-accurate-tensor-completion-with
|
1804.06128
| null | null |
Fast and Accurate Tensor Completion with Total Variation Regularized Tensor Trains
|
We propose a new tensor completion method based on tensor trains. The
to-be-completed tensor is modeled as a low-rank tensor train, where we use the
known tensor entries and their coordinates to update the tensor train. A novel
tensor train initialization procedure is proposed specifically for image and
video completion, which is demonstrated to ensure fast convergence of the
completion algorithm. The tensor train framework is also shown to easily
accommodate Total Variation and Tikhonov regularization due to their low-rank
tensor train representations. Image and video inpainting experiments verify the
superiority of the proposed scheme in terms of both speed and scalability,
where a speedup of up to 155X is observed compared to state-of-the-art tensor
completion methods at a similar accuracy. Moreover, we demonstrate the proposed
scheme is especially advantageous over existing algorithms when only tiny
portions (say, 1%) of the to-be-completed images/videos are known.
|
We propose a new tensor completion method based on tensor trains.
|
http://arxiv.org/abs/1804.06128v3
|
http://arxiv.org/pdf/1804.06128v3.pdf
| null |
[
"Ching-Yun Ko",
"Kim Batselier",
"Wenjian Yu",
"Ngai Wong"
] |
[
"Video Inpainting"
] | 2018-04-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/small-loss-bounds-for-online-learning-with
|
1711.03639
| null | null |
Small-loss bounds for online learning with partial information
|
We consider the problem of adversarial (non-stochastic) online learning with partial information feedback, where at each round, a decision maker selects an action from a finite set of alternatives. We develop a black-box approach for such problems where the learner observes as feedback only losses of a subset of the actions that includes the selected action. When losses of actions are non-negative, under the graph-based feedback model introduced by Mannor and Shamir, we offer algorithms that attain the so called "small-loss" $o(\alpha L^{\star})$ regret bounds with high probability, where $\alpha$ is the independence number of the graph, and $L^{\star}$ is the loss of the best action. Prior to our work, there was no data-dependent guarantee for general feedback graphs even for pseudo-regret (without dependence on the number of actions, i.e. utilizing the increased information feedback). Taking advantage of the black-box nature of our technique, we extend our results to many other applications such as semi-bandits (including routing in networks), contextual bandits (even with an infinite comparator class), as well as learning with slowly changing (shifting) comparators. In the special case of classical bandit and semi-bandit problems, we provide optimal small-loss, high-probability guarantees of $\tilde{O}(\sqrt{dL^{\star}})$ for actual regret, where $d$ is the number of actions, answering open questions of Neu. Previous bounds for bandits and semi-bandits were known only for pseudo-regret and only in expectation. We also offer an optimal $\tilde{O}(\sqrt{\kappa L^{\star}})$ regret guarantee for fixed feedback graphs with clique-partition number at most $\kappa$.
| null |
https://arxiv.org/abs/1711.03639v5
|
https://arxiv.org/pdf/1711.03639v5.pdf
| null |
[
"Thodoris Lykouris",
"Karthik Sridharan",
"Eva Tardos"
] |
[
"Multi-Armed Bandits"
] | 2017-11-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/automatic-ship-detection-of-remote-sensing
|
1806.04331
| null | null |
Automatic Ship Detection of Remote Sensing Images from Google Earth in Complex Scenes Based on Multi-Scale Rotation Dense Feature Pyramid Networks
|
Ship detection has been playing a significant role in the field of remote
sensing for a long time but it is still full of challenges. The main
limitations of traditional ship detection methods usually lie in the complexity
of application scenarios, the difficulty of intensive object detection and the
redundancy of detection region. In order to solve such problems above, we
propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN)
which can effectively detect ship in different scenes including ocean and port.
Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is
aimed at solving the problem resulting from the narrow width of ships.
Compared with previous multi-scale detectors such as Feature Pyramid Network
(FPN), DFPN builds the high-level semantic feature-maps for all scales by means
of dense connections, thereby enhancing feature propagation and encouraging
feature reuse.
dense arrangement, we design a rotation anchor strategy to predict the minimum
circumscribed rectangle of the object so as to reduce the redundant detection
region and improve the recall. Furthermore, we also propose multi-scale ROI
Align for the purpose of maintaining the completeness of semantic and spatial
information. Experiments based on remote sensing images from Google Earth for
ship detection show that our detection method based on R-DFPN representation
has a state-of-the-art performance.
|
Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall.
|
http://arxiv.org/abs/1806.04331v1
|
http://arxiv.org/pdf/1806.04331v1.pdf
| null |
[
"Xue Yang",
"Hao Sun",
"Kun fu",
"Jirui Yang",
"Xian Sun",
"Menglong Yan",
"Zhi Guo"
] |
[
"object-detection",
"Object Detection"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/neural-network-models-for-paraphrase
|
1806.04330
| null | null |
Neural Network Models for Paraphrase Identification, Semantic Textual Similarity, Natural Language Inference, and Question Answering
|
In this paper, we analyze several neural network designs (and their
variations) for sentence pair modeling and compare their performance
extensively across eight datasets, including paraphrase identification,
semantic textual similarity, natural language inference, and question answering
tasks. Although most of these models have claimed state-of-the-art performance,
the original papers often reported on only one or two selected datasets. We
provide a systematic study and show that (i) encoding contextual information by
LSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help
as much as previously claimed but surprisingly improves performance on Twitter
datasets, (iii) the Enhanced Sequential Inference Model is the best so far for
larger datasets, while the Pairwise Word Interaction Model achieves the best
performance when less data is available. We release our implementations as an
open-source toolkit.
|
In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks.
|
http://arxiv.org/abs/1806.04330v2
|
http://arxiv.org/pdf/1806.04330v2.pdf
|
COLING 2018 8
|
[
"Wuwei Lan",
"Wei Xu"
] |
[
"Natural Language Inference",
"Paraphrase Identification",
"Question Answering",
"Semantic Textual Similarity",
"Sentence",
"Sentence Pair Modeling"
] | 2018-06-12T00:00:00 |
https://aclanthology.org/C18-1328
|
https://aclanthology.org/C18-1328.pdf
|
neural-network-models-for-paraphrase-1
| null |
[] |
https://paperswithcode.com/paper/sparse-collaborative-or-nonnegative
|
1806.04329
| null | null |
Sparse, Collaborative, or Nonnegative Representation: Which Helps Pattern Classification?
|
The use of sparse representation (SR) and collaborative representation (CR)
for pattern classification has been widely studied in tasks such as face
recognition and object categorization. Despite the success of SR/CR based
classifiers, it is still arguable whether it is the $\ell_{1}$-norm sparsity or
the $\ell_{2}$-norm collaborative property that brings the success of SR/CR
based classification. In this paper, we investigate the use of nonnegative
representation (NR) for pattern classification, which is largely ignored by
previous work. Our analyses reveal that NR can boost the representation power
of homogeneous samples while limiting the representation power of heterogeneous
samples, making the representation sparse and discriminative simultaneously and
thus providing a more effective solution to representation based classification
than SR/CR. Our experiments demonstrate that the proposed NR based classifier
(NRC) outperforms previous representation based classifiers. With deep features
as inputs, it also achieves state-of-the-art performance on various visual
classification tasks.
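As a hedged sketch in the spirit of the nonnegative-representation classifier described above (not the authors' code), the snippet below codes a test sample over all training samples with nonnegative coefficients using scipy's NNLS solver and assigns the class whose samples give the smallest reconstruction residual; the toy data and class structure are illustrative assumptions.

```python
# Hedged sketch of representation-based classification with a nonnegative code.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
d, n_per_class = 30, 15
classes = [0, 1, 2]
# toy data: each class lives near its own random direction
prototypes = {c: rng.normal(size=d) for c in classes}
D, labels = [], []
for c in classes:
    D.append(prototypes[c] + 0.1 * rng.normal(size=(n_per_class, d)))
    labels += [c] * n_per_class
D = np.vstack(D).T                      # dictionary: columns are training samples
labels = np.array(labels)

y = prototypes[1] + 0.1 * rng.normal(size=d)   # test sample drawn from class 1

coef, _ = nnls(D, y)                    # nonnegative coding over all samples
residuals = {c: np.linalg.norm(y - D[:, labels == c] @ coef[labels == c])
             for c in classes}
print("predicted class:", min(residuals, key=residuals.get))
```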
|
The use of sparse representation (SR) and collaborative representation (CR) for pattern classification has been widely studied in tasks such as face recognition and object categorization.
|
http://arxiv.org/abs/1806.04329v2
|
http://arxiv.org/pdf/1806.04329v2.pdf
| null |
[
"Jun Xu",
"Wangpeng An",
"Lei Zhang",
"David Zhang"
] |
[
"Classification",
"Face Recognition",
"General Classification",
"Object Categorization"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |