paper_url | arxiv_id | nips_id | openreview_id | title | abstract | short_abstract | url_abs | url_pdf | proceeding | authors | tasks | date | conference_url_abs | conference_url_pdf | conference | reproduces_paper | methods |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/concurrent-spatial-and-channel-squeeze
|
1803.02579
| null | null |
Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks
|
Fully convolutional neural networks (F-CNNs) have set the state-of-the-art in
image segmentation for a plethora of applications. Architectural innovations
within F-CNNs have mainly focused on improving spatial encoding or network
connectivity to aid gradient flow. In this paper, we explore an alternate
direction of recalibrating the feature maps adaptively, to boost meaningful
features, while suppressing weak ones. We draw inspiration from the recently
proposed squeeze & excitation (SE) module for channel recalibration of feature
maps for image classification. Towards this end, we introduce three variants of
SE modules for image segmentation, (i) squeezing spatially and exciting
channel-wise (cSE), (ii) squeezing channel-wise and exciting spatially (sSE)
and (iii) concurrent spatial and channel squeeze & excitation (scSE). We
effectively incorporate these SE modules within three different
state-of-the-art F-CNNs (DenseNet, SD-Net, U-Net) and observe consistent
improvement of performance across all architectures, while minimally affecting
model complexity. Evaluations are performed on two challenging applications:
whole brain segmentation on MRI scans (Multi-Atlas Labelling Challenge Dataset)
and organ segmentation on whole body contrast enhanced CT scans (Visceral
Dataset).
|
Fully convolutional neural networks (F-CNNs) have set the state-of-the-art in image segmentation for a plethora of applications.
|
http://arxiv.org/abs/1803.02579v2
|
http://arxiv.org/pdf/1803.02579v2.pdf
| null |
[
"Abhijit Guha Roy",
"Nassir Navab",
"Christian Wachinger"
] |
[
"Brain Segmentation",
"image-classification",
"Image Classification",
"Image Segmentation",
"Organ Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-03-07T00:00:00 | null | null | null | null |
[] |
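The scSE recalibration described in the abstract above is compact enough to sketch directly. The following is a hedged PyTorch sketch of a concurrent spatial and channel squeeze & excitation block; the class name `SCSE`, the `reduction` factor, and the layer sizes are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    """Sketch of concurrent spatial & channel squeeze-and-excitation (scSE)."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        # cSE: squeeze spatially (global average pool), excite channel-wise.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, kernel_size=1), nn.Sigmoid())
        # sSE: squeeze channel-wise (1x1 conv to a single map), excite spatially.
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        # Combine both recalibrations of the input feature map.
        return x * self.cse(x) + x * self.sse(x)

y = SCSE(16)(torch.randn(2, 16, 8, 8))
print(y.shape)  # torch.Size([2, 16, 8, 8])
```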
https://paperswithcode.com/paper/inherent-brain-segmentation-quality-control
|
1804.07046
| null | null |
Inherent Brain Segmentation Quality Control from Fully ConvNet Monte Carlo Sampling
|
We introduce inherent measures for effective quality control of brain
segmentation based on a Bayesian fully convolutional neural network, using
model uncertainty. Monte Carlo samples from the posterior distribution are
efficiently generated using dropout at test time. Based on these samples, we
introduce, in addition to a voxel-wise uncertainty map, three metrics for
structure-wise uncertainty. We then incorporate these structure-wise
uncertainties in group analyses as a measure of confidence in the observation.
Our results show that the metrics are highly correlated to segmentation
accuracy and therefore present an inherent measure of segmentation quality.
Furthermore, group analysis with uncertainty results in effect sizes closer to
that of manual annotations. The introduced uncertainty metrics can not only be
very useful in translation to clinical practice but also provide automated
quality control and group analyses in processing large data repositories.
| null |
http://arxiv.org/abs/1804.07046v2
|
http://arxiv.org/pdf/1804.07046v2.pdf
| null |
[
"Abhijit Guha Roy",
"Sailesh Conjeti",
"Nassir Navab",
"Christian Wachinger"
] |
[
"Brain Segmentation",
"Segmentation",
"Translation"
] | 2018-04-19T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
}
] |
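As a rough illustration of the Dropout entry above, here is a minimal NumPy sketch using the inverted-dropout formulation (rescaling activations at training time instead of scaling weights by $p$ at test time); the two are equivalent in expectation, and the function name and shapes are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p_drop=0.5, train=True):
    """Inverted dropout: zero units with probability p_drop during training
    and rescale the survivors so that no change is needed at test time."""
    if not train or p_drop == 0.0:
        return x
    mask = rng.random(x.shape) >= p_drop
    return x * mask / (1.0 - p_drop)

h = rng.standard_normal((4, 8))
print(dropout(h).shape)  # (4, 8); roughly half the entries are zeroed
```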
https://paperswithcode.com/paper/changemyview-through-concessions-do
|
1806.03223
| null | null |
ChangeMyView Through Concessions: Do Concessions Increase Persuasion?
|
In discourse studies concessions are considered among those argumentative
strategies that increase persuasion. We aim to empirically test this hypothesis
by calculating the distribution of argumentative concessions in persuasive vs.
non-persuasive comments from the ChangeMyView subreddit. This constitutes a
challenging task since concessions are not always part of an argument. Drawing
from a theoretically-informed typology of concessions, we conduct an annotation
task to label a set of polysemous lexical markers as introducing an
argumentative concession or not and we observe their distribution in threads
that achieved and did not achieve persuasion. For the annotation, we used both
expert and novice annotators. With the ultimate goal of conducting the study on
large datasets, we present a self-training method to automatically identify
argumentative concessions using linguistically motivated features. We achieve a
moderate F1 of 57.4% on the development set and 46.0% on the test set via the
self-training method. These results are comparable to state of the art results
on similar tasks of identifying explicit discourse connective types from the
Penn Discourse Treebank. Our findings from the manual labeling and the
classification experiments indicate that the type of argumentative concessions
we investigated is almost equally likely to be used in winning and losing
arguments from the ChangeMyView dataset. While this result seems to contradict
theoretical assumptions, we provide some reasons for this discrepancy related
to the ChangeMyView subreddit.
| null |
http://arxiv.org/abs/1806.03223v1
|
http://arxiv.org/pdf/1806.03223v1.pdf
| null |
[
"Elena Musi",
"Debanjan Ghosh",
"Smaranda Muresan"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-to-reweight-examples-for-robust-deep
|
1803.09050
| null | null |
Learning to Reweight Examples for Robust Deep Learning
|
Deep neural networks have been shown to be very powerful modeling tools for many supervised learning tasks involving complex input patterns. However, they can also easily overfit to training set biases and label noises. In addition to various regularizers, example reweighting algorithms are popular solutions to these problems, but they require careful tuning of additional hyperparameters, such as example mining schedules and regularization hyperparameters. In contrast to past reweighting methods, which typically consist of functions of the cost value of each example, in this work we propose a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions. To determine the example weights, our method performs a meta gradient descent step on the current mini-batch example weights (which are initialized from zero) to minimize the loss on a clean unbiased validation set. Our proposed method can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available.
|
Deep neural networks have been shown to be very powerful modeling tools for many supervised learning tasks involving complex input patterns.
|
https://arxiv.org/abs/1803.09050v3
|
https://arxiv.org/pdf/1803.09050v3.pdf
|
ICML 2018 7
|
[
"Mengye Ren",
"Wenyuan Zeng",
"Bin Yang",
"Raquel Urtasun"
] |
[
"Deep Learning",
"Meta-Learning"
] | 2018-03-24T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1991
|
http://proceedings.mlr.press/v80/ren18a/ren18a.pdf
|
learning-to-reweight-examples-for-robust-deep-1
| null |
[] |
https://paperswithcode.com/paper/data-driven-model-for-the-identification-of
|
1806.03218
| null | null |
Data-driven model for the identification of the rock type at a drilling bit
|
Directional oil well drilling requires high precision of the wellbore
positioning inside the productive area. However, due to the specifics of the
engineering design, sensors that explicitly determine the type of the drilled
rock are located farther than 15m from the drilling bit. As a result, the
target area runaways can be detected only after this distance, which, in turn,
leads to a loss in well productivity and the risk of the need for an expensive
re-boring operation.
We present a novel approach for identifying rock type at the drilling bit
based on machine learning classification methods and data mining on sensors
readings. We compare various machine-learning algorithms, examine extra
features coming from mathematical modeling of drilling mechanics, and show that
the real-time rock type classification error can be reduced from 13.5 % to 9 %.
The approach is applicable for precise directional drilling in relatively thin
target intervals of complex shapes and generalizes appropriately to new wells
that are different from the ones used for training the machine learning model.
| null |
http://arxiv.org/abs/1806.03218v3
|
http://arxiv.org/pdf/1806.03218v3.pdf
| null |
[
"Nikita Klyuchnikov",
"Alexey Zaytsev",
"Arseniy Gruzdev",
"Georgiy Ovchinnikov",
"Ksenia Antipova",
"Leyla Ismailova",
"Ekaterina Muravleva",
"Evgeny Burnaev",
"Artyom Semenikhin",
"Alexey Cherepanov",
"Vitaliy Koryabkin",
"Igor Simon",
"Alexey Tsurgan",
"Fedor Krasnov",
"Dmitry Koroteev"
] |
[
"BIG-bench Machine Learning",
"General Classification"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/generating-liquid-simulations-with
|
1704.07854
| null |
HyeGBj09Fm
|
Generating Liquid Simulations with Deformation-aware Neural Networks
|
We propose a novel approach for deformation-aware neural networks that learn
the weighting and synthesis of dense volumetric deformation fields. Our method
specifically targets the space-time representation of physical surfaces from
liquid simulations. Liquids exhibit highly complex, non-linear behavior under
changing simulation conditions such as different initial conditions. Our
algorithm captures these complex phenomena in two stages: a first neural
network computes a weighting function for a set of pre-computed deformations,
while a second network directly generates a deformation field for refining the
surface. Key for successful training runs in this setting is a suitable loss
function that encodes the effect of the deformations, and a robust calculation
of the corresponding gradients. To demonstrate the effectiveness of our
approach, we showcase our method with several complex examples of flowing
liquids with topology changes. Our representation makes it possible to rapidly
generate the desired implicit surfaces. We have implemented a mobile
application to demonstrate that real-time interactions with complex liquid
effects are possible with our approach.
| null |
http://arxiv.org/abs/1704.07854v4
|
http://arxiv.org/pdf/1704.07854v4.pdf
|
ICLR 2019 5
|
[
"Lukas Prantl",
"Boris Bonev",
"Nils Thuerey"
] |
[] | 2017-04-25T00:00:00 |
https://openreview.net/forum?id=HyeGBj09Fm
|
https://openreview.net/pdf?id=HyeGBj09Fm
|
generating-liquid-simulations-with-1
| null |
[] |
https://paperswithcode.com/paper/learning-in-integer-latent-variable-models
|
1806.03207
| null | null |
Learning in Integer Latent Variable Models with Nested Automatic Differentiation
|
We develop nested automatic differentiation (AD) algorithms for exact
inference and learning in integer latent variable models. Recently, Winner,
Sujono, and Sheldon showed how to reduce marginalization in a class of integer
latent variable models to evaluating a probability generating function which
contains many levels of nested high-order derivatives. We contribute faster and
more stable AD algorithms for this challenging problem and a novel algorithm to
compute exact gradients for learning. These contributions lead to significantly
faster and more accurate learning algorithms, and are the first AD algorithms
whose running time is polynomial in the number of levels of nesting.
| null |
http://arxiv.org/abs/1806.03207v1
|
http://arxiv.org/pdf/1806.03207v1.pdf
|
ICML 2018 7
|
[
"Daniel Sheldon",
"Kevin Winner",
"Debora Sujono"
] |
[] | 2018-06-08T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2357
|
http://proceedings.mlr.press/v80/sheldon18a/sheldon18a.pdf
|
learning-in-integer-latent-variable-models-1
| null |
[] |
https://paperswithcode.com/paper/spreading-vectors-for-similarity-search
|
1806.03198
| null |
SkGuG2R5tm
|
Spreading vectors for similarity search
|
Discretizing multi-dimensional data distributions is a fundamental step of modern indexing methods. State-of-the-art techniques learn parameters of quantizers on training data for optimal performance, thus adapting quantizers to the data. In this work, we propose to reverse this paradigm and adapt the data to the quantizer: we train a neural net whose last layer forms a fixed parameter-free quantizer, such as pre-defined points of a hyper-sphere. As a proxy objective, we design and train a neural network that favors uniformity in the spherical latent space, while preserving the neighborhood structure after the mapping. We propose a new regularizer derived from the Kozachenko--Leonenko differential entropy estimator to enforce uniformity and combine it with a locality-aware triplet loss. Experiments show that our end-to-end approach outperforms most learned quantization methods, and is competitive with the state of the art on widely adopted benchmarks. Furthermore, we show that training without the quantization step results in almost no difference in accuracy, but yields a generic catalyzer that can be applied with any subsequent quantizer.
|
Discretizing multi-dimensional data distributions is a fundamental step of modern indexing methods.
|
https://arxiv.org/abs/1806.03198v3
|
https://arxiv.org/pdf/1806.03198v3.pdf
|
ICLR 2019 5
|
[
"Alexandre Sablayrolles",
"Matthijs Douze",
"Cordelia Schmid",
"Hervé Jégou"
] |
[
"Quantization",
"Triplet"
] | 2018-06-08T00:00:00 |
https://openreview.net/forum?id=SkGuG2R5tm
|
https://openreview.net/pdf?id=SkGuG2R5tm
|
spreading-vectors-for-similarity-search-1
| null |
[] |
https://paperswithcode.com/paper/hearst-patterns-revisited-automatic-hypernym
|
1806.03191
| null | null |
Hearst Patterns Revisited: Automatic Hypernym Detection from Large Text Corpora
|
Methods for unsupervised hypernym detection may broadly be categorized
according to two paradigms: pattern-based and distributional methods. In this
paper, we study the performance of both approaches on several hypernymy tasks
and find that simple pattern-based methods consistently outperform
distributional methods on common benchmark datasets. Our results show that
pattern-based models provide important contextual constraints which are not yet
captured in distributional methods.
|
Methods for unsupervised hypernym detection may broadly be categorized according to two paradigms: pattern-based and distributional methods.
|
http://arxiv.org/abs/1806.03191v1
|
http://arxiv.org/pdf/1806.03191v1.pdf
|
ACL 2018 7
|
[
"Stephen Roller",
"Douwe Kiela",
"Maximilian Nickel"
] |
[] | 2018-06-08T00:00:00 |
https://aclanthology.org/P18-2057
|
https://aclanthology.org/P18-2057.pdf
|
hearst-patterns-revisited-automatic-hypernym-1
| null |
[] |
https://paperswithcode.com/paper/the-well-tempered-lasso
|
1806.03190
| null | null |
The Well Tempered Lasso
|
We study the complexity of the entire regularization path for least squares
regression with 1-norm penalty, known as the Lasso. Every regression parameter
in the Lasso changes linearly as a function of the regularization value. The
number of changes is regarded as the Lasso's complexity. Experimental results
using exact path following exhibit polynomial complexity of the Lasso in the
problem size. Alas, the path complexity of the Lasso on artificially designed
regression problems is exponential.
We use smoothed analysis as a mechanism for bridging the gap between worst
case settings and the de facto low complexity. Our analysis assumes that the
observed data has a tiny amount of intrinsic noise. We then prove that the
Lasso's complexity is polynomial in the problem size. While building upon the
seminal work of Spielman and Teng on smoothed complexity, our analysis is
morally different as it is divorced from specific path following algorithms. We
verify the validity of our analysis in experiments with both worst case
settings and real datasets. The empirical results we obtain closely match our
analysis.
| null |
http://arxiv.org/abs/1806.03190v1
|
http://arxiv.org/pdf/1806.03190v1.pdf
| null |
[
"Yuanzhi Li",
"Yoram Singer"
] |
[
"regression"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/wave-u-net-a-multi-scale-neural-network-for
|
1806.03185
| null | null |
Wave-U-Net: A Multi-Scale Neural Network for End-to-End Audio Source Separation
|
Models for audio source separation usually operate on the magnitude spectrum,
which ignores phase information and makes separation performance dependent on
hyper-parameters for the spectral front-end. Therefore, we investigate
end-to-end source separation in the time-domain, which allows modelling phase
information and avoids fixed spectral transformations. Due to high sampling
rates for audio, employing a long temporal input context on the sample level is
difficult, but required for high quality separation results because of
long-range temporal correlations. In this context, we propose the Wave-U-Net,
an adaptation of the U-Net to the one-dimensional time domain, which repeatedly
resamples feature maps to compute and combine features at different time
scales. We introduce further architectural improvements, including an output
layer that enforces source additivity, an upsampling technique and a
context-aware prediction framework to reduce output artifacts. Experiments for
singing voice separation indicate that our architecture yields a performance
comparable to a state-of-the-art spectrogram-based U-Net architecture, given
the same data. Finally, we reveal a problem with outliers in the currently used
SDR evaluation metrics and suggest reporting rank-based statistics to alleviate
this problem.
|
Models for audio source separation usually operate on the magnitude spectrum, which ignores phase information and makes separation performance dependent on hyper-parameters for the spectral front-end.
|
http://arxiv.org/abs/1806.03185v1
|
http://arxiv.org/pdf/1806.03185v1.pdf
| null |
[
"Daniel Stoller",
"Sebastian Ewert",
"Simon Dixon"
] |
[
"Audio Source Separation",
"Music Source Separation"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
}
] |
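Two of the building blocks listed above (ReLU and 2x2 max pooling with stride 2, as used between U-Net encoder stages) are easy to illustrate concretely. The sketch below is a hedged NumPy illustration of those operations only; it assumes even spatial dimensions and is not the Wave-U-Net implementation.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a single feature map (H and W even)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def relu(x):
    # Rectified linear unit: max(0, x) applied element-wise.
    return np.maximum(x, 0.0)

fmap = np.arange(16.0).reshape(4, 4)
print(max_pool_2x2(fmap))   # [[ 5.  7.] [13. 15.]]
print(relu(fmap - 8.0))     # negatives clipped to zero
```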
https://paperswithcode.com/paper/averaging-stochastic-gradient-descent-on
|
1802.09128
| null | null |
Averaging Stochastic Gradient Descent on Riemannian Manifolds
|
We consider the minimization of a function defined on a Riemannian manifold
$\mathcal{M}$ accessible only through unbiased estimates of its gradients. We
develop a geometric framework to transform a sequence of slowly converging
iterates generated from stochastic gradient descent (SGD) on $\mathcal{M}$ to
an averaged iterate sequence with a robust and fast $O(1/n)$ convergence rate.
We then present an application of our framework to geodesically-strongly-convex
(and possibly Euclidean non-convex) problems. Finally, we demonstrate how these
ideas apply to the case of streaming $k$-PCA, where we show how to accelerate
the slow rate of the randomized power method (without requiring knowledge of
the eigengap) into a robust algorithm achieving the optimal rate of
convergence.
| null |
http://arxiv.org/abs/1802.09128v2
|
http://arxiv.org/pdf/1802.09128v2.pdf
| null |
[
"Nilesh Tripuraneni",
"Nicolas Flammarion",
"Francis Bach",
"Michael. I. Jordan"
] |
[
"Riemannian optimization"
] | 2018-02-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multilingual-sentiment-analysis-an-rnn-based
|
1806.04511
| null | null |
Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data
|
Sentiment analysis is a widely studied NLP task where the goal is to
determine opinions, emotions, and evaluations of users towards a product, an
entity or a service that they are reviewing. One of the biggest challenges for
sentiment analysis is that it is highly language dependent. Word embeddings,
sentiment lexicons, and even annotated data are language specific. Further,
optimizing models for each language is very time-consuming and labor-intensive,
especially for recurrent neural network models. From a resource perspective, it
is very challenging to collect data for different languages.
In this paper, we look for an answer to the following research question: can
a sentiment analysis model trained on a language be reused for sentiment
analysis in other languages, Russian, Spanish, Turkish, and Dutch, where the
data is more limited? Our goal is to build a single model in the language with
the largest dataset available for the task, and reuse it for languages that
have limited resources. For this purpose, we train a sentiment analysis model
using recurrent neural networks with reviews in English. We then translate
reviews in other languages and reuse this model to evaluate the sentiments.
Experimental results show that our robust approach of single model trained on
English reviews statistically significantly outperforms the baselines in
several different languages.
| null |
http://arxiv.org/abs/1806.04511v1
|
http://arxiv.org/pdf/1806.04511v1.pdf
| null |
[
"Ethem F. Can",
"Aysu Ezen-Can",
"Fazli Can"
] |
[
"Sentiment Analysis",
"Word Embeddings"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/evaluating-cbr-similarity-functions-for-bam
|
1806.03155
| null | null |
Evaluating CBR Similarity Functions for BAM Switching in Networks with Dynamic Traffic Profile
|
In an increasingly complex scenario for network management, a solution that
allows configuration in a more autonomous way with less intervention from the
network manager is expected. This paper presents an evaluation of similarity
functions that are necessary in the context of using a learning strategy for
finding solutions. The learning approach considered is based on Case-Based
Reasoning (CBR) and is applied to a network scenario where different Bandwidth
Allocation Models (BAMs) behaviors are used and must be eventually switched
looking for the best possible network operation. In this context, it is
required to identify and configure an adequate similarity function that will be
used in the learning process to recover similar solutions previously
considered. This paper introduces the similarity functions, explains the
relevant aspects of the learning process in which the similarity function plays
a role and, finally, presents a proof of concept for a specific similarity
function adopted. Results show that the similarity function was capable of
retrieving similar results from the existing use case database. As such, the use
of similarity functions with the CBR technique has proved potentially
satisfactory for supporting BAM switching decisions, mostly driven by the
dynamics of the input traffic profile.
| null |
http://arxiv.org/abs/1806.03155v1
|
http://arxiv.org/pdf/1806.03155v1.pdf
| null |
[
"Eliseu Oliveira",
"Rafael Freitas",
"Joberto Martins"
] |
[
"Management"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/reinforcing-adversarial-robustness-using
|
1711.08001
| null | null |
Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training
|
In this paper we study leveraging confidence information induced by
adversarial training to reinforce adversarial robustness of a given
adversarially trained model. A natural measure of confidence is $\|F({\bf
x})\|_\infty$ (i.e., how confident $F$ is about its prediction). We start by
analyzing an adversarial training formulation proposed by Madry et al. We
demonstrate that, under a variety of instantiations, an only somewhat good
solution to their objective induces confidence to be a discriminator, which can
distinguish between right and wrong model predictions in a neighborhood of a
point sampled from the underlying distribution. Based on this, we propose
Highly Confident Near Neighbor (${\tt HCNN}$), a framework that combines
confidence information and nearest neighbor search, to reinforce adversarial
robustness of a base model. We give algorithms in this framework and perform a
detailed empirical study. We report encouraging experimental results that
support our analysis, and also discuss problems we observed with existing
adversarial training.
| null |
http://arxiv.org/abs/1711.08001v3
|
http://arxiv.org/pdf/1711.08001v3.pdf
|
ICML 2018 7
|
[
"Xi Wu",
"Uyeong Jang",
"Jiefeng Chen",
"Lingjiao Chen",
"Somesh Jha"
] |
[
"Adversarial Robustness"
] | 2017-11-21T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2298
|
http://proceedings.mlr.press/v80/wu18e/wu18e.pdf
|
reinforcing-adversarial-robustness-using-1
| null |
[] |
https://paperswithcode.com/paper/simple-bayesian-algorithms-for-best-arm
|
1602.08448
| null | null |
Simple Bayesian Algorithms for Best Arm Identification
|
This paper considers the optimal adaptive allocation of measurement effort
for identifying the best among a finite set of options or designs. An
experimenter sequentially chooses designs to measure and observes noisy signals
of their quality with the goal of confidently identifying the best design after
a small number of measurements. This paper proposes three simple and intuitive
Bayesian algorithms for adaptively allocating measurement effort, and
formalizes a sense in which these seemingly naive rules are the best possible.
One proposal is top-two probability sampling, which computes the two designs
with the highest posterior probability of being optimal, and then randomizes to
select among these two. One is a variant of top-two sampling which considers
not only the probability a design is optimal, but the expected amount by which
its quality exceeds that of other designs. The final algorithm is a modified
version of Thompson sampling that is tailored for identifying the best design.
We prove that these simple algorithms satisfy a sharp optimality property. In a
frequentist setting where the true quality of the designs is fixed, one hopes
the posterior definitively identifies the optimal design, in the sense that
the posterior probability assigned to the event that some other design is
optimal converges to zero as measurements are collected. We show that under the
proposed algorithms this convergence occurs at an exponential rate, and the
corresponding exponent is the best possible among all allocation rules.
|
This paper proposes three simple and intuitive Bayesian algorithms for adaptively allocating measurement effort, and formalizes a sense in which these seemingly naive rules are the best possible.
|
http://arxiv.org/abs/1602.08448v4
|
http://arxiv.org/pdf/1602.08448v4.pdf
| null |
[
"Daniel Russo"
] |
[
"Thompson Sampling"
] | 2016-02-26T00:00:00 | null | null | null | null |
[] |
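The abstract above describes the top-two allocation rules informally. Below is a hedged sketch of a single allocation step in the spirit of the top-two Thompson-sampling variant, assuming Bernoulli rewards with Beta(1, 1) priors; the function name, the tuning parameter `beta`, and the resample-until-distinct loop are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_two_step(successes, failures, beta=0.5):
    """Sample a posterior-best design, resample a distinct 'challenger',
    then randomize between the two with probability beta."""
    first = int(np.argmax(rng.beta(successes + 1, failures + 1)))
    second = first
    while second == first:
        second = int(np.argmax(rng.beta(successes + 1, failures + 1)))
    return first if rng.random() < beta else second

succ = np.array([3, 5, 1])   # observed successes per design
fail = np.array([7, 5, 9])   # observed failures per design
print(top_two_step(succ, fail))
```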
https://paperswithcode.com/paper/neural-message-passing-with-edge-updates-for
|
1806.03146
| null | null |
Neural Message Passing with Edge Updates for Predicting Properties of Molecules and Materials
|
Neural message passing on molecular graphs is one of the most promising
methods for predicting formation energy and other properties of molecules and
materials. In this work we extend the neural message passing model with an edge
update network which allows the information exchanged between atoms to depend
on the hidden state of the receiving atom. We benchmark the proposed model on
three publicly available datasets (QM9, The Materials Project and OQMD) and
show that the proposed model yields superior prediction of formation energies
and other properties on all three datasets in comparison with the best
published results. Furthermore we investigate different methods for
constructing the graph used to represent crystalline structures and we find
that using a graph based on K-nearest neighbors achieves better prediction
accuracy than using maximum distance cutoff or the Voronoi tessellation graph.
|
Neural message passing on molecular graphs is one of the most promising methods for predicting formation energy and other properties of molecules and materials.
|
http://arxiv.org/abs/1806.03146v1
|
http://arxiv.org/pdf/1806.03146v1.pdf
| null |
[
"Peter Bjørn Jørgensen",
"Karsten Wedel Jacobsen",
"Mikkel N. Schmidt"
] |
[
"Drug Discovery",
"Formation Energy"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Shifted Softplus** is an activation function ${\\rm ssp}(x) = \\ln( 0.5 e^{x} + 0.5 )$, which [SchNet](https://paperswithcode.com/method/schnet) employs as non-linearity throughout the network in order to obtain a smooth potential energy surface. The shifting ensures that ${\\rm ssp}(0) = 0$ and improves the convergence of the network. This activation function shows similarity to ELUs, while having infinite order of continuity.",
"full_name": "Shifted Softplus",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Shifted Softplus",
"source_title": "SchNet: A continuous-filter convolutional neural network for modeling quantum interactions",
"source_url": "http://arxiv.org/abs/1706.08566v5"
},
{
"code_snippet_url": "",
"description": "**SchNet** is an end-to-end deep neural network architecture based on continuous-filter convolutions. It follows the deep tensor neural network framework, i.e. atom-wise representations are constructed by starting from embedding vectors that characterize the atom type before introducing the configuration of the system by a series of interaction blocks.",
"full_name": "Schrödinger Network",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)",
"name": "Graph Models",
"parent": null
},
"name": "SchNet",
"source_title": "SchNet: A continuous-filter convolutional neural network for modeling quantum interactions",
"source_url": "http://arxiv.org/abs/1706.08566v5"
}
] |
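The Shifted Softplus entry above gives the closed form ${\rm ssp}(x) = \ln(0.5 e^{x} + 0.5)$, which can be checked directly. The following is a minimal NumPy sketch; the `log1p` rewrite for numerical stability is our own choice.

```python
import numpy as np

def shifted_softplus(x):
    """ssp(x) = ln(0.5 * exp(x) + 0.5) = log1p(exp(x)) - ln(2); ssp(0) = 0."""
    return np.log1p(np.exp(x)) - np.log(2.0)

print(shifted_softplus(0.0))                    # 0.0
print(shifted_softplus(np.array([-1.0, 1.0])))  # smooth, ELU-like curve
```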
https://paperswithcode.com/paper/structured-output-learning-with-abstention
|
1803.08355
| null | null |
Structured Output Learning with Abstention: Application to Accurate Opinion Prediction
|
Motivated by Supervised Opinion Analysis, we propose a novel framework
devoted to Structured Output Learning with Abstention (SOLA). The structure
prediction model is able to abstain from predicting some labels in the
structured output at a cost chosen by the user in a flexible way. For that
purpose, we decompose the problem into the learning of a pair of predictors,
one devoted to structured abstention and the other, to structured output
prediction. To compare fully labeled training data with predictions potentially
containing abstentions, we define a wide class of asymmetric abstention-aware
losses. Learning is achieved by surrogate regression in an appropriate feature
space while prediction with abstention is performed by solving a new pre-image
problem. Thus, SOLA extends recent ideas about Structured Output Prediction via
surrogate problems and calibration theory and enjoys statistical guarantees on
the resulting excess risk. Instantiated on a hierarchical abstention-aware
loss, SOLA is shown to be relevant for fine-grained opinion mining and gives
state-of-the-art results on this task. Moreover, the abstention-aware
representations can be used to competitively predict user-review ratings based
on a sentence-level opinion predictor.
| null |
http://arxiv.org/abs/1803.08355v2
|
http://arxiv.org/pdf/1803.08355v2.pdf
|
ICML 2018 7
|
[
"Alexandre Garcia",
"Slim Essid",
"Chloé Clavel",
"Florence d'Alché-Buc"
] |
[
"Opinion Mining",
"Prediction",
"Sentence"
] | 2018-03-22T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2199
|
http://proceedings.mlr.press/v80/garcia18a/garcia18a.pdf
|
structured-output-learning-with-abstention-1
| null |
[] |
https://paperswithcode.com/paper/fidelity-based-probabilistic-q-learning-for
|
1806.03145
| null | null |
Fidelity-based Probabilistic Q-learning for Control of Quantum Systems
|
The balance between exploration and exploitation is a key problem for
reinforcement learning methods, especially for Q-learning. In this paper, a
fidelity-based probabilistic Q-learning (FPQL) approach is presented to
naturally solve this problem and applied for learning control of quantum
systems. In this approach, fidelity is adopted to help direct the learning
process and the probability of each action to be selected at a certain state is
updated iteratively along with the learning process, which leads to a natural
exploration strategy instead of a pointed one with configured parameters. A
probabilistic Q-learning (PQL) algorithm is first presented to demonstrate the
basic idea of probabilistic action selection. Then the FPQL algorithm is
presented for learning control of quantum systems. Two examples (a spin-1/2
system and a lambda-type atomic system) are demonstrated to test the performance
of the FPQL algorithm. The results show that FPQL algorithms attain a better
balance between exploration and exploitation, and can also avoid local optimal
policies and accelerate the learning process.
| null |
http://arxiv.org/abs/1806.03145v1
|
http://arxiv.org/pdf/1806.03145v1.pdf
| null |
[
"Chunlin Chen",
"Daoyi Dong",
"Han-Xiong Li",
"Jian Chu",
"Tzyh-Jong Tarn"
] |
[
"Q-Learning",
"Reinforcement Learning"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Q-Learning** is an off-policy temporal difference control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\max\\_{a}Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThe learned action-value function $Q$ directly approximates $q\\_{*}$, the optimal action-value function, independent of the policy being followed.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Q-Learning",
"introduced_year": 1984,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Off-Policy TD Control",
"parent": null
},
"name": "Q-Learning",
"source_title": null,
"source_url": null
}
] |
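The Q-Learning entry above states the off-policy TD update in equation form. The following is a minimal tabular sketch of that update with epsilon-greedy action selection on a toy, made-up environment; all constants and the environment stub are illustrative assumptions, not the FPQL algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1  # step size, discount, exploration rate

def step(s, a):
    """Toy environment stub: random next state, reward 1 when the action
    matches the state's parity (purely illustrative)."""
    return int(rng.integers(n_states)), float(a == s % 2)

s = 0
for _ in range(1000):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    # Off-policy TD update from the description above.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
print(Q.round(2))
```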
https://paperswithcode.com/paper/black-box-fdr
|
1806.03143
| null | null |
Black Box FDR
|
Analyzing large-scale, multi-experiment studies requires scientists to test
each experimental outcome for statistical significance and then assess the
results as a whole. We present Black Box FDR (BB-FDR), an empirical-Bayes
method for analyzing multi-experiment studies when many covariates are gathered
per experiment. BB-FDR learns a series of black box predictive models to boost
power and control the false discovery rate (FDR) at two stages of study
analysis. In Stage 1, it uses a deep neural network prior to report which
experiments yielded significant outcomes. In Stage 2, a separate black box
model of each covariate is used to select features that have significant
predictive power across all experiments. In benchmarks, BB-FDR outperforms
competing state-of-the-art methods in both stages of analysis. We apply BB-FDR
to two real studies on cancer drug efficacy. For both studies, BB-FDR increases
the proportion of significant outcomes discovered and selects variables that
reveal key genomic drivers of drug sensitivity and resistance in cancer.
| null |
http://arxiv.org/abs/1806.03143v1
|
http://arxiv.org/pdf/1806.03143v1.pdf
|
ICML 2018 7
|
[
"Wesley Tansey",
"Yixin Wang",
"David M. Blei",
"Raul Rabadan"
] |
[] | 2018-06-08T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1954
|
http://proceedings.mlr.press/v80/tansey18a/tansey18a.pdf
|
black-box-fdr-1
| null |
[] |
https://paperswithcode.com/paper/hierarchy-of-gans-for-learning-embodied-self
|
1806.04012
| null | null |
Hierarchy of GANs for learning embodied self-awareness model
|
In recent years several architectures have been proposed to learn embodied
agents complex self-awareness models. In this paper, dynamic incremental
self-awareness (SA) models are proposed that allow the experiences of an agent
to be modeled in a hierarchical fashion, starting from simpler situations
to more structured ones. Each situation is learned from subsets of private
agent perception data as a model capable to predict normal behaviors and detect
abnormalities. Hierarchical SA models have been already proposed using low
dimensional sensorial inputs. In this work, a hierarchical model is introduced
by means of a cross-modal Generative Adversarial Networks (GANs) processing
high dimensional visual data. Different levels of the GANs are detected in a
self-supervised manner using the GAN discriminators' decision boundaries. Real
experiments on semi-autonomous ground vehicles are presented.
| null |
http://arxiv.org/abs/1806.04012v1
|
http://arxiv.org/pdf/1806.04012v1.pdf
| null |
[
"Mahdyar Ravanbakhsh",
"Mohamad Baydoun",
"Damian Campo",
"Pablo Marin",
"David Martin",
"Lucio Marcenaro",
"Carlo S. Regazzoni"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/composite-functional-gradient-learning-of
|
1801.06309
| null | null |
Composite Functional Gradient Learning of Generative Adversarial Models
|
This paper first presents a theory for generative adversarial methods that
does not rely on the traditional minimax formulation. It shows that with a
strong discriminator, a good generator can be learned so that the KL divergence
between the distributions of real data and generated data improves after each
functional gradient step until it converges to zero. Based on the theory, we
propose a new stable generative adversarial method. A theoretical insight into
the original GAN from this new viewpoint is also provided. The experiments on
image generation show the effectiveness of our new method.
|
This paper first presents a theory for generative adversarial methods that does not rely on the traditional minimax formulation.
|
http://arxiv.org/abs/1801.06309v2
|
http://arxiv.org/pdf/1801.06309v2.pdf
|
ICML 2018 7
|
[
"Rie Johnson",
"Tong Zhang"
] |
[
"Image Generation"
] | 2018-01-19T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2012
|
http://proceedings.mlr.press/v80/johnson18a/johnson18a.pdf
|
composite-functional-gradient-learning-of-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/geodesic-convolutional-neural-networks-on
|
1501.06297
| null | null |
Geodesic convolutional neural networks on Riemannian manifolds
|
Feature descriptors play a crucial role in a wide range of geometry analysis
and processing applications, including shape correspondence, retrieval, and
segmentation. In this paper, we introduce Geodesic Convolutional Neural
Networks (GCNN), a generalization of the convolutional networks (CNN) paradigm
to non-Euclidean manifolds. Our construction is based on a local geodesic
system of polar coordinates to extract "patches", which are then passed through
a cascade of filters and linear and non-linear operators. The coefficients of
the filters and linear combination weights are optimization variables that are
learned to minimize a task-specific cost function. We use GCNN to learn
invariant shape features, allowing us to achieve state-of-the-art performance in
problems such as shape description, retrieval, and correspondence.
| null |
http://arxiv.org/abs/1501.06297v3
|
http://arxiv.org/pdf/1501.06297v3.pdf
| null |
[
"Jonathan Masci",
"Davide Boscaini",
"Michael M. Bronstein",
"Pierre Vandergheynst"
] |
[
"Retrieval"
] | 2015-01-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fully-automated-primary-particle-size
|
1806.04010
| null | null |
Fully automated primary particle size analysis of agglomerates on transmission electron microscopy images via artificial neural networks
|
There is a high demand for fully automated methods for the analysis of
primary particle size distributions of agglomerates on transmission electron
microscopy images. Therefore, a novel method, based on the utilization of
artificial neural networks, was proposed, implemented and validated. The
training of the artificial neural networks requires large quantities (up to
several hundreds of thousands) of transmission electron microscopy images of
agglomerates consisting of primary particles with known sizes. Since the manual
evaluation of such large amounts of transmission electron microscopy images is
not feasible, a synthesis of lifelike transmission electron microscopy images
as training data was implemented. The proposed method can compete with
state-of-the-art automated imaging particle size methods like the Hough
transformation, ultimate erosion and watershed transformation and is in some
cases even able to outperform these methods. It is however still outperformed
by the manual analysis.
| null |
http://arxiv.org/abs/1806.04010v1
|
http://arxiv.org/pdf/1806.04010v1.pdf
| null |
[
"Max Frei",
"Frank Einar Kruis"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/text-classification-based-on-word-subspace
|
1806.03125
| null | null |
Text Classification based on Word Subspace with Term-Frequency
|
Text classification has become indispensable due to the rapid increase of
text in digital form. Over the past three decades, efforts have been made to
approach this task using various learning algorithms and statistical models
based on bag-of-words (BOW) features. Despite its simple implementation, BOW
features lack semantic meaning representation. To solve this problem, neural
networks started to be employed to learn word vectors, such as the word2vec.
Word2vec embeds word semantic structure into vectors, where the angle between
vectors indicates the meaningful similarity between words. To measure the
similarity between texts, we propose the novel concept of word subspace, which
can represent the intrinsic variability of features in a set of word vectors.
Through this concept, it is possible to model text from word vectors while
holding semantic information. To incorporate the word frequency directly in the
subspace model, we further extend the word subspace to the term-frequency (TF)
weighted word subspace. Based on these new concepts, text classification can be
performed under the mutual subspace method (MSM) framework. The validity of our
modeling is shown through experiments on the Reuters text database, comparing
the results to various state-of-the-art algorithms.
| null |
http://arxiv.org/abs/1806.03125v1
|
http://arxiv.org/pdf/1806.03125v1.pdf
| null |
[
"Erica K. Shimomoto",
"Lincon S. Souza",
"Bernardo B. Gatto",
"Kazuhiro Fukui"
] |
[
"Classification",
"General Classification",
"text-classification",
"Text Classification"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/machine-learning-cicy-threefolds
|
1806.03121
| null | null |
Machine Learning CICY Threefolds
|
The latest techniques from Neural Networks and Support Vector Machines (SVM)
are used to investigate geometric properties of Complete Intersection
Calabi-Yau (CICY) threefolds, a class of manifolds that facilitate string model
building. An advanced neural network classifier and SVM are employed to (1)
learn Hodge numbers and report a remarkable improvement over previous efforts,
(2) query for favourability, and (3) predict discrete symmetries, a highly
imbalanced problem to which both Synthetic Minority Oversampling Technique
(SMOTE) and permutations of the CICY matrix are used to decrease the class
imbalance and improve performance. In each case study, we employ a genetic
algorithm to optimise the hyperparameters of the neural network. We demonstrate
that our approach provides quick diagnostic tools capable of shortlisting
quasi-realistic string models based on compactification over smooth CICYs and
further supports the paradigm that classes of problems in algebraic geometry
can be machine learned.
| null |
http://arxiv.org/abs/1806.03121v3
|
http://arxiv.org/pdf/1806.03121v3.pdf
| null |
[
"Kieran Bull",
"Yang-Hui He",
"Vishnu Jejjala",
"Challenger Mishra"
] |
[
"BIG-bench Machine Learning",
"Diagnostic"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/noisy-subspace-clustering-via-matching
|
1612.03450
| null | null |
Noisy subspace clustering via matching pursuits
|
Sparsity-based subspace clustering algorithms have attracted significant
attention thanks to their excellent performance in practical applications. A
prominent example is the sparse subspace clustering (SSC) algorithm by
Elhamifar and Vidal, which performs spectral clustering based on an adjacency
matrix obtained by sparsely representing each data point in terms of all the
other data points via the Lasso. When the number of data points is large or the
dimension of the ambient space is high, the computational complexity of SSC
quickly becomes prohibitive. Dyer et al. observed that SSC-OMP obtained by
replacing the Lasso by the greedy orthogonal matching pursuit (OMP) algorithm
results in significantly lower computational complexity, while often yielding
comparable performance. The central goal of this paper is an analytical
performance characterization of SSC-OMP for noisy data. Moreover, we introduce
and analyze the SSC-MP algorithm, which employs matching pursuit (MP) in lieu
of OMP. Both SSC-OMP and SSC-MP are proven to succeed even when the subspaces
intersect and when the data points are contaminated by severe noise. The
clustering conditions we obtain for SSC-OMP and SSC-MP are similar to those for
SSC and for the thresholding-based subspace clustering (TSC) algorithm due to
Heckel and B\"olcskei. Analytical results in combination with numerical results
indicate that both SSC-OMP and SSC-MP with a data-dependent stopping criterion
automatically detect the dimensions of the subspaces underlying the data.
Moreover, experiments on synthetic and on real data show that SSC-MP compares
very favorably to SSC, SSC-OMP, TSC, and the nearest subspace neighbor
algorithm, both in terms of clustering performance and running time. In
addition, we find that, in contrast to SSC-OMP, the performance of SSC-MP is
very robust with respect to the choice of parameters in the stopping criteria.
| null |
http://arxiv.org/abs/1612.03450v2
|
http://arxiv.org/pdf/1612.03450v2.pdf
| null |
[
"Michael Tschannen",
"Helmut Bölcskei"
] |
[
"Clustering"
] | 2016-12-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "Spectral clustering has attracted increasing attention due to\r\nthe promising ability in dealing with nonlinearly separable datasets [15], [16]. In spectral clustering, the spectrum of the graph Laplacian is used to reveal the cluster structure. The spectral clustering algorithm mainly consists of two steps: 1) constructs the low dimensional embedded representation of the data based on the eigenvectors of the graph Laplacian, 2) applies k-means on the constructed low dimensional data to obtain the clustering result. Thus,",
"full_name": "Spectral Clustering",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Clustering** methods cluster a dataset so that similar datapoints are located in the same group. Below you can find a continuously updating list of clustering methods.",
"name": "Clustering",
"parent": null
},
"name": "Spectral Clustering",
"source_title": "A Tutorial on Spectral Clustering",
"source_url": "http://arxiv.org/abs/0711.0189v1"
}
] |
https://paperswithcode.com/paper/vtrails-inferring-vessels-with-geodesic
|
1806.03111
| null | null |
VTrails: Inferring Vessels with Geodesic Connectivity Trees
|
The analysis of vessel morphology and connectivity has an impact on a number
of cardiovascular and neurovascular applications by providing patient-specific
high-level quantitative features such as spatial location, direction and scale.
In this paper we present an end-to-end approach to extract an acyclic vascular
tree from angiographic data by solving a connectivity-enforcing anisotropic
fast marching over a voxel-wise tensor field representing the orientation of
the underlying vascular tree. The method is validated using synthetic and real
vascular images. We compare VTrails against classical and state-of-the-art
ridge detectors for tubular structures by assessing the connectedness of the
vesselness map and inspecting the synthesized tensor field as proof of concept.
VTrails performance is evaluated on images with different levels of
degradation: we verify that the extracted vascular network is an acyclic graph
(i.e. a tree), and we report the extraction accuracy, precision and recall.
| null |
http://arxiv.org/abs/1806.03111v1
|
http://arxiv.org/pdf/1806.03111v1.pdf
| null |
[
"Stefano Moriconi",
"Maria A. Zuluaga",
"H. Rolf Jäger",
"Parashkev Nachev",
"Sébastien Ourselin",
"M. Jorge Cardoso"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/rotation-equivariant-cnns-for-digital
|
1806.03962
| null | null |
Rotation Equivariant CNNs for Digital Pathology
|
We propose a new model for digital pathology segmentation, based on the
observation that histopathology images are inherently symmetric under rotation
and reflection. Utilizing recent findings on rotation equivariant CNNs, the
proposed model leverages these symmetries in a principled manner. We present a
visual analysis showing improved stability on predictions, and demonstrate that
exploiting rotation equivariance significantly improves tumor detection
performance on a challenging lymph node metastases dataset. We further present
a novel derived dataset to enable principled comparison of machine learning
models, in combination with an initial benchmark. Through this dataset, the
task of histopathology diagnosis becomes accessible as a challenging benchmark
for fundamental machine learning research.
|
We propose a new model for digital pathology segmentation, based on the observation that histopathology images are inherently symmetric under rotation and reflection.
|
http://arxiv.org/abs/1806.03962v1
|
http://arxiv.org/pdf/1806.03962v1.pdf
| null |
[
"Bastiaan S. Veeling",
"Jasper Linmans",
"Jim Winkens",
"Taco Cohen",
"Max Welling"
] |
[
"BIG-bench Machine Learning",
"Breast Tumour Classification"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/automated-curriculum-learning-by-rewarding
|
1803.07131
| null | null |
Automated Curriculum Learning by Rewarding Temporally Rare Events
|
Reward shaping allows reinforcement learning (RL) agents to accelerate
learning by receiving additional reward signals. However, these signals can be
difficult to design manually, especially for complex RL tasks. We propose a
simple and general approach that determines the reward of pre-defined events by
their rarity alone. Here events become less rewarding as they are experienced
more often, which encourages the agent to continually explore new types of
events as it learns. The adaptiveness of this reward function results in a form
of automated curriculum learning that does not have to be specified by the
experimenter. We demonstrate that this \emph{Rarity of Events} (RoE) approach
enables the agent to succeed in challenging VizDoom scenarios without access to
the extrinsic reward from the environment. Furthermore, the results demonstrate
that RoE learns a more versatile policy that adapts well to critical changes in
the environment. Rewarding events based on their rarity could help in many
unsolved RL environments that are characterized by sparse extrinsic rewards but
a plethora of known event types.
|
We demonstrate that this \emph{Rarity of Events} (RoE) approach enables the agent to succeed in challenging VizDoom scenarios without access to the extrinsic reward from the environment.
|
http://arxiv.org/abs/1803.07131v2
|
http://arxiv.org/pdf/1803.07131v2.pdf
| null |
[
"Niels Justesen",
"Sebastian Risi"
] |
[
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-03-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/uncertainty-driven-sanity-check-application
|
1806.03106
| null | null |
Uncertainty-driven Sanity Check: Application to Postoperative Brain Tumor Cavity Segmentation
|
Uncertainty estimates of modern neuronal networks provide additional
information next to the computed predictions and are thus expected to improve
the understanding of the underlying model. Reliable uncertainties are
particularly interesting for safety-critical computer-assisted applications in
medicine, e.g., neurosurgical interventions and radiotherapy planning. We
propose an uncertainty-driven sanity check for the identification of
segmentation results that need particular expert review. Our method uses a
fully-convolutional neural network and computes uncertainty estimates by the
principle of Monte Carlo dropout. We evaluate the performance of the proposed
method on a clinical dataset with 30 postoperative brain tumor images. The
method can segment the highly inhomogeneous resection cavities accurately (Dice
coefficients 0.792 $\pm$ 0.154). Furthermore, the proposed sanity check is able
to detect the worst segmentation and three out of the four outliers. The
results highlight the potential of using the additional information from the
model's parameter uncertainty to validate the segmentation performance of a
deep learning model.
| null |
http://arxiv.org/abs/1806.03106v1
|
http://arxiv.org/pdf/1806.03106v1.pdf
| null |
[
"Alain Jungo",
"Raphael Meier",
"Ekin Ermis",
"Evelyn Herrmann",
"Mauricio Reyes"
] |
[
"Segmentation"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/contextual-hourglass-networks-for
|
1806.04009
| null | null |
Contextual Hourglass Networks for Segmentation and Density Estimation
|
Hourglass networks such as the U-Net and V-Net are popular neural
architectures for medical image segmentation and counting problems. Typical
instances of hourglass networks contain shortcut connections between mirroring
layers. These shortcut connections improve the performance and it is
hypothesized that this is due to mitigating effects on the vanishing gradient
problem and the ability of the model to combine feature maps from earlier and
later layers. We propose a method for not only combining feature maps of
mirroring layers but also feature maps of layers with different spatial
dimensions. For instance, the method enables the integration of the bottleneck
feature map with those of the reconstruction layers. The proposed approach is
applicable to any hourglass architecture. We evaluated the contextual hourglass
networks on image segmentation and object counting problems in the medical
domain. We achieve competitive results outperforming popular hourglass networks
by up to 17 percentage points.
| null |
http://arxiv.org/abs/1806.04009v1
|
http://arxiv.org/pdf/1806.04009v1.pdf
| null |
[
"Daniel Oñoro-Rubio",
"Mathias Niepert"
] |
[
"Density Estimation",
"Image Segmentation",
"Medical Image Segmentation",
"Object Counting",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
}
] |
https://paperswithcode.com/paper/the-uncertainty-bellman-equation-and
|
1709.05380
| null | null |
The Uncertainty Bellman Equation and Exploration
|
We consider the exploration/exploitation problem in reinforcement learning.
For exploitation, it is well known that the Bellman equation connects the value
at any time-step to the expected value at subsequent time-steps. In this paper
we consider a similar \textit{uncertainty} Bellman equation (UBE), which
connects the uncertainty at any time-step to the expected uncertainties at
subsequent time-steps, thereby extending the potential exploratory benefit of a
policy beyond individual time-steps. We prove that the unique fixed point of
the UBE yields an upper bound on the variance of the posterior distribution of
the Q-values induced by any policy. This bound can be much tighter than
traditional count-based bonuses that compound standard deviation rather than
variance. Importantly, and unlike several existing approaches to optimism, this
method scales naturally to large systems with complex generalization.
Substituting our UBE-exploration strategy for $\epsilon$-greedy improves DQN
performance on 51 out of 57 games in the Atari suite.
|
In this paper we consider a similar \textit{uncertainty} Bellman equation (UBE), which connects the uncertainty at any time-step to the expected uncertainties at subsequent time-steps, thereby extending the potential exploratory benefit of a policy beyond individual time-steps.
|
http://arxiv.org/abs/1709.05380v4
|
http://arxiv.org/pdf/1709.05380v4.pdf
|
ICML 2018 7
|
[
"Brendan O'Donoghue",
"Ian Osband",
"Remi Munos",
"Volodymyr Mnih"
] |
[
"Reinforcement Learning"
] | 2017-09-15T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1959
|
http://proceedings.mlr.press/v80/odonoghue18a/odonoghue18a.pdf
|
the-uncertainty-bellman-equation-and-1
| null |
[] |
https://paperswithcode.com/paper/deep-extreme-multi-label-learning
|
1704.03718
| null | null |
Deep Extreme Multi-label Learning
|
Extreme multi-label learning (XML) or classification has been a practical and
important problem since the boom of big data. The main challenge lies in the
exponential label space which involves $2^L$ possible label sets especially
when the label dimension $L$ is huge, e.g., in millions for Wikipedia labels.
This paper is motivated to better explore the label space by originally
establishing an explicit label graph. In the meanwhile, deep learning has been
widely studied and used in various classification problems including
multi-label classification, however it has not been properly introduced to XML,
where the label space can be as large as in millions. In this paper, we propose
a practical deep embedding method for extreme multi-label classification, which
harvests the ideas of non-linear embedding and graph priors-based label space
modeling simultaneously. Extensive experiments on public datasets for XML show
that our method performs competitively against state-of-the-art results.
|
Extreme multi-label learning (XML) or classification has been a practical and important problem since the boom of big data.
|
http://arxiv.org/abs/1704.03718v4
|
http://arxiv.org/pdf/1704.03718v4.pdf
| null |
[
"Wenjie Zhang",
"Junchi Yan",
"Xiangfeng Wang",
"Hongyuan Zha"
] |
[
"Classification",
"Extreme Multi-Label Classification",
"General Classification",
"Multi-Label Classification",
"MUlTI-LABEL-ClASSIFICATION",
"Multi-Label Learning"
] | 2017-04-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-stein-variational-newton-method
|
1806.03085
| null | null |
A Stein variational Newton method
|
Stein variational gradient descent (SVGD) was recently proposed as a general
purpose nonparametric variational inference algorithm [Liu & Wang, NIPS 2016]:
it minimizes the Kullback-Leibler divergence between the target distribution
and its approximation by implementing a form of functional gradient descent on
a reproducing kernel Hilbert space. In this paper, we accelerate and generalize
the SVGD algorithm by including second-order information, thereby approximating
a Newton-like iteration in function space. We also show how second-order
information can lead to more effective choices of kernel. We observe
significant computational gains over the original SVGD algorithm in multiple
test cases.
|
Stein variational gradient descent (SVGD) was recently proposed as a general purpose nonparametric variational inference algorithm [Liu & Wang, NIPS 2016]: it minimizes the Kullback-Leibler divergence between the target distribution and its approximation by implementing a form of functional gradient descent on a reproducing kernel Hilbert space.
|
http://arxiv.org/abs/1806.03085v2
|
http://arxiv.org/pdf/1806.03085v2.pdf
|
NeurIPS 2018 12
|
[
"Gianluca Detommaso",
"Tiangang Cui",
"Alessio Spantini",
"Youssef Marzouk",
"Robert Scheichl"
] |
[
"Variational Inference"
] | 2018-06-08T00:00:00 |
http://papers.nips.cc/paper/8130-a-stein-variational-newton-method
|
http://papers.nips.cc/paper/8130-a-stein-variational-newton-method.pdf
|
a-stein-variational-newton-method-1
| null |
[] |
https://paperswithcode.com/paper/unifying-identification-and-context-learning
|
1806.03084
| null | null |
Unifying Identification and Context Learning for Person Recognition
|
Despite the great success of face recognition techniques, recognizing persons
under unconstrained settings remains challenging. Issues like profile views,
unfavorable lighting, and occlusions can cause substantial difficulties.
Previous works have attempted to tackle this problem by exploiting the context,
e.g. clothes and social relations. While showing promising improvement, they
are usually limited in two important aspects, relying on simple heuristics to
combine different cues and separating the construction of context from people
identities. In this work, we aim to move beyond such limitations and propose a
new framework to leverage context for person recognition. In particular, we
propose a Region Attention Network, which is learned to adaptively combine
visual cues with instance-dependent weights. We also develop a unified
formulation, where the social contexts are learned along with the reasoning of
people identities. These models substantially improve the robustness when
working with the complex contextual relations in unconstrained environments. On
two large datasets, PIPA and Cast In Movies (CIM), a new dataset proposed in
this work, our method consistently achieves state-of-the-art performance under
multiple evaluation policies.
|
In this work, we aim to move beyond such limitations and propose a new framework to leverage context for person recognition.
|
http://arxiv.org/abs/1806.03084v1
|
http://arxiv.org/pdf/1806.03084v1.pdf
|
CVPR 2018 6
|
[
"Qingqiu Huang",
"Yu Xiong",
"Dahua Lin"
] |
[
"Face Recognition",
"Person Recognition"
] | 2018-06-08T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Huang_Unifying_Identification_and_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_Unifying_Identification_and_CVPR_2018_paper.pdf
|
unifying-identification-and-context-learning-1
| null |
[] |
https://paperswithcode.com/paper/strassennets-deep-learning-with-a
|
1712.03942
| null | null |
StrassenNets: Deep Learning with a Multiplication Budget
|
A large fraction of the arithmetic operations required to evaluate deep
neural networks (DNNs) consists of matrix multiplications, in both convolution
and fully connected layers. We perform end-to-end learning of low-cost
approximations of matrix multiplications in DNN layers by casting matrix
multiplications as 2-layer sum-product networks (SPNs) (arithmetic circuits)
and learning their (ternary) edge weights from data. The SPNs disentangle
multiplication and addition operations and enable us to impose a budget on the
number of multiplication operations. Combining our method with knowledge
distillation and applying it to image classification DNNs (trained on ImageNet)
and language modeling DNNs (using LSTMs), we obtain a first-of-a-kind reduction
in number of multiplications (over 99.5%) while maintaining the predictive
performance of the full-precision models. Finally, we demonstrate that the
proposed framework is able to rediscover Strassen's matrix multiplication
algorithm, learning to multiply $2 \times 2$ matrices using only 7
multiplications instead of 8.
|
A large fraction of the arithmetic operations required to evaluate deep neural networks (DNNs) consists of matrix multiplications, in both convolution and fully connected layers.
|
http://arxiv.org/abs/1712.03942v3
|
http://arxiv.org/pdf/1712.03942v3.pdf
|
ICML 2018 7
|
[
"Michael Tschannen",
"Aran Khanna",
"Anima Anandkumar"
] |
[
"Deep Learning",
"image-classification",
"Image Classification",
"Knowledge Distillation",
"Language Modeling",
"Language Modelling",
"Model Compression",
"Neural Network Compression"
] | 2017-12-11T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2167
|
http://proceedings.mlr.press/v80/tschannen18a/tschannen18a.pdf
|
strassennets-deep-learning-with-a-1
| null |
[] |
https://paperswithcode.com/paper/conditional-time-series-forecasting-with
|
1703.04691
| null | null |
Conditional Time Series Forecasting with Convolutional Neural Networks
|
We present a method for conditional time series forecasting based on an
adaptation of the recent deep convolutional WaveNet architecture. The proposed
network contains stacks of dilated convolutions that allow it to access a broad
range of history when forecasting, a ReLU activation function and conditioning
is performed by applying multiple convolutional filters in parallel to separate
time series which allows for the fast processing of data and the exploitation
of the correlation structure between the multivariate time series. We test and
analyze the performance of the convolutional network both unconditionally as
well as conditionally for financial time series forecasting using the S&P500,
the volatility index, the CBOE interest rate and several exchange rates and
extensively compare it to the performance of the well-known autoregressive
model and a long-short term memory network. We show that a convolutional
network is well-suited for regression-type problems and is able to effectively
learn dependencies in and between the series without the need for long
historical time series, is a time-efficient and easy to implement alternative
to recurrent-type networks and tends to outperform linear and recurrent models.
|
The proposed network contains stacks of dilated convolutions that allow it to access a broad range of history when forecasting, a ReLU activation function and conditioning is performed by applying multiple convolutional filters in parallel to separate time series which allows for the fast processing of data and the exploitation of the correlation structure between the multivariate time series.
|
http://arxiv.org/abs/1703.04691v5
|
http://arxiv.org/pdf/1703.04691v5.pdf
| null |
[
"Anastasia Borovykh",
"Sander Bohte",
"Cornelis W. Oosterlee"
] |
[
"Time Series",
"Time Series Analysis",
"Time Series Forecasting"
] | 2017-03-14T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Mixture of Logistic Distributions (MoL)** is a type of output function, and an alternative to a [softmax](https://paperswithcode.com/method/softmax) layer. Discretized logistic mixture likelihood is used in [PixelCNN](https://paperswithcode.com/method/pixelcnn)++ and [WaveNet](https://paperswithcode.com/method/wavenet) to predict discrete values.\r\n\r\nImage Credit: [Hao Gao](https://medium.com/@smallfishbigsea/an-explanation-of-discretized-logistic-mixture-likelihood-bdfe531751f0)",
"full_name": "Mixture of Logistic Distributions",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Mixture of Logistic Distributions",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **Dilated Causal Convolution** is a [causal convolution](https://paperswithcode.com/method/causal-convolution) where the filter is applied over an area larger than its length by skipping input values with a certain step. A dilated causal [convolution](https://paperswithcode.com/method/convolution) effectively allows the network to have very large receptive fields with just a few layers.",
"full_name": "Dilated Causal Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Temporal Convolutions",
"parent": null
},
"name": "Dilated Causal Convolution",
"source_title": "WaveNet: A Generative Model for Raw Audio",
"source_url": "http://arxiv.org/abs/1609.03499v2"
},
{
"code_snippet_url": null,
"description": "**WaveNet** is an audio generative model based on the [PixelCNN](https://paperswithcode.com/method/pixelcnn) architecture. In order to deal with long-range temporal dependencies needed for raw audio generation, architectures are developed based on dilated causal convolutions, which exhibit very large receptive fields.\r\n\r\nThe joint probability of a waveform $\\vec{x} = \\{ x_1, \\dots, x_T \\}$ is factorised as a product of conditional probabilities as follows:\r\n\r\n$$p\\left(\\vec{x}\\right) = \\prod_{t=1}^{T} p\\left(x_t \\mid x_1, \\dots ,x_{t-1}\\right)$$\r\n\r\nEach audio sample $x_t$ is therefore conditioned on the samples at all previous timesteps.",
"full_name": "WaveNet",
"introduced_year": 2000,
"main_collection": {
"area": "Audio",
"description": "",
"name": "Generative Audio Models",
"parent": null
},
"name": "WaveNet",
"source_title": "WaveNet: A Generative Model for Raw Audio",
"source_url": "http://arxiv.org/abs/1609.03499v2"
}
] |
https://paperswithcode.com/paper/selective-inference-for-l_2-boosting
|
1805.01852
| null | null |
Inference for $L_2$-Boosting
|
We propose a statistical inference framework for the component-wise functional gradient descent algorithm (CFGD) under normality assumption for model errors, also known as $L_2$-Boosting. The CFGD is one of the most versatile tools to analyze data, because it scales well to high-dimensional data sets, allows for a very flexible definition of additive regression models and incorporates inbuilt variable selection. Due to the variable selection, we build on recent proposals for post-selection inference. However, the iterative nature of component-wise boosting, which can repeatedly select the same component to update, necessitates adaptations and extensions to existing approaches. We propose tests and confidence intervals for linear, grouped and penalized additive model components selected by $L_2$-Boosting. Our concepts also transfer to slow-learning algorithms more generally, and to other selection techniques which restrict the response space to more complex sets than polyhedra. We apply our framework to an additive model for sales prices of residential apartments and investigate the properties of our concepts in simulation studies.
|
We propose a statistical inference framework for the component-wise functional gradient descent algorithm (CFGD) under normality assumption for model errors, also known as $L_2$-Boosting.
|
https://arxiv.org/abs/1805.01852v4
|
https://arxiv.org/pdf/1805.01852v4.pdf
| null |
[
"David Rügamer",
"Sonja Greven"
] |
[
"Variable Selection"
] | 2018-05-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/zoom-out-and-in-network-with-map-attention
|
1709.04347
| null | null |
Zoom Out-and-In Network with Map Attention Decision for Region Proposal and Object Detection
|
In this paper, we propose a zoom-out-and-in network for generating object
proposals. A key observation is that it is difficult to classify anchors of
different sizes with the same set of features. Anchors of different sizes
should be placed accordingly based on different depth within a network: smaller
boxes on high-resolution layers with a smaller stride while larger boxes on
low-resolution counterparts with a larger stride. Inspired by the conv/deconv
structure, we fully leverage the low-level local details and high-level
regional semantics from two feature map streams, which are complementary to
each other, to identify the objectness in an image. A map attention decision
(MAD) unit is further proposed to aggressively search for neuron activations
among two streams and attend the most contributive ones on the feature learning
of the final loss. The unit serves as a decision maker to adaptively activate
maps along certain channels with the sole purpose of optimizing the overall
training loss. One advantage of MAD is that the learned weights enforced on
each feature channel are predicted on-the-fly based on the input context, which
is more suitable than the fixed enforcement of a convolutional kernel.
Experimental results on three datasets, including PASCAL VOC 2007, ImageNet
DET, MS COCO, demonstrate the effectiveness of our proposed algorithm over
other state-of-the-arts, in terms of average recall (AR) for region proposal
and average precision (AP) for object detection.
|
A key observation is that it is difficult to classify anchors of different sizes with the same set of features.
|
http://arxiv.org/abs/1709.04347v2
|
http://arxiv.org/pdf/1709.04347v2.pdf
| null |
[
"Hongyang Li",
"Yu Liu",
"Wanli Ouyang",
"Xiaogang Wang"
] |
[
"object-detection",
"Object Detection",
"Region Proposal"
] | 2017-09-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-myelin-content-in-multiple-sclerosis
|
1804.08039
| null | null |
Learning Myelin Content in Multiple Sclerosis from Multimodal MRI through Adversarial Training
|
Multiple sclerosis (MS) is a demyelinating disease of the central nervous
system (CNS). A reliable measure of the tissue myelin content is therefore
essential for the understanding of the physiopathology of MS, tracking
progression and assessing treatment efficacy. Positron emission tomography
(PET) with $[^{11} \mbox{C}] \mbox{PIB}$ has been proposed as a promising
biomarker for measuring myelin content changes in-vivo in MS. However, PET
imaging is expensive and invasive due to the injection of a radioactive tracer.
On the contrary, magnetic resonance imaging (MRI) is a non-invasive, widely
available technique, but existing MRI sequences do not provide, to date, a
reliable, specific, or direct marker of either demyelination or remyelination.
In this work, we therefore propose Sketcher-Refiner Generative Adversarial
Networks (GANs) with specifically designed adversarial loss functions to
predict the PET-derived myelin content map from a combination of MRI
modalities. The prediction problem is solved by a sketch-refinement process in
which the sketcher generates the preliminary anatomical and physiological
information and the refiner refines and generates images reflecting the tissue
myelin content in the human brain. We evaluated the ability of our method to
predict myelin content at both global and voxel-wise levels. The evaluation
results show that the demyelination in lesion regions and myelin content in
normal-appearing white matter (NAWM) can be well predicted by our method. The
method has the potential to become a useful tool for clinical management of
patients with MS.
| null |
http://arxiv.org/abs/1804.08039v2
|
http://arxiv.org/pdf/1804.08039v2.pdf
| null |
[
"Wen Wei",
"Emilie Poirion",
"Benedetta Bodini",
"Stanley Durrleman",
"Nicholas Ayache",
"Bruno Stankoff",
"Olivier Colliot"
] |
[
"Management"
] | 2018-04-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/localized-structured-prediction
|
1806.02402
| null | null |
Localized Structured Prediction
|
Key to structured prediction is exploiting the problem structure to simplify the learning process. A major challenge arises when data exhibit a local structure (e.g., are made by "parts") that can be leveraged to better approximate the relation between (parts of) the input and (parts of) the output. Recent literature on signal processing, and in particular computer vision, has shown that capturing these aspects is indeed essential to achieve state-of-the-art performance. While such algorithms are typically derived on a case-by-case basis, in this work we propose the first theoretical framework to deal with part-based data from a general perspective. We derive a novel approach to deal with these problems and study its generalization properties within the setting of statistical learning theory. Our analysis is novel in that it explicitly quantifies the benefits of leveraging the part-based structure of the problem with respect to the learning rates of the proposed estimator.
|
Key to structured prediction is exploiting the problem structure to simplify the learning process.
|
https://arxiv.org/abs/1806.02402v3
|
https://arxiv.org/pdf/1806.02402v3.pdf
|
NeurIPS 2019 12
|
[
"Carlo Ciliberto",
"Francis Bach",
"Alessandro Rudi"
] |
[
"Learning Theory",
"Prediction",
"Structured Prediction"
] | 2018-06-06T00:00:00 |
http://papers.nips.cc/paper/8950-localized-structured-prediction
|
http://papers.nips.cc/paper/8950-localized-structured-prediction.pdf
|
localized-structured-prediction-1
| null |
[] |
https://paperswithcode.com/paper/deep-multi-scale-architectures-for-monocular
|
1806.03051
| null | null |
Deep multi-scale architectures for monocular depth estimation
|
This paper aims at understanding the role of multi-scale information in the
estimation of depth from monocular images. More precisely, the paper
investigates four different deep CNN architectures, designed to explicitly make
use of multi-scale features along the network, and compare them to a
state-of-the-art single-scale approach. The paper also shows that involving
multi-scale features in depth estimation not only improves the performance in
terms of accuracy, but also gives qualitatively better depth maps. Experiments
are done on the widely used NYU Depth dataset, on which the proposed method
achieves state-of-the-art performance.
| null |
http://arxiv.org/abs/1806.03051v1
|
http://arxiv.org/pdf/1806.03051v1.pdf
| null |
[
"Michel Moukari",
"Sylvaine Picard",
"Loic Simon",
"Frédéric Jurie"
] |
[
"Depth Estimation",
"Monocular Depth Estimation"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/neonatal-eeg-interpretation-and-decision
|
1806.04037
| null | null |
Neonatal EEG Interpretation and Decision Support Framework for Mobile Platforms
|
This paper proposes and implements an intuitive and pervasive solution for
neonatal EEG monitoring assisted by sonification and deep learning AI that
provides information about neonatal brain health to all neonatal healthcare
professionals, particularly those without EEG interpretation expertise. The
system aims to increase the demographic of clinicians capable of diagnosing
abnormalities in neonatal EEG. The proposed system uses a low-cost and
low-power EEG acquisition system. An Android app provides single-channel EEG
visualization, traffic-light indication of the presence of neonatal seizures
provided by a trained, deep convolutional neural network and an algorithm for
EEG sonification, designed to facilitate the perception of changes in EEG
morphology specific to neonatal seizures. The multifaceted EEG interpretation
framework is presented and the implemented mobile platform architecture is
analyzed with respect to its power consumption and accuracy.
| null |
http://arxiv.org/abs/1806.04037v1
|
http://arxiv.org/pdf/1806.04037v1.pdf
| null |
[
"Mark O'Sullivan",
"Sergi Gomez",
"Alison O'Shea",
"Eduard Salgado",
"Kevin Huillca",
"Sean Mathieson",
"Geraldine Boylan",
"Emanuel Popovici",
"Andriy Temko"
] |
[
"EEG",
"Electroencephalogram (EEG)"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/meta-learning-by-adjusting-priors-based-on
|
1711.01244
| null | null |
Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory
|
In meta-learning an agent extracts knowledge from observed tasks, aiming to facilitate learning of novel future tasks. Under the assumption that future tasks are 'related' to previous tasks, the accumulated knowledge should be learned in a way which captures the common structure across learned tasks, while allowing the learner sufficient flexibility to adapt to novel aspects of new tasks. We present a framework for meta-learning that is based on generalization error bounds, allowing us to extend various PAC-Bayes bounds to meta-learning. Learning takes place through the construction of a distribution over hypotheses based on the observed tasks, and its utilization for learning a new task. Thus, prior knowledge is incorporated through setting an experience-dependent prior for novel tasks. We develop a gradient-based algorithm which minimizes an objective function derived from the bounds and demonstrate its effectiveness numerically with deep neural networks. In addition to establishing the improved performance available through meta-learning, we demonstrate the intuitive way by which prior information is manifested at different levels of the network.
|
In meta-learning an agent extracts knowledge from observed tasks, aiming to facilitate learning of novel future tasks.
|
https://arxiv.org/abs/1711.01244v8
|
https://arxiv.org/pdf/1711.01244v8.pdf
|
ICML 2018 7
|
[
"Ron Amit",
"Ron Meir"
] |
[
"Meta-Learning"
] | 2017-11-03T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1976
|
http://proceedings.mlr.press/v80/amit18a/amit18a.pdf
|
meta-learning-by-adjusting-priors-based-on-1
| null |
[] |
https://paperswithcode.com/paper/investigating-the-impact-of-cnn-depth-on
|
1806.03044
| null | null |
Investigating the Impact of CNN Depth on Neonatal Seizure Detection Performance
|
This study presents a novel, deep, fully convolutional architecture which is
optimized for the task of EEG-based neonatal seizure detection. Architectures
of different depths were designed and tested; varying network depth impacts
convolutional receptive fields and the corresponding learned feature
complexity. Two deep convolutional networks are compared with a shallow
SVM-based neonatal seizure detector, which relies on the extraction of
hand-crafted features. On a large clinical dataset, of over 800 hours of
multichannel unedited EEG, containing 1389 seizure events, the deep 11-layer
architecture significantly outperforms the shallower architectures, improving
the AUC90 from 82.6% to 86.8%. Combining the end-to-end deep architecture with
the feature-based shallow SVM further improves the AUC90 to 87.6%. The fusion
of classifiers of different depths gives greatly improved performance and
reduced variability, making the combined classifier more clinically reliable.
| null |
http://arxiv.org/abs/1806.03044v1
|
http://arxiv.org/pdf/1806.03044v1.pdf
| null |
[
"Alison O'Shea",
"Gordon Lightbody",
"Geraldine Boylan",
"Andriy Temko"
] |
[
"EEG",
"Electroencephalogram (EEG)",
"Seizure Detection"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
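The abstract above describes fusing the probabilistic outputs of a deep CNN and a feature-based SVM to improve AUC90. Below is a minimal, hypothetical sketch of such score-level fusion in Python; the random feature matrices, the stand-in `cnn_probs` array, and the equal fusion weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for hand-crafted EEG features and binary seizure labels (not the paper's data).
X_train, y_train = rng.standard_normal((200, 55)), rng.integers(0, 2, 200)
X_test = rng.standard_normal((50, 55))

# Shallow branch: SVM with probabilistic outputs on hand-crafted features.
svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X_train, y_train)
svm_probs = svm.predict_proba(X_test)[:, 1]

# Deep branch: assume per-segment seizure probabilities from a trained CNN.
cnn_probs = rng.random(50)  # placeholder for the CNN's output

# Score-level fusion: average the two probability streams and threshold.
fused = 0.5 * svm_probs + 0.5 * cnn_probs
print((fused >= 0.5).astype(int)[:10])
```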
https://paperswithcode.com/paper/anonymous-walk-embeddings
|
1805.11921
| null | null |
Anonymous Walk Embeddings
|
The task of representing entire graphs has seen a surge of prominent results,
mainly due to learning convolutional neural networks (CNNs) on graph-structured
data. While CNNs demonstrate state-of-the-art performance in the graph
classification task, such methods are supervised and therefore steer away from
the original problem of network representation in a task-agnostic manner. Here,
we coherently propose an approach for embedding entire graphs and show that our
feature representations with an SVM classifier increase the classification accuracy of
CNN algorithms and traditional graph kernels. For this we describe a recently
discovered graph object, the anonymous walk, on which we design task-independent
algorithms for learning graph representations in an explicit and distributed way.
Overall, our work represents a new scalable unsupervised learning of
state-of-the-art representations of entire graphs.
|
The task of representing entire graphs has seen a surge of prominent results, mainly due to learning convolutional neural networks (CNNs) on graph-structured data.
|
http://arxiv.org/abs/1805.11921v3
|
http://arxiv.org/pdf/1805.11921v3.pdf
|
ICML 2018 7
|
[
"Sergey Ivanov",
"Evgeny Burnaev"
] |
[
"General Classification",
"Graph Classification"
] | 2018-05-30T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1875
|
http://proceedings.mlr.press/v80/ivanov18a/ivanov18a.pdf
|
anonymous-walk-embeddings-1
| null |
[
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
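The abstract above builds graph representations from anonymous walks. As a small illustration of that object (not the authors' code), the sketch below maps a walk to its anonymous form, where each node is replaced by the index of its first occurrence, so walks over different node sets can share the same pattern.

```python
def anonymize_walk(walk):
    """Map a walk (sequence of node ids) to its anonymous walk: every node is
    replaced by the position at which it first appears in the walk."""
    first_seen = {}
    anonymous = []
    for node in walk:
        if node not in first_seen:
            first_seen[node] = len(first_seen) + 1
        anonymous.append(first_seen[node])
    return tuple(anonymous)

# Walks over different nodes collapse to the same anonymous pattern.
print(anonymize_walk(["a", "b", "a", "c"]))  # (1, 2, 1, 3)
print(anonymize_walk(["x", "y", "x", "z"]))  # (1, 2, 1, 3)
```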
https://paperswithcode.com/paper/neural-networks-should-be-wide-enough-to
|
1803.00094
| null | null |
Neural Networks Should Be Wide Enough to Learn Disconnected Decision Regions
|
In the recent literature the important role of depth in deep learning has
been emphasized. In this paper we argue that sufficient width of a feedforward
network is equally important by answering the simple question under which
conditions the decision regions of a neural network are connected. It turns out
that for a class of activation functions including leaky ReLU, neural networks
having a pyramidal structure, that is no layer has more hidden units than the
input dimension, produce necessarily connected decision regions. This implies
that a sufficiently wide hidden layer is necessary to guarantee that the
network can produce disconnected decision regions. We discuss the implications
of this result for the construction of neural networks, in particular the
relation to the problem of adversarial manipulation of classifiers.
| null |
http://arxiv.org/abs/1803.00094v3
|
http://arxiv.org/pdf/1803.00094v3.pdf
|
ICML 2018 7
|
[
"Quynh Nguyen",
"Mahesh Chandra Mukkamala",
"Matthias Hein"
] |
[] | 2018-02-28T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2272
|
http://proceedings.mlr.press/v80/nguyen18b/nguyen18b.pdf
|
neural-networks-should-be-wide-enough-to-1
| null |
[
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/unsupervised-feature-learning-toward-a-real
|
1806.03028
| null | null |
Unsupervised Feature Learning Toward a Real-time Vehicle Make and Model Recognition
|
Vehicle Make and Model Recognition (MMR) systems provide a fully automatic
framework to recognize and classify different vehicle models. Several
approaches have been proposed to address this challenge; however, they only
perform well under restricted conditions. Here, we formulate the vehicle make and model
recognition as a fine-grained classification problem and propose a new
configurable on-road vehicle make and model recognition framework. We benefit
from unsupervised feature learning methods; specifically, we employ the
Locality-constrained Linear Coding (LLC) method as a fast feature encoder for
encoding the input SIFT features. The proposed method can perform in real
environments under different conditions. This framework can recognize fifty models
of vehicles and has the advantage of classifying every other vehicle not belonging
to one of the specified fifty classes as an unknown vehicle. The proposed MMR
framework can be configured to become faster or more accurate based on the
application domain. The proposed approach is examined on two datasets including
Iranian on-road vehicle dataset and CompuCar dataset. The Iranian on-road
vehicle dataset contains images of 50 models of vehicles captured in real
situations by traffic cameras in different weather and lighting conditions.
Experimental results show superiority of the proposed framework over the
state-of-the-art methods on the Iranian on-road vehicle dataset and comparable
results on CompuCar dataset with 97.5% and 98.4% accuracies, respectively.
| null |
http://arxiv.org/abs/1806.03028v1
|
http://arxiv.org/pdf/1806.03028v1.pdf
| null |
[
"Amir Nazemi",
"Mohammad Javad Shafiee",
"Zohreh Azimifar",
"Alexander Wong"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/generating-image-sequence-from-description
|
1806.03027
| null | null |
Generating Image Sequence from Description with LSTM Conditional GAN
|
Generating images from word descriptions is a challenging task. Generative
adversarial networks (GANs) are shown to be able to generate realistic images of
real-life objects. In this paper, we propose a new neural network architecture
of LSTM Conditional Generative Adversarial Networks to generate images of
real-life objects. Our proposed model is trained on the Oxford-102 Flowers and
Caltech-UCSD Birds-200-2011 datasets. We demonstrate that our proposed model
produces better results, surpassing other state-of-the-art approaches.
| null |
http://arxiv.org/abs/1806.03027v1
|
http://arxiv.org/pdf/1806.03027v1.pdf
| null |
[
"Xu Ouyang",
"Xi Zhang",
"Di Ma",
"Gady Agam"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
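The method entries above give the sigmoid and tanh formulas used, among other places, inside LSTM gates. A tiny numerical sketch of the two activations (illustrative only) follows.

```python
import numpy as np

def sigmoid(x):
    # f(x) = 1 / (1 + exp(-x)): squashes values into (0, 1), used for LSTM gates.
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # f(x) = (e^x - e^-x) / (e^x + e^-x): squashes values into (-1, 1).
    return np.tanh(x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))  # approx. [0.119 0.5   0.881]
print(tanh(x))     # approx. [-0.964 0.    0.964]
```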
https://paperswithcode.com/paper/3d-fcn-feature-driven-regression-forest-based
|
1806.03019
| null | null |
3D FCN Feature Driven Regression Forest-Based Pancreas Localization and Segmentation
|
This paper presents a fully automated atlas-based pancreas segmentation
method from CT volumes utilizing 3D fully convolutional network (FCN)
feature-based pancreas localization. Segmentation of the pancreas is difficult
because it has larger inter-patient spatial variations than other organs.
Previous pancreas segmentation methods failed to deal with such variations. We
propose a fully automated pancreas segmentation method that consists of novel
localization and segmentation steps. Since the pancreas neighbors many other organs,
its position and size are strongly related to the positions of the surrounding
organs. We estimate the position and the size of the pancreas (localized) from
global features by regression forests. As global features, we use intensity
differences and 3D FCN deep learned features, which include automatically
extracted essential features for segmentation. We chose 3D FCN features from a
trained 3D U-Net, which is trained to perform multi-organ segmentation. The
global features include both the pancreas and surrounding organ information.
After localization, a patient-specific probabilistic atlas-based pancreas
segmentation is performed. In evaluation results with 146 CT volumes, we
achieved 60.6% of the Jaccard index and 73.9% of the Dice overlap.
| null |
http://arxiv.org/abs/1806.03019v1
|
http://arxiv.org/pdf/1806.03019v1.pdf
| null |
[
"Masahiro Oda",
"Natsuki Shimizu",
"Holger R. Roth",
"Ken'ichi Karasawa",
"Takayuki Kitasaka",
"Kazunari Misawa",
"Michitaka Fujiwara",
"Daniel Rueckert",
"Kensaku MORI"
] |
[
"Automated Pancreas Segmentation",
"Organ Segmentation",
"Pancreas Segmentation",
"Position",
"regression",
"Segmentation"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
},
{
"code_snippet_url": "https://github.com/Jackey9797/FCN",
"description": "**Fully Convolutional Networks**, or **FCNs**, are an architecture used mainly for semantic segmentation. They employ solely locally connected layers, such as [convolution](https://paperswithcode.com/method/convolution), pooling and upsampling. Avoiding the use of dense layers means less parameters (making the networks faster to train). It also means an FCN can work for variable image sizes given all connections are local.\r\n\r\nThe network consists of a downsampling path, used to extract and interpret the context, and an upsampling path, which allows for localization. \r\n\r\nFCNs also employ skip connections to recover the fine-grained spatial information lost in the downsampling path.",
"full_name": "Fully Convolutional Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "FCN",
"source_title": "Fully Convolutional Networks for Semantic Segmentation",
"source_url": "http://arxiv.org/abs/1605.06211v1"
}
] |
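The Max Pooling entry above describes the downsampling step used throughout FCN/U-Net style encoders such as the 3D U-Net mentioned in the abstract. A minimal 2D sketch of 2x2 max pooling with stride 2 (illustrative, not the paper's pipeline) is shown below.

```python
import numpy as np

def max_pool_2x2(feature_map):
    """2x2 max pooling with stride 2 on a 2D feature map (even dimensions assumed)."""
    h, w = feature_map.shape
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fm = np.array([[1, 3, 2, 0],
               [4, 2, 1, 5],
               [0, 1, 7, 2],
               [3, 6, 1, 1]], dtype=float)
print(max_pool_2x2(fm))
# [[4. 5.]
#  [6. 7.]]
```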
https://paperswithcode.com/paper/machine-learning-based-colon-deformation
|
1806.03014
| null | null |
Machine learning-based colon deformation estimation method for colonoscope tracking
|
This paper presents a colon deformation estimation method, which can be used
to estimate colon deformations during colonoscope insertions. Colonoscope
tracking or navigation system that navigates a physician to polyp positions
during a colonoscope insertion is required to reduce complications such as
colon perforation. A previous colonoscope tracking method obtains a colonoscope
position in the colon by registering a colonoscope shape and a colon shape. The
colonoscope shape is obtained using an electromagnetic sensor, and the colon
shape is obtained from a CT volume. However, large tracking errors were
observed due to colon deformations that occurred during colonoscope insertions. Such
deformations make the registration difficult. Because the colon deformation is
caused by a colonoscope, there is a strong relationship between the colon
deformation and the colonoscope shape. An estimation method of colon
deformations that occur during colonoscope insertions is necessary to reduce
tracking errors. We propose a colon deformation estimation method. This method
is used to estimate a deformed colon shape from a colonoscope shape. We use the
regression forests algorithm to estimate a deformed colon shape. The regression
forests algorithm is trained using pairs of colon and colonoscope shapes, which
contain deformations that occur during colonoscope insertions. As a preliminary
study, we utilized the method to estimate deformations of a colon phantom. In
our experiments, the proposed method correctly estimated deformed colon phantom
shapes.
| null |
http://arxiv.org/abs/1806.03014v1
|
http://arxiv.org/pdf/1806.03014v1.pdf
| null |
[
"Masahiro Oda",
"Takayuki Kitasaka",
"Kazuhiro Furukawa",
"Ryoji Miyahara",
"Yoshiki Hirooka",
"Hidemi Goto",
"Nassir Navab",
"Kensaku MORI"
] |
[
"BIG-bench Machine Learning",
"regression"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/domain-adaptive-generation-of-aircraft-on
|
1806.03002
| null | null |
Domain Adaptive Generation of Aircraft on Satellite Imagery via Simulated and Unsupervised Learning
|
Object detection and classification for aircraft are the most important tasks
in the satellite image analysis. The success of modern detection and
classification methods has been based on machine learning and deep learning.
One of the key requirements for those learning processes is huge data to train.
However, there is an insufficient number of aircraft samples since the targets
are involved in military actions and operations. Considering the characteristics of satellite
imagery, this paper attempts to provide a framework of the simulated and
unsupervised methodology without any additional supervision or physical
assumptions. Finally, the qualitative and quantitative analysis revealed a
potential to replenish insufficient data for machine learning platforms for
satellite image analysis.
| null |
http://arxiv.org/abs/1806.03002v1
|
http://arxiv.org/pdf/1806.03002v1.pdf
| null |
[
"Junghoon Seo",
"Seunghyun Jeon",
"Taegyun Jeon"
] |
[
"BIG-bench Machine Learning",
"Classification",
"General Classification",
"object-detection",
"Object Detection"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/noise-adding-methods-of-saliency-map-as
|
1806.03000
| null | null |
Noise-adding Methods of Saliency Map as Series of Higher Order Partial Derivative
|
SmoothGrad and VarGrad are techniques that enhance the empirical quality of
standard saliency maps by adding noise to the input. However, there have been few works
that provide a rigorous theoretical interpretation of those methods. We
analytically formalize the result of these noise-adding methods. As a result,
we observe two interesting results from the existing noise-adding methods.
First, SmoothGrad does not make the gradient of the score function smooth.
Second, VarGrad is independent of the gradient of the score function. We
believe that our findings provide a clue to reveal the relationship between
local explanation methods of deep neural networks and higher-order partial
derivatives of the score function.
| null |
http://arxiv.org/abs/1806.03000v1
|
http://arxiv.org/pdf/1806.03000v1.pdf
| null |
[
"Junghoon Seo",
"Jeongyeol Choe",
"Jamyoung Koo",
"Seunghyeon Jeon",
"Beomsu Kim",
"Taegyun Jeon"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
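As the abstract above recalls, SmoothGrad averages gradients of the score function over noisy copies of the input, while VarGrad uses their variance. The sketch below implements both estimators generically; `grad_fn` is an assumed callable returning the gradient of the class score with respect to the input, and the toy quadratic score is purely illustrative.

```python
import numpy as np

def smoothgrad_vargrad(x, grad_fn, n_samples=50, sigma=0.1, seed=0):
    """Return (SmoothGrad, VarGrad): mean and variance of gradients at noisy inputs."""
    rng = np.random.default_rng(seed)
    grads = np.stack([grad_fn(x + sigma * rng.standard_normal(x.shape))
                      for _ in range(n_samples)])
    return grads.mean(axis=0), grads.var(axis=0)

# Toy score f(x) = sum(x ** 2), whose gradient is 2 * x.
smooth, var = smoothgrad_vargrad(np.array([1.0, -2.0]), lambda x: 2.0 * x)
print(smooth)  # close to [ 2. -4.]
print(var)     # small, driven only by the injected noise
```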
https://paperswithcode.com/paper/on-matching-pursuit-and-coordinate-descent
|
1803.09539
| null | null |
On Matching Pursuit and Coordinate Descent
|
Two popular examples of first-order optimization methods over linear spaces are coordinate descent and matching pursuit algorithms, with their randomized variants. While the former targets the optimization by moving along coordinates, the latter considers a generalized notion of directions. Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives. As a byproduct of our affine invariant analysis of matching pursuit, our rates for steepest coordinate descent are the tightest known. Furthermore, we show the first accelerated convergence rate $\mathcal{O}(1/t^2)$ for matching pursuit and steepest coordinate descent on convex objectives.
| null |
https://arxiv.org/abs/1803.09539v7
|
https://arxiv.org/pdf/1803.09539v7.pdf
|
ICML 2018 7
|
[
"Francesco Locatello",
"Anant Raj",
"Sai Praneeth Karimireddy",
"Gunnar Rätsch",
"Bernhard Schölkopf",
"Sebastian U. Stich",
"Martin Jaggi"
] |
[] | 2018-03-26T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2228
|
http://proceedings.mlr.press/v80/locatello18a/locatello18a.pdf
|
on-matching-pursuit-and-coordinate-descent-1
| null |
[] |
https://paperswithcode.com/paper/logarithmic-mathematical-morphology-a-new
|
1806.02998
| null | null |
Logarithmic mathematical morphology: a new framework adaptive to illumination changes
|
A new set of mathematical morphology (MM) operators adaptive to illumination
changes caused by variation of exposure time or light intensity is defined
thanks to the Logarithmic Image Processing (LIP) model. This model based on the
physics of acquisition is consistent with human vision. The fundamental
operators, the logarithmic-dilation and the logarithmic-erosion, are defined
with the LIP-addition of a structuring function. The combination of these two
adjunct operators gives morphological filters, namely the logarithmic-opening
and closing, useful for pattern recognition. The mathematical relation existing
between ``classical'' dilation and erosion and their logarithmic-versions is
established, facilitating their implementation. Results on simulated and real
images show that logarithmic-MM is more efficient on low-contrasted information
than ``classical'' MM.
| null |
http://arxiv.org/abs/1806.02998v3
|
http://arxiv.org/pdf/1806.02998v3.pdf
| null |
[
"Guillaume Noyel"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/q-space-novelty-detection-with-variational
|
1806.02997
| null | null |
q-Space Novelty Detection with Variational Autoencoders
|
In machine learning, novelty detection is the task of identifying novel
unseen data. During training, only samples from the normal class are available.
Test samples are classified as normal or abnormal by assignment of a novelty
score. Here we propose novelty detection methods based on training variational
autoencoders (VAEs) on normal data. Since abnormal samples are not used during
training, we define novelty metrics based on the (partially complementary)
assumptions that the VAE is less capable of reconstructing abnormal samples
well; that abnormal samples more strongly violate the VAE regularizer; and that
abnormal samples differ from normal samples not only in input-feature space,
but also in the VAE latent space and VAE output. These approaches, combined
with various possibilities of using (e.g. sampling) the probabilistic VAE to
obtain scalar novelty scores, yield a large family of methods. We apply these
methods to magnetic resonance imaging, namely to the detection of
diffusion-space (q-space) abnormalities in diffusion MRI scans of multiple
sclerosis patients, i.e. to detect multiple sclerosis lesions without using any
lesion labels for training. Many of our methods outperform previously proposed
q-space novelty detection methods. We also evaluate the proposed methods on the
MNIST handwritten digits dataset and show that many of them are able to
outperform the state of the art.
|
Since abnormal samples are not used during training, we define novelty metrics based on the (partially complementary) assumptions that the VAE is less capable of reconstructing abnormal samples well; that abnormal samples more strongly violate the VAE regularizer; and that abnormal samples differ from normal samples not only in input-feature space, but also in the VAE latent space and VAE output.
|
http://arxiv.org/abs/1806.02997v2
|
http://arxiv.org/pdf/1806.02997v2.pdf
| null |
[
"Aleksei Vasilev",
"Vladimir Golkov",
"Marc Meissner",
"Ilona Lipp",
"Eleonora Sgarlata",
"Valentina Tomassini",
"Derek K. Jones",
"Daniel Cremers"
] |
[
"Diffusion MRI",
"Novelty Detection"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "In today’s digital age, USD Coin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're trying to recover a lost USD Coin wallet, knowing where to get help is essential. That’s why the USD Coin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the USD Coin Customer Support Number +1-833-534-1729\r\nUSD Coin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. USD Coin Transaction Not Confirmed\r\nOne of the most common concerns is when a USD Coin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. USD Coin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A USD Coin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost USD Coin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost USD Coin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. USD Coin Deposit Not Received\r\nIf someone has sent you USD Coin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A USD Coin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. USD Coin Transaction Stuck or Pending\r\nSometimes your USD Coin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. USD Coin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word USD Coin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the USD Coin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and USD Coin tech.\r\n\r\n24/7 Availability: USD Coin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About USD Coin Support and Wallet Issues\r\nQ1: Can USD Coin support help me recover stolen BTC?\r\nA: While USD Coin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: USD Coin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not USD Coin’s official number (USD Coin is decentralized), it connects you to trained professionals experienced in resolving all major USD Coin issues.\r\n\r\nFinal Thoughts\r\nUSD Coin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the USD Coin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "USD Coin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "USD Coin Customer Service Number +1-833-534-1729",
"source_title": "Auto-Encoding Variational Bayes",
"source_url": "http://arxiv.org/abs/1312.6114v10"
}
] |
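Several of the novelty metrics described in the abstract above compare a test sample with its VAE reconstruction. The sketch below shows one such score (mean squared reconstruction error averaged over latent samples); `encode` and `decode` are assumed callables standing in for a trained VAE, and the trivial identity stand-ins in the usage line exist only to make the snippet runnable.

```python
import numpy as np

def reconstruction_novelty(x, encode, decode, n_samples=10, seed=0):
    """Novelty score: average squared reconstruction error under the VAE.

    encode(x) -> (mu, log_var) of the approximate posterior  (assumed interface)
    decode(z) -> reconstruction of x from a latent sample z   (assumed interface)
    """
    rng = np.random.default_rng(seed)
    mu, log_var = encode(x)
    errors = [np.mean((x - decode(mu + np.exp(0.5 * log_var)
                                  * rng.standard_normal(mu.shape))) ** 2)
              for _ in range(n_samples)]
    return float(np.mean(errors))  # larger error -> more novel / abnormal

# Toy usage with identity stand-ins for the encoder and decoder.
print(reconstruction_novelty(np.ones(4),
                             encode=lambda x: (x, np.full_like(x, -4.0)),
                             decode=lambda z: z))
```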
https://paperswithcode.com/paper/towards-binary-valued-gates-for-robust-lstm
|
1806.02988
| null |
rJiaRbk0-
|
Towards Binary-Valued Gates for Robust LSTM Training
|
Long Short-Term Memory (LSTM) is one of the most widely used recurrent
structures in sequence modeling. It aims to use gates to control information
flow (e.g., whether to skip some information or not) in the recurrent
computations, although its practical implementation based on soft gates only
partially achieves this goal. In this paper, we propose a new way for LSTM
training, which pushes the output values of the gates towards 0 or 1. By doing
so, we can better control the information flow: the gates are mostly open or
closed, instead of in a middle state, which makes the results more
interpretable. Empirical studies show that (1) Although it seems that we
restrict the model capacity, there is no performance drop: we achieve better or
comparable performances due to its better generalization ability; (2) The
outputs of gates are not sensitive to their inputs: we can easily compress the
LSTM unit in multiple ways, e.g., low-rank approximation and low-precision
approximation. The compressed models are even better than the baseline models
without compression.
|
Long Short-Term Memory (LSTM) is one of the most widely used recurrent structures in sequence modeling.
|
http://arxiv.org/abs/1806.02988v1
|
http://arxiv.org/pdf/1806.02988v1.pdf
|
ICML 2018 7
|
[
"Zhuohan Li",
"Di He",
"Fei Tian",
"Wei Chen",
"Tao Qin",
"Li-Wei Wang",
"Tie-Yan Liu"
] |
[] | 2018-06-08T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1916
|
http://proceedings.mlr.press/v80/li18c/li18c.pdf
|
towards-binary-valued-gates-for-robust-lstm-1
| null |
[] |
https://paperswithcode.com/paper/a-systematic-evaluation-of-recent-deep
|
1806.02987
| null | null |
A Systematic Evaluation of Recent Deep Learning Architectures for Fine-Grained Vehicle Classification
|
Fine-grained vehicle classification is the task of classifying make, model,
and year of a vehicle. This is a very challenging task, because vehicles of
different types but similar color and viewpoint can often look much more
similar than vehicles of the same type but differing color and viewpoint. Vehicle
make, model, and year, in combination with vehicle color, are of importance
in several applications such as vehicle search, re-identification, tracking,
and traffic analysis. In this work we investigate the suitability of several
recent landmark convolutional neural network (CNN) architectures, which have
shown top results on large scale image classification tasks, for the task of
fine-grained classification of vehicles. We compare the performance of the
networks VGG16, several ResNets, Inception architectures, the recent DenseNets,
and MobileNet. For classification we use the Stanford Cars-196 dataset which
features 196 different types of vehicles. We investigate several aspects of CNN
training, such as data augmentation and training from scratch vs. fine-tuning.
Importantly, we introduce no aspects in the architectures or training process
which are specific to vehicle classification. Our final model achieves a
state-of-the-art classification accuracy of 94.6% outperforming all related
works, even approaches which are specifically tailored for the task, e.g. by
including vehicle part detections.
|
Fine-grained vehicle classification is the task of classifying make, model, and year of a vehicle.
|
http://arxiv.org/abs/1806.02987v1
|
http://arxiv.org/pdf/1806.02987v1.pdf
| null |
[
"Krassimir Valev",
"Arne Schumann",
"Lars Sommer",
"Jürgen Beyerer"
] |
[
"Classification",
"Data Augmentation",
"Fine-Grained Vehicle Classification",
"General Classification",
"image-classification",
"Image Classification"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/continuous-time-value-function-approximation
|
1806.02985
| null | null |
Continuous-time Value Function Approximation in Reproducing Kernel Hilbert Spaces
|
Motivated by the success of reinforcement learning (RL) for discrete-time
tasks such as AlphaGo and Atari games, there has been a recent surge of
interest in using RL for continuous-time control of physical systems (cf. many
challenging tasks in OpenAI Gym and DeepMind Control Suite). Since
discretization of time is susceptible to error, it is methodologically more
desirable to handle the system dynamics directly in continuous time. However,
very few techniques exist for continuous-time RL and they lack flexibility in
value function approximation. In this paper, we propose a novel framework for
model-based continuous-time value function approximation in reproducing kernel
Hilbert spaces. The resulting framework is so flexible that it can accommodate
any kind of kernel-based approach, such as Gaussian processes and kernel
adaptive filters, and it allows us to handle uncertainties and nonstationarity
without prior knowledge about the environment or what basis functions to
employ. We demonstrate the validity of the presented framework through
experiments.
| null |
http://arxiv.org/abs/1806.02985v3
|
http://arxiv.org/pdf/1806.02985v3.pdf
|
NeurIPS 2018 12
|
[
"Motoya Ohnishi",
"Masahiro Yukawa",
"Mikael Johansson",
"Masashi Sugiyama"
] |
[
"Atari Games",
"Gaussian Processes",
"OpenAI Gym",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-08T00:00:00 |
http://papers.nips.cc/paper/7546-continuous-time-value-function-approximation-in-reproducing-kernel-hilbert-spaces
|
http://papers.nips.cc/paper/7546-continuous-time-value-function-approximation-in-reproducing-kernel-hilbert-spaces.pdf
|
continuous-time-value-function-approximation-1
| null |
[] |
https://paperswithcode.com/paper/classification-from-pairwise-similarity-and
|
1802.04381
| null | null |
Classification from Pairwise Similarity and Unlabeled Data
|
Supervised learning needs a huge amount of labeled data, which can be a big
bottleneck under the situation where there is a privacy concern or labeling
cost is high. To overcome this problem, we propose a new weakly-supervised
learning setting where only similar (S) data pairs (two examples belonging to the
same class) and unlabeled (U) data points are needed instead of fully labeled
data, which is called SU classification. We show that an unbiased estimator of
the classification risk can be obtained only from SU data, and the estimation
error of its empirical risk minimizer achieves the optimal parametric
convergence rate. Finally, we demonstrate the effectiveness of the proposed
method through experiments.
|
Supervised learning needs a huge amount of labeled data, which can be a big bottleneck under the situation where there is a privacy concern or labeling cost is high.
|
http://arxiv.org/abs/1802.04381v3
|
http://arxiv.org/pdf/1802.04381v3.pdf
|
ICML 2018 7
|
[
"Han Bao",
"Gang Niu",
"Masashi Sugiyama"
] |
[
"Classification",
"General Classification",
"Weakly-supervised Learning"
] | 2018-02-12T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2134
|
http://proceedings.mlr.press/v80/bao18a/bao18a.pdf
|
classification-from-pairwise-similarity-and-1
| null |
[] |
https://paperswithcode.com/paper/jointgan-multi-domain-joint-distribution
|
1806.02978
| null | null |
JointGAN: Multi-Domain Joint Distribution Learning with Generative Adversarial Nets
|
A new generative adversarial network is developed for joint distribution
matching. Distinct from most existing approaches, which only learn conditional
distributions, the proposed model aims to learn a joint distribution of
multiple random variables (domains). This is achieved by learning to sample
from conditional distributions between the domains, while simultaneously
learning to sample from the marginals of each individual domain. The proposed
framework consists of multiple generators and a single softmax-based critic,
all jointly trained via adversarial learning. From a simple noise source, the
proposed framework allows synthesis of draws from the marginals, conditional
draws given observations from a subset of random variables, or complete draws
from the full joint distribution. Most examples considered are for joint
analysis of two domains, with examples for three domains also presented.
|
Distinct from most existing approaches, which only learn conditional distributions, the proposed model aims to learn a joint distribution of multiple random variables (domains).
|
http://arxiv.org/abs/1806.02978v1
|
http://arxiv.org/pdf/1806.02978v1.pdf
|
ICML 2018 7
|
[
"Yunchen Pu",
"Shuyang Dai",
"Zhe Gan",
"Wei-Yao Wang",
"Guoyin Wang",
"Yizhe Zhang",
"Ricardo Henao",
"Lawrence Carin"
] |
[
"Generative Adversarial Network"
] | 2018-06-08T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2161
|
http://proceedings.mlr.press/v80/pu18a/pu18a.pdf
|
jointgan-multi-domain-joint-distribution-1
| null |
[] |
https://paperswithcode.com/paper/monge-beats-bayes-hardness-results-for
|
1806.02977
| null | null |
Monge blunts Bayes: Hardness Results for Adversarial Training
|
The last few years have seen a staggering number of empirical studies of the robustness of neural networks in a model of adversarial perturbations of their inputs. Most rely on an adversary which carries out local modifications within prescribed balls. None however has so far questioned the broader picture: how to frame a resource-bounded adversary so that it can be severely detrimental to learning, a non-trivial problem which entails at a minimum the choice of loss and classifiers. We suggest a formal answer for losses that satisfy the minimal statistical requirement of being proper. We pin down a simple sufficient property for any given class of adversaries to be detrimental to learning, involving a central measure of "harmfulness" which generalizes the well-known class of integral probability metrics. A key feature of our result is that it holds for all proper losses, and for a popular subset of these, the optimisation of this central measure appears to be independent of the loss. When classifiers are Lipschitz -- a now popular approach in adversarial training --, this optimisation resorts to optimal transport to make a low-budget compression of class marginals. Toy experiments reveal a finding recently separately observed: training against a sufficiently budgeted adversary of this kind improves generalization.
| null |
https://arxiv.org/abs/1806.02977v4
|
https://arxiv.org/pdf/1806.02977v4.pdf
| null |
[
"Zac Cranko",
"Aditya Krishna Menon",
"Richard Nock",
"Cheng Soon Ong",
"Zhan Shi",
"Christian Walder"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fingerprint-liveness-detection-using-local
|
1806.02974
| null | null |
Fingerprint liveness detection using local quality features
|
Fingerprint-based recognition has been widely deployed in various
applications. However, current recognition systems are vulnerable to spoofing
attacks which make use of an artificial replica of a fingerprint to deceive the
sensors. In such scenarios, fingerprint liveness detection ensures the actual
presence of a real legitimate fingerprint in contrast to a fake
self-manufactured synthetic sample. In this paper, we propose a static
software-based approach using quality features to detect the liveness in a
fingerprint. We have extracted features from a single fingerprint image to
overcome the issues faced in dynamic software-based approaches which require
longer computational time and user cooperation. The proposed system extracts 8
sensor-independent quality features at a local level containing minute details
of the ridge-valley structure of real and fake fingerprints. These local
quality features constitutes a 13-dimensional feature vector. The system is
tested on a publically available dataset of LivDet 2009 competition. The
experimental results exhibit supremacy of the proposed method over current
state-of-the-art approaches providing least average classification error of
5.3% for LivDet 2009. Additionally, effectiveness of the best performing
features over LivDet 2009 is evaluated on the latest LivDet 2015 dataset which
contain fingerprints fabricated using unknown spoof materials. An average
classification error rate of 4.22% is achieved in comparison with 4.49%
obtained by the LivDet 2015 winner. Further, the proposed system utilizes a
single fingerprint image, which results in faster implications and makes it
more user-friendly.
| null |
http://arxiv.org/abs/1806.02974v1
|
http://arxiv.org/pdf/1806.02974v1.pdf
| null |
[
"Ram Prakash Sharma",
"Somnath Dey"
] |
[
"General Classification"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/discrete-continuous-mixtures-in-probabilistic
|
1806.02027
| null | null |
Discrete-Continuous Mixtures in Probabilistic Programming: Generalized Semantics and Inference Algorithms
|
Despite the recent successes of probabilistic programming languages (PPLs) in
AI applications, PPLs offer only limited support for random variables whose
distributions combine discrete and continuous elements. We develop the notion
of measure-theoretic Bayesian networks (MTBNs) and use it to provide more
general semantics for PPLs with arbitrarily many random variables defined over
arbitrary measure spaces. We develop two new general sampling algorithms that
are provably correct under the MTBN framework: the lexicographic likelihood
weighting (LLW) for general MTBNs and the lexicographic particle filter (LPF),
a specialized algorithm for state-space models. We further integrate MTBNs into
a widely used PPL system, BLOG, and verify the effectiveness of the new
inference algorithms through representative examples.
| null |
http://arxiv.org/abs/1806.02027v3
|
http://arxiv.org/pdf/1806.02027v3.pdf
|
ICML 2018 7
|
[
"Yi Wu",
"Siddharth Srivastava",
"Nicholas Hay",
"Simon Du",
"Stuart Russell"
] |
[
"Probabilistic Programming",
"State Space Models"
] | 2018-06-06T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2009
|
http://proceedings.mlr.press/v80/wu18f/wu18f.pdf
|
discrete-continuous-mixtures-in-probabilistic-1
| null |
[] |
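As a point of reference for the record above, the sketch below runs classical likelihood weighting on a tiny hand-rolled model that mixes a discrete and a continuous latent variable. It is not the paper's lexicographic LLW or LPF algorithms, and the model, names, and numbers are illustrative assumptions.

```python
import numpy as np

def likelihood_weighting(y_obs, n_samples=10_000, seed=0):
    """Classical likelihood weighting on a toy mixed model:
    z ~ Bernoulli(0.3) (discrete), x | z ~ Normal(mu_z, 1) (continuous),
    y | x ~ Normal(x, 0.5) is observed. Returns an estimate of E[x | y = y_obs]."""
    rng = np.random.default_rng(seed)
    mu = np.array([0.0, 3.0])

    z = (rng.random(n_samples) < 0.3).astype(int)  # sample discrete latent from its prior
    x = rng.normal(mu[z], 1.0)                      # sample continuous latent given z
    w = np.exp(-0.5 * ((y_obs - x) / 0.5) ** 2)     # weight by likelihood of the evidence
    w /= w.sum()
    return float(np.sum(w * x))                     # self-normalised importance estimate

print(likelihood_weighting(2.0))
```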
https://paperswithcode.com/paper/pac-ranking-from-pairwise-and-listwise
|
1806.02970
| null | null |
PAC Ranking from Pairwise and Listwise Queries: Lower Bounds and Upper Bounds
|
This paper explores the adaptive (active) PAC (probably approximately
correct) top-$k$ ranking (i.e., top-$k$ item selection) and total ranking
problems from $l$-wise ($l\geq 2$) comparisons under the multinomial logit
(MNL) model. By adaptively choosing sets to query and observing the noisy
output of the most favored item of each query, we want to design ranking
algorithms that recover the top-$k$ or total ranking using as few queries as
possible. For the PAC top-$k$ ranking problem, we derive a lower bound on the
sample complexity (aka number of queries), and propose an algorithm that is
sample-complexity-optimal up to an $O(\log(k+l)/\log{k})$ factor. When $l=2$
(i.e., pairwise comparisons) or $l=O(poly(k))$, this algorithm matches the
lower bound. For the PAC total ranking problem, we derive a tight lower bound,
and propose an algorithm that matches the lower bound. When $l=2$, the MNL
model reduces to the popular Plackett-Luce (PL) model. In this setting, our
results still outperform the state-of-the-art both theoretically and
numerically. We also compare our algorithms with the state-of-the-art using
synthetic data as well as real-world data to verify the efficiency of our
algorithms.
| null |
http://arxiv.org/abs/1806.02970v2
|
http://arxiv.org/pdf/1806.02970v2.pdf
| null |
[
"Wenbo Ren",
"Jia Liu",
"Ness B. Shroff"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/message-passing-stein-variational-gradient
|
1711.04425
| null | null |
Message Passing Stein Variational Gradient Descent
|
Stein variational gradient descent (SVGD) is a recently proposed
particle-based Bayesian inference method, which has attracted a lot of interest
due to its remarkable approximation ability and particle efficiency compared to
traditional variational inference and Markov Chain Monte Carlo methods.
However, we observed that particles of SVGD tend to collapse to modes of the
target distribution, and this particle degeneracy phenomenon becomes more
severe with higher dimensions. Our theoretical analysis finds that there is
a negative correlation between the dimensionality and the repulsive force of
SVGD, which is to blame for this phenomenon. We propose Message
Passing SVGD (MP-SVGD) to solve this problem. By leveraging the conditional
independence structure of probabilistic graphical models (PGMs), MP-SVGD
converts the original high-dimensional global inference problem into a set of
local ones over the Markov blanket with lower dimensions. Experimental results
show its advantages of preventing vanishing repulsive force in high-dimensional
space over SVGD, and its particle efficiency and approximation flexibility over
other inference methods on graphical models.
| null |
http://arxiv.org/abs/1711.04425v3
|
http://arxiv.org/pdf/1711.04425v3.pdf
|
ICML 2018 7
|
[
"Jingwei Zhuo",
"Chang Liu",
"Jiaxin Shi",
"Jun Zhu",
"Ning Chen",
"Bo Zhang"
] |
[
"Bayesian Inference",
"Variational Inference"
] | 2017-11-13T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1915
|
http://proceedings.mlr.press/v80/zhuo18a/zhuo18a.pdf
|
message-passing-stein-variational-gradient-1
| null |
[] |
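The record above analyses the repulsive force of the standard SVGD particle update. A minimal sketch of one vanilla (global, non-message-passing) SVGD step with a fixed-bandwidth RBF kernel is given below; the target density, bandwidth, and names are illustrative assumptions.

```python
import numpy as np

def svgd_step(x, grad_logp, step=0.1, h=1.0):
    """One vanilla SVGD update. x: (n, d) particles; grad_logp(x) -> (n, d)."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]           # diff[j, i] = x_j - x_i
    k = np.exp(-np.sum(diff ** 2, axis=-1) / h)    # RBF kernel k(x_j, x_i)
    grad_k = -2.0 / h * diff * k[..., None]        # gradient of k(x_j, x_i) w.r.t. x_j
    # driving term pulls particles toward high density; grad_k is the repulsive force
    phi = (k.T @ grad_logp(x) + grad_k.sum(axis=0)) / n
    return x + step * phi

# toy target: standard 2-D Gaussian, so grad log p(x) = -x
rng = np.random.default_rng(0)
particles = rng.normal(size=(50, 2)) + 5.0
for _ in range(300):
    particles = svgd_step(particles, lambda x: -x)
print(particles.mean(axis=0))   # should drift toward the origin
```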
https://paperswithcode.com/paper/locating-the-boundaries-of-pareto-fronts-a
|
1806.02967
| null | null |
Locating the boundaries of Pareto fronts: A Many-Objective Evolutionary Algorithm Based on Corner Solution Search
|
In this paper, an evolutionary many-objective optimization algorithm based on
corner solution search (MaOEA-CS) is proposed. MaOEA-CS implicitly contains
two phases: in the first phase, an exploitative search for the most important
boundary optimal solutions (corner solutions); in the second phase, angle-based
selection [1] combined with an explorative search to extend the PF approximation.
Due to its high efficiency and robustness to the shapes of
PFs, it has won the CEC'2017 Competition on Evolutionary Many-Objective
Optimization. In addition, MaOEA-CS has also been applied on two real-world
engineering optimization problems with very irregular PFs. The experimental
results show that MaOEA-CS outperforms other six state-of-the-art compared
algorithms, which indicates it has the ability to handle real-world complex
optimization problems with irregular PFs.
| null |
http://arxiv.org/abs/1806.02967v1
|
http://arxiv.org/pdf/1806.02967v1.pdf
| null |
[
"Xinye Cai",
"Haoran Sun",
"Chunyang Zhu",
"Zhenyu Li",
"Qingfu Zhang"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sgd-and-hogwild-convergence-without-the
|
1802.03801
| null | null |
SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
|
Stochastic gradient descent (SGD) is the optimization algorithm of choice in
many machine learning applications such as regularized empirical risk
minimization and training deep neural networks. The classical convergence
analysis of SGD is carried out under the assumption that the norm of the
stochastic gradient is uniformly bounded. While this might hold for some loss
functions, it is always violated for cases where the objective function is
strongly convex. In (Bottou et al.,2016), a new analysis of convergence of SGD
is performed under the assumption that stochastic gradients are bounded with
respect to the true gradient norm. Here we show that for stochastic problems
arising in machine learning such bound always holds; and we also propose an
alternative convergence analysis of SGD with diminishing learning rate regime,
which results in more relaxed conditions than those in (Bottou et al.,2016). We
then move on to the asynchronous parallel setting, and prove convergence of the
Hogwild! algorithm in the same regime, obtaining the first convergence results
for this method in the case of a diminishing learning rate.
| null |
http://arxiv.org/abs/1802.03801v2
|
http://arxiv.org/pdf/1802.03801v2.pdf
|
ICML 2018 7
|
[
"Lam M. Nguyen",
"Phuong Ha Nguyen",
"Marten van Dijk",
"Peter Richtárik",
"Katya Scheinberg",
"Martin Takáč"
] |
[
"BIG-bench Machine Learning"
] | 2018-02-11T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2313
|
http://proceedings.mlr.press/v80/nguyen18c/nguyen18c.pdf
|
sgd-and-hogwild-convergence-without-the-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
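The SGD method entry in the record above states the minibatch update rule $w_{t+1} = w_t - \eta \hat{\nabla}_w L(w_t)$. A minimal NumPy sketch of that rule on a toy least-squares problem follows; the problem, hyperparameters, and names are illustrative assumptions, and no Hogwild!-style asynchrony is modelled.

```python
import numpy as np

def sgd(grad_fn, w0, data, lr=0.1, epochs=5, batch=32, seed=0):
    """Plain minibatch SGD: w <- w - lr * stochastic gradient on a sampled batch."""
    rng = np.random.default_rng(seed)
    X, y = data
    w = w0.copy()
    for _ in range(epochs):
        idx = rng.permutation(len(y))
        for start in range(0, len(y), batch):
            b = idx[start:start + batch]
            w -= lr * grad_fn(w, X[b], y[b])
    return w

# toy least-squares problem: minimise 0.5 * ||Xw - y||^2 / n
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5)); w_true = np.arange(5.0); y = X @ w_true
grad = lambda w, Xb, yb: Xb.T @ (Xb @ w - yb) / len(yb)
print(sgd(grad, np.zeros(5), (X, y)))   # should approach w_true = [0, 1, 2, 3, 4]
```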
https://paperswithcode.com/paper/bsn-boundary-sensitive-network-for-temporal
|
1806.02964
| null | null |
BSN: Boundary Sensitive Network for Temporal Action Proposal Generation
|
Temporal action proposal generation is an important yet challenging problem,
since temporal proposals with rich action content are indispensable for
analysing real-world videos with long duration and a high proportion of
irrelevant content. This problem requires methods that not only generate
proposals with precise temporal boundaries, but also retrieve proposals that
cover ground-truth action instances with high recall and high overlap using relatively few
proposals. To address these difficulties, we introduce an effective proposal
generation method, named Boundary-Sensitive Network (BSN), which adopts "local
to global" fashion. Locally, BSN first locates temporal boundaries with high
probabilities, then directly combines these boundaries as proposals. Globally,
with Boundary-Sensitive Proposal feature, BSN retrieves proposals by evaluating
the confidence of whether a proposal contains an action within its region. We
conduct experiments on two challenging datasets: ActivityNet-1.3 and THUMOS14,
where BSN outperforms other state-of-the-art temporal action proposal
generation methods with high recall and high temporal precision. Finally,
further experiments demonstrate that by combining existing action classifiers,
our method significantly improves the state-of-the-art temporal action
detection performance.
|
Temporal action proposal generation is an important yet challenging problem, since temporal proposals with rich action content are indispensable for analysing real-world videos with long duration and a high proportion of irrelevant content.
|
http://arxiv.org/abs/1806.02964v3
|
http://arxiv.org/pdf/1806.02964v3.pdf
|
ECCV 2018 9
|
[
"Tianwei Lin",
"Xu Zhao",
"Haisheng Su",
"Chongjing Wang",
"Ming Yang"
] |
[
"Action Detection",
"Temporal Action Localization",
"Temporal Action Proposal Generation"
] | 2018-06-08T00:00:00 |
http://openaccess.thecvf.com/content_ECCV_2018/html/Tianwei_Lin_BSN_Boundary_Sensitive_ECCV_2018_paper.html
|
http://openaccess.thecvf.com/content_ECCV_2018/papers/Tianwei_Lin_BSN_Boundary_Sensitive_ECCV_2018_paper.pdf
|
bsn-boundary-sensitive-network-for-temporal-1
| null |
[] |
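The BSN abstract above describes a "local to global" scheme: first locate temporal boundaries with high probability, then combine start/end boundaries into candidate proposals. The sketch below illustrates only that pairing step; the thresholds, scoring, and names are illustrative assumptions, and the Boundary-Sensitive Proposal feature used for confidence evaluation is omitted.

```python
def boundaries_to_proposals(start_prob, end_prob, thr=0.5, max_len=100):
    """Pair high-probability start/end locations into scored candidate proposals.
    start_prob, end_prob: per-snippet boundary probabilities of equal length."""
    starts = [t for t, p in enumerate(start_prob) if p >= thr]
    ends = [t for t, p in enumerate(end_prob) if p >= thr]
    proposals = []
    for s in starts:
        for e in ends:
            if s < e <= s + max_len:                  # keep valid temporal ordering
                score = start_prob[s] * end_prob[e]   # simple fused boundary score
                proposals.append((s, e, score))
    return sorted(proposals, key=lambda p: -p[2])

print(boundaries_to_proposals([0.1, 0.9, 0.2, 0.1], [0.1, 0.1, 0.2, 0.8]))
```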
https://paperswithcode.com/paper/revisiting-the-poverty-of-the-stimulus
|
1802.09091
| null | null |
Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks
|
Syntactic rules in natural language typically need to make reference to
hierarchical sentence structure. However, the simple examples that language
learners receive are often equally compatible with linear rules. Children
consistently ignore these linear explanations and settle instead on the correct
hierarchical one. This fact has motivated the proposal that the learner's
hypothesis space is constrained to include only hierarchical rules. We examine
this proposal using recurrent neural networks (RNNs), which are not constrained
in such a way. We simulate the acquisition of question formation, a
hierarchical transformation, in a fragment of English. We find that some RNN
architectures tend to learn the hierarchical rule, suggesting that hierarchical
cues within the language, combined with the implicit architectural biases
inherent in certain RNNs, may be sufficient to induce hierarchical
generalizations. The likelihood of acquiring the hierarchical generalization
increased when the language included an additional cue to hierarchy in the form
of subject-verb agreement, underscoring the role of cues to hierarchy in the
learner's input.
| null |
http://arxiv.org/abs/1802.09091v3
|
http://arxiv.org/pdf/1802.09091v3.pdf
| null |
[
"R. Thomas McCoy",
"Robert Frank",
"Tal Linzen"
] |
[
"Sentence"
] | 2018-02-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lipschitz-continuity-in-model-based
|
1804.07193
| null | null |
Lipschitz Continuity in Model-based Reinforcement Learning
|
We examine the impact of learning Lipschitz continuous models in the context
of model-based reinforcement learning. We provide a novel bound on multi-step
prediction error of Lipschitz models where we quantify the error using the
Wasserstein metric. We go on to prove an error bound for the value-function
estimate arising from Lipschitz models and show that the estimated value
function is itself Lipschitz. We conclude with empirical results that show the
benefits of controlling the Lipschitz constant of neural-network models.
|
We go on to prove an error bound for the value-function estimate arising from Lipschitz models and show that the estimated value function is itself Lipschitz.
|
http://arxiv.org/abs/1804.07193v3
|
http://arxiv.org/pdf/1804.07193v3.pdf
|
ICML 2018 7
|
[
"Kavosh Asadi",
"Dipendra Misra",
"Michael L. Littman"
] |
[
"model",
"Model-based Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-04-19T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2467
|
http://proceedings.mlr.press/v80/asadi18a/asadi18a.pdf
|
lipschitz-continuity-in-model-based-1
| null |
[] |
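The record above is about controlling the Lipschitz constant of a learned transition model. As a rough diagnostic (not the paper's Wasserstein-based analysis), the sketch below lower-bounds a model's Lipschitz constant by the largest pairwise slope ratio over sampled inputs; the toy model and names are illustrative assumptions.

```python
import numpy as np

def empirical_lipschitz(model, inputs):
    """Lower-bound the Lipschitz constant of `model` by the largest pairwise
    slope ratio ||f(x) - f(y)|| / ||x - y|| over the provided inputs."""
    outputs = [model(x) for x in inputs]
    best = 0.0
    for i in range(len(inputs)):
        for j in range(i + 1, len(inputs)):
            dx = np.linalg.norm(inputs[i] - inputs[j])
            if dx > 1e-12:
                best = max(best, np.linalg.norm(outputs[i] - outputs[j]) / dx)
    return best

# toy "transition model": a fixed linear map whose true constant is its spectral norm
A = np.array([[0.9, 0.1], [0.0, 0.5]])
rng = np.random.default_rng(0)
xs = list(rng.normal(size=(50, 2)))
print(empirical_lipschitz(lambda x: A @ x, xs), np.linalg.norm(A, 2))
```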
https://paperswithcode.com/paper/representation-learning-of-entities-and
|
1806.02960
| null | null |
Representation Learning of Entities and Documents from Knowledge Base Descriptions
|
In this paper, we describe TextEnt, a neural network model that learns
distributed representations of entities and documents directly from a knowledge
base (KB). Given a document in a KB consisting of words and entity annotations,
we train our model to predict the entity that the document describes and map
the document and its target entity close to each other in a continuous vector
space. Our model is trained using a large number of documents extracted from
Wikipedia. The performance of the proposed model is evaluated using two tasks,
namely fine-grained entity typing and multiclass text classification. The
results demonstrate that our model achieves state-of-the-art performance on
both tasks. The code and the trained representations are made available online
for further academic research.
|
In this paper, we describe TextEnt, a neural network model that learns distributed representations of entities and documents directly from a knowledge base (KB).
|
http://arxiv.org/abs/1806.02960v1
|
http://arxiv.org/pdf/1806.02960v1.pdf
|
COLING 2018 8
|
[
"Ikuya Yamada",
"Hiroyuki Shindo",
"Yoshiyasu Takefuji"
] |
[
"Entity Typing",
"General Classification",
"Representation Learning",
"text-classification",
"Text Classification"
] | 2018-06-08T00:00:00 |
https://aclanthology.org/C18-1016
|
https://aclanthology.org/C18-1016.pdf
|
representation-learning-of-entities-and-1
| null |
[] |
https://paperswithcode.com/paper/a-theoretical-explanation-for-perplexing
|
1805.07039
| null | null |
A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations
|
Backpropagation-based visualizations have been proposed to interpret convolutional neural networks (CNNs), however a theory is missing to justify their behaviors: Guided backpropagation (GBP) and deconvolutional network (DeconvNet) generate more human-interpretable but less class-sensitive visualizations than saliency map. Motivated by this, we develop a theoretical explanation revealing that GBP and DeconvNet are essentially doing (partial) image recovery which is unrelated to the network decisions. Specifically, our analysis shows that the backward ReLU introduced by GBP and DeconvNet, and the local connections in CNNs are the two main causes of compelling visualizations. Extensive experiments are provided that support the theoretical analysis.
|
Backpropagation-based visualizations have been proposed to interpret convolutional neural networks (CNNs), however a theory is missing to justify their behaviors: Guided backpropagation (GBP) and deconvolutional network (DeconvNet) generate more human-interpretable but less class-sensitive visualizations than saliency map.
|
https://arxiv.org/abs/1805.07039v4
|
https://arxiv.org/pdf/1805.07039v4.pdf
|
ICML 2018 7
|
[
"Weili Nie",
"Yang Zhang",
"Ankit Patel"
] |
[] | 2018-05-18T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2279
|
http://proceedings.mlr.press/v80/nie18a/nie18a.pdf
|
a-theoretical-explanation-for-perplexing-1
| null |
[
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/the-case-for-full-matrix-adaptive
|
1806.02958
| null |
rkxd2oR9Y7
|
Efficient Full-Matrix Adaptive Regularization
|
Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix. Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive. We show how to modify full-matrix adaptive regularization in order to make it practical and effective. We also provide a novel theoretical analysis for adaptive regularization in non-convex optimization settings. The core of our algorithm, termed GGT, consists of the efficient computation of the inverse square root of a low-rank matrix. Our preliminary experiments show improved iteration-wise convergence rates across synthetic tasks and standard deep learning benchmarks, and that the more carefully-preconditioned steps sometimes lead to a better solution.
| null |
https://arxiv.org/abs/1806.02958v2
|
https://arxiv.org/pdf/1806.02958v2.pdf
|
ICLR 2019 5
|
[
"Naman Agarwal",
"Brian Bullins",
"Xinyi Chen",
"Elad Hazan",
"Karan Singh",
"Cyril Zhang",
"Yi Zhang"
] |
[] | 2018-06-08T00:00:00 |
https://openreview.net/forum?id=rkxd2oR9Y7
|
https://openreview.net/pdf?id=rkxd2oR9Y7
|
the-case-for-full-matrix-adaptive-1
| null |
[] |
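The GGT record above says its core is the efficient computation of the inverse square root of a low-rank matrix built from a window of recent gradients. The sketch below shows one way such a computation can be done with only the thin SVD of the gradient window, under the assumption that the preconditioner has the form $(GG^T + \epsilon I)^{-1/2}$; it is a hedged illustration of the low-rank trick, not the full optimizer, and all names are illustrative.

```python
import numpy as np

def inv_sqrt_apply(G, g, eps=1e-4):
    """Apply (G G^T + eps*I)^(-1/2) to g using only the thin SVD of G (d x r, r << d).
    On span(U) the eigenvalues are s_i^2 + eps; on its orthogonal complement, eps."""
    U, s, _ = np.linalg.svd(G, full_matrices=False)   # U: (d, r), s: (r,)
    coeff = U.T @ g
    in_span = U @ (coeff / np.sqrt(s ** 2 + eps))
    out_of_span = (g - U @ coeff) / np.sqrt(eps)
    return in_span + out_of_span

# check against a dense eigendecomposition on a small example
rng = np.random.default_rng(0)
G, g, eps = rng.normal(size=(50, 5)), rng.normal(size=50), 1e-4
w, V = np.linalg.eigh(G @ G.T + eps * np.eye(50))
dense = V @ ((V.T @ g) / np.sqrt(w))
print(np.allclose(inv_sqrt_apply(G, g, eps), dense))   # True
```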
https://paperswithcode.com/paper/dank-learning-generating-memes-using-deep
|
1806.04510
| null | null |
Dank Learning: Generating Memes Using Deep Neural Networks
|
We introduce a novel meme generation system, which given any image can
produce a humorous and relevant caption. Furthermore, the system can be
conditioned on not only an image but also a user-defined label relating to the
meme template, giving a handle to the user on meme content. The system uses a
pretrained Inception-v3 network to return an image embedding which is passed to
an attention-based deep-layer LSTM model producing the caption - inspired by
the widely recognised Show and Tell Model. We implement a modified beam search
to encourage diversity in the captions. We evaluate the quality of our model
using perplexity and human assessment on both the quality of memes generated
and whether they can be differentiated from real ones. Our model produces
original memes that cannot on the whole be differentiated from real ones.
|
We introduce a novel meme generation system, which given any image can produce a humorous and relevant caption.
|
http://arxiv.org/abs/1806.04510v1
|
http://arxiv.org/pdf/1806.04510v1.pdf
| null |
[
"Abel L Peirson V",
"E Meltem Tolunay"
] |
[
"Diversity"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Auxiliary Classifiers** are type of architectural component that seek to improve the convergence of very deep networks. They are classifier heads we attach to layers before the end of the network. The motivation is to push useful gradients to the lower layers to make them immediately useful and improve the convergence during training by combatting the vanishing gradient problem. They are notably used in the Inception family of convolutional neural networks.",
"full_name": "Auxiliary Classifier",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "The following is a list of miscellaneous components used in neural networks.",
"name": "Miscellaneous Components",
"parent": null
},
"name": "Auxiliary Classifier",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/fd8e2064e094f301d910b91a757b860aae3e3116/torch/optim/rmsprop.py#L69-L108",
"description": "**RMSProp** is an unpublished adaptive learning rate optimizer [proposed by Geoff Hinton](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). The motivation is that the magnitude of gradients can differ for different weights, and can change during learning, making it hard to choose a single global learning rate. RMSProp tackles this by keeping a moving average of the squared gradient and adjusting the weight updates by this magnitude. The gradient updates are performed as:\r\n\r\n$$E\\left[g^{2}\\right]\\_{t} = \\gamma E\\left[g^{2}\\right]\\_{t-1} + \\left(1 - \\gamma\\right) g^{2}\\_{t}$$\r\n\r\n$$\\theta\\_{t+1} = \\theta\\_{t} - \\frac{\\eta}{\\sqrt{E\\left[g^{2}\\right]\\_{t} + \\epsilon}}g\\_{t}$$\r\n\r\nHinton suggests $\\gamma=0.9$, with a good default for $\\eta$ as $0.001$.\r\n\r\nImage: [Alec Radford](https://twitter.com/alecrad)",
"full_name": "RMSProp",
"introduced_year": 2013,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "RMSProp",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/inception.py#L210",
"description": "**Inception-v3 Module** is an image block used in the [Inception-v3](https://paperswithcode.com/method/inception-v3) architecture. This architecture is used on the coarsest (8 × 8) grids to promote high dimensional representations.",
"full_name": "Inception-v3 Module",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Inception-v3 Module",
"source_title": "Rethinking the Inception Architecture for Computer Vision",
"source_url": "http://arxiv.org/abs/1512.00567v3"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/inception.py#L64",
"description": "**Inception-v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an auxiliary classifer to propagate label information lower down the network (along with the use of [batch normalization](https://paperswithcode.com/method/batch-normalization) for layers in the sidehead).",
"full_name": "Inception-v3",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "Inception-v3",
"source_title": "Rethinking the Inception Architecture for Computer Vision",
"source_url": "http://arxiv.org/abs/1512.00567v3"
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
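Several method entries in the record above state explicit update rules; the RMSProp entry, for instance, maps directly to a few lines of NumPy. The sketch below applies that stated update to a toy badly-scaled quadratic; the objective, hyperparameters, and names are illustrative assumptions.

```python
import numpy as np

def rmsprop(grad_fn, w0, lr=1e-3, gamma=0.9, eps=1e-8, steps=3000):
    """RMSProp as stated in the record: keep a moving average of squared
    gradients and rescale the step by its square root.
    E[g^2]_t = gamma * E[g^2]_{t-1} + (1 - gamma) * g_t^2
    w_{t+1}  = w_t - lr * g_t / sqrt(E[g^2]_t + eps)"""
    w, avg_sq = w0.copy(), np.zeros_like(w0)
    for _ in range(steps):
        g = grad_fn(w)
        avg_sq = gamma * avg_sq + (1 - gamma) * g ** 2
        w -= lr * g / np.sqrt(avg_sq + eps)
    return w

# toy badly-scaled quadratic: f(w) = 0.5 * (100 * w0^2 + w1^2)
print(rmsprop(lambda w: np.array([100.0 * w[0], w[1]]), np.array([1.0, 1.0])))
# both coordinates end up near zero despite the 100x difference in curvature
```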
https://paperswithcode.com/paper/a-deep-neural-network-surrogate-for-high
|
1806.02957
| null | null |
A Deep Neural Network Surrogate for High-Dimensional Random Partial Differential Equations
|
Developing efficient numerical algorithms for the solution of high
dimensional random Partial Differential Equations (PDEs) has been a challenging
task due to the well-known curse of dimensionality. We present a new solution
framework for these problems based on a deep learning approach. Specifically,
the random PDE is approximated by a feed-forward fully-connected deep residual
network, with either strong or weak enforcement of initial and boundary
constraints. The framework is mesh-free, and can handle irregular computational
domains. Parameters of the approximating deep neural network are determined
iteratively using variants of the Stochastic Gradient Descent (SGD) algorithm.
The satisfactory accuracy of the proposed frameworks is numerically
demonstrated on diffusion and heat conduction problems, in comparison with the
converged Monte Carlo-based finite element results.
| null |
http://arxiv.org/abs/1806.02957v2
|
http://arxiv.org/pdf/1806.02957v2.pdf
| null |
[
"Mohammad Amin Nabian",
"Hadi Meidani"
] |
[] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/endoscopic-navigation-in-the-absence-of-ct
|
1806.03997
| null | null |
Endoscopic navigation in the absence of CT imaging
|
Clinical examinations that involve endoscopic exploration of the nasal cavity
and sinuses often do not have a reference image to provide structural context
to the clinician. In this paper, we present a system for navigation during
clinical endoscopic exploration in the absence of computed tomography (CT)
scans by making use of shape statistics from past CT scans. Using a deformable
registration algorithm along with dense reconstructions from video, we show
that we are able to achieve submillimeter registrations in in-vivo clinical
data and are able to assign confidence to these registrations using confidence
criteria established using simulated data.
| null |
http://arxiv.org/abs/1806.03997v1
|
http://arxiv.org/pdf/1806.03997v1.pdf
| null |
[
"Ayushi Sinha",
"Xingtong Liu",
"Austin Reiter",
"Masaru Ishii",
"Gregory D. Hager",
"Russell H. Taylor"
] |
[
"Computed Tomography (CT)"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/using-social-network-information-in-bayesian
|
1806.02954
| null | null |
Using Social Network Information in Bayesian Truth Discovery
|
We investigate the problem of truth discovery based on opinions from multiple
agents who may be unreliable or biased. We consider the case where agents'
reliabilities or biases are correlated if they belong to the same community,
which defines a group of agents with similar opinions regarding a particular
event. An agent can belong to different communities for different events, and
these communities are unknown a priori. We incorporate knowledge of the agents'
social network in our truth discovery framework and develop Laplace variational
inference methods to estimate agents' reliabilities, communities, and the event
states. We also develop a stochastic variational inference method to scale our
model to large social networks. Simulations and experiments on real data
suggest that when observations are sparse, our proposed methods perform better
than several other inference methods, including majority voting, TruthFinder,
AccuSim, the Confidence-Aware Truth Discovery method, the Bayesian Classifier
Combination (BCC) method, and the Community BCC method.
| null |
http://arxiv.org/abs/1806.02954v3
|
http://arxiv.org/pdf/1806.02954v3.pdf
| null |
[
"Jielong Yang",
"Junshan Wang",
"Wee Peng Tay"
] |
[
"Variational Inference"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/rgcnn-regularized-graph-cnn-for-point-cloud
|
1806.02952
| null | null |
RGCNN: Regularized Graph CNN for Point Cloud Segmentation
|
Point cloud, an efficient 3D object representation, has become popular with
the development of depth sensing and 3D laser scanning techniques. It has
attracted attention in various applications such as 3D tele-presence,
navigation for unmanned vehicles and heritage reconstruction. The understanding
of point clouds, such as point cloud segmentation, is crucial in exploiting the
informative value of point clouds for such applications. Due to the
irregularity of the data format, previous deep learning works often convert
point clouds to regular 3D voxel grids or collections of images before feeding
them into neural networks, which leads to voluminous data and quantization
artifacts. In this paper, we instead propose a regularized graph convolutional
neural network (RGCNN) that directly consumes point clouds. Leveraging on
spectral graph theory, we treat features of points in a point cloud as signals
on graph, and define the convolution over graph by Chebyshev polynomial
approximation. In particular, we update the graph Laplacian matrix that
describes the connectivity of features in each layer according to the
corresponding learned features, which adaptively captures the structure of
dynamic graphs. Further, we deploy a graph-signal smoothness prior in the loss
function, thus regularizing the learning process. Experimental results on the
ShapeNet part dataset show that the proposed approach significantly reduces the
computational complexity while achieving competitive performance with the state
of the art. Also, experiments show RGCNN is much more robust to both noise and
point cloud density in comparison with other methods. We further apply RGCNN to
point cloud classification and achieve competitive results on ModelNet40
dataset.
|
Leveraging on spectral graph theory, we treat features of points in a point cloud as signals on graph, and define the convolution over graph by Chebyshev polynomial approximation.
|
http://arxiv.org/abs/1806.02952v1
|
http://arxiv.org/pdf/1806.02952v1.pdf
| null |
[
"Gusi Te",
"Wei Hu",
"Zongming Guo",
"Amin Zheng"
] |
[
"Point Cloud Classification",
"Point Cloud Segmentation",
"Quantization"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
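The RGCNN abstract above defines convolution over a graph by Chebyshev polynomial approximation of spectral filtering on the graph Laplacian. The sketch below is a minimal dense NumPy version of such a K-order Chebyshev filter; the adaptive per-layer Laplacian update and the graph-signal smoothness prior from the paper are omitted, and all names are illustrative assumptions.

```python
import numpy as np

def cheb_graph_conv(X, L, theta):
    """K-order Chebyshev spectral graph convolution.
    X: (n, f_in) node features, L: (n, n) graph Laplacian,
    theta: (K, f_in, f_out) learnable filter coefficients."""
    n = L.shape[0]
    lam_max = np.linalg.eigvalsh(L).max()
    L_hat = 2.0 * L / lam_max - np.eye(n)             # rescale spectrum into [-1, 1]
    Tx = [X, L_hat @ X]                               # T_0(L)X = X, T_1(L)X = L_hat X
    for _ in range(2, theta.shape[0]):
        Tx.append(2.0 * L_hat @ Tx[-1] - Tx[-2])      # Chebyshev recurrence
    return sum(Tx[k] @ theta[k] for k in range(theta.shape[0]))

# toy 4-node path graph with random point features and K = 3
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
rng = np.random.default_rng(0)
out = cheb_graph_conv(rng.normal(size=(4, 3)), L, rng.normal(size=(3, 3, 8)))
print(out.shape)   # (4, 8)
```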
https://paperswithcode.com/paper/programmatically-interpretable-reinforcement
|
1804.02477
| null | null |
Programmatically Interpretable Reinforcement Learning
|
We present a reinforcement learning framework, called Programmatically
Interpretable Reinforcement Learning (PIRL), that is designed to generate
interpretable and verifiable agent policies. Unlike the popular Deep
Reinforcement Learning (DRL) paradigm, which represents policies by neural
networks, PIRL represents policies using a high-level, domain-specific
programming language. Such programmatic policies have the benefits of being
more easily interpreted than neural networks, and being amenable to
verification by symbolic methods. We propose a new method, called Neurally
Directed Program Search (NDPS), for solving the challenging nonsmooth
optimization problem of finding a programmatic policy with maximal reward. NDPS
works by first learning a neural policy network using DRL, and then performing
a local search over programmatic policies that seeks to minimize a distance
from this neural "oracle". We evaluate NDPS on the task of learning to drive a
simulated car in the TORCS car-racing environment. We demonstrate that NDPS is
able to discover human-readable policies that pass some significant performance
bars. We also show that PIRL policies can have smoother trajectories, and can
be more easily transferred to environments not encountered during training,
than corresponding policies discovered by DRL.
| null |
http://arxiv.org/abs/1804.02477v3
|
http://arxiv.org/pdf/1804.02477v3.pdf
|
ICML 2018 7
|
[
"Abhinav Verma",
"Vijayaraghavan Murali",
"Rishabh Singh",
"Pushmeet Kohli",
"Swarat Chaudhuri"
] |
[
"Car Racing",
"Deep Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-04-06T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2203
|
http://proceedings.mlr.press/v80/verma18a/verma18a.pdf
|
programmatically-interpretable-reinforcement-1
| null |
[] |
https://paperswithcode.com/paper/supportnet-solving-catastrophic-forgetting-in
|
1806.02942
| null |
BkxSHsC5FQ
|
SupportNet: solving catastrophic forgetting in class incremental learning with support data
|
A plain well-trained deep learning model often does not have the ability to
learn new knowledge without forgetting the previously learned knowledge, which
is known as catastrophic forgetting. Here we propose a novel method,
SupportNet, to efficiently and effectively solve the catastrophic forgetting
problem in the class incremental learning scenario. SupportNet combines the
strength of deep learning and support vector machine (SVM), where SVM is used
to identify the support data from the old data, which are fed to the deep
learning model together with the new data for further training so that the
model can review the essential information of the old data when learning the
new information. Two powerful consolidation regularizers are applied to
stabilize the learned representation and ensure the robustness of the learned
model. We validate our method with comprehensive experiments on various tasks,
which show that SupportNet drastically outperforms the state-of-the-art
incremental learning methods and even reaches similar performance as the deep
learning model trained from scratch on both old and new data. Our program is
accessible at: https://github.com/lykaust15/SupportNet
|
A plain well-trained deep learning model often does not have the ability to learn new knowledge without forgetting the previously learned knowledge, which is known as catastrophic forgetting.
|
http://arxiv.org/abs/1806.02942v3
|
http://arxiv.org/pdf/1806.02942v3.pdf
| null |
[
"Yu Li",
"Zhongxiao Li",
"Lizhong Ding",
"Yijie Pan",
"Chao Huang",
"Yuhui Hu",
"Wei Chen",
"Xin Gao"
] |
[
"class-incremental learning",
"Class Incremental Learning",
"Deep Learning",
"Incremental Learning"
] | 2018-06-08T00:00:00 |
https://openreview.net/forum?id=BkxSHsC5FQ
|
https://openreview.net/pdf?id=BkxSHsC5FQ
| null | null |
[
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
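The SupportNet abstract above uses an SVM to identify "support data" from the old classes, which are then replayed together with new data. The sketch below shows one plausible form of that selection step with scikit-learn; the feature space, the per-class cap, and the names are illustrative assumptions, and the deep model and consolidation regularizers are omitted.

```python
import numpy as np
from sklearn.svm import SVC

def select_support_data(features, labels, per_class=20):
    """Fit a linear SVM on old-class features and keep a capped number of its
    support vectors per class as the rehearsal ('support') set."""
    svm = SVC(kernel="linear").fit(features, labels)
    keep = []
    for cls in np.unique(labels):
        idx = [i for i in svm.support_ if labels[i] == cls][:per_class]
        keep.extend(idx)
    return np.array(keep)

# two well-separated toy "old" classes in a 16-dimensional feature space
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 1.0, (200, 16)), rng.normal(3.0, 1.0, (200, 16))])
labs = np.array([0] * 200 + [1] * 200)
support_idx = select_support_data(feats, labs)
print(len(support_idx))   # at most 2 * per_class indices to replay with new data
```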
https://paperswithcode.com/paper/hyperspectral-image-denoising-employing-a
|
1806.00183
| null | null |
Hyperspectral Image Denoising Employing a Spatial-Spectral Deep Residual Convolutional Neural Network
|
Hyperspectral image (HSI) denoising is a crucial preprocessing procedure to
improve the performance of the subsequent HSI interpretation and applications.
In this paper, a novel deep learning-based method for this task is proposed, by
learning a non-linear end-to-end mapping between the noisy and clean HSIs with
a combined spatial-spectral deep convolutional neural network (HSID-CNN). Both
the spatial and spectral information are simultaneously assigned to the
proposed network. In addition, multi-scale feature extraction and multi-level
feature representation are respectively employed to capture both the
multi-scale spatial-spectral feature and fuse the feature representations with
different levels for the final restoration. The simulated and real-data
experiments demonstrate that the proposed HSID-CNN outperforms many of the
mainstream methods in quantitative evaluation indices, visual effects,
and HSI classification accuracy.
|
Hyperspectral image (HSI) denoising is a crucial preprocessing procedure to improve the performance of the subsequent HSI interpretation and applications.
|
http://arxiv.org/abs/1806.00183v3
|
http://arxiv.org/pdf/1806.00183v3.pdf
| null |
[
"Qiangqiang Yuan",
"Qiang Zhang",
"Jie Li",
"Huanfeng Shen",
"Liangpei Zhang"
] |
[
"Denoising",
"Hyperspectral Image Denoising",
"Image Denoising"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-source-neural-machine-translation-with
|
1806.02525
| null | null |
Multi-Source Neural Machine Translation with Missing Data
|
Multi-source translation is an approach to exploit multiple inputs (e.g. in
two different languages) to increase translation accuracy. In this paper, we
examine approaches for multi-source neural machine translation (NMT) using an
incomplete multilingual corpus in which some translations are missing. In
practice, many multilingual corpora are not complete due to the difficulty to
provide translations in all of the relevant languages (for example, in TED
talks, most English talks only have subtitles for a small portion of the
languages that TED supports). Existing studies on multi-source translation did
not explicitly handle such situations. This study focuses on the use of
incomplete multilingual corpora in multi-encoder NMT and mixture of NMT experts
and examines a very simple implementation where missing source translations are
replaced by a special symbol <NULL>. These methods allow us to use incomplete
corpora both at training time and test time. In experiments with real
incomplete multilingual corpora of TED Talks, the multi-source NMT with the
<NULL> tokens achieved higher translation accuracies measured by BLEU than
those by any one-to-one NMT systems.
| null |
http://arxiv.org/abs/1806.02525v2
|
http://arxiv.org/pdf/1806.02525v2.pdf
|
WS 2018 7
|
[
"Yuta Nishimura",
"Katsuhito Sudoh",
"Graham Neubig",
"Satoshi Nakamura"
] |
[
"Machine Translation",
"NMT",
"Translation"
] | 2018-06-07T00:00:00 |
https://aclanthology.org/W18-2711
|
https://aclanthology.org/W18-2711.pdf
|
multi-source-neural-machine-translation-with-2
| null |
[] |
https://paperswithcode.com/paper/causal-effects-based-on-distributional
|
1806.02935
| null | null |
Causal effects based on distributional distances
|
Comparing counterfactual distributions can provide more nuanced and valuable measures for causal effects, going beyond typical summary statistics such as averages. In this work, we consider characterizing causal effects via distributional distances, focusing on two kinds of target parameters. The first is the counterfactual outcome density. We propose a doubly robust-style estimator for the counterfactual density and study its rates of convergence and limiting distributions. We analyze asymptotic upper bounds on the $L_q$ and the integrated $L_q$ risks of the proposed estimator, and propose a bootstrap-based confidence band. The second is a novel distributional causal effect defined by the $L_1$ distance between different counterfactual distributions. We study three approaches for estimating the proposed distributional effect: smoothing the counterfactual density, smoothing the $L_1$ distance, and imposing a margin condition. For each approach, we analyze asymptotic properties and error bounds of the proposed estimator, and discuss potential advantages and disadvantages. We go on to present a bootstrap approach for obtaining confidence intervals, and propose a test of no distributional effect. We conclude with a numerical illustration and a real-world example.
| null |
https://arxiv.org/abs/1806.02935v3
|
https://arxiv.org/pdf/1806.02935v3.pdf
| null |
[
"Kwangho Kim",
"Jisu Kim",
"Edward H. Kennedy"
] |
[
"counterfactual"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learn-from-your-neighbor-learning-multi-modal
|
1806.02934
| null | null |
Learn from Your Neighbor: Learning Multi-modal Mappings from Sparse Annotations
|
Many structured prediction problems (particularly in vision and language
domains) are ambiguous, with multiple outputs being correct for an input - e.g.
there are many ways of describing an image, multiple ways of translating a
sentence; however, exhaustively annotating the applicability of all possible
outputs is intractable due to exponentially large output spaces (e.g. all
English sentences). In practice, these problems are cast as multi-class
prediction, with the likelihood of only a sparse set of annotations being
maximized - unfortunately penalizing for placing beliefs on plausible but
unannotated outputs. We make and test the following hypothesis - for a given
input, the annotations of its neighbors may serve as an additional supervisory
signal. Specifically, we propose an objective that transfers supervision from
neighboring examples. We first study the properties of our developed method in
a controlled toy setup before reporting results on multi-label classification
and two image-grounded sequence modeling tasks - captioning and question
generation. We evaluate using standard task-specific metrics and measures of
output diversity, finding consistent improvements over standard maximum
likelihood training and other baselines.
| null |
http://arxiv.org/abs/1806.02934v1
|
http://arxiv.org/pdf/1806.02934v1.pdf
|
ICML 2018 7
|
[
"Ashwin Kalyan",
"Stefan Lee",
"Anitha Kannan",
"Dhruv Batra"
] |
[
"Diversity",
"Multi-Label Classification",
"MUlTI-LABEL-ClASSIFICATION",
"Question Generation",
"Question-Generation",
"Sentence",
"Structured Prediction"
] | 2018-06-08T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2325
|
http://proceedings.mlr.press/v80/kalyan18a/kalyan18a.pdf
|
learn-from-your-neighbor-learning-multi-modal-1
| null |
[] |
https://paperswithcode.com/paper/between-hard-and-soft-thresholding-optimal
|
1804.08841
| null | null |
Between hard and soft thresholding: optimal iterative thresholding algorithms
|
Iterative thresholding algorithms seek to optimize a differentiable objective function over a sparsity or rank constraint by alternating between gradient steps that reduce the objective, and thresholding steps that enforce the constraint. This work examines the choice of the thresholding operator, and asks whether it is possible to achieve stronger guarantees than what is possible with hard thresholding. We develop the notion of relative concavity of a thresholding operator, a quantity that characterizes the worst-case convergence performance of any thresholding operator on the target optimization problem. Surprisingly, we find that commonly used thresholding operators, such as hard thresholding and soft thresholding, are suboptimal in terms of worst-case convergence guarantees. Instead, a general class of thresholding operators, lying between hard thresholding and soft thresholding, is shown to be optimal with the strongest possible convergence guarantee among all thresholding operators. Examples of this general class includes $\ell_q$ thresholding with appropriate choices of $q$, and a newly defined {\em reciprocal thresholding} operator. We also investigate the implications of the improved optimization guarantee in the statistical setting of sparse linear regression, and show that this new class of thresholding operators attain the optimal rate for computationally efficient estimators, matching the Lasso.
| null |
https://arxiv.org/abs/1804.08841v4
|
https://arxiv.org/pdf/1804.08841v4.pdf
| null |
[
"Haoyang Liu",
"Rina Foygel Barber"
] |
[] | 2018-04-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/program-synthesis-through-reinforcement
|
1806.02932
| null | null |
Program Synthesis Through Reinforcement Learning Guided Tree Search
|
Program Synthesis is the task of generating a program from a provided
specification. Traditionally, this has been treated as a search problem by the
programming languages (PL) community and more recently as a supervised learning
problem by the machine learning community. Here, we propose a third approach,
representing the task of synthesizing a given program as a Markov decision
process solvable via reinforcement learning(RL). From observations about the
states of partial programs, we attempt to find a program that is optimal over a
provided reward metric on pairs of programs and states. We instantiate this
approach on a subset of the RISC-V assembly language operating on floating
point numbers, and as an optimization inspired by search-based techniques from
the PL community, we combine RL with a priority search tree. We evaluate this
instantiation and demonstrate the effectiveness of our combined method compared
to a variety of baselines, including a pure RL ablation and a state-of-the-art
Markov chain Monte Carlo search method on this task.
| null |
http://arxiv.org/abs/1806.02932v1
|
http://arxiv.org/pdf/1806.02932v1.pdf
| null |
[
"Riley Simmons-Edler",
"Anders Miltner",
"Sebastian Seung"
] |
[
"Program Synthesis",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/reference-model-of-multi-entity-bayesian
|
1806.02457
| null | null |
Reference Model of Multi-Entity Bayesian Networks for Predictive Situation Awareness
|
During the past quarter-century, situation awareness (SAW) has become a
critical research theme, because of its importance. Since the concept of SAW
was first introduced during World War I, various versions of SAW have been
researched and introduced. Predictive Situation Awareness (PSAW) focuses on the
ability to predict aspects of a temporally evolving situation over time. PSAW
requires a formal representation and a reasoning method using such a
representation. A Multi-Entity Bayesian Network (MEBN) is a knowledge
representation formalism combining Bayesian Networks (BN) with First-Order
Logic (FOL). MEBN can be used to represent uncertain situations (supported by
BN) as well as complex situations (supported by FOL). Also, efficient reasoning
algorithms for MEBN have been developed. MEBN can be a formal representation to
support PSAW and has been used for several PSAW systems. Although several MEBN
applications for PSAW exist, very little work can be found in the literature
that attempts to generalize a MEBN model to support PSAW. In this research, we
define a reference model for MEBN in PSAW, called a PSAW-MEBN reference model.
The PSAW-MEBN reference model enables us to easily develop a MEBN model for
PSAW by supporting the design of a MEBN model for PSAW. In this research, we
introduce two example use cases using the PSAW-MEBN reference model to develop
MEBN models to support PSAW: a Smart Manufacturing System and a Maritime Domain
Awareness System.
| null |
http://arxiv.org/abs/1806.02457v2
|
http://arxiv.org/pdf/1806.02457v2.pdf
| null |
[
"Cheol Young Park",
"Kathryn Blackmond Laskey"
] |
[] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mebn-rm-a-mapping-between-multi-entity
|
1806.02455
| null | null |
MEBN-RM: A Mapping between Multi-Entity Bayesian Network and Relational Model
|
Multi-Entity Bayesian Network (MEBN) is a knowledge representation formalism
combining Bayesian Networks (BN) with First-Order Logic (FOL). MEBN has
sufficient expressive power for general-purpose knowledge representation and
reasoning. Developing a MEBN model to support a given application is a
challenge, requiring definition of entities, relationships, random variables,
conditional dependence relationships, and probability distributions. When
available, data can be invaluable both to improve performance and to streamline
development. By far the most common format for available data is the relational
database (RDB). Relational databases describe and organize data according to
the Relational Model (RM). Developing a MEBN model from data stored in an RDB
therefore requires mapping between the two formalisms. This paper presents
MEBN-RM, a set of mapping rules between key elements of MEBN and RM. We
identify links between the two languages (RM and MEBN) and define four levels
of mapping from elements of RM to elements of MEBN. These definitions are
implemented in the MEBN-RM algorithm, which converts a relational schema in RM
to a partial MEBN model. Through this research, the software has been released
as a MEBN-RM open-source software tool. The method is illustrated through two
example use cases using MEBN-RM to develop MEBN models: a Critical
Infrastructure Defense System and a Smart Manufacturing System.
| null |
http://arxiv.org/abs/1806.02455v2
|
http://arxiv.org/pdf/1806.02455v2.pdf
| null |
[
"Cheol Young Park",
"Kathryn Blackmond Laskey"
] |
[] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/robust-and-scalable-models-of-microbiome-1
|
1805.04591
| null | null |
Robust and Scalable Models of Microbiome Dynamics
|
Microbes are everywhere, including in and on our bodies, and have been shown
to play key roles in a variety of prevalent human diseases. Consequently, there
has been intense interest in the design of bacteriotherapies or "bugs as
drugs," which are communities of bacteria administered to patients for specific
therapeutic applications. Central to the design of such therapeutics is an
understanding of the causal microbial interaction network and the population
dynamics of the organisms. In this work we present a Bayesian nonparametric
model and associated efficient inference algorithm that addresses the key
conceptual and practical challenges of learning microbial dynamics from time
series microbe abundance data. These challenges include high-dimensional (300+
strains of bacteria in the gut) but temporally sparse and non-uniformly sampled
data; high measurement noise; and, nonlinear and physically non-negative
dynamics. Our contributions include a new type of dynamical systems model for
microbial dynamics based on what we term interaction modules, or learned
clusters of latent variables with redundant interaction structure (reducing the
expected number of interaction coefficients from $O(n^2)$ to $O((\log n)^2)$);
a fully Bayesian formulation of the stochastic dynamical systems model that
propagates measurement and latent state uncertainty throughout the model; and
introduction of a temporally varying auxiliary variable technique to enable
efficient inference by relaxing the hard non-negativity constraint on states.
We apply our method to simulated and real data, and demonstrate the utility of
our technique for system identification from limited data and gaining new
biological insights into bacteriotherapy design.
| null |
http://arxiv.org/abs/1805.04591v2
|
http://arxiv.org/pdf/1805.04591v2.pdf
|
ICML 2018
|
[
"Travis E. Gibson",
"Georg K. Gerber"
] |
[
"Time Series Analysis"
] | 2018-05-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lightweight-stochastic-optimization-for
|
1806.02927
| null | null |
Lightweight Stochastic Optimization for Minimizing Finite Sums with Infinite Data
|
Variance reduction has been commonly used in stochastic optimization. It
relies crucially on the assumption that the data set is finite. However, when
the data are imputed with random noise as in data augmentation, the perturbed
data set becomes essentially infinite. Recently, the stochastic MISO (S-MISO)
algorithm is introduced to address this expected risk minimization problem.
Though it converges faster than SGD, a significant amount of memory is
required. In this paper, we propose two SGD-like algorithms for expected risk
minimization with random perturbation, namely, stochastic sample average
gradient (SSAG) and stochastic SAGA (S-SAGA). The memory cost of SSAG does not
depend on the sample size, while that of S-SAGA is the same as those of
variance reduction methods on unperturbed data. Theoretical analysis and
experimental results on logistic regression and AUC maximization show that SSAG
has faster convergence rate than SGD with comparable space requirement, while
S-SAGA outperforms S-MISO in terms of both iteration complexity and storage.
| null |
http://arxiv.org/abs/1806.02927v1
|
http://arxiv.org/pdf/1806.02927v1.pdf
|
ICML 2018 7
|
[
"Shuai Zheng",
"James T. Kwok"
] |
[
"Data Augmentation",
"Stochastic Optimization"
] | 2018-06-08T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2215
|
http://proceedings.mlr.press/v80/zheng18a/zheng18a.pdf
|
lightweight-stochastic-optimization-for-1
| null |
[
{
"code_snippet_url": null,
"description": "SAGA is a method in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem.",
"full_name": "SAGA",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Optimization",
"parent": null
},
"name": "SAGA",
"source_title": "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives",
"source_url": "http://arxiv.org/abs/1407.0202v3"
},
{
"code_snippet_url": null,
"description": "**Logistic Regression**, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.\r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)\r\n\r\nImage: [Michaelg2015](https://commons.wikimedia.org/wiki/User:Michaelg2015)",
"full_name": "Logistic Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Logistic Regression",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
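The SGD entry above states the minibatch update $w\_{t+1} = w\_{t} - \eta\hat{\nabla}\_{w}{L(w\_{t})}$; the sketch below instantiates that generic rule for binary logistic regression in NumPy. It is not the paper's SSAG or S-SAGA estimator, and the learning rate, batch size, and epoch count are arbitrary assumptions.

```python
import numpy as np

def sgd_logistic(X, y, lr=0.1, batch_size=32, epochs=10, seed=0):
    """Plain minibatch SGD for binary logistic regression (labels in {0, 1})."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        order = rng.permutation(len(X))           # reshuffle the data each epoch
        for start in range(0, len(X), batch_size):
            batch = order[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            p = 1.0 / (1.0 + np.exp(-Xb @ w))     # predicted probabilities
            grad = Xb.T @ (p - yb) / len(batch)   # minibatch gradient of the logistic loss
            w -= lr * grad                        # w_{t+1} = w_t - eta * grad
    return w
```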
https://paperswithcode.com/paper/the-language-of-generalization
|
1608.02926
| null | null |
The Language of Generalization
|
Language provides simple ways of communicating generalizable knowledge to
each other (e.g., "Birds fly", "John hikes", "Fire makes smoke"). Though found
in every language and emerging early in development, the language of
generalization is philosophically puzzling and has resisted precise
formalization. Here, we propose the first formal account of generalizations
conveyed with language that makes quantitative predictions about human
understanding. We test our model in three diverse domains: generalizations
about categories (generic language), events (habitual language), and causes
(causal language). The model explains the gradience in human endorsement
through the interplay between a simple truth-conditional semantic theory and
diverse beliefs about properties, formalized in a probabilistic model of
language understanding. This work opens the door to understanding precisely how
abstract knowledge is learned from language.
|
Language provides simple ways of communicating generalizable knowledge to each other (e. g., "Birds fly", "John hikes", "Fire makes smoke").
|
http://arxiv.org/abs/1608.02926v4
|
http://arxiv.org/pdf/1608.02926v4.pdf
| null |
[
"Michael Henry Tessler",
"Noah D. Goodman"
] |
[] | 2016-08-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-semantic-loss-function-for-deep-learning
|
1711.11157
| null |
HkepKG-Rb
|
A Semantic Loss Function for Deep Learning with Symbolic Knowledge
|
This paper develops a novel methodology for using symbolic knowledge in deep
learning. From first principles, we derive a semantic loss function that
bridges between neural output vectors and logical constraints. This loss
function captures how close the neural network is to satisfying the constraints
on its output. An experimental evaluation shows that it effectively guides the
learner to achieve (near-)state-of-the-art results on semi-supervised
multi-class classification. Moreover, it significantly increases the ability of
the neural network to predict structured objects, such as rankings and paths.
These discrete concepts are tremendously difficult to learn, and benefit from a
tight integration of deep learning and symbolic reasoning methods.
|
This paper develops a novel methodology for using symbolic knowledge in deep learning.
|
http://arxiv.org/abs/1711.11157v2
|
http://arxiv.org/pdf/1711.11157v2.pdf
|
ICML 2018 7
|
[
"Jingyi Xu",
"Zilu Zhang",
"Tal Friedman",
"Yitao Liang",
"Guy Van Den Broeck"
] |
[
"Deep Learning",
"General Classification",
"Multi-class Classification"
] | 2017-11-29T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2431
|
http://proceedings.mlr.press/v80/xu18h/xu18h.pdf
|
a-semantic-loss-function-for-deep-learning-2
| null |
[] |
https://paperswithcode.com/paper/not-to-cry-wolf-distantly-supervised
|
1802.05027
| null | null |
Not to Cry Wolf: Distantly Supervised Multitask Learning in Critical Care
|
Patients in the intensive care unit (ICU) require constant and close
supervision. To assist clinical staff in this task, hospitals use monitoring
systems that trigger audiovisual alarms if their algorithms indicate that a
patient's condition may be worsening. However, current monitoring systems are
extremely sensitive to movement artefacts and technical errors. As a result,
they typically trigger hundreds to thousands of false alarms per patient per
day - drowning the important alarms in noise and adding to the exhaustion of
clinical staff. In this setting, data is abundantly available, but obtaining
trustworthy annotations by experts is laborious and expensive. We frame the
problem of false alarm reduction from multivariate time series as a
machine-learning task and address it with a novel multitask network
architecture that utilises distant supervision through multiple related
auxiliary tasks in order to reduce the number of expensive labels required for
training. We show that our approach leads to significant improvements over
several state-of-the-art baselines on real-world ICU data and provide new
insights on the importance of task selection and architectural choices in
distantly supervised multitask learning.
|
Patients in the intensive care unit (ICU) require constant and close supervision.
|
http://arxiv.org/abs/1802.05027v2
|
http://arxiv.org/pdf/1802.05027v2.pdf
|
ICML 2018 7
|
[
"Patrick Schwab",
"Emanuela Keller",
"Carl Muroi",
"David J. Mack",
"Christian Strässle",
"Walter Karlen"
] |
[
"Time Series",
"Time Series Analysis"
] | 2018-02-14T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2119
|
http://proceedings.mlr.press/v80/schwab18a/schwab18a.pdf
|
not-to-cry-wolf-distantly-supervised-1
| null |
[] |
https://paperswithcode.com/paper/a-spectral-approach-to-gradient-estimation
|
1806.02925
| null | null |
A Spectral Approach to Gradient Estimation for Implicit Distributions
|
Recently there have been increasing interests in learning and inference with
implicit distributions (i.e., distributions without tractable densities). To
this end, we develop a gradient estimator for implicit distributions based on
Stein's identity and a spectral decomposition of kernel operators, where the
eigenfunctions are approximated by the Nystr\"om method. Unlike the previous
works that only provide estimates at the sample points, our approach directly
estimates the gradient function, thus allows for a simple and principled
out-of-sample extension. We provide theoretical results on the error bound of
the estimator and discuss the bias-variance tradeoff in practice. The
effectiveness of our method is demonstrated by applications to gradient-free
Hamiltonian Monte Carlo and variational inference with implicit distributions.
Finally, we discuss the intuition behind the estimator by drawing connections
between the Nystr\"om method and kernel PCA, which indicates that the estimator
can automatically adapt to the geometry of the underlying distribution.
|
Recently there have been increasing interests in learning and inference with implicit distributions (i. e., distributions without tractable densities).
|
http://arxiv.org/abs/1806.02925v1
|
http://arxiv.org/pdf/1806.02925v1.pdf
|
ICML 2018 7
|
[
"Jiaxin Shi",
"Shengyang Sun",
"Jun Zhu"
] |
[
"Variational Inference"
] | 2018-06-07T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2490
|
http://proceedings.mlr.press/v80/shi18a/shi18a.pdf
|
a-spectral-approach-to-gradient-estimation-1
| null |
[
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
}
] |
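The PCA entry above notes that the principal components can be obtained from an SVD of the centred design matrix; a small NumPy sketch of that computation, with an illustrative number of components:

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project the rows of X onto the leading principal components via SVD of the centred data."""
    Xc = X - X.mean(axis=0)                            # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                     # leading right singular vectors
    explained_variance = S[:n_components] ** 2 / (len(X) - 1)
    return Xc @ components.T, explained_variance
```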
https://paperswithcode.com/paper/multimodal-relational-tensor-network-for
|
1806.02923
| null | null |
Multimodal Relational Tensor Network for Sentiment and Emotion Classification
|
Understanding Affect from video segments has brought researchers from the
language, audio and video domains together. Most of the current multimodal
research in this area deals with various techniques to fuse the modalities, and
mostly treat the segments of a video independently. Motivated by the work of
(Zadeh et al., 2017) and (Poria et al., 2017), we present our architecture,
Relational Tensor Network, where we use the inter-modal interactions within a
segment (intra-segment) and also consider the sequence of segments in a video
to model the inter-segment inter-modal interactions. We also generate rich
representations of text and audio modalities by leveraging richer audio and
linguistic context along with fusing fine-grained knowledge-based polarity
scores from text. We present the results of our model on CMU-MOSEI dataset and
show that our model outperforms many baselines and state-of-the-art methods for
sentiment classification and emotion recognition.
| null |
http://arxiv.org/abs/1806.02923v1
|
http://arxiv.org/pdf/1806.02923v1.pdf
|
WS 2018 7
|
[
"Saurav Sahay",
"Shachi H. Kumar",
"Rui Xia",
"Jonathan Huang",
"Lama Nachman"
] |
[
"Classification",
"Emotion Classification",
"Emotion Recognition",
"General Classification",
"Sentiment Analysis",
"Sentiment Classification"
] | 2018-06-07T00:00:00 |
https://aclanthology.org/W18-3303
|
https://aclanthology.org/W18-3303.pdf
|
multimodal-relational-tensor-network-for-1
| null |
[] |
https://paperswithcode.com/paper/feature-selection-in-functional-data
|
1806.02922
| null | null |
Feature selection in functional data classification with recursive maxima hunting
|
Dimensionality reduction is one of the key issues in the design of effective
machine learning methods for automatic induction. In this work, we introduce
recursive maxima hunting (RMH) for variable selection in classification
problems with functional data. In this context, variable selection techniques
are especially attractive because they reduce the dimensionality, facilitate
the interpretation and can improve the accuracy of the predictive models. The
method, which is a recursive extension of maxima hunting (MH), performs
variable selection by identifying the maxima of a relevance function, which
measures the strength of the correlation of the predictor functional variable
with the class label. At each stage, the information associated with the
selected variable is removed by subtracting the conditional expectation of the
process. The results of an extensive empirical evaluation are used to
illustrate that, in the problems investigated, RMH has comparable or higher
predictive accuracy than the standard dimensionality reduction techniques, such
as PCA and PLS, and state-of-the-art feature selection methods for functional
data, such as maxima hunting.
| null |
http://arxiv.org/abs/1806.02922v1
|
http://arxiv.org/pdf/1806.02922v1.pdf
|
NeurIPS 2016 12
|
[
"José L. Torrecilla",
"Alberto Suárez"
] |
[
"Dimensionality Reduction",
"feature selection",
"General Classification",
"Variable Selection"
] | 2018-06-07T00:00:00 |
http://papers.nips.cc/paper/6392-feature-selection-in-functional-data-classification-with-recursive-maxima-hunting
|
http://papers.nips.cc/paper/6392-feature-selection-in-functional-data-classification-with-recursive-maxima-hunting.pdf
|
feature-selection-in-functional-data-1
| null |
[
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/radialgan-leveraging-multiple-datasets-to
|
1802.06403
| null | null |
RadialGAN: Leveraging multiple datasets to improve target-specific predictive models using Generative Adversarial Networks
|
Training complex machine learning models for prediction often requires a
large amount of data that is not always readily available. Leveraging these
external datasets from related but different sources is therefore an important
task if good predictive models are to be built for deployment in settings where
data can be rare. In this paper we propose a novel approach to the problem in
which we use multiple GAN architectures to learn to translate from one dataset
to another, thereby allowing us to effectively enlarge the target dataset, and
therefore learn better predictive models than if we simply used the target
dataset. We show the utility of such an approach, demonstrating that our method
improves the prediction performance on the target domain over using just the
target dataset and also show that our framework outperforms several other
benchmarks on a collection of real-world medical datasets.
|
Training complex machine learning models for prediction often requires a large amount of data that is not always readily available.
|
http://arxiv.org/abs/1802.06403v2
|
http://arxiv.org/pdf/1802.06403v2.pdf
|
ICML 2018 7
|
[
"Jinsung Yoon",
"James Jordon",
"Mihaela van der Schaar"
] |
[] | 2018-02-18T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2027
|
http://proceedings.mlr.press/v80/yoon18b/yoon18b.pdf
|
radialgan-leveraging-multiple-datasets-to-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
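The Convolution entry above describes a kernel sliding over the input, multiplying element-wise and summing; below is a naive "valid" 2-D version of that operation in NumPy (as in most deep-learning libraries this is strictly cross-correlation, and the shapes are illustrative):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution: slide the kernel, multiply element-wise, sum."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Illustrative call: a 3x3 vertical-edge kernel applied to a random 8x8 "image".
response = conv2d_valid(np.random.rand(8, 8), np.array([[1.0, 0.0, -1.0]] * 3))
```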
https://paperswithcode.com/paper/location-name-extraction-from-targeted-text
|
1708.03105
| null | null |
Location Name Extraction from Targeted Text Streams using Gazetteer-based Statistical Language Models
|
Extracting location names from informal and unstructured social media data requires the identification of referent boundaries and partitioning compound names. Variability, particularly systematic variability in location names (Carroll, 1983), challenges the identification task. Some of this variability can be anticipated as operations within a statistical language model, in this case drawn from gazetteers such as OpenStreetMap (OSM), Geonames, and DBpedia. This permits evaluation of an observed n-gram in Twitter targeted text as a legitimate location name variant from the same location-context. Using n-gram statistics and location-related dictionaries, our Location Name Extraction tool (LNEx) handles abbreviations and automatically filters and augments the location names in gazetteers (handling name contractions and auxiliary contents) to help detect the boundaries of multi-word location names and thereby delimit them in texts. We evaluated our approach on 4,500 event-specific tweets from three targeted streams to compare the performance of LNEx against that of ten state-of-the-art taggers that rely on standard semantic, syntactic and/or orthographic features. LNEx improved the average F-Score by 33-179%, outperforming all taggers. Further, LNEx is capable of stream processing.
|
Extracting location names from informal and unstructured social media data requires the identification of referent boundaries and partitioning compound names.
|
https://arxiv.org/abs/1708.03105v3
|
https://arxiv.org/pdf/1708.03105v3.pdf
|
COLING 2018 8
|
[
"Hussein S. Al-Olimat",
"Krishnaprasad Thirunarayan",
"Valerie Shalin",
"Amit Sheth"
] |
[
"Language Modeling",
"Language Modelling"
] | 2017-08-10T00:00:00 |
https://aclanthology.org/C18-1169
|
https://aclanthology.org/C18-1169.pdf
|
location-name-extraction-from-targeted-text-2
| null |
[] |