paper_url | arxiv_id | nips_id | openreview_id | title | abstract | short_abstract | url_abs | url_pdf | proceeding | authors | tasks | date | conference_url_abs | conference_url_pdf | conference | reproduces_paper | methods |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/dynamic-network-model-from-partial
|
1805.10616
| null | null |
Dynamic Network Model from Partial Observations
|
Can evolving networks be inferred and modeled without directly observing
their nodes and edges? In many applications, the edges of a dynamic network
might not be observed, but one can observe the dynamics of stochastic cascading
processes (e.g., information diffusion, virus propagation) occurring over the
unobserved network. While there have been efforts to infer networks based on
such data, providing a generative probabilistic model that is able to identify
the underlying time-varying network remains an open question. Here we consider
the problem of inferring generative dynamic network models based on network
cascade diffusion data. We propose a novel framework for providing a
non-parametric dynamic network model, based on a mixture of coupled
hierarchical Dirichlet processes, inferred from data capturing cascade node
infection times. Our approach allows us to infer the evolving community
structure in networks and to obtain an explicit predictive distribution over
the edges of the underlying network, including those that were not involved in
transmission of any cascade or are likely to appear in the future. We show the
effectiveness of our approach using extensive experiments on synthetic as well
as real-world networks.
| null |
http://arxiv.org/abs/1805.10616v4
|
http://arxiv.org/pdf/1805.10616v4.pdf
|
NeurIPS 2018 12
|
[
"Elahe Ghalebi",
"Baharan Mirzasoleiman",
"Radu Grosu",
"Jure Leskovec"
] |
[
"model",
"Open-Ended Question Answering"
] | 2018-05-27T00:00:00 |
http://papers.nips.cc/paper/8192-dynamic-network-model-from-partial-observations
|
http://papers.nips.cc/paper/8192-dynamic-network-model-from-partial-observations.pdf
|
dynamic-network-model-from-partial-1
| null |
[] |
https://paperswithcode.com/paper/pac-bayes-bounds-for-stable-algorithms-with
|
1806.06827
| null | null |
PAC-Bayes bounds for stable algorithms with instance-dependent priors
|
PAC-Bayes bounds have been proposed to get risk estimates based on a training
sample. In this paper the PAC-Bayes approach is combined with stability of the
hypothesis learned by a Hilbert space valued algorithm. The PAC-Bayes setting
is used with a Gaussian prior centered at the expected output. Thus a novelty
of our paper is using priors defined in terms of the data-generating
distribution. Our main result estimates the risk of the randomized algorithm in
terms of the hypothesis stability coefficients. We also provide a new bound for
the SVM classifier, which is compared to other known bounds experimentally.
Ours appears to be the first stability-based bound that evaluates to
non-trivial values.
| null |
http://arxiv.org/abs/1806.06827v2
|
http://arxiv.org/pdf/1806.06827v2.pdf
|
NeurIPS 2018 12
|
[
"Omar Rivasplata",
"Emilio Parrado-Hernandez",
"John Shawe-Taylor",
"Shiliang Sun",
"Csaba Szepesvari"
] |
[] | 2018-06-18T00:00:00 |
http://papers.nips.cc/paper/8134-pac-bayes-bounds-for-stable-algorithms-with-instance-dependent-priors
|
http://papers.nips.cc/paper/8134-pac-bayes-bounds-for-stable-algorithms-with-instance-dependent-priors.pdf
|
pac-bayes-bounds-for-stable-algorithms-with-1
| null |
[
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
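As an aside, a minimal scikit-learn sketch of the SVM classifier described in the method entry above (the max-margin, kernelized classifier whose bound the paper studies). The RBF kernel and the toy data are illustrative choices, not from the paper.

```python
# Hedged illustration: a kernel SVM fit on synthetic data with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0)   # kernel trick maps inputs to a high-dimensional space
clf.fit(X_train, y_train)

print("support vectors:", clf.support_vectors_.shape[0])
print("test accuracy:", clf.score(X_test, y_test))
```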
https://paperswithcode.com/paper/automated-bridge-component-recognition-using
|
1806.06820
| null | null |
Automated Bridge Component Recognition using Video Data
|
This paper investigates the automated recognition of structural bridge
components using video data. Although understanding video data for structural
inspections is straightforward for human inspectors, the implementation of the
same task using machine learning methods has not been fully realized. In
particular, single-frame image processing techniques, such as convolutional
neural networks (CNNs), are not expected to identify structural components
accurately when the image is a close-up view, lacking contextual information
regarding where on the structure the image originates. Inspired by the
significant progress in video processing techniques, this study investigates
automated bridge component recognition using video data, where the information
from the past frames is used to augment the understanding of the current frame.
A new simulated video dataset is created to train the machine learning
algorithms. Then, CNNs with recurrent
architectures are designed and applied to implement the automated bridge
component recognition task. Results are presented for simulated video data, as
well as video collected in the field.
| null |
http://arxiv.org/abs/1806.06820v2
|
http://arxiv.org/pdf/1806.06820v2.pdf
| null |
[
"Yasutaka Narazaki",
"Vedhus Hoskere",
"Tu A. Hoang",
"Billie F. Spencer Jr"
] |
[
"BIG-bench Machine Learning"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/gradient-descent-with-identity-initialization-1
|
1802.06093
| null | null |
Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
|
We analyze algorithms for approximating a function $f(x) = \Phi x$ mapping
$\Re^d$ to $\Re^d$ using deep linear neural networks, i.e. that learn a
function $h$ parameterized by matrices $\Theta_1,...,\Theta_L$ and defined by
$h(x) = \Theta_L \Theta_{L-1} ... \Theta_1 x$. We focus on algorithms that
learn through gradient descent on the population quadratic loss in the case
that the distribution over the inputs is isotropic.
We provide polynomial bounds on the number of iterations for gradient descent
to approximate the least squares matrix $\Phi$, in the case where the initial
hypothesis $\Theta_1 = ... = \Theta_L = I$ has excess loss bounded by a small
enough constant. On the other hand, we show that gradient descent fails to
converge for $\Phi$ whose distance from the identity is a larger constant, and
we show that some forms of regularization toward the identity in each layer do
not help.
If $\Phi$ is symmetric positive definite, we show that an algorithm that
initializes $\Theta_i = I$ learns an $\epsilon$-approximation of $f$ using a
number of updates polynomial in $L$, the condition number of $\Phi$, and
$\log(d/\epsilon)$. In contrast, we show that if the least squares matrix
$\Phi$ is symmetric and has a negative eigenvalue, then all members of a class
of algorithms that perform gradient descent with identity initialization, and
optionally regularize toward the identity in each layer, fail to converge.
We analyze an algorithm for the case that $\Phi$ satisfies $u^{\top} \Phi u >
0$ for all $u$, but may not be symmetric. This algorithm uses two regularizers:
one that maintains the invariant $u^{\top} \Theta_L \Theta_{L-1} ... \Theta_1 u
> 0$ for all $u$, and another that "balances" $\Theta_1, ..., \Theta_L$ so that
they have the same singular values.
| null |
http://arxiv.org/abs/1802.06093v4
|
http://arxiv.org/pdf/1802.06093v4.pdf
|
ICML 2018
|
[
"Peter L. Bartlett",
"David P. Helmbold",
"Philip M. Long"
] |
[] | 2018-02-16T00:00:00 | null | null | null | null |
[] |
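A minimal numpy sketch of the setting in the abstract above: a deep linear network $h(x) = \Theta_L \cdots \Theta_1 x$ trained by gradient descent from the identity initialization. With isotropic inputs, the population quadratic loss reduces (up to constants) to $\|\Theta_L \cdots \Theta_1 - \Phi\|_F^2$, which we descend directly. The step size, depth, and target matrix are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

d, L, lr = 4, 6, 0.01
rng = np.random.default_rng(0)
A = rng.standard_normal((d, d)) * 0.1
Phi = np.eye(d) + 0.1 * (A + A.T)           # symmetric target close to the identity
Theta = [np.eye(d) for _ in range(L)]       # identity initialization

def product(mats):
    """Return mats[-1] @ ... @ mats[0], i.e. the end-to-end map."""
    P = np.eye(d)
    for M in mats:
        P = M @ P
    return P

for step in range(2000):
    P = product(Theta)
    R = P - Phi                              # residual of the end-to-end map
    for i in range(L):
        left = product(Theta[i + 1:])        # Theta_L ... Theta_{i+2}
        right = product(Theta[:i])           # Theta_i ... Theta_1
        grad = left.T @ R @ right.T          # d||P - Phi||_F^2 / dTheta_i (factor 2 folded into lr)
        Theta[i] = Theta[i] - lr * grad

print("excess loss:", np.linalg.norm(product(Theta) - Phi) ** 2)
```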
https://paperswithcode.com/paper/temporal-coherence-based-self-supervised
|
1806.06811
| null | null |
Temporal coherence-based self-supervised learning for laparoscopic workflow analysis
|
In order to provide the right type of assistance at the right time,
computer-assisted surgery systems need context awareness. To achieve this,
methods for surgical workflow analysis are crucial. Currently, convolutional
neural networks provide the best performance for video-based workflow analysis
tasks. For training such networks, large amounts of annotated data are
necessary. However, collecting a sufficient amount of data is often costly,
time-consuming, and not always feasible. In this paper, we address this problem
by presenting and comparing different approaches for self-supervised
pretraining of neural networks on unlabeled laparoscopic videos using temporal
coherence. We evaluate our pretrained networks on Cholec80, a publicly
available dataset for surgical phase segmentation, on which a maximum F1 score
of 84.6 was reached. Furthermore, we were able to achieve an increase of the F1
score of up to 10 points when compared to a non-pretrained neural network.
|
To achieve this, methods for surgical workflow analysis are crucial.
|
http://arxiv.org/abs/1806.06811v2
|
http://arxiv.org/pdf/1806.06811v2.pdf
| null |
[
"Isabel Funke",
"Alexander Jenke",
"Sören Torge Mees",
"Jürgen Weitz",
"Stefanie Speidel",
"Sebastian Bodenstedt"
] |
[
"Self-Supervised Learning",
"Surgical phase recognition"
] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
}
] |
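An illustrative PyTorch sketch of temporal-coherence pretraining as described in the abstract above: embeddings of temporally adjacent frames are pulled together while embeddings of distant frames are pushed apart with a margin. The paper compares several self-supervised variants; this particular contrastive pairing, the toy encoder, and the margin value are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def temporal_coherence_loss(z_t, z_near, z_far, margin=1.0):
    """z_t, z_near: embeddings of frames close in time; z_far: a distant frame."""
    pos = F.pairwise_distance(z_t, z_near)    # should be small
    neg = F.pairwise_distance(z_t, z_far)     # should exceed the margin
    return (pos + F.relu(margin - neg)).mean()

encoder = torch.nn.Sequential(                # toy stand-in for a CNN encoder
    torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128)
)
frames = torch.randn(8, 3, 3, 64, 64)         # batch of (t, t+1, distant) frame triplets
z = [encoder(frames[:, i]) for i in range(3)]
loss = temporal_coherence_loss(z[0], z[1], z[2])
loss.backward()
```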
https://paperswithcode.com/paper/better-runtime-guarantees-via-stochastic
|
1801.04487
| null | null |
Better Runtime Guarantees Via Stochastic Domination
|
Apart from few exceptions, the mathematical runtime analysis of evolutionary
algorithms is mostly concerned with expected runtimes. In this work, we argue
that stochastic domination is a notion that should be used more frequently in
this area. Stochastic domination allows to formulate much more informative
performance guarantees, it allows to decouple the algorithm analysis into the
true algorithmic part of detecting a domination statement and the
probability-theoretical part of deriving the desired probabilistic guarantees
from this statement, and it helps finding simpler and more natural proofs.
As particular results, we prove a fitness level theorem which shows that the
runtime is dominated by a sum of independent geometric random variables, we
prove the first tail bounds for several classic runtime problems, and we give a
short and natural proof for Witt's result that the runtime of any $(\mu,p)$
mutation-based algorithm on any function with unique optimum is subdominated by
the runtime of a variant of the $(1+1)$ EA on the OneMax function.
As side-products, we determine the fastest unbiased (1+1) algorithm for the
LeadingOnes benchmark problem, both in the general case and when restricted to
static mutation operators, and we prove a Chernoff-type tail bound for sums of
independent coupon collector distributions.
| null |
http://arxiv.org/abs/1801.04487v5
|
http://arxiv.org/pdf/1801.04487v5.pdf
| null |
[
"Benjamin Doerr"
] |
[
"Evolutionary Algorithms"
] | 2018-01-13T00:00:00 | null | null | null | null |
[] |
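A tiny numpy illustration of the fitness level theorem stated in the abstract above: the runtime is stochastically dominated by a sum of independent geometric random variables with the per-level success probabilities. The probabilities below are illustrative; we simply simulate the dominating distribution and read off a tail estimate empirically.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.2, 0.1, 0.05])             # illustrative level success probabilities
samples = rng.geometric(p, size=(100_000, len(p))).sum(axis=1)

expectation = (1 / p).sum()                      # E[T] = sum_i 1/p_i for the dominating sum
print("mean of dominating runtime:", expectation)
print("empirical P(T > 2 E[T]):", np.mean(samples > 2 * expectation))
```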
https://paperswithcode.com/paper/scaling-neural-machine-translation
|
1806.00187
| null | null |
Scaling Neural Machine Translation
|
Sequence to sequence learning models still require several days to reach
state of the art performance on large benchmark datasets using a single
machine. This paper shows that reduced precision and large batch training can
speedup training by nearly 5x on a single 8-GPU machine with careful tuning and
implementation. On WMT'14 English-German translation, we match the accuracy of
Vaswani et al. (2017) in under 5 hours when training on 8 GPUs and we obtain a
new state of the art of 29.3 BLEU after training for 85 minutes on 128 GPUs. We
further improve these results to 29.8 BLEU by training on the much larger
Paracrawl dataset. On the WMT'14 English-French task, we obtain a
state-of-the-art BLEU of 43.2 in 8.5 hours on 128 GPUs.
|
Sequence to sequence learning models still require several days to reach state of the art performance on large benchmark datasets using a single machine.
|
http://arxiv.org/abs/1806.00187v3
|
http://arxiv.org/pdf/1806.00187v3.pdf
|
WS 2018 10
|
[
"Myle Ott",
"Sergey Edunov",
"David Grangier",
"Michael Auli"
] |
[
"GPU",
"Machine Translation",
"Question Answering",
"Translation"
] | 2018-06-01T00:00:00 |
https://aclanthology.org/W18-6301
|
https://aclanthology.org/W18-6301.pdf
|
scaling-neural-machine-translation-1
| null |
[] |
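A minimal PyTorch sketch of the two ingredients the abstract above highlights: reduced-precision training and large effective batches via gradient accumulation. The model, optimizer, and accumulation factor are illustrative stand-ins; the authors' actual implementation was released as part of the fairseq toolkit.

```python
import torch

model = torch.nn.Linear(512, 512).cuda()
opt = torch.optim.Adam(model.parameters(), lr=5e-4)
scaler = torch.cuda.amp.GradScaler()        # loss scaling for fp16 numerical stability
accum = 16                                  # accumulate gradients to simulate a large batch

for step in range(100):
    x = torch.randn(32, 512, device="cuda")
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(x).pow(2).mean() / accum
    scaler.scale(loss).backward()           # gradients accumulate across micro-batches
    if (step + 1) % accum == 0:
        scaler.step(opt)                    # unscale and apply the update
        scaler.update()
        opt.zero_grad()
```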
https://paperswithcode.com/paper/almost-exact-matching-with-replacement-for
|
1806.06802
| null | null |
Interpretable Almost Matching Exactly for Causal Inference
|
We aim to create the highest possible quality of treatment-control matches for categorical data in the potential outcomes framework. Matching methods are heavily used in the social sciences due to their interpretability, but most matching methods do not pass basic sanity checks: they fail when irrelevant variables are introduced, and tend to be either computationally slow or produce low-quality matches. The method proposed in this work aims to match units on a weighted Hamming distance, taking into account the relative importance of the covariates; the algorithm aims to match units on as many relevant variables as possible. To do this, the algorithm creates a hierarchy of covariate combinations on which to match (similar to downward closure), in the process solving an optimization problem for each unit in order to construct the optimal matches. The algorithm uses a single dynamic program to solve all of the optimization problems simultaneously. Notable advantages of our method over existing matching procedures are its high-quality matches, versatility in handling different data distributions that may have irrelevant variables, and ability to handle missing data by matching on as many available covariates as possible.
|
Notable advantages of our method over existing matching procedures are its high-quality matches, versatility in handling different data distributions that may have irrelevant variables, and ability to handle missing data by matching on as many available covariates as possible.
|
https://arxiv.org/abs/1806.06802v6
|
https://arxiv.org/pdf/1806.06802v6.pdf
| null |
[
"Yameng Liu",
"Aw Dieng",
"Sudeepa Roy",
"Cynthia Rudin",
"Alexander Volfovsky"
] |
[
"Causal Inference"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
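A toy numpy sketch of the weighted Hamming distance the abstract above describes for matching treated and control units on categorical covariates. The weights and data are illustrative; the paper's algorithm additionally chooses, per unit, which covariate subsets to match on via a dynamic program.

```python
import numpy as np

def weighted_hamming(u, v, w):
    """Distance = sum of weights of covariates on which u and v disagree."""
    return float(np.sum(w * (u != v)))

w = np.array([3.0, 2.0, 0.1])                 # relative covariate importance (assumed)
treated = np.array([1, 0, 2])
controls = np.array([[1, 0, 1],               # disagrees only on the low-weight covariate
                     [0, 0, 2]])              # disagrees on the most important one

dists = [weighted_hamming(treated, c, w) for c in controls]
best = controls[int(np.argmin(dists))]        # match on as many relevant covariates as possible
print(dists, best)
```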
https://paperswithcode.com/paper/deep-spatiotemporal-representation-of-the
|
1806.06793
| null | null |
Deep Spatiotemporal Representation of the Face for Automatic Pain Intensity Estimation
|
Automatic pain intensity assessment has a high value in disease diagnosis
applications. Inspired by the fact that many diseases and brain disorders can
interrupt normal facial expression formation, we aim to develop a computational
model for automatic pain intensity assessment from spontaneous and micro facial
variations. For this purpose, we propose a 3D deep architecture for dynamic
facial video representation. The proposed model is built by stacking several
convolutional modules where each module encompasses a 3D convolution kernel
with a fixed temporal depth, several parallel 3D convolutional kernels with
different temporal depths, and an average pooling layer. Deploying variable
temporal depths in the proposed architecture allows the model to effectively
capture a wide range of spatiotemporal variations on the faces. Extensive
experiments on the UNBC-McMaster Shoulder Pain Expression Archive database show
that our proposed model yields promising performance compared to the
state of the art in automatic pain intensity estimation.
| null |
http://arxiv.org/abs/1806.06793v1
|
http://arxiv.org/pdf/1806.06793v1.pdf
| null |
[
"Mohammad Tavakolian",
"Abdenour Hadid"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/73642d9425a358b51a683cf6f95852d06cba1096/torch/nn/modules/conv.py#L421",
"description": "A **3D Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) where the kernel slides in 3 dimensions as opposed to 2 dimensions with 2D convolutions. One example use case is medical imaging where a model is constructed using 3D image slices. Additionally video based data has an additional temporal dimension over images making it suitable for this module. \r\n\r\nImage: Lung nodule detection based on 3D convolutional neural networks, Fan et al",
"full_name": "3D Convolution",
"introduced_year": 2015,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "3D Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
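A short PyTorch illustration of the building blocks named in the method entries above: parallel 3D convolutions with different temporal depths applied to a facial video clip, followed by average pooling. The shapes and kernel sizes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

clip = torch.randn(1, 3, 16, 112, 112)        # (batch, channels, frames, H, W)

branches = nn.ModuleList([
    nn.Conv3d(3, 8, kernel_size=(d, 3, 3), padding=(d // 2, 1, 1))
    for d in (1, 3, 5)                         # parallel kernels with varying temporal depth
])
feats = torch.cat([b(clip) for b in branches], dim=1)
pooled = nn.AvgPool3d(kernel_size=2)(feats)    # average pooling over space and time
print(pooled.shape)                            # (1, 24, 8, 56, 56)
```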
https://paperswithcode.com/paper/flexible-collaborative-estimation-of-the
|
1806.06784
| null | null |
Robust inference on the average treatment effect using the outcome highly adaptive lasso
|
Many estimators of the average effect of a treatment on an outcome require estimation of the propensity score, the outcome regression, or both. It is often beneficial to utilize flexible techniques such as semiparametric regression or machine learning to estimate these quantities. However, optimal estimation of these regressions does not necessarily lead to optimal estimation of the average treatment effect, particularly in settings with strong instrumental variables. A recent proposal addressed these issues via the outcome-adaptive lasso, a penalized regression technique for estimating the propensity score that seeks to minimize the impact of instrumental variables on treatment effect estimators. However, a notable limitation of this approach is that its application is restricted to parametric models. We propose a more flexible alternative that we call the outcome highly adaptive lasso. We discuss large sample theory for this estimator and propose closed form confidence intervals based on the proposed estimator. We show via simulation that our method offers benefits over several popular approaches.
| null |
https://arxiv.org/abs/1806.06784v3
|
https://arxiv.org/pdf/1806.06784v3.pdf
| null |
[
"Cheng Ju",
"David Benkeser",
"Mark J. Van Der Laan"
] |
[
"regression"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/consistent-individualized-feature-attribution
|
1802.03888
| null | null |
Consistent Individualized Feature Attribution for Tree Ensembles
|
A unified approach to explain the output of any machine learning model.
|
A unified approach to explain the output of any machine learning model.
|
http://arxiv.org/abs/1802.03888v3
|
http://arxiv.org/pdf/1802.03888v3.pdf
| null |
[
"Scott M. Lundberg",
"Gabriel G. Erion",
"Su-In Lee"
] |
[
"BIG-bench Machine Learning"
] | 2018-02-12T00:00:00 | null | null | null | null |
[] |
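A minimal usage sketch with the authors' shap library, which implements the consistent individualized feature attributions for tree ensembles described above. The model and data are illustrative choices.

```python
import shap
import xgboost
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)          # exact SHAP values specialized to trees
shap_values = explainer.shap_values(X)         # one attribution per feature per sample
print(shap_values.shape)                       # (200, 5)
```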
https://paperswithcode.com/paper/bingan-learning-compact-binary-descriptors
|
1806.06778
| null | null |
BinGAN: Learning Compact Binary Descriptors with a Regularized GAN
|
In this paper, we propose a novel regularization method for Generative
Adversarial Networks, which allows the model to learn discriminative yet
compact binary representations of image patches (image descriptors). We employ
the dimensionality reduction that takes place in the intermediate layers of the
discriminator network and train binarized low-dimensional representation of the
penultimate layer to mimic the distribution of the higher-dimensional preceding
layers. To achieve this, we introduce two loss terms that aim at: (i) reducing
the correlation between the dimensions of the binarized low-dimensional
representation of the penultimate layer (i.e., maximizing joint entropy) and
(ii) propagating the relations between the dimensions in the high-dimensional
space to the low-dimensional space. We evaluate the resulting binary image
descriptors on two challenging applications, image matching and retrieval, and
achieve state-of-the-art results.
|
In this paper, we propose a novel regularization method for Generative Adversarial Networks, which allows the model to learn discriminative yet compact binary representations of image patches (image descriptors).
|
http://arxiv.org/abs/1806.06778v5
|
http://arxiv.org/pdf/1806.06778v5.pdf
|
NeurIPS 2018 12
|
[
"Maciej Zieba",
"Piotr Semberecki",
"Tarek El-Gaaly",
"Tomasz Trzcinski"
] |
[
"Dimensionality Reduction",
"Retrieval"
] | 2018-06-18T00:00:00 |
http://papers.nips.cc/paper/7619-bingan-learning-compact-binary-descriptors-with-a-regularized-gan
|
http://papers.nips.cc/paper/7619-bingan-learning-compact-binary-descriptors-with-a-regularized-gan.pdf
|
bingan-learning-compact-binary-descriptors-1
| null |
[] |
https://paperswithcode.com/paper/multifit-a-multivariate-multiscale-framework
|
1806.06777
| null | null |
Multiscale Fisher's Independence Test for Multivariate Dependence
|
Identifying dependency in multivariate data is a common inference task that arises in numerous applications. However, existing nonparametric independence tests typically require computation that scales at least quadratically with the sample size, making it difficult to apply them to massive data. Moreover, resampling is usually necessary to evaluate the statistical significance of the resulting test statistics at finite sample sizes, further worsening the computational burden. We introduce a scalable, resampling-free approach to testing the independence between two random vectors by breaking down the task into simple univariate tests of independence on a collection of 2x2 contingency tables constructed through sequential coarse-to-fine discretization of the sample space, transforming the inference task into a multiple testing problem that can be completed with almost linear complexity with respect to the sample size. To address increasing dimensionality, we introduce a coarse-to-fine sequential adaptive procedure that exploits the spatial features of dependency structures to more effectively examine the sample space. We derive a finite-sample theory that guarantees the inferential validity of our adaptive procedure at any given sample size. In particular, we show that our approach can achieve strong control of the family-wise error rate without resampling or large-sample approximation. We demonstrate the substantial computational advantage of the procedure in comparison to existing approaches as well as its decent statistical power under various dependency scenarios through an extensive simulation study, and illustrate how the divide-and-conquer nature of the procedure can be exploited to not just test independence but to learn the nature of the underlying dependency. Finally, we demonstrate the use of our method through analyzing a large data set from a flow cytometry experiment.
|
Identifying dependency in multivariate data is a common inference task that arises in numerous applications.
|
https://arxiv.org/abs/1806.06777v7
|
https://arxiv.org/pdf/1806.06777v7.pdf
| null |
[
"Shai Gorsky",
"Li Ma"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
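A toy scipy sketch of the basic building block described in the abstract above: a test of independence on a single 2x2 contingency table obtained by a coarse binary discretization of the two samples. The full method applies such univariate tests over a sequential coarse-to-fine hierarchy of tables; the median split here is an illustrative discretization.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = 0.5 * x + rng.standard_normal(500)          # a dependent pair

a = x > np.median(x)                            # coarse binary discretization
b = y > np.median(y)
table = [[np.sum(a & b), np.sum(a & ~b)],
         [np.sum(~a & b), np.sum(~a & ~b)]]

odds_ratio, p_value = fisher_exact(table)
print("odds ratio:", odds_ratio, "p-value:", p_value)
```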
https://paperswithcode.com/paper/kernel-based-outlier-detection-using-the
|
1806.06775
| null | null |
Kernel-based Outlier Detection using the Inverse Christoffel Function
|
Outlier detection methods have become increasingly relevant in recent years
due to increased security concerns and because of its vast application to
different fields. Recently, Pauwels and Lasserre (2016) noticed that the
sublevel sets of the inverse Christoffel function accurately depict the shape
of a cloud of data using a sum-of-squares polynomial and can be used to perform
outlier detection. In this work, we propose a kernelized variant of the inverse
Christoffel function that makes it computationally tractable for data sets with
a large number of features. We compare our approach to current methods on 15
different data sets and achieve the best average area under the precision
recall curve (AUPRC) score, the best average rank and the lowest root mean
square deviation.
| null |
http://arxiv.org/abs/1806.06775v1
|
http://arxiv.org/pdf/1806.06775v1.pdf
| null |
[
"Armin Askari",
"Forest Yang",
"Laurent El Ghaoui"
] |
[
"Outlier Detection"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
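A numpy sketch of the (non-kernelized) inverse Christoffel score of Pauwels and Lasserre that the paper above builds on: with polynomial features $z(x)$ and empirical moment matrix $M = \frac{1}{n}\sum_i z(x_i) z(x_i)^T$, the score $z(x)^T M^{-1} z(x)$ is large for points outside the data cloud. The paper's kernelized variant avoids forming explicit features; the ridge term and polynomial degree here are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))
poly = PolynomialFeatures(degree=3)
Z = poly.fit_transform(X)                       # explicit polynomial features z(x)

M = Z.T @ Z / len(Z)                            # empirical moment matrix
M += 1e-6 * np.eye(M.shape[0])                  # small ridge for invertibility

def score(x):
    z = poly.transform(np.atleast_2d(x))[0]
    return z @ np.linalg.solve(M, z)            # inverse Christoffel function value

print(score([0.0, 0.0]))                        # small: inside the data cloud
print(score([5.0, 5.0]))                        # large: flagged as an outlier
```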
https://paperswithcode.com/paper/kid-net-convolution-networks-for-kidney
|
1806.06769
| null | null |
Kid-Net: Convolution Networks for Kidney Vessels Segmentation from CT-Volumes
|
Semantic image segmentation plays an important role in modeling
patient-specific anatomy. We propose a convolution neural network, called
Kid-Net, along with a training schema to segment kidney vessels: artery, vein
and collecting system. Such segmentation is vital during the surgical planning
phase in which medical decisions are made before surgical incision. Our main
contribution is developing a training schema that handles unbalanced data,
reduces false positives and enables high-resolution segmentation with a limited
memory budget. These objectives are attained using dynamic weighting, random
sampling and 3D patch segmentation. Manual medical image annotation is both
time-consuming and expensive. Kid-Net reduces kidney vessel segmentation time
from a matter of hours to minutes. It is trained end-to-end using 3D patches from
volumetric CT-images. A complete segmentation for a 512x512x512 CT-volume is
obtained within a few minutes (1-2 mins) by stitching the output 3D patches
together. Feature down-sampling and up-sampling are utilized to achieve higher
classification and localization accuracies. Quantitative and qualitative
evaluation results on a challenging testing dataset show Kid-Net competence.
| null |
http://arxiv.org/abs/1806.06769v1
|
http://arxiv.org/pdf/1806.06769v1.pdf
| null |
[
"Ahmed Taha",
"Pechin Lo",
"Junning Li",
"Tao Zhao"
] |
[
"Anatomy",
"Image Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
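A toy numpy sketch of the patch-based inference the abstract above describes: a large CT volume is split into 3D patches, each patch is segmented under a limited memory budget, and the outputs are stitched back together. The patch size and the dummy "network" are illustrative stand-ins for the paper's model.

```python
import numpy as np

volume = np.random.rand(128, 128, 128)           # stand-in for a 512^3 CT volume
patch = 32
output = np.zeros_like(volume)

def segment(block):                               # dummy stand-in for Kid-Net
    return (block > 0.5).astype(float)

for i in range(0, volume.shape[0], patch):
    for j in range(0, volume.shape[1], patch):
        for k in range(0, volume.shape[2], patch):
            block = volume[i:i+patch, j:j+patch, k:k+patch]
            output[i:i+patch, j:j+patch, k:k+patch] = segment(block)

print(output.shape)                               # full-volume segmentation by stitching
```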
https://paperswithcode.com/paper/modularity-matters-learning-invariant
|
1806.06765
| null | null |
Modularity Matters: Learning Invariant Relational Reasoning Tasks
|
We focus on two supervised visual reasoning tasks whose labels encode a
semantic relational rule between two or more objects in an image: the MNIST
Parity task and the colorized Pentomino task. The objects in the images undergo
random translation, scaling, rotation and coloring transformations. Thus these
tasks involve invariant relational reasoning. We report uneven performance of
various deep CNN models on these two tasks. For the MNIST Parity task, we
report that the VGG19 model soundly outperforms a family of ResNet models.
Moreover, the family of ResNet models exhibits a general sensitivity to random
initialization for the MNIST Parity task. For the colorized Pentomino task, now
both the VGG19 and ResNet models exhibit sluggish optimization and very poor
test generalization, hovering around 30% test error. The CNNs we tested all
learn hierarchies of fully distributed features and thus encode the distributed
representation prior. We are motivated by a hypothesis from cognitive
neuroscience which posits that the human visual cortex is modularized, and this
allows the visual cortex to learn higher order invariances. To this end, we
consider a modularized variant of the ResNet model, referred to as a Residual
Mixture Network (ResMixNet) which employs a mixture-of-experts architecture to
interleave distributed representations with more specialized, modular
representations. We show that very shallow ResMixNets are capable of learning
each of the two tasks well, attaining less than 2% and 1% test error on the
MNIST Parity and the colorized Pentomino tasks respectively. Most importantly,
the ResMixNet models are extremely parameter efficient: generalizing better
than various non-modular CNNs that have over 10x the number of parameters.
These experimental results support the hypothesis that modularity is a robust
prior for learning invariant relational reasoning.
| null |
http://arxiv.org/abs/1806.06765v1
|
http://arxiv.org/pdf/1806.06765v1.pdf
| null |
[
"Jason Jo",
"Vikas Verma",
"Yoshua Bengio"
] |
[
"Mixture-of-Experts",
"Relational Reasoning",
"Visual Reasoning"
] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Bitcoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Bitcoin transaction not confirmed, your Bitcoin wallet not showing balance, or you're trying to recover a lost Bitcoin wallet, knowing where to get help is essential. That’s why the Bitcoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Bitcoin Customer Support Number +1-833-534-1729\r\nBitcoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Bitcoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Bitcoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Bitcoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Bitcoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Bitcoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Bitcoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Bitcoin Deposit Not Received\r\nIf someone has sent you Bitcoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Bitcoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Bitcoin Transaction Stuck or Pending\r\nSometimes your Bitcoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Bitcoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Bitcoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Bitcoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Bitcoin tech.\r\n\r\n24/7 Availability: Bitcoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Bitcoin Support and Wallet Issues\r\nQ1: Can Bitcoin support help me recover stolen BTC?\r\nA: While Bitcoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Bitcoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Bitcoin’s official number (Bitcoin is decentralized), it connects you to trained professionals experienced in resolving all major Bitcoin issues.\r\n\r\nFinal Thoughts\r\nBitcoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Bitcoin transaction not confirmed, your Bitcoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Bitcoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Bitcoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "Bitcoin Customer Service Number +1-833-534-1729",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
}
] |
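The Max Pooling and Convolution entries above describe the pooling operation in prose. As a concrete illustration, here is a minimal NumPy sketch of non-overlapping max pooling over a single feature map; the function name and the `k = 2` window are illustrative choices, not tied to any listed implementation:

```python
import numpy as np

def max_pool2d(x: np.ndarray, k: int = 2) -> np.ndarray:
    """Reduce each non-overlapping k x k patch of a (H, W) map to its max."""
    h, w = x.shape
    h, w = h - h % k, w - w % k                # drop ragged border rows/cols
    patches = x[:h, :w].reshape(h // k, k, w // k, k)
    return patches.max(axis=(1, 3))            # max over each k x k patch

fmap = np.array([[1., 3., 2., 0.],
                 [4., 6., 1., 1.],
                 [0., 2., 9., 8.],
                 [1., 0., 7., 5.]])
print(max_pool2d(fmap))                        # [[6. 2.] [2. 9.]]
```

Shifting the feature map by one pixel changes at most the borders of each k x k patch, which is the small translation invariance the entry refers to.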
https://paperswithcode.com/paper/closing-the-generalization-gap-of-adaptive
|
1806.06763
| null | null |
Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks
|
Adaptive gradient methods, which adopt historical gradient information to automatically adjust the learning rate, have been observed to generalize worse than stochastic gradient descent (SGD) with momentum in training deep neural networks, despite their fast convergence. How to close this generalization gap of adaptive gradient methods thus remains an open problem. In this work, we show that adaptive gradient methods such as Adam and Amsgrad are sometimes "over adapted". We design a new algorithm, called the partially adaptive momentum estimation method, which unifies Adam/Amsgrad with SGD by introducing a partial adaptive parameter $p$, to achieve the best of both worlds. We also prove the convergence rate of our proposed algorithm to a stationary point in the stochastic nonconvex optimization setting. Experiments on standard benchmarks show that our proposed algorithm can maintain a convergence rate as fast as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks. These results suggest practitioners should pick up adaptive gradient methods once again for faster training of deep neural networks.
|
Experiments on standard benchmarks show that our proposed algorithm can maintain a convergence rate as fast as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks.
|
https://arxiv.org/abs/1806.06763v3
|
https://arxiv.org/pdf/1806.06763v3.pdf
| null |
[
"Jinghui Chen",
"Dongruo Zhou",
"Yiqi Tang",
"Ziyan Yang",
"Yuan Cao",
"Quanquan Gu"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/paultsw/nice_pytorch/blob/15cfc543fc3dc81ee70398b8dfc37b67269ede95/nice/layers.py#L109",
"description": "**Affine Coupling** is a method for implementing a normalizing flow (where we stack a sequence of invertible bijective transformation functions). Affine coupling is one of these bijective transformation functions. Specifically, it is an example of a reversible transformation where the forward function, the reverse function and the log-determinant are computationally efficient. For the forward function, we split the input dimension into two parts:\r\n\r\n$$ \\mathbf{x}\\_{a}, \\mathbf{x}\\_{b} = \\text{split}\\left(\\mathbf{x}\\right) $$\r\n\r\nThe second part stays the same $\\mathbf{x}\\_{b} = \\mathbf{y}\\_{b}$, while the first part $\\mathbf{x}\\_{a}$ undergoes an affine transformation, where the parameters for this transformation are learnt using the second part $\\mathbf{x}\\_{b}$ being put through a neural network. Together we have:\r\n\r\n$$ \\left(\\log{\\mathbf{s}, \\mathbf{t}}\\right) = \\text{NN}\\left(\\mathbf{x}\\_{b}\\right) $$\r\n\r\n$$ \\mathbf{s} = \\exp\\left(\\log{\\mathbf{s}}\\right) $$\r\n\r\n$$ \\mathbf{y}\\_{a} = \\mathbf{s} \\odot \\mathbf{x}\\_{a} + \\mathbf{t} $$\r\n\r\n$$ \\mathbf{y}\\_{b} = \\mathbf{x}\\_{b} $$\r\n\r\n$$ \\mathbf{y} = \\text{concat}\\left(\\mathbf{y}\\_{a}, \\mathbf{y}\\_{b}\\right) $$\r\n\r\nImage: [GLOW](https://paperswithcode.com/method/glow)",
"full_name": "Affine Coupling",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Bijective Transformations** are transformations that are bijective, i.e. they can be reversed. They are used within the context of normalizing flow models. Below you can find a continuously updating list of bijective transformation methods.",
"name": "Bijective Transformation",
"parent": null
},
"name": "Affine Coupling",
"source_title": "NICE: Non-linear Independent Components Estimation",
"source_url": "http://arxiv.org/abs/1410.8516v6"
},
{
"code_snippet_url": "https://github.com/ex4sperans/variational-inference-with-normalizing-flows/blob/922b569f851e02fa74700cd0754fe2ef5c1f3180/flow.py#L9",
"description": "**Normalizing Flows** are a method for constructing complex distributions by transforming a\r\nprobability density through a series of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtain a valid probability distribution and hence this type of flow is referred to as a normalizing flow.\r\n\r\nIn the case of finite flows, the basic rule for the transformation of densities considers an invertible, smooth mapping $f : \\mathbb{R}^{d} \\rightarrow \\mathbb{R}^{d}$ with inverse $f^{-1} = g$, i.e. the composition $g \\cdot f\\left(z\\right) = z$. If we use this mapping to transform a random variable $z$ with distribution $q\\left(z\\right)$, the resulting random variable $z' = f\\left(z\\right)$ has a distribution:\r\n\r\n$$ q\\left(\\mathbf{z}'\\right) = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}^{-1}}{\\delta{\\mathbf{z'}}}\\bigr\\vert = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}}{\\delta{\\mathbf{z}}}\\bigr\\vert ^{-1} $$\r\n\f\r\nwhere the last equality can be seen by applying the chain rule (inverse function theorem) and is a property of Jacobians of invertible functions. We can construct arbitrarily complex densities by composing several simple maps and successively applying the above equation. The density $q\\_{K}\\left(\\mathbf{z}\\right)$ obtained by successively transforming a random variable $z\\_{0}$ with distribution $q\\_{0}$ through a chain of $K$ transformations $f\\_{k}$ is:\r\n\r\n$$ z\\_{K} = f\\_{K} \\cdot \\dots \\cdot f\\_{2} \\cdot f\\_{1}\\left(z\\_{0}\\right) $$\r\n\r\n$$ \\ln{q}\\_{K}\\left(z\\_{K}\\right) = \\ln{q}\\_{0}\\left(z\\_{0}\\right) − \\sum^{K}\\_{k=1}\\ln\\vert\\det\\frac{\\delta{f\\_{k}}}{\\delta{\\mathbf{z\\_{k-1}}}}\\vert $$\r\n\f\r\nThe path traversed by the random variables $z\\_{k} = f\\_{k}\\left(z\\_{k-1}\\right)$ with initial distribution $q\\_{0}\\left(z\\_{0}\\right)$ is called the flow and the path formed by the successive distributions $q\\_{k}$ is a normalizing flow.",
"full_name": "Normalizing Flows",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Distribution Approximation** methods aim to approximate a complex distribution. Below you can find a continuously updating list of distribution approximation methods.",
"name": "Distribution Approximation",
"parent": null
},
"name": "Normalizing Flows",
"source_title": "Variational Inference with Normalizing Flows",
"source_url": "http://arxiv.org/abs/1505.05770v6"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
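The Adam entry above spells out the update equations. The sketch below transcribes them into NumPy; the extra exponent `p` is an assumption drawn from the paper's abstract (the partial adaptive parameter), not from the authors' code: `p = 0.5` recovers the Adam denominator $\sqrt{\hat{v}_t}$, while `p = 0` degenerates towards SGD with momentum.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, p=0.5):
    """One update step following the formulas in the Adam entry above."""
    m = b1 * m + (1 - b1) * g                  # first-moment estimate m_t
    v = b2 * v + (1 - b2) * g ** 2             # second-moment estimate v_t
    m_hat = m / (1 - b1 ** t)                  # bias corrections
    v_hat = v / (1 - b2 ** t)
    # p = 0.5 gives the Adam denominator sqrt(v_hat); p = 0 removes the
    # adaptive scaling (assumption: the paper's partial adaptivity idea).
    w = w - lr * m_hat / (v_hat ** p + eps)
    return w, m, v

w, m, v = np.ones(3), np.zeros(3), np.zeros(3)
for t in range(1, 4):
    g = 2 * w                                  # gradient of ||w||^2
    w, m, v = adam_step(w, g, m, v, t)
print(w)
```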
https://paperswithcode.com/paper/a-memory-network-approach-for-story-based
|
1805.02838
| null | null |
A Memory Network Approach for Story-based Temporal Summarization of 360° Videos
|
We address the problem of story-based temporal summarization of long 360°
videos. We propose a novel memory network model named Past-Future Memory
Network (PFMN), in which we first compute the scores of 81 normal field of
view (NFOV) region proposals cropped from the input 360° video, and then
recover a latent, collective summary using the network with two external
memories that store the embeddings of previously selected subshots and future
candidate subshots. Our major contributions are two-fold. First, our work is
the first to address story-based temporal summarization of 360° videos.
Second, our model is the first attempt to leverage memory networks for video
summarization tasks. For evaluation, we perform three sets of experiments.
First, we investigate the view selection capability of our model on the
Pano2Vid dataset. Second, we evaluate the temporal summarization with a newly
collected 360° video dataset. Finally, we evaluate our model's performance in
another domain, with the image-based storytelling VIST dataset. We verify that
our model achieves state-of-the-art performance on all the tasks.
| null |
http://arxiv.org/abs/1805.02838v3
|
http://arxiv.org/pdf/1805.02838v3.pdf
|
CVPR 2018
|
[
"Sang-ho Lee",
"Jinyoung Sung",
"Youngjae Yu",
"Gunhee Kim"
] |
[
"Video Summarization"
] | 2018-05-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/aykutaaykut/Memory-Networks",
"description": "A **Memory Network** provides a memory component that can be read from and written to with the inference capabilities of a neural network model. The motivation is that many neural networks lack a long-term memory component, and their existing memory component encoded by states and weights is too small and not compartmentalized enough to accurately remember facts from the past (RNNs for example, have difficult memorizing and doing tasks like copying). \r\n\r\nA memory network consists of a memory $\\textbf{m}$ (an array of objects indexed by $\\textbf{m}\\_{i}$ and four potentially learned components:\r\n\r\n- Input feature map $I$ - feature representation of the data input.\r\n- Generalization $G$ - updates old memories given the new input.\r\n- Output feature map $O$ - produces new feature map given $I$ and $G$.\r\n- Response $R$ - converts output into the desired response. \r\n\r\nGiven an input $x$ (e.g., an input character, word or sentence depending on the granularity chosen, an image or an audio signal) the flow of the model is as follows:\r\n\r\n1. Convert $x$ to an internal feature representation $I\\left(x\\right)$.\r\n2. Update memories $m\\_{i}$ given the new input: $m\\_{i} = G\\left(m\\_{i}, I\\left(x\\right), m\\right)$, $\\forall{i}$.\r\n3. Compute output features $o$ given the new input and the memory: $o = O\\left(I\\left(x\\right), m\\right)$.\r\n4. Finally, decode output features $o$ to give the final response: $r = R\\left(o\\right)$.\r\n\r\nThis process is applied at both train and test time, if there is a distinction between such phases, that\r\nis, memories are also stored at test time, but the model parameters of $I$, $G$, $O$ and $R$ are not updated. Memory networks cover a wide class of possible implementations. The components $I$, $G$, $O$ and $R$ can potentially use any existing ideas from the machine learning literature.\r\n\r\nImage Source: [Adrian Colyer](https://blog.acolyer.org/2016/03/10/memory-networks/)",
"full_name": "Memory Network",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Working Memory Models** aim to supplement neural networks with a memory module to increase their capability for memorization and allowing them to more easily perform tasks such as retrieving and copying information. Below you can find a continuously updating list of working memory models.",
"name": "Working Memory Models",
"parent": null
},
"name": "Memory Network",
"source_title": "Memory Networks",
"source_url": "http://arxiv.org/abs/1410.3916v11"
}
] |
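The Memory Network entry above defines the four components $I$, $G$, $O$ and $R$ and the four-step flow. The toy sketch below wires those steps together with fixed (untrained) vector operations; it illustrates the pipeline only, is not the paper's PFMN, and all names are hypothetical.

```python
import numpy as np

class TinyMemoryNetwork:
    """Toy wiring of the I/G/O/R pipeline; fixed, untrained operations."""

    def __init__(self):
        self.memory = []                       # m: list of stored vectors

    def I(self, x):                            # input feature map
        return np.asarray(x, dtype=float)

    def G(self, feat):                         # generalization: write memory
        self.memory.append(feat)

    def O(self, feat):                         # output: best-matching slot
        scores = [float(feat @ m) for m in self.memory]
        return self.memory[int(np.argmax(scores))]

    def R(self, out):                          # response: decode (identity)
        return out

    def __call__(self, x):
        feat = self.I(x)
        self.G(feat)
        return self.R(self.O(feat))

net = TinyMemoryNetwork()
net([1.0, 0.0, 0.0])
print(net([0.9, 0.1, 0.0]))                    # returns the stored [1, 0, 0]
```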
https://paperswithcode.com/paper/pots-protective-optimization-technologies
|
1806.02711
| null | null |
POTs: Protective Optimization Technologies
|
Algorithmic fairness aims to address the economic, moral, social, and political impact that digital systems have on populations through solutions that can be applied by service providers. Fairness frameworks do so, in part, by mapping these problems to a narrow definition and assuming the service providers can be trusted to deploy countermeasures. Not surprisingly, these decisions limit fairness frameworks' ability to capture a variety of harms caused by systems. We characterize fairness limitations using concepts from requirements engineering and from social sciences. We show that the focus on algorithms' inputs and outputs misses harms that arise from systems interacting with the world; that the focus on bias and discrimination omits broader harms on populations and their environments; and that relying on service providers excludes scenarios where they are not cooperative or intentionally adversarial. We propose Protective Optimization Technologies (POTs). POTs provide means for affected parties to address the negative impacts of systems in the environment, expanding avenues for political contestation. POTs intervene from outside the system, do not require service providers to cooperate, and can serve to correct, shift, or expose harms that systems impose on populations and their environments. We illustrate the potential and limitations of POTs in two case studies: countering road congestion caused by traffic-beating applications, and recalibrating credit scoring for loan applicants.
|
Fairness frameworks do so, in part, by mapping these problems to a narrow definition and assuming the service providers can be trusted to deploy countermeasures.
|
https://arxiv.org/abs/1806.02711v6
|
https://arxiv.org/pdf/1806.02711v6.pdf
| null |
[
"Bogdan Kulynych",
"Rebekah Overdorf",
"Carmela Troncoso",
"Seda Gürses"
] |
[
"Decision Making",
"Fairness"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/surface-networks
|
1705.10819
| null | null |
Surface Networks
|
We study data-driven representations for three-dimensional triangle meshes,
which are one of the prevalent objects used to represent 3D geometry. Recent
works have developed models that exploit the intrinsic geometry of manifolds
and graphs, namely the Graph Neural Networks (GNNs) and its spectral variants,
which learn from the local metric tensor via the Laplacian operator. Despite
offering excellent sample complexity and built-in invariances, intrinsic
geometry alone is invariant to isometric deformations, making it unsuitable for
many applications. To overcome this limitation, we propose several upgrades to
GNNs to leverage extrinsic differential geometry properties of
three-dimensional surfaces, increasing their modeling power.
In particular, we propose to exploit the Dirac operator, whose spectrum
detects principal curvature directions --- this is in stark contrast with the
classical Laplace operator, which directly measures mean curvature. We coin the
resulting models \emph{Surface Networks (SN)}. We prove that these models
define shape representations that are stable to deformation and to
discretization, and we demonstrate the efficiency and versatility of SNs on two
challenging tasks: temporal prediction of mesh deformations under non-linear
dynamics and generative models using a variational autoencoder framework with
encoders/decoders given by SNs.
|
We study data-driven representations for three-dimensional triangle meshes, which are one of the prevalent objects used to represent 3D geometry.
|
http://arxiv.org/abs/1705.10819v2
|
http://arxiv.org/pdf/1705.10819v2.pdf
|
CVPR 2018 6
|
[
"Ilya Kostrikov",
"Zhongshi Jiang",
"Daniele Panozzo",
"Denis Zorin",
"Joan Bruna"
] |
[
"3D geometry"
] | 2017-05-30T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Kostrikov_Surface_Networks_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Kostrikov_Surface_Networks_CVPR_2018_paper.pdf
|
surface-networks-1
| null |
[
{
"code_snippet_url": "",
"description": "In today’s digital age, Solana has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Solana transaction not confirmed, your Solana wallet not showing balance, or you're trying to recover a lost Solana wallet, knowing where to get help is essential. That’s why the Solana customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Solana Customer Support Number +1-833-534-1729\r\nSolana operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Solana Transaction Not Confirmed\r\nOne of the most common concerns is when a Solana transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Solana Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Solana wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Solana Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Solana wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Solana Deposit Not Received\r\nIf someone has sent you Solana but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Solana deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Solana Transaction Stuck or Pending\r\nSometimes your Solana transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Solana Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Solana wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Solana Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Solana tech.\r\n\r\n24/7 Availability: Solana doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Solana Support and Wallet Issues\r\nQ1: Can Solana support help me recover stolen BTC?\r\nA: While Solana transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Solana transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Solana’s official number (Solana is decentralized), it connects you to trained professionals experienced in resolving all major Solana issues.\r\n\r\nFinal Thoughts\r\nSolana is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Solana transaction not confirmed, your Solana wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Solana customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Solana Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Solana Customer Service Number +1-833-534-1729",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] |
https://paperswithcode.com/paper/extracting-automata-from-recurrent-neural
|
1711.09576
| null | null |
Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples
|
We present a novel algorithm that uses exact learning and abstraction to extract a deterministic finite automaton describing the state dynamics of a given trained RNN. We do this using Angluin's L* algorithm as a learner and the trained RNN as an oracle. Our technique efficiently extracts accurate automata from trained RNNs, even when the state vectors are large and require fine differentiation.
|
We do this using Angluin's L* algorithm as a learner and the trained RNN as an oracle.
|
https://arxiv.org/abs/1711.09576v4
|
https://arxiv.org/pdf/1711.09576v4.pdf
|
ICML 2018 7
|
[
"Gail Weiss",
"Yoav Goldberg",
"Eran Yahav"
] |
[] | 2017-11-27T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2276
|
http://proceedings.mlr.press/v80/weiss18a/weiss18a.pdf
|
extracting-automata-from-recurrent-neural-1
| null |
[] |
https://paperswithcode.com/paper/unsupervised-word-segmentation-from-speech
|
1806.06734
| null | null |
Unsupervised Word Segmentation from Speech with Attention
|
We present a first attempt to perform attentional word segmentation directly
from the speech signal, with the final goal to automatically identify lexical
units in a low-resource, unwritten language (UL). Our methodology assumes a
pairing between recordings in the UL with translations in a well-resourced
language. It uses Acoustic Unit Discovery (AUD) to convert speech into a
sequence of pseudo-phones that is segmented using neural soft-alignments
produced by a neural machine translation model. Evaluation uses an actual Bantu
UL, Mboshi; comparisons to monolingual and bilingual baselines illustrate the
potential of attentional word segmentation for language documentation.
| null |
http://arxiv.org/abs/1806.06734v1
|
http://arxiv.org/pdf/1806.06734v1.pdf
| null |
[
"Pierre Godard",
"Marcely Zanon-Boito",
"Lucas Ondel",
"Alexandre Berard",
"François Yvon",
"Aline Villavicencio",
"Laurent Besacier"
] |
[
"Acoustic Unit Discovery",
"Machine Translation",
"Segmentation",
"Translation"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/semantically-selective-augmentation-for-deep
|
1806.04074
| null | null |
Semantically Selective Augmentation for Deep Compact Person Re-Identification
|
We present a deep person re-identification approach that combines
semantically selective, deep data augmentation with clustering-based network
compression to generate high performance, light and fast inference networks. In
particular, we propose to augment limited training data via sampling from a
deep convolutional generative adversarial network (DCGAN), whose discriminator
is constrained by a semantic classifier to explicitly control the domain
specificity of the generation process. Thereby, we encode information in the
classifier network which can be utilized to steer adversarial synthesis, and
which fuels our CondenseNet ID-network training. We provide a quantitative and
qualitative analysis of the approach and its variants on a number of datasets,
obtaining results that outperform the state-of-the-art on the LIMA dataset for
long-term monitoring in indoor living spaces.
| null |
http://arxiv.org/abs/1806.04074v3
|
http://arxiv.org/pdf/1806.04074v3.pdf
| null |
[
"Víctor Ponce-López",
"Tilo Burghardt",
"Sion Hannunna",
"Dima Damen",
"Alessandro Masullo",
"Majid Mirmehdi"
] |
[
"Clustering",
"Data Augmentation",
"Generative Adversarial Network",
"Person Re-Identification",
"Specificity"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/assessing-robustness-of-radiomic-features-by
|
1806.06719
| null | null |
Assessing robustness of radiomic features by image perturbation
|
Image features need to be robust against differences in positioning,
acquisition and segmentation to ensure reproducibility. Radiomic models that
only include robust features can be used to analyse new images, whereas models
with non-robust features may fail to predict the outcome of interest
accurately. Test-retest imaging is recommended to assess robustness, but may
not be available for the phenotype of interest. We therefore investigated 18
methods to determine feature robustness based on image perturbations.
Test-retest and perturbation robustness were compared for 4032 features that
were computed from the gross tumour volume in two cohorts with computed
tomography imaging: I) 31 non-small-cell lung cancer (NSCLC) patients; II) 19
head-and-neck squamous cell carcinoma (HNSCC) patients. Robustness was measured
using the intraclass correlation coefficient (1,1) (ICC). Features with
ICC$\geq0.90$ were considered robust. The NSCLC cohort contained more robust
features for test-retest imaging than the HNSCC cohort ($73.5\%$ vs. $34.0\%$).
A perturbation chain consisting of noise addition, affine translation, volume
growth/shrinkage and supervoxel-based contour randomisation identified the
fewest false positive robust features (NSCLC: $3.3\%$; HNSCC: $10.0\%$). Thus,
this perturbation chain may be used to assess feature robustness.
| null |
http://arxiv.org/abs/1806.06719v1
|
http://arxiv.org/pdf/1806.06719v1.pdf
| null |
[
"Alex Zwanenburg",
"Stefan Leger",
"Linda Agolli",
"Karoline Pilz",
"Esther G. C. Troost",
"Christian Richter",
"Steffen Löck"
] |
[
"Translation"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/reconvnet-video-object-segmentation-with
|
1806.05510
| null | null |
ReConvNet: Video Object Segmentation with Spatio-Temporal Features Modulation
|
We introduce ReConvNet, a recurrent convolutional architecture for
semi-supervised video object segmentation that is able to fast adapt its
features to focus on any specific object of interest at inference time.
Generalization to new objects never observed during training is known to be a
hard task for supervised approaches that would need to be retrained. To tackle
this problem, we propose a more efficient solution that learns spatio-temporal
features self-adapting to the object of interest via conditional affine
transformations. This approach is simple, can be trained end-to-end and does
not necessarily require extra training steps at inference time. Our method
shows competitive results on DAVIS2016 with respect to state-of-the art
approaches that use online fine-tuning, and outperforms them on DAVIS2017.
ReConvNet also shows promising results on the DAVIS Challenge 2018, finishing
in 10th place.
| null |
http://arxiv.org/abs/1806.05510v2
|
http://arxiv.org/pdf/1806.05510v2.pdf
| null |
[
"Francesco Lattari",
"Marco Ciccone",
"Matteo Matteucci",
"Jonathan Masci",
"Francesco Visin"
] |
[
"Object",
"Position",
"Semantic Segmentation",
"Semi-Supervised Video Object Segmentation",
"Video Object Segmentation",
"Video Semantic Segmentation"
] | 2018-06-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/tree-edit-distance-learning-via-adaptive-1
|
1806.05009
| null | null |
Tree Edit Distance Learning via Adaptive Symbol Embeddings
|
Metric learning has the aim to improve classification accuracy by learning a
distance measure which brings data points from the same class closer together
and pushes data points from different classes further apart. Recent research
has demonstrated that metric learning approaches can also be applied to trees,
such as molecular structures, abstract syntax trees of computer programs, or
syntax trees of natural language, by learning the cost function of an edit
distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree.
However, learning such costs directly may yield an edit distance which violates
metric axioms, is challenging to interpret, and may not generalize well. In
this contribution, we propose a novel metric learning approach for trees which
we call embedding edit distance learning (BEDL) and which learns an edit
distance indirectly by embedding the tree nodes as vectors, such that the
Euclidean distance between those vectors supports class discrimination. We
learn such embeddings by reducing the distance to prototypical trees from the
same class and increasing the distance to prototypical trees from different
classes. In our experiments, we show that BEDL improves upon the
state-of-the-art in metric learning for trees on six benchmark data sets,
ranging from computer science over biomedical data to a natural-language
processing data set containing over 300,000 nodes.
| null |
http://arxiv.org/abs/1806.05009v3
|
http://arxiv.org/pdf/1806.05009v3.pdf
|
ICML 2018 7
|
[
"Benjamin Paaßen",
"Claudio Gallicchio",
"Alessio Micheli",
"Barbara Hammer"
] |
[
"Metric Learning"
] | 2018-06-13T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2180
|
http://proceedings.mlr.press/v80/paassen18a/paassen18a.pdf
|
tree-edit-distance-learning-via-adaptive-2
| null |
[] |
https://paperswithcode.com/paper/towards-multi-instrument-drum-transcription
|
1806.06676
| null | null |
Towards multi-instrument drum transcription
|
Automatic drum transcription, a subtask of the more general automatic music
transcription, deals with extracting drum instrument note onsets from an audio
source. Recently, progress in transcription performance has been made using
non-negative matrix factorization as well as deep learning methods. However,
these works primarily focus on transcribing three drum instruments only: snare
drum, bass drum, and hi-hat. Yet, for many applications, the ability to
transcribe more drum instruments which make up standard drum kits used in
western popular music would be desirable. In this work, convolutional and
convolutional recurrent neural networks are trained to transcribe a wider range
of drum instruments. First, the shortcomings of publicly available datasets in
this context are discussed. To overcome these limitations, a larger synthetic
dataset is introduced. Then, methods to train models using the new dataset
focusing on generalization to real world data are investigated. Finally, the
trained models are evaluated on publicly available datasets and results are
discussed. The contributions of this work comprise: (i.) a large-scale
synthetic dataset for drum transcription, (ii.) first steps towards an
automatic drum transcription system that supports a larger range of instruments
by evaluating and discussing training setups and the impact of datasets in this
context, and (iii.) a publicly available set of trained models for drum
transcription. Additional materials are available at
http://ifs.tuwien.ac.at/~vogl/dafx2018
|
In this work, convolutional and convolutional recurrent neural networks are trained to transcribe a wider range of drum instruments.
|
http://arxiv.org/abs/1806.06676v2
|
http://arxiv.org/pdf/1806.06676v2.pdf
| null |
[
"Richard Vogl",
"Gerhard Widmer",
"Peter Knees"
] |
[
"Drum Transcription",
"Music Transcription"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/subword-and-crossword-units-for-ctc-acoustic
|
1712.06855
| null | null |
Subword and Crossword Units for CTC Acoustic Models
|
This paper proposes a novel approach to create a unit set for CTC based
speech recognition systems. By using Byte Pair Encoding we learn a unit set of
an arbitrary size on a given training text. In contrast to using characters or
words as units, this allows us to find a good trade-off between the size of
our unit set and the available training data. We evaluate both Crossword
units, which may span multiple words, and Subword units. By combining this
approach with decoding methods using a separate language model, we are able to
achieve state-of-the-art results for grapheme based CTC systems.
| null |
http://arxiv.org/abs/1712.06855v2
|
http://arxiv.org/pdf/1712.06855v2.pdf
| null |
[
"Thomas Zenkel",
"Ramon Sanabria",
"Florian Metze",
"Alex Waibel"
] |
[
"Language Modeling",
"Language Modelling",
"speech-recognition",
"Speech Recognition"
] | 2017-12-19T00:00:00 | null | null | null | null |
[] |
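The abstract above learns its unit set with Byte Pair Encoding on a training text. As a hedged illustration of that step, here is a minimal BPE merge-learning sketch (the standard algorithm, not the authors' code); `learn_bpe` and the toy word list are made up for the example.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merge operations from a word list (toy version)."""
    vocab = Counter(tuple(w) for w in words)   # word as a tuple of symbols
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for sym, freq in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent adjacent pair
        merges.append(best)
        new_vocab = Counter()
        for sym, freq in vocab.items():        # apply the merge everywhere
            out, i = [], 0
            while i < len(sym):
                if i + 1 < len(sym) and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1]); i += 2
                else:
                    out.append(sym[i]); i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

print(learn_bpe(["lower", "lowest", "newer", "wider"], 3))
```

Each merge fuses the currently most frequent adjacent symbol pair, so the unit set grows from characters toward subwords as `num_merges` increases.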
https://paperswithcode.com/paper/cardinality-leap-for-open-ended-evolution
|
1806.06628
| null | null |
Cardinality Leap for Open-Ended Evolution: Theoretical Consideration and Demonstration by "Hash Chemistry"
|
Open-ended evolution requires unbounded possibilities that evolving entities
can explore. The cardinality of a set of those possibilities thus has a
significant implication for the open-endedness of evolution. We propose that
facilitating formation of higher-order entities is a generalizable, effective
way to cause a "cardinality leap" in the set of possibilities that promotes
open-endedness. We demonstrate this idea with a simple, proof-of-concept toy
model called "Hash Chemistry" that uses a hash function as a fitness evaluator
of evolving entities of any size/order. Simulation results showed that the
cumulative number of unique replicating entities that appeared in evolution
increased almost linearly along time without an apparent bound, demonstrating
the effectiveness of the proposed cardinality leap. It was also observed that
the number of individual entities involved in a single replication event
gradually increased over time, indicating evolutionary appearance of
higher-order entities. Moreover, these behaviors were not observed in control
experiments in which fitness evaluators were replaced by random number
generators. This strongly suggests that the dynamics observed in Hash Chemistry
were indeed evolutionary behaviors driven by selection and adaptation taking
place at multiple scales.
| null |
http://arxiv.org/abs/1806.06628v4
|
http://arxiv.org/pdf/1806.06628v4.pdf
| null |
[
"Hiroki Sayama"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/warp-wavelets-with-adaptive-recursive
|
1711.00789
| null | null |
Learning Asymmetric and Local Features in Multi-Dimensional Data through Wavelets with Recursive Partitioning
|
Effective learning of asymmetric and local features in images and other data observed on multi-dimensional grids is a challenging objective critical for a wide range of image processing applications involving biomedical and natural images. It requires methods that are sensitive to local details while fast enough to handle massive numbers of images of ever increasing sizes. We introduce a probabilistic model-based framework that achieves these objectives by incorporating adaptivity into discrete wavelet transforms (DWT) through Bayesian hierarchical modeling, thereby allowing wavelet bases to adapt to the geometric structure of the data while maintaining the high computational scalability of wavelet methods---linear in the sample size (e.g., the resolution of an image). We derive a recursive representation of the Bayesian posterior model which leads to an exact message passing algorithm to complete learning and inference. While our framework is applicable to a range of problems including multi-dimensional signal processing, compression, and structural learning, we illustrate its work and evaluate its performance in the context of image reconstruction using real images from the ImageNet database, two widely used benchmark datasets, and a dataset from retinal optical coherence tomography and compare its performance to state-of-the-art methods based on basis transforms and deep learning.
|
Effective learning of asymmetric and local features in images and other data observed on multi-dimensional grids is a challenging objective critical for a wide range of image processing applications involving biomedical and natural images.
|
https://arxiv.org/abs/1711.00789v5
|
https://arxiv.org/pdf/1711.00789v5.pdf
| null |
[
"Meng Li",
"Li Ma"
] |
[
"Bayesian Inference",
"Image Reconstruction"
] | 2017-11-02T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-enhancing-speech-emotion-recognition-using
|
1806.06626
| null | null |
On Enhancing Speech Emotion Recognition using Generative Adversarial Networks
|
Generative Adversarial Networks (GANs) have gained a lot of attention from the
machine learning community due to their ability to learn and mimic an input
data distribution. GANs consist of a discriminator and a generator working in
tandem, playing a min-max game to learn a target underlying data distribution
when fed with data-points sampled from a simpler distribution (like a uniform
or Gaussian distribution). Once trained, they allow synthetic generation of
examples sampled from the target distribution. We investigate the application
of GANs to generate synthetic feature vectors used for speech emotion
recognition. Specifically, we investigate two set ups: (i) a vanilla GAN that
learns the distribution of a lower dimensional representation of the actual
higher dimensional feature vector and, (ii) a conditional GAN that learns the
distribution of the higher dimensional feature vectors conditioned on the
labels or the emotional class to which it belongs. As a potential practical
application of these synthetically generated samples, we measure any
improvement in a classifier's performance when the synthetic data is used along
with real data for training. We perform cross-validation analyses followed by a
cross-corpus study.
| null |
http://arxiv.org/abs/1806.06626v1
|
http://arxiv.org/pdf/1806.06626v1.pdf
| null |
[
"Saurabh Sahu",
"Rahul Gupta",
"Carol Espy-Wilson"
] |
[
"Cross-corpus",
"Emotion Recognition",
"Speech Emotion Recognition"
] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
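Set-up (ii) in the abstract above conditions the GAN on the emotion class. Below is a minimal PyTorch sketch of that conditioning, assuming the usual scheme of concatenating a one-hot label with the noise (generator) and with the feature vector (discriminator); all layer sizes (`NOISE_DIM`, `FEAT_DIM`, `NUM_EMOTIONS`) are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

NOISE_DIM, FEAT_DIM, NUM_EMOTIONS = 32, 128, 4     # illustrative sizes

class CondGenerator(nn.Module):
    """Maps (noise, one-hot emotion label) to a synthetic feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + NUM_EMOTIONS, 256), nn.ReLU(),
            nn.Linear(256, FEAT_DIM))

    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1))

class CondDiscriminator(nn.Module):
    """Scores (feature vector, label) pairs as real vs. synthetic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + NUM_EMOTIONS, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

z = torch.randn(8, NOISE_DIM)
y = torch.eye(NUM_EMOTIONS)[torch.randint(0, NUM_EMOTIONS, (8,))]
fake = CondGenerator()(z, y)
score = CondDiscriminator()(fake, y)               # enters the min-max loss
```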
https://paperswithcode.com/paper/banach-wasserstein-gan
|
1806.06621
| null | null |
Banach Wasserstein GAN
|
Wasserstein Generative Adversarial Networks (WGANs) can be used to generate
realistic samples from complicated image distributions. The Wasserstein metric
used in WGANs is based on a notion of distance between individual images, which
induces a notion of distance between probability distributions of images. So
far the community has considered $\ell^2$ as the underlying distance. We
generalize the theory of WGAN with gradient penalty to Banach spaces, allowing
practitioners to select the features to emphasize in the generator. We further
discuss the effect of some particular choices of underlying norms, focusing on
Sobolev norms. Finally, we demonstrate a boost in performance for an
appropriate choice of norm on CIFAR-10 and CelebA.
|
Wasserstein Generative Adversarial Networks (WGANs) can be used to generate realistic samples from complicated image distributions.
|
http://arxiv.org/abs/1806.06621v2
|
http://arxiv.org/pdf/1806.06621v2.pdf
|
NeurIPS 2018 12
|
[
"Jonas Adler",
"Sebastian Lunz"
] |
[] | 2018-06-18T00:00:00 |
http://papers.nips.cc/paper/7909-banach-wasserstein-gan
|
http://papers.nips.cc/paper/7909-banach-wasserstein-gan.pdf
|
banach-wasserstein-gan-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/daheyinyin/wgan",
"description": "**Wasserstein GAN**, or **WGAN**, is a type of generative adversarial network that minimizes an approximation of the Earth-Mover's distance (EM) rather than the Jensen-Shannon divergence as in the original [GAN](https://paperswithcode.com/method/gan) formulation. It leads to more stable training than original GANs with less evidence of mode collapse, as well as meaningful curves that can be used for debugging and searching hyperparameters.",
"full_name": "Wasserstein GAN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Adversarial Networks (GANs)** are a type of generative model that use two networks, a generator to generate images and a discriminator to discriminate between real and fake, to train a model that approximates the distribution of the data. Below you can find a continuously updating list of GANs.",
"name": "Generative Adversarial Networks",
"parent": "Generative Models"
},
"name": "WGAN",
"source_title": "Wasserstein GAN",
"source_url": "http://arxiv.org/abs/1701.07875v3"
}
] |
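The abstract above generalizes the WGAN gradient penalty from the usual $\ell^2$ norm to other norms. Below is a minimal sketch of the penalty term with a pluggable norm function, assuming 2-D (batch, features) critic inputs; the paper's actual Banach/Sobolev-norm construction is more involved than this stand-in.

```python
# Hedged sketch of the WGAN gradient-penalty term with a pluggable norm.
import torch

def gradient_penalty(D, real, fake, norm_fn=None):
    """Penalize deviations of the critic's input gradient from unit norm."""
    eps = torch.rand(real.size(0), 1)                 # per-sample mixing weight
    x = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(x).sum(), x, create_graph=True)[0]
    if norm_fn is None:
        norm = grad.norm(2, dim=1)                    # default: l2 norm
    else:
        norm = norm_fn(grad)                          # e.g. a discretized Sobolev norm
    return ((norm - 1) ** 2).mean()
```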
https://paperswithcode.com/paper/comparison-based-random-forests
|
1806.06616
| null | null |
Comparison-Based Random Forests
|
Assume we are given a set of items from a general metric space, but we
neither have access to the representation of the data nor to the distances
between data points. Instead, suppose that we can actively choose a triplet of
items (A,B,C) and ask an oracle whether item A is closer to item B or to item
C. In this paper, we propose a novel random forest algorithm for regression and
classification that relies only on such triplet comparisons. In the theory part
of this paper, we establish sufficient conditions for the consistency of such a
forest. In a set of comprehensive experiments, we then demonstrate that the
proposed random forest is efficient both for classification and regression. In
particular, it is even competitive with other methods that have direct access
to the metric representation of the data.
| null |
http://arxiv.org/abs/1806.06616v1
|
http://arxiv.org/pdf/1806.06616v1.pdf
|
ICML 2018 7
|
[
"Siavash Haghiri",
"Damien Garreau",
"Ulrike Von Luxburg"
] |
[
"General Classification",
"regression",
"Triplet"
] | 2018-06-18T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1979
|
http://proceedings.mlr.press/v80/haghiri18a/haghiri18a.pdf
|
comparison-based-random-forests-1
| null |
[] |
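A toy sketch of how a single comparison-tree split can be driven purely by triplet queries, in the spirit of the abstract above: route each item to whichever of two pivots it is closer to. The pivot choice and the oracle stand-in are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative comparison-based split using only triplet answers.
import random

def triplet_oracle(a, b, c, dist):
    """Is item a closer to b than to c? (stand-in for an actual oracle)"""
    return dist(a, b) <= dist(a, c)

def comparison_split(items, dist):
    b, c = random.sample(items, 2)        # two random pivots
    left, right = [], []
    for a in items:
        (left if triplet_oracle(a, b, c, dist) else right).append(a)
    return left, right

# Example with scalar items and absolute-difference distance:
items = [0.1, 0.2, 0.9, 1.1, 0.15, 0.95]
left, right = comparison_split(items, dist=lambda x, y: abs(x - y))
```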
https://paperswithcode.com/paper/on-multi-resident-activity-recognition-in
|
1806.06611
| null | null |
On Multi-resident Activity Recognition in Ambient Smart-Homes
|
Increasing attention to the research on activity monitoring in smart homes
has motivated the employment of ambient intelligence to reduce the deployment
cost and solve the privacy issue. Several approaches have been proposed for
multi-resident activity recognition, however, there still lacks a comprehensive
benchmark for future research and practical selection of models. In this paper
we study different methods for multi-resident activity recognition and evaluate
them on the same sets of data. The experimental results show that a recurrent
neural network with gated recurrent units outperforms the other models while
remaining considerably efficient, and that using combined activities as single
labels is more effective than representing them as separate labels.
| null |
http://arxiv.org/abs/1806.06611v1
|
http://arxiv.org/pdf/1806.06611v1.pdf
| null |
[
"Son N. Tran",
"Qing Zhang",
"Mohan Karunanithi"
] |
[
"Activity Recognition"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/evaluating-and-characterizing-incremental
|
1806.06610
| null | null |
Evaluating and Characterizing Incremental Learning from Non-Stationary Data
|
Incremental learning from non-stationary data poses special challenges to the
field of machine learning. Although new algorithms have been developed for
this, assessment of results and comparison of behaviors are still open
problems, mainly because evaluation metrics, adapted from more traditional
tasks, can be ineffective in this context. Overall, there is a lack of common
testing practices. This paper thus presents a testbed for incremental
non-stationary learning algorithms, based on specially designed synthetic
datasets. Also, test results are reported for some well-known algorithms to
show that the proposed methodology is effective at characterizing their
strengths and weaknesses. It is expected that this methodology will provide a
common basis for evaluating future contributions in the field.
| null |
http://arxiv.org/abs/1806.06610v1
|
http://arxiv.org/pdf/1806.06610v1.pdf
| null |
[
"Alejandro Cervantes",
"Christian Gagné",
"Pedro Isasi",
"Marc Parizeau"
] |
[
"Incremental Learning"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/quantized-compressive-k-means
|
1804.10109
| null | null |
Quantized Compressive K-Means
|
The recent framework of compressive statistical learning aims at designing
tractable learning algorithms that use only a heavily compressed
representation-or sketch-of massive datasets. Compressive K-Means (CKM) is such
a method: it estimates the centroids of data clusters from pooled, non-linear,
random signatures of the learning examples. While this approach significantly
reduces computational time on very large datasets, its digital implementation
wastes acquisition resources because the learning examples are compressed only
after the sensing stage. The present work generalizes the sketching procedure
initially defined in Compressive K-Means to a large class of periodic
nonlinearities including hardware-friendly implementations that compressively
acquire entire datasets. This idea is exemplified in a Quantized Compressive
K-Means procedure, a variant of CKM that leverages 1-bit universal quantization
(i.e. retaining the least significant bit of a standard uniform quantizer) as
the periodic sketch nonlinearity. Trading for this resource-efficient signature
(standard in most acquisition schemes) has almost no impact on the clustering
performances, as illustrated by numerical experiments.
| null |
http://arxiv.org/abs/1804.10109v2
|
http://arxiv.org/pdf/1804.10109v2.pdf
| null |
[
"Vincent Schellekens",
"Laurent Jacques"
] |
[
"Clustering",
"Quantization"
] | 2018-04-26T00:00:00 | null | null | null | null |
[] |
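A rough NumPy sketch of the 1-bit universal quantization nonlinearity described above, i.e. keeping the least significant bit of a uniform quantizer, applied to random projections of the data; the dither and the pooling of signatures are assumptions for illustration, not the paper's exact sketching operator.

```python
# Sketch of 1-bit universal quantization as a periodic sketch nonlinearity.
import numpy as np

def universal_1bit(x, delta=1.0):
    return np.floor(x / delta).astype(int) % 2   # LSB of a uniform quantizer

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                    # toy dataset, n=100, d=5
W = rng.normal(size=(5, 64))                     # random projection, m=64
dither = rng.uniform(0, 1.0, size=64)            # random dither (assumed)
sketch = universal_1bit(X @ W + dither).mean(0)  # pooled quantized signatures
```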
https://paperswithcode.com/paper/self-attentional-acoustic-models
|
1803.09519
| null | null |
Self-Attentional Acoustic Models
|
Self-attention is a method of encoding sequences of vectors by relating these
vectors to each-other based on pairwise similarities. These models have
recently shown promising results for modeling discrete sequences, but they are
non-trivial to apply to acoustic modeling due to computational and modeling
issues. In this paper, we apply self-attention to acoustic modeling, proposing
several improvements to mitigate these issues: First, self-attention memory
grows quadratically in the sequence length, which we address through a
downsampling technique. Second, we find that previous approaches to incorporate
position information into the model are unsuitable and explore other
representations and hybrid models to this end. Third, to stress the importance
of local context in the acoustic signal, we propose a Gaussian biasing approach
that allows explicit control over the context range. Experiments find that our
model approaches a strong baseline based on LSTMs with network-in-network
connections while being much faster to compute. Besides speed, we find that
interpretability is a strength of self-attentional acoustic models, and
demonstrate that self-attention heads learn a linguistically plausible division
of labor.
|
Self-attention is a method of encoding sequences of vectors by relating these vectors to each-other based on pairwise similarities.
|
http://arxiv.org/abs/1803.09519v2
|
http://arxiv.org/pdf/1803.09519v2.pdf
| null |
[
"Matthias Sperber",
"Jan Niehues",
"Graham Neubig",
"Sebastian Stüker",
"Alex Waibel"
] |
[] | 2018-03-26T00:00:00 | null | null | null | null |
[] |
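A minimal NumPy sketch of the Gaussian biasing idea from the abstract: a squared-distance penalty added to the attention logits so each frame attends mostly to its local acoustic context, with sigma controlling the context range. The single-head, unbatched setup is an illustrative simplification.

```python
# Self-attention with a Gaussian locality bias on the logits.
import numpy as np

def gaussian_biased_attention(Q, K, V, sigma=3.0):
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    idx = np.arange(T)
    bias = -((idx[:, None] - idx[None, :]) ** 2) / (2 * sigma ** 2)
    scores = scores + bias                         # penalize distant frames
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
T, d = 10, 4
out = gaussian_biased_attention(rng.normal(size=(T, d)),
                                rng.normal(size=(T, d)),
                                rng.normal(size=(T, d)))
```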
https://paperswithcode.com/paper/snap-ml-a-hierarchical-framework-for-machine
|
1803.06333
| null | null |
Snap ML: A Hierarchical Framework for Machine Learning
|
We describe a new software framework for fast training of generalized linear
models. The framework, named Snap Machine Learning (Snap ML), combines recent
advances in machine learning systems and algorithms in a nested manner to
reflect the hierarchical architecture of modern computing systems. We prove
theoretically that such a hierarchical system can accelerate training in
distributed environments where intra-node communication is cheaper than
inter-node communication. Additionally, we provide a review of the
implementation of Snap ML in terms of GPU acceleration, pipelining,
communication patterns and software architecture, highlighting aspects that
were critical for achieving high performance. We evaluate the performance of
Snap ML in both single-node and multi-node environments, quantifying the
benefit of the hierarchical scheme and the data streaming functionality, and
comparing with other widely-used machine learning software frameworks. Finally,
we present a logistic regression benchmark on the Criteo Terabyte Click Logs
dataset and show that Snap ML achieves the same test loss an order of magnitude
faster than any of the previously reported results, including those obtained
using TensorFlow and scikit-learn.
| null |
http://arxiv.org/abs/1803.06333v3
|
http://arxiv.org/pdf/1803.06333v3.pdf
|
NeurIPS 2018 12
|
[
"Celestine Dünner",
"Thomas Parnell",
"Dimitrios Sarigiannis",
"Nikolas Ioannou",
"Andreea Anghel",
"Gummadi Ravi",
"Madhusudanan Kandasamy",
"Haralampos Pozidis"
] |
[
"BIG-bench Machine Learning",
"GPU"
] | 2018-03-16T00:00:00 |
http://papers.nips.cc/paper/7309-snap-ml-a-hierarchical-framework-for-machine-learning
|
http://papers.nips.cc/paper/7309-snap-ml-a-hierarchical-framework-for-machine-learning.pdf
|
snap-ml-a-hierarchical-framework-for-machine-1
| null |
[
{
"code_snippet_url": null,
"description": "**Logistic Regression**, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.\r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)\r\n\r\nImage: [Michaelg2015](https://commons.wikimedia.org/wiki/User:Michaelg2015)",
"full_name": "Logistic Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Logistic Regression",
"source_title": null,
"source_url": null
}
] |
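The Logistic Regression method entry above describes the kind of generalized linear model Snap ML trains at scale. A minimal NumPy sketch of the binary case with batch gradient descent on the log-loss, purely for illustration (this is not Snap ML's implementation):

```python
# Toy logistic regression fit by batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))     # logistic link
    w -= 0.1 * X.T @ (p - y) / len(y)  # gradient of the log-loss
```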
https://paperswithcode.com/paper/multilingual-bottleneck-features-for-subword
|
1803.08863
| null | null |
Multilingual bottleneck features for subword modeling in zero-resource languages
|
How can we effectively develop speech technology for languages where no
transcribed data is available? Many existing approaches use no annotated
resources at all, yet it makes sense to leverage information from large
annotated corpora in other languages, for example in the form of multilingual
bottleneck features (BNFs) obtained from a supervised speech recognition
system. In this work, we evaluate the benefits of BNFs for subword modeling
(feature extraction) in six unseen languages on a word discrimination task.
First we establish a strong unsupervised baseline by combining two existing
methods: vocal tract length normalisation (VTLN) and the correspondence
autoencoder (cAE). We then show that BNFs trained on a single language already
beat this baseline; including up to 10 languages results in additional
improvements which cannot be matched by just adding more data from a single
language. Finally, we show that the cAE can improve further on the BNFs if
high-quality same-word pairs are available.
|
How can we effectively develop speech technology for languages where no transcribed data is available?
|
http://arxiv.org/abs/1803.08863v2
|
http://arxiv.org/pdf/1803.08863v2.pdf
| null |
[
"Enno Hermann",
"Sharon Goldwater"
] |
[
"speech-recognition",
"Speech Recognition"
] | 2018-03-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-to-write-stylized-chinese-characters
|
1712.06424
| null | null |
Learning to Write Stylized Chinese Characters by Reading a Handful of Examples
|
Automatically writing stylized Chinese characters is an attractive yet
challenging task due to its wide applicabilities. In this paper, we propose a
novel framework named Style-Aware Variational Auto-Encoder (SA-VAE) to flexibly
generate Chinese characters. Specifically, we propose to capture the different
characteristics of a Chinese character by disentangling the latent features
into content-related and style-related components. Considering of the complex
shapes and structures, we incorporate the structure information as prior
knowledge into our framework to guide the generation. Our framework shows a
powerful one-shot/low-shot generalization ability by inferring the style
component given a character with unseen style. To the best of our knowledge,
this is the first attempt to learn to write new-style Chinese characters by
observing only one or a few examples. Extensive experiments demonstrate its
effectiveness in generating different stylized Chinese characters by fusing the
feature vectors corresponding to different contents and styles, which is of
significant importance in real-world applications.
| null |
http://arxiv.org/abs/1712.06424v3
|
http://arxiv.org/pdf/1712.06424v3.pdf
| null |
[
"Danyang Sun",
"Tongzheng Ren",
"Chongxun Li",
"Hang Su",
"Jun Zhu"
] |
[] | 2017-12-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ipose-instance-aware-6d-pose-estimation-of
|
1712.01924
| null | null |
iPose: Instance-Aware 6D Pose Estimation of Partly Occluded Objects
|
We address the task of 6D pose estimation of known rigid objects from single
input images in scenarios where the objects are partly occluded. Recent
RGB-D-based methods are robust to moderate degrees of occlusion. For RGB
inputs, no previous method works well for partly occluded objects. Our main
contribution is to present the first deep learning-based system that estimates
accurate poses for partly occluded objects from RGB-D and RGB input. We achieve
this with a new instance-aware pipeline that decomposes 6D object pose
estimation into a sequence of simpler steps, where each step removes specific
aspects of the problem. The first step localizes all known objects in the image
using an instance segmentation network, and hence eliminates surrounding
clutter and occluders. The second step densely maps pixels to 3D object surface
positions, so called object coordinates, using an encoder-decoder network, and
hence eliminates object appearance. The third, and final, step predicts the 6D
pose using geometric optimization. We demonstrate that we significantly
outperform the state-of-the-art for pose estimation of partly occluded objects
for both RGB and RGB-D input.
| null |
http://arxiv.org/abs/1712.01924v3
|
http://arxiv.org/pdf/1712.01924v3.pdf
| null |
[
"Omid Hosseini Jafari",
"Siva Karthik Mustikovela",
"Karl Pertsch",
"Eric Brachmann",
"Carsten Rother"
] |
[
"6D Pose Estimation",
"6D Pose Estimation using RGB",
"Decoder",
"Instance Segmentation",
"Object",
"Pose Estimation",
"Semantic Segmentation"
] | 2017-12-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/uncertainty-in-multitask-learning-joint
|
1806.06595
| null | null |
Uncertainty in multitask learning: joint representations for probabilistic MR-only radiotherapy planning
|
Multi-task neural network architectures provide a mechanism that jointly
integrates information from distinct sources. It is ideal in the context of
MR-only radiotherapy planning as it can jointly regress a synthetic CT (synCT)
scan and segment organs-at-risk (OAR) from MRI. We propose a probabilistic
multi-task network that estimates: 1) intrinsic uncertainty through a
heteroscedastic noise model for spatially-adaptive task loss weighting and 2)
parameter uncertainty through approximate Bayesian inference. This allows
sampling of multiple segmentations and synCTs that share their network
representation. We test our model on prostate cancer scans and show that it
produces more accurate and consistent synCTs with a better estimation in the
variance of the errors, state of the art results in OAR segmentation and a
methodology for quality assurance in radiotherapy treatment planning.
| null |
http://arxiv.org/abs/1806.06595v1
|
http://arxiv.org/pdf/1806.06595v1.pdf
| null |
[
"Felix J. S. Bragman",
"Ryutaro Tanno",
"Zach Eaton-Rosen",
"Wenqi Li",
"David J. Hawkes",
"Sebastien Ourselin",
"Daniel C. Alexander",
"Jamie R. McClelland",
"M. Jorge Cardoso"
] |
[
"Bayesian Inference"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-recurrent-neural-network-for-multi
|
1806.06594
| null | null |
Deep Recurrent Neural Network for Multi-target Filtering
|
This paper addresses the problem of fixed motion and measurement models for
multi-target filtering using an adaptive learning framework. This is performed
by defining target tuples with random finite set terminology and utilisation of
recurrent neural networks with a long short-term memory architecture. A novel
data association algorithm compatible with the predicted tracklet tuples is
proposed, enabling the update of occluded targets, in addition to assigning
birth, survival and death of targets. The algorithm is evaluated over a
commonly used filtering simulation scenario, with highly promising results.
| null |
http://arxiv.org/abs/1806.06594v2
|
http://arxiv.org/pdf/1806.06594v2.pdf
| null |
[
"Mehryar Emambakhsh",
"Alessandro Bay",
"Eduard Vazquez"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/low-resource-speech-to-text-translation
|
1803.09164
| null | null |
Low-Resource Speech-to-Text Translation
|
Speech-to-text translation has many potential applications for low-resource
languages, but the typical approach of cascading speech recognition with
machine translation is often impossible, since the transcripts needed to train
a speech recognizer are usually not available for low-resource languages.
Recent work has found that neural encoder-decoder models can learn to directly
translate foreign speech in high-resource scenarios, without the need for
intermediate transcription. We investigate whether this approach also works in
settings where both data and computation are limited. To make the approach
efficient, we make several architectural changes, including a change from
character-level to word-level decoding. We find that this choice yields crucial
speed improvements that allow us to train with fewer computational resources,
yet still performs well on frequent words. We explore models trained on between
20 and 160 hours of data, and find that although models trained on less data
have considerably lower BLEU scores, they can still predict words with
relatively high precision and recall---around 50% for a model trained on 50
hours of data, versus around 60% for the full 160 hour model. Thus, they may
still be useful for some low-resource scenarios.
| null |
http://arxiv.org/abs/1803.09164v2
|
http://arxiv.org/pdf/1803.09164v2.pdf
| null |
[
"Sameer Bansal",
"Herman Kamper",
"Karen Livescu",
"Adam Lopez",
"Sharon Goldwater"
] |
[
"Decoder",
"Machine Translation",
"speech-recognition",
"Speech Recognition",
"Speech-to-Text",
"Speech-to-Text Translation",
"Translation"
] | 2018-03-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/computational-theories-of-curiosity-driven
|
1802.10546
| null | null |
Computational Theories of Curiosity-Driven Learning
|
What are the functions of curiosity? What are the mechanisms of
curiosity-driven learning? We approach these questions about the living using
concepts and tools from machine learning and developmental robotics. We argue
that curiosity-driven learning enables organisms to make discoveries to solve
complex problems with rare or deceptive rewards. By fostering exploration and
discovery of a diversity of behavioural skills, and ignoring these rewards,
curiosity can efficiently bootstrap learning when there is no information,
or deceptive information, about local improvement towards these problems. We
also explain the key role of curiosity for efficient learning of world models.
We review both normative and heuristic computational frameworks used to
understand the mechanisms of curiosity in humans, conceptualizing the child as
a sense-making organism. These frameworks enable us to discuss the
bi-directional causal links between curiosity and learning, and to provide new
hypotheses about the fundamental role of curiosity in self-organizing
developmental structures through curriculum learning. We present various
developmental robotics experiments that study these mechanisms in action, both
supporting these hypotheses to understand better curiosity in humans and
opening new research avenues in machine learning and artificial intelligence.
Finally, we discuss challenges for the design of experimental paradigms for
studying curiosity in psychology and cognitive neuroscience.
Keywords: Curiosity, intrinsic motivation, lifelong learning, predictions,
world model, rewards, free-energy principle, learning progress, machine
learning, AI, developmental robotics, development, curriculum learning,
self-organization.
| null |
http://arxiv.org/abs/1802.10546v2
|
http://arxiv.org/pdf/1802.10546v2.pdf
| null |
[
"Pierre-Yves Oudeyer"
] |
[
"BIG-bench Machine Learning",
"Lifelong learning"
] | 2018-02-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/nonparametric-topic-modeling-with-neural
|
1806.06583
| null | null |
Nonparametric Topic Modeling with Neural Inference
|
This work focuses on combining nonparametric topic models with Auto-Encoding
Variational Bayes (AEVB). Specifically, we first propose iTM-VAE, where the
topics are treated as trainable parameters and the document-specific topic
proportions are obtained by a stick-breaking construction. The inference of
iTM-VAE is modeled by neural networks such that it can be computed in a simple
feed-forward manner. We also describe how to introduce a hyper-prior into
iTM-VAE so as to model the uncertainty of the prior parameter. Actually, the
hyper-prior technique is quite general and we show that it can be applied to
other AEVB based models to alleviate the {\it collapse-to-prior} problem
elegantly. Moreover, we also propose HiTM-VAE, where the document-specific
topic distributions are generated in a hierarchical manner. HiTM-VAE is even
more flexible and can generate topic distributions with better variability.
Experimental results on 20News and Reuters RCV1-V2 datasets show that the
proposed models outperform the state-of-the-art baselines significantly. The
advantages of the hyper-prior technique and the hierarchical model construction
are also confirmed by experiments.
| null |
http://arxiv.org/abs/1806.06583v1
|
http://arxiv.org/pdf/1806.06583v1.pdf
| null |
[
"Xuefei Ning",
"Yin Zheng",
"Zhuxi Jiang",
"Yu Wang",
"Huazhong Yang",
"Junzhou Huang"
] |
[
"Topic Models"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
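A short NumPy sketch of the stick-breaking construction mentioned in the abstract, which turns Beta(1, alpha) draws into document-specific topic proportions; in iTM-VAE the Beta parameters would be produced by the inference network rather than sampled from a fixed prior as below.

```python
# Stick-breaking construction of topic proportions.
import numpy as np

def stick_breaking(alpha, K, rng):
    betas = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining            # pi_k = beta_k * prod_{j<k}(1 - beta_j)

rng = np.random.default_rng(0)
pi = stick_breaking(alpha=2.0, K=10, rng=rng)   # proportions, sum <= 1
```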
https://paperswithcode.com/paper/wsd-algorithm-based-on-a-new-method-of-vector
|
1805.09559
| null | null |
WSD algorithm based on a new method of vector-word contexts proximity calculation via epsilon-filtration
|
The problem of word sense disambiguation (WSD) is considered in the article.
Given a set of synonyms (synsets) and sentences containing these synonyms, it
is necessary to select the meaning of the word in the sentence automatically. 1285
sentences were tagged by experts, namely, one of the dictionary meanings was
selected by experts for target words. To solve the WSD-problem, an algorithm
based on a new method of vector-word contexts proximity calculation is
proposed. In order to achieve higher accuracy, a preliminary epsilon-filtering
of words is performed, both in the sentence and in the set of synonyms. An
extensive program of experiments was carried out. Four algorithms are
implemented, including a new algorithm. Experiments have shown that in a number
of cases the new algorithm shows better results. The developed software and the
tagged corpus have an open license and are available online. Wiktionary and
Wikisource are used. A brief description of this work can be viewed in slides
(https://goo.gl/9ak6Gt). Video lecture in Russian on this research is available
online (https://youtu.be/-DLmRkepf58).
|
It is necessary to select the meaning of the word in the sentence automatically.
|
http://arxiv.org/abs/1805.09559v2
|
http://arxiv.org/pdf/1805.09559v2.pdf
| null |
[
"Alexander Kirillov",
"Natalia Krizhanovsky",
"Andrew Krizhanovsky"
] |
[
"Sentence",
"Word Sense Disambiguation"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
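A hedged sketch of the epsilon-filtration idea: before comparing a sentence context with a synset, drop context words whose vectors are not epsilon-close to any synset word. The cosine measure and the averaging used here are assumptions; the authors' exact proximity calculation may differ.

```python
# Epsilon-filtered context-to-synset similarity (illustrative).
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def eps_filter(context_vecs, synset_vecs, eps=0.3):
    return [u for u in context_vecs
            if any(cosine(u, v) >= eps for v in synset_vecs)]

def context_similarity(context_vecs, synset_vecs, eps=0.3):
    kept = eps_filter(context_vecs, synset_vecs, eps)
    if not kept:
        return 0.0
    return cosine(np.mean(kept, axis=0), np.mean(synset_vecs, axis=0))

rng = np.random.default_rng(0)
ctx = [rng.normal(size=8) for _ in range(5)]
syn = [rng.normal(size=8) for _ in range(3)]
score = context_similarity(ctx, syn, eps=0.1)
```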
https://paperswithcode.com/paper/the-kanerva-machine-a-generative-distributed
|
1804.01756
| null |
S1HlA-ZAZ
|
The Kanerva Machine: A Generative Distributed Memory
|
We present an end-to-end trained memory system that quickly adapts to new
data and generates samples like them. Inspired by Kanerva's sparse distributed
memory, it has a robust distributed reading and writing mechanism. The memory
is analytically tractable, which enables optimal on-line compression via a
Bayesian update-rule. We formulate it as a hierarchical conditional generative
model, where memory provides a rich data-dependent prior distribution.
Consequently, the top-down memory and bottom-up perception are combined to
produce the code representing an observation. Empirically, we demonstrate that
the adaptive memory significantly improves generative models trained on both
the Omniglot and CIFAR datasets. Compared with the Differentiable Neural
Computer (DNC) and its variants, our memory model has greater capacity and is
significantly easier to train.
| null |
http://arxiv.org/abs/1804.01756v3
|
http://arxiv.org/pdf/1804.01756v3.pdf
|
ICLR 2018 1
|
[
"Yan Wu",
"Greg Wayne",
"Alex Graves",
"Timothy Lillicrap"
] |
[] | 2018-04-05T00:00:00 |
https://openreview.net/forum?id=S1HlA-ZAZ
|
https://openreview.net/pdf?id=S1HlA-ZAZ
|
the-kanerva-machine-a-generative-distributed-1
| null |
[] |
https://paperswithcode.com/paper/rendernet-a-deep-convolutional-network-for
|
1806.06575
| null | null |
RenderNet: A deep convolutional network for differentiable rendering from 3D shapes
|
Traditional computer graphics rendering pipeline is designed for procedurally
generating 2D quality images from 3D shapes with high performance. The
non-differentiability due to discrete operations such as visibility computation
makes it hard to explicitly correlate rendering parameters and the resulting
image, posing a significant challenge for inverse rendering tasks. Recent work
on differentiable rendering achieves differentiability either by designing
surrogate gradients for non-differentiable operations or via an approximate but
differentiable renderer. These methods, however, are still limited when it
comes to handling occlusion, and restricted to particular rendering effects. We
present RenderNet, a differentiable rendering convolutional network with a
novel projection unit that can render 2D images from 3D shapes. Spatial
occlusion and shading calculation are automatically encoded in the network. Our
experiments show that RenderNet can successfully learn to implement different
shaders, and can be used in inverse rendering tasks to estimate shape, pose,
lighting and texture from a single image.
|
We present RenderNet, a differentiable rendering convolutional network with a novel projection unit that can render 2D images from 3D shapes.
|
http://arxiv.org/abs/1806.06575v3
|
http://arxiv.org/pdf/1806.06575v3.pdf
|
NeurIPS 2018 12
|
[
"Thu Nguyen-Phuoc",
"Chuan Li",
"Stephen Balaban",
"Yong-Liang Yang"
] |
[
"Inverse Rendering"
] | 2018-06-18T00:00:00 |
http://papers.nips.cc/paper/8014-rendernet-a-deep-convolutional-network-for-differentiable-rendering-from-3d-shapes
|
http://papers.nips.cc/paper/8014-rendernet-a-deep-convolutional-network-for-differentiable-rendering-from-3d-shapes.pdf
|
rendernet-a-deep-convolutional-network-for-1
| null |
[] |
https://paperswithcode.com/paper/distributed-learning-with-compressed
|
1806.06573
| null | null |
Distributed learning with compressed gradients
|
Asynchronous computation and gradient compression have emerged as two key
techniques for achieving scalability in distributed optimization for
large-scale machine learning. This paper presents a unified analysis framework
for distributed gradient methods operating with staled and compressed
gradients. Non-asymptotic bounds on convergence rates and information exchange
are derived for several optimization algorithms. These bounds give explicit
expressions for step-sizes and characterize how the amount of asynchrony and
the compression accuracy affect iteration and communication complexity
guarantees. Numerical results highlight convergence properties of different
gradient compression algorithms and confirm that fast convergence under limited
information exchange is indeed possible.
| null |
http://arxiv.org/abs/1806.06573v2
|
http://arxiv.org/pdf/1806.06573v2.pdf
| null |
[
"Sarit Khirirat",
"Hamid Reza Feyzmahdavian",
"Mikael Johansson"
] |
[
"BIG-bench Machine Learning",
"Distributed Optimization"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
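As an illustration of the compression operators such a unified analysis covers, here is a sketch of one gradient step with top-k sparsification; the choice of operator, k and step size are assumptions, not the paper's.

```python
# One gradient step with top-k compressed gradients.
import numpy as np

def top_k(g, k):
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]    # keep the k largest-magnitude entries
    out[idx] = g[idx]
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=10)
grad = rng.normal(size=10)              # stand-in for a worker's gradient
w -= 0.1 * top_k(grad, k=3)             # apply the compressed update
```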
https://paperswithcode.com/paper/subgram-extending-skip-gram-word
|
1806.06571
| null | null |
SubGram: Extending Skip-gram Word Representation with Substrings
|
Skip-gram (word2vec) is a recent method for creating vector representations
of words ("distributed word representations") using a neural network. The
representation gained popularity in various areas of natural language
processing, because it seems to capture syntactic and semantic information
about words without any explicit supervision in this respect. We propose
SubGram, a refinement of the Skip-gram model to consider also the word
structure during the training process, achieving large gains on the Skip-gram
original test set.
|
Skip-gram (word2vec) is a recent method for creating vector representations of words ("distributed word representations") using a neural network.
|
http://arxiv.org/abs/1806.06571v1
|
http://arxiv.org/pdf/1806.06571v1.pdf
| null |
[
"Tom Kocmi",
"Ondřej Bojar"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
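A sketch of one way to extend a word with its substrings, in the spirit of SubGram; this fastText-style character n-gram inventory with boundary markers is an assumption, as the paper's exact substring scheme may differ.

```python
# Enumerate character n-gram substrings of a word (illustrative scheme).
def substrings(word, n_min=3, n_max=5):
    w = f"<{word}>"                      # boundary markers
    grams = {w[i:i + n] for n in range(n_min, n_max + 1)
             for i in range(len(w) - n + 1)}
    grams.add(w)                         # keep the full word itself
    return sorted(grams)

print(substrings("where"))
```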
https://paperswithcode.com/paper/learning-from-outside-the-viability-kernel
|
1806.06569
| null | null |
Learning from Outside the Viability Kernel: Why we Should Build Robots that can Fall with Grace
|
Despite impressive results using reinforcement learning to solve complex
problems from scratch, in robotics this has still been largely limited to
model-based learning with very informative reward functions. One of the major
challenges is that the reward landscape often has large patches with no
gradient, making it difficult to sample gradients effectively. We show here
that the robot state-initialization can have a more important effect on the
reward landscape than is generally expected. In particular, we show the
counter-intuitive benefit of including initializations that are unviable, in
other words initializing in states that are doomed to fail.
| null |
http://arxiv.org/abs/1806.06569v1
|
http://arxiv.org/pdf/1806.06569v1.pdf
| null |
[
"Steve Heim",
"Alexander Spröwitz"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ista-net-interpretable-optimization-inspired
|
1706.07929
| null | null |
ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing
|
With the aim of developing a fast yet accurate algorithm for compressive
sensing (CS) reconstruction of natural images, we combine in this paper the
merits of two existing categories of CS methods: the structure insights of
traditional optimization-based methods and the speed of recent network-based
ones. Specifically, we propose a novel structured deep network, dubbed
ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm
(ISTA) for optimizing a general $\ell_1$ norm CS reconstruction model. To cast
ISTA into deep network form, we develop an effective strategy to solve the
proximal mapping associated with the sparsity-inducing regularizer using
nonlinear transforms. All the parameters in ISTA-Net (\eg nonlinear transforms,
shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than
being hand-crafted. Moreover, considering that the residuals of natural images
are more compressible, an enhanced version of ISTA-Net in the residual domain,
dubbed {ISTA-Net}$^+$, is derived to further improve CS reconstruction.
Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform
existing state-of-the-art optimization-based and network-based CS methods by
large margins, while maintaining fast computational speed. Our source codes are
available: \textsl{http://jianzhang.tech/projects/ISTA-Net}.
|
With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones.
|
http://arxiv.org/abs/1706.07929v2
|
http://arxiv.org/pdf/1706.07929v2.pdf
|
CVPR 2018 6
|
[
"Jian Zhang",
"Bernard Ghanem"
] |
[
"Compressive Sensing"
] | 2017-06-24T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Zhang_ISTA-Net_Interpretable_Optimization-Inspired_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_ISTA-Net_Interpretable_Optimization-Inspired_CVPR_2018_paper.pdf
|
ista-net-interpretable-optimization-inspired-1
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
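For reference, the classical ISTA iteration that ISTA-Net unfolds into network layers: a gradient step on the data term followed by soft-thresholding. In ISTA-Net the transform, thresholds and step sizes are learned end-to-end rather than fixed as in this sketch.

```python
# Classical ISTA for the l1-regularized least-squares problem.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.1, iters=100):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - y), lam * step)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60))
x_true = np.zeros(60); x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, lam=0.05, iters=300)
```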
https://paperswithcode.com/paper/state-gradients-for-rnn-memory-analysis
|
1805.04264
| null | null |
State Gradients for RNN Memory Analysis
|
We present a framework for analyzing what the state in RNNs remembers from
its input embeddings. Our approach is inspired by backpropagation, in the sense
that we compute the gradients of the states with respect to the input
embeddings. The gradient matrix is decomposed with Singular Value Decomposition
to analyze which directions in the embedding space are best transferred to the
hidden state space, characterized by the largest singular values. We apply our
approach to LSTM language models and investigate to what extent and for how
long certain classes of words are remembered on average for a certain corpus.
Additionally, the extent to which a specific property or relationship is
remembered by the RNN can be tracked by comparing a vector characterizing that
property with the direction(s) in embedding space that are best preserved in
hidden state space.
| null |
http://arxiv.org/abs/1805.04264v2
|
http://arxiv.org/pdf/1805.04264v2.pdf
|
WS 2018 11
|
[
"Lyan Verwimp",
"Hugo Van hamme",
"Vincent Renkens",
"Patrick Wambacq"
] |
[] | 2018-05-11T00:00:00 |
https://aclanthology.org/W18-5443
|
https://aclanthology.org/W18-5443.pdf
|
state-gradients-for-rnn-memory-analysis-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
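A minimal sketch of the analysis described in the abstract: given the gradient matrix of the RNN state with respect to an input embedding (a random stand-in here), its singular value decomposition exposes which embedding directions are best transferred to the hidden state.

```python
# SVD of a state-embedding gradient matrix (random stand-in).
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(128, 64))          # d_state x d_embedding gradient matrix
U, s, Vt = np.linalg.svd(G, full_matrices=False)
best_dirs = Vt[:3]                      # embedding directions best preserved
print(s[:3])                            # their singular values
```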
https://paperswithcode.com/paper/convex-optimization-with-unbounded-nonconvex
|
1711.02621
| null | null |
Convex Optimization with Unbounded Nonconvex Oracles using Simulated Annealing
|
We consider the problem of minimizing a convex objective function $F$ when
one can only evaluate its noisy approximation $\hat{F}$. Unless one assumes
some structure on the noise, $\hat{F}$ may be an arbitrary nonconvex function,
making the task of minimizing $F$ intractable. To overcome this, prior work has
often focused on the case when $F(x)-\hat{F}(x)$ is uniformly-bounded. In this
paper we study the more general case when the noise has magnitude $\alpha F(x)
+ \beta$ for some $\alpha, \beta > 0$, and present a polynomial time algorithm
that finds an approximate minimizer of $F$ for this noise model. Previously,
Markov chains, such as the stochastic gradient Langevin dynamics, have been
used to arrive at approximate solutions to these optimization problems.
However, for the noise model considered in this paper, no single temperature
allows such a Markov chain to both mix quickly and concentrate near the global
minimizer. We bypass this by combining "simulated annealing" with the
stochastic gradient Langevin dynamics, and gradually decreasing the temperature
of the chain in order to approach the global minimizer. As a corollary one can
approximately minimize a nonconvex function that is close to a convex function;
however, the closeness can deteriorate as one moves away from the optimum.
| null |
http://arxiv.org/abs/1711.02621v2
|
http://arxiv.org/pdf/1711.02621v2.pdf
| null |
[
"Oren Mangoubi",
"Nisheeth K. Vishnoi"
] |
[] | 2017-11-07T00:00:00 | null | null | null | null |
[] |
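A toy sketch of stochastic gradient Langevin dynamics combined with a decreasing temperature, as the abstract describes; the logarithmic cooling schedule and step size here are illustrative choices.

```python
# Langevin dynamics with an annealed (decreasing) temperature.
import numpy as np

def annealed_sgld(grad_f, x0, steps=1000, eta=1e-2, rng=None):
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for t in range(1, steps + 1):
        T = 1.0 / np.log(t + 1)                      # cooling schedule
        noise = rng.normal(size=x.shape)
        x -= eta * grad_f(x) - np.sqrt(2 * eta * T) * noise
    return x

x_min = annealed_sgld(lambda x: 2 * x, x0=[5.0])     # minimize f(x) = x^2
```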
https://paperswithcode.com/paper/incremental-sparse-bayesian-ordinal
|
1806.06553
| null | null |
Incremental Sparse Bayesian Ordinal Regression
|
Ordinal Regression (OR) aims to model the ordering information between
different data categories, which is a crucial topic in multi-label learning. An
important class of approaches to OR models the problem as a linear combination
of basis functions that map features to a high dimensional non-linear space.
However, most of the basis function-based algorithms are time consuming. We
propose an incremental sparse Bayesian approach to OR tasks and introduce an
algorithm to sequentially learn the relevant basis functions in the ordinal
scenario. Our method, called Incremental Sparse Bayesian Ordinal Regression
(ISBOR), automatically optimizes the hyper-parameters via the type-II maximum
likelihood method. By exploiting fast marginal likelihood optimization, ISBOR
can avoid big matrix inverses, which is the main bottleneck in applying basis
function-based algorithms to OR tasks on large-scale datasets. We show that
ISBOR can make accurate predictions with parsimonious basis functions while
offering automatic estimates of the prediction uncertainty. Extensive
experiments on synthetic and real word datasets demonstrate the efficiency and
effectiveness of ISBOR compared to other basis function-based OR approaches.
|
Ordinal Regression (OR) aims to model the ordering information between different data categories, which is a crucial topic in multi-label learning.
|
http://arxiv.org/abs/1806.06553v1
|
http://arxiv.org/pdf/1806.06553v1.pdf
| null |
[
"Chang Li",
"Maarten de Rijke"
] |
[
"Multi-Label Learning",
"regression"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sniper-efficient-multi-scale-training
|
1805.09300
| null | null |
SNIPER: Efficient Multi-Scale Training
|
We present SNIPER, an algorithm for performing efficient multi-scale training
in instance level visual recognition tasks. Instead of processing every pixel
in an image pyramid, SNIPER processes context regions around ground-truth
instances (referred to as chips) at the appropriate scale. For background
sampling, these context-regions are generated using proposals extracted from a
region proposal network trained with a short learning schedule. Hence, the
number of chips generated per image during training adaptively changes based on
the scene complexity. SNIPER only processes 30% more pixels compared to the
commonly used single scale training at 800x1333 pixels on the COCO dataset.
But, it also observes samples from extreme resolutions of the image pyramid,
like 1400x2000 pixels. As SNIPER operates on resampled low resolution chips
(512x512 pixels), it can have a batch size as large as 20 on a single GPU even
with a ResNet-101 backbone. Therefore it can benefit from batch-normalization
during training without the need for synchronizing batch-normalization
statistics across GPUs. SNIPER brings training of instance level recognition
tasks like object detection closer to the protocol for image classification and
suggests that the commonly accepted guideline that it is important to train on
high resolution images for instance level visual recognition tasks might not be
correct. Our implementation based on Faster-RCNN with a ResNet-101 backbone
obtains an mAP of 47.6% on the COCO dataset for bounding box detection and can
process 5 images per second during inference with a single GPU. Code is
available at https://github.com/MahyarNajibi/SNIPER/.
|
Our implementation based on Faster-RCNN with a ResNet-101 backbone obtains an mAP of 47.6% on the COCO dataset for bounding box detection and can process 5 images per second during inference with a single GPU.
|
http://arxiv.org/abs/1805.09300v3
|
http://arxiv.org/pdf/1805.09300v3.pdf
|
NeurIPS 2018 12
|
[
"Bharat Singh",
"Mahyar Najibi",
"Larry S. Davis"
] |
[
"GPU",
"image-classification",
"object-detection",
"Object Detection",
"Region Proposal"
] | 2018-05-23T00:00:00 |
http://papers.nips.cc/paper/8143-sniper-efficient-multi-scale-training
|
http://papers.nips.cc/paper/8143-sniper-efficient-multi-scale-training.pdf
|
sniper-efficient-multi-scale-training-1
| null |
[
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**SNIPER** is a multi-scale training approach for instance-level recognition tasks like object detection and instance-level segmentation. Instead of processing all pixels in an image pyramid, SNIPER selectively processes context regions around the ground-truth objects (a.k.a chips). This can help to speed up multi-scale training as it operates on low-resolution chips. Due to its memory-efficient design, SNIPER can benefit from [Batch Normalization](https://paperswithcode.com/method/batch-normalization) during training and it makes larger batch-sizes possible for instance-level recognition tasks on a single GPU.",
"full_name": "SNIPER",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Multi-Scale Training",
"parent": null
},
"name": "SNIPER",
"source_title": "SNIPER: Efficient Multi-Scale Training",
"source_url": "http://arxiv.org/abs/1805.09300v3"
},
{
"code_snippet_url": "",
"description": "**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function compromising both the primary loss function and a penalty on the $L\\_{2}$ Norm of the weights:\r\n\r\n$$L\\_{new}\\left(w\\right) = L\\_{original}\\left(w\\right) + \\lambda{w^{T}w}$$\r\n\r\nwhere $\\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). \r\n\r\nWeight decay can be incorporated directly into the weight update rule, rather than just implicitly by defining it through to objective function. Often weight decay refers to the implementation where we specify it directly in the weight update rule (whereas L2 regularization is usually the implementation which is specified in the objective function).\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Weight Decay",
"introduced_year": 1943,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Weight Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/5e9ebe8dadc0ea2841a46cfcd82a93b4ce0d4519/torchvision/ops/roi_pool.py#L10",
"description": "**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentation based tasks. Features are extracted from each candidate box, and thereafter in models like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), are then classified and bounding box regression performed.\r\n\r\nThe actual scaling to, e.g., $7×7$, occurs by dividing the region proposal into equally sized sections, finding the largest value in each section, and then copying these max values to the output buffer. In essence, **RoIPool** is [max pooling](https://paperswithcode.com/method/max-pooling) on a discrete grid based on a box.\r\n\r\nImage Source: [Joyce Xu](https://towardsdatascience.com/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9)",
"full_name": "RoIPool",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. Below you can find a continuously updating list of RoI Feature Extractors.",
"name": "RoI Feature Extractors",
"parent": null
},
"name": "RoIPool",
"source_title": "Rich feature hierarchies for accurate object detection and semantic segmentation",
"source_url": "http://arxiv.org/abs/1311.2524v5"
},
{
"code_snippet_url": "https://github.com/chenyuntc/simple-faster-rcnn-pytorch/blob/367db367834efd8a2bc58ee0023b2b628a0e474d/model/faster_rcnn.py#L22",
"description": "**Faster R-CNN** is an object detection model that improves on [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) by utilising a region proposal network ([RPN](https://paperswithcode.com/method/rpn)) with the CNN model. The RPN shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. It is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) for detection. RPN and Fast [R-CNN](https://paperswithcode.com/method/r-cnn) are merged into a single network by sharing their convolutional features: the RPN component tells the unified network where to look.\r\n\r\nAs a whole, Faster R-CNN consists of two modules. The first module is a deep fully convolutional network that proposes regions, and the second module is the Fast R-CNN detector that uses the proposed regions.",
"full_name": "Faster R-CNN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.",
"name": "Object Detection Models",
"parent": null
},
"name": "Faster R-CNN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
},
{
"code_snippet_url": null,
"description": "A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can be merged into a single network by sharing their convolutional features - using the recently popular terminology of neural networks with attention mechanisms, the RPN component tells the unified network where to look.\r\n\r\nRPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. RPNs use anchor boxes that serve as references at multiple scales and aspect ratios. The scheme can be thought of as a pyramid of regression references, which avoids enumerating images or filters of multiple scales or aspect ratios.",
"full_name": "Region Proposal Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Region Proposal",
"parent": null
},
"name": "RPN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
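The mapping $\mathcal{F}({x})+{x}$ is one line of code. Below is a minimal sketch: `ResidualWrapper` is a hypothetical helper, not a PyTorch class, and it assumes the wrapped module preserves the input shape.

```python
import torch
import torch.nn as nn

class ResidualWrapper(nn.Module):
    """Computes f(x) + x for any shape-preserving module f (hypothetical helper)."""
    def __init__(self, f: nn.Module):
        super().__init__()
        self.f = f

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.f(x) + x  # the skip path carries x through unchanged

layer = ResidualWrapper(
    nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64)))
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```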
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
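For instance, a 1x1 convolution can squeeze 256 channels down to 64 while leaving the spatial grid untouched; the channel counts below are chosen arbitrarily.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 256, 32, 32)             # 256 input channels
reduce = nn.Conv2d(256, 64, kernel_size=1)  # per-pixel linear map over channels
print(reduce(x).shape)  # torch.Size([1, 64, 32, 32]): spatial size unchanged
```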
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
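A from-scratch sketch of the training-time computation above (inference-time running statistics and the parameter updates are omitted); the function name is illustrative.

```python
import torch

def batch_norm_train(x, gamma, beta, eps=1e-5):
    # x: (N, C, H, W). Statistics are computed per channel over N, H, W,
    # matching the minibatch formulas in the description above.
    mu = x.mean(dim=(0, 2, 3), keepdim=True)
    var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    x_hat = (x - mu) / torch.sqrt(var + eps)
    return gamma.view(1, -1, 1, 1) * x_hat + beta.view(1, -1, 1, 1)

x = torch.randn(4, 3, 8, 8) * 2 + 5
out = batch_norm_train(x, torch.ones(3), torch.zeros(3))
print(out.mean(dim=(0, 2, 3)), out.std(dim=(0, 2, 3)))  # ~0 and ~1 per channel
```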
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
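A minimal sketch of such a block for the identity-skip case (stride 1, matching channel counts); the conv-BN-ReLU ordering follows common practice, and the class name is illustrative.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 expand, with an identity skip connection."""
    def __init__(self, channels: int, bottleneck: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck, 1, bias=False),
            nn.BatchNorm2d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, 3, padding=1, bias=False),
            nn.BatchNorm2d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)

block = Bottleneck(256, 64)  # thin 64-channel middle inside a 256-channel block
print(block(torch.randn(2, 256, 14, 14)).shape)  # torch.Size([2, 256, 14, 14])
```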
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
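In code, global average pooling is just a mean over the spatial dimensions; the layer sizes below are illustrative.

```python
import torch
import torch.nn as nn

feats = torch.randn(8, 512, 7, 7)      # final conv feature maps
pooled = feats.mean(dim=(2, 3))        # one scalar per feature map
logits = nn.Linear(512, 1000)(pooled)  # fed to the softmax layer
print(pooled.shape, logits.shape)      # (8, 512) and (8, 1000)
```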
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
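A quick check of the scheme in PyTorch, which ships this initializer as `kaiming_normal_`; the layer shape is arbitrary.

```python
import torch
import torch.nn as nn

layer = nn.Conv2d(64, 128, kernel_size=3, padding=1)
# Draw weights from N(0, 2 / n_l) with n_l = fan_in = 64 * 3 * 3 here.
nn.init.kaiming_normal_(layer.weight, mode='fan_in', nonlinearity='relu')
nn.init.zeros_(layer.bias)  # biases are initialized at 0
fan_in = 64 * 3 * 3
print(layer.weight.std().item(), (2 / fan_in) ** 0.5)  # both ~0.059
```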
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
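A tiny worked example of 2x2 max pooling with stride 2; the input values are chosen by hand so the pooled output is easy to verify.

```python
import torch
import torch.nn as nn

x = torch.tensor([[1., 3., 2., 4.],
                  [5., 6., 7., 8.],
                  [3., 2., 1., 0.],
                  [1., 2., 3., 4.]]).view(1, 1, 4, 4)
pool = nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(x).squeeze())
# tensor([[6., 8.],
#         [3., 4.]])  -- the max of each non-overlapping 2x2 patch
```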
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
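The sliding-window operation can be written out directly. Below is a naive NumPy sketch for a single channel; real frameworks use much faster algorithms, and note that deep-learning "convolution" is strictly cross-correlation (the kernel is not flipped).

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Valid' sliding-window convolution as used in deep learning."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise multiply the window by the kernel and sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge = np.array([[1., 0., -1.]] * 3)  # crude vertical-edge detector
image = np.arange(25, dtype=float).reshape(5, 5)
print(conv2d(image, edge).shape)      # (3, 3)
```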
{
"code_snippet_url": "",
"description": "In today’s digital age, Bitcoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Bitcoin transaction not confirmed, your Bitcoin wallet not showing balance, or you're trying to recover a lost Bitcoin wallet, knowing where to get help is essential. That’s why the Bitcoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Bitcoin Customer Support Number +1-833-534-1729\r\nBitcoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Bitcoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Bitcoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Bitcoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Bitcoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Bitcoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Bitcoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Bitcoin Deposit Not Received\r\nIf someone has sent you Bitcoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Bitcoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Bitcoin Transaction Stuck or Pending\r\nSometimes your Bitcoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Bitcoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Bitcoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Bitcoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Bitcoin tech.\r\n\r\n24/7 Availability: Bitcoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Bitcoin Support and Wallet Issues\r\nQ1: Can Bitcoin support help me recover stolen BTC?\r\nA: While Bitcoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Bitcoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Bitcoin’s official number (Bitcoin is decentralized), it connects you to trained professionals experienced in resolving all major Bitcoin issues.\r\n\r\nFinal Thoughts\r\nBitcoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Bitcoin transaction not confirmed, your Bitcoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Bitcoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Bitcoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "Bitcoin Customer Service Number +1-833-534-1729",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
}
] |
https://paperswithcode.com/paper/constraining-the-dynamics-of-deep
|
1802.05680
| null | null |
Constraining the Dynamics of Deep Probabilistic Models
|
We introduce a novel generative formulation of deep probabilistic models
implementing "soft" constraints on their function dynamics. In particular, we
develop a flexible methodological framework where the modeled functions and
derivatives of a given order are subject to inequality or equality constraints.
We then characterize the posterior distribution over model and constraint
parameters through stochastic variational inference. As a result, the proposed
approach allows for accurate and scalable uncertainty quantification on the
predictions and on all parameters. We demonstrate the application of equality
constraints in the challenging problem of parameter inference in ordinary
differential equation models, while we showcase the application of inequality
constraints on the problem of monotonic regression of count data. The proposed
approach is extensively tested in several experimental settings, leading to
highly competitive results in challenging modeling applications, while offering
high expressiveness, flexibility and scalability.
| null |
http://arxiv.org/abs/1802.05680v2
|
http://arxiv.org/pdf/1802.05680v2.pdf
|
ICML 2018 7
|
[
"Marco Lorenzi",
"Maurizio Filippone"
] |
[
"Uncertainty Quantification",
"Variational Inference"
] | 2018-02-15T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2209
|
http://proceedings.mlr.press/v80/lorenzi18a/lorenzi18a.pdf
|
constraining-the-dynamics-of-deep-1
| null |
[] |
https://paperswithcode.com/paper/a-simple-reservoir-model-of-working-memory
|
1806.06545
| null | null |
A Simple Reservoir Model of Working Memory with Real Values
|
The prefrontal cortex is known to be involved in many high-level cognitive
functions, in particular, working memory. Here, we study to what extent a group
of randomly connected units (namely an Echo State Network, ESN) can store and
maintain (as output) an arbitrary real value from a streamed input, i.e. can
act as a sustained working memory unit. Furthermore, we explore to what extent
such an architecture can take advantage of the stored value in order to produce
non-linear computations. Comparison between different architectures (with and
without feedback, with and without a working memory unit) shows that an
explicit memory improves performance.
|
The prefrontal cortex is known to be involved in many high-level cognitive functions, in particular, working memory.
|
http://arxiv.org/abs/1806.06545v1
|
http://arxiv.org/pdf/1806.06545v1.pdf
| null |
[
"Anthony Strock",
"Nicolas Rougier",
"Xavier Hinaut"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/segmentation-of-photovoltaic-module-cells-in
|
1806.06530
| null | null |
Segmentation of Photovoltaic Module Cells in Uncalibrated Electroluminescence Images
|
High resolution electroluminescence (EL) images captured in the infrared spectrum allow visual, non-destructive inspection of the quality of photovoltaic (PV) modules. Currently, however, such a visual inspection requires trained experts to discern different kinds of defects, which is time-consuming and expensive. Automated segmentation of cells is therefore a key step in automating the visual inspection workflow. In this work, we propose a robust automated segmentation method for extraction of individual solar cells from EL images of PV modules. This enables controlled studies on large amounts of data to understand the effects of module degradation over time, a process not yet fully understood. The proposed method infers in several steps a high-level solar module representation from low-level edge features. An important step in the algorithm is to formulate the segmentation problem in terms of lens calibration by exploiting the plumbline constraint. We evaluate our method on a dataset of various solar module types containing a total of 408 solar cells with various defects. Our method robustly solves this task with a median weighted Jaccard index of 94.47% and an $F_1$ score of 97.62%, both indicating a very high similarity between automatically segmented and ground truth solar cell masks.
|
Automated segmentation of cells is therefore a key step in automating the visual inspection workflow.
|
https://arxiv.org/abs/1806.06530v4
|
https://arxiv.org/pdf/1806.06530v4.pdf
| null |
[
"Sergiu Deitsch",
"Claudia Buerhop-Lutz",
"Evgenii Sovetkin",
"Ansgar Steland",
"Andreas Maier",
"Florian Gallwitz",
"Christian Riess"
] |
[
"Segmentation",
"Solar Cell Segmentation"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dual-recovery-network-with-online
|
1701.05652
| null | null |
Dual Recovery Network with Online Compensation for Image Super-Resolution
|
Image super-resolution (SR) methods essentially lead to a loss of some
high-frequency (HF) information when predicting high-resolution (HR) images
from low-resolution (LR) images without using external references. To address
this issue, we additionally utilize online retrieved data to facilitate image
SR in a unified deep framework. A novel dual high-frequency recovery network
(DHN) is proposed to predict an HR image with three parts: an LR image, an
internal inferred HF (IHF) map (HF missing part inferred solely from the LR
image) and an external extracted HF (EHF) map. In particular, we infer the HF
information based on both the LR image and similar HR references which are
retrieved online. For the EHF map, we align the references with affine
transformation and then, in the aligned references, part of the HF signals is
extracted by the proposed DHN to compensate for the HF loss. Extensive
experimental results demonstrate that our DHN achieves notably better
performance than state-of-the-art SR methods.
| null |
http://arxiv.org/abs/1701.05652v3
|
http://arxiv.org/pdf/1701.05652v3.pdf
| null |
[
"Sifeng Xia",
"Wenhan Yang",
"Jiaying Liu",
"Zongming Guo"
] |
[
"Image Super-Resolution",
"Super-Resolution"
] | 2017-01-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hitnet-a-neural-network-with-capsules
|
1806.06519
| null | null |
HitNet: a neural network with capsules embedded in a Hit-or-Miss layer, extended with hybrid data augmentation and ghost capsules
|
Neural networks designed for the task of classification have become a
commodity in recent years. Many works target the development of better
networks, which results in a complexification of their architectures with more
layers, multiple sub-networks, or even the combination of multiple classifiers.
In this paper, we show how to redesign a simple network to reach excellent
performances, which are better than the results reproduced with CapsNet on
several datasets, by replacing a layer with a Hit-or-Miss layer. This layer
contains activated vectors, called capsules, that we train to hit or miss a
central capsule by tailoring a specific centripetal loss function. We also show
how our network, named HitNet, is capable of synthesizing a representative
sample of the images of a given class by including a reconstruction network.
This capability allows us to develop a data augmentation step combining
information from the data space and the feature space, resulting in a hybrid
data augmentation process. In addition, we introduce the possibility for
HitNet to adopt an alternative to the true target when needed by using the new
concept of ghost capsules, which is used here to detect potentially mislabeled
images in the training data.
|
In this paper, we show how to redesign a simple network to reach excellent performances, which are better than the results reproduced with CapsNet on several datasets, by replacing a layer with a Hit-or-Miss layer.
|
http://arxiv.org/abs/1806.06519v1
|
http://arxiv.org/pdf/1806.06519v1.pdf
| null |
[
"Adrien Deliège",
"Anthony Cioppa",
"Marc Van Droogenbroeck"
] |
[
"Data Augmentation"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-information-autoencoding-family-a
|
1806.06514
| null | null |
The Information Autoencoding Family: A Lagrangian Perspective on Latent Variable Generative Models
|
A large number of objectives have been proposed to train latent variable
generative models. We show that many of them are Lagrangian dual functions of
the same primal optimization problem. The primal problem optimizes the mutual
information between latent and visible variables, subject to the constraints of
accurately modeling the data distribution and performing correct amortized
inference. By choosing to maximize or minimize mutual information, and choosing
different Lagrange multipliers, we obtain different objectives including
InfoGAN, ALI/BiGAN, ALICE, CycleGAN, beta-VAE, adversarial autoencoders, AVB,
AS-VAE and InfoVAE. Based on this observation, we provide an exhaustive
characterization of the statistical and computational trade-offs made by all
the training objectives in this class of Lagrangian duals. Next, we propose a
dual optimization method where we optimize model parameters as well as the
Lagrange multipliers. This method achieves Pareto optimal solutions in terms of
optimizing information and satisfying the constraints.
|
A large number of objectives have been proposed to train latent variable generative models.
|
http://arxiv.org/abs/1806.06514v2
|
http://arxiv.org/pdf/1806.06514v2.pdf
| null |
[
"Shengjia Zhao",
"Jiaming Song",
"Stefano Ermon"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/znxlwm/pytorch-pix2pix/blob/3059f2af53324e77089bbcfc31279f01a38c40b8/network.py#L104",
"description": "**PatchGAN** is a type of discriminator for generative adversarial networks which only penalizes structure at the scale of local image patches. The PatchGAN discriminator tries to classify if each $N \\times N$ patch in an image is real or fake. This discriminator is run convolutionally across the image, averaging all responses to provide the ultimate output of $D$. Such a discriminator effectively models the image as a Markov random field, assuming independence between pixels separated by more than a patch diameter. It can be understood as a type of texture/style loss.",
"full_name": "PatchGAN",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Discriminators** are a type of module used in architectures such as generative adversarial networks to discriminate between real and generated samples. Below you can find a continuously updating list of discriminators.",
"name": "Discriminators",
"parent": null
},
"name": "PatchGAN",
"source_title": "Image-to-Image Translation with Conditional Adversarial Networks",
"source_url": "http://arxiv.org/abs/1611.07004v3"
},
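A sketch in the spirit of the commonly used 70x70 PatchGAN; the exact layer widths and normalization choice vary between implementations, so treat the numbers below as illustrative rather than canonical.

```python
import torch
import torch.nn as nn

def patchgan_70(in_ch: int = 3) -> nn.Sequential:
    """PatchGAN-style discriminator: one real/fake score per local patch."""
    def block(ci, co, stride):
        return [nn.Conv2d(ci, co, 4, stride, 1), nn.InstanceNorm2d(co),
                nn.LeakyReLU(0.2, inplace=True)]
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
        *block(64, 128, 2), *block(128, 256, 2), *block(256, 512, 1),
        nn.Conv2d(512, 1, 4, 1, 1),  # a map of patch scores, not one scalar
    )

d = patchgan_70()
print(d(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 1, 30, 30])
```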
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/1c5c289b6218eb1026dcb5fd9738231401cfccea/torch/nn/modules/instancenorm.py#L141",
"description": "**Instance Normalization** (also known as contrast normalization) is a normalization layer where:\r\n\r\n$$\r\n y_{tijk} = \\frac{x_{tijk} - \\mu_{ti}}{\\sqrt{\\sigma_{ti}^2 + \\epsilon}},\r\n \\quad\r\n \\mu_{ti} = \\frac{1}{HW}\\sum_{l=1}^W \\sum_{m=1}^H x_{tilm},\r\n \\quad\r\n \\sigma_{ti}^2 = \\frac{1}{HW}\\sum_{l=1}^W \\sum_{m=1}^H (x_{tilm} - \\mu_{ti})^2.\r\n$$\r\n\r\nThis prevents instance-specific mean and covariance shift simplifying the learning process. Intuitively, the normalization process allows to remove instance-specific contrast information from the content image in a task like image stylization, which simplifies generation.",
"full_name": "Instance Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Instance Normalization",
"source_title": "Instance Normalization: The Missing Ingredient for Fast Stylization",
"source_url": "http://arxiv.org/abs/1607.08022v3"
},
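A from-scratch sketch of the formula above (without the optional affine parameters), checked against PyTorch's functional implementation:

```python
import torch
import torch.nn.functional as F

def instance_norm(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # x: (N, C, H, W). Unlike batch norm, statistics are computed
    # per sample AND per channel, i.e. over H and W only.
    mu = x.mean(dim=(2, 3), keepdim=True)
    var = x.var(dim=(2, 3), unbiased=False, keepdim=True)
    return (x - mu) / torch.sqrt(var + eps)

x = torch.randn(2, 3, 16, 16) * 5 + 7  # arbitrary contrast and brightness
print(torch.allclose(instance_norm(x), F.instance_norm(x), atol=1e-4))  # True
```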
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/eriklindernoren/PyTorch-GAN/blob/a163b82beff3d01688d8315a3fd39080400e7c01/implementations/lsgan/lsgan.py#L102",
"description": "**GAN Least Squares Loss** is a least squares loss function for generative adversarial networks. Minimizing this objective function is equivalent to minimizing the Pearson $\\chi^{2}$ divergence. The objective function (here for [LSGAN](https://paperswithcode.com/method/lsgan)) can be defined as:\r\n\r\n$$ \\min\\_{D}V\\_{LS}\\left(D\\right) = \\frac{1}{2}\\mathbb{E}\\_{\\mathbf{x} \\sim p\\_{data}\\left(\\mathbf{x}\\right)}\\left[\\left(D\\left(\\mathbf{x}\\right) - b\\right)^{2}\\right] + \\frac{1}{2}\\mathbb{E}\\_{\\mathbf{z}\\sim p\\_{data}\\left(\\mathbf{z}\\right)}\\left[\\left(D\\left(G\\left(\\mathbf{z}\\right)\\right) - a\\right)^{2}\\right] $$\r\n\r\n$$ \\min\\_{G}V\\_{LS}\\left(G\\right) = \\frac{1}{2}\\mathbb{E}\\_{\\mathbf{z} \\sim p\\_{\\mathbf{z}}\\left(\\mathbf{z}\\right)}\\left[\\left(D\\left(G\\left(\\mathbf{z}\\right)\\right) - c\\right)^{2}\\right] $$\r\n\r\nwhere $a$ and $b$ are the labels for fake data and real data and $c$ denotes the value that $G$ wants $D$ to believe for fake data.",
"full_name": "GAN Least Squares Loss",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "GAN Least Squares Loss",
"source_title": "Least Squares Generative Adversarial Networks",
"source_url": "http://arxiv.org/abs/1611.04076v3"
},
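The two objectives translate directly into code; `a`, `b`, `c` default to the common 0/1/1 labeling, and the function names are illustrative.

```python
import torch

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    # Discriminator pushes real outputs toward b and fake outputs toward a.
    return 0.5 * ((d_real - b) ** 2).mean() + 0.5 * ((d_fake - a) ** 2).mean()

def lsgan_g_loss(d_fake, c=1.0):
    # Generator pushes the discriminator's output on fakes toward c.
    return 0.5 * ((d_fake - c) ** 2).mean()

d_real, d_fake = torch.rand(16, 1), torch.rand(16, 1)  # dummy scores
print(lsgan_d_loss(d_real, d_fake).item(), lsgan_g_loss(d_fake).item())
```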
{
"code_snippet_url": "https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/f5834b3ed339ec268f40cf56928234eed8dfeb92/models/cycle_gan_model.py#L172",
"description": "**Cycle Consistency Loss** is a type of loss used for generative adversarial networks that performs unpaired image-to-image translation. It was introduced with the [CycleGAN](https://paperswithcode.com/method/cyclegan) architecture. For two domains $X$ and $Y$, we want to learn a mapping $G : X \\rightarrow Y$ and $F: Y \\rightarrow X$. We want to enforce the intuition that these mappings should be reverses of each other and that both mappings should be bijections. Cycle Consistency Loss encourages $F\\left(G\\left(x\\right)\\right) \\approx x$ and $G\\left(F\\left(y\\right)\\right) \\approx y$. It reduces the space of possible mapping functions by enforcing forward and backwards consistency:\r\n\r\n$$ \\mathcal{L}\\_{cyc}\\left(G, F\\right) = \\mathbb{E}\\_{x \\sim p\\_{data}\\left(x\\right)}\\left[||F\\left(G\\left(x\\right)\\right) - x||\\_{1}\\right] + \\mathbb{E}\\_{y \\sim p\\_{data}\\left(y\\right)}\\left[||G\\left(F\\left(y\\right)\\right) - y||\\_{1}\\right] $$",
"full_name": "Cycle Consistency Loss",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "Cycle Consistency Loss",
"source_title": "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks",
"source_url": "https://arxiv.org/abs/1703.10593v7"
},
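A minimal sketch of the loss; the toy linear layers below merely stand in for the two generators (`F_` avoids shadowing the parameter name, nothing more).

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(x, y, G, F):
    # L1 distance between each input and its round-trip reconstruction.
    forward_cycle = (F(G(x)) - x).abs().mean()
    backward_cycle = (G(F(y)) - y).abs().mean()
    return forward_cycle + backward_cycle

G, F_ = nn.Linear(32, 32), nn.Linear(32, 32)  # stand-ins for real generators
x, y = torch.randn(4, 32), torch.randn(4, 32)
print(cycle_consistency_loss(x, y, G, F_).item())
```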
{
"code_snippet_url": "https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/9e6fff7b7d5215a38be3cac074ca7087041bea0d/models/cycle_gan_model.py#L8",
"description": "In today’s digital age, Cardano has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Cardano transaction not confirmed, your Cardano wallet not showing balance, or you're trying to recover a lost Cardano wallet, knowing where to get help is essential. That’s why the Cardano customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Cardano Customer Support Number +1-833-534-1729\r\nCardano operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Cardano Transaction Not Confirmed\r\nOne of the most common concerns is when a Cardano transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Cardano Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Cardano wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Cardano Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Cardano wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Cardano Deposit Not Received\r\nIf someone has sent you Cardano but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Cardano deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Cardano Transaction Stuck or Pending\r\nSometimes your Cardano transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Cardano Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Cardano wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Cardano Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Cardano tech.\r\n\r\n24/7 Availability: Cardano doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Cardano Support and Wallet Issues\r\nQ1: Can Cardano support help me recover stolen BTC?\r\nA: While Cardano transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Cardano transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Cardano’s official number (Cardano is decentralized), it connects you to trained professionals experienced in resolving all major Cardano issues.\r\n\r\nFinal Thoughts\r\nCardano is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Cardano transaction not confirmed, your Cardano wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Cardano customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Cardano Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Cardano Customer Service Number +1-833-534-1729",
"source_title": "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks",
"source_url": "https://arxiv.org/abs/1703.10593v7"
}
] |
https://paperswithcode.com/paper/semi-tied-units-for-efficient-gating-in-lstm
|
1806.06513
| null | null |
Semi-tied Units for Efficient Gating in LSTM and Highway Networks
|
Gating is a key technique used for integrating information from multiple
sources by long short-term memory (LSTM) models and has recently also been
applied to other models such as the highway network. Although gating is
powerful, it is rather expensive in terms of both computation and storage as
each gating unit uses a separate full weight matrix. This issue can be severe
since several gates can be used together in e.g. an LSTM cell. This paper
proposes a semi-tied unit (STU) approach to solve this efficiency issue, which
uses one shared weight matrix to replace those in all the units in the same
layer. The approach is termed "semi-tied" since extra parameters are used to
separately scale each of the shared output values. These extra scaling factors
are associated with the network activation functions and result in the use of
parameterised sigmoid, hyperbolic tangent, and rectified linear unit functions.
Speech recognition experiments using British English multi-genre broadcast data
showed that using STUs can reduce the calculation and storage cost by a factor
of three for highway networks and four for LSTMs, while giving similar word
error rates to the original models.
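
As a rough sketch of the idea (not the authors' implementation; class and variable names are invented), the following PyTorch snippet shares one weight matrix across several gates and gives each gate only per-unit scales and biases inside a parameterised sigmoid:

```python
import torch
import torch.nn as nn

class SemiTiedGates(nn.Module):
    """Sketch of semi-tied units: one shared full weight matrix replaces
    the per-gate matrices; each gate keeps only cheap per-unit scaling
    factors and biases inside its (parameterised) activation."""

    def __init__(self, input_size, hidden_size, num_gates=3):
        super().__init__()
        self.shared = nn.Linear(input_size, hidden_size)  # one full matrix
        self.scales = nn.Parameter(torch.ones(num_gates, hidden_size))
        self.biases = nn.Parameter(torch.zeros(num_gates, hidden_size))

    def forward(self, x):
        z = self.shared(x)  # shared projection, computed once
        # parameterised sigmoid per gate: sigma(a_g * z + b_g)
        return [torch.sigmoid(self.scales[g] * z + self.biases[g])
                for g in range(self.scales.shape[0])]

gates = SemiTiedGates(input_size=64, hidden_size=128)
i_gate, f_gate, o_gate = gates(torch.randn(8, 64))
```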
| null |
http://arxiv.org/abs/1806.06513v1
|
http://arxiv.org/pdf/1806.06513v1.pdf
| null |
[
"Chao Zhang",
"Philip Woodland"
] |
[
"speech-recognition",
"Speech Recognition"
] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on \"information highways\". The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures.",
"full_name": "Highway networks",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "If you're looking to get in touch with American Airlines fast, ☎️+1-801-(855)-(5905)or +1-804-853-9001✅ there are\r\nseveral efficient ways to reach their customer service team. The quickest method is to dial ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. American’s phone service ensures that you can speak with a live\r\nrepresentative promptly to resolve any issues or queries regarding your booking, reservation,\r\nor any changes, such as name corrections or ticket cancellations.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Highway networks",
"source_title": "Highway Networks",
"source_url": "http://arxiv.org/abs/1505.00387v2"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/predicting-citation-counts-with-a-neural
|
1806.04641
| null | null |
Predicting Citation Counts with a Neural Network
|
We here describe and present results of a simple neural network that predicts
individual researchers' future citation counts based on a variety of data from
the researchers' past. For publications available on the open access-server
arXiv.org we find a higher predictability than previous studies.
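
A minimal sketch of the kind of model described, with invented feature names and dummy data (the paper's exact inputs and architecture are not reproduced here):

```python
import torch
import torch.nn as nn

# Hypothetical per-author features: past citations, paper count,
# years active, h-index, number of co-authors.
features = torch.randn(256, 5)                # 256 authors, 5 features
future_citations = torch.rand(256, 1) * 1000  # dummy regression targets

model = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):  # plain MSE regression loop
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(features), future_citations)
    loss.backward()
    optimizer.step()
```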
|
We here describe and present results of a simple neural network that predicts individual researchers' future citation counts based on a variety of data from the researchers' past.
|
http://arxiv.org/abs/1806.04641v2
|
http://arxiv.org/pdf/1806.04641v2.pdf
| null |
[
"Tobias Mistele",
"Tom Price",
"Sabine Hossenfelder"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-ensemble-of-transfer-semi-supervised-and
|
1806.06506
| null | null |
An Ensemble of Transfer, Semi-supervised and Supervised Learning Methods for Pathological Heart Sound Classification
|
In this work, we propose an ensemble of classifiers to distinguish between
various degrees of abnormalities of the heart using Phonocardiogram (PCG)
signals acquired using digital stethoscopes in a clinical setting, for the
INTERSPEECH 2018 Computational Paralinguistics (ComParE) Heart Beats
SubChallenge. Our primary classification framework constitutes a convolutional
neural network with 1D-CNN time-convolution (tConv) layers, which uses features
transferred from a model trained on the 2016 Physionet Heart Sound Database. We
also employ a Representation Learning (RL) approach to generate features in an
unsupervised manner using Deep Recurrent Autoencoders and use Support Vector
Machine (SVM) and Linear Discriminant Analysis (LDA) classifiers. Finally, we
utilize an SVM classifier on a high-dimensional segment-level feature extracted
using various functionals on short-term acoustic features, i.e., Low-Level
Descriptors (LLD). An ensemble of the three different approaches provides a
relative improvement of 11.13% compared to our best single sub-system in terms
of the Unweighted Average Recall (UAR) performance metric on the evaluation
dataset.
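
A minimal sketch of a time-convolution (tConv) front end on raw PCG audio (an illustrative stand-in, not the authors' exact network; layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class TConvNet(nn.Module):
    """1D time-convolution layers over a raw PCG waveform, followed by
    global pooling and a small classification head."""

    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=32, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, wave):  # wave: (batch, 1, samples)
        return self.head(self.features(wave).squeeze(-1))

logits = TConvNet()(torch.randn(4, 1, 16000))  # four 1 s clips at 16 kHz
```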
|
In this work, we propose an ensemble of classifiers to distinguish between various degrees of abnormalities of the heart using Phonocardiogram (PCG) signals acquired using digital stethoscopes in a clinical setting, for the INTERSPEECH 2018 Computational Paralinguistics (ComParE) Heart Beats SubChallenge.
|
http://arxiv.org/abs/1806.06506v2
|
http://arxiv.org/pdf/1806.06506v2.pdf
| null |
[
"Ahmed Imtiaz Humayun",
"Md. Tauhiduzzaman Khan",
"Shabnam Ghaffarzadegan",
"Zhe Feng",
"Taufiq Hasan"
] |
[
"General Classification",
"Representation Learning",
"Sound Classification"
] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/a-unified-strategy-for-implementing-curiosity
|
1806.06505
| null | null |
A unified strategy for implementing curiosity and empowerment driven reinforcement learning
|
Although there are many approaches to implement intrinsically motivated
artificial agents, the combined usage of multiple intrinsic drives remains
still a relatively unexplored research area. Specifically, we hypothesize that
a mechanism capable of quantifying and controlling the evolution of the
information flow between the agent and the environment could be the fundamental
component for implementing a higher degree of autonomy into artificial
intelligent agents. This paper proposes a unified strategy for implementing two
semantically orthogonal intrinsic motivations: curiosity and empowerment.
Curiosity reward informs the agent about the relevance of a recent agent
action, whereas empowerment is implemented as the opposite information flow
from the agent to the environment that quantifies the agent's potential of
controlling its own future. We show that an additional homeostatic drive is
derived from the curiosity reward, which generalizes and enhances the
information gain of a classical curious/heterostatic reinforcement learning
agent. We show how a shared internal model by curiosity and empowerment
facilitates a more efficient training of the empowerment function. Finally, we
discuss future directions for further leveraging the interplay between these
two intrinsic rewards.
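
One common way to realise a curiosity signal of this kind, sketched below (a simplification, not the paper's exact formulation), is to reward the prediction error of a learned forward model of the environment:

```python
import torch
import torch.nn as nn

# Internal forward model: (state, action) -> predicted next state.
forward_model = nn.Sequential(nn.Linear(4 + 1, 64), nn.ReLU(),
                              nn.Linear(64, 4))

def curiosity_reward(state, action, next_state):
    """Intrinsic reward = error of the internal model's prediction of
    the next state, so surprising transitions are rewarded."""
    pred = forward_model(torch.cat([state, action], dim=-1))
    return ((pred - next_state) ** 2).mean(dim=-1)

s, a, s_next = torch.randn(32, 4), torch.randn(32, 1), torch.randn(32, 4)
r_intrinsic = curiosity_reward(s, a, s_next)  # shape: (32,)
```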
| null |
http://arxiv.org/abs/1806.06505v1
|
http://arxiv.org/pdf/1806.06505v1.pdf
| null |
[
"Ildefons Magrans de Abril",
"Ryota Kanai"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-modal-data-augmentation-for-end-to-end
|
1803.10299
| null | null |
Multi-Modal Data Augmentation for End-to-End ASR
|
We present a new end-to-end architecture for automatic speech recognition
(ASR) that can be trained using \emph{symbolic} input in addition to the
traditional acoustic input. This architecture utilizes two separate encoders:
one for acoustic input and another for symbolic input, both sharing the
attention and decoder parameters. We call this architecture a multi-modal data
augmentation network (MMDA), as it can support multi-modal (acoustic and
symbolic) input and enables seamless mixing of large text datasets with
significantly smaller transcribed speech corpora during training. We study
different ways of transforming large text corpora into a symbolic form suitable
for training our MMDA network. Our best MMDA setup obtains small improvements
on character error rate (CER), and as much as 7-10% relative word error rate
(WER) improvement over a baseline both with and without an external language
model.
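
A schematic sketch of the two-encoder layout (heavily simplified: attention is omitted, and all module names and sizes are assumptions):

```python
import torch
import torch.nn as nn

class MMDASketch(nn.Module):
    """Separate acoustic and symbolic encoders feeding shared decoder
    and output layers, so text-only batches can be mixed into training."""

    def __init__(self, n_mels=80, vocab=500, hidden=256):
        super().__init__()
        self.acoustic_enc = nn.GRU(n_mels, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.symbolic_enc = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)  # shared
        self.out = nn.Linear(hidden, vocab)                      # shared

    def forward(self, batch, modality):
        if modality == "acoustic":
            enc_out, _ = self.acoustic_enc(batch)
        else:  # symbolic input derived from large text corpora
            enc_out, _ = self.symbolic_enc(self.embed(batch))
        dec_out, _ = self.decoder(enc_out)
        return self.out(dec_out)

model = MMDASketch()
speech_logits = model(torch.randn(2, 50, 80), "acoustic")
text_logits = model(torch.randint(0, 500, (2, 20)), "symbolic")
```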
| null |
http://arxiv.org/abs/1803.10299v3
|
http://arxiv.org/pdf/1803.10299v3.pdf
| null |
[
"Adithya Renduchintala",
"Shuoyang Ding",
"Matthew Wiesner",
"Shinji Watanabe"
] |
[
"Automatic Speech Recognition",
"Automatic Speech Recognition (ASR)",
"Data Augmentation",
"Decoder",
"Language Modeling",
"Language Modelling",
"speech-recognition",
"Speech Recognition"
] | 2018-03-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deforming-autoencoders-unsupervised
|
1806.06503
| null | null |
Deforming Autoencoders: Unsupervised Disentangling of Shape and Appearance
|
In this work we introduce Deforming Autoencoders, a generative model for
images that disentangles shape from appearance in an unsupervised manner. As in
the deformable template paradigm, shape is represented as a deformation between
a canonical coordinate system (`template') and an observed image, while
appearance is modeled in `canonical', template, coordinates, thus discarding
variability due to deformations. We introduce novel techniques that allow this
approach to be deployed in the setting of autoencoders and show that this
method can be used for unsupervised group-wise image alignment. We show
experiments with expression morphing in humans, hands, and digits, face
manipulation, such as shape and appearance interpolation, as well as
unsupervised landmark localization. A more powerful form of unsupervised
disentangling becomes possible in template coordinates, allowing us to
successfully decompose face images into shading and albedo, and further
manipulate face images.
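
The core compositional step, warping a canonical appearance by a predicted deformation, can be sketched with `grid_sample` (a simplification; the encoder/decoder that would predict the appearance and the deformation offsets are omitted):

```python
import torch
import torch.nn.functional as F

def compose(appearance, grid):
    """appearance: (B, C, H, W) texture in canonical (template) coords.
    grid: (B, H, W, 2) sampling locations in [-1, 1] mapping each output
    pixel back to template coordinates. Returns the deformed image."""
    return F.grid_sample(appearance, grid, align_corners=False)

B, C, H, W = 2, 3, 64, 64
appearance = torch.rand(B, C, H, W)
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
identity = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2)
# in the model, the offsets would come from the decoder, not from noise
image = compose(appearance, identity + 0.05 * torch.randn(B, H, W, 2))
```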
|
In this work we introduce Deforming Autoencoders, a generative model for images that disentangles shape from appearance in an unsupervised manner.
|
http://arxiv.org/abs/1806.06503v1
|
http://arxiv.org/pdf/1806.06503v1.pdf
|
ECCV 2018 9
|
[
"Zhixin Shu",
"Mihir Sahasrabudhe",
"Alp Guler",
"Dimitris Samaras",
"Nikos Paragios",
"Iasonas Kokkinos"
] |
[
"Unsupervised Facial Landmark Detection"
] | 2018-06-18T00:00:00 |
http://openaccess.thecvf.com/content_ECCV_2018/html/Zhixin_Shu_Deforming_Autoencoders_Unsupervised_ECCV_2018_paper.html
|
http://openaccess.thecvf.com/content_ECCV_2018/papers/Zhixin_Shu_Deforming_Autoencoders_Unsupervised_ECCV_2018_paper.pdf
|
deforming-autoencoders-unsupervised-1
| null |
[] |
https://paperswithcode.com/paper/conditional-affordance-learning-for-driving
|
1806.06498
| null | null |
Conditional Affordance Learning for Driving in Urban Environments
|
Most existing approaches to autonomous driving fall into one of two
categories: modular pipelines, that build an extensive model of the
environment, and imitation learning approaches, that map images directly to
control outputs. A recently proposed third paradigm, direct perception, aims to
combine the advantages of both by using a neural network to learn appropriate
low-dimensional intermediate representations. However, existing direct
perception approaches are restricted to simple highway situations, lacking the
ability to navigate intersections, stop at traffic lights or respect speed
limits. In this work, we propose a direct perception approach which maps video
input to intermediate representations suitable for autonomous navigation in
complex urban environments given high-level directional inputs. Compared to
state-of-the-art reinforcement and conditional imitation learning approaches,
we achieve an improvement of up to 68% in goal-directed navigation on the
challenging CARLA simulation benchmark. In addition, our approach is the first
to handle traffic lights and speed signs by using image-level labels only, as
well as smooth car-following, resulting in a significant reduction of traffic
accidents in simulation.
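
A schematic of the direct perception pipeline (the affordance set follows the paper's spirit, but the heads and the controller rule below are simplified assumptions):

```python
import torch
import torch.nn as nn

class AffordanceNet(nn.Module):
    """Image -> low-dimensional affordances consumed by a controller."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.red_light = nn.Linear(32, 1)      # P(red light ahead)
        self.speed_limit = nn.Linear(32, 1)    # regressed limit (m/s)
        self.lead_distance = nn.Linear(32, 1)  # distance to lead car (m)

    def forward(self, frame):
        f = self.backbone(frame)
        return (torch.sigmoid(self.red_light(f)),
                self.speed_limit(f), self.lead_distance(f))

red, limit, dist = AffordanceNet()(torch.randn(1, 3, 128, 128))
throttle = 0.0 if red.item() > 0.5 else min(1.0, limit.item() / 10.0)
```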
|
Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, that build an extensive model of the environment, and imitation learning approaches, that map images directly to control outputs.
|
http://arxiv.org/abs/1806.06498v3
|
http://arxiv.org/pdf/1806.06498v3.pdf
| null |
[
"Axel Sauer",
"Nikolay Savinov",
"Andreas Geiger"
] |
[
"Autonomous Driving",
"Autonomous Navigation",
"Imitation Learning",
"Navigate"
] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/ikostrikov/pytorch-a3c/blob/48d95844755e2c3e2c7e48bbd1a7141f7212b63f/train.py#L100",
"description": "**Entropy Regularization** is a type of regularization used in [reinforcement learning](https://paperswithcode.com/methods/area/reinforcement-learning). For on-policy policy gradient based methods like [A3C](https://paperswithcode.com/method/a3c), the same mutual reinforcement behaviour leads to a highly-peaked $\\pi\\left(a\\mid{s}\\right)$ towards a few actions or action sequences, since it is easier for the actor and critic to overoptimise to a small portion of the environment. To reduce this problem, entropy regularization adds an entropy term to the loss to promote action diversity:\r\n\r\n$$H(X) = -\\sum\\pi\\left(x\\right)\\log\\left(\\pi\\left(x\\right)\\right) $$\r\n\r\nImage Credit: Wikipedia",
"full_name": "Entropy Regularization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Entropy Regularization",
"source_title": "Asynchronous Methods for Deep Reinforcement Learning",
"source_url": "http://arxiv.org/abs/1602.01783v2"
},
{
"code_snippet_url": null,
"description": "**Proximal Policy Optimization**, or **PPO**, is a policy gradient method for reinforcement learning. The motivation was to have an algorithm with the data efficiency and reliable performance of [TRPO](https://paperswithcode.com/method/trpo), while using only first-order optimization. \r\n\r\nLet $r\\_{t}\\left(\\theta\\right)$ denote the probability ratio $r\\_{t}\\left(\\theta\\right) = \\frac{\\pi\\_{\\theta}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}{\\pi\\_{\\theta\\_{old}}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}$, so $r\\left(\\theta\\_{old}\\right) = 1$. TRPO maximizes a “surrogate” objective:\r\n\r\n$$ L^{\\text{CPI}}\\left({\\theta}\\right) = \\hat{\\mathbb{E}}\\_{t}\\left[\\frac{\\pi\\_{\\theta}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}{\\pi\\_{\\theta\\_{old}}\\left(a\\_{t}\\mid{s\\_{t}}\\right)})\\hat{A}\\_{t}\\right] = \\hat{\\mathbb{E}}\\_{t}\\left[r\\_{t}\\left(\\theta\\right)\\hat{A}\\_{t}\\right] $$\r\n\r\nWhere $CPI$ refers to a conservative policy iteration. Without a constraint, maximization of $L^{CPI}$ would lead to an excessively large policy update; hence, we PPO modifies the objective, to penalize changes to the policy that move $r\\_{t}\\left(\\theta\\right)$ away from 1:\r\n\r\n$$ J^{\\text{CLIP}}\\left({\\theta}\\right) = \\hat{\\mathbb{E}}\\_{t}\\left[\\min\\left(r\\_{t}\\left(\\theta\\right)\\hat{A}\\_{t}, \\text{clip}\\left(r\\_{t}\\left(\\theta\\right), 1-\\epsilon, 1+\\epsilon\\right)\\hat{A}\\_{t}\\right)\\right] $$\r\n\r\nwhere $\\epsilon$ is a hyperparameter, say, $\\epsilon = 0.2$. The motivation for this objective is as follows. The first term inside the min is $L^{CPI}$. The second term, $\\text{clip}\\left(r\\_{t}\\left(\\theta\\right), 1-\\epsilon, 1+\\epsilon\\right)\\hat{A}\\_{t}$ modifies the surrogate\r\nobjective by clipping the probability ratio, which removes the incentive for moving $r\\_{t}$ outside of the interval $\\left[1 − \\epsilon, 1 + \\epsilon\\right]$. Finally, we take the minimum of the clipped and unclipped objective, so the final objective is a lower bound (i.e., a pessimistic bound) on the unclipped objective. With this scheme, we only ignore the change in probability ratio when it would make the objective improve, and we include it when it makes the objective worse. \r\n\r\nOne detail to note is that when we apply PPO for a network where we have shared parameters for actor and critic functions, we typically add to the objective function an error term on value estimation and an entropy term to encourage exploration.",
"full_name": "Proximal Policy Optimization",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "**Policy Gradient Methods** try to optimize the policy function directly in reinforcement learning. This contrasts with, for example, Q-Learning, where the policy manifests itself as maximizing a value function. Below you can find a continuously updating catalog of policy gradient methods.",
"name": "Policy Gradient Methods",
"parent": null
},
"name": "PPO",
"source_title": "Proximal Policy Optimization Algorithms",
"source_url": "http://arxiv.org/abs/1707.06347v2"
},
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "CARLA is an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. \r\n\r\nSource: [Dosovitskiy et al.](https://arxiv.org/pdf/1711.03938v1.pdf)\r\n\r\nImage source: [Dosovitskiy et al.](https://arxiv.org/pdf/1711.03938v1.pdf)",
"full_name": "CARLA: An Open Urban Driving Simulator",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Video Game Models",
"parent": null
},
"name": "CARLA",
"source_title": "CARLA: An Open Urban Driving Simulator",
"source_url": "http://arxiv.org/abs/1711.03938v1"
}
] |
https://paperswithcode.com/paper/detecting-zero-day-controller-hijacking
|
1806.06496
| null | null |
Power-Grid Controller Anomaly Detection with Enhanced Temporal Deep Learning
|
Controllers of security-critical cyber-physical systems, like the power grid, are a very important class of computer systems. Attacks against the control code of a power-grid system, especially zero-day attacks, can be catastrophic. Earlier detection of the anomalies can prevent further damage. However, detecting zero-day attacks is extremely challenging because they have no known code and have unknown behavior. Furthermore, if data collected from the controller is transferred to a server through networks for analysis and detection of anomalous behavior, this creates a very large attack surface and also delays detection. In order to address this problem, we propose Reconstruction Error Distribution (RED) of Hardware Performance Counters (HPCs), and a data-driven defense system based on it. Specifically, we first train a temporal deep learning model, using only normal HPC readings from legitimate processes that run daily in these power-grid systems, to model the normal behavior of the power-grid controller. Then, we run this model using real-time data from commonly available HPCs. We use the proposed RED to enhance the temporal deep learning detection of anomalous behavior, by estimating distribution deviations from the normal behavior with an effective statistical test. Experimental results on a real power-grid controller show that we can detect anomalous behavior with high accuracy (>99.9%), nearly zero false positives and short (<360ms) latency.
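
A minimal sketch of the reconstruction-error-distribution idea (the paper's temporal deep model and its specific statistical test are replaced here by stand-in residuals and a two-sample KS test):

```python
import numpy as np
from scipy.stats import ks_2samp

# Reconstruction errors collected at training time from a model fitted
# on normal HPC traces (stand-in values for illustration).
normal_errors = np.abs(np.random.normal(0.0, 0.1, size=5000))

def is_anomalous(window_errors, alpha=0.001):
    """Flag a window whose reconstruction-error distribution deviates
    significantly from the normal-behaviour error distribution."""
    _, p_value = ks_2samp(window_errors, normal_errors)
    return p_value < alpha

attack_window = np.abs(np.random.normal(0.5, 0.2, size=200))
print(is_anomalous(attack_window))  # True: errors shifted upward
```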
| null |
https://arxiv.org/abs/1806.06496v3
|
https://arxiv.org/pdf/1806.06496v3.pdf
| null |
[
"Zecheng He",
"Aswin Raghavan",
"Guangyuan Hu",
"Sek Chai",
"Ruby Lee"
] |
[
"Anomaly Detection",
"Deep Learning"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/women-also-snowboard-overcoming-bias-in-1
|
1803.09797
| null | null |
Women also Snowboard: Overcoming Bias in Captioning Models
|
Most machine learning methods are known to capture and exploit biases of the
training data. While some biases are beneficial for learning, others are
harmful. Specifically, image captioning models tend to exaggerate biases
present in training data (e.g., if a word is present in 60% of training
sentences, it might be predicted in 70% of sentences at test time). This can
lead to incorrect captions in domains where unbiased captions are desired, or
required, due to over-reliance on the learned prior and image context. In this
work we investigate generation of gender-specific caption words (e.g. man,
woman) based on the person's appearance or the image context. We introduce a
new Equalizer model that ensures equal gender probability when gender evidence
is occluded in a scene and confident predictions when gender evidence is
present. The resulting model is forced to look at a person rather than use
contextual cues to make gender-specific predictions. The losses that comprise
our model, the Appearance Confusion Loss and the Confident Loss, are general,
and can be added to any description model in order to mitigate impacts of
unwanted bias in a description dataset. Our proposed model has lower error than
prior work when describing images with people and mentioning their gender and
more closely matches the ground truth ratio of sentences including women to
sentences including men. We also show that unlike other approaches, our model
is indeed more often looking at people when predicting their gender.
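
The two losses can be caricatured as follows (a loose sketch under assumed inputs: `p_woman`/`p_man` are the caption model's probabilities for the gendered word, and the first loss is computed only on images with the person masked out):

```python
import torch

def appearance_confusion_loss(p_woman, p_man):
    """With gender evidence occluded, penalise any preference between
    gendered words by pushing the two probabilities together."""
    return (p_woman - p_man).abs().mean()

def confident_loss(p_woman, p_man, is_woman):
    """With gender evidence visible, encourage confident correct words."""
    p_correct = torch.where(is_woman, p_woman, p_man)
    return -torch.log(p_correct + 1e-8).mean()

p_w, p_m = torch.rand(16), torch.rand(16)
labels = torch.rand(16) > 0.5
total = appearance_confusion_loss(p_w, p_m) + confident_loss(p_w, p_m, labels)
```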
|
We introduce a new Equalizer model that ensures equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present.
|
http://arxiv.org/abs/1803.09797v4
|
http://arxiv.org/pdf/1803.09797v4.pdf
|
ECCV 2018 9
|
[
"Kaylee Burns",
"Lisa Anne Hendricks",
"Kate Saenko",
"Trevor Darrell",
"Anna Rohrbach"
] |
[
"Image Captioning"
] | 2018-03-26T00:00:00 |
http://openaccess.thecvf.com/content_ECCV_2018/html/Lisa_Anne_Hendricks_Women_also_Snowboard_ECCV_2018_paper.html
|
http://openaccess.thecvf.com/content_ECCV_2018/papers/Lisa_Anne_Hendricks_Women_also_Snowboard_ECCV_2018_paper.pdf
|
women-also-snowboard-overcoming-bias-in-2
| null |
[] |
https://paperswithcode.com/paper/boosted-density-estimation-remastered
|
1803.08178
| null | null |
Boosted Density Estimation Remastered
|
There has recently been a steady increase in the number of iterative approaches
to density estimation. However, an accompanying burst of formal convergence
guarantees has not followed; all results pay the price of heavy assumptions
which are often unrealistic or hard to check. The Generative Adversarial
Network (GAN) literature --- seemingly orthogonal to the aforementioned pursuit
--- has had the side effect of a renewed interest in variational divergence
minimisation (notably $f$-GAN). We show that by introducing a weak learning
assumption (in the sense of the classical boosting framework) we are able to
import some recent results from the GAN literature to develop an iterative
boosted density estimation algorithm, including formal convergence results with
rates, that does not suffer the shortcomings of other approaches. We show that the
density fit is an exponential family, and as part of our analysis obtain an
improved variational characterisation of $f$-GAN.
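
A caricature of the resulting fit (not the paper's full algorithm): each boosting round folds a weak classifier's score multiplicatively into the density, so the final model is an exponential-family tilt of the base density:

```python
import numpy as np

def boosted_log_density(x, base_log_density, weak_learners, alphas):
    """log f_T(x) = log f_0(x) + sum_t alpha_t * c_t(x), up to a
    normalising constant, where the c_t are weak classifier scores."""
    return base_log_density(x) + sum(
        a * c(x) for a, c in zip(alphas, weak_learners))

base = lambda x: -0.5 * x ** 2          # unnormalised standard normal
c1 = lambda x: np.tanh(x - 1.0)         # one weak learner's score
print(boosted_log_density(np.array([0.0, 2.0]), base, [c1], [0.3]))
```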
| null |
http://arxiv.org/abs/1803.08178v3
|
http://arxiv.org/pdf/1803.08178v3.pdf
| null |
[
"Zac Cranko",
"Richard Nock"
] |
[
"Density Estimation",
"Generative Adversarial Network"
] | 2018-03-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/disturbance-grassmann-kernels-for-subspace
|
1802.03517
| null | null |
Disturbance Grassmann Kernels for Subspace-Based Learning
|
In this paper, we focus on subspace-based learning problems, where data
elements are linear subspaces instead of vectors. To handle this kind of data,
Grassmann kernels were proposed to measure the space structure and used with
classifiers, e.g., Support Vector Machines (SVMs). However, the existing
discriminative algorithms mostly ignore the instability of subspaces, which
would cause the classifiers to be misled by disturbed instances. Thus we propose
considering all potential disturbances of subspaces in learning processes to
obtain more robust classifiers. Firstly, we derive the dual optimization of
linear classifiers with disturbance subject to a known distribution, resulting
in a new kernel, the Disturbance Grassmann (DG) kernel. Secondly, we investigate
two kinds of disturbance, relevant to the subspace matrix and singular values
of bases, with which we extend the Projection kernel on Grassmann manifolds to
two new kernels. Experiments on action data indicate that the proposed kernels
perform better compared to state-of-the-art subspace-based methods, even in a
worse environment.
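
For reference, the Projection kernel that the paper extends is the squared Frobenius norm of the product of two orthonormal subspace bases; a small sketch (the disturbance-based DG extension itself is not reproduced here):

```python
import numpy as np

def projection_kernel(X, Y):
    """Grassmann Projection kernel k(X, Y) = ||X^T Y||_F^2, where the
    columns of X and Y are orthonormal bases of the two subspaces."""
    return np.linalg.norm(X.T @ Y, "fro") ** 2

def orthonormal_basis(d, k, rng):
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))
    return Q

rng = np.random.default_rng(0)
X, Y = orthonormal_basis(10, 3, rng), orthonormal_basis(10, 3, rng)
print(projection_kernel(X, Y))  # lies in [0, k]; equals k for equal subspaces
```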
| null |
http://arxiv.org/abs/1802.03517v2
|
http://arxiv.org/pdf/1802.03517v2.pdf
| null |
[
"Junyuan Hong",
"Huanhuan Chen",
"Feng Lin"
] |
[] | 2018-02-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/entity-aware-language-model-as-an
|
1803.04291
| null | null |
Entity-Aware Language Model as an Unsupervised Reranker
|
In language modeling, it is difficult to incorporate entity relationships
from a knowledge-base. One solution is to use a reranker trained with global
features, in which global features are derived from n-best lists. However,
training such a reranker requires manually annotated n-best lists, which is
expensive to obtain. We propose a method based on the contrastive estimation
method that alleviates the need for such data. Experiments in the music domain
demonstrate that global features, as well as features extracted from an
external knowledge-base, can be incorporated into our reranker. Our final
model, a simple ensemble of a language model and reranker, achieves a 0.44%
absolute word error rate improvement over an LSTM language model on the blind
test data.
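
The final ensemble amounts to rescoring each n-best hypothesis with an interpolation of the language-model score and the reranker score, roughly as below (weights and score functions are invented stand-ins):

```python
def rerank(nbest, lm_score, reranker_score, lam=0.5):
    """Pick the hypothesis maximising a linear interpolation of the LM
    score and the globally-featured reranker score."""
    return max(nbest, key=lambda h: lam * lm_score(h)
                                    + (1 - lam) * reranker_score(h))

nbest = ["play songs by the beetles", "play songs by the beatles"]
lm = lambda h: -len(h)                         # stand-in LM log-probability
kb = lambda h: 1.0 if "beatles" in h else 0.0  # entity-aware global feature
print(rerank(nbest, lm, kb))                   # knowledge-consistent pick
```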
| null |
http://arxiv.org/abs/1803.04291v2
|
http://arxiv.org/pdf/1803.04291v2.pdf
| null |
[
"Mohammad Sadegh Rasooli",
"Sarangarajan Parthasarathy"
] |
[
"Language Modeling",
"Language Modelling"
] | 2018-03-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/co-training-embeddings-of-knowledge-graphs
|
1806.06478
| null | null |
Co-training Embeddings of Knowledge Graphs and Entity Descriptions for Cross-lingual Entity Alignment
|
Multilingual knowledge graph (KG) embeddings provide latent semantic
representations of entities and structured knowledge with cross-lingual
inferences, which benefit various knowledge-driven cross-lingual NLP tasks.
However, precisely learning such cross-lingual inferences is usually hindered
by the low coverage of entity alignment in many KGs. Since many multilingual
KGs also provide literal descriptions of entities, in this paper, we introduce
an embedding-based approach which leverages a weakly aligned multilingual KG
for semi-supervised cross-lingual learning using entity descriptions. Our
approach performs co-training of two embedding models, i.e. a multilingual KG
embedding model and a multilingual literal description embedding model. The
models are trained on a large Wikipedia-based trilingual dataset where most
entity alignment is unknown to training. Experimental results show that the
performance of the proposed approach on the entity alignment task improves at
each iteration of co-training, and eventually reaches a stage at which it
significantly surpasses previous approaches. We also show that our approach has
promising abilities for zero-shot entity alignment, and cross-lingual KG
completion.
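
The alternating scheme can be sketched as follows (the model classes are dummies standing in for the KG-structure and description embedding models):

```python
class DummyModel:
    """Stand-in for a KG-embedding or description-embedding model."""

    def __init__(self, proposals):
        self.proposals = proposals

    def train(self, alignments):
        pass  # fit embeddings on the current alignment set (omitted)

    def propose_alignments(self, top_k):
        return self.proposals[:top_k]  # high-confidence predicted pairs

def co_train(kg_model, desc_model, seeds, rounds=3, top_k=100):
    alignments = set(seeds)
    for _ in range(rounds):
        kg_model.train(alignments)    # structure-based embeddings
        desc_model.train(alignments)  # literal-description embeddings
        for m in (kg_model, desc_model):  # exchange confident new pairs
            alignments |= set(m.propose_alignments(top_k))
    return alignments

print(co_train(DummyModel([("en:Paris", "fr:Paris")]),
               DummyModel([("en:Rome", "fr:Rome")]),
               seeds=[("en:Berlin", "fr:Berlin")]))
```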
| null |
http://arxiv.org/abs/1806.06478v1
|
http://arxiv.org/pdf/1806.06478v1.pdf
| null |
[
"Muhao Chen",
"Yingtao Tian",
"Kai-Wei Chang",
"Steven Skiena",
"Carlo Zaniolo"
] |
[
"Entity Alignment",
"Knowledge Graphs"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/video-salient-object-detection-using
|
1708.01447
| null | null |
Video Salient Object Detection Using Spatiotemporal Deep Features
|
This paper presents a method for detecting salient objects in videos where
temporal information in addition to spatial information is fully taken into
account. Following recent reports on the advantage of deep features over
conventional hand-crafted features, we propose a new set of SpatioTemporal Deep
(STD) features that utilize local and global contexts over frames. We also
propose new SpatioTemporal Conditional Random Field (STCRF) to compute saliency
from STD features. STCRF is our extension of CRF to the temporal domain and
describes the relationships among neighboring regions both in a frame and over
frames. STCRF leads to temporally consistent saliency maps over frames,
contributing to the accurate detection of salient objects' boundaries and noise
reduction during detection. Our proposed method first segments an input video
into multiple scales and then computes a saliency map at each scale level using
STD features with STCRF. The final saliency map is computed by fusing saliency
maps at different scale levels. Our experiments, using publicly available
benchmark datasets, confirm that the proposed method significantly outperforms
state-of-the-art methods. We also applied our saliency computation to the video
object segmentation task, showing that our method outperforms existing video
object segmentation methods.
| null |
http://arxiv.org/abs/1708.01447v3
|
http://arxiv.org/pdf/1708.01447v3.pdf
| null |
[
"Trung-Nghia Le",
"Akihiro Sugimoto"
] |
[
"Object",
"object-detection",
"Object Detection",
"RGB Salient Object Detection",
"Salient Object Detection",
"Semantic Segmentation",
"Video Object Segmentation",
"Video Salient Object Detection",
"Video Semantic Segmentation"
] | 2017-08-04T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Conditional Random Fields** or **CRFs** are a type of probabilistic graph model that take neighboring sample context into account for tasks like classification. Prediction is modeled as a graphical model, which implements dependencies between the predictions. Graph choice depends on the application, for example linear chain CRFs are popular in natural language processing, whereas in image-based tasks, the graph would connect to neighboring locations in an image to enforce that they have similar predictions.\r\n\r\nImage Credit: [Charles Sutton and Andrew McCallum, An Introduction to Conditional Random Fields](https://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf)",
"full_name": "Conditional Random Field",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Structured Prediction** methods deal with structured outputs with multiple interdependent outputs. Below you can find a continuously updating list of structured prediction methods.",
"name": "Structured Prediction",
"parent": null
},
"name": "CRF",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/reinforcement-learning-in-rich-observation
|
1611.03907
| null | null |
Reinforcement Learning in Rich-Observation MDPs using Spectral Methods
|
Reinforcement learning (RL) in Markov decision processes (MDPs) with large
state spaces is a challenging problem. The performance of standard RL
algorithms degrades drastically with the dimensionality of state space.
However, in practice, these large MDPs typically incorporate a latent or hidden
low-dimensional structure. In this paper, we study the setting of
rich-observation Markov decision processes (ROMDP), where there are a small
number of hidden states which possess an injective mapping to the observation
states. In other words, every observation state is generated through a single
hidden state, and this mapping is unknown a priori. We introduce a spectral
decomposition method that consistently learns this mapping, and more
importantly, achieves it with low regret. The estimated mapping is integrated
into an optimistic RL algorithm (UCRL), which operates on the estimated hidden
space. We derive finite-time regret bounds for our algorithm with a weak
dependence on the dimensionality of the observed space. In fact, our algorithm
asymptotically achieves the same average regret as the oracle UCRL algorithm,
which has the knowledge of the mapping from hidden to observed spaces. Thus, we
derive an efficient spectral RL algorithm for ROMDPs.
| null |
http://arxiv.org/abs/1611.03907v4
|
http://arxiv.org/pdf/1611.03907v4.pdf
| null |
[
"Kamyar Azizzadenesheli",
"Alessandro Lazaric",
"Animashree Anandkumar"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2016-11-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/breaking-transferability-of-adversarial
|
1805.04613
| null | null |
Breaking Transferability of Adversarial Samples with Randomness
|
We investigate the role of transferability of adversarial attacks in the
observed vulnerabilities of Deep Neural Networks (DNNs). We demonstrate that
introducing randomness to the DNN models is sufficient to defeat adversarial
attacks, given that the adversary does not have an unlimited attack budget.
Instead of making one specific DNN model robust to perfect knowledge attacks
(a.k.a, white box attacks), creating randomness within an army of DNNs
completely eliminates the possibility of perfect knowledge acquisition,
resulting in a significantly more robust DNN ensemble against the strongest
form of attacks. We also show that when the adversary has an unlimited budget
of data perturbation, all defensive techniques would eventually break down as
the budget increases. Therefore, it is important to understand the game saddle
point where the adversary would not further pursue this endeavor.
Furthermore, we explore the relationship between attack severity and decision
boundary robustness in the version space. We empirically demonstrate that by
simply adding a small Gaussian random noise to the learned weights, a DNN model
can increase its resilience to adversarial attacks by as much as 74.2%. More
importantly, we show that by randomly activating/revealing a model from a pool
of pre-trained DNNs at each query request, we can put a tremendous strain on
the adversary's attack strategies. We compare our randomization techniques to
the Ensemble Adversarial Training technique and show that our randomization
techniques are superior under different attack budget constraints.
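
A minimal sketch of the weight-randomization idea (perturbing learned weights with small Gaussian noise before answering a query). The toy model and the `randomized_copy` helper are illustrative assumptions, not the authors' exact procedure.

```python
import copy
import torch
import torch.nn as nn

def randomized_copy(model: nn.Module, sigma: float = 0.01) -> nn.Module:
    """Return a copy of `model` whose weights are perturbed with
    i.i.d. Gaussian noise, one way to realize weight randomization."""
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
    return noisy

# Toy usage: serve a freshly perturbed model per query batch.
base = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(4, 8)
logits = randomized_copy(base, sigma=0.01)(x)
print(logits.shape)  # torch.Size([4, 2])
```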
| null |
http://arxiv.org/abs/1805.04613v2
|
http://arxiv.org/pdf/1805.04613v2.pdf
| null |
[
"Yan Zhou",
"Murat Kantarcioglu",
"Bowei Xi"
] |
[] | 2018-05-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-rbo-dataset-of-articulated-objects-and
|
1806.06465
| null | null |
The RBO Dataset of Articulated Objects and Interactions
|
We present a dataset with models of 14 articulated objects commonly found in
human environments and with RGB-D video sequences and wrenches recorded of
human interactions with them. The 358 interaction sequences total 67 minutes of
human manipulation under varying experimental conditions (type of interaction,
lighting, perspective, and background). Each interaction with an object is
annotated with the ground truth poses of its rigid parts and the kinematic
state obtained by a motion capture system. For a subset of 78 sequences (25
minutes), we also measured the interaction wrenches. The object models contain
textured three-dimensional triangle meshes of each link and their motion
constraints. We provide Python scripts to download and visualize the data. The
data is available at https://tu-rbo.github.io/articulated-objects/ and hosted
at https://zenodo.org/record/1036660/.
| null |
http://arxiv.org/abs/1806.06465v1
|
http://arxiv.org/pdf/1806.06465v1.pdf
| null |
[
"Roberto Martín-Martín",
"Clemens Eppner",
"Oliver Brock"
] |
[] | 2018-06-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-policy-representations-in-multiagent
|
1806.06464
| null | null |
Learning Policy Representations in Multiagent Systems
|
Modeling agent behavior is central to understanding the emergence of complex
phenomena in multiagent systems. Prior work in agent modeling has largely been
task-specific and driven by hand-engineering domain-specific prior knowledge.
We propose a general learning framework for modeling agent behavior in any
multiagent system using only a handful of interaction data. Our framework casts
agent modeling as a representation learning problem. Consequently, we construct
a novel objective inspired by imitation learning and agent identification and
design an algorithm for unsupervised learning of representations of agent
policies. We demonstrate empirically the utility of the proposed framework in
(i) a challenging high-dimensional competitive environment for continuous
control and (ii) a cooperative environment for communication, on supervised
predictive tasks, unsupervised clustering, and policy optimization using deep
reinforcement learning.
| null |
http://arxiv.org/abs/1806.06464v2
|
http://arxiv.org/pdf/1806.06464v2.pdf
|
ICML 2018 7
|
[
"Aditya Grover",
"Maruan Al-Shedivat",
"Jayesh K. Gupta",
"Yura Burda",
"Harrison Edwards"
] |
[
"Clustering",
"continuous-control",
"Continuous Control",
"Deep Reinforcement Learning",
"Imitation Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Representation Learning"
] | 2018-06-17T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2435
|
http://proceedings.mlr.press/v80/grover18a/grover18a.pdf
|
learning-policy-representations-in-multiagent-1
| null |
[] |
https://paperswithcode.com/paper/sub-gaussian-estimators-of-the-mean-of-a-1
|
1605.07129
| null | null |
Sub-Gaussian estimators of the mean of a random matrix with heavy-tailed entries
|
Estimation of the covariance matrix has attracted a lot of attention of the
statistical research community over the years, partially due to important
applications such as Principal Component Analysis. However, frequently used
empirical covariance estimator (and its modifications) is very sensitive to
outliers in the data. As P. J. Huber wrote in 1964, "...This raises a question
which could have been asked already by Gauss, but which was, as far as I know,
only raised a few years ago (notably by Tukey): what happens if the true
distribution deviates slightly from the assumed normal one? As is now well
known, the sample mean then may have a catastrophically bad performance..."
Motivated by this question, we develop a new estimator of the (element-wise)
mean of a random matrix, which includes covariance estimation problem as a
special case. Assuming that the entries of a matrix possess only finite second
moment, this new estimator admits sub-Gaussian or sub-exponential concentration
around the unknown mean in the operator norm. We will explain the key ideas
behind our construction, as well as applications to covariance estimation and
matrix completion problems.
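
For intuition, the sketch below applies the median-of-means construction, a standard robust-mean device in this literature, element-wise to matrix samples with heavy-tailed entries. The paper's estimator and its operator-norm guarantees are more delicate, so treat this only as the flavor of the approach.

```python
import numpy as np

def median_of_means_matrix(samples, k=8):
    """Element-wise median-of-means estimate of E[X] from a stack of
    matrix samples (shape: n x d1 x d2). Robust to heavy-tailed entries."""
    n = samples.shape[0]
    blocks = np.array_split(np.arange(n), k)          # split into k blocks
    block_means = np.stack([samples[b].mean(axis=0) for b in blocks])
    return np.median(block_means, axis=0)             # median across blocks

rng = np.random.default_rng(0)
X = rng.standard_t(df=2.5, size=(400, 5, 5))  # heavy-tailed, finite variance
print(np.abs(median_of_means_matrix(X)).max())  # small: true mean is 0
```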
| null |
http://arxiv.org/abs/1605.07129v5
|
http://arxiv.org/pdf/1605.07129v5.pdf
| null |
[
"Stanislav Minsker"
] |
[
"Matrix Completion"
] | 2016-05-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fast-convex-pruning-of-deep-neural-networks
|
1806.06457
| null | null |
Fast Convex Pruning of Deep Neural Networks
|
We develop a fast, tractable technique called Net-Trim for simplifying a
trained neural network. The method is a convex post-processing module, which
prunes (sparsifies) a trained network layer by layer, while preserving the
internal responses. We present a comprehensive analysis of Net-Trim from both
the algorithmic and sample complexity standpoints, centered on a fast, scalable
convex optimization program. Our analysis includes consistency results between
the initial and retrained models before and after Net-Trim application and
guarantees on the number of training samples needed to discover a network that
can be expressed using a certain number of nonzero terms. Specifically, if
there is a set of weights that uses at most $s$ terms that can re-create the
layer outputs from the layer inputs, we can find these weights from
$\mathcal{O}(s\log N/s)$ samples, where $N$ is the input size. These
theoretical results are similar to those for sparse regression using the Lasso,
and our analysis uses some of the same recently-developed tools (namely recent
results on the concentration of measure and convex analysis). Finally, we
propose an algorithmic framework based on the alternating direction method of
multipliers (ADMM), which allows a fast and simple implementation of Net-Trim
for network pruning and compression.
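
A rough sketch of the layer-wise sparsification idea, assuming the goal is to re-fit each output unit with an $\ell_1$ penalty so the layer responses are preserved. It uses scikit-learn's Lasso as a stand-in for the authors' ADMM-based convex program.

```python
import numpy as np
from sklearn.linear_model import Lasso

def prune_layer(X, W, alpha=0.05):
    """Re-fit each column of a dense layer weight matrix W with an
    l1 penalty so the layer responses X @ W are approximately preserved
    while many weights are driven exactly to zero."""
    Y = X @ W                        # responses to preserve
    W_sparse = np.zeros_like(W)
    for j in range(W.shape[1]):      # one Lasso problem per output unit
        W_sparse[:, j] = Lasso(alpha=alpha, fit_intercept=False,
                               max_iter=5000).fit(X, Y[:, j]).coef_
    return W_sparse

rng = np.random.default_rng(1)
X, W = rng.normal(size=(200, 30)), rng.normal(size=(30, 10))
Ws = prune_layer(X, W)
print((Ws == 0).mean())  # fraction of pruned weights
```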
|
We develop a fast, tractable technique called Net-Trim for simplifying a trained neural network.
|
http://arxiv.org/abs/1806.06457v2
|
http://arxiv.org/pdf/1806.06457v2.pdf
| null |
[
"Alireza Aghasi",
"Afshin Abdi",
"Justin Romberg"
] |
[
"Network Pruning"
] | 2018-06-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
}
] |
https://paperswithcode.com/paper/cross-modality-image-synthesis-from-unpaired
|
1803.06629
| null | null |
Cross-modality image synthesis from unpaired data using CycleGAN: Effects of gradient consistency loss and training data size
|
CT is commonly used in orthopedic procedures. MRI is used along with CT to
identify muscle structures and diagnose osteonecrosis due to its superior soft
tissue contrast. However, MRI has poor contrast for bone structures. Clearly,
it would be helpful if a corresponding CT were available, as bone boundaries
are more clearly seen and CT has standardized (i.e., Hounsfield) units.
Therefore, we aim at MR-to-CT synthesis. The CycleGAN was successfully applied
to unpaired CT and MR images of the head; however, these images do not have as
much variation of intensity pairs as do images in the pelvic region, due to the
presence of joints and muscles. In this paper, we extended the CycleGAN
approach by adding the gradient consistency loss to improve the accuracy at the
boundaries. We conducted two experiments. To evaluate image synthesis, we
investigated dependency of image synthesis accuracy on 1) the number of
training data and 2) the gradient consistency loss. To demonstrate the
applicability of our method, we also investigated a segmentation accuracy on
synthesized images.
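
A hedged sketch of a gradient-consistency term: it scores the normalized cross-correlation between Sobel gradients of the input and synthesized images, so that boundary structure is rewarded. The exact form of the paper's loss is assumed here; treat the kernel choice and normalization as illustrative.

```python
import torch
import torch.nn.functional as F

def gradient_correlation(a, b, eps=1e-8):
    """Normalized cross-correlation between image gradients of a and b
    (one plausible form of a gradient-consistency score; higher = better)."""
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)
    gcc = 0.0
    for k in (sobel_x, sobel_y):
        ga, gb = F.conv2d(a, k, padding=1), F.conv2d(b, k, padding=1)
        ga, gb = ga - ga.mean(), gb - gb.mean()
        gcc = gcc + (ga * gb).sum() / (ga.norm() * gb.norm() + eps)
    return gcc / 2

mr, ct = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
loss = 1 - gradient_correlation(ct, mr)  # penalize boundary mismatch
print(float(loss))
```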
| null |
http://arxiv.org/abs/1803.06629v3
|
http://arxiv.org/pdf/1803.06629v3.pdf
| null |
[
"Yuta Hiasa",
"Yoshito Otake",
"Masaki Takao",
"Takumi Matsuoka",
"Kazuma Takashima",
"Jerry L. Prince",
"Nobuhiko Sugano",
"Yoshinobu Sato"
] |
[
"Image Generation"
] | 2018-03-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/znxlwm/pytorch-pix2pix/blob/3059f2af53324e77089bbcfc31279f01a38c40b8/network.py#L104",
"description": "**PatchGAN** is a type of discriminator for generative adversarial networks which only penalizes structure at the scale of local image patches. The PatchGAN discriminator tries to classify if each $N \\times N$ patch in an image is real or fake. This discriminator is run convolutionally across the image, averaging all responses to provide the ultimate output of $D$. Such a discriminator effectively models the image as a Markov random field, assuming independence between pixels separated by more than a patch diameter. It can be understood as a type of texture/style loss.",
"full_name": "PatchGAN",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Discriminators** are a type of module used in architectures such as generative adversarial networks to discriminate between real and generated samples. Below you can find a continuously updating list of discriminators.",
"name": "Discriminators",
"parent": null
},
"name": "PatchGAN",
"source_title": "Image-to-Image Translation with Conditional Adversarial Networks",
"source_url": "http://arxiv.org/abs/1611.07004v3"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/1c5c289b6218eb1026dcb5fd9738231401cfccea/torch/nn/modules/instancenorm.py#L141",
"description": "**Instance Normalization** (also known as contrast normalization) is a normalization layer where:\r\n\r\n$$\r\n y_{tijk} = \\frac{x_{tijk} - \\mu_{ti}}{\\sqrt{\\sigma_{ti}^2 + \\epsilon}},\r\n \\quad\r\n \\mu_{ti} = \\frac{1}{HW}\\sum_{l=1}^W \\sum_{m=1}^H x_{tilm},\r\n \\quad\r\n \\sigma_{ti}^2 = \\frac{1}{HW}\\sum_{l=1}^W \\sum_{m=1}^H (x_{tilm} - \\mu_{ti})^2.\r\n$$\r\n\r\nThis prevents instance-specific mean and covariance shift simplifying the learning process. Intuitively, the normalization process allows to remove instance-specific contrast information from the content image in a task like image stylization, which simplifies generation.",
"full_name": "Instance Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Instance Normalization",
"source_title": "Instance Normalization: The Missing Ingredient for Fast Stylization",
"source_url": "http://arxiv.org/abs/1607.08022v3"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/eriklindernoren/PyTorch-GAN/blob/a163b82beff3d01688d8315a3fd39080400e7c01/implementations/lsgan/lsgan.py#L102",
"description": "**GAN Least Squares Loss** is a least squares loss function for generative adversarial networks. Minimizing this objective function is equivalent to minimizing the Pearson $\\chi^{2}$ divergence. The objective function (here for [LSGAN](https://paperswithcode.com/method/lsgan)) can be defined as:\r\n\r\n$$ \\min\\_{D}V\\_{LS}\\left(D\\right) = \\frac{1}{2}\\mathbb{E}\\_{\\mathbf{x} \\sim p\\_{data}\\left(\\mathbf{x}\\right)}\\left[\\left(D\\left(\\mathbf{x}\\right) - b\\right)^{2}\\right] + \\frac{1}{2}\\mathbb{E}\\_{\\mathbf{z}\\sim p\\_{data}\\left(\\mathbf{z}\\right)}\\left[\\left(D\\left(G\\left(\\mathbf{z}\\right)\\right) - a\\right)^{2}\\right] $$\r\n\r\n$$ \\min\\_{G}V\\_{LS}\\left(G\\right) = \\frac{1}{2}\\mathbb{E}\\_{\\mathbf{z} \\sim p\\_{\\mathbf{z}}\\left(\\mathbf{z}\\right)}\\left[\\left(D\\left(G\\left(\\mathbf{z}\\right)\\right) - c\\right)^{2}\\right] $$\r\n\r\nwhere $a$ and $b$ are the labels for fake data and real data and $c$ denotes the value that $G$ wants $D$ to believe for fake data.",
"full_name": "GAN Least Squares Loss",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "GAN Least Squares Loss",
"source_title": "Least Squares Generative Adversarial Networks",
"source_url": "http://arxiv.org/abs/1611.04076v3"
},
{
"code_snippet_url": "https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/f5834b3ed339ec268f40cf56928234eed8dfeb92/models/cycle_gan_model.py#L172",
"description": "**Cycle Consistency Loss** is a type of loss used for generative adversarial networks that performs unpaired image-to-image translation. It was introduced with the [CycleGAN](https://paperswithcode.com/method/cyclegan) architecture. For two domains $X$ and $Y$, we want to learn a mapping $G : X \\rightarrow Y$ and $F: Y \\rightarrow X$. We want to enforce the intuition that these mappings should be reverses of each other and that both mappings should be bijections. Cycle Consistency Loss encourages $F\\left(G\\left(x\\right)\\right) \\approx x$ and $G\\left(F\\left(y\\right)\\right) \\approx y$. It reduces the space of possible mapping functions by enforcing forward and backwards consistency:\r\n\r\n$$ \\mathcal{L}\\_{cyc}\\left(G, F\\right) = \\mathbb{E}\\_{x \\sim p\\_{data}\\left(x\\right)}\\left[||F\\left(G\\left(x\\right)\\right) - x||\\_{1}\\right] + \\mathbb{E}\\_{y \\sim p\\_{data}\\left(y\\right)}\\left[||G\\left(F\\left(y\\right)\\right) - y||\\_{1}\\right] $$",
"full_name": "Cycle Consistency Loss",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "Cycle Consistency Loss",
"source_title": "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks",
"source_url": "https://arxiv.org/abs/1703.10593v7"
},
{
"code_snippet_url": "https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/9e6fff7b7d5215a38be3cac074ca7087041bea0d/models/cycle_gan_model.py#L8",
"description": "In today’s digital age, Cardano has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Cardano transaction not confirmed, your Cardano wallet not showing balance, or you're trying to recover a lost Cardano wallet, knowing where to get help is essential. That’s why the Cardano customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Cardano Customer Support Number +1-833-534-1729\r\nCardano operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Cardano Transaction Not Confirmed\r\nOne of the most common concerns is when a Cardano transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Cardano Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Cardano wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Cardano Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Cardano wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Cardano Deposit Not Received\r\nIf someone has sent you Cardano but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Cardano deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Cardano Transaction Stuck or Pending\r\nSometimes your Cardano transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Cardano Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Cardano wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Cardano Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Cardano tech.\r\n\r\n24/7 Availability: Cardano doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Cardano Support and Wallet Issues\r\nQ1: Can Cardano support help me recover stolen BTC?\r\nA: While Cardano transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Cardano transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Cardano’s official number (Cardano is decentralized), it connects you to trained professionals experienced in resolving all major Cardano issues.\r\n\r\nFinal Thoughts\r\nCardano is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Cardano transaction not confirmed, your Cardano wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Cardano customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Cardano Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Cardano Customer Service Number +1-833-534-1729",
"source_title": "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks",
"source_url": "https://arxiv.org/abs/1703.10593v7"
}
] |
https://paperswithcode.com/paper/self-attentive-neural-collaborative-filtering
|
1806.06446
| null | null |
Self-Attentive Neural Collaborative Filtering
|
This paper has been withdrawn as we discovered a bug in our tensorflow
implementation that involved accidental mixing of vectors across batches. This
led to different inference results given different batch sizes, which is
completely strange. The performance scores still remain the same, but we
concluded that it was not the self-attention that contributed to the
performance. We are withdrawing the paper because this renders the main claim
of the paper false. Thanks to Guan Xinyu from NUS for discovering this issue in
our previously open source code.
| null |
http://arxiv.org/abs/1806.06446v2
|
http://arxiv.org/pdf/1806.06446v2.pdf
| null |
[
"Yi Tay",
"Shuai Zhang",
"Luu Anh Tuan",
"Siu Cheung Hui"
] |
[
"Collaborative Filtering"
] | 2018-06-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ncrf-an-open-source-neural-sequence-labeling
|
1806.05626
| null | null |
NCRF++: An Open-source Neural Sequence Labeling Toolkit
|
This paper describes NCRF++, a toolkit for neural sequence labeling. NCRF++
is designed for quick implementation of different neural sequence labeling
models with a CRF inference layer. It provides users with an inference for
building the custom model structure through configuration file with flexible
neural feature design and utilization. Built on PyTorch, the core operations
are calculated in batch, making the toolkit efficient with the acceleration of
GPU. It also includes the implementations of most state-of-the-art neural
sequence labeling models such as LSTM-CRF, facilitating reproducing and
refinement on those methods.
|
This paper describes NCRF++, a toolkit for neural sequence labeling.
|
http://arxiv.org/abs/1806.05626v2
|
http://arxiv.org/pdf/1806.05626v2.pdf
|
ACL 2018 7
|
[
"Jie Yang",
"Yue Zhang"
] |
[
"Chunking",
"GPU",
"Named Entity Recognition (NER)",
"Part-Of-Speech Tagging"
] | 2018-06-14T00:00:00 |
https://aclanthology.org/P18-4013
|
https://aclanthology.org/P18-4013.pdf
|
ncrf-an-open-source-neural-sequence-labeling-1
| null |
[
{
"code_snippet_url": null,
"description": "**Conditional Random Fields** or **CRFs** are a type of probabilistic graph model that take neighboring sample context into account for tasks like classification. Prediction is modeled as a graphical model, which implements dependencies between the predictions. Graph choice depends on the application, for example linear chain CRFs are popular in natural language processing, whereas in image-based tasks, the graph would connect to neighboring locations in an image to enforce that they have similar predictions.\r\n\r\nImage Credit: [Charles Sutton and Andrew McCallum, An Introduction to Conditional Random Fields](https://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf)",
"full_name": "Conditional Random Field",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Structured Prediction** methods deal with structured outputs with multiple interdependent outputs. Below you can find a continuously updating list of structured prediction methods.",
"name": "Structured Prediction",
"parent": null
},
"name": "CRF",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/predicting-switching-graph-labelings-with
|
1806.06439
| null | null |
Online Prediction of Switching Graph Labelings with Cluster Specialists
|
We address the problem of predicting the labeling of a graph in an online setting when the labeling is changing over time. We present an algorithm based on a specialist approach; we develop the machinery of cluster specialists which probabilistically exploits the cluster structure in the graph. Our algorithm has two variants, one of which surprisingly only requires $\mathcal{O}(\log n)$ time on any trial $t$ on an $n$-vertex graph, an exponential speed up over existing methods. We prove switching mistake-bound guarantees for both variants of our algorithm. Furthermore these mistake bounds smoothly vary with the magnitude of the change between successive labelings. We perform experiments on Chicago Divvy Bicycle Sharing data and show that our algorithms significantly outperform an existing algorithm (a kernelized Perceptron) as well as several natural benchmarks.
|
We address the problem of predicting the labeling of a graph in an online setting when the labeling is changing over time.
|
https://arxiv.org/abs/1806.06439v3
|
https://arxiv.org/pdf/1806.06439v3.pdf
|
NeurIPS 2019 12
|
[
"Mark Herbster",
"James Robinson"
] |
[] | 2018-06-17T00:00:00 |
http://papers.nips.cc/paper/8923-online-prediction-of-switching-graph-labelings-with-cluster-specialists
|
http://papers.nips.cc/paper/8923-online-prediction-of-switching-graph-labelings-with-cluster-specialists.pdf
|
online-prediction-of-switching-graph
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/compressed-sensing-with-deep-image-prior-and
|
1806.06438
| null |
Hkl_sAVtwr
|
Compressed Sensing with Deep Image Prior and Learned Regularization
|
We propose a novel method for compressed sensing recovery using untrained deep generative models. Our method is based on the recently proposed Deep Image Prior (DIP), wherein the convolutional weights of the network are optimized to match the observed measurements. We show that this approach can be applied to solve any differentiable linear inverse problem, outperforming previous unlearned methods. Unlike various learned approaches based on generative models, our method does not require pre-training over large datasets. We further introduce a novel learned regularization technique, which incorporates prior information on the network weights. This reduces reconstruction error, especially for noisy measurements. Finally, we prove that, using the DIP optimization approach, moderately overparameterized single-layer networks can perfectly fit any signal despite the non-convex nature of the fitting problem. This theoretical result provides justification for early stopping.
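
A minimal sketch of the DIP optimization loop for compressed sensing, assuming a toy fully-connected generator in place of the convolutional network: the untrained weights are fit so that $A\,G_w(z)$ matches the measurements $y$, with early stopping acting as the regularizer.

```python
import torch
import torch.nn as nn

n, m = 256, 64
torch.manual_seed(0)
A = torch.randn(m, n) / m ** 0.5             # measurement matrix
x_true = torch.zeros(n); x_true[:8] = 1.0    # signal to recover
y = A @ x_true                               # noiseless measurements

z = torch.randn(1, 32)                       # fixed random input
G = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, n))
opt = torch.optim.Adam(G.parameters(), lr=1e-3)
for step in range(2000):                     # stop early in practice
    opt.zero_grad()
    loss = ((A @ G(z).squeeze(0) - y) ** 2).sum()
    loss.backward()
    opt.step()
print(float((G(z).squeeze(0) - x_true).norm()))  # reconstruction error
```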
|
We propose a novel method for compressed sensing recovery using untrained deep generative models.
|
https://arxiv.org/abs/1806.06438v4
|
https://arxiv.org/pdf/1806.06438v4.pdf
| null |
[
"Dave Van Veen",
"Ajil Jalal",
"Mahdi Soltanolkotabi",
"Eric Price",
"Sriram Vishwanath",
"Alexandros G. Dimakis"
] |
[
"compressed sensing"
] | 2018-06-17T00:00:00 |
https://openreview.net/forum?id=Hkl_sAVtwr
|
https://openreview.net/pdf?id=Hkl_sAVtwr
| null | null |
[] |
https://paperswithcode.com/paper/subspace-embedding-and-linear-regression-with
|
1806.06430
| null | null |
Subspace Embedding and Linear Regression with Orlicz Norm
|
We consider a generalization of the classic linear regression problem to the
case when the loss is an Orlicz norm. An Orlicz norm is parameterized by a
non-negative convex function $G:\mathbb{R}_+\rightarrow\mathbb{R}_+$ with
$G(0)=0$: the Orlicz norm of a vector $x\in\mathbb{R}^n$ is defined as $
\|x\|_G=\inf\left\{\alpha>0\large\mid\sum_{i=1}^n G(|x_i|/\alpha)\leq
1\right\}. $ We consider the cases where the function $G(\cdot)$ grows
subquadratically. Our main result is based on a new oblivious embedding which
embeds the column space of a given matrix $A\in\mathbb{R}^{n\times d}$ with
Orlicz norm into a lower dimensional space with $\ell_2$ norm. Specifically, we
show how to efficiently find an embedding matrix $S\in\mathbb{R}^{m\times
n},m<n$ such that $\forall x\in\mathbb{R}^{d},\Omega(1/(d\log n)) \cdot
\|Ax\|_G\leq \|SAx\|_2\leq O(d^2\log n) \cdot \|Ax\|_G.$ By applying this
subspace embedding technique, we show an approximation algorithm for the
regression problem $\min_{x\in\mathbb{R}^d} \|Ax-b\|_G$, up to a $O(d\log^2 n)$
factor. As a further application of our techniques, we show how to also use
them to improve on the algorithm for the $\ell_p$ low rank matrix approximation
problem for $1\leq p<2$.
| null |
http://arxiv.org/abs/1806.06430v1
|
http://arxiv.org/pdf/1806.06430v1.pdf
|
ICML 2018 7
|
[
"Alexandr Andoni",
"Chengyu Lin",
"Ying Sheng",
"Peilin Zhong",
"Ruiqi Zhong"
] |
[
"regression"
] | 2018-06-17T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2451
|
http://proceedings.mlr.press/v80/andoni18a/andoni18a.pdf
|
subspace-embedding-and-linear-regression-with-1
| null |
[
{
"code_snippet_url": null,
"description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)",
"full_name": "Linear Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Linear Regression",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/scalable-methods-for-8-bit-training-of-neural
|
1805.11046
| null | null |
Scalable Methods for 8-bit Training of Neural Networks
|
Quantized Neural Networks (QNNs) are often used to improve network efficiency
during the inference phase, i.e. after the network has been trained. Extensive
research in the field suggests many different quantization schemes. Still, the
number of bits required, as well as the best quantization scheme, are yet
unknown. Our theoretical analysis suggests that most of the training process is
robust to substantial precision reduction, and points to only a few specific
operations that require higher precision. Armed with this knowledge, we
quantize the model parameters, activations and layer gradients to 8-bit,
leaving at a higher precision only the final step in the computation of the
weight gradients. Additionally, as QNNs require batch-normalization to be
trained at high precision, we introduce Range Batch-Normalization (BN) which
has significantly higher tolerance to quantization noise and improved
computational complexity. Our simulations show that Range BN is equivalent to
the traditional batch norm if a precise scale adjustment, which can be
approximated analytically, is applied. To the best of the authors' knowledge,
this work is the first to quantize the weights, activations, as well as a
substantial volume of the gradients stream, in all layers (including batch
normalization) to 8-bit while showing state-of-the-art results over the
ImageNet-1K dataset.
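
A sketch of the Range BN idea: replace the batch standard deviation with a scaled range of the centered activations. The scaling constant below comes from the Gaussian extreme-value approximation $\mathbb{E}[\text{range}] \approx 2\sigma\sqrt{2\ln n}$ and is our assumption of the exact form used in the paper.

```python
import numpy as np

def range_bn(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Range-based batch normalization per feature: the std in the
    denominator is replaced by a scaled range of the centered batch
    (assumed scaling: E[range] ~= 2*sigma*sqrt(2*ln n) for Gaussian data)."""
    n = x.shape[0]
    centered = x - x.mean(axis=0)
    rng = centered.max(axis=0) - centered.min(axis=0)
    sigma_hat = rng / (2.0 * np.sqrt(2.0 * np.log(n)))
    return gamma * centered / (sigma_hat + eps) + beta

x = np.random.randn(1024, 16)
out = range_bn(x)
print(out.std(axis=0).round(2))  # roughly 1 for Gaussian activations
```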
|
Armed with this knowledge, we quantize the model parameters, activations and layer gradients to 8-bit, leaving at a higher precision only the final step in the computation of the weight gradients.
|
http://arxiv.org/abs/1805.11046v3
|
http://arxiv.org/pdf/1805.11046v3.pdf
|
NeurIPS 2018 12
|
[
"Ron Banner",
"Itay Hubara",
"Elad Hoffer",
"Daniel Soudry"
] |
[
"Quantization"
] | 2018-05-25T00:00:00 |
http://papers.nips.cc/paper/7761-scalable-methods-for-8-bit-training-of-neural-networks
|
http://papers.nips.cc/paper/7761-scalable-methods-for-8-bit-training-of-neural-networks.pdf
|
scalable-methods-for-8-bit-training-of-neural-1
| null |
[] |
https://paperswithcode.com/paper/a-novel-hybrid-machine-learning-model-for
|
1806.06423
| null | null |
A Novel Hybrid Machine Learning Model for Auto-Classification of Retinal Diseases
|
Automatic clinical diagnosis of retinal diseases has emerged as a promising
approach to facilitate discovery in areas with limited access to specialists.
We propose a novel visual-assisted diagnosis hybrid model based on the support
vector machine (SVM) and deep neural networks (DNNs). The model incorporates
complementary strengths of DNNs and SVM. Furthermore, we present a new clinical
retina label collection for ophthalmology incorporating 32 retina disease
classes. Using EyeNet, our model achieves 89.73% diagnosis accuracy, and its
performance is comparable to that of professional ophthalmologists.
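
A toy sketch of the general DNN+SVM hybrid pattern (network as feature extractor, SVM as classifier) on synthetic data; the architecture, feature choice, and data are placeholders, not the EyeNet pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Toy stand-in data: 300 samples, 64 features, 4 "disease" classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))
y = rng.integers(0, 4, size=300)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# 1) Train a small network, 2) reuse its hidden layer as features,
# 3) fit an SVM on those features: the DNN+SVM hybrid pattern.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(Xtr, ytr)

def hidden_features(model, X):
    # ReLU hidden activations of the fitted MLP (its default activation).
    return np.maximum(X @ model.coefs_[0] + model.intercepts_[0], 0)

svm = SVC(kernel="rbf").fit(hidden_features(mlp, Xtr), ytr)
print(svm.score(hidden_features(mlp, Xte), yte))
```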
|
Automatic clinical diagnosis of retinal diseases has emerged as a promising approach to facilitate discovery in areas with limited access to specialists.
|
http://arxiv.org/abs/1806.06423v1
|
http://arxiv.org/pdf/1806.06423v1.pdf
| null |
[
"C. -H. Huck Yang",
"Jia-Hong Huang",
"Fangyu Liu",
"Fang-Yi Chiu",
"Mengya Gao",
"Weifeng Lyu",
"I-Hung Lin M. D.",
"Jesper Tegner"
] |
[
"BIG-bench Machine Learning",
"General Classification",
"Hybrid Machine Learning"
] | 2018-06-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/learning-to-evaluate-image-captioning
|
1806.06422
| null | null |
Learning to Evaluate Image Captioning
|
Evaluation metrics for image captioning face two challenges. Firstly,
commonly used metrics such as CIDEr, METEOR, ROUGE and BLEU often do not
correlate well with human judgments. Secondly, each metric has well-known blind
spots to pathological caption constructions, and rule-based metrics lack
provisions to repair such blind spots once identified. For example, the newly
proposed SPICE correlates well with human judgments, but fails to capture the
syntactic structure of a sentence. To address these two challenges, we propose
a novel learning based discriminative evaluation metric that is directly
trained to distinguish between human and machine-generated captions. In
addition, we further propose a data augmentation scheme to explicitly
incorporate pathological transformations as negative examples during training.
The proposed metric is evaluated with three kinds of robustness tests and its
correlation with human judgments. Extensive experiments show that the proposed
data augmentation scheme not only makes our metric more robust toward several
pathological transformations, but also improves its correlation with human
judgments. Our metric outperforms other metrics on both caption level human
correlation in Flickr 8k and system level human correlation in COCO. The
proposed approach could serve as a learning-based evaluation metric that is
complementary to existing rule-based metrics.
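A minimal sketch of such a learned discriminative metric, assuming generic feature vectors for image-caption pairs; the `encode` function and logistic classifier are illustrative stand-ins, not the authors' architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def encode(image_feat, caption_feat):
    # Toy joint embedding: concatenate image and caption features.
    return np.concatenate([image_feat, caption_feat])

# y = 1 for human captions, 0 for machine-generated (or pathologically
# transformed) captions used as negative training examples.
X = np.stack([encode(rng.normal(size=16), rng.normal(size=16)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

metric = LogisticRegression(max_iter=1000).fit(X, y)
# Score a new caption: the probability that it looks human-written.
print(metric.predict_proba(X[:1])[0, 1])
```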
|
To address these two challenges, we propose a novel learning based discriminative evaluation metric that is directly trained to distinguish between human and machine-generated captions.
|
http://arxiv.org/abs/1806.06422v1
|
http://arxiv.org/pdf/1806.06422v1.pdf
|
CVPR 2018 6
|
[
"Yin Cui",
"Guandao Yang",
"Andreas Veit",
"Xun Huang",
"Serge Belongie"
] |
[
"8k",
"Data Augmentation",
"Image Captioning",
"Sentence"
] | 2018-06-17T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Cui_Learning_to_Evaluate_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Cui_Learning_to_Evaluate_CVPR_2018_paper.pdf
|
learning-to-evaluate-image-captioning-1
| null |
[] |
https://paperswithcode.com/paper/high-speed-tracking-with-multi-kernel
|
1806.06418
| null | null |
High-speed Tracking with Multi-kernel Correlation Filters
|
Correlation filter (CF) based trackers are currently ranked top in terms of
their performance. Nevertheless, only some of them, such as KCF and MKCF, are
able to exploit the powerful discriminability of non-linear kernels. Although
MKCF achieves more powerful discriminability than KCF by introducing
multi-kernel learning (MKL) into KCF, its improvement over KCF is quite
limited and its computational burden increases significantly in comparison
with KCF. In this paper, we introduce MKL into KCF in a different way than
MKCF. We reformulate the MKL version of the CF objective function with its
upper bound, significantly alleviating the negative mutual interference of
different kernels. Our novel MKCF tracker, MKCFup, outperforms KCF and MKCF
by large margins and can still run at very high fps. Extensive experiments on
public datasets show that our method is superior to state-of-the-art
algorithms for target objects with small inter-frame movements, while running
at very high speed.
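A rough single-channel sketch of the kernelized correlation filter building block that the paper extends; the Gaussian kernel and regularization constant are illustrative, and the paper's multi-kernel upper-bound reformulation is not reproduced here.

```python
import numpy as np

def gaussian_correlation(x1, x2, sigma=0.5):
    # Kernel correlation of two equally sized single-channel patches via FFT.
    c = np.fft.ifft2(np.fft.fft2(x1) * np.conj(np.fft.fft2(x2))).real
    d = (x1 ** 2).sum() + (x2 ** 2).sum() - 2.0 * c
    return np.exp(-np.maximum(d, 0) / (sigma ** 2 * x1.size))

def train(x, y, lam=1e-4):
    # Closed-form kernel ridge regression in the Fourier domain (KCF-style).
    return np.fft.fft2(y) / (np.fft.fft2(gaussian_correlation(x, x)) + lam)

def detect(alpha_f, x_model, z):
    # Response map over all cyclic shifts of the search patch z.
    k = gaussian_correlation(z, x_model)
    return np.fft.ifft2(alpha_f * np.fft.fft2(k)).real

x = np.random.rand(32, 32)                                   # training patch
y = np.exp(-np.arange(32) ** 2 / 9.0)[:, None] * np.exp(-np.arange(32) ** 2 / 9.0)
alpha_f = train(x, y)
response = detect(alpha_f, x, np.random.rand(32, 32))        # peak = target shift
```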
|
In this paper, we will introduce the MKL into KCF in a different way than MKCF.
|
http://arxiv.org/abs/1806.06418v1
|
http://arxiv.org/pdf/1806.06418v1.pdf
|
CVPR 2018 6
|
[
"Ming Tang",
"Bin Yu",
"Fan Zhang",
"Jinqiao Wang"
] |
[
"Video Object Tracking",
"Vocal Bursts Intensity Prediction"
] | 2018-06-17T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Tang_High-Speed_Tracking_With_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Tang_High-Speed_Tracking_With_CVPR_2018_paper.pdf
|
high-speed-tracking-with-multi-kernel-1
| null |
[] |
https://paperswithcode.com/paper/feature-learning-and-classification-in
|
1806.06415
| null | null |
Feature Learning and Classification in Neuroimaging: Predicting Cognitive Impairment from Magnetic Resonance Imaging
|
Due to the rapid innovation of technology and the desire to find and employ
biomarkers for neurodegenerative disease, high-dimensional data classification
problems are routinely encountered in neuroimaging studies. To avoid
over-fitting and to explore relationships between disease and potential
biomarkers, feature learning and selection plays an important role in
classifier construction and is an important area in machine learning. In this
article, we review several important feature learning and selection techniques
including lasso-based methods, PCA, the two-sample t-test, and stacked
auto-encoders. We compare these approaches using a numerical study involving
the prediction of Alzheimer's disease from Magnetic Resonance Imaging.
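A small sketch contrasting two of the reviewed approaches, lasso-style embedded selection and PCA, inside standard scikit-learn pipelines; the random data stands in for voxel-level MRI features and diagnosis labels.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))    # stand-in for high-dimensional imaging features
y = rng.integers(0, 2, size=100)   # stand-in diagnosis labels

# Lasso-style embedded selection: an L1-penalised linear classifier.
lasso_clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
# PCA: project onto the leading components before classifying.
pca_clf = make_pipeline(StandardScaler(), PCA(n_components=20),
                        LogisticRegression(max_iter=1000))

for name, clf in [("lasso", lasso_clf), ("pca", pca_clf)]:
    print(name, clf.fit(X[:80], y[:80]).score(X[80:], y[80:]))
```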
| null |
http://arxiv.org/abs/1806.06415v1
|
http://arxiv.org/pdf/1806.06415v1.pdf
| null |
[
"Shan Shi",
"Farouk Nathoo"
] |
[
"BIG-bench Machine Learning",
"General Classification"
] | 2018-06-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
}
] |
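A minimal scikit-learn PCA sketch matching the description above; the random design matrix is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(100, 10))  # toy design matrix
pca = PCA(n_components=2).fit(X)                     # SVD-based under the hood
print(pca.explained_variance_ratio_)                 # variance per component
Z = pca.transform(X)                                 # low-dimensional projection
```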
https://paperswithcode.com/paper/one-to-one-mapping-between-stimulus-and
|
1805.09001
| null | null |
One-to-one Mapping between Stimulus and Neural State: Memory and Classification
|
Synaptic strength can be seen as the probability of propagating an impulse,
and, according to synaptic plasticity, a function could exist from
propagation activity to synaptic strength. If the function satisfies
constraints such as continuity and monotonicity, a neural network under
external stimulus will always go to a fixed point, and there could be a
one-to-one mapping between the external stimulus and the synaptic strengths
at the fixed point. In other words, the neural network "memorizes" the
external stimulus in its synapses. A biological classifier is proposed to
utilize this mapping.
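A toy numerical sketch of the fixed-point claim: iterate a continuous, monotone map from propagation activity to strength until the strengths converge. The propagation model and sigmoid map are assumptions for illustration only.

```python
import numpy as np

def fixed_point_strengths(stimulus, f, w0, tol=1e-8, max_iter=10_000):
    # Iterate strength <- f(activity) until convergence; with a continuous,
    # monotone f the iteration settles at a fixed point.
    w = w0
    for _ in range(max_iter):
        activity = stimulus * w                  # toy propagation model (assumed)
        w_next = f(activity)
        if np.max(np.abs(w_next - w)) < tol:
            break
        w = w_next
    return w

w_star = fixed_point_strengths(
    stimulus=np.array([0.2, 0.9]),
    f=lambda a: 1.0 / (1.0 + np.exp(-a)),        # continuous, monotone map
    w0=np.full(2, 0.5),
)
print(w_star)   # a different stimulus yields a different fixed point
```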
|
Synaptic strength can be seen as the probability of propagating an impulse, and, according to synaptic plasticity, a function could exist from propagation activity to synaptic strength.
|
http://arxiv.org/abs/1805.09001v6
|
http://arxiv.org/pdf/1805.09001v6.pdf
| null |
[
"Sizhong Lan"
] |
[
"General Classification"
] | 2018-05-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/negative-learning-rates-and-p-learning
|
1603.08253
| null | null |
Negative Learning Rates and P-Learning
|
We present a method of training a differentiable function approximator for a
regression task using negative examples. We effect this training using negative
learning rates. We also show how this method can be used to perform direct
policy learning in a reinforcement learning setting.
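A toy sketch of the idea, assuming a linear regressor with squared error: a positive learning rate fits the desired examples, while a negative rate on undesired examples moves predictions away from them.

```python
import numpy as np

def grad(w, x, y):
    # Squared-error gradient for a linear model f(x) = w @ x.
    return 2.0 * (w @ x - y) * x

w = np.zeros(3)
lr_pos, lr_neg = 0.1, -0.05        # note the negative learning rate

positives = [(np.array([1.0, 0.0, 1.0]), 1.0)]   # outputs we want
negatives = [(np.array([0.0, 1.0, 1.0]), 1.0)]   # outputs we do NOT want

for _ in range(10):
    for x, y in positives:
        w -= lr_pos * grad(w, x, y)              # descend toward y
    for x, y in negatives:
        w -= lr_neg * grad(w, x, y)              # negative rate: ascend away

print(positives[0][0] @ w, negatives[0][0] @ w)  # near 1 vs pushed away from 1
```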
| null |
http://arxiv.org/abs/1603.08253v3
|
http://arxiv.org/pdf/1603.08253v3.pdf
| null |
[
"Devon Merrill"
] |
[
"regression",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2016-03-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/measuring-semantic-coherence-of-a
|
1806.06411
| null | null |
Measuring Semantic Coherence of a Conversation
|
Conversational systems have become increasingly popular as a way for humans
to interact with computers. To be able to provide intelligent responses,
conversational systems must correctly model the structure and semantics of a
conversation. We introduce the task of measuring semantic (in)coherence in a
conversation with respect to background knowledge, which relies on the
identification of semantic relations between concepts introduced during a
conversation. We propose and evaluate graph-based and machine learning-based
approaches for measuring semantic coherence using knowledge graphs, their
vector space embeddings and word embedding models, as sources of background
knowledge. We demonstrate how these approaches are able to uncover different
coherence patterns in conversations on the Ubuntu Dialogue Corpus.
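A minimal sketch of the embedding-based variant: score a conversation by the average cosine similarity between embeddings of consecutively introduced concepts. The embeddings would come from a knowledge graph or word embedding model; random vectors stand in here.

```python
import numpy as np

def semantic_coherence(concept_vecs):
    # Mean cosine similarity between consecutive concepts in a conversation;
    # low values flag a semantically incoherent turn.
    sims = [a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            for a, b in zip(concept_vecs, concept_vecs[1:])]
    return float(np.mean(sims))

rng = np.random.default_rng(0)
conversation = [rng.normal(size=100) for _ in range(6)]  # stand-in embeddings
print(semantic_coherence(conversation))
```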
|
Conversational systems have become increasingly popular as a way for humans to interact with computers.
|
http://arxiv.org/abs/1806.06411v1
|
http://arxiv.org/pdf/1806.06411v1.pdf
| null |
[
"Svitlana Vakulenko",
"Maarten de Rijke",
"Michael Cochez",
"Vadim Savenkov",
"Axel Polleres"
] |
[
"Knowledge Graphs"
] | 2018-06-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-a-prior-over-intent-via-meta-inverse
|
1805.12573
| null | null |
Learning a Prior over Intent via Meta-Inverse Reinforcement Learning
|
A significant challenge for the practical application of reinforcement learning in the real world is the need to specify an oracle reward function that correctly defines a task. Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a "prior" that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.
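A first-order, MAML-style sketch of learning such a prior, with a toy quadratic stand-in for the per-task IRL gradient; this illustrates the meta-learning structure only and is not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def irl_grad(phi, demos):
    # Toy stand-in for a maximum-entropy IRL gradient on demonstrations.
    return phi - demos.mean(axis=0)

# Each "task" is a small batch of expert feature expectations.
tasks = [rng.normal(loc=m, scale=0.1, size=(5, 8)) for m in rng.normal(size=4)]

theta = np.zeros(8)            # the learned prior over reward parameters
alpha, beta = 0.5, 0.1         # per-task adaptation / meta step sizes

for _ in range(200):
    meta_grad = np.zeros_like(theta)
    for demos in tasks:
        phi = theta - alpha * irl_grad(theta, demos)  # adapt from the prior
        meta_grad += irl_grad(phi, demos)             # first-order meta-gradient
    theta -= beta * meta_grad / len(tasks)
```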
| null |
https://arxiv.org/abs/1805.12573v5
|
https://arxiv.org/pdf/1805.12573v5.pdf
| null |
[
"Kelvin Xu",
"Ellis Ratner",
"Anca Dragan",
"Sergey Levine",
"Chelsea Finn"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/gated-path-planning-networks
|
1806.06408
| null | null |
Gated Path Planning Networks
|
Value Iteration Networks (VINs) are effective differentiable path planning
modules that can be used by agents to perform navigation while still
maintaining end-to-end differentiability of the entire architecture. Despite
their effectiveness, they suffer from several disadvantages including training
instability, random seed sensitivity, and other optimization problems. In this
work, we reframe VINs as recurrent-convolutional networks, which demonstrates
that VINs couple recurrent convolutions with an unconventional max-pooling
activation. From this perspective, we argue that standard gated recurrent
update equations could potentially alleviate the optimization issues plaguing
VIN. The resulting architecture, which we call the Gated Path Planning Network,
is shown to empirically outperform VIN on a variety of metrics such as learning
speed, hyperparameter sensitivity, iteration count, and even generalization.
Furthermore, we show that this performance gap is consistent across different
maze transition types, maze sizes and even show success on a challenging 3D
environment, where the planner is only provided with first-person RGB images.
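A compact PyTorch sketch of the gated planning update: a convolutional GRU-style cell in place of VIN's convolution plus max-pooling value iteration. Channel counts, kernel size, and iteration depth are illustrative, and the paper itself uses an LSTM-style gate.

```python
import torch
import torch.nn as nn

class ConvGRUPlanner(nn.Module):
    # Gated recurrent value update over the maze grid.
    def __init__(self, ch=32, k=3):
        super().__init__()
        p = k // 2
        self.zr = nn.Conv2d(2 * ch, 2 * ch, k, padding=p)   # update/reset gates
        self.cand = nn.Conv2d(2 * ch, ch, k, padding=p)     # candidate values

    def forward(self, h, x, iters=10):
        for _ in range(iters):                              # planning iterations
            z, r = torch.sigmoid(self.zr(torch.cat([h, x], 1))).chunk(2, 1)
            h_tilde = torch.tanh(self.cand(torch.cat([r * h, x], 1)))
            h = (1 - z) * h + z * h_tilde                   # gated update
        return h

planner = ConvGRUPlanner()
h0 = torch.zeros(1, 32, 16, 16)      # initial value map
x = torch.randn(1, 32, 16, 16)       # encoded maze observation + goal
values = planner(h0, x)              # refined value map after gated planning
```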
|
Value Iteration Networks (VINs) are effective differentiable path planning modules that can be used by agents to perform navigation while still maintaining end-to-end differentiability of the entire architecture.
|
http://arxiv.org/abs/1806.06408v1
|
http://arxiv.org/pdf/1806.06408v1.pdf
|
ICML 2018 7
|
[
"Lisa Lee",
"Emilio Parisotto",
"Devendra Singh Chaplot",
"Eric Xing",
"Ruslan Salakhutdinov"
] |
[
"Sensitivity"
] | 2018-06-17T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2488
|
http://proceedings.mlr.press/v80/lee18c/lee18c.pdf
|
gated-path-planning-networks-1
| null |
[] |
https://paperswithcode.com/paper/an-improved-text-sentiment-classification
|
1806.06407
| null | null |
An Improved Text Sentiment Classification Model Using TF-IDF and Next Word Negation
|
With the rapid growth of text sentiment analysis, the demand for automatic
classification of electronic documents has increased by leaps and bounds. The
paradigm of text classification, or text mining, has been the subject of many
research works in recent times. In this paper we propose a technique for text
sentiment classification using term frequency-inverse document frequency
(TF-IDF) along with Next Word Negation (NWN). We also compare the
performance of the binary bag-of-words model, the TF-IDF model, and the
TF-IDF with next word negation (TF-IDF-NWN) model for text classification.
Our proposed model is then applied with three different text mining
algorithms, and we find the linear support vector machine (LSVM) to be the
most appropriate for our proposed model. The achieved results show a
significant increase in accuracy compared to earlier methods.
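A minimal sketch of the TF-IDF + NWN pipeline, assuming whitespace tokenization and a small illustrative negation list; the paper's exact preprocessing may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

NEGATIONS = {"not", "no", "never", "n't"}   # illustrative negation list

def next_word_negation(text):
    # Merge each negation word with the word that follows it, so that
    # "not good" becomes the single token "not_good".
    tokens = text.lower().split()
    out, skip = [], False
    for i, tok in enumerate(tokens):
        if skip:
            skip = False
            continue
        if tok in NEGATIONS and i + 1 < len(tokens):
            out.append(tok + "_" + tokens[i + 1])
            skip = True
        else:
            out.append(tok)
    return " ".join(out)

docs = ["this movie is not good", "an excellent, moving film"]
labels = [0, 1]
model = make_pipeline(TfidfVectorizer(preprocessor=next_word_negation),
                      LinearSVC())
model.fit(docs, labels)
print(model.predict(["not excellent at all"]))
```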
| null |
http://arxiv.org/abs/1806.06407v1
|
http://arxiv.org/pdf/1806.06407v1.pdf
| null |
[
"Bijoyan Das",
"Sarit Chakraborty"
] |
[
"Classification",
"General Classification",
"Negation",
"Sentiment Analysis",
"Sentiment Classification",
"text-classification",
"Text Classification"
] | 2018-06-17T00:00:00 | null | null | null | null |
[] |