Dataset Viewer

paper_url (string, len 35–81) | arxiv_id (string, len 6–35, nullable) | nips_id (null) | openreview_id (string, len 9–93, nullable) | title (string, len 1–1.02k, nullable) | abstract (string, len 0–56.5k, nullable) | short_abstract (string, len 0–1.95k, nullable) | url_abs (string, len 16–996) | url_pdf (string, len 16–996, nullable) | proceeding (string, len 7–1.03k, nullable) | authors (list, len 0–3.31k) | tasks (list, len 0–147) | date (timestamp[us], 1951-09-01 to 2222-12-22, nullable) | conference_url_abs (string, len 16–199, nullable) | conference_url_pdf (string, len 21–200, nullable) | conference (string, len 2–47, nullable) | reproduces_paper (22 classes, nullable) | methods (list, len 0–7.5k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
https://paperswithcode.com/paper/dynamic-network-model-from-partial
|
1805.10616
| null | null |
Dynamic Network Model from Partial Observations
|
Can evolving networks be inferred and modeled without directly observing
their nodes and edges? In many applications, the edges of a dynamic network
might not be observed, but one can observe the dynamics of stochastic cascading
processes (e.g., information diffusion, virus propagation) occurring over the
unobserved network. While there have been efforts to infer networks based on
such data, providing a generative probabilistic model that is able to identify
the underlying time-varying network remains an open question. Here we consider
the problem of inferring generative dynamic network models based on network
cascade diffusion data. We propose a novel framework that provides a
non-parametric dynamic network model, based on a mixture of coupled
hierarchical Dirichlet processes, inferred from data capturing cascade node
infection times. Our approach allows us to infer the evolving community
structure in networks and to obtain an explicit predictive distribution over
the edges of the underlying network, including those that were not involved
in the transmission of any cascade or are likely to appear in the future. We show the
effectiveness of our approach using extensive experiments on synthetic as well
as real-world networks.
| null |
http://arxiv.org/abs/1805.10616v4
|
http://arxiv.org/pdf/1805.10616v4.pdf
|
NeurIPS 2018 12
|
[
"Elahe Ghalebi",
"Baharan Mirzasoleiman",
"Radu Grosu",
"Jure Leskovec"
] |
[
"model",
"Open-Ended Question Answering"
] | 2018-05-27T00:00:00 |
http://papers.nips.cc/paper/8192-dynamic-network-model-from-partial-observations
|
http://papers.nips.cc/paper/8192-dynamic-network-model-from-partial-observations.pdf
|
dynamic-network-model-from-partial-1
| null |
[] |
https://paperswithcode.com/paper/pac-bayes-bounds-for-stable-algorithms-with
|
1806.06827
| null | null |
PAC-Bayes bounds for stable algorithms with instance-dependent priors
|
PAC-Bayes bounds have been proposed to get risk estimates based on a training
sample. In this paper the PAC-Bayes approach is combined with stability of the
hypothesis learned by a Hilbert space valued algorithm. The PAC-Bayes setting
is used with a Gaussian prior centered at the expected output. Thus a novelty
of our paper is using priors defined in terms of the data-generating
distribution. Our main result estimates the risk of the randomized algorithm in
terms of the hypothesis stability coefficients. We also provide a new bound for
the SVM classifier, which is compared to other known bounds experimentally.
Ours appears to be the first stability-based bound that evaluates to
non-trivial values.
| null |
http://arxiv.org/abs/1806.06827v2
|
http://arxiv.org/pdf/1806.06827v2.pdf
|
NeurIPS 2018 12
|
[
"Omar Rivasplata",
"Emilio Parrado-Hernandez",
"John Shawe-Taylor",
"Shiliang Sun",
"Csaba Szepesvari"
] |
[] | 2018-06-18T00:00:00 |
http://papers.nips.cc/paper/8134-pac-bayes-bounds-for-stable-algorithms-with-instance-dependent-priors
|
http://papers.nips.cc/paper/8134-pac-bayes-bounds-for-stable-algorithms-with-instance-dependent-priors.pdf
|
pac-bayes-bounds-for-stable-algorithms-with-1
| null |
[
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/automated-bridge-component-recognition-using
|
1806.06820
| null | null |
Automated Bridge Component Recognition using Video Data
|
This paper investigates the automated recognition of structural bridge
components using video data. Although understanding video data for structural
inspections is straightforward for human inspectors, the implementation of the
same task using machine learning methods has not been fully realized. In
particular, single-frame image processing techniques, such as convolutional
neural networks (CNNs), are not expected to identify structural components
accurately when the image is a close-up view, lacking contextual information
regarding where on the structure the image originates. Inspired by the
significant progress in video processing techniques, this study investigates
automated bridge component recognition using video data, where the information
from the past frames is used to augment the understanding of the current frame.
A new simulated video dataset is created to train the machine learning
algorithms. Then, convolutional neural networks (CNNs) with recurrent
architectures are designed and applied to implement the automated bridge
component recognition task. Results are presented for simulated video data, as
well as video collected in the field.
| null |
http://arxiv.org/abs/1806.06820v2
|
http://arxiv.org/pdf/1806.06820v2.pdf
| null |
[
"Yasutaka Narazaki",
"Vedhus Hoskere",
"Tu A. Hoang",
"Billie F. Spencer Jr"
] |
[
"BIG-bench Machine Learning"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/gradient-descent-with-identity-initialization-1
|
1802.06093
| null | null |
Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
|
We analyze algorithms for approximating a function $f(x) = \Phi x$ mapping
$\mathbb{R}^d$ to $\mathbb{R}^d$ using deep linear neural networks, i.e. that learn a
function $h$ parameterized by matrices $\Theta_1,...,\Theta_L$ and defined by
$h(x) = \Theta_L \Theta_{L-1} ... \Theta_1 x$. We focus on algorithms that
learn through gradient descent on the population quadratic loss in the case
that the distribution over the inputs is isotropic.
We provide polynomial bounds on the number of iterations for gradient descent
to approximate the least squares matrix $\Phi$, in the case where the initial
hypothesis $\Theta_1 = ... = \Theta_L = I$ has excess loss bounded by a small
enough constant. On the other hand, we show that gradient descent fails to
converge for $\Phi$ whose distance from the identity is a larger constant, and
we show that some forms of regularization toward the identity in each layer do
not help.
If $\Phi$ is symmetric positive definite, we show that an algorithm that
initializes $\Theta_i = I$ learns an $\epsilon$-approximation of $f$ using a
number of updates polynomial in $L$, the condition number of $\Phi$, and
$\log(d/\epsilon)$. In contrast, we show that if the least squares matrix
$\Phi$ is symmetric and has a negative eigenvalue, then all members of a class
of algorithms that perform gradient descent with identity initialization, and
optionally regularize toward the identity in each layer, fail to converge.
We analyze an algorithm for the case that $\Phi$ satisfies $u^{\top} \Phi u >
0$ for all $u$, but may not be symmetric. This algorithm uses two regularizers:
one that maintains the invariant $u^{\top} \Theta_L \Theta_{L-1} ... \Theta_1 u
> 0$ for all $u$, and another that "balances" $\Theta_1, ..., \Theta_L$ so that
they have the same singular values.
| null |
http://arxiv.org/abs/1802.06093v4
|
http://arxiv.org/pdf/1802.06093v4.pdf
|
ICML 2018
|
[
"Peter L. Bartlett",
"David P. Helmbold",
"Philip M. Long"
] |
[] | 2018-02-16T00:00:00 | null | null | null | null |
[] |
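The setup in the abstract above can be sketched numerically: for isotropic inputs with identity covariance, the population quadratic loss reduces (up to a constant) to $\|\Theta_L \cdots \Theta_1 - \Phi\|_F^2$. The following is a minimal sketch under assumed dimensions, step size, and step count, not the authors' implementation; it takes a symmetric positive definite $\Phi$ close to the identity, where gradient descent from identity initialization is expected to converge.

```python
import numpy as np

# Deep linear network h(x) = Theta_L ... Theta_1 x trained by gradient descent
# from identity initialization. d, L, eta, and steps are illustrative choices.
d, L, eta, steps = 3, 3, 0.05, 500
rng = np.random.default_rng(0)

A = 0.1 * rng.standard_normal((d, d))
Phi = np.eye(d) + (A + A.T) / 2   # symmetric, positive definite (close to I)

thetas = [np.eye(d) for _ in range(L)]  # identity initialization

def prod(ts):
    """Theta_k ... Theta_1 for ts = [Theta_1, ..., Theta_k]."""
    P = np.eye(d)
    for T in ts:
        P = T @ P
    return P

losses = []
for _ in range(steps):
    P = prod(thetas)
    G = P - Phi  # with isotropic inputs, population loss is ||P - Phi||_F^2
    # d/dTheta_i ||P - Phi||_F^2 = 2 (Theta_L..Theta_{i+1})^T G (Theta_{i-1}..Theta_1)^T
    grads = [2 * prod(thetas[i + 1:]).T @ G @ prod(thetas[:i]).T
             for i in range(L)]
    thetas = [T - eta * g for T, g in zip(thetas, grads)]
    losses.append(float(np.sum(G ** 2)))
```

After training, the product of the layer matrices approximates $\Phi$, and the loss trace decreases monotonically for this small step size.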
https://paperswithcode.com/paper/temporal-coherence-based-self-supervised
|
1806.06811
| null | null |
Temporal coherence-based self-supervised learning for laparoscopic workflow analysis
|
In order to provide the right type of assistance at the right time,
computer-assisted surgery systems need context awareness. To achieve this,
methods for surgical workflow analysis are crucial. Currently, convolutional
neural networks provide the best performance for video-based workflow analysis
tasks. For training such networks, large amounts of annotated data are
necessary. However, collecting a sufficient amount of data is often costly,
time-consuming, and not always feasible. In this paper, we address this problem
by presenting and comparing different approaches for self-supervised
pretraining of neural networks on unlabeled laparoscopic videos using temporal
coherence. We evaluate our pretrained networks on Cholec80, a publicly
available dataset for surgical phase segmentation, on which a maximum F1 score
of 84.6 was reached. Furthermore, we were able to achieve an increase of the F1
score of up to 10 points when compared to a non-pretrained neural network.
|
To achieve this, methods for surgical workflow analysis are crucial.
|
http://arxiv.org/abs/1806.06811v2
|
http://arxiv.org/pdf/1806.06811v2.pdf
| null |
[
"Isabel Funke",
"Alexander Jenke",
"Sören Torge Mees",
"Jürgen Weitz",
"Stefanie Speidel",
"Sebastian Bodenstedt"
] |
[
"Self-Supervised Learning",
"Surgical phase recognition"
] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/better-runtime-guarantees-via-stochastic
|
1801.04487
| null | null |
Better Runtime Guarantees Via Stochastic Domination
|
Apart from a few exceptions, the mathematical runtime analysis of evolutionary
algorithms is mostly concerned with expected runtimes. In this work, we argue
that stochastic domination is a notion that should be used more frequently in
this area. Stochastic domination allows one to formulate much more informative
performance guarantees, it allows one to decouple the algorithm analysis into the
true algorithmic part of detecting a domination statement and the
probability-theoretical part of deriving the desired probabilistic guarantees
from this statement, and it helps find simpler and more natural proofs.
As particular results, we prove a fitness level theorem which shows that the
runtime is dominated by a sum of independent geometric random variables, we
prove the first tail bounds for several classic runtime problems, and we give a
short and natural proof for Witt's result that the runtime of any $(\mu,p)$
mutation-based algorithm on any function with a unique optimum is subdominated by
the runtime of a variant of the $(1+1)$ EA on the OneMax function.
As side-products, we determine the fastest unbiased (1+1) algorithm for the
LeadingOnes benchmark problem, both in the general case and when restricted to
static mutation operators, and we prove a Chernoff-type tail bound for sums of
independent coupon collector distributions.
| null |
http://arxiv.org/abs/1801.04487v5
|
http://arxiv.org/pdf/1801.04487v5.pdf
| null |
[
"Benjamin Doerr"
] |
[
"Evolutionary Algorithms"
] | 2018-01-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/scaling-neural-machine-translation
|
1806.00187
| null | null |
Scaling Neural Machine Translation
|
Sequence to sequence learning models still require several days to reach
state of the art performance on large benchmark datasets using a single
machine. This paper shows that reduced precision and large batch training can
speedup training by nearly 5x on a single 8-GPU machine with careful tuning and
implementation. On WMT'14 English-German translation, we match the accuracy of
Vaswani et al. (2017) in under 5 hours when training on 8 GPUs and we obtain a
new state of the art of 29.3 BLEU after training for 85 minutes on 128 GPUs. We
further improve these results to 29.8 BLEU by training on the much larger
Paracrawl dataset. On the WMT'14 English-French task, we obtain a
state-of-the-art BLEU of 43.2 in 8.5 hours on 128 GPUs.
|
Sequence to sequence learning models still require several days to reach state of the art performance on large benchmark datasets using a single machine.
|
http://arxiv.org/abs/1806.00187v3
|
http://arxiv.org/pdf/1806.00187v3.pdf
|
WS 2018 10
|
[
"Myle Ott",
"Sergey Edunov",
"David Grangier",
"Michael Auli"
] |
[
"GPU",
"Machine Translation",
"Question Answering",
"Translation"
] | 2018-06-01T00:00:00 |
https://aclanthology.org/W18-6301
|
https://aclanthology.org/W18-6301.pdf
|
scaling-neural-machine-translation-1
| null |
[] |
https://paperswithcode.com/paper/almost-exact-matching-with-replacement-for
|
1806.06802
| null | null |
Interpretable Almost Matching Exactly for Causal Inference
|
We aim to create the highest possible quality of treatment-control matches for categorical data in the potential outcomes framework. Matching methods are heavily used in the social sciences due to their interpretability, but most matching methods do not pass basic sanity checks: they fail when irrelevant variables are introduced, and tend to be either computationally slow or produce low-quality matches. The method proposed in this work aims to match units on a weighted Hamming distance, taking into account the relative importance of the covariates; the algorithm aims to match units on as many relevant variables as possible. To do this, the algorithm creates a hierarchy of covariate combinations on which to match (similar to downward closure), in the process solving an optimization problem for each unit in order to construct the optimal matches. The algorithm uses a single dynamic program to solve all of the optimization problems simultaneously. Notable advantages of our method over existing matching procedures are its high-quality matches, versatility in handling different data distributions that may have irrelevant variables, and ability to handle missing data by matching on as many available covariates as possible.
|
Notable advantages of our method over existing matching procedures are its high-quality matches, versatility in handling different data distributions that may have irrelevant variables, and ability to handle missing data by matching on as many available covariates as possible.
|
https://arxiv.org/abs/1806.06802v6
|
https://arxiv.org/pdf/1806.06802v6.pdf
| null |
[
"Yameng Liu",
"Aw Dieng",
"Sudeepa Roy",
"Cynthia Rudin",
"Alexander Volfovsky"
] |
[
"Causal Inference"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-spatiotemporal-representation-of-the
|
1806.06793
| null | null |
Deep Spatiotemporal Representation of the Face for Automatic Pain Intensity Estimation
|
Automatic pain intensity assessment has a high value in disease diagnosis
applications. Inspired by the fact that many diseases and brain disorders can
interrupt normal facial expression formation, we aim to develop a computational
model for automatic pain intensity assessment from spontaneous and micro facial
variations. For this purpose, we propose a 3D deep architecture for dynamic
facial video representation. The proposed model is built by stacking several
convolutional modules where each module encompasses a 3D convolution kernel
with a fixed temporal depth, several parallel 3D convolutional kernels with
different temporal depths, and an average pooling layer. Deploying variable
temporal depths in the proposed architecture allows the model to effectively
capture a wide range of spatiotemporal variations on the faces. Extensive
experiments on the UNBC-McMaster Shoulder Pain Expression Archive database show
that our proposed model yields promising performance compared to the
state-of-the-art in automatic pain intensity estimation.
| null |
http://arxiv.org/abs/1806.06793v1
|
http://arxiv.org/pdf/1806.06793v1.pdf
| null |
[
"Mohammad Tavakolian",
"Abdenour Hadid"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/73642d9425a358b51a683cf6f95852d06cba1096/torch/nn/modules/conv.py#L421",
"description": "A **3D Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) where the kernel slides in 3 dimensions as opposed to 2 dimensions with 2D convolutions. One example use case is medical imaging where a model is constructed using 3D image slices. Additionally video based data has an additional temporal dimension over images making it suitable for this module. \r\n\r\nImage: Lung nodule detection based on 3D convolutional neural networks, Fan et al",
"full_name": "3D Convolution",
"introduced_year": 2015,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "3D Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
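The average-pooling operation described in the "Average Pooling" method entry above can be illustrated with a minimal, self-contained sketch (the 4x4 input and 2x2 window are illustrative choices, not values from the paper):

```python
import numpy as np

def avg_pool2d(x, k):
    """Non-overlapping k x k average pooling over a 2D feature map."""
    h, w = x.shape
    assert h % k == 0 and w % k == 0, "map size must be divisible by k"
    # Split the map into k x k patches and average within each patch.
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
pooled = avg_pool2d(fmap, 2)
# each output cell is the mean of one 2x2 patch, e.g. mean(0, 1, 4, 5) = 2.5
```

Because each output averages a whole patch, shifting the input by a small amount only mildly changes the pooled values, which is the translation-invariance property the description mentions.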
https://paperswithcode.com/paper/flexible-collaborative-estimation-of-the
|
1806.06784
| null | null |
Robust inference on the average treatment effect using the outcome highly adaptive lasso
|
Many estimators of the average effect of a treatment on an outcome require estimation of the propensity score, the outcome regression, or both. It is often beneficial to utilize flexible techniques such as semiparametric regression or machine learning to estimate these quantities. However, optimal estimation of these regressions does not necessarily lead to optimal estimation of the average treatment effect, particularly in settings with strong instrumental variables. A recent proposal addressed these issues via the outcome-adaptive lasso, a penalized regression technique for estimating the propensity score that seeks to minimize the impact of instrumental variables on treatment effect estimators. However, a notable limitation of this approach is that its application is restricted to parametric models. We propose a more flexible alternative that we call the outcome highly adaptive lasso. We discuss large sample theory for this estimator and propose closed form confidence intervals based on the proposed estimator. We show via simulation that our method offers benefits over several popular approaches.
| null |
https://arxiv.org/abs/1806.06784v3
|
https://arxiv.org/pdf/1806.06784v3.pdf
| null |
[
"Cheng Ju",
"David Benkeser",
"Mark J. Van Der Laan"
] |
[
"regression"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/consistent-individualized-feature-attribution
|
1802.03888
| null | null |
Consistent Individualized Feature Attribution for Tree Ensembles
|
A unified approach to explain the output of any machine learning model.
|
A unified approach to explain the output of any machine learning model.
|
http://arxiv.org/abs/1802.03888v3
|
http://arxiv.org/pdf/1802.03888v3.pdf
| null |
[
"Scott M. Lundberg",
"Gabriel G. Erion",
"Su-In Lee"
] |
[
"BIG-bench Machine Learning"
] | 2018-02-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/bingan-learning-compact-binary-descriptors
|
1806.06778
| null | null |
BinGAN: Learning Compact Binary Descriptors with a Regularized GAN
|
In this paper, we propose a novel regularization method for Generative
Adversarial Networks, which allows the model to learn discriminative yet
compact binary representations of image patches (image descriptors). We employ
the dimensionality reduction that takes place in the intermediate layers of the
discriminator network and train binarized low-dimensional representation of the
penultimate layer to mimic the distribution of the higher-dimensional preceding
layers. To achieve this, we introduce two loss terms that aim at: (i) reducing
the correlation between the dimensions of the binarized low-dimensional
representation of the penultimate layer (i.e. maximizing joint entropy) and
(ii) propagating the relations between the dimensions in the high-dimensional
space to the low-dimensional space. We evaluate the resulting binary image
descriptors on two challenging applications, image matching and retrieval, and
achieve state-of-the-art results.
|
In this paper, we propose a novel regularization method for Generative Adversarial Networks, which allows the model to learn discriminative yet compact binary representations of image patches (image descriptors).
|
http://arxiv.org/abs/1806.06778v5
|
http://arxiv.org/pdf/1806.06778v5.pdf
|
NeurIPS 2018 12
|
[
"Maciej Zieba",
"Piotr Semberecki",
"Tarek El-Gaaly",
"Tomasz Trzcinski"
] |
[
"Dimensionality Reduction",
"Retrieval"
] | 2018-06-18T00:00:00 |
http://papers.nips.cc/paper/7619-bingan-learning-compact-binary-descriptors-with-a-regularized-gan
|
http://papers.nips.cc/paper/7619-bingan-learning-compact-binary-descriptors-with-a-regularized-gan.pdf
|
bingan-learning-compact-binary-descriptors-1
| null |
[] |
https://paperswithcode.com/paper/multifit-a-multivariate-multiscale-framework
|
1806.06777
| null | null |
Multiscale Fisher's Independence Test for Multivariate Dependence
|
Identifying dependency in multivariate data is a common inference task that arises in numerous applications. However, existing nonparametric independence tests typically require computation that scales at least quadratically with the sample size, making it difficult to apply them to massive data. Moreover, resampling is usually necessary to evaluate the statistical significance of the resulting test statistics at finite sample sizes, further worsening the computational burden. We introduce a scalable, resampling-free approach to testing the independence between two random vectors by breaking down the task into simple univariate tests of independence on a collection of 2x2 contingency tables constructed through sequential coarse-to-fine discretization of the sample space, transforming the inference task into a multiple testing problem that can be completed with almost linear complexity with respect to the sample size. To address increasing dimensionality, we introduce a coarse-to-fine sequential adaptive procedure that exploits the spatial features of dependency structures to more effectively examine the sample space. We derive a finite-sample theory that guarantees the inferential validity of our adaptive procedure at any given sample size. In particular, we show that our approach can achieve strong control of the family-wise error rate without resampling or large-sample approximation. We demonstrate the substantial computational advantage of the procedure in comparison to existing approaches as well as its decent statistical power under various dependency scenarios through an extensive simulation study, and illustrate how the divide-and-conquer nature of the procedure can be exploited to not just test independence but to learn the nature of the underlying dependency. Finally, we demonstrate the use of our method through analyzing a large data set from a flow cytometry experiment.
|
Identifying dependency in multivariate data is a common inference task that arises in numerous applications.
|
https://arxiv.org/abs/1806.06777v7
|
https://arxiv.org/pdf/1806.06777v7.pdf
| null |
[
"Shai Gorsky",
"Li Ma"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/kernel-based-outlier-detection-using-the
|
1806.06775
| null | null |
Kernel-based Outlier Detection using the Inverse Christoffel Function
|
Outlier detection methods have become increasingly relevant in recent years
due to increased security concerns and their broad applicability across
different fields. Recently, Pauwels and Lasserre (2016) noticed that the
sublevel sets of the inverse Christoffel function accurately depict the shape
of a cloud of data using a sum-of-squares polynomial and can be used to perform
outlier detection. In this work, we propose a kernelized variant of the inverse
Christoffel function that makes it computationally tractable for data sets with
a large number of features. We compare our approach to current methods on 15
different data sets and achieve the best average area under the precision
recall curve (AUPRC) score, the best average rank and the lowest root mean
square deviation.
| null |
http://arxiv.org/abs/1806.06775v1
|
http://arxiv.org/pdf/1806.06775v1.pdf
| null |
[
"Armin Askari",
"Forest Yang",
"Laurent El Ghaoui"
] |
[
"Outlier Detection"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/kid-net-convolution-networks-for-kidney
|
1806.06769
| null | null |
Kid-Net: Convolution Networks for Kidney Vessels Segmentation from CT-Volumes
|
Semantic image segmentation plays an important role in modeling
patient-specific anatomy. We propose a convolutional neural network, called
Kid-Net, along with a training schema to segment kidney vessels: artery, vein
and collecting system. Such segmentation is vital during the surgical planning
phase in which medical decisions are made before surgical incision. Our main
contribution is developing a training schema that handles unbalanced data,
reduces false positives and enables high-resolution segmentation with a limited
memory budget. These objectives are attained using dynamic weighting, random
sampling and 3D patch segmentation. Manual medical image annotation is both
time-consuming and expensive. Kid-Net reduces kidney vessels segmentation time
from matter of hours to minutes. It is trained end-to-end using 3D patches from
volumetric CT-images. A complete segmentation for a 512x512x512 CT-volume is
obtained within a few minutes (1-2 mins) by stitching the output 3D patches
together. Feature down-sampling and up-sampling are utilized to achieve higher
classification and localization accuracies. Quantitative and qualitative
evaluation results on a challenging testing dataset show Kid-Net competence.
| null |
http://arxiv.org/abs/1806.06769v1
|
http://arxiv.org/pdf/1806.06769v1.pdf
| null |
[
"Ahmed Taha",
"Pechin Lo",
"Junning Li",
"Tao Zhao"
] |
[
"Anatomy",
"Image Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/modularity-matters-learning-invariant
|
1806.06765
| null | null |
Modularity Matters: Learning Invariant Relational Reasoning Tasks
|
We focus on two supervised visual reasoning tasks whose labels encode a
semantic relational rule between two or more objects in an image: the MNIST
Parity task and the colorized Pentomino task. The objects in the images undergo
random translation, scaling, rotation and coloring transformations. Thus these
tasks involve invariant relational reasoning. We report uneven performance of
various deep CNN models on these two tasks. For the MNIST Parity task, we
report that the VGG19 model soundly outperforms a family of ResNet models.
Moreover, the family of ResNet models exhibits a general sensitivity to random
initialization for the MNIST Parity task. For the colorized Pentomino task, now
both the VGG19 and ResNet models exhibit sluggish optimization and very poor
test generalization, hovering around 30% test error. The CNNs we tested all
learn hierarchies of fully distributed features and thus encode the distributed
representation prior. We are motivated by a hypothesis from cognitive
neuroscience which posits that the human visual cortex is modularized, and this
allows the visual cortex to learn higher order invariances. To this end, we
consider a modularized variant of the ResNet model, referred to as a Residual
Mixture Network (ResMixNet) which employs a mixture-of-experts architecture to
interleave distributed representations with more specialized, modular
representations. We show that very shallow ResMixNets are capable of learning
each of the two tasks well, attaining less than 2% and 1% test error on the
MNIST Parity and the colorized Pentomino tasks respectively. Most importantly,
the ResMixNet models are extremely parameter efficient: generalizing better
than various non-modular CNNs that have over 10x the number of parameters.
These experimental results support the hypothesis that modularity is a robust
prior for learning invariant relational reasoning.
| null |
http://arxiv.org/abs/1806.06765v1
|
http://arxiv.org/pdf/1806.06765v1.pdf
| null |
[
"Jason Jo",
"Vikas Verma",
"Yoshua Bengio"
] |
[
"Mixture-of-Experts",
"Relational Reasoning",
"Visual Reasoning"
] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that is linear in the positive dimension and zero in the negative dimension; the kink at zero is the source of the non-linearity. Linearity in the positive dimension prevents saturation of gradients there, although the gradient is zero on the negative half of the real line.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$",
"full_name": "Rectified Linear Unit",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are non-linear functions applied in neural networks, typically after an affine transformation combining weights and input features. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) on top of each other to form the network: e.g. a ResNet-50 has fifty layers using these blocks.",
"full_name": "ResNet",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are used to extract features from images, employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "ResNet",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
}
] |
https://paperswithcode.com/paper/closing-the-generalization-gap-of-adaptive
|
1806.06763
| null | null |
Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks
|
Adaptive gradient methods, which adopt historical gradient information to automatically adjust the learning rate, despite the nice property of fast convergence, have been observed to generalize worse than stochastic gradient descent (SGD) with momentum in training deep neural networks. This leaves how to close the generalization gap of adaptive gradient methods an open problem. In this work, we show that adaptive gradient methods such as Adam and Amsgrad are sometimes "over adapted". We design a new algorithm, called the Partially adaptive momentum estimation method, which unifies Adam/Amsgrad with SGD by introducing a partial adaptive parameter $p$, to achieve the best from both worlds. We also prove the convergence rate of our proposed algorithm to a stationary point in the stochastic nonconvex optimization setting. Experiments on standard benchmarks show that our proposed algorithm can maintain a convergence rate as fast as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks. These results would suggest practitioners pick up adaptive gradient methods once again for faster training of deep neural networks.
|
Experiments on standard benchmarks show that our proposed algorithm can maintain a convergence rate as fast as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks.
|
https://arxiv.org/abs/1806.06763v3
|
https://arxiv.org/pdf/1806.06763v3.pdf
| null |
[
"Jinghui Chen",
"Dongruo Zhou",
"Yiqi Tang",
"Ziyan Yang",
"Yuan Cao",
"Quanquan Gu"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/paultsw/nice_pytorch/blob/15cfc543fc3dc81ee70398b8dfc37b67269ede95/nice/layers.py#L109",
"description": "**Affine Coupling** is a method for implementing a normalizing flow (where we stack a sequence of invertible bijective transformation functions). Affine coupling is one of these bijective transformation functions. Specifically, it is an example of a reversible transformation where the forward function, the reverse function and the log-determinant are computationally efficient. For the forward function, we split the input dimension into two parts:\r\n\r\n$$ \\mathbf{x}\\_{a}, \\mathbf{x}\\_{b} = \\text{split}\\left(\\mathbf{x}\\right) $$\r\n\r\nThe second part stays the same $\\mathbf{x}\\_{b} = \\mathbf{y}\\_{b}$, while the first part $\\mathbf{x}\\_{a}$ undergoes an affine transformation, where the parameters for this transformation are learnt using the second part $\\mathbf{x}\\_{b}$ being put through a neural network. Together we have:\r\n\r\n$$ \\left(\\log{\\mathbf{s}, \\mathbf{t}}\\right) = \\text{NN}\\left(\\mathbf{x}\\_{b}\\right) $$\r\n\r\n$$ \\mathbf{s} = \\exp\\left(\\log{\\mathbf{s}}\\right) $$\r\n\r\n$$ \\mathbf{y}\\_{a} = \\mathbf{s} \\odot \\mathbf{x}\\_{a} + \\mathbf{t} $$\r\n\r\n$$ \\mathbf{y}\\_{b} = \\mathbf{x}\\_{b} $$\r\n\r\n$$ \\mathbf{y} = \\text{concat}\\left(\\mathbf{y}\\_{a}, \\mathbf{y}\\_{b}\\right) $$\r\n\r\nImage: [GLOW](https://paperswithcode.com/method/glow)",
"full_name": "Affine Coupling",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Bijective Transformations** are transformations that are bijective, i.e. they can be reversed. They are used within the context of normalizing flow models. Below you can find a continuously updating list of bijective transformation methods.",
"name": "Bijective Transformation",
"parent": null
},
"name": "Affine Coupling",
"source_title": "NICE: Non-linear Independent Components Estimation",
"source_url": "http://arxiv.org/abs/1410.8516v6"
},
{
"code_snippet_url": "https://github.com/ex4sperans/variational-inference-with-normalizing-flows/blob/922b569f851e02fa74700cd0754fe2ef5c1f3180/flow.py#L9",
"description": "**Normalizing Flows** are a method for constructing complex distributions by transforming a\r\nprobability density through a series of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtain a valid probability distribution and hence this type of flow is referred to as a normalizing flow.\r\n\r\nIn the case of finite flows, the basic rule for the transformation of densities considers an invertible, smooth mapping $f : \\mathbb{R}^{d} \\rightarrow \\mathbb{R}^{d}$ with inverse $f^{-1} = g$, i.e. the composition $g \\cdot f\\left(z\\right) = z$. If we use this mapping to transform a random variable $z$ with distribution $q\\left(z\\right)$, the resulting random variable $z' = f\\left(z\\right)$ has a distribution:\r\n\r\n$$ q\\left(\\mathbf{z}'\\right) = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}^{-1}}{\\delta{\\mathbf{z'}}}\\bigr\\vert = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}}{\\delta{\\mathbf{z}}}\\bigr\\vert ^{-1} $$\r\n\f\r\nwhere the last equality can be seen by applying the chain rule (inverse function theorem) and is a property of Jacobians of invertible functions. We can construct arbitrarily complex densities by composing several simple maps and successively applying the above equation. 
The density $q\\_{K}\\left(\\mathbf{z}\\right)$ obtained by successively transforming a random variable $z\\_{0}$ with distribution $q\\_{0}$ through a chain of $K$ transformations $f\\_{k}$ is:\r\n\r\n$$ z\\_{K} = f\\_{K} \\cdot \\dots \\cdot f\\_{2} \\cdot f\\_{1}\\left(z\\_{0}\\right) $$\r\n\r\n$$ \\ln{q}\\_{K}\\left(z\\_{K}\\right) = \\ln{q}\\_{0}\\left(z\\_{0}\\right) − \\sum^{K}\\_{k=1}\\ln\\vert\\det\\frac{\\delta{f\\_{k}}}{\\delta{\\mathbf{z\\_{k-1}}}}\\vert $$\r\n\f\r\nThe path traversed by the random variables $z\\_{k} = f\\_{k}\\left(z\\_{k-1}\\right)$ with initial distribution $q\\_{0}\\left(z\\_{0}\\right)$ is called the flow and the path formed by the successive distributions $q\\_{k}$ is a normalizing flow.",
"full_name": "Normalizing Flows",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Distribution Approximation** methods aim to approximate a complex distribution. Below you can find a continuously updating list of distribution approximation methods.",
"name": "Distribution Approximation",
"parent": null
},
"name": "Normalizing Flows",
"source_title": "Variational Inference with Normalizing Flows",
"source_url": "http://arxiv.org/abs/1505.05770v6"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
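The Adam method entry above quotes the moment-update and bias-correction equations directly. A minimal scalar sketch of one Adam step, following those equations exactly (real optimizers apply this elementwise over parameter tensors; this is an illustration, not a library implementation):

```python
import math

def adam_step(w, m, v, g, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Update biased first- and second-moment estimates.
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    # Bias-correct the moments (important early on, when t is small).
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # Scaled parameter update.
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# One step from w = 0 with gradient g = 1: bias correction makes the
# ratio m_hat / sqrt(v_hat) equal to 1, so the step size is roughly lr.
w, m, v = adam_step(0.0, 0.0, 0.0, 1.0, t=1)
```

The Padam paper summarized above replaces the exponent on the second-moment denominator with a partial adaptive parameter $p$; the exact form is given in the paper itself.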
https://paperswithcode.com/paper/a-memory-network-approach-for-story-based
|
1805.02838
| null | null |
A Memory Network Approach for Story-based Temporal Summarization of 360° Videos
|
We address the problem of story-based temporal summarization of long
360{\deg} videos. We propose a novel memory network model named Past-Future
Memory Network (PFMN), in which we first compute the scores of 81 normal field
of view (NFOV) region proposals cropped from the input 360{\deg} video, and
then recover a latent, collective summary using the network with two external
memories that store the embeddings of previously selected subshots and future
candidate subshots. Our major contributions are two-fold. First, our work is
the first to address story-based temporal summarization of 360{\deg} videos.
Second, our model is the first attempt to leverage memory networks for video
summarization tasks. For evaluation, we perform three sets of experiments.
First, we investigate the view selection capability of our model on the
Pano2Vid dataset. Second, we evaluate the temporal summarization with a newly
collected 360{\deg} video dataset. Finally, we experiment our model's
performance in another domain, with image-based storytelling VIST dataset. We
verify that our model achieves state-of-the-art performance on all the tasks.
| null |
http://arxiv.org/abs/1805.02838v3
|
http://arxiv.org/pdf/1805.02838v3.pdf
|
CVPR 2018
|
[
"Sang-ho Lee",
"Jinyoung Sung",
"Youngjae Yu",
"Gunhee Kim"
] |
[
"Video Summarization"
] | 2018-05-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/aykutaaykut/Memory-Networks",
"description": "A **Memory Network** provides a memory component that can be read from and written to with the inference capabilities of a neural network model. The motivation is that many neural networks lack a long-term memory component, and their existing memory component encoded by states and weights is too small and not compartmentalized enough to accurately remember facts from the past (RNNs for example, have difficult memorizing and doing tasks like copying). \r\n\r\nA memory network consists of a memory $\\textbf{m}$ (an array of objects indexed by $\\textbf{m}\\_{i}$ and four potentially learned components:\r\n\r\n- Input feature map $I$ - feature representation of the data input.\r\n- Generalization $G$ - updates old memories given the new input.\r\n- Output feature map $O$ - produces new feature map given $I$ and $G$.\r\n- Response $R$ - converts output into the desired response. \r\n\r\nGiven an input $x$ (e.g., an input character, word or sentence depending on the granularity chosen, an image or an audio signal) the flow of the model is as follows:\r\n\r\n1. Convert $x$ to an internal feature representation $I\\left(x\\right)$.\r\n2. Update memories $m\\_{i}$ given the new input: $m\\_{i} = G\\left(m\\_{i}, I\\left(x\\right), m\\right)$, $\\forall{i}$.\r\n3. Compute output features $o$ given the new input and the memory: $o = O\\left(I\\left(x\\right), m\\right)$.\r\n4. Finally, decode output features $o$ to give the final response: $r = R\\left(o\\right)$.\r\n\r\nThis process is applied at both train and test time, if there is a distinction between such phases, that\r\nis, memories are also stored at test time, but the model parameters of $I$, $G$, $O$ and $R$ are not updated. Memory networks cover a wide class of possible implementations. The components $I$, $G$, $O$ and $R$ can potentially use any existing ideas from the machine learning literature.\r\n\r\nImage Source: [Adrian Colyer](https://blog.acolyer.org/2016/03/10/memory-networks/)",
"full_name": "Memory Network",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Working Memory Models** aim to supplement neural networks with a memory module to increase their capability for memorization and allowing them to more easily perform tasks such as retrieving and copying information. Below you can find a continuously updating list of working memory models.",
"name": "Working Memory Models",
"parent": null
},
"name": "Memory Network",
"source_title": "Memory Networks",
"source_url": "http://arxiv.org/abs/1410.3916v11"
}
] |
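The Memory Network entry above describes the four components $I$, $G$, $O$, $R$ and the four-step flow they induce. That flow can be sketched generically, with a toy instantiation to make it concrete (the toy components are invented for illustration and are not from the Memory Networks paper):

```python
def memory_network_step(x, memory, I, G, O, R):
    # 1. Map the input to an internal feature representation.
    feat = I(x)
    # 2. Generalization: update every memory slot given the new input.
    memory = [G(m_i, feat, memory) for m_i in memory]
    # 3. Compute output features from the input features and the memory.
    o = O(feat, memory)
    # 4. Decode the output features into the final response.
    return R(o), memory

# Toy instantiation (purely illustrative): memories are scalars, G nudges
# each slot toward the input, O returns the slot closest to the input.
I = lambda x: float(x)
G = lambda m_i, feat, mem: 0.9 * m_i + 0.1 * feat
O = lambda feat, mem: min(mem, key=lambda m_i: abs(m_i - feat))
R = lambda o: round(o, 3)

answer, memory = memory_network_step(2.0, [0.0, 1.0, 5.0], I, G, O, R)
```

As the description notes, the same flow runs at train and test time; only the parameters of $I$, $G$, $O$, $R$ are frozen at test time.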
https://paperswithcode.com/paper/pots-protective-optimization-technologies
|
1806.02711
| null | null |
POTs: Protective Optimization Technologies
|
Algorithmic fairness aims to address the economic, moral, social, and political impact that digital systems have on populations through solutions that can be applied by service providers. Fairness frameworks do so, in part, by mapping these problems to a narrow definition and assuming the service providers can be trusted to deploy countermeasures. Not surprisingly, these decisions limit fairness frameworks' ability to capture a variety of harms caused by systems. We characterize fairness limitations using concepts from requirements engineering and from social sciences. We show that the focus on algorithms' inputs and outputs misses harms that arise from systems interacting with the world; that the focus on bias and discrimination omits broader harms on populations and their environments; and that relying on service providers excludes scenarios where they are not cooperative or intentionally adversarial. We propose Protective Optimization Technologies (POTs). POTs provide means for affected parties to address the negative impacts of systems in the environment, expanding avenues for political contestation. POTs intervene from outside the system, do not require service providers to cooperate, and can serve to correct, shift, or expose harms that systems impose on populations and their environments. We illustrate the potential and limitations of POTs in two case studies: countering road congestion caused by traffic-beating applications, and recalibrating credit scoring for loan applicants.
|
Fairness frameworks do so, in part, by mapping these problems to a narrow definition and assuming the service providers can be trusted to deploy countermeasures.
|
https://arxiv.org/abs/1806.02711v6
|
https://arxiv.org/pdf/1806.02711v6.pdf
| null |
[
"Bogdan Kulynych",
"Rebekah Overdorf",
"Carmela Troncoso",
"Seda Gürses"
] |
[
"Decision Making",
"Fairness"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/surface-networks
|
1705.10819
| null | null |
Surface Networks
|
We study data-driven representations for three-dimensional triangle meshes,
which are one of the prevalent objects used to represent 3D geometry. Recent
works have developed models that exploit the intrinsic geometry of manifolds
and graphs, namely the Graph Neural Networks (GNNs) and its spectral variants,
which learn from the local metric tensor via the Laplacian operator. Despite
offering excellent sample complexity and built-in invariances, intrinsic
geometry alone is invariant to isometric deformations, making it unsuitable for
many applications. To overcome this limitation, we propose several upgrades to
GNNs to leverage extrinsic differential geometry properties of
three-dimensional surfaces, increasing its modeling power.
In particular, we propose to exploit the Dirac operator, whose spectrum
detects principal curvature directions --- this is in stark contrast with the
classical Laplace operator, which directly measures mean curvature. We coin the
resulting models \emph{Surface Networks (SN)}. We prove that these models
define shape representations that are stable to deformation and to
discretization, and we demonstrate the efficiency and versatility of SNs on two
challenging tasks: temporal prediction of mesh deformations under non-linear
dynamics and generative models using a variational autoencoder framework with
encoders/decoders given by SNs.
|
We study data-driven representations for three-dimensional triangle meshes, which are one of the prevalent objects used to represent 3D geometry.
|
http://arxiv.org/abs/1705.10819v2
|
http://arxiv.org/pdf/1705.10819v2.pdf
|
CVPR 2018 6
|
[
"Ilya Kostrikov",
"Zhongshi Jiang",
"Daniele Panozzo",
"Denis Zorin",
"Joan Bruna"
] |
[
"3D geometry"
] | 2017-05-30T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Kostrikov_Surface_Networks_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Kostrikov_Surface_Networks_CVPR_2018_paper.pdf
|
surface-networks-1
| null |
[
{
"code_snippet_url": "",
"description": "An **Autoencoder** is a neural network trained to reconstruct its own input: an encoder compresses the input into a low-dimensional code and a decoder reconstructs the input from that code. Minimizing the reconstruction error forces the code to retain the most salient factors of variation in the data, which makes autoencoders useful for dimensionality reduction, pretraining, and generative modelling.",
"full_name": "AutoEncoder",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "AutoEncoder",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] |
https://paperswithcode.com/paper/extracting-automata-from-recurrent-neural
|
1711.09576
| null | null |
Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples
|
We present a novel algorithm that uses exact learning and abstraction to extract a deterministic finite automaton describing the state dynamics of a given trained RNN. We do this using Angluin's L* algorithm as a learner and the trained RNN as an oracle. Our technique efficiently extracts accurate automata from trained RNNs, even when the state vectors are large and require fine differentiation.
|
We do this using Angluin's L* algorithm as a learner and the trained RNN as an oracle.
|
https://arxiv.org/abs/1711.09576v4
|
https://arxiv.org/pdf/1711.09576v4.pdf
|
ICML 2018 7
|
[
"Gail Weiss",
"Yoav Goldberg",
"Eran Yahav"
] |
[] | 2017-11-27T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2276
|
http://proceedings.mlr.press/v80/weiss18a/weiss18a.pdf
|
extracting-automata-from-recurrent-neural-1
| null |
[] |
https://paperswithcode.com/paper/unsupervised-word-segmentation-from-speech
|
1806.06734
| null | null |
Unsupervised Word Segmentation from Speech with Attention
|
We present a first attempt to perform attentional word segmentation directly
from the speech signal, with the final goal to automatically identify lexical
units in a low-resource, unwritten language (UL). Our methodology assumes a
pairing between recordings in the UL with translations in a well-resourced
language. It uses Acoustic Unit Discovery (AUD) to convert speech into a
sequence of pseudo-phones that is segmented using neural soft-alignments
produced by a neural machine translation model. Evaluation uses an actual Bantu
UL, Mboshi; comparisons to monolingual and bilingual baselines illustrate the
potential of attentional word segmentation for language documentation.
| null |
http://arxiv.org/abs/1806.06734v1
|
http://arxiv.org/pdf/1806.06734v1.pdf
| null |
[
"Pierre Godard",
"Marcely Zanon-Boito",
"Lucas Ondel",
"Alexandre Berard",
"François Yvon",
"Aline Villavicencio",
"Laurent Besacier"
] |
[
"Acoustic Unit Discovery",
"Machine Translation",
"Segmentation",
"Translation"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/semantically-selective-augmentation-for-deep
|
1806.04074
| null | null |
Semantically Selective Augmentation for Deep Compact Person Re-Identification
|
We present a deep person re-identification approach that combines
semantically selective, deep data augmentation with clustering-based network
compression to generate high performance, light and fast inference networks. In
particular, we propose to augment limited training data via sampling from a
deep convolutional generative adversarial network (DCGAN), whose discriminator
is constrained by a semantic classifier to explicitly control the domain
specificity of the generation process. Thereby, we encode information in the
classifier network which can be utilized to steer adversarial synthesis, and
which fuels our CondenseNet ID-network training. We provide a quantitative and
qualitative analysis of the approach and its variants on a number of datasets,
obtaining results that outperform the state-of-the-art on the LIMA dataset for
long-term monitoring in indoor living spaces.
| null |
http://arxiv.org/abs/1806.04074v3
|
http://arxiv.org/pdf/1806.04074v3.pdf
| null |
[
"Víctor Ponce-López",
"Tilo Burghardt",
"Sion Hannunna",
"Dima Damen",
"Alessandro Masullo",
"Majid Mirmehdi"
] |
[
"Clustering",
"Data Augmentation",
"Generative Adversarial Network",
"Person Re-Identification",
"Specificity"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/assessing-robustness-of-radiomic-features-by
|
1806.06719
| null | null |
Assessing robustness of radiomic features by image perturbation
|
Image features need to be robust against differences in positioning,
acquisition and segmentation to ensure reproducibility. Radiomic models that
only include robust features can be used to analyse new images, whereas models
with non-robust features may fail to predict the outcome of interest
accurately. Test-retest imaging is recommended to assess robustness, but may
not be available for the phenotype of interest. We therefore investigated 18
methods to determine feature robustness based on image perturbations.
Test-retest and perturbation robustness were compared for 4032 features that
were computed from the gross tumour volume in two cohorts with computed
tomography imaging: I) 31 non-small-cell lung cancer (NSCLC) patients; II): 19
head-and-neck squamous cell carcinoma (HNSCC) patients. Robustness was measured
using the intraclass correlation coefficient (1,1) (ICC). Features with
ICC$\geq0.90$ were considered robust. The NSCLC cohort contained more robust
features for test-retest imaging than the HNSCC cohort ($73.5\%$ vs. $34.0\%$).
A perturbation chain consisting of noise addition, affine translation, volume
growth/shrinkage and supervoxel-based contour randomisation identified the
fewest false positive robust features (NSCLC: $3.3\%$; HNSCC: $10.0\%$). Thus,
this perturbation chain may be used to assess feature robustness.
| null |
http://arxiv.org/abs/1806.06719v1
|
http://arxiv.org/pdf/1806.06719v1.pdf
| null |
[
"Alex Zwanenburg",
"Stefan Leger",
"Linda Agolli",
"Karoline Pilz",
"Esther G. C. Troost",
"Christian Richter",
"Steffen Löck"
] |
[
"Translation"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/reconvnet-video-object-segmentation-with
|
1806.05510
| null | null |
ReConvNet: Video Object Segmentation with Spatio-Temporal Features Modulation
|
We introduce ReConvNet, a recurrent convolutional architecture for
semi-supervised video object segmentation that is able to fast adapt its
features to focus on any specific object of interest at inference time.
Generalization to new objects never observed during training is known to be a
hard task for supervised approaches that would need to be retrained. To tackle
this problem, we propose a more efficient solution that learns spatio-temporal
features self-adapting to the object of interest via conditional affine
transformations. This approach is simple, can be trained end-to-end and does
not necessarily require extra training steps at inference time. Our method
shows competitive results on DAVIS2016 with respect to state-of-the art
approaches that use online fine-tuning, and outperforms them on DAVIS2017.
ReConvNet shows also promising results on the DAVIS-Challenge 2018 winning the
$10$-th position.
| null |
http://arxiv.org/abs/1806.05510v2
|
http://arxiv.org/pdf/1806.05510v2.pdf
| null |
[
"Francesco Lattari",
"Marco Ciccone",
"Matteo Matteucci",
"Jonathan Masci",
"Francesco Visin"
] |
[
"Object",
"Position",
"Semantic Segmentation",
"Semi-Supervised Video Object Segmentation",
"Video Object Segmentation",
"Video Semantic Segmentation"
] | 2018-06-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/tree-edit-distance-learning-via-adaptive-1
|
1806.05009
| null | null |
Tree Edit Distance Learning via Adaptive Symbol Embeddings
|
Metric learning has the aim to improve classification accuracy by learning a
distance measure which brings data points from the same class closer together
and pushes data points from different classes further apart. Recent research
has demonstrated that metric learning approaches can also be applied to trees,
such as molecular structures, abstract syntax trees of computer programs, or
syntax trees of natural language, by learning the cost function of an edit
distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree.
However, learning such costs directly may yield an edit distance which violates
metric axioms, is challenging to interpret, and may not generalize well. In
this contribution, we propose a novel metric learning approach for trees which
we call embedding edit distance learning (BEDL) and which learns an edit
distance indirectly by embedding the tree nodes as vectors, such that the
Euclidean distance between those vectors supports class discrimination. We
learn such embeddings by reducing the distance to prototypical trees from the
same class and increasing the distance to prototypical trees from different
classes. In our experiments, we show that BEDL improves upon the
state-of-the-art in metric learning for trees on six benchmark data sets,
ranging from computer science over biomedical data to a natural-language
processing data set containing over 300,000 nodes.
| null |
http://arxiv.org/abs/1806.05009v3
|
http://arxiv.org/pdf/1806.05009v3.pdf
|
ICML 2018 7
|
[
"Benjamin Paaßen",
"Claudio Gallicchio",
"Alessio Micheli",
"Barbara Hammer"
] |
[
"Metric Learning"
] | 2018-06-13T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2180
|
http://proceedings.mlr.press/v80/paassen18a/paassen18a.pdf
|
tree-edit-distance-learning-via-adaptive-2
| null |
[] |
https://paperswithcode.com/paper/towards-multi-instrument-drum-transcription
|
1806.06676
| null | null |
Towards multi-instrument drum transcription
|
Automatic drum transcription, a subtask of the more general automatic music
transcription, deals with extracting drum instrument note onsets from an audio
source. Recently, progress in transcription performance has been made using
non-negative matrix factorization as well as deep learning methods. However,
these works primarily focus on transcribing three drum instruments only: snare
drum, bass drum, and hi-hat. Yet, for many applications, the ability to
transcribe more drum instruments which make up standard drum kits used in
western popular music would be desirable. In this work, convolutional and
convolutional recurrent neural networks are trained to transcribe a wider range
of drum instruments. First, the shortcomings of publicly available datasets in
this context are discussed. To overcome these limitations, a larger synthetic
dataset is introduced. Then, methods to train models using the new dataset
focusing on generalization to real world data are investigated. Finally, the
trained models are evaluated on publicly available datasets and results are
discussed. The contributions of this work comprise: (i.) a large-scale
synthetic dataset for drum transcription, (ii.) first steps towards an
automatic drum transcription system that supports a larger range of instruments
by evaluating and discussing training setups and the impact of datasets in this
context, and (iii.) a publicly available set of trained models for drum
transcription. Additional materials are available at
http://ifs.tuwien.ac.at/~vogl/dafx2018
|
In this work, convolutional and convolutional recurrent neural networks are trained to transcribe a wider range of drum instruments.
|
http://arxiv.org/abs/1806.06676v2
|
http://arxiv.org/pdf/1806.06676v2.pdf
| null |
[
"Richard Vogl",
"Gerhard Widmer",
"Peter Knees"
] |
[
"Drum Transcription",
"Music Transcription"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/subword-and-crossword-units-for-ctc-acoustic
|
1712.06855
| null | null |
Subword and Crossword Units for CTC Acoustic Models
|
This paper proposes a novel approach to create a unit set for CTC based
speech recognition systems. By using Byte Pair Encoding we learn a unit set of
an arbitrary size on a given training text. In contrast to using characters or
words as units, this allows us to find a good trade-off between the size of our
unit set and the available training data. We evaluate both Crossword units,
which may span multiple words, and Subword units. By combining this approach with
decoding methods using a separate language model we are able to achieve
state-of-the-art results for grapheme based CTC systems.
| null |
http://arxiv.org/abs/1712.06855v2
|
http://arxiv.org/pdf/1712.06855v2.pdf
| null |
[
"Thomas Zenkel",
"Ramon Sanabria",
"Florian Metze",
"Alex Waibel"
] |
[
"Language Modeling",
"Language Modelling",
"speech-recognition",
"Speech Recognition"
] | 2017-12-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/cardinality-leap-for-open-ended-evolution
|
1806.06628
| null | null |
Cardinality Leap for Open-Ended Evolution: Theoretical Consideration and Demonstration by "Hash Chemistry"
|
Open-ended evolution requires unbounded possibilities that evolving entities
can explore. The cardinality of a set of those possibilities thus has a
significant implication for the open-endedness of evolution. We propose that
facilitating formation of higher-order entities is a generalizable, effective
way to cause a "cardinality leap" in the set of possibilities that promotes
open-endedness. We demonstrate this idea with a simple, proof-of-concept toy
model called "Hash Chemistry" that uses a hash function as a fitness evaluator
of evolving entities of any size/order. Simulation results showed that the
cumulative number of unique replicating entities that appeared in evolution
increased almost linearly over time without an apparent bound, demonstrating
the effectiveness of the proposed cardinality leap. It was also observed that
the number of individual entities involved in a single replication event
gradually increased over time, indicating evolutionary appearance of
higher-order entities. Moreover, these behaviors were not observed in control
experiments in which fitness evaluators were replaced by random number
generators. This strongly suggests that the dynamics observed in Hash Chemistry
were indeed evolutionary behaviors driven by selection and adaptation taking
place at multiple scales.
| null |
http://arxiv.org/abs/1806.06628v4
|
http://arxiv.org/pdf/1806.06628v4.pdf
| null |
[
"Hiroki Sayama"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/warp-wavelets-with-adaptive-recursive
|
1711.00789
| null | null |
Learning Asymmetric and Local Features in Multi-Dimensional Data through Wavelets with Recursive Partitioning
|
Effective learning of asymmetric and local features in images and other data observed on multi-dimensional grids is a challenging objective critical for a wide range of image processing applications involving biomedical and natural images. It requires methods that are sensitive to local details while fast enough to handle massive numbers of images of ever increasing sizes. We introduce a probabilistic model-based framework that achieves these objectives by incorporating adaptivity into discrete wavelet transforms (DWT) through Bayesian hierarchical modeling, thereby allowing wavelet bases to adapt to the geometric structure of the data while maintaining the high computational scalability of wavelet methods---linear in the sample size (e.g., the resolution of an image). We derive a recursive representation of the Bayesian posterior model which leads to an exact message passing algorithm to complete learning and inference. While our framework is applicable to a range of problems including multi-dimensional signal processing, compression, and structural learning, we illustrate its work and evaluate its performance in the context of image reconstruction using real images from the ImageNet database, two widely used benchmark datasets, and a dataset from retinal optical coherence tomography and compare its performance to state-of-the-art methods based on basis transforms and deep learning.
|
Effective learning of asymmetric and local features in images and other data observed on multi-dimensional grids is a challenging objective critical for a wide range of image processing applications involving biomedical and natural images.
|
https://arxiv.org/abs/1711.00789v5
|
https://arxiv.org/pdf/1711.00789v5.pdf
| null |
[
"Meng Li",
"Li Ma"
] |
[
"Bayesian Inference",
"Image Reconstruction"
] | 2017-11-02T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-enhancing-speech-emotion-recognition-using
|
1806.06626
| null | null |
On Enhancing Speech Emotion Recognition using Generative Adversarial Networks
|
Generative Adversarial Networks (GANs) have gained a lot of attention from
machine learning community due to their ability to learn and mimic an input
data distribution. GANs consist of a discriminator and a generator working in
tandem, playing a min-max game to learn a target underlying data distribution
when fed with data points sampled from a simpler distribution (such as a uniform
or Gaussian distribution). Once trained, they allow synthetic generation of
examples sampled from the target distribution. We investigate the application
of GANs to generate synthetic feature vectors used for speech emotion
recognition. Specifically, we investigate two set ups: (i) a vanilla GAN that
learns the distribution of a lower dimensional representation of the actual
higher dimensional feature vector and, (ii) a conditional GAN that learns the
distribution of the higher dimensional feature vectors conditioned on the
labels or the emotional class to which it belongs. As a potential practical
application of these synthetically generated samples, we measure any
improvement in a classifier's performance when the synthetic data is used along
with real data for training. We perform cross-validation analyses followed by a
cross-corpus study.
| null |
http://arxiv.org/abs/1806.06626v1
|
http://arxiv.org/pdf/1806.06626v1.pdf
| null |
[
"Saurabh Sahu",
"Rahul Gupta",
"Carol Espy-Wilson"
] |
[
"Cross-corpus",
"Emotion Recognition",
"Speech Emotion Recognition"
] | 2018-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. 
Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/banach-wasserstein-gan
|
1806.06621
| null | null |
Banach Wasserstein GAN
|
Wasserstein Generative Adversarial Networks (WGANs) can be used to generate
realistic samples from complicated image distributions. The Wasserstein metric
used in WGANs is based on a notion of distance between individual images, which
induces a notion of distance between probability distributions of images. So
far the community has considered $\ell^2$ as the underlying distance. We
generalize the theory of WGAN with gradient penalty to Banach spaces, allowing
practitioners to select the features to emphasize in the generator. We further
discuss the effect of some particular choices of underlying norms, focusing on
Sobolev norms. Finally, we demonstrate a boost in performance for an
appropriate choice of norm on CIFAR-10 and CelebA.
|
Wasserstein Generative Adversarial Networks (WGANs) can be used to generate realistic samples from complicated image distributions.
|
http://arxiv.org/abs/1806.06621v2
|
http://arxiv.org/pdf/1806.06621v2.pdf
|
NeurIPS 2018 12
|
[
"Jonas Adler",
"Sebastian Lunz"
] |
[] | 2018-06-18T00:00:00 |
http://papers.nips.cc/paper/7909-banach-wasserstein-gan
|
http://papers.nips.cc/paper/7909-banach-wasserstein-gan.pdf
|
banach-wasserstein-gan-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/daheyinyin/wgan",
"description": "**Wasserstein GAN**, or **WGAN**, is a type of generative adversarial network that minimizes an approximation of the Earth-Mover's distance (EM) rather than the Jensen-Shannon divergence as in the original [GAN](https://paperswithcode.com/method/gan) formulation. It leads to more stable training than original GANs with less evidence of mode collapse, as well as meaningful curves that can be used for debugging and searching hyperparameters.",
"full_name": "Wasserstein GAN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Adversarial Networks (GANs)** are a type of generative model that use two networks, a generator to generate images and a discriminator to discriminate between real and fake, to train a model that approximates the distribution of the data. Below you can find a continuously updating list of GANs.",
"name": "Generative Adversarial Networks",
"parent": "Generative Models"
},
"name": "WGAN",
"source_title": "Wasserstein GAN",
"source_url": "http://arxiv.org/abs/1701.07875v3"
}
] |
https://paperswithcode.com/paper/comparison-based-random-forests
|
1806.06616
| null | null |
Comparison-Based Random Forests
|
Assume we are given a set of items from a general metric space, but we
neither have access to the representation of the data nor to the distances
between data points. Instead, suppose that we can actively choose a triplet of
items (A,B,C) and ask an oracle whether item A is closer to item B or to item
C. In this paper, we propose a novel random forest algorithm for regression and
classification that relies only on such triplet comparisons. In the theory part
of this paper, we establish sufficient conditions for the consistency of such a
forest. In a set of comprehensive experiments, we then demonstrate that the
proposed random forest is efficient both for classification and regression. In
particular, it is even competitive with other methods that have direct access
to the metric representation of the data.
| null |
http://arxiv.org/abs/1806.06616v1
|
http://arxiv.org/pdf/1806.06616v1.pdf
|
ICML 2018 7
|
[
"Siavash Haghiri",
"Damien Garreau",
"Ulrike Von Luxburg"
] |
[
"General Classification",
"regression",
"Triplet"
] | 2018-06-18T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1979
|
http://proceedings.mlr.press/v80/haghiri18a/haghiri18a.pdf
|
comparison-based-random-forests-1
| null |
[] |
https://paperswithcode.com/paper/on-multi-resident-activity-recognition-in
|
1806.06611
| null | null |
On Multi-resident Activity Recognition in Ambient Smart-Homes
|
Increasing attention to the research on activity monitoring in smart homes
has motivated the employment of ambient intelligence to reduce the deployment
cost and solve the privacy issue. Several approaches have been proposed for
multi-resident activity recognition; however, a comprehensive benchmark for
future research and practical model selection is still lacking. In this paper
we study different methods for multi-resident activity recognition and evaluate
them on the same sets of data. The experimental results show that a recurrent
neural network with gated recurrent units outperforms the other models while
remaining considerably efficient, and that using combined activities as single
labels is more effective than representing them as separate labels.
| null |
http://arxiv.org/abs/1806.06611v1
|
http://arxiv.org/pdf/1806.06611v1.pdf
| null |
[
"Son N. Tran",
"Qing Zhang",
"Mohan Karunanithi"
] |
[
"Activity Recognition"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/evaluating-and-characterizing-incremental
|
1806.06610
| null | null |
Evaluating and Characterizing Incremental Learning from Non-Stationary Data
|
Incremental learning from non-stationary data poses special challenges to the
field of machine learning. Although new algorithms have been developed for
this, assessment of results and comparison of behaviors are still open
problems, mainly because evaluation metrics, adapted from more traditional
tasks, can be ineffective in this context. Overall, there is a lack of common
testing practices. This paper thus presents a testbed for incremental
non-stationary learning algorithms, based on specially designed synthetic
datasets. Also, test results are reported for some well-known algorithms to
show that the proposed methodology is effective at characterizing their
strengths and weaknesses. It is expected that this methodology will provide a
common basis for evaluating future contributions in the field.
| null |
http://arxiv.org/abs/1806.06610v1
|
http://arxiv.org/pdf/1806.06610v1.pdf
| null |
[
"Alejandro Cervantes",
"Christian Gagné",
"Pedro Isasi",
"Marc Parizeau"
] |
[
"Incremental Learning"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/quantized-compressive-k-means
|
1804.10109
| null | null |
Quantized Compressive K-Means
|
The recent framework of compressive statistical learning aims at designing
tractable learning algorithms that use only a heavily compressed
representation-or sketch-of massive datasets. Compressive K-Means (CKM) is such
a method: it estimates the centroids of data clusters from pooled, non-linear,
random signatures of the learning examples. While this approach significantly
reduces computational time on very large datasets, its digital implementation
wastes acquisition resources because the learning examples are compressed only
after the sensing stage. The present work generalizes the sketching procedure
initially defined in Compressive K-Means to a large class of periodic
nonlinearities including hardware-friendly implementations that compressively
acquire entire datasets. This idea is exemplified in a Quantized Compressive
K-Means procedure, a variant of CKM that leverages 1-bit universal quantization
(i.e. retaining the least significant bit of a standard uniform quantizer) as
the periodic sketch nonlinearity. Trading for this resource-efficient signature
(standard in most acquisition schemes) has almost no impact on the clustering
performances, as illustrated by numerical experiments.
| null |
http://arxiv.org/abs/1804.10109v2
|
http://arxiv.org/pdf/1804.10109v2.pdf
| null |
[
"Vincent Schellekens",
"Laurent Jacques"
] |
[
"Clustering",
"Quantization"
] | 2018-04-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/self-attentional-acoustic-models
|
1803.09519
| null | null |
Self-Attentional Acoustic Models
|
Self-attention is a method of encoding sequences of vectors by relating these
vectors to each other based on pairwise similarities. These models have
recently shown promising results for modeling discrete sequences, but they are
non-trivial to apply to acoustic modeling due to computational and modeling
issues. In this paper, we apply self-attention to acoustic modeling, proposing
several improvements to mitigate these issues: First, self-attention memory
grows quadratically in the sequence length, which we address through a
downsampling technique. Second, we find that previous approaches to incorporate
position information into the model are unsuitable and explore other
representations and hybrid models to this end. Third, to stress the importance
of local context in the acoustic signal, we propose a Gaussian biasing approach
that allows explicit control over the context range. Experiments find that our
model approaches a strong baseline based on LSTMs with network-in-network
connections while being much faster to compute. Besides speed, we find that
interpretability is a strength of self-attentional acoustic models, and
demonstrate that self-attention heads learn a linguistically plausible division
of labor.
|
Self-attention is a method of encoding sequences of vectors by relating these vectors to each other based on pairwise similarities.
|
http://arxiv.org/abs/1803.09519v2
|
http://arxiv.org/pdf/1803.09519v2.pdf
| null |
[
"Matthias Sperber",
"Jan Niehues",
"Graham Neubig",
"Sebastian Stüker",
"Alex Waibel"
] |
[] | 2018-03-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/snap-ml-a-hierarchical-framework-for-machine
|
1803.06333
| null | null |
Snap ML: A Hierarchical Framework for Machine Learning
|
We describe a new software framework for fast training of generalized linear
models. The framework, named Snap Machine Learning (Snap ML), combines recent
advances in machine learning systems and algorithms in a nested manner to
reflect the hierarchical architecture of modern computing systems. We prove
theoretically that such a hierarchical system can accelerate training in
distributed environments where intra-node communication is cheaper than
inter-node communication. Additionally, we provide a review of the
implementation of Snap ML in terms of GPU acceleration, pipelining,
communication patterns and software architecture, highlighting aspects that
were critical for achieving high performance. We evaluate the performance of
Snap ML in both single-node and multi-node environments, quantifying the
benefit of the hierarchical scheme and the data streaming functionality, and
comparing with other widely-used machine learning software frameworks. Finally,
we present a logistic regression benchmark on the Criteo Terabyte Click Logs
dataset and show that Snap ML achieves the same test loss an order of magnitude
faster than any of the previously reported results, including those obtained
using TensorFlow and scikit-learn.
| null |
http://arxiv.org/abs/1803.06333v3
|
http://arxiv.org/pdf/1803.06333v3.pdf
|
NeurIPS 2018 12
|
[
"Celestine Dünner",
"Thomas Parnell",
"Dimitrios Sarigiannis",
"Nikolas Ioannou",
"Andreea Anghel",
"Gummadi Ravi",
"Madhusudanan Kandasamy",
"Haralampos Pozidis"
] |
[
"BIG-bench Machine Learning",
"GPU"
] | 2018-03-16T00:00:00 |
http://papers.nips.cc/paper/7309-snap-ml-a-hierarchical-framework-for-machine-learning
|
http://papers.nips.cc/paper/7309-snap-ml-a-hierarchical-framework-for-machine-learning.pdf
|
snap-ml-a-hierarchical-framework-for-machine-1
| null |
[
{
"code_snippet_url": null,
"description": "**Logistic Regression**, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.\r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)\r\n\r\nImage: [Michaelg2015](https://commons.wikimedia.org/wiki/User:Michaelg2015)",
"full_name": "Logistic Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Logistic Regression",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/multilingual-bottleneck-features-for-subword
|
1803.08863
| null | null |
Multilingual bottleneck features for subword modeling in zero-resource languages
|
How can we effectively develop speech technology for languages where no
transcribed data is available? Many existing approaches use no annotated
resources at all, yet it makes sense to leverage information from large
annotated corpora in other languages, for example in the form of multilingual
bottleneck features (BNFs) obtained from a supervised speech recognition
system. In this work, we evaluate the benefits of BNFs for subword modeling
(feature extraction) in six unseen languages on a word discrimination task.
First we establish a strong unsupervised baseline by combining two existing
methods: vocal tract length normalisation (VTLN) and the correspondence
autoencoder (cAE). We then show that BNFs trained on a single language already
beat this baseline; including up to 10 languages results in additional
improvements which cannot be matched by just adding more data from a single
language. Finally, we show that the cAE can improve further on the BNFs if
high-quality same-word pairs are available.
|
How can we effectively develop speech technology for languages where no transcribed data is available?
|
http://arxiv.org/abs/1803.08863v2
|
http://arxiv.org/pdf/1803.08863v2.pdf
| null |
[
"Enno Hermann",
"Sharon Goldwater"
] |
[
"speech-recognition",
"Speech Recognition"
] | 2018-03-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-to-write-stylized-chinese-characters
|
1712.06424
| null | null |
Learning to Write Stylized Chinese Characters by Reading a Handful of Examples
|
Automatically writing stylized Chinese characters is an attractive yet
challenging task due to its wide applicability. In this paper, we propose a
novel framework named Style-Aware Variational Auto-Encoder (SA-VAE) to flexibly
generate Chinese characters. Specifically, we propose to capture the different
characteristics of a Chinese character by disentangling the latent features
into content-related and style-related components. Considering the complex
shapes and structures, we incorporate the structure information as prior
knowledge into our framework to guide the generation. Our framework shows a
powerful one-shot/low-shot generalization ability by inferring the style
component given a character with unseen style. To the best of our knowledge,
this is the first attempt to learn to write new-style Chinese characters by
observing only one or a few examples. Extensive experiments demonstrate its
effectiveness in generating different stylized Chinese characters by fusing the
feature vectors corresponding to different contents and styles, which is of
significant importance in real-world applications.
| null |
http://arxiv.org/abs/1712.06424v3
|
http://arxiv.org/pdf/1712.06424v3.pdf
| null |
[
"Danyang Sun",
"Tongzheng Ren",
"Chongxun Li",
"Hang Su",
"Jun Zhu"
] |
[] | 2017-12-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ipose-instance-aware-6d-pose-estimation-of
|
1712.01924
| null | null |
iPose: Instance-Aware 6D Pose Estimation of Partly Occluded Objects
|
We address the task of 6D pose estimation of known rigid objects from single
input images in scenarios where the objects are partly occluded. Recent
RGB-D-based methods are robust to moderate degrees of occlusion. For RGB
inputs, no previous method works well for partly occluded objects. Our main
contribution is to present the first deep learning-based system that estimates
accurate poses for partly occluded objects from RGB-D and RGB input. We achieve
this with a new instance-aware pipeline that decomposes 6D object pose
estimation into a sequence of simpler steps, where each step removes specific
aspects of the problem. The first step localizes all known objects in the image
using an instance segmentation network, and hence eliminates surrounding
clutter and occluders. The second step densely maps pixels to 3D object surface
positions, so called object coordinates, using an encoder-decoder network, and
hence eliminates object appearance. The third, and final, step predicts the 6D
pose using geometric optimization. We demonstrate that we significantly
outperform the state-of-the-art for pose estimation of partly occluded objects
for both RGB and RGB-D input.
| null |
http://arxiv.org/abs/1712.01924v3
|
http://arxiv.org/pdf/1712.01924v3.pdf
| null |
[
"Omid Hosseini Jafari",
"Siva Karthik Mustikovela",
"Karl Pertsch",
"Eric Brachmann",
"Carsten Rother"
] |
[
"6D Pose Estimation",
"6D Pose Estimation using RGB",
"Decoder",
"Instance Segmentation",
"Object",
"Pose Estimation",
"Semantic Segmentation"
] | 2017-12-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/uncertainty-in-multitask-learning-joint
|
1806.06595
| null | null |
Uncertainty in multitask learning: joint representations for probabilistic MR-only radiotherapy planning
|
Multi-task neural network architectures provide a mechanism that jointly
integrates information from distinct sources. It is ideal in the context of
MR-only radiotherapy planning as it can jointly regress a synthetic CT (synCT)
scan and segment organs-at-risk (OAR) from MRI. We propose a probabilistic
multi-task network that estimates: 1) intrinsic uncertainty through a
heteroscedastic noise model for spatially-adaptive task loss weighting and 2)
parameter uncertainty through approximate Bayesian inference. This allows
sampling of multiple segmentations and synCTs that share their network
representation. We test our model on prostate cancer scans and show that it
produces more accurate and consistent synCTs with a better estimation in the
variance of the errors, state of the art results in OAR segmentation and a
methodology for quality assurance in radiotherapy treatment planning.
| null |
http://arxiv.org/abs/1806.06595v1
|
http://arxiv.org/pdf/1806.06595v1.pdf
| null |
[
"Felix J. S. Bragman",
"Ryutaro Tanno",
"Zach Eaton-Rosen",
"Wenqi Li",
"David J. Hawkes",
"Sebastien Ourselin",
"Daniel C. Alexander",
"Jamie R. McClelland",
"M. Jorge Cardoso"
] |
[
"Bayesian Inference"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-recurrent-neural-network-for-multi
|
1806.06594
| null | null |
Deep Recurrent Neural Network for Multi-target Filtering
|
This paper addresses the problem of fixed motion and measurement models for
multi-target filtering using an adaptive learning framework. This is performed
by defining target tuples with random finite set terminology and utilisation of
recurrent neural networks with a long short-term memory architecture. A novel
data association algorithm compatible with the predicted tracklet tuples is
proposed, enabling the update of occluded targets, in addition to assigning
birth, survival and death of targets. The algorithm is evaluated over a
commonly used filtering simulation scenario, with highly promising results.
| null |
http://arxiv.org/abs/1806.06594v2
|
http://arxiv.org/pdf/1806.06594v2.pdf
| null |
[
"Mehryar Emambakhsh",
"Alessandro Bay",
"Eduard Vazquez"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/low-resource-speech-to-text-translation
|
1803.09164
| null | null |
Low-Resource Speech-to-Text Translation
|
Speech-to-text translation has many potential applications for low-resource
languages, but the typical approach of cascading speech recognition with
machine translation is often impossible, since the transcripts needed to train
a speech recognizer are usually not available for low-resource languages.
Recent work has found that neural encoder-decoder models can learn to directly
translate foreign speech in high-resource scenarios, without the need for
intermediate transcription. We investigate whether this approach also works in
settings where both data and computation are limited. To make the approach
efficient, we make several architectural changes, including a change from
character-level to word-level decoding. We find that this choice yields crucial
speed improvements that allow us to train with fewer computational resources,
yet still performs well on frequent words. We explore models trained on between
20 and 160 hours of data, and find that although models trained on less data
have considerably lower BLEU scores, they can still predict words with
relatively high precision and recall---around 50% for a model trained on 50
hours of data, versus around 60% for the full 160 hour model. Thus, they may
still be useful for some low-resource scenarios.
| null |
http://arxiv.org/abs/1803.09164v2
|
http://arxiv.org/pdf/1803.09164v2.pdf
| null |
[
"Sameer Bansal",
"Herman Kamper",
"Karen Livescu",
"Adam Lopez",
"Sharon Goldwater"
] |
[
"Decoder",
"Machine Translation",
"speech-recognition",
"Speech Recognition",
"Speech-to-Text",
"Speech-to-Text Translation",
"Translation"
] | 2018-03-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/computational-theories-of-curiosity-driven
|
1802.10546
| null | null |
Computational Theories of Curiosity-Driven Learning
|
What are the functions of curiosity? What are the mechanisms of
curiosity-driven learning? We approach these questions about the living using
concepts and tools from machine learning and developmental robotics. We argue
that curiosity-driven learning enables organisms to make discoveries to solve
complex problems with rare or deceptive rewards. By fostering exploration and
discovery of a diversity of behavioural skills, and ignoring these rewards,
curiosity can be efficient to bootstrap learning when there is no information,
or deceptive information, about local improvement towards these problems. We
also explain the key role of curiosity for efficient learning of world models.
We review both normative and heuristic computational frameworks used to
understand the mechanisms of curiosity in humans, conceptualizing the child as
a sense-making organism. These frameworks enable us to discuss the
bi-directional causal links between curiosity and learning, and to provide new
hypotheses about the fundamental role of curiosity in self-organizing
developmental structures through curriculum learning. We present various
developmental robotics experiments that study these mechanisms in action, both
supporting these hypotheses to understand better curiosity in humans and
opening new research avenues in machine learning and artificial intelligence.
Finally, we discuss challenges for the design of experimental paradigms for
studying curiosity in psychology and cognitive neuroscience.
Keywords: Curiosity, intrinsic motivation, lifelong learning, predictions,
world model, rewards, free-energy principle, learning progress, machine
learning, AI, developmental robotics, development, curriculum learning,
self-organization.
| null |
http://arxiv.org/abs/1802.10546v2
|
http://arxiv.org/pdf/1802.10546v2.pdf
| null |
[
"Pierre-Yves Oudeyer"
] |
[
"BIG-bench Machine Learning",
"Lifelong learning"
] | 2018-02-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/nonparametric-topic-modeling-with-neural
|
1806.06583
| null | null |
Nonparametric Topic Modeling with Neural Inference
|
This work focuses on combining nonparametric topic models with Auto-Encoding
Variational Bayes (AEVB). Specifically, we first propose iTM-VAE, where the
topics are treated as trainable parameters and the document-specific topic
proportions are obtained by a stick-breaking construction. The inference of
iTM-VAE is modeled by neural networks such that it can be computed in a simple
feed-forward manner. We also describe how to introduce a hyper-prior into
iTM-VAE so as to model the uncertainty of the prior parameter. Actually, the
hyper-prior technique is quite general and we show that it can be applied to
other AEVB-based models to alleviate the collapse-to-prior problem
elegantly. Moreover, we also propose HiTM-VAE, where the document-specific
topic distributions are generated in a hierarchical manner. HiTM-VAE is even
more flexible and can generate topic distributions with better variability.
Experimental results on 20News and Reuters RCV1-V2 datasets show that the
proposed models outperform the state-of-the-art baselines significantly. The
advantages of the hyper-prior technique and the hierarchical model construction
are also confirmed by experiments.
| null |
http://arxiv.org/abs/1806.06583v1
|
http://arxiv.org/pdf/1806.06583v1.pdf
| null |
[
"Xuefei Ning",
"Yin Zheng",
"Zhuxi Jiang",
"Yu Wang",
"Huazhong Yang",
"Junzhou Huang"
] |
[
"Topic Models"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
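As an editorial aside on the iTM-VAE row above: the stick-breaking construction it uses for document-specific topic proportions can be sketched in a few lines. The fractions `vs` would normally be produced by the inference network; the fixed values below are illustrative choices, not the paper's setup.

```python
def stick_breaking(vs):
    """Turn fractions v_k in (0, 1) into mixture weights
    pi_k = v_k * prod_{j<k} (1 - v_j) -- the construction used for
    document-specific topic proportions in nonparametric topic models."""
    weights, remaining = [], 1.0
    for v in vs:
        weights.append(v * remaining)
        remaining *= 1.0 - v
    weights.append(remaining)  # mass left on the final stick segment
    return weights

pi = stick_breaking([0.5, 0.5, 0.5])
# pi = [0.5, 0.25, 0.125, 0.125]; the weights always sum to 1
```

Truncating the stick after a finite number of breaks, as here, gives a finite approximation to the underlying nonparametric prior.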
https://paperswithcode.com/paper/wsd-algorithm-based-on-a-new-method-of-vector
|
1805.09559
| null | null |
WSD algorithm based on a new method of vector-word contexts proximity calculation via epsilon-filtration
|
The problem of word sense disambiguation (WSD) is considered in the article.
Given a set of synonyms (synsets) and sentences with these synonyms, it is
necessary to select the meaning of the word in the sentence automatically. 1285
sentences were tagged by experts, namely, one of the dictionary meanings was
selected by experts for target words. To solve the WSD-problem, an algorithm
based on a new method of vector-word contexts proximity calculation is
proposed. In order to achieve higher accuracy, a preliminary epsilon-filtering
of words is performed, both in the sentence and in the set of synonyms. An
extensive program of experiments was carried out. Four algorithms are
implemented, including a new algorithm. Experiments have shown that in a number
of cases the new algorithm shows better results. The developed software and the
tagged corpus have an open license and are available online. Wiktionary and
Wikisource are used. A brief description of this work can be viewed in slides
(https://goo.gl/9ak6Gt). Video lecture in Russian on this research is available
online (https://youtu.be/-DLmRkepf58).
|
It is necessary to select the meaning of the word in the sentence automatically.
|
http://arxiv.org/abs/1805.09559v2
|
http://arxiv.org/pdf/1805.09559v2.pdf
| null |
[
"Alexander Kirillov",
"Natalia Krizhanovsky",
"Andrew Krizhanovsky"
] |
[
"Sentence",
"Word Sense Disambiguation"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
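The epsilon-filtering step described in the WSD row above can be illustrated with a small sketch: only words whose vectors are sufficiently similar to an anchor contribute to the averaged context vector. The threshold rule and all names here are my illustration, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv) if nu and nv else 0.0

def context_vector(word_vecs, anchor, eps):
    """Average only the word vectors whose similarity to the anchor
    exceeds eps (epsilon-filtering); return a zero vector if none survive."""
    kept = [v for v in word_vecs if cosine(v, anchor) > eps]
    if not kept:
        return [0.0] * len(anchor)
    return [sum(v[i] for v in kept) / len(kept) for i in range(len(anchor))]

anchor = [1.0, 0.0]
vecs = [[1.0, 0.1], [0.0, 1.0], [0.9, 0.0]]
ctx = context_vector(vecs, anchor, eps=0.5)
# the orthogonal vector [0.0, 1.0] is filtered out; ctx = [0.95, 0.05]
```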
https://paperswithcode.com/paper/the-kanerva-machine-a-generative-distributed
|
1804.01756
| null | null |
The Kanerva Machine: A Generative Distributed Memory
|
We present an end-to-end trained memory system that quickly adapts to new
data and generates samples like them. Inspired by Kanerva's sparse distributed
memory, it has a robust distributed reading and writing mechanism. The memory
is analytically tractable, which enables optimal on-line compression via a
Bayesian update-rule. We formulate it as a hierarchical conditional generative
model, where memory provides a rich data-dependent prior distribution.
Consequently, the top-down memory and bottom-up perception are combined to
produce the code representing an observation. Empirically, we demonstrate that
the adaptive memory significantly improves generative models trained on both
the Omniglot and CIFAR datasets. Compared with the Differentiable Neural
Computer (DNC) and its variants, our memory model has greater capacity and is
significantly easier to train.
| null |
http://arxiv.org/abs/1804.01756v3
|
http://arxiv.org/pdf/1804.01756v3.pdf
|
ICLR 2018 1
|
[
"Yan Wu",
"Greg Wayne",
"Alex Graves",
"Timothy Lillicrap"
] |
[] | 2018-04-05T00:00:00 |
https://openreview.net/forum?id=S1HlA-ZAZ
|
https://openreview.net/pdf?id=S1HlA-ZAZ
|
the-kanerva-machine-a-generative-distributed-1
| null |
[] |
https://paperswithcode.com/paper/rendernet-a-deep-convolutional-network-for
|
1806.06575
| null | null |
RenderNet: A deep convolutional network for differentiable rendering from 3D shapes
|
Traditional computer graphics rendering pipeline is designed for procedurally
generating 2D quality images from 3D shapes with high performance. The
non-differentiability due to discrete operations such as visibility computation
makes it hard to explicitly correlate rendering parameters and the resulting
image, posing a significant challenge for inverse rendering tasks. Recent work
on differentiable rendering achieves differentiability either by designing
surrogate gradients for non-differentiable operations or via an approximate but
differentiable renderer. These methods, however, are still limited when it
comes to handling occlusion, and restricted to particular rendering effects. We
present RenderNet, a differentiable rendering convolutional network with a
novel projection unit that can render 2D images from 3D shapes. Spatial
occlusion and shading calculation are automatically encoded in the network. Our
experiments show that RenderNet can successfully learn to implement different
shaders, and can be used in inverse rendering tasks to estimate shape, pose,
lighting and texture from a single image.
|
We present RenderNet, a differentiable rendering convolutional network with a novel projection unit that can render 2D images from 3D shapes.
|
http://arxiv.org/abs/1806.06575v3
|
http://arxiv.org/pdf/1806.06575v3.pdf
|
NeurIPS 2018 12
|
[
"Thu Nguyen-Phuoc",
"Chuan Li",
"Stephen Balaban",
"Yong-Liang Yang"
] |
[
"Inverse Rendering"
] | 2018-06-18T00:00:00 |
http://papers.nips.cc/paper/8014-rendernet-a-deep-convolutional-network-for-differentiable-rendering-from-3d-shapes
|
http://papers.nips.cc/paper/8014-rendernet-a-deep-convolutional-network-for-differentiable-rendering-from-3d-shapes.pdf
|
rendernet-a-deep-convolutional-network-for-1
| null |
[] |
https://paperswithcode.com/paper/distributed-learning-with-compressed
|
1806.06573
| null | null |
Distributed learning with compressed gradients
|
Asynchronous computation and gradient compression have emerged as two key
techniques for achieving scalability in distributed optimization for
large-scale machine learning. This paper presents a unified analysis framework
for distributed gradient methods operating with staled and compressed
gradients. Non-asymptotic bounds on convergence rates and information exchange
are derived for several optimization algorithms. These bounds give explicit
expressions for step-sizes and characterize how the amount of asynchrony and
the compression accuracy affect iteration and communication complexity
guarantees. Numerical results highlight convergence properties of different
gradient compression algorithms and confirm that fast convergence under limited
information exchange is indeed possible.
| null |
http://arxiv.org/abs/1806.06573v2
|
http://arxiv.org/pdf/1806.06573v2.pdf
| null |
[
"Sarit Khirirat",
"Hamid Reza Feyzmahdavian",
"Mikael Johansson"
] |
[
"BIG-bench Machine Learning",
"Distributed Optimization"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
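A concrete instance of the compressed gradients analyzed in the abstract above is top-k sparsification, which transmits only the k largest-magnitude coordinates. This is a generic sketch of one such scheme, not the paper's specific operator.

```python
def top_k_compress(grad, k):
    """Keep the k largest-magnitude entries of a gradient vector and
    zero the rest, so only k (index, value) pairs need to be sent."""
    order = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)
    keep = set(order[:k])
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

g = top_k_compress([0.1, -2.0, 0.3, 1.5], k=2)
# keeps -2.0 and 1.5, zeroes the rest: [0.0, -2.0, 0.0, 1.5]
```

The compression error between the true and transmitted gradient is what convergence bounds of the kind derived in the paper must control.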
https://paperswithcode.com/paper/subgram-extending-skip-gram-word
|
1806.06571
| null | null |
SubGram: Extending Skip-gram Word Representation with Substrings
|
Skip-gram (word2vec) is a recent method for creating vector representations
of words ("distributed word representations") using a neural network. The
representation gained popularity in various areas of natural language
processing, because it seems to capture syntactic and semantic information
about words without any explicit supervision in this respect. We propose
SubGram, a refinement of the Skip-gram model to consider also the word
structure during the training process, achieving large gains on the Skip-gram
original test set.
|
Skip-gram (word2vec) is a recent method for creating vector representations of words ("distributed word representations") using a neural network.
|
http://arxiv.org/abs/1806.06571v1
|
http://arxiv.org/pdf/1806.06571v1.pdf
| null |
[
"Tom Kocmi",
"Ondřej Bojar"
] |
[] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
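SubGram's idea of considering word structure during skip-gram training can be illustrated with a character n-gram extractor. The boundary markers and the 3-4 gram range below are assumptions for illustration, not necessarily SubGram's exact scheme.

```python
def substrings(word, n_min=3, n_max=4):
    """Character n-grams of a word, with boundary markers, to be used as
    extra input features alongside the full word in a skip-gram model."""
    marked = "<" + word + ">"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(marked) - n + 1):
            grams.append(marked[i:i + n])
    return grams

feats = substrings("cats")
# ['<ca', 'cat', 'ats', 'ts>', '<cat', 'cats', 'ats>']
```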
https://paperswithcode.com/paper/learning-from-outside-the-viability-kernel
|
1806.06569
| null | null |
Learning from Outside the Viability Kernel: Why we Should Build Robots that can Fall with Grace
|
Despite impressive results using reinforcement learning to solve complex
problems from scratch, in robotics this has still been largely limited to
model-based learning with very informative reward functions. One of the major
challenges is that the reward landscape often has large patches with no
gradient, making it difficult to sample gradients effectively. We show here
that the robot state-initialization can have a more important effect on the
reward landscape than is generally expected. In particular, we show the
counter-intuitive benefit of including initializations that are unviable, in
other words initializing in states that are doomed to fail.
| null |
http://arxiv.org/abs/1806.06569v1
|
http://arxiv.org/pdf/1806.06569v1.pdf
| null |
[
"Steve Heim",
"Alexander Spröwitz"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ista-net-interpretable-optimization-inspired
|
1706.07929
| null | null |
ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing
|
With the aim of developing a fast yet accurate algorithm for compressive
sensing (CS) reconstruction of natural images, we combine in this paper the
merits of two existing categories of CS methods: the structure insights of
traditional optimization-based methods and the speed of recent network-based
ones. Specifically, we propose a novel structured deep network, dubbed
ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm
(ISTA) for optimizing a general $\ell_1$ norm CS reconstruction model. To cast
ISTA into deep network form, we develop an effective strategy to solve the
proximal mapping associated with the sparsity-inducing regularizer using
nonlinear transforms. All the parameters in ISTA-Net (e.g. nonlinear transforms,
shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than
being hand-crafted. Moreover, considering that the residuals of natural images
are more compressible, an enhanced version of ISTA-Net in the residual domain,
dubbed ISTA-Net$^+$, is derived to further improve CS reconstruction.
Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform
existing state-of-the-art optimization-based and network-based CS methods by
large margins, while maintaining fast computational speed. Our source codes are
available: http://jianzhang.tech/projects/ISTA-Net.
|
With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones.
|
http://arxiv.org/abs/1706.07929v2
|
http://arxiv.org/pdf/1706.07929v2.pdf
|
CVPR 2018 6
|
[
"Jian Zhang",
"Bernard Ghanem"
] |
[
"Compressive Sensing"
] | 2017-06-24T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Zhang_ISTA-Net_Interpretable_Optimization-Inspired_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_ISTA-Net_Interpretable_Optimization-Inspired_CVPR_2018_paper.pdf
|
ista-net-interpretable-optimization-inspired-1
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
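For the ISTA-Net row above: the classical (non-learned) ISTA update that the network unrolls is easy to state. This sketch minimizes 0.5*||Ax - y||^2 + lambda*||x||_1 by alternating a gradient step with soft-thresholding; in ISTA-Net the transforms, thresholds, and step sizes are learned instead of fixed. All names here are illustrative.

```python
import math

def soft_threshold(v, theta):
    """Proximal operator of the l1 norm: shrink v toward zero by theta."""
    return math.copysign(max(abs(v) - theta, 0.0), v)

def ista_step(x, A, y, step, theta):
    """One ISTA iteration: gradient step on the data term, then shrinkage.
    Here theta plays the role of step * lambda."""
    # residual r = A x - y
    r = [sum(a * xj for a, xj in zip(row, x)) - yi for row, yi in zip(A, y)]
    # gradient of the data term: A^T r
    grad = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(x))]
    return [soft_threshold(xj - step * g, theta) for xj, g in zip(x, grad)]

# toy problem with A = I: the small measurement 0.1 is shrunk away entirely
A = [[1.0, 0.0], [0.0, 1.0]]
y = [3.0, 0.1]
x = [0.0, 0.0]
for _ in range(50):
    x = ista_step(x, A, y, step=1.0, theta=0.5)
# x converges to [2.5, 0.0]
```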
https://paperswithcode.com/paper/state-gradients-for-rnn-memory-analysis
|
1805.04264
| null | null |
State Gradients for RNN Memory Analysis
|
We present a framework for analyzing what the state in RNNs remembers from
its input embeddings. Our approach is inspired by backpropagation, in the sense
that we compute the gradients of the states with respect to the input
embeddings. The gradient matrix is decomposed with Singular Value Decomposition
to analyze which directions in the embedding space are best transferred to the
hidden state space, characterized by the largest singular values. We apply our
approach to LSTM language models and investigate to what extent and for how
long certain classes of words are remembered on average for a certain corpus.
Additionally, the extent to which a specific property or relationship is
remembered by the RNN can be tracked by comparing a vector characterizing that
property with the direction(s) in embedding space that are best preserved in
hidden state space.
| null |
http://arxiv.org/abs/1805.04264v2
|
http://arxiv.org/pdf/1805.04264v2.pdf
|
WS 2018 11
|
[
"Lyan Verwimp",
"Hugo Van hamme",
"Vincent Renkens",
"Patrick Wambacq"
] |
[] | 2018-05-11T00:00:00 |
https://aclanthology.org/W18-5443
|
https://aclanthology.org/W18-5443.pdf
|
state-gradients-for-rnn-memory-analysis-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Activation functions introduce non-linearity into neural networks, letting them model functions beyond linear maps.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Activation functions introduce non-linearity into neural networks, letting them model functions beyond linear maps.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
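For the state-gradients row above: the central quantity, the derivative of the hidden state with respect to an input embedding, can be illustrated on a one-unit recurrent cell via finite differences. In the paper this gradient is computed by backpropagation for full LSTM language models and then decomposed with SVD; the tiny cell and names below are my illustration.

```python
import math

def rnn_state(emb, w_in, w_rec, h_prev):
    """One step of a one-unit recurrent cell: h = tanh(w_in*emb + w_rec*h_prev)."""
    return math.tanh(w_in * emb + w_rec * h_prev)

def state_gradient(emb, w_in, w_rec, h_prev, eps=1e-6):
    """Central finite-difference estimate of dh/d(emb), the scalar analogue
    of the state-gradient matrix the paper analyzes with SVD."""
    hp = rnn_state(emb + eps, w_in, w_rec, h_prev)
    hm = rnn_state(emb - eps, w_in, w_rec, h_prev)
    return (hp - hm) / (2.0 * eps)

g = state_gradient(emb=0.0, w_in=0.5, w_rec=0.1, h_prev=0.0)
# at emb = 0 the cell is h = tanh(0.5*emb), so dh/d(emb) = 0.5
```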
https://paperswithcode.com/paper/convex-optimization-with-unbounded-nonconvex
|
1711.02621
| null | null |
Convex Optimization with Unbounded Nonconvex Oracles using Simulated Annealing
|
We consider the problem of minimizing a convex objective function $F$ when
one can only evaluate its noisy approximation $\hat{F}$. Unless one assumes
some structure on the noise, $\hat{F}$ may be an arbitrary nonconvex function,
making the task of minimizing $F$ intractable. To overcome this, prior work has
often focused on the case when $F(x)-\hat{F}(x)$ is uniformly-bounded. In this
paper we study the more general case when the noise has magnitude $\alpha F(x)
+ \beta$ for some $\alpha, \beta > 0$, and present a polynomial time algorithm
that finds an approximate minimizer of $F$ for this noise model. Previously,
Markov chains, such as the stochastic gradient Langevin dynamics, have been
used to arrive at approximate solutions to these optimization problems.
However, for the noise model considered in this paper, no single temperature
allows such a Markov chain to both mix quickly and concentrate near the global
minimizer. We bypass this by combining "simulated annealing" with the
stochastic gradient Langevin dynamics, and gradually decreasing the temperature
of the chain in order to approach the global minimizer. As a corollary one can
approximately minimize a nonconvex function that is close to a convex function;
however, the closeness can deteriorate as one moves away from the optimum.
| null |
http://arxiv.org/abs/1711.02621v2
|
http://arxiv.org/pdf/1711.02621v2.pdf
| null |
[
"Oren Mangoubi",
"Nisheeth K. Vishnoi"
] |
[] | 2017-11-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/incremental-sparse-bayesian-ordinal
|
1806.06553
| null | null |
Incremental Sparse Bayesian Ordinal Regression
|
Ordinal Regression (OR) aims to model the ordering information between
different data categories, which is a crucial topic in multi-label learning. An
important class of approaches to OR models the problem as a linear combination
of basis functions that map features to a high dimensional non-linear space.
However, most of the basis function-based algorithms are time consuming. We
propose an incremental sparse Bayesian approach to OR tasks and introduce an
algorithm to sequentially learn the relevant basis functions in the ordinal
scenario. Our method, called Incremental Sparse Bayesian Ordinal Regression
(ISBOR), automatically optimizes the hyper-parameters via the type-II maximum
likelihood method. By exploiting fast marginal likelihood optimization, ISBOR
can avoid big matrix inverses, which is the main bottleneck in applying basis
function-based algorithms to OR tasks on large-scale datasets. We show that
ISBOR can make accurate predictions with parsimonious basis functions while
offering automatic estimates of the prediction uncertainty. Extensive
experiments on synthetic and real-world datasets demonstrate the efficiency and
effectiveness of ISBOR compared to other basis function-based OR approaches.
|
Ordinal Regression (OR) aims to model the ordering information between different data categories, which is a crucial topic in multi-label learning.
|
http://arxiv.org/abs/1806.06553v1
|
http://arxiv.org/pdf/1806.06553v1.pdf
| null |
[
"Chang Li",
"Maarten de Rijke"
] |
[
"Multi-Label Learning",
"regression"
] | 2018-06-18T00:00:00 | null | null | null | null |
[] |