paper_url (string, length 35-81) | arxiv_id (string, length 6-35, nullable) | nips_id (always null) | openreview_id (string, length 9-93, nullable) | title (string, length 1-1.02k, nullable) | abstract (string, length 0-56.5k, nullable) | short_abstract (string, length 0-1.95k, nullable) | url_abs (string, length 16-996) | url_pdf (string, length 16-996, nullable) | proceeding (string, length 7-1.03k, nullable) | authors (list, length 0-3.31k) | tasks (list, length 0-147) | date (timestamp[ns], 1951-09-01 to 2222-12-22, nullable) | conference_url_abs (string, length 16-199, nullable) | conference_url_pdf (string, length 21-200, nullable) | conference (string, length 2-47, nullable) | reproduces_paper (string, 22 classes) | methods (list, length 0-7.5k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/writing-style-invariant-deep-learning-model
|
1806.03987
| null | null |
Writing Style Invariant Deep Learning Model for Historical Manuscripts Alignment
|
Historical manuscript alignment is a widely known problem in document
analysis. Finding the differences between manuscript editions is mostly done
manually. In this paper, we present a writer-independent deep learning model
which is trained on several writing styles and is able to achieve high detection
accuracy when tested on writing styles not present in the training data. We test
our model using cross validation: each time we train the model on five
manuscripts and test it on the remaining two, which are never seen during
training. Applying cross validation to seven manuscripts yields 21
different tests, with an average accuracy of $92.17\%$. We also present a new
alignment algorithm based on dynamic sized sliding window, which is able to
successfully handle complex cases.
| null |
http://arxiv.org/abs/1806.03987v1
|
http://arxiv.org/pdf/1806.03987v1.pdf
| null |
[
"Majeed Kassis",
"Jumana Nassour",
"Jihad El-Sana"
] |
[
"Deep Learning"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/nonparametric-density-flows-for-mri-intensity
|
1806.02613
| null | null |
Nonparametric Density Flows for MRI Intensity Normalisation
|
With the adoption of powerful machine learning methods in medical image
analysis, it is becoming increasingly desirable to aggregate data that is
acquired across multiple sites. However, the underlying assumption of many
analysis techniques that corresponding tissues have consistent intensities in
all images is often violated in multi-centre databases. We introduce a novel
intensity normalisation scheme based on density matching, wherein the
histograms are modelled as Dirichlet process Gaussian mixtures. The source
mixture model is transformed to minimise its $L^2$ divergence towards a target
model, then the voxel intensities are transported through a mass-conserving
flow to maintain agreement with the moving density. In a multi-centre study
with brain MRI data, we show that the proposed technique produces excellent
correspondence between the matched densities and histograms. We further
demonstrate that our method makes tissue intensity statistics substantially
more compatible between images than a baseline affine transformation and is
comparable to state-of-the-art while providing considerably smoother
transformations. Finally, we validate that nonlinear intensity normalisation is
a step toward effective imaging data harmonisation.
|
With the adoption of powerful machine learning methods in medical image analysis, it is becoming increasingly desirable to aggregate data that is acquired across multiple sites.
|
http://arxiv.org/abs/1806.02613v1
|
http://arxiv.org/pdf/1806.02613v1.pdf
| null |
[
"Daniel C. Castro",
"Ben Glocker"
] |
[
"Medical Image Analysis"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
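The abstract above (Nonparametric Density Flows for MRI Intensity Normalisation) describes transforming a source Gaussian mixture to minimise its $L^2$ divergence towards a target mixture. As a minimal illustrative sketch (not the paper's implementation), the snippet below evaluates that divergence in closed form for 1-D mixtures, using the identity that the integral of a product of two Gaussian densities is itself a Gaussian density; the component weights, means, and variances are invented stand-ins, and the Dirichlet-process fitting and mass-conserving flow are not shown.

```python
import numpy as np
from scipy.stats import norm

def gmm_cross_term(w1, mu1, var1, w2, mu2, var2):
    """Closed-form integral of the product of two 1-D Gaussian mixtures."""
    # Uses: integral of N(x; m1, v1) * N(x; m2, v2) dx = N(m1; m2, v1 + v2)
    total = 0.0
    for a, m1, v1 in zip(w1, mu1, var1):
        for b, m2, v2 in zip(w2, mu2, var2):
            total += a * b * norm.pdf(m1, loc=m2, scale=np.sqrt(v1 + v2))
    return total

def gmm_l2_divergence(w_s, mu_s, var_s, w_t, mu_t, var_t):
    """L2 divergence (integral of (p_s - p_t)^2) between source and target mixtures."""
    return (gmm_cross_term(w_s, mu_s, var_s, w_s, mu_s, var_s)
            - 2.0 * gmm_cross_term(w_s, mu_s, var_s, w_t, mu_t, var_t)
            + gmm_cross_term(w_t, mu_t, var_t, w_t, mu_t, var_t))

if __name__ == "__main__":
    # Hypothetical 3-component "source" and "target" intensity histograms.
    w_s, mu_s, var_s = [0.5, 0.3, 0.2], [30.0, 80.0, 120.0], [40.0, 60.0, 50.0]
    w_t, mu_t, var_t = [0.5, 0.3, 0.2], [35.0, 90.0, 125.0], [45.0, 55.0, 50.0]
    print("L2 divergence:", gmm_l2_divergence(w_s, mu_s, var_s, w_t, mu_t, var_t))
```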
https://paperswithcode.com/paper/dimensionality-driven-learning-with-noisy
|
1806.02612
| null | null |
Dimensionality-Driven Learning with Noisy Labels
|
Datasets with significant proportions of noisy (incorrect) class labels
present challenges for training accurate Deep Neural Networks (DNNs). We
propose a new perspective for understanding DNN generalization for such
datasets, by investigating the dimensionality of the deep representation
subspace of training samples. We show that from a dimensionality perspective,
DNNs exhibit quite distinctive learning styles when trained with clean labels
versus when trained with a proportion of noisy labels. Based on this finding,
we develop a new dimensionality-driven learning strategy, which monitors the
dimensionality of subspaces during training and adapts the loss function
accordingly. We empirically demonstrate that our approach is highly tolerant to
significant proportions of noisy labels, and can effectively learn
low-dimensional local subspaces that capture the data distribution.
|
Datasets with significant proportions of noisy (incorrect) class labels present challenges for training accurate Deep Neural Networks (DNNs).
|
http://arxiv.org/abs/1806.02612v2
|
http://arxiv.org/pdf/1806.02612v2.pdf
|
ICML 2018 7
|
[
"Xingjun Ma",
"Yisen Wang",
"Michael E. Houle",
"Shuo Zhou",
"Sarah M. Erfani",
"Shu-Tao Xia",
"Sudanthi Wijewickrema",
"James Bailey"
] |
[
"Image Classification",
"Learning with noisy labels"
] | 2018-06-07T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1970
|
http://proceedings.mlr.press/v80/ma18d/ma18d.pdf
|
dimensionality-driven-learning-with-noisy-1
| null |
[] |
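The Dimensionality-Driven Learning abstract above hinges on monitoring the dimensionality of representation subspaces during training. The sketch below shows one standard estimator of local intrinsic dimensionality (the maximum-likelihood/Hill estimator over k-nearest-neighbour distances), which is in the spirit of that monitoring; whether it matches the paper's exact estimator or its loss adaptation is not specified by the abstract, and the synthetic data is invented for illustration.

```python
import numpy as np

def lid_mle(x, reference, k=20):
    """Maximum-likelihood (Hill) estimate of the local intrinsic dimensionality
    of point x with respect to a reference batch, using its k nearest neighbours."""
    d = np.linalg.norm(reference - x, axis=1)
    d = np.sort(d[d > 0])[:k]          # drop the zero distance to x itself
    r_max = d[-1]
    return -k / np.sum(np.log(d / r_max))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Points on a 3-D linear subspace embedded in 50-D: estimates should be near 3.
    low_dim = rng.normal(size=(2000, 3)) @ rng.normal(size=(3, 50))
    estimates = [lid_mle(low_dim[i], low_dim, k=20) for i in range(10)]
    print("mean LID estimate:", np.mean(estimates))
```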
https://paperswithcode.com/paper/learning-multi-modal-self-awareness-models
|
1806.02609
| null | null |
Learning Multi-Modal Self-Awareness Models for Autonomous Vehicles from Human Driving
|
This paper presents a novel approach for learning self-awareness models for
autonomous vehicles. The proposed technique is based on the availability of
synchronized multi-sensor dynamic data related to different maneuvering tasks
performed by a human operator. It is shown that different machine learning
approaches can be used to first learn single modality models using coupled
Dynamic Bayesian Networks; such models are then correlated at event level to
discover contextual multi-modal concepts. In the presented case, visual
perception and localization are used as modalities. Cross-correlations among
modalities over time are discovered from data and described as probabilistic
links connecting shared and private multi-modal DBNs at the event (discrete)
level. Results are presented on experiments performed on an autonomous vehicle,
highlighting the potential of the proposed approach to enable anomaly detection
and autonomous decision making based on learned self-awareness models.
| null |
http://arxiv.org/abs/1806.02609v1
|
http://arxiv.org/pdf/1806.02609v1.pdf
| null |
[
"Mahdyar Ravanbakhsh",
"Mohamad Baydoun",
"Damian Campo",
"Pablo Marin",
"David Martin",
"Lucio Marcenaro",
"Carlo S. Regazzoni"
] |
[
"Anomaly Detection",
"Autonomous Vehicles",
"Decision Making"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adadepth-unsupervised-content-congruent
|
1803.01599
| null | null |
AdaDepth: Unsupervised Content Congruent Adaptation for Depth Estimation
|
Supervised deep learning methods have shown promising results for the task of
monocular depth estimation; but acquiring ground truth is costly, and prone to
noise as well as inaccuracies. While synthetic datasets have been used to
circumvent the above problems, the resultant models do not generalize well to
natural scenes due to the inherent domain shift. Recent adversarial approaches
for domain adaptation have performed well in mitigating the differences between
the source and target domains. But these methods are mostly limited to a
classification setup and do not scale well for fully-convolutional
architectures. In this work, we propose AdaDepth - an unsupervised domain
adaptation strategy for the pixel-wise regression task of monocular depth
estimation. The proposed approach is devoid of the above limitations through a)
adversarial learning and b) explicit imposition of content consistency on the
adapted target representation. Our unsupervised approach performs competitively
with other established approaches on depth estimation tasks and achieves
state-of-the-art results in a semi-supervised setting.
| null |
http://arxiv.org/abs/1803.01599v2
|
http://arxiv.org/pdf/1803.01599v2.pdf
|
CVPR 2018 6
|
[
"Jogendra Nath Kundu",
"Phani Krishna Uppala",
"Anuj Pahuja",
"R. Venkatesh Babu"
] |
[
"Depth Estimation",
"Domain Adaptation",
"Monocular Depth Estimation",
"Unsupervised Domain Adaptation"
] | 2018-03-05T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Kundu_AdaDepth_Unsupervised_content_cvpr_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Kundu_AdaDepth_Unsupervised_content_cvpr_2018_paper.pdf
|
adadepth-unsupervised-content-congruent-1
| null |
[] |
https://paperswithcode.com/paper/comparing-dynamics-deep-neural-networks-1
|
1803.06969
| null | null |
Comparing Dynamics: Deep Neural Networks versus Glassy Systems
|
We analyze numerically the training dynamics of deep neural networks (DNN) by
using methods developed in statistical physics of glassy systems. The two main
issues we address are (1) the complexity of the loss landscape and of the
dynamics within it, and (2) to what extent DNNs share similarities with glassy
systems. Our findings, obtained for different architectures and datasets,
suggest that during the training process the dynamics slows down because of an
increasingly large number of flat directions. At large times, when the loss is
approaching zero, the system diffuses at the bottom of the landscape. Despite
some similarities with the dynamics of mean-field glassy systems, in
particular, the absence of barrier crossing, we find distinctive dynamical
behaviors in the two cases, showing that the statistical properties of the
corresponding loss and energy landscapes are different. In contrast, when the
network is under-parametrized we observe a typical glassy behavior, thus
suggesting the existence of different phases depending on whether the network
is under-parametrized or over-parametrized.
|
We analyze numerically the training dynamics of deep neural networks (DNN) by using methods developed in statistical physics of glassy systems.
|
http://arxiv.org/abs/1803.06969v2
|
http://arxiv.org/pdf/1803.06969v2.pdf
|
ICML 2018
|
[
"M. Baity-Jesi",
"L. Sagun",
"M. Geiger",
"S. Spigler",
"G. Ben Arous",
"C. Cammarota",
"Y. LeCun",
"M. Wyart",
"G. Biroli"
] |
[] | 2018-03-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/generative-adversarial-networks-for-realistic
|
1806.02583
| null | null |
Generative Adversarial Networks for Realistic Synthesis of Hyperspectral Samples
|
This work addresses the scarcity of annotated hyperspectral data required to
train deep neural networks. In particular, we investigate generative adversarial
networks and their application to the synthesis of consistent labeled spectra.
By training such networks on public datasets, we show that these models are not
only able to capture the underlying distribution, but also to generate
genuine-looking and physically plausible spectra. Moreover, we experimentally
validate that the synthetic samples can be used as an effective data
augmentation strategy. We validate our approach on several public
hyper-spectral datasets using a variety of deep classifiers.
| null |
http://arxiv.org/abs/1806.02583v1
|
http://arxiv.org/pdf/1806.02583v1.pdf
| null |
[
"Nicolas Audebert",
"Bertrand Le Saux",
"Sébastien Lefèvre"
] |
[
"Data Augmentation"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/criteres-de-qualite-dun-classifieur
|
1802.03567
| null | null |
Critères de qualité d'un classifieur généraliste
|
This paper considers the problem of choosing a good classifier. For each
problem there exists an optimal classifier, but no single classifier is
optimal, regarding the error rate, in all cases. Because there exists a large
number of classifiers, a user would rather prefer an all-purpose classifier
that is easy to adjust, in the hope that it will do almost as well as the
optimal one. In this paper we establish a list of criteria that a good
generalist classifier should satisfy. After a brief discussion of data
analysis, these criteria are presented. Six of the most popular classifiers are
selected and scored according to these criteria. Tables make it easy to
appreciate the relative merits of each. In the end, random forests turn out to
be the best classifiers.
| null |
http://arxiv.org/abs/1802.03567v2
|
http://arxiv.org/pdf/1802.03567v2.pdf
| null |
[
"Gilles R. Ducharme"
] |
[] | 2018-02-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ai-based-two-stage-intrusion-detection-for
|
1806.02566
| null | null |
AI-based Two-Stage Intrusion Detection for Software Defined IoT Networks
|
Software Defined Internet of Things (SD-IoT) networks profit from
centralized management and interactive resource sharing, which enhance the
efficiency and scalability of IoT applications. But with the rapid growth in
services and applications, they are vulnerable to possible attacks and face
severe security challenges. Intrusion detection has been widely used to ensure
network security, but classical detection methods are usually signature-based
or explicit-behavior-based and fail to detect unknown attacks intelligently, so
they can hardly satisfy the requirements of SD-IoT networks. In this paper, we
propose an AI-based two-stage intrusion detection scheme empowered by software
defined technology. It flexibly captures network flows with a global view and
detects attacks intelligently by applying AI algorithms. We first leverage the
Bat algorithm with swarm division and differential mutation to select typical
features. Then, we exploit a Random Forest that adaptively alters the weights
of samples via a weighted voting mechanism to classify flows. Evaluation
results show that the modified intelligent algorithms select more important
features and achieve superior performance in flow classification. It is also
verified that the intelligent intrusion detection achieves better accuracy with
lower overhead compared with existing solutions.
| null |
http://arxiv.org/abs/1806.02566v1
|
http://arxiv.org/pdf/1806.02566v1.pdf
| null |
[
"Jiaqi Li",
"Zhifeng Zhao",
"Rongpeng Li",
"Honggang Zhang"
] |
[
"Intrusion Detection",
"Management",
"Vocal Bursts Valence Prediction"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-the-effect-of-inter-observer-variability
|
1806.02562
| null | null |
On the Effect of Inter-observer Variability for a Reliable Estimation of Uncertainty of Medical Image Segmentation
|
Uncertainty estimation methods are expected to improve the understanding and
quality of computer-assisted methods used in medical applications (e.g.,
neurosurgical interventions, radiotherapy planning), where automated medical
image segmentation is crucial. In supervised machine learning, a common
practice to generate ground truth label data is to merge observer annotations.
However, as many medical image tasks show a high inter-observer variability
resulting from factors such as image quality, different levels of user
expertise and domain knowledge, little is known as to how inter-observer
variability and commonly used fusion methods affect the estimation of
uncertainty of automated image segmentation. In this paper we analyze the
effect of common image label fusion techniques on uncertainty estimation, and
propose to learn the uncertainty among observers. The results highlight the
negative effect of fusion methods applied in deep learning, to obtain reliable
estimates of segmentation uncertainty. Additionally, we show that the learned
observers' uncertainty can be combined with current standard Monte Carlo
dropout Bayesian neural networks to characterize uncertainty of model's
parameters.
| null |
http://arxiv.org/abs/1806.02562v1
|
http://arxiv.org/pdf/1806.02562v1.pdf
| null |
[
"Alain Jungo",
"Raphael Meier",
"Ekin Ermis",
"Marcela Blatti-Moreno",
"Evelyn Herrmann",
"Roland Wiest",
"Mauricio Reyes"
] |
[
"Image Segmentation",
"Medical Image Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/shape-robust-text-detection-with-progressive
|
1806.02559
| null | null |
Shape Robust Text Detection with Progressive Scale Expansion Network
|
The challenges of shape robust text detection lie in two aspects: 1) most
existing quadrangular bounding-box-based detectors have difficulty locating
texts with arbitrary shapes, which are hard to enclose perfectly in a
rectangle; 2) most pixel-wise segmentation-based detectors may not separate the
text instances that are very close to each other. To address these problems, we
propose a novel Progressive Scale Expansion Network (PSENet), designed as a
segmentation-based detector with multiple predictions for each text instance.
These predictions correspond to different `kernels' produced by shrinking the
original text instance into various scales. Consequently, the final detection
can be conducted through our progressive scale expansion algorithm which
gradually expands the kernels with minimal scales to the text instances with
maximal and complete shapes. Because there are large geometrical margins among
these minimal kernels, our method is effective in distinguishing adjacent text
instances and is robust to arbitrary shapes. The state-of-the-art
results on ICDAR 2015 and ICDAR 2017 MLT benchmarks further confirm the great
effectiveness of PSENet. Notably, PSENet outperforms the previous best record
by an absolute 6.37\% on the curved text dataset SCUT-CTW1500. Code will be
made available at https://github.com/whai362/PSENet.
|
To address these problems, we propose a novel Progressive Scale Expansion Network (PSENet), designed as a segmentation-based detector with multiple predictions for each text instance.
|
http://arxiv.org/abs/1806.02559v1
|
http://arxiv.org/pdf/1806.02559v1.pdf
| null |
[
"Xiang Li",
"Wenhai Wang",
"Wenbo Hou",
"Ruo-Ze Liu",
"Tong Lu",
"Jian Yang"
] |
[
"Curved Text Detection",
"Scene Text Detection",
"Text Detection"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
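The PSENet abstract above describes expanding the "kernels" of each text instance outwards until they cover the full text shapes. The following NumPy/SciPy sketch implements a generic breadth-first progressive scale expansion over binary kernel masks; it is an illustration of the idea described in the abstract, not the authors' released implementation, and the toy masks are invented.

```python
import numpy as np
from collections import deque
from scipy import ndimage

def progressive_scale_expansion(kernels):
    """Expand instance labels from the smallest kernel map outwards through larger ones.
    `kernels` is a list of binary masks ordered from smallest to largest kernel."""
    labels, _ = ndimage.label(kernels[0])            # seed instances from the smallest kernel
    for kernel in kernels[1:]:
        queue = deque(zip(*np.nonzero(labels)))
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < labels.shape[0] and 0 <= nx < labels.shape[1]
                        and kernel[ny, nx] and labels[ny, nx] == 0):
                    labels[ny, nx] = labels[y, x]     # first-come-first-served expansion
                    queue.append((ny, nx))
    return labels

if __name__ == "__main__":
    small = np.zeros((8, 12), dtype=bool)
    small[2:4, 2:4] = True
    small[2:4, 8:10] = True                           # two shrunk (kernel) instances
    large = np.zeros_like(small)
    large[1:5, 1:11] = True                           # instances nearly touch at full scale
    print(progressive_scale_expansion([small, large]))
```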
https://paperswithcode.com/paper/online-convolutional-sparse-coding-with
|
1804.10366
| null | null |
Online Convolutional Sparse Coding with Sample-Dependent Dictionary
|
Convolutional sparse coding (CSC) has been popularly used for the learning of
shift-invariant dictionaries in image and signal processing. However, existing
methods have limited scalability. In this paper, instead of convolving with a
dictionary shared by all samples, we propose the use of a sample-dependent
dictionary in which filters are obtained as linear combinations of a small set
of base filters learned from the data. This added flexibility allows a large
number of sample-dependent patterns to be captured, while the resultant model
can still be efficiently learned by online learning. Extensive experimental
results show that the proposed method outperforms existing CSC algorithms with
significantly reduced time and space requirements.
| null |
http://arxiv.org/abs/1804.10366v2
|
http://arxiv.org/pdf/1804.10366v2.pdf
|
ICML 2018 7
|
[
"Yaqing Wang",
"Quanming Yao",
"James T. Kwok",
"Lionel M. Ni"
] |
[] | 2018-04-27T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2192
|
http://proceedings.mlr.press/v80/wang18k/wang18k.pdf
|
online-convolutional-sparse-coding-with-1
| null |
[] |
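The sample-dependent dictionary in the abstract above is built from filters that are linear combinations of a small set of shared base filters. The sketch below shows only that construction plus the convolution step; the online learning of the mixing codes and base filters is omitted, and all array shapes are illustrative assumptions.

```python
import numpy as np

def sample_dependent_filters(base_filters, codes):
    """Build per-sample filters as linear combinations of a small set of base filters.
    base_filters: (n_base, filter_len); codes: (n_filters, n_base) mixing weights."""
    return codes @ base_filters

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(4, 11))          # 4 shared base filters of length 11
    signal = rng.normal(size=256)
    codes = rng.normal(size=(8, 4))          # this sample's mixing weights -> 8 filters
    filters = sample_dependent_filters(base, codes)
    # Convolve the sample with its own dictionary (the CSC synthesis/analysis step).
    responses = np.stack([np.convolve(signal, f, mode="same") for f in filters])
    print(filters.shape, responses.shape)    # (8, 11) (8, 256)
```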
https://paperswithcode.com/paper/grouped-gaussian-processes-for-solar-power
|
1806.02543
| null | null |
Grouped Gaussian Processes for Solar Power Prediction
|
We consider multi-task regression models where the observations are assumed
to be a linear combination of several latent node functions and weight
functions, which are both drawn from Gaussian process priors. Driven by the
problem of developing scalable methods for forecasting distributed solar and
other renewable power generation, we propose coupled priors over groups of
(node or weight) processes to exploit spatial dependence between functions. We
estimate forecast models for solar power at multiple distributed sites and
ground wind speed at multiple proximate weather stations. Our results show that
our approach maintains or improves point-prediction accuracy relative to
competing solar benchmarks and improves over wind forecast benchmark models on
all measures. Our approach consistently dominates the equivalent model without
coupled priors, achieving faster gains in forecast accuracy. At the same time
our approach provides better quantification of predictive uncertainties.
| null |
http://arxiv.org/abs/1806.02543v3
|
http://arxiv.org/pdf/1806.02543v3.pdf
| null |
[
"Astrid Dahl",
"Edwin V. Bonilla"
] |
[
"Gaussian Processes",
"Prediction"
] | 2018-06-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] |
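The grouped Gaussian process model above assumes each observed series is a pointwise linear combination of latent node functions and weight functions, both drawn from GP priors. The snippet below samples such a prior on a toy 1-D input grid to show the generative structure; the kernel, lengthscales, and numbers of latent functions and output sites are arbitrary choices, and the paper's coupled priors and inference are not reproduced.

```python
import numpy as np

def rbf_kernel(x, lengthscale=0.5):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp(x, rng, lengthscale=0.5):
    """Draw one sample path from a zero-mean GP prior with an RBF kernel."""
    k = rbf_kernel(x, lengthscale) + 1e-6 * np.eye(len(x))   # jitter for stability
    return rng.multivariate_normal(np.zeros(len(x)), k)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 100)
    q, p = 2, 3                                               # latent node functions, output sites
    nodes = np.stack([sample_gp(x, rng) for _ in range(q)])                          # g_q(x)
    weights = np.stack([[sample_gp(x, rng, 1.0) for _ in range(q)] for _ in range(p)])  # W_pq(x)
    outputs = np.einsum("pqn,qn->pn", weights, nodes)         # f_p(x) = sum_q W_pq(x) g_q(x)
    print(outputs.shape)                                      # (3, 100): one correlated series per site
```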
https://paperswithcode.com/paper/segment-based-credit-scoring-using-latent
|
1806.02538
| null | null |
Segment-Based Credit Scoring Using Latent Clusters in the Variational Autoencoder
|
Identifying customer segments in retail banking portfolios with different
risk profiles can improve the accuracy of credit scoring. The Variational
Autoencoder (VAE) has shown promising results in different research domains,
and the powerful information embedded in its latent space has been well
documented. We use the VAE and show that, by transforming the input data into
a meaningful representation, it is possible to steer configurations in the
latent space of the VAE. Specifically, the Weight of Evidence (WoE)
transformation encapsulates the propensity to fall into financial distress and
the latent space in the VAE preserves this characteristic in a well-defined
clustering structure. These clusters have considerably different risk profiles
and therefore are suitable not only for credit scoring but also for marketing
and customer purposes. This new clustering methodology offers solutions to some
of the challenges in existing clustering algorithms: it suggests the number of
clusters, assigns cluster labels to new customers, enables cluster
visualization, scales to large datasets, and captures non-linear relationships,
among others. Finally, for portfolios with a large number of customers in each
cluster, developing one classifier model per cluster can improve the credit
scoring assessment.
| null |
http://arxiv.org/abs/1806.02538v1
|
http://arxiv.org/pdf/1806.02538v1.pdf
| null |
[
"Rogelio Andrade Mancisidor",
"Michael Kampffmeyer",
"Kjersti Aas",
"Robert Jenssen"
] |
[
"Clustering",
"Marketing"
] | 2018-06-07T00:00:00 | null | null | null | null |
[
  {
    "code_snippet_url": "",
    "description": "A **Variational Autoencoder (VAE)** is a deep generative model that learns a latent representation of the data by maximising the evidence lower bound (ELBO). An encoder network maps each input to a distribution over latent variables, a decoder network reconstructs the input from latent samples, and a KL-divergence term regularises the approximate posterior towards a prior (typically a standard Gaussian), so that the latent space can be sampled from and interpolated in.",
    "full_name": "Variational Autoencoder",
    "introduced_year": 2000,
    "main_collection": {
      "area": "Computer Vision",
      "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
      "name": "Generative Models",
      "parent": null
    },
    "name": "VAE",
    "source_title": "Auto-Encoding Variational Bayes",
    "source_url": "http://arxiv.org/abs/1312.6114v10"
  }
] |
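The credit-scoring abstract above relies on the Weight of Evidence (WoE) transformation of input features before they enter the VAE. As a rough sketch of that preprocessing step (binning a numeric feature and replacing each bin by the log-ratio of non-defaulter to defaulter shares), evaluated on invented toy data:

```python
import numpy as np
import pandas as pd

def weight_of_evidence(feature, default_flag, n_bins=5):
    """Bin a numeric feature and replace each bin with its Weight of Evidence:
    WoE = ln(share of non-defaulters in the bin / share of defaulters in the bin)."""
    df = pd.DataFrame({"x": feature, "bad": np.asarray(default_flag, dtype=int)})
    df["bin"] = pd.qcut(df["x"], q=n_bins, duplicates="drop")
    grouped = df.groupby("bin", observed=True)["bad"].agg(["count", "sum"])
    good = (grouped["count"] - grouped["sum"]).clip(lower=0.5)   # smooth empty cells
    bad = grouped["sum"].clip(lower=0.5)
    woe = np.log((good / good.sum()) / (bad / bad.sum()))
    woe_map = dict(zip(grouped.index, woe))
    woe_per_row = np.array([woe_map[b] for b in df["bin"]])
    return woe_per_row, woe

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    income = rng.lognormal(mean=10.0, sigma=0.5, size=1000)
    p_default = 1.0 / (1.0 + np.exp((income - np.median(income)) / income.std()))
    defaulted = rng.random(1000) < p_default
    woe_feature, woe_table = weight_of_evidence(income, defaulted)
    print(woe_table)   # higher WoE = lower propensity to default in that income bin
```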
https://paperswithcode.com/paper/deep-supervision-with-additional-labels-for
|
1806.02132
| null | null |
Deep supervision with additional labels for retinal vessel segmentation task
|
Automatic analysis of retinal blood images is of vital importance in
diagnosis tasks of retinopathy. Segmenting vessels accurately is a fundamental
step in analysing retinal images. However, it is usually difficult due to
various imaging conditions, low image contrast and the appearance of
pathologies such as micro-aneurysms. In this paper, we propose a novel method
with deep neural networks to solve this problem. We utilize U-net with residual
connection to detect vessels. To achieve better accuracy, we introduce an
edge-aware mechanism, in which we convert the original task into a multi-class
task by adding additional labels on boundary areas. In this way, the network
will pay more attention to the boundary areas of vessels and achieve a better
performance, especially in detecting tiny vessels. In addition, side output layers
are applied in order to give deep supervision and therefore help convergence.
We train and evaluate our model on three databases: DRIVE, STARE, and CHASEDB1.
Experimental results show that our method has a comparable performance with AUC
of 97.99% on DRIVE and an efficient running time compared to the
state-of-the-art methods.
| null |
http://arxiv.org/abs/1806.02132v3
|
http://arxiv.org/pdf/1806.02132v3.pdf
| null |
[
"Yishuo Zhang",
"Albert C. S. Chung"
] |
[
"Retinal Vessel Segmentation"
] | 2018-06-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
  "code_snippet_url": "",
  "description": "The **Rectified Linear Unit (ReLU)** is an activation function defined as f(x) = max(0, x). It is piecewise linear, cheap to compute, and keeps gradients from vanishing for positive inputs, which has made it the default activation in most convolutional neural networks.",
  "full_name": "Rectified Linear Unit",
  "introduced_year": 2000,
  "main_collection": {
    "area": "General",
    "description": "**Activation Functions** introduce non-linearities into neural networks, allowing compositions of linear layers to approximate complex functions. Below you can find a continuously updating list of activation functions.",
    "name": "Activation Functions",
    "parent": null
  },
  "name": "ReLU",
  "source_title": null,
  "source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
}
] |
https://paperswithcode.com/paper/instance-segmentation-and-tracking-with
|
1806.02070
| null | null |
Instance Segmentation and Tracking with Cosine Embeddings and Recurrent Hourglass Networks
|
Different to semantic segmentation, instance segmentation assigns unique
labels to each individual instance of the same class. In this work, we propose
a novel recurrent fully convolutional network architecture for tracking such
instance segmentations over time. The network architecture incorporates
convolutional gated recurrent units (ConvGRU) into a stacked hourglass network
to utilize temporal video information. Furthermore, we train the network with a
novel embedding loss based on cosine similarities, such that the network
predicts unique embeddings for every instance throughout videos. Afterwards,
these embeddings are clustered among subsequent video frames to create the
final tracked instance segmentations. We evaluate the recurrent hourglass
network by segmenting left ventricles in MR videos of the heart, where it
outperforms a network that does not incorporate video information. Furthermore,
we show applicability of the cosine embedding loss for segmenting leaf
instances on still images of plants. Finally, we evaluate the framework for
instance segmentation and tracking on six datasets of the ISBI celltracking
challenge, where it shows state-of-the-art performance.
| null |
http://arxiv.org/abs/1806.02070v3
|
http://arxiv.org/pdf/1806.02070v3.pdf
| null |
[
"Christian Payer",
"Darko Štern",
"Thomas Neff",
"Horst Bischof",
"Martin Urschler"
] |
[
"Instance Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
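The instance segmentation abstract above trains with an embedding loss based on cosine similarities so that pixels of the same instance share a direction in embedding space across frames. The sketch below is a generic pull/push cosine-similarity loss in that spirit, evaluated with NumPy on made-up embeddings; it is not claimed to be the authors' exact formulation.

```python
import numpy as np

def cosine_embedding_loss(embeddings, instance_ids):
    """Generic cosine-similarity embedding loss: pull each embedding towards its
    instance's mean direction, push different instance mean directions apart."""
    eps = 1e-8
    unit = embeddings / (np.linalg.norm(embeddings, axis=1, keepdims=True) + eps)
    means, pull = [], 0.0
    for inst in np.unique(instance_ids):
        members = unit[instance_ids == inst]
        mean = members.mean(axis=0)
        mean = mean / (np.linalg.norm(mean) + eps)
        means.append(mean)
        pull += (1.0 - members @ mean).mean()            # want cos-sim near 1 within an instance
    means = np.stack(means)
    sim = means @ means.T
    push = np.abs(sim[~np.eye(len(means), dtype=bool)]).mean()  # want near 0 between instances
    return pull / len(means) + push

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy instances whose embeddings point in nearly orthogonal directions.
    a = rng.normal(size=(50, 8)) * 0.1 + np.eye(8)[0]
    b = rng.normal(size=(50, 8)) * 0.1 + np.eye(8)[1]
    emb = np.vstack([a, b])
    ids = np.array([0] * 50 + [1] * 50)
    print("loss:", cosine_embedding_loss(emb, ids))
```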
https://paperswithcode.com/paper/supervised-saliency-map-driven-segmentation
|
1703.00087
| null | null |
Supervised Saliency Map Driven Segmentation of the Lesions in Dermoscopic Images
|
Lesion segmentation is the first step in most automatic melanoma recognition
systems. Deficiencies and difficulties in dermoscopic images such as color
inconstancy, hair occlusion, dark corners and color charts make lesion
segmentation an intricate task. In order to detect the lesion in the presence
of these problems, we propose a supervised saliency detection method tailored
for dermoscopic images based on the discriminative regional feature integration
(DRFI). DRFI method incorporates multi-level segmentation, regional contrast,
property, background descriptors, and a random forest regressor to create
saliency scores for each region in the image. In our improved saliency
detection method, mDRFI, we have added some new features to regional property
descriptors. Also, in order to achieve more robust regional background
descriptors, a thresholding algorithm is proposed to obtain a new
pseudo-background region. Findings reveal that mDRFI is superior to DRFI in
detecting the lesion as the salient object in dermoscopic images. The proposed
overall lesion segmentation framework uses detected saliency map to construct
an initial mask of the lesion through thresholding and post-processing
operations. The initial mask then evolves in a level set framework to better
fit the lesion's boundaries. The results of evaluation tests on three
public datasets show that our proposed segmentation method outperforms the
other conventional state-of-the-art segmentation algorithms and its performance
is comparable with most recent approaches that are based on deep convolutional
neural networks.
|
In order to detect the lesion in the presence of these problems, we propose a supervised saliency detection method tailored for dermoscopic images based on the discriminative regional feature integration (DRFI).
|
http://arxiv.org/abs/1703.00087v4
|
http://arxiv.org/pdf/1703.00087v4.pdf
| null |
[
"Mostafa Jahanifar",
"Neda Zamani Tajeddin",
"Babak Mohammadzadeh Asl",
"Ali Gooya"
] |
[
"Lesion Segmentation",
"Saliency Detection",
"Segmentation"
] | 2017-02-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/conditional-end-to-end-audio-transforms
|
1804.00047
| null | null |
Conditional End-to-End Audio Transforms
|
We present an end-to-end method for transforming audio from one style to
another. For the case of speech, by conditioning on speaker identities, we can
train a single model to transform words spoken by multiple people into multiple
target voices. For the case of music, we can specify musical instruments and
achieve the same result. Architecturally, our method is a fully-differentiable
sequence-to-sequence model based on convolutional and hierarchical recurrent
neural networks. It is designed to capture long-term acoustic dependencies,
requires minimal post-processing, and produces realistic audio transforms.
Ablation studies confirm that our model can separate speaker and instrument
properties from acoustic content at different receptive fields. Empirically,
our method achieves competitive performance on community-standard datasets.
| null |
http://arxiv.org/abs/1804.00047v2
|
http://arxiv.org/pdf/1804.00047v2.pdf
| null |
[
"Albert Haque",
"Michelle Guo",
"Prateek Verma"
] |
[] | 2018-03-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/respond-cam-analyzing-deep-models-for-3d
|
1806.00102
| null | null |
Respond-CAM: Analyzing Deep Models for 3D Imaging Data by Visualizations
|
The convolutional neural network (CNN) has become a powerful tool for various
biomedical image analysis tasks, but there is a lack of visual explanation for
the machinery of CNNs. In this paper, we present a novel algorithm,
Respond-weighted Class Activation Mapping (Respond-CAM), for making CNN-based
models interpretable by visualizing input regions that are important for
predictions, especially for biomedical 3D imaging data inputs. Our method uses
the gradients of any target concept (e.g. the score of the target class) that flow
into a convolutional layer. The weighted feature maps are combined to produce a
heatmap that highlights the important regions in the image for predicting the
target concept. We prove a preferable sum-to-score property of the Respond-CAM
and verify its significant improvement on 3D images from the current
state-of-the-art approach. Our tests on Cellular Electron Cryo-Tomography 3D
images show that Respond-CAM achieves superior performance on visualizing the
CNNs with 3D biomedical image inputs, and is able to get reasonably good
results on visualizing the CNNs with natural image inputs. The Respond-CAM is
an efficient and reliable approach for visualizing the CNN machinery, and is
applicable to a wide variety of CNN model families and image analysis tasks.
| null |
http://arxiv.org/abs/1806.00102v2
|
http://arxiv.org/pdf/1806.00102v2.pdf
| null |
[
"Guannan Zhao",
"Bo Zhou",
"Kaiwen Wang",
"Rui Jiang",
"Min Xu"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
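Respond-CAM, per the abstract above, combines gradient-weighted convolutional feature maps into a heatmap over 3-D inputs. The sketch below implements a generic gradient-weighted class activation map for channel-first 3-D activations; the "respond-weighted" channel weighting shown (activation-weighted gradients normalised by the summed activation) is one reading of the method's name and should be treated as an approximation, with the plain Grad-CAM mean-gradient weighting included for comparison. The fake activations and gradients are placeholders for what a real network would provide.

```python
import numpy as np

def gradient_weighted_cam(feature_maps, gradients, respond_weighted=True):
    """Combine convolutional feature maps into a class-relevance heatmap.
    feature_maps, gradients: arrays of shape (channels, D, H, W) for 3-D inputs.
    respond_weighted=True weights each channel by sum(A * dY/dA) / sum(A)
    (Respond-CAM-like); False uses the spatial mean of the gradients (Grad-CAM-like)."""
    eps = 1e-8
    axes = tuple(range(1, feature_maps.ndim))
    if respond_weighted:
        weights = (feature_maps * gradients).sum(axis=axes) / (feature_maps.sum(axis=axes) + eps)
    else:
        weights = gradients.mean(axis=axes)
    cam = np.tensordot(weights, feature_maps, axes=(0, 0))   # weighted sum over channels
    return np.maximum(cam, 0.0)                              # keep positively contributing regions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = np.abs(rng.normal(size=(16, 8, 8, 8)))    # fake activations from a 3-D conv layer
    grads = rng.normal(size=(16, 8, 8, 8))            # fake gradients of the class score
    print(gradient_weighted_cam(feats, grads).shape)  # -> (8, 8, 8) heatmap
```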
https://paperswithcode.com/paper/toward-diverse-text-generation-with-inverse
|
1804.11258
| null | null |
Toward Diverse Text Generation with Inverse Reinforcement Learning
|
Text generation is a crucial task in NLP. Recently, several adversarial
generative models have been proposed to improve the exposure bias problem in
text generation. Though these models gain great success, they still suffer from
the problems of reward sparsity and mode collapse. In order to address these
two problems, in this paper, we employ inverse reinforcement learning (IRL) for
text generation. Specifically, the IRL framework learns a reward function on
training data, and then an optimal policy to maximize the expected total reward.
Similar to the adversarial models, the reward and policy function in IRL are
optimized alternately. Our method has two advantages: (1) the reward function
can produce more dense reward signals. (2) the generation policy, trained by
"entropy regularized" policy gradient, encourages to generate more diversified
texts. Experiment results demonstrate that our proposed method can generate
higher quality texts than the previous methods.
|
Similar to the adversarial models, the reward and policy function in IRL are optimized alternately.
|
http://arxiv.org/abs/1804.11258v3
|
http://arxiv.org/pdf/1804.11258v3.pdf
| null |
[
"Zhan Shi",
"Xinchi Chen",
"Xipeng Qiu",
"Xuanjing Huang"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Text Generation"
] | 2018-04-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/information-maximizing-sampling-to-promote
|
1806.02523
| null | null |
Information-Maximizing Sampling to Promote Tracking-by-Detection
|
The performance of an adaptive tracking-by-detection algorithm not only
depends on the classification and updating processes but also on the sampling.
Typically, such trackers select their samples from the vicinity of the last
predicted object location, or from its expected location using a pre-defined
motion model, which exploits neither the contents of the samples nor the
information provided by the classifier. We introduce the idea of most
informative sampling, in which the sampler attempts to select samples that
trouble the classifier of a discriminative tracker. We then propose an active
discriminative co-tracker that embeds an adversarial sampler to increase its
robustness against various tracking challenges. Experiments show that our
proposed tracker outperforms state-of-the-art trackers on various benchmark
videos.
| null |
http://arxiv.org/abs/1806.02523v1
|
http://arxiv.org/pdf/1806.02523v1.pdf
| null |
[
"Kourosh Meshgi",
"Maryam Sadat Mirzaei",
"Shigeyuki Oba"
] |
[
"General Classification"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/efficient-diverse-ensemble-for-discriminative
|
1711.06564
| null | null |
Efficient Diverse Ensemble for Discriminative Co-Tracking
|
Ensemble discriminative tracking utilizes a committee of classifiers, to
label data samples, which are in turn, used for retraining the tracker to
localize the target using the collective knowledge of the committee. Committee
members could vary in their features, memory update schemes, or training data,
however, it is inevitable to have committee members that excessively agree
because of large overlaps in their version space. To remove this redundancy and
have an effective ensemble learning, it is critical for the committee to
include consistent hypotheses that differ from one-another, covering the
version space with minimum overlaps. In this study, we propose an online
ensemble tracker that directly generates a diverse committee by generating an
efficient set of artificial training data. The artificial data is sampled from the
empirical distribution of the samples taken from both target and background,
whereas the process is governed by query-by-committee to shrink the overlap
between classifiers. The experimental results demonstrate that the proposed
scheme outperforms conventional ensemble trackers on public benchmarks.
| null |
http://arxiv.org/abs/1711.06564v2
|
http://arxiv.org/pdf/1711.06564v2.pdf
|
CVPR 2018 6
|
[
"Kourosh Meshgi",
"Shigeyuki Oba",
"Shin Ishii"
] |
[
"Ensemble Learning"
] | 2017-11-16T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Meshgi_Efficient_Diverse_Ensemble_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Meshgi_Efficient_Diverse_Ensemble_CVPR_2018_paper.pdf
|
efficient-diverse-ensemble-for-discriminative-1
| null |
[] |
https://paperswithcode.com/paper/global-local-airborne-mapping-glam
|
1706.01580
| null | null |
Global-Local Airborne Mapping (GLAM): Reconstructing a City from Aerial Videos
|
Monocular visual SLAM has become an attractive practical approach for robot
localization and 3D environment mapping, since cameras are small, lightweight,
inexpensive, and produce high-rate, high-resolution data streams. Although
numerous robust tools have been developed, most existing systems are designed
to operate in terrestrial environments and at relatively small scale (a few
thousand frames) due to constraints on computation and storage.
In this paper, we present a feature-based visual SLAM system for aerial video
whose simple design permits near real-time operation, and whose scalability
permits large-area mapping using tens of thousands of frames, all on a single
conventional computer. Our approach consists of two parallel threads: the first
incrementally creates small locally consistent submaps and estimates camera
poses at video rate; the second aligns these submaps with one another to
produce a single globally consistent map via factor graph optimization over
both poses and landmarks. Scale drift is minimized through the use of
7-degree-of-freedom similarity transformations during submap alignment.
We quantify our system's performance on both simulated and real data sets,
and demonstrate city-scale map reconstruction accurate to within 2 meters using
nearly 90,000 aerial video frames - to our knowledge, the largest and fastest
such reconstruction to date.
| null |
http://arxiv.org/abs/1706.01580v2
|
http://arxiv.org/pdf/1706.01580v2.pdf
| null |
[
"Hasnain Vohra",
"Maxim Bazik",
"Matthew Antone",
"Joseph Mundy",
"William Stephenson"
] |
[] | 2017-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/importance-weighted-generative-networks
|
1806.02512
| null | null |
Importance Weighted Generative Networks
|
Deep generative networks can simulate from a complex target distribution, by minimizing a loss with respect to samples from that distribution. However, often we do not have direct access to our target distribution - our data may be subject to sample selection bias, or may be from a different but related distribution. We present methods based on importance weighting that can estimate the loss with respect to a target distribution, even if we cannot access that distribution directly, in a variety of settings. These estimators, which differentially weight the contribution of data to the loss function, offer both theoretical guarantees and impressive empirical performance.
| null |
https://arxiv.org/abs/1806.02512v3
|
https://arxiv.org/pdf/1806.02512v3.pdf
| null |
[
"Maurice Diesendruck",
"Ethan R. Elenberg",
"Rajat Sen",
"Guy W. Cole",
"Sanjay Shakkottai",
"Sinead A. Williamson"
] |
[
"Selection bias"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
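The importance-weighted generative networks abstract above estimates a loss with respect to a target distribution from samples collected under selection bias. The snippet below is a minimal self-normalised importance-weighting demonstration on a toy expectation with known densities; the paper applies the same re-weighting idea to generative-model training losses rather than to this toy integral.

```python
import numpy as np
from scipy.stats import norm

def importance_weighted_loss(samples, loss_fn, logp_target, logp_data, self_normalise=True):
    """Estimate E_target[loss] from samples drawn from a biased data distribution,
    re-weighting each sample with w = p_target(x) / p_data(x)."""
    w = np.exp(logp_target(samples) - logp_data(samples))
    losses = loss_fn(samples)
    if self_normalise:
        return np.sum(w * losses) / np.sum(w)
    return np.mean(w * losses)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Data was collected with selection bias: N(1, 1) instead of the target N(0, 1).
    biased = rng.normal(loc=1.0, scale=1.0, size=200_000)
    loss = lambda x: x ** 2                        # E_target[x^2] should be 1 under N(0, 1)
    est = importance_weighted_loss(
        biased, loss,
        logp_target=lambda x: norm.logpdf(x, 0.0, 1.0),
        logp_data=lambda x: norm.logpdf(x, 1.0, 1.0))
    print("naive:", np.mean(loss(biased)), "importance-weighted:", est)
```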
https://paperswithcode.com/paper/exact-low-tubal-rank-tensor-recovery-from
|
1806.02511
| null | null |
Exact Low Tubal Rank Tensor Recovery from Gaussian Measurements
|
The recently proposed Tensor Nuclear Norm (TNN) [Lu et al., 2016; 2018a] is an
interesting convex penalty induced by the tensor SVD [Kilmer and Martin, 2011].
It plays a similar role as the matrix nuclear norm which is the convex
surrogate of the matrix rank. Considering that the TNN based Tensor Robust PCA
[Lu et al., 2018a] is an elegant extension of Robust PCA with a similar tight
recovery bound, it is natural to solve other low rank tensor recovery problems
extended from the matrix cases. However, the extensions and proofs are
generally tedious. The general atomic norm provides a unified view of
norms induced by low-complexity structures, e.g., the $\ell_1$-norm and nuclear
norm. The sharp estimates of the required number of generic measurements for
exact recovery based on the atomic norm are known in the literature. In this
work, with a careful choice of the atomic set, we prove that TNN is a special
atomic norm. Then by computing the Gaussian width of certain cone which is
necessary for the sharp estimate, we achieve a simple bound for guaranteed low
tubal rank tensor recovery from Gaussian measurements. Specifically, we show
that by solving a TNN minimization problem, the underlying tensor of size
$n_1\times n_2\times n_3$ with tubal rank $r$ can be exactly recovered when the
given number of Gaussian measurements is $O(r(n_1+n_2-r)n_3)$. It is order
optimal when comparing with the degrees of freedom $r(n_1+n_2-r)n_3$. Beyond
the Gaussian mapping, we also give the recovery guarantee of tensor completion
based on the uniform random mapping by TNN minimization. Numerical experiments
verify our theoretical results.
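A minimal sketch of the norm being minimized here: the TNN is computed from the t-SVD by taking an FFT along the third mode and averaging the matrix nuclear norms of the Fourier-domain frontal slices (the 1/n3 normalization follows the convention of Lu et al.; treat it as an assumption). The TNN minimization under Gaussian measurements itself requires a convex solver and is not shown.

```python
import numpy as np

def tensor_nuclear_norm(X):
    """Tensor nuclear norm induced by the t-SVD (sketch).

    X: real array of shape (n1, n2, n3). Computed as the average of the matrix
    nuclear norms of the frontal slices in the Fourier domain along mode 3.
    """
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)                    # DFT along the tube (3rd) mode
    total = 0.0
    for k in range(n3):
        s = np.linalg.svd(Xf[:, :, k], compute_uv=False)
        total += s.sum()                          # nuclear norm of one slice
    return total / n3

# A tubal-rank-1 tensor: every Fourier-domain frontal slice has matrix rank one.
rng = np.random.default_rng(0)
low = np.einsum("i,j,k->ijk", rng.normal(size=20), rng.normal(size=20), rng.normal(size=5))
dense = rng.normal(size=(20, 20, 5))
print(tensor_nuclear_norm(low), tensor_nuclear_norm(dense))
```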
|
Specifically, we show that by solving a TNN minimization problem, the underlying tensor of size $n_1\times n_2\times n_3$ with tubal rank $r$ can be exactly recovered when the given number of Gaussian measurements is $O(r(n_1+n_2-r)n_3)$.
|
http://arxiv.org/abs/1806.02511v1
|
http://arxiv.org/pdf/1806.02511v1.pdf
| null |
[
"Canyi Lu",
"Jiashi Feng",
"Zhouchen Lin",
"Shuicheng Yan"
] |
[] | 2018-06-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/stochastic-gradientmirror-descent-minimax
|
1806.00952
| null |
HJf9ZhC9FX
|
Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization
|
Stochastic descent methods (of the gradient and mirror varieties) have become
increasingly popular in optimization. In fact, it is now widely recognized that
the success of deep learning is not only due to the special deep architecture
of the models, but also due to the behavior of the stochastic descent methods
used, which play a key role in reaching "good" solutions that generalize well
to unseen data. In an attempt to shed some light on why this is the case, we
revisit some minimax properties of stochastic gradient descent (SGD) for the
square loss of linear models---originally developed in the 1990's---and extend
them to general stochastic mirror descent (SMD) algorithms for general loss
functions and nonlinear models. In particular, we show that there is a
fundamental identity which holds for SMD (and SGD) under very general
conditions, and which implies the minimax optimality of SMD (and SGD) for
sufficiently small step size, and for a general class of loss functions and
general nonlinear models. We further show that this identity can be used to
naturally establish other properties of SMD (and SGD), namely convergence and
implicit regularization for over-parameterized linear models (in what is now
being called the "interpolating regime"), some of which have been shown in
certain cases in prior literature. We also argue how this identity can be used
in the so-called "highly over-parameterized" nonlinear setting (where the
number of parameters far exceeds the number of data points) to provide insights
into why SMD (and SGD) may have similar convergence and implicit regularization
properties for deep learning.
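For reference, the standard form of the stochastic mirror descent update discussed above (the notation is the usual one and may differ slightly from the paper's):

```latex
% Stochastic mirror descent (SMD) with mirror map \psi and step size \eta:
\nabla\psi(w_i) \;=\; \nabla\psi(w_{i-1}) \;-\; \eta\,\nabla L_i(w_{i-1})
\quad\Longleftrightarrow\quad
w_i \;=\; \arg\min_{w}\; \eta\,\langle \nabla L_i(w_{i-1}),\, w\rangle + D_{\psi}(w,\, w_{i-1}),
% where D_\psi is the Bregman divergence of \psi. Choosing
% \psi(w) = \tfrac{1}{2}\|w\|_2^2 gives D_\psi(w, w') = \tfrac{1}{2}\|w - w'\|_2^2
% and recovers plain SGD: w_i = w_{i-1} - \eta\,\nabla L_i(w_{i-1}).
```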
| null |
http://arxiv.org/abs/1806.00952v4
|
http://arxiv.org/pdf/1806.00952v4.pdf
|
ICLR 2019 5
|
[
"Navid Azizan",
"Babak Hassibi"
] |
[] | 2018-06-04T00:00:00 |
https://openreview.net/forum?id=HJf9ZhC9FX
|
https://openreview.net/pdf?id=HJf9ZhC9FX
|
stochastic-gradientmirror-descent-minimax-1
| null |
[] |
https://paperswithcode.com/paper/removing-algorithmic-discrimination-with
|
1806.02510
| null | null |
Removing Algorithmic Discrimination (With Minimal Individual Error)
|
We address the problem of correcting group discriminations within a score
function, while minimizing the individual error. Each group is described by a
probability density function on the set of profiles. We first solve the problem
analytically in the case of two populations, with a uniform bonus-malus on the
zones where each population is a majority. We then address the general case of
n populations, where the entanglement of populations does not allow a similar
analytical solution. We show that an approximate solution with an arbitrarily
high level of precision can be computed with linear programming. Finally, we
address the inverse problem where the error should not go beyond a certain
value and we seek to minimize the discrimination.
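A hedged, simplified sketch of the kind of linear program the abstract alludes to, not the authors' exact formulation: profiles are discretized into bins, and a per-bin score correction is chosen to equalize the two groups' mean scores while minimizing the average absolute individual change (absolute values are linearized with auxiliary variables).

```python
import numpy as np
from scipy.optimize import linprog

B = 10
rng = np.random.default_rng(0)
pA = rng.dirichlet(np.ones(B))          # group A's mass over profile bins
pB = rng.dirichlet(np.ones(B))          # group B's mass over profile bins
s = rng.uniform(0, 1, size=B)           # current score per bin
mass = 0.5 * (pA + pB)                  # overall population mass per bin

# Variables x = [delta (B), u (B)]; minimize the mass-weighted individual error.
c = np.concatenate([np.zeros(B), mass])
# u >= |delta|  <=>  delta - u <= 0  and  -delta - u <= 0
A_ub = np.block([[np.eye(B), -np.eye(B)], [-np.eye(B), -np.eye(B)]])
b_ub = np.zeros(2 * B)
# No group discrimination after correction: (pA - pB) @ (s + delta) = 0
A_eq = np.concatenate([pA - pB, np.zeros(B)])[None, :]
b_eq = np.array([-(pA - pB) @ s])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * B + [(0, None)] * B)
delta = res.x[:B]
print("gap before:", (pA - pB) @ s, " gap after:", (pA - pB) @ (s + delta))
```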
| null |
http://arxiv.org/abs/1806.02510v1
|
http://arxiv.org/pdf/1806.02510v1.pdf
| null |
[
"El Mahdi El Mhamdi",
"Rachid Guerraoui",
"Lê Nguyên Hoang",
"Alexandre Maurer"
] |
[] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/interpretability-beyond-feature-attribution
|
1711.11279
| null | null |
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
|
The interpretation of deep learning models is a challenge due to their size,
complexity, and often opaque internal state. In addition, many systems, such as
image classifiers, operate on low-level features rather than high-level
concepts. To address these challenges, we introduce Concept Activation Vectors
(CAVs), which provide an interpretation of a neural net's internal state in
terms of human-friendly concepts. The key idea is to view the high-dimensional
internal state of a neural net as an aid, not an obstacle. We show how to use
CAVs as part of a technique, Testing with CAVs (TCAV), that uses directional
derivatives to quantify the degree to which a user-defined concept is important
to a classification result--for example, how sensitive a prediction of "zebra"
is to the presence of stripes. Using the domain of image classification as a
testing ground, we describe how CAVs may be used to explore hypotheses and
generate insights for a standard image classification network as well as a
medical application.
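A minimal sketch of the two steps described above: fit a linear classifier separating concept activations from random activations, take the (unit-norm) weight vector as the CAV, then report the fraction of examples whose class logit increases along the CAV. The activation and gradient arrays are synthetic placeholders for what a real network would provide.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """CAV = unit normal of a linear boundary between concept and random activations."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_.ravel()
    return v / np.linalg.norm(v)

def tcav_score(cav, layer_grads):
    """Fraction of class-k examples whose logit increases along the CAV.

    layer_grads: (N, D) gradients of the class-k logit w.r.t. the same layer's
    activations, one row per input example (the sign-count TCAV score).
    """
    return float(np.mean(layer_grads @ cav > 0))

# Toy usage with synthetic activations/gradients standing in for a real network.
rng = np.random.default_rng(0)
concept = rng.normal(loc=1.0, size=(200, 64))
random_ex = rng.normal(loc=0.0, size=(200, 64))
grads = rng.normal(loc=0.2, size=(500, 64))   # placeholder for d logit_k / d activations
cav = compute_cav(concept, random_ex)
print("TCAV score:", tcav_score(cav, grads))
```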
|
The interpretation of deep learning models is a challenge due to their size, complexity, and often opaque internal state.
|
http://arxiv.org/abs/1711.11279v5
|
http://arxiv.org/pdf/1711.11279v5.pdf
|
ICML 2018 7
|
[
"Been Kim",
"Martin Wattenberg",
"Justin Gilmer",
"Carrie Cai",
"James Wexler",
"Fernanda Viegas",
"Rory Sayres"
] |
[
"General Classification",
"image-classification",
"Image Classification"
] | 2017-11-30T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2089
|
http://proceedings.mlr.press/v80/kim18d/kim18d.pdf
|
interpretability-beyond-feature-attribution-1
| null |
[] |
https://paperswithcode.com/paper/fast-distributed-deep-learning-via-worker
|
1806.02508
| null | null |
Semi-Dynamic Load Balancing: Efficient Distributed Learning in Non-Dedicated Environments
|
Machine learning (ML) models are increasingly trained in clusters with non-dedicated workers possessing heterogeneous resources. In such scenarios, model training efficiency can be negatively affected by stragglers -- workers that run much slower than others. Efficient model training requires eliminating such stragglers, yet for modern ML workloads, existing load balancing strategies are inefficient and even infeasible. In this paper, we propose a novel strategy called semi-dynamic load balancing to eliminate stragglers of distributed ML workloads. The key insight is that ML workers should be load-balanced at iteration boundaries, remaining non-intrusive to intra-iteration execution. Based on this insight, we develop LB-BSP, an integrated worker coordination mechanism that adapts workers' load to their instantaneous processing capabilities by right-sizing the sample batches at the synchronization barriers. We custom-design the batch sizing algorithm for CPU and GPU clusters respectively, based on their characteristics. LB-BSP has been implemented as a Python module for ML frameworks like TensorFlow and PyTorch. Our EC2 deployment confirms that LB-BSP is practical, effective and light-weight, and is able to accelerate distributed training by up to $54\%$.
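A hedged sketch of the core batch right-sizing idea at a synchronization barrier: each worker's share of the global batch is made proportional to its measured per-sample throughput so all workers finish an iteration at roughly the same time. The exponential-moving-average smoothing and the exact proportional rule are illustrative assumptions, not LB-BSP's precise CPU/GPU algorithms.

```python
def rebalance_batches(total_batch, sample_times, prev_rates=None, alpha=0.7):
    """Assign per-worker batch sizes proportional to instantaneous throughput.

    total_batch : global batch size to preserve across the cluster.
    sample_times: measured seconds-per-sample for each worker last iteration.
    prev_rates  : optional EMA state of per-worker rates (samples/sec).
    alpha       : EMA smoothing factor for noisy measurements (illustrative).
    """
    rates = [1.0 / t for t in sample_times]
    if prev_rates is not None:
        rates = [alpha * r + (1 - alpha) * p for r, p in zip(rates, prev_rates)]
    total_rate = sum(rates)
    sizes = [max(1, round(total_batch * r / total_rate)) for r in rates]
    # Fix rounding drift so the global batch size is exactly preserved.
    sizes[sizes.index(max(sizes))] += total_batch - sum(sizes)
    return sizes, rates

# Three workers: the third is a straggler (3x slower per sample).
sizes, _ = rebalance_batches(total_batch=256, sample_times=[0.01, 0.012, 0.03])
print(sizes)   # the straggler gets a proportionally smaller batch
```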
| null |
https://arxiv.org/abs/1806.02508v2
|
https://arxiv.org/pdf/1806.02508v2.pdf
| null |
[
"Chen Chen",
"Qizhen Weng",
"Wei Wang",
"Baochun Li",
"Bo Li"
] |
[
"CPU",
"GPU"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/large-scale-classification-in-deep-neural
|
1806.02507
| null | null |
Large scale classification in deep neural network with Label Mapping
|
In recent years, deep neural networks have become widely used in machine
learning. Multi-class classification is an important class of problems in
machine learning. However, to solve such multi-class classification problems
effectively, the required network size grows hyper-linearly with the number of
classes. Therefore, it is infeasible to solve the multi-class classification
problem with a deep neural network when the number of classes is huge. This
paper presents a method, called Label Mapping (LM), that solves this problem by
decomposing the original classification problem into several smaller
sub-problems which are solvable in theory. Our method is an ensemble method
like error-correcting output codes (ECOC), but it allows base learners to be
multi-class classifiers with different numbers of class labels. We propose two
design principles for LM: one is to maximize the number of base classifiers
that can separate two different classes, and the other is to keep the base
learners as independent as possible in order to reduce redundant information.
Based on these principles, two different LM algorithms are derived using number
theory and information theory. Since each base learner can be trained
independently, it is easy to scale our method to a large-scale training system.
Experiments show that our proposed method significantly outperforms standard
one-hot encoding and ECOC in terms of accuracy and model complexity.
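A hedged sketch of the general decompose-and-vote scheme described above: each base learner is trained independently on a remapping of the original labels to a few "super-labels", and at prediction time every original class accumulates votes from the learners consistent with it. The random mappings stand in for the paper's number- and information-theoretic constructions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_mappings(n_classes, n_learners, n_sub, seed=0):
    """Random label mappings: original class -> one of n_sub super-labels per learner."""
    rng = np.random.default_rng(seed)
    return [rng.integers(0, n_sub, size=n_classes) for _ in range(n_learners)]

def fit_label_mapping(X, y, mappings):
    # Each base learner trains independently on its remapped labels, which is
    # what makes the scheme easy to scale out across machines.
    return [LogisticRegression(max_iter=500).fit(X, m[y]) for m in mappings]

def predict_label_mapping(X, learners, mappings, n_classes):
    votes = np.zeros((len(X), n_classes))
    for clf, m in zip(learners, mappings):
        pred = clf.predict(X)                       # predicted super-label per sample
        votes += (m[None, :] == pred[:, None])      # classes consistent with it get a vote
    return votes.argmax(axis=1)

# Toy usage: 20 classes decomposed into 8 learners with 4 super-labels each.
rng = np.random.default_rng(1)
centers = rng.normal(size=(20, 10)) * 3
y = rng.integers(0, 20, size=2000)
X = centers[y] + rng.normal(size=(2000, 10))
mappings = make_mappings(20, n_learners=8, n_sub=4)
learners = fit_label_mapping(X, y, mappings)
print("train acc:", (predict_label_mapping(X, learners, mappings, 20) == y).mean())
```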
| null |
http://arxiv.org/abs/1806.02507v1
|
http://arxiv.org/pdf/1806.02507v1.pdf
| null |
[
"Qizhi Zhang",
"Kuang-Chih Lee",
"Hongying Bao",
"Yuan You",
"Wenjie Li",
"Dongbai Guo"
] |
[
"BIG-bench Machine Learning",
"Classification",
"General Classification",
"Multi-class Classification"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/selecting-representative-examples-for-program
|
1711.03243
| null | null |
Selecting Representative Examples for Program Synthesis
|
Program synthesis is a class of regression problems where one seeks a
solution, in the form of a source-code program, mapping the inputs to their
corresponding outputs exactly. Due to its precise and combinatorial nature,
program synthesis is commonly formulated as a constraint satisfaction problem,
where input-output examples are encoded as constraints and solved with a
constraint solver. A key challenge of this formulation is scalability: while
constraint solvers work well with a few well-chosen examples, a large set of
examples can incur significant overhead in both time and memory. We describe a
method to discover a subset of examples that is both small and representative:
the subset is constructed iteratively, using a neural network to predict the
probability of unchosen examples conditioned on the chosen examples in the
subset, and greedily adding the least probable example. We empirically evaluate
the representativeness of the subsets constructed by our method, and
demonstrate such subsets can significantly improve synthesis time and
stability.
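A hedged sketch of the greedy loop described above: repeatedly add the unchosen example that the surrogate model judges least probable given the chosen subset. The hand-written surrogate below is only a stand-in for the paper's neural predictor of p(example | chosen subset).

```python
import numpy as np

def select_representative(examples, prob_given_subset, k):
    """Greedy subset selection: repeatedly add the least probable unchosen example."""
    chosen, remaining = [], list(range(len(examples)))
    for _ in range(k):
        probs = [prob_given_subset(examples[i], [examples[j] for j in chosen])
                 for i in remaining]
        pick = remaining[int(np.argmin(probs))]   # least probable = most informative
        chosen.append(pick)
        remaining.remove(pick)
    return chosen

# Toy usage: 1-D stand-ins for input-output examples; the surrogate scores an
# example as more probable the closer it is to something already chosen.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 10, size=50)
def surrogate(x, subset):
    return 1.0 if not subset else np.exp(-min(abs(x - s) for s in subset))
print(sorted(xs[i] for i in select_representative(xs, surrogate, 5)))
```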
|
Program synthesis is a class of regression problems where one seeks a solution, in the form of a source-code program, mapping the inputs to their corresponding outputs exactly.
|
http://arxiv.org/abs/1711.03243v3
|
http://arxiv.org/pdf/1711.03243v3.pdf
|
ICML 2018 7
|
[
"Yewen Pu",
"Zachery Miranda",
"Armando Solar-Lezama",
"Leslie Pack Kaelbling"
] |
[
"Program Synthesis"
] | 2017-11-09T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2384
|
http://proceedings.mlr.press/v80/pu18b/pu18b.pdf
|
selecting-representative-examples-for-program-1
| null |
[] |
https://paperswithcode.com/paper/gp-rvm-genetic-programing-based-symbolic
|
1806.02502
| null | null |
GP-RVM: Genetic Programing-based Symbolic Regression Using Relevance Vector Machine
|
This paper proposes a hybrid basis function construction method (GP-RVM) for
the Symbolic Regression (SR) problem, which combines an extended version of Genetic
Programming called Kaizen Programming and Relevance Vector Machine to evolve an
optimal set of basis functions. Different from traditional evolutionary
algorithms where a single individual is a complete solution, our method
proposes a solution based on linear combination of basis functions built from
individuals during the evolving process. RVM, a sparse Bayesian kernel method,
selects suitable functions to constitute the basis. RVM determines the
posterior weight of a function by evaluating its quality and sparsity. The
solution produced by GP-RVM is a sparse Bayesian linear model of the
coefficients of many non-linear functions. Our hybrid approach is focused on
nonlinear white-box models selecting the right combination of functions to
build robust predictions without prior knowledge about data. Experimental
results show that GP-RVM outperforms conventional methods, which suggest that
it is an efficient and accurate technique for solving SR. The computational
complexity of GP-RVM scales in $O( M^{3})$, where $M$ is the number of
functions in the basis set and is typically much smaller than the number $N$ of
training patterns.
| null |
http://arxiv.org/abs/1806.02502v3
|
http://arxiv.org/pdf/1806.02502v3.pdf
| null |
[
"Hossein Izadi Rad",
"Ji Feng",
"Hitoshi Iba"
] |
[
"Evolutionary Algorithms",
"regression",
"Symbolic Regression"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/simplifying-reward-design-through-divide-and
|
1806.02501
| null | null |
Simplifying Reward Design through Divide-and-Conquer
|
Designing a good reward function is essential to robot planning and
reinforcement learning, but it can also be challenging and frustrating. The
reward needs to work across multiple different environments, and that often
requires many iterations of tuning. We introduce a novel divide-and-conquer
approach that enables the designer to specify a reward separately for each
environment. By treating these separate reward functions as observations about
the underlying true reward, we derive an approach to infer a common reward
across all environments. We conduct user studies in an abstract grid world
domain and in a motion planning domain for a 7-DOF manipulator that measure
user effort and solution quality. We show that our method is faster, easier to
use, and produces a higher quality solution than the typical method of
designing a reward jointly across all environments. We additionally conduct a
series of experiments that measure the sensitivity of these results to
different properties of the reward design task, such as the number of
environments, the number of feasible solutions per environment, and the
fraction of the total features that vary within each environment. We find that
independent reward design outperforms the standard, joint, reward design
process but works best when the design problem can be divided into simpler
subproblems.
| null |
http://arxiv.org/abs/1806.02501v1
|
http://arxiv.org/pdf/1806.02501v1.pdf
| null |
[
"Ellis Ratner",
"Dylan Hadfield-Menell",
"Anca D. Dragan"
] |
[
"Motion Planning",
"Reinforcement Learning"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/safe-element-screening-for-submodular
|
1805.08527
| null | null |
Safe Element Screening for Submodular Function Minimization
|
Submodular functions are discrete analogs of convex functions, which have
applications in various fields, including machine learning and computer vision.
However, in large-scale applications, solving Submodular Function Minimization
(SFM) problems remains challenging. In this paper, we make the first attempt to
extend the emerging technique named screening in large-scale sparse learning to
SFM to accelerate its optimization process. We first conduct a careful study
of the relationships between SFM and the corresponding convex proximal
problems, as well as of accurate primal optimum estimation for these proximal
problems. Relying on this study, we subsequently propose a novel safe screening
method to quickly identify the elements guaranteed to be included (we refer to
them as active) or excluded (inactive) in the final optimal solution of SFM
during the optimization process. By removing the inactive elements and fixing
the active ones, the problem size can be dramatically reduced, leading to great
savings in the computational cost without sacrificing any accuracy. To the best
of our knowledge, the proposed method is the first screening method in the
fields of SFM and even combinatorial optimization, thus pointing out a new
direction for accelerating SFM algorithms. Experiment results on both synthetic
and real datasets demonstrate the significant speedups gained by our approach.
| null |
http://arxiv.org/abs/1805.08527v4
|
http://arxiv.org/pdf/1805.08527v4.pdf
|
ICML 2018 7
|
[
"Weizhong Zhang",
"Bin Hong",
"Lin Ma",
"Wei Liu",
"Tong Zhang"
] |
[
"Combinatorial Optimization",
"Sparse Learning"
] | 2018-05-22T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1883
|
http://proceedings.mlr.press/v80/zhang18e/zhang18e.pdf
|
safe-element-screening-for-submodular-1
| null |
[] |
https://paperswithcode.com/paper/conditional-probability-calculation-using
|
1806.02499
| null | null |
Conditional probability calculation using restricted Boltzmann machine with application to system identification
|
Probabilistic methods offer many advantages for nonlinear system
identification: noise and outliers in the data set do not affect the
probability models significantly, and input features can be extracted in
probabilistic form. The biggest obstacle is that the required probability
distributions are not easy to obtain. In this paper, we formulate nonlinear
system identification as the computation of a conditional probability. We then
modify the restricted Boltzmann machine (RBM) such that the joint probability,
the input distribution, and the conditional probability can all be calculated
from the trained RBM. Binary-encoding and continuous-valued methods are
discussed, and a universal approximation analysis for the conditional
probability based modelling is provided. We use two benchmark nonlinear systems
to compare our probabilistic modelling method with other black-box modelling
methods. The results show that the proposed method performs much better when
the noise is large and the system dynamics are complex.
| null |
http://arxiv.org/abs/1806.02499v1
|
http://arxiv.org/pdf/1806.02499v1.pdf
| null |
[
"Erick de la Rosa",
"Wen Yu"
] |
[] | 2018-06-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Restricted Boltzmann Machines**, or **RBMs**, are two-layer generative neural networks that learn a probability distribution over the inputs. They are a special class of Boltzmann Machine in that they have a restricted number of connections between visible and hidden units. Every node in the visible layer is connected to every node in the hidden layer, but no nodes in the same group are connected. RBMs are usually trained using the contrastive divergence learning procedure.\r\n\r\nImage Source: [here](https://medium.com/datatype/restricted-boltzmann-machine-a-complete-analysis-part-1-introduction-model-formulation-1a4404873b3)",
"full_name": "Restricted Boltzmann Machine",
"introduced_year": 1986,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Restricted Boltzmann Machine",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/k-beam-minimax-efficient-optimization-for
|
1805.11640
| null | null |
K-Beam Minimax: Efficient Optimization for Deep Adversarial Learning
|
Minimax optimization plays a key role in adversarial training of machine
learning algorithms, such as learning generative models, domain adaptation,
privacy preservation, and robust learning. In this paper, we demonstrate the
failure of alternating gradient descent in minimax optimization problems due to
the discontinuity of solutions of the inner maximization. To address this, we
propose a new epsilon-subgradient descent algorithm that addresses this problem
by simultaneously tracking K candidate solutions. Practically, the algorithm
can find solutions that previous saddle-point algorithms cannot find, with only
a sublinear increase of complexity in K. We analyze the conditions under which
the algorithm converges to the true solution in detail. A significant
improvement in stability and convergence speed of the algorithm is observed in
simple representative problems, GAN training, and domain-adaptation problems.
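A hedged, conceptual sketch of the K-candidate idea on a scalar toy problem (not the paper's epsilon-subgradient algorithm): maintain K candidate inner maximizers, ascend each of them, and take the descent step on the outer variable against the currently best candidate.

```python
import numpy as np

# Toy minimax problem min_u max_v f(u, v) whose inner maximization is nonconcave
# (two competing local maximizers near v = +/-1), the situation where plain
# alternating descent/ascent can latch onto the wrong inner optimum.
def f(u, v):
    return (u - v) ** 2 - (v ** 2 - 1.0) ** 2

def df_du(u, v):
    return 2.0 * (u - v)

def df_dv(u, v):
    return -2.0 * (u - v) - 4.0 * v * (v ** 2 - 1.0)

rng = np.random.default_rng(0)
u = 2.0
V = rng.uniform(-2.0, 2.0, size=4)          # K = 4 candidate maximizers (the "beam")
eta_u, eta_v = 0.05, 0.02
for _ in range(2000):
    V = V + eta_v * df_dv(u, V)             # ascent step on every candidate in parallel
    best = V[np.argmax(f(u, V))]            # currently best inner solution
    u = u - eta_u * df_du(u, best)          # descent step on u against that candidate
print("u:", round(u, 3), "candidates:", np.round(V, 3))
```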
|
Minimax optimization plays a key role in adversarial training of machine learning algorithms, such as learning generative models, domain adaptation, privacy preservation, and robust learning.
|
http://arxiv.org/abs/1805.11640v2
|
http://arxiv.org/pdf/1805.11640v2.pdf
|
ICML 2018 7
|
[
"Jihun Hamm",
"Yung-Kyun Noh"
] |
[
"Domain Adaptation"
] | 2018-05-29T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1962
|
http://proceedings.mlr.press/v80/hamm18a/hamm18a.pdf
|
k-beam-minimax-efficient-optimization-for-1
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/corpus-level-fine-grained-entity-typing
|
1708.02275
| null | null |
Corpus-level Fine-grained Entity Typing
|
This paper addresses the problem of corpus-level entity typing, i.e.,
inferring from a large corpus that an entity is a member of a class such as
"food" or "artist". The application of entity typing we are interested in is
knowledge base completion, specifically, to learn which classes an entity is a
member of. We propose FIGMENT to tackle this problem. FIGMENT is
embedding-based and combines (i) a global model that scores based on aggregated
contextual information of an entity and (ii) a context model that first scores
the individual occurrences of an entity and then aggregates the scores. Each of
the two proposed models has some specific properties. For the global model,
learning high quality entity representations is crucial because it is the only
source used for the predictions. Therefore, we introduce representations using
name and contexts of entities on the three levels of entity, word, and
character. We show each has complementary information and a multi-level
representation is the best. For the context model, we need to use distant
supervision since the context-level labels are not available for entities.
Distantly supervised labels are noisy, and this harms the performance of the models.
Therefore, we introduce and apply new algorithms for noise mitigation using
multi-instance learning. We show the effectiveness of our models in a large
entity typing dataset, built from Freebase.
| null |
http://arxiv.org/abs/1708.02275v2
|
http://arxiv.org/pdf/1708.02275v2.pdf
| null |
[
"Yadollah Yaghoobzadeh",
"Heike Adel",
"Hinrich Schütze"
] |
[
"Entity Typing",
"Knowledge Base Completion"
] | 2017-08-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/stochastic-wasserstein-barycenters
|
1802.05757
| null | null |
Stochastic Wasserstein Barycenters
|
We present a stochastic algorithm to compute the barycenter of a set of
probability distributions under the Wasserstein metric from optimal transport.
Unlike previous approaches, our method extends to continuous input
distributions and allows the support of the barycenter to be adjusted in each
iteration. We tackle the problem without regularization, allowing us to recover
a sharp output whose support is contained within the support of the true
barycenter. We give examples where our algorithm recovers a more meaningful
barycenter than previous work. Our method is versatile and can be extended to
applications such as generating super samples from a given distribution and
recovering blue noise approximations.
|
We present a stochastic algorithm to compute the barycenter of a set of probability distributions under the Wasserstein metric from optimal transport.
|
http://arxiv.org/abs/1802.05757v3
|
http://arxiv.org/pdf/1802.05757v3.pdf
|
ICML 2018 7
|
[
"Sebastian Claici",
"Edward Chien",
"Justin Solomon"
] |
[] | 2018-02-15T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2316
|
http://proceedings.mlr.press/v80/claici18a/claici18a.pdf
|
stochastic-wasserstein-barycenters-1
| null |
[] |
https://paperswithcode.com/paper/estimating-train-delays-in-a-large-rail
|
1806.02825
| null | null |
Estimating Train Delays in a Large Rail Network Using a Zero Shot Markov Model
|
India runs the fourth largest railway transport network in terms of size,
carrying over 8 billion passengers per year. However, the travel experience of
passengers is frequently marked by delays, i.e., late arrivals of trains at
stations, causing inconvenience. In a first study of its kind, we model the
systemic delays in train arrivals using n-order Markov frameworks and
experiment with two regression-based models. Using train running-status data
collected over two years, we report an efficient algorithm for estimating
delays at railway stations with near-accurate results. This work can help
railways manage their resources, while also helping passengers and businesses
served by them plan their activities efficiently.
|
India runs the fourth largest railway transport network in terms of size, carrying over 8 billion passengers per year.
|
http://arxiv.org/abs/1806.02825v1
|
http://arxiv.org/pdf/1806.02825v1.pdf
| null |
[
"Ramashish Gaurav",
"Biplav Srivastava"
] |
[
"regression"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/stochastic-block-models-are-a-discrete
|
1806.02485
| null | null |
Stochastic Block Models are a Discrete Surface Tension
|
Networks, which represent agents and interactions between them, arise in
myriad applications throughout the sciences, engineering, and even the
humanities. To understand large-scale structure in a network, a common task is
to cluster a network's nodes into sets called "communities", such that there
are dense connections within communities but sparse connections between them. A
popular and statistically principled method to perform such clustering is to
use a family of generative models known as stochastic block models (SBMs). In
this paper, we show that maximum likelihood estimation in an SBM is a network
analog of a well-known continuum surface-tension problem that arises from an
application in metallurgy. To illustrate the utility of this relationship, we
implement network analogs of three surface-tension algorithms, with which we
successfully recover planted community structure in synthetic networks and
which yield fascinating insights on empirical networks that we construct from
hyperspectral videos.
|
Networks, which represent agents and interactions between them, arise in myriad applications throughout the sciences, engineering, and even the humanities.
|
http://arxiv.org/abs/1806.02485v2
|
http://arxiv.org/pdf/1806.02485v2.pdf
| null |
[
"Zachary M. Boyd",
"Mason A. Porter",
"Andrea L. Bertozzi"
] |
[
"Clustering",
"Video Semantic Segmentation"
] | 2018-06-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/deep-learning-based-inverse-method-for-layout
|
1806.03182
| null | null |
Deep learning based inverse method for layout design
|
Layout design with complex constraints is a challenging problem to solve due
to the non-uniqueness of the solution and the difficulties in incorporating the
constraints into the conventional optimization-based methods. In this paper, we
propose a design method based on the recently developed machine learning
technique, Variational Autoencoder (VAE). We utilize the learning capability of
the VAE to learn the constraints and the generative capability of the VAE to
generate design candidates that automatically satisfy all the constraints. As
such, no constraints need to be imposed during the design stage. In addition,
we show that the VAE network is also capable of learning the underlying physics
of the design problem, leading to an efficient design tool that does not need
any physical simulation once the network is constructed. We demonstrated the
performance of the method on two cases: inverse design of surface diffusion
induced morphology change and mask design for optical microlithography.
| null |
http://arxiv.org/abs/1806.03182v1
|
http://arxiv.org/pdf/1806.03182v1.pdf
| null |
[
"Yu-Jie Zhang",
"Wenjing Ye"
] |
[
"Deep Learning",
"Layout Design"
] | 2018-06-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "In today’s digital age, Solana has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Solana transaction not confirmed, your Solana wallet not showing balance, or you're trying to recover a lost Solana wallet, knowing where to get help is essential. That’s why the Solana customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Solana Customer Support Number +1-833-534-1729\r\nSolana operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Solana Transaction Not Confirmed\r\nOne of the most common concerns is when a Solana transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Solana Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Solana wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Solana Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Solana wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Solana Deposit Not Received\r\nIf someone has sent you Solana but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Solana deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Solana Transaction Stuck or Pending\r\nSometimes your Solana transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Solana Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Solana wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Solana Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Solana tech.\r\n\r\n24/7 Availability: Solana doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Solana Support and Wallet Issues\r\nQ1: Can Solana support help me recover stolen BTC?\r\nA: While Solana transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Solana transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Solana’s official number (Solana is decentralized), it connects you to trained professionals experienced in resolving all major Solana issues.\r\n\r\nFinal Thoughts\r\nSolana is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Solana transaction not confirmed, your Solana wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Solana customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Solana Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Solana Customer Service Number +1-833-534-1729",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
},
{
"code_snippet_url": "",
"description": "In today’s digital age, USD Coin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're trying to recover a lost USD Coin wallet, knowing where to get help is essential. That’s why the USD Coin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the USD Coin Customer Support Number +1-833-534-1729\r\nUSD Coin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. USD Coin Transaction Not Confirmed\r\nOne of the most common concerns is when a USD Coin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. USD Coin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A USD Coin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost USD Coin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost USD Coin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. USD Coin Deposit Not Received\r\nIf someone has sent you USD Coin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A USD Coin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. USD Coin Transaction Stuck or Pending\r\nSometimes your USD Coin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. USD Coin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word USD Coin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the USD Coin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and USD Coin tech.\r\n\r\n24/7 Availability: USD Coin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About USD Coin Support and Wallet Issues\r\nQ1: Can USD Coin support help me recover stolen BTC?\r\nA: While USD Coin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: USD Coin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not USD Coin’s official number (USD Coin is decentralized), it connects you to trained professionals experienced in resolving all major USD Coin issues.\r\n\r\nFinal Thoughts\r\nUSD Coin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the USD Coin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "USD Coin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "USD Coin Customer Service Number +1-833-534-1729",
"source_title": "Auto-Encoding Variational Bayes",
"source_url": "http://arxiv.org/abs/1312.6114v10"
}
] |
https://paperswithcode.com/paper/large-scale-sparse-inverse-covariance
|
1802.04911
| null | null |
Large-Scale Sparse Inverse Covariance Estimation via Thresholding and Max-Det Matrix Completion
|
The sparse inverse covariance estimation problem is commonly solved using an
$\ell_{1}$-regularized Gaussian maximum likelihood estimator known as
"graphical lasso", but its computational cost becomes prohibitive for large
data sets. A recent line of results showed--under mild assumptions--that the
graphical lasso estimator can be retrieved by soft-thresholding the sample
covariance matrix and solving a maximum determinant matrix completion (MDMC)
problem. This paper proves an extension of this result, and describes a
Newton-CG algorithm to efficiently solve the MDMC problem. Assuming that the
thresholded sample covariance matrix is sparse with a sparse Cholesky
factorization, we prove that the algorithm converges to an $\epsilon$-accurate
solution in $O(n\log(1/\epsilon))$ time and $O(n)$ memory. The algorithm is
highly efficient in practice: we solve the associated MDMC problems with as
many as 200,000 variables to 7-9 digits of accuracy in less than an hour on a
standard laptop computer running MATLAB.
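A hedged sketch of the first step the abstract relies on: soft-threshold the off-diagonal entries of the sample covariance to obtain the sparsity pattern. Recovering the actual graphical lasso estimate then requires solving the max-det matrix completion problem on that pattern (the Newton-CG solver in the paper), which is not shown here.

```python
import numpy as np

def soft_threshold_covariance(S, lam):
    """Soft-threshold the off-diagonal entries of a sample covariance matrix.

    Under the assumptions discussed in the paper, the sparsity pattern of this
    thresholded matrix matches the support of the graphical lasso estimate with
    regularization lam; the estimate itself is recovered by solving a max-det
    matrix completion problem restricted to that pattern.
    """
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))          # the diagonal is left untouched
    return T

# Toy usage: a chain-structured precision matrix, then thresholding its sample covariance.
rng = np.random.default_rng(0)
d, n = 8, 4000
prec = np.eye(d) + np.diag(0.4 * np.ones(d - 1), 1) + np.diag(0.4 * np.ones(d - 1), -1)
X = rng.multivariate_normal(np.zeros(d), np.linalg.inv(prec), size=n)
S = np.cov(X, rowvar=False)
T = soft_threshold_covariance(S, lam=0.1)
print("nonzero off-diagonal entries:", int((np.abs(T) > 0).sum() - d))
```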
| null |
http://arxiv.org/abs/1802.04911v3
|
http://arxiv.org/pdf/1802.04911v3.pdf
|
ICML 2018 7
|
[
"Richard Y. Zhang",
"Salar Fattahi",
"Somayeh Sojoudi"
] |
[
"Matrix Completion"
] | 2018-02-14T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2189
|
http://proceedings.mlr.press/v80/zhang18c/zhang18c.pdf
|
large-scale-sparse-inverse-covariance-1
| null |
[] |
https://paperswithcode.com/paper/interlinked-convolutional-neural-networks-for
|
1806.02479
| null | null |
Interlinked Convolutional Neural Networks for Face Parsing
|
Face parsing is a basic task in face image analysis. It amounts to labeling
each pixel with appropriate facial parts such as eyes and nose. In the paper,
we present an interlinked convolutional neural network (iCNN) for solving this
problem in an end-to-end fashion. It consists of multiple convolutional neural
networks (CNNs) taking input in different scales. A special interlinking layer
is designed to allow the CNNs to exchange information, enabling them to
integrate local and contextual information efficiently. The hallmark of iCNN is
the extensive use of downsampling and upsampling in the interlinking layers,
while traditional CNNs usually use downsampling only. A two-stage pipeline is
proposed for face parsing and both stages use iCNN. The first stage localizes
facial parts in the size-reduced image and the second stage labels the pixels
in the identified facial parts in the original image. On a benchmark dataset we
have obtained better results than the state-of-the-art methods.
| null |
http://arxiv.org/abs/1806.02479v1
|
http://arxiv.org/pdf/1806.02479v1.pdf
| null |
[
"Yisu Zhou",
"Xiaolin Hu",
"Bo Zhang"
] |
[
"Face Parsing"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/towards-robust-training-of-neural-networks-by
|
1805.09370
| null | null |
Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients
|
In recent years, neural networks have demonstrated outstanding effectiveness
in a large number of applications. However, recent works have shown that neural
networks are susceptible to adversarial examples, indicating possible flaws
intrinsic to the network structures. To address this problem and improve the
robustness of neural networks, we investigate the fundamental mechanisms behind
adversarial examples and propose a novel robust training method via regulating
adversarial gradients. The regulation effectively squeezes the adversarial
gradients of neural networks and significantly increases the difficulty of
adversarial example generation. Without any adversarial examples involved, the
robust training method could generate naturally robust networks, which are
near-immune to various types of adversarial examples. Experiments show the
naturally robust networks can achieve optimal accuracy against Fast Gradient
Sign Method (FGSM) and C\&W attacks on MNIST, Cifar10, and Google Speech
Command dataset. Moreover, our proposed method also provides neural networks
with consistent robustness against transferable attacks.
| null |
http://arxiv.org/abs/1805.09370v2
|
http://arxiv.org/pdf/1805.09370v2.pdf
| null |
[
"Fuxun Yu",
"Zirui Xu",
"Yanzhi Wang",
"ChenChen Liu",
"Xiang Chen"
] |
[] | 2018-05-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/graph-convolutional-policy-network-for-goal
|
1806.02473
| null | null |
Graph Convolutional Policy Network for Goal-Directed Molecular Graph Generation
|
Generating novel graph structures that optimize given objectives while
obeying some given underlying rules is fundamental for chemistry, biology and
social science research. This is especially important in the task of molecular
graph generation, whose goal is to discover novel molecules with desired
properties such as drug-likeness and synthetic accessibility, while obeying
physical laws such as chemical valency. However, designing models to find
molecules that optimize desired properties while incorporating highly complex
and non-differentiable rules remains a challenging task. Here we propose
Graph Convolutional Policy Network (GCPN), a general graph convolutional
network based model for goal-directed graph generation through reinforcement
learning. The model is trained to optimize domain-specific rewards and
adversarial loss through policy gradient, and acts in an environment that
incorporates domain-specific rules. Experimental results show that GCPN can
achieve 61% improvement on chemical property optimization over state-of-the-art
baselines while resembling known molecules, and achieve 184% improvement on the
constrained property optimization task.
|
Generating novel graph structures that optimize given objectives while obeying some given underlying rules is fundamental for chemistry, biology and social science research.
|
http://arxiv.org/abs/1806.02473v3
|
http://arxiv.org/pdf/1806.02473v3.pdf
|
NeurIPS 2018 12
|
[
"Jiaxuan You",
"Bowen Liu",
"Rex Ying",
"Vijay Pande",
"Jure Leskovec"
] |
[
"Graph Generation",
"Molecular Graph Generation",
"Reinforcement Learning"
] | 2018-06-07T00:00:00 |
http://papers.nips.cc/paper/7877-graph-convolutional-policy-network-for-goal-directed-molecular-graph-generation
|
http://papers.nips.cc/paper/7877-graph-convolutional-policy-network-for-goal-directed-molecular-graph-generation.pdf
|
graph-convolutional-policy-network-for-goal-1
| null |
[] |
https://paperswithcode.com/paper/stochastic-zeroth-order-optimization-via
|
1805.11811
| null | null |
Stochastic Zeroth-order Optimization via Variance Reduction method
|
Derivative-free optimization has become an important technique used in
machine learning for optimizing black-box models. To conduct updates without
explicitly computing gradient, most current approaches iteratively sample a
random search direction from Gaussian distribution and compute the estimated
gradient along that direction. However, due to the variance in the search
direction, the convergence rates and query complexities of existing methods
suffer from a factor of $d$, where $d$ is the problem dimension. In this paper,
we introduce a novel Stochastic Zeroth-order method with Variance Reduction
under Gaussian smoothing (SZVR-G) and establish the complexity for optimizing
non-convex problems. With variance reduction on both sample space and search
space, the complexity of our algorithm is sublinear in $d$ and is strictly
better than current approaches, in both smooth and non-smooth cases. Moreover,
we extend the proposed method to the mini-batch version. Our experimental
results demonstrate the superior performance of the proposed method over
existing derivative-free optimization techniques. Furthermore, we successfully
apply our method to conduct a universal black-box attack to deep neural
networks and present some interesting results.
| null |
http://arxiv.org/abs/1805.11811v3
|
http://arxiv.org/pdf/1805.11811v3.pdf
| null |
[
"Liu Liu",
"Minhao Cheng",
"Cho-Jui Hsieh",
"DaCheng Tao"
] |
[] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/quantitative-phase-imaging-and-artificial
|
1806.03982
| null | null |
Quantitative Phase Imaging and Artificial Intelligence: A Review
|
Recent advances in quantitative phase imaging (QPI) and artificial
intelligence (AI) have opened up the possibility of an exciting frontier. The
fast and label-free nature of QPI enables the rapid generation of large-scale
and uniform-quality imaging data in two, three, and four dimensions.
Subsequently, the AI-assisted interrogation of QPI data using data-driven
machine learning techniques results in a variety of biomedical applications.
Also, machine learning enhances QPI itself. Herein, we review the synergy
between QPI and machine learning with a particular focus on deep learning.
Further, we provide practical guidelines and perspectives for further
development.
| null |
http://arxiv.org/abs/1806.03982v2
|
http://arxiv.org/pdf/1806.03982v2.pdf
| null |
[
"YoungJu Jo",
"Hyungjoo Cho",
"Sang Yun Lee",
"Gunho Choi",
"Geon Kim",
"Hyun-seok Min",
"YongKeun Park"
] |
[
"BIG-bench Machine Learning"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adversarial-domain-adaptation-for
|
1806.01357
| null | null |
Adversarial Domain Adaptation for Classification of Prostate Histopathology Whole-Slide Images
|
Automatic and accurate Gleason grading of histopathology tissue slides is
crucial for prostate cancer diagnosis, treatment, and prognosis. Usually,
histopathology tissue slides from different institutions show heterogeneous
appearances because of different tissue preparation and staining procedures,
thus the predictive model learned from one domain may not be applicable to a
new domain directly. Here we propose to adopt unsupervised domain adaptation to
transfer the discriminative knowledge obtained from the source domain to the
target domain without requiring labeling of images at the target domain. The
adaptation is achieved through adversarial training to find an invariant
feature space along with the proposed Siamese architecture on the target domain
to add a regularization that is appropriate for the whole-slide images. We
validate the method on two prostate cancer datasets and obtain significant
classification improvement of Gleason scores as compared with the baseline
models.
| null |
http://arxiv.org/abs/1806.01357v2
|
http://arxiv.org/pdf/1806.01357v2.pdf
| null |
[
"Jian Ren",
"Ilker Hacihaliloglu",
"Eric A. Singer",
"David J. Foran",
"Xin Qi"
] |
[
"Domain Adaptation",
"General Classification",
"Prognosis",
"Unsupervised Domain Adaptation",
"whole slide images"
] | 2018-06-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-effect-of-the-choice-of-neural-network
|
1806.02460
| null | null |
The effect of the choice of neural network depth and breadth on the size of its hypothesis space
|
We show that the number of unique function mappings in a neural network
hypothesis space is inversely proportional to $\prod_lU_l!$, where $U_{l}$ is
the number of neurons in the hidden layer $l$.
| null |
http://arxiv.org/abs/1806.02460v1
|
http://arxiv.org/pdf/1806.02460v1.pdf
| null |
[
"Lech Szymanski",
"Brendan McCane",
"Michael Albert"
] |
[] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
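As an illustrative aside to the record above: the $\prod_l U_l!$ factor can be read as a permutation-symmetry count, since relabeling the $U_l$ units of a hidden layer (together with their weights) leaves the computed function unchanged. A minimal sketch of that factor, using a hypothetical layer configuration that is not taken from the paper:

```python
from math import factorial

def symmetry_factor(hidden_layer_sizes):
    """Number of weight configurations collapsed onto the same function
    by permuting units within each hidden layer: prod_l U_l!."""
    result = 1
    for units in hidden_layer_sizes:
        result *= factorial(units)
    return result

# Hypothetical network with hidden layers of 4, 3 and 2 units.
print(symmetry_factor([4, 3, 2]))  # 4! * 3! * 2! = 288
```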
https://paperswithcode.com/paper/learning-compact-neural-networks-with
|
1802.01223
| null | null |
Learning Compact Neural Networks with Regularization
|
Proper regularization is critical for speeding up training, improving
generalization performance, and learning compact models that are cost
efficient. We propose and analyze regularized gradient descent algorithms for
learning shallow neural networks. Our framework is general and covers
weight-sharing (convolutional networks), sparsity (network pruning), and
low-rank constraints among others. We first introduce covering dimension to
quantify the complexity of the constraint set and provide insights on the
generalization properties. Then, we show that proposed algorithms become
well-behaved and local linear convergence occurs once the amount of data
exceeds the covering dimension. Overall, our results demonstrate that
near-optimal sample complexity is sufficient for efficient learning and
illustrate how regularization can be beneficial to learn over-parameterized
networks.
| null |
http://arxiv.org/abs/1802.01223v2
|
http://arxiv.org/pdf/1802.01223v2.pdf
|
ICML 2018 7
|
[
"Samet Oymak"
] |
[
"Network Pruning"
] | 2018-02-05T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1933
|
http://proceedings.mlr.press/v80/oymak18a/oymak18a.pdf
|
learning-compact-neural-networks-with-1
| null |
[] |
https://paperswithcode.com/paper/diversity-is-all-you-need-learning-skills
|
1802.06070
| null |
SJx63jRqFm
|
Diversity is All You Need: Learning Skills without a Reward Function
|
Intelligent creatures can explore their environments and learn useful skills
without supervision. In this paper, we propose DIAYN ('Diversity is All You
Need'), a method for learning useful skills without a reward function. Our
proposed method learns skills by maximizing an information theoretic objective
using a maximum entropy policy. On a variety of simulated robotic tasks, we
show that this simple objective results in the unsupervised emergence of
diverse skills, such as walking and jumping. In a number of reinforcement
learning benchmark environments, our method is able to learn a skill that
solves the benchmark task despite never receiving the true task reward. We show
how pretrained skills can provide a good parameter initialization for
downstream tasks, and can be composed hierarchically to solve complex, sparse
reward tasks. Our results suggest that unsupervised discovery of skills can
serve as an effective pretraining mechanism for overcoming challenges of
exploration and data efficiency in reinforcement learning.
|
On a variety of simulated robotic tasks, we show that this simple objective results in the unsupervised emergence of diverse skills, such as walking and jumping.
|
http://arxiv.org/abs/1802.06070v6
|
http://arxiv.org/pdf/1802.06070v6.pdf
|
ICLR 2019 5
|
[
"Benjamin Eysenbach",
"Abhishek Gupta",
"Julian Ibarz",
"Sergey Levine"
] |
[
"All",
"Diversity",
"Meta Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Unsupervised Reinforcement Learning"
] | 2018-02-16T00:00:00 |
https://openreview.net/forum?id=SJx63jRqFm
|
https://openreview.net/pdf?id=SJx63jRqFm
|
diversity-is-all-you-need-learning-skills-1
| null |
[] |
https://paperswithcode.com/paper/visual-reasoning-by-progressive-module
|
1806.02453
| null |
B1fpDsAqt7
|
Visual Reasoning by Progressive Module Networks
|
Humans learn to solve tasks of increasing complexity by building on top of
previously acquired knowledge. Typically, there exists a natural progression in
the tasks that we learn - most do not require completely independent solutions,
but can be broken down into simpler subtasks. We propose to represent a solver
for each task as a neural module that calls existing modules (solvers for
simpler tasks) in a functional program-like manner. Lower modules are a black
box to the calling module, and communicate only via a query and an output.
Thus, a module for a new task learns to query existing modules and composes
their outputs in order to produce its own output. Our model effectively
combines previous skill-sets, does not suffer from forgetting, and is fully
differentiable. We test our model in learning a set of visual reasoning tasks,
and demonstrate improved performances in all tasks by learning progressively.
By evaluating the reasoning process using human judges, we show that our model
is more interpretable than an attention-based baseline.
|
Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output.
|
http://arxiv.org/abs/1806.02453v2
|
http://arxiv.org/pdf/1806.02453v2.pdf
|
ICLR 2019 5
|
[
"Seung Wook Kim",
"Makarand Tapaswi",
"Sanja Fidler"
] |
[
"Visual Reasoning"
] | 2018-06-06T00:00:00 |
https://openreview.net/forum?id=B1fpDsAqt7
|
https://openreview.net/pdf?id=B1fpDsAqt7
|
visual-reasoning-by-progressive-module-1
| null |
[] |
https://paperswithcode.com/paper/numtadb-assembled-bengali-handwritten-digits
|
1806.02452
| null | null |
NumtaDB - Assembled Bengali Handwritten Digits
|
To benchmark Bengali digit recognition algorithms, a large publicly available
dataset is required which is free from biases originating from geographical
location, gender, and age. With this aim in mind, NumtaDB, a dataset consisting
of more than 85,000 images of hand-written Bengali digits, has been assembled.
This paper documents the collection and curation process of numerals along with
the salient statistics of the dataset.
|
To benchmark Bengali digit recognition algorithms, a large publicly available dataset is required which is free from biases originating from geographical location, gender, and age.
|
http://arxiv.org/abs/1806.02452v1
|
http://arxiv.org/pdf/1806.02452v1.pdf
| null |
[
"Samiul Alam",
"Tahsin Reasat",
"Rashed Mohammad Doha",
"Ahmed Imtiaz Humayun"
] |
[] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hyperbolic-entailment-cones-for-learning
|
1804.01882
| null | null |
Hyperbolic Entailment Cones for Learning Hierarchical Embeddings
|
Learning graph representations via low-dimensional embeddings that preserve
relevant network properties is an important class of problems in machine
learning. We here present a novel method to embed directed acyclic graphs.
Following prior work, we first advocate for using hyperbolic spaces which
provably model tree-like structures better than Euclidean geometry. Second, we
view hierarchical relations as partial orders defined using a family of nested
geodesically convex cones. We prove that these entailment cones admit an
optimal shape with a closed form expression both in the Euclidean and
hyperbolic spaces, and they canonically define the embedding learning process.
Experiments show significant improvements of our method over strong recent
baselines both in terms of representational capacity and generalization.
|
Learning graph representations via low-dimensional embeddings that preserve relevant network properties is an important class of problems in machine learning.
|
http://arxiv.org/abs/1804.01882v3
|
http://arxiv.org/pdf/1804.01882v3.pdf
|
ICML 2018 7
|
[
"Octavian-Eugen Ganea",
"Gary Bécigneul",
"Thomas Hofmann"
] |
[
"Graph Embedding",
"Hypernym Discovery",
"Link Prediction",
"Representation Learning"
] | 2018-04-03T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2487
|
http://proceedings.mlr.press/v80/ganea18a/ganea18a.pdf
|
hyperbolic-entailment-cones-for-learning-1
| null |
[] |
https://paperswithcode.com/paper/a-finite-time-analysis-of-temporal-difference
|
1806.02450
| null | null |
A Finite Time Analysis of Temporal Difference Learning With Linear Function Approximation
|
Temporal difference learning (TD) is a simple iterative algorithm used to
estimate the value function corresponding to a given policy in a Markov
decision process. Although TD is one of the most widely used algorithms in
reinforcement learning, its theoretical analysis has proved challenging and few
guarantees on its statistical efficiency are available. In this work, we
provide a simple and explicit finite time analysis of temporal difference
learning with linear function approximation. Except for a few key insights, our
analysis mirrors standard techniques for analyzing stochastic gradient descent
algorithms, and therefore inherits the simplicity and elegance of that
literature. Final sections of the paper show how all of our main results extend
to the study of TD learning with eligibility traces, known as TD($\lambda$),
and to Q-learning applied in high-dimensional optimal stopping problems.
| null |
http://arxiv.org/abs/1806.02450v2
|
http://arxiv.org/pdf/1806.02450v2.pdf
| null |
[
"Jalaj Bhandari",
"Daniel Russo",
"Raghav Singal"
] |
[
"Q-Learning",
"Reinforcement Learning"
] | 2018-06-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Q-Learning** is an off-policy temporal difference control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\max\\_{a}Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThe learned action-value function $Q$ directly approximates $q\\_{*}$, the optimal action-value function, independent of the policy being followed.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Q-Learning",
"introduced_year": 1984,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Off-Policy TD Control",
"parent": null
},
"name": "Q-Learning",
"source_title": null,
"source_url": null
}
] |
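As an illustrative aside to the record above: the TD(0) update with linear function approximation that the paper analyzes can be written in a few lines. The sketch below uses placeholder features, step size and discount factor that are not taken from the paper:

```python
import numpy as np

def td0_update(theta, phi_s, reward, phi_s_next, gamma=0.99, alpha=0.05):
    """One TD(0) step with linear value function V(s) = theta^T phi(s):
    theta <- theta + alpha * (r + gamma * V(s') - V(s)) * phi(s)."""
    td_error = reward + gamma * (phi_s_next @ theta) - phi_s @ theta
    return theta + alpha * td_error * phi_s

# Toy example with 3-dimensional features.
theta = np.zeros(3)
theta = td0_update(theta, np.array([1.0, 0.0, 0.5]), reward=1.0,
                   phi_s_next=np.array([0.0, 1.0, 0.5]))
print(theta)
```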
https://paperswithcode.com/paper/deep-reinforcement-learning-for-general-video
|
1806.02448
| null | null |
Deep Reinforcement Learning for General Video Game AI
|
The General Video Game AI (GVGAI) competition and its associated software
framework provides a way of benchmarking AI algorithms on a large number of
games written in a domain-specific description language. While the competition
has seen plenty of interest, it has so far focused on online planning,
providing a forward model that allows the use of algorithms such as Monte Carlo
Tree Search.
In this paper, we describe how we interface GVGAI to the OpenAI Gym
environment, a widely used way of connecting agents to reinforcement learning
problems. Using this interface, we characterize how widely used implementations
of several deep reinforcement learning algorithms fare on a number of GVGAI
games. We further analyze the results to provide a first indication of the
relative difficulty of these games relative to each other, and relative to
those in the Arcade Learning Environment under similar conditions.
|
In this paper, we describe how we interface GVGAI to the OpenAI Gym environment, a widely used way of connecting agents to reinforcement learning problems.
|
http://arxiv.org/abs/1806.02448v1
|
http://arxiv.org/pdf/1806.02448v1.pdf
| null |
[
"Ruben Rodriguez Torrado",
"Philip Bontrager",
"Julian Togelius",
"Jialin Liu",
"Diego Perez-Liebana"
] |
[
"Atari Games",
"Benchmarking",
"Deep Reinforcement Learning",
"OpenAI Gym",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-ordinal-regression-network-for-monocular
|
1806.02446
| null | null |
Deep Ordinal Regression Network for Monocular Depth Estimation
|
Monocular depth estimation, which plays a crucial role in understanding 3D
scene geometry, is an ill-posed problem. Recent methods have gained significant
improvement by exploring image-level information and hierarchical features from
deep convolutional neural networks (DCNNs). These methods model depth
estimation as a regression problem and train the regression networks by
minimizing mean squared error, which suffers from slow convergence and
unsatisfactory local solutions. Besides, existing depth estimation networks
employ repeated spatial pooling operations, resulting in undesirable
low-resolution feature maps. To obtain high-resolution depth maps,
skip-connections or multi-layer deconvolution networks are required, which
complicates network training and consumes much more computations. To eliminate
or at least largely reduce these problems, we introduce a spacing-increasing
discretization (SID) strategy to discretize depth and recast depth network
learning as an ordinal regression problem. By training the network using an
ordinal regression loss, our method achieves much higher accuracy and
faster convergence. Furthermore, we adopt a multi-scale network
structure which avoids unnecessary spatial pooling and captures multi-scale
information in parallel.
The method described in this paper achieves state-of-the-art results on four
challenging benchmarks, i.e., KITTI [17], ScanNet [9], Make3D [50], and NYU
Depth v2 [42], and wins the 1st prize in Robust Vision Challenge 2018. Code has
been made available at: https://github.com/hufu6371/DORN.
|
These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions.
|
http://arxiv.org/abs/1806.02446v1
|
http://arxiv.org/pdf/1806.02446v1.pdf
|
CVPR 2018 6
|
[
"Huan Fu",
"Mingming Gong",
"Chaohui Wang",
"Kayhan Batmanghelich",
"DaCheng Tao"
] |
[
"Depth Estimation",
"Monocular Depth Estimation",
"regression"
] | 2018-06-06T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Fu_Deep_Ordinal_Regression_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Fu_Deep_Ordinal_Regression_CVPR_2018_paper.pdf
|
deep-ordinal-regression-network-for-monocular-1
| null |
[] |
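As an illustrative aside to the record above: one generic way to realize a spacing-increasing discretization is log-spaced bin edges over the depth range, so that nearby depths get finer bins than distant ones. This is a sketch of the general idea only; it is not claimed to be the paper's exact SID formula, and the depth range and bin count are placeholders:

```python
import numpy as np

def log_spaced_thresholds(d_min, d_max, num_bins):
    """Bin edges whose width grows with depth (spacing-increasing)."""
    return np.exp(np.linspace(np.log(d_min), np.log(d_max), num_bins + 1))

def depth_to_ordinal_label(depth, thresholds):
    """Ordinal label = index of the bin containing the depth value."""
    return int(np.searchsorted(thresholds, depth, side="right") - 1)

edges = log_spaced_thresholds(d_min=1.0, d_max=80.0, num_bins=10)
print(edges)
print(depth_to_ordinal_label(5.0, edges))
```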
https://paperswithcode.com/paper/studying-the-difference-between-natural-and
|
1806.02437
| null | null |
Studying the Difference Between Natural and Programming Language Corpora
|
Code corpora, as observed in large software systems, are now known to be far
more repetitive and predictable than natural language corpora. But why? Does
the difference simply arise from the syntactic limitations of programming
languages? Or does it arise from the differences in authoring decisions made by
the writers of these natural and programming language texts? We conjecture that
the differences are not entirely due to syntax, but also from the fact that
reading and writing code is un-natural for humans, and requires substantial
mental effort; so, people prefer to write code in ways that are familiar to
both reader and writer. To support this argument, we present results from two
sets of studies: 1) a first set aimed at attenuating the effects of syntax, and
2) a second, aimed at measuring repetitiveness of text written in other
settings (e.g. second language, technical/specialized jargon), which are also
effortful to write. We find that this repetition in source code is not
entirely the result of grammar constraints, and thus some repetition must
result from human choice. While the evidence we find of similar repetitive
behavior in technical and learner corpora does not conclusively show that such
language is used by humans to mitigate difficulty, it is consistent with that
theory.
| null |
http://arxiv.org/abs/1806.02437v1
|
http://arxiv.org/pdf/1806.02437v1.pdf
| null |
[
"Casey Casalnuovo",
"Kenji Sagae",
"Prem Devanbu"
] |
[] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/rethinking-radiology-an-analysis-of-different
|
1806.03981
| null | null |
Rethinking Radiology: An Analysis of Different Approaches to BraTS
|
This paper discusses the deep learning architectures currently used for
pixel-wise segmentation of primary and secondary glioblastomas and low-grade
gliomas. We implement various models such as the popular UNet architecture and
compare the performance of these implementations on the BRATS dataset. This
paper will explore the different approaches and combinations, offering an in
depth discussion of how they perform and how we may improve upon them using
more recent advancements in deep learning architectures.
| null |
http://arxiv.org/abs/1806.03981v1
|
http://arxiv.org/pdf/1806.03981v1.pdf
| null |
[
"William Bakst",
"Linus Meyer-Teruel",
"Jasdeep Singh"
] |
[
"Deep Learning"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/scalable-k-means-clustering-via-lightweight
|
1702.08248
| null | null |
Scalable k-Means Clustering via Lightweight Coresets
|
Coresets are compact representations of data sets such that models trained on
a coreset are provably competitive with models trained on the full data set. As
such, they have been successfully used to scale up clustering models to massive
data sets. While existing approaches generally only allow for multiplicative
approximation errors, we propose a novel notion of lightweight coresets that
allows for both multiplicative and additive errors. We provide a single
algorithm to construct lightweight coresets for k-means clustering as well as
soft and hard Bregman clustering. The algorithm is substantially faster than
existing constructions, embarrassingly parallel, and the resulting coresets are
smaller. We further show that the proposed approach naturally generalizes to
statistical k-means clustering and that, compared to existing results, it can
be used to compute smaller summaries for empirical risk minimization. In
extensive experiments, we demonstrate that the proposed algorithm outperforms
existing data summarization strategies in practice.
|
As such, they have been successfully used to scale up clustering models to massive data sets.
|
http://arxiv.org/abs/1702.08248v2
|
http://arxiv.org/pdf/1702.08248v2.pdf
| null |
[
"Olivier Bachem",
"Mario Lucic",
"Andreas Krause"
] |
[
"Clustering",
"Data Summarization"
] | 2017-02-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://cryptoabout.info",
"description": "**k-Means Clustering** is a clustering algorithm that divides a training set into $k$ different clusters of examples that are near each other. It works by initializing $k$ different centroids {$\\mu\\left(1\\right),\\ldots,\\mu\\left(k\\right)$} to different values, then alternating between two steps until convergence:\r\n\r\n(i) each training example is assigned to cluster $i$ where $i$ is the index of the nearest centroid $\\mu^{(i)}$\r\n\r\n(ii) each centroid $\\mu^{(i)}$ is updated to the mean of all training examples $x^{(j)}$ assigned to cluster $i$.\r\n\r\nText Source: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [scikit-learn](https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html)",
"full_name": "k-Means Clustering",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Clustering** methods cluster a dataset so that similar datapoints are located in the same group. Below you can find a continuously updating list of clustering methods.",
"name": "Clustering",
"parent": null
},
"name": "k-Means Clustering",
"source_title": null,
"source_url": null
}
] |
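As an illustrative aside to the k-Means Clustering entry above: the two alternating steps it describes (assign points to the nearest centroid, then move each centroid to the mean of its points) fit in a short sketch. The toy data and iteration count below are placeholders, and this is plain k-means rather than the paper's coreset construction:

```python
import numpy as np

def kmeans(X, k, n_iters=50, seed=0):
    """Minimal k-means: alternate (i) nearest-centroid assignment and
    (ii) recomputing each centroid as the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

# Toy data: two well-separated blobs.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
labels, centroids = kmeans(X, k=2)
print(centroids)
```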
https://paperswithcode.com/paper/text-independent-speaker-verification-using
|
1705.09422
| null | null |
Text-Independent Speaker Verification Using 3D Convolutional Neural Networks
|
In this paper, a novel method using 3D Convolutional Neural Network (3D-CNN)
architecture has been proposed for speaker verification in the text-independent
setting. One of the main challenges is the creation of the speaker models. Most
of the previously-reported approaches create speaker models based on averaging
the extracted features from utterances of the speaker, which is known as the
d-vector system. In our paper, we propose an adaptive feature learning by
utilizing the 3D-CNNs for direct speaker model creation in which, for both
development and enrollment phases, an identical number of spoken utterances per
speaker is fed to the network for representing the speakers' utterances and
creation of the speaker model. This leads to simultaneously capturing the
speaker-related information and building a more robust system to cope with
within-speaker variation. We demonstrate that the proposed method significantly
outperforms the traditional d-vector verification system. Moreover, the
proposed system can also be an alternative to the traditional d-vector system
which is a one-shot speaker modeling system by utilizing 3D-CNNs.
|
In our paper, we propose an adaptive feature learning by utilizing the 3D-CNNs for direct speaker model creation in which, for both development and enrollment phases, an identical number of spoken utterances per speaker is fed to the network for representing the speakers' utterances and creation of the speaker model.
|
http://arxiv.org/abs/1705.09422v7
|
http://arxiv.org/pdf/1705.09422v7.pdf
| null |
[
"Amirsina Torfi",
"Jeremy Dawson",
"Nasser M. Nasrabadi"
] |
[
"Speaker Verification",
"Text-Independent Speaker Verification"
] | 2017-05-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-variational-reinforcement-learning-for
|
1806.02426
| null | null |
Deep Variational Reinforcement Learning for POMDPs
|
Many real-world sequential decision making problems are partially observable
by nature, and the environment model is typically unknown. Consequently, there
is great need for reinforcement learning methods that can tackle such problems
given only a stream of incomplete and noisy observations. In this paper, we
propose deep variational reinforcement learning (DVRL), which introduces an
inductive bias that allows an agent to learn a generative model of the
environment and perform inference in that model to effectively aggregate the
available information. We develop an n-step approximation to the evidence lower
bound (ELBO), allowing the model to be trained jointly with the policy. This
ensures that the latent state representation is suitable for the control task.
In experiments on Mountain Hike and flickering Atari we show that our method
outperforms previous approaches relying on recurrent neural networks to encode
the past.
|
Many real-world sequential decision making problems are partially observable by nature, and the environment model is typically unknown.
|
http://arxiv.org/abs/1806.02426v1
|
http://arxiv.org/pdf/1806.02426v1.pdf
|
ICML 2018 7
|
[
"Maximilian Igl",
"Luisa Zintgraf",
"Tuan Anh Le",
"Frank Wood",
"Shimon Whiteson"
] |
[
"Decision Making",
"Inductive Bias",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Sequential Decision Making"
] | 2018-06-06T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2456
|
http://proceedings.mlr.press/v80/igl18a/igl18a.pdf
|
deep-variational-reinforcement-learning-for-1
| null |
[] |
https://paperswithcode.com/paper/graphical-posterior-predictive-classifier
|
1707.06792
| null | null |
Graphical posterior predictive classifier: Bayesian model averaging with particle Gibbs
|
In this study, we present a multi-class graphical Bayesian predictive
classifier that incorporates the uncertainty in the model selection into the
standard Bayesian formalism. For each class, the dependence structure
underlying the observed features is represented by a set of decomposable
Gaussian graphical models. Emphasis is then placed on the Bayesian model
averaging which takes full account of the class-specific model uncertainty by
averaging over the posterior graph model probabilities. An explicit evaluation
of the model probabilities is well known to be infeasible. To address this
issue, we consider the particle Gibbs strategy of Olsson et al. (2018b) for
posterior sampling from decomposable graphical models which utilizes the
Christmas tree algorithm of Olsson et al. (2018a) as proposal kernel. We also
derive a strong hyper Markov law which we call the hyper normal Wishart law
that allows us to perform the resultant Bayesian calculations locally. The proposed
predictive graphical classifier reveals superior performance compared to the
ordinary Bayesian predictive rule that does not account for the model
uncertainty, as well as to a number of out-of-the-box classifiers.
|
In this study, we present a multi-class graphical Bayesian predictive classifier that incorporates the uncertainty in the model selection into the standard Bayesian formalism.
|
http://arxiv.org/abs/1707.06792v4
|
http://arxiv.org/pdf/1707.06792v4.pdf
| null |
[
"Tatjana Pavlenko",
"Felix Leopoldo Rios"
] |
[
"Model Selection"
] | 2017-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/action4d-real-time-action-recognition-in-the
|
1806.02424
| null | null |
Action4D: Real-time Action Recognition in the Crowd and Clutter
|
Recognizing every person's action in a crowded and cluttered environment is a
challenging task. In this paper, we propose a real-time action recognition
method, Action4D, which gives reliable and accurate results in the real-world
settings. We propose to tackle the action recognition problem using a holistic
4D "scan" of a cluttered scene to include every detail about the people and
environment. Recognizing multiple people's actions in the cluttered 4D
representation is a new problem. In this paper, we propose novel methods to
solve this problem. We propose a new method to track people in 4D, which can
reliably detect and follow each person in real time. We propose a new deep
neural network, the Action4D-Net, to recognize the action of each tracked
person. The Action4D-Net's novel structure uses both the global feature and the
focused attention to achieve state-of-the-art result. Our real-time method is
invariant to camera view angles, resistant to clutter, and able to handle crowds.
The experimental results show that the proposed method is fast, reliable and
accurate. Our method paves the way to action recognition in the real-world
applications and is ready to be deployed to enable smart homes, smart factories
and smart stores.
| null |
http://arxiv.org/abs/1806.02424v1
|
http://arxiv.org/pdf/1806.02424v1.pdf
| null |
[
"Quanzeng You",
"Hao Jiang"
] |
[
"Action Recognition",
"Temporal Action Localization"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/generalization-without-systematicity-on-the
|
1711.00350
| null | null |
Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks
|
Humans can understand and produce new utterances effortlessly, thanks to
their compositional skills. Once a person learns the meaning of a new verb
"dax," he or she can immediately understand the meaning of "dax twice" or "sing
and dax." In this paper, we introduce the SCAN domain, consisting of a set of
simple compositional navigation commands paired with the corresponding action
sequences. We then test the zero-shot generalization capabilities of a variety
of recurrent neural networks (RNNs) trained on SCAN with sequence-to-sequence
methods. We find that RNNs can make successful zero-shot generalizations when
the differences between training and test commands are small, so that they can
apply "mix-and-match" strategies to solve the task. However, when
generalization requires systematic compositional skills (as in the "dax"
example above), RNNs fail spectacularly. We conclude with a proof-of-concept
experiment in neural machine translation, suggesting that lack of systematicity
might be partially responsible for neural networks' notorious training data
thirst.
|
Humans can understand and produce new utterances effortlessly, thanks to their compositional skills.
|
http://arxiv.org/abs/1711.00350v3
|
http://arxiv.org/pdf/1711.00350v3.pdf
|
ICML 2018 7
|
[
"Brenden M. Lake",
"Marco Baroni"
] |
[
"Machine Translation",
"Translation",
"Zero-shot Generalization"
] | 2017-10-31T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1938
|
http://proceedings.mlr.press/v80/lake18a/lake18a.pdf
|
generalization-without-systematicity-on-the-1
| null |
[] |
https://paperswithcode.com/paper/on-multi-layer-basis-pursuit-efficient
|
1806.00701
| null | null |
On Multi-Layer Basis Pursuit, Efficient Algorithms and Convolutional Neural Networks
|
Parsimonious representations are ubiquitous in modeling and processing
information. Motivated by the recent Multi-Layer Convolutional Sparse Coding
(ML-CSC) model, we herein generalize the traditional Basis Pursuit problem to a
multi-layer setting, introducing similar sparse enforcing penalties at
different representation layers in a symbiotic relation between synthesis and
analysis sparse priors. We explore different iterative methods to solve this
new problem in practice, and we propose a new Multi-Layer Iterative Soft
Thresholding Algorithm (ML-ISTA), as well as a fast version (ML-FISTA). We show
that these nested first order algorithms converge, in the sense that the
function value of near-fixed points can get arbitrarily close to the solution
of the original problem.
We further show how these algorithms effectively implement particular
recurrent convolutional neural networks (CNNs) that generalize feed-forward
ones without introducing any parameters. We present and analyze different
architectures resulting from unfolding the iterations of the proposed pursuit
algorithms, including a new Learned ML-ISTA, providing a principled way to
construct deep recurrent CNNs. Unlike other similar constructions, these
architectures unfold a global pursuit holistically for the entire network. We
demonstrate the emerging constructions in a supervised learning setting,
consistently improving the performance of classical CNNs while maintaining the
number of parameters constant.
|
Parsimonious representations are ubiquitous in modeling and processing information.
|
http://arxiv.org/abs/1806.00701v5
|
http://arxiv.org/pdf/1806.00701v5.pdf
| null |
[
"Jeremias Sulam",
"Aviad Aberdam",
"Amir Beck",
"Michael Elad"
] |
[] | 2018-06-02T00:00:00 | null | null | null | null |
[] |
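As an illustrative aside to the record above: the classical single-layer ISTA iteration for Basis Pursuit denoising, $\min_x \tfrac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_1$, is the building block that the multi-layer algorithms generalize. The sketch below is a standard textbook version with placeholder data, not the paper's ML-ISTA:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iters=200):
    """Iterative soft-thresholding for 0.5*||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

A = np.random.randn(30, 100)
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 0.5]
y = A @ x_true + 0.01 * np.random.randn(30)
print(np.nonzero(ista(A, y, lam=0.1))[0])
```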
https://paperswithcode.com/paper/human-aided-multi-entity-bayesian-networks
|
1806.02421
| null | null |
Human-aided Multi-Entity Bayesian Networks Learning from Relational Data
|
An Artificial Intelligence (AI) system is an autonomous system which emulates
human mental and physical activities such as Observe, Orient, Decide, and Act,
called the OODA process. An AI system performing the OODA process requires a
semantically rich representation to handle a complex real world situation and
ability to reason under uncertainty about the situation. Multi-Entity Bayesian
Networks (MEBNs) combine First-Order Logic with Bayesian Networks for
representing and reasoning about uncertainty in complex, knowledge-rich
domains. MEBN goes beyond standard Bayesian networks to enable reasoning about
an unknown number of entities interacting with each other in various types of
relationships, a key requirement for the OODA process of an AI system. MEBN
models have heretofore been constructed manually by a domain expert. However,
manual MEBN modeling is labor-intensive and insufficiently agile. To address
these problems, an efficient method is needed for MEBN modeling. One of the
methods is to use machine learning to learn a MEBN model in whole or in part
from data. In the era of Big Data, data-rich environments, characterized by
uncertainty and complexity, have become ubiquitous. The larger the data sample
is, the more accurate the results of the machine learning approach can be.
Therefore, machine learning has potential to improve the quality of MEBN models
as well as the effectiveness for MEBN modeling. In this research, we study a
MEBN learning framework to develop a MEBN model from a combination of domain
expert's knowledge and data. To evaluate the MEBN learning framework, we
conduct an experiment to compare the MEBN learning framework and the existing
manual MEBN modeling in terms of development efficiency.
| null |
http://arxiv.org/abs/1806.02421v1
|
http://arxiv.org/pdf/1806.02421v1.pdf
| null |
[
"Cheol Young Park",
"Kathryn Blackmond Laskey"
] |
[
"BIG-bench Machine Learning"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/finding-convincing-arguments-using-scalable
|
1806.02418
| null | null |
Finding Convincing Arguments Using Scalable Bayesian Preference Learning
|
We introduce a scalable Bayesian preference learning method for identifying
convincing arguments in the absence of gold-standard ratings or rankings. In
contrast to previous work, we avoid the need for separate methods to perform
quality control on training data, predict rankings and perform pairwise
classification. Bayesian approaches are an effective solution when faced with
sparse or noisy training data, but have not previously been used to identify
convincing arguments. One issue is scalability, which we address by developing
a stochastic variational inference method for Gaussian process (GP) preference
learning. We show how our method can be applied to predict argument
convincingness from crowdsourced data, outperforming the previous
state-of-the-art, particularly when trained with small amounts of unreliable
data. We demonstrate how the Bayesian approach enables more effective active
learning, thereby reducing the amount of data required to identify convincing
arguments for new users and domains. While word embeddings are principally used
with neural networks, our results show that word embeddings in combination with
linguistic features also benefit GPs when predicting argument convincingness.
|
We introduce a scalable Bayesian preference learning method for identifying convincing arguments in the absence of gold-standard ratings or rankings.
|
http://arxiv.org/abs/1806.02418v1
|
http://arxiv.org/pdf/1806.02418v1.pdf
|
TACL 2018 1
|
[
"Edwin Simpson",
"Iryna Gurevych"
] |
[
"Active Learning",
"Variational Inference",
"Word Embeddings"
] | 2018-06-06T00:00:00 |
https://aclanthology.org/Q18-1026
|
https://aclanthology.org/Q18-1026.pdf
|
finding-convincing-arguments-using-scalable-1
| null |
[
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] |
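As an illustrative aside to the Gaussian Process entry above: exact GP regression with an RBF kernel shows how a similarity measure between points yields predictions with uncertainty. This is a generic sketch with placeholder data and hyperparameters, not the paper's scalable variational preference-learning method:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 l^2))."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Posterior mean and variance of exact GP regression."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    cov = rbf_kernel(X_test, X_test) - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)

X = np.linspace(0, 5, 20)[:, None]
y = np.sin(X).ravel()
mean, var = gp_posterior(X, y, np.array([[2.5]]))
print(mean, var)
```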
https://paperswithcode.com/paper/detecting-spacecraft-anomalies-using-lstms
|
1802.04431
| null | null |
Detecting Spacecraft Anomalies Using LSTMs and Nonparametric Dynamic Thresholding
|
As spacecraft send back increasing amounts of telemetry data, improved
anomaly detection systems are needed to lessen the monitoring burden placed on
operations engineers and reduce operational risk. Current spacecraft monitoring
systems only target a subset of anomaly types and often require costly expert
knowledge to develop and maintain due to challenges involving scale and
complexity. We demonstrate the effectiveness of Long Short-Term Memory (LSTM)
networks, a type of Recurrent Neural Network (RNN), in overcoming these issues
using expert-labeled telemetry anomaly data from the Soil Moisture Active
Passive (SMAP) satellite and the Mars Science Laboratory (MSL) rover,
Curiosity. We also propose a complementary unsupervised and nonparametric
anomaly thresholding approach developed during a pilot implementation of an
anomaly detection system for SMAP, and offer false positive mitigation
strategies along with other key improvements and lessons learned during
development.
|
As spacecraft send back increasing amounts of telemetry data, improved anomaly detection systems are needed to lessen the monitoring burden placed on operations engineers and reduce operational risk.
|
http://arxiv.org/abs/1802.04431v3
|
http://arxiv.org/pdf/1802.04431v3.pdf
| null |
[
"Kyle Hundman",
"Valentino Constantinou",
"Christopher Laporte",
"Ian Colwell",
"Tom Soderstrom"
] |
[
"Anomaly Detection"
] | 2018-02-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/gaussian-mixture-reduction-for-time
|
1806.02415
| null | null |
Gaussian Mixture Reduction for Time-Constrained Approximate Inference in Hybrid Bayesian Networks
|
Hybrid Bayesian Networks (HBNs), which contain both discrete and continuous
variables, arise naturally in many application areas (e.g., image
understanding, data fusion, medical diagnosis, fraud detection). This paper
concerns inference in an important subclass of HBNs, the conditional Gaussian
(CG) networks, in which all continuous random variables have Gaussian
distributions and all children of continuous random variables must be
continuous. Inference in CG networks can be NP-hard even for special-case
structures, such as poly-trees, where inference in discrete Bayesian networks
can be performed in polynomial time. Therefore, approximate inference is
required. In approximate inference, it is often necessary to trade off accuracy
against solution time. This paper presents an extension to the Hybrid Message
Passing inference algorithm for general CG networks and an algorithm for
optimizing its accuracy given a bound on computation time. The extended
algorithm uses Gaussian mixture reduction to prevent an exponential increase in
the number of Gaussian mixture components. The trade-off algorithm performs
pre-processing to find optimal run-time settings for the extended algorithm.
Experimental results for four CG networks compare performance of the extended
algorithm with existing algorithms and show the optimal settings for these CG
networks.
| null |
http://arxiv.org/abs/1806.02415v1
|
http://arxiv.org/pdf/1806.02415v1.pdf
| null |
[
"Cheol Young Park",
"Kathryn Blackmond Laskey",
"Paulo C. G. Costa",
"Shou Matsumoto"
] |
[
"Fraud Detection",
"Medical Diagnosis"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/human-like-generalization-in-a-machine
|
1806.01709
| null | null |
Human-like generalization in a machine through predicate learning
|
Humans readily generalize, applying prior knowledge to novel situations and
stimuli. Advances in machine learning and artificial intelligence have begun to
approximate and even surpass human performance, but machine systems reliably
struggle to generalize information to untrained situations. We describe a
neural network model that is trained to play one video game (Breakout) and
demonstrates one-shot generalization to a new game (Pong). The model
generalizes by learning representations that are functionally and formally
symbolic from training data, without feedback, and without requiring that
structured representations be specified a priori. The model uses unsupervised
comparison to discover which characteristics of the input are invariant, and to
learn relational predicates; it then applies these predicates to arguments in a
symbolic fashion, using oscillatory regularities in network firing to
dynamically bind predicates to arguments. We argue that models of human
cognition must account for far-reaching and flexible generalization, and that
in order to do so, models must be able to discover symbolic representations
from unstructured data, a process we call predicate learning. Only then can
models begin to adequately explain where human-like representations come from,
why human cognition is the way it is, and why it continues to differ from
machine intelligence in crucial ways.
| null |
http://arxiv.org/abs/1806.01709v3
|
http://arxiv.org/pdf/1806.01709v3.pdf
| null |
[
"Leonidas A. A. Doumas",
"Guillermo Puebla",
"Andrea E. Martin"
] |
[] | 2018-06-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/evidential-deep-learning-to-quantify
|
1806.01768
| null | null |
Evidential Deep Learning to Quantify Classification Uncertainty
|
Deterministic neural nets have been shown to learn effective predictors on a
wide range of machine learning problems. However, as the standard approach is
to train the network to minimize a prediction loss, the resultant model remains
ignorant to its prediction confidence. Orthogonally to Bayesian neural nets
that indirectly infer prediction uncertainty through weight uncertainties, we
propose explicit modeling of the same using the theory of subjective logic. By
placing a Dirichlet distribution on the class probabilities, we treat
predictions of a neural net as subjective opinions and learn the function that
collects the evidence leading to these opinions by a deterministic neural net
from data. The resultant predictor for a multi-class classification problem is
another Dirichlet distribution whose parameters are set by the continuous
output of a neural net. We provide a preliminary analysis on how the
peculiarities of our new loss function drive improved uncertainty estimation.
We observe that our method achieves unprecedented success on detection of
out-of-distribution queries and endurance against adversarial perturbations.
|
Deterministic neural nets have been shown to learn effective predictors on a wide range of machine learning problems.
|
http://arxiv.org/abs/1806.01768v3
|
http://arxiv.org/pdf/1806.01768v3.pdf
|
NeurIPS 2018 12
|
[
"Murat Sensoy",
"Lance Kaplan",
"Melih Kandemir"
] |
[
"Deep Learning",
"General Classification",
"Multi-class Classification",
"Prediction",
"Uncertainty Quantification"
] | 2018-06-05T00:00:00 |
http://papers.nips.cc/paper/7580-evidential-deep-learning-to-quantify-classification-uncertainty
|
http://papers.nips.cc/paper/7580-evidential-deep-learning-to-quantify-classification-uncertainty.pdf
|
evidential-deep-learning-to-quantify-1
| null |
[] |
https://paperswithcode.com/paper/a-comparative-study-on-unsupervised-domain
|
1806.02400
| null | null |
A Comparative Study on Unsupervised Domain Adaptation Approaches for Coffee Crop Mapping
|
In this work, we investigate the application of existing unsupervised domain
adaptation (UDA) approaches to the task of transferring knowledge between crop
regions having different coffee patterns. Given a geographical region with
fully mapped coffee plantations, we observe that this knowledge can be used to
train a classifier and to map a new county with no need of samples indicated in
the target region. Experimental results show that transferring knowledge via
some UDA strategies performs better than just applying a classifier trained in
a region to predict coffee crops in a new one. However, UDA methods may lead to
negative transfer, which may indicate that the domains are so different that
transferring knowledge is not appropriate. We also verify that normalization
significantly affects some UDA methods; we observe a meaningful complementary
contribution between the coffee crop datasets; and the visual behavior suggests
the existence of a cluster of samples that are more likely to be drawn from a
specific dataset.
| null |
http://arxiv.org/abs/1806.02400v1
|
http://arxiv.org/pdf/1806.02400v1.pdf
| null |
[
"Edemir Ferreira",
"Mário S. Alvim",
"Jefersson A. dos Santos"
] |
[
"Domain Adaptation",
"Unsupervised Domain Adaptation"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/optimal-approximation-of-continuous-functions
|
1802.03620
| null | null |
Optimal approximation of continuous functions by very deep ReLU networks
|
We consider approximations of general continuous functions on
finite-dimensional cubes by general deep ReLU neural networks and study the
approximation rates with respect to the modulus of continuity of the function
and the total number of weights $W$ in the network. We establish the complete
phase diagram of feasible approximation rates and show that it includes two
distinct phases. One phase corresponds to slower approximations that can be
achieved with constant-depth networks and continuous weight assignments. The
other phase provides faster approximations at the cost of depths necessarily
growing as a power law $L\sim W^{\alpha}, 0<\alpha\le 1,$ and with necessarily
discontinuous weight assignments. In particular, we prove that constant-width
fully-connected networks of depth $L\sim W$ provide the fastest possible
approximation rate $\|f-\widetilde f\|_\infty = O(\omega_f(O(W^{-2/\nu})))$
that cannot be achieved with less deep networks.
| null |
http://arxiv.org/abs/1802.03620v2
|
http://arxiv.org/pdf/1802.03620v2.pdf
| null |
[
"Dmitry Yarotsky"
] |
[] | 2018-02-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
}
] |
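As a trivial illustrative aside to the ReLU entry above, a one-line reference implementation of the activation:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: elementwise max(0, x)."""
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0. 0. 0. 1.5]
```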
https://paperswithcode.com/paper/an-analysis-of-the-t-sne-algorithm-for-data
|
1803.01768
| null | null |
An Analysis of the t-SNE Algorithm for Data Visualization
|
A first line of attack in exploratory data analysis is data visualization,
i.e., generating a 2-dimensional representation of data that makes clusters of
similar points visually identifiable. Standard Johnson-Lindenstrauss
dimensionality reduction does not produce data visualizations. The t-SNE
heuristic of van der Maaten and Hinton, which is based on non-convex
optimization, has become the de facto standard for visualization in a wide
range of applications.
This work gives a formal framework for the problem of data visualization -
finding a 2-dimensional embedding of clusterable data that correctly separates
individual clusters to make them visually identifiable. We then give a rigorous
analysis of the performance of t-SNE under a natural, deterministic condition
on the "ground-truth" clusters (similar to conditions assumed in earlier
analyses of clustering) in the underlying data. These are the first provable
guarantees on t-SNE for constructing good data visualizations.
We show that our deterministic condition is satisfied by considerably general
probabilistic generative models for clusterable data such as mixtures of
well-separated log-concave distributions. Finally, we give theoretical evidence
that t-SNE provably succeeds in partially recovering cluster structure even
when the above deterministic condition is not met.
| null |
http://arxiv.org/abs/1803.01768v2
|
http://arxiv.org/pdf/1803.01768v2.pdf
| null |
[
"Sanjeev Arora",
"Wei Hu",
"Pravesh K. Kothari"
] |
[
"Clustering",
"Data Visualization",
"Dimensionality Reduction"
] | 2018-03-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/variational-implicit-processes
|
1806.02390
| null | null |
Variational Implicit Processes
|
We introduce the implicit processes (IPs), a stochastic process that places implicitly defined multivariate distributions over any finite collections of random variables. IPs are therefore highly flexible implicit priors over functions, with examples including data simulators, Bayesian neural networks and non-linear transformations of stochastic processes. A novel and efficient approximate inference algorithm for IPs, namely the variational implicit processes (VIPs), is derived using generalised wake-sleep updates. This method returns simple update equations and allows scalable hyper-parameter learning with stochastic optimization. Experiments show that VIPs return better uncertainty estimates and lower errors over existing inference methods for challenging models such as Bayesian neural networks, and Gaussian processes.
|
We introduce the implicit processes (IPs), a stochastic process that places implicitly defined multivariate distributions over any finite collections of random variables.
|
https://arxiv.org/abs/1806.02390v2
|
https://arxiv.org/pdf/1806.02390v2.pdf
| null |
[
"Chao Ma",
"Yingzhen Li",
"José Miguel Hernández-Lobato"
] |
[
"Gaussian Processes",
"Stochastic Optimization"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/d_mathcalx-private-mechanisms-for-linear
|
1806.02389
| null | null |
Not All Attributes are Created Equal: $d_{\mathcal{X}}$-Private Mechanisms for Linear Queries
|
Differential privacy provides strong privacy guarantees simultaneously enabling useful insights from sensitive datasets. However, it provides the same level of protection for all elements (individuals and attributes) in the data. There are practical scenarios where some data attributes need more/less protection than others. In this paper, we consider $d_{\mathcal{X}}$-privacy, an instantiation of the privacy notion introduced in \cite{chatzikokolakis2013broadening}, which allows this flexibility by specifying a separate privacy budget for each pair of elements in the data domain. We describe a systematic procedure to tailor any existing differentially private mechanism that assumes a query set and a sensitivity vector as input into its $d_{\mathcal{X}}$-private variant, specifically focusing on linear queries. Our proposed meta procedure has broad applications as linear queries form the basis of a range of data analysis and machine learning algorithms, and the ability to define a more flexible privacy budget across the data domain results in improved privacy/utility tradeoff in these applications. We propose several $d_{\mathcal{X}}$-private mechanisms, and provide theoretical guarantees on the trade-off between utility and privacy. We also experimentally demonstrate the effectiveness of our procedure, by evaluating our proposed $d_{\mathcal{X}}$-private Laplace mechanism on both synthetic and real datasets using a set of randomly generated linear queries.
| null |
https://arxiv.org/abs/1806.02389v2
|
https://arxiv.org/pdf/1806.02389v2.pdf
| null |
[
"Parameswaran Kamalaruban",
"Victor Perrier",
"Hassan Jameel Asghar",
"Mohamed Ali Kaafar"
] |
[
"All"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/variational-autoencoder-with-arbitrary
|
1806.02382
| null |
SyxtJh0qYm
|
Variational Autoencoder with Arbitrary Conditioning
|
We propose a single neural probabilistic model based on variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in "one shot". The features may be both real-valued and categorical. Training of the model is performed by stochastic variational Bayes. The experimental evaluation on synthetic data, as well as feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and diversity of the generated samples.
|
We propose a single neural probabilistic model based on variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in "one shot".
|
https://arxiv.org/abs/1806.02382v3
|
https://arxiv.org/pdf/1806.02382v3.pdf
|
ICLR 2019 5
|
[
"Oleg Ivanov",
"Michael Figurnov",
"Dmitry Vetrov"
] |
[
"Diversity",
"Image Inpainting",
"Imputation"
] | 2018-06-06T00:00:00 |
https://openreview.net/forum?id=SyxtJh0qYm
|
https://openreview.net/pdf?id=SyxtJh0qYm
|
variational-autoencoder-with-arbitrary-1
| null |
[
{
"code_snippet_url": "",
"description": "In today’s digital age, Solana has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Solana transaction not confirmed, your Solana wallet not showing balance, or you're trying to recover a lost Solana wallet, knowing where to get help is essential. That’s why the Solana customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Solana Customer Support Number +1-833-534-1729\r\nSolana operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Solana Transaction Not Confirmed\r\nOne of the most common concerns is when a Solana transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Solana Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Solana wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Solana Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Solana wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Solana Deposit Not Received\r\nIf someone has sent you Solana but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Solana deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Solana Transaction Stuck or Pending\r\nSometimes your Solana transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Solana Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Solana wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Solana Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Solana tech.\r\n\r\n24/7 Availability: Solana doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Solana Support and Wallet Issues\r\nQ1: Can Solana support help me recover stolen BTC?\r\nA: While Solana transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Solana transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Solana’s official number (Solana is decentralized), it connects you to trained professionals experienced in resolving all major Solana issues.\r\n\r\nFinal Thoughts\r\nSolana is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Solana transaction not confirmed, your Solana wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Solana customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Solana Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Solana Customer Service Number +1-833-534-1729",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] |
https://paperswithcode.com/paper/causal-interventions-for-fairness
|
1806.02380
| null | null |
Causal Interventions for Fairness
|
Most approaches in algorithmic fairness constrain machine learning methods so
the resulting predictions satisfy one of several intuitive notions of fairness.
While this may help private companies comply with non-discrimination laws or
avoid negative publicity, we believe it is often too little, too late. By the
time the training data is collected, individuals in disadvantaged groups have
already suffered from discrimination and lost opportunities due to factors out
of their control. In the present work we focus instead on interventions such as
a new public policy, and in particular, how to maximize their positive effects
while improving the fairness of the overall system. We use causal methods to
model the effects of interventions, allowing for potential interference--each
individual's outcome may depend on who else receives the intervention. We
demonstrate this with an example of allocating a budget of teaching resources
using a dataset of schools in New York City.
| null |
http://arxiv.org/abs/1806.02380v1
|
http://arxiv.org/pdf/1806.02380v1.pdf
| null |
[
"Matt J. Kusner",
"Chris Russell",
"Joshua R. Loftus",
"Ricardo Silva"
] |
[
"Fairness"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sampling-as-optimization-in-the-space-of
|
1802.08089
| null | null |
Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem
|
We study sampling as optimization in the space of measures. We focus on
gradient flow-based optimization with the Langevin dynamics as a case study. We
investigate the source of the bias of the unadjusted Langevin algorithm (ULA)
in discrete time, and consider how to remove or reduce the bias. We point out
the difficulty is that the heat flow is exactly solvable, but neither its
forward nor backward method is implementable in general, except for Gaussian
data. We propose the symmetrized Langevin algorithm (SLA), which should have a
smaller bias than ULA, at the price of implementing a proximal gradient step in
space. We show SLA is in fact consistent for Gaussian target measure, whereas
ULA is not. We also illustrate various algorithms explicitly for Gaussian
target measure, including gradient descent, proximal gradient, and
Forward-Backward, and show they are all consistent.
| null |
http://arxiv.org/abs/1802.08089v2
|
http://arxiv.org/pdf/1802.08089v2.pdf
| null |
[
"Andre Wibisono"
] |
[] | 2018-02-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dempsterian-shaferian-belief-network-from
|
1806.02373
| null | null |
Dempsterian-Shaferian Belief Network From Data
|
Shenoy and Shafer {Shenoy:90} demonstrated that both for Dempster-Shafer
Theory and probability theory there exists a possibility to calculate
efficiently marginals of joint belief distributions (by so-called local
computations) provided that the joint distribution can be decomposed
(factorized) into a belief network. A number of algorithms exists for
decomposition of probabilistic joint belief distribution into a bayesian
(belief) network from data. For example
Spirtes, Glymour and Schein{Spirtes:90b} formulated a Conjecture that a
direct dependence test and a head-to-head meeting test would suffice to
construe bayesian network from data in such a way that Pearl's concept of
d-separation {Geiger:90} applies.
This paper is intended to transfer Spirtes, Glymour and Scheines
{Spirtes:90b} approach onto the ground of the Dempster-Shafer Theory (DST). For
this purpose, a frequentionistic interpretation of the DST developed in
{Klopotek:93b} is exploited. A special notion of conditionality for DST is
introduced and demonstrated to behave with respect to Pearl's d-separation
{Geiger:90} much the same way as conditional probability (though some
differences like non-uniqueness are evident). Based on this, an algorithm
analogous to that from {Spirtes:90b} is developed.
The notion of a partially oriented graph (pog) is introduced and within this
graph the notion of p-d-separation is defined. If direct dependence test and
head-to-head meeting test are used to orient the pog then its p-d-separation is
shown to be equivalent to the Pearl's d-separation for any compatible dag.
| null |
http://arxiv.org/abs/1806.02373v1
|
http://arxiv.org/pdf/1806.02373v1.pdf
| null |
[
"Mieczysław A. Kłopotek"
] |
[] | 2018-06-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic sparse training methods train neural networks in a sparse manner, starting with an initial sparse mask, and periodically updating the mask based on some criteria.",
"full_name": "Dynamic Sparse Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "DST",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/adversarial-attack-on-graph-structured-data
|
1806.02371
| null | null |
Adversarial Attack on Graph Structured Data
|
Deep learning on graph structures has shown exciting results in various
applications. However, few attentions have been paid to the robustness of such
models, in contrast to numerous research work for image or text adversarial
attack and defense. In this paper, we focus on the adversarial attacks that
fool the model by modifying the combinatorial structure of data. We first
propose a reinforcement learning based attack method that learns the
generalizable attack policy, while only requiring prediction labels from the
target classifier. Also, variants of genetic algorithms and gradient methods
are presented in the scenario where prediction confidence or gradients are
available. We use both synthetic and real-world data to show that, a family of
Graph Neural Network models are vulnerable to these attacks, in both
graph-level and node-level classification tasks. We also show such attacks can
be used to diagnose the learned classifiers.
|
Deep learning on graph structures has shown exciting results in various applications.
|
http://arxiv.org/abs/1806.02371v1
|
http://arxiv.org/pdf/1806.02371v1.pdf
|
ICML 2018 7
|
[
"Hanjun Dai",
"Hui Li",
"Tian Tian",
"Xin Huang",
"Lin Wang",
"Jun Zhu",
"Le Song"
] |
[
"Adversarial Attack",
"Graph Neural Network",
"Reinforcement Learning"
] | 2018-06-06T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2294
|
http://proceedings.mlr.press/v80/dai18b/dai18b.pdf
|
adversarial-attack-on-graph-structured-data-1
| null |
[] |
https://paperswithcode.com/paper/qmix-monotonic-value-function-factorisation
|
1803.11485
| null | null |
QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
|
In many real-world settings, a team of agents must coordinate their behaviour
while acting in a decentralised way. At the same time, it is often possible to
train the agents in a centralised fashion in a simulated or laboratory setting,
where global state information is available and communication constraints are
lifted. Learning joint action-values conditioned on extra state information is
an attractive way to exploit centralised learning, but the best strategy for
then extracting decentralised policies is unclear. Our solution is QMIX, a
novel value-based method that can train decentralised policies in a centralised
end-to-end fashion. QMIX employs a network that estimates joint action-values
as a complex non-linear combination of per-agent values that condition only on
local observations. We structurally enforce that the joint-action value is
monotonic in the per-agent values, which allows tractable maximisation of the
joint action-value in off-policy learning, and guarantees consistency between
the centralised and decentralised policies. We evaluate QMIX on a challenging
set of StarCraft II micromanagement tasks, and show that QMIX significantly
outperforms existing value-based multi-agent reinforcement learning methods.
|
At the same time, it is often possible to train the agents in a centralised fashion in a simulated or laboratory setting, where global state information is available and communication constraints are lifted.
|
http://arxiv.org/abs/1803.11485v2
|
http://arxiv.org/pdf/1803.11485v2.pdf
|
ICML 2018 7
|
[
"Tabish Rashid",
"Mikayel Samvelyan",
"Christian Schroeder de Witt",
"Gregory Farquhar",
"Jakob Foerster",
"Shimon Whiteson"
] |
[
"Multi-agent Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"SMAC+",
"Starcraft",
"Starcraft II"
] | 2018-03-30T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2389
|
http://proceedings.mlr.press/v80/rashid18a/rashid18a.pdf
|
qmix-monotonic-value-function-factorisation-1
| null |
[] |
https://paperswithcode.com/paper/mitigating-bias-in-adaptive-data-gathering
|
1806.02329
| null | null |
Mitigating Bias in Adaptive Data Gathering via Differential Privacy
|
Data that is gathered adaptively --- via bandit algorithms, for example ---
exhibits bias. This is true both when gathering simple numeric valued data ---
the empirical means kept track of by stochastic bandit algorithms are biased
downwards --- and when gathering more complicated data --- running hypothesis
tests on complex data gathered via contextual bandit algorithms leads to false
discovery. In this paper, we show that this problem is mitigated if the data
collection procedure is differentially private. This lets us both bound the
bias of simple numeric valued quantities (like the empirical means of
stochastic bandit algorithms), and correct the p-values of hypothesis tests run
on the adaptively gathered data. Moreover, there exist differentially private
bandit algorithms with near optimal regret bounds: we apply existing theorems
in the simple stochastic case, and give a new analysis for linear contextual
bandits. We complement our theoretical results with experiments validating our
theory.
| null |
http://arxiv.org/abs/1806.02329v1
|
http://arxiv.org/pdf/1806.02329v1.pdf
|
ICML 2018 7
|
[
"Seth Neel",
"Aaron Roth"
] |
[
"Multi-Armed Bandits"
] | 2018-06-06T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2339
|
http://proceedings.mlr.press/v80/neel18a/neel18a.pdf
|
mitigating-bias-in-adaptive-data-gathering-1
| null |
[] |
https://paperswithcode.com/paper/conditional-linear-regression
|
1806.02326
| null | null |
Conditional Linear Regression
|
Work in machine learning and statistics commonly focuses on building models that capture the vast majority of data, possibly ignoring a segment of the population as outliers. However, there does not often exist a good model on the whole dataset, so we seek to find a small subset where there exists a useful model. We are interested in finding a linear rule capable of achieving more accurate predictions for just a segment of the population. We give an efficient algorithm with theoretical analysis for the conditional linear regression task, which is the joint task of identifying a significant segment of the population, described by a k-DNF, along with its linear regression fit.
|
Work in machine learning and statistics commonly focuses on building models that capture the vast majority of data, possibly ignoring a segment of the population as outliers.
|
https://arxiv.org/abs/1806.02326v2
|
https://arxiv.org/pdf/1806.02326v2.pdf
| null |
[
"Diego Calderon",
"Brendan Juba",
"Sirui Li",
"Zongyi Li",
"Lisa Ruan"
] |
[
"regression"
] | 2018-06-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)",
"full_name": "Linear Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Linear Regression",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/fast-and-accurate-online-video-object
|
1806.02323
| null | null |
Fast and Accurate Online Video Object Segmentation via Tracking Parts
|
Research on video-based object detection algorithms
|
Research on video-based object detection algorithms
|
http://arxiv.org/abs/1806.02323v1
|
http://arxiv.org/pdf/1806.02323v1.pdf
|
CVPR 2018 6
|
[
"Jingchun Cheng",
"Yi-Hsuan Tsai",
"Wei-Chih Hung",
"Shengjin Wang",
"Ming-Hsuan Yang"
] |
[
"Semantic Segmentation",
"Semi-Supervised Video Object Segmentation",
"Video Object Segmentation",
"Video Semantic Segmentation",
"Visual Object Tracking"
] | 2018-06-06T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Cheng_Fast_and_Accurate_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Cheng_Fast_and_Accurate_CVPR_2018_paper.pdf
|
fast-and-accurate-online-video-object-1
| null |
[] |
https://paperswithcode.com/paper/learning-kolmogorov-models-for-binary-random
|
1806.02322
| null |
BJfguoAcFm
|
Learning Kolmogorov Models for Binary Random Variables
|
We summarize our recent findings, where we proposed a framework for learning
a Kolmogorov model, for a collection of binary random variables. More
specifically, we derive conditions that link outcomes of specific random
variables, and extract valuable relations from the data. We also propose an
algorithm for computing the model and show its first-order optimality, despite
the combinatorial nature of the learning problem. We apply the proposed
algorithm to recommendation systems, although it is applicable to other
scenarios. We believe that the work is a significant step toward interpretable
machine learning.
| null |
http://arxiv.org/abs/1806.02322v1
|
http://arxiv.org/pdf/1806.02322v1.pdf
| null |
[
"Hadi Ghauch",
"Mikael Skoglund",
"Hossein Shokri-Ghadikolaei",
"Carlo Fischione",
"Ali H. Sayed"
] |
[
"BIG-bench Machine Learning",
"Interpretable Machine Learning",
"Recommendation Systems"
] | 2018-06-06T00:00:00 |
https://openreview.net/forum?id=BJfguoAcFm
|
https://openreview.net/pdf?id=BJfguoAcFm
| null | null |
[] |
https://paperswithcode.com/paper/adaptive-feature-recombination-and
|
1806.02318
| null | null |
Adaptive feature recombination and recalibration for semantic segmentation: application to brain tumor segmentation in MRI
|
Convolutional neural networks (CNNs) have been successfully used for brain
tumor segmentation, specifically, fully convolutional networks (FCNs). FCNs can
segment a set of voxels at once, having a direct spatial correspondence between
units in feature maps (FMs) at a given location and the corresponding
classified voxels. In convolutional layers, FMs are merged to create new FMs,
so, channel combination is crucial. However, not all FMs have the same
relevance for a given class. Recently, in classification problems,
Squeeze-and-Excitation (SE) blocks have been proposed to re-calibrate FMs as a
whole, and suppress the less informative ones. However, this is not optimal in
FCN due to the spatial correspondence between units and voxels. In this
article, we propose feature recombination through linear expansion and
compression to create more complex features for semantic segmentation.
Additionally, we propose a segmentation SE (SegSE) block for feature
recalibration that collects contextual information, while maintaining the
spatial meaning. Finally, we evaluate the proposed methods in brain tumor
segmentation, using publicly available data.
|
However, this is not optimal in FCN due to the spatial correspondence between units and voxels.
|
http://arxiv.org/abs/1806.02318v1
|
http://arxiv.org/pdf/1806.02318v1.pdf
| null |
[
"Sérgio Pereira",
"Victor Alves",
"Carlos A. Silva"
] |
[
"Brain Tumor Segmentation",
"Segmentation",
"Semantic Segmentation",
"Tumor Segmentation"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/randomized-value-functions-via-multiplicative
|
1806.02315
| null | null |
Randomized Value Functions via Multiplicative Normalizing Flows
|
Randomized value functions offer a promising approach towards the challenge of efficient exploration in complex environments with high dimensional state and action spaces. Unlike traditional point estimate methods, randomized value functions maintain a posterior distribution over action-space values. This prevents the agent's behavior policy from prematurely exploiting early estimates and falling into local optima. In this work, we leverage recent advances in variational Bayesian neural networks and combine these with traditional Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient (DDPG) to achieve randomized value functions for high-dimensional domains. In particular, we augment DQN and DDPG with multiplicative normalizing flows in order to track a rich approximate posterior distribution over the parameters of the value function. This allows the agent to perform approximate Thompson sampling in a computationally efficient manner via stochastic gradient methods. We demonstrate the benefits of our approach through an empirical comparison in high dimensional environments.
|
In particular, we augment DQN and DDPG with multiplicative normalizing flows in order to track a rich approximate posterior distribution over the parameters of the value function.
|
https://arxiv.org/abs/1806.02315v3
|
https://arxiv.org/pdf/1806.02315v3.pdf
| null |
[
"Ahmed Touati",
"Harsh Satija",
"Joshua Romoff",
"Joelle Pineau",
"Pascal Vincent"
] |
[
"Efficient Exploration",
"Thompson Sampling"
] | 2018-06-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/ex4sperans/variational-inference-with-normalizing-flows/blob/922b569f851e02fa74700cd0754fe2ef5c1f3180/flow.py#L9",
"description": "**Normalizing Flows** are a method for constructing complex distributions by transforming a\r\nprobability density through a series of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtain a valid probability distribution and hence this type of flow is referred to as a normalizing flow.\r\n\r\nIn the case of finite flows, the basic rule for the transformation of densities considers an invertible, smooth mapping $f : \\mathbb{R}^{d} \\rightarrow \\mathbb{R}^{d}$ with inverse $f^{-1} = g$, i.e. the composition $g \\cdot f\\left(z\\right) = z$. If we use this mapping to transform a random variable $z$ with distribution $q\\left(z\\right)$, the resulting random variable $z' = f\\left(z\\right)$ has a distribution:\r\n\r\n$$ q\\left(\\mathbf{z}'\\right) = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}^{-1}}{\\delta{\\mathbf{z'}}}\\bigr\\vert = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}}{\\delta{\\mathbf{z}}}\\bigr\\vert ^{-1} $$\r\n\f\r\nwhere the last equality can be seen by applying the chain rule (inverse function theorem) and is a property of Jacobians of invertible functions. We can construct arbitrarily complex densities by composing several simple maps and successively applying the above equation. The density $q\\_{K}\\left(\\mathbf{z}\\right)$ obtained by successively transforming a random variable $z\\_{0}$ with distribution $q\\_{0}$ through a chain of $K$ transformations $f\\_{k}$ is:\r\n\r\n$$ z\\_{K} = f\\_{K} \\cdot \\dots \\cdot f\\_{2} \\cdot f\\_{1}\\left(z\\_{0}\\right) $$\r\n\r\n$$ \\ln{q}\\_{K}\\left(z\\_{K}\\right) = \\ln{q}\\_{0}\\left(z\\_{0}\\right) − \\sum^{K}\\_{k=1}\\ln\\vert\\det\\frac{\\delta{f\\_{k}}}{\\delta{\\mathbf{z\\_{k-1}}}}\\vert $$\r\n\f\r\nThe path traversed by the random variables $z\\_{k} = f\\_{k}\\left(z\\_{k-1}\\right)$ with initial distribution $q\\_{0}\\left(z\\_{0}\\right)$ is called the flow and the path formed by the successive distributions $q\\_{k}$ is a normalizing flow.",
"full_name": "Normalizing Flows",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Distribution Approximation** methods aim to approximate a complex distribution. Below you can find a continuously updating list of distribution approximation methods.",
"name": "Distribution Approximation",
"parent": null
},
"name": "Normalizing Flows",
"source_title": "Variational Inference with Normalizing Flows",
"source_url": "http://arxiv.org/abs/1505.05770v6"
},
{
"code_snippet_url": null,
"description": "**Experience Replay** is a replay memory technique used in reinforcement learning where we store the agent’s experiences at each time-step, $e\\_{t} = \\left(s\\_{t}, a\\_{t}, r\\_{t}, s\\_{t+1}\\right)$ in a data-set $D = e\\_{1}, \\cdots, e\\_{N}$ , pooled over many episodes into a replay memory. We then usually sample the memory randomly for a minibatch of experience, and use this to learn off-policy, as with Deep Q-Networks. This tackles the problem of autocorrelation leading to unstable training, by making the problem more like a supervised learning problem.\r\n\r\nImage Credit: [Hands-On Reinforcement Learning with Python, Sudharsan Ravichandiran](https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781788836524)",
"full_name": "Experience Replay",
"introduced_year": 1993,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Replay Memory",
"parent": null
},
"name": "Experience Replay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function compromising both the primary loss function and a penalty on the $L\\_{2}$ Norm of the weights:\r\n\r\n$$L\\_{new}\\left(w\\right) = L\\_{original}\\left(w\\right) + \\lambda{w^{T}w}$$\r\n\r\nwhere $\\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). \r\n\r\nWeight decay can be incorporated directly into the weight update rule, rather than just implicitly by defining it through to objective function. Often weight decay refers to the implementation where we specify it directly in the weight update rule (whereas L2 regularization is usually the implementation which is specified in the objective function).\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Weight Decay",
"introduced_year": 1943,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Weight Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Q-Learning** is an off-policy temporal difference control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\max\\_{a}Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThe learned action-value function $Q$ directly approximates $q\\_{*}$, the optimal action-value function, independent of the policy being followed.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Q-Learning",
"introduced_year": 1984,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Off-Policy TD Control",
"parent": null
},
"name": "Q-Learning",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": null,
"description": "**DDPG**, or **Deep Deterministic Policy Gradient**, is an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. It combines the actor-critic approach with insights from [DQNs](https://paperswithcode.com/method/dqn): in particular, the insights that 1) the network is trained off-policy with samples from a replay buffer to minimize correlations between samples, and 2) the network is trained with a target Q network to give consistent targets during temporal difference backups. DDPG makes use of the same ideas along with [batch normalization](https://paperswithcode.com/method/batch-normalization).",
"full_name": "Deep Deterministic Policy Gradient",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "**Policy Gradient Methods** try to optimize the policy function directly in reinforcement learning. This contrasts with, for example, Q-Learning, where the policy manifests itself as maximizing a value function. Below you can find a continuously updating catalog of policy gradient methods.",
"name": "Policy Gradient Methods",
"parent": null
},
"name": "DDPG",
"source_title": "Continuous control with deep reinforcement learning",
"source_url": "https://arxiv.org/abs/1509.02971v6"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **DQN**, or Deep Q-Network, approximates a state-value function in a [Q-Learning](https://paperswithcode.com/method/q-learning) framework with a neural network. In the Atari Games case, they take in several frames of the game as an input and output state values for each action as an output. \r\n\r\nIt is usually used in conjunction with [Experience Replay](https://paperswithcode.com/method/experience-replay), for storing the episode steps in memory for off-policy learning, where samples are drawn from the replay memory at random. Additionally, the Q-Network is usually optimized towards a frozen target network that is periodically updated with the latest weights every $k$ steps (where $k$ is a hyperparameter). The latter makes training more stable by preventing short-term oscillations from a moving target. The former tackles autocorrelation that would occur from on-line learning, and having a replay memory makes the problem more like a supervised learning problem.\r\n\r\nImage Source: [here](https://www.researchgate.net/publication/319643003_Autonomous_Quadrotor_Landing_using_Deep_Reinforcement_Learning)",
"full_name": "Deep Q-Network",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Q-Learning Networks",
"parent": "Off-Policy TD Control"
},
"name": "DQN",
"source_title": "Playing Atari with Deep Reinforcement Learning",
"source_url": "http://arxiv.org/abs/1312.5602v1"
}
] |
https://paperswithcode.com/paper/scan-learning-hierarchical-compositional
|
1707.03389
| null |
rkN2Il-RZ
|
SCAN: Learning Hierarchical Compositional Visual Concepts
|
The seemingly infinite diversity of the natural world arises from a
relatively small set of coherent rules, such as the laws of physics or
chemistry. We conjecture that these rules give rise to regularities that can be
discovered through primarily unsupervised experiences and represented as
abstract concepts. If such representations are compositional and hierarchical,
they can be recombined into an exponentially large set of new concepts. This
paper describes SCAN (Symbol-Concept Association Network), a new framework for
learning such abstractions in the visual domain. SCAN learns concepts through
fast symbol association, grounding them in disentangled visual primitives that
are discovered in an unsupervised manner. Unlike state of the art multimodal
generative model baselines, our approach requires very few pairings between
symbols and images and makes no assumptions about the form of symbol
representations. Once trained, SCAN is capable of multimodal bi-directional
inference, generating a diverse set of image samples from symbolic descriptions
and vice versa. It also allows for traversal and manipulation of the implicit
hierarchy of visual concepts through symbolic instructions and learnt logical
recombination operations. Such manipulations enable SCAN to break away from its
training data distribution and imagine novel visual concepts through
symbolically instructed recombination of previously learnt concepts.
| null |
http://arxiv.org/abs/1707.03389v3
|
http://arxiv.org/pdf/1707.03389v3.pdf
|
ICLR 2018 1
|
[
"Irina Higgins",
"Nicolas Sonnerat",
"Loic Matthey",
"Arka Pal",
"Christopher P. Burgess",
"Matko Bosnjak",
"Murray Shanahan",
"Matthew Botvinick",
"Demis Hassabis",
"Alexander Lerchner"
] |
[] | 2017-07-11T00:00:00 |
https://openreview.net/forum?id=rkN2Il-RZ
|
https://openreview.net/pdf?id=rkN2Il-RZ
|
scan-learning-hierarchical-compositional-1
| null |
[] |
https://paperswithcode.com/paper/unsupervised-attention-guided-image-to-image
|
1806.02311
| null | null |
Unsupervised Attention-guided Image to Image Translation
|
Current unsupervised image-to-image translation techniques struggle to focus
their attention on individual objects without altering the background or the
way multiple objects interact within a scene. Motivated by the important role
of attention in human perception, we tackle this limitation by introducing
unsupervised attention mechanisms that are jointly adversarialy trained with
the generators and discriminators. We demonstrate qualitatively and
quantitatively that our approach is able to attend to relevant regions in the
image without requiring supervision, and that by doing so it achieves more
realistic mappings compared to recent approaches.
|
Current unsupervised image-to-image translation techniques struggle to focus their attention on individual objects without altering the background or the way multiple objects interact within a scene.
|
http://arxiv.org/abs/1806.02311v3
|
http://arxiv.org/pdf/1806.02311v3.pdf
| null |
[
"Youssef A. Mejjati",
"Christian Richardt",
"James Tompkin",
"Darren Cosker",
"Kwang In Kim"
] |
[
"Image-to-Image Translation",
"Translation",
"Unsupervised Image-To-Image Translation"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/model-free-model-based-and-general
|
1806.02308
| null | null |
Model-free, Model-based, and General Intelligence
|
During the 60s and 70s, AI researchers explored intuitions about intelligence
by writing programs that displayed intelligent behavior. Many good ideas came
out from this work but programs written by hand were not robust or general.
After the 80s, research increasingly shifted to the development of learners
capable of inferring behavior and functions from experience and data, and
solvers capable of tackling well-defined but intractable models like SAT,
classical planning, Bayesian networks, and POMDPs. The learning approach has
achieved considerable success but results in black boxes that do not have the
flexibility, transparency, and generality of their model-based counterparts.
Model-based approaches, on the other hand, require models and scalable
algorithms. Model-free learners and model-based solvers have close parallels
with Systems 1 and 2 in current theories of the human mind: the first, a fast,
opaque, and inflexible intuitive mind; the second, a slow, transparent, and
flexible analytical mind. In this paper, I review developments in AI and draw
on these theories to discuss the gap between model-free learners and
model-based solvers, a gap that needs to be bridged in order to have
intelligent systems that are robust and general.
| null |
http://arxiv.org/abs/1806.02308v1
|
http://arxiv.org/pdf/1806.02308v1.pdf
| null |
[
"Hector Geffner"
] |
[
"model"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dilatation-of-lateral-ventricles-with-brain
|
1806.02305
| null | null |
Dilatation of Lateral Ventricles with Brain Volumes in Infants with 3D Transfontanelle US
|
Ultrasound (US) can be used to assess brain development in newborns, as MRI
is challenging due to immobilization issues, and may require sedation.
Dilatation of the lateral ventricles in the brain is a risk factor for poorer
neurodevelopment outcomes in infants. Hence, 3D US has the ability to assess
the volume of the lateral ventricles similar to clinically standard MRI, but
manual segmentation is time consuming. The objective of this study is to
develop an approach quantifying the ratio of lateral ventricular dilatation
with respect to total brain volume using 3D US, which can assess the severity
of macrocephaly. Automatic segmentation of the lateral ventricles is achieved
with a multi-atlas deformable registration approach using locally linear
correlation metrics for US-MRI fusion, followed by a refinement step using
deformable mesh models. Total brain volume is estimated using a 3D ellipsoid
modeling approach. Validation was performed on a cohort of 12 infants, ranging
from 2 to 8.5 months old, where 3D US and MRI were used to compare brain
volumes and segmented lateral ventricles. Automatically extracted volumes from
3D US show a high correlation and no statistically significant difference when
compared to ground truth measurements. Differences in volume ratios was 6.0 +/-
4.8% compared to MRI, while lateral ventricular segmentation yielded a mean
Dice coefficient of 70.8 +/- 3.6% and a mean absolute distance (MAD) of 0.88
+/- 0.2mm, demonstrating the clinical benefit of this tool in paediatric
ultrasound.
| null |
http://arxiv.org/abs/1806.02305v1
|
http://arxiv.org/pdf/1806.02305v1.pdf
| null |
[
"Marc-Antoine Boucher",
"Sarah Lippe",
"Amelie Damphousse",
"Ramy El-Jalbout",
"Samuel Kadoury"
] |
[
"Segmentation"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/data-driven-probabilistic-atlases-capture
|
1806.02300
| null | null |
Data-driven Probabilistic Atlases Capture Whole-brain Individual Variation
|
Probabilistic atlases provide essential spatial contextual information for
image interpretation, Bayesian modeling, and algorithmic processing. Such
atlases are typically constructed by grouping subjects with similar demographic
information. Importantly, use of the same scanner minimizes inter-group
variability. However, generalizability and spatial specificity of such
approaches is more limited than one might like. Inspired by Commowick
"Frankenstein's creature paradigm" which builds a personal specific anatomical
atlas, we propose a data-driven framework to build a personal specific
probabilistic atlas under the large-scale data scheme. The data-driven
framework clusters regions with similar features using a point distribution
model to learn different anatomical phenotypes. Regional structural atlases and
corresponding regional probabilistic atlases are used as indices and targets in
the dictionary. By indexing the dictionary, the whole brain probabilistic
atlases adapt to each new subject quickly and can be used as spatial priors for
visualization and processing. The novelties of this approach are (1) it
provides a new perspective of generating personal specific whole brain
probabilistic atlases (132 regions) under data-driven scheme across sites. (2)
The framework employs the large amount of heterogeneous data (2349 images). (3)
The proposed framework achieves low computational cost since only one affine
registration and Pearson correlation operation are required for a new subject.
Our method matches individual regions better with higher Dice similarity value
when testing the probabilistic atlases. Importantly, the advantage the
large-scale scheme is demonstrated by the better performance of using
large-scale training data (1888 images) than smaller training set (720 images).
| null |
http://arxiv.org/abs/1806.02300v1
|
http://arxiv.org/pdf/1806.02300v1.pdf
| null |
[
"Yuankai Huo",
"Katherine Swett",
"Susan M. Resnick",
"Laurie E. Cutting",
"Bennett A. Landman"
] |
[
"Specificity"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/darla-improving-zero-shot-transfer-in
|
1707.08475
| null | null |
DARLA: Improving Zero-Shot Transfer in Reinforcement Learning
|
Domain adaptation is an important open problem in deep reinforcement learning
(RL). In many scenarios of interest data is hard to obtain, so agents may learn
a source policy in a setting where data is readily available, with the hope
that it generalises well to the target domain. We propose a new multi-stage RL
agent, DARLA (DisentAngled Representation Learning Agent), which learns to see
before learning to act. DARLA's vision is based on learning a disentangled
representation of the observed environment. Once DARLA can see, it is able to
acquire source policies that are robust to many domain shifts - even with no
access to the target domain. DARLA significantly outperforms conventional
baselines in zero-shot domain adaptation scenarios, an effect that holds across
a variety of RL environments (Jaco arm, DeepMind Lab) and base RL algorithms
(DQN, A3C and EC).
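DARLA's "learn to see, then learn to act" structure can be illustrated with a minimal PyTorch sketch: a beta-VAE-style encoder is trained on source-domain observations first, then frozen while a policy head is trained on its latent codes. The MLP architectures, dimensions, and training details below are placeholders (the paper uses convolutional encoders and a perceptual reconstruction target), so treat this as a structural sketch only.

```python
import torch
import torch.nn as nn

class BetaVAEEncoder(nn.Module):
    """Stage 1: learn a disentangled state representation (beta-VAE style)."""
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * latent_dim))
    def forward(self, obs):
        mu, logvar = self.net(obs).chunk(2, dim=-1)
        return mu, logvar

class PolicyHead(nn.Module):
    """Stage 2: act from the frozen latent code, not from raw observations."""
    def __init__(self, latent_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_actions))
    def forward(self, z):
        return self.net(z)

encoder = BetaVAEEncoder(obs_dim=84 * 84, latent_dim=32)
# ... train the encoder on source-domain observations with a beta-weighted KL term ...
for p in encoder.parameters():
    p.requires_grad_(False)   # "learning to see" is frozen before "learning to act"
policy = PolicyHead(latent_dim=32, n_actions=6)
```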
|
Domain adaptation is an important open problem in deep reinforcement learning (RL).
|
http://arxiv.org/abs/1707.08475v2
|
http://arxiv.org/pdf/1707.08475v2.pdf
|
ICML 2017 8
|
[
"Irina Higgins",
"Arka Pal",
"Andrei A. Rusu",
"Loic Matthey",
"Christopher P. Burgess",
"Alexander Pritzel",
"Matthew Botvinick",
"Charles Blundell",
"Alexander Lerchner"
] |
[
"Deep Reinforcement Learning",
"Domain Adaptation",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Representation Learning"
] | 2017-07-26T00:00:00 |
https://icml.cc/Conferences/2017/Schedule?showEvent=711
|
http://proceedings.mlr.press/v70/higgins17a/higgins17a.pdf
|
darla-improving-zero-shot-transfer-in-1
| null |
[
{
"code_snippet_url": "https://github.com/ikostrikov/pytorch-a3c/blob/48d95844755e2c3e2c7e48bbd1a7141f7212b63f/train.py#L100",
"description": "**Entropy Regularization** is a type of regularization used in [reinforcement learning](https://paperswithcode.com/methods/area/reinforcement-learning). For on-policy policy gradient based methods like [A3C](https://paperswithcode.com/method/a3c), the same mutual reinforcement behaviour leads to a highly-peaked $\\pi\\left(a\\mid{s}\\right)$ towards a few actions or action sequences, since it is easier for the actor and critic to overoptimise to a small portion of the environment. To reduce this problem, entropy regularization adds an entropy term to the loss to promote action diversity:\r\n\r\n$$H(X) = -\\sum\\pi\\left(x\\right)\\log\\left(\\pi\\left(x\\right)\\right) $$\r\n\r\nImage Credit: Wikipedia",
"full_name": "Entropy Regularization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Entropy Regularization",
"source_title": "Asynchronous Methods for Deep Reinforcement Learning",
"source_url": "http://arxiv.org/abs/1602.01783v2"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**A3C**, **Asynchronous Advantage Actor Critic**, is a policy gradient algorithm in reinforcement learning that maintains a policy $\\pi\\left(a\\_{t}\\mid{s}\\_{t}; \\theta\\right)$ and an estimate of the value\r\nfunction $V\\left(s\\_{t}; \\theta\\_{v}\\right)$. It operates in the forward view and uses a mix of $n$-step returns to update both the policy and the value-function. The policy and the value function are updated after every $t\\_{\\text{max}}$ actions or when a terminal state is reached. The update performed by the algorithm can be seen as $\\nabla\\_{\\theta{'}}\\log\\pi\\left(a\\_{t}\\mid{s\\_{t}}; \\theta{'}\\right)A\\left(s\\_{t}, a\\_{t}; \\theta, \\theta\\_{v}\\right)$ where $A\\left(s\\_{t}, a\\_{t}; \\theta, \\theta\\_{v}\\right)$ is an estimate of the advantage function given by:\r\n\r\n$$\\sum^{k-1}\\_{i=0}\\gamma^{i}r\\_{t+i} + \\gamma^{k}V\\left(s\\_{t+k}; \\theta\\_{v}\\right) - V\\left(s\\_{t}; \\theta\\_{v}\\right)$$\r\n\r\nwhere $k$ can vary from state to state and is upper-bounded by $t\\_{max}$.\r\n\r\nThe critics in A3C learn the value function while multiple actors are trained in parallel and get synced with global parameters every so often. The gradients are accumulated as part of training for stability - this is like parallelized stochastic gradient descent.\r\n\r\nNote that while the parameters $\\theta$ of the policy and $\\theta\\_{v}$ of the value function are shown as being separate for generality, we always share some of the parameters in practice. We typically use a convolutional neural network that has one [softmax](https://paperswithcode.com/method/softmax) output for the policy $\\pi\\left(a\\_{t}\\mid{s}\\_{t}; \\theta\\right)$ and one linear output for the value function $V\\left(s\\_{t}; \\theta\\_{v}\\right)$, with all non-output layers shared.",
"full_name": "A3C",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "**Policy Gradient Methods** try to optimize the policy function directly in reinforcement learning. This contrasts with, for example, Q-Learning, where the policy manifests itself as maximizing a value function. Below you can find a continuously updating catalog of policy gradient methods.",
"name": "Policy Gradient Methods",
"parent": null
},
"name": "A3C",
"source_title": "Asynchronous Methods for Deep Reinforcement Learning",
"source_url": "http://arxiv.org/abs/1602.01783v2"
}
] |
https://paperswithcode.com/paper/regularization-by-denoising-clarifications
|
1806.02296
| null | null |
Regularization by Denoising: Clarifications and New Interpretations
|
Regularization by Denoising (RED), as recently proposed by Romano, Elad, and
Milanfar, is a powerful image-recovery framework that aims to minimize an
explicit regularization objective constructed from a plug-in image-denoising
function. Experimental evidence suggests that the RED algorithms are
state-of-the-art. We claim, however, that explicit regularization does not
explain the RED algorithms. In particular, we show that many of the expressions
in the paper by Romano et al. hold only when the denoiser has a symmetric
Jacobian, and we demonstrate that such symmetry does not occur with practical
denoisers such as non-local means, BM3D, TNRD, and DnCNN. To explain the RED
algorithms, we propose a new framework called Score-Matching by Denoising
(SMD), which aims to match a "score" (i.e., the gradient of a log-prior). We
then show tight connections between SMD, kernel density estimation, and
constrained minimum mean-squared error denoising. Furthermore, we interpret the
RED algorithms from Romano et al. and propose new algorithms with acceleration
and convergence guarantees. Finally, we show that the RED algorithms seek a
consensus equilibrium solution, which facilitates a comparison to plug-and-play
ADMM.
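The Jacobian-symmetry condition at the heart of this critique can be probed numerically: the sketch below estimates a denoiser's Jacobian by finite differences and measures how far it is from symmetric (the explicit-regularization reading of RED requires this to vanish). The toy moving-average "denoiser" is only a stand-in; in practice one would plug in a denoiser such as BM3D or DnCNN.

```python
import numpy as np

def numerical_jacobian(denoiser, x, eps=1e-4):
    """Finite-difference Jacobian of a denoiser f: R^n -> R^n at x."""
    n = x.size
    J = np.zeros((n, n))
    fx = denoiser(x)
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        J[:, i] = (denoiser(x + e) - fx) / eps
    return J

def jacobian_asymmetry(denoiser, x):
    """||J - J^T|| / ||J||: RED's gradient identity needs this to be ~0."""
    J = numerical_jacobian(denoiser, x)
    return np.linalg.norm(J - J.T) / (np.linalg.norm(J) + 1e-12)

# toy "denoiser": a moving-average filter (linear, hence a symmetric Jacobian)
smooth = lambda x: np.convolve(x, np.ones(3) / 3, mode="same")
print(jacobian_asymmetry(smooth, np.random.rand(32)))
```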
|
To explain the RED algorithms, we propose a new framework called Score-Matching by Denoising (SMD), which aims to match a "score" (i. e., the gradient of a log-prior).
|
http://arxiv.org/abs/1806.02296v4
|
http://arxiv.org/pdf/1806.02296v4.pdf
| null |
[
"Edward T. Reehorst",
"Philip Schniter"
] |
[
"Denoising",
"Density Estimation",
"Image Denoising"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/compressive-sensing-with-low-precision-data
|
1802.04907
| null | null |
Compressive Sensing Using Iterative Hard Thresholding with Low Precision Data Representation: Theory and Applications
|
Modern scientific instruments produce vast amounts of data, which can overwhelm the processing ability of computer systems. Lossy compression of data is an intriguing solution, but comes with its own drawbacks, such as potential signal loss, and the need for careful optimization of the compression ratio. In this work, we focus on a setting where this problem is especially acute: compressive sensing frameworks for interferometry and medical imaging. We ask the following question: can the precision of the data representation be lowered for all inputs, with recovery guarantees and practical performance? Our first contribution is a theoretical analysis of the normalized Iterative Hard Thresholding (IHT) algorithm when all input data, meaning both the measurement matrix and the observation vector, are quantized aggressively. We present a variant of low-precision normalized IHT that, under mild conditions, can still provide recovery guarantees. The second contribution is the application of our quantization framework to radio astronomy and magnetic resonance imaging. We show that lowering the precision of the data can significantly accelerate image recovery. We evaluate our approach on telescope data and samples of brain images using CPU and FPGA implementations, achieving up to a 9x speed-up with negligible loss of recovery quality.
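For orientation, a plain (non-normalized) IHT iteration combined with a crude uniform quantizer might look like the toy sketch below. The paper's analysis concerns the normalized variant with aggressive quantization of both the measurement matrix and the observations, so this is only an illustration; all names, sizes, and constants are chosen for the example.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def quantize(x, bits=4):
    """Uniform scalar quantization, a toy stand-in for the low-precision representation."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale if scale > 0 else x

def iht(A, y, s, n_iter=100, step=None):
    """Plain IHT on possibly quantized A and y."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * A.T @ (y - A @ x), s)
    return x

# toy recovery from aggressively quantized data
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200); x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true
x_hat = iht(quantize(A), quantize(y), s=8)
```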
| null |
https://arxiv.org/abs/1802.04907v4
|
https://arxiv.org/pdf/1802.04907v4.pdf
| null |
[
"Nezihe Merve Gürel",
"Kaan Kara",
"Alen Stojanov",
"Tyler Smith",
"Thomas Lemmin",
"Dan Alistarh",
"Markus Püschel",
"Ce Zhang"
] |
[
"Astronomy",
"Compressive Sensing",
"CPU",
"Quantization"
] | 2018-02-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/disentangling-by-factorising
|
1802.05983
| null | null |
Disentangling by Factorising
|
We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation. We propose FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions. We show that it improves upon $\beta$-VAE by providing a better trade-off between disentanglement and reconstruction quality. Moreover, we highlight the problems of a commonly used disentanglement metric and introduce a new metric that does not suffer from them.
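The total-correlation penalty that distinguishes FactorVAE from beta-VAE is estimated with a discriminator via the density-ratio trick; the two helpers below sketch the batch-permutation step and the resulting VAE-side objective. The two-logit discriminator convention and the gamma value are illustrative assumptions, not a faithful reimplementation.

```python
import torch

def permute_dims(z):
    """Shuffle each latent dimension independently across the batch, yielding samples
    from the product of marginals that the TC discriminator is trained against."""
    B, D = z.shape
    return torch.stack([z[torch.randperm(B), d] for d in range(D)], dim=1)

def factorvae_vae_loss(recon_loss, kl, d_logits, gamma=10.0):  # gamma is illustrative
    """VAE side of the objective: reconstruction + KL + gamma * TC estimate, where
    the TC term uses the discriminator's logit difference
    log D(z) - log(1 - D(z)) ~ log q(z) - log prod_j q(z_j)."""
    tc_estimate = (d_logits[:, 0] - d_logits[:, 1]).mean()
    return recon_loss + kl + gamma * tc_estimate
```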
|
We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation.
|
https://arxiv.org/abs/1802.05983v3
|
https://arxiv.org/pdf/1802.05983v3.pdf
|
ICML 2018 7
|
[
"Hyunjik Kim",
"andriy mnih"
] |
[
"Disentanglement"
] | 2018-02-16T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2146
|
http://proceedings.mlr.press/v80/kim18b/kim18b.pdf
|
disentangling-by-factorising-1
| null |
[] |
https://paperswithcode.com/paper/spatiotemporal-manifold-prediction-model-for
|
1806.02285
| null | null |
Spatiotemporal Manifold Prediction Model for Anterior Vertebral Body Growth Modulation Surgery in Idiopathic Scoliosis
|
Anterior Vertebral Body Growth Modulation (AVBGM) is a minimally invasive
surgical technique that gradually corrects spine deformities while preserving
lumbar motion. However, the selection of potential surgical patients is
currently based on clinical judgment and would be facilitated by the
identification of patients responding to AVBGM prior to surgery. We introduce a
statistical framework for predicting the surgical outcomes following AVBGM in
adolescents with idiopathic scoliosis. A discriminant manifold is first
constructed to maximize the separation between responsive and non-responsive
groups of patients treated with AVBGM for scoliosis. The model then uses
subject-specific correction trajectories based on articulated transformations
in order to map spine correction profiles to a group-average piecewise-geodesic
path. Spine correction trajectories are described in a piecewise-geodesic
fashion to account for varying times at follow-up exams, regressing the curve
via a quadratic optimization process. To predict the evolution of correction, a
baseline reconstruction is projected onto the manifold, from which a
spatiotemporal regression model is built from parallel transport curves
inferred from neighboring exemplars. The model was trained on 438
reconstructions and tested on 56 subjects using 3D spine reconstructions from
follow-up exams, with the probabilistic framework yielding accurate results
with differences of 2.1 +/- 0.6deg in main curve angulation, and generating
models similar to biomechanical simulations.
| null |
http://arxiv.org/abs/1806.02285v1
|
http://arxiv.org/pdf/1806.02285v1.pdf
| null |
[
"William Mandel",
"Olivier Turcot",
"Dejan Knez",
"Stefan Parent",
"Samuel Kadoury"
] |
[] | 2018-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/finding-the-bandit-in-a-graph-sequential
|
1806.02282
| null | null |
Finding the bandit in a graph: Sequential search-and-stop
|
We consider the problem where an agent wants to find a hidden object that is
randomly located in some vertex of a directed acyclic graph (DAG) according to
a fixed but possibly unknown distribution. The agent can only examine vertices
whose in-neighbors have already been examined. In this paper, we address a
learning setting where we allow the agent to stop before having found the
object and restart searching on a new independent instance of the same problem.
Our goal is to maximize the total number of hidden objects found given a time
budget. The agent can thus skip an instance after realizing that it would spend
too much time on it. Our contributions are both to the search theory and
multi-armed bandits. If the distribution is known, we provide a quasi-optimal
and efficient stationary strategy. If the distribution is unknown, we
additionally show how to sequentially approximate it and, at the same time, act
near-optimally in order to collect as many hidden objects as possible.
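To make the known-distribution setting concrete, the toy sketch below scores stopping points along one fixed feasible ordering of the DAG by the ratio of expected objects found to expected time spent; a stationary strategy that restarts after its chosen prefix earns reward at that rate in the long run. This is only an illustration of the objective, not the quasi-optimal strategy derived in the paper, and the dictionaries `prob` and `cost` are hypothetical inputs.

```python
def best_prefix_ratio(order, prob, cost):
    """For a fixed feasible ordering of vertices, evaluate each stopping point by
    (expected objects found) / (expected time spent) and return the best prefix."""
    best, best_ratio = 0, 0.0
    found, spent, miss = 0.0, 0.0, 1.0   # miss = probability the object is still hidden
    for k, v in enumerate(order, start=1):
        spent += miss * cost[v]          # vertex v is only examined if nothing was found earlier
        found += prob[v]
        miss -= prob[v]
        ratio = found / spent if spent > 0 else 0.0
        if ratio > best_ratio:
            best, best_ratio = k, ratio
    return order[:best], best_ratio
```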
| null |
http://arxiv.org/abs/1806.02282v3
|
http://arxiv.org/pdf/1806.02282v3.pdf
| null |
[
"Pierre Perrault",
"Vianney Perchet",
"Michal Valko"
] |
[
"Multi-Armed Bandits"
] | 2018-06-06T00:00:00 | null | null | null | null |
[] |