paper_url | arxiv_id | nips_id | openreview_id | title | abstract | short_abstract | url_abs | url_pdf | proceeding | authors | tasks | date | conference_url_abs | conference_url_pdf | conference | reproduces_paper | methods |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/multi-task-deep-networks-for-depth-based-6d
|
1806.03891
| null | null |
Multi-Task Deep Networks for Depth-Based 6D Object Pose and Joint Registration in Crowd Scenarios
|
In bin-picking scenarios, multiple instances of an object of interest are
stacked randomly in a pile, and the instances are therefore inherently subject
to severe occlusion, clutter, and similar-looking distractors.
Most existing methods, however, target single isolated object instances, while
some recent methods tackle crowd scenarios via post-refinement that accounts
for multiple object relations. In this paper, we address recovering the 6D poses
of multiple instances in bin-picking scenarios in the depth modality via multi-task
learning in deep neural networks. Our architecture jointly learns multiple
sub-tasks: 2D detection, depth, and 3D pose estimation of individual objects;
and joint registration of multiple objects. For training data generation, depth
images of physically plausible object pose configurations are generated by a 3D
object model in a physics simulation, which yields diverse occlusion patterns
to learn from. We adopt a state-of-the-art object detector, and 2D offsets are
further estimated via a network to refine misaligned 2D detections. The depth
and 3D pose estimator is designed to generate multiple hypotheses per
detection. This allows the joint registration network to learn occlusion
patterns and remove physically implausible pose hypotheses. We apply our
architecture on both synthetic (our own and Sileane dataset) and real (a public
Bin-Picking dataset) data, showing that it significantly outperforms
state-of-the-art methods by 15-31% in average precision.
| null |
http://arxiv.org/abs/1806.03891v1
|
http://arxiv.org/pdf/1806.03891v1.pdf
| null |
[
"Juil Sock",
"Kwang In Kim",
"Caner Sahin",
"Tae-Kyun Kim"
] |
[
"3D Pose Estimation",
"Multi-Task Learning",
"Object",
"Pose Estimation"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/first-experiments-with-neural-translation-of
|
1805.06502
| null | null |
First Experiments with Neural Translation of Informal to Formal Mathematics
|
We report on our experiments to train deep neural networks that automatically
translate informalized LaTeX-written Mizar texts into the formal Mizar
language. To the best of our knowledge, this is the first time neural
networks have been applied to the formalization of mathematics. Using Luong et
al.'s neural machine translation model (NMT), we tested our aligned
informal-formal corpora against various hyperparameters and evaluated their
results. Our experiments show that our best performing model configurations are
able to generate correct Mizar statements on 65.73\% of the inference data,
with the union of all models covering 79.17\%. These results indicate that
formalization through artificial neural networks is a promising approach for
automated formalization of mathematics. We present several case studies to
illustrate our results.
| null |
http://arxiv.org/abs/1805.06502v2
|
http://arxiv.org/pdf/1805.06502v2.pdf
| null |
[
"Qingxiang Wang",
"Cezary Kaliszyk",
"Josef Urban"
] |
[
"Machine Translation",
"NMT",
"Translation"
] | 2018-05-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/object-detection-using-domain-randomization
|
1805.11778
| null | null |
Object Detection using Domain Randomization and Generative Adversarial Refinement of Synthetic Images
|
In this work, we present an application of domain randomization and
generative adversarial networks (GAN) to train a near real-time object detector
for industrial electric parts, entirely in a simulated environment. Large-scale
labelled real-world data is typically rare and difficult to
obtain in many industrial settings. Here, only a few hundred
unlabelled real images are used to train a Cyclic-GAN network, in combination
with varying degrees of domain randomization procedures. We demonstrate that
this enables robust translation of synthetic images to the real-world domain.
We show that a combination of the original synthetic (simulation) and
GAN-translated images, when used to train a Mask-RCNN object detection network,
achieves greater than 0.95 mean average precision in detecting and classifying
a collection of industrial electric parts. We evaluate the performance across
different combinations of training data.
| null |
http://arxiv.org/abs/1805.11778v2
|
http://arxiv.org/pdf/1805.11778v2.pdf
| null |
[
"Fernando Camaro Nogues",
"Andrew Huie",
"Sakyasingha Dasgupta"
] |
[
"object-detection",
"Object Detection",
"Translation"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/automatic-target-recovery-for-hindi-english
|
1806.04535
| null | null |
Automatic Target Recovery for Hindi-English Code Mixed Puns
|
In order for our computer systems to be more human-like, with a higher
emotional quotient, they need to be able to process and understand intrinsic
human language phenomena like humour. In this paper, we consider a subtype of
humour - puns, which are a common type of wordplay-based jokes. In particular,
we consider code-mixed puns, which have become increasingly mainstream on social
media, in informal conversations and advertisements, and we aim to build a system
which can automatically identify the pun location and recover the target of
such puns. We first study and classify code-mixed puns into two categories,
namely intra-sentential and intra-word, and then propose a four-step algorithm
to recover the pun targets for puns belonging to the intra-sentential category.
Our algorithm uses language models and phonetic similarity-based features to
get the desired results. We test our approach on a small set of code-mixed
punning advertisements, and observe that our system is successfully able to
recover the targets for 67% of the puns.
| null |
http://arxiv.org/abs/1806.04535v1
|
http://arxiv.org/pdf/1806.04535v1.pdf
| null |
[
"Srishti Aggarwal",
"Kritik Mathur",
"Radhika Mamidi"
] |
[] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
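As a toy illustration of the phonetic-similarity step mentioned in the abstract above, the sketch below ranks candidate target words by the similarity of crude Soundex-style codes. The `soundex()` helper, the vocabulary, and the example pun are illustrative assumptions, not the paper's actual four-step algorithm or feature set.
```python
# Toy pun-target recovery via phonetic similarity (illustrative only).
from difflib import SequenceMatcher

def soundex(word: str) -> str:
    """Very simplified Soundex-style phonetic code (assumption, not the paper's)."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out, prev = word[0], codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        prev = code
    return (out + "000")[:4]

def recover_target(pun_word: str, vocabulary: list) -> str:
    """Rank vocabulary words by phonetic similarity to the pun word."""
    def score(cand: str) -> float:
        return SequenceMatcher(None, soundex(pun_word), soundex(cand)).ratio()
    return max(vocabulary, key=score)

# Hypothetical example: the pun word "sun" could target "son".
print(recover_target("sun", ["son", "moon", "sand"]))  # -> "son"
```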
https://paperswithcode.com/paper/fast-approximate-natural-gradient-descent-in-1
|
1806.03884
| null | null |
Fast Approximate Natural Gradient Descent in a Kronecker-factored Eigenbasis
|
Optimization algorithms that leverage gradient covariance information, such as variants of natural gradient descent (Amari, 1998), offer the prospect of yielding more effective descent directions. For models with many parameters, the covariance matrix they are based on becomes gigantic, making them inapplicable in their original form. This has motivated research into both simple diagonal approximations and more sophisticated factored approximations such as KFAC (Heskes, 2000; Martens & Grosse, 2015; Grosse & Martens, 2016). In the present work we draw inspiration from both to propose a novel approximation that is provably better than KFAC and amenable to cheap partial updates. It consists of tracking a diagonal variance, not in parameter coordinates, but in a Kronecker-factored eigenbasis, in which the diagonal approximation is likely to be more effective. Experiments show improvements over KFAC in optimization speed for several deep network architectures.
|
Optimization algorithms that leverage gradient covariance information, such as variants of natural gradient descent (Amari, 1998), offer the prospect of yielding more effective descent directions.
|
https://arxiv.org/abs/1806.03884v2
|
https://arxiv.org/pdf/1806.03884v2.pdf
| null |
[
"Thomas George",
"César Laurent",
"Xavier Bouthillier",
"Nicolas Ballas",
"Pascal Vincent"
] |
[] | 2018-06-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
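The abstract above describes tracking a diagonal variance in a Kronecker-factored eigenbasis rather than in parameter coordinates. Below is a minimal NumPy sketch of that preconditioning idea; the shapes, the damping value, and the single-gradient variance estimate are illustrative assumptions, not the paper's exact recipe.
```python
# EKFAC-style preconditioning sketch: rescale a layer's gradient by a
# diagonal second-moment estimate tracked in the Kronecker-factored eigenbasis.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, damping = 4, 3, 1e-3

# Kronecker factors: A ~ input second moments, G ~ backprop second moments.
X = rng.normal(size=(64, d_in)); A = X.T @ X / 64
D = rng.normal(size=(64, d_out)); G = D.T @ D / 64

# Eigenbases of the two factors define the Kronecker-factored eigenbasis.
_, UA = np.linalg.eigh(A)
_, UG = np.linalg.eigh(G)

grad = rng.normal(size=(d_out, d_in))   # gradient of the weight matrix W

# Project into the eigenbasis, rescale by a tracked diagonal variance s2
# (estimated here from this one gradient for brevity; a running average
# in practice), then project back.
g_kfe = UG.T @ grad @ UA
s2 = g_kfe ** 2
precond = UG @ (g_kfe / (s2 + damping)) @ UA.T
print(precond.shape)                    # (3, 4)
```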
https://paperswithcode.com/paper/massively-parallel-video-networks
|
1806.03863
| null | null |
Massively Parallel Video Networks
|
We introduce a class of causal video understanding models that aims to
improve efficiency of video processing by maximising throughput, minimising
latency, and reducing the number of clock cycles. Leveraging operation
pipelining and multi-rate clocks, these models perform a minimal amount of
computation (e.g. as few as four convolutional layers) for each frame per
timestep to produce an output. The models are still very deep, with dozens of
such operations being performed but in a pipelined fashion that enables
depth-parallel computation. We illustrate the proposed principles by applying
them to existing image architectures and analyse their behaviour on two video
tasks: action recognition and human keypoint localisation. The results show
that a significant degree of parallelism, and implicitly speedup, can be
achieved with little loss in performance.
| null |
http://arxiv.org/abs/1806.03863v2
|
http://arxiv.org/pdf/1806.03863v2.pdf
|
ECCV 2018 9
|
[
"Joao Carreira",
"Viorica Patraucean",
"Laurent Mazare",
"Andrew Zisserman",
"Simon Osindero"
] |
[
"Action Recognition",
"Temporal Action Localization",
"Video Understanding"
] | 2018-06-11T00:00:00 |
http://openaccess.thecvf.com/content_ECCV_2018/html/Viorica_Patraucean_Massively_Parallel_Video_ECCV_2018_paper.html
|
http://openaccess.thecvf.com/content_ECCV_2018/papers/Viorica_Patraucean_Massively_Parallel_Video_ECCV_2018_paper.pdf
|
massively-parallel-video-networks-1
| null |
[] |
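A toy sketch of the depth-parallel pipelining principle from the abstract above: at each clock tick, layer i consumes what layer i-1 produced at the previous tick, so all layers work in parallel on different frames and per-tick latency is one layer's compute. The dummy additive layers are stand-ins, not the paper's architecture.
```python
# Depth-parallel pipeline simulation (sequential here; parallel in hardware).
layers = [lambda x, k=k: x + k for k in range(4)]   # 4 dummy layers
buffers = [None] * len(layers)                      # inter-layer pipeline state

for t in range(10):                                 # incoming frame stream 0..9
    inputs = [t] + buffers[:-1]                     # layer i reads layer i-1's last output
    out = buffers[-1]                               # output delayed by network depth
    buffers = [layer(x) if x is not None else None
               for layer, x in zip(layers, inputs)]
    if out is not None:
        print(f"t={t}: output for frame {t - len(layers)} is {out}")
```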
https://paperswithcode.com/paper/global-convergence-of-block-coordinate
|
1803.00225
| null | null |
Global Convergence of Block Coordinate Descent in Deep Learning
|
Deep learning has attracted extensive attention due to its great empirical success. The efficiency of block coordinate descent (BCD) methods has recently been demonstrated in deep neural network (DNN) training. However, theoretical studies on their convergence properties are limited due to the highly nonconvex nature of DNN training. In this paper, we aim at providing a general methodology for provable convergence guarantees for this type of method. In particular, for most of the commonly used DNN training models involving both two- and three-splitting schemes, we establish the global convergence to a critical point at a rate of ${\cal O}(1/k)$, where $k$ is the number of iterations. The results extend to general loss functions which have Lipschitz continuous gradients and deep residual networks (ResNets). Our key development adds several new elements to the Kurdyka-{\L}ojasiewicz inequality framework that enable us to carry out the global convergence analysis of BCD in the general scenario of deep learning.
|
Deep learning has attracted extensive attention due to its great empirical success.
|
https://arxiv.org/abs/1803.00225v4
|
https://arxiv.org/pdf/1803.00225v4.pdf
| null |
[
"Jinshan Zeng",
"Tim Tsz-Kit Lau",
"Shao-Bo Lin",
"Yuan YAO"
] |
[
"Deep Learning"
] | 2018-03-01T00:00:00 | null | null | null | null |
[] |
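A minimal sketch of block coordinate descent on a two-layer linear network with squared loss, illustrating the alternating block updates analyzed in the abstract above. The gradient-step splitting, step size, and data are assumptions for illustration, not the paper's exact two-/three-splitting formulation.
```python
# Alternating block updates: fix W1 to update W2, then fix W2 to update W1.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = X @ rng.normal(size=(5, 2))                 # realizable targets

W1 = rng.normal(size=(5, 3))
W2 = rng.normal(size=(3, 2))
lr = 1e-2

def loss(W1, W2):
    return 0.5 * np.mean((X @ W1 @ W2 - Y) ** 2)

for k in range(500):
    R = (X @ W1 @ W2 - Y) / len(X)              # residual with current blocks
    W2 -= lr * (W1.T @ X.T @ R)                 # block update for W2 (W1 fixed)
    R = (X @ W1 @ W2 - Y) / len(X)
    W1 -= lr * (X.T @ R @ W2.T)                 # block update for W1 (W2 fixed)

print(f"final loss: {loss(W1, W2):.4f}")
```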
https://paperswithcode.com/paper/a-more-human-way-to-play-computer-chess
|
1503.04333
| null | null |
A More Human Way to Play Computer Chess
|
This paper suggests a forward-pruning technique for computer chess that uses
'Move Tables', which are like Transposition Tables, but for moves not
positions. They use an efficient memory structure, and the design is placed in
the context of long- and short-term memories. The long-term memory updates a
play path with weight reinforcement, while short-term memory entries can be
added or removed immediately. With this, 'long branches' can play a short path,
before returning to a full search at the resulting leaf nodes. Re-using an
earlier search path allows the tree to be forward-pruned, which is known to be
dangerous, because it removes part of the search process. Additional checks are
therefore made and moves can even be re-added when the search result is
unsatisfactory. Automatic feature analysis is now central to the algorithm,
where key squares and related squares can be generated automatically and used
to guide the search process. Using this analysis, if a search result is
inferior, it can re-insert un-played moves that cover these key squares only.
On the tactical side, a type of move that the forward-pruning will fail on is
recognised and a pattern-based solution to that problem is suggested. This has
completed the theory of an earlier paper and resulted in a more human-like
approach to searching for a chess move. Tests demonstrate that the obvious
blunders associated with forward pruning are no longer present and that it can
compete at the top level with regard to playing strength.
| null |
http://arxiv.org/abs/1503.04333v5
|
http://arxiv.org/pdf/1503.04333v5.pdf
| null |
[
"Kieran Greer"
] |
[] | 2015-03-14T00:00:00 | null | null | null | null |
[] |
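A toy sketch of the 'Move Table' idea described above: like a transposition table but keyed on moves rather than positions, with a long-term store that reinforces weights along played paths and a short-term store whose entries can be added or removed immediately. The data layout and reinforcement rule are illustrative assumptions, not the paper's exact design.
```python
# Move-table sketch: long-term weight reinforcement plus a volatile short-term set.
from collections import defaultdict

class MoveTable:
    def __init__(self):
        self.long_term = defaultdict(float)   # move -> reinforced weight
        self.short_term = set()               # moves usable right now, removable at will

    def reinforce(self, path, reward=1.0):
        """Long-term memory: reinforce every move on a played path."""
        for move in path:
            self.long_term[move] += reward

    def suggest(self, legal_moves):
        """Prefer the legal move with the highest long-term weight."""
        return max(legal_moves, key=lambda m: self.long_term[m])

table = MoveTable()
table.reinforce(["e2e4", "g1f3"])                 # a previously good play path
table.short_term.add("d2d4")                      # immediately added, removable
print(table.suggest(["e2e4", "d2d4", "a2a3"]))    # -> "e2e4"
```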
https://paperswithcode.com/paper/deepfirearm-learning-discriminative-feature
|
1806.02984
| null | null |
DeepFirearm: Learning Discriminative Feature Representation for Fine-grained Firearm Retrieval
|
There is great demand for automatically regulating the inappropriate appearance
of shocking firearm images on social media and for identifying firearm types in
forensics. Image retrieval techniques have great potential to solve these
problems. To facilitate research in this area, we introduce Firearm 14k, a
large dataset consisting of over 14,000 images in 167 categories. It can be
used for both fine-grained recognition and retrieval of firearm images. Recent
advances in image retrieval are mainly driven by fine-tuning state-of-the-art
convolutional neural networks for the retrieval task. The conventional single
margin contrastive loss, known for its simplicity and good performance, has
been widely used. We find that it performs poorly on the Firearm 14k dataset
due to: (1) the loss contributed by positive and negative image pairs is unbalanced
during the training process; and (2) a huge domain gap exists between this dataset and
ImageNet. We propose to deal with the unbalanced loss by employing a double
margin contrastive loss. We tackle the domain gap issue with a two-stage
training strategy, where we first fine-tune the network for classification, and
then fine-tune it for retrieval. Experimental results show that our approach
outperforms the conventional single margin approach by a large margin (up to
88.5% relative improvement) and even surpasses the strong triplet-loss-based
approach.
|
There is great demand for automatically regulating the inappropriate appearance of shocking firearm images on social media and for identifying firearm types in forensics.
|
http://arxiv.org/abs/1806.02984v2
|
http://arxiv.org/pdf/1806.02984v2.pdf
| null |
[
"Jiedong Hao",
"Jing Dong",
"Wei Wang",
"Tieniu Tan"
] |
[
"Image Retrieval",
"Retrieval",
"Triplet"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
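The double margin contrastive loss motivated in the abstract above can be sketched as follows: positive pairs are only pulled together while farther apart than a positive margin, and negative pairs only pushed apart while closer than a negative margin. The margin values below are illustrative assumptions.
```python
# Double margin contrastive loss on precomputed pairwise distances.
import numpy as np

def double_margin_contrastive(d, y, m_pos=0.5, m_neg=1.5):
    """d: pairwise distances; y: 1 for matching pairs, 0 for non-matching."""
    pos = y * np.maximum(d - m_pos, 0.0) ** 2        # only penalize d > m_pos
    neg = (1 - y) * np.maximum(m_neg - d, 0.0) ** 2  # only penalize d < m_neg
    return np.mean(pos + neg)

d = np.array([0.3, 0.9, 1.2, 2.0])
y = np.array([1, 1, 0, 0])
print(double_margin_contrastive(d, y))
```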
https://paperswithcode.com/paper/deep-learning-for-classification-tasks-on
|
1806.03857
| null | null |
Deep Learning for Classification Tasks on Geospatial Vector Polygons
|
In this paper, we evaluate the accuracy of deep learning approaches on geospatial vector geometry classification tasks. The purpose of this evaluation is to investigate the ability of deep learning models to learn from geometry coordinates directly. Previous machine learning research applied to geospatial polygon data did not use geometries directly, but properties derived from them by way of extracting geometry features such as Fourier descriptors. Instead, the deep neural net architectures we introduce are able to learn on sequences of coordinates mapped directly from polygons. In three classification tasks we show that the deep learning architectures are competitive with common learning algorithms that require extracted features.
|
In this paper, we evaluate the accuracy of deep learning approaches on geospatial vector geometry classification tasks.
|
https://arxiv.org/abs/1806.03857v2
|
https://arxiv.org/pdf/1806.03857v2.pdf
| null |
[
"Rein van 't Veer",
"Peter Bloem",
"Erwin Folmer"
] |
[
"BIG-bench Machine Learning",
"Classification",
"Deep Learning",
"General Classification"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/data-augmentation-instead-of-explicit
|
1806.03852
| null |
ByJWeR1AW
|
Data augmentation instead of explicit regularization
|
Contrary to most machine learning models, modern deep artificial neural networks typically include multiple components that contribute to regularization. Despite the fact that some (explicit) regularization techniques, such as weight decay and dropout, require costly fine-tuning of sensitive hyperparameters, the interplay between them and other elements that provide implicit regularization is not yet well understood. Shedding light upon these interactions is key to efficiently using computational resources and may contribute to solving the puzzle of generalization in deep learning. Here, we first provide formal definitions of explicit and implicit regularization that help understand essential differences between techniques. Second, we contrast data augmentation with weight decay and dropout. Our results show that visual object categorization models trained with data augmentation alone achieve the same or higher performance than models also trained with weight decay and dropout, as is common practice. We conclude that the contribution of weight decay and dropout to generalization is not only superfluous when sufficient implicit regularization is provided, but such techniques can also dramatically degrade performance if the hyperparameters are not carefully tuned for the architecture and data set. In contrast, data augmentation systematically provides large generalization gains and does not require hyperparameter re-tuning. In view of our results, we suggest optimizing neural networks without weight decay and dropout to save computational resources, and hence carbon emissions, and focusing more on data augmentation and other inductive biases to improve performance and robustness.
|
Despite the fact that some (explicit) regularization techniques, such as weight decay and dropout, require costly fine-tuning of sensitive hyperparameters, the interplay between them and other elements that provide implicit regularization is not yet well understood.
|
https://arxiv.org/abs/1806.03852v5
|
https://arxiv.org/pdf/1806.03852v5.pdf
|
ICLR 2018 1
|
[
"Alex Hernández-García",
"Peter König"
] |
[
"Data Augmentation",
"Object Categorization"
] | 2018-06-11T00:00:00 |
https://openreview.net/forum?id=ByJWeR1AW
|
https://openreview.net/pdf?id=ByJWeR1AW
|
data-augmentation-instead-of-explicit-1
| null |
[
{
"code_snippet_url": "",
"description": "**Weight Decay**, or **$L_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function compromising both the primary loss function and a penalty on the $L\\_{2}$ Norm of the weights:\r\n\r\n$$L\\_{new}\\left(w\\right) = L\\_{original}\\left(w\\right) + \\lambda{w^{T}w}$$\r\n\r\nwhere $\\lambda$ is a value determining the strength of the penalty (encouraging smaller weights). \r\n\r\nWeight decay can be incorporated directly into the weight update rule, rather than just implicitly by defining it through to objective function. Often weight decay refers to the implementation where we specify it directly in the weight update rule (whereas L2 regularization is usually the implementation which is specified in the objective function).\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Weight Decay",
"introduced_year": 1943,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Weight Decay",
"source_title": null,
"source_url": null
}
] |
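A small NumPy check of the two views in the Weight Decay entry above: for vanilla SGD, differentiating the L2-penalized objective and applying an explicit decay term in the update rule produce the same step. The learning rate and lambda values are illustrative.
```python
# Two equivalent views of weight decay under plain SGD.
import numpy as np

w = np.array([1.0, -2.0])
grad = np.array([0.1, 0.3])        # gradient of the original loss
lr, lam = 0.1, 0.01

# View 1: L2 regularization, differentiate L + lam * w^T w.
w_l2 = w - lr * (grad + 2 * lam * w)

# View 2: explicit decay term in the update rule itself.
w_decay = w * (1 - 2 * lr * lam) - lr * grad

print(np.allclose(w_l2, w_decay))  # True: identical for vanilla SGD
```
Note that the equivalence breaks for adaptive optimizers, which is one reason the two implementations are distinguished in the entry above.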
https://paperswithcode.com/paper/synthetic-perfusion-maps-imaging-perfusion
|
1806.03848
| null | null |
Synthetic Perfusion Maps: Imaging Perfusion Deficits in DSC-MRI with Deep Learning
|
In this work, we present a novel convolutional neural network based method
for perfusion map generation in dynamic susceptibility contrast-enhanced
perfusion imaging. The proposed architecture is trained end-to-end and solely
relies on raw perfusion data for inference. We used a dataset of 151 acute
ischemic stroke cases for evaluation. Our method generates perfusion maps that
are comparable to the target maps used for clinical routine, while being
model-free, fast, and less noisy.
| null |
http://arxiv.org/abs/1806.03848v1
|
http://arxiv.org/pdf/1806.03848v1.pdf
| null |
[
"Andreas Hess",
"Raphael Meier",
"Johannes Kaesmacher",
"Simon Jung",
"Fabien Scalzo",
"David Liebeskind",
"Roland Wiest",
"Richard McKinley"
] |
[] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-multimodal-classifier-generative
|
1806.03847
| null | null |
A Multimodal Classifier Generative Adversarial Network for Carry and Place Tasks from Ambiguous Language Instructions
|
This paper focuses on a multimodal language understanding method for
carry-and-place tasks with domestic service robots. We address the case of
ambiguous instructions, that is, when the target area is not specified. For
instance "put away the milk and cereal" is a natural instruction where there is
ambiguity regarding the target area, considering environments in daily life.
Conventionally, such an instruction can be disambiguated through a dialogue system,
but at the cost of time and cumbersome interaction. Instead, we propose a
multimodal approach, in which the instructions are disambiguated using the
robot's state and environment context. We develop the Multi-Modal Classifier
Generative Adversarial Network (MMC-GAN) to predict the likelihood of different
target areas considering the robot's physical limitation and the target
clutter. Our approach, MMC-GAN, significantly improves accuracy compared with
baseline methods that use instructions only or simple deep neural networks.
| null |
http://arxiv.org/abs/1806.03847v1
|
http://arxiv.org/pdf/1806.03847v1.pdf
| null |
[
"Aly Magassouba",
"Komei Sugiura",
"Hisashi Kawai"
] |
[
"Generative Adversarial Network"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/intriguing-properties-of-learned
|
1804.07090
| null |
SJzvDjAcK7
|
Robustness via Deep Low-Rank Representations
|
We investigate the effect of the dimensionality of the representations learned in Deep Neural Networks (DNNs) on their robustness to input perturbations, both adversarial and random. To achieve low dimensionality of learned representations, we propose an easy-to-use, end-to-end trainable, low-rank regularizer (LR) that can be applied to any intermediate layer representation of a DNN. This regularizer forces the feature representations to (mostly) lie in a low-dimensional linear subspace. We perform a wide range of experiments that demonstrate that the LR indeed induces low rank on the representations, while providing modest improvements to accuracy as an added benefit. Furthermore, the learned features make the trained model significantly more robust to input perturbations such as Gaussian and adversarial noise (even without adversarial training). Lastly, the low-dimensionality means that the learned features are highly compressible; thus discriminative features of the data can be stored using very little memory. Our experiments indicate that models trained using the LR learn robust classifiers by discovering subspaces that avoid non-robust features. Algorithmically, the LR is scalable, generic, and straightforward to implement into existing deep learning frameworks.
| null |
https://arxiv.org/abs/1804.07090v5
|
https://arxiv.org/pdf/1804.07090v5.pdf
|
ICLR 2019 5
|
[
"Amartya Sanyal",
"Varun Kanade",
"Philip H. S. Torr",
"Puneet K. Dokania"
] |
[
"Clustering",
"General Classification",
"Image Classification",
"Transfer Learning"
] | 2018-04-19T00:00:00 |
https://openreview.net/forum?id=SJzvDjAcK7
|
https://openreview.net/pdf?id=SJzvDjAcK7
|
intriguing-properties-of-learned-1
| null |
[] |
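A minimal sketch in the spirit of the low-rank regularizer (LR) described in the abstract above: penalize the spectral energy of a feature batch outside its top-k singular directions. The paper's exact penalty differs; the choice of k, the shapes, and the normalization are assumptions.
```python
# Tail-spectral-energy penalty encouraging low-rank feature batches.
import numpy as np

def low_rank_penalty(feats, k=2):
    """feats: (batch, dim) activations; penalty ~ 0 when rank(feats) <= k."""
    s = np.linalg.svd(feats, compute_uv=False)
    return np.sum(s[k:] ** 2) / np.sum(s ** 2)  # fraction of energy past rank k

rng = np.random.default_rng(0)
low_rank = rng.normal(size=(32, 2)) @ rng.normal(size=(2, 16))  # rank-2 batch
full = rng.normal(size=(32, 16))                                 # full-rank batch
print(low_rank_penalty(low_rank), low_rank_penalty(full))        # ~0 vs > 0
# In training, such a penalty would be added to the task loss via autograd.
```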
https://paperswithcode.com/paper/multi-document-summarization-using
|
1710.02745
| null | null |
Multi-Document Summarization using Distributed Bag-of-Words Model
|
As the number of documents on the web is growing exponentially,
multi-document summarization is becoming more and more important since it can
provide the main ideas of a document set in a short time. In this paper, we
present an unsupervised centroid-based document-level reconstruction framework
using distributed bag of words model. Specifically, our approach selects
summary sentences in order to minimize the reconstruction error between the
summary and the documents. We apply sentence selection and beam search to
further improve the performance of our model. Experimental results on two
different datasets show significant performance gains compared with the
state-of-the-art baselines.
| null |
http://arxiv.org/abs/1710.02745v2
|
http://arxiv.org/pdf/1710.02745v2.pdf
| null |
[
"Kaustubh Mani",
"Ishan Verma",
"Hardik Meisheri",
"Lipika Dey"
] |
[
"Document Summarization",
"Multi-Document Summarization",
"Sentence"
] | 2017-10-07T00:00:00 | null | null | null | null |
[] |
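A minimal sketch of the centroid-based selection described in the abstract above: greedily pick sentences whose running mean embedding best reconstructs the document-set centroid. Random vectors stand in for the distributed bag-of-words embeddings, and beam search is omitted.
```python
# Greedy centroid-reconstruction sentence selection (toy embeddings).
import numpy as np

rng = np.random.default_rng(0)
sent_vecs = rng.normal(size=(20, 8))        # one embedding per sentence
centroid = sent_vecs.mean(axis=0)           # document-set centroid

selected = []
for _ in range(3):                          # summary budget: 3 sentences
    best, best_err = None, np.inf
    for i in range(len(sent_vecs)):
        if i in selected:
            continue
        summary = sent_vecs[selected + [i]].mean(axis=0)
        err = np.sum((summary - centroid) ** 2)   # reconstruction error
        if err < best_err:
            best, best_err = i, err
    selected.append(best)

print(sorted(selected))
```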
https://paperswithcode.com/paper/dmcnn-dual-domain-multi-scale-convolutional
|
1806.03275
| null | null |
DMCNN: Dual-Domain Multi-Scale Convolutional Neural Network for Compression Artifacts Removal
|
JPEG is one of the most commonly used standards among lossy image compression
methods. However, JPEG compression inevitably introduces various kinds of
artifacts, especially at high compression rates, which could greatly affect the
Quality of Experience (QoE). Recently, convolutional neural network (CNN) based
methods have shown excellent performance in removing JPEG artifacts. Many
efforts have been made to deepen the CNNs and extract deeper features, while
relatively few works pay attention to the receptive field of the network. In
this paper, we illustrate that the quality of output images can in many cases be
significantly improved by enlarging the receptive fields. Going one
step further, we propose a Dual-domain Multi-scale CNN (DMCNN) to take full
advantage of redundancies on both the pixel and DCT domains. Experiments show
that DMCNN sets a new state-of-the-art for the task of JPEG artifact removal.
| null |
http://arxiv.org/abs/1806.03275v2
|
http://arxiv.org/pdf/1806.03275v2.pdf
| null |
[
"Xiaoshuai Zhang",
"Wenhan Yang",
"Yueyu Hu",
"Jiaying Liu"
] |
[
"Image Compression",
"JPEG Artifact Correction",
"JPEG Artifact Removal"
] | 2018-06-08T00:00:00 | null | null | null | null |
[] |
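For context on the DCT domain that DMCNN exploits: JPEG encodes an image as blockwise 8x8 type-II DCT coefficients, so a dual-domain model sees both pixel values and these coefficients. A small sketch with an illustrative random image (this shows the transform only, not the paper's network):
```python
# Blockwise 8x8 DCT and inverse, as used by JPEG.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
img = rng.random((16, 16))

coeffs = np.zeros_like(img)
for i in range(0, 16, 8):
    for j in range(0, 16, 8):
        coeffs[i:i+8, j:j+8] = dctn(img[i:i+8, j:j+8], norm="ortho")

recon = np.zeros_like(img)
for i in range(0, 16, 8):
    for j in range(0, 16, 8):
        recon[i:i+8, j:j+8] = idctn(coeffs[i:i+8, j:j+8], norm="ortho")

print(np.allclose(img, recon))  # True: the transform is lossless;
# JPEG's loss comes from quantizing the coefficients.
```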
https://paperswithcode.com/paper/bayesian-model-agnostic-meta-learning
|
1806.03836
| null | null |
Bayesian Model-Agnostic Meta-Learning
|
Learning to infer the Bayesian posterior from a few-shot dataset is an important
step towards robust meta-learning due to the model uncertainty inherent in the
problem. In this paper, we propose a novel Bayesian model-agnostic
meta-learning method. The proposed method combines scalable gradient-based
meta-learning with nonparametric variational inference in a principled
probabilistic framework. During fast adaptation, the method is capable of
learning complex uncertainty structure beyond a point estimate or a simple
Gaussian approximation. In addition, a robust Bayesian meta-update mechanism
with a new meta-loss prevents overfitting during meta-update. Remaining an
efficient gradient-based meta-learner, the method is also model-agnostic and
simple to implement. Experimental results show the accuracy and robustness of the
proposed method in various tasks: sinusoidal regression, image classification,
active learning, and reinforcement learning.
|
Learning to infer the Bayesian posterior from a few-shot dataset is an important step towards robust meta-learning due to the model uncertainty inherent in the problem.
|
http://arxiv.org/abs/1806.03836v4
|
http://arxiv.org/pdf/1806.03836v4.pdf
|
NeurIPS 2018 12
|
[
"Taesup Kim",
"Jaesik Yoon",
"Ousmane Dia",
"Sungwoong Kim",
"Yoshua Bengio",
"Sungjin Ahn"
] |
[
"Active Learning",
"image-classification",
"Image Classification",
"Meta-Learning",
"model",
"Reinforcement Learning",
"Variational Inference"
] | 2018-06-11T00:00:00 |
http://papers.nips.cc/paper/7963-bayesian-model-agnostic-meta-learning
|
http://papers.nips.cc/paper/7963-bayesian-model-agnostic-meta-learning.pdf
|
bayesian-model-agnostic-meta-learning-1
| null |
[] |
https://paperswithcode.com/paper/interactive-visual-grounding-of-referring
|
1806.03831
| null | null |
Interactive Visual Grounding of Referring Expressions for Human-Robot Interaction
|
This paper presents INGRESS, a robot system that follows human natural
language instructions to pick and place everyday objects. The core issue here
is the grounding of referring expressions: inferring objects and their
relationships from input images and language expressions. INGRESS allows for
unconstrained object categories and unconstrained language expressions.
Further, it asks questions to disambiguate referring expressions interactively.
To achieve these, we take the approach of grounding by generation and propose a
two-stage neural network model for grounding. The first stage uses a neural
network to generate visual descriptions of objects, compares them with the
input language expression, and identifies a set of candidate objects. The
second stage uses another neural network to examine all pairwise relations
between the candidates and infers the most likely referred object. The same
neural networks are used for both grounding and question generation for
disambiguation. Experiments show that INGRESS outperformed a state-of-the-art
method on the RefCOCO dataset and in robot experiments with humans.
| null |
http://arxiv.org/abs/1806.03831v1
|
http://arxiv.org/pdf/1806.03831v1.pdf
| null |
[
"Mohit Shridhar",
"David Hsu"
] |
[
"Question Generation",
"Question-Generation",
"Visual Grounding"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/analysis-and-design-of-convolutional-networks
|
1705.02302
| null | null |
Analysis and Design of Convolutional Networks via Hierarchical Tensor Decompositions
|
The driving force behind convolutional networks - the most successful deep
learning architecture to date - is their expressive power. Despite their wide
acceptance and vast empirical evidence, formal analyses supporting this belief
are scarce. The primary notions for formally reasoning about expressiveness are
efficiency and inductive bias. Expressive efficiency refers to the ability of a
network architecture to realize functions that require an alternative
architecture to be much larger. Inductive bias refers to the prioritization of
some functions over others given prior knowledge regarding a task at hand. In
this paper we overview a series of works written by the authors, that through
an equivalence to hierarchical tensor decompositions, analyze the expressive
efficiency and inductive bias of various convolutional network architectural
features (depth, width, strides and more). The results presented shed light on
the demonstrated effectiveness of convolutional networks, and in addition,
provide new tools for network design.
| null |
http://arxiv.org/abs/1705.02302v5
|
http://arxiv.org/pdf/1705.02302v5.pdf
| null |
[
"Nadav Cohen",
"Or Sharir",
"Yoav Levine",
"Ronen Tamari",
"David Yakira",
"Amnon Shashua"
] |
[
"Inductive Bias"
] | 2017-05-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/autofocus-layer-for-semantic-segmentation
|
1805.08403
| null | null |
Autofocus Layer for Semantic Segmentation
|
We propose the autofocus convolutional layer for semantic segmentation with
the objective of enhancing the capabilities of neural networks for multi-scale
processing. Autofocus layers adaptively change the size of the effective
receptive field based on the processed context to generate more powerful
features. This is achieved by parallelising multiple convolutional layers with
different dilation rates, combined by an attention mechanism that learns to
focus on the optimal scales driven by context. By sharing the weights of the
parallel convolutions we make the network scale-invariant, with only a modest
increase in the number of parameters. The proposed autofocus layer can be
easily integrated into existing networks to improve a model's representational
power. We evaluate our models on the challenging tasks of multi-organ
segmentation in pelvic CT and brain tumor segmentation in MRI and achieve very
promising performance.
|
We propose the autofocus convolutional layer for semantic segmentation with the objective of enhancing the capabilities of neural networks for multi-scale processing.
|
http://arxiv.org/abs/1805.08403v3
|
http://arxiv.org/pdf/1805.08403v3.pdf
| null |
[
"Yao Qin",
"Konstantinos Kamnitsas",
"Siddharth Ancha",
"Jay Nanavati",
"Garrison Cottrell",
"Antonio Criminisi",
"Aditya Nori"
] |
[
"Brain Tumor Segmentation",
"Medical Image Segmentation",
"Organ Segmentation",
"Segmentation",
"Semantic Segmentation",
"Tumor Segmentation"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
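A compact PyTorch sketch of the autofocus idea described in the abstract above: one set of 3x3 weights applied in parallel at several dilation rates and fused by a learned spatial attention over scales. The channel sizes, dilation rates, and attention head are illustrative assumptions, not the paper's exact configuration.
```python
# Parallel shared-weight dilated convolutions fused by scale attention.
import torch
import torch.nn.functional as F
from torch import nn

class AutofocusConv2d(nn.Module):
    def __init__(self, c_in, c_out, dilations=(1, 2, 4)):
        super().__init__()
        self.dilations = dilations
        self.weight = nn.Parameter(torch.randn(c_out, c_in, 3, 3) * 0.1)
        # Attention head predicts one mixing map per dilation rate.
        self.att = nn.Conv2d(c_in, len(dilations), kernel_size=3, padding=1)

    def forward(self, x):
        # Same weights, different effective receptive fields.
        branches = [F.conv2d(x, self.weight, padding=d, dilation=d)
                    for d in self.dilations]
        att = torch.softmax(self.att(x), dim=1)        # (B, K, H, W)
        return sum(att[:, k:k+1] * b for k, b in enumerate(branches))

x = torch.randn(2, 8, 32, 32)
print(AutofocusConv2d(8, 16)(x).shape)  # torch.Size([2, 16, 32, 32])
```
Sharing the same weight tensor across dilation rates is what keeps the parameter count modest, matching the scale-invariance argument in the abstract.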
https://paperswithcode.com/paper/on-the-optimization-of-deep-networks-implicit
|
1802.06509
| null | null |
On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
|
Conventional wisdom in deep learning states that increasing depth improves
expressiveness but complicates optimization. This paper suggests that,
sometimes, increasing depth can speed up optimization. The effect of depth on
optimization is decoupled from expressiveness by focusing on settings where
additional layers amount to overparameterization - linear neural networks, a
well-studied model. Theoretical analysis, as well as experiments, show that
here depth acts as a preconditioner which may accelerate convergence. Even on
simple convex problems such as linear regression with $\ell_p$ loss, $p>2$,
gradient descent can benefit from transitioning to a non-convex
overparameterized objective, more than it would from some common acceleration
schemes. We also prove that it is mathematically impossible to obtain the
acceleration effect of overparameterization via gradients of any regularizer.
|
The effect of depth on optimization is decoupled from expressiveness by focusing on settings where additional layers amount to overparameterization - linear neural networks, a well-studied model.
|
http://arxiv.org/abs/1802.06509v2
|
http://arxiv.org/pdf/1802.06509v2.pdf
|
ICML 2018 7
|
[
"Sanjeev Arora",
"Nadav Cohen",
"Elad Hazan"
] |
[
"regression"
] | 2018-02-19T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2422
|
http://proceedings.mlr.press/v80/arora18a/arora18a.pdf
|
on-the-optimization-of-deep-networks-implicit-1
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)",
"full_name": "Linear Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Linear Regression",
"source_title": null,
"source_url": null
}
] |
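A tiny NumPy illustration of the least-squares fit described in the Linear Regression entry above: the estimate minimizes $\left(y-\textbf{X}\beta\right)^{2}$ and is computed here via `lstsq` for numerical stability. The data and noise level are illustrative.
```python
# Least-squares linear regression on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=100)

# Equivalent to the closed form (X^T X)^{-1} X^T y.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_hat, 2))  # close to [ 2.  -1.   0.5]
```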
https://paperswithcode.com/paper/know-what-you-dont-know-unanswerable
|
1806.03822
| null | null |
Know What You Don't Know: Unanswerable Questions for SQuAD
|
Extractive reading comprehension systems can often locate the correct answer
to a question in a context document, but they also tend to make unreliable
guesses on questions for which the correct answer is not stated in the context.
Existing datasets either focus exclusively on answerable questions, or use
automatically generated unanswerable questions that are easy to identify. To
address these weaknesses, we present SQuAD 2.0, the latest version of the
Stanford Question Answering Dataset (SQuAD). SQuAD 2.0 combines existing SQuAD
data with over 50,000 unanswerable questions written adversarially by
crowdworkers to look similar to answerable ones. To do well on SQuAD 2.0,
systems must not only answer questions when possible, but also determine when
no answer is supported by the paragraph and abstain from answering. SQuAD 2.0
is a challenging natural language understanding task for existing models: a
strong neural system that gets 86% F1 on SQuAD 1.1 achieves only 66% F1 on
SQuAD 2.0.
|
Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context.
|
http://arxiv.org/abs/1806.03822v1
|
http://arxiv.org/pdf/1806.03822v1.pdf
|
ACL 2018 7
|
[
"Pranav Rajpurkar",
"Robin Jia",
"Percy Liang"
] |
[
"Natural Language Understanding",
"Question Answering",
"Reading Comprehension"
] | 2018-06-11T00:00:00 |
https://aclanthology.org/P18-2124
|
https://aclanthology.org/P18-2124.pdf
|
know-what-you-donat-know-unanswerable
| null |
[] |
https://paperswithcode.com/paper/addition-of-code-mixed-features-to-enhance
|
1806.03821
| null | null |
Addition of Code Mixed Features to Enhance the Sentiment Prediction of Song Lyrics
|
Sentiment analysis, also called opinion mining, is the field of study that
analyzes people's opinions, sentiments, attitudes and emotions. Songs are
important to sentiment analysis since songs and mood are mutually dependent
on each other. Based on the selected song, it becomes easy to infer the mood of
the listener, and in the future this can be used for recommendation. Song
lyrics are a rich source of data containing words that are helpful in the
analysis and classification of the sentiments they generate. Nowadays we
observe a lot of inter-sentential and intra-sentential code-mixing in songs,
which has a varying impact on the audience. To study this impact we created a
Telugu songs dataset which contains both Telugu-English code-mixed and pure
Telugu songs. In this paper, we classify the songs based on their arousal as
exciting or non-exciting. We develop a language identification tool and
introduce code-mixing features obtained from it as additional features. Our
system with these additional features attains 4-5% higher accuracy than
traditional approaches on our dataset.
| null |
http://arxiv.org/abs/1806.03821v1
|
http://arxiv.org/pdf/1806.03821v1.pdf
| null |
[
"Gangula Rama Rohit Reddy",
"Radhika Mamidi"
] |
[
"Language Identification",
"Opinion Mining",
"Sentiment Analysis"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-efficient-generalized-bellman-update-for
|
1806.03820
| null | null |
An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning
|
Our goal is for AI systems to correctly identify and act according to their
human user's objectives. Cooperative Inverse Reinforcement Learning (CIRL)
formalizes this value alignment problem as a two-player game between a human
and robot, in which only the human knows the parameters of the reward function:
the robot needs to learn them as the interaction unfolds. Previous work showed
that CIRL can be solved as a POMDP, but with an action space size exponential
in the size of the reward parameter space. In this work, we exploit a specific
property of CIRL---the human is a full information agent---to derive an
optimality-preserving modification to the standard Bellman update; this reduces
the complexity of the problem by an exponential factor and allows us to relax
CIRL's assumption of human rationality. We apply this update to a variety of
POMDP solvers and find that it enables us to scale CIRL to non-trivial
problems, with larger reward parameter spaces, and larger action spaces for
both robot and human. In solutions to these larger problems, the human exhibits
pedagogic (teaching) behavior, while the robot interprets it as such and
attains higher value for the human.
| null |
http://arxiv.org/abs/1806.03820v1
|
http://arxiv.org/pdf/1806.03820v1.pdf
|
ICML 2018 7
|
[
"Dhruv Malik",
"Malayandi Palaniappan",
"Jaime F. Fisac",
"Dylan Hadfield-Menell",
"Stuart Russell",
"Anca D. Dragan"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-11T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1969
|
http://proceedings.mlr.press/v80/malik18a/malik18a.pdf
|
an-efficient-generalized-bellman-update-for-1
| null |
[] |
https://paperswithcode.com/paper/adaptive-mcmc-via-combining-local-samplers
|
1806.03816
| null | null |
Adaptive MCMC via Combining Local Samplers
|
Markov chain Monte Carlo (MCMC) methods are widely used in machine learning. One of the major problems with MCMC is the question of how to design chains that mix fast over the whole state space; in particular, how to select the parameters of an MCMC algorithm. Here we take a different approach and, similarly to parallel MCMC methods, instead of trying to find a single chain that samples from the whole distribution, we combine samples from several chains run in parallel, each exploring only parts of the state space (e.g., a few modes only). The chains are prioritized based on kernel Stein discrepancy, which provides a good measure of performance locally. The samples from the independent chains are combined using a novel technique for estimating the probability of different regions of the sample space. Experimental results demonstrate that the proposed algorithm may provide significant speedups in different sampling problems. Most importantly, when combined with the state-of-the-art NUTS algorithm as the base MCMC sampler, our method remained competitive with NUTS on sampling from unimodal distributions, while significantly outperforming state-of-the-art competitors on synthetic multimodal problems as well as on a challenging sensor localization task.
| null |
https://arxiv.org/abs/1806.03816v6
|
https://arxiv.org/pdf/1806.03816v6.pdf
| null |
[
"Kiarash Shaloudegi",
"András György"
] |
[] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/compression-of-phase-only-holograms-with-jpeg
|
1806.03811
| null | null |
Compression of phase-only holograms with JPEG standard and deep learning
|
Reducing the enormous amount of data involved in the processing, storage and
transmission of a hologram in digital format is a critical issue. In
photograph compression, the JPEG standard is supported by almost every
system and device. It would be favorable if the JPEG standard were applicable to
hologram compression, with the advantage of universal compatibility. However, the
reconstructed image from a JPEG compressed hologram suffers from severe quality
degradation since some high frequency features in the hologram will be lost
during the compression process. In this work, we employ a deep convolutional
neural network to reduce the artifacts in a JPEG compressed hologram.
Simulation and experimental results reveal that our proposed "JPEG + deep
learning" hologram compression scheme can achieve satisfactory reconstruction
results for a computer-generated phase-only hologram after compression.
| null |
http://arxiv.org/abs/1806.03811v1
|
http://arxiv.org/pdf/1806.03811v1.pdf
| null |
[
"Shuming Jiao",
"Zhi Jin",
"Chenliang Chang",
"Changyuan Zhou",
"Wenbin Zou",
"Xia Li"
] |
[] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/greybox-fuzzing-as-a-contextual-bandits
|
1806.03806
| null | null |
Greybox fuzzing as a contextual bandits problem
|
Greybox fuzzing is one of the most useful and effective techniques for
bug detection in large-scale application programs. It uses a minimal amount of
instrumentation. American Fuzzy Lop (AFL) is a popular coverage based
evolutionary greybox fuzzing tool. AFL performs extremely well in fuzz testing
large applications and finding critical vulnerabilities, but AFL involves a lot
of heuristics when deciding the favored test case(s), skipping test cases
during fuzzing, and assigning fuzzing iterations to test case(s). In this work, we
aim to replace the heuristics AFL uses when assigning fuzzing
iterations to a test case during random fuzzing. We formalize this problem
as a 'contextual bandit problem' and propose an algorithm to solve this
problem. We have implemented our approach on top of the AFL. We modify the
AFL's heuristics with our learned model through the policy gradient method. Our
learning algorithm selects the multiplier of the number of fuzzing iterations
to be assigned to a test case during random fuzzing, given a fixed length
substring of the test case to be fuzzed. We fuzz the substring with this new
energy value and continuously update the policy based on the interesting
test cases produced during fuzzing.
|
AFL performs extremely well in fuzz testing large applications and finding critical vulnerabilities, but AFL involves a lot of heuristics when deciding the favored test case(s), skipping test cases during fuzzing, and assigning fuzzing iterations to test case(s).
|
http://arxiv.org/abs/1806.03806v1
|
http://arxiv.org/pdf/1806.03806v1.pdf
| null |
[
"Ketan Patil",
"Aditya Kanade"
] |
[
"Multi-Armed Bandits"
] | 2018-06-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "The Complete Guide USA To Contacting American Airlines Customer Service Number Explained\r\n\r\nAmerican Airlines™ main customer service number is 1-800-American Airlines™ or ((+1⇨858⇨25o⇨2740 }}[US-American Airlines™] or ((+1⇨858⇨25o⇨2740 }}[UK-American Airlines™] OTA (Live Person), available 24/7. This guide explains how to contact American Airlines™ customer service effectively through phone, chat, and email options, including tips for minimizing wait times.\r\n\r\nWhy Contact a Live Person at American Airlines™? \r\n\r\nFlight changes or cancellations: Get help adjusting or canceling flights.\r\n\r\nBooking clarification: Assistance with understanding your booking details.\r\n\r\nRefunds and compensation: Live agents can help with complex cases.\r\n\r\nTechnical glitches: Resolve booking or payment issues quickly.\r\n\r\nAmerican Airlines™ Contact Options \r\n\r\nThere are several ways to contact American Airlines™ customer service:\r\n\r\nPhone: Call ((+1⇨858⇨25o⇨2740 }}and follow the prompts or press “0” to reach an agent.\r\n\r\nLive Chat: Go to American Airlines™’ website Help section to chat with an agent ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nSocial Media: Reach out via Twitter or Facebook for quick replies.\r\n\r\nMobile App: Use the app to contact support via chat or call.\r\n\r\nEmail: Use email for less urgent matters and to keep written documentation.\r\n\r\nStep-by-Step: Talking to a Live Person at American Airlines™ \r\n\r\nCall ((+1⇨858⇨25o⇨2740 }}, select the most relevant option, or say “agent” to connect faster. You can usually press “0” to bypass prompts.\r\n\r\nImportant Numbers for International Callers \r\n\r\nUS: +1⇨858⇨25o⇨2740 \r\n\r\nCanada: +1⇨858⇨25o⇨2740 \r\n\r\nAustralia: +1⇨858⇨25o⇨2740 \r\n\r\nEspañol: +1⇨858⇨25o⇨2740 \r\n\r\nCommon Customer Service Queries \r\n\r\nFlight Changes & Cancellations: Modify or cancel your booking with assistance at ((+1⇨858⇨25o⇨2740 }}.\r\n\r\n𝚑𝚘𝚝𝚎𝚕 Bookings: Resolve issues like incorrect dates or reservation problems.\r\n\r\nRefunds & Compensation: Ensure your claims are managed correctly.\r\n\r\nFrequently Asked Questions \r\n\r\nQ: What is the fastest way to reach a live agent at American Airlines™?\r\n\r\nA: Call ((+1⇨858⇨25o⇨2740 }}or use live chat via the website/app.\r\n\r\nQ: Can I get help with accessibility or special needs?\r\n\r\nA: Yes, American Airlines™ offers accessibility support for medical or disability needs.\r\n\r\nQ: How long does it take to get an email response?\r\n\r\nA: Usually a few business days, depending on the issue.\r\n\r\nQ: Is American Airlines™ support available 24/7?\r\n\r\nA: Yes, many contact methods including phone ((+1⇨858⇨25o⇨2740 }}and chat are available 24/7.\r\n\r\nYou can contact American Airlines™ customer service ((+1⇨858⇨25o⇨2740 }}through several methods. The fastest way is by calling 1-800-American Airlines (((+1⇨858⇨25o⇨2740 }}). You can also use the chat feature on the American Airlines app or website. For social media support, message them on Twitter or Facebook. If you prefer email, submit a form through their official website. Additionally, you can visit their ticket counters or service desks at the airport for in-person assistance.\r\n\r\nLearn how to contact American Airlines customer service ((+1⇨858⇨25o⇨2740 }}by phone, chat, email or social media for any queries related to flights, refund, cancel and more. 
Find the official website, contact number and FAQs for American Airlines™ in the U.S.\r\n\r\nCall the Main American Airlines Customer Service Number:\r\n\r\nThe easiest and most common way to contact American Airlines™ is through their main customer service number: +1⇨858⇨25o⇨2740 \r\n\r\nAmerican Airlines™ Customer Service Number: +1⇨858⇨25o⇨2740 \r\n\r\nSteps to Speak to a Representative: +1⇨858⇨25o⇨2740 \r\n\r\nDial ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nListen to the automated menu options.\r\n\r\nPress the appropriate number for your inquiry (e.g., reservations, flight status, baggage claim, etc.).\r\n\r\nHold the line until a live representative becomes available.\r\n\r\nExplain your concern and receive assistance.\r\n\r\nContact American Airlines™ Rapid Rewards Customer Service \r\n\r\nIf you are a Rapid Rewards member and need assistance with points, travel rewards, or account-related issues, contact the Rapid Rewards customer service line.\r\n\r\nRapid Rewards Customer Service Number: +1⇨858⇨25o⇨2740 \r\n\r\nSteps:\r\n\r\nCall +1⇨858⇨25o⇨2740 \r\n\r\nProvide your Rapid Rewards account number when prompted.\r\n\r\nFollow the automated menu to reach an agent.\r\n\r\nDiscuss your issue or inquiry with the representative.\r\n\r\nCall American Airlines™ Baggage Service Office \r\n\r\nIf your luggage is lost, damaged, or delayed, you can contact American Airlines™ Baggage Service\r\n\r\nBaggage Service Phone Number: +1⇨858⇨25o⇨2740 \r\n\r\nSteps:\r\n\r\nCall ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nSelect the appropriate option for lost, delayed, or damaged baggage.\r\n\r\nProvide your flight and baggage claim details.\r\n\r\nSpeak to a representative for assistance.\r\n\r\nAmerican Airlines™ Customer Service for Group Travel\r\n\r\nFor group reservations (10 or more passengers), a dedicated support line is available.\r\n\r\nGroup Travel Service Number: +1⇨858⇨25o⇨2740 \r\n\r\nSteps:\r\n\r\nDial ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nSelect the option for group reservations.\r\n\r\nSpeak to an agent regarding booking, changes, or special requests.\r\n\r\nReach Out to American Airlines™ Vacations Customer Service\r\n\r\nFor vacation packages, including 𝚑𝚘𝚝𝚎𝚕s and car rentals, call the vacation service line.\r\n\r\nVacations Customer Service Number: +1⇨858⇨25o⇨2740 \r\n\r\nSteps:\r\n\r\nCall ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nSelect the appropriate option for new reservations or modifications.\r\n\r\nDiscuss your vacation plans with a representative.\r\n\r\nCall American Airlines™ Cargo Customer Service\r\n\r\nIf you are shipping cargo, you can contact the cargo department for assistance.\r\n\r\nCargo Customer Service Number: +1⇨858⇨25o⇨2740 \r\n\r\nSteps:\r\n\r\nCall ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nProvide details about your shipment.\r\n\r\nSpeak with a representative for assistance.\r\n\r\nContact American Airlines™ for Special Assistance\r\n\r\nFor passengers with disabilities or special needs, American Airlines™ offers a dedicated support line.\r\n\r\nSpecial Assistance Phone Number: ((+1⇨858⇨25o⇨2740 }}(same as the main number)\r\n\r\nSteps:\r\n\r\nCall ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nSelect the option for special assistance.\r\n\r\nSpeak with an agent about your needs.\r\n\r\nCall the American Airlines™ Refund Department\r\n\r\nIf you need to request a refund, call the refund department directly.\r\n\r\nRefunds Customer Service Number: ((+1⇨858⇨25o⇨2740 }}(main number, follow refund prompts)\r\n\r\nSteps:\r\n\r\nCall ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nSelect the option for refund inquiries.\r\n\r\nProvide your booking 
details.\r\n\r\nDiscuss refund eligibility with a representative.\r\n\r\nContact American Airlines™ Corporate Customer Service\r\n\r\nFor corporate inquiries, media requests, or other non-passenger-related concerns, use the corporate office number.\r\n\r\nCorporate Customer Service Number: +1⇨858⇨25o⇨2740 \r\n\r\nSteps:\r\n\r\nCall ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nFollow the menu prompts for corporate inquiries.\r\n\r\nSpeak to an appropriate representative.\r\n\r\nUse the American Airlines™ International Customer Service Line\r\n\r\nFor international travel inquiries, American Airlines™ provides dedicated support.\r\n\r\nInternational Customer Service Number: +1⇨858⇨25o⇨2740 \r\n\r\nSteps:\r\n\r\nDial +1⇨858⇨25o⇨2740 \r\n\r\nSelect the option for international travel.\r\n\r\nSpeak with a representative for assistance\r\n\r\nFAQs about American Airlines™ Customer Service\r\n\r\nIs American Airlines™ Customer Service available 24 hours?\r\nYes, the general customer service line (((+1⇨858⇨25o⇨2740 }}) is available 24/7 for assistance\r\n\r\nHow do I speak to a live American Airlines™ representative?\r\nCall ((+1⇨858⇨25o⇨2740 }}, follow the prompts, and select the option to speak with an agent.\r\n\r\nWhat is the 800 number for American Airlines™?\r\nThe main toll-free number is ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nDoes American Airlines™ have a different number for Rapid Rewards members?\r\nYes, Rapid Rewards Customer Service can be reached at ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nHow can I contact American Airlines™ for baggage issues?\r\nCall the Baggage Service Office at ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nCan I contact American Airlines™ for a refund request?\r\nYes, call ((+1⇨858⇨25o⇨2740 }}and select the refund option.\r\n\r\nIs there a dedicated line for international travel inquiries?\r\nYes, international customers can call ((+1⇨858⇨25o⇨2740 }}and follow the prompts for assistance.\r\n\r\nWhat number should I call for special assistance requests?\r\nPassengers needing special assistance can call ((+1⇨858⇨25o⇨2740 }}and select the appropriate option.\r\n\r\nHow do I reach American Airlines™ for corporate inquiries?\r\nFor corporate-related concerns, call ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nIs there a different number for American Airlines™ vacation packages?\r\nYes, for vacation package support, call ((+1⇨858⇨25o⇨2740 }}.\r\n\r\nBy following this guide, you can quickly and efficiently connect with American Airlines™ Customer Service for any inquiries or assistance needed.\r\n\r\nConclusion \r\n\r\nAs an American Airlines™ customer ((+1⇨858⇨25o⇨2740 }}, you have several reliable options to connect with support. For the fastest help, keep ((+1⇨858⇨25o⇨2740 }}ready. Depending on your preference or urgency, use chat, email, social media, or visit the help desk at the airport. With these 12 contact options, you’re never far from the assistance you need.",
"full_name": "7 Fastest Ways to Call American Airlines Reservations Number (USA Guide)",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "6D Pose Estimation Models",
"parent": null
},
"name": "American",
"source_title": "Focal Loss for Dense Object Detection",
"source_url": "http://arxiv.org/abs/1708.02002v2"
}
] |
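The focal loss named in the method record above has a simple closed form. A minimal NumPy sketch of the binary case, assuming the paper's default settings alpha = 0.25 and gamma = 2 (not the linked library implementation):

```python
import numpy as np

def binary_focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted probabilities of the positive class; y: 0/1 labels.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)                # numerical safety
    p_t = np.where(y == 1, p, 1 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    return -(alpha_t * (1 - p_t) ** gamma * np.log(p_t)).mean()
```

With gamma = 0 this reduces to alpha-weighted cross entropy; larger gamma down-weights already well-classified examples.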
https://paperswithcode.com/paper/chaining-mutual-information-and-tightening
|
1806.03803
| null | null |
Chaining Mutual Information and Tightening Generalization Bounds
|
Bounding the generalization error of learning algorithms has a long history, which yet falls short in explaining various generalization successes including those of deep learning. Two important difficulties are (i) exploiting the dependencies between the hypotheses, (ii) exploiting the dependence between the algorithm's input and output. Progress on the first point was made with the chaining method, originating from the work of Kolmogorov, and used in the VC-dimension bound. More recently, progress on the second point was made with the mutual information method by Russo and Zou '15. Yet, these two methods are currently disjoint. In this paper, we introduce a technique to combine the chaining and mutual information methods, to obtain a generalization bound that is both algorithm-dependent and that exploits the dependencies between the hypotheses. We provide an example in which our bound significantly outperforms both the chaining and the mutual information bounds. As a corollary, we tighten Dudley's inequality when the learning algorithm chooses its output from a small subset of hypotheses with high probability.
| null |
https://arxiv.org/abs/1806.03803v2
|
https://arxiv.org/pdf/1806.03803v2.pdf
|
NeurIPS 2018 12
|
[
"Amir R. Asadi",
"Emmanuel Abbe",
"Sergio Verdú"
] |
[
"Generalization Bounds"
] | 2018-06-11T00:00:00 |
http://papers.nips.cc/paper/7954-chaining-mutual-information-and-tightening-generalization-bounds
|
http://papers.nips.cc/paper/7954-chaining-mutual-information-and-tightening-generalization-bounds.pdf
|
chaining-mutual-information-and-tightening-1
| null |
[] |
https://paperswithcode.com/paper/eve-a-gradient-based-optimization-method-with
|
1611.01505
| null | null |
Eve: A Gradient Based Optimization Method with Locally and Globally Adaptive Learning Rates
|
Adaptive gradient methods for stochastic optimization adjust the learning
rate for each parameter locally. However, there is also a global learning rate
which must be tuned in order to get the best performance. In this paper, we
present a new algorithm that adapts the learning rate locally for each
parameter separately, and also globally for all parameters together.
Specifically, we modify Adam, a popular method for training deep learning
models, with a coefficient that captures properties of the objective function.
Empirically, we show that our method, which we call Eve, outperforms Adam and
other popular methods in training deep neural networks, like convolutional
neural networks for image classification, and recurrent neural networks for
language tasks.
|
Adaptive gradient methods for stochastic optimization adjust the learning rate for each parameter locally.
|
http://arxiv.org/abs/1611.01505v3
|
http://arxiv.org/pdf/1611.01505v3.pdf
| null |
[
"Hiroaki Hayashi",
"Jayanth Koushik",
"Graham Neubig"
] |
[
"General Classification",
"image-classification",
"Image Classification",
"Stochastic Optimization"
] | 2016-11-04T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
}
] |
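The update equations in the Adam entry above translate directly into code. A minimal NumPy sketch of one Adam step with the paper's default hyperparameters (a sketch, not the linked PyTorch implementation):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    # Bias correction compensates for the zero initialization of m and v.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update with the corrected moments (t starts at 1).
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```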
https://paperswithcode.com/paper/generative-adversarial-network-architectures
|
1806.03796
| null | null |
Generative Adversarial Network Architectures For Image Synthesis Using Capsule Networks
|
In this paper, we propose Generative Adversarial Network (GAN) architectures
that use Capsule Networks for image synthesis. Based on the principle of
positional-equivariance of features, Capsule Network's ability to encode
spatial relationships between the features of the image helps it become a more
powerful critic in comparison to Convolutional Neural Networks (CNNs) used in
current architectures for image synthesis. Our proposed GAN architectures learn
the data manifold much faster and therefore synthesize visually accurate
images with significantly fewer training samples and training epochs
compared to GANs and their variants that use CNNs. Apart from analyzing the
quantitative results corresponding to the images generated by different
architectures, we also explore the reasons for the lower coverage and diversity
explored by the GAN architectures that use CNN critics.
| null |
http://arxiv.org/abs/1806.03796v4
|
http://arxiv.org/pdf/1806.03796v4.pdf
| null |
[
"Yash Upadhyay",
"Paul Schrater"
] |
[
"Diversity",
"Generative Adversarial Network",
"Image Generation"
] | 2018-06-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
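As a quick illustration of the adversarial objective described in the GAN entry above — a minimal sketch assuming the discriminator outputs probabilities in (0, 1), using the non-saturating generator loss suggested alongside the original formulation:

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-7):
    # Discriminator ascends log D(x) + log(1 - D(G(z)))
    # (here written as minimizing the negation); the generator
    # uses the non-saturating objective, minimizing -log D(G(z)).
    d_real = np.clip(d_real, eps, 1 - eps)
    d_fake = np.clip(d_fake, eps, 1 - eps)
    d_loss = -(np.log(d_real) + np.log(1.0 - d_fake)).mean()
    g_loss = -np.log(d_fake).mean()
    return d_loss, g_loss
```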
https://paperswithcode.com/paper/cross-dataset-person-re-identification-using
|
1806.04533
| null | null |
Cross-dataset Person Re-Identification Using Similarity Preserved Generative Adversarial Networks
|
Person re-identification (Re-ID) aims to match the image frames which contain
the same person in the surveillance videos. Most of the Re-ID algorithms
conduct supervised training in some small labeled datasets, so directly
deploying these trained models to the real-world large camera networks may lead
to a poor performance due to underfitting. The significant difference between
the source training dataset and the target testing dataset makes it challenging
to incrementally optimize the model. To address this challenge, we propose a
novel solution by transforming the unlabeled images in the target domain to fit
the original classifier by using our proposed similarity preserved generative
adversarial networks model, SimPGAN. Specifically, SimPGAN adopts the
generative adversarial networks with the cycle consistency constraint to
transform the unlabeled images in the target domain to the style of the source
domain. Meanwhile, SimPGAN uses the similarity consistency loss, which is
measured by a siamese deep convolutional neural network, to preserve the
similarity of the transformed images of the same person. Comprehensive
experiments based on multiple real surveillance datasets are conducted, and the
results show that our algorithm is better than the state-of-the-art
cross-dataset unsupervised person Re-ID algorithms.
|
Meanwhile, SimPGAN uses the similarity consistency loss, which is measured by a siamese deep convolutional neural network, to preserve the similarity of the transformed images of the same person.
|
http://arxiv.org/abs/1806.04533v2
|
http://arxiv.org/pdf/1806.04533v2.pdf
| null |
[
"Jianming Lv",
"Xintong Wang"
] |
[
"Person Re-Identification"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
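The similarity consistency idea in the SimPGAN abstract above can be sketched as a distance between siamese-network embeddings of an image and its style-transformed version. The cosine-distance form below is an illustrative assumption, not the paper's exact loss:

```python
import numpy as np

def similarity_consistency_loss(f_src, f_transformed, eps=1e-8):
    # Penalize a transformed image drifting away from the original
    # image in the (siamese) embedding space, via cosine distance.
    cos = (f_src * f_transformed).sum() / (
        np.linalg.norm(f_src) * np.linalg.norm(f_transformed) + eps)
    return 1.0 - cos
```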
https://paperswithcode.com/paper/the-effect-of-network-width-on-the
|
1806.03791
| null | null |
The Effect of Network Width on the Performance of Large-batch Training
|
Distributed implementations of mini-batch stochastic gradient descent (SGD)
suffer from communication overheads, attributed to the high frequency of
gradient updates inherent in small-batch training. Training with large batches
can reduce these overheads; however, large batches can affect the convergence
properties and generalization performance of SGD. In this work, we take a first
step towards analyzing how the structure (width and depth) of a neural network
affects the performance of large-batch training. We present new theoretical
results which suggest that--for a fixed number of parameters--wider networks
are more amenable to fast large-batch training compared to deeper ones. We
provide extensive experiments on residual and fully-connected neural networks
which suggest that wider networks can be trained using larger batches without
incurring a convergence slow-down, unlike their deeper variants.
| null |
http://arxiv.org/abs/1806.03791v1
|
http://arxiv.org/pdf/1806.03791v1.pdf
|
NeurIPS 2018 12
|
[
"Lingjiao Chen",
"Hongyi Wang",
"Jinman Zhao",
"Dimitris Papailiopoulos",
"Paraschos Koutris"
] |
[] | 2018-06-11T00:00:00 |
http://papers.nips.cc/paper/8142-the-effect-of-network-width-on-the-performance-of-large-batch-training
|
http://papers.nips.cc/paper/8142-the-effect-of-network-width-on-the-performance-of-large-batch-training.pdf
|
the-effect-of-network-width-on-the-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
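The SGD entry above is a one-line update applied over random minibatches. A minimal sketch, assuming `data` is a NumPy array of examples and `grad_fn(w, batch)` returns the minibatch gradient (both names are illustrative):

```python
import numpy as np

def sgd(w, data, grad_fn, lr=0.01, batch_size=32, epochs=1,
        rng=np.random.default_rng(0)):
    # Minibatch SGD: estimate the gradient on a random minibatch,
    # then step w <- w - lr * grad.
    n = len(data)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch_size)):
            w = w - lr * grad_fn(w, data[idx])
    return w
```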
https://paperswithcode.com/paper/dureader-a-chinese-machine-reading
|
1711.05073
| null | null |
DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications
|
This paper introduces DuReader, a new large-scale, open-domain Chinese
machine reading comprehension (MRC) dataset, designed to address real-world MRC.
DuReader has three advantages over previous MRC datasets: (1) data sources:
questions and documents are based on Baidu Search and Baidu Zhidao; answers are
manually generated. (2) question types: it provides rich annotations for more
question types, especially yes-no and opinion questions, that leaves more
opportunity for the research community. (3) scale: it contains 200K questions,
420K answers and 1M documents; it is the largest Chinese MRC dataset so far.
Experiments show that human performance is well above current state-of-the-art
baseline systems, leaving plenty of room for the community to make
improvements. To help the community make these improvements, both DuReader and
baseline systems have been posted online. We also organize a shared competition
to encourage the exploration of more models. Since the release of the task,
there are significant improvements over the baselines.
|
Experiments show that human performance is well above current state-of-the-art baseline systems, leaving plenty of room for the community to make improvements.
|
http://arxiv.org/abs/1711.05073v4
|
http://arxiv.org/pdf/1711.05073v4.pdf
|
WS 2018 7
|
[
"Wei He",
"Kai Liu",
"Jing Liu",
"Yajuan Lyu",
"Shiqi Zhao",
"Xinyan Xiao",
"Yu-An Liu",
"Yizhong Wang",
"Hua Wu",
"Qiaoqiao She",
"Xuan Liu",
"Tian Wu",
"Haifeng Wang"
] |
[
"Machine Reading Comprehension",
"Reading Comprehension"
] | 2017-11-14T00:00:00 |
https://aclanthology.org/W18-2605
|
https://aclanthology.org/W18-2605.pdf
|
dureader-a-chinese-machine-reading-1
| null |
[] |
https://paperswithcode.com/paper/assumed-density-filtering-q-learning
|
1712.03333
| null | null |
Assumed Density Filtering Q-learning
|
While off-policy temporal difference (TD) methods have widely been used in reinforcement learning due to their efficiency and simple implementation, their Bayesian counterparts have not been utilized as frequently. One reason is that the non-linear max operation in the Bellman optimality equation makes it difficult to define conjugate distributions over the value functions. In this paper, we introduce a novel Bayesian approach to off-policy TD methods, called ADFQ, which updates beliefs on state-action values, Q, through an online Bayesian inference method known as Assumed Density Filtering. We formulate an efficient closed-form solution for the value update by approximately estimating analytic parameters of the posterior of the Q-beliefs. Uncertainty measures in the beliefs are not only used for exploration but also provide a natural regularization for the value update, considering all next available actions. ADFQ converges to Q-learning as the uncertainty measures of the Q-beliefs decrease, and it alleviates common drawbacks of other Bayesian RL algorithms such as computational complexity. We extend ADFQ with a neural network. Our empirical results demonstrate that ADFQ outperforms comparable algorithms on various Atari 2600 games, with drastic improvements in highly stochastic domains or domains with a large action space.
|
We formulate an efficient closed-form solution for the value update by approximately estimating analytic parameters of the posterior of the Q-beliefs.
|
https://arxiv.org/abs/1712.03333v4
|
https://arxiv.org/pdf/1712.03333v4.pdf
| null |
[
"Heejin Jeong",
"Clark Zhang",
"George J. Pappas",
"Daniel D. Lee"
] |
[
"Atari Games",
"Bayesian Inference",
"Q-Learning",
"Reinforcement Learning"
] | 2017-12-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Q-Learning** is an off-policy temporal difference control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\max\\_{a}Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThe learned action-value function $Q$ directly approximates $q\\_{*}$, the optimal action-value function, independent of the policy being followed.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Q-Learning",
"introduced_year": 1984,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Off-Policy TD Control",
"parent": null
},
"name": "Q-Learning",
"source_title": null,
"source_url": null
}
] |
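The tabular update in the Q-Learning entry above is a few lines of code. A minimal sketch, assuming Q is a state-by-action NumPy array:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Off-policy TD target: bootstrap with the greedy action in s_next,
    # independent of the behavior policy that chose a.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```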
https://paperswithcode.com/paper/doobnet-deep-object-occlusion-boundary
|
1806.03772
| null | null |
DOOBNet: Deep Object Occlusion Boundary Detection from an Image
|
Object occlusion boundary detection is a fundamental and crucial research
problem in computer vision. It is challenging to solve due to the extreme
boundary/non-boundary class imbalance encountered when training an object
occlusion boundary detector. In this paper, we propose to address this class
imbalance by up-weighting the loss contribution of false negative and false
positive examples with our novel Attention Loss function. We also propose a
unified end-to-end multi-task deep object occlusion boundary detection network
(DOOBNet) by sharing convolutional features to simultaneously predict object
boundary and occlusion orientation. DOOBNet adopts an encoder-decoder structure
with skip connection in order to automatically learn multi-scale and
multi-level features. We significantly surpass the state-of-the-art on the PIOD
dataset (ODS F-score of .702) and the BSDS ownership dataset (ODS F-score of
.555), as well as improving the detection speed to 0.037s per image on the
PIOD dataset.
|
Object occlusion boundary detection is a fundamental and crucial research problem in computer vision.
|
http://arxiv.org/abs/1806.03772v3
|
http://arxiv.org/pdf/1806.03772v3.pdf
| null |
[
"Guoxia Wang",
"Xiaohui Liang",
"Frederick W. B. Li"
] |
[
"Boundary Detection",
"Decoder",
"Object"
] | 2018-06-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/smoothed-analysis-of-the-low-rank-approach
|
1806.03763
| null | null |
Smoothed analysis of the low-rank approach for smooth semidefinite programs
|
We consider semidefinite programs (SDPs) of size n with equality constraints.
In order to overcome scalability issues, Burer and Monteiro proposed a
factorized approach based on optimizing over a matrix Y of size $n$ by $k$ such
that $X = YY^*$ is the SDP variable. The advantages of such formulation are
twofold: the dimension of the optimization variable is reduced and positive
semidefiniteness is naturally enforced. However, the problem in Y is
non-convex. In prior work, it has been shown that, when the constraints on the
factorized variable regularly define a smooth manifold, provided k is large
enough, for almost all cost matrices, all second-order stationary points
(SOSPs) are optimal. Importantly, in practice, one can only compute points
which approximately satisfy necessary optimality conditions, leading to the
question: are such points also approximately optimal? To this end, and under
similar assumptions, we use smoothed analysis to show that approximate SOSPs
for a randomly perturbed objective function are approximate global optima, with
k scaling like the square root of the number of constraints (up to log
factors). Moreover, we bound the optimality gap at the approximate solution of
the perturbed problem with respect to the original problem. We particularize
our results to an SDP relaxation of phase retrieval.
| null |
http://arxiv.org/abs/1806.03763v2
|
http://arxiv.org/pdf/1806.03763v2.pdf
|
NeurIPS 2018 12
|
[
"Thomas Pumir",
"Samy Jelassi",
"Nicolas Boumal"
] |
[
"Retrieval"
] | 2018-06-11T00:00:00 |
http://papers.nips.cc/paper/7496-smoothed-analysis-of-the-low-rank-approach-for-smooth-semidefinite-programs
|
http://papers.nips.cc/paper/7496-smoothed-analysis-of-the-low-rank-approach-for-smooth-semidefinite-programs.pdf
|
smoothed-analysis-of-the-low-rank-approach-1
| null |
[] |
https://paperswithcode.com/paper/leveraging-translations-for-speech
|
1803.08991
| null | null |
Leveraging translations for speech transcription in low-resource settings
|
Recently proposed data collection frameworks for endangered language
documentation aim not only to collect speech in the language of interest, but
also to collect translations into a high-resource language that will render the
collected resource interpretable. We focus on this scenario and explore whether
we can improve transcription quality under these extremely low-resource
settings with the assistance of text translations. We present a neural
multi-source model and evaluate several variations of it on three low-resource
datasets. We find that our multi-source model with shared attention outperforms
the baselines, reducing transcription character error rate by up to 12.3%.
|
Recently proposed data collection frameworks for endangered language documentation aim not only to collect speech in the language of interest, but also to collect translations into a high-resource language that will render the collected resource interpretable.
|
http://arxiv.org/abs/1803.08991v2
|
http://arxiv.org/pdf/1803.08991v2.pdf
| null |
[
"Antonis Anastasopoulos",
"David Chiang"
] |
[] | 2018-03-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/part-of-speech-tagging-on-an-endangered
|
1806.03757
| null | null |
Part-of-Speech Tagging on an Endangered Language: a Parallel Griko-Italian Resource
|
Most work on part-of-speech (POS) tagging is focused on high resource
languages, or examines low-resource and active learning settings through
simulated studies. We evaluate POS tagging techniques on an actual endangered
language, Griko. We present a resource that contains 114 narratives in Griko,
along with sentence-level translations in Italian, and provides gold
annotations for the test set. Based on a previously collected small corpus, we
investigate several traditional methods, as well as methods that take advantage
of monolingual data or project cross-lingual POS tags. We show that the
combination of a semi-supervised method with cross-lingual transfer is more
appropriate for this extremely challenging setting, with the best tagger
achieving an accuracy of 72.9%. With an applied active learning scheme, which
we use to collect sentence-level annotations over the test set, we achieve
improvements of more than 21 percentage points.
|
Most work on part-of-speech (POS) tagging is focused on high resource languages, or examines low-resource and active learning settings through simulated studies.
|
http://arxiv.org/abs/1806.03757v1
|
http://arxiv.org/pdf/1806.03757v1.pdf
|
COLING 2018 8
|
[
"Antonis Anastasopoulos",
"Marika Lekakou",
"Josep Quer",
"Eleni Zimianiti",
"Justin DeBenedetto",
"David Chiang"
] |
[
"Active Learning",
"Cross-Lingual Transfer",
"Part-Of-Speech Tagging",
"POS",
"POS Tagging",
"Sentence"
] | 2018-06-11T00:00:00 |
https://aclanthology.org/C18-1214
|
https://aclanthology.org/C18-1214.pdf
|
part-of-speech-tagging-on-an-endangered-2
| null |
[] |
https://paperswithcode.com/paper/robust-object-tracking-with-crow-search
|
1806.03753
| null | null |
Robust Object Tracking with Crow Search Optimized Multi-cue Particle Filter
|
The Particle Filter (PF) is used extensively for estimating the non-linear,
non-Gaussian state of a target. However, its performance suffers due to the
inherent problems of sample degeneracy and impoverishment. To address this, we
propose a novel resampling method based on Crow Search Optimization to
recover low-performing particles detected as outliers. The proposed outlier
detection mechanism with transductive reliability achieves faster convergence
of the proposed PF tracking framework. In addition, we present an adaptive
fuzzy fusion model to integrate the multiple cues extracted for each evaluated
particle. Automatic boosting and suppression of particles using the proposed
fusion model not only enhances the performance of the resampling method but
also achieves optimal state estimation. The performance of the proposed
tracker is evaluated over 12 benchmark video sequences and compared with
state-of-the-art solutions. Qualitative and quantitative results reveal that
the proposed tracker not only outperforms existing solutions but also
efficiently handles various tracking challenges. On average, we achieve a CLE
of 7.98 and an F-measure of 0.734.
| null |
http://arxiv.org/abs/1806.03753v1
|
http://arxiv.org/pdf/1806.03753v1.pdf
| null |
[
"Kapil Sharma",
"Gurjit Singh Walia",
"Ashish Kumar",
"Astitwa Saxena",
"Kuldeep Singh"
] |
[
"Object Tracking",
"Outlier Detection",
"State Estimation"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
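For context on the resampling step the paper above replaces: a minimal sketch of conventional systematic resampling, assuming normalized particle weights (the crow-search variant itself is not reproduced here):

```python
import numpy as np

def systematic_resample(weights, rng=np.random.default_rng(0)):
    # Map N normalized particle weights to N resampled particle indices
    # using a single random offset and evenly spaced positions.
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)
```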
https://paperswithcode.com/paper/improving-transferability-of-adversarial
|
1803.06978
| null | null |
Improving Transferability of Adversarial Examples with Input Diversity
|
Though CNNs have achieved the state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples --- crafted by adding human-imperceptible perturbations to clean images. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
|
We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future.
|
https://arxiv.org/abs/1803.06978v4
|
https://arxiv.org/pdf/1803.06978v4.pdf
|
CVPR 2019 6
|
[
"Cihang Xie",
"Zhishuai Zhang",
"Yuyin Zhou",
"Song Bai",
"Jian-Yu Wang",
"Zhou Ren",
"Alan Yuille"
] |
[
"Adversarial Attack",
"Diversity",
"Image Classification"
] | 2018-03-19T00:00:00 |
http://openaccess.thecvf.com/content_CVPR_2019/html/Xie_Improving_Transferability_of_Adversarial_Examples_With_Input_Diversity_CVPR_2019_paper.html
|
http://openaccess.thecvf.com/content_CVPR_2019/papers/Xie_Improving_Transferability_of_Adversarial_Examples_With_Input_Diversity_CVPR_2019_paper.pdf
|
improving-transferability-of-adversarial-1
| null |
[] |
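The "diverse input patterns" in the abstract above amount to applying a random transformation to the image at each attack iteration. A rough sketch of a random resize-and-pad transform using nearest-neighbour scaling; the exact transformation parameters in the paper may differ:

```python
import numpy as np

def random_resize_pad(img, low_ratio=0.9, rng=np.random.default_rng(0)):
    # Randomly downscale the image, then zero-pad it back to its
    # original size at a random offset (applied once per attack step).
    h, w = img.shape[:2]
    nh = int(rng.integers(int(h * low_ratio), h + 1))
    nw = int(rng.integers(int(w * low_ratio), w + 1))
    ys = np.arange(nh) * h // nh          # nearest-neighbour row indices
    xs = np.arange(nw) * w // nw          # nearest-neighbour column indices
    small = img[ys][:, xs]
    out = np.zeros_like(img)
    top = int(rng.integers(0, h - nh + 1))
    left = int(rng.integers(0, w - nw + 1))
    out[top:top + nh, left:left + nw] = small
    return out
```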
https://paperswithcode.com/paper/a-gpu-based-wfst-decoder-with-exact-lattice
|
1804.03243
| null | null |
A GPU-based WFST Decoder with Exact Lattice Generation
|
We describe initial work on an extension of the Kaldi toolkit that supports
weighted finite-state transducer (WFST) decoding on Graphics Processing Units
(GPUs). We implement token recombination as an atomic GPU operation in order to
fully parallelize the Viterbi beam search, and propose a dynamic load balancing
strategy for more efficient token passing scheduling among GPU threads. We also
redesign the exact lattice generation and lattice pruning algorithms for better
utilization of the GPUs. Experiments on the Switchboard corpus show that the
proposed method achieves identical 1-best results and lattice quality in
recognition and confidence measure tasks, while running 3 to 15 times faster
than the single process Kaldi decoder. The above results are reported on
different GPU architectures. Additionally, we obtain a 46-fold speedup with
sequence parallelism and multi-process service (MPS) on the GPU.
| null |
http://arxiv.org/abs/1804.03243v3
|
http://arxiv.org/pdf/1804.03243v3.pdf
| null |
[
"Zhehuai Chen",
"Justin Luitjens",
"Hainan Xu",
"Yiming Wang",
"Daniel Povey",
"Sanjeev Khudanpur"
] |
[
"Decoder",
"GPU",
"Scheduling"
] | 2018-04-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
}
] |
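The Pruning entry above cites "Pruning Filters for Efficient ConvNets", which ranks convolutional filters by their L1 norm and removes the smallest ones. A minimal sketch of that ranking step (the surrounding retraining loop is omitted):

```python
import numpy as np

def filters_to_prune(conv_w, n_prune):
    # conv_w has shape (out_channels, in_channels, kH, kW); filters
    # with the smallest L1 norms are selected for removal.
    l1 = np.abs(conv_w).reshape(conv_w.shape[0], -1).sum(axis=1)
    return np.argsort(l1)[:n_prune]
```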
https://paperswithcode.com/paper/a-structured-variational-autoencoder-for
|
1806.03746
| null | null |
A Structured Variational Autoencoder for Contextual Morphological Inflection
|
Statistical morphological inflectors are typically trained on fully supervised, type-level data. One remaining open research question is the following: How can we effectively exploit raw, token-level data to improve their performance? To this end, we introduce a novel generative latent-variable model for the semi-supervised learning of inflection generation. To enable posterior inference over the latent variables, we derive an efficient variational inference procedure based on the wake-sleep algorithm. We experiment on 23 languages, using the Universal Dependencies corpora in a simulated low-resource setting, and find improvements of over 10% absolute accuracy in some cases.
|
Statistical morphological inflectors are typically trained on fully supervised, type-level data.
|
https://arxiv.org/abs/1806.03746v2
|
https://arxiv.org/pdf/1806.03746v2.pdf
|
ACL 2018 7
|
[
"Lawrence Wolf-Sonkin",
"Jason Naradowsky",
"Sabrina J. Mielke",
"Ryan Cotterell"
] |
[
"Morphological Inflection",
"Variational Inference"
] | 2018-06-10T00:00:00 |
https://aclanthology.org/P18-1245
|
https://aclanthology.org/P18-1245.pdf
|
a-structured-variational-autoencoder-for-1
| null |
[] |
https://paperswithcode.com/paper/object-detection-in-videos-by-high-quality
|
1801.09823
| null | null |
Object Detection in Videos by High Quality Object Linking
|
Compared with object detection in static images, object detection in videos
is more challenging due to degraded image qualities. An effective way to
address this problem is to exploit temporal contexts by linking the same object
across video to form tubelets and aggregating classification scores in the
tubelets. In this paper, we focus on obtaining high quality object linking
results for better classification. Unlike previous methods that link objects by
checking boxes between neighboring frames, we propose to link in the same
frame. To achieve this goal, we extend prior methods in following aspects: (1)
a cuboid proposal network that extracts spatio-temporal candidate cuboids which
bound the movement of objects; (2) a short tubelet detection network that
detects short tubelets in short video segments; (3) a short tubelet linking
algorithm that links temporally-overlapping short tubelets to form long
tubelets. Experiments on the ImageNet VID dataset show that our method
outperforms both the static image detector and the previous state of the art.
In particular, our method improves results by 8.8% over the static image
detector for fast moving objects.
| null |
http://arxiv.org/abs/1801.09823v3
|
http://arxiv.org/pdf/1801.09823v3.pdf
| null |
[
"Peng Tang",
"Chunyu Wang",
"Xinggang Wang",
"Wenyu Liu",
"Wen-Jun Zeng",
"Jingdong Wang"
] |
[
"General Classification",
"Object",
"object-detection",
"Object Detection",
"Vocal Bursts Intensity Prediction"
] | 2018-01-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/are-all-languages-equally-hard-to-language
|
1806.03743
| null | null |
Are All Languages Equally Hard to Language-Model?
|
For general modeling methods applied to diverse languages, a natural question is: how well should we expect our models to work on languages with differing typological profiles? In this work, we develop an evaluation framework for fair cross-linguistic comparison of language models, using translated text so that all models are asked to predict approximately the same information. We then conduct a study on 21 languages, demonstrating that in some languages, the textual expression of the information is harder to predict with both $n$-gram and LSTM language models. We show complex inflectional morphology to be a cause of performance differences among languages.
| null |
https://arxiv.org/abs/1806.03743v2
|
https://arxiv.org/pdf/1806.03743v2.pdf
|
NAACL 2018 6
|
[
"Ryan Cotterell",
"Sabrina J. Mielke",
"Jason Eisner",
"Brian Roark"
] |
[
"All",
"Language Modeling",
"Language Modelling",
"model"
] | 2018-06-10T00:00:00 |
https://aclanthology.org/N18-2085
|
https://aclanthology.org/N18-2085.pdf
|
are-all-languages-equally-hard-to-language-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
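The three entries above fit together in a single LSTM cell step. A minimal NumPy sketch wiring the sigmoid and tanh activations into the gated update; the weight layout (W maps the input, U the previous hidden state, with the four gates stacked) is an assumption for illustration:

```python
import numpy as np

def sigmoid(x):
    # f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    # W: (4H, D), U: (4H, H), b: (4H,); input/forget/output/candidate stacked.
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # gates
    g = np.tanh(g)                                # candidate cell state
    c = f * c_prev + i * g                        # additive update eases gradient flow
    h = o * np.tanh(c)
    return h, c
```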
https://paperswithcode.com/paper/unsupervised-disambiguation-of-syncretism-in
|
1806.03740
| null | null |
Unsupervised Disambiguation of Syncretism in Inflected Lexicons
|
Lexical ambiguity makes it difficult to compute various useful statistics of a corpus. A given word form might represent any of several morphological feature bundles. One can, however, use unsupervised learning (as in EM) to fit a model that probabilistically disambiguates word forms. We present such an approach, which employs a neural network to smoothly model a prior distribution over feature bundles (even rare ones). Although this basic model does not consider a token's context, that very property allows it to operate on a simple list of unigram type counts, partitioning each count among different analyses of that unigram. We discuss evaluation metrics for this novel task and report results on 5 languages.
| null |
https://arxiv.org/abs/1806.03740v2
|
https://arxiv.org/pdf/1806.03740v2.pdf
|
NAACL 2018 6
|
[
"Ryan Cotterell",
"Christo Kirov",
"Sabrina J. Mielke",
"Jason Eisner"
] |
[] | 2018-06-10T00:00:00 |
https://aclanthology.org/N18-2087
|
https://aclanthology.org/N18-2087.pdf
|
unsupervised-disambiguation-of-syncretism-in-1
| null |
[] |
https://paperswithcode.com/paper/polya-urn-latent-dirichlet-allocation-a
|
1704.03581
| null | null |
Pólya Urn Latent Dirichlet Allocation: a doubly sparse massively parallel sampler
|
Latent Dirichlet Allocation (LDA) is a topic model widely used in natural language processing and machine learning. Most approaches to training the model rely on iterative algorithms, which makes it difficult to run LDA on big corpora that are best analyzed in parallel and distributed computational environments. Indeed, current approaches to parallel inference either don't converge to the correct posterior or require storage of large dense matrices in memory. We present a novel sampler that overcomes both problems, and we show that this sampler is faster, both empirically and theoretically, than previous Gibbs samplers for LDA. We do so by employing a novel P\'olya-urn-based approximation in the sparse partially collapsed sampler for LDA. We prove that the approximation error vanishes with data size, making our algorithm asymptotically exact, a property of importance for large-scale topic models. In addition, we show, via an explicit example, that - contrary to popular belief in the topic modeling literature - partially collapsed samplers can be more efficient than fully collapsed samplers. We conclude by comparing the performance of our algorithm with that of other approaches on well-known corpora.
|
We conclude by comparing the performance of our algorithm with that of other approaches on well-known corpora.
|
https://arxiv.org/abs/1704.03581v7
|
https://arxiv.org/pdf/1704.03581v7.pdf
| null |
[
"Alexander Terenin",
"Måns Magnusson",
"Leif Jonsson",
"David Draper"
] |
[
"Topic Models"
] | 2017-04-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Linear discriminant analysis** (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.\r\n\r\nExtracted from [Wikipedia](https://en.wikipedia.org/wiki/Linear_discriminant_analysis)\r\n\r\n**Source**:\r\n\r\nPaper: [Linear Discriminant Analysis: A Detailed Tutorial](https://dx.doi.org/10.3233/AIC-170729)\r\n\r\nPublic version: [Linear Discriminant Analysis: A Detailed Tutorial](https://usir.salford.ac.uk/id/eprint/52074/)",
"full_name": "Linear Discriminant Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "LDA",
"source_title": null,
"source_url": null
}
] |
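The LDA entry above describes linear discriminant analysis as a supervised dimensionality reduction. A minimal usage sketch with scikit-learn on synthetic data (two Gaussian classes, purely for illustration):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two Gaussian classes in 5 dimensions, reduced to 1 discriminant direction.
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(1, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

lda = LinearDiscriminantAnalysis(n_components=1)
Z = lda.fit_transform(X, y)          # projected features
print(Z.shape, lda.score(X, y))      # (200, 1) and training accuracy
```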
https://paperswithcode.com/paper/an-enhanced-bpso-based-approach-for-service
|
1806.05971
| null | null |
An Enhanced Binary Particle-Swarm Optimization (E-BPSO) Algorithm for Service Placement in Hybrid Cloud Platforms
|
Nowadays, hybrid cloud platforms stand as an attractive solution for organizations intending to implement combined private and public cloud applications, in order to meet their profitability requirements. However, this can only be achieved through the utilization of available resources while speeding up execution processes. Accordingly, deploying new applications entails dedicating some of these processes to a private cloud solution, while allocating others to the public cloud. In this context, the present work is set to help minimize relevant costs and deliver effective choices for an optimal service placement solution within minimal execution time. Several evolutionary algorithms have been applied to solve the service placement problem; they are used when dealing with complex solution spaces to provide an optimal placement and often produce a short execution time. The standard BPSO algorithm is found to display a significant disadvantage, namely that it easily becomes trapped in local optima, in addition to demonstrating a noticeable lack of robustness in dealing with service placement problems. Hence, to overcome critical shortcomings associated with the standard BPSO, an Enhanced Binary Particle Swarm Optimization (E-BPSO) algorithm is proposed, consisting of a modification of the particle position updating equation, initially inspired by the continuous PSO. Our proposed E-BPSO algorithm is shown to outperform state-of-the-art approaches in terms of both cost and execution time, using a real benchmark.
| null |
https://arxiv.org/abs/1806.05971v2
|
https://arxiv.org/pdf/1806.05971v2.pdf
| null |
[
"Wissem Abbes",
"Zied Kechaou",
"Amir Hussain",
"Abdulrahman M. Qahtani",
"Omar Aimutiry",
"Habib Dhahri",
"Adel M. ALIMI"
] |
[
"Evolutionary Algorithms"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
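The E-BPSO abstract above says the method modifies the particle position-updating equation of binary PSO. For context, a minimal sketch of the standard BPSO update it starts from, where each bit is set with probability sigmoid(velocity) (the E-BPSO modification itself is not reproduced here):

```python
import numpy as np

def bpso_position_update(velocity, rng=np.random.default_rng(0)):
    # Standard BPSO: bit j is set to 1 with probability sigmoid(v_j).
    prob = 1.0 / (1.0 + np.exp(-velocity))
    return (rng.random(velocity.shape) < prob).astype(int)
```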
https://paperswithcode.com/paper/cross-dataset-adaptation-for-visual-question
|
1806.03726
| null | null |
Cross-Dataset Adaptation for Visual Question Answering
|
We investigate the problem of cross-dataset adaptation for visual question
answering (Visual QA). Our goal is to train a Visual QA model on a source
dataset but apply it to another target one. Analogous to domain adaptation for
visual recognition, this setting is appealing when the target dataset does not
have a sufficient amount of labeled data to learn an "in-domain" model. The key
challenge is that the two datasets are constructed differently, resulting in
the cross-dataset mismatch on images, questions, or answers.
We overcome this difficulty by proposing a novel domain adaptation algorithm.
Our method reduces the difference in statistical distributions by transforming
the feature representation of the data in the target dataset. Moreover, it
maximizes the likelihood of answering questions (in the target dataset)
correctly using the Visual QA model trained on the source dataset. We
empirically studied the effectiveness of the proposed approach on adapting
among several popular Visual QA datasets. We show that the proposed method
improves over baselines where there is no adaptation and several other
adaptation methods. We both quantitatively and qualitatively analyze when the
adaptation can be mostly effective.
| null |
http://arxiv.org/abs/1806.03726v1
|
http://arxiv.org/pdf/1806.03726v1.pdf
|
CVPR 2018 6
|
[
"Wei-Lun Chao",
"Hexiang Hu",
"Fei Sha"
] |
[
"Domain Adaptation",
"Question Answering",
"Visual Question Answering",
"Visual Question Answering (VQA)"
] | 2018-06-10T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Chao_Cross-Dataset_Adaptation_for_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Chao_Cross-Dataset_Adaptation_for_CVPR_2018_paper.pdf
|
cross-dataset-adaptation-for-visual-question-1
| null |
[] |
https://paperswithcode.com/paper/learning-answer-embeddings-for-visual
|
1806.03724
| null | null |
Learning Answer Embeddings for Visual Question Answering
|
We propose a novel probabilistic model for visual question answering (Visual
QA). The key idea is to infer two sets of embeddings: one for the image and the
question jointly and the other for the answers. The learning objective is to
learn the best parameterization of those embeddings such that the correct
answer has higher likelihood among all possible answers. In contrast to several
existing approaches of treating Visual QA as multi-way classification, the
proposed approach takes the semantic relationships (as characterized by the
embeddings) among answers into consideration, instead of viewing them as
independent ordinal numbers. Thus, the learned embedded function can be used to
embed unseen answers (in the training dataset). These properties make the
approach particularly appealing for transfer learning for open-ended Visual QA,
where the source dataset on which the model is learned has limited overlapping
with the target dataset in the space of answers. We have also developed
large-scale optimization techniques for applying the model to datasets with a
large number of answers, where the challenge is to properly normalize the
proposed probabilistic models. We validate our approach on several Visual QA
datasets and investigate its utility for transferring models across datasets.
The empirical results have shown that the approach performs well not only on
in-domain learning but also on transfer learning.
| null |
http://arxiv.org/abs/1806.03724v1
|
http://arxiv.org/pdf/1806.03724v1.pdf
|
CVPR 2018 6
|
[
"Hexiang Hu",
"Wei-Lun Chao",
"Fei Sha"
] |
[
"Question Answering",
"Transfer Learning",
"Visual Question Answering",
"Visual Question Answering (VQA)"
] | 2018-06-10T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Hu_Learning_Answer_Embeddings_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Learning_Answer_Embeddings_CVPR_2018_paper.pdf
|
learning-answer-embeddings-for-visual-1
| null |
[] |
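A minimal sketch of the scoring idea above: a joint (image, question) embedding is scored against a matrix of answer embeddings, and training maximizes the likelihood of the correct answer under a softmax over all candidates. The dimensions and random tensors are placeholders, and the lookup-table answer matrix is a simplification: the paper learns an embedding function over answers, which is what allows unseen answers to be scored.

```python
import torch
import torch.nn.functional as F

d, n_answers, batch = 128, 1000, 32
iq = torch.randn(batch, d)               # f(image, question) from some joint encoder
answer_emb = torch.randn(n_answers, d, requires_grad=True)  # g(answer), here a table
target = torch.randint(0, n_answers, (batch,))

logits = iq @ answer_emb.t()             # score each (image, question) vs every answer
loss = F.cross_entropy(logits, target)   # maximise likelihood of the correct answer
loss.backward()
```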
https://paperswithcode.com/paper/smallify-learning-network-size-while-training
|
1806.03723
| null | null |
Smallify: Learning Network Size while Training
|
As neural networks become widely deployed in different applications and on
different hardware, it has become increasingly important to optimize inference
time and model size along with model accuracy. Most current techniques optimize
model size, model accuracy and inference time in different stages, resulting in
suboptimal results and computational inefficiency. In this work, we propose a
new technique called Smallify that optimizes all three of these metrics at the
same time. Specifically we present a new method to simultaneously optimize
network size and model performance by neuron-level pruning during training.
Neuron-level pruning not only produces much smaller networks but also produces
dense weight matrices that are amenable to efficient inference. By applying our
technique to convolutional as well as fully connected models, we show that
Smallify can reduce network size by 35X with a 6X improvement in inference time
with similar accuracy as models found by traditional training techniques.
| null |
http://arxiv.org/abs/1806.03723v1
|
http://arxiv.org/pdf/1806.03723v1.pdf
| null |
[
"Guillaume Leclerc",
"Manasi Vartak",
"Raul Castro Fernandez",
"Tim Kraska",
"Samuel Madden"
] |
[] | 2018-06-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
}
] |
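A sketch of the neuron-level pruning-during-training idea above, assuming the common formulation of a per-neuron learnable gate driven to zero by an L1 penalty and thresholded after training; the gate module, penalty weight, and threshold are illustrative, not the paper's implementation:

```python
import torch
import torch.nn as nn

class NeuronGate(nn.Module):
    """Per-neuron learnable scale: an L1 penalty drives scales toward zero so
    that whole neurons (not individual weights) can be removed after training,
    leaving smaller but still dense weight matrices."""
    def __init__(self, width):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(width))

    def forward(self, x):
        return x * self.beta

net = nn.Sequential(nn.Linear(784, 256), NeuronGate(256), nn.ReLU(), nn.Linear(256, 10))
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
gate = net[1]
loss = nn.functional.cross_entropy(net(x), y) + 1e-3 * gate.beta.abs().sum()
loss.backward()  # task loss plus L1 sparsity pressure on the gates

keep = gate.beta.detach().abs() > 1e-2   # illustrative pruning threshold
print(int(keep.sum()), "of 256 neurons kept")
```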
https://paperswithcode.com/paper/stochastic-seismic-waveform-inversion-using
|
1806.03720
| null | null |
Stochastic seismic waveform inversion using generative adversarial networks as a geological prior
|
We present an application of deep generative models in the context of
partial-differential equation (PDE) constrained inverse problems. We combine a
generative adversarial network (GAN) representing an a priori model that
creates subsurface geological structures and their petrophysical properties,
with the numerical solution of the PDE governing the propagation of acoustic
waves within the earth's interior. We perform Bayesian inversion using an
approximate Metropolis-adjusted Langevin algorithm (MALA) to sample from the
posterior given seismic observations. Gradients with respect to the model
parameters governing the forward problem are obtained by solving the adjoint of
the acoustic wave equation. Gradients of the mismatch with respect to the
latent variables are obtained by leveraging the differentiable nature of the
deep neural network used to represent the generative model. We show that
approximate MALA sampling allows efficient Bayesian inversion of model
parameters obtained from a prior represented by a deep generative model,
obtaining a diverse set of realizations that reflect the observed seismic
response.
|
We show that approximate MALA sampling allows efficient Bayesian inversion of model parameters obtained from a prior represented by a deep generative model, obtaining a diverse set of realizations that reflect the observed seismic response.
|
http://arxiv.org/abs/1806.03720v1
|
http://arxiv.org/pdf/1806.03720v1.pdf
| null |
[
"Lukas Mosser",
"Olivier Dubrule",
"Martin J. Blunt"
] |
[
"Generative Adversarial Network"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
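The sampler named above is standard MALA; a minimal sketch on a toy log-density. In the paper's setting, `grad_logp` would chain the adjoint-state gradient of the wave-equation misfit through the differentiable GAN generator; here both callables are placeholders.

```python
import numpy as np

def mala_step(z, logp, grad_logp, eps, rng):
    """One Metropolis-adjusted Langevin step on the latent variables z."""
    g = grad_logp(z)
    z_prop = z + 0.5 * eps**2 * g + eps * rng.standard_normal(z.shape)
    g_prop = grad_logp(z_prop)

    def log_q(a, b, gb):  # log density (up to a constant) of proposing a from b
        return -np.sum((a - b - 0.5 * eps**2 * gb) ** 2) / (2 * eps**2)

    log_alpha = logp(z_prop) - logp(z) + log_q(z, z_prop, g_prop) - log_q(z_prop, z, g)
    return z_prop if np.log(rng.random()) < log_alpha else z

# Toy target: a standard Gaussian posterior over an 8-dim latent vector.
rng = np.random.default_rng(0)
z = rng.standard_normal(8)
for _ in range(1000):
    z = mala_step(z, lambda z: -0.5 * z @ z, lambda z: -z, eps=0.5, rng=rng)
```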
https://paperswithcode.com/paper/being-negative-but-constructively-lessons
|
1704.07121
| null | null |
Being Negative but Constructively: Lessons Learnt from Creating Better Visual Question Answering Datasets
|
Visual question answering (Visual QA) has attracted a lot of attention
lately, seen essentially as a form of (visual) Turing test that artificial
intelligence should strive to achieve. In this paper, we study a crucial
component of this task: how can we design good datasets for the task? We focus
on the design of multiple-choice based datasets where the learner has to select
the right answer from a set of candidate ones including the target (\ie the
correct one) and the decoys (\ie the incorrect ones). Through careful analysis
of the results attained by state-of-the-art learning models and human
annotators on existing datasets, we show that the design of the decoy answers
has a significant impact on how and what the learning models learn from the
datasets. In particular, the resulting learner can ignore the visual
information, the question, or both while still doing well on the task. Inspired
by this, we propose automatic procedures to remedy such design deficiencies. We
apply the procedures to re-construct decoy answers for two popular Visual QA
datasets as well as to create a new Visual QA dataset from the Visual Genome
project, resulting in the largest dataset for this task. Extensive empirical
studies show that the design deficiencies have been alleviated in the remedied
datasets and the performance on them is likely a more faithful indicator of the
difference among learning models. The datasets are released and publicly
available via http://www.teds.usc.edu/website_vqa/.
| null |
http://arxiv.org/abs/1704.07121v2
|
http://arxiv.org/pdf/1704.07121v2.pdf
|
NAACL 2018 6
|
[
"Wei-Lun Chao",
"Hexiang Hu",
"Fei Sha"
] |
[
"Multiple-choice",
"Question Answering",
"Visual Question Answering",
"Visual Question Answering (VQA)"
] | 2017-04-24T00:00:00 |
https://aclanthology.org/N18-1040
|
https://aclanthology.org/N18-1040.pdf
|
being-negative-but-constructively-lessons-1
| null |
[] |
https://paperswithcode.com/paper/conditional-generative-adversarial-and
|
1805.10207
| null | null |
Conditional Generative Adversarial and Convolutional Networks for X-ray Breast Mass Segmentation and Shape Classification
|
This paper proposes a novel approach based on conditional Generative
Adversarial Networks (cGAN) for breast mass segmentation in mammography. We
hypothesized that the cGAN structure is well-suited to accurately outline the
mass area, especially when the training data is limited. The generative network
learns intrinsic features of tumors while the adversarial network enforces
segmentations to be similar to the ground truth. Experiments performed on
dozens of malignant tumors extracted from the public DDSM dataset and from our
in-house private dataset confirm our hypothesis with very high Dice coefficient
and Jaccard index (>94% and >89%, respectively) outperforming the scores
obtained by other state-of-the-art approaches. Furthermore, in order to portray
significant morphological features of the segmented tumor, a specific
Convolutional Neural Network (CNN) has also been designed for classifying the
segmented tumor areas into four types (irregular, lobular, oval and round),
which provides an overall accuracy about 72% with the DDSM dataset.
|
This paper proposes a novel approach based on conditional Generative Adversarial Networks (cGAN) for breast mass segmentation in mammography.
|
http://arxiv.org/abs/1805.10207v2
|
http://arxiv.org/pdf/1805.10207v2.pdf
| null |
[
"Vivek Kumar Singh",
"Santiago Romani",
"Hatem A. Rashwan",
"Farhan Akram",
"Nidhi Pandey",
"Md. Mostafa Kamal Sarker",
"Jordina Torrents Barrena",
"Saddam Abdulwahab",
"Adel Saleh",
"Miguel Arquez",
"Meritxell Arenas",
"Domenec Puig"
] |
[
"General Classification"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
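For reference, the two overlap metrics reported above (Dice coefficient and Jaccard index), in a minimal implementation on synthetic binary masks; note the two are monotonically related by $J = D / (2 - D)$:

```python
import numpy as np

def dice_and_jaccard(pred, gt):
    """Overlap metrics for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jaccard = inter / np.logical_or(pred, gt).sum()
    return dice, jaccard

# Synthetic masks standing in for a predicted and a ground-truth tumor region.
pred = np.zeros((64, 64), int); pred[10:40, 10:40] = 1
gt = np.zeros((64, 64), int);   gt[12:42, 12:42] = 1
print(dice_and_jaccard(pred, gt))
```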
https://paperswithcode.com/paper/all-in-one-multi-task-learning-for-rumour
|
1806.03713
| null | null |
All-in-one: Multi-task Learning for Rumour Verification
|
Automatic resolution of rumours is a challenging task that can be broken down
into smaller components that make up a pipeline, including rumour detection,
rumour tracking and stance classification, leading to the final outcome of
determining the veracity of a rumour. In previous work, these steps in the
process of rumour verification have been developed as separate components where
the output of one feeds into the next. We propose a multi-task learning
approach that allows joint training of the main and auxiliary tasks, improving
the performance of rumour verification. We examine the connection between the
dataset properties and the outcomes of the multi-task learning models used.
| null |
http://arxiv.org/abs/1806.03713v1
|
http://arxiv.org/pdf/1806.03713v1.pdf
|
COLING 2018 8
|
[
"Elena Kochkina",
"Maria Liakata",
"Arkaitz Zubiaga"
] |
[
"All",
"General Classification",
"Multi-Task Learning",
"Rumour Detection",
"Stance Classification"
] | 2018-06-10T00:00:00 |
https://aclanthology.org/C18-1288
|
https://aclanthology.org/C18-1288.pdf
|
all-in-one-multi-task-learning-for-rumour-1
| null |
[] |
https://paperswithcode.com/paper/light-field-super-resolution-through
|
1709.09422
| null | null |
Light field super resolution through controlled micro-shifts of light field sensor
|
Light field cameras enable new capabilities, such as post-capture refocusing
and aperture control, through capturing directional and spatial distribution of
light rays in space. Micro-lens array based light field camera design is often
preferred due to its light transmission efficiency, cost-effectiveness and
compactness. One drawback of the micro-lens array based light field cameras is
low spatial resolution due to the fact that a single sensor is shared to
capture both spatial and angular information. To address the low spatial
resolution issue, we present a light field imaging approach, where multiple
light fields are captured and fused to improve the spatial resolution. For each
capture, the light field sensor is shifted by a pre-determined fraction of a
micro-lens size using an XY translation stage for optimal performance.
| null |
http://arxiv.org/abs/1709.09422v2
|
http://arxiv.org/pdf/1709.09422v2.pdf
| null |
[
"M. Umair Mukati",
"Bahadir K. Gunturk"
] |
[
"Super-Resolution",
"Translation"
] | 2017-09-27T00:00:00 | null | null | null | null |
[] |
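A toy sketch of the shift-and-fuse idea above: $k \times k$ captures, each taken after shifting the sensor by $1/k$ of the micro-lens pitch, interleave onto a $k$-times finer grid. This deliberately ignores the light-field structure, registration, and any restoration step that the actual pipeline would need.

```python
import numpy as np

def fuse_shifted_captures(captures, k):
    """Interleave k*k sub-pixel-shifted low-resolution captures onto a
    k-times finer grid (the simplest possible fusion)."""
    h, w = captures[0].shape
    high = np.zeros((h * k, w * k))
    for idx, img in enumerate(captures):
        dy, dx = divmod(idx, k)   # which sub-pixel phase this capture samples
        high[dy::k, dx::k] = img
    return high

k = 2
captures = [np.random.rand(50, 50) for _ in range(k * k)]
print(fuse_shifted_captures(captures, k).shape)  # (100, 100)
```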
https://paperswithcode.com/paper/deep-reinforcement-learning-for-chinese-zero
|
1806.03711
| null | null |
Deep Reinforcement Learning for Chinese Zero pronoun Resolution
|
Deep neural network models for Chinese zero pronoun resolution learn semantic
information for zero pronoun and candidate antecedents, but tend to be
short-sighted---they often make local decisions. They typically predict
coreference chains between the zero pronoun and one single candidate antecedent
one link at a time, while overlooking their long-term influence on future
decisions. Ideally, modeling useful information of preceding potential
antecedents is critical when later predicting zero pronoun-candidate antecedent
pairs. In this study, we show how to integrate local and global decision-making
by exploiting deep reinforcement learning models. With the help of the
reinforcement learning agent, our model learns the policy of selecting
antecedents in a sequential manner, where useful information provided by
earlier predicted antecedents could be utilized for making later coreference
decisions. Experimental results on OntoNotes 5.0 dataset show that our
technique surpasses the state-of-the-art models.
|
In this study, we show how to integrate local and global decision-making by exploiting deep reinforcement learning models.
|
http://arxiv.org/abs/1806.03711v2
|
http://arxiv.org/pdf/1806.03711v2.pdf
|
ACL 2018 7
|
[
"Qingyu Yin",
"Yu Zhang",
"Wei-Nan Zhang",
"Ting Liu",
"William Yang Wang"
] |
[
"Chinese Zero Pronoun Resolution",
"Decision Making",
"Deep Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-10T00:00:00 |
https://aclanthology.org/P18-1053
|
https://aclanthology.org/P18-1053.pdf
|
deep-reinforcement-learning-for-chinese-zero-1
| null |
[] |
https://paperswithcode.com/paper/unsupervised-video-to-video-translation
|
1806.03698
| null |
SkgKzh0cY7
|
Unsupervised Video-to-Video Translation
|
Unsupervised image-to-image translation is a recently proposed task of
translating an image to a different style or domain given only unpaired image
examples at training time. In this paper, we formulate a new task of
unsupervised video-to-video translation, which poses its own unique challenges.
Translating video implies learning not only the appearance of objects and
scenes but also realistic motion and transitions between consecutive frames. We
investigate the performance of per-frame video-to-video translation using
existing image-to-image translation networks, and propose a spatio-temporal 3D
translator as an alternative solution to this problem. We evaluate our 3D
method on multiple synthetic datasets, such as moving colorized digits, as well
as the realistic segmentation-to-video GTA dataset and a new CT-to-MRI
volumetric images translation dataset. Our results show that frame-wise
translation produces realistic results on a single frame level but
underperforms significantly on the scale of the whole video compared to our
three-dimensional translation approach, which is better able to learn the
complex structure of video and motion and continuity of object appearance.
|
Unsupervised image-to-image translation is a recently proposed task of translating an image to a different style or domain given only unpaired image examples at training time.
|
http://arxiv.org/abs/1806.03698v1
|
http://arxiv.org/pdf/1806.03698v1.pdf
|
ICLR 2019 5
|
[
"Dina Bashkirova",
"Ben Usman",
"Kate Saenko"
] |
[
"Image-to-Image Translation",
"Translation",
"Unsupervised Image-To-Image Translation"
] | 2018-06-10T00:00:00 |
https://openreview.net/forum?id=SkgKzh0cY7
|
https://openreview.net/pdf?id=SkgKzh0cY7
|
unsupervised-video-to-video-translation-1
| null |
[] |
https://paperswithcode.com/paper/attention-based-guided-structured-sparsity-of
|
1802.09902
| null | null |
Attention-Based Guided Structured Sparsity of Deep Neural Networks
|
Network pruning is aimed at imposing sparsity in a neural network
architecture by increasing the portion of zero-valued weights for reducing its
size regarding energy-efficiency consideration and increasing evaluation speed.
In most of the conducted research efforts, the sparsity is enforced for network
pruning without any attention to the internal network characteristics such as
unbalanced outputs of the neurons or more specifically the distribution of the
weights and outputs of the neurons. That may cause severe accuracy drop due to
uncontrolled sparsity. In this work, we propose an attention mechanism that
simultaneously controls the sparsity intensity and supervised network pruning
by keeping important information bottlenecks of the network to be active. On
CIFAR-10, the proposed method outperforms the best baseline method by 6% and
reduces the accuracy drop by 2.6x at the same level of sparsity.
|
Network pruning is aimed at imposing sparsity in a neural network architecture by increasing the portion of zero-valued weights for reducing its size regarding energy-efficiency consideration and increasing evaluation speed.
|
http://arxiv.org/abs/1802.09902v4
|
http://arxiv.org/pdf/1802.09902v4.pdf
| null |
[
"Amirsina Torfi",
"Rouzbeh A. Shirvani",
"Sobhan Soleymani",
"Nasser M. Nasrabadi"
] |
[
"Network Pruning"
] | 2018-02-13T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
}
] |
https://paperswithcode.com/paper/continuous-time-visual-inertial-odometry-for
|
1702.07389
| null | null |
Continuous-Time Visual-Inertial Odometry for Event Cameras
|
Event cameras are bio-inspired vision sensors that output pixel-level
brightness changes instead of standard intensity frames. They offer significant
advantages over standard cameras, namely a very high dynamic range, no motion
blur, and a latency in the order of microseconds. However, due to the
fundamentally different structure of the sensor's output, new algorithms that
exploit the high temporal resolution and the asynchronous nature of the sensor
are required. Recent work has shown that a continuous-time representation of
the event camera pose can deal with the high temporal resolution and
asynchronous nature of this sensor in a principled way. In this paper, we
leverage such a continuous-time representation to perform visual-inertial
odometry with an event camera. This representation allows direct integration of
the asynchronous events with micro-second accuracy and the inertial
measurements at high frequency. The event camera trajectory is approximated by
a smooth curve in the space of rigid-body motions using cubic splines. This
formulation significantly reduces the number of variables in trajectory
estimation problems. We evaluate our method on real data from several scenes
and compare the results against ground truth from a motion-capture system. We
show that our method provides improved accuracy over the result of a
state-of-the-art visual odometry method for event cameras. We also show that
both the map orientation and scale can be recovered accurately by fusing events
and inertial data. To the best of our knowledge, this is the first work on
visual-inertial fusion with event cameras using a continuous-time framework.
| null |
http://arxiv.org/abs/1702.07389v2
|
http://arxiv.org/pdf/1702.07389v2.pdf
| null |
[
"Elias Mueggler",
"Guillermo Gallego",
"Henri Rebecq",
"Davide Scaramuzza"
] |
[
"Visual Odometry"
] | 2017-02-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/segmentation-of-arterial-walls-in
|
1806.03695
| null | null |
Segmentation of Arterial Walls in Intravascular Ultrasound Cross-Sectional Images Using Extremal Region Selection
|
Intravascular Ultrasound (IVUS) is an intra-operative imaging modality that
facilitates observing and appraising the vessel wall structure of the human
coronary arteries. Segmentation of arterial wall boundaries from the IVUS
images is not only crucial for quantitative analysis of the vessel walls and
plaque characteristics, but is also necessary for generating 3D reconstructed
models of the artery. The aim of this study is twofold. Firstly, we investigate
the feasibility of using a recently proposed region detector, namely Extremal
Region of Extremum Level (EREL) to delineate the luminal and media-adventitia
borders in IVUS frames acquired by 20 MHz probes. Secondly, we propose a region
selection strategy to label two ERELs as lumen and media based on the stability
of their textural information. We extensively evaluated our selection strategy
on the test set of a standard publicly available dataset containing 326 IVUS
B-mode images. We showed that in the best case, the average Hausdorff Distances
(HD) between the extracted ERELs and the actual lumen and media were $0.22$ mm
and $0.45$ mm, respectively. The results of our experiments revealed that our
selection strategy was able to segment the lumen with $\le 0.3$ mm HD to the
gold standard even though the images contained major artifacts such as
bifurcations, shadows, and side branches. Moreover, when there was no artifact,
our proposed method was able to delineate media-adventitia boundaries with
$0.31$ mm HD to the gold standard. Furthermore, our proposed segmentation
method runs in time that is linear in the number of pixels in each frame. Based
on the results of this work, by using a 20 MHz IVUS probe with controlled
pullback, not only can we now analyze the internal structure of human arteries
more accurately, but also segment each frame during the pullback procedure
because of the low run time of our proposed segmentation method.
| null |
http://arxiv.org/abs/1806.03695v1
|
http://arxiv.org/pdf/1806.03695v1.pdf
| null |
[
"Mehdi Faraji",
"Irene Cheng",
"Iris Naudin",
"Anup Basu"
] |
[] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deconvolution-based-global-decoding-for
|
1806.03692
| null | null |
Deconvolution-Based Global Decoding for Neural Machine Translation
|
A great proportion of sequence-to-sequence (Seq2Seq) models for Neural
Machine Translation (NMT) adopt Recurrent Neural Network (RNN) to generate
translation word by word following a sequential order. As studies in
linguistics have shown that language is not a linear word sequence but a
sequence with complex structure, translation at each step should be conditioned
on the whole target-side context. To tackle the problem, we propose a new NMT model
that decodes the sequence with the guidance of its structural prediction of the
context of the target sequence. Our model generates translation based on the
structural prediction of the target-side context so that the translation can be
freed from the bind of sequential order. Experimental results demonstrate that
our model is more competitive compared with the state-of-the-art methods, and
the analysis reflects that our model is also robust when translating sentences
of different lengths, and that it reduces repetition with guidance from the
target-side context for decoding.
|
A great proportion of sequence-to-sequence (Seq2Seq) models for Neural Machine Translation (NMT) adopt Recurrent Neural Network (RNN) to generate translation word by word following a sequential order.
|
http://arxiv.org/abs/1806.03692v1
|
http://arxiv.org/pdf/1806.03692v1.pdf
|
COLING 2018 8
|
[
"Junyang Lin",
"Xu sun",
"Xuancheng Ren",
"Shuming Ma",
"Jinsong Su",
"Qi Su"
] |
[
"Machine Translation",
"NMT",
"Translation"
] | 2018-06-10T00:00:00 |
https://aclanthology.org/C18-1276
|
https://aclanthology.org/C18-1276.pdf
|
deconvolution-based-global-decoding-for-1
| null |
[] |
https://paperswithcode.com/paper/lexnlp-natural-language-processing-and
|
1806.03688
| null | null |
LexNLP: Natural language processing and information extraction for legal and regulatory texts
|
LexNLP is an open source Python package focused on natural language
processing and machine learning for legal and regulatory text. The package
includes functionality to (i) segment documents, (ii) identify key text such as
titles and section headings, (iii) extract over eighteen types of structured
information like distances and dates, (iv) extract named entities such as
companies and geopolitical entities, (v) transform text into features for model
training, and (vi) build unsupervised and supervised models such as word
embedding or tagging models. LexNLP includes pre-trained models based on
thousands of unit tests drawn from real documents available from the SEC EDGAR
database as well as various judicial and regulatory proceedings. LexNLP is
designed for use in both academic research and industrial applications, and is
distributed at https://github.com/LexPredict/lexpredict-lexnlp.
|
LexNLP is an open source Python package focused on natural language processing and machine learning for legal and regulatory text.
|
http://arxiv.org/abs/1806.03688v1
|
http://arxiv.org/pdf/1806.03688v1.pdf
| null |
[
"Michael J Bommarito II",
"Daniel Martin Katz",
"Eric M Detterman"
] |
[] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/embedding-words-as-distributions-with-a
|
1711.11027
| null | null |
Embedding Words as Distributions with a Bayesian Skip-gram Model
|
We introduce a method for embedding words as probability densities in a
low-dimensional space. Rather than assuming that a word embedding is fixed
across the entire text collection, as in standard word embedding methods, in
our Bayesian model we generate it from a word-specific prior density for each
occurrence of a given word. Intuitively, for each word, the prior density
encodes the distribution of its potential 'meanings'. These prior densities are
conceptually similar to Gaussian embeddings. Interestingly, unlike the Gaussian
embeddings, we can also obtain context-specific densities: they encode
uncertainty about the sense of a word given its context and correspond to
posterior distributions within our model. The context-dependent densities have
many potential applications: for example, we show that they can be directly
used in the lexical substitution task. We describe an effective estimation
method based on the variational autoencoding framework. We also demonstrate
that our embeddings achieve competitive results on standard benchmarks.
|
Rather than assuming that a word embedding is fixed across the entire text collection, as in standard word embedding methods, in our Bayesian model we generate it from a word-specific prior density for each occurrence of a given word.
|
http://arxiv.org/abs/1711.11027v2
|
http://arxiv.org/pdf/1711.11027v2.pdf
|
COLING 2018 8
|
[
"Arthur Bražinskas",
"Serhii Havrylov",
"Ivan Titov"
] |
[] | 2017-11-29T00:00:00 |
https://aclanthology.org/C18-1151
|
https://aclanthology.org/C18-1151.pdf
|
embedding-words-as-distributions-with-a-2
| null |
[] |
https://paperswithcode.com/paper/dissipativity-theory-for-accelerating
|
1806.03677
| null | null |
Dissipativity Theory for Accelerating Stochastic Variance Reduction: A Unified Analysis of SVRG and Katyusha Using Semidefinite Programs
|
Techniques for reducing the variance of gradient estimates used in stochastic
programming algorithms for convex finite-sum problems have received a great
deal of attention in recent years. By leveraging dissipativity theory from
control, we provide a new perspective on two important variance-reduction
algorithms: SVRG and its direct accelerated variant Katyusha. Our perspective
provides a physically intuitive understanding of the behavior of SVRG-like
methods via a principle of energy conservation. The tools discussed here allow
us to automate the convergence analysis of SVRG-like methods by capturing their
essential properties in small semidefinite programs amenable to standard
analysis and computational techniques. Our approach recovers existing
convergence results for SVRG and Katyusha and generalizes the theory to
alternative parameter choices. We also discuss how our approach complements the
linear coupling technique. Our combination of perspectives leads to a better
understanding of accelerated variance-reduced stochastic methods for finite-sum
problems.
| null |
http://arxiv.org/abs/1806.03677v1
|
http://arxiv.org/pdf/1806.03677v1.pdf
|
ICML 2018 7
|
[
"Bin Hu",
"Stephen Wright",
"Laurent Lessard"
] |
[] | 2018-06-10T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2471
|
http://proceedings.mlr.press/v80/hu18b/hu18b.pdf
|
dissipativity-theory-for-accelerating-1
| null |
[] |
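For reference, the SVRG iteration whose convergence the paper above automates via semidefinite programs (Katyusha adds a Nesterov-style momentum term on top); the least-squares finite sum is a toy example:

```python
import numpy as np

def svrg(grad_i, w0, n, eta=0.01, epochs=20, m=None, seed=0):
    """Plain SVRG: each epoch takes a full-gradient snapshot mu at w_snap, then
    runs m inner steps with the variance-reduced estimate
    grad_i(w) - grad_i(w_snap) + mu."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    m = m or 2 * n
    for _ in range(epochs):
        w_snap = w.copy()
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            w -= eta * (grad_i(w, i) - grad_i(w_snap, i) + mu)
    return w

# Toy finite sum: least squares, f_i(w) = 0.5 * (a_i . w - b_i)^2.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
w = svrg(lambda w, i: (A[i] @ w - b[i]) * A[i], np.zeros(5), n=100)
```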
https://paperswithcode.com/paper/on-the-covariance-hessian-relation-in
|
1806.03674
| null | null |
On the Covariance-Hessian Relation in Evolution Strategies
|
We consider Evolution Strategies operating only with isotropic Gaussian mutations on positive quadratic objective functions, and investigate the covariance matrix when constructed out of selected individuals by truncation. We prove that the covariance matrix over $(1,\lambda)$-selected decision vectors becomes proportional to the inverse of the landscape Hessian as the population-size $\lambda$ increases. This generalizes a previous result that proved an equivalent phenomenon when sampling was assumed to take place in the vicinity of the optimum. It further confirms the classical hypothesis that statistical learning of the landscape is an inherent characteristic of standard Evolution Strategies, and that this distinguishing capability stems only from the usage of isotropic Gaussian mutations and rank-based selection. We provide broad numerical validation for the proven results, and present empirical evidence for its generalization to $(\mu,\lambda)$-selection.
|
We consider Evolution Strategies operating only with isotropic Gaussian mutations on positive quadratic objective functions, and investigate the covariance matrix when constructed out of selected individuals by truncation.
|
https://arxiv.org/abs/1806.03674v2
|
https://arxiv.org/pdf/1806.03674v2.pdf
| null |
[
"Ofer M. Shir",
"Amir Yehudayoff"
] |
[
"Relation"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
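The proportionality claim above is easy to probe numerically; a sketch under illustrative settings (a diagonal Hessian, isotropic Gaussian mutations, and $(\mu,\lambda)$ truncation selection). If the selected-set covariance $C$ is proportional to $H^{-1}$, the coordinate-wise product of $\mathrm{diag}(C)$ and $\mathrm{diag}(H)$ should come out roughly constant:

```python
import numpy as np

# Positive quadratic objective f(x) = x^T H x with isotropic Gaussian mutations;
# keep the best mu of lambda offspring and inspect their covariance.
rng = np.random.default_rng(0)
H = np.diag([1.0, 4.0, 9.0])
lam, mu, sigma = 100000, 1000, 0.5
X = sigma * rng.standard_normal((lam, 3))
f = np.einsum('ij,jk,ik->i', X, H, X)       # x_i^T H x_i for every offspring
sel = X[np.argsort(f)[:mu]]                 # truncation selection
C = np.cov(sel.T)
print(np.diag(C) * np.diag(H))              # roughly constant if C is prop. to inv(H)
```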
https://paperswithcode.com/paper/global-encoding-for-abstractive-summarization
|
1805.03989
| null | null |
Global Encoding for Abstractive Summarization
|
In neural abstractive summarization, the conventional sequence-to-sequence
(seq2seq) model often suffers from repetition and semantic irrelevance. To
tackle the problem, we propose a global encoding framework, which controls the
information flow from the encoder to the decoder based on the global
information of the source context. It consists of a convolutional gated unit to
perform global encoding to improve the representations of the source-side
information. Evaluations on the LCSTS and the English Gigaword both demonstrate
that our model outperforms the baseline models, and the analysis shows that our
model is capable of reducing repetition.
|
To tackle the problem, we propose a global encoding framework, which controls the information flow from the encoder to the decoder based on the global information of the source context.
|
http://arxiv.org/abs/1805.03989v2
|
http://arxiv.org/pdf/1805.03989v2.pdf
|
ACL 2018 7
|
[
"Junyang Lin",
"Xu sun",
"Shuming Ma",
"Qi Su"
] |
[
"Abstractive Text Summarization",
"Decoder"
] | 2018-05-10T00:00:00 |
https://aclanthology.org/P18-2027
|
https://aclanthology.org/P18-2027.pdf
|
global-encoding-for-abstractive-summarization-1
| null |
[] |
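A sketch of the convolutional gated unit idea described above: a convolution over the encoder outputs produces a sigmoid gate that filters each source annotation by its global context. Kernel size, the exact gating form, and all shapes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConvGatedUnit(nn.Module):
    """Gate each encoder state by a convolution over its neighbourhood, so the
    information flowing to the decoder reflects the global source context."""
    def __init__(self, d, kernel=3):
        super().__init__()
        self.conv = nn.Conv1d(d, d, kernel, padding=kernel // 2)

    def forward(self, h):                  # h: (batch, seq_len, d) encoder states
        g = torch.sigmoid(self.conv(h.transpose(1, 2))).transpose(1, 2)
        return h * g

h = torch.randn(8, 20, 512)
print(ConvGatedUnit(512)(h).shape)         # torch.Size([8, 20, 512])
```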
https://paperswithcode.com/paper/the-impact-of-humanoid-affect-expression-on
|
1806.03671
| null | null |
The Impact of Humanoid Affect Expression on Human Behavior in a Game-Theoretic Setting
|
With the rapid development of robots and other intelligent and autonomous
agents, how a human could be influenced by a robot's expressed mood when making
decisions becomes a crucial question in human-robot interaction. In this pilot
study, we investigate (1) in what way a robot can express a certain mood to
influence a human's decision making behavioral model; (2) how and to what
extent the human will be influenced in a game theoretic setting. More
specifically, we create an NLP model to generate sentences that adhere to a
specific affective expression profile. We use these sentences for a humanoid
robot as it plays a Stackelberg security game against a human. We investigate
the behavioral model of the human player.
|
In this pilot study, we investigate (1) in what way a robot can express a certain mood to influence a human's decision making behavioral model; (2) how and to what extent the human will be influenced in a game theoretic setting.
|
http://arxiv.org/abs/1806.03671v1
|
http://arxiv.org/pdf/1806.03671v1.pdf
| null |
[
"Aaron M. Roth",
"Umang Bhatt",
"Tamara Amin",
"Afsaneh Doryab",
"Fei Fang",
"Manuela Veloso"
] |
[
"Decision Making"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/neural-architecture-search-with-bayesian
|
1802.07191
| null | null |
Neural Architecture Search with Bayesian Optimisation and Optimal Transport
|
Bayesian Optimisation (BO) refers to a class of methods for global
optimisation of a function $f$ which is only accessible via point evaluations.
It is typically used in settings where $f$ is expensive to evaluate. A common
use case for BO in machine learning is model selection, where it is not
possible to analytically model the generalisation performance of a statistical
model, and we resort to noisy and expensive training and validation procedures
to choose the best model. Conventional BO methods have focused on Euclidean and
categorical domains, which, in the context of model selection, only permits
tuning scalar hyper-parameters of machine learning algorithms. However, with
the surge of interest in deep learning, there is an increasing demand to tune
neural network \emph{architectures}. In this work, we develop NASBOT, a
Gaussian process based BO framework for neural architecture search. To
accomplish this, we develop a distance metric in the space of neural network
architectures which can be computed efficiently via an optimal transport
program. This distance might be of independent interest to the deep learning
community as it may find applications outside of BO. We demonstrate that NASBOT
outperforms other alternatives for architecture search in several cross
validation based model selection tasks on multi-layer perceptrons and
convolutional neural networks.
|
A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model.
|
http://arxiv.org/abs/1802.07191v3
|
http://arxiv.org/pdf/1802.07191v3.pdf
|
NeurIPS 2018 12
|
[
"Kirthevasan Kandasamy",
"Willie Neiswanger",
"Jeff Schneider",
"Barnabas Poczos",
"Eric Xing"
] |
[
"Bayesian Optimisation",
"BIG-bench Machine Learning",
"Model Selection",
"Neural Architecture Search"
] | 2018-02-11T00:00:00 |
http://papers.nips.cc/paper/7472-neural-architecture-search-with-bayesian-optimisation-and-optimal-transport
|
http://papers.nips.cc/paper/7472-neural-architecture-search-with-bayesian-optimisation-and-optimal-transport.pdf
|
neural-architecture-search-with-bayesian-1
| null |
[] |
https://paperswithcode.com/paper/centrality-measures-for-graphons-accounting
|
1707.09350
| null | null |
Centrality measures for graphons: Accounting for uncertainty in networks
|
As relational datasets modeled as graphs keep increasing in size and their
data-acquisition is permeated by uncertainty, graph-based analysis techniques
can become computationally and conceptually challenging. In particular, node
centrality measures rely on the assumption that the graph is perfectly known --
a premise not necessarily fulfilled for large, uncertain networks. Accordingly,
centrality measures may fail to faithfully extract the importance of nodes in
the presence of uncertainty. To mitigate these problems, we suggest a
statistical approach based on graphon theory: we introduce formal definitions
of centrality measures for graphons and establish their connections to
classical graph centrality measures. A key advantage of this approach is that
centrality measures defined at the modeling level of graphons are inherently
robust to stochastic variations of specific graph realizations. Using the
theory of linear integral operators, we define degree, eigenvector, Katz and
PageRank centrality functions for graphons and establish concentration
inequalities demonstrating that graphon centrality functions arise naturally as
limits of their counterparts defined on sequences of graphs of increasing size.
The same concentration inequalities also provide high-probability bounds
between the graphon centrality functions and the centrality measures on any
sampled graph, thereby establishing a measure of uncertainty of the measured
centrality score.
| null |
http://arxiv.org/abs/1707.09350v4
|
http://arxiv.org/pdf/1707.09350v4.pdf
| null |
[
"Marco Avella-Medina",
"Francesca Parise",
"Michael T. Schaub",
"Santiago Segarra"
] |
[] | 2017-07-28T00:00:00 | null | null | null | null |
[] |
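For concreteness, the simplest of the centrality functions mentioned above is graphon degree centrality, as typically defined in this line of work: for a graphon $W : [0,1]^2 \to [0,1]$, $d_W(x) = \int_0^1 W(x,y)\,dy$, the continuum analogue of a node's normalized degree $d_i = \frac{1}{n}\sum_j A_{ij}$. The eigenvector, Katz, and PageRank variants replace the adjacency matrix with the linear integral operator $(\mathbb{W}f)(x) = \int_0^1 W(x,y) f(y)\,dy$.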
https://paperswithcode.com/paper/towards-understanding-acceleration-tradeoff
|
1806.01660
| null | null |
Towards Understanding Acceleration Tradeoff between Momentum and Asynchrony in Nonconvex Stochastic Optimization
|
Asynchronous momentum stochastic gradient descent (Async-MSGD) is one of the most popular algorithms in distributed machine learning. However, its convergence properties for complicated nonconvex problems are still largely unknown, because of current technical limits. Therefore, in this paper, we propose to analyze the algorithm through a simpler but nontrivial nonconvex problem - streaming PCA, which helps us to understand Async-MSGD better even for more general problems. Specifically, we establish the asymptotic rate of convergence of Async-MSGD for streaming PCA by diffusion approximation. Our results indicate a fundamental tradeoff between asynchrony and momentum: to ensure convergence and acceleration through asynchrony, we have to reduce the momentum (compared with Sync-MSGD). To the best of our knowledge, this is the first theoretical attempt at understanding Async-MSGD for distributed nonconvex stochastic optimization. Numerical experiments on both streaming PCA and training deep neural networks are provided to support our findings for Async-MSGD.
| null |
https://arxiv.org/abs/1806.01660v6
|
https://arxiv.org/pdf/1806.01660v6.pdf
|
NeurIPS 2018 12
|
[
"Tianyi Liu",
"Shiyang Li",
"Jianping Shi",
"Enlu Zhou",
"Tuo Zhao"
] |
[
"Stochastic Optimization"
] | 2018-06-04T00:00:00 |
http://papers.nips.cc/paper/7626-towards-understanding-acceleration-tradeoff-between-momentum-and-asynchrony-in-nonconvex-stochastic-optimization
|
http://papers.nips.cc/paper/7626-towards-understanding-acceleration-tradeoff-between-momentum-and-asynchrony-in-nonconvex-stochastic-optimization.pdf
|
towards-understanding-acceleration-tradeoff-1
| null |
[
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
}
] |
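A sketch of the synchronous momentum-SGD baseline for the streaming-PCA problem analyzed above (Oja-style updates; the asynchrony/staleness that the paper studies is deliberately not modeled here, and step sizes are illustrative):

```python
import numpy as np

def msgd_streaming_pca(stream, d, eta=0.01, momentum=0.9, seed=0):
    """Momentum SGD for streaming PCA: each sample x contributes the stochastic
    gradient (x . w) x of the Rayleigh quotient, and w is renormalised onto the
    unit sphere after every step. Async-MSGD would apply the same updates with
    stale (delayed) iterates."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    v = np.zeros(d)
    for x in stream:
        v = momentum * v + (x @ w) * x   # momentum buffer of stochastic gradients
        w += eta * v
        w /= np.linalg.norm(w)
    return w

# Synthetic stream whose top principal direction is the first coordinate axis.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 10)) * np.array([3.0] + [1.0] * 9)
print(np.abs(msgd_streaming_pca(X, 10)[0]))  # close to 1
```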
https://paperswithcode.com/paper/conditional-noise-contrastive-estimation-of
|
1806.03664
| null | null |
Conditional Noise-Contrastive Estimation of Unnormalised Models
|
Many parametric statistical models are not properly normalised and only
specified up to an intractable partition function, which renders parameter
estimation difficult. Examples of unnormalised models are Gibbs distributions,
Markov random fields, and neural network models in unsupervised deep learning.
In previous work, the estimation principle called noise-contrastive estimation
(NCE) was introduced where unnormalised models are estimated by learning to
distinguish between data and auxiliary noise. An open question is how to best
choose the auxiliary noise distribution. We here propose a new method that
addresses this issue. The proposed method shares with NCE the idea of
formulating density estimation as a supervised learning problem but in contrast
to NCE, the proposed method leverages the observed data when generating noise
samples. The noise can thus be generated in a semi-automated manner. We first
present the underlying theory of the new method, show that score matching
emerges as a limiting case, validate the method on continuous and discrete
valued synthetic data, and show that we can expect an improved performance
compared to NCE when the data lie in a lower-dimensional manifold. Then we
demonstrate its applicability in unsupervised deep learning by estimating a
four-layer neural image model.
| null |
http://arxiv.org/abs/1806.03664v1
|
http://arxiv.org/pdf/1806.03664v1.pdf
|
ICML 2018 7
|
[
"Ciwan Ceylan",
"Michael U. Gutmann"
] |
[
"Density Estimation",
"Open-Ended Question Answering",
"parameter estimation"
] | 2018-06-10T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2158
|
http://proceedings.mlr.press/v80/ceylan18a/ceylan18a.pdf
|
conditional-noise-contrastive-estimation-of-1
| null |
[] |
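For reference, the classical NCE objective that the conditional method above builds on (standard form from Gutmann & Hyvärinen; $\nu$ is the noise-to-data ratio):

$J(\theta) = \mathbb{E}_{x \sim p_d}\left[\log \frac{p_\theta(x)}{p_\theta(x) + \nu p_n(x)}\right] + \nu\, \mathbb{E}_{y \sim p_n}\left[\log \frac{\nu p_n(y)}{p_\theta(y) + \nu p_n(y)}\right]$

Here $p_\theta$ is the unnormalised model, with the log-partition function treated as an extra trainable parameter. The conditional variant proposed above replaces the fixed noise distribution $p_n(y)$ with a data-dependent $p_n(y \mid x)$, so that noise is generated from the observed samples themselves.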
https://paperswithcode.com/paper/smart-novel-computer-based-analytical-tool
|
1806.04576
| null | null |
Smart Novel Computer-based Analytical Tool for Image Forgery Authentication
|
This paper presents an integration of image forgery detection with facial
image recognition using a back propagation neural network (BPNN). We observed
that facial image recognition by itself will always return a matching or
closest possible output image for every input image, irrespective of the
authenticity of the test input image. Based on this, we propose combining
blind but powerful automated image forgery detection over entire input images
with the BPNN recognition program. Hence, an input image must first be
authenticated before being fed into the recognition program: to meet image
security identification and authentication requirements, any image that fails
the authentication/verification stage is not to be used as an input/test
image. In addition, a universal smart GUI tool is proposed and designed to
perform image forgery detection with high accuracy (a 2% error rate).
| null |
http://arxiv.org/abs/1806.04576v1
|
http://arxiv.org/pdf/1806.04576v1.pdf
| null |
[
"Rozita Teymourzadeh",
"Amirrize Alpha",
"VH Mok"
] |
[
"Image Forgery Detection"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/incremental-decoding-and-training-methods-for
|
1806.03661
| null | null |
Incremental Decoding and Training Methods for Simultaneous Translation in Neural Machine Translation
|
We address the problem of simultaneous translation by modifying the Neural MT
decoder to operate with dynamically built encoder and attention. We propose a
tunable agent which decides the best segmentation strategy for a user-defined
BLEU loss and Average Proportion (AP) constraint. Our agent outperforms
previously proposed Wait-if-diff and Wait-if-worse agents (Cho and Esipova,
2016) on BLEU with a lower latency. Secondly, we propose data-driven changes to
Neural MT training to better match the incremental decoding framework.
| null |
http://arxiv.org/abs/1806.03661v1
|
http://arxiv.org/pdf/1806.03661v1.pdf
|
NAACL 2018 6
|
[
"Fahim Dalvi",
"Nadir Durrani",
"Hassan Sajjad",
"Stephan Vogel"
] |
[
"Decoder",
"Machine Translation",
"Translation"
] | 2018-06-10T00:00:00 |
https://aclanthology.org/N18-2079
|
https://aclanthology.org/N18-2079.pdf
|
incremental-decoding-and-training-methods-for-1
| null |
[] |
https://paperswithcode.com/paper/a-generic-deep-architecture-for-single-image
|
1708.03474
| null | null |
A Generic Deep Architecture for Single Image Reflection Removal and Image Smoothing
|
This paper proposes a deep neural network structure that exploits edge
information in addressing representative low-level vision tasks such as layer
separation and image filtering. Unlike most other deep learning strategies
applied in this context, our approach tackles these challenging problems by
estimating edges and reconstructing images using only cascaded convolutional
layers arranged such that no handcrafted or application-specific
image-processing components are required. We apply the resulting transferrable
pipeline to two different problem domains that are both sensitive to edges,
namely, single image reflection removal and image smoothing. For the former,
using a mild reflection smoothness assumption and a novel synthetic data
generation method that acts as a type of weak supervision, our network is able
to solve much more difficult reflection cases that cannot be handled by
previous methods. For the latter, we also exceed the state-of-the-art
quantitative and qualitative results by wide margins. In all cases, the
proposed framework is simple, fast, and easy to transfer across disparate
domains.
|
This paper proposes a deep neural network structure that exploits edge information in addressing representative low-level vision tasks such as layer separation and image filtering.
|
http://arxiv.org/abs/1708.03474v2
|
http://arxiv.org/pdf/1708.03474v2.pdf
|
ICCV 2017 10
|
[
"Qingnan Fan",
"Jiaolong Yang",
"Gang Hua",
"Baoquan Chen",
"David Wipf"
] |
[
"image smoothing",
"Reflection Removal",
"Synthetic Data Generation"
] | 2017-08-11T00:00:00 |
http://openaccess.thecvf.com/content_iccv_2017/html/Fan_A_Generic_Deep_ICCV_2017_paper.html
|
http://openaccess.thecvf.com/content_ICCV_2017/papers/Fan_A_Generic_Deep_ICCV_2017_paper.pdf
|
a-generic-deep-architecture-for-single-image-1
| null |
[] |
https://paperswithcode.com/paper/scidtb-discourse-dependency-treebank-for
|
1806.03653
| null | null |
SciDTB: Discourse Dependency TreeBank for Scientific Abstracts
|
An annotated corpus for discourse relations benefits NLP tasks such as machine
translation and question answering. In this paper, we present SciDTB, a
domain-specific discourse treebank annotated on scientific articles. Different
from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent
discourse structure, which is flexible and simplified to some extent but does
not sacrifice structural integrity. We discuss the labeling framework, annotation
workflow and some statistics about SciDTB. Furthermore, our treebank is made as
a benchmark for evaluating discourse dependency parsers, on which we provide
several baselines as fundamental work.
|
An annotated corpus for discourse relations benefits NLP tasks such as machine translation and question answering.
|
http://arxiv.org/abs/1806.03653v1
|
http://arxiv.org/pdf/1806.03653v1.pdf
|
ACL 2018 7
|
[
"An Yang",
"Sujian Li"
] |
[
"Articles",
"Machine Translation",
"Question Answering",
"Translation"
] | 2018-06-10T00:00:00 |
https://aclanthology.org/P18-2071
|
https://aclanthology.org/P18-2071.pdf
|
scidtb-discourse-dependency-treebank-for-1
| null |
[] |
https://paperswithcode.com/paper/deep-learning-estimation-of-absorbed-dose-for
|
1805.09108
| null | null |
Deep Learning Estimation of Absorbed Dose for Nuclear Medicine Diagnostics
|
The distribution of energy dose from Lu$^{177}$ radiotherapy can be estimated by convolving an image of a time-integrated activity distribution with a dose voxel kernel (DVK) consisting of different types of tissues. This fast but inaccurate approximation is inappropriate for personalized dosimetry as it neglects tissue heterogeneity. The latter can be calculated using different imaging techniques such as CT and SPECT combined with a time-consuming Monte Carlo simulation. The aim of this study is, for the first time, an estimation of DVKs from CT-derived density kernels (DK) via deep learning in convolutional neural networks (CNNs). The proposed CNN achieved, on the test set, a mean intersection over union (IOU) of $0.86$ after $308$ epochs and a corresponding mean squared error (MSE) of $1.24 \cdot 10^{-4}$. This generalization ability shows that the trained CNN can indeed learn the difficult transfer function from DK to DVK. Future work will evaluate DVKs estimated by CNNs with full Monte Carlo simulations of a whole body CT to predict patient specific voxel dose maps.
|
The distribution of energy dose from Lu$^{177}$ radiotherapy can be estimated by convolving an image of a time-integrated activity distribution with a dose voxel kernel (DVK) consisting of different types of tissues.
|
https://arxiv.org/abs/1805.09108v9
|
https://arxiv.org/pdf/1805.09108v9.pdf
| null |
[
"Luciano Melodia"
] |
[
"Deep Learning"
] | 2018-05-23T00:00:00 | null | null | null | null |
[] |
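The fast baseline the abstract above starts from is a plain convolution of the time-integrated activity with a dose voxel kernel; a toy sketch (shapes, the random stand-in kernel, and the point source are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import convolve

# Dose ~ activity convolved with a dose voxel kernel (DVK); the paper's CNN
# instead predicts tissue-aware DVKs from CT-derived density kernels.
activity = np.zeros((32, 32, 32))
activity[16, 16, 16] = 1.0            # point source (placeholder)
dvk = np.random.rand(9, 9, 9)         # stand-in kernel, not a physical DVK
dvk /= dvk.sum()
dose = convolve(activity, dvk, mode='constant')
print(dose.shape)                     # (32, 32, 32)
```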
https://paperswithcode.com/paper/neural-disease-named-entity-extraction-with
|
1806.03648
| null | null |
Neural Disease Named Entity Extraction with Character-based BiLSTM+CRF in Japanese Medical Text
|
We propose an 'end-to-end' character-based recurrent neural network that
extracts disease named entities from a Japanese medical text and simultaneously
judges its modality as either positive or negative; i.e., the mentioned disease
or symptom is affirmed or negated. The motivation to adopt neural networks is
to learn effective lexical and structural representation features for Entity
Recognition and also for Positive/Negative classification from annotated
corpora without explicitly providing any rule-based or manual feature sets. We
confirmed the superiority of our method over previous char-based CRF or SVM
methods in the results.
| null |
http://arxiv.org/abs/1806.03648v1
|
http://arxiv.org/pdf/1806.03648v1.pdf
| null |
[
"Ken Yano"
] |
[
"Entity Extraction using GAN",
"General Classification"
] | 2018-06-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Conditional Random Fields** or **CRFs** are a type of probabilistic graph model that take neighboring sample context into account for tasks like classification. Prediction is modeled as a graphical model, which implements dependencies between the predictions. Graph choice depends on the application, for example linear chain CRFs are popular in natural language processing, whereas in image-based tasks, the graph would connect to neighboring locations in an image to enforce that they have similar predictions.\r\n\r\nImage Credit: [Charles Sutton and Andrew McCallum, An Introduction to Conditional Random Fields](https://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf)",
"full_name": "Conditional Random Field",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Structured Prediction** methods deal with structured outputs with multiple interdependent outputs. Below you can find a continuously updating list of structured prediction methods.",
"name": "Structured Prediction",
"parent": null
},
"name": "CRF",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/scalable-magnetic-field-slam-in-3d-using
|
1804.01926
| null | null |
Scalable Magnetic Field SLAM in 3D Using Gaussian Process Maps
|
We present a method for scalable and fully 3D magnetic field simultaneous
localisation and mapping (SLAM) using local anomalies in the magnetic field as
a source of position information. These anomalies are due to the presence of
ferromagnetic material in the structure of buildings and in objects such as
furniture. We represent the magnetic field map using a Gaussian process model
and take well-known physical properties of the magnetic field into account. We
build local maps using three-dimensional hexagonal block tiling. To make our
approach computationally tractable we use reduced-rank Gaussian process
regression in combination with a Rao-Blackwellised particle filter. We show
that it is possible to obtain accurate position and orientation estimates using
measurements from a smartphone, and that our approach provides a scalable
magnetic field SLAM algorithm in terms of both computational complexity and map
storage.
| null |
http://arxiv.org/abs/1804.01926v2
|
http://arxiv.org/pdf/1804.01926v2.pdf
| null |
[
"Manon Kok",
"Arno Solin"
] |
[
"Position"
] | 2018-04-05T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/deep-curiosity-loops-in-social-environments
|
1806.03645
| null | null |
Deep Curiosity Loops in Social Environments
|
Inspired by infants' intrinsic motivation to learn, which values informative
sensory channels contingent on their immediate social environment, we developed
a deep curiosity loop (DCL) architecture. The DCL is composed of a learner,
which attempts to learn a forward model of the agent's state-action transition,
and a novel reinforcement-learning (RL) component, namely, an
Action-Convolution Deep Q-Network, which uses the learner's prediction error as
reward. The environment for our agent is composed of visual social scenes
drawn from sitcom video streams; hence both the learner and the RL component
are constructed as deep convolutional neural networks. The agent's learner learns
to predict the zero-th order of the dynamics of visual scenes, resulting in
intrinsic rewards proportional to changes within its social environment. The
sources of these socially informative changes within the sitcom are
predominantly motions of faces and hands, leading to the unsupervised
curiosity-based learning of social interaction features. The face and hand
detection is represented by the value function and the social interaction
optical-flow is represented by the policy. Our results suggest that face and
hand detection are emergent properties of curiosity-based learning embedded in
social environments.
| null |
http://arxiv.org/abs/1806.03645v1
|
http://arxiv.org/pdf/1806.03645v1.pdf
| null |
[
"Jonatan Barkan",
"Goren Gordon"
] |
[
"Hand Detection",
"Optical Flow Estimation",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/transformationally-identical-and-invariant-1
|
1806.03636
| null | null |
Transformationally Identical and Invariant Convolutional Neural Networks through Symmetric Element Operators
|
Mathematically speaking, a transformationally invariant operator, such as a
transformationally identical (TI) matrix kernel (i.e., K= T{K}), commutes with
the transformation (T{.}) itself when they operate on the first operand matrix.
We found that by consistently applying the same type of TI kernels in a
convolutional neural network (CNN) system, the commutative property holds
throughout all layers of convolution processes with and without involving an
activation function and/or a 1D convolution across channels within a layer. We
further found that any CNN possessing the same TI kernel property for all
convolution layers followed by a flatten layer with weight sharing among their
transformation corresponding elements would output the same result for all
transformation versions of the original input vector. In short, CNN[ Vi ] =
CNN[ T{Vi} ] provided every K = T{K} in the CNN, where Vi denotes the input vector and
CNN[.] represents the whole CNN process as a function of input vector that
produces an output vector. With such a transformationally identical CNN
(TI-CNN) system, each transformation, that is not associated with a predefined
TI used in data augmentation, would inherently include all of its corresponding
transformation versions of the input vector for the training. Hence the use of
same TI property for every kernel in the CNN would serve as an orientation or a
translation independent training guide in conjunction with the
error-backpropagation during the training. This TI kernel property is desirable
for applications requiring a highly consistent output result from corresponding
transformation versions of an input. Several C programming routines are
provided to facilitate interested parties in using the TI-CNN technique, which
is expected to produce better generalization performance than its ordinary
CNN counterpart.
| null |
http://arxiv.org/abs/1806.03636v3
|
http://arxiv.org/pdf/1806.03636v3.pdf
| null |
[
"Shih Chung B. Lo",
"Matthew T. Freedman",
"Seong K. Mun",
"Shuo Gu"
] |
[
"Data Augmentation"
] | 2018-06-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
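The paper's commutative property can be checked numerically in the simplest case: a 'valid' cross-correlation with a kernel K that equals its own 180-degree rotation (K = T{K}). A minimal numpy sketch, not the authors' C routines:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation, as used in CNN layers."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
k_sym = np.array([[1., 2.], [2., 1.]])    # K equals its own 180-degree rotation

lhs = conv2d(np.rot90(img, 2), k_sym)     # transform the input, then convolve
rhs = np.rot90(conv2d(img, k_sym), 2)     # convolve, then transform the output
print(np.allclose(lhs, rhs))              # True: the TI kernel commutes with T
```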
https://paperswithcode.com/paper/segmentation-of-instances-by-hashing
|
1702.08160
| null | null |
Segmentation of Instances by Hashing
|
We propose a novel approach to address the Simultaneous Detection and
Segmentation problem. Using hierarchical structures, we apply an efficient and
accurate procedure that exploits the hierarchy's feature information via
Locality Sensitive Hashing. We build on recent work that utilizes convolutional
neural networks to detect bounding boxes in an image, and then use the most
similar hierarchical region that best fits each bounding box after hashing; we
call this approach CZ Segmentation. We then refine our final segmentation
results by automatic hierarchy pruning. CZ Segmentation introduces a train-free
alternative to Hypercolumns. We conduct extensive experiments on PASCAL VOC
2012 segmentation dataset, showing that CZ gives competitive state-of-the-art
object segmentations.
| null |
http://arxiv.org/abs/1702.08160v9
|
http://arxiv.org/pdf/1702.08160v9.pdf
| null |
[
"J. D. Curtó",
"I. C. Zarza",
"A. Smola",
"L. Van Gool"
] |
[
"Segmentation"
] | 2017-02-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**R-CNN**, or **Regions with CNN Features**, is an object detection model that uses high-capacity CNNs to bottom-up region proposals in order to localize and segment objects. It uses [selective search](https://paperswithcode.com/method/selective-search) to identify a number of bounding-box object region candidates (“regions of interest”), and then extracts features from each region independently for classification.",
"full_name": "R-CNN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.",
"name": "Object Detection Models",
"parent": null
},
"name": "R-CNN",
"source_title": "Rich feature hierarchies for accurate object detection and semantic segmentation",
"source_url": "http://arxiv.org/abs/1311.2524v5"
}
] |
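For reference, the max pooling operation listed above can be sketched in a few lines of numpy; the window size and stride here are illustrative defaults:

```python
import numpy as np

def max_pool2d(x, size=2, stride=2):
    """Downsample a 2-D feature map by taking the max over each patch."""
    H, W = x.shape
    out_h = (H - size) // stride + 1
    out_w = (W - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + size, j * stride:j * stride + size]
            out[i, j] = patch.max()
    return out

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(fmap))   # 2x2 pooled map: [[5., 7.], [13., 15.]]
```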
https://paperswithcode.com/paper/mckernel-a-library-for-approximate-kernel
|
1702.08159
| null | null |
McKernel: A Library for Approximate Kernel Expansions in Log-linear Time
|
Kernel Methods Next Generation (KMNG) introduces a framework to use kernel
approximations in the mini-batch setting with an SGD optimizer as an
alternative to Deep Learning. McKernel is a C++ library for large-scale KMNG
machine learning. It contains a CPU-optimized implementation of the Fastfood
algorithm that allows the computation of approximated kernel expansions in
log-linear time. The algorithm requires computing the product of Walsh-Hadamard
Transform (WHT) matrices. A cache-friendly SIMD Fast Walsh-Hadamard Transform
(FWHT) that achieves compelling speed and outperforms current state-of-the-art
methods has been developed. McKernel enables non-linear classification by
combining Fastfood and a linear classifier.
|
The algorithm requires computing the product of Walsh-Hadamard Transform (WHT) matrices.
|
http://arxiv.org/abs/1702.08159v9
|
http://arxiv.org/pdf/1702.08159v9.pdf
| null |
[
"Joachim D. Curtó",
"Irene C. Zarza",
"Feng Yang",
"Alexander J. Smola",
"Fernando de la Torre",
"Chong-Wah Ngo",
"Luc van Gool"
] |
[
"CPU",
"General Classification"
] | 2017-02-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/curto2/mckernel",
"description": "McKernel introduces a framework to use kernel approximates in the mini-batch setting with Stochastic Gradient Descent ([SGD](https://paperswithcode.com/method/sgd)) as an alternative to Deep Learning.\r\n\r\nThe core library was developed in 2014 as integral part of a thesis of Master of Science [1,2] at Carnegie Mellon and City University of Hong Kong. The original intend was to implement a speedup of Random Kitchen Sinks (Rahimi and Recht 2007) by writing a very efficient HADAMARD tranform, which was the main bottleneck of the construction. The code though was later expanded at ETH Zürich (in McKernel by Curtó et al. 2017) to propose a framework that could explain both Kernel Methods and Neural Networks. This manuscript and the corresponding theses, constitute one of the first usages (if not the first) in the literature of FOURIER features and Deep Learning; which later got a lot of research traction and interest in the community.\r\n\r\nMore information can be found in this presentation that the first author gave at ICLR 2020 [iclr2020_DeCurto](https://www.decurto.tw/c/iclr2020_DeCurto.pdf).\r\n\r\n[1] [https://www.curto.hk/c/decurto.pdf](https://www.curto.hk/c/decurto.pdf)\r\n\r\n[2] [https://www.zarza.hk/z/dezarza.pdf](https://www.zarza.hk/z/dezarza.pdf)",
"full_name": "MCKERNEL",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "MCKERNEL",
"source_title": "McKernel: A Library for Approximate Kernel Expansions in Log-linear Time",
"source_url": "http://arxiv.org/abs/1702.08159v9"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
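The log-linear cost quoted in the abstract comes from the Fast Walsh-Hadamard Transform; below is a minimal Python sketch of the iterative FWHT (the library itself ships a cache-friendly SIMD C++ version):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard Transform; len(x) must be a power of two.

    Runs in O(n log n), which is what makes Fastfood-style kernel
    expansions log-linear instead of quadratic.
    """
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # butterfly step
        h *= 2
    return x

print(fwht([1., 0., 1., 0., 0., 1., 1., 0.]))
```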
https://paperswithcode.com/paper/enhancing-convolutional-neural-networks-for
|
1707.07923
| null | null |
Enhancing Convolutional Neural Networks for Face Recognition with Occlusion Maps and Batch Triplet Loss
|
Despite the recent success of convolutional neural networks for computer
vision applications, unconstrained face recognition remains a challenge. In
this work, we make two contributions to the field. Firstly, we consider the
problem of face recognition with partial occlusions and show how current
approaches might suffer significant performance degradation when dealing with
this kind of face image. We propose a simple method to find out which parts of
the human face are more important to achieve a high recognition rate, and use
that information during training to force a convolutional neural network to
learn discriminative features from all the face regions more equally, including
those that typical approaches tend to pay less attention to. We test the
accuracy of the proposed method when dealing with real-life occlusions using
the AR face database. Secondly, we propose a novel loss function called batch
triplet loss that improves the performance of the triplet loss by adding an
extra term to the loss function to cause minimisation of the standard deviation
of both positive and negative scores. We show consistent improvement in the
Labeled Faces in the Wild (LFW) benchmark by applying both proposed adjustments
to the convolutional neural network training.
| null |
http://arxiv.org/abs/1707.07923v4
|
http://arxiv.org/pdf/1707.07923v4.pdf
| null |
[
"Daniel Sáez Trigueros",
"Li Meng",
"Margaret Hartnett"
] |
[
"Face Recognition",
"Triplet"
] | 2017-07-25T00:00:00 | null | null | null | null |
[] |
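The batch triplet loss described in the abstract can be sketched as the standard triplet term plus a penalty on the standard deviations of the positive and negative scores; the weight `beta` below is an assumed hyperparameter, not the authors' exact formulation:

```python
import numpy as np

def batch_triplet_loss(anchor, positive, negative, margin=0.2, beta=0.5):
    """Sketch of the 'batch triplet loss' idea from the abstract.

    Standard triplet term plus `beta` times the standard deviations of
    the positive and negative distances within the batch; `beta` is an
    assumed hyperparameter for illustration.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=1)   # positive scores
    d_neg = np.sum((anchor - negative) ** 2, axis=1)   # negative scores
    triplet = np.maximum(d_pos - d_neg + margin, 0.0).mean()
    spread = d_pos.std() + d_neg.std()                 # extra term from the paper
    return triplet + beta * spread

rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=(8, 16)) for _ in range(3))
print(batch_triplet_loss(a, p, n))
```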
https://paperswithcode.com/paper/voxelatlasgan-3d-left-ventricle-segmentation
|
1806.03619
| null | null |
VoxelAtlasGAN: 3D Left Ventricle Segmentation on Echocardiography with Atlas Guided Generation and Voxel-to-voxel Discrimination
|
3D left ventricle (LV) segmentation on echocardiography is very important for
diagnosis and treatment of cardiac disease. This is not only because
echocardiography is a real-time imaging technology that is widespread in
clinical application, but also because LV segmentation on 3D echocardiography
can provide fuller volume information of the heart than LV segmentation on 2D
echocardiography. However, 3D LV segmentation on echocardiography is still an
open and challenging task owing to the lower contrast, higher noise, higher
data dimensionality, and limited annotation of 3D echocardiography. In this
paper, we propose a novel real-time framework, i.e., VoxelAtlasGAN, for 3D LV
segmentation on 3D echocardiography. This framework has three contributions: 1)
It is based on voxel-to-voxel conditional generative adversarial nets (cGAN).
For the first time, cGAN is used for 3D LV segmentation on echocardiography.
And cGAN advantageously fuses substantial 3D spatial context information from
3D echocardiography by self-learning structured loss; 2) For the first time, it
embeds the atlas into an end-to-end optimization framework, which uses 3D LV
atlas as a powerful prior knowledge to improve the inference speed, address the
lower contrast and the limited annotation problems of 3D echocardiography; 3)
It combines traditional discrimination loss and the new proposed consistent
constraint, which further improves the generalization of the proposed
framework. VoxelAtlasGAN was validated on 60 subjects on 3D echocardiography
and it achieved satisfactory segmentation results and high inference speed. The
mean surface distance is 1.85 mm, the mean Hausdorff surface distance is 7.26
mm, the mean Dice is 0.953, the correlation of EF is 0.918, and the mean
inference speed is 0.1 s. These results demonstrate that our proposed method
has great potential for clinical application.
| null |
http://arxiv.org/abs/1806.03619v1
|
http://arxiv.org/pdf/1806.03619v1.pdf
| null |
[
"Suyu Dong",
"Gongning Luo",
"Kuanquan Wang",
"Shaodong Cao",
"Ashley Mercado",
"Olga Shmuilovich",
"Henggui Zhang",
"Shuo Li"
] |
[
"Left Ventricle Segmentation",
"LV Segmentation",
"Segmentation",
"Self-Learning"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/capacity-releasing-diffusion-for-speed-and-1
|
1706.05826
| null | null |
Capacity Releasing Diffusion for Speed and Locality
|
Diffusions and related random walk procedures are of central importance in
many areas of machine learning, data analysis, and applied mathematics. Because
they spread mass agnostically at each step in an iterative manner, they can
sometimes spread mass "too aggressively," thereby failing to find the "right"
clusters. We introduce a novel Capacity Releasing Diffusion (CRD) Process,
which is both faster and stays more local than the classical spectral diffusion
process. As an application, we use our CRD Process to develop an improved local
algorithm for graph clustering. Our local graph clustering method can find
local clusters in a model of clustering where one begins the CRD Process in a
cluster whose vertices are connected better internally than externally by an
$O(\log^2 n)$ factor, where $n$ is the number of nodes in the cluster. Thus,
our CRD Process is the first local graph clustering algorithm that is not
subject to the well-known quadratic Cheeger barrier. Our result requires a
certain smoothness condition, which we expect to be an artifact of our
analysis. Our empirical evaluation demonstrates improved results, in particular
for realistic social graphs where there are moderately good---but not very
good---clusters.
| null |
http://arxiv.org/abs/1706.05826v2
|
http://arxiv.org/pdf/1706.05826v2.pdf
| null |
[
"Di Wang",
"Kimon Fountoulakis",
"Monika Henzinger",
"Michael W. Mahoney",
"Satish Rao"
] |
[
"Clustering",
"Graph Clustering"
] | 2017-06-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/implicit-policy-for-reinforcement-learning
|
1806.06798
| null | null |
Implicit Policy for Reinforcement Learning
|
We introduce Implicit Policy, a general class of expressive policies that can
flexibly represent complex action distributions in reinforcement learning, with
efficient algorithms to compute entropy regularized policy gradients. We
empirically show that, despite its simplicity in implementation, entropy
regularization combined with a rich policy class can attain desirable
properties displayed under maximum entropy reinforcement learning framework,
such as robustness and multi-modality.
| null |
http://arxiv.org/abs/1806.06798v2
|
http://arxiv.org/pdf/1806.06798v2.pdf
| null |
[
"Yunhao Tang",
"Shipra Agrawal"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/k-space-deep-learning-for-reference-free-epi
|
1806.00153
| null | null |
k-Space Deep Learning for Reference-free EPI Ghost Correction
|
Nyquist ghost artifacts in EPI originate from phase mismatch between the even and odd echoes. However, conventional correction methods using reference scans often produce erroneous results, especially in high-field MRI, due to the non-linear and time-varying local magnetic field changes. Recently, it was shown that the problem of ghost correction can be reformulated as a k-space interpolation problem that can be solved using structured low-rank Hankel matrix approaches. Another recent work showed that data-driven Hankel matrix decomposition can be reformulated to exhibit similar structures to a deep convolutional neural network. By synergistically combining these findings, we propose a k-space deep learning approach that immediately corrects the phase mismatch without a reference scan in both accelerated and non-accelerated EPI acquisitions. To take advantage of the even and odd-phase directional redundancy, the k-space data is divided into two channels configured with even and odd phase encodings. The redundancies between coils are also exploited by stacking the multi-coil k-space data into additional input channels. Then, our k-space ghost correction network is trained to learn the interpolation kernel to estimate the missing virtual k-space data. For the accelerated EPI data, the same neural network is trained to directly estimate the interpolation kernels for missing k-space data from both ghost and subsampling. Reconstruction results using 3T and 7T in-vivo data showed that the proposed method outperformed the existing methods in terms of image quality, and the computing time is much faster. The proposed k-space deep learning for EPI ghost correction is highly robust and fast, and can be combined with acceleration, so that it can be used as a promising correction tool for high-field MRI without changing the current acquisition protocol.
| null |
https://arxiv.org/abs/1806.00153v3
|
https://arxiv.org/pdf/1806.00153v3.pdf
| null |
[
"Juyoung Lee",
"Yoseob Han",
"Jae-Kyun Ryu",
"Jang-Yeon Park",
"Jong Chul Ye"
] |
[
"Deep Learning",
"Matrix Completion"
] | 2018-06-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/weighted-tanimoto-coefficient-for-3d-molecule
|
1806.05237
| null | null |
Weighted Tanimoto Coefficient for 3D Molecule Structure Similarity Measurement
|
Similarity searching of molecular structures has been an important application
in Chemoinformatics, especially in drug discovery. Similarity searching is
a common method used for the identification of molecular structures. It
involves three main components: structure representation, weighting scheme,
and similarity coefficient. In this paper, we introduce a Weighted Tanimoto
Coefficient based on weighted Euclidean distance in order to investigate the
effect of the weight function on the results of similarity searching. The
Tanimoto coefficient is one of the popular similarity coefficients used to
measure the similarity between pairs of molecules. Most research in this area
performs similarity searching on binary or fingerprint data. In contrast, we
use non-binary data, setting the amphetamine structure as the reference
(target) structure and the rest of the dataset as the database structure. This
study shows that similarity searching gives clearly different results with and
without weights.
| null |
http://arxiv.org/abs/1806.05237v1
|
http://arxiv.org/pdf/1806.05237v1.pdf
| null |
[
"Siti Asmah Bero",
"Azah Kamilah Muda",
"Yun-Huoy Choo",
"Noor Azilah Muda",
"Satrya Fajri Pratama"
] |
[
"Drug Discovery"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
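One plausible reading of the weighted coefficient described above, for non-binary descriptors: replace each inner product in the continuous Tanimoto formula T(x, y) = <x, y> / (<x, x> + <y, y> - <x, y>) with a weighted one. A sketch under that assumption (the authors' exact weighting scheme may differ):

```python
import numpy as np

def weighted_tanimoto(x, y, w):
    """Continuous Tanimoto similarity with per-feature weights w."""
    xy = np.sum(w * x * y)
    xx = np.sum(w * x * x)
    yy = np.sum(w * y * y)
    return xy / (xx + yy - xy)

x = np.array([1.0, 2.0, 0.5])          # toy molecular descriptors
y = np.array([0.9, 2.1, 0.4])
w_uniform = np.ones(3)                  # unweighted baseline
w_skewed = np.array([2.0, 0.5, 0.5])    # emphasise the first feature
print(weighted_tanimoto(x, y, w_uniform), weighted_tanimoto(x, y, w_skewed))
```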
https://paperswithcode.com/paper/k-space-deep-learning-for-parallel-mri
|
1806.00806
| null | null |
k-Space Deep Learning for Parallel MRI: Application to Time-Resolved MR Angiography
|
Time-resolved angiography with interleaved stochastic trajectories (TWIST)
has been widely used for dynamic contrast enhanced MRI (DCE-MRI). To achieve
highly accelerated acquisitions, TWIST combines the periphery of the k-space
data from several adjacent frames to reconstruct one temporal frame. However,
this view-sharing scheme limits the true temporal resolution of TWIST.
Moreover, the k-space sampling patterns have been specially designed for a
specific generalized autocalibrating partial parallel acquisition (GRAPPA)
factor so that it is not possible to reduce the number of view-sharing once the
k-data is acquired. To address these issues, this paper proposes a novel
k-space deep learning approach for parallel MRI. In particular, we have
designed our neural network so that accurate k-space interpolations are
performed simultaneously for multiple coils by exploiting the redundancies
along the coils and images. Reconstruction results using in vivo TWIST data set
confirm that the proposed method can immediately generate high-quality
reconstruction results with various choices of view-sharing, allowing us to
exploit the trade-off between spatial and temporal resolution in time-resolved
MR angiography.
| null |
http://arxiv.org/abs/1806.00806v2
|
http://arxiv.org/pdf/1806.00806v2.pdf
| null |
[
"Eunju Cha",
"Eung Yeop Kim",
"Jong Chul Ye"
] |
[] | 2018-06-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-the-generalization-of-equivariance-and
|
1802.03690
| null | null |
On the Generalization of Equivariance and Convolution in Neural Networks to the Action of Compact Groups
|
Convolutional neural networks have been extremely successful in the image
recognition domain because they ensure equivariance to translations. There have
been many recent attempts to generalize this framework to other domains,
including graphs and data lying on manifolds. In this paper we give a rigorous,
theoretical treatment of convolution and equivariance in neural networks with
respect to not just translations, but the action of any compact group. Our main
result is to prove that (given some natural constraints) convolutional
structure is not just a sufficient, but also a necessary condition for
equivariance to the action of a compact group. Our exposition makes use of
concepts from representation theory and noncommutative harmonic analysis and
derives new generalized convolution formulae.
| null |
http://arxiv.org/abs/1802.03690v3
|
http://arxiv.org/pdf/1802.03690v3.pdf
|
ICML 2018 7
|
[
"Risi Kondor",
"Shubhendu Trivedi"
] |
[] | 2018-02-11T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2476
|
http://proceedings.mlr.press/v80/kondor18a/kondor18a.pdf
|
on-the-generalization-of-equivariance-and-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
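The simplest instance of the paper's result, translation equivariance of (circular) convolution, is easy to verify numerically. A minimal sketch:

```python
import numpy as np

def circular_conv(x, k):
    """Circular 1-D cross-correlation: equivariant to cyclic shifts."""
    n = len(x)
    return np.array([np.sum(np.roll(x, -i)[:len(k)] * k) for i in range(n)])

x = np.random.default_rng(0).normal(size=8)
k = np.array([1.0, -2.0, 1.0])

shift_then_conv = circular_conv(np.roll(x, 3), k)   # act with the group first
conv_then_shift = np.roll(circular_conv(x, k), 3)   # convolve first
print(np.allclose(shift_then_conv, conv_then_shift))   # True
```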
https://paperswithcode.com/paper/cross-lingual-task-specific-representation
|
1806.03590
| null | null |
Cross-Lingual Task-Specific Representation Learning for Text Classification in Resource Poor Languages
|
Neural network models have shown promising results for text classification.
However, these solutions are limited by their dependence on the availability of
annotated data.
The prospect of leveraging resource-rich languages to enhance the text
classification of resource-poor languages is fascinating. The performance on
resource-poor languages can significantly improve if the resource availability
constraints can be offset. To this end, we present a twin Bidirectional Long
Short Term Memory (Bi-LSTM) network with shared parameters consolidated by a
contrastive loss function (based on a similarity metric). The model learns the
representation of resource-poor and resource-rich sentences in a common space
by using the similarity between their assigned annotation tags. Hence, the
model projects sentences with similar tags closer and those with different tags
farther from each other. We evaluated our model on the classification tasks of
sentiment analysis and emoji prediction for resource-poor languages - Hindi and
Telugu and resource-rich languages - English and Spanish. Our model
significantly outperforms the state-of-the-art approaches in both the tasks
across all metrics.
| null |
http://arxiv.org/abs/1806.03590v1
|
http://arxiv.org/pdf/1806.03590v1.pdf
| null |
[
"Nurendra Choudhary",
"Rajat Singh",
"Manish Shrivastava"
] |
[
"Classification",
"General Classification",
"Representation Learning",
"Sentiment Analysis",
"text-classification",
"Text Classification"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
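The twin-network objective described above is the classic contrastive loss over paired sentence embeddings; the margin and the Euclidean similarity metric below are assumptions for illustration:

```python
import numpy as np

def contrastive_loss(e1, e2, same_tag, margin=1.0):
    """Contrastive loss over paired sentence embeddings.

    Pulls embeddings with the same annotation tag together and pushes
    differently tagged pairs at least `margin` apart.
    """
    d = np.linalg.norm(e1 - e2, axis=1)              # distance per pair
    pos = same_tag * d ** 2                           # similar pairs: shrink d
    neg = (1 - same_tag) * np.maximum(margin - d, 0.0) ** 2
    return float(np.mean(pos + neg))

rng = np.random.default_rng(0)
e_rich = rng.normal(size=(4, 8))    # e.g. English sentence embeddings
e_poor = rng.normal(size=(4, 8))    # e.g. Hindi sentence embeddings
same = np.array([1, 0, 1, 0])       # whether each pair shares a tag
print(contrastive_loss(e_rich, e_poor, same))
```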
https://paperswithcode.com/paper/free-form-image-inpainting-with-gated
|
1806.03589
| null | null |
Free-Form Image Inpainting with Gated Convolution
|
We present a generative image inpainting system to complete images with free-form mask and guidance. The system is based on gated convolutions learned from millions of images without additional labelling efforts. The proposed gated convolution solves the issue of vanilla convolution, which treats all input pixels as valid, and generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shape, global and local GANs designed for a single rectangular mask are not applicable. Thus, we also present a patch-based GAN loss, named SN-PatchGAN, by applying a spectral-normalized discriminator on dense image patches. SN-PatchGAN is simple in formulation, fast and stable in training. Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more flexible results than previous methods. Our system helps users quickly remove distracting objects, modify image layouts, clear watermarks and edit faces. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting
|
We present a generative image inpainting system to complete images with free-form mask and guidance.
|
https://arxiv.org/abs/1806.03589v2
|
https://arxiv.org/pdf/1806.03589v2.pdf
|
ICCV 2019 10
|
[
"Jiahui Yu",
"Zhe Lin",
"Jimei Yang",
"Xiaohui Shen",
"Xin Lu",
"Thomas Huang"
] |
[
"feature selection",
"Form",
"Image Inpainting",
"valid"
] | 2018-06-10T00:00:00 |
http://openaccess.thecvf.com/content_ICCV_2019/html/Yu_Free-Form_Image_Inpainting_With_Gated_Convolution_ICCV_2019_paper.html
|
http://openaccess.thecvf.com/content_ICCV_2019/papers/Yu_Free-Form_Image_Inpainting_With_Gated_Convolution_ICCV_2019_paper.pdf
|
free-form-image-inpainting-with-gated-1
| null |
[
{
"code_snippet_url": "",
"description": "A Gated Linear Unit, or GLU computes:\r\n\r\n$$\r\n\\mathrm{GLU}(a, b) = a \\otimes \\sigma(b)\r\n$$\r\n\r\nIt is used in natural language processing architectures, for example the Gated CNN, because here $\\sigma(b)$ is the gate that control what information from $a$ is passed up to the following layer. Intuitively, for a language modeling task, the gating mechanism allows selection of words or features that are important for predicting the next word. The GLU also has non-linear capabilities, but has a linear path for the gradient so diminishes the vanishing gradient problem.",
"full_name": "Gated Linear Unit",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Gated Linear Unit",
"source_title": "Language Modeling with Gated Convolutional Networks",
"source_url": "http://arxiv.org/abs/1612.08083v3"
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": null,
"description": "A **Gated Convolution** is a type of temporal [convolution](https://paperswithcode.com/method/convolution) with a gating mechanism. Zero-padding is used to ensure that future context can not be seen.",
"full_name": "Gated Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Temporal Convolutions",
"parent": null
},
"name": "Gated Convolution",
"source_title": "Language Modeling with Gated Convolutional Networks",
"source_url": "http://arxiv.org/abs/1612.08083v3"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
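The gating in the abstract follows the GLU pattern listed above: a feature branch modulated element-wise by a sigmoid gate. A minimal numpy sketch using a pointwise (1x1) convolution for both branches; shapes and weights are illustrative, not the paper's architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_conv1x1(x, w_feat, w_gate):
    """Pointwise (1x1) gated convolution over an (H, W, C_in) feature map.

    The sigmoid gate learns, per channel and per location, how much of
    the feature to pass on: the 'learnable dynamic feature selection'
    described in the abstract.
    """
    feature = x @ w_feat                 # (H, W, C_out)
    gate = sigmoid(x @ w_gate)           # (H, W, C_out), values in (0, 1)
    return feature * gate

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 4, 3))           # toy feature map
w_feat = rng.normal(size=(3, 8))
w_gate = rng.normal(size=(3, 8))
print(gated_conv1x1(x, w_feat, w_gate).shape)   # (4, 4, 8)
```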
https://paperswithcode.com/paper/a-simplified-active-calibration-algorithm-for
|
1806.03584
| null | null |
A Simplified Active Calibration algorithm for Focal Length Estimation
|
We introduce new linear mathematical formulations to calculate the focal
length of a camera in an active platform. Through mathematical derivations, we
show that the focal lengths in each direction can be estimated using only one
point correspondence that relates images taken before and after a degenerate
rotation of the camera. The new formulations will be beneficial in robotic and
dynamic surveillance environments when the camera needs to be calibrated while
it freely moves and zooms. By establishing a correspondence between only two
images taken after slightly panning and tilting the camera and a reference
image, our proposed Simplified Calibration Method is able to calculate the
focal length of the camera. We extensively evaluate the derived formulations on
a simulated camera, 3D scenes and real-world images. Our error analysis over
simulated and real images indicates that the proposed Simplified Active
Calibration formulation estimates the parameters of a camera with low error
rates.
| null |
http://arxiv.org/abs/1806.03584v1
|
http://arxiv.org/pdf/1806.03584v1.pdf
| null |
[
"Mehdi Faraji",
"Anup Basu"
] |
[] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-scalable-framework-for-trajectory
|
1806.03582
| null | null |
A Scalable Framework for Trajectory Prediction
|
Trajectory prediction (TP) is of great importance for a wide range of
location-based applications in intelligent transport systems such as
location-based advertising, route planning, traffic management, and early
warning systems. In the last few years, the widespread use of GPS navigation
systems and wireless communication technology enabled vehicles has resulted in
huge volumes of trajectory data. The task of utilizing this data employing
spatio-temporal techniques for trajectory prediction in an efficient and
accurate manner is an ongoing research problem. Existing TP approaches are
limited to short-term predictions. Moreover, they cannot handle a large volume
of trajectory data for long-term prediction. To address these limitations, we
propose a scalable clustering and Markov chain based hybrid framework, called
Traj-clusiVAT-based TP, for both short-term and long-term trajectory
prediction, which can handle a large number of overlapping trajectories in a
dense road network. Traj-clusiVAT can also determine the number of clusters,
which represent different movement behaviours in input trajectory data. In our
experiments, we compare our proposed approach with a mixed Markov model
(MMM)-based scheme, and a trajectory clustering, NETSCAN-based TP method for
both short- and long-term trajectory predictions. We performed our experiments
on two real, vehicle trajectory datasets, including a large-scale trajectory
dataset consisting of 3.28 million trajectories obtained from 15,061 taxis in
Singapore over a period of one month. Experimental results on two real
trajectory datasets show that our proposed approach outperforms the existing
approaches in terms of both short- and long-term prediction performances, based
on prediction accuracy and distance error (in km).
| null |
http://arxiv.org/abs/1806.03582v3
|
http://arxiv.org/pdf/1806.03582v3.pdf
| null |
[
"Punit Rathore",
"Dheeraj Kumar",
"Sutharshan Rajasegarar",
"Marimuthu Palaniswami",
"James C. Bezdek"
] |
[
"Clustering",
"Management",
"Prediction",
"Trajectory Clustering",
"Trajectory Prediction"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
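The Markov-chain half of the proposed hybrid can be sketched compactly: fit a first-order transition matrix over discretised road cells, then predict the most likely cell one or more steps ahead. The grid discretisation below is an assumption for illustration:

```python
import numpy as np

def fit_markov(trajectories, n_states):
    """First-order Markov transition matrix from discretised trajectories."""
    counts = np.ones((n_states, n_states))        # Laplace smoothing
    for traj in trajectories:
        for a, b in zip(traj[:-1], traj[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next(P, state, horizon=1):
    """Most likely state `horizon` steps ahead of `state`."""
    dist = np.linalg.matrix_power(P, horizon)[state]
    return int(dist.argmax())

trajs = [[0, 1, 2, 3], [0, 1, 2, 0], [1, 2, 3, 3]]   # cells visited in order
P = fit_markov(trajs, n_states=4)
print(predict_next(P, state=1), predict_next(P, state=1, horizon=2))
```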
https://paperswithcode.com/paper/language-based-image-editing-with-recurrent
|
1711.06288
| null | null |
Language-Based Image Editing with Recurrent Attentive Models
|
We investigate the problem of Language-Based Image Editing (LBIE). Given a
source image and a natural language description, we want to generate a target
image by editing the source image based on the description. We propose a
generic modeling framework for two sub-tasks of LBIE: language-based image
segmentation and image colorization. The framework uses recurrent attentive
models to fuse image and language features. Instead of using a fixed step size,
we introduce for each region of the image a termination gate to dynamically
determine after each inference step whether to continue extrapolating
additional information from the textual description. The effectiveness of the
framework is validated on three datasets. First, we introduce a synthetic
dataset, called CoSaL, to evaluate the end-to-end performance of our LBIE
system. Second, we show that the framework leads to state-of-the-art
performance on image segmentation on the ReferIt dataset. Third, we present the
first language-based colorization result on the Oxford-102 Flowers dataset.
|
First, we introduce a synthetic dataset, called CoSaL, to evaluate the end-to-end performance of our LBIE system.
|
http://arxiv.org/abs/1711.06288v2
|
http://arxiv.org/pdf/1711.06288v2.pdf
|
CVPR 2018 6
|
[
"Jianbo Chen",
"Yelong Shen",
"Jianfeng Gao",
"Jingjing Liu",
"Xiaodong Liu"
] |
[
"Colorization",
"Image Colorization",
"Image Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2017-11-16T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Chen_Language-Based_Image_Editing_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Language-Based_Image_Editing_CVPR_2018_paper.pdf
|
language-based-image-editing-with-recurrent-1
| null |
[
{
"code_snippet_url": "",
"description": "**Colorization** is a self-supervision approach that relies on colorization as the pretext task in order to learn image representations.",
"full_name": "Colorization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.",
"name": "Self-Supervised Learning",
"parent": null
},
"name": "Colorization",
"source_title": "Colorful Image Colorization",
"source_url": "http://arxiv.org/abs/1603.08511v5"
}
] |
https://paperswithcode.com/paper/erel-selection-using-morphological-relation
|
1806.03580
| null | null |
EREL Selection using Morphological Relation
|
This work concentrates on Extremal Regions of Extremum Level (EREL)
selection. EREL is a recently proposed feature detector aiming at detecting
regions from a set of extremal regions. This is a branching problem derived
from segmentation of arterial wall boundaries from Intravascular Ultrasound
(IVUS) images. For each IVUS frame, a set of EREL regions is generated to
describe the luminal area of human coronary. Each EREL is then fitted by an
ellipse to represent the luminal border. The goal is to assign the most
appropriate EREL as the lumen. In this work, EREL selection is carried out in
two rounds. In the first round, the pattern in a set of EREL regions is analyzed
and used to generate an approximate luminal region. Then, the two-dimensional
(2D) correlation coefficients are computed between this approximate region and
each EREL to keep the ones with the tightest relevance. In the second round, a
compactness measure is calculated for each EREL and its fitted ellipse to
guarantee that the resulting EREL has not been affected by common artifacts such
as bifurcations, shadows, and side branches. We evaluated the selected ERELs in
terms of Hausdorff Distance (HD) and Jaccard Measure (JM) on the train and test
set of a publicly available dataset. The results show that our selection
strategy outperforms the current state-of-the-art.
| null |
http://arxiv.org/abs/1806.03580v1
|
http://arxiv.org/pdf/1806.03580v1.pdf
| null |
[
"Yuying Li",
"Mehdi Faraji"
] |
[
"Relation"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
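The first selection round relies on 2D correlation coefficients between the approximate luminal region and each EREL candidate; a minimal sketch over toy binary masks:

```python
import numpy as np

def corr2d(a, b):
    """2-D correlation coefficient between two equally sized regions."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

approx = np.zeros((8, 8)); approx[2:6, 2:6] = 1.0   # approximate luminal region
erel = np.zeros((8, 8));   erel[2:6, 3:7] = 1.0     # one candidate EREL mask
print(round(corr2d(approx, erel), 3))                # keep the tightest matches
```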
https://paperswithcode.com/paper/adaptations-of-rouge-and-bleu-to-better
|
1806.03578
| null | null |
Adaptations of ROUGE and BLEU to Better Evaluate Machine Reading Comprehension Task
|
Current evaluation metrics for question-answering-based machine reading
comprehension (MRC) systems generally focus on the lexical overlap between the
candidate and reference answers, such as ROUGE and BLEU. However, bias may
appear when these metrics are used for specific question types, especially
questions inquiring yes-no opinions and entity lists. In this paper, we make
adaptations on the metrics to better correlate n-gram overlap with the human
judgment for answers to these two question types. Statistical analysis proves
the effectiveness of our approach. Our adaptations may provide positive
guidance for the development of real-scene MRC systems.
| null |
http://arxiv.org/abs/1806.03578v1
|
http://arxiv.org/pdf/1806.03578v1.pdf
|
WS 2018 7
|
[
"An Yang",
"Kai Liu",
"Jing Liu",
"Yajuan Lyu",
"Sujian Li"
] |
[
"Machine Reading Comprehension",
"Question Answering",
"Reading Comprehension"
] | 2018-06-10T00:00:00 |
https://aclanthology.org/W18-2611
|
https://aclanthology.org/W18-2611.pdf
|
adaptations-of-rouge-and-bleu-to-better-1
| null |
[] |
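The lexical-overlap metrics under discussion reduce to n-gram counting; a minimal sketch of ROUGE-N recall and the clipped n-gram precision at BLEU's core (no brevity penalty or multi-reference handling), which also shows how a padded yes/no answer scores differently under the two:

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """ROUGE-N: recall of reference n-grams covered by the candidate."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())      # clipped n-gram matches
    return overlap / max(sum(ref.values()), 1)

def bleu_n_precision(candidate, reference, n=1):
    """Clipped n-gram precision, the core of BLEU."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())
    return overlap / max(sum(cand.values()), 1)

cand = "yes the answer is yes".split()
ref = "the answer is yes".split()
print(rouge_n(cand, ref), bleu_n_precision(cand, ref))   # 1.0 vs 0.8
```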
https://paperswithcode.com/paper/generative-adversarial-nets-for-information
|
1806.03577
| null | null |
Generative Adversarial Nets for Information Retrieval: Fundamentals and Advances
|
Generative adversarial nets (GANs) have been widely studied during the recent
development of deep learning and unsupervised learning. With an adversarial
training mechanism, GAN manages to train a generative model to fit the
underlying unknown real data distribution under the guidance of the
discriminative model estimating whether a data instance is real or generated.
Such a framework was originally proposed for fitting continuous data
distributions such as images; thus, it is not straightforward to apply it
directly to information retrieval scenarios where the data is mostly discrete,
such as IDs, text and graphs. In this tutorial, we focus on discussing the GAN
techniques and the variants on discrete data fitting in various information
retrieval scenarios. (i) We introduce the fundamentals of GAN framework and its
theoretic properties; (ii) we carefully study the promising solutions to extend
GAN onto discrete data generation; (iii) we introduce IRGAN, the fundamental
GAN framework of fitting single ID data distribution and the direct application
on information retrieval; (iv) we further discuss the task of sequential
discrete data generation tasks, e.g., text generation, and the corresponding
GAN solutions; (v) we present the most recent work on graph/network data
fitting with node embedding techniques by GANs. Meanwhile, we also introduce
the relevant open-source platforms such as IRGAN and Texygen to help the audience
conduct research experiments on GANs in information retrieval. Finally, we
conclude this tutorial with a comprehensive summarization and a prospect of
further research directions for GANs in information retrieval.
| null |
http://arxiv.org/abs/1806.03577v1
|
http://arxiv.org/pdf/1806.03577v1.pdf
| null |
[
"Wei-Nan Zhang"
] |
[
"Information Retrieval",
"Retrieval",
"Text Generation"
] | 2018-06-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
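As a concrete illustration of the sliding-kernel operation described in the Convolution entry above, here is a minimal NumPy sketch of a 2D convolution (technically cross-correlation, as in most deep learning libraries), with no padding and stride 1; the input and kernel values are arbitrary toy data.

```python
import numpy as np

def conv2d(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Valid 2D convolution (cross-correlation), stride 1, no padding."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise multiply the kernel with the patch, then sum.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(16.0).reshape(4, 4)
k = np.array([[1.0, 0.0], [0.0, -1.0]])  # simple edge-like kernel
print(conv2d(x, k))  # shape (3, 3)
```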
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
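Since the record above concerns GAN fundamentals, a minimal PyTorch sketch of the adversarial training step may help. The tiny MLPs and the 1-D toy data distribution are illustrative assumptions, not part of IRGAN or the tutorial itself.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0  # toy "real" data distribution
    fake = G(torch.randn(64, 8))
    # Discriminator: classify real samples as 1, generated samples as 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to make the discriminator predict 1 on generated samples.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```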
https://paperswithcode.com/paper/instance-search-via-instance-level
|
1806.03576
| null | null |
Instance Search via Instance Level Segmentation and Feature Representation
|
Instance search is an interesting task as well as a challenging issue due to the lack of effective feature representation. In this paper, an instance-level feature representation built upon fully convolutional instance-aware segmentation is proposed. The feature is ROI-pooled from the segmented instance region, so that instances of various sizes and layouts are represented by deep features of uniform length. This representation is further enhanced by the use of deformable ResNeXt blocks. Superior performance is observed in terms of its distinctiveness and scalability on a challenging evaluation dataset built by ourselves. In addition, the proposed enhancement on the network structure also shows superior performance on the instance segmentation task.
|
In addition, the proposed enhancement on the network structure also shows superior performance on the instance segmentation task.
|
https://arxiv.org/abs/1806.03576v2
|
https://arxiv.org/pdf/1806.03576v2.pdf
| null |
[
"Yu Zhan",
"Wan-Lei Zhao"
] |
[
"Instance Search",
"Instance Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
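A one-line illustration of the Average Pooling entry above, using PyTorch's `nn.AvgPool2d` on a toy feature map:

```python
import torch
import torch.nn as nn

x = torch.arange(16.0).reshape(1, 1, 4, 4)  # (batch, channels, H, W)
pool = nn.AvgPool2d(kernel_size=2, stride=2)
print(pool(x))  # each output value is the mean of a 2x2 patch -> shape (1, 1, 2, 2)
```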
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **ResNeXt Block** is a type of [residual block](https://paperswithcode.com/method/residual-block) used as part of the [ResNeXt](https://paperswithcode.com/method/resnext) CNN architecture. It uses a \"split-transform-merge\" strategy (branched paths within a single module) similar to an [Inception module](https://paperswithcode.com/method/inception-module), i.e. it aggregates a set of transformations. Compared to a Residual Block, it exposes a new dimension, *cardinality* (size of set of transformations) $C$, as an essential factor in addition to depth and width. \r\n\r\nFormally, a set of aggregated transformations can be represented as: $\\mathcal{F}(x)=\\sum_{i=1}^{C}\\mathcal{T}_i(x)$, where $\\mathcal{T}_i(x)$ can be an arbitrary function. Analogous to a simple neuron, $\\mathcal{T}_i$ should project $x$ into an (optionally low-dimensional) embedding and then transform it.",
"full_name": "ResNeXt Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "ResNeXt Block",
"source_title": "Aggregated Residual Transformations for Deep Neural Networks",
"source_url": "http://arxiv.org/abs/1611.05431v2"
},
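The aggregated transformations in the ResNeXt Block entry above are usually implemented in their equivalent grouped-convolution form; here is a minimal PyTorch sketch under that equivalence, with illustrative channel widths.

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Bottleneck block whose cardinality C is realized as a grouped 3x3 conv."""
    def __init__(self, channels=256, width=128, cardinality=32):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, width, 1, bias=False), nn.BatchNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        # Sum of aggregated transformations F(x) plus the identity shortcut.
        return self.relu(self.f(x) + x)

y = ResNeXtBlock()(torch.randn(1, 256, 8, 8))
print(y.shape)  # torch.Size([1, 256, 8, 8])
```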
{
"code_snippet_url": "https://github.com/prlz77/ResNeXt.pytorch/blob/39fb8d03847f26ec02fb9b880ecaaa88db7a7d16/models/model.py#L42",
"description": "A **Grouped Convolution** uses a group of convolutions - multiple kernels per layer - resulting in multiple channel outputs per layer. This leads to wider networks helping a network learn a varied set of low level and high level features. The original motivation of using Grouped Convolutions in [AlexNet](https://paperswithcode.com/method/alexnet) was to distribute the model over multiple GPUs as an engineering compromise. But later, with models such as [ResNeXt](https://paperswithcode.com/method/resnext), it was shown this module could be used to improve classification accuracy. Specifically by exposing a new dimension through grouped convolutions, *cardinality* (the size of set of transformations), we can increase accuracy by increasing it.",
"full_name": "Grouped Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Grouped Convolution",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
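The parameter-sharing effect of grouped convolutions described above is easy to verify: with `groups=8`, each filter only sees one eighth of the input channels, cutting the weight count roughly eightfold (the small remainder is the per-channel biases).

```python
import torch.nn as nn

dense   = nn.Conv2d(64, 64, kernel_size=3, padding=1)
grouped = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=8)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(grouped))  # 36928 vs. 4672
```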
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
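Global average pooling as described above reduces each feature map to a single scalar; in PyTorch this is a mean over the spatial dimensions, shown on toy feature maps below.

```python
import torch

x = torch.randn(2, 512, 7, 7)  # (batch, channels, H, W) feature maps
gap = x.mean(dim=(2, 3))       # global average pooling -> shape (2, 512)
print(gap.shape)               # one scalar per feature map, fed to the classifier
```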
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
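The recast mapping $\mathcal{F}(x) + x$ from the Residual Connection entry above can be wrapped as a tiny module; the inner transform below is an arbitrary toy example.

```python
import torch
import torch.nn as nn

class Residual(nn.Module):
    """Wraps a transform F so the module computes F(x) + x."""
    def __init__(self, fn: nn.Module):
        super().__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x) + x  # learn the residual, keep the identity path

block = Residual(nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16)))
print(block(torch.randn(4, 16)).shape)  # torch.Size([4, 16])
```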
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
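The ReLU definition $f(x) = \max(0, x)$ in the entry above, checked on a toy tensor:

```python
import torch

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
print(torch.relu(x))          # tensor([0.0000, 0.0000, 0.0000, 1.5000])
print(torch.clamp(x, min=0))  # equivalent: f(x) = max(0, x)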
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
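The Kaiming Initialization entry above prescribes a zero-centered Gaussian with standard deviation $\sqrt{2/n_l}$ and zero biases; a short PyTorch sketch (fan-in mode, which is the library default) verifies this empirically.

```python
import torch
import torch.nn as nn

layer = nn.Linear(512, 256)
nn.init.kaiming_normal_(layer.weight, nonlinearity="relu")  # std = sqrt(2 / fan_in)
nn.init.zeros_(layer.bias)

# The empirical weight std should be close to sqrt(2 / 512) ~= 0.0625.
print(layer.weight.std().item())
```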
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
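A minimal demonstration of the 1x1 Convolution entry above: a per-pixel linear map across channels that squeezes the depth while leaving the spatial size unchanged (the tensor shapes are illustrative).

```python
import torch
import torch.nn as nn

x = torch.randn(1, 256, 14, 14)
reduce = nn.Conv2d(256, 64, kernel_size=1)  # per-pixel linear map over channels
print(reduce(x).shape)  # torch.Size([1, 64, 14, 14]): spatial size unchanged
```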
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
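To make the four Batch Normalization formulas above concrete, here is a minimal training-mode sketch over an $(N, D)$ minibatch; the toy tensor shapes and the shift/scale of the input are illustrative.

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-mode batch norm over a (N, D) minibatch, per the formulas above."""
    mu = x.mean(dim=0)                        # mini-batch mean
    var = x.var(dim=0, unbiased=False)        # mini-batch variance
    x_hat = (x - mu) / torch.sqrt(var + eps)  # normalize
    return gamma * x_hat + beta               # scale and shift (learnable in practice)

x = torch.randn(32, 8) * 3 + 5
y = batch_norm(x, torch.ones(8), torch.zeros(8))
print(y.mean(dim=0).abs().max().item(), y.std(dim=0).mean().item())  # ~0 and ~1
```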
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/resnet.py#L124",
"description": "A **ResNeXt** repeats a building block that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$, as an essential factor in addition to the dimensions of depth and width. \r\n\r\nFormally, a set of aggregated transformations can be represented as: $\\mathcal{F}(x)=\\sum_{i=1}^{C}\\mathcal{T}_i(x)$, where $\\mathcal{T}_i(x)$ can be an arbitrary function. Analogous to a simple neuron, $\\mathcal{T}_i$ should project $x$ into an (optionally low-dimensional) embedding and then transform it.",
"full_name": "ResNeXt",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "ResNeXt",
"source_title": "Aggregated Residual Transformations for Deep Neural Networks",
"source_url": "http://arxiv.org/abs/1611.05431v2"
}
] |
https://paperswithcode.com/paper/fmhash-deep-hashing-of-in-air-handwriting-for
|
1806.03574
| null | null |
FMHash: Deep Hashing of In-Air-Handwriting for User Identification
|
Many mobile systems and wearable devices, such as Virtual Reality (VR) or Augmented Reality (AR) headsets, lack a keyboard or touchscreen to type an ID and password for signing into a virtual website. However, they are usually equipped with gesture capture interfaces to allow the user to interact with the system directly with hand gestures. Although gesture-based authentication has been well-studied, less attention is paid to the gesture-based user identification problem, which is essentially an input method of account ID and an efficient searching and indexing method of a database of gesture signals. In this paper, we propose FMHash (i.e., Finger Motion Hash), a user identification framework that can generate a compact binary hash code from a piece of in-air-handwriting of an ID string. This hash code enables indexing and fast search of a large account database through a hash table using the in-air-handwriting. To demonstrate the effectiveness of the framework, we implemented a prototype and achieved >99.5% precision and >92.6% recall with exact hash code match on a dataset of 200 accounts collected by us. The ability to hash an in-air-handwriting pattern into a binary code can be used to achieve convenient sign-in and sign-up with an in-air-handwriting gesture ID on future mobile and wearable systems connected to the Internet.
|
Many mobile systems and wearable devices, such as Virtual Reality (VR) or Augmented Reality (AR) headsets, lack a keyboard or touchscreen to type an ID and password for signing into a virtual website.
|
https://arxiv.org/abs/1806.03574v2
|
https://arxiv.org/pdf/1806.03574v2.pdf
| null |
[
"Duo Lu",
"Dijiang Huang",
"Anshul Rai"
] |
[
"Deep Hashing",
"User Identification"
] | 2018-06-10T00:00:00 | null | null | null | null |
[] |
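The binarize-and-index idea behind the FMHash record above can be sketched briefly: a real-valued embedding is thresholded into a compact binary code, and accounts are looked up by exact hash-code match. The random embeddings and the sign-bit binarization below are illustrative assumptions, not the paper's actual network or hashing layer.

```python
import numpy as np

def to_hash_code(embedding: np.ndarray) -> bytes:
    """Binarize a real-valued embedding into a compact hash code (sign bits)."""
    bits = (embedding > 0).astype(np.uint8)
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(0)
accounts = {}  # hash code -> account ID
for user in ["alice", "bob"]:
    accounts[to_hash_code(rng.standard_normal(64))] = user

query = rng.standard_normal(64)           # embedding of a new handwriting sample
print(accounts.get(to_hash_code(query)))  # exact-match lookup; None if no account
```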