Dataset schema (column, dtype, and the reported min/max string/list lengths or value range):

| column | dtype | min | max |
| --- | --- | --- | --- |
| paper_url | stringlengths | 35 | 81 |
| arxiv_id | stringlengths | 6 | 35 |
| nips_id | null | n/a | n/a |
| openreview_id | stringlengths | 9 | 93 |
| title | stringlengths | 1 | 1.02k |
| abstract | stringlengths | 0 | 56.5k |
| short_abstract | stringlengths | 0 | 1.95k |
| url_abs | stringlengths | 16 | 996 |
| url_pdf | stringlengths | 16 | 996 |
| proceeding | stringlengths | 7 | 1.03k |
| authors | listlengths | 0 | 3.31k |
| tasks | listlengths | 0 | 147 |
| date | timestamp[ns] | 1951-09-01 00:00:00 | 2222-12-22 00:00:00 |
| conference_url_abs | stringlengths | 16 | 199 |
| conference_url_pdf | stringlengths | 21 | 200 |
| conference | stringlengths | 2 | 47 |
| reproduces_paper | stringclasses (22 values) | n/a | n/a |
| methods | listlengths | 0 | 7.5k |
https://paperswithcode.com/paper/age-and-gender-classification-from-ear-images
1806.05742
null
null
Age and Gender Classification From Ear Images
In this paper, we present a detailed analysis on extracting soft biometric traits, age and gender, from ear images. Although there have been a few previous works on gender classification using ear images, to the best of our knowledge, this study is the first work on age classification from ear images. In the study, we have utilized both geometric features and appearance-based features for ear representation. The utilized geometric features are based on eight anthropometric landmarks and consist of 14 distance measurements and two area calculations. The appearance-based methods employ deep convolutional neural networks for representation and classification. The well-known convolutional neural network models, namely, AlexNet, VGG-16, GoogLeNet, and SqueezeNet, have been adopted for the study. They have been fine-tuned on a large-scale ear dataset that has been built from the profile and close-to-profile face images in the Multi-PIE face dataset. In this way, we have performed domain adaptation. The updated models have then been fine-tuned once more on the small-scale target ear dataset, which contains only around 270 ear images for training. According to the experimental results, appearance-based methods have been found to be superior to the methods based on geometric features. We have achieved 94% accuracy for gender classification, whereas 52% accuracy has been obtained for age classification. These results indicate that ear images provide useful cues for age and gender classification; however, further work is required for age estimation.
null
http://arxiv.org/abs/1806.05742v1
http://arxiv.org/pdf/1806.05742v1.pdf
null
[ "Dogucan Yaman", "Fevziye Irem Eyiokur", "Nurdan Sezgin", "Hazim Kemal Ekenel" ]
[ "Age And Gender Classification", "Age Classification", "Age Estimation", "Classification", "Domain Adaptation", "Gender Classification", "General Classification" ]
2018-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)", "full_name": "1x1 Convolution", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "1x1 Convolution", "source_title": "Network In Network", "source_url": "http://arxiv.org/abs/1312.4400v3" }, { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)", "full_name": "Average Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. 
They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ", "name": "Pooling Operations", "parent": null }, "name": "Average Pooling", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/1c5c289b6218eb1026dcb5fd9738231401cfccea/torch/nn/modules/normalization.py#L13", "description": "**Local Response Normalization** is a normalization layer that implements the idea of lateral inhibition. Lateral inhibition is a concept in neurobiology that refers to the phenomenon of an excited neuron inhibiting its neighbours: this leads to a peak in the form of a local maximum, creating contrast in that area and increasing sensory perception. In practice, we can either normalize within the same channel or normalize across channels when we apply LRN to convolutional neural networks.\r\n\r\n$$ b_{c} = a_{c}\\left(k + \\frac{\\alpha}{n}\\sum_{c'=\\max(0, c-n/2)}^{\\min(N-1,c+n/2)}a_{c'}^2\\right)^{-\\beta} $$\r\n\r\nWhere the size is the number of neighbouring channels used for normalization, $\\alpha$ is multiplicative factor, $\\beta$ an exponent and $k$ an additive factor", "full_name": "Local Response Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Local Response Normalization", "source_title": "ImageNet Classification with Deep Convolutional Neural Networks", "source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks" }, { "code_snippet_url": "", "description": "**Auxiliary Classifiers** are type of architectural component that seek to improve the convergence of very deep networks. They are classifier heads we attach to layers before the end of the network. The motivation is to push useful gradients to the lower layers to make them immediately useful and improve the convergence during training by combatting the vanishing gradient problem. They are notably used in the Inception family of convolutional neural networks.", "full_name": "Auxiliary Classifier", "introduced_year": 2000, "main_collection": { "area": "General", "description": "The following is a list of miscellaneous components used in neural networks.", "name": "Miscellaneous Components", "parent": null }, "name": "Auxiliary Classifier", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/hskang9/Googlenet/blob/654d126b6a2cd3ac944cf5613419deb73da5311e/keras/googlenet.py#L39", "description": "An **Inception Module** is an image model block that aims to approximate an optimal local sparse structure in a CNN. Put simply, it allows for us to use multiple types of filter size, instead of being restricted to a single filter size, in a single image block, which we then concatenate and pass onto the next layer.", "full_name": "Inception Module", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. 
Below you can find a continuously updating list of image model blocks.", "name": "Image Model Blocks", "parent": null }, "name": "Inception Module", "source_title": "Going Deeper with Convolutions", "source_url": "http://arxiv.org/abs/1409.4842v1" }, { "code_snippet_url": "https://github.com/prlz77/ResNeXt.pytorch/blob/39fb8d03847f26ec02fb9b880ecaaa88db7a7d16/models/model.py#L42", "description": "A **Grouped Convolution** uses a group of convolutions - multiple kernels per layer - resulting in multiple channel outputs per layer. This leads to wider networks helping a network learn a varied set of low level and high level features. The original motivation of using Grouped Convolutions in [AlexNet](https://paperswithcode.com/method/alexnet) was to distribute the model over multiple GPUs as an engineering compromise. But later, with models such as [ResNeXt](https://paperswithcode.com/method/resnext), it was shown this module could be used to improve classification accuracy. Specifically by exposing a new dimension through grouped convolutions, *cardinality* (the size of set of transformations), we can increase accuracy by increasing it.", "full_name": "Grouped Convolution", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Grouped Convolution", "source_title": "ImageNet Classification with Deep Convolutional Neural Networks", "source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks" }, { "code_snippet_url": "", "description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!", "full_name": "*Communicated@Fast*How Do I Communicate to Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. 
Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "ReLU", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. 
Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)", "full_name": "Max Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ", "name": "Pooling Operations", "parent": null }, "name": "Max Pooling", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/dansuh17/alexnet-pytorch/blob/d0c1b1c52296ffcbecfbf5b17e1d1685b4ca6744/model.py#L40", "description": "To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.ggfdf\r\n\r\n\r\nHow do I speak to a person at Expedia?How do I speak to a person at Expedia?To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.\r\n\r\n\r\n\r\nTo make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. 
You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.chgd", "full_name": "How do I speak to a person at Expedia?-/+/", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.", "name": "Convolutional Neural Networks", "parent": "Image Models" }, "name": "How do I speak to a person at Expedia?-/+/", "source_title": "ImageNet Classification with Deep Convolutional Neural Networks", "source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks" }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/googlenet.py#L62", "description": "**GoogLeNet** is a type of convolutional neural network based on the [Inception](https://paperswithcode.com/method/inception-module) architecture. It utilises Inception modules, which allow the network to choose between multiple convolutional filter sizes in each block. An Inception network stacks these modules on top of each other, with occasional max-pooling layers with stride 2 to halve the resolution of the grid.", "full_name": "GoogLeNet", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.", "name": "Convolutional Neural Networks", "parent": "Image Models" }, "name": "GoogLeNet", "source_title": "Going Deeper with Convolutions", "source_url": "http://arxiv.org/abs/1409.4842v1" } ]
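Below is a minimal, hypothetical sketch of the two-stage fine-tuning strategy this abstract describes, using torchvision's stock AlexNet; the data loaders, class counts, and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Sketch: ImageNet weights -> large ear dataset (domain adaptation) -> small target set.
import torch
import torch.nn as nn
from torchvision import models

def finetune(model, loader, num_classes, epochs, lr):
    # Replace the classifier head for the new label space, then train end-to-end.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    return model

model = models.alexnet(weights="IMAGENET1K_V1")   # ImageNet-pretrained backbone
# Hypothetical two-stage schedule mirroring the abstract:
# Stage 1 (domain adaptation): large Multi-PIE-derived ear dataset.
#   model = finetune(model, multipie_ear_loader, num_classes=250, epochs=10, lr=1e-3)
# Stage 2: small target ear set (~270 training images), e.g. 2 gender classes.
#   model = finetune(model, target_ear_loader, num_classes=2, epochs=10, lr=1e-4)
```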
https://paperswithcode.com/paper/psyphy-a-psychophysics-driven-evaluation
1611.06448
null
null
PsyPhy: A Psychophysics Driven Evaluation Framework for Visual Recognition
By providing substantial amounts of data and standardized evaluation protocols, datasets in computer vision have helped fuel advances across all areas of visual recognition. But even in light of breakthrough results on recent benchmarks, it is still fair to ask if our recognition algorithms are doing as well as we think they are. The vision sciences at large make use of a very different evaluation regime known as Visual Psychophysics to study visual perception. Psychophysics is the quantitative examination of the relationships between controlled stimuli and the behavioral responses they elicit in experimental test subjects. Instead of using summary statistics to gauge performance, psychophysics directs us to construct item-response curves made up of individual stimulus responses to find perceptual thresholds, thus allowing one to identify the exact point at which a subject can no longer reliably recognize the stimulus class. In this article, we introduce a comprehensive evaluation framework for visual recognition models that is underpinned by this methodology. Over millions of procedurally rendered 3D scenes and 2D images, we compare the performance of well-known convolutional neural networks. Our results bring into question recent claims of human-like performance, and provide a path forward for correcting newly surfaced algorithmic deficiencies.
null
http://arxiv.org/abs/1611.06448v6
http://arxiv.org/pdf/1611.06448v6.pdf
null
[ "Brandon RichardWebster", "Samuel E. Anthony", "Walter J. Scheirer" ]
[]
2016-11-19T00:00:00
null
null
null
null
[]
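A hedged sketch of the item-response-curve idea from the abstract above: sweep a stimulus perturbation level, record recognition accuracy, and locate the threshold where the model stops responding reliably. `model_predict` and `perturb` are hypothetical stand-ins for a classifier and a stimulus transformation (e.g. blur or contrast reduction).

```python
import numpy as np

def item_response_curve(model_predict, perturb, images, labels, levels):
    # Accuracy at each perturbation level, one point per stimulus condition.
    accuracies = []
    for level in levels:
        preds = [model_predict(perturb(x, level)) for x in images]
        accuracies.append(np.mean(np.array(preds) == np.array(labels)))
    return np.array(accuracies)

def perceptual_threshold(levels, accuracies, criterion=0.5):
    # Smallest perturbation level at which accuracy falls below the criterion.
    below = np.where(accuracies < criterion)[0]
    return levels[below[0]] if len(below) else None
```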
https://paperswithcode.com/paper/using-search-queries-to-understand-health
1806.05740
null
null
Using Search Queries to Understand Health Information Needs in Africa
The lack of comprehensive, high-quality health data in developing nations creates a roadblock for combating the impacts of disease. One key challenge is understanding the health information needs of people in these nations. Without understanding people's everyday needs, concerns, and misconceptions, health organizations and policymakers lack the ability to effectively target education and programming efforts. In this paper, we propose a bottom-up approach that uses search data from individuals to uncover and gain insight into health information needs in Africa. We analyze Bing searches related to HIV/AIDS, malaria, and tuberculosis from all 54 African nations. For each disease, we automatically derive a set of common search themes or topics, revealing a widespread interest in various types of information, including disease symptoms, drugs, concerns about breastfeeding, as well as stigma, beliefs in natural cures, and other topics that may be hard to uncover through traditional surveys. We expose the different patterns that emerge in health information needs by demographic groups (age and sex) and country. We also uncover discrepancies in the quality of content returned by search engines to users by topic. Combined, our results suggest that search data can help illuminate health information needs in Africa and inform discussions on health policy and targeted education efforts both on- and offline.
null
http://arxiv.org/abs/1806.05740v2
http://arxiv.org/pdf/1806.05740v2.pdf
null
[ "Rediet Abebe", "Shawndra Hill", "Jennifer Wortman Vaughan", "Peter M. Small", "H. Andrew Schwartz" ]
[ "Misconceptions" ]
2018-06-14T00:00:00
null
null
null
null
[]
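As a toy illustration of deriving search themes (the paper's exact topic model and preprocessing are not reproduced here), a standard LDA over a handful of made-up queries:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

queries = [
    "hiv symptoms early signs",
    "malaria drugs for pregnant women",
    "can tuberculosis be cured naturally",
    "hiv and breastfeeding risk",
]
vec = CountVectorizer(stop_words="english").fit(queries)
counts = vec.transform(queries)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:]]   # top terms per theme
    print(f"topic {k}: {top}")
```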
https://paperswithcode.com/paper/a-sauer-shelah-perles-lemma-for-sumsets
1806.05737
null
null
A Sauer-Shelah-Perles Lemma for Sumsets
We show that any family of subsets $A\subseteq 2^{[n]}$ satisfies $\lvert A\rvert \leq O\bigl(n^{\lceil{d}/{2}\rceil}\bigr)$, where $d$ is the VC dimension of $\{S\triangle T \,\vert\, S,T\in A\}$, and $\triangle$ is the symmetric difference operator. We also observe that replacing $\triangle$ by either $\cup$ or $\cap$ fails to satisfy an analogous statement. Our proof is based on the polynomial method; specifically, on an argument due to [Croot, Lev, Pach '17].
null
http://arxiv.org/abs/1806.05737v2
http://arxiv.org/pdf/1806.05737v2.pdf
null
[ "Zeev Dvir", "Shay Moran" ]
[ "LEMMA" ]
2018-06-14T00:00:00
null
null
null
null
[]
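The statement can be sanity-checked by brute force on tiny ground sets: build the symmetric-difference family and compute its VC dimension exhaustively. The family below is an arbitrary toy example, feasible only for very small $n$.

```python
from itertools import combinations

def shatters(family, D):
    # D is shattered if every subset of D appears as a trace S & D.
    D = set(D)
    traces = {frozenset(S & D) for S in family}
    return len(traces) == 2 ** len(D)

def vc_dimension(family, n):
    d = 0
    for k in range(1, n + 1):
        if any(shatters(family, D) for D in combinations(range(n), k)):
            d = k
    return d

A = [frozenset(), frozenset({0}), frozenset({1, 2}), frozenset({0, 3})]
sumset = {S ^ T for S in A for T in A}   # {S symmetric-difference T : S, T in A}
print(len(A), vc_dimension(sumset, 4))  # |A| versus d = VC-dim of the sumset
```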
https://paperswithcode.com/paper/efficient-sampling-for-gaussian-linear
1806.05738
null
null
Efficient sampling for Gaussian linear regression with arbitrary priors
This paper develops a slice sampler for Bayesian linear regression models with arbitrary priors. The new sampler has two advantages over current approaches. One, it is faster than many custom implementations that rely on auxiliary latent variables, if the number of regressors is large. Two, it can be used with any prior with a density function that can be evaluated up to a normalizing constant, making it ideal for investigating the properties of new shrinkage priors without having to develop custom sampling algorithms. The new sampler takes advantage of the special structure of the linear regression likelihood, allowing it to produce better effective sample size per second than common alternative approaches.
null
http://arxiv.org/abs/1806.05738v1
http://arxiv.org/pdf/1806.05738v1.pdf
null
[ "P. Richard Hahn", "Jingyu He", "Hedibert Lopes" ]
[ "regression" ]
2018-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)", "full_name": "Linear Regression", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.", "name": "Generalized Linear Models", "parent": null }, "name": "Linear Regression", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/learning-influence-receptivity-network
1806.05730
null
null
Learning Influence-Receptivity Network Structure with Guarantee
Traditional works on community detection from observations of information cascade assume that a single adjacency matrix parametrizes all the observed cascades. However, in reality the connection structure usually does not stay the same across cascades. For example, different people have different topics of interest, therefore the connection structure depends on the information/topic content of the cascade. In this paper we consider the case where we observe a sequence of noisy adjacency matrices triggered by information/event with different topic distributions. We propose a novel latent model using the intuition that a connection is more likely to exist between two nodes if they are interested in similar topics, which are common with the information/event. Specifically, we endow each node with two node-topic vectors: an influence vector that measures how influential/authoritative they are on each topic; and a receptivity vector that measures how receptive/susceptible they are to each topic. We show how these two node-topic structures can be estimated from observed adjacency matrices with theoretical guarantee on estimation error, in cases where the topic distributions of the information/event are known, as well as when they are unknown. Experiments on synthetic and real data demonstrate the effectiveness of our model and superior performance compared to state-of-the-art methods.
null
http://arxiv.org/abs/1806.05730v2
http://arxiv.org/pdf/1806.05730v2.pdf
null
[ "Ming Yu", "Varun Gupta", "Mladen Kolar" ]
[ "Community Detection" ]
2018-06-14T00:00:00
null
null
null
null
[]
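A toy rendering of the generative intuition, with illustrative names: influence vectors `U`, receptivity vectors `V`, and a cascade's topic distribution `w` combine into an expected adjacency matrix, observed with noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 20, 3
U = rng.uniform(size=(n, K))          # per-node influence on each topic
V = rng.uniform(size=(n, K))          # per-node receptivity to each topic
w = rng.dirichlet(np.ones(K))         # topic distribution of one cascade

strength = (U * w) @ V.T              # S_ij = sum_k U_ik * w_k * V_jk
A_noisy = strength + 0.1 * rng.normal(size=(n, n))   # one observed adjacency
```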
https://paperswithcode.com/paper/action-learning-for-3d-point-cloud-based
1806.05724
null
null
Action Learning for 3D Point Cloud Based Organ Segmentation
We propose a novel point cloud based 3D organ segmentation pipeline utilizing deep Q-learning. In order to preserve shape properties, the learning process is guided using a statistical shape model. The trained agent directly predicts piece-wise linear transformations for all vertices in each iteration. The mapping from image features to the ideal transformation for object outline estimation is learned in this way. To this end, we introduce aperture features that extract gray values by sampling the 3D volume within the cone centered around the associated vertex and its normal vector. Our approach is also capable of estimating a hierarchical pyramid of non-rigid deformations for multi-resolution meshes. In the application phase, we use a marginal approach to gradually estimate affine as well as non-rigid transformations. We performed extensive evaluations to highlight the robust performance of our approach on a variety of challenge data as well as clinical data. Additionally, our method has a run time ranging from 0.3 to 2.7 seconds to segment each organ. We also show that the proposed method can be applied to different organs, X-ray based modalities, and scanning protocols without the need of transfer learning. As we learn actions, even unseen reference meshes can be processed, as demonstrated in an example with the Visible Human. From this we conclude that our method is robust, and we believe that it can be successfully applied to many more applications, in particular in the interventional imaging space.
null
http://arxiv.org/abs/1806.05724v1
http://arxiv.org/pdf/1806.05724v1.pdf
null
[ "Xia Zhong", "Mario Amrehn", "Nishant Ravikumar", "Shuqing Chen", "Norbert Strobel", "Annette Birkhold", "Markus Kowarschik", "Rebecca Fahrig", "Andreas Maier" ]
[ "Organ Segmentation", "Q-Learning", "Transfer Learning" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/differentiable-submodular-maximization
1803.01785
null
null
Differentiable Submodular Maximization
We consider learning of submodular functions from data. These functions are important in machine learning and have a wide range of applications, e.g. data summarization, feature selection and active learning. Despite their combinatorial nature, submodular functions can be maximized approximately with strong theoretical guarantees in polynomial time. Typically, learning the submodular function and optimization of that function are treated separately, i.e. the function is first learned using a proxy objective and subsequently maximized. In contrast, we show how to perform learning and optimization jointly. By interpreting the output of greedy maximization algorithms as distributions over sequences of items and smoothening these distributions, we obtain a differentiable objective. In this way, we can differentiate through the maximization algorithms and optimize the model to work well with the optimization algorithm. We theoretically characterize the error made by our approach, yielding insights into the tradeoff of smoothness and accuracy. We demonstrate the effectiveness of our approach for jointly learning and optimizing on synthetic maximum cut data, and on real world applications such as product recommendation and image collection summarization.
null
http://arxiv.org/abs/1803.01785v2
http://arxiv.org/pdf/1803.01785v2.pdf
null
[ "Sebastian Tschiatschek", "Aytunc Sahin", "Andreas Krause" ]
[ "Active Learning", "Data Summarization", "feature selection", "Product Recommendation" ]
2018-03-05T00:00:00
null
null
null
null
[]
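A minimal sketch of the smoothing idea: replace greedy's argmax over marginal gains with a temperature-controlled softmax, yielding a distribution over item sequences that can be differentiated through. The coverage function and temperature below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_greedy(f, ground_set, k, tau=0.1):
    S = []
    for _ in range(k):
        rest = [v for v in ground_set if v not in S]
        gains = np.array([f(S + [v]) - f(S) for v in rest])  # marginal gains
        p = np.exp(gains / tau)
        p /= p.sum()                      # softmax instead of hard argmax
        S.append(rest[rng.choice(len(rest), p=p)])
    return S

# Toy monotone submodular function: coverage of a family of sets.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
cover = lambda S: len(set().union(*[sets[v] for v in S])) if S else 0
print(soft_greedy(cover, list(sets), k=2))
```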
https://paperswithcode.com/paper/non-asymptotic-identification-of-lti-systems
1806.05722
null
null
Non-asymptotic Identification of LTI Systems from a Single Trajectory
We consider the problem of learning a realization for a linear time-invariant (LTI) dynamical system from input/output data. Given a single input/output trajectory, we provide finite time analysis for learning the system's Markov parameters, from which a balanced realization is obtained using the classical Ho-Kalman algorithm. By proving a stability result for the Ho-Kalman algorithm and combining it with the sample complexity results for Markov parameters, we show how much data is needed to learn a balanced realization of the system up to a desired accuracy with high probability.
We consider the problem of learning a realization for a linear time-invariant (LTI) dynamical system from input/output data.
http://arxiv.org/abs/1806.05722v2
http://arxiv.org/pdf/1806.05722v2.pdf
null
[ "Samet Oymak", "Necmiye Ozay" ]
[]
2018-06-14T00:00:00
null
null
null
null
[]
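A compact sketch of the classical Ho-Kalman step the abstract refers to, for a SISO system with exactly known Markov parameters: build a Hankel matrix, take a rank-$r$ SVD, and read off a realization up to similarity. Dimensions and the toy check are illustrative.

```python
import numpy as np

def ho_kalman(markov, r, p, q):
    # markov[k] = C A^k B, k = 0 .. p+q-2 (the D term is handled separately).
    H = np.array([[markov[i + j] for j in range(q)] for i in range(p)])
    U, s, Vt = np.linalg.svd(H)
    O = U[:, :r] * np.sqrt(s[:r])         # observability factor (rows: C A^i)
    Q = (Vt[:r].T * np.sqrt(s[:r])).T     # controllability factor (cols: A^j B)
    C = O[:1, :]                          # first row of O
    B = Q[:, :1]                          # first column of Q
    A = np.linalg.pinv(O[:-1]) @ O[1:]    # shift-invariance: O_up A = O_down
    return A, B, C

# Check on a random stable system: recovered Markov parameters should match.
rng = np.random.default_rng(0)
A0 = 0.5 * rng.normal(size=(3, 3)) / np.sqrt(3)
B0, C0 = rng.normal(size=(3, 1)), rng.normal(size=(1, 3))
h = [(C0 @ np.linalg.matrix_power(A0, k) @ B0).item() for k in range(20)]
A, B, C = ho_kalman(h, r=3, p=8, q=8)
recon = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(10)]
print(np.allclose(recon, h[:10], atol=1e-6))
```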
https://paperswithcode.com/paper/safe-policy-improvement-with-baseline
1712.06924
null
null
Safe Policy Improvement with Baseline Bootstrapping
This paper considers Safe Policy Improvement (SPI) in Batch Reinforcement Learning (Batch RL): from a fixed dataset and without direct access to the true environment, train a policy that is guaranteed to perform at least as well as the baseline policy used to collect the data. Our approach, called SPI with Baseline Bootstrapping (SPIBB), is inspired by the knows-what-it-knows paradigm: it bootstraps the trained policy with the baseline when the uncertainty is high. Our first algorithm, $\Pi_b$-SPIBB, comes with SPI theoretical guarantees. We also implement a variant, $\Pi_{\leq b}$-SPIBB, that is even more efficient in practice. We apply our algorithms to a motivational stochastic gridworld domain and further demonstrate on randomly generated MDPs the superiority of SPIBB with respect to existing algorithms, not only in safety but also in mean performance. Finally, we implement a model-free version of SPIBB and show its benefits on a navigation task with deep RL implementation called SPIBB-DQN, which is, to the best of our knowledge, the first RL algorithm relying on a neural network representation able to train efficiently and reliably from batch data, without any interaction with the environment.
Finally, we implement a model-free version of SPIBB and show its benefits on a navigation task with deep RL implementation called SPIBB-DQN, which is, to the best of our knowledge, the first RL algorithm relying on a neural network representation able to train efficiently and reliably from batch data, without any interaction with the environment.
https://arxiv.org/abs/1712.06924v5
https://arxiv.org/pdf/1712.06924v5.pdf
null
[ "Romain Laroche", "Paul Trichelair", "Rémi Tachet des Combes" ]
[ "Reinforcement Learning" ]
2017-12-19T00:00:00
null
null
null
null
[]
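A schematic of the $\Pi_b$-SPIBB bootstrapping constraint as described in the abstract: copy the baseline's probabilities wherever the state-action count in the batch falls below a threshold $N_\wedge$, and act greedily only among well-observed actions. The array shapes and greedy projection are a simplified reading, not the authors' code.

```python
import numpy as np

def spibb_policy(q, baseline, counts, n_wedge):
    # q, baseline, counts: arrays of shape (n_states, n_actions).
    n_states, _ = q.shape
    pi = np.zeros_like(baseline)
    for s in range(n_states):
        rare = counts[s] < n_wedge
        pi[s, rare] = baseline[s, rare]      # bootstrap uncertain actions
        free = 1.0 - pi[s].sum()             # probability mass left to allocate
        if (~rare).any():
            best = np.flatnonzero(~rare)[np.argmax(q[s, ~rare])]
            pi[s, best] += free              # greedy among well-observed actions
        else:
            pi[s] = baseline[s]              # nothing observed: pure baseline
    return pi
```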
https://paperswithcode.com/paper/data-driven-decentralized-optimal-power-flow
1806.06790
null
null
Towards Distributed Energy Services: Decentralizing Optimal Power Flow with Machine Learning
The implementation of optimal power flow (OPF) methods to perform voltage and power flow regulation in electric networks is generally believed to require extensive communication. We consider distribution systems with multiple controllable Distributed Energy Resources (DERs) and present a data-driven approach to learn control policies for each DER to reconstruct and mimic the solution to a centralized OPF problem from solely locally available information. Collectively, all local controllers closely match the centralized OPF solution, providing near optimal performance and satisfaction of system constraints. A rate distortion framework enables the analysis of how well the resulting fully decentralized control policies are able to reconstruct the OPF solution. The methodology provides a natural extension to decide what nodes a DER should communicate with to improve the reconstruction of its individual policy. The method is applied on both single- and three-phase test feeder networks using data from real loads and distributed generators, focusing on DERs that do not exhibit inter-temporal dependencies. It provides a framework for Distribution System Operators to efficiently plan and operate the contributions of DERs to achieve Distributed Energy Services in distribution networks.
null
https://arxiv.org/abs/1806.06790v3
https://arxiv.org/pdf/1806.06790v3.pdf
null
[ "Roel Dobbe", "Oscar Sondermeijer", "David Fridovich-Keil", "Daniel Arnold", "Duncan Callaway", "Claire Tomlin" ]
[ "BIG-bench Machine Learning" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/multilevel-artificial-neural-network-training
1806.05703
null
null
Multilevel Artificial Neural Network Training for Spatially Correlated Learning
Multigrid modeling algorithms are a technique used to accelerate relaxation models running on a hierarchy of similar graphlike structures. We introduce and demonstrate a new method for training neural networks which uses multilevel methods. Using an objective function derived from a graph-distance metric, we perform orthogonally-constrained optimization to find optimal prolongation and restriction maps between graphs. We compare and contrast several methods for performing this numerical optimization, and additionally present some new theoretical results on upper bounds of this type of objective function. Once calculated, these optimal maps between graphs form the core of Multiscale Artificial Neural Network (MsANN) training, a new procedure we present which simultaneously trains a hierarchy of neural network models of varying spatial resolution. Parameter information is passed between members of this hierarchy according to standard coarsening and refinement schedules from the multiscale modelling literature. In our machine learning experiments, these models are able to learn faster than default training, achieving a comparable level of error in an order of magnitude fewer training examples.
Multigrid modeling algorithms are a technique used to accelerate relaxation models running on a hierarchy of similar graphlike structures.
https://arxiv.org/abs/1806.05703v3
https://arxiv.org/pdf/1806.05703v3.pdf
null
[ "C. B. Scott", "Eric Mjolsness" ]
[]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/detection-limits-in-the-high-dimensional
1802.07309
null
null
Detection limits in the high-dimensional spiked rectangular model
We study the problem of detecting the presence of a single unknown spike in a rectangular data matrix, in a high-dimensional regime where the spike has fixed strength and the aspect ratio of the matrix converges to a finite limit. This setup includes Johnstone's spiked covariance model. We analyze the likelihood ratio of the spiked model against an "all noise" null model of reference, and show it has asymptotically Gaussian fluctuations in a region below---but in general not up to---the so-called BBP threshold from random matrix theory. Our result parallels earlier findings of Onatski et al.\ (2013) and Johnstone-Onatski (2015) for spherical spikes. We present a probabilistic approach capable of treating generic product priors. In particular, sparsity in the spike is allowed. Our approach is based on Talagrand's interpretation of the cavity method from spin-glass theory. The question of the maximal parameter region where asymptotic normality is expected to hold is left open. This region is shaped by the prior in a non-trivial way. We conjecture that this is the entire paramagnetic phase of an associated spin-glass model, and is defined by the vanishing of the replica-symmetric solution of Lesieur et al.\ (2015).
null
http://arxiv.org/abs/1802.07309v3
http://arxiv.org/pdf/1802.07309v3.pdf
null
[ "Ahmed El Alaoui", "Michael. I. Jordan" ]
[ "Vocal Bursts Intensity Prediction" ]
2018-02-20T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/neural-generative-models-for-global
1805.08594
null
null
Neural Generative Models for Global Optimization with Gradients
The aim of global optimization is to find the global optimum of arbitrary classes of functions, possibly highly multimodal ones. In this paper we focus on the subproblem of global optimization for differentiable functions and we propose an Evolutionary Search-inspired solution where we model point search distributions via Generative Neural Networks. This approach enables us to model diverse and complex search distributions based on which we can efficiently explore complicated objective landscapes. In our experiments we show the practical superiority of our algorithm versus classical Evolutionary Search and gradient-based solutions on a benchmark set of multimodal functions, and demonstrate how it can be used to accelerate Bayesian Optimization with Gaussian Processes.
The aim of global optimization is to find the global optimum of arbitrary classes of functions, possibly highly multimodal ones.
http://arxiv.org/abs/1805.08594v3
http://arxiv.org/pdf/1805.08594v3.pdf
null
[ "Louis Faury", "Flavian vasile", "Clément Calauzènes", "Olivier Fercoq" ]
[ "Bayesian Optimization", "Gaussian Processes", "global-optimization" ]
2018-05-22T00:00:00
null
null
null
null
[]
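For reference, the classical evolutionary-search loop the paper starts from, with a plain Gaussian search distribution; the paper's contribution is to replace this fixed distribution with a generative neural network, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2, axis=1)          # toy objective (to minimize)

mu, sigma = np.full(2, 3.0), 1.0
for _ in range(30):
    pop = mu + sigma * rng.normal(size=(64, 2))   # sample a population
    elites = pop[np.argsort(f(pop))[:8]]          # keep the 8 best samples
    mu = elites.mean(axis=0)                      # refit the search distribution
    sigma = 0.9 * sigma + 0.1 * elites.std()
print(mu)                                          # should approach the optimum at 0
```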
https://paperswithcode.com/paper/evolving-simple-programs-for-playing-atari
1806.05695
null
null
Evolving simple programs for playing Atari games
Cartesian Genetic Programming (CGP) has previously shown capabilities in image processing tasks by evolving programs with a function set specialized for computer vision. A similar approach can be applied to Atari playing. Programs are evolved using mixed type CGP with a function set suited for matrix operations, including image processing, but allowing for controller behavior to emerge. While the programs are relatively small, many controllers are competitive with state of the art methods for the Atari benchmark set and require less training time. By evaluating the programs of the best evolved individuals, simple but effective strategies can be found.
Cartesian Genetic Programming (CGP) has previously shown capabilities in image processing tasks by evolving programs with a function set specialized for computer vision.
http://arxiv.org/abs/1806.05695v1
http://arxiv.org/pdf/1806.05695v1.pdf
null
[ "Dennis G Wilson", "Sylvain Cussat-Blanc", "Hervé Luga", "Julian F. Miller" ]
[ "Atari Games" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/discovering-latent-patterns-of-urban-cultural
1806.05694
null
null
Discovering Latent Patterns of Urban Cultural Interactions in WeChat for Modern City Planning
Cultural activity is an inherent aspect of urban life and the success of a modern city is largely determined by its capacity to offer generous cultural entertainment to its citizens. To this end, the optimal allocation of cultural establishments and related resources across urban regions becomes of vital importance, as it can reduce financial costs in terms of planning and improve quality of life in the city, more generally. In this paper, we make use of a large longitudinal dataset of user location check-ins from the online social network WeChat to develop a data-driven framework for cultural planning in the city of Beijing. We exploit rich spatio-temporal representations on user activity at cultural venues and use a novel extended version of the traditional latent Dirichlet allocation model that incorporates temporal information to identify latent patterns of urban cultural interactions. Using the characteristic typologies of mobile user cultural activities emitted by the model, we determine the levels of demand for different types of cultural resources across urban areas. We then compare those with the corresponding levels of supply as driven by the presence and spatial reach of cultural venues in local areas to obtain high resolution maps that indicate urban regions with lack of cultural resources, and thus give suggestions for further urban cultural planning and investment optimisation.
null
http://arxiv.org/abs/1806.05694v1
http://arxiv.org/pdf/1806.05694v1.pdf
null
[ "Xiao Zhou", "Anastasios Noulas", "Cecilia Mascoloo", "Zhongxiang Zhao" ]
[]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-power-of-interpolation-understanding-the
1712.06559
null
null
The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning
In this paper we aim to formally explain the phenomenon of fast convergence of SGD observed in modern machine learning. The key observation is that most modern learning architectures are over-parametrized and are trained to interpolate the data by driving the empirical loss (classification and regression) close to zero. While it is still unclear why these interpolated solutions perform well on test data, we show that these regimes allow for fast convergence of SGD, comparable in number of iterations to full gradient descent. For convex loss functions we obtain an exponential convergence bound for {\it mini-batch} SGD parallel to that for full gradient descent. We show that there is a critical batch size $m^*$ such that: (a) SGD iteration with mini-batch size $m\leq m^*$ is nearly equivalent to $m$ iterations of mini-batch size $1$ (\emph{linear scaling regime}). (b) SGD iteration with mini-batch $m> m^*$ is nearly equivalent to a full gradient descent iteration (\emph{saturation regime}). Moreover, for the quadratic loss, we derive explicit expressions for the optimal mini-batch and step size and explicitly characterize the two regimes above. The critical mini-batch size can be viewed as the limit for effective mini-batch parallelization. It is also nearly independent of the data size, implying $O(n)$ acceleration over GD per unit of computation. We give experimental evidence on real data which closely follows our theoretical analyses. Finally, we show how our results fit in the recent developments in training deep neural networks and discuss connections to adaptive rates for SGD and variance reduction.
null
http://arxiv.org/abs/1712.06559v3
http://arxiv.org/pdf/1712.06559v3.pdf
ICML 2018 7
[ "Siyuan Ma", "Raef Bassily", "Mikhail Belkin" ]
[]
2017-12-18T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=1968
http://proceedings.mlr.press/v80/ma18a/ma18a.pdf
the-power-of-interpolation-understanding-the-1
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112", "description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))", "full_name": "Stochastic Gradient Descent", "introduced_year": 1951, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "SGD", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/ivus-net-an-intravascular-ultrasound
1806.03583
null
null
IVUS-Net: An Intravascular Ultrasound Segmentation Network
IntraVascular UltraSound (IVUS) is one of the most effective imaging modalities that provides assistance to experts in order to diagnose and treat cardiovascular diseases. We address a central problem in IVUS image analysis with a Fully Convolutional Network (FCN): automatically delineating the lumen and media-adventitia borders in IVUS images, which is crucial to shortening the diagnosis process and enabling a faster and more accurate 3D reconstruction of the artery. Particularly, we propose an FCN architecture, called IVUS-Net, followed by a post-processing contour extraction step, in order to automatically segment the interior (lumen) and exterior (media-adventitia) regions of the human arteries. We evaluated our IVUS-Net on the test set of a standard publicly available dataset containing 326 IVUS B-mode images with two measurements, namely the Jaccard Measure (JM) and the Hausdorff Distance (HD). The evaluation result shows that IVUS-Net outperforms the state-of-the-art lumen and media segmentation methods by 4% to 20% in terms of HD. IVUS-Net performs well on images in the test set that contain a significant amount of major artifacts such as bifurcations, shadows, and side branches that are not common in the training set. Furthermore, using a modern GPU, IVUS-Net segments each IVUS frame in only 0.15 seconds. The proposed work, to the best of our knowledge, is the first deep learning based method for segmentation of both the lumen and the media vessel walls in 20 MHz IVUS B-mode images that achieves the best results without any manual intervention. Code is available at https://github.com/Kulbear/ivus-segmentation-icsm2018
IntraVascular UltraSound (IVUS) is one of the most effective imaging modalities that provides assistance to experts in order to diagnose and treat cardiovascular diseases.
http://arxiv.org/abs/1806.03583v2
http://arxiv.org/pdf/1806.03583v2.pdf
null
[ "Ji Yang", "Lin Tong", "Mehdi Faraji", "Anup Basu" ]
[ "3D Reconstruction", "GPU", "Segmentation" ]
2018-06-10T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)", "full_name": "Max Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ", "name": "Pooling Operations", "parent": null }, "name": "Max Pooling", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/Jackey9797/FCN", "description": "**Fully Convolutional Networks**, or **FCNs**, are an architecture used mainly for semantic segmentation. They employ solely locally connected layers, such as [convolution](https://paperswithcode.com/method/convolution), pooling and upsampling. Avoiding the use of dense layers means less parameters (making the networks faster to train). It also means an FCN can work for variable image sizes given all connections are local.\r\n\r\nThe network consists of a downsampling path, used to extract and interpret the context, and an upsampling path, which allows for localization. \r\n\r\nFCNs also employ skip connections to recover the fine-grained spatial information lost in the downsampling path.", "full_name": "Fully Convolutional Network", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. 
", "name": "Semantic Segmentation Models", "parent": null }, "name": "FCN", "source_title": "Fully Convolutional Networks for Semantic Segmentation", "source_url": "http://arxiv.org/abs/1605.06211v1" } ]
https://paperswithcode.com/paper/learning-human-optical-flow
1806.05666
null
null
Learning Human Optical Flow
The optical flow of humans is well known to be useful for the analysis of human action. Given this, we devise an optical flow algorithm specifically for human motion and show that it is superior to generic flow methods. Designing a method by hand is impractical, so we develop a new training database of image sequences with ground truth optical flow. For this we use a 3D model of the human body and motion capture data to synthesize realistic flow fields. We then train a convolutional neural network to estimate human flow fields from pairs of images. Since many applications in human motion analysis depend on speed, and we anticipate mobile applications, we base our method on SpyNet with several modifications. We demonstrate that our trained network is more accurate than a wide range of top methods on held-out test data and that it generalizes well to real image sequences. When combined with a person detector/tracker, the approach provides a full solution to the problem of 2D human flow estimation. Both the code and the dataset are available for research.
Given this, we devise an optical flow algorithm specifically for human motion and show that it is superior to generic flow methods.
http://arxiv.org/abs/1806.05666v2
http://arxiv.org/pdf/1806.05666v2.pdf
null
[ "Anurag Ranjan", "Javier Romero", "Michael J. Black" ]
[ "Optical Flow Estimation" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/learning-deep-resnet-blocks-sequentially
1706.04964
null
SksY3deAW
Learning Deep ResNet Blocks Sequentially using Boosting Theory
Deep neural networks are known to be difficult to train due to the instability of back-propagation. A deep \emph{residual network} (ResNet) with identity loops remedies this by stabilizing gradient computations. We prove a boosting theory for the ResNet architecture. We construct $T$ weak module classifiers, each containing two of the $T$ layers, such that the combined strong learner is a ResNet. Therefore, we introduce an alternative Deep ResNet training algorithm, \emph{BoostResNet}, which is particularly suitable for non-differentiable architectures. Our proposed algorithm merely requires a sequential training of $T$ "shallow ResNets", which are inexpensive. We prove that the training error decays exponentially with the depth $T$ if the \emph{weak module classifiers} that we train perform slightly better than some weak baseline. In other words, we propose a weak learning condition and prove a boosting theory for ResNet under the weak learning condition. Our results apply to general multi-class ResNets. A generalization error bound based on margin theory is proved and suggests ResNet's resistance to overfitting for networks with $l_1$-norm-bounded weights.
null
http://arxiv.org/abs/1706.04964v4
http://arxiv.org/pdf/1706.04964v4.pdf
ICML 2018 7
[ "Furong Huang", "Jordan Ash", "John Langford", "Robert Schapire" ]
[]
2017-06-15T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2107
http://proceedings.mlr.press/v80/huang18b/huang18b.pdf
learning-deep-resnet-blocks-sequentially-1
null
[ { "code_snippet_url": "", "description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)", "full_name": "Average Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ", "name": "Pooling Operations", "parent": null }, "name": "Average Pooling", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!", "full_name": "*Communicated@Fast*How Do I Communicate to Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. 
For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "ReLU", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)", "full_name": "1x1 Convolution", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "1x1 Convolution", "source_title": "Network In Network", "source_url": "http://arxiv.org/abs/1312.4400v3" }, { "code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116", "description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. 
Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.", "full_name": "Batch Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Batch Normalization", "source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "source_url": "http://arxiv.org/abs/1502.03167v3" }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75", "description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.", "full_name": "Bottleneck Residual Block", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:", "name": "Skip Connection Blocks", "parent": null }, "name": "Bottleneck Residual Block", "source_title": "Deep Residual Learning for Image Recognition", "source_url": "http://arxiv.org/abs/1512.03385v1" }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157", "description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. 
Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.", "full_name": "Global Average Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ", "name": "Pooling Operations", "parent": null }, "name": "Global Average Pooling", "source_title": "Network In Network", "source_url": "http://arxiv.org/abs/1312.4400v3" }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35", "description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.", "full_name": "Residual Block", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:", "name": "Skip Connection Blocks", "parent": null }, "name": "Residual Block", "source_title": "Deep Residual Learning for Image Recognition", "source_url": "http://arxiv.org/abs/1512.03385v1" }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389", "description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. 
Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.", "full_name": "Kaiming Initialization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.", "name": "Initialization", "parent": null }, "name": "Kaiming Initialization", "source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification", "source_url": "http://arxiv.org/abs/1502.01852v1" }, { "code_snippet_url": null, "description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)", "full_name": "Max Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ", "name": "Pooling Operations", "parent": null }, "name": "Max Pooling", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118", "description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.", "full_name": "Residual Connection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. 
Below you can find a continuously updating list of skip connection methods.", "name": "Skip Connections", "parent": null }, "name": "Residual Connection", "source_title": "Deep Residual Learning for Image Recognition", "source_url": "http://arxiv.org/abs/1512.03385v1" }, { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) on top of each other to form networks: e.g. a ResNet-50 has fifty layers using these blocks.", "full_name": "Residual Network", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.", "name": "Convolutional Neural Networks", "parent": "Image Models" }, "name": "ResNet", "source_title": "Deep Residual Learning for Image Recognition", "source_url": "http://arxiv.org/abs/1512.03385v1" } ]
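The methods attached to this record (residual block, batch normalization, ReLU, Kaiming initialization) compose into a single unit. Below is a minimal PyTorch sketch of a basic residual block, under the simplifying assumption that input and output channel counts match so the identity shortcut needs no projection; deeper ResNets use the bottleneck variant described above instead.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """conv-BN-ReLU -> conv-BN, plus an identity skip connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        for conv in (self.conv1, self.conv2):
            nn.init.kaiming_normal_(conv.weight, nonlinearity="relu")  # He initialization

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # F(x) + x: the residual connection

y = BasicResidualBlock(64)(torch.randn(2, 64, 8, 8))  # shape preserved: (2, 64, 8, 8)
```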
https://paperswithcode.com/paper/interactive-classification-for-deep-learning
1806.05660
null
null
Interactive Classification for Deep Learning Interpretation
We present an interactive system enabling users to manipulate images to explore the robustness and sensitivity of deep learning image classifiers. Using modern web technologies to run in-browser inference, users can remove image features using inpainting algorithms and obtain new classifications in real time, which allows them to ask a variety of "what if" questions by experimentally modifying images and seeing how the model reacts. Our system allows users to compare and contrast what image regions humans and machine learning models use for classification, revealing a wide range of surprising results ranging from spectacular failures (e.g., a "water bottle" image becomes a "concert" when removing a person) to impressive resilience (e.g., a "baseball player" image remains correctly classified even without a glove or base). We demonstrate our system at The 2018 Conference on Computer Vision and Pattern Recognition (CVPR) for the audience to try it live. Our system is open-sourced at https://github.com/poloclub/interactive-classification. A video demo is available at https://youtu.be/llub5GcOF6w.
We present an interactive system enabling users to manipulate images to explore the robustness and sensitivity of deep learning image classifiers.
http://arxiv.org/abs/1806.05660v2
http://arxiv.org/pdf/1806.05660v2.pdf
null
[ "Ángel Alexander Cabrera", "Fred Hohman", "Jason Lin", "Duen Horng Chau" ]
[ "Classification", "Deep Learning", "General Classification" ]
2018-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "**GloVe Embeddings** are a type of word embedding that encode the co-occurrence probability ratio between two words as vector differences. GloVe uses a weighted least squares objective $J$ that minimizes the difference between the dot product of the vectors of two words and the logarithm of their number of co-occurrences:\r\n\r\n$$ J=\\sum\\_{i, j=1}^{V}f\\left(𝑋\\_{i j}\\right)(w^{T}\\_{i}\\tilde{w}_{j} + b\\_{i} + \\tilde{b}\\_{j} - \\log{𝑋}\\_{ij})^{2} $$\r\n\r\nwhere $w\\_{i}$ and $b\\_{i}$ are the word vector and bias respectively of word $i$, $\\tilde{w}_{j}$ and $b\\_{j}$ are the context word vector and bias respectively of word $j$, $X\\_{ij}$ is the number of times word $i$ occurs in the context of word $j$, and $f$ is a weighting function that assigns lower weights to rare and frequent co-occurrences.", "full_name": "GloVe Embeddings", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Word Embeddings", "parent": null }, "name": "GloVe", "source_title": "GloVe: Global Vectors for Word Representation", "source_url": "https://aclanthology.org/D14-1162" } ]
https://paperswithcode.com/paper/structure-infused-copy-mechanisms-for
1806.05658
null
null
Structure-Infused Copy Mechanisms for Abstractive Summarization
Seq2seq learning has produced promising results on summarization. However, in many cases, system summaries still struggle to keep the meaning of the original intact. They may miss out important words or relations that play critical roles in the syntactic structure of source sentences. In this paper, we present structure-infused copy mechanisms to facilitate copying important words and relations from the source sentence to summary sentence. The approach naturally combines source dependency structure with the copy mechanism of an abstractive sentence summarizer. Experimental results demonstrate the effectiveness of incorporating source-side syntactic information in the system, and our proposed approach compares favorably to state-of-the-art methods.
In this paper, we present structure-infused copy mechanisms to facilitate copying important words and relations from the source sentence to summary sentence.
http://arxiv.org/abs/1806.05658v2
http://arxiv.org/pdf/1806.05658v2.pdf
COLING 2018 8
[ "Kaiqiang Song", "Lin Zhao", "Fei Liu" ]
[ "Abstractive Text Summarization", "Sentence" ]
2018-06-14T00:00:00
https://aclanthology.org/C18-1146
https://aclanthology.org/C18-1146.pdf
structure-infused-copy-mechanisms-for-2
null
[]
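This record concerns copy mechanisms for abstractive summarization. As background, the generic pointer-style copy step that the paper's structure-infused variant builds on can be sketched in a few lines of NumPy; this is the plain mechanism, not the paper's method, and all names are illustrative.

```python
import numpy as np

def copy_mixture(p_vocab, attn, src_token_ids, p_gen):
    """Mix a vocabulary distribution with attention mass copied from the source:
    p_final = p_gen * p_vocab + (1 - p_gen) * scatter(attn onto source token ids)."""
    p_final = p_gen * p_vocab
    np.add.at(p_final, src_token_ids, (1.0 - p_gen) * attn)  # scatter-add copy probability
    return p_final

p_vocab = np.full(10, 0.1)                 # toy uniform vocabulary distribution
attn = np.array([0.5, 0.3, 0.2])           # attention over 3 source tokens
print(copy_mixture(p_vocab, attn, np.array([4, 4, 7]), p_gen=0.6))  # sums to 1.0
```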
https://paperswithcode.com/paper/adding-new-tasks-to-a-single-network-with
1805.11119
null
null
Adding New Tasks to a Single Network with Weight Transformations using Binary Masks
Visual recognition algorithms are required today to exhibit adaptive abilities. Given a deep model trained on a specific, given task, it would be highly desirable to be able to adapt incrementally to new tasks, preserving scalability as the number of new tasks increases, while at the same time avoiding catastrophic forgetting issues. Recent work has shown that masking the internal weights of a given original conv-net through learned binary variables is a promising strategy. We build upon this intuition and take into account more elaborated affine transformations of the convolutional weights that include learned binary masks. We show that with our generalization it is possible to achieve significantly higher levels of adaptation to new tasks, enabling the approach to compete with fine tuning strategies by requiring slightly more than 1 bit per network parameter per additional task. Experiments on two popular benchmarks showcase the power of our approach, that achieves the new state of the art on the Visual Decathlon Challenge.
null
http://arxiv.org/abs/1805.11119v2
http://arxiv.org/pdf/1805.11119v2.pdf
null
[ "Massimiliano Mancini", "Elisa Ricci", "Barbara Caputo", "Samuel Rota Bulò" ]
[]
2018-05-28T00:00:00
null
null
null
null
[]
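The abstract starts from prior work that masks a frozen conv-net's weights with learned binary variables. A minimal NumPy sketch of that baseline is below, with the mask obtained by thresholding real-valued scores; the paper's contribution generalizes this with learned affine transformations of the weights, whose exact parameterization is not reproduced here.

```python
import numpy as np

def masked_task_weights(w_base, scores, threshold=0.0):
    """Per-task weights: gate a frozen base kernel with a learned binary mask.
    Storing roughly one bit per parameter per task is what keeps this scalable."""
    mask = (scores > threshold).astype(w_base.dtype)
    return w_base * mask

w = np.random.randn(3, 3, 64, 64)   # a frozen convolutional kernel
s = np.random.randn(*w.shape)       # real-valued scores learned for the new task
w_task = masked_task_weights(w, s)  # weights used when running the new task
```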
https://paperswithcode.com/paper/abstract-meaning-representation-for-multi
1806.05655
null
null
Abstract Meaning Representation for Multi-Document Summarization
Generating an abstract from a collection of documents is a desirable capability for many real-world applications. However, abstractive approaches to multi-document summarization have not been thoroughly investigated. This paper studies the feasibility of using Abstract Meaning Representation (AMR), a semantic representation of natural language grounded in linguistic theory, as a form of content representation. Our approach condenses source documents to a set of summary graphs following the AMR formalism. The summary graphs are then transformed to a set of summary sentences in a surface realization step. The framework is fully data-driven and flexible. Each component can be optimized independently using small-scale, in-domain training data. We perform experiments on benchmark summarization datasets and report promising results. We also describe opportunities and challenges for advancing this line of research.
null
http://arxiv.org/abs/1806.05655v1
http://arxiv.org/pdf/1806.05655v1.pdf
COLING 2018 8
[ "Kexin Liao", "Logan Lebanoff", "Fei Liu" ]
[ "Abstract Meaning Representation", "Document Summarization", "Multi-Document Summarization" ]
2018-06-14T00:00:00
https://aclanthology.org/C18-1101
https://aclanthology.org/C18-1101.pdf
abstract-meaning-representation-for-multi-1
null
[]
https://paperswithcode.com/paper/hgr-net-a-fusion-network-for-hand-gesture
1806.05653
null
null
HGR-Net: A Fusion Network for Hand Gesture Segmentation and Recognition
We propose a two-stage convolutional neural network (CNN) architecture for robust recognition of hand gestures, called HGR-Net, where the first stage performs accurate semantic segmentation to determine hand regions, and the second stage identifies the gesture. The segmentation stage architecture is based on the combination of fully convolutional residual network and atrous spatial pyramid pooling. Although the segmentation sub-network is trained without depth information, it is particularly robust against challenges such as illumination variations and complex backgrounds. The recognition stage deploys a two-stream CNN, which fuses the information from the red-green-blue and segmented images by combining their deep representations in a fully connected layer before classification. Extensive experiments on public datasets show that our architecture achieves performance almost as good as the state of the art in segmentation and recognition of static hand gestures, at a fraction of the training time, run time, and model size. Our method can operate at an average of 23 ms per frame.
We propose a two-stage convolutional neural network (CNN) architecture for robust recognition of hand gestures, called HGR-Net, where the first stage performs accurate semantic segmentation to determine hand regions, and the second stage identifies the gesture.
https://arxiv.org/abs/1806.05653v3
https://arxiv.org/pdf/1806.05653v3.pdf
null
[ "Amirhossein Dadashzadeh", "Alireza Tavakoli Targhi", "Maryam Tahmasbi", "Majid Mirmehdi" ]
[ "Gesture Recognition", "Hand Gesture Recognition", "Hand-Gesture Recognition", "Hand Gesture Segmentation", "Hand Segmentation", "Segmentation", "Semantic Segmentation" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/deep-learning-to-represent-sub-grid-processes
1806.04731
null
null
Deep learning to represent sub-grid processes in climate models
The representation of nonlinear sub-grid processes, especially clouds, has been a major source of uncertainty in climate models for decades. Cloud-resolving models better represent many of these processes and can now be run globally but only for short-term simulations of at most a few years because of computational limitations. Here we demonstrate that deep learning can be used to capture many advantages of cloud-resolving modeling at a fraction of the computational cost. We train a deep neural network to represent all atmospheric sub-grid processes in a climate model by learning from a multi-scale model in which convection is treated explicitly. The trained neural network then replaces the traditional sub-grid parameterizations in a global general circulation model in which it freely interacts with the resolved dynamics and the surface-flux scheme. The prognostic multi-year simulations are stable and closely reproduce not only the mean climate of the cloud-resolving simulation but also key aspects of variability, including precipitation extremes and the equatorial wave spectrum. Furthermore, the neural network approximately conserves energy despite not being explicitly instructed to. Finally, we show that the neural network parameterization generalizes to new surface forcing patterns but struggles to cope with temperatures far outside its training manifold. Our results show the feasibility of using deep learning for climate model parameterization. In a broader context, we anticipate that data-driven Earth System Model development could play a key role in reducing climate prediction uncertainty in the coming decade.
We train a deep neural network to represent all atmospheric sub-grid processes in a climate model by learning from a multi-scale model in which convection is treated explicitly.
http://arxiv.org/abs/1806.04731v3
http://arxiv.org/pdf/1806.04731v3.pdf
null
[ "Stephan Rasp", "Michael S. Pritchard", "Pierre Gentine" ]
[ "Deep Learning" ]
2018-06-12T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/grounded-textual-entailment
1806.05645
null
null
Grounded Textual Entailment
Capturing semantic relations between sentences, such as entailment, is a long-standing challenge for computational semantics. Logic-based models analyse entailment in terms of possible worlds (interpretations, or situations) where a premise P entails a hypothesis H iff in all worlds where P is true, H is also true. Statistical models view this relationship probabilistically, addressing it in terms of whether a human would likely infer H from P. In this paper, we wish to bridge these two perspectives, by arguing for a visually-grounded version of the Textual Entailment task. Specifically, we ask whether models can perform better if, in addition to P and H, there is also an image (corresponding to the relevant "world" or "situation"). We use a multimodal version of the SNLI dataset (Bowman et al., 2015) and we compare "blind" and visually-augmented models of textual entailment. We show that visual information is beneficial, but we also conduct an in-depth error analysis that reveals that current multimodal models are not performing "grounding" in an optimal fashion.
Capturing semantic relations between sentences, such as entailment, is a long-standing challenge for computational semantics.
http://arxiv.org/abs/1806.05645v1
http://arxiv.org/pdf/1806.05645v1.pdf
COLING 2018 8
[ "Hoa Trong Vu", "Claudio Greco", "Aliia Erofeeva", "Somayeh Jafaritazehjan", "Guido Linders", "Marc Tanti", "Alberto Testoni", "Raffaella Bernardi", "Albert Gatt" ]
[ "Natural Language Inference" ]
2018-06-14T00:00:00
https://aclanthology.org/C18-1199
https://aclanthology.org/C18-1199.pdf
grounded-textual-entailment-2
null
[]
https://paperswithcode.com/paper/evaluation-of-unsupervised-compositional
1806.04713
null
null
Evaluation of Unsupervised Compositional Representations
We evaluated various compositional models, from bag-of-words representations to compositional RNN-based models, on several extrinsic supervised and unsupervised evaluation benchmarks. Our results confirm that weighted vector averaging can outperform context-sensitive models in most benchmarks, but structural features encoded in RNN models can also be useful in certain classification tasks. We analyzed some of the evaluation datasets to identify the aspects of meaning they measure and the characteristics of the various models that explain their performance variance.
We evaluated various compositional models, from bag-of-words representations to compositional RNN-based models, on several extrinsic supervised and unsupervised evaluation benchmarks.
http://arxiv.org/abs/1806.04713v2
http://arxiv.org/pdf/1806.04713v2.pdf
COLING 2018 8
[ "Hanan Aldarmaki", "Mona Diab" ]
[ "General Classification" ]
2018-06-12T00:00:00
https://aclanthology.org/C18-1226
https://aclanthology.org/C18-1226.pdf
evaluation-of-unsupervised-compositional-1
null
[]
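The strongest baseline in this evaluation is weighted vector averaging. Below is a minimal NumPy sketch of one common frequency-based weighting, $a/(a + p(w))$ (SIF-style); the exact schemes the paper evaluates may differ, and all names are illustrative.

```python
import numpy as np

def weighted_average(vectors, word_probs, a=1e-3):
    """Sentence vector as a frequency-weighted average of its word embeddings."""
    weights = a / (a + np.asarray(word_probs))          # rare words weigh more
    return (weights[:, None] * np.asarray(vectors)).mean(axis=0)

vecs = np.random.randn(4, 50)                 # 4 word embeddings, dimension 50
probs = np.array([0.01, 0.002, 0.03, 1e-4])   # unigram probabilities of the words
print(weighted_average(vecs, probs).shape)    # (50,)
```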
https://paperswithcode.com/paper/self-imitation-learning
1806.05635
null
null
Self-Imitation Learning
This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions. This algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard exploration Atari games and is competitive to the state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.
This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions.
http://arxiv.org/abs/1806.05635v1
http://arxiv.org/pdf/1806.05635v1.pdf
ICML 2018 7
[ "Junhyuk Oh", "Yijie Guo", "Satinder Singh", "Honglak Lee" ]
[ "Atari Games", "Imitation Learning", "MuJoCo" ]
2018-06-14T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2101
http://proceedings.mlr.press/v80/oh18b/oh18b.pdf
self-imitation-learning-1
null
[]
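One way to operationalize "reproduce the agent's past good decisions" is to weight the log-likelihood of stored actions by the clipped advantage $(R - V)_+$, which is the core of the SIL losses. The NumPy sketch below follows that reading; it omits the prioritized replay buffer and the on-policy A2C terms the paper combines these with.

```python
import numpy as np

def sil_losses(log_pi_a, v, returns):
    """Only transitions whose past return R exceeded the current value estimate
    V(s) contribute, so the agent imitates its own good decisions."""
    advantage = np.maximum(returns - v, 0.0)        # (R - V)_+
    policy_loss = -(log_pi_a * advantage).mean()    # imitate profitable actions
    value_loss = 0.5 * (advantage ** 2).mean()      # pull V(s) up toward good returns
    return policy_loss, value_loss

pl, vl = sil_losses(np.log([0.3, 0.7]), np.array([1.0, 2.0]), np.array([1.5, 1.5]))
```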
https://paperswithcode.com/paper/teaching-multiple-concepts-to-a-forgetful
1805.08322
null
null
Teaching Multiple Concepts to a Forgetful Learner
How can we help a forgetful learner learn multiple concepts within a limited time frame? While there have been extensive studies in designing optimal schedules for teaching a single concept given a learner's memory model, existing approaches for teaching multiple concepts are typically based on heuristic scheduling techniques without theoretical guarantees. In this paper, we look at the problem from the perspective of discrete optimization and introduce a novel algorithmic framework for teaching multiple concepts with strong performance guarantees. Our framework is both generic, allowing the design of teaching schedules for different memory models, and also interactive, allowing the teacher to adapt the schedule to the underlying forgetting mechanisms of the learner. Furthermore, for a well-known memory model, we are able to identify a regime of model parameters where our framework is guaranteed to achieve high performance. We perform extensive evaluations using simulations along with real user studies in two concrete applications: (i) an educational app for online vocabulary teaching; and (ii) an app for teaching novices how to recognize animal species from images. Our results demonstrate the effectiveness of our algorithm compared to popular heuristic approaches.
null
https://arxiv.org/abs/1805.08322v4
https://arxiv.org/pdf/1805.08322v4.pdf
NeurIPS 2019 12
[ "Anette Hunziker", "Yuxin Chen", "Oisin Mac Aodha", "Manuel Gomez Rodriguez", "Andreas Krause", "Pietro Perona", "Yisong Yue", "Adish Singla" ]
[ "Scheduling" ]
2018-05-21T00:00:00
http://papers.nips.cc/paper/8659-teaching-multiple-concepts-to-a-forgetful-learner
http://papers.nips.cc/paper/8659-teaching-multiple-concepts-to-a-forgetful-learner.pdf
teaching-multiple-concepts-to-a-forgetful-1
null
[]
https://paperswithcode.com/paper/learning-in-pomdps-with-monte-carlo-tree
1806.05631
null
null
Learning in POMDPs with Monte Carlo Tree Search
The POMDP is a powerful framework for reasoning under outcome and information uncertainty, but constructing an accurate POMDP model is difficult. Bayes-Adaptive Partially Observable Markov Decision Processes (BA-POMDPs) extend POMDPs to allow the model to be learned during execution. BA-POMDPs are a Bayesian RL approach that, in principle, allows for an optimal trade-off between exploitation and exploration. Unfortunately, BA-POMDPs are currently impractical to solve for any non-trivial domain. In this paper, we extend the Monte-Carlo Tree Search method POMCP to BA-POMDPs and show that the resulting method, which we call BA-POMCP, is able to tackle problems that previous solution methods have been unable to solve. Additionally, we introduce several techniques that exploit the BA-POMDP structure to improve the efficiency of BA-POMCP along with proof of their convergence.
null
http://arxiv.org/abs/1806.05631v1
http://arxiv.org/pdf/1806.05631v1.pdf
ICML 2017 8
[ "Sammie Katt", "Frans A. Oliehoek", "Christopher Amato" ]
[]
2018-06-14T00:00:00
https://icml.cc/Conferences/2017/Schedule?showEvent=750
http://proceedings.mlr.press/v70/katt17a/katt17a.pdf
learning-in-pomdps-with-monte-carlo-tree-1
null
[]
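POMCP, which this paper extends to BA-POMDPs, selects actions inside the search tree with a UCB rule. Here is a minimal Python sketch of that selection step, independent of the Bayes-adaptive machinery the paper adds; the exploration constant and the infinite priority for unvisited actions are conventional choices.

```python
import math

def ucb_select(node_visits, action_visits, action_values, c=1.0):
    """UCB1: pick the action maximizing value plus an exploration bonus;
    unvisited actions get infinite priority so each is tried at least once."""
    def score(a):
        if action_visits[a] == 0:
            return math.inf
        return action_values[a] + c * math.sqrt(math.log(node_visits) / action_visits[a])
    return max(range(len(action_values)), key=score)

print(ucb_select(node_visits=10, action_visits=[3, 7, 0], action_values=[0.4, 0.6, 0.0]))  # 2
```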
https://paperswithcode.com/paper/voxceleb2-deep-speaker-recognition
1806.05622
null
null
VoxCeleb2: Deep Speaker Recognition
The objective of this paper is speaker recognition under noisy and unconstrained conditions. We make two key contributions. First, we introduce a very large-scale audio-visual speaker recognition dataset collected from open-source media. Using a fully automated pipeline, we curate VoxCeleb2 which contains over a million utterances from over 6,000 speakers. This is several times larger than any publicly available speaker recognition dataset. Second, we develop and compare Convolutional Neural Network (CNN) models and training strategies that can effectively recognise identities from voice under various conditions. The models trained on the VoxCeleb2 dataset surpass the performance of previous works on a benchmark dataset by a significant margin.
The objective of this paper is speaker recognition under noisy and unconstrained conditions.
http://arxiv.org/abs/1806.05622v2
http://arxiv.org/pdf/1806.05622v2.pdf
null
[ "Joon Son Chung", "Arsha Nagrani", "Andrew Zisserman" ]
[ "Speaker Recognition", "Speaker Verification" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dynaslam-tracking-mapping-and-inpainting-in
1806.05620
null
null
DynaSLAM: Tracking, Mapping and Inpainting in Dynamic Scenes
The assumption of scene rigidity is typical in SLAM algorithms. Such a strong assumption limits the use of most visual SLAM systems in populated real-world environments, which are the target of several relevant applications like service robotics or autonomous vehicles. In this paper we present DynaSLAM, a visual SLAM system that, building over ORB-SLAM2 [1], adds the capabilities of dynamic object detection and background inpainting. DynaSLAM is robust in dynamic scenarios for monocular, stereo and RGB-D configurations. We are capable of detecting the moving objects either by multi-view geometry, deep learning or both. Having a static map of the scene allows inpainting the frame background that has been occluded by such dynamic objects. We evaluate our system in public monocular, stereo and RGB-D datasets. We study the impact of several accuracy/speed trade-offs to assess the limits of the proposed methodology. DynaSLAM outperforms the accuracy of standard visual SLAM baselines in highly dynamic scenarios. And it also estimates a map of the static parts of the scene, which is a must for long-term applications in real-world environments.
And it also estimates a map of the static parts of the scene, which is a must for long-term applications in real-world environments.
http://arxiv.org/abs/1806.05620v2
http://arxiv.org/pdf/1806.05620v2.pdf
null
[ "Berta Bescos", "José M. Fácil", "Javier Civera", "José Neira" ]
[ "Autonomous Vehicles", "object-detection", "Object Detection" ]
2018-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "ORB-SLAM2 is a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real-time on standard CPUs in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city.\r\n\r\nSource: [Mur-Artal and Tardos](https://arxiv.org/pdf/1610.06475v2.pdf)\r\n\r\nImage source: [Mur-Artal and Tardos](https://arxiv.org/pdf/1610.06475v2.pdf)", "full_name": "ORB-Simultaneous localization and mapping", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Localization Models", "parent": null }, "name": "ORB-SLAM2", "source_title": "ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras", "source_url": "http://arxiv.org/abs/1610.06475v2" } ]
https://paperswithcode.com/paper/stochastic-variance-reduced-policy-gradient
1806.05618
null
null
Stochastic Variance-Reduced Policy Gradient
In this paper, we propose a novel reinforcement-learning algorithm consisting in a stochastic variance-reduced version of policy gradient for solving Markov Decision Processes (MDPs). Stochastic variance-reduced gradient (SVRG) methods have proven to be very successful in supervised learning. However, their adaptation to policy gradient is not straightforward and needs to account for I) a non-concave objective function; II) approximations in the full gradient computation; and III) a non-stationary sampling process. The result is SVRPG, a stochastic variance-reduced policy gradient algorithm that leverages on importance weights to preserve the unbiasedness of the gradient estimate. Under standard assumptions on the MDP, we provide convergence guarantees for SVRPG with a convergence rate that is linear under increasing batch sizes. Finally, we suggest practical variants of SVRPG, and we empirically evaluate them on continuous MDPs.
In this paper, we propose a novel reinforcement-learning algorithm consisting in a stochastic variance-reduced version of policy gradient for solving Markov Decision Processes (MDPs).
http://arxiv.org/abs/1806.05618v1
http://arxiv.org/pdf/1806.05618v1.pdf
ICML 2018 7
[ "Matteo Papini", "Damiano Binaghi", "Giuseppe Canonaco", "Matteo Pirotta", "Marcello Restelli" ]
[ "Reinforcement Learning" ]
2018-06-14T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2066
http://proceedings.mlr.press/v80/papini18a/papini18a.pdf
stochastic-variance-reduced-policy-gradient-1
null
[]
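The abstract describes SVRPG as an SVRG-style correction of the policy gradient that uses importance weights to keep the estimate unbiased. The schematic NumPy sketch below shows the shape of one such update, assuming the three gradient estimates are computed elsewhere and that the snapshot mini-batch gradient has already been importance-weighted to the current policy; names and signatures are illustrative.

```python
import numpy as np

def svrpg_step(theta, grad_mb, grad_mb_snapshot_iw, full_grad_snapshot, lr):
    """Variance-reduced ascent direction: mini-batch gradient at theta, corrected
    by the snapshot's mini-batch gradient (importance-weighted) and the full
    gradient stored when the snapshot was taken."""
    v = grad_mb - grad_mb_snapshot_iw + full_grad_snapshot
    return theta + lr * v  # ascent: policy gradient maximizes expected return
```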
https://paperswithcode.com/paper/from-self-ception-to-image-self-ception-a
1806.05610
null
null
From Self-ception to Image Self-ception: A method to represent an image with its own approximations
A concept of defining images based on their own approximations, called 'Self-ception', is proposed here. In this regard, an algorithm is proposed to implement self-ception for images, which we call 'Image Self-ception' since we use it for images. We can control the accuracy of this self-ception representation by deciding how many segments or regions we want to use for the representation. Some self-ception images are included in the paper. The video versions of the proposed image self-ception algorithm in action are shown in a YouTube channel (find it by Googling image self-ception).
null
http://arxiv.org/abs/1806.05610v1
http://arxiv.org/pdf/1806.05610v1.pdf
null
[ "Hamed Shah-Hosseini" ]
[]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/gender-prediction-in-english-hindi-code-mixed
1806.05600
null
null
Gender Prediction in English-Hindi Code-Mixed Social Media Content : Corpus and Baseline System
The rapid expansion in the usage of social media networking sites leads to a huge amount of unprocessed user generated data which can be used for text mining. Author profiling, the problem of automatically determining profiling aspects like the author's gender and age group from a text, is gaining much popularity in computational linguistics. Most of the past research in author profiling is concentrated on English texts \cite{1,2}. However many users often change the language while posting on social media, which is called code-mixing, and it introduces challenges in the field of text classification and author profiling like variations in spelling, non-grammatical structure and transliteration \cite{3}. There are very few English-Hindi code-mixed annotated datasets of social media content present online \cite{4}. In this paper, we analyze the task of author's gender prediction in code-mixed content and present a corpus of English-Hindi texts collected from Twitter which is annotated with author's gender. We also explore language identification of every word in this corpus. We present a supervised classification baseline system which uses various machine learning algorithms to identify the gender of an author using a text, based on character and word level features.
null
http://arxiv.org/abs/1806.05600v1
http://arxiv.org/pdf/1806.05600v1.pdf
null
[ "Ankush Khandelwal", "Sahil Swami", "Syed Sarfaraz Akhtar", "Manish Shrivastava" ]
[ "Author Profiling", "Gender Prediction", "General Classification", "Language Identification", "text-classification", "Text Classification", "Transliteration" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-survey-on-open-information-extraction
1806.05599
null
null
A Survey on Open Information Extraction
We provide a detailed overview of the various approaches that were proposed to date to solve the task of Open Information Extraction. We present the major challenges that such systems face, show the evolution of the suggested approaches over time and depict the specific issues they address. In addition, we provide a critique of the commonly applied evaluation procedures for assessing the performance of Open IE systems and highlight some directions for future work.
null
http://arxiv.org/abs/1806.05599v1
http://arxiv.org/pdf/1806.05599v1.pdf
COLING 2018 8
[ "Christina Niklaus", "Matthias Cetto", "André Freitas", "Siegfried Handschuh" ]
[ "Open Information Extraction", "Survey" ]
2018-06-14T00:00:00
https://aclanthology.org/C18-1326
https://aclanthology.org/C18-1326.pdf
a-survey-on-open-information-extraction-2
null
[]
https://paperswithcode.com/paper/there-are-many-consistent-explanations-of
1806.05594
null
rkgKBhA5Y7
There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average
Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters. To understand consistency regularization, we conceptually explore how loss geometry interacts with training procedures. The consistency loss dramatically improves generalization performance over supervised-only training; however, we show that SGD struggles to converge on the consistency loss and continues to make large steps that lead to changes in predictions on the test data. Motivated by these observations, we propose to train consistency-based methods with Stochastic Weight Averaging (SWA), a recent approach which averages weights along the trajectory of SGD with a modified learning rate schedule. We also propose fast-SWA, which further accelerates convergence by averaging multiple points within each cycle of a cyclical learning rate schedule. With weight averaging, we achieve the best known semi-supervised results on CIFAR-10 and CIFAR-100, over many different quantities of labeled training data. For example, we achieve 5.0% error on CIFAR-10 with only 4000 labels, compared to the previous best result in the literature of 6.3%.
Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters.
http://arxiv.org/abs/1806.05594v3
http://arxiv.org/pdf/1806.05594v3.pdf
ICLR 2019 5
[ "Ben Athiwaratkun", "Marc Finzi", "Pavel Izmailov", "Andrew Gordon Wilson" ]
[ "Domain Adaptation", "Semi-Supervised Image Classification" ]
2018-06-14T00:00:00
https://openreview.net/forum?id=rkgKBhA5Y7
https://openreview.net/pdf?id=rkgKBhA5Y7
there-are-many-consistent-explanations-of-1
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112", "description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))", "full_name": "Stochastic Gradient Descent", "introduced_year": 1951, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "SGD", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/sparsely-grouped-multi-task-generative
1805.07509
null
null
Sparsely Grouped Multi-task Generative Adversarial Networks for Facial Attribute Manipulation
Recent Image-to-Image Translation algorithms have achieved significant progress in neural style transfer and image attribute manipulation tasks. However, existing approaches require exhaustively labelling training data, which is labor demanding, difficult to scale up, and hard to migrate into new domains. To overcome such a key limitation, we propose Sparsely Grouped Generative Adversarial Networks (SG-GAN) as a novel approach that can translate images on sparsely grouped datasets where only a few samples for training are labelled. Using a novel one-input multi-output architecture, SG-GAN is well-suited for tackling sparsely grouped learning and multi-task learning. The proposed model can translate images among multiple groups using only a single commonly trained model. To experimentally validate advantages of the new model, we apply the proposed method to tackle a series of attribute manipulation tasks for facial images. Experimental results demonstrate that SG-GAN can generate image translation results of comparable quality with baselines methods on adequately labelled datasets and results of superior quality on sparsely grouped datasets. The official implementation is publicly available:https://github.com/zhangqianhui/Sparsely-Grouped-GAN.
To overcome such a key limitation, we propose Sparsely Grouped Generative Adversarial Networks (SG-GAN) as a novel approach that can translate images on sparsely grouped datasets where only a few samples for training are labelled.
https://arxiv.org/abs/1805.07509v7
https://arxiv.org/pdf/1805.07509v7.pdf
null
[ "Jichao Zhang", "Yezhi Shu", "Songhua Xu", "Gongze Cao", "Fan Zhong", "Meng Liu", "Xueying Qin" ]
[ "Attribute", "Image-to-Image Translation", "Multi-Task Learning", "Style Transfer", "Translation" ]
2018-05-19T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/tract-orientation-mapping-for-bundle-specific
1806.05580
null
null
Tract orientation mapping for bundle-specific tractography
While the major white matter tracts are of great interest to numerous studies in neuroscience and medicine, their manual dissection in larger cohorts from diffusion MRI tractograms is time-consuming, requires expert knowledge and is hard to reproduce. Tract orientation mapping (TOM) is a novel concept that facilitates bundle-specific tractography based on a learned mapping from the original fiber orientation distribution function (fODF) peaks to a list of tract orientation maps (also abbr. TOM). Each TOM represents one of the known tracts with each voxel containing no more than one orientation vector. TOMs can act as a prior or even as direct input for tractography. We use an encoder-decoder fully-convolutional neural network architecture to learn the required mapping. In comparison to previous concepts for the reconstruction of specific bundles, the presented one avoids various cumbersome processing steps like whole brain tractography, atlas registration or clustering. We compare it to four state of the art bundle recognition methods on 20 different bundles in a total of 105 subjects from the Human Connectome Project. Results are anatomically convincing even for difficult tracts, while reaching low angular errors, unprecedented runtimes and top accuracy values (Dice). Our code and our data are openly available.
While the major white matter tracts are of great interest to numerous studies in neuroscience and medicine, their manual dissection in larger cohorts from diffusion MRI tractograms is time-consuming, requires expert knowledge and is hard to reproduce.
http://arxiv.org/abs/1806.05580v1
http://arxiv.org/pdf/1806.05580v1.pdf
null
[ "Jakob Wasserthal", "Peter F. Neher", "Klaus H. Maier-Hein" ]
[ "Clustering", "Decoder", "Diffusion MRI" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/lap-a-linearize-and-project-method-for
1705.09992
null
null
LAP: a Linearize and Project Method for Solving Inverse Problems with Coupled Variables
Many inverse problems involve two or more sets of variables that represent different physical quantities but are tightly coupled with each other. For example, image super-resolution requires joint estimation of the image and motion parameters from noisy measurements. Exploiting this structure is key for efficiently solving these large-scale optimization problems, which are often ill-conditioned. In this paper, we present a new method called Linearize And Project (LAP) that offers a flexible framework for solving inverse problems with coupled variables. LAP is most promising for cases when the subproblem corresponding to one of the variables is considerably easier to solve than the other. LAP is based on a Gauss-Newton method, and thus after linearizing the residual, it eliminates one block of variables through projection. Due to the linearization, this block can be chosen freely. Further, LAP supports direct, iterative, and hybrid regularization as well as constraints. Therefore LAP is attractive, e.g., for ill-posed imaging problems. These traits differentiate LAP from common alternatives for this type of problem such as variable projection (VarPro) and block coordinate descent (BCD). Our numerical experiments compare the performance of LAP to BCD and VarPro using three coupled problems whose forward operators are linear with respect to one block and nonlinear for the other set of variables.
LAP is most promising for cases when the subproblem corresponding to one of the variables is considerably easier to solve than the other.
http://arxiv.org/abs/1705.09992v3
http://arxiv.org/pdf/1705.09992v3.pdf
null
[ "James Herring", "James Nagy", "Lars Ruthotto" ]
[ "Image Super-Resolution", "Super-Resolution" ]
2017-05-28T00:00:00
null
null
null
null
[]
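To make the linearize-then-eliminate idea concrete, here is a bare-bones NumPy sketch of one LAP-style Gauss-Newton step for a coupled residual r(x, y). It omits the direct/iterative/hybrid regularization and constraints the paper supports, and the function name is mine.

```python
import numpy as np

def lap_step(r, Jx, Jy):
    """One Linearize-And-Project step: linearize the residual, eliminate
    the x-block by projecting onto the orthogonal complement of range(Jx),
    solve the reduced problem for dy, then back-substitute for dx."""
    Qx, _ = np.linalg.qr(Jx)                 # orthonormal basis of range(Jx)
    P = np.eye(len(r)) - Qx @ Qx.T           # projector onto its complement
    dy, *_ = np.linalg.lstsq(P @ Jy, -P @ r, rcond=None)
    dx, *_ = np.linalg.lstsq(Jx, -(r + Jy @ dy), rcond=None)
    return dx, dy
```

Because the elimination happens after linearization, either block can be projected out, which is the flexibility the abstract contrasts with variable projection.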
https://paperswithcode.com/paper/fruit-recognition-from-images-using-deep
1712.00580
null
null
Fruit recognition from images using deep learning
In this paper we introduce a new, high-quality dataset of images containing fruits. We also present the results of some numerical experiments for training a neural network to detect fruits. We discuss the reasons why we chose to use fruits in this project by proposing a few applications that could use this kind of neural network.
In this paper we introduce a new, high-quality dataset of images containing fruits.
http://arxiv.org/abs/1712.00580v9
http://arxiv.org/pdf/1712.00580v9.pdf
null
[ "Horea Mureşan", "Mihai Oltean" ]
[ "Deep Learning" ]
2017-12-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/autoregressive-quantile-networks-for
1806.05575
null
null
Autoregressive Quantile Networks for Generative Modeling
We introduce autoregressive implicit quantile networks (AIQN), a fundamentally different approach to generative modeling than those commonly used, that implicitly captures the distribution using quantile regression. AIQN is able to achieve superior perceptual quality and improvements in evaluation metrics, without incurring a loss of sample diversity. The method can be applied to many existing models and architectures. In this work we extend the PixelCNN model with AIQN and demonstrate results on CIFAR-10 and ImageNet using Inception score, FID, non-cherry-picked samples, and inpainting results. We consistently observe that AIQN yields a highly stable algorithm that improves perceptual quality while maintaining a highly diverse distribution.
We introduce autoregressive implicit quantile networks (AIQN), a fundamentally different approach to generative modeling than those commonly used, that implicitly captures the distribution using quantile regression.
http://arxiv.org/abs/1806.05575v1
http://arxiv.org/pdf/1806.05575v1.pdf
ICML 2018 7
[ "Georg Ostrovski", "Will Dabney", "Rémi Munos" ]
[ "Diversity", "quantile regression", "regression" ]
2018-06-14T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2416
http://proceedings.mlr.press/v80/ostrovski18a/ostrovski18a.pdf
autoregressive-quantile-networks-for-1
null
[]
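The quantile-regression objective underlying AIQN is the pinball loss. A minimal PyTorch version for a single quantile level is shown below; the full model samples the quantile level and conditions the network on it, which this sketch does not do.

```python
import torch

def quantile_loss(pred, target, tau):
    """Pinball loss for quantile level tau in (0, 1). Minimizing it in
    expectation drives pred toward the tau-quantile of the target
    distribution, the core objective behind implicit quantile networks."""
    err = target - pred
    return torch.mean(torch.maximum(tau * err, (tau - 1) * err))
```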
https://paperswithcode.com/paper/weakly-supervised-learning-for-tool
1806.05573
null
null
Weakly-Supervised Learning for Tool Localization in Laparoscopic Videos
Surgical tool localization is an essential task for the automatic analysis of endoscopic videos. In the literature, existing methods for tool localization, tracking and segmentation require training data that is fully annotated, thereby limiting the size of the datasets that can be used and the generalization of the approaches. In this work, we propose to circumvent the lack of annotated data with weak supervision. We propose a deep architecture, trained solely on image level annotations, that can be used for both tool presence detection and localization in surgical videos. Our architecture relies on a fully convolutional neural network, trained end-to-end, enabling us to localize surgical tools without explicit spatial annotations. We demonstrate the benefits of our approach on a large public dataset, Cholec80, which is fully annotated with binary tool presence information and of which 5 videos have been fully annotated with bounding boxes and tool centers for the evaluation.
We propose a deep architecture, trained solely on image level annotations, that can be used for both tool presence detection and localization in surgical videos.
http://arxiv.org/abs/1806.05573v2
http://arxiv.org/pdf/1806.05573v2.pdf
null
[ "Armine Vardazaryan", "Didier Mutter", "Jacques Marescaux", "Nicolas Padoy" ]
[ "Surgical tool detection", "Weakly-supervised Learning" ]
2018-06-14T00:00:00
null
null
null
null
[]
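A common way to realize this kind of weakly supervised localization, and a plausible reading of the paper's design, is a fully convolutional network whose per-class activation maps are reduced to image-level logits by global spatial pooling. The toy architecture below is illustrative, not the authors' exact network.

```python
import torch
import torch.nn as nn

class WeakToolNet(nn.Module):
    """Per-class spatial activation maps are pooled to image-level presence
    logits, so training needs only image-level labels while the maps
    localize the tools at test time."""
    def __init__(self, num_tools=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, num_tools, 1)  # 1x1 conv -> class maps

    def forward(self, x):
        maps = self.classifier(self.features(x))    # B x C x H x W
        logits = maps.flatten(2).max(dim=2).values  # global max pooling
        return logits, maps

# Train with BCEWithLogitsLoss on binary presence labels; the argmax over
# each class map then gives an approximate tool location.
```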
https://paperswithcode.com/paper/direct-automated-quantitative-measurement-of
1806.05570
null
null
Direct Automated Quantitative Measurement of Spine via Cascade Amplifier Regression Network
Automated quantitative measurement of the spine (i.e., multiple indices estimation of heights, widths, areas, and so on for the vertebral body and disc) is of the utmost importance in clinical spinal disease diagnoses, such as osteoporosis, intervertebral disc degeneration, and lumbar disc herniation, yet it remains an unprecedented challenge due to the variety of spine structures and the high dimensionality of the indices to be estimated. In this paper, we propose a novel cascade amplifier regression network (CARN), which includes the CARN architecture and a local shape-constrained manifold regularization (LSCMR) loss function, to achieve accurate direct automated multiple indices estimation. The CARN architecture is composed of a cascade amplifier network (CAN) for expressive feature embedding and a linear regression model for multiple indices estimation. The CAN consists of cascade amplifier units (AUs), which enable selective feature reuse by amplifying effective features and suppressing redundant ones as feature maps propagate between adjacent layers, thus producing an expressive feature embedding. During training, the LSCMR is utilized to alleviate overfitting and generate realistic estimates by learning the distribution of the multiple indices. Experiments on MR images of 195 subjects show that the proposed CARN achieves impressive performance, with mean absolute errors of 1.2496 mm, 1.2887 mm, and 1.2692 mm for the estimation of 15 disc heights, 15 vertebral body heights, and all indices combined, respectively. The proposed method has great potential in clinical spinal disease diagnoses.
The CARN architecture is composed of a cascade amplifier network (CAN) for expressive feature embedding and a linear regression model for multiple indices estimation.
http://arxiv.org/abs/1806.05570v1
http://arxiv.org/pdf/1806.05570v1.pdf
null
[ "Shumao Pang", "Stephanie Leung", "Ilanit Ben Nachum", "Qianjin Feng", "Shuo Li" ]
[ "regression" ]
2018-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)", "full_name": "Linear Regression", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.", "name": "Generalized Linear Models", "parent": null }, "name": "Linear Regression", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/cardiac-motion-scoring-with-segment-and
1806.05569
null
null
Cardiac Motion Scoring with Segment- and Subject-level Non-Local Modeling
Motion scoring of the cardiac myocardium is of paramount importance for early detection and diagnosis of various cardiac diseases. It aims at classifying regional wall motion into one of four types (normal, hypokinetic, akinetic, and dyskinetic) and is extremely challenging due to the complex myocardium deformation and the subtle inter-class differences in motion patterns. All existing work on automated motion analysis is focused on binary abnormality detection, avoiding the much more demanding motion scoring, which is urgently required in real clinical practice yet has never been investigated before. In this work, we propose Cardiac-MOS, the first powerful method for cardiac motion scoring from cardiac MR sequences, based on deep convolutional neural networks. Due to the locality of convolution, the relationship between distant non-local responses of the feature map, which is closely related to motion differences between segments, cannot be explored. In Cardiac-MOS, such non-local relationships are modeled with non-local neural networks within each segment and across all segments of one subject, i.e., segment- and subject-level non-local modeling, leading to a clear performance improvement. Besides, Cardiac-MOS can effectively extract motion information from MR sequences of various lengths by interpolating the convolution kernel along the temporal dimension, and can therefore be applied to MR sequences from multiple sources. Experiments on 1440 myocardium segments of 90 subjects from short-axis MR sequences of multiple lengths prove that Cardiac-MOS achieves reliable performance, with a correlation of 0.926 for motion score index estimation and an accuracy of 77.4\% for motion scoring. Cardiac-MOS also outperforms all existing work on binary abnormality detection. As the first automatic motion scoring solution, Cardiac-MOS demonstrates great potential for future clinical application.
null
http://arxiv.org/abs/1806.05569v1
http://arxiv.org/pdf/1806.05569v1.pdf
null
[ "Wufeng Xue", "Gary Brahm", "Stephanie Leung", "Ogla Shmuilovich", "Shuo Li" ]
[ "Anomaly Detection" ]
2018-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/adaptive-shooting-for-bots-in-first-person
1806.05554
null
null
Adaptive Shooting for Bots in First Person Shooter Games Using Reinforcement Learning
In current state-of-the-art commercial first person shooter games, computer-controlled bots, also known as non-player characters, can often be easily distinguishable from those controlled by humans. Tell-tale signs such as failed navigation, "sixth sense" knowledge of human players' whereabouts and deterministic, scripted behaviors are some of the causes of this. We propose, however, that one of the biggest indicators of non-humanlike behavior in these games can be found in the weapon shooting capability of the bot. Consistently perfect accuracy and "locking on" to opponents in their visual field from any distance are indicative capabilities of bots that are not found in human players. Traditionally, the bot is handicapped in some way with either a timed reaction delay or a random perturbation to its aim, which doesn't adapt or improve its technique over time. We hypothesize that enabling the bot to learn the skill of shooting through trial and error, in the same way a human player learns, will lead to greater variation in game-play and produce less predictable non-player characters. This paper describes a reinforcement learning shooting mechanism for adapting shooting over time based on a dynamic reward signal from the amount of damage caused to opponents.
null
http://arxiv.org/abs/1806.05554v1
http://arxiv.org/pdf/1806.05554v1.pdf
null
[ "Frank G. Glavin", "Michael G. Madden" ]
[ "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2018-06-14T00:00:00
null
null
null
null
[]
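The abstract describes learning to shoot by trial and error with damage as the reward signal. A minimal tabular SARSA learner of that flavor is sketched below; the state and action encodings are invented for illustration and are not the paper's.

```python
import random
from collections import defaultdict

# States could encode opponent distance and relative movement; actions are
# discrete aim adjustments; the reward is the damage dealt after firing.
Q = defaultdict(float)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["aim_left", "aim_center", "aim_right", "lead_target"]

def choose(state):
    if random.random() < EPSILON:                     # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit

def sarsa_update(s, a, reward, s_next, a_next):
    """On-policy temporal-difference update toward the damage reward."""
    td_target = reward + GAMMA * Q[(s_next, a_next)]
    Q[(s, a)] += ALPHA * (td_target - Q[(s, a)])
```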
https://paperswithcode.com/paper/apple-picker-automatic-particle-picking-a-low
1802.00469
null
null
APPLE Picker: Automatic Particle Picking, a Low-Effort Cryo-EM Framework
Particle picking is a crucial first step in the computational pipeline of single-particle cryo-electron microscopy (cryo-EM). Selecting particles from the micrographs is difficult especially for small particles with low contrast. As high-resolution reconstruction typically requires hundreds of thousands of particles, manually picking that many particles is often too time-consuming. While semi-automated particle picking is currently a popular approach, it may suffer from introducing manual bias into the selection process. In addition, semi-automated particle picking is still somewhat time-consuming. This paper presents the APPLE (Automatic Particle Picking with Low user Effort) picker, a simple and novel approach for fast, accurate, and fully automatic particle picking. While our approach was inspired by template matching, it is completely template-free. This approach is evaluated on publicly available datasets containing micrographs of $\beta$-galactosidase and keyhole limpet hemocyanin projections.
Selecting particles from the micrographs is difficult especially for small particles with low contrast.
http://arxiv.org/abs/1802.00469v2
http://arxiv.org/pdf/1802.00469v2.pdf
null
[ "Ayelet Heimowitz", "Joakim andén", "Amit Singer" ]
[ "Template Matching" ]
2018-02-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/correlation-tracking-via-robust-region
1806.05530
null
null
Correlation Tracking via Robust Region Proposals
Recently, correlation filter-based trackers have received extensive attention due to their simplicity and superior speed. However, such trackers perform poorly when the target undergoes occlusion, viewpoint change or other challenging attributes, owing to their pre-defined sampling strategy. To tackle these issues, in this paper, we propose an adaptive region proposal scheme to facilitate visual tracking. To be more specific, a novel tracking-monitoring indicator is introduced to forecast tracking failure. Afterwards, we incorporate detection and scale proposals, respectively, to recover from model drift as well as handle aspect ratio variation. We test the proposed algorithm on several challenging sequences; the results demonstrate that the proposed tracker performs favourably against state-of-the-art trackers.
null
http://arxiv.org/abs/1806.05530v1
http://arxiv.org/pdf/1806.05530v1.pdf
null
[ "Yuqi Han", "Jinghong Nan", "Zengshuo Zhang", "Jingjing Wang", "Baojun Zhao" ]
[ "Region Proposal", "Visual Tracking" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/variational-message-passing-with-structured
1803.05589
null
HyH9lbZAW
Variational Message Passing with Structured Inference Networks
Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret. We propose a variational message-passing algorithm for variational inference in such models. We make three contributions. First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE). Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE. Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference. By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods.
Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret.
http://arxiv.org/abs/1803.05589v2
http://arxiv.org/pdf/1803.05589v2.pdf
ICLR 2018 1
[ "Wu Lin", "Nicolas Hubacher", "Mohammad Emtiyaz Khan" ]
[ "Variational Inference" ]
2018-03-15T00:00:00
https://openreview.net/forum?id=HyH9lbZAW
https://openreview.net/pdf?id=HyH9lbZAW
variational-message-passing-with-structured-1
null
[ { "code_snippet_url": "", "description": "In today’s digital age, USD Coin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're trying to recover a lost USD Coin wallet, knowing where to get help is essential. That’s why the USD Coin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the USD Coin Customer Support Number +1-833-534-1729\r\nUSD Coin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. USD Coin Transaction Not Confirmed\r\nOne of the most common concerns is when a USD Coin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. USD Coin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A USD Coin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost USD Coin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost USD Coin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. USD Coin Deposit Not Received\r\nIf someone has sent you USD Coin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A USD Coin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. USD Coin Transaction Stuck or Pending\r\nSometimes your USD Coin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. USD Coin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word USD Coin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the USD Coin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and USD Coin tech.\r\n\r\n24/7 Availability: USD Coin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About USD Coin Support and Wallet Issues\r\nQ1: Can USD Coin support help me recover stolen BTC?\r\nA: While USD Coin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: USD Coin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not USD Coin’s official number (USD Coin is decentralized), it connects you to trained professionals experienced in resolving all major USD Coin issues.\r\n\r\nFinal Thoughts\r\nUSD Coin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the USD Coin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.", "full_name": "USD Coin Customer Service Number +1-833-534-1729", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "USD Coin Customer Service Number +1-833-534-1729", "source_title": "Auto-Encoding Variational Bayes", "source_url": "http://arxiv.org/abs/1312.6114v10" } ]
https://paperswithcode.com/paper/el-gan-embedding-loss-driven-generative
1806.05525
null
null
EL-GAN: Embedding Loss Driven Generative Adversarial Networks for Lane Detection
Convolutional neural networks have been successfully applied to semantic segmentation problems. However, there are many problems that are inherently not pixel-wise classification problems but are nevertheless frequently formulated as semantic segmentation. This ill-posed formulation consequently necessitates hand-crafted, scenario-specific and computationally expensive post-processing methods to convert the per-pixel probability maps to final desired outputs. Generative adversarial networks (GANs) can be used to make the semantic segmentation network output more realistic or better structure-preserving, decreasing the dependency on potentially complex post-processing. In this work, we propose EL-GAN: a GAN framework to mitigate the discussed problem using an embedding loss. With EL-GAN, we discriminate based on learned embeddings of both the labels and the prediction at the same time. This results in more stable training due to the better discriminative information, benefiting from seeing both `fake' and `real' predictions at the same time. This substantially stabilizes the adversarial training process. We use the TuSimple lane marking challenge to demonstrate that with our proposed framework it is viable to overcome the inherent anomalies of posing it as a semantic segmentation problem. Not only is the output considerably more similar to the labels when compared to conventional methods, but the subsequent post-processing is also simpler and crosses the competitive 96% accuracy threshold.
null
http://arxiv.org/abs/1806.05525v2
http://arxiv.org/pdf/1806.05525v2.pdf
null
[ "Mohsen Ghafoorian", "Cedric Nugteren", "Nóra Baka", "Olaf Booij", "Michael Hofmann" ]
[ "Lane Detection", "Segmentation", "Semantic Segmentation" ]
2018-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. 
Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. 
Expert help is just a call away—+1-833-534-1729.", "full_name": "Dogecoin Customer Service Number +1-833-534-1729", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "Dogecoin Customer Service Number +1-833-534-1729", "source_title": "Generative Adversarial Networks", "source_url": "https://arxiv.org/abs/1406.2661v1" } ]
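One plausible reading of the embedding loss is a distance between discriminator embeddings of the (image, ground truth) pair and the (image, prediction) pair. The sketch below assumes a hypothetical `disc_embed` module and is not the authors' exact formulation.

```python
import torch.nn.functional as F

def embedding_loss(disc_embed, image, label_map, pred_map):
    """EL-GAN-style embedding loss sketch: penalize the generator by the
    distance between the discriminator's embedding of the real pair and
    of the predicted pair. disc_embed is an assumed module returning a
    feature vector for an (image, map) pair."""
    e_real = disc_embed(image, label_map).detach()  # target embedding
    e_fake = disc_embed(image, pred_map)
    return F.mse_loss(e_fake, e_real)
```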
https://paperswithcode.com/paper/improved-density-based-spatio-textual
1806.05522
null
null
Improved Density-Based Spatio-Textual Clustering on Social Media
DBSCAN may not be sufficient when the input data type is heterogeneous in terms of textual description. When we aim to discover clusters of geo-tagged records relevant to a particular point-of-interest (POI) on social media, examining only one type of input data (e.g., the tweets relevant to a POI) may draw an incomplete picture of clusters due to noisy regions. To overcome this problem, we introduce DBSTexC, a newly defined density-based clustering algorithm using spatio-textual information. We first characterize POI-relevant and POI-irrelevant tweets as the texts that include and do not include a POI name or its semantically coherent variations, respectively. By leveraging the proportion of POI-relevant and POI-irrelevant tweets, the proposed algorithm demonstrates much higher clustering performance than the DBSCAN case in terms of $\mathcal{F}_1$ score and its variants. While DBSTexC performs exactly as DBSCAN with textually homogeneous inputs, it far outperforms DBSCAN with textually heterogeneous inputs. Furthermore, to improve the clustering quality by fully capturing the geographic distribution of tweets, we present fuzzy DBSTexC (F-DBSTexC), an extension of DBSTexC that incorporates the notion of fuzzy clustering. We then demonstrate the robustness of F-DBSTexC via intensive experiments. The computational complexity of our algorithms is also shown analytically and numerically.
null
http://arxiv.org/abs/1806.05522v1
http://arxiv.org/pdf/1806.05522v1.pdf
null
[ "Minh D. Nguyen", "Won-Yong Shin" ]
[ "Clustering" ]
2018-06-14T00:00:00
null
null
null
null
[]
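The core departure from DBSCAN is that core points are decided from the mix of POI-relevant and POI-irrelevant tweets in the epsilon-neighborhood. Below is a toy density test in that spirit; the thresholds and the exact rule are illustrative, not the paper's definition.

```python
def is_core_point(point, relevant, irrelevant, eps, min_rel, max_irr):
    """A tweet location counts as a core point only if its
    eps-neighborhood holds enough POI-relevant tweets and not too many
    POI-irrelevant ones."""
    def within(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2
    n_rel = sum(within(point, q) for q in relevant)
    n_irr = sum(within(point, q) for q in irrelevant)
    return n_rel >= min_rel and n_irr <= max_irr
```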
https://paperswithcode.com/paper/semaxis-a-lightweight-framework-to
1806.05521
null
null
SemAxis: A Lightweight Framework to Characterize Domain-Specific Word Semantics Beyond Sentiment
Because word semantics can substantially change across communities and contexts, capturing domain-specific word semantics is an important challenge. Here, we propose SEMAXIS, a simple yet powerful framework to characterize word semantics using many semantic axes in word-vector spaces beyond sentiment. We demonstrate that SEMAXIS can capture nuanced semantic representations in multiple online communities. We also show that, when the sentiment axis is examined, SEMAXIS outperforms the state-of-the-art approaches in building domain-specific sentiment lexicons.
Because word semantics can substantially change across communities and contexts, capturing domain-specific word semantics is an important challenge.
http://arxiv.org/abs/1806.05521v1
http://arxiv.org/pdf/1806.05521v1.pdf
ACL 2018 7
[ "Jisun An", "Haewoon Kwak", "Yong-Yeol Ahn" ]
[]
2018-06-14T00:00:00
https://aclanthology.org/P18-1228
https://aclanthology.org/P18-1228.pdf
semaxis-a-lightweight-framework-to-1
null
[]
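The SemAxis construction itself is compact: build an axis as the difference of the mean vectors of two pole word sets, then score words by their cosine similarity with it. A minimal NumPy version of that recipe:

```python
import numpy as np

def semaxis_score(word_vec, pos_vecs, neg_vecs):
    """Score a word along a semantic axis: the axis is the difference of
    the mean vectors of the positive and negative pole word sets, and the
    score is the cosine similarity of the word vector with that axis."""
    axis = np.mean(pos_vecs, axis=0) - np.mean(neg_vecs, axis=0)
    return float(word_vec @ axis /
                 (np.linalg.norm(word_vec) * np.linalg.norm(axis)))
```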
https://paperswithcode.com/paper/features-projections-and-representation
1801.10055
null
null
Features, Projections, and Representation Change for Generalized Planning
Generalized planning is concerned with the characterization and computation of plans that solve many instances at once. In the standard formulation, a generalized plan is a mapping from feature or observation histories into actions, assuming that the instances share a common pool of features and actions. This assumption, however, excludes the standard relational planning domains where actions and objects change across instances. In this work, we extend the standard formulation of generalized planning to such domains. This is achieved by projecting the actions over the features, resulting in a common set of abstract actions which can be tested for soundness and completeness, and which can be used for generating general policies such as "if the gripper is empty, pick the clear block above x and place it on the table" that achieve the goal clear(x) in any Blocksworld instance. In this policy, "pick the clear block above x" is an abstract action that may represent the action Unstack(a, b) in one situation and the action Unstack(b, c) in another. Transformations are also introduced for computing such policies by means of fully observable non-deterministic (FOND) planners. The value of generalized representations for learning general policies is also discussed.
null
http://arxiv.org/abs/1801.10055v4
http://arxiv.org/pdf/1801.10055v4.pdf
null
[ "Blai Bonet", "Hector Geffner" ]
[]
2018-01-30T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/split-door-criterion-identification-of-causal
1611.09414
null
null
Split-door criterion: Identification of causal effects through auxiliary outcomes
We present a method for estimating causal effects in time series data when fine-grained information about the outcome of interest is available. Specifically, we examine what we call the split-door setting, where the outcome variable can be split into two parts: one that is potentially affected by the cause being studied and another that is independent of it, with both parts sharing the same (unobserved) confounders. We show that under these conditions, the problem of identification reduces to that of testing for independence among observed variables, and present a method that uses this approach to automatically find subsets of the data that are causally identified. We demonstrate the method by estimating the causal impact of Amazon's recommender system on traffic to product pages, finding thousands of examples within the dataset that satisfy the split-door criterion. Unlike past studies based on natural experiments that were limited to a single product category, our method applies to a large and representative sample of products viewed on the site. In line with previous work, we find that the widely-used click-through rate (CTR) metric overestimates the causal impact of recommender systems; depending on the product category, we estimate that 50-80\% of the traffic attributed to recommender systems would have happened even without any recommendations. We conclude with guidelines for using the split-door criterion as well as a discussion of other contexts where the method can be applied.
We present a method for estimating causal effects in time series data when fine-grained information about the outcome of interest is available.
http://arxiv.org/abs/1611.09414v2
http://arxiv.org/pdf/1611.09414v2.pdf
null
[ "Amit Sharma", "Jake M. Hofman", "Duncan J. Watts" ]
[ "Recommendation Systems", "Time Series Analysis" ]
2016-11-28T00:00:00
null
null
null
null
[]
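A toy version of the criterion: keep only data slices where the auxiliary outcome component looks independent of the cause, then estimate the effect on the other component. The correlation test and the linear effect estimate below are illustrative simplifications of the paper's procedure.

```python
import numpy as np
from scipy.stats import pearsonr

def split_door_effect(x, y_direct, y_indep, alpha=0.05):
    """Split-door sketch: y_indep is the outcome part that should be
    unaffected by x. If the two look dependent, the slice is not causally
    identified; otherwise read off a simple linear effect on y_direct."""
    _, p_value = pearsonr(x, y_indep)
    if p_value < alpha:          # dependence detected -> reject the slice
        return None
    slope, _ = np.polyfit(x, y_direct, 1)   # crude linear effect estimate
    return slope
```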
https://paperswithcode.com/paper/real-time-cardiovascular-mr-with-spatio
1803.05192
null
null
Real-time Cardiovascular MR with Spatio-temporal Artifact Suppression using Deep Learning - Proof of Concept in Congenital Heart Disease
PURPOSE: Real-time assessment of ventricular volumes requires high acceleration factors. Residual convolutional neural networks (CNN) have shown potential for removing artifacts caused by data undersampling. In this study we investigated the effect of different radial sampling patterns on the accuracy of a CNN. We also acquired actual real-time undersampled radial data in patients with congenital heart disease (CHD), and compare CNN reconstruction to Compressed Sensing (CS). METHODS: A 3D (2D plus time) CNN architecture was developed, and trained using 2276 gold-standard paired 3D data sets, with 14x radial undersampling. Four sampling schemes were tested, using 169 previously unseen 3D 'synthetic' test data sets. Actual real-time tiny Golden Angle (tGA) radial SSFP data was acquired in 10 new patients (122 3D data sets), and reconstructed using the 3D CNN as well as a CS algorithm; GRASP. RESULTS: Sampling pattern was shown to be important for image quality, and accurate visualisation of cardiac structures. For actual real-time data, overall reconstruction time with CNN (including creation of aliased images) was shown to be more than 5x faster than GRASP. Additionally, CNN image quality and accuracy of biventricular volumes was observed to be superior to GRASP for the same raw data. CONCLUSION: This paper has demonstrated the potential for the use of a 3D CNN for deep de-aliasing of real-time radial data, within the clinical setting. Clinical measures of ventricular volumes using real-time data with CNN reconstruction are not statistically significantly different from the gold-standard, cardiac gated, BH techniques.
null
http://arxiv.org/abs/1803.05192v3
http://arxiv.org/pdf/1803.05192v3.pdf
null
[ "Andreas Hauptmann", "Simon Arridge", "Felix Lucka", "Vivek Muthurangu", "Jennifer A. Steeden" ]
[ "compressed sensing", "De-aliasing" ]
2018-03-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/translations-as-additional-contexts-for
1806.05516
null
null
Translations as Additional Contexts for Sentence Classification
In sentence classification tasks, additional contexts, such as the neighboring sentences, may improve the accuracy of the classifier. However, such contexts are domain-dependent and thus cannot be used for another classification task with an inappropriate domain. In contrast, we propose the use of translated sentences as context that is always available regardless of the domain. We find that naive feature expansion of translations gains only marginal improvements and may decrease the performance of the classifier, due to possibly inaccurate translations that produce noisy sentence vectors. To this end, we present multiple context fixing attachment (MCFA), a series of modules attached to multiple sentence vectors to fix the noise in the vectors using the other sentence vectors as context. We show that our method performs competitively compared to previous models, achieving the best classification performance on multiple data sets. We are the first to use translations as domain-free contexts for sentence classification.
We are the first to use translations as domain-free contexts for sentence classification.
http://arxiv.org/abs/1806.05516v1
http://arxiv.org/pdf/1806.05516v1.pdf
null
[ "Reinald Kim Amplayo", "Kyungjae Lee", "Jinyeong Yeo", "Seung-won Hwang" ]
[ "Classification", "General Classification", "Sentence", "Sentence Classification", "Subjectivity Analysis", "Text Classification" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-exact-equivalence-of-distance-and-kernel
1806.05514
null
null
The Exact Equivalence of Distance and Kernel Methods for Hypothesis Testing
Distance-based tests, also called "energy statistics", are leading methods for two-sample and independence tests from the statistics community. Kernel-based tests, developed from "kernel mean embeddings", are leading methods for two-sample and independence tests from the machine learning community. A fixed-point transformation was previously proposed to connect the distance methods and kernel methods for the population statistics. In this paper, we propose a new bijective transformation between metrics and kernels. It simplifies the fixed-point transformation, inherits similar theoretical properties, allows distance methods to be exactly the same as kernel methods for sample statistics and p-value, and better preserves the data structure upon transformation. Our results further advance the understanding in distance and kernel-based tests, streamline the code base for implementing these tests, and enable a rich literature of distance-based and kernel-based methodologies to directly communicate with each other.
null
https://arxiv.org/abs/1806.05514v5
https://arxiv.org/pdf/1806.05514v5.pdf
null
[ "Cencheng Shen", "Joshua T. Vogelstein" ]
[ "Two-sample testing" ]
2018-06-14T00:00:00
null
null
null
null
[]
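The flavor of such metric-kernel correspondences can be checked numerically. Below, the classical center-point transformation turns Euclidean distances into a kernel, and the biased (V-statistic) kernel MMD matches half the energy distance exactly. The paper's proposed bijective transformation differs in its details, so treat this as the classical baseline rather than the new method.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (40, 3))
Y = rng.normal(0.5, 1.0, (50, 3))
z = np.zeros(3)  # fixed "center" point for the induced kernel

def d(A, B):  # pairwise Euclidean distance matrix
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

def k(A, B):  # distance-induced kernel, centered at z
    return 0.5 * (np.linalg.norm(A - z, axis=1)[:, None]
                  + np.linalg.norm(B - z, axis=1)[None, :] - d(A, B))

energy = 2 * d(X, Y).mean() - d(X, X).mean() - d(Y, Y).mean()
mmd2 = k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
assert np.isclose(mmd2, 0.5 * energy)  # distance test == kernel test
```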
https://paperswithcode.com/paper/humor-detection-in-english-hindi-code-mixed
1806.05513
null
null
Humor Detection in English-Hindi Code-Mixed Social Media Content : Corpus and Baseline System
The tremendous amount of user-generated data on social networking sites has led to the growing popularity of automatic text classification in the field of computational linguistics over the past decade. Within this domain, one problem that has drawn the attention of many researchers is automatic humor detection in texts. In-depth semantic understanding of the text is required to detect humor, which makes the problem difficult to automate. With the increase in the number of social media users, many multilingual speakers often interchange between languages while posting on social media, which is called code-mixing. It introduces some challenges in the field of linguistic analysis of social media content (Barman et al., 2014), like spelling variations and non-grammatical structures in a sentence. Past research includes detecting puns in texts (Kao et al., 2016) and humor in one-liners (Mihalcea et al., 2010) in a single language, but with the tremendous amount of code-mixed data available online, there is a need to develop techniques which detect humor in code-mixed tweets. In this paper, we analyze the task of humor detection in texts and describe a freely available corpus containing English-Hindi code-mixed tweets annotated with humorous (H) or non-humorous (N) tags. We also tagged the words in the tweets with language tags (English/Hindi/Others). Moreover, we describe the experiments carried out on the corpus and provide a baseline classification system which distinguishes between humorous and non-humorous texts.
null
http://arxiv.org/abs/1806.05513v1
http://arxiv.org/pdf/1806.05513v1.pdf
LREC 2018 5
[ "Ankush Khandelwal", "Sahil Swami", "Syed S. Akhtar", "Manish Shrivastava" ]
[ "General Classification", "Humor Detection", "Sentence", "text-classification", "Text Classification" ]
2018-06-14T00:00:00
https://aclanthology.org/L18-1193
https://aclanthology.org/L18-1193.pdf
humor-detection-in-english-hindi-code-mixed-1
null
[]
https://paperswithcode.com/paper/netscore-towards-universal-metrics-for-large
1806.05512
null
Hyzq4ZKa97
NetScore: Towards Universal Metrics for Large-scale Performance Analysis of Deep Neural Networks for Practical On-Device Edge Usage
Much of the focus in the design of deep neural networks has been on improving accuracy, leading to more powerful yet highly complex network architectures that are difficult to deploy in practical scenarios, particularly on edge devices such as mobile and other consumer devices given their high computational and memory requirements. As a result, there has been a recent interest in the design of quantitative metrics for evaluating deep neural networks that account for more than just model accuracy as the sole indicator of network performance. In this study, we continue the conversation towards universal metrics for evaluating the performance of deep neural networks for practical on-device edge usage. In particular, we propose a new balanced metric called NetScore, which is designed specifically to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network, which is important for on-device edge operation. In one of the largest comparative analyses of deep neural networks in the literature, the NetScore metric, the top-1 accuracy metric, and the popular information density metric were compared across a diverse set of 60 different deep convolutional neural networks for image classification on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2012) dataset. The evaluation results across these three metrics for this diverse set of networks are presented in this study to act as a reference guide for practitioners in the field. The proposed NetScore metric, along with the other tested metrics, is by no means perfect, but the hope is to push the conversation towards better universal metrics for evaluating deep neural networks for use in practical on-device edge scenarios to help guide practitioners in model design for such scenarios.
null
http://arxiv.org/abs/1806.05512v2
http://arxiv.org/pdf/1806.05512v2.pdf
null
[ "Alexander Wong" ]
[ "image-classification", "Image Classification", "Object Recognition" ]
2018-06-14T00:00:00
https://openreview.net/forum?id=Hyzq4ZKa97
https://openreview.net/pdf?id=Hyzq4ZKa97
null
null
[]
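A metric of the NetScore form can be written in a few lines. The log-scaled ratio below, and the default exponents and units, follow my reading of the paper and should be verified against it before use.

```python
import math

def netscore(accuracy, params_m, macs_g, kappa=2.0, beta=0.5, gamma=0.5):
    """Balanced score of the NetScore form: reward accuracy, penalize
    parameter count and compute. Assumed units: top-1 accuracy in %,
    parameters in millions, multiply-accumulate ops in billions; the
    exponents are assumptions to be checked against the paper."""
    return 20 * math.log10(accuracy ** kappa
                           / (params_m ** beta * macs_g ** gamma))

# e.g. a net with 75% top-1, 25M params, 4G MACs:
# netscore(75, 25, 4) = 20 * log10(5625 / 10) ≈ 55
```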
https://paperswithcode.com/paper/cold-start-aware-user-and-product-attention
1806.05507
null
null
Cold-Start Aware User and Product Attention for Sentiment Classification
The use of user/product information in sentiment analysis is important, especially for cold-start users/products, whose number of reviews is very limited. However, current models do not deal with the cold-start problem, which is typical in review websites. In this paper, we present the Hybrid Contextualized Sentiment Classifier (HCSC), which contains two modules: (1) a fast word encoder that returns word vectors embedded with short and long range dependency features; and (2) Cold-Start Aware Attention (CSAA), an attention mechanism that considers the existence of the cold-start problem when attentively pooling the encoded word vectors. HCSC introduces shared vectors that are constructed from similar users/products, and are used when the original distinct vectors do not have sufficient information (i.e. cold-start). This is decided by a frequency-guided selective gate vector. Our experiments show that, in terms of RMSE, HCSC performs significantly better than previous models on famous datasets, despite having less complexity, and thus can be trained much faster. More importantly, our model performs significantly better than previous models when the training data is sparse and has cold-start problems.
The use of user/product information in sentiment analysis is important, especially for cold-start users/products, whose number of reviews are very limited.
http://arxiv.org/abs/1806.05507v1
http://arxiv.org/pdf/1806.05507v1.pdf
ACL 2018 7
[ "Reinald Kim Amplayo", "Jihyeok Kim", "Sua Sung", "Seung-won Hwang" ]
[ "Classification", "General Classification", "Sentiment Analysis", "Sentiment Classification" ]
2018-06-14T00:00:00
https://aclanthology.org/P18-1236
https://aclanthology.org/P18-1236.pdf
cold-start-aware-user-and-product-attention-1
null
[]
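The frequency-guided selective gate can be sketched as an interpolation between a user's distinct vector and a shared vector built from similar users. The sigmoid-on-frequency gate below is an illustrative stand-in for the learned gate in HCSC, not the paper's exact parameterization.

```python
import torch

def gated_user_vector(distinct, shared, frequency, threshold=10.0):
    """For cold-start users (few reviews) the gate leans on the shared
    vector from similar users; for frequent users it trusts the user's
    own distinct vector. threshold is an assumed hyperparameter."""
    gate = torch.sigmoid(torch.tensor(frequency - threshold))  # ~0 when cold
    return gate * distinct + (1 - gate) * shared
```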
https://paperswithcode.com/paper/dense-light-field-reconstruction-from-sparse
1806.05506
null
null
Dense Light Field Reconstruction From Sparse Sampling Using Residual Network
A light field records numerous light rays from a real-world scene. However, capturing a dense light field with existing devices is a time-consuming process. Moreover, reconstructing a large number of light rays, equivalent to multiple light fields, from sparse sampling poses a severe challenge for existing methods. In this paper, we present a learning-based method to reconstruct multiple novel light fields between two mutually independent light fields. We show that light rays distributed in different light fields obey the same consistent constraints under a certain condition. The most significant constraint is a depth-related correlation between the angular and spatial dimensions. Our method avoids working out the error-sensitive constraint by employing a deep neural network. We solve for the residual values of pixels on the epipolar plane image (EPI) to reconstruct novel light fields. Our method is able to reconstruct 2 to 4 novel light fields between two mutually independent input light fields. We also compare our results with those yielded by a number of alternatives elsewhere in the literature, which shows that our reconstructed light fields have better structural similarity and occlusion relationships.
null
http://arxiv.org/abs/1806.05506v2
http://arxiv.org/pdf/1806.05506v2.pdf
null
[ "Mantang Guo", "Hao Zhu", "Guoqing Zhou", "Qing Wang" ]
[]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/entity-commonsense-representation-for-neural
1806.05504
null
null
Entity Commonsense Representation for Neural Abstractive Summarization
A major proportion of a text summary includes important entities found in the original text. These entities build up the topic of the summary. Moreover, they hold commonsense information once they are linked to a knowledge base. Based on these observations, this paper investigates the usage of linked entities to guide the decoder of a neural text summarizer to generate concise and better summaries. To this end, we leverage an off-the-shelf entity linking system (ELS) to extract linked entities and propose Entity2Topic (E2T), a module easily attachable to a sequence-to-sequence model that transforms a list of entities into a vector representation of the topic of the summary. Currently available ELSs are still not sufficiently effective, possibly introducing unresolved ambiguities and irrelevant entities. We resolve the imperfections of the ELS by (a) encoding entities with selective disambiguation, and (b) pooling entity vectors using firm attention. By applying E2T to a simple sequence-to-sequence model with an attention mechanism as the base model, we see significant improvements of the performance in the Gigaword (sentence to title) and CNN (long document to multi-sentence highlights) summarization datasets by at least 2 ROUGE points.
To this end, we leverage an off-the-shelf entity linking system (ELS) to extract linked entities and propose Entity2Topic (E2T), a module easily attachable to a sequence-to-sequence model that transforms a list of entities into a vector representation of the topic of the summary.
http://arxiv.org/abs/1806.05504v1
http://arxiv.org/pdf/1806.05504v1.pdf
NAACL 2018 6
[ "Reinald Kim Amplayo", "Seonjae Lim", "Seung-won Hwang" ]
[ "Abstractive Text Summarization", "Decoder", "Entity Linking", "Sentence" ]
2018-06-14T00:00:00
https://aclanthology.org/N18-1064
https://aclanthology.org/N18-1064.pdf
entity-commonsense-representation-for-neural-1
null
[]
https://paperswithcode.com/paper/gradient-based-meta-learning-with-learned
1801.05558
null
null
Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace
Gradient-based meta-learning methods leverage gradient descent to learn the commonalities among various tasks. While previous such methods have been successful in meta-learning tasks, they resort to simple gradient descent during meta-testing. Our primary contribution is the {\em MT-net}, which enables the meta-learner to learn on each layer's activation space a subspace that the task-specific learner performs gradient descent on. Additionally, a task-specific learner of an {\em MT-net} performs gradient descent with respect to a meta-learned distance metric, which warps the activation space to be more sensitive to task identity. We demonstrate that the dimension of this learned subspace reflects the complexity of the task-specific learner's adaptation task, and also that our model is less sensitive to the choice of initial learning rates than previous gradient-based meta-learning methods. Our method achieves state-of-the-art or comparable performance on few-shot classification and regression tasks.
Our primary contribution is the {\em MT-net}, which enables the meta-learner to learn on each layer's activation space a subspace that the task-specific learner performs gradient descent on.
http://arxiv.org/abs/1801.05558v3
http://arxiv.org/pdf/1801.05558v3.pdf
ICML 2018 7
[ "Yoonho Lee", "Seungjin Choi" ]
[ "Few-Shot Image Classification", "Meta-Learning" ]
2018-01-17T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2202
http://proceedings.mlr.press/v80/lee18a/lee18a.pdf
gradient-based-meta-learning-with-learned-1
null
[]
https://paperswithcode.com/paper/aspect-sentiment-model-for-micro-reviews
1806.05499
null
null
Aspect Sentiment Model for Micro Reviews
This paper aims at an aspect sentiment model for aspect-based sentiment analysis (ABSA) focused on micro reviews. This task is important in order to understand the short reviews that the majority of users write, while existing topic models are targeted at expert-level long reviews with sufficient co-occurrence patterns to observe. Current methods for aggregating micro reviews using metadata information may not be effective either, due to metadata absence, topical heterogeneity, and cold-start problems. To this end, we propose a model called Micro Aspect Sentiment Model (MicroASM). MicroASM is based on the observation that short reviews 1) are viewed with sentiment-aspect word pairs as building blocks of information, and 2) can be clustered into larger reviews. When compared to the current state-of-the-art aspect sentiment models, experiments show that our model provides better performance on aspect-level tasks such as aspect term extraction and document-level tasks such as sentiment classification.
This paper aims at an aspect sentiment model for aspect-based sentiment analysis (ABSA) focused on micro reviews.
http://arxiv.org/abs/1806.05499v1
http://arxiv.org/pdf/1806.05499v1.pdf
null
[ "Reinald Kim Amplayo", "Seung-won Hwang" ]
[ "Aspect-Based Sentiment Analysis", "Aspect-Based Sentiment Analysis (ABSA)", "model", "Sentiment Analysis", "Sentiment Classification", "Term Extraction", "Topic Models" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/on-accurate-evaluation-of-gans-for-language
1806.04936
null
rJMcdsA5FX
On Accurate Evaluation of GANs for Language Generation
Generative Adversarial Networks (GANs) are a promising approach to language generation. The latest works introducing novel GAN models for language generation use n-gram based metrics for evaluation and only report single scores of the best run. In this paper, we argue that this often misrepresents the true picture and does not tell the full story, as GAN models can be extremely sensitive to the random initialization and small deviations from the best hyperparameter choice. In particular, we demonstrate that the previously used BLEU score is not sensitive to semantic deterioration of generated texts and propose alternative metrics that better capture the quality and diversity of the generated samples. We also conduct a set of experiments comparing a number of GAN models for text with a conventional Language Model (LM) and find that neither of the considered models performs convincingly better than the LM.
null
https://arxiv.org/abs/1806.04936v3
https://arxiv.org/pdf/1806.04936v3.pdf
null
[ "Stanislau Semeniuta", "Aliaksei Severyn", "Sylvain Gelly" ]
[ "Diversity", "Language Modeling", "Language Modelling", "Text Generation" ]
2018-06-13T00:00:00
https://openreview.net/forum?id=rJMcdsA5FX
https://openreview.net/pdf?id=rJMcdsA5FX
on-accurate-evaluation-of-gans-for-language-1
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. 
Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. 
Expert help is just a call away—+1-833-534-1729.", "full_name": "Dogecoin Customer Service Number +1-833-534-1729", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "Dogecoin Customer Service Number +1-833-534-1729", "source_title": "Generative Adversarial Networks", "source_url": "https://arxiv.org/abs/1406.2661v1" } ]
https://paperswithcode.com/paper/dynamic-video-segmentation-network
1804.00931
null
null
Dynamic Video Segmentation Network
In this paper, we present a detailed design of dynamic video segmentation network (DVSNet) for fast and efficient semantic video segmentation. DVSNet consists of two convolutional neural networks: a segmentation network and a flow network. The former generates highly accurate semantic segmentations, but is deeper and slower. The latter is much faster than the former, but its output requires further processing to generate less accurate semantic segmentations. We explore the use of a decision network to adaptively assign different frame regions to different networks based on a metric called expected confidence score. Frame regions with a higher expected confidence score traverse the flow network. Frame regions with a lower expected confidence score have to pass through the segmentation network. We have extensively performed experiments on various configurations of DVSNet, and investigated a number of variants for the proposed decision network. The experimental results show that our DVSNet is able to achieve up to 70.4% mIoU at 19.8 fps on the Cityscape dataset. A high speed version of DVSNet is able to deliver an fps of 30.4 with 63.2% mIoU on the same dataset. DVSNet is also able to reduce up to 95% of the computational workloads.
null
http://arxiv.org/abs/1804.00931v2
http://arxiv.org/pdf/1804.00931v2.pdf
CVPR 2018 6
[ "Yu-Syuan Xu", "Tsu-Jui Fu", "Hsuan-Kung Yang", "Chun-Yi Lee" ]
[ "Segmentation", "Video Segmentation", "Video Semantic Segmentation" ]
2018-04-03T00:00:00
http://openaccess.thecvf.com/content_cvpr_2018/html/Xu_Dynamic_Video_Segmentation_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Xu_Dynamic_Video_Segmentation_CVPR_2018_paper.pdf
dynamic-video-segmentation-network-1
null
[]
https://paperswithcode.com/paper/learning-a-tree-structured-ising-model-in
1604.06749
null
null
Learning a Tree-Structured Ising Model in Order to Make Predictions
We study the problem of learning a tree Ising model from samples such that subsequent predictions made using the model are accurate. The prediction task considered in this paper is that of predicting the values of a subset of variables given values of some other subset of variables. Virtually all previous work on graphical model learning has focused on recovering the true underlying graph. We define a distance ("small set TV" or ssTV) between distributions $P$ and $Q$ by taking the maximum, over all subsets $\mathcal{S}$ of a given size, of the total variation between the marginals of $P$ and $Q$ on $\mathcal{S}$; this distance captures the accuracy of the prediction task of interest. We derive non-asymptotic bounds on the number of samples needed to get a distribution (from the same class) with small ssTV relative to the one generating the samples. One of the main messages of this paper is that far fewer samples are needed than for recovering the underlying tree, which means that accurate predictions are possible using the wrong tree.
null
http://arxiv.org/abs/1604.06749v3
http://arxiv.org/pdf/1604.06749v3.pdf
null
[ "Guy Bresler", "Mina Karzand" ]
[]
2016-04-22T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/inference-in-deep-gaussian-processes-using
1806.05490
null
null
Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo
Deep Gaussian Processes (DGPs) are hierarchical generalizations of Gaussian Processes that combine well calibrated uncertainty estimates with the high flexibility of multilayer models. One of the biggest challenges with these models is that exact inference is intractable. The current state-of-the-art inference method, Variational Inference (VI), employs a Gaussian approximation to the posterior distribution. This can be a potentially poor unimodal approximation of the generally multimodal posterior. In this work, we provide evidence for the non-Gaussian nature of the posterior and we apply the Stochastic Gradient Hamiltonian Monte Carlo method to generate samples. To efficiently optimize the hyperparameters, we introduce the Moving Window MCEM algorithm. This results in significantly better predictions at a lower computational cost than its VI counterpart. Thus our method establishes a new state-of-the-art for inference in DGPs.
The current state-of-the-art inference method, Variational Inference (VI), employs a Gaussian approximation to the posterior distribution.
http://arxiv.org/abs/1806.05490v3
http://arxiv.org/pdf/1806.05490v3.pdf
NeurIPS 2018 12
[ "Marton Havasi", "José Miguel Hernández-Lobato", "Juan José Murillo-Fuentes" ]
[ "Gaussian Processes", "Variational Inference" ]
2018-06-14T00:00:00
http://papers.nips.cc/paper/7979-inference-in-deep-gaussian-processes-using-stochastic-gradient-hamiltonian-monte-carlo
http://papers.nips.cc/paper/7979-inference-in-deep-gaussian-processes-using-stochastic-gradient-hamiltonian-monte-carlo.pdf
inference-in-deep-gaussian-processes-using-1
null
[]
https://paperswithcode.com/paper/nearly-zero-shot-learning-for-semantic
1806.05484
null
null
Nearly Zero-Shot Learning for Semantic Decoding in Spoken Dialogue Systems
This paper presents two ways of dealing with scarce data in semantic decoding using N-Best speech recognition hypotheses. First, we learn features by using a deep learning architecture in which the weights for the unknown and known categories are jointly optimised. Second, an unsupervised method is used for further tuning the weights. Sharing weights injects prior knowledge to unknown categories. The unsupervised tuning (i.e. the risk minimisation) improves the F-Measure when recognising nearly zero-shot data on the DSTC3 corpus. This unsupervised method can be applied subject to two assumptions: the rank of the class marginal is assumed to be known and the class-conditional scores of the classifier are assumed to follow a Gaussian distribution.
null
http://arxiv.org/abs/1806.05484v2
http://arxiv.org/pdf/1806.05484v2.pdf
null
[ "Lina M. Rojas-Barahona", "Stefan Ultes", "Pawel Budzianowski", "Iñigo Casanueva", "Milica Gasic", "Bo-Hsiang Tseng", "Steve Young" ]
[ "speech-recognition", "Speech Recognition", "Spoken Dialogue Systems", "Zero-Shot Learning" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/morphological-and-language-agnostic-word
1806.05482
null
null
Morphological and Language-Agnostic Word Segmentation for NMT
The state of the art of handling rich morphology in neural machine translation (NMT) is to break word forms into subword units, so that the overall vocabulary size of these units fits the practical limits given by the NMT model and GPU memory capacity. In this paper, we compare two common but linguistically uninformed methods of subword construction (BPE and STE, the method implemented in Tensor2Tensor toolkit) and two linguistically-motivated methods: Morfessor and one novel method, based on a derivational dictionary. Our experiments with German-to-Czech translation, both morphologically rich, document that so far, the non-motivated methods perform better. Furthermore, we identify a critical difference between BPE and STE and show a simple pre-processing step for BPE that considerably increases translation quality as evaluated by automatic measures.
The state of the art of handling rich morphology in neural machine translation (NMT) is to break word forms into subword units, so that the overall vocabulary size of these units fits the practical limits given by the NMT model and GPU memory capacity.
http://arxiv.org/abs/1806.05482v1
http://arxiv.org/pdf/1806.05482v1.pdf
null
[ "Dominik Macháček", "Jonáš Vidra", "Ondřej Bojar" ]
[ "GPU", "Machine Translation", "NMT", "Translation" ]
2018-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" } ]
https://paperswithcode.com/paper/automatic-language-identification-for-romance
1806.05480
null
null
Automatic Language Identification for Romance Languages using Stop Words and Diacritics
Automatic language identification is a natural language processing problem that tries to determine the natural language of a given content. In this paper we present a statistical method for automatic language identification of written text using dictionaries containing stop words and diacritics. We propose different approaches that combine the two dictionaries to accurately determine the language of textual corpora. This method was chosen because stop words and diacritics are very specific to a language; although some languages have some similar words and special characters, they are not all common. The languages taken into account were Romance languages, because they are very similar and it is usually hard to distinguish between them from a computational point of view. We have tested our method using a Twitter corpus and a news article corpus. Both corpora consist of UTF-8 encoded text, so the diacritics could be taken into account; in the case that the text has no diacritics, only the stop words are used to determine the language of the text. The experimental results show that the proposed method has an accuracy of over 90% for small texts and over 99.8% for large texts.
null
http://arxiv.org/abs/1806.05480v1
http://arxiv.org/pdf/1806.05480v1.pdf
null
[ "Ciprian-Octavian Truică", "Julien Velcin", "Alexandru Boicea" ]
[ "Language Identification" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/copycat-cnn-stealing-knowledge-by-persuading
1806.05476
null
null
Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data
In the past few years, Convolutional Neural Networks (CNNs) have been achieving state-of-the-art performance on a variety of problems. Many companies employ resources and money to generate these models and provide them as an API, therefore it is in their best interest to protect them, i.e., to prevent someone else from copying them. Recent studies revealed that state-of-the-art CNNs are vulnerable to adversarial example attacks, and this weakness indicates that CNNs do not need to operate in the problem domain (PD). Therefore, we hypothesize that they also do not need to be trained with examples of the PD in order to operate in it. Given these facts, in this paper, we investigate if a target black-box CNN can be copied by persuading it to confess its knowledge through random non-labeled data. The copy is two-fold: i) the target network is queried with random data and its predictions are used to create a fake dataset with the knowledge of the network; and ii) a copycat network is trained with the fake dataset and should be able to achieve similar performance as the target network. This hypothesis was evaluated locally in three problems (facial expression, object, and crosswalk classification) and against a cloud-based API. In the copy attacks, images from both the non-problem domain and the PD were used. All copycat networks achieved at least 93.7% of the performance of the original models with non-problem domain data, and at least 98.6% using additional data from the PD. Additionally, the copycat CNN successfully copied at least 97.3% of the performance of the Microsoft Azure Emotion API. Our results show that it is possible to create a copycat CNN by simply querying a target network as a black-box with random non-labeled data.
The copy is two-fold: i) the target network is queried with random data and its predictions are used to create a fake dataset with the knowledge of the network; and ii) a copycat network is trained with the fake dataset and should be able to achieve similar performance as the target network.
http://arxiv.org/abs/1806.05476v1
http://arxiv.org/pdf/1806.05476v1.pdf
null
[ "Jacson Rodrigues Correia-Silva", "Rodrigo F. Berriel", "Claudine Badue", "Alberto F. de Souza", "Thiago Oliveira-Santos" ]
[]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/efficient-active-learning-for-image
1806.05473
null
null
Efficient Active Learning for Image Classification and Segmentation using a Sample Selection and Conditional Generative Adversarial Network
Training robust deep learning (DL) systems for medical image classification or segmentation is challenging due to limited images covering different disease types and severity. We propose an active learning (AL) framework to select most informative samples and add to the training data. We use conditional generative adversarial networks (cGANs) to generate realistic chest xray images with different disease characteristics by conditioning its generation on a real image sample. Informative samples to add to the training set are identified using a Bayesian neural network. Experiments show our proposed AL framework is able to achieve state of the art performance by using about 35% of the full dataset, thus saving significant time and effort over conventional methods.
null
https://arxiv.org/abs/1806.05473v4
https://arxiv.org/pdf/1806.05473v4.pdf
null
[ "Dwarikanath Mahapatra", "Behzad Bozorgtabar", "Jean-Philippe Thiran", "Mauricio Reyes" ]
[ "Active Learning", "General Classification", "Generative Adversarial Network", "image-classification", "Image Classification", "Medical Image Classification" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/relation-networks-for-object-detection
1711.11575
null
null
Relation Networks for Object Detection
Although it is well believed for years that modeling relations between objects would help object recognition, there has not been evidence that the idea is working in the deep learning era. All state-of-the-art object detection systems still rely on recognizing object instances individually, without exploiting their relations during learning. This work proposes an object relation module. It processes a set of objects simultaneously through interaction between their appearance feature and geometry, thus allowing modeling of their relations. It is lightweight and in-place. It does not require additional supervision and is easy to embed in existing networks. It is shown effective on improving object recognition and duplicate removal steps in the modern object detection pipeline. It verifies the efficacy of modeling object relations in CNN based detection. It gives rise to the first fully end-to-end object detector.
Although it is well believed for years that modeling relations between objects would help object recognition, there has not been evidence that the idea is working in the deep learning era.
http://arxiv.org/abs/1711.11575v2
http://arxiv.org/pdf/1711.11575v2.pdf
CVPR 2018 6
[ "Han Hu", "Jiayuan Gu", "Zheng Zhang", "Jifeng Dai", "Yichen Wei" ]
[ "Object", "object-detection", "Object Detection", "Object Recognition", "Relation" ]
2017-11-30T00:00:00
http://openaccess.thecvf.com/content_cvpr_2018/html/Hu_Relation_Networks_for_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Relation_Networks_for_CVPR_2018_paper.pdf
relation-networks-for-object-detection-1
null
[]
https://paperswithcode.com/paper/dice-the-infinitely-differentiable-monte
1802.05098
null
null
DiCE: The Infinitely Differentiable Monte-Carlo Estimator
The score function estimator is widely used for estimating gradients of stochastic objectives in stochastic computation graphs (SCG), eg, in reinforcement learning and meta-learning. While deriving the first-order gradient estimators by differentiating a surrogate loss (SL) objective is computationally and conceptually simple, using the same approach for higher-order derivatives is more challenging. Firstly, analytically deriving and implementing such estimators is laborious and not compliant with automatic differentiation. Secondly, repeatedly applying SL to construct new objectives for each order derivative involves increasingly cumbersome graph manipulations. Lastly, to match the first-order gradient under differentiation, SL treats part of the cost as a fixed sample, which we show leads to missing and wrong terms for estimators of higher-order derivatives. To address all these shortcomings in a unified way, we introduce DiCE, which provides a single objective that can be differentiated repeatedly, generating correct estimators of derivatives of any order in SCGs. Unlike SL, DiCE relies on automatic differentiation for performing the requisite graph manipulations. We verify the correctness of DiCE both through a proof and numerical evaluation of the DiCE derivative estimates. We also use DiCE to propose and evaluate a novel approach for multi-agent learning. Our code is available at https://www.github.com/alshedivat/lola.
Lastly, to match the first-order gradient under differentiation, SL treats part of the cost as a fixed sample, which we show leads to missing and wrong terms for estimators of higher-order derivatives.
http://arxiv.org/abs/1802.05098v3
http://arxiv.org/pdf/1802.05098v3.pdf
null
[ "Jakob Foerster", "Gregory Farquhar", "Maruan Al-Shedivat", "Tim Rocktäschel", "Eric P. Xing", "Shimon Whiteson" ]
[ "Meta-Learning", "Reinforcement Learning" ]
2018-02-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/learning-cross-lingual-distributed-logical
1806.05461
null
null
Learning Cross-lingual Distributed Logical Representations for Semantic Parsing
With the development of several multilingual datasets used for semantic parsing, recent research efforts have looked into the problem of learning semantic parsers in a multilingual setup. However, how to improve the performance of a monolingual semantic parser for a specific language by leveraging data annotated in different languages remains a research question that is under-explored. In this work, we present a study to show how learning distributed representations of the logical forms from data annotated in different languages can be used for improving the performance of a monolingual semantic parser. We extend two existing monolingual semantic parsers to incorporate such cross-lingual distributed logical representations as features. Experiments show that our proposed approach is able to yield improved semantic parsing results on the standard multilingual GeoQuery dataset.
null
http://arxiv.org/abs/1806.05461v1
http://arxiv.org/pdf/1806.05461v1.pdf
ACL 2018 7
[ "Yanyan Zou", "Wei Lu" ]
[ "Semantic Parsing" ]
2018-06-14T00:00:00
https://aclanthology.org/P18-2107
https://aclanthology.org/P18-2107.pdf
learning-cross-lingual-distributed-logical-1
null
[]
https://paperswithcode.com/paper/gradient-layer-enhancing-the-convergence-of
1801.02227
null
null
Gradient Layer: Enhancing the Convergence of Adversarial Training for Generative Models
We propose a new technique that boosts the convergence of training generative adversarial networks. Generally, the rate of training deep models decreases severely after multiple iterations. A key reason for this phenomenon is that a deep network is expressed using a highly non-convex finite-dimensional model, and thus the parameter gets stuck in a local optimum. Because of this, methods often suffer not only from degeneration of the convergence speed but also from limitations in the representational power of the trained network. To overcome this issue, we propose an additional layer called the gradient layer to seek a descent direction in an infinite-dimensional space. Because the layer is constructed in the infinite-dimensional space, we are not restricted by the specific model structure of finite-dimensional models. As a result, we can get out of the local optima in finite-dimensional models and move towards the global optimal function more directly. In this paper, this phenomenon is explained from the functional gradient method perspective of the gradient layer. Interestingly, the optimization procedure using the gradient layer naturally constructs the deep structure of the network. Moreover, we demonstrate that this procedure can be regarded as a discretization method of the gradient flow that naturally reduces the objective function. Finally, the method is tested using several numerical experiments, which show its fast convergence.
null
http://arxiv.org/abs/1801.02227v2
http://arxiv.org/pdf/1801.02227v2.pdf
null
[ "Atsushi Nitanda", "Taiji Suzuki" ]
[]
2018-01-07T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/analysis-of-the-effect-of-unexpected-outliers
1806.05455
null
null
Analysis of the Effect of Unexpected Outliers in the Classification of Spectroscopy Data
Multi-class classification algorithms are very widely used, but we argue that they are not always ideal from a theoretical perspective, because they assume all classes are characterized by the data, whereas in many applications, training data for some classes may be entirely absent, rare, or statistically unrepresentative. We evaluate one-sided classifiers as an alternative, since they assume that only one class (the target) is well characterized. We consider a task of identifying whether a substance contains a chlorinated solvent, based on its chemical spectrum. For this application, it is not really feasible to collect a statistically representative set of outliers, since that group may contain \emph{anything} apart from the target chlorinated solvents. Using a new one-sided classification toolkit, we compare a One-Sided k-NN algorithm with two well-known binary classification algorithms, and conclude that the one-sided classifier is more robust to unexpected outliers.
null
http://arxiv.org/abs/1806.05455v1
http://arxiv.org/pdf/1806.05455v1.pdf
null
[ "Frank G. Glavin", "Michael G. Madden" ]
[ "Binary Classification", "Classification", "General Classification", "Multi-class Classification" ]
2018-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**$k$-Nearest Neighbors** is a clustering-based algorithm for classification and regression. It is a a type of instance-based learning as it does not attempt to construct a general internal model, but simply stores instances of the training data. Prediction is computed from a simple majority vote of the nearest neighbors of each point: a query point is assigned the data class which has the most representatives within the nearest neighbors of the point.\r\n\r\nSource of Description and Image: [scikit-learn](https://scikit-learn.org/stable/modules/neighbors.html#classification)", "full_name": "k-Nearest Neighbors", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.", "name": "Non-Parametric Classification", "parent": null }, "name": "k-NN", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/low-rank-geometric-mean-metric-learning
1806.05454
null
null
Low-rank geometric mean metric learning
We propose a low-rank approach to learning a Mahalanobis metric from data. Inspired by the recent geometric mean metric learning (GMML) algorithm, we propose a low-rank variant of the algorithm. This allows to jointly learn a low-dimensional subspace where the data reside and the Mahalanobis metric that appropriately fits the data. Our results show that we compete effectively with GMML at lower ranks.
We propose a low-rank approach to learning a Mahalanobis metric from data.
http://arxiv.org/abs/1806.05454v1
http://arxiv.org/pdf/1806.05454v1.pdf
null
[ "Mukul Bhutani", "Pratik Jawanpuria", "Hiroyuki Kasai", "Bamdev Mishra" ]
[ "Metric Learning" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/4dfab-a-large-scale-4d-facial-expression
1712.01443
null
null
4DFAB: A Large Scale 4D Facial Expression Database for Biometric Applications
The progress we are currently witnessing in many computer vision applications, including automatic face analysis, would not be made possible without tremendous efforts in collecting and annotating large scale visual databases. To this end, we propose 4DFAB, a new large scale database of dynamic high-resolution 3D faces (over 1,800,000 3D meshes). 4DFAB contains recordings of 180 subjects captured in four different sessions spanning over a five-year period. It contains 4D videos of subjects displaying both spontaneous and posed facial behaviours. The database can be used for both face and facial expression recognition, as well as behavioural biometrics. It can also be used to learn very powerful blendshapes for parametrising facial behaviour. In this paper, we conduct several experiments and demonstrate the usefulness of the database for various applications. The database will be made publicly available for research purposes.
4DFAB contains recordings of 180 subjects captured in four different sessions spanning over a five-year period.
http://arxiv.org/abs/1712.01443v2
http://arxiv.org/pdf/1712.01443v2.pdf
null
[ "Shiyang Cheng", "Irene Kotsia", "Maja Pantic", "Stefanos Zafeiriou" ]
[ "Facial Expression Recognition", "Facial Expression Recognition (FER)" ]
2017-12-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/masked-autoregressive-flow-for-density
1705.07057
null
null
Masked Autoregressive Flow for Density Estimation
Autoregressive models are among the best performing neural density estimators. We describe an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data. By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow. This type of flow is closely related to Inverse Autoregressive Flow and is a generalization of Real NVP. Masked Autoregressive Flow achieves state-of-the-art performance in a range of general-purpose density estimation tasks.
By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow.
http://arxiv.org/abs/1705.07057v4
http://arxiv.org/pdf/1705.07057v4.pdf
NeurIPS 2017 12
[ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ]
[ "Density Estimation" ]
2017-05-19T00:00:00
http://papers.nips.cc/paper/6828-masked-autoregressive-flow-for-density-estimation
http://papers.nips.cc/paper/6828-masked-autoregressive-flow-for-density-estimation.pdf
masked-autoregressive-flow-for-density-1
null
[]
https://paperswithcode.com/paper/deep-generative-models-in-the-real-world-an
1806.05452
null
null
Deep Generative Models in the Real-World: An Open Challenge from Medical Imaging
Recent advances in deep learning led to novel generative modeling techniques that achieve unprecedented quality in generated samples and performance in learning complex distributions in imaging data. These new models in medical image computing have important applications that form clinically relevant and very challenging unsupervised learning problems. In this paper, we explore the feasibility of using state-of-the-art auto-encoder-based deep generative models, such as variational and adversarial auto-encoders, for one such task: abnormality detection in medical imaging. We utilize typical, publicly available datasets with brain scans from healthy subjects and patients with stroke lesions and brain tumors. We use the data from healthy subjects to train different auto-encoder based models to learn the distribution of healthy images and detect pathologies as outliers. Models that can better learn the data distribution should be able to detect outliers more accurately. We evaluate the detection performance of deep generative models and compare them with non-deep-learning-based approaches to provide a benchmark of the current state of research. We conclude that abnormality detection is a challenging task for deep generative models and that large room for improvement exists. In order to facilitate further research, we aim to make carefully pre-processed imaging data available to the research community.
null
http://arxiv.org/abs/1806.05452v1
http://arxiv.org/pdf/1806.05452v1.pdf
null
[ "Xiaoran Chen", "Nick Pawlowski", "Martin Rajchl", "Ben Glocker", "Ender Konukoglu" ]
[ "Anomaly Detection" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-committee-machine-computational-to
1806.05451
null
null
The committee machine: Computational to statistical gaps in learning a two-layers neural network
Heuristic tools from statistical physics have been used in the past to locate the phase transitions and compute the optimal learning and generalization errors in the teacher-student scenario in multi-layer neural networks. In this contribution, we provide a rigorous justification of these approaches for a two-layers neural network model called the committee machine. We also introduce a version of the approximate message passing (AMP) algorithm for the committee machine that allows optimal learning in polynomial time for a large set of parameters. We find that there are regimes in which a low generalization error is information-theoretically achievable while the AMP algorithm fails to deliver it, strongly suggesting that no efficient algorithm exists for those cases, and unveiling a large computational gap.
Heuristic tools from statistical physics have been used in the past to locate the phase transitions and compute the optimal learning and generalization errors in the teacher-student scenario in multi-layer neural networks.
https://arxiv.org/abs/1806.05451v3
https://arxiv.org/pdf/1806.05451v3.pdf
NeurIPS 2018 12
[ "Benjamin Aubin", "Antoine Maillard", "Jean Barbier", "Florent Krzakala", "Nicolas Macris", "Lenka Zdeborová" ]
[]
2018-06-14T00:00:00
http://papers.nips.cc/paper/7584-the-committee-machine-computational-to-statistical-gaps-in-learning-a-two-layers-neural-network
http://papers.nips.cc/paper/7584-the-committee-machine-computational-to-statistical-gaps-in-learning-a-two-layers-neural-network.pdf
the-committee-machine-computational-to-1
null
[]
https://paperswithcode.com/paper/stochastic-gradient-descent-with-exponential
1806.05438
null
null
Stochastic Gradient Descent with Exponential Convergence Rates of Expected Classification Errors
We consider stochastic gradient descent and its averaging variant for binary classification problems in a reproducing kernel Hilbert space. In the traditional analysis using a consistency property of loss functions, it is known that the expected classification error converges more slowly than the expected risk even when assuming a low-noise condition on the conditional label probabilities. Consequently, the resulting rate is sublinear. Therefore, it is important to consider whether much faster convergence of the expected classification error can be achieved. In recent research, an exponential convergence rate for stochastic gradient descent was shown under a strong low-noise condition, but the provided theoretical analysis was limited to the squared loss function, which is somewhat inadequate for binary classification tasks. In this paper, we show an exponential convergence of the expected classification error in the final phase of the stochastic gradient descent for a wide class of differentiable convex loss functions under similar assumptions. As for the averaged stochastic gradient descent, we show that the same convergence rate holds from the early phase of training. In experiments, we verify our analyses on the $L_2$-regularized logistic regression.
null
https://arxiv.org/abs/1806.05438v4
https://arxiv.org/pdf/1806.05438v4.pdf
null
[ "Atsushi Nitanda", "Taiji Suzuki" ]
[ "Binary Classification", "Classification", "General Classification" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/servenet-a-deep-neural-network-for-web
1806.05437
null
null
ServeNet: A Deep Neural Network for Web Services Classification
Automated service classification plays a crucial role in service discovery, selection, and composition. Machine learning has been widely used for service classification in recent years. However, the performance of conventional machine learning methods highly depends on the quality of manual feature engineering. In this paper, we present a novel deep neural network to automatically abstract low-level representations of both service name and service description to high-level merged features, without feature engineering or length limitations, and then predict service classification on 50 service categories. To demonstrate the effectiveness of our approach, we conduct a comprehensive experimental study by comparing 10 machine learning methods on 10,000 real-world web services. The results show that the proposed deep neural network can achieve higher classification accuracy and is more robust than other machine learning methods.
Automated service classification plays a crucial role in service discovery, selection, and composition.
https://arxiv.org/abs/1806.05437v3
https://arxiv.org/pdf/1806.05437v3.pdf
null
[ "Yilong Yang", "Nafees Qamar", "Peng Liu", "Katarina Grolinger", "Weiru Wang", "Zhi Li", "Zhifang Liao" ]
[ "BIG-bench Machine Learning", "Classification", "Feature Engineering", "General Classification" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/transfer-learning-for-context-aware-question
1806.05434
null
null
Transfer Learning for Context-Aware Question Matching in Information-seeking Conversations in E-commerce
Building multi-turn information-seeking conversation systems is an important and challenging research topic. Although several advanced neural text matching models have been proposed for this task, they are generally not efficient for industrial applications. Furthermore, they rely on a large amount of labeled data, which may not be available in real-world applications. To alleviate these problems, we study transfer learning for multi-turn information-seeking conversations in this paper. We first propose an efficient and effective multi-turn conversation model based on convolutional neural networks. After that, we extend our model to adapt the knowledge learned from a resource-rich domain to enhance the performance. Finally, we deployed our model in an industrial chatbot called AliMe Assist (https://consumerservice.taobao.com/online-help) and observed a significant improvement over the existing online model.
null
http://arxiv.org/abs/1806.05434v1
http://arxiv.org/pdf/1806.05434v1.pdf
ACL 2018 7
[ "Minghui Qiu", "Liu Yang", "Feng Ji", "Weipeng Zhao", "Wei Zhou", "Jun Huang", "Haiqing Chen", "W. Bruce Croft", "Wei. Lin" ]
[ "Chatbot", "Text Matching", "Transfer Learning" ]
2018-06-14T00:00:00
https://aclanthology.org/P18-2034
https://aclanthology.org/P18-2034.pdf
transfer-learning-for-context-aware-question-1
null
[]
https://paperswithcode.com/paper/urdu-word-segmentation-using-conditional
1806.05432
null
null
Urdu Word Segmentation using Conditional Random Fields (CRFs)
State-of-the-art Natural Language Processing algorithms rely heavily on efficient word segmentation. Urdu is amongst the languages for which word segmentation is a complex task, as it exhibits space omission as well as space insertion issues. This is partly due to the Arabic script, which, although cursive in nature, consists of characters that have inherent joining and non-joining attributes regardless of word boundary. This paper presents a word segmentation system for Urdu which uses a Conditional Random Field sequence modeler with orthographic, linguistic and morphological features. Our proposed model automatically learns to predict white space as a word boundary as well as Zero Width Non-Joiner (ZWNJ) as a sub-word boundary. Using a manually annotated corpus, our model achieves an F1 score of 0.97 for the word boundary identification task and 0.85 for the sub-word boundary identification task. We have made our code and corpus publicly available to make our results reproducible.
State-of-the-art Natural Language Processing algorithms rely heavily on efficient word segmentation.
http://arxiv.org/abs/1806.05432v1
http://arxiv.org/pdf/1806.05432v1.pdf
COLING 2018 8
[ "Haris Bin Zia", "Agha Ali Raza", "Awais Athar" ]
[ "Segmentation" ]
2018-06-14T00:00:00
https://aclanthology.org/C18-1217
https://aclanthology.org/C18-1217.pdf
urdu-word-segmentation-using-conditional-1
null
[]
https://paperswithcode.com/paper/only-bayes-should-learn-a-manifold-on-the
1806.04994
null
null
Only Bayes should learn a manifold (on the estimation of differential geometric structure from data)
We investigate learning of the differential geometric structure of a data manifold embedded in a high-dimensional Euclidean space. We first analyze kernel-based algorithms and show that under the usual regularizations, non-probabilistic methods cannot recover the differential geometric structure, but instead find mostly linear manifolds or spaces equipped with teleports. To properly learn the differential geometric structure, non-probabilistic methods must apply regularizations that enforce large gradients, which go against common wisdom. We repeat the analysis for probabilistic methods and find that under reasonable priors, the geometric structure can be recovered. Fully exploiting the recovered structure, however, requires the development of stochastic extensions to classic Riemannian geometry. We take early steps in that regard. Finally, we partly extend the analysis to modern models based on neural networks, thereby highlighting geometric and probabilistic shortcomings of current deep generative models.
null
https://arxiv.org/abs/1806.04994v3
https://arxiv.org/pdf/1806.04994v3.pdf
null
[ "Søren Hauberg" ]
[]
2018-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/modeling-coherence-for-neural-machine
1711.11221
null
null
Modeling Coherence for Neural Machine Translation with Dynamic and Topic Caches
Sentences in a well-formed text are connected to each other via various links to form the cohesive structure of the text. Current neural machine translation (NMT) systems translate a text in a conventional sentence-by-sentence fashion, ignoring such cross-sentence links and dependencies. This may lead to the generation of an incoherent target text for a coherent source text. In order to handle this issue, we propose a cache-based approach to modeling coherence for neural machine translation by capturing contextual information either from recently translated sentences or the entire document. Particularly, we explore two types of caches: a dynamic cache, which stores words from the best translation hypotheses of preceding sentences, and a topic cache, which maintains a set of target-side topical words that are semantically related to the document to be translated. On this basis, we build a new layer to score target words in these two caches with a cache-based neural model. Here the estimated probabilities from the cache-based neural model are combined with NMT probabilities into the final word prediction probabilities via a gating mechanism. Finally, the proposed cache-based neural model is trained jointly with the NMT system in an end-to-end manner. Experiments and analysis presented in this paper demonstrate that the proposed cache-based model achieves substantial improvements over several state-of-the-art SMT and NMT baselines.
null
http://arxiv.org/abs/1711.11221v3
http://arxiv.org/pdf/1711.11221v3.pdf
COLING 2018 8
[ "Shaohui Kuang", "Deyi Xiong", "Weihua Luo", "Guodong Zhou" ]
[ "Machine Translation", "NMT", "Sentence", "Translation" ]
2017-11-30T00:00:00
https://aclanthology.org/C18-1050
https://aclanthology.org/C18-1050.pdf
modeling-coherence-for-neural-machine-2
null
[]
https://paperswithcode.com/paper/cross-modal-hallucination-for-few-shot-fine
1806.05147
null
null
Cross-modal Hallucination for Few-shot Fine-grained Recognition
State-of-the-art deep learning algorithms generally require large amounts of data for model training. Lack thereof can severely deteriorate performance, particularly in scenarios with fine-grained boundaries between categories. To this end, we propose a multimodal approach that facilitates bridging the information gap by means of meaningful joint embeddings. Specifically, we present a benchmark that is multimodal during training (i.e. images and texts) and single-modal at testing time (i.e. images), with the associated task of utilizing multimodal data in base classes (with many samples) to learn explicit visual classifiers for novel classes (with few samples). Next, we propose a framework built upon the idea of cross-modal data hallucination. In this regard, we introduce a discriminative text-conditional GAN for sample generation with a simple self-paced strategy for sample selection. We show the results of our proposed discriminative hallucinated method for 1-, 2-, and 5-shot learning on the CUB dataset, where the accuracy is improved by employing multimodal data.
null
http://arxiv.org/abs/1806.05147v2
http://arxiv.org/pdf/1806.05147v2.pdf
null
[ "Frederik Pahde", "Patrick Jähnichen", "Tassilo Klein", "Moin Nabi" ]
[ "Hallucination" ]
2018-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. 
Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. 
Expert help is just a call away—+1-833-534-1729.", "full_name": "Dogecoin Customer Service Number +1-833-534-1729", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "Dogecoin Customer Service Number +1-833-534-1729", "source_title": "Generative Adversarial Networks", "source_url": "https://arxiv.org/abs/1406.2661v1" } ]
https://paperswithcode.com/paper/ranking-recovery-from-limited-comparisons
1806.05419
null
null
Ranking Recovery from Limited Comparisons using Low-Rank Matrix Completion
This paper proposes a new method for solving the well-known rank aggregation problem from pairwise comparisons using the method of low-rank matrix completion. The partial and noisy data of pairwise comparisons is transformed into a matrix form. We then use tools from matrix completion, which has served as a major component in the low-rank completion solution of the Netflix challenge, to construct the preference of the different objects. In our approach, the data of multiple comparisons is used to create an estimate of the probability that object i wins (or is chosen) over object j, where only a partial set of comparisons between N objects is known. The data is then transformed into a matrix form for which the noiseless solution has a known rank of one. An alternating minimization algorithm, in which the target matrix takes a bilinear form, is then used in combination with maximum likelihood estimation for both factors. The reconstructed matrix is used to obtain the true underlying preference intensity. This work demonstrates the improvement of our proposed algorithm over the current state-of-the-art in both simulated scenarios and real data.
null
http://arxiv.org/abs/1806.05419v1
http://arxiv.org/pdf/1806.05419v1.pdf
null
[ "Tal Levy", "Alireza Vahid", "Raja Giryes" ]
[ "Form", "Low-Rank Matrix Completion", "Matrix Completion" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/configurable-markov-decision-processes
1806.05415
null
null
Configurable Markov Decision Processes
In many real-world problems, there is the possibility to configure, to a limited extent, some environmental parameters to improve the performance of a learning agent. In this paper, we propose a novel framework, Configurable Markov Decision Processes (Conf-MDPs), to model this new type of interaction with the environment. Furthermore, we provide a new learning algorithm, Safe Policy-Model Iteration (SPMI), to jointly and adaptively optimize the policy and the environment configuration. After introducing our approach and deriving some theoretical results, we present an experimental evaluation on two explicative problems to show the benefits of environment configurability on the performance of the learned policy.
null
http://arxiv.org/abs/1806.05415v1
http://arxiv.org/pdf/1806.05415v1.pdf
ICML 2018 7
[ "Alberto Maria Metelli", "Mirco Mutti", "Marcello Restelli" ]
[]
2018-06-14T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2137
http://proceedings.mlr.press/v80/metelli18a/metelli18a.pdf
configurable-markov-decision-processes-1
null
[]
https://paperswithcode.com/paper/learning-dynamics-of-linear-denoising
1806.05413
null
null
Learning Dynamics of Linear Denoising Autoencoders
Denoising autoencoders (DAEs) have proven useful for unsupervised representation learning, but a thorough theoretical understanding of how the input noise influences learning is still lacking. Here we develop theory for how noise influences learning in DAEs. By focusing on linear DAEs, we are able to derive analytic expressions that exactly describe their learning dynamics. We verify our theoretical predictions with simulations as well as experiments on MNIST and CIFAR-10. The theory illustrates how, when tuned correctly, noise allows DAEs to ignore low variance directions in the inputs while learning to reconstruct them. Furthermore, in a comparison of the learning dynamics of DAEs to those of standard regularised autoencoders, we show that noise has a similar regularisation effect to weight decay, but with faster training dynamics. We also show that our theoretical predictions approximate learning dynamics on real-world data and qualitatively match observed dynamics in nonlinear DAEs.
Here we develop theory for how noise influences learning in DAEs.
http://arxiv.org/abs/1806.05413v2
http://arxiv.org/pdf/1806.05413v2.pdf
ICML 2018 7
[ "Arnu Pretorius", "Steve Kroon", "Herman Kamper" ]
[ "Denoising", "Representation Learning" ]
2018-06-14T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2013
http://proceedings.mlr.press/v80/pretorius18a/pretorius18a.pdf
learning-dynamics-of-linear-denoising-1
null
[]
https://paperswithcode.com/paper/an-approach-to-vehicle-trajectory-prediction
1802.08632
null
null
An Approach to Vehicle Trajectory Prediction Using Automatically Generated Traffic Maps
Trajectory and intention prediction of traffic participants is an important task in automated driving and crucial for safe interaction with the environment. In this paper, we present a new approach to vehicle trajectory prediction based on automatically generated maps containing statistical information about the behavior of traffic participants in a given area. These maps are generated based on trajectory observations using image processing and map matching techniques and contain all typical vehicle movements and probabilities in the considered area. Our prediction approach matches an observed trajectory to a behavior contained in the map and uses this information to generate a prediction. We evaluated our approach on a dataset containing over 14000 trajectories and found that it produces significantly more precise mid-term predictions compared to motion model-based prediction approaches.
null
http://arxiv.org/abs/1802.08632v2
http://arxiv.org/pdf/1802.08632v2.pdf
null
[ "Jannik Quehl", "Haohao Hu", "Sascha Wirges", "Martin Lauer" ]
[ "Prediction", "Trajectory Prediction" ]
2018-02-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/incremental-natural-language-processing-1
1805.12518
null
null
Incremental Natural Language Processing: Challenges, Strategies, and Evaluation
Incrementality is ubiquitous in human-human interaction and beneficial for human-computer interaction. It has been a topic of research in different parts of the NLP community, mostly with a focus on the specific topic at hand, even though incremental systems have to deal with similar challenges regardless of domain. In this survey, I consolidate and categorize the approaches, identifying similarities and differences in the computation and data, and show the trade-offs that have to be considered. A focus lies on evaluating incremental systems, because the standard metrics often fail to capture the incremental properties of a system and coming up with a suitable evaluation scheme is non-trivial.
null
http://arxiv.org/abs/1805.12518v2
http://arxiv.org/pdf/1805.12518v2.pdf
null
[ "Arne Köhn" ]
[]
2018-05-31T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/on-the-perceptrons-compression
1806.05403
null
null
On the Perceptron's Compression
We study and provide an exposition of several phenomena that are related to the perceptron's compression. One theme concerns modifications of the perceptron algorithm that yield better guarantees on the margin of the hyperplane it outputs. These modifications can be useful in training neural networks as well, and we demonstrate them with some experimental data. In a second theme, we deduce conclusions from the perceptron's compression in various contexts.
null
http://arxiv.org/abs/1806.05403v1
http://arxiv.org/pdf/1806.05403v1.pdf
null
[ "Shay Moran", "Ido Nachum", "Itai Panasoff", "Amir Yehudayoff" ]
[]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/aggregating-predictions-on-multiple-non
1806.04000
null
null
Aggregating Predictions on Multiple Non-disclosed Datasets using Conformal Prediction
Conformal Prediction is a machine learning methodology that produces valid prediction regions under mild conditions. In this paper, we explore the application of making predictions over multiple data sources of different sizes without disclosing data between the sources. We propose that each data source applies a transductive conformal predictor independently using the local data, and that the individual predictions are then aggregated to form a combined prediction region. We demonstrate the method on several data sets, and show that the proposed method produces conservatively valid predictions and reduces the variance in the aggregated predictions. We also study the effect that the number of data sources and the size of each source have on aggregated predictions, as compared with equally sized sources and pooled data.
null
http://arxiv.org/abs/1806.04000v2
http://arxiv.org/pdf/1806.04000v2.pdf
null
[ "Ola Spjuth", "Lars Carlsson", "Niharika Gauraha" ]
[ "Conformal Prediction", "Prediction", "valid" ]
2018-06-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dynamical-isometry-and-a-mean-field-theory-of
1806.05394
null
null
Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks
Recurrent neural networks have gained widespread use in modeling sequence data across various domains. While many successful recurrent architectures employ a notion of gating, the exact mechanism that enables such remarkable performance is not well understood. We develop a theory for signal propagation in recurrent networks after random initialization using a combination of mean field theory and random matrix theory. To simplify our discussion, we introduce a new RNN cell with a simple gating mechanism that we call the minimalRNN and compare it with vanilla RNNs. Our theory allows us to define a maximum timescale over which RNNs can remember an input. We show that this theory predicts trainability for both recurrent architectures. We show that gated recurrent networks feature a much broader, more robust, trainable region than vanilla RNNs, which corroborates recent experimental findings. We then develop a closed-form critical initialization scheme that achieves dynamical isometry in both vanilla RNNs and minimalRNNs, and show that this results in a significant improvement in training dynamics. Finally, we demonstrate that the minimalRNN achieves comparable performance to its more complex counterparts, such as LSTMs or GRUs, on a language modeling task.
null
http://arxiv.org/abs/1806.05394v2
http://arxiv.org/pdf/1806.05394v2.pdf
ICML 2018 7
[ "Minmin Chen", "Jeffrey Pennington", "Samuel S. Schoenholz" ]
[ "Language Modeling", "Language Modelling" ]
2018-06-14T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2464
http://proceedings.mlr.press/v80/chen18i/chen18i.pdf
dynamical-isometry-and-a-mean-field-theory-of-4
null
[]
https://paperswithcode.com/paper/dynamical-isometry-and-a-mean-field-theory-of-2
1806.05393
null
null
Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
In recent years, state-of-the-art methods in computer vision have utilized increasingly deep convolutional neural network architectures (CNNs), with some of the most successful models employing hundreds or even thousands of layers. A variety of pathologies such as vanishing/exploding gradients make training such deep networks challenging. While residual connections and batch normalization do enable training at these depths, it has remained unclear whether such specialized architecture designs are truly necessary to train deep CNNs. In this work, we demonstrate that it is possible to train vanilla CNNs with ten thousand layers or more simply by using an appropriate initialization scheme. We derive this initialization scheme theoretically by developing a mean field theory for signal propagation and by characterizing the conditions for dynamical isometry, the equilibration of singular values of the input-output Jacobian matrix. These conditions require that the convolution operator be an orthogonal transformation in the sense that it is norm-preserving. We present an algorithm for generating such random initial orthogonal convolution kernels and demonstrate empirically that they enable efficient training of extremely deep architectures.
In this work, we demonstrate that it is possible to train vanilla CNNs with ten thousand layers or more simply by using an appropriate initialization scheme.
http://arxiv.org/abs/1806.05393v2
http://arxiv.org/pdf/1806.05393v2.pdf
ICML 2018 7
[ "Lechao Xiao", "Yasaman Bahri", "Jascha Sohl-Dickstein", "Samuel S. Schoenholz", "Jeffrey Pennington" ]
[]
2018-06-14T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2421
http://proceedings.mlr.press/v80/xiao18a/xiao18a.pdf
dynamical-isometry-and-a-mean-field-theory-of-3
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null } ]