Dataset schema (column: type, observed length/value range):

paper_url: string, lengths 35–81
arxiv_id: string, lengths 6–35
nips_id: null
openreview_id: string, lengths 9–93
title: string, lengths 1–1.02k
abstract: string, lengths 0–56.5k
short_abstract: string, lengths 0–1.95k
url_abs: string, lengths 16–996
url_pdf: string, lengths 16–996
proceeding: string, lengths 7–1.03k
authors: list, lengths 0–3.31k
tasks: list, lengths 0–147
date: timestamp[ns], 1951-09-01 00:00:00 – 2222-12-22 00:00:00
conference_url_abs: string, lengths 16–199
conference_url_pdf: string, lengths 21–200
conference: string, lengths 2–47
reproduces_paper: string, 22 classes
methods: list, lengths 0–7.5k
https://paperswithcode.com/paper/minibatch-gibbs-sampling-on-large-graphical
1806.06086
null
null
Minibatch Gibbs Sampling on Large Graphical Models
Gibbs sampling is the de facto Markov chain Monte Carlo method used for inference and learning on large scale graphical models. For complicated factor graphs with lots of factors, the performance of Gibbs sampling can be limited by the computational cost of executing a single update step of the Markov chain. This cost is proportional to the degree of the graph, the number of factors adjacent to each variable. In this paper, we show how this cost can be reduced by using minibatching: subsampling the factors to form an estimate of their sum. We introduce several minibatched variants of Gibbs, show that they can be made unbiased, prove bounds on their convergence rates, and show that under some conditions they can result in asymptotic single-update-run-time speedups over plain Gibbs sampling.
null
http://arxiv.org/abs/1806.06086v1
http://arxiv.org/pdf/1806.06086v1.pdf
ICML 2018 7
[ "Christopher De Sa", "Vincent Chen", "Wing Wong" ]
[]
2018-06-15T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2383
http://proceedings.mlr.press/v80/desa18a/desa18a.pdf
minibatch-gibbs-sampling-on-large-graphical-1
null
[]
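The cost argument in the abstract above is concrete enough to sketch. Below is a minimal, illustrative Python sketch of a minibatched Gibbs update for a pairwise binary model: the factors adjacent to a variable are subsampled and their sum rescaled to estimate the full conditional. This is only the naive rescaled estimator the paper starts from, not the paper's unbiased variants, and all names are illustrative.

```python
import numpy as np

def gibbs_update(x, i, factors, rng, minibatch=None):
    """One (possibly minibatched) Gibbs update of binary variable i.

    `factors` is a list of (j, weight) pairs adjacent to i in a pairwise
    model with unnormalized log-probability sum_ij w_ij x_i x_j, x in
    {-1, +1}.  If `minibatch` is set, only that many factors are
    subsampled and their sum rescaled -- an estimate of the full
    conditional (the paper's variants add corrections to keep the
    resulting chain unbiased)."""
    if minibatch is not None and minibatch < len(factors):
        idx = rng.choice(len(factors), size=minibatch, replace=False)
        chosen = [factors[k] for k in idx]
        scale = len(factors) / minibatch
    else:
        chosen, scale = factors, 1.0
    local = scale * sum(w * x[j] for j, w in chosen)
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * local))   # P(x_i = +1 | rest)
    x[i] = 1 if rng.random() < p_plus else -1
    return x

rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=5)
factors = [(j, 0.5) for j in range(1, 5)]  # variable 0 touches 4 factors
x = gibbs_update(x, 0, factors, rng, minibatch=2)  # samples only 2 of them
```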
https://paperswithcode.com/paper/detecting-dead-weights-and-units-in-neural
1806.06068
null
null
Detecting Dead Weights and Units in Neural Networks
Deep Neural Networks are highly over-parameterized and the size of the neural networks can be reduced significantly after training without any decrease in performance. One can clearly see this phenomenon in a wide range of architectures trained for various problems. Weight/channel pruning, distillation, quantization, and matrix factorization are some of the main methods one can use to remove the redundancy and come up with smaller and faster models. This work starts with a short informative chapter, where we motivate the pruning idea and provide the necessary notation. In the second chapter, we compare various saliency scores in the context of parameter pruning. Using the insights obtained from this comparison, and stating the problems it brings, we motivate why pruning units instead of individual parameters might be a better idea. We propose a set of definitions to quantify and analyze units that don't learn or create any useful information. We propose an efficient way of detecting dead units and use it to select which units to prune. We achieve a 5x model size reduction through unit-wise pruning on MNIST.
null
http://arxiv.org/abs/1806.06068v1
http://arxiv.org/pdf/1806.06068v1.pdf
null
[ "Utku Evci" ]
[ "Quantization" ]
2018-06-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Pruning", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Model Compression", "parent": null }, "name": "Pruning", "source_title": "Pruning Filters for Efficient ConvNets", "source_url": "http://arxiv.org/abs/1608.08710v3" } ]
https://paperswithcode.com/paper/classification-with-fairness-constraints-a
1806.06055
null
null
Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees
Developing classification algorithms that are fair with respect to sensitive attributes of the data has become an important problem due to the growing deployment of classification algorithms in various social contexts. Several recent works have focused on fairness with respect to a specific metric, modeled the corresponding fair classification problem as a constrained optimization problem, and developed tailored algorithms to solve them. Despite this, there still remain important metrics for which we do not have fair classifiers and many of the aforementioned algorithms do not come with theoretical guarantees; perhaps because the resulting optimization problem is non-convex. The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and which comes with provable guarantees. This is achieved by first developing a meta-algorithm for a large family of classification problems with convex constraints, and then showing that classification problems with general types of fairness constraints can be reduced to those in this family. We present empirical results that show that our algorithm can achieve near-perfect fairness with respect to various fairness metrics, and that the loss in accuracy due to the imposed fairness constraints is often small. Overall, this work unifies several prior works on fair classification, presents a practical algorithm with theoretical guarantees, and can handle fairness metrics that were previously not possible.
The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and which comes with provable guarantees.
https://arxiv.org/abs/1806.06055v3
https://arxiv.org/pdf/1806.06055v3.pdf
null
[ "L. Elisa Celis", "Lingxiao Huang", "Vijay Keswani", "Nisheeth K. Vishnoi" ]
[ "Classification", "Fairness", "General Classification" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/an-online-prediction-algorithm-for
1806.06720
null
null
An Online Prediction Algorithm for Reinforcement Learning with Linear Function Approximation using Cross Entropy Method
In this paper, we provide two new stable online algorithms for the problem of prediction in reinforcement learning, \emph{i.e.}, estimating the value function of a model-free Markov reward process using the linear function approximation architecture and with memory and computation costs scaling quadratically in the size of the feature set. The algorithms employ the multi-timescale stochastic approximation variant of the very popular cross entropy (CE) optimization method which is a model based search method to find the global optimum of a real-valued function. A proof of convergence of the algorithms using the ODE method is provided. We supplement our theoretical results with experimental comparisons. The algorithms achieve good performance fairly consistently on many RL benchmark problems with regards to computational efficiency, accuracy and stability.
null
http://arxiv.org/abs/1806.06720v1
http://arxiv.org/pdf/1806.06720v1.pdf
null
[ "Ajin George Joseph", "Shalabh Bhatnagar" ]
[ "Computational Efficiency", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2018-06-15T00:00:00
null
null
null
null
[]
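Since the paper above builds on the cross-entropy (CE) optimization method, a generic CE search loop may help fix ideas: maintain a Gaussian over candidate solutions, sample a population, and refit the mean and standard deviation to the elite fraction. This hedged sketch shows only the model-based search core, not the paper's multi-timescale stochastic-approximation variant or its use for online value-function prediction.

```python
import numpy as np

def cross_entropy_method(f, dim, iters=50, pop=100, elite_frac=0.1, seed=0):
    """Generic cross-entropy search for a maximizer of f: R^dim -> R."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        xs = rng.normal(mu, sigma, size=(pop, dim))     # sample population
        elite = xs[np.argsort([f(x) for x in xs])[-n_elite:]]
        mu = elite.mean(axis=0)                          # refit Gaussian
        sigma = elite.std(axis=0) + 1e-8
    return mu

# Toy check: the maximum of -(x - 3)^2 is at x = 3 in every coordinate.
print(cross_entropy_method(lambda x: -np.sum((x - 3.0) ** 2), dim=2))
```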
https://paperswithcode.com/paper/deep-lip-reading-a-comparison-of-models-and
1806.06053
null
null
Deep Lip Reading: a comparison of models and an online application
The goal of this paper is to develop state-of-the-art models for lip reading -- visual speech recognition. We develop three architectures and compare their accuracy and training times: (i) a recurrent model using LSTMs; (ii) a fully convolutional model; and (iii) the recently proposed transformer model. The recurrent and fully convolutional models are trained with a Connectionist Temporal Classification loss and use an explicit language model for decoding, while the transformer is a sequence-to-sequence model. Our best performing model improves the state-of-the-art word error rate on the challenging BBC-Oxford Lip Reading Sentences 2 (LRS2) benchmark dataset by over 20 percent. As a further contribution we investigate the fully convolutional model when used for online (real time) lip reading of continuous speech, and show that it achieves high performance with low latency.
null
http://arxiv.org/abs/1806.06053v1
http://arxiv.org/pdf/1806.06053v1.pdf
null
[ "Triantafyllos Afouras", "Joon Son Chung", "Andrew Zisserman" ]
[ "Language Modeling", "Language Modelling", "Lip Reading", "speech-recognition", "Speech Recognition", "Visual Speech Recognition" ]
2018-06-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "A **Linear Layer** is a projection $\\mathbf{XW + b}$.", "full_name": "Linear Layer", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Linear Layer", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each position item in the sequence, so called position-wise.", "full_name": "Position-Wise Feed-Forward Layer", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Position-Wise Feed-Forward Layer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118", "description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. 
\r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.", "full_name": "Residual Connection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.", "name": "Skip Connections", "parent": null }, "name": "Residual Connection", "source_title": "Deep Residual Learning for Image Recognition", "source_url": "http://arxiv.org/abs/1512.03385v1" }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. 
Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that is linear in the positive dimension and zero in the negative dimension: $f\\left(x\\right) = \\max\\left(0, x\\right)$. The kink in the function at zero is the source of the non-linearity.", "full_name": "Rectified Linear Units", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions that introduce non-linearity into the network.", "name": "Activation Functions", "parent": null }, "name": "ReLU", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6", "description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.", "full_name": "Adam", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "Adam", "source_title": "Adam: A Method for Stochastic Optimization", "source_url": "http://arxiv.org/abs/1412.6980v9" }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}w_{k}}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting.
Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9", "description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)", "full_name": "Multi-Head Attention", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.", "name": "Attention Modules", "parent": "Attention" }, "name": "Multi-Head Attention", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. 
More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": "", "description": "", "full_name": "Attention Is All You Need", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Attention Mechanisms** are a component of neural architectures that enable a model to dynamically weight different parts of its input when computing an output. Below you can find a continuously updating list of attention mechanisms.", "name": "Attention Mechanisms", "parent": "Attention" }, "name": "Attention", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies.
They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" } ]
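The Absolute Position Encodings entry above quotes the sinusoidal formulas; the following NumPy sketch implements them directly (assuming an even d_model), producing a (max_len, d_model) table with the same dimension as the token embeddings it is summed with.

```python
import numpy as np

def positional_encoding(max_len, d_model):
    """Sinusoidal absolute position encodings, as quoted above:
    PE(pos, 2i)   = sin(pos / 10000^(2i/d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))
    Assumes d_model is even."""
    pos = np.arange(max_len)[:, None]          # (max_len, 1)
    i = np.arange(0, d_model, 2)[None, :]      # (1, d_model/2)
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # even dimensions
    pe[:, 1::2] = np.cos(angles)               # odd dimensions
    return pe

pe = positional_encoding(max_len=50, d_model=512)
print(pe.shape)  # (50, 512) -- added to the input embeddings elementwise
```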
https://paperswithcode.com/paper/versatile-auxiliary-classifier-with-1
1805.00316
null
null
Versatile Auxiliary Classifier with Generative Adversarial Network (VAC+GAN)
One of the most interesting challenges in Artificial Intelligence is to train conditional generators which are able to provide labeled adversarial samples drawn from a specific distribution. In this work, a new framework is presented to train a deep conditional generator by placing a classifier in parallel with the discriminator and back propagate the classification error through the generator network. The method is versatile and is applicable to any variations of Generative Adversarial Network (GAN) implementation, and also gives superior results compared to similar methods.
null
http://arxiv.org/abs/1805.00316v3
http://arxiv.org/pdf/1805.00316v3.pdf
null
[ "Shabab Bazrafkan", "Hossein Javidnia", "Peter Corcoran" ]
[ "General Classification", "Generative Adversarial Network" ]
2018-05-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/computationally-efficient-estimation-of-the
1806.06047
null
null
Computationally Efficient Estimation of the Spectral Gap of a Markov Chain
We consider the problem of estimating from sample paths the absolute spectral gap $\gamma_*$ of a reversible, irreducible and aperiodic Markov chain $(X_t)_{t \in \mathbb{N}}$ over a finite state space $\Omega$. We propose the ${\tt UCPI}$ (Upper Confidence Power Iteration) algorithm for this problem, a low-complexity algorithm which estimates the spectral gap in time ${\cal O}(n)$ and memory space ${\cal O}((\ln n)^2)$ given $n$ samples. This is in stark contrast with most known methods which require at least memory space ${\cal O}(|\Omega|)$, so that they cannot be applied to large state spaces. Furthermore, ${\tt UCPI}$ is amenable to parallel implementation.
null
http://arxiv.org/abs/1806.06047v2
http://arxiv.org/pdf/1806.06047v2.pdf
null
[ "Richard Combes", "Mikael Touati" ]
[]
2018-06-15T00:00:00
null
null
null
null
[]
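For orientation, the quantity the abstract above estimates can be computed directly whenever the transition matrix is small enough to write down. The hedged sketch below does exactly the O(|Omega|^2) eigendecomposition that UCPI is designed to avoid; it defines the target quantity and is not an implementation of the paper's sample-path algorithm.

```python
import numpy as np

def absolute_spectral_gap(P):
    """gamma_* = 1 - max(|lambda_2|, ..., |lambda_n|) for the transition
    matrix P of a reversible, irreducible, aperiodic chain.  Requires the
    full matrix -- the memory cost UCPI avoids by working from samples."""
    moduli = np.sort(np.abs(np.linalg.eigvals(P)))
    return 1.0 - moduli[-2]        # second-largest eigenvalue modulus

# Two-state chain with flip probabilities 0.3 and 0.4: eigenvalues 1 and 0.3.
P = np.array([[0.7, 0.3], [0.4, 0.6]])
print(absolute_spectral_gap(P))    # 0.7
```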
https://paperswithcode.com/paper/high-quality-prediction-intervals-for-deep
1802.07167
null
null
High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach
This paper considers the generation of prediction intervals (PIs) by neural networks for quantifying uncertainty in regression tasks. It is axiomatic that high-quality PIs should be as narrow as possible, whilst capturing a specified portion of data. We derive a loss function directly from this axiom that requires no distributional assumption. We show how its form derives from a likelihood principle, that it can be used with gradient descent, and that model uncertainty is accounted for in ensembled form. Benchmark experiments show the method outperforms current state-of-the-art uncertainty quantification methods, reducing average PI width by over 10%.
This paper considers the generation of prediction intervals (PIs) by neural networks for quantifying uncertainty in regression tasks.
http://arxiv.org/abs/1802.07167v3
http://arxiv.org/pdf/1802.07167v3.pdf
ICML 2018 7
[ "Tim Pearce", "Mohamed Zaki", "Alexandra Brintrup", "Andy Neely" ]
[ "Form", "Prediction Intervals", "regression", "Uncertainty Quantification" ]
2018-02-20T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2188
http://proceedings.mlr.press/v80/pearce18a/pearce18a.pdf
high-quality-prediction-intervals-for-deep-1
null
[]
https://paperswithcode.com/paper/bayesian-inference-with-anchored-ensembles-of
1805.11324
null
null
Bayesian Inference with Anchored Ensembles of Neural Networks, and Application to Exploration in Reinforcement Learning
The use of ensembles of neural networks (NNs) for the quantification of predictive uncertainty is widespread. However, the current justification is intuitive rather than analytical. This work proposes one minor modification to the normal ensembling methodology, which we prove allows the ensemble to perform Bayesian inference, hence converging to the corresponding Gaussian Process as both the total number of NNs, and the size of each, tend to infinity. This working paper provides early-stage results in a reinforcement learning setting, analysing the practicality of the technique for an ensemble of small, finite number. Using the uncertainty estimates produced by anchored ensembles to govern the exploration-exploitation process results in steadier, more stable learning.
The use of ensembles of neural networks (NNs) for the quantification of predictive uncertainty is widespread.
http://arxiv.org/abs/1805.11324v3
http://arxiv.org/pdf/1805.11324v3.pdf
null
[ "Tim Pearce", "Nicolas Anastassacos", "Mohamed Zaki", "Andy Neely" ]
[ "Bayesian Inference", "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2018-05-29T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/three-factors-influencing-minima-in-sgd
1711.04623
null
rJma2bZCW
Three Factors Influencing Minima in SGD
We investigate the dynamical and convergent properties of stochastic gradient descent (SGD) applied to Deep Neural Networks (DNNs). Characterizing the relation between learning rate, batch size and the properties of the final minima, such as width or generalization, remains an open question. In order to tackle this problem we investigate the previously proposed approximation of SGD by a stochastic differential equation (SDE). We theoretically argue that three factors - learning rate, batch size and gradient covariance - influence the minima found by SGD. In particular we find that the ratio of learning rate to batch size is a key determinant of SGD dynamics and of the width of the final minima, and that higher values of the ratio lead to wider minima and often better generalization. We confirm these findings experimentally. Further, we include experiments which show that learning rate schedules can be replaced with batch size schedules and that the ratio of learning rate to batch size is an important factor influencing the memorization process.
null
http://arxiv.org/abs/1711.04623v3
http://arxiv.org/pdf/1711.04623v3.pdf
ICLR 2018 1
[ "Stanisław Jastrzębski", "Zachary Kenton", "Devansh Arpit", "Nicolas Ballas", "Asja Fischer", "Yoshua Bengio", "Amos Storkey" ]
[ "Memorization", "Open-Ended Question Answering" ]
2017-11-13T00:00:00
https://openreview.net/forum?id=rJma2bZCW
https://openreview.net/pdf?id=rJma2bZCW
three-factors-influencing-minima-in-sgd-1
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112", "description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))", "full_name": "Stochastic Gradient Descent", "introduced_year": 1951, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "SGD", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/the-toybox-dataset-of-egocentric-visual
1806.06034
null
null
The Toybox Dataset of Egocentric Visual Object Transformations
In object recognition research, many commonly used datasets (e.g., ImageNet and similar) contain relatively sparse distributions of object instances and views, e.g., one might see a thousand different pictures of a thousand different giraffes, mostly taken from a few conventionally photographed angles. These distributional properties constrain the types of computational experiments that are able to be conducted with such datasets, and also do not reflect naturalistic patterns of embodied visual experience. As a contribution to the small (but growing) number of multi-view object datasets that have been created to bridge this gap, we introduce a new video dataset called Toybox that contains egocentric (i.e., first-person perspective) videos of common household objects and toys being manually manipulated to undergo structured transformations, such as rotation, translation, and zooming. To illustrate potential uses of Toybox, we also present initial neural network experiments that examine 1) how training on different distributions of object instances and views affects recognition performance, and 2) how viewpoint-dependent object concepts are represented within the hidden layers of a trained network.
null
http://arxiv.org/abs/1806.06034v3
http://arxiv.org/pdf/1806.06034v3.pdf
null
[ "Xiaohan Wang", "Tengyu Ma", "James Ainooson", "Seunghwan Cha", "Xiaotian Wang", "Azhar Molla", "Maithilee Kunda" ]
[ "Object", "Object Recognition", "Translation" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/latent-space-physics-towards-learning-the
1802.10123
null
null
Latent-space Physics: Towards Learning the Temporal Evolution of Fluid Flow
We propose a method for the data-driven inference of temporal evolutions of physical functions with deep learning. More specifically, we target fluid flows, i.e. Navier-Stokes problems, and we propose a novel LSTM-based approach to predict the changes of pressure fields over time. The central challenge in this context is the high dimensionality of Eulerian space-time data sets. We demonstrate for the first time that dense 3D+time functions of physics system can be predicted within the latent spaces of neural networks, and we arrive at a neural-network based simulation algorithm with significant practical speed-ups. We highlight the capabilities of our method with a series of complex liquid simulations, and with a set of single-phase buoyancy simulations. With a set of trained networks, our method is more than two orders of magnitudes faster than a traditional pressure solver. Additionally, we present and discuss a series of detailed evaluations for the different components of our algorithm.
We propose a method for the data-driven inference of temporal evolutions of physical functions with deep learning.
http://arxiv.org/abs/1802.10123v3
http://arxiv.org/pdf/1802.10123v3.pdf
null
[ "Steffen Wiewel", "Moritz Becher", "Nils Thuerey" ]
[ "Dimensionality Reduction" ]
2018-02-27T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/one-shot-unsupervised-cross-domain
1806.06029
null
null
One-Shot Unsupervised Cross Domain Translation
Given a single image x from domain A and a set of images from domain B, our task is to generate the analogous of x in B. We argue that this task could be a key AI capability that underlines the ability of cognitive agents to act in the world and present empirical evidence that the existing unsupervised domain translation methods fail on this task. Our method follows a two step process. First, a variational autoencoder for domain B is trained. Then, given the new sample x, we create a variational autoencoder for domain A by adapting the layers that are close to the image in order to directly fit x, and only indirectly adapt the other layers. Our experiments indicate that the new method does as well, when trained on one sample x, as the existing domain transfer methods, when these enjoy a multitude of training samples from domain A. Our code is made publicly available at https://github.com/sagiebenaim/OneShotTranslation
Given a single image x from domain A and a set of images from domain B, our task is to generate the analogous of x in B.
http://arxiv.org/abs/1806.06029v2
http://arxiv.org/pdf/1806.06029v2.pdf
NeurIPS 2018 12
[ "Sagie Benaim", "Lior Wolf" ]
[ "Translation", "Unsupervised Image-To-Image Translation", "Zero-Shot Learning" ]
2018-06-15T00:00:00
http://papers.nips.cc/paper/7480-one-shot-unsupervised-cross-domain-translation
http://papers.nips.cc/paper/7480-one-shot-unsupervised-cross-domain-translation.pdf
one-shot-unsupervised-cross-domain-1
null
[ { "code_snippet_url": "", "description": "In today’s digital age, Solana has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Solana transaction not confirmed, your Solana wallet not showing balance, or you're trying to recover a lost Solana wallet, knowing where to get help is essential. That’s why the Solana customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Solana Customer Support Number +1-833-534-1729\r\nSolana operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Solana Transaction Not Confirmed\r\nOne of the most common concerns is when a Solana transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Solana Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Solana wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Solana Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Solana wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Solana Deposit Not Received\r\nIf someone has sent you Solana but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Solana deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Solana Transaction Stuck or Pending\r\nSometimes your Solana transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Solana Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Solana wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Solana Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Solana tech.\r\n\r\n24/7 Availability: Solana doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Solana Support and Wallet Issues\r\nQ1: Can Solana support help me recover stolen BTC?\r\nA: While Solana transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Solana transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Solana’s official number (Solana is decentralized), it connects you to trained professionals experienced in resolving all major Solana issues.\r\n\r\nFinal Thoughts\r\nSolana is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Solana transaction not confirmed, your Solana wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Solana customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.", "full_name": "Solana Customer Service Number +1-833-534-1729", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "Solana Customer Service Number +1-833-534-1729", "source_title": "Reducing the Dimensionality of Data with Neural Networks", "source_url": "https://science.sciencemag.org/content/313/5786/504" } ]
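The AutoEncoder entry above can be summarized in a few lines of PyTorch: encode to a low-dimensional code, decode back, and train on reconstruction error. The paper itself trains *variational* autoencoders per domain and selectively adapts the layers closest to the image for the single sample x; this minimal plain autoencoder only illustrates the bottleneck structure, and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Minimal bottleneck autoencoder: high-dimensional input -> code -> reconstruction."""
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
x = torch.rand(16, 784)                          # a batch of flattened images
loss = nn.functional.mse_loss(model(x), x)       # reconstruction objective
loss.backward()
```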
https://paperswithcode.com/paper/variational-attention-for-sequence-to
1712.08207
null
null
Variational Attention for Sequence-to-Sequence Models
The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network. In natural language processing, sequence-to-sequence (Seq2Seq) models typically serve as encoder-decoder networks. When combined with a traditional (deterministic) attention mechanism, the variational latent space may be bypassed by the attention model, and thus becomes ineffective. In this paper, we propose a variational attention mechanism for VED, where the attention vector is also modeled as Gaussian distributed random variables. Results on two experiments show that, without loss of quality, our proposed method alleviates the bypassing phenomenon as it increases the diversity of generated sentences.
The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network.
http://arxiv.org/abs/1712.08207v3
http://arxiv.org/pdf/1712.08207v3.pdf
COLING 2018 8
[ "Hareesh Bahuleyan", "Lili Mou", "Olga Vechtomova", "Pascal Poupart" ]
[ "Decoder", "Diversity" ]
2017-12-21T00:00:00
https://aclanthology.org/C18-1142
https://aclanthology.org/C18-1142.pdf
variational-attention-for-sequence-to-2
null
[]
https://paperswithcode.com/paper/low-shot-learning-with-large-scale-diffusion
1706.02332
null
null
Low-shot learning with large-scale diffusion
This paper considers the problem of inferring image labels from images when only a few annotated examples are available at training time. This setup is often referred to as low-shot learning, where a standard approach is to re-train the last few layers of a convolutional neural network learned on separate classes for which training examples are abundant. We consider a semi-supervised setting based on a large collection of images to support label propagation. This is possible by leveraging the recent advances on large-scale similarity graph construction. We show that despite its conceptual simplicity, scaling label propagation up to hundred millions of images leads to state of the art accuracy in the low-shot learning regime.
This paper considers the problem of inferring image labels from images when only a few annotated examples are available at training time.
http://arxiv.org/abs/1706.02332v3
http://arxiv.org/pdf/1706.02332v3.pdf
CVPR 2018 6
[ "Matthijs Douze", "Arthur Szlam", "Bharath Hariharan", "Hervé Jégou" ]
[ "Few-Shot Image Classification", "graph construction" ]
2017-06-07T00:00:00
http://openaccess.thecvf.com/content_cvpr_2018/html/Douze_Low-Shot_Learning_With_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Douze_Low-Shot_Learning_With_CVPR_2018_paper.pdf
low-shot-learning-with-large-scale-diffusion-1
null
[]
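The label propagation the abstract above scales up can be illustrated with the classic diffusion iteration F <- alpha*S@F + (1-alpha)*Y on a row-normalized similarity graph. The sketch below is a generic small-scale version on a toy graph, not the paper's hundred-million-image implementation; names and parameters are illustrative.

```python
import numpy as np

def label_propagation(S, Y, alpha=0.9, iters=50):
    """Diffuse a few labeled examples over a similarity graph.

    S: row-normalised (n, n) similarity matrix; Y: (n, c) one-hot labels
    with all-zero rows for unlabeled points."""
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# Tiny chain graph: node 0 is labeled class 0, node 3 is labeled class 1.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
S = A / A.sum(axis=1, keepdims=True)
Y = np.zeros((4, 2)); Y[0, 0] = 1; Y[3, 1] = 1
print(label_propagation(S, Y))  # [0 0 1 1] -- labels spread to neighbours
```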
https://paperswithcode.com/paper/homonym-detection-in-curated-bibliographies
1806.06017
null
null
Homonym Detection in Curated Bibliographies: Learning from dblp's Experience (full version)
Identifying (and fixing) homonymous and synonymous author profiles is one of the major tasks of curating personalized bibliographic metadata repositories like the dblp computer science bibliography. In this paper, we present and evaluate a machine learning approach to identify homonymous author bibliographies using a simple multilayer perceptron setup. We train our model on a novel gold-standard data set derived from the past years of active, manual curation at the dblp computer science bibliography.
null
http://arxiv.org/abs/1806.06017v1
http://arxiv.org/pdf/1806.06017v1.pdf
null
[ "Marcel R. Ackermann", "Florian Reitz" ]
[ "BIG-bench Machine Learning" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/detecting-abnormal-events-in-video-using
1801.05030
null
null
Detecting abnormal events in video using Narrowed Normality Clusters
We formulate the abnormal event detection problem as an outlier detection task and we propose a two-stage algorithm based on k-means clustering and one-class Support Vector Machines (SVM) to eliminate outliers. In the feature extraction stage, we propose to augment spatio-temporal cubes with deep appearance features extracted from the last convolutional layer of a pre-trained neural network. After extracting motion and appearance features from the training video containing only normal events, we apply k-means clustering to find clusters representing different types of normal motion and appearance features. In the first stage, we consider that clusters with fewer samples (with respect to a given threshold) contain mostly outliers, and we eliminate these clusters altogether. In the second stage, we shrink the borders of the remaining clusters by training a one-class SVM model on each cluster. To detect abnormal events in the test video, we analyze each test sample and consider its maximum normality score provided by the trained one-class SVM models, based on the intuition that a test sample can belong to only one cluster of normality. If the test sample does not fit well in any narrowed normality cluster, then it is labeled as abnormal. We compare our method with several state-of-the-art methods on three benchmark data sets. The empirical results indicate that our abnormal event detection framework can achieve better results in most cases, while processing the test video in real-time at 24 frames per second on a single CPU.
null
http://arxiv.org/abs/1801.05030v4
http://arxiv.org/pdf/1801.05030v4.pdf
null
[ "Radu Tudor Ionescu", "Sorina Smeureanu", "Marius Popescu", "Bogdan Alexe" ]
[ "Anomaly Detection", "Clustering", "CPU", "Event Detection", "Outlier Detection" ]
2018-01-12T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)", "full_name": "Support Vector Machine", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.", "name": "Non-Parametric Classification", "parent": null }, "name": "SVM", "source_title": null, "source_url": null }, { "code_snippet_url": "https://cryptoabout.info", "description": "**k-Means Clustering** is a clustering algorithm that divides a training set into $k$ different clusters of examples that are near each other. It works by initializing $k$ different centroids {$\\mu\\left(1\\right),\\ldots,\\mu\\left(k\\right)$} to different values, then alternating between two steps until convergence:\r\n\r\n(i) each training example is assigned to cluster $i$ where $i$ is the index of the nearest centroid $\\mu^{(i)}$\r\n\r\n(ii) each centroid $\\mu^{(i)}$ is updated to the mean of all training examples $x^{(j)}$ assigned to cluster $i$.\r\n\r\nText Source: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [scikit-learn](https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_digits.html)", "full_name": "k-Means Clustering", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Clustering** methods cluster a dataset so that similar datapoints are located in the same group. Below you can find a continuously updating list of clustering methods.", "name": "Clustering", "parent": null }, "name": "k-Means Clustering", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/real-time-deep-learning-method-for-abandoned
1803.01160
null
null
Real-Time Deep Learning Method for Abandoned Luggage Detection in Video
Recent terrorist attacks in major cities around the world have brought many casualties among innocent citizens. One potential threat is represented by abandoned luggage items (that could contain bombs or biological warfare) in public areas. In this paper, we describe an approach for real-time automatic detection of abandoned luggage in video captured by surveillance cameras. The approach is comprised of two stages: (i) static object detection based on background subtraction and motion estimation and (ii) abandoned luggage recognition based on a cascade of convolutional neural networks (CNN). To train our neural networks we provide two types of examples: images collected from the Internet and realistic examples generated by imposing various suitcases and bags over the scene's background. We present empirical results demonstrating that our approach yields better performance than a strong CNN baseline method.
null
http://arxiv.org/abs/1803.01160v3
http://arxiv.org/pdf/1803.01160v3.pdf
null
[ "Sorina Smeureanu", "Radu Tudor Ionescu" ]
[ "Deep Learning", "Motion Estimation", "object-detection", "Object Detection" ]
2018-03-03T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/optimizing-the-trade-off-between-single-stage
1803.08707
null
null
Optimizing the Trade-off between Single-Stage and Two-Stage Object Detectors using Image Difficulty Prediction
There are mainly two types of state-of-the-art object detectors. On one hand, we have two-stage detectors, such as Faster R-CNN (Region-based Convolutional Neural Networks) or Mask R-CNN, that (i) use a Region Proposal Network to generate regions of interest in the first stage and (ii) send the region proposals down the pipeline for object classification and bounding-box regression. Such models reach the highest accuracy rates, but are typically slower. On the other hand, we have single-stage detectors, such as YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector), that treat object detection as a simple regression problem by taking an input image and learning the class probabilities and bounding box coordinates. Such models reach lower accuracy rates, but are much faster than two-stage object detectors. In this paper, we propose to use an image difficulty predictor to achieve an optimal trade-off between accuracy and speed in object detection. The image difficulty predictor is applied on the test images to split them into easy versus hard images. Once separated, the easy images are sent to the faster single-stage detector, while the hard images are sent to the more accurate two-stage detector. Our experiments on PASCAL VOC 2007 show that using image difficulty compares favorably to a random split of the images. Our method is flexible, in that it allows one to choose a desired threshold for splitting the images into easy versus hard.
null
http://arxiv.org/abs/1803.08707v3
http://arxiv.org/pdf/1803.08707v3.pdf
null
[ "Petru Soviany", "Radu Tudor Ionescu" ]
[ "Object", "object-detection", "Object Detection", "Region Proposal", "regression" ]
2018-03-23T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/facebookresearch/detectron2/blob/bb9f5d8e613358519c9865609ab3fe7b6571f2ba/detectron2/layers/roi_align.py#L51", "description": "**Region of Interest Align**, or **RoIAlign**, is an operation for extracting a small feature map from each RoI in detection and segmentation based tasks. It removes the harsh quantization of [RoI Pool](https://paperswithcode.com/method/roi-pooling), properly *aligning* the extracted features with the input. To avoid any quantization of the RoI boundaries or bins (using $x/16$ instead of $[x/16]$), RoIAlign uses bilinear interpolation to compute the exact values of the input features at four regularly sampled locations in each RoI bin, and the result is then aggregated (using max or average).", "full_name": "RoIAlign", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. Below you can find a continuously updating list of RoI Feature Extractors.", "name": "RoI Feature Extractors", "parent": null }, "name": "RoIAlign", "source_title": "Mask R-CNN", "source_url": "http://arxiv.org/abs/1703.06870v3" }, { "code_snippet_url": null, "description": "**Non Maximum Suppression** is a computer vision method that selects a single entity out of many overlapping entities (for example bounding boxes in object detection). The criteria is usually discarding entities that are below a given probability bound. 
With remaining entities we repeatedly pick the entity with the highest probability, output that as the prediction, and discard any remaining box where a $\\text{IoU} \\geq 0.5$ with the box output in the previous step.\r\n\r\nImage Credit: [Martin Kersner](https://github.com/martinkersner/non-maximum-suppression-cpp)", "full_name": "Non Maximum Suppression", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Proposal Filtering", "parent": null }, "name": "Non Maximum Suppression", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/facebookresearch/detectron2/blob/601d7666faaf7eb0ba64c9f9ce5811b13861fe12/detectron2/modeling/roi_heads/mask_head.py#L154", "description": "**Mask R-CNN** extends [Faster R-CNN](http://paperswithcode.com/method/faster-r-cnn) to solve instance segmentation tasks. It achieves this by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. In principle, Mask R-CNN is an intuitive extension of Faster [R-CNN](https://paperswithcode.com/method/r-cnn), but constructing the mask branch properly is critical for good results. \r\n\r\nMost importantly, Faster R-CNN was not designed for pixel-to-pixel alignment between network inputs and outputs. This is evident in how [RoIPool](http://paperswithcode.com/method/roi-pooling), the *de facto* core operation for attending to instances, performs coarse spatial quantization for feature extraction. To fix the misalignment, Mask R-CNN utilises a simple, quantization-free layer, called [RoIAlign](http://paperswithcode.com/method/roi-align), that faithfully preserves exact spatial locations. \r\n\r\nSecondly, Mask R-CNN *decouples* mask and class prediction: it predicts a binary mask for each class independently, without competition among classes, and relies on the network's RoI classification branch to predict the category. In contrast, an [FCN](http://paperswithcode.com/method/fcn) usually perform per-pixel multi-class categorization, which couples segmentation and classification.", "full_name": "Mask R-CNN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Instance Segmentation** models are models that perform the task of [Instance Segmentation](https://paperswithcode.com/task/instance-segmentation).", "name": "Instance Segmentation Models", "parent": null }, "name": "Mask R-CNN", "source_title": "Mask R-CNN", "source_url": "http://arxiv.org/abs/1703.06870v3" }, { "code_snippet_url": "", "description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)", "full_name": "1x1 Convolution", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. 
The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "1x1 Convolution", "source_title": "Network In Network", "source_url": "http://arxiv.org/abs/1312.4400v3" }, { "code_snippet_url": "https://github.com/amdegroot/ssd.pytorch/blob/5b0b77faa955c1917b0c710d770739ba8fbff9b7/ssd.py#L10", "description": "**SSD** is a single-stage object detection method that discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. \r\n\r\nThe fundamental improvement in speed comes from eliminating bounding box proposals and the subsequent pixel or feature resampling stage. Improvements over competing single-stage methods include using a small convolutional filter to predict object categories and offsets in bounding box locations, using separate predictors (filters) for different aspect ratio detections, and applying these filters to multiple feature maps from the later stages of a network in order to perform detection at multiple scales.", "full_name": "SSD", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.", "name": "Object Detection Models", "parent": null }, "name": "SSD", "source_title": "SSD: Single Shot MultiBox Detector", "source_url": "http://arxiv.org/abs/1512.02325v5" }, { "code_snippet_url": null, "description": "A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can be merged into a single network by sharing their convolutional features - using the recently popular terminology of neural networks with attention mechanisms, the RPN component tells the unified network where to look.\r\n\r\nRPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. RPNs use anchor boxes that serve as references at multiple scales and aspect ratios. The scheme can be thought of as a pyramid of regression references, which avoids enumerating images or filters of multiple scales or aspect ratios.", "full_name": "Region Proposal Network", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Region Proposal", "parent": null }, "name": "RPN", "source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", "source_url": "http://arxiv.org/abs/1506.01497v3" }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. 
Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/vision/blob/5e9ebe8dadc0ea2841a46cfcd82a93b4ce0d4519/torchvision/ops/roi_pool.py#L10", "description": "**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentation based tasks. Features are extracted from each candidate box, and thereafter in models like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), are then classified and bounding box regression performed.\r\n\r\nThe actual scaling to, e.g., $7×7$, occurs by dividing the region proposal into equally sized sections, finding the largest value in each section, and then copying these max values to the output buffer. In essence, **RoIPool** is [max pooling](https://paperswithcode.com/method/max-pooling) on a discrete grid based on a box.\r\n\r\nImage Source: [Joyce Xu](https://towardsdatascience.com/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9)", "full_name": "RoIPool", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. 
Below you can find a continuously updating list of RoI Feature Extractors.", "name": "RoI Feature Extractors", "parent": null }, "name": "RoIPool", "source_title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "source_url": "http://arxiv.org/abs/1311.2524v5" }, { "code_snippet_url": "https://github.com/chenyuntc/simple-faster-rcnn-pytorch/blob/367db367834efd8a2bc58ee0023b2b628a0e474d/model/faster_rcnn.py#L22", "description": "**Faster R-CNN** is an object detection model that improves on [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) by utilising a region proposal network ([RPN](https://paperswithcode.com/method/rpn)) with the CNN model. The RPN shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. It is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) for detection. RPN and Fast [R-CNN](https://paperswithcode.com/method/r-cnn) are merged into a single network by sharing their convolutional features: the RPN component tells the unified network where to look.\r\n\r\nAs a whole, Faster R-CNN consists of two modules. The first module is a deep fully convolutional network that proposes regions, and the second module is the Fast R-CNN detector that uses the proposed regions.", "full_name": "Faster R-CNN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.", "name": "Object Detection Models", "parent": null }, "name": "Faster R-CNN", "source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", "source_url": "http://arxiv.org/abs/1506.01497v3" } ]
https://paperswithcode.com/paper/partially-supervised-image-captioning
1806.06004
null
null
Partially-Supervised Image Captioning
Image captioning models are becoming increasingly successful at describing the content of images in restricted domains. However, if these models are to function in the wild - for example, as assistants for people with impaired vision - a much larger number and variety of visual concepts must be understood. To address this problem, we teach image captioning models new visual concepts from labeled images and object detection datasets. Since image labels and object classes can be interpreted as partial captions, we formulate this problem as learning from partially-specified sequence data. We then propose a novel algorithm for training sequence models, such as recurrent neural networks, on partially-specified sequences which we represent using finite state automata. In the context of image captioning, our method lifts the restriction that previously required image captioning models to be trained on paired image-sentence corpora only, or otherwise required specialized model architectures to take advantage of alternative data modalities. Applying our approach to an existing neural captioning model, we achieve state of the art results on the novel object captioning task using the COCO dataset. We further show that we can train a captioning model to describe new visual concepts from the Open Images dataset while maintaining competitive COCO evaluation scores.
null
http://arxiv.org/abs/1806.06004v2
http://arxiv.org/pdf/1806.06004v2.pdf
NeurIPS 2018 12
[ "Peter Anderson", "Stephen Gould", "Mark Johnson" ]
[ "Image Captioning", "Object", "object-detection", "Object Detection", "Sentence" ]
2018-06-15T00:00:00
http://papers.nips.cc/paper/7458-partially-supervised-image-captioning
http://papers.nips.cc/paper/7458-partially-supervised-image-captioning.pdf
partially-supervised-image-captioning-1
null
[]
https://paperswithcode.com/paper/on-machine-learning-and-structure-for-mobile
1806.06003
null
null
On Machine Learning and Structure for Mobile Robots
Due to recent advances - compute, data, models - the role of learning in autonomous systems has expanded significantly, rendering new applications possible for the first time. While some of the most significant benefits are obtained in the perception modules of the software stack, other aspects continue to rely on known manual procedures based on prior knowledge on geometry, dynamics, kinematics etc. Nonetheless, learning gains relevance in these modules when data collection and curation become easier than manual rule design. Building on this coarse and broad survey of current research, the final sections aim to provide insights into future potentials and challenges as well as the necessity of structure in current practical applications.
null
http://arxiv.org/abs/1806.06003v1
http://arxiv.org/pdf/1806.06003v1.pdf
null
[ "Markus Wulfmeier" ]
[ "BIG-bench Machine Learning" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/adapting-neural-text-classification-for
1806.01742
null
null
Adapting Neural Text Classification for Improved Software Categorization
Software Categorization is the task of organizing software into groups that broadly describe the behavior of the software, such as "editors" or "science." Categorization plays an important role in several maintenance tasks, such as repository navigation and feature elicitation. Current approaches attempt to cast the problem as text classification, to make use of the rich body of literature from the NLP domain. However, as we will show in this paper, text classification algorithms are generally not applicable off-the-shelf to source code; we found that they work well when high-level project descriptions are available, but suffer very large performance penalties when classifying source code and comments only. We propose a set of adaptations to a state-of-the-art neural classification algorithm and perform two evaluations: one with reference data from Debian end-user programs, and one with a set of C/C++ libraries that we hired professional programmers to annotate. We show that our proposed approach achieves performance exceeding that of previous software classification techniques as well as a state-of-the-art neural text classification technique.
Software Categorization is the task of organizing software into groups that broadly describe the behavior of the software, such as "editors" or "science."
http://arxiv.org/abs/1806.01742v2
http://arxiv.org/pdf/1806.01742v2.pdf
null
[ "Alexander LeClair", "Zachary Eberhart", "Collin McMillan" ]
[ "Classification", "General Classification", "text-classification", "Text Classification" ]
2018-06-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-dataset-for-building-code-mixed-goal
1806.05997
null
null
A Dataset for Building Code-Mixed Goal Oriented Conversation Systems
There is an increasing demand for goal-oriented conversation systems which can assist users in various day-to-day activities such as booking tickets, restaurant reservations, shopping, etc. Most of the existing datasets for building such conversation systems focus on monolingual conversations and there is hardly any work on multilingual and/or code-mixed conversations. Such datasets and systems thus do not cater to the multilingual regions of the world, such as India, where it is very common for people to speak more than one language and seamlessly switch between them resulting in code-mixed conversations. For example, a Hindi speaking user looking to book a restaurant would typically ask, "Kya tum is restaurant mein ek table book karne mein meri help karoge?" ("Can you help me in booking a table at this restaurant?"). To facilitate the development of such code-mixed conversation models, we build a goal-oriented dialog dataset containing code-mixed conversations. Specifically, we take the text from the DSTC2 restaurant reservation dataset and create code-mixed versions of it in Hindi-English, Bengali-English, Gujarati-English and Tamil-English. We also establish initial baselines on this dataset using existing state of the art models. This dataset along with our baseline implementations is made publicly available for research purposes.
("Can you help me in booking a table at this restaurant?").
http://arxiv.org/abs/1806.05997v1
http://arxiv.org/pdf/1806.05997v1.pdf
COLING 2018 8
[ "Suman Banerjee", "Nikita Moghe", "Siddhartha Arora", "Mitesh M. Khapra" ]
[ "Goal-Oriented Dialog" ]
2018-06-15T00:00:00
https://aclanthology.org/C18-1319
https://aclanthology.org/C18-1319.pdf
a-dataset-for-building-code-mixed-goal-2
null
[]
https://paperswithcode.com/paper/ego-lane-analysis-system-elas-dataset-and
1806.05984
null
null
Ego-Lane Analysis System (ELAS): Dataset and Algorithms
Decreasing costs of vision sensors and advances in embedded hardware have boosted research on lane detection, estimation, and tracking over the past two decades. The interest in this topic has increased even more with the demand for advanced driver assistance systems (ADAS) and self-driving cars. Although these problems have been extensively studied independently, there is still a need for studies that propose a combined solution for the multiple problems related to the ego-lane, such as lane departure warning (LDW), lane change detection, lane marking type (LMT) classification, road markings detection and classification, and detection of the presence of adjacent lanes (i.e., immediate left and right lanes). In this paper, we propose a real-time Ego-Lane Analysis System (ELAS) capable of estimating ego-lane position, classifying LMTs and road markings, performing LDW and detecting lane change events. The proposed vision-based system works on a temporal sequence of images. Lane marking features are extracted in perspective and Inverse Perspective Mapping (IPM) images that are combined to increase robustness. The final estimated lane is modeled as a spline using a combination of methods (Hough lines with Kalman filter and spline with particle filter). Based on the estimated lane, all other events are detected. To validate ELAS and cover the lack of lane datasets in the literature, a new dataset with more than 20 different scenes (in more than 15,000 frames) and considering a variety of scenarios (urban road, highways, traffic, shadows, etc.) was created. The dataset was manually annotated and made publicly available to enable evaluation of several events that are of interest for the research community (i.e., lane estimation, change, and centering; road markings; intersections; LMTs; crosswalks and adjacent lanes). ELAS achieved high detection rates in all real-world events and proved to be ready for real-time applications.
null
http://arxiv.org/abs/1806.05984v1
http://arxiv.org/pdf/1806.05984v1.pdf
null
[ "Rodrigo F. Berriel", "Edilson de Aguiar", "Alberto F. de Souza", "Thiago Oliveira-Santos" ]
[ "Change Detection", "General Classification", "Self-Driving Cars" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/bayesian-convolutional-neural-networks-with-1
1806.05978
null
null
Uncertainty Estimations by Softplus normalization in Bayesian Convolutional Neural Networks with Variational Inference
We introduce a novel uncertainty estimation for classification tasks for Bayesian convolutional neural networks with variational inference. By normalizing the output of a Softplus function in the final layer, we estimate aleatoric and epistemic uncertainty in a coherent manner. The intractable posterior probability distributions over weights are inferred by Bayes by Backprop. Firstly, we demonstrate how this reliable variational inference method can serve as a fundamental construct for various network architectures. On multiple datasets in supervised learning settings (MNIST, CIFAR-10, CIFAR-100), this variational inference method achieves performances equivalent to frequentist inference in identical architectures, while the two desiderata, a measure for uncertainty and regularization, are incorporated naturally. Secondly, we examine how our proposed measure for aleatoric and epistemic uncertainties is derived and validate it on the aforementioned datasets.
On multiple datasets in supervised learning settings (MNIST, CIFAR-10, CIFAR-100), this variational inference method achieves performances equivalent to frequentist inference in identical architectures, while the two desiderata, a measure for uncertainty and regularization, are incorporated naturally.
https://arxiv.org/abs/1806.05978v6
https://arxiv.org/pdf/1806.05978v6.pdf
null
[ "Kumar Shridhar", "Felix Laumann", "Marcus Liwicki" ]
[ "Bayesian Inference", "General Classification", "Variational Inference" ]
2018-06-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "How Do I File a Claim with Expedia?\r\nCall **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Fast Help & Exclusive Travel Discounts!Need to file a claim with Expedia? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now for immediate support and unlock exclusive best deal offers on hotels, flights, and vacation packages. Resolve your issue quickly while enjoying limited-time travel discounts that make your next trip smoother, more affordable, and worry-free. Don’t miss out—call today and save!\r\n.How do I get a full refund from Expedia?\r\nHow Do I Communicate with Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for 24/7 Support & Exclusive Travel Discounts!Need to reach Expedia fast? Call now to speak directly with a live agent and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get personalized assistance while enjoying limited-time travel offers that make your next journey smoother, more affordable, and stress-free. Don’t wait—call today and save!", "full_name": "(TravEL!!Guide)How Do I File a Claim with Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "(TravEL!!Guide)How Do I File a Claim with Expedia?", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/supervised-learning-with-generalized-tensor
1806.05964
null
null
From probabilistic graphical models to generalized tensor networks for supervised learning
Tensor networks have found wide use in a variety of applications in physics and computer science, recently leading to both theoretical insights and practical algorithms in machine learning. In this work we explore the connection between tensor networks and probabilistic graphical models, and show that it motivates the definition of generalized tensor networks where information from a tensor can be copied and reused in other parts of the network. We discuss the relationship between generalized tensor network architectures used in quantum physics, such as string-bond states, and architectures commonly used in machine learning. We provide an algorithm to train these networks in a supervised-learning context and show that they overcome the limitations of regular tensor networks in higher dimensions, while keeping the computation efficient. A method to combine neural networks and tensor networks as part of a common deep learning architecture is also introduced. We benchmark our algorithm for several generalized tensor network architectures on the task of classifying images and sounds, and show that they outperform previously introduced tensor-network algorithms. The models we consider also have a natural implementation on a quantum computer and may guide the development of near-term quantum machine learning architectures.
null
https://arxiv.org/abs/1806.05964v2
https://arxiv.org/pdf/1806.05964v2.pdf
null
[ "Ivan Glasser", "Nicola Pancotti", "J. Ignacio Cirac" ]
[ "BIG-bench Machine Learning", "Quantum Machine Learning", "Tensor Networks" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/techniques-for-visualizing-lstms-applied-to
1705.08153
null
null
Techniques for visualizing LSTMs applied to electrocardiograms
This paper explores four different visualization techniques for long short-term memory (LSTM) networks applied to continuous-valued time series. On the datasets analysed, we find that the best visualization technique is to learn an input deletion mask that optimally reduces the true class score. With a specific focus on single-lead electrocardiograms from the MIT-BIH arrhythmia dataset, we show that salient input features for the LSTM classifier align well with medical theory.
null
http://arxiv.org/abs/1705.08153v3
http://arxiv.org/pdf/1705.08153v3.pdf
null
[ "Jos van der Westhuizen", "Joan Lasenby" ]
[ "Time Series", "Time Series Analysis" ]
2017-05-23T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277", "description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.", "full_name": "Sigmoid Activation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "Sigmoid Activation", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329", "description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)", "full_name": "Tanh Activation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. 
For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "Tanh Activation", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)", "full_name": "Long Short-Term Memory", "introduced_year": 1997, "main_collection": { "area": "Sequential", "description": "", "name": "Recurrent Neural Networks", "parent": null }, "name": "LSTM", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/deep-temporal-lstm-for-daily-living-action
1802.00421
null
null
Deep-Temporal LSTM for Daily Living Action Recognition
In this paper, we propose to improve the traditional use of RNNs by employing a many-to-many model for video classification. We analyze the importance of modeling spatial layout and temporal encoding for daily living action recognition. Many RGB methods focus only on short term temporal information obtained from optical flow. Skeleton based methods on the other hand show that modeling long term skeleton evolution improves action recognition accuracy. In this work, we propose a deep-temporal LSTM architecture which extends standard LSTM and allows better encoding of temporal information. In addition, we propose to fuse 3D skeleton geometry with deep static appearance. We validate our approach on the publicly available CAD60, MSRDailyActivity3D and NTU-RGB+D datasets, achieving competitive performance as compared to the state-of-the-art.
null
http://arxiv.org/abs/1802.00421v2
http://arxiv.org/pdf/1802.00421v2.pdf
null
[ "Srijan Das", "Michal Koperski", "Francois Bremond", "Gianpiero Francesca" ]
[ "Action Recognition", "General Classification", "Optical Flow Estimation", "Temporal Action Localization", "Video Classification" ]
2018-02-01T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277", "description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.", "full_name": "Sigmoid Activation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "Sigmoid Activation", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329", "description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)", "full_name": "Tanh Activation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. 
For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "Tanh Activation", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)", "full_name": "Long Short-Term Memory", "introduced_year": 1997, "main_collection": { "area": "Sequential", "description": "", "name": "Recurrent Neural Networks", "parent": null }, "name": "LSTM", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/controllable-semantic-image-inpainting
1806.05953
null
null
Controllable Semantic Image Inpainting
We develop a method for user-controllable semantic image inpainting: Given an arbitrary set of observed pixels, the unobserved pixels can be imputed in a user-controllable range of possibilities, each of which is semantically coherent and locally consistent with the observed pixels. We achieve this using a deep generative model bringing together: an encoder which can encode an arbitrary set of observed pixels, latent variables which are trained to represent disentangled factors of variations, and a bidirectional PixelCNN model. We experimentally demonstrate that our method can generate plausible inpainting results matching the user-specified semantics, but is still coherent with observed pixels. We justify our choices of architecture and training regime through more experiments.
null
http://arxiv.org/abs/1806.05953v1
http://arxiv.org/pdf/1806.05953v1.pdf
null
[ "Jin Xu", "Yee Whye Teh" ]
[ "Image Inpainting" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/amortized-context-vector-inference-for
1805.09039
null
SygONjRqKm
Amortized Context Vector Inference for Sequence-to-Sequence Networks
Neural attention (NA) has become a key component of sequence-to-sequence models that yield state-of-the-art performance in tasks as hard as abstractive document summarization (ADS) and video captioning (VC). NA mechanisms perform inference of context vectors; these constitute weighted sums of deterministic input sequence encodings, adaptively sourced over long temporal horizons. Inspired by recent work in the field of amortized variational inference (AVI), in this work we consider treating the context vectors generated by soft-attention (SA) models as latent variables, with approximate finite mixture model posteriors inferred via AVI. We posit that this formulation may yield stronger generalization capacity, in line with the outcomes of existing applications of AVI to deep networks. To illustrate our method, we implement it and experimentally evaluate it on challenging ADS, VC, and MT benchmarks. This way, we exhibit its improved effectiveness over state-of-the-art alternatives.
null
http://arxiv.org/abs/1805.09039v9
http://arxiv.org/pdf/1805.09039v9.pdf
null
[ "Kyriacos Tolias", "Ioannis Kourouklides", "Sotirios Chatzis" ]
[ "Document Summarization", "Variational Inference", "Video Captioning" ]
2018-05-23T00:00:00
https://openreview.net/forum?id=SygONjRqKm
https://openreview.net/pdf?id=SygONjRqKm
null
null
[]
https://paperswithcode.com/paper/a-challenge-set-for-french-english-machine
1806.02725
null
null
A Challenge Set for French --> English Machine Translation
We present a challenge set for French --> English machine translation based on the approach introduced in Isabelle, Cherry and Foster (EMNLP 2017). Such challenge sets are made up of sentences that are expected to be relatively difficult for machines to translate correctly because their most straightforward translations tend to be linguistically divergent. We present here a set of 506 manually constructed French sentences, 307 of which are targeted to the same kinds of structural divergences as in the paper mentioned above. The remaining 199 sentences are designed to test the ability of the systems to correctly translate difficult grammatical words such as prepositions. We report on the results of using this challenge set for testing two different systems, namely Google Translate and DEEPL, each on two different dates (October 2017 and January 2018). All the resulting data are made publicly available.
null
http://arxiv.org/abs/1806.02725v2
http://arxiv.org/pdf/1806.02725v2.pdf
null
[ "Pierre Isabelle", "Roland Kuhn" ]
[ "Machine Translation", "Translation" ]
2018-06-07T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/discovering-user-groups-for-natural-language
1806.05947
null
null
Discovering User Groups for Natural Language Generation
We present a model which predicts how individual users of a dialog system understand and produce utterances based on user groups. In contrast to previous work, these user groups are not specified beforehand, but learned in training. We evaluate on two referring expression (RE) generation tasks; our experiments show that our model can identify user groups and learn how to most effectively talk to them, and can dynamically assign unseen users to the correct groups as they interact with the system.
null
http://arxiv.org/abs/1806.05947v1
http://arxiv.org/pdf/1806.05947v1.pdf
WS 2018 7
[ "Nikos Engonopoulos", "Christoph Teichmann", "Alexander Koller" ]
[ "Referring Expression", "Text Generation" ]
2018-06-15T00:00:00
https://aclanthology.org/W18-5018
https://aclanthology.org/W18-5018.pdf
discovering-user-groups-for-natural-language-1
null
[]
https://paperswithcode.com/paper/efficient-nearest-neighbors-search-for-large
1806.05946
null
null
Efficient Nearest Neighbors Search for Large-Scale Landmark Recognition
Landmark recognition has achieved excellent results on small-scale datasets. When dealing with large-scale retrieval, issues that were irrelevant with small amounts of data quickly become fundamental for an efficient retrieval phase. In particular, computational time needs to be kept as low as possible, whilst the retrieval accuracy has to be preserved as much as possible. In this paper we propose a novel multi-index hashing method called Bag of Indexes (BoI) for Approximate Nearest Neighbors (ANN) search. It allows one to drastically reduce the query time and outperforms state-of-the-art methods in accuracy for large-scale landmark recognition. It has been demonstrated that this family of algorithms can be applied to different embedding techniques like VLAD and R-MAC, obtaining excellent results in very short times on different public datasets: Holidays+Flickr1M, Oxford105k and Paris106k.
It allows one to drastically reduce the query time and outperforms state-of-the-art methods in accuracy for large-scale landmark recognition.
http://arxiv.org/abs/1806.05946v1
http://arxiv.org/pdf/1806.05946v1.pdf
null
[ "Federico Magliani", "Tomaso Fontanini", "Andrea Prati" ]
[ "Landmark Recognition", "Retrieval" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/repmet-representative-based-metric-learning
1806.04728
null
null
RepMet: Representative-based metric learning for classification and one-shot object detection
Distance metric learning (DML) has been successfully applied to object classification, both in the standard regime of rich training data and in the few-shot scenario, where each category is represented by only a few examples. In this work, we propose a new method for DML that simultaneously learns the backbone network parameters, the embedding space, and the multi-modal distribution of each of the training categories in that space, in a single end-to-end training process. Our approach outperforms state-of-the-art methods for DML-based object classification on a variety of standard fine-grained datasets. Furthermore, we demonstrate the effectiveness of our approach on the problem of few-shot object detection, by incorporating the proposed DML architecture as a classification head into a standard object detection model. We achieve the best results on the ImageNet-LOC dataset compared to strong baselines, when only a few training examples are available. We also offer the community a new episodic benchmark based on the ImageNet dataset for the few-shot object detection task.
Distance metric learning (DML) has been successfully applied to object classification, both in the standard regime of rich training data and in the few-shot scenario, where each category is represented by only a few examples.
http://arxiv.org/abs/1806.04728v3
http://arxiv.org/pdf/1806.04728v3.pdf
null
[ "Leonid Karlinsky", "Joseph Shtok", "Sivan Harary", "Eli Schwartz", "Amit Aides", "Rogerio Feris", "Raja Giryes", "Alex M. Bronstein" ]
[ "Classification", "Few-Shot Object Detection", "General Classification", "Metric Learning", "Object", "object-detection", "Object Detection", "One-Shot Object Detection" ]
2018-06-12T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/query-k-means-clustering-and-the-double-dixie
1806.05938
null
null
Query K-means Clustering and the Double Dixie Cup Problem
We consider the problem of approximate $K$-means clustering with outliers and side information provided by same-cluster queries and possibly noisy answers. Our solution shows that, under some mild assumptions on the smallest cluster size, one can obtain a $(1+\epsilon)$-approximation for the optimal potential with probability at least $1-\delta$, where $\epsilon>0$ and $\delta\in(0,1)$, using an expected number of $O(\frac{K^3}{\epsilon \delta})$ noiseless same-cluster queries and comparison-based clustering of complexity $O(ndK + \frac{K^3}{\epsilon \delta})$, where $n$ denotes the number of points and $d$ the dimension of the space. Compared to a handful of other known approaches that perform importance sampling to account for small cluster sizes, the proposed query technique reduces the number of queries by a factor of roughly $O(\frac{K^6}{\epsilon^3})$, at the cost of possibly missing very small clusters. We extend this setting to the case where some queries to the oracle produce erroneous information, and where certain points, termed outliers, do not belong to any clusters. Our proof techniques differ from previous methods used for $K$-means clustering analysis, as they rely on estimating the sizes of the clusters and the number of points needed for accurate centroid estimation and subsequent nontrivial generalizations of the double Dixie cup problem. We illustrate the performance of the proposed algorithm both on synthetic and real datasets, including MNIST and CIFAR $10$.
null
http://arxiv.org/abs/1806.05938v2
http://arxiv.org/pdf/1806.05938v2.pdf
NeurIPS 2018 12
[ "I Chien", "Chao Pan", "Olgica Milenkovic" ]
[ "Clustering" ]
2018-06-15T00:00:00
http://papers.nips.cc/paper/7899-query-k-means-clustering-and-the-double-dixie-cup-problem
http://papers.nips.cc/paper/7899-query-k-means-clustering-and-the-double-dixie-cup-problem.pdf
query-k-means-clustering-and-the-double-dixie-1
null
[]
https://paperswithcode.com/paper/dynamic-weight-alignment-for-temporal
1712.06530
null
null
Dynamic Weight Alignment for Temporal Convolutional Neural Networks
In this paper, we propose a method of improving temporal Convolutional Neural Networks (CNN) by determining the optimal alignment of weights and inputs using dynamic programming. Conventional CNN convolutions linearly match the shared weights to a window of the input. However, it is possible that there exists a better alignment of weights. Thus, we propose the use of Dynamic Time Warping (DTW) to dynamically align the weights to the input of the convolutional layer. Specifically, the dynamic alignment overcomes issues such as temporal distortion by finding the minimal distance matching of the weights and the inputs under constraints. We demonstrate the effectiveness of the proposed architecture on the Unipen online handwritten digit and character datasets, the UCI Spoken Arabic Digit dataset, and the UCI Activities of Daily Life dataset.
null
http://arxiv.org/abs/1712.06530v6
http://arxiv.org/pdf/1712.06530v6.pdf
null
[ "Brian Kenji Iwana", "Seiichi Uchida" ]
[ "Dynamic Time Warping", "Time Series Analysis" ]
2017-12-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/learning-semantic-sentence-embeddings-using-1
1806.00807
null
null
Learning Semantic Sentence Embeddings using Sequential Pair-wise Discriminator
In this paper, we propose a method for obtaining sentence-level embeddings. While the problem of obtaining word-level embeddings is very well studied, sentence-level embeddings have received far less attention. We obtain them with a simple method in the context of solving the paraphrase generation task. If we use a sequential encoder-decoder model for generating paraphrases, we would like the generated paraphrase to be semantically close to the original sentence. One way to ensure this is by adding constraints for true paraphrase embeddings to be close and unrelated paraphrase candidate sentence embeddings to be far. This is ensured by using a sequential pair-wise discriminator that shares weights with the encoder and is trained with a suitable loss function. Our loss function penalizes large distances between the embeddings of paraphrase sentences. This loss is used in combination with a sequential encoder-decoder network. We also validated our method by evaluating the obtained embeddings on a sentiment analysis task. The proposed method results in semantic embeddings and outperforms the state-of-the-art on the paraphrase generation and sentiment analysis tasks on standard datasets. These results are also shown to be statistically significant.
One way to ensure this is by adding constraints for true paraphrase embeddings to be close and unrelated paraphrase candidate sentence embeddings to be far.
http://arxiv.org/abs/1806.00807v5
http://arxiv.org/pdf/1806.00807v5.pdf
COLING 2018 8
[ "Badri N. Patro", "Vinod K. Kurmi", "Sandeep Kumar", "Vinay P. Namboodiri" ]
[ "Decoder", "Paraphrase Generation", "Sentence", "Sentence Embedding", "Sentence-Embedding", "Sentence Embeddings", "Sentiment Analysis" ]
2018-06-03T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/bayesian-best-arm-identification-for
1711.06299
null
null
Bayesian Best-Arm Identification for Selecting Influenza Mitigation Strategies
Pandemic influenza has the epidemic potential to kill millions of people. While various preventive measures exist (i.a., vaccination and school closures), deciding on strategies that lead to their most effective and efficient use remains challenging. To this end, individual-based epidemiological models are essential to assist decision makers in determining the best strategy to curb epidemic spread. However, individual-based models are computationally intensive and it is therefore pivotal to identify the optimal strategy using a minimal amount of model evaluations. Additionally, as epidemiological modeling experiments need to be planned, a computational budget needs to be specified a priori. Consequently, we present a new sampling technique to optimize the evaluation of preventive strategies using fixed budget best-arm identification algorithms. We use epidemiological modeling theory to derive knowledge about the reward distribution which we exploit using Bayesian best-arm identification algorithms (i.e., Top-two Thompson sampling and BayesGap). We evaluate these algorithms in a realistic experimental setting and demonstrate that it is possible to identify the optimal strategy using only a limited number of model evaluations, i.e., 2-to-3 times faster compared to the uniform sampling method, the predominant technique used for epidemiological decision making in the literature. Finally, we contribute and evaluate a statistic for Top-two Thompson sampling to inform the decision makers about the confidence of an arm recommendation.
null
http://arxiv.org/abs/1711.06299v2
http://arxiv.org/pdf/1711.06299v2.pdf
null
[ "Pieter Libin", "Timothy Verstraeten", "Diederik M. Roijers", "Jelena Grujic", "Kristof Theys", "Philippe Lemey", "Ann Nowé" ]
[ "Decision Making", "Thompson Sampling" ]
2017-11-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/robust-bayesian-model-selection-for-variable
1806.05924
null
null
Robust Bayesian Model Selection for Variable Clustering with the Gaussian Graphical Model
Variable clustering is important for explanatory analysis. However, only a few dedicated methods for variable clustering with the Gaussian graphical model have been proposed. Even more severely, small insignificant partial correlations due to noise can dramatically change the clustering result when evaluating, for example, with the Bayesian Information Criterion (BIC). In this work, we try to address this issue by proposing a Bayesian model that accounts for negligibly small, but not necessarily zero, partial correlations. Based on our model, we propose to evaluate a variable clustering result using the marginal likelihood. To address the intractable calculation of the marginal likelihood, we propose two solutions: one based on a variational approximation, and another based on MCMC. Experiments on simulated data show that the proposed method is similarly accurate as BIC in the no-noise setting, but considerably more accurate when there are noisy partial correlations. Furthermore, on real data the proposed method provides clustering results that are intuitively sensible, which is not always the case when using BIC or its extensions.
Even more severely, small insignificant partial correlations due to noise can dramatically change the clustering result when evaluating, for example, with the Bayesian Information Criterion (BIC).
http://arxiv.org/abs/1806.05924v1
http://arxiv.org/pdf/1806.05924v1.pdf
null
[ "Daniel Andrade", "Akiko Takeda", "Kenji Fukumizu" ]
[ "Clustering", "model", "Model Selection" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/online-feature-ranking-for-intrusion
1803.00530
null
null
Online Feature Ranking for Intrusion Detection Systems
Many current approaches to the design of intrusion detection systems apply feature selection in a static, non-adaptive fashion. These methods often neglect the dynamic nature of network data, which requires the use of adaptive feature selection techniques. In this paper, we present a simple technique based on incremental learning of support vector machines in order to rank the features in real time within a streaming model for network data. Some illustrative numerical experiments with two popular benchmark datasets show that our approach allows us to adapt to changes in normal network behaviour and to novel attack patterns that have not been experienced before.
null
http://arxiv.org/abs/1803.00530v2
http://arxiv.org/pdf/1803.00530v2.pdf
null
[ "Buse Gul Atli", "Alexander Jung" ]
[ "feature selection", "Incremental Learning", "Intrusion Detection" ]
2018-03-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/an-empirical-analysis-of-the-correlation-of
1806.05900
null
null
An Empirical Analysis of the Correlation of Syntax and Prosody
The relation of syntax and prosody (the syntax--prosody interface) has been an active area of research, mostly in linguistics and typically studied under controlled conditions. More recently, prosody has also been successfully used in the data-based training of syntax parsers. However, there is a gap between the controlled and detailed study of the individual effects between syntax and prosody and the large-scale application of prosody in syntactic parsing with only a shallow analysis of the respective influences. In this paper, we close the gap by investigating the significance of correlations of prosodic realization with specific syntactic functions using linear mixed effects models in a very large corpus of read-out German encyclopedic texts. Using this corpus, we are able to analyze prosodic structuring performed by a diverse set of speakers while they try to optimize factual content delivery. After normalization by speaker, we obtain significant effects, e.g. confirming that the subject function, as compared to the object function, has a positive effect on pitch and duration of a word, but a negative effect on loudness.
null
http://arxiv.org/abs/1806.05900v1
http://arxiv.org/pdf/1806.05900v1.pdf
null
[ "Arne Köhn", "Timo Baumann", "Oskar Dörfler" ]
[]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/magix-model-agnostic-globally-interpretable
1706.07160
null
null
MAGIX: Model Agnostic Globally Interpretable Explanations
Explaining the behavior of a black box machine learning model at the instance level is useful for building trust. However, it is also important to understand how the model behaves globally. Such an understanding provides insight into both the data on which the model was trained and the patterns that it learned. We present here an approach that learns if-then rules to globally explain the behavior of black box machine learning models that have been used to solve classification problems. The approach works by first extracting conditions that were important at the instance level and then evolving rules through a genetic algorithm with an appropriate fitness function. Collectively, these rules represent the patterns followed by the model for decisioning and are useful for understanding its behavior. We demonstrate the validity and usefulness of the approach by interpreting black box models created using publicly available data sets as well as a private digital marketing data set.
null
http://arxiv.org/abs/1706.07160v3
http://arxiv.org/pdf/1706.07160v3.pdf
null
[ "Nikaash Puri", "Piyush Gupta", "Pratiksha Agarwal", "Sukriti Verma", "Balaji Krishnamurthy" ]
[ "BIG-bench Machine Learning", "Marketing", "model" ]
2017-06-22T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/improving-width-based-planning-with-compact
1806.05898
null
null
Improving width-based planning with compact policies
Optimal action selection in decision problems characterized by sparse, delayed rewards is still an open challenge. For these problems, current deep reinforcement learning methods require enormous amounts of data to learn controllers that reach human-level performance. In this work, we propose a method that interleaves planning and learning to address this issue. The planning step hinges on the Iterated-Width (IW) planner, a state-of-the-art planner that makes explicit use of the state representation to perform structured exploration. IW is able to scale up to problems independently of the size of the state space. From the state-actions visited by IW, the learning step estimates a compact policy, which in turn is used to guide the planning step. The type of exploration used by our method is radically different from the standard random exploration used in RL. We evaluate our method on simple problems where we show it to have superior performance to the state-of-the-art reinforcement learning algorithms A2C and Alpha Zero. Finally, we present preliminary results on a subset of the Atari games suite.
null
http://arxiv.org/abs/1806.05898v1
http://arxiv.org/pdf/1806.05898v1.pdf
null
[ "Miquel Junyent", "Anders Jonsson", "Vicenç Gómez" ]
[ "Atari Games", "Deep Reinforcement Learning", "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2018-06-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**A2C**, or **Advantage Actor Critic**, is a synchronous version of the [A3C](https://paperswithcode.com/method/a3c) policy gradient method. As an alternative to the asynchronous implementation of A3C, A2C is a synchronous, deterministic implementation that waits for each actor to finish its segment of experience before updating, averaging over all of the actors. This more effectively uses GPUs due to larger batch sizes.\r\n\r\nImage Credit: [OpenAI Baselines](https://openai.com/blog/baselines-acktr-a2c/)", "full_name": "A2C", "introduced_year": 2000, "main_collection": { "area": "Reinforcement Learning", "description": "**Policy Gradient Methods** try to optimize the policy function directly in reinforcement learning. This contrasts with, for example, Q-Learning, where the policy manifests itself as maximizing a value function. Below you can find a continuously updating catalog of policy gradient methods.", "name": "Policy Gradient Methods", "parent": null }, "name": "A2C", "source_title": "Asynchronous Methods for Deep Reinforcement Learning", "source_url": "http://arxiv.org/abs/1602.01783v2" } ]
https://paperswithcode.com/paper/mining-rank-data
1806.05897
null
null
Mining Rank Data
The problem of frequent pattern mining has been studied quite extensively for various types of data, including sets, sequences, and graphs. Somewhat surprisingly, another important type of data, namely rank data, has received very little attention in data mining so far. In this paper, we therefore address the problem of mining rank data, that is, data in the form of rankings (total orders) of an underlying set of items. More specifically, two types of patterns are considered, namely frequent rankings and dependencies between such rankings in the form of association rules. Algorithms for mining frequent rankings and frequent closed rankings are proposed and tested experimentally, using both synthetic and real data.
null
http://arxiv.org/abs/1806.05897v1
http://arxiv.org/pdf/1806.05897v1.pdf
null
[ "Sascha Henzgen", "Eyke Hüllermeier" ]
[ "Form" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/learning-front-end-filter-bank-parameters
1806.05892
null
null
Learning Front-end Filter-bank Parameters using Convolutional Neural Networks for Abnormal Heart Sound Detection
Automatic heart sound abnormality detection can play a vital role in the early diagnosis of heart diseases, particularly in low-resource settings. The state-of-the-art algorithms for this task utilize a set of Finite Impulse Response (FIR) band-pass filters as a front-end followed by a Convolutional Neural Network (CNN) model. In this work, we propound a novel CNN architecture that integrates the front-end bandpass filters within the network using time-convolution (tConv) layers, which enables the FIR filter-bank parameters to become learnable. Different initialization strategies for the learnable filters, including random parameters and a set of predefined FIR filter-bank coefficients, are examined. Using the proposed tConv layers, we add constraints to the learnable FIR filters to ensure linear and zero phase responses. Experimental evaluations are performed on a balanced 4-fold cross-validation task prepared using the PhysioNet/CinC 2016 dataset. Results demonstrate that the proposed models yield superior performance compared to the state-of-the-art system, while the linear phase FIR filterbank method provides an absolute improvement of 9.54% over the baseline in terms of an overall accuracy metric.
In this work, we propound a novel CNN architecture that integrates the front-end bandpass filters within the network using time-convolution (tConv) layers, which enables the FIR filter-bank parameters to become learnable.
http://arxiv.org/abs/1806.05892v1
http://arxiv.org/pdf/1806.05892v1.pdf
null
[ "Ahmed Imtiaz Humayun", "Shabnam Ghaffarzadegan", "Zhe Feng", "Taufiq Hasan" ]
[ "Anomaly Detection" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/structured-low-rank-matrix-learning
1704.07352
null
null
Structured low-rank matrix learning: algorithms and applications
We consider the problem of learning a low-rank matrix, constrained to lie in a linear subspace, and introduce a novel factorization for modeling such matrices. A salient feature of the proposed factorization scheme is that it decouples the low-rank and the structural constraints onto separate factors. We formulate the optimization problem on the Riemannian spectrahedron manifold, where the Riemannian framework allows us to develop computationally efficient conjugate gradient and trust-region algorithms. Experiments on problems such as standard/robust/non-negative matrix completion, Hankel matrix learning and multi-task learning demonstrate the efficacy of our approach. A shorter version of this work has been published in ICML'18.
null
http://arxiv.org/abs/1704.07352v5
http://arxiv.org/pdf/1704.07352v5.pdf
null
[ "Pratik Jawanpuria", "Bamdev Mishra" ]
[ "Matrix Completion", "Multi-Task Learning" ]
2017-04-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/hierarchical-novelty-detection-for-visual
1804.00722
null
null
Hierarchical Novelty Detection for Visual Object Recognition
Deep neural networks have achieved impressive success in large-scale visual object recognition tasks with a predefined set of classes. However, recognizing objects of novel classes unseen during training still remains challenging. The problem of detecting such novel classes has been addressed in the literature, but most prior works have focused on providing simple binary or regressive decisions, e.g., the output would be "known," "novel," or corresponding confidence intervals. In this paper, we study more informative novelty detection schemes based on a hierarchical classification framework. For an object of a novel class, we aim for finding its closest super class in the hierarchical taxonomy of known classes. To this end, we propose two different approaches termed top-down and flatten methods, and their combination as well. The essential ingredients of our methods are confidence-calibrated classifiers, data relabeling, and the leave-one-out strategy for modeling novel classes under the hierarchical taxonomy. Furthermore, our method can generate a hierarchical embedding that leads to improved generalized zero-shot learning performance in combination with other commonly-used semantic embeddings.
null
http://arxiv.org/abs/1804.00722v2
http://arxiv.org/pdf/1804.00722v2.pdf
CVPR 2018 6
[ "Kibok Lee", "Kimin Lee", "Kyle Min", "Yuting Zhang", "Jinwoo Shin", "Honglak Lee" ]
[ "Generalized Zero-Shot Learning", "Novelty Detection", "Object", "Object Recognition", "Zero-Shot Learning" ]
2018-04-02T00:00:00
http://openaccess.thecvf.com/content_cvpr_2018/html/Lee_Hierarchical_Novelty_Detection_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Lee_Hierarchical_Novelty_Detection_CVPR_2018_paper.pdf
hierarchical-novelty-detection-for-visual-1
null
[]
https://paperswithcode.com/paper/automated-image-data-preprocessing-with-deep
1806.05886
null
null
Automated Image Data Preprocessing with Deep Reinforcement Learning
Data preparation, i.e. the process of transforming raw data into a format that can be used for training effective machine learning models, is a tedious and time-consuming task. For image data, preprocessing typically involves a sequence of basic transformations such as cropping, filtering, rotating or flipping images. Currently, data scientists decide manually based on their experience which transformations to apply in which particular order to a given image data set. Besides constituting a bottleneck in real-world data science projects, manual image data preprocessing may yield suboptimal results as data scientists need to rely on intuition or trial-and-error approaches when exploring the space of possible image transformations and thus might not be able to discover the most effective ones. To mitigate the inefficiency and potential ineffectiveness of manual data preprocessing, this paper proposes a deep reinforcement learning framework to automatically discover the optimal data preprocessing steps for training an image classifier. The framework takes as input sets of labeled images and predefined preprocessing transformations. It jointly learns the classifier and the optimal preprocessing transformations for individual images. Experimental results show that the proposed approach not only improves the accuracy of image classifiers, but also makes them substantially more robust to noisy inputs at test time.
Data preparation, i.e. the process of transforming raw data into a format that can be used for training effective machine learning models, is a tedious and time-consuming task.
https://arxiv.org/abs/1806.05886v2
https://arxiv.org/pdf/1806.05886v2.pdf
null
[ "Tran Ngoc Minh", "Mathieu Sinn", "Hoang Thanh Lam", "Martin Wistuba" ]
[ "Deep Reinforcement Learning", "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-simple-blind-denoising-filter-inspired-by
1806.05882
null
null
A simple blind-denoising filter inspired by electrically coupled photoreceptors in the retina
Photoreceptors in the retina are coupled by electrical synapses called "gap junctions". It has long been established that gap junctions increase the signal-to-noise ratio of photoreceptors. Inspired by electrically coupled photoreceptors, we introduce a simple filter, the PR-filter, with only one variable. On the BSD68 dataset, the PR-filter showed outstanding SSIM performance in blind denoising tasks. It also significantly improved the performance of state-of-the-art convolutional neural network blind denoising on non-Gaussian noise. The ability to preserve more details might be attributed to the small receptive field of the photoreceptors.
null
http://arxiv.org/abs/1806.05882v4
http://arxiv.org/pdf/1806.05882v4.pdf
null
[ "Yang Yue", "Liuyuan He", "Gan He", "Jian. K. Liu", "Kai Du", "Yonghong Tian", "Tiejun Huang" ]
[ "Denoising", "SSIM" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/neural-stethoscopes-unifying-analytic
1806.05502
null
null
Scrutinizing and De-Biasing Intuitive Physics with Neural Stethoscopes
Visually predicting the stability of block towers is a popular task in the domain of intuitive physics. While previous work focusses on prediction accuracy, a one-dimensional performance measure, we provide a broader analysis of the learned physical understanding of the final model and how the learning process can be guided. To this end, we introduce neural stethoscopes as a general purpose framework for quantifying the degree of importance of specific factors of influence in deep neural networks as well as for actively promoting and suppressing information as appropriate. In doing so, we unify concepts from multitask learning as well as training with auxiliary and adversarial losses. We apply neural stethoscopes to analyse the state-of-the-art neural network for stability prediction. We show that the baseline model is susceptible to being misled by incorrect visual cues. This leads to a performance breakdown to the level of random guessing when training on scenarios where visual cues are inversely correlated with stability. Using stethoscopes to promote meaningful feature extraction increases performance from 51% to 90% prediction accuracy. Conversely, training on an easy dataset where visual cues are positively correlated with stability, the baseline model learns a bias leading to poor performance on a harder dataset. Using an adversarial stethoscope, the network is successfully de-biased, leading to a performance increase from 66% to 88%.
null
https://arxiv.org/abs/1806.05502v5
https://arxiv.org/pdf/1806.05502v5.pdf
null
[ "Fabian B. Fuchs", "Oliver Groth", "Adam R. Kosiorek", "Alex Bewley", "Markus Wulfmeier", "Andrea Vedaldi", "Ingmar Posner" ]
[]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/financial-risk-and-returns-prediction-with
1806.05876
null
null
Financial Risk and Returns Prediction with Modular Networked Learning
An artificial agent for financial risk and returns prediction is built with a modular cognitive system comprised of interconnected recurrent neural networks, such that the agent learns to predict the financial returns, and learns to predict the squared deviation around these predicted returns. These two expectations are used to build a volatility-sensitive interval prediction for financial returns, which is evaluated on three major financial indices and shown to be able to predict financial returns with a higher than 80% success rate in interval prediction in both training and testing, calling into question the Efficient Market Hypothesis. The agent is introduced as an example of a class of artificial intelligent systems that are equipped with a Modular Networked Learning cognitive system, defined as an integrated networked system of machine learning modules, where each module constitutes a functional unit that is trained for a given specific task that solves a subproblem of a complex main problem expressed as a network of linked subproblems. In the case of neural networks, these systems function as a form of an "artificial brain", where each module is like a specialized brain region comprised of a neural network with a specific architecture.
null
http://arxiv.org/abs/1806.05876v1
http://arxiv.org/pdf/1806.05876v1.pdf
null
[ "Carlos Pedro Gonçalves" ]
[ "Prediction" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/data-efficient-design-exploration-through
1806.05865
null
null
Data-Efficient Design Exploration through Surrogate-Assisted Illumination
Design optimization techniques are often used at the beginning of the design process to explore the space of possible designs. In these domains, illumination algorithms, such as MAP-Elites, are promising alternatives to classic optimization algorithms because they produce diverse, high-quality solutions in a single run, instead of only a single near-optimal solution. Unfortunately, these algorithms currently require a large number of function evaluations, limiting their applicability. In this article we introduce a new illumination algorithm, Surrogate-Assisted Illumination (SAIL), that leverages surrogate modeling techniques to create a map of the design space according to user-defined features while minimizing the number of fitness evaluations. On a 2-dimensional airfoil optimization problem, SAIL produces hundreds of diverse but high-performing designs with several orders of magnitude fewer evaluations than MAP-Elites or CMA-ES. We demonstrate that SAIL is also capable of producing maps of high-performing designs in realistic 3-dimensional aerodynamic tasks with an accurate flow simulation. Data-efficient design exploration with SAIL can help designers understand what is possible, beyond what is optimal, by considering more than pure objective-based optimization.
Design optimization techniques are often used at the beginning of the design process to explore the space of possible designs.
http://arxiv.org/abs/1806.05865v1
http://arxiv.org/pdf/1806.05865v1.pdf
null
[ "Adam Gaier", "Alexander Asteroth", "Jean-Baptiste Mouret" ]
[]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/deeplaser-practical-fault-attack-on-deep
1806.05859
null
null
DeepLaser: Practical Fault Attack on Deep Neural Networks
As deep learning systems are widely adopted in safety- and security-critical applications, such as autonomous vehicles and banking systems, malicious faults and attacks become a tremendous concern, which could potentially lead to catastrophic consequences. In this paper, we initiate the first study of leveraging physical fault injection attacks on Deep Neural Networks (DNNs), by using a laser injection technique on embedded systems. In particular, our exploratory study targets four widely used activation functions in DNN development, the general main building blocks of DNNs that create non-linear behaviors -- ReLU, softmax, sigmoid, and tanh. Our results show that by targeting these functions, it is possible to achieve a misclassification by injecting faults into the hidden layer of the network. Such a result can have practical implications for real-world applications, where faults can be introduced by simpler means (such as altering the supply voltage).
null
http://arxiv.org/abs/1806.05859v2
http://arxiv.org/pdf/1806.05859v2.pdf
null
[ "Jakub Breier", "Xiaolu Hou", "Dirmanto Jap", "Lei Ma", "Shivam Bhasin", "Yang Liu" ]
[ "Autonomous Vehicles" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/semantic-variation-in-online-communities-of
1806.05847
null
null
Semantic Variation in Online Communities of Practice
We introduce a framework for quantifying semantic variation of common words in Communities of Practice and in sets of topic-related communities. We show that while some meaning shifts are shared across related communities, others are community-specific, and therefore independent from the discussed topic. We propose such findings as evidence in favour of sociolinguistic theories of socially-driven semantic variation. Results are evaluated using an independent language modelling task. Furthermore, we investigate extralinguistic features and show that factors such as prominence and dissemination of words are related to semantic variation.
null
http://arxiv.org/abs/1806.05847v1
http://arxiv.org/pdf/1806.05847v1.pdf
WS 2017 1
[ "Marco Del Tredici", "Raquel Fernández" ]
[ "Language Modelling" ]
2018-06-15T00:00:00
https://aclanthology.org/W17-6804
https://aclanthology.org/W17-6804.pdf
semantic-variation-in-online-communities-of-1
null
[]
https://paperswithcode.com/paper/a-covariance-matrix-self-adaptation-evolution
1806.05845
null
null
A Covariance Matrix Self-Adaptation Evolution Strategy for Optimization under Linear Constraints
This paper addresses the development of a covariance matrix self-adaptation evolution strategy (CMSA-ES) for solving optimization problems with linear constraints. The proposed algorithm is referred to as Linear Constraint CMSA-ES (lcCMSA-ES). It uses a specially built mutation operator together with repair by projection to satisfy the constraints. The lcCMSA-ES evolves itself on a linear manifold defined by the constraints. The objective function is only evaluated at feasible search points (interior point method). This is a property often required in application domains such as simulation optimization and finite element methods. The algorithm is tested on a variety of different test problems revealing considerable results.
null
http://arxiv.org/abs/1806.05845v2
http://arxiv.org/pdf/1806.05845v2.pdf
null
[ "Patrick Spettel", "Hans-Georg Beyer", "Michael Hellwig" ]
[]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/co-clustering-via-information-theoretic
1801.00584
null
null
Co-Clustering via Information-Theoretic Markov Aggregation
We present an information-theoretic cost function for co-clustering, i.e., for simultaneous clustering of two sets based on similarities between their elements. By constructing a simple random walk on the corresponding bipartite graph, our cost function is derived from a recently proposed generalized framework for information-theoretic Markov chain aggregation. The goal of our cost function is to minimize relevant information loss, hence it connects to the information bottleneck formalism. Moreover, via the connection to Markov aggregation, our cost function is not ad hoc, but inherits its justification from the operational qualities associated with the corresponding Markov aggregation problem. We furthermore show that, for appropriate parameter settings, our cost function is identical to well-known approaches from the literature, such as Information-Theoretic Co-Clustering of Dhillon et al. Hence, understanding the influence of this parameter admits a deeper understanding of the relationship between previously proposed information-theoretic cost functions. We highlight some strengths and weaknesses of the cost function for different parameters. We also illustrate the performance of our cost function, optimized with a simple sequential heuristic, on several synthetic and real-world data sets, including the Newsgroup20 and the MovieLens100k data sets.
null
http://arxiv.org/abs/1801.00584v2
http://arxiv.org/pdf/1801.00584v2.pdf
null
[ "Clemens Bloechl", "Rana Ali Amjad", "Bernhard C. Geiger" ]
[ "Clustering" ]
2018-01-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/real-time-monocular-visual-odometry-for
1806.05842
null
null
Real-time Monocular Visual Odometry for Turbid and Dynamic Underwater Environments
In the context of robotic underwater operations, the visual degradations induced by the medium properties make the exclusive use of cameras difficult for localization purposes. Hence, most localization methods are based on expensive navigational sensors associated with acoustic positioning. On the other hand, visual odometry and visual SLAM have been exhaustively studied for aerial or terrestrial applications, but state-of-the-art algorithms fail underwater. In this paper we tackle the problem of using a simple low-cost camera for underwater localization and propose a new monocular visual odometry method dedicated to the underwater environment. We evaluate different tracking methods and show that optical flow based tracking is more suited to underwater images than classical approaches based on descriptors. We also propose a keyframe-based visual odometry approach relying heavily on nonlinear optimization. The proposed algorithm has been assessed on both simulated and real underwater datasets and outperforms state-of-the-art visual SLAM methods under many of the most challenging conditions. The main application of this work is the localization of Remotely Operated Vehicles (ROVs) used for underwater archaeological missions, but the developed system can be used in any other application as long as visual information is available.
null
https://arxiv.org/abs/1806.05842v3
https://arxiv.org/pdf/1806.05842v3.pdf
null
[ "Maxime Ferrera", "Julien Moras", "Pauline Trouvé-Peloux", "Vincent Creuze" ]
[ "Monocular Visual Odometry", "Optical Flow Estimation", "Visual Odometry" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-road-to-success-assessing-the-fate-of
1806.05838
null
null
The Road to Success: Assessing the Fate of Linguistic Innovations in Online Communities
We investigate the birth and diffusion of lexical innovations in a large dataset of online social communities. We build on sociolinguistic theories and focus on the relation between the spread of a novel term and the social role of the individuals who use it, uncovering characteristics of innovators and adopters. Finally, we perform a prediction task that allows us to anticipate whether an innovation will successfully spread within a community.
We investigate the birth and diffusion of lexical innovations in a large dataset of online social communities.
http://arxiv.org/abs/1806.05838v1
http://arxiv.org/pdf/1806.05838v1.pdf
COLING 2018 8
[ "Marco Del Tredici", "Raquel Fernández" ]
[ "Relation" ]
2018-06-15T00:00:00
https://aclanthology.org/C18-1135
https://aclanthology.org/C18-1135.pdf
the-road-to-success-assessing-the-fate-of-1
null
[]
https://paperswithcode.com/paper/on-the-exact-minimization-of-saturated-loss
1806.05833
null
null
On the exact minimization of saturated loss functions for robust regression and subspace estimation
This paper deals with robust regression and subspace estimation, and more precisely with the problem of minimizing a saturated loss function. In particular, we focus on computational complexity issues and show that an exact algorithm with polynomial time-complexity with respect to the number of data points can be devised for robust regression and subspace estimation. This result is obtained by adopting a classification point of view and relating the problems to the search for a linear model that can approximate the maximal number of points with a given error. Approximate variants of the algorithms based on random sampling are also discussed, and experiments show that they offer an accuracy gain over the traditional RANSAC for a similar algorithmic simplicity.
null
http://arxiv.org/abs/1806.05833v2
http://arxiv.org/pdf/1806.05833v2.pdf
null
[ "Fabien Lauer" ]
[ "General Classification", "regression" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/selfless-sequential-learning
1806.05421
null
Bkxbrn0cYX
Selfless Sequential Learning
Sequential learning, also called lifelong learning, studies the problem of learning tasks in a sequence with access restricted to only the data of the current task. In this paper we look at a scenario with fixed model capacity, and postulate that the learning process should not be selfish, i.e. it should account for future tasks to be added and thus leave enough capacity for them. To achieve Selfless Sequential Learning we study different regularization strategies and activation functions. We find that imposing sparsity at the level of the representation (i.e. neuron activations) is more beneficial for sequential learning than encouraging parameter sparsity. In particular, we propose a novel regularizer that encourages representation sparsity by means of neural inhibition. It results in few active neurons, which in turn leaves more free neurons to be utilized by upcoming tasks. As neural inhibition over an entire layer can be too drastic, especially for complex tasks requiring strong representations, our regularizer only inhibits other neurons in a local neighbourhood, inspired by lateral inhibition processes in the brain. We combine our novel regularizer with state-of-the-art lifelong learning methods that penalize changes to important previously learned parts of the network. We show that our new regularizer leads to increased sparsity, which translates into consistent performance improvement on diverse datasets.
In particular, we propose a novel regularizer that encourages representation sparsity by means of neural inhibition.
http://arxiv.org/abs/1806.05421v5
http://arxiv.org/pdf/1806.05421v5.pdf
ICLR 2019 5
[ "Rahaf Aljundi", "Marcus Rohrbach", "Tinne Tuytelaars" ]
[ "Lifelong learning" ]
2018-06-14T00:00:00
https://openreview.net/forum?id=Bkxbrn0cYX
https://openreview.net/pdf?id=Bkxbrn0cYX
selfless-sequential-learning-1
null
[]
https://paperswithcode.com/paper/temporal-stability-in-predictive-process
1712.04165
null
null
Temporal Stability in Predictive Process Monitoring
Predictive process monitoring is concerned with the analysis of events produced during the execution of a business process in order to predict as early as possible the final outcome of an ongoing case. Traditionally, predictive process monitoring methods are optimized with respect to accuracy. However, in environments where users make decisions and take actions in response to the predictions they receive, it is equally important to optimize the stability of the successive predictions made for each case. To this end, this paper defines a notion of temporal stability for binary classification tasks in predictive process monitoring and evaluates existing methods with respect to both temporal stability and accuracy. We find that methods based on XGBoost and LSTM neural networks exhibit the highest temporal stability. We then show that temporal stability can be enhanced by hyperparameter-optimizing random forests and XGBoost classifiers with respect to inter-run stability. Finally, we show that time series smoothing techniques can further enhance temporal stability at the expense of slightly lower accuracy.
We then show that temporal stability can be enhanced by hyperparameter-optimizing random forests and XGBoost classifiers with respect to inter-run stability.
http://arxiv.org/abs/1712.04165v3
http://arxiv.org/pdf/1712.04165v3.pdf
null
[ "Irene Teinemaa", "Marlon Dumas", "Anna Leontjeva", "Fabrizio Maria Maggi" ]
[ "Binary Classification", "Predictive Process Monitoring", "Time Series", "Time Series Analysis" ]
2017-12-12T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277", "description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.", "full_name": "Sigmoid Activation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "Sigmoid Activation", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329", "description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)", "full_name": "Tanh Activation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. 
For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "Tanh Activation", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)", "full_name": "Long Short-Term Memory", "introduced_year": 1997, "main_collection": { "area": "Sequential", "description": "", "name": "Recurrent Neural Networks", "parent": null }, "name": "LSTM", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/three-dimensional-deep-learning-approach-for
1806.05824
null
null
Three dimensional Deep Learning approach for remote sensing image classification
Recently, a variety of approaches has been enriching the field of Remote Sensing (RS) image processing and analysis. Unfortunately, existing methods remain limited when faced with the rich spatio-spectral content of today's large datasets. It would seem intriguing to resort to Deep Learning (DL) based approaches at this stage, given their ability to offer accurate semantic interpretation of the data. However, the specificity introduced by the coexistence of spectral and spatial content in RS datasets widens the scope of the challenges involved in adapting DL methods to these contexts. Therefore, the aim of this paper is firstly to explore the performance of DL architectures for RS hyperspectral dataset classification, and secondly to introduce a new three-dimensional DL approach that enables joint processing of spectral and spatial information. A set of three-dimensional schemes is proposed and evaluated. Experimental results based on well-known hyperspectral datasets demonstrate that the proposed method is able to achieve a better classification rate than state-of-the-art methods with lower computational costs.
null
http://arxiv.org/abs/1806.05824v1
http://arxiv.org/pdf/1806.05824v1.pdf
null
[ "Amina Ben Hamida", "A. Benoit", "Patrick Lambert", "Chokri Ben Amar" ]
[ "Classification", "Deep Learning", "General Classification", "image-classification", "Image Classification", "Remote Sensing Image Classification", "Specificity" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/primal-dual-residual-networks
1806.05823
null
null
Primal-dual residual networks
In this work, we propose a deep neural network architecture motivated by primal-dual splitting methods from convex optimization. We show theoretically that there exists a close relation between the derived architecture and residual networks, and further investigate this connection in numerical experiments. Moreover, we demonstrate how our approach can be used to unroll optimization algorithms for certain problems with hard constraints. Using the example of speech dequantization, we show that our method can outperform classical splitting methods when both are applied to the same task.
null
http://arxiv.org/abs/1806.05823v1
http://arxiv.org/pdf/1806.05823v1.pdf
null
[ "Christoph Brauer", "Dirk Lorenz" ]
[]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/bubblerank-safe-online-learning-to-rerank
1806.05819
null
null
BubbleRank: Safe Online Learning to Re-Rank via Implicit Click Feedback
In this paper, we study the problem of safe online learning to re-rank, where user feedback is used to improve the quality of displayed lists. Learning to rank has traditionally been studied in two settings. In the offline setting, rankers are typically learned from relevance labels created by judges. This approach has generally become standard in industrial applications of ranking, such as search. However, this approach lacks exploration and thus is limited by the information content of the offline training data. In the online setting, an algorithm can experiment with lists and learn from feedback on them in a sequential fashion. Bandit algorithms are well-suited for this setting but they tend to learn user preferences from scratch, which results in a high initial cost of exploration. This poses an additional challenge of safe exploration in ranked lists. We propose BubbleRank, a bandit algorithm for safe re-ranking that combines the strengths of both the offline and online settings. The algorithm starts with an initial base list and improves it online by gradually exchanging higher-ranked less attractive items for lower-ranked more attractive items. We prove an upper bound on the n-step regret of BubbleRank that degrades gracefully with the quality of the initial base list. Our theoretical findings are supported by extensive experiments on a large-scale real-world click dataset.
null
https://arxiv.org/abs/1806.05819v2
https://arxiv.org/pdf/1806.05819v2.pdf
null
[ "Chang Li", "Branislav Kveton", "Tor Lattimore", "Ilya Markov", "Maarten de Rijke", "Csaba Szepesvari", "Masrour Zoghi" ]
[ "Learning-To-Rank", "Re-Ranking", "Safe Exploration" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/safe-active-feature-selection-for-sparse
1806.05817
null
null
Safe Active Feature Selection for Sparse Learning
We present safe active incremental feature selection (SAIF) to scale up the computation of LASSO solutions. SAIF does not require a solution from a heavier penalty parameter as in sequential screening, or updating the full model at each iteration as in dynamic screening. Different from these existing screening methods, SAIF starts from a small number of features, incrementally recruits active features, and updates the significantly reduced model. Hence, it is much more computationally efficient and scalable with the number of features. More critically, SAIF is safe in that it is guaranteed to converge to the optimal solution of the original full LASSO problem. Such an incremental procedure and theoretical convergence guarantee can be extended to fused LASSO problems. Compared with state-of-the-art screening methods as well as working set and homotopy methods, which may not always guarantee the optimal solution, SAIF can achieve superior or comparable efficiency and high scalability with the safety guarantee when facing extremely high-dimensional data sets. Experiments with both synthetic and real-world data sets show that SAIF can be up to 50 times faster than dynamic screening, and hundreds of times faster than computing LASSO or fused LASSO solutions without screening.
null
http://arxiv.org/abs/1806.05817v2
http://arxiv.org/pdf/1806.05817v2.pdf
null
[ "Shaogang Ren", "Jianhua Z. Huang", "Shuai Huang", "Xiaoning Qian" ]
[ "feature selection", "Sparse Learning" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/sgm-sequence-generation-model-for-multi-label
1806.04822
null
null
SGM: Sequence Generation Model for Multi-label Classification
Multi-label classification is an important yet challenging task in natural language processing. It is more complex than single-label classification in that the labels tend to be correlated. Existing methods tend to ignore the correlations between labels. Besides, different parts of the text can contribute differently for predicting different labels, which is not considered by existing models. In this paper, we propose to view the multi-label classification task as a sequence generation problem, and apply a sequence generation model with a novel decoder structure to solve it. Extensive experimental results show that our proposed methods outperform previous work by a substantial margin. Further analysis of experimental results demonstrates that the proposed methods not only capture the correlations between labels, but also select the most informative words automatically when predicting different labels.
Further analysis of experimental results demonstrates that the proposed methods not only capture the correlations between labels, but also select the most informative words automatically when predicting different labels.
http://arxiv.org/abs/1806.04822v3
http://arxiv.org/pdf/1806.04822v3.pdf
COLING 2018 8
[ "Pengcheng Yang", "Xu sun", "Wei Li", "Shuming Ma", "Wei Wu", "Houfeng Wang" ]
[ "Classification", "Decoder", "General Classification", "model", "Multi-Label Classification", "MUlTI-LABEL-ClASSIFICATION" ]
2018-06-13T00:00:00
https://aclanthology.org/C18-1330
https://aclanthology.org/C18-1330.pdf
sgm-sequence-generation-model-for-multi-label-1
null
[]
https://paperswithcode.com/paper/best-sources-forward-domain-generalization
1806.05810
null
null
Best sources forward: domain generalization through source-specific nets
A long-standing problem in visual object categorization is the ability of algorithms to generalize across different testing conditions. The problem has been formalized as a covariate shift among the probability distributions generating the training data (source) and the test data (target), and several domain adaptation methods have been proposed to address this issue. While these approaches have considered the single source-single target scenario, it is plausible to have multiple sources and require adaptation to any possible target domain. This last scenario, named Domain Generalization (DG), is the focus of our work. Differently from previous DG methods, which learn domain-invariant representations from source data, we design a deep network with multiple domain-specific classifiers, each associated to a source domain. At test time we estimate the probabilities that a target sample belongs to each source domain and exploit them to optimally fuse the classifiers' predictions. To further improve the generalization ability of our model, we also introduce a domain-agnostic component supporting the final classifier. Experiments on two public benchmarks demonstrate the power of our approach.
null
http://arxiv.org/abs/1806.05810v1
http://arxiv.org/pdf/1806.05810v1.pdf
null
[ "Massimiliano Mancini", "Samuel Rota Bulò", "Barbara Caputo", "Elisa Ricci" ]
[ "Domain Adaptation", "Domain Generalization", "Object Categorization" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/molecular-generative-model-based-on
1806.05805
null
null
Molecular generative model based on conditional variational autoencoder for de novo molecular design
We propose a molecular generative model based on the conditional variational autoencoder for de novo molecular design. It is specialized to control multiple molecular properties simultaneously by imposing them on a latent space. As a proof of concept, we demonstrate that it can be used to generate drug-like molecules with five target properties. We were also able to adjust a single property without changing the others and to manipulate it beyond the range of the dataset.
We propose a molecular generative model based on the conditional variational autoencoder for de novo molecular design.
http://arxiv.org/abs/1806.05805v1
http://arxiv.org/pdf/1806.05805v1.pdf
null
[ "Jaechang Lim", "Seongok Ryu", "Jin Woo Kim", "Woo Youn Kim" ]
[]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/hybrid-approach-of-relation-network-and
1711.05859
null
null
Hybrid Approach of Relation Network and Localized Graph Convolutional Filtering for Breast Cancer Subtype Classification
Network biology has been successfully used to help reveal complex mechanisms of disease, especially cancer. On the other hand, network biology requires in-depth knowledge to construct disease-specific networks, but our current knowledge is very limited even with the recent advances in human cancer biology. Deep learning has shown a great potential to address the difficult situation like this. However, deep learning technologies conventionally use grid-like structured data, thus application of deep learning technologies to the classification of human disease subtypes is yet to be explored. Recently, graph based deep learning techniques have emerged, which becomes an opportunity to leverage analyses in network biology. In this paper, we proposed a hybrid model, which integrates two key components 1) graph convolution neural network (graph CNN) and 2) relation network (RN). We utilize graph CNN as a component to learn expression patterns of cooperative gene community, and RN as a component to learn associations between learned patterns. The proposed model is applied to the PAM50 breast cancer subtype classification task, the standard breast cancer subtype classification of clinical utility. In experiments of both subtype classification and patient survival analysis, our proposed method achieved significantly better performances than existing methods. We believe that this work is an important starting point to realize the upcoming personalized medicine.
In this paper, we propose a hybrid model that integrates two key components: 1) a graph convolution neural network (graph CNN) and 2) a relation network (RN).
http://arxiv.org/abs/1711.05859v3
http://arxiv.org/pdf/1711.05859v3.pdf
null
[ "Sungmin Rhee", "Seokjun Seo", "Sun Kim" ]
[ "Classification", "Deep Learning", "General Classification", "Relation", "Relation Network", "Survival Analysis" ]
2017-11-16T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/learning-to-act-properly-predicting-and
1712.07576
null
null
Learning to Act Properly: Predicting and Explaining Affordances from Images
We address the problem of affordance reasoning in diverse scenes that appear in the real world. Affordances relate the agent's actions to their effects when taken on the surrounding objects. In our work, we take the egocentric view of the scene, and aim to reason about action-object affordances that respect both the physical world as well as the social norms imposed by the society. We also aim to teach artificial agents why some actions should not be taken in certain situations, and what would likely happen if these actions would be taken. We collect a new dataset that builds upon ADE20k, referred to as ADE-Affordance, which contains annotations enabling such rich visual reasoning. We propose a model that exploits Graph Neural Networks to propagate contextual information from the scene in order to perform detailed affordance reasoning about each object. Our model is showcased through various ablation studies, pointing to successes and challenges in this complex task.
null
http://arxiv.org/abs/1712.07576v2
http://arxiv.org/pdf/1712.07576v2.pdf
CVPR 2018 6
[ "Ching-Yao Chuang", "Jiaman Li", "Antonio Torralba", "Sanja Fidler" ]
[ "Visual Reasoning" ]
2017-12-20T00:00:00
http://openaccess.thecvf.com/content_cvpr_2018/html/Chuang_Learning_to_Act_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Chuang_Learning_to_Act_CVPR_2018_paper.pdf
learning-to-act-properly-predicting-and-1
null
[]
https://paperswithcode.com/paper/weakly-supervised-deep-image-hashing-through
1806.05804
null
null
Weakly Supervised Deep Image Hashing through Tag Embeddings
Many approaches to semantic image hashing have been formulated as supervised learning problems that utilize images and label information to learn the binary hash codes. However, large-scale labeled image data is expensive to obtain, thus imposing a restriction on the usage of such algorithms. On the other hand, unlabelled image data is abundant due to the existence of many Web image repositories. Such Web images may often come with image tags that contain useful information, although raw tags, in general, do not readily lead to semantic labels. Motivated by this scenario, we formulate the problem of semantic image hashing as a weakly-supervised learning problem. We utilize the information contained in the user-generated tags associated with the images to learn the hash codes. More specifically, we extract the word2vec semantic embeddings of the tags and use the information contained in them for constraining the learning. Accordingly, we name our model Weakly Supervised Deep Hashing using Tag Embeddings (WDHT). WDHT is tested for the task of semantic image retrieval and is compared against several state-of-the-art models. Results show that our approach sets a new state of the art in the area of weakly supervised image hashing.
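One plausible way to constrain hash learning with tag embeddings, sketched below under our own assumptions (the paper's exact objective may differ), is to align pairwise similarities of the relaxed hash codes with word2vec tag-embedding similarities and add a quantization penalty pushing codes toward binary values.

```python
import torch
import torch.nn.functional as F

def tag_embedding_hash_loss(codes, tag_emb, quant_weight=0.1):
    # codes: (batch, bits) relaxed hash codes from the network
    # tag_emb: (batch, dim) word2vec embeddings of each image's tags
    code_sim = F.normalize(codes) @ F.normalize(codes).t()
    tag_sim = F.normalize(tag_emb) @ F.normalize(tag_emb).t()
    align = F.mse_loss(code_sim, tag_sim)        # match pairwise similarities
    quant = (codes.abs() - 1.0).pow(2).mean()    # push codes toward {-1, +1}
    return align + quant_weight * quant
```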
We utilize the information contained in the user-generated tags associated with the images to learn the hash codes.
http://arxiv.org/abs/1806.05804v3
http://arxiv.org/pdf/1806.05804v3.pdf
CVPR 2019 6
[ "Vijetha Gattupalli", "Yaoxin Zhuo", "Baoxin Li" ]
[ "Deep Hashing", "Image Retrieval", "Retrieval", "TAG", "Weakly-supervised Learning" ]
2018-06-15T00:00:00
http://openaccess.thecvf.com/content_CVPR_2019/html/Gattupalli_Weakly_Supervised_Deep_Image_Hashing_Through_Tag_Embeddings_CVPR_2019_paper.html
http://openaccess.thecvf.com/content_CVPR_2019/papers/Gattupalli_Weakly_Supervised_Deep_Image_Hashing_Through_Tag_Embeddings_CVPR_2019_paper.pdf
weakly-supervised-deep-image-hashing-through-1
null
[]
https://paperswithcode.com/paper/wikiref-wikilinks-as-a-route-to-recommending
1806.04092
null
null
WikiRef: Wikilinks as a route to recommending appropriate references for scientific Wikipedia pages
The exponential increase in the usage of Wikipedia as a key source of scientific knowledge among researchers is making it absolutely necessary to metamorphose this knowledge repository into an integral and self-contained source of information for direct utilization. Unfortunately, the references that support the content of each Wikipedia entity page are far from complete. Why is the reference section ill-formed for most Wikipedia pages? Is this section edited as frequently as the other sections of a page? Can there be appropriate surrogates that can automatically enhance the reference section? In this paper, we propose a novel two-step approach -- WikiRef -- that (i) leverages the wikilinks present in a scientific Wikipedia target page and, thereby, (ii) recommends highly relevant references to be included in that target page appropriately and automatically borrowed from the reference section of the wikilinks. In the first step, we build a classifier to ascertain whether a wikilink is a potential source of reference or not. In the following step, we recommend references to the target page from the reference section of the wikilinks that are classified as potential sources of references in the first step. We perform an extensive evaluation of our approach on datasets from two different domains -- Computer Science and Physics. For Computer Science, we achieve notably good performance, with a precision@1 of 0.44 for reference recommendation, as opposed to 0.38 obtained from the most competitive baseline. For the Physics dataset, we obtain a similar performance boost of 10% with respect to the most competitive baseline.
null
http://arxiv.org/abs/1806.04092v2
http://arxiv.org/pdf/1806.04092v2.pdf
COLING 2018 8
[ "Abhik Jana", "Pranjal Kanojiya", "Pawan Goyal", "Animesh Mukherjee" ]
[]
2018-06-11T00:00:00
https://aclanthology.org/C18-1032
https://aclanthology.org/C18-1032.pdf
wikiref-wikilinks-as-a-route-to-recommending-2
null
[]
https://paperswithcode.com/paper/disentangled-person-image-generation
1712.02621
null
null
Disentangled Person Image Generation
Generating novel, yet realistic, images of persons is a challenging task due to the complex interplay between the different image factors, such as the foreground, background and pose information. In this work, we aim at generating such images based on a novel, two-stage reconstruction pipeline that learns a disentangled representation of the aforementioned image factors and generates novel person images at the same time. First, a multi-branched reconstruction network is proposed to disentangle and encode the three factors into embedding features, which are then combined to re-compose the input image itself. Second, three corresponding mapping functions are learned in an adversarial manner in order to map Gaussian noise to the learned embedding feature space, for each factor respectively. Using the proposed framework, we can manipulate the foreground, background and pose of the input image, and also sample new embedding features to generate such targeted manipulations, that provide more control over the generation process. Experiments on Market-1501 and Deepfashion datasets show that our model does not only generate realistic person images with new foregrounds, backgrounds and poses, but also manipulates the generated factors and interpolates the in-between states. Another set of experiments on Market-1501 shows that our model can also be beneficial for the person re-identification task.
Generating novel, yet realistic, images of persons is a challenging task due to the complex interplay between the different image factors, such as the foreground, background and pose information.
http://arxiv.org/abs/1712.02621v4
http://arxiv.org/pdf/1712.02621v4.pdf
CVPR 2018 6
[ "Liqian Ma", "Qianru Sun", "Stamatios Georgoulis", "Luc van Gool", "Bernt Schiele", "Mario Fritz" ]
[ "Gesture-to-Gesture Translation", "Image Generation", "Person Re-Identification", "Pose Transfer" ]
2017-12-07T00:00:00
http://openaccess.thecvf.com/content_cvpr_2018/html/Ma_Disentangled_Person_Image_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Ma_Disentangled_Person_Image_CVPR_2018_paper.pdf
disentangled-person-image-generation-1
null
[]
https://paperswithcode.com/paper/scalable-factorized-hierarchical-variational
1804.03201
null
null
Scalable Factorized Hierarchical Variational Autoencoder Training
Deep generative models have achieved great success in unsupervised learning with the ability to capture complex nonlinear relationships between latent generating factors and observations. Among them, a factorized hierarchical variational autoencoder (FHVAE) is a variational inference-based model that formulates a hierarchical generative process for sequential data. Specifically, an FHVAE model can learn disentangled and interpretable representations, which have been proven useful for numerous speech applications, such as speaker verification, robust speech recognition, and voice conversion. However, as we will elaborate in this paper, the training algorithm proposed in the original paper is not scalable to datasets of thousands of hours, which makes this model less applicable on a larger scale. After identifying limitations in terms of runtime, memory, and hyperparameter optimization, we propose a hierarchical sampling training algorithm to address all three issues. Our proposed method is evaluated comprehensively on a wide variety of datasets, ranging from 3 to 1,000 hours and involving different types of generating factors, such as recording conditions and noise types. In addition, we also present a new visualization method for qualitatively evaluating the performance with respect to the interpretability and disentanglement. Models trained with our proposed algorithm demonstrate the desired characteristics on all the datasets.
Deep generative models have achieved great success in unsupervised learning with the ability to capture complex nonlinear relationships between latent generating factors and observations.
http://arxiv.org/abs/1804.03201v2
http://arxiv.org/pdf/1804.03201v2.pdf
null
[ "Wei-Ning Hsu", "James Glass" ]
[ "Disentanglement", "Hyperparameter Optimization", "Robust Speech Recognition", "Speaker Verification", "speech-recognition", "Speech Recognition", "Variational Inference", "Voice Conversion" ]
2018-04-09T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Please enter a description about the method here", "full_name": "Interpretability", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.", "name": "Image Models", "parent": null }, "name": "Interpretability", "source_title": "CAM: Causal additive models, high-dimensional order search and penalized regression", "source_url": "http://arxiv.org/abs/1310.1533v2" }, { "code_snippet_url": "", "description": "In today’s digital age, Solana has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Solana transaction not confirmed, your Solana wallet not showing balance, or you're trying to recover a lost Solana wallet, knowing where to get help is essential. That’s why the Solana customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Solana Customer Support Number +1-833-534-1729\r\nSolana operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Solana Transaction Not Confirmed\r\nOne of the most common concerns is when a Solana transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Solana Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Solana wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Solana Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Solana wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Solana Deposit Not Received\r\nIf someone has sent you Solana but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Solana deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Solana Transaction Stuck or Pending\r\nSometimes your Solana transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. 
A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Solana Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Solana wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Solana Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Solana tech.\r\n\r\n24/7 Availability: Solana doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Solana Support and Wallet Issues\r\nQ1: Can Solana support help me recover stolen BTC?\r\nA: While Solana transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Solana transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Solana’s official number (Solana is decentralized), it connects you to trained professionals experienced in resolving all major Solana issues.\r\n\r\nFinal Thoughts\r\nSolana is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Solana transaction not confirmed, your Solana wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Solana customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.", "full_name": "Solana Customer Service Number +1-833-534-1729", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "Solana Customer Service Number +1-833-534-1729", "source_title": "Reducing the Dimensionality of Data with Neural Networks", "source_url": "https://science.sciencemag.org/content/313/5786/504" } ]
https://paperswithcode.com/paper/learning-6-dof-grasping-interaction-via-deep
1708.07303
null
null
Learning 6-DOF Grasping Interaction via Deep Geometry-aware 3D Representations
This paper focuses on the problem of learning 6-DOF grasping with a parallel jaw gripper in simulation. We propose the notion of a geometry-aware representation in grasping based on the assumption that knowledge of 3D geometry is at the heart of interaction. Our key idea is constraining and regularizing grasping interaction learning through 3D geometry prediction. Specifically, we formulate the learning of our deep geometry-aware grasping model in two steps: First, we learn to build a mental geometry-aware representation by reconstructing the scene (i.e., a 3D occupancy grid) from RGBD input via generative 3D shape modeling. Second, we learn to predict the grasping outcome with this internal geometry-aware representation. The learned outcome prediction model is used to sequentially propose grasping solutions via analysis-by-synthesis optimization. Our contributions are fourfold: (1) To the best of our knowledge, we are presenting for the first time a method to learn a 6-DOF grasping net from RGBD input; (2) We build a grasping dataset from demonstrations in virtual reality with rich sensory and interaction annotations. This dataset includes 101 everyday objects spread across 7 categories; additionally, we propose a data augmentation strategy for effective learning; (3) We demonstrate that the learned geometry-aware representation leads to about 10 percent relative performance improvement over the baseline CNN on grasping objects from our dataset. (4) We further demonstrate that the model generalizes to novel viewpoints and object instances.
Our contributions are fourfold: (1) To the best of our knowledge, we are presenting for the first time a method to learn a 6-DOF grasping net from RGBD input; (2) We build a grasping dataset from demonstrations in virtual reality with rich sensory and interaction annotations.
http://arxiv.org/abs/1708.07303v4
http://arxiv.org/pdf/1708.07303v4.pdf
null
[ "Xinchen Yan", "Jasmine Hsu", "Mohi Khansari", "Yunfei Bai", "Arkanath Pathak", "Abhinav Gupta", "James Davidson", "Honglak Lee" ]
[ "3D geometry", "3D Geometry Prediction", "3D Shape Modeling", "Data Augmentation" ]
2017-08-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/satr-dl-improving-surgical-skill-assessment
1806.05798
null
null
SATR-DL: Improving Surgical Skill Assessment and Task Recognition in Robot-assisted Surgery with Deep Neural Networks
Purpose: This paper focuses on an automated analysis of surgical motion profiles for objective skill assessment and task recognition in robot-assisted surgery. Existing techniques heavily rely on conventional statistical measures or shallow models based on hand-engineered features and gesture segmentation. Such developments require significant expert knowledge, are prone to errors, and are less efficient in online adaptive training systems. Methods: In this work, we present an efficient analytic framework with a parallel deep learning architecture, SATR-DL, to assess trainee expertise and recognize surgical training activity. Through an end-to-end learning technique, abstract information of spatial representations and temporal dynamics is jointly obtained directly from raw motion sequences. Results: By leveraging shared high-level representation learning, the resulting model is successful in the recognition of trainee skills and surgical tasks: suturing, needle-passing, and knot-tying. Meanwhile, we explore the use of ensembles in classification at the trial level, where SATR-DL outperforms the state of the art by achieving accuracies of 0.960 and 1.000 in skill assessment and task recognition, respectively. Conclusion: This study highlights the potential of SATR-DL to provide improvements for efficient data-driven assessment in intelligent robotic surgery.
null
http://arxiv.org/abs/1806.05798v1
http://arxiv.org/pdf/1806.05798v1.pdf
null
[ "Ziheng Wang", "Ann Majewicz Fey" ]
[ "Representation Learning" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/supervised-speech-separation-based-on-deep
1708.07524
null
null
Supervised Speech Separation Based on Deep Learning: An Overview
Speech separation is the task of separating target speech from background interference. Traditionally, speech separation is studied as a signal processing problem. A more recent approach formulates speech separation as a supervised learning problem, where the discriminative patterns of speech, speakers, and background noise are learned from training data. Over the past decade, many supervised separation algorithms have been put forward. In particular, the recent introduction of deep learning to supervised speech separation has dramatically accelerated progress and boosted separation performance. This article provides a comprehensive overview of the research on deep learning based supervised speech separation in the last several years. We first introduce the background of speech separation and the formulation of supervised separation. Then we discuss three main components of supervised separation: learning machines, training targets, and acoustic features. Much of the overview is on separation algorithms where we review monaural methods, including speech enhancement (speech-nonspeech separation), speaker separation (multi-talker separation), and speech dereverberation, as well as multi-microphone techniques. The important issue of generalization, unique to supervised learning, is discussed. This overview provides a historical perspective on how advances are made. In addition, we discuss a number of conceptual issues, including what constitutes the target source.
null
http://arxiv.org/abs/1708.07524v2
http://arxiv.org/pdf/1708.07524v2.pdf
null
[ "DeLiang Wang", "Jitong Chen" ]
[ "Deep Learning", "Speaker Separation", "Speech Dereverberation", "Speech Enhancement", "Speech Separation" ]
2017-08-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/deep-learning-with-convolutional-neural-2
1806.05796
null
null
Deep Learning with Convolutional Neural Network for Objective Skill Evaluation in Robot-assisted Surgery
With the advent of robot-assisted surgery, the role of data-driven approaches to integrate statistics and machine learning is growing rapidly, with prominent interest in objective surgical skill assessment. However, most existing work requires translating robot motion kinematics into intermediate features or gesture segments that are expensive to extract, lack efficiency, and require significant domain-specific knowledge. We propose an analytical deep learning framework for skill assessment in surgical training. A deep convolutional neural network is implemented to map multivariate time series data of the motion kinematics to individual skill levels. We perform experiments on the public minimally invasive surgical robotic dataset, the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our proposed learning model achieved competitive accuracies of 92.5%, 95.4%, and 91.3% in the standard training tasks: Suturing, Needle-passing, and Knot-tying, respectively. Without the need for engineered features or carefully tuned gesture segmentation, our model can successfully decode skill information from raw motion profiles via end-to-end learning. Meanwhile, the proposed model is able to reliably interpret skills within a 1-3 second window, without needing an observation of the entire training trial. This study highlights the potential of deep architectures for proficient online skill assessment in modern surgical training.
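A minimal sketch of the kind of 1-D convolutional classifier described here, mapping raw multivariate kinematics to skill levels; the channel count, layer sizes, and class name `SkillCNN` are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SkillCNN(nn.Module):
    """1-D convolutions over raw kinematic channels, global pooling over
    time, and a linear head over the skill levels."""
    def __init__(self, channels=76, classes=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(64, classes)

    def forward(self, x):            # x: (batch, channels, time)
        return self.head(self.body(x).squeeze(-1))
```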
null
http://arxiv.org/abs/1806.05796v2
http://arxiv.org/pdf/1806.05796v2.pdf
null
[ "Ziheng Wang", "Ann Majewicz Fey" ]
[ "Time Series", "Time Series Analysis" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/rapidnn-in-memory-deep-neural-network
1806.05794
null
null
RAPIDNN: In-Memory Deep Neural Network Acceleration Framework
Deep neural networks (DNN) have demonstrated effectiveness for various applications such as image processing, video segmentation, and speech recognition. Running state-of-the-art DNNs on current systems mostly relies on either general-purpose processors, ASIC designs, or FPGA accelerators, all of which suffer from data movements due to the limited on-chip memory and data transfer bandwidth. In this work, we propose a novel framework, called RAPIDNN, which processes all DNN operations within the memory to minimize the cost of data movement. To enable in-memory processing, RAPIDNN reinterprets a DNN model and maps it into a specialized accelerator, which is designed using non-volatile memory blocks that model four fundamental DNN operations, i.e., multiplication, addition, activation functions, and pooling. The framework extracts representative operands of a DNN model, e.g., weights and input values, using clustering methods to optimize the model for in-memory processing. Then, it maps the extracted operands and their precomputed results into the accelerator memory blocks. At runtime, the accelerator identifies computation results based on efficient in-memory search capability, which also provides tunability of approximation to further improve computation efficiency. Our evaluation shows that RAPIDNN achieves 68.4x, 49.5x energy efficiency improvement and 48.1x, 10.9x speedup as compared to ISAAC and PipeLayer, the state-of-the-art DNN accelerators, while ensuring less than 0.3% of quality loss.
null
http://arxiv.org/abs/1806.05794v4
http://arxiv.org/pdf/1806.05794v4.pdf
null
[ "Mohsen Imani", "Mohammad Samragh", "Yeseong Kim", "Saransh Gupta", "Farinaz Koushanfar", "Tajana Rosing" ]
[ "Clustering", "speech-recognition", "Speech Recognition", "Video Segmentation", "Video Semantic Segmentation" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/recurrent-multiresolution-convolutional
1806.05793
null
null
Recurrent Multiresolution Convolutional Networks for VHR Image Classification
Classification of very high resolution (VHR) satellite images has three major challenges: 1) inherent low intra-class and high inter-class spectral similarities, 2) mismatching resolution of available bands, and 3) the need to regularize noisy classification maps. Conventional methods have addressed these challenges by adopting separate stages of image fusion, feature extraction, and post-classification map regularization. These processing stages, however, are not jointly optimizing the classification task at hand. In this study, we propose a single-stage framework embedding the processing stages in a recurrent multiresolution convolutional network trained in an end-to-end manner. The feedforward version of the network, called FuseNet, aims to match the resolution of the panchromatic and multispectral bands in a VHR image using convolutional layers with corresponding downsampling and upsampling operations. Contextual label information is incorporated into FuseNet by means of a recurrent version called ReuseNet. We compared FuseNet and ReuseNet against the use of separate processing steps for both image fusion, e.g. pansharpening and resampling through interpolation, and map regularization such as conditional random fields. We carried out our experiments on a land cover classification task using a Worldview-03 image of Quezon City, Philippines and the ISPRS 2D semantic labeling benchmark dataset of Vaihingen, Germany. FuseNet and ReuseNet surpass the baseline approaches in both quantitative and qualitative results.
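To illustrate the resolution-matching idea behind FuseNet, here is a hedged sketch in which low-resolution multispectral bands are upsampled to the panchromatic grid and fused by concatenation; channel counts, the scale factor, and the block name `FuseBlock` are assumptions, not the published network.

```python
import torch
import torch.nn as nn

class FuseBlock(nn.Module):
    """Upsamples low-resolution multispectral bands to the panchromatic
    grid with learned convolutions, then fuses the two streams."""
    def __init__(self, ms_ch=4, pan_ch=1, feat=32, scale=4):
        super().__init__()
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode='bilinear', align_corners=False),
            nn.Conv2d(ms_ch, feat, 3, padding=1), nn.ReLU())
        self.pan = nn.Sequential(nn.Conv2d(pan_ch, feat, 3, padding=1), nn.ReLU())

    def forward(self, ms, pan):      # ms: (B, ms_ch, h, w), pan: (B, pan_ch, H, W)
        return torch.cat([self.up(ms), self.pan(pan)], dim=1)
```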
null
http://arxiv.org/abs/1806.05793v1
http://arxiv.org/pdf/1806.05793v1.pdf
null
[ "John Ray Bergado", "Claudio Persello", "Alfred Stein" ]
[ "Classification", "General Classification", "image-classification", "Image Classification", "Land Cover Classification", "Pansharpening" ]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mobilefacenets-efficient-cnns-for-accurate
1804.07573
null
null
MobileFaceNets: Efficient CNNs for Accurate Real-Time Face Verification on Mobile Devices
Face Analysis Project on MXNet
Face Analysis Project on MXNet
http://arxiv.org/abs/1804.07573v4
http://arxiv.org/pdf/1804.07573v4.pdf
null
[ "Sheng Chen", "Yang Liu", "Xiang Gao", "Zhen Han" ]
[ "Lightweight Face Recognition" ]
2018-04-20T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/monaural-source-enhancement-maximizing-source
1806.05791
null
null
Monaural source enhancement maximizing source-to-distortion ratio via automatic differentiation
Recently, deep neural networks (DNN) have made a breakthrough in monaural source enhancement. Through a training step using a large amount of data, a DNN estimates a mapping between mixed signals and clean signals. In doing so, we use an objective function that numerically expresses the quality of the mapping learned by the DNN. In the conventional methods, the L1 norm, L2 norm, and Itakura-Saito divergence are often used as objective functions. Recently, an objective function based on short-time objective intelligibility (STOI) has also been proposed. However, these functions only indicate similarity between the clean signal and the signal estimated by the DNN. In other words, they do not show the quality of noise reduction or source enhancement. Motivated by this fact, this paper adopts the signal-to-distortion ratio (SDR) as the objective function. Since SDR virtually shows the signal-to-noise ratio (SNR), maximizing SDR resolves the above problem. The experimental results revealed that the proposed method achieved better performance than the conventional methods.
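Since the objective is a closed-form ratio, a differentiable negative-SDR loss is straightforward to write down; the sketch below follows the usual SNR-style definition (the paper's exact SDR formulation may differ in details such as scaling).

```python
import torch

def neg_sdr(estimate, clean, eps=1e-8):
    """Negative signal-to-distortion ratio in dB; minimizing this with
    automatic differentiation maximizes SDR of the enhanced signal."""
    num = (clean ** 2).sum(dim=-1)
    den = ((clean - estimate) ** 2).sum(dim=-1) + eps
    return -10.0 * torch.log10(num / den + eps).mean()
```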
null
http://arxiv.org/abs/1806.05791v1
http://arxiv.org/pdf/1806.05791v1.pdf
null
[ "Hiroaki Nakajima", "Yu Takahashi", "Kazunobu Kondo", "Yuji Hisaminato" ]
[]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/image-classification-and-retrieval-with
1806.05789
null
null
Image classification and retrieval with random depthwise signed convolutional neural networks
We propose a random convolutional neural network to generate a feature space in which we study image classification and retrieval performance. Put briefly, we apply random convolutional blocks followed by global average pooling to generate a new feature, and we repeat this k times to produce a k-dimensional feature space. This can be interpreted as partitioning the space of image patches with random hyperplanes, which we formalize as a random depthwise convolutional neural network. In the network's final layer we perform image classification and retrieval with the linear support vector machine and k-nearest neighbor classifiers and study other empirical properties. We show that the ratio of image pixel distribution similarity across classes to within classes is higher in our network's final layer compared to the input space. When we apply the linear support vector machine for image classification, we see that the accuracy is higher than if we were to train just the final layer of VGG16, ResNet18, and DenseNet40 with random weights. In the same setting we compare it to an unsupervised feature learning method and find our accuracy to be comparable on CIFAR10 but higher on CIFAR100 and STL10. We see that the accuracy is not far behind that of trained networks, particularly in the top-k setting. For example, the top-2 accuracy of our network is near 90% on both CIFAR10 and a 10-class mini ImageNet, and 85% on STL10. We find that k-nearest neighbor gives a precision on the Corel Princeton Image Similarity Benchmark comparable to that obtained with the final layer of trained networks. As with other networks, we find that our network is vulnerable to a black-box attack, even though it lacks a gradient and uses the sign activation. We highlight sensitivity of our network to background as a potential pitfall and an advantage. Overall, our work pushes the boundary of what can be achieved with random weights.
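A minimal sketch of the feature construction as described: k random convolutional blocks with sign activations, each reduced to one scalar per image by global average pooling, with no training of the convolutional weights; the kernel size and input channels are assumptions.

```python
import torch
import torch.nn as nn

def random_sign_features(images, k=64):
    """k fixed random conv blocks with sign activations, each reduced to
    one scalar per image by global average pooling; nothing is trained."""
    feats = []
    with torch.no_grad():
        for _ in range(k):
            conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # random hyperplanes
            feats.append(torch.sign(conv(images)).mean(dim=(2, 3)))
    return torch.cat(feats, dim=1)    # (batch, k), ready for a linear SVM / k-NN
```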
We find that k-nearest neighbor gives a precision on the Corel Princeton Image Similarity Benchmark comparable to that obtained with the final layer of trained networks.
http://arxiv.org/abs/1806.05789v3
http://arxiv.org/pdf/1806.05789v3.pdf
null
[ "Yunzhe Xue", "Usman Roshan" ]
[ "General Classification", "image-classification", "Image Classification", "Retrieval" ]
2018-06-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157", "description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.", "full_name": "Global Average Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ", "name": "Pooling Operations", "parent": null }, "name": "Global Average Pooling", "source_title": "Network In Network", "source_url": "http://arxiv.org/abs/1312.4400v3" }, { "code_snippet_url": "", "description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)", "full_name": "Average Pooling", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ", "name": "Pooling Operations", "parent": null }, "name": "Average Pooling", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/a-survey-of-automatic-facial-micro-expression
1806.05781
null
null
A Survey of Automatic Facial Micro-expression Analysis: Databases, Methods and Challenges
Over the last few years, automatic facial micro-expression analysis has garnered increasing attention from experts across different disciplines because of its potential applications in various fields such as clinical diagnosis, forensic investigation and security systems. Advances in computer algorithms and video acquisition technology have rendered machine analysis of facial micro-expressions possible today, in contrast to decades ago when it was primarily the domain of psychiatrists where analysis was largely manual. Indeed, although the study of facial micro-expressions is a well-established field in psychology, it is still relatively new from the computational perspective with many interesting problems. In this survey, we present a comprehensive review of state-of-the-art databases and methods for micro-expression spotting and recognition. Individual stages involved in the automation of these tasks are also described and reviewed at length. In addition, we also deliberate on the challenges and future directions in this growing field of automatic facial micro-expression analysis.
null
http://arxiv.org/abs/1806.05781v1
http://arxiv.org/pdf/1806.05781v1.pdf
null
[ "Yee-Hui Oh", "John See", "Anh Cat Le Ngo", "Raphael Chung-Wei Phan", "Vishnu Monn Baskaran" ]
[]
2018-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/surprising-negative-results-for-generative
1806.05780
null
BJl4f2A5tQ
Surprising Negative Results for Generative Adversarial Tree Search
While many recent advances in deep reinforcement learning (RL) rely on model-free methods, model-based approaches remain an alluring prospect for their potential to exploit unsupervised data to learn environment models. In this work, we provide an extensive study on the design of deep generative models for RL environments and propose a sample-efficient and robust method to learn the model of Atari environments. We deploy this model and propose generative adversarial tree search (GATS), a deep RL algorithm that learns the environment model and implements Monte Carlo tree search (MCTS) on the learned model for planning. While MCTS on the learned model is computationally expensive, similar to AlphaGo, GATS follows depth-limited MCTS. GATS employs a deep Q network (DQN) and learns a Q-function to assign values to the leaves of the tree in MCTS. We theoretically analyze GATS vis-a-vis the bias-variance trade-off and show that GATS is able to mitigate the worst-case error in the Q-estimate. While we were expecting GATS to enjoy better sample complexity and faster convergence to better policies, surprisingly, GATS fails to outperform DQN. We provide a study in which we show why depth-limited MCTS fails to perform desirably.
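The depth-limited evaluation can be sketched as a recursive rollout over a learned model with DQN leaf values, as below; `model` and `q_net` are hypothetical callables (returning a (next_state, reward) pair and a sequence of per-action values, respectively), not the paper's API.

```python
def depth_limited_value(model, q_net, state, depth, actions, gamma=0.99):
    """Roll the learned model forward to a fixed depth, then bootstrap
    leaf values with the Q-function learned by the DQN."""
    if depth == 0:
        return max(q_net(state))              # q_net returns one value per action
    values = []
    for a in actions:
        next_state, reward = model(state, a)  # learned generative model step
        values.append(reward + gamma * depth_limited_value(
            model, q_net, next_state, depth - 1, actions, gamma))
    return max(values)
```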
We deploy this model and propose generative adversarial tree search (GATS) a deep RL algorithm that learns the environment model and implements Monte Carlo tree search (MCTS) on the learned model for planning.
https://arxiv.org/abs/1806.05780v4
https://arxiv.org/pdf/1806.05780v4.pdf
ICLR 2019 5
[ "Kamyar Azizzadenesheli", "Brandon Yang", "Weitang Liu", "Zachary C. Lipton", "Animashree Anandkumar" ]
[ "Atari Games", "Deep Reinforcement Learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2018-06-15T00:00:00
https://openreview.net/forum?id=BJl4f2A5tQ
https://openreview.net/pdf?id=BJl4f2A5tQ
surprising-negative-results-for-generative-1
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Q-Learning** is an off-policy temporal difference control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\max\\_{a}Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThe learned action-value function $Q$ directly approximates $q\\_{*}$, the optimal action-value function, independent of the policy being followed.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition", "full_name": "Q-Learning", "introduced_year": 1984, "main_collection": { "area": "Reinforcement Learning", "description": "", "name": "Off-Policy TD Control", "parent": null }, "name": "Q-Learning", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "A **DQN**, or Deep Q-Network, approximates a state-value function in a [Q-Learning](https://paperswithcode.com/method/q-learning) framework with a neural network. In the Atari Games case, they take in several frames of the game as an input and output state values for each action as an output. 
\r\n\r\nIt is usually used in conjunction with [Experience Replay](https://paperswithcode.com/method/experience-replay), for storing the episode steps in memory for off-policy learning, where samples are drawn from the replay memory at random. Additionally, the Q-Network is usually optimized towards a frozen target network that is periodically updated with the latest weights every $k$ steps (where $k$ is a hyperparameter). The latter makes training more stable by preventing short-term oscillations from a moving target. The former tackles autocorrelation that would occur from on-line learning, and having a replay memory makes the problem more like a supervised learning problem.\r\n\r\nImage Source: [here](https://www.researchgate.net/publication/319643003_Autonomous_Quadrotor_Landing_using_Deep_Reinforcement_Learning)", "full_name": "Deep Q-Network", "introduced_year": 2000, "main_collection": { "area": "Reinforcement Learning", "description": "", "name": "Q-Learning Networks", "parent": "Off-Policy TD Control" }, "name": "DQN", "source_title": "Playing Atari with Deep Reinforcement Learning", "source_url": "http://arxiv.org/abs/1312.5602v1" } ]
https://paperswithcode.com/paper/deep-learning-approximation-zero-shot-neural
1806.05779
null
SyGdZXSboX
Deep Learning Approximation: Zero-Shot Neural Network Speedup
Neural networks offer high-accuracy solutions to a range of problems, but are costly to run in production systems because of computational and memory requirements during a forward pass. Given a trained network, we propose a technique called Deep Learning Approximation to build a faster network in a tiny fraction of the time required for training, by only manipulating the network structure and coefficients, without requiring re-training or access to the training data. Speedup is achieved by applying a sequential series of independent optimizations that reduce the floating-point operations (FLOPs) required to perform a forward pass. First, lossless optimizations are applied, followed by lossy approximations using singular value decomposition (SVD) and low-rank matrix decomposition. The optimal approximation is chosen by weighing the relative accuracy loss and FLOP reduction according to a single parameter specified by the user. On PASCAL VOC 2007 with the YOLO network, we show an end-to-end 2x speedup in a network forward pass with a 5% drop in mAP that can be re-gained by fine-tuning.
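The lossy SVD step can be illustrated with a few lines of NumPy: a dense weight matrix is replaced by two thin factors so a forward pass needs fewer FLOPs; the helper name and the rank-selection policy are assumptions, not the paper's implementation.

```python
import numpy as np

def low_rank_factorize(w, rank):
    """Replace a dense weight matrix W (m x n) by two thin factors so a
    forward pass costs O((m + n) * rank) instead of O(m * n) multiply-adds."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]    # (m, rank)
    b = vt[:rank, :]              # (rank, n)
    return a, b                   # use (x @ a) @ b in place of x @ w
```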
null
http://arxiv.org/abs/1806.05779v1
http://arxiv.org/pdf/1806.05779v1.pdf
null
[ "Michele Pratusevich" ]
[ "Deep Learning" ]
2018-06-15T00:00:00
https://openreview.net/forum?id=SyGdZXSboX
https://openreview.net/pdf?id=SyGdZXSboX
null
null
[]
https://paperswithcode.com/paper/gradient-descent-learns-one-hidden-layer-cnn
1712.00779
null
null
Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima
We consider the problem of learning a one-hidden-layer neural network with non-overlapping convolutional layer and ReLU activation, i.e., $f(\mathbf{Z}, \mathbf{w}, \mathbf{a}) = \sum_j a_j\sigma(\mathbf{w}^T\mathbf{Z}_j)$, in which both the convolutional weights $\mathbf{w}$ and the output weights $\mathbf{a}$ are parameters to be learned. When the labels are the outputs from a teacher network of the same architecture with fixed weights $(\mathbf{w}^*, \mathbf{a}^*)$, we prove that with Gaussian input $\mathbf{Z}$, there is a spurious local minimizer. Surprisingly, in the presence of the spurious local minimizer, gradient descent with weight normalization from randomly initialized weights can still be proven to recover the true parameters with constant probability, which can be boosted to probability $1$ with multiple restarts. We also show that with constant probability, the same procedure could also converge to the spurious local minimum, showing that the local minimum plays a non-trivial role in the dynamics of gradient descent. Furthermore, a quantitative analysis shows that the gradient descent dynamics has two phases: it starts off slow, but converges much faster after several iterations.
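For reference, the studied function is easy to write down directly; the sketch below evaluates $f(\mathbf{Z}, \mathbf{w}, \mathbf{a})$ on pre-extracted non-overlapping patches (the patch extraction itself is omitted, and variable names are our own).

```python
import torch

def one_hidden_layer_cnn(Z, w, a):
    """f(Z, w, a) = sum_j a_j * relu(w^T Z_j) over non-overlapping patches.
    Z: (num_patches, patch_dim), w: (patch_dim,), a: (num_patches,)."""
    return (a * torch.relu(Z @ w)).sum()
```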
null
http://arxiv.org/abs/1712.00779v2
http://arxiv.org/pdf/1712.00779v2.pdf
ICML 2018 7
[ "Simon S. Du", "Jason D. Lee", "Yuandong Tian", "Barnabas Poczos", "Aarti Singh" ]
[]
2017-12-03T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=1922
http://proceedings.mlr.press/v80/du18b/du18b.pdf
gradient-descent-learns-one-hidden-layer-cnn-1
null
[ { "code_snippet_url": "", "description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!", "full_name": "*Communicated@Fast*How Do I Communicate to Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "ReLU", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/1c5c289b6218eb1026dcb5fd9738231401cfccea/torch/nn/utils/weight_norm.py#L8", "description": "**Weight Normalization** is a normalization method for training neural networks. It is inspired by [batch normalization](https://paperswithcode.com/method/batch-normalization), but it is a deterministic method that does not share batch normalization's property of adding noise to the gradients. It reparameterizes each $k$-dimentional weight vector $\\textbf{w}$ in terms of a parameter vector $\\textbf{v}$ and a scalar parameter $g$ and to perform stochastic gradient descent with respect to those parameters instead. 
Weight vectors are expressed in terms of the new parameters using:\r\n\r\n$$ \\textbf{w} = \\frac{g}{\\Vert\\textbf{v}\\Vert}\\textbf{v}$$\r\n\r\nwhere $\\textbf{v}$ is a $k$-dimensional vector, $g$ is a scalar, and $\\Vert\\textbf{v}\\Vert$ denotes the Euclidean norm of $\\textbf{v}$. This reparameterization has the effect of fixing the Euclidean norm of the weight vector $\\textbf{w}$: we now have $\\Vert\\textbf{w}\\Vert = g$, independent of the parameters $\\textbf{v}$.", "full_name": "Weight Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Weight Normalization", "source_title": "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks", "source_url": "http://arxiv.org/abs/1602.07868v3" } ]
https://paperswithcode.com/paper/on-the-power-of-over-parametrization-in-1
1803.01206
null
null
On the Power of Over-parametrization in Neural Networks with Quadratic Activation
We provide new theoretical insights on why over-parametrization is effective in learning neural networks. For a $k$ hidden node shallow network with quadratic activation and $n$ training data points, we show that as long as $ k \ge \sqrt{2n}$, over-parametrization enables local search algorithms to find a \emph{globally} optimal solution for general smooth and convex loss functions. Further, although the number of parameters may exceed the sample size, we show using the theory of Rademacher complexity that, with weight decay, the solution also generalizes well if the data is sampled from a regular distribution such as a Gaussian. To prove that when $k\ge \sqrt{2n}$ the loss function has benign landscape properties, we adopt an idea from smoothed analysis, which may have other applications in studying loss surfaces of neural networks.
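The quadratic-activation network under study has a particularly compact form, sketched below; the variable names are our own.

```python
import torch

def quadratic_net(x, W):
    """Shallow network with quadratic activation: y = sum_k (w_k^T x)^2.
    x: (batch, d), W: (k, d) with k hidden units."""
    return (x @ W.t()).pow(2).sum(dim=-1)
```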
We provide new theoretical insights on why over-parametrization is effective in learning neural networks.
http://arxiv.org/abs/1803.01206v2
http://arxiv.org/pdf/1803.01206v2.pdf
ICML 2018
[ "Simon S. Du", "Jason D. Lee" ]
[]
2018-03-03T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/hardware-trojan-attacks-on-neural-networks
1806.05768
null
null
Hardware Trojan Attacks on Neural Networks
With the rising popularity of machine learning and the ever increasing demand for computational power, there is a growing need for hardware optimized implementations of neural networks and other machine learning models. As the technology evolves, it is also plausible that machine learning or artificial intelligence will soon become consumer electronic products and military equipment, in the form of well-trained models. Unfortunately, the modern fabless business model of manufacturing hardware, while economic, leads to deficiencies in security through the supply chain. In this paper, we illuminate these security issues by introducing hardware Trojan attacks on neural networks, expanding the current taxonomy of neural network security to incorporate attacks of this nature. To aid in this, we develop a novel framework for inserting malicious hardware Trojans in the implementation of a neural network classifier. We evaluate the capabilities of the adversary in this setting by implementing the attack algorithm on convolutional neural networks while controlling a variety of parameters available to the adversary. Our experimental results show that the proposed algorithm could effectively classify a selected input trigger as a specified class on the MNIST dataset by injecting hardware Trojans into $0.03\%$, on average, of neurons in the 5th hidden layer of arbitrary 7-layer convolutional neural networks, while undetectable under the test data. Finally, we discuss the potential defenses to protect neural networks against hardware Trojan attacks.
null
http://arxiv.org/abs/1806.05768v1
http://arxiv.org/pdf/1806.05768v1.pdf
null
[ "Joseph Clements", "Yingjie Lao" ]
[ "BIG-bench Machine Learning", "Neural Network Security" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/motion-planning-networks
1806.05767
null
null
Motion Planning Networks
Fast and efficient motion planning algorithms are crucial for many state-of-the-art robotics applications such as self-driving cars. Existing motion planning methods become ineffective as their computational complexity increases exponentially with the dimensionality of the motion planning problem. To address this issue, we present Motion Planning Networks (MPNet), a novel neural-network-based planning algorithm. The proposed method encodes the given workspaces directly from a point cloud measurement and generates end-to-end collision-free paths for the given start and goal configurations. We evaluate MPNet on various 2D and 3D environments, including the planning of a 7-DOF Baxter robot manipulator. The results show that MPNet is not only consistently computationally efficient in all environments but also generalizes to completely unseen environments. The results also show that the computation time of MPNet consistently remains less than 1 second in all presented experiments, which is significantly lower than that of existing state-of-the-art motion planning algorithms.
Fast and efficient motion planning algorithms are crucial for many state-of-the-art robotics applications such as self-driving cars.
http://arxiv.org/abs/1806.05767v2
http://arxiv.org/pdf/1806.05767v2.pdf
null
[ "Ahmed H. Qureshi", "Anthony Simeonov", "Mayur J. Bency", "Michael C. Yip" ]
[ "Motion Planning", "Self-Driving Cars", "Transfer Learning" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/generative-adversarial-networks-and
1806.05764
null
null
Generative Adversarial Networks and Perceptual Losses for Video Super-Resolution
Video super-resolution (VSR) has become one of the most critical problems in video processing. In the deep learning literature, recent works have shown the benefits of using adversarial and perceptual losses to improve performance on various image restoration tasks; however, these have yet to be applied to video super-resolution. In this work, we propose a Generative Adversarial Network (GAN)-based formulation for VSR. We introduce a new generator network optimized for the VSR problem, named VSRResNet, along with a new discriminator architecture to properly guide VSRResNet during GAN training. We further enhance our VSR GAN formulation with two regularizers, distance losses in feature space and pixel space, to obtain our final VSRResFeatGAN model. We show that pre-training our generator with only the mean-squared-error loss already quantitatively surpasses the current state-of-the-art VSR models. We then employ the PercepDist metric (Zhang et al., 2018) to compare state-of-the-art VSR models, and show that this metric more accurately evaluates the perceptual quality of SR solutions obtained from neural networks than the commonly used PSNR/SSIM metrics. Finally, we show that our proposed model, VSRResFeatGAN, outperforms current state-of-the-art SR models, both quantitatively and qualitatively.
null
http://arxiv.org/abs/1806.05764v2
http://arxiv.org/pdf/1806.05764v2.pdf
null
[ "Alice Lucas", "Santiago Lopez Tapia", "Rafael Molina", "Aggelos K. Katsaggelos" ]
[ "Generative Adversarial Network", "Image Restoration", "SSIM", "Super-Resolution", "Video Super-Resolution" ]
2018-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. 
Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. 
Expert help is just a call away—+1-833-534-1729.", "full_name": "Dogecoin Customer Service Number +1-833-534-1729", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "Dogecoin Customer Service Number +1-833-534-1729", "source_title": "Generative Adversarial Networks", "source_url": "https://arxiv.org/abs/1406.2661v1" } ]
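To illustrate the two regularizers described in the abstract above, here is a hedged NumPy sketch of a combined pixel-space and feature-space distance loss; the random linear map standing in for a feature extractor and the loss weights are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

# Combined distance loss in pixel space and feature space. A fixed random
# linear map stands in for a real learned feature extractor (assumption).
rng = np.random.default_rng(0)
feat = rng.normal(size=(64, 16)) / 8.0   # stand-in feature extractor

def combined_loss(sr, hr, w_pix=1.0, w_feat=0.1):
    pix = np.mean((sr - hr) ** 2)                # pixel-space MSE
    fs = np.mean((sr @ feat - hr @ feat) ** 2)   # feature-space MSE
    return w_pix * pix + w_feat * fs

sr = rng.normal(size=(4, 64))   # super-resolved patches, flattened
hr = rng.normal(size=(4, 64))   # ground-truth patches, flattened
print(combined_loss(sr, hr))
```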
https://paperswithcode.com/paper/pac-bayes-control-synthesizing-controllers
1806.04225
null
null
PAC-Bayes Control: Learning Policies that Provably Generalize to Novel Environments
Our goal is to learn control policies for robots that provably generalize well to novel environments given a dataset of example environments. The key technical idea behind our approach is to leverage tools from generalization theory in machine learning by exploiting a precise analogy (which we present in the form of a reduction) between generalization of control policies to novel environments and generalization of hypotheses in the supervised learning setting. In particular, we utilize the Probably Approximately Correct (PAC)-Bayes framework, which allows us to obtain upper bounds that hold with high probability on the expected cost of (stochastic) control policies across novel environments. We propose policy learning algorithms that explicitly seek to minimize this upper bound. The corresponding optimization problem can be solved using convex optimization (Relative Entropy Programming in particular) in the setting where we are optimizing over a finite policy space. In the more general setting of continuously parameterized policies (e.g., neural network policies), we minimize this upper bound using stochastic gradient descent. We present simulated results of our approach applied to learning (1) reactive obstacle avoidance policies and (2) neural network-based grasping policies. We also present hardware results for the Parrot Swing drone navigating through different obstacle environments. Our examples demonstrate the potential of our approach to provide strong generalization guarantees for robotic systems with continuous state and action spaces, complicated (e.g., nonlinear) dynamics, rich sensory inputs (e.g., depth images), and neural network-based policies.
The key technical idea behind our approach is to leverage tools from generalization theory in machine learning by exploiting a precise analogy (which we present in the form of a reduction) between generalization of control policies to novel environments and generalization of hypotheses in the supervised learning setting.
https://arxiv.org/abs/1806.04225v5
https://arxiv.org/pdf/1806.04225v5.pdf
null
[ "Anirudha Majumdar", "Alec Farid", "Anoopkumar Sonar" ]
[]
2018-06-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/insights-on-representational-similarity-in
1806.05759
null
null
Insights on representational similarity in neural networks with canonical correlation
Comparing different neural network representations and determining how representations evolve over time remain challenging open questions in our understanding of the function of neural networks. Comparing representations in neural networks is fundamentally difficult as the structure of representations varies greatly, even across groups of networks trained on identical tasks, and over the course of training. Here, we develop projection weighted CCA (Canonical Correlation Analysis) as a tool for understanding neural networks, building off of SVCCA, a recently proposed method (Raghu et al., 2017). We first improve the core method, showing how to differentiate between signal and noise, and then apply this technique to compare across a group of CNNs, demonstrating that networks which generalize converge to more similar representations than networks which memorize, that wider networks converge to more similar solutions than narrow networks, and that trained networks with identical topology but different learning rates converge to distinct clusters with diverse representations. We also investigate the representational dynamics of RNNs, across both training and sequential timesteps, finding that RNNs converge in a bottom-up pattern over the course of training and that the hidden state is highly variable over the course of a sequence, even when accounting for linear transforms. Together, these results provide new insights into the function of CNNs and RNNs, and demonstrate the utility of using CCA to understand representations.
Comparing representations in neural networks is fundamentally difficult as the structure of representations varies greatly, even across groups of networks trained on identical tasks, and over the course of training.
http://arxiv.org/abs/1806.05759v3
http://arxiv.org/pdf/1806.05759v3.pdf
NeurIPS 2018 12
[ "Ari S. Morcos", "Maithra Raghu", "Samy Bengio" ]
[]
2018-06-14T00:00:00
http://papers.nips.cc/paper/7815-insights-on-representational-similarity-in-neural-networks-with-canonical-correlation
http://papers.nips.cc/paper/7815-insights-on-representational-similarity-in-neural-networks-with-canonical-correlation.pdf
insights-on-representational-similarity-in-1
null
[]
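As a rough illustration of CCA-based representation comparison, the sketch below computes a mean canonical correlation between two synthetic activation matrices with scikit-learn; note this is plain CCA, not the projection-weighted variant the paper proposes, and all data and sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Mean canonical correlation as a similarity score between two sets of
# activations (rows = examples, columns = neurons). Synthetic data only.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(500, 20))
acts_b = acts_a @ rng.normal(size=(20, 20)) * 0.5 + rng.normal(size=(500, 20))

n_comp = 10
cca = CCA(n_components=n_comp, max_iter=1000)
a_c, b_c = cca.fit_transform(acts_a, acts_b)
corrs = [np.corrcoef(a_c[:, i], b_c[:, i])[0, 1] for i in range(n_comp)]
print("mean canonical correlation:", float(np.mean(corrs)))
```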
https://paperswithcode.com/paper/enhanced-local-binary-patterns-for-automatic
1702.03349
null
null
Enhanced Local Binary Patterns for Automatic Face Recognition
This paper presents a novel automatic face recognition approach based on local binary patterns. The basic descriptor considers a local neighbourhood of a pixel to compute the feature vector values, but it is not very robust to image noise, variance, or differing illumination conditions. We address these issues by proposing a novel descriptor that considers more pixels and different neighbourhoods to compute the feature vector values. The proposed method is evaluated on two benchmark corpora, namely the UFI and FERET face datasets. We experimentally show that our approach outperforms state-of-the-art methods and is particularly effective in real conditions, where the above-mentioned issues are evident. We further show that the proposed method handles the one-training-sample issue well and is also robust to image resolution.
null
http://arxiv.org/abs/1702.03349v2
http://arxiv.org/pdf/1702.03349v2.pdf
null
[ "Pavel Král", "Ladislav Lenc", "Antonín Vrba" ]
[ "Face Recognition" ]
2017-02-10T00:00:00
null
null
null
null
[]
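For context, the sketch below computes the basic 3x3 local binary pattern that the paper's enhanced descriptor builds on; the bit ordering and the toy image are illustrative assumptions.

```python
import numpy as np

# Basic 3x3 LBP: code each pixel by thresholding its 8 neighbours
# against the centre pixel and packing the results into one byte.
def lbp_3x3(img):
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets enumerated clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(np.uint8) << bit
    return out

img = np.arange(25, dtype=np.uint8).reshape(5, 5)  # toy 5x5 image
print(lbp_3x3(img))
```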
https://paperswithcode.com/paper/sliced-wasserstein-autoencoder-an
1804.01947
null
null
Sliced-Wasserstein Autoencoder: An Embarrassingly Simple Generative Model
In this paper we study generative modeling via autoencoders while using the elegant geometric properties of the optimal transport (OT) problem and the Wasserstein distances. We introduce Sliced-Wasserstein Autoencoders (SWAE), which are generative models that enable one to shape the distribution of the latent space into any samplable probability distribution without the need for training an adversarial network or defining a closed-form for the distribution. In short, we regularize the autoencoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a predefined samplable distribution. We show that the proposed formulation has an efficient numerical solution that provides similar capabilities to Wasserstein Autoencoders (WAE) and Variational Autoencoders (VAE), while benefiting from an embarrassingly simple implementation.
In short, we regularize the autoencoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a predefined samplable distribution.
http://arxiv.org/abs/1804.01947v3
http://arxiv.org/pdf/1804.01947v3.pdf
null
[ "Soheil Kolouri", "Phillip E. Pope", "Charles E. Martin", "Gustavo K. Rohde" ]
[ "model" ]
2018-04-05T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "In today’s digital age, Solana has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Solana transaction not confirmed, your Solana wallet not showing balance, or you're trying to recover a lost Solana wallet, knowing where to get help is essential. That’s why the Solana customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Solana Customer Support Number +1-833-534-1729\r\nSolana operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Solana Transaction Not Confirmed\r\nOne of the most common concerns is when a Solana transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Solana Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Solana wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Solana Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Solana wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Solana Deposit Not Received\r\nIf someone has sent you Solana but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Solana deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Solana Transaction Stuck or Pending\r\nSometimes your Solana transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Solana Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Solana wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Solana Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Solana tech.\r\n\r\n24/7 Availability: Solana doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Solana Support and Wallet Issues\r\nQ1: Can Solana support help me recover stolen BTC?\r\nA: While Solana transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Solana transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Solana’s official number (Solana is decentralized), it connects you to trained professionals experienced in resolving all major Solana issues.\r\n\r\nFinal Thoughts\r\nSolana is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Solana transaction not confirmed, your Solana wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Solana customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.", "full_name": "Solana Customer Service Number +1-833-534-1729", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.", "name": "Generative Models", "parent": null }, "name": "Solana Customer Service Number +1-833-534-1729", "source_title": "Reducing the Dimensionality of Data with Neural Networks", "source_url": "https://science.sciencemag.org/content/313/5786/504" } ]
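As a concrete illustration of the regularizer described in the abstract above, here is a hedged NumPy sketch of a sliced-Wasserstein distance estimate between encoded samples and a samplable prior; the number of projections, the dimensions, and the data are illustrative assumptions.

```python
import numpy as np

# Sliced-Wasserstein estimate: project both sample sets onto random unit
# directions and compare sorted 1-D projections (1-D optimal transport).
def sliced_wasserstein(x, y, n_projections=50, rng=None):
    rng = rng or np.random.default_rng(0)
    theta = rng.normal(size=(n_projections, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    px = np.sort(x @ theta.T, axis=0)   # sorted projections of x
    py = np.sort(y @ theta.T, axis=0)   # sorted projections of y
    return np.mean((px - py) ** 2)      # squared SW-2 estimate

rng = np.random.default_rng(1)
codes = rng.normal(size=(256, 8)) + 0.5   # stand-in encoded samples
prior = rng.normal(size=(256, 8))         # samples from the target prior
print(sliced_wasserstein(codes, prior))
```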
https://paperswithcode.com/paper/detecting-speech-act-types-in-developer
1806.05130
null
null
Detecting Speech Act Types in Developer Question/Answer Conversations During Bug Repair
This paper targets the problem of speech act detection in conversations about bug repair. We conduct a "Wizard of Oz" experiment with 30 professional programmers, in which the programmers fix bugs for two hours, and use a simulated virtual assistant for help. Then, we use an open coding manual annotation procedure to identify the speech act types in the conversations. Finally, we train and evaluate a supervised learning algorithm to automatically detect the speech act types in the conversations. In 30 two-hour conversations, we made 2459 annotations and uncovered 26 speech act types. Our automated detection achieved 69% precision and 50% recall. The key application of this work is to advance the state of the art for virtual assistants in software engineering. Virtual assistant technology is growing rapidly, though applications in software engineering are behind those in other areas, largely due to a lack of relevant data and experiments. This paper targets this problem in the area of developer Q/A conversations about bug repair.
null
http://arxiv.org/abs/1806.05130v3
http://arxiv.org/pdf/1806.05130v3.pdf
null
[ "Andrew Wood", "Paige Rodeghero", "Ameer Armaly", "Collin McMillan" ]
[]
2018-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/probabilistic-tools-for-the-analysis-of
1801.06733
null
null
Probabilistic Tools for the Analysis of Randomized Optimization Heuristics
This chapter collects several probabilistic tools that have proved useful in the analysis of randomized search heuristics. This includes classic material like the Markov, Chebyshev, and Chernoff inequalities, but also lesser-known topics like stochastic domination and coupling, or Chernoff bounds for geometrically distributed random variables and for negatively correlated random variables. Most of the results presented here have appeared previously; some, however, only in recent conference publications. While the focus is on collecting tools for the analysis of randomized search heuristics, many of these may be useful as well in the analysis of classic randomized algorithms or discrete random structures.
null
https://arxiv.org/abs/1801.06733v6
https://arxiv.org/pdf/1801.06733v6.pdf
null
[ "Benjamin Doerr" ]
[]
2018-01-20T00:00:00
null
null
null
null
[]
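To ground one of the tools mentioned above, the snippet below numerically checks the Chernoff-Hoeffding bound $\Pr[S_n \ge n(p+\varepsilon)] \le e^{-2n\varepsilon^2}$ for a sum of independent Bernoulli variables; the parameter values are illustrative.

```python
import numpy as np

# Empirical tail probability versus the Chernoff-Hoeffding upper bound
# for S_n = sum of n i.i.d. Bernoulli(p) variables.
rng = np.random.default_rng(0)
n, p, eps, trials = 200, 0.5, 0.1, 100_000

s = rng.binomial(n, p, size=trials)
empirical = np.mean(s >= n * (p + eps))
bound = np.exp(-2 * n * eps ** 2)
print(f"empirical tail: {empirical:.4f}  <=  bound: {bound:.4f}")
```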
https://paperswithcode.com/paper/using-clinical-narratives-and-structured-data
1806.04818
null
null
Using Clinical Narratives and Structured Data to Identify Distant Recurrences in Breast Cancer
Accurately identifying distant recurrences in breast cancer from the Electronic Health Record (EHR) is important for both clinical care and secondary analysis. Although multiple applications have been developed for computational phenotyping in breast cancer, distant recurrence identification still relies heavily on manual chart review. In this study, we aim to develop a model that identifies distant recurrences in breast cancer using clinical narratives and structured data from the EHR. We apply MetaMap to extract features from clinical narratives and also retrieve structured clinical data from the EHR. Using these features, we train a support vector machine model to identify distant recurrences in breast cancer patients. We train the model using 1,396 double-annotated subjects and validate it using 599 double-annotated subjects. In addition, we validate the model on a set of 4,904 single-annotated subjects as a generalization test. We obtained a high area under the curve (AUC) score of 0.92 (SD=0.01) in cross-validation on the training dataset, and AUC scores of 0.95 and 0.93 on the held-out test and generalization sets of 599 and 4,904 samples, respectively. Our model can accurately and efficiently identify distant recurrences in breast cancer by combining features extracted from unstructured clinical narratives with structured clinical data.
null
http://arxiv.org/abs/1806.04818v2
http://arxiv.org/pdf/1806.04818v2.pdf
null
[ "Zexian Zeng", "Ankita Roy", "Xiaoyu Li", "Sasa Espino", "Susan Clare", "Seema Khan", "Yuan Luo" ]
[ "Computational Phenotyping" ]
2018-06-13T00:00:00
null
null
null
null
[]
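The modeling step described above reduces to a standard supervised pipeline; the sketch below shows the shape of such a pipeline with scikit-learn, using synthetic features as a stand-in for the MetaMap/EHR features, which are not public.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Linear SVM evaluated by cross-validated AUC on synthetic stand-in data.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 30))                                # features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=600) > 0).astype(int)

clf = SVC(kernel="linear")
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.2f} (SD={aucs.std():.2f})")
```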
https://paperswithcode.com/paper/natural-language-processing-for-ehr-based
1806.04820
null
null
Natural Language Processing for EHR-Based Computational Phenotyping
This article reviews recent advances in applying natural language processing (NLP) to Electronic Health Records (EHRs) for computational phenotyping. NLP-based computational phenotyping has numerous applications including diagnosis categorization, novel phenotype discovery, clinical trial screening, pharmacogenomics, drug-drug interaction (DDI) and adverse drug event (ADE) detection, as well as genome-wide and phenome-wide association studies. Significant progress has been made in algorithm development and resource construction for computational phenotyping. Among the surveyed methods, well-designed keyword search and rule-based systems often achieve good performance. However, the construction of keyword and rule lists requires significant manual effort, which is difficult to scale. Supervised machine learning models have been favored because they are capable of acquiring both classification patterns and structures from data. Recently, deep learning and unsupervised learning have received growing attention, with the former favored for its performance and the latter for its ability to find novel phenotypes. Integrating heterogeneous data sources has become increasingly important and has shown promise in improving model performance. Often, better performance is achieved by combining multiple modalities of information. Despite these many advances, challenges and opportunities remain for NLP-based computational phenotyping, including better model interpretability and generalizability, and proper characterization of feature relations in clinical narratives.
null
http://arxiv.org/abs/1806.04820v2
http://arxiv.org/pdf/1806.04820v2.pdf
null
[ "Zexian Zeng", "Yu Deng", "Xiaoyu Li", "Tristan Naumann", "Yuan Luo" ]
[ "Computational Phenotyping" ]
2018-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Please enter a description about the method here", "full_name": "Interpretability", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.", "name": "Image Models", "parent": null }, "name": "Interpretability", "source_title": "CAM: Causal additive models, high-dimensional order search and penalized regression", "source_url": "http://arxiv.org/abs/1310.1533v2" } ]
https://paperswithcode.com/paper/to-understand-deep-learning-we-need-to
1802.01396
null
null
To understand deep learning we need to understand kernel learning
Generalization performance of classifiers in deep learning has recently become a subject of intense study. Deep models, typically over-parametrized, tend to fit the training data exactly. Despite this "overfitting", they perform well on test data, a phenomenon not yet fully understood. The first point of our paper is that strong performance of overfitted classifiers is not a unique feature of deep learning. Using six real-world and two synthetic datasets, we establish experimentally that kernel machines trained to have zero classification error or near-zero regression error perform very well on test data, even when the labels are corrupted with a high level of noise. We proceed to give a lower bound on the norm of zero-loss solutions for smooth kernels, showing that it increases nearly exponentially with data size. We point out that this is difficult to reconcile with existing generalization bounds; moreover, none of the bounds produce non-trivial results for interpolating solutions. Second, we show experimentally that (non-smooth) Laplacian kernels easily fit random labels, a finding that parallels results for ReLU neural networks; in contrast, fitting noisy data requires many more epochs for smooth Gaussian kernels. The similar performance of overfitted Laplacian and Gaussian classifiers on test data suggests that generalization is tied to the properties of the kernel function rather than the optimization process. Certain key phenomena of deep learning are thus manifested similarly in kernel methods in the modern "overfitted" regime. The combination of the experimental and theoretical results presented in this paper indicates a need for new theoretical ideas for understanding the properties of classical kernel methods. We argue that progress on understanding deep learning will be difficult until more tractable "shallow" kernel methods are better understood.
null
http://arxiv.org/abs/1802.01396v3
http://arxiv.org/pdf/1802.01396v3.pdf
ICML 2018 7
[ "Mikhail Belkin", "Siyuan Ma", "Soumik Mandal" ]
[ "Deep Learning", "Generalization Bounds" ]
2018-02-05T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2026
http://proceedings.mlr.press/v80/belkin18a/belkin18a.pdf
to-understand-deep-learning-we-need-to-1
null
[ { "code_snippet_url": "", "description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!", "full_name": "*Communicated@Fast*How Do I Communicate to Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "ReLU", "source_title": null, "source_url": null } ]