Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data <s> Beta Distribution <s> This paper proposes a regression model where the response is beta distributed using a parameterization of the beta law that is indexed by mean and dispersion parameters. The proposed model is useful for situations where the variable of interest is continuous and restricted to the interval (0, 1) and is related to other variables through a regression structure. The regression parameters of the beta regression model are interpretable in terms of the mean of the response and, when the logit link is used, of an odds ratio, unlike the parameters of a linear regression that employs a transformed response. Estimation is performed by maximum likelihood. We provide closed-form expressions for the score function, for Fisher's information matrix and its inverse. Hypothesis testing is performed using approximations obtained from the asymptotic normality of the maximum likelihood estimator. Some diagnostic measures are introduced. Finally, practical applications that employ real data are presented and discussed. <s> BIB001 </s> Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data <s> Beta Distribution <s> Summary There is considerable interest in understanding how factors such as time and geographic distance between isolates might influence the evolutionary direction of foot-and-mouth disease. Genetic differences between viruses can be measured as the proportion of nucleotides that differ for a given sequence or gene. We present a Bayesian hierarchical regression model for the statistical analysis of continuous data with sample space restricted to the interval (0, 1). The data are modelled using beta distributions with means that depend on covariates through a link function. We discuss methodology for: (i) the incorporation of informative prior information into an analysis; (ii) fitting the model using Markov chain Monte Carlo sampling; (iii) model selection using Bayes factors; and (iv) semiparametric beta regression using penalized splines. The model was applied to two different datasets. <s> BIB002
The "basic" beta distribution is very popular for its flexibility in modelling random variables with the bounded range (0, 1). Variables with other bounded ranges can easily be transformed by translation and scaling. Its two parameters α, β > 0 drive the shape of the distribution, allowing for convex and concave shapes, symmetry, left and right skewness, high or low kurtosis, and even the flat form of the uniform distribution. The standard pdf of a beta-distributed random variable X is

f(x; α, β) = x^(α−1) (1 − x)^(β−1) / B(α, β), x ∈ (0, 1), (1)

where B(α, β) = Γ(α)Γ(β)/Γ(α + β) is the beta function and Γ(·) denotes the gamma function. Under this form, the moments are

E[X] = α/(α + β), (2)
Var[X] = αβ / [(α + β)² (α + β + 1)]. (3)

This beta distribution is conjugate to the binomial model Bi(n, p) with parameters n ∈ N and p ∈ [0, 1], as the prior for p. In several situations, the parameterization used above may not be suitable. This occurs, e.g., if the random variable X is modelled as a dependent variable given independent regressors. BIB001 propose a parameterization with the mean µ = α/(α + β) and precision φ = α + β, yielding

f(x; µ, φ) = Γ(φ) / [Γ(µφ) Γ((1 − µ)φ)] x^(µφ−1) (1 − x)^((1−µ)φ−1). (4)

This form is exploited in beta regression, e.g., BIB001 and BIB002. There also exists a whole class of generalized beta distributions, yielding tens of more or less common distributions, including the χ², lognormal and gamma distributions, as special cases. The extent of this class is far beyond the scope of this paper.
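To make the two parameterizations concrete, the following minimal Python sketch (ours, not part of the surveyed paper; the helper name beta_pdf_mean_precision is illustrative) evaluates the pdf (1) in both forms and checks the moments (2)-(3) against scipy:

```python
# A minimal sketch (ours): the beta pdf in the standard (alpha, beta) and the
# mean-precision (mu, phi) parameterizations, with a check of the moments.
import numpy as np
from scipy import stats

def beta_pdf_mean_precision(x, mu, phi):
    """Beta pdf with mean mu = alpha/(alpha+beta) and precision phi = alpha+beta."""
    alpha, beta = mu * phi, (1.0 - mu) * phi
    return stats.beta.pdf(x, alpha, beta)

x = np.linspace(0.01, 0.99, 99)
# mu = 0.3, phi = 10  <=>  alpha = 3, beta = 7
assert np.allclose(beta_pdf_mean_precision(x, 0.3, 10.0), stats.beta.pdf(x, 3.0, 7.0))

# Moments (2)-(3) in the standard parameterization
alpha, beta = 3.0, 7.0
mean = alpha / (alpha + beta)
var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1.0))
assert np.allclose([mean, var], stats.beta.stats(alpha, beta, moments="mv"))
```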
Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data <s> Beta-rectangular Distribution <s> PERT is a widely utilized framework for project management. However, as a result of underlying assumptions about the activity times, the PERT formulas prescribe a light-tailed distribution with a constant variance conditional on the range. Given the pervasiveness of heavy-tailed phenomena in business contexts as well as inherently differing levels of uncertainty about different activities, there is a need for a more flexible distribution which allows for varying amounts of dispersion and greater likelihoods of more extreme tail-area events. In particular, we argue that the tail-area decay of an activity time distribution is a key factor which has been insufficiently considered previously. We provide a distribution which permits varying amounts of dispersion and greater likelihoods of more extreme tail-area events that is straightforward to implement with expert judgments. Moreover, the distribution can be integrated into the PERT framework such that the classic PERT results represent an important special case of the method presented here. <s> BIB001 </s> Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data <s> Beta-rectangular Distribution <s> A new regression model for proportions is presented by considering the Beta rectangular distribution proposed by Hahn (2008). This new model includes the Beta regression model introduced by Ferrari and Cribari-Neto (2004) and the variable dispersion Beta regression model introduced by Smithson and Verkuilen (2006) as particular cases. Like Branscum, Johnson, and Thurmond (2007), a Bayesian inference approach is adopted using Markov Chain Monte Carlo (MCMC) algorithms. Simulation studies on the influence of outliers by considering contaminated data under four perturbation patterns to generate outliers were carried out and confirm that the Beta rectangular regression model seems to be a new robust alternative for modeling proportion data and that the Beta regression model shows sensitivity to the estimation of regression coefficients, to the posterior distribution of all parameters and to the model comparison criteria considered. Furthermore, two applications are presented to illustrate the robustness of the Beta rectangular model. <s> BIB002
As noted in BIB001, the definition of the beta distribution in terms of mean and precision (4) considers neither tail-area events nor greater flexibility in variance specification. Therefore, BIB001 proposed a mixture of a beta distribution and a uniform distribution, giving it the name beta-rectangular distribution,

f_BR(x; µ, φ, θ) = θ + (1 − θ) f_B(x; µ, φ), x ∈ (0, 1), (5)

where µ and φ are the mean and precision of the beta component f_B(·) and θ ∈ [0, 1] is a mixing parameter (weight). Due to the distribution's support, the constant density of the uniform component enters directly as θ. The moments of this mixture follow straightforwardly from the mixture structure,

E[X] = θ/2 + (1 − θ)µ,
Var[X] = θ/12 + (1 − θ) µ(1 − µ)/(1 + φ) + θ(1 − θ)(µ − 1/2)². (6)

It is worth noticing that the uniform component is equivalently a beta distribution B(1/2, 2) in the parameterization (4), i.e., α = β = 1, so the mixture can be viewed as a beta mixture with one component fixed. The beta-uniform mixture was recently proposed to improve the robustness of beta regression to outliers, BIB002. Some examples of the beta-rectangular distribution are shown in Fig. 4.
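As an illustration (our sketch, not the paper's code), the density (5) and the moments (6) can be written down directly; the Monte Carlo draws at the end serve only as a sanity check of the moment formulas:

```python
# A minimal sketch (ours) of the beta-rectangular density: a two-component
# mixture of a Uniform(0,1) and a Beta(mu*phi, (1-mu)*phi) distribution.
import numpy as np
from scipy import stats

def beta_rectangular_pdf(x, mu, phi, theta):
    """theta * Uniform(0,1) density + (1 - theta) * beta density, eq. (5)."""
    alpha, beta = mu * phi, (1.0 - mu) * phi
    return theta * 1.0 + (1.0 - theta) * stats.beta.pdf(x, alpha, beta)

def beta_rectangular_mean_var(mu, phi, theta):
    """Mean and variance of the mixture, eq. (6)."""
    mean = theta / 2.0 + (1.0 - theta) * mu
    var = (theta / 12.0
           + (1.0 - theta) * mu * (1.0 - mu) / (1.0 + phi)
           + theta * (1.0 - theta) * (mu - 0.5) ** 2)
    return mean, var

# Sanity check of (6) against Monte Carlo draws from the mixture
rng = np.random.default_rng(0)
mu, phi, theta = 0.3, 20.0, 0.2
comp = rng.random(100_000) < theta
draws = np.where(comp, rng.random(100_000),
                 rng.beta(mu * phi, (1 - mu) * phi, 100_000))
print(beta_rectangular_mean_var(mu, phi, theta), (draws.mean(), draws.var()))
```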
Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data <s> Variational Bayes <s> We consider a logistic regression model with a Gaussian prior distribution over the parameters. We show that an accurate variational transformation can be used to obtain a closed form approximation to the posterior distribution of the parameters thereby yielding an approximate posterior predictive model. This approach is readily extended to binary graphical model with complete observations. For graphical models with incomplete observations we utilize an additional variational transformation and again obtain a closed form approximation to the posterior. Finally, we show that the dual of the regression problem gives a latent variable density model, the variational formulation of which leads to exactly solvable EM updates. <s> BIB001 </s> Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data <s> Variational Bayes <s> Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics. <s> BIB002 </s> Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data <s> Variational Bayes <s> This paper presents a new deterministic approximation technique in Bayesian networks. This method, "Expectation Propagation," unifies two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. Loopy belief propagation, because it propagates exact belief states, is useful for a limited class of belief networks, such as those which are purely discrete. Expectation Propagation approximates the belief states by only retaining expectations, such as mean and variance, and iterates until these expectations are consistent throughout the network. This makes it applicable to hybrid networks with discrete and continuous nodes. Experiments with Gaussian mixture models show Expectation Propagation to be convincingly better than methods with similar computational cost: Laplace's method, variational Bayes, and Monte Carlo. Expectation Propagation also provides an efficient algorithm for training Bayes point machine classifiers. <s> BIB003
The variational Bayesian (VB) inference, rooted in the field of calculus of variations, serves for analytic approximation of the posterior pdf of parameters and potentially other latent variables BIB001. Let us denote Z = (Z_1, ..., Z_n) the set comprising both parameters and latent variables. The goal is to find an analytically tractable approximation q(Z) of f(Z|X). Similarly to the EM decomposition (7), we may write

ln f(X) = L(q) + KL(q || f),

where the analogues of (8) and (9) are

L(q) = ∫ q(Z) ln [f(X, Z)/q(Z)] dZ,
KL(q || f) = −∫ q(Z) ln [f(Z|X)/q(Z)] dZ.

Unlike in the EM algorithm, the elements of Z are factorized into M independent factors Z_i, i = 1, ..., M, such that

q(Z) = ∏_{i=1}^{M} q_i(Z_i). (11)

This factorization, put back into the decomposition above, yields the VB-optimal factors

ln q*_j(Z_j) = E_{i≠j}[ln f(X, Z)] + const,

where E_{i≠j}[·] denotes the expectation with respect to all factors except the j-th. The additive constant changes to a multiplicative one under exponentiation, providing the solution

q*_j(Z_j) ∝ exp( E_{i≠j}[ln f(X, Z)] ).

The resulting algorithm is very similar to expectation-maximization, but unlike it, VB computes the posterior distributions of all parameters. The expectations are taken with respect to the variables not in the current factor, which, in turn, are recomputed in the same way. The algorithm is guaranteed to converge and, under convexity of the lower bound, to reach the global maximum BIB002. It is necessary to stress that the variational Bayesian method provides analytic approximations of the posterior distributions of parameters and latent variables. The sacrifice is their factorized treatment (11), which neglects the dependency properties carried by the true joint posterior pdf. An alternative, the expectation propagation algorithm BIB003, overcomes this issue by exploiting the reversed order of pdfs in the Kullback-Leibler divergence above. The price is an elevated level of computational difficulty. A recent example of the VB algorithm used in conjunction with bounded variables is presented in , providing a method for VB estimation of beta mixture models. An interesting part of that paper is an approximate analytic solution of otherwise analytically intractable integrals emerging from the special (gamma or beta) functions in the beta distribution. This reveals the pervasive computational problems connected even with very standard distributions with bounded support.
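As a concrete illustration of the coordinate updates q*_j ∝ exp(E_{i≠j}[ln f(X, Z)]), here is a minimal sketch (ours, on the textbook Gaussian mean-precision model rather than the beta mixtures of the cited work; the prior values are assumed):

```python
# A minimal coordinate-ascent VB sketch (ours): factorized posterior
# q(mu) q(tau) for Gaussian data with unknown mean mu and precision tau,
# under priors mu ~ N(mu0, (lam0*tau)^-1) and tau ~ Gamma(a0, b0).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 0.5, size=200)          # synthetic data
N, xbar = x.size, x.mean()
mu0, lam0, a0, b0 = 0.0, 1.0, 1e-2, 1e-2    # weak priors (assumed values)

E_tau = 1.0                                  # initial guess for E_q[tau]
for _ in range(50):                          # alternate the factor updates
    # q(mu) = N(mu_N, 1/lam_N), given the current E_q[tau]
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    # q(tau) = Gamma(a_N, b_N), expectations taken over q(mu)
    a_N = a0 + 0.5 * (N + 1)
    E_sq = np.sum((x - mu_N) ** 2) + N / lam_N
    b_N = b0 + 0.5 * (E_sq + lam0 * ((mu_N - mu0) ** 2 + 1.0 / lam_N))
    E_tau = a_N / b_N

print("posterior mean of mu:", mu_N, " E[tau]:", E_tau)
```

Each pass recomputes one factor from the current expectations of the other, exactly the alternating scheme described above.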
Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data <s> Simulation from Posterior <s> A general method, suitable for fast computing machines, for investigating such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two‐dimensional rigid‐sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four‐term virial coefficient expansion. <s> BIB001 </s> Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data <s> Simulation from Posterior <s> We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution, Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, non-linear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low-energy states (‘annealing’), or what is the same thing, the most probable states under the Gib... <s> BIB002
Industrial practice often deals with complicated models for which the inference is tractable neither analytically nor approximately (in the EM and VB sense). This issue is further emphasized when distributions with bounded support are used. The resulting analytically unreachable posteriors need to be evaluated by simulation, exploiting a (usually large) set of draws to represent the distributions. In high-dimensional problems, the Markov chain Monte Carlo (MCMC) methods dominate this field. The idea of Markov chain simulation is to simulate a random walk in the space of the unknown (multivariate) parameter θ. The random walk converges to a stationary distribution close to the target posterior f(θ|x). Two popular MCMC methods, the Metropolis-Hastings BIB001 and Gibbs algorithms BIB002, have become standards in Bayesian modelling. Metropolis-Hastings Algorithm: first draws a starting point θ satisfying f(θ|x) > 0 from some suitable distribution. Then, it recursively exploits a Markov transition kernel (proposal distribution) q(θ′|θ) in the following way:
1. Sample a candidate point θ′ from q(θ′|θ).
2. Compute the acceptance ratio r = [f(θ′|x) q(θ|θ′)] / [f(θ|x) q(θ′|θ)].
3. Accept θ′ as the next state with probability min(1, r); otherwise retain θ.
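For concreteness, a minimal random-walk Metropolis-Hastings sketch follows (ours, not from the paper). It targets a bounded-support posterior, a binomial proportion p ∈ (0, 1) under a uniform prior, so the exact posterior Beta(8, 14) is available for comparison; with a symmetric Gaussian proposal the q-terms in the ratio cancel:

```python
# A minimal random-walk Metropolis-Hastings sketch (ours): sampling p in (0,1)
# from a posterior known only up to a normalizing constant.
import numpy as np

def log_target(p, successes=7, trials=20):
    """Unnormalized log posterior of a binomial proportion, uniform prior."""
    if not (0.0 < p < 1.0):
        return -np.inf
    return successes * np.log(p) + (trials - successes) * np.log(1.0 - p)

rng = np.random.default_rng(42)
p, step, draws = 0.5, 0.1, []
for _ in range(50_000):
    p_new = p + step * rng.normal()          # symmetric proposal => q terms cancel
    log_r = log_target(p_new) - log_target(p)
    if np.log(rng.random()) < log_r:          # accept with probability min(1, r)
        p = p_new
    draws.append(p)

burned = np.array(draws[4_000:])              # discard burn-in
print("posterior mean ~", burned.mean(), "vs exact", 8 / 22)  # Beta(8,14) mean
```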
Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data <s> EXAMPLE OF APPLICATION <s> This paper proposes a regression model where the response is beta distributed using a parameterization of the beta law that is indexed by mean and dispersion parameters. The proposed model is useful for situations where the variable of interest is continuous and restricted to the interval (0, 1) and is related to other variables through a regression structure. The regression parameters of the beta regression model are interpretable in terms of the mean of the response and, when the logit link is used, of an odds ratio, unlike the parameters of a linear regression that employs a transformed response. Estimation is performed by maximum likelihood. We provide closed-form expressions for the score function, for Fisher's information matrix and its inverse. Hypothesis testing is performed using approximations obtained from the asymptotic normality of the maximum likelihood estimator. Some diagnostic measures are introduced. Finally, practical applications that employ real data are presented and discussed. <s> BIB001 </s> Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data <s> EXAMPLE OF APPLICATION <s> Summary There is considerable interest in understanding how factors such as time and geographic distance between isolates might influence the evolutionary direction of foot-and-mouth disease. Genetic differences between viruses can be measured as the proportion of nucleotides that differ for a given sequence or gene. We present a Bayesian hierarchical regression model for the statistical analysis of continuous data with sample space restricted to the interval (0, 1). The data are modelled using beta distributions with means that depend on covariates through a link function. We discuss methodology for: (i) the incorporation of informative prior information into an analysis; (ii) fitting the model using Markov chain Monte Carlo sampling; (iii) model selection using Bayes factors; and (iv) semiparametric beta regression using penalized splines. The model was applied to two different datasets. <s> BIB002
As an illustrative example, we estimate the Bayesian beta regression model (e.g., BIB001 and BIB002) on 20 data points from a rolling mill, depicted in Fig. 7. The horizontal axis represents discrete time, the vertical axis shows the controlled quantity in units of 0.01%. We fitted the model (4) using a logit link function,

logit(µ_i) = ln[µ_i / (1 − µ_i)] = β_0 + β_1 x_i,

corresponding to the reparameterized beta distribution (4). The coefficients (β_0, β_1) together with the precision φ were estimated as independent, f(β_0, β_1, φ) = f(β_0, β_1) f(φ), with β_0 and β_1 being normal and φ gamma distributed. The model was estimated in GNU R interfacing with OpenBUGS through the BRugs package. The chain length was 50 000 samples, with the initial 4 000 samples serving for burn-in. Results of the estimation of the regression coefficients are given in Table 1. The mean values of the posterior distributions are β̂_0 = −5.799 and β̂_1 = 0.11, with the corresponding 95% credibility intervals (defined as highest density intervals) being [−7.762, −3.827] and [−1.853, 2.075] for β_0 and β_1, respectively. One rule of thumb recommends that the simulation be run until the Monte Carlo error for each parameter of interest falls below 5% of the sample standard deviation; Table 1 shows that the simulation reached less than 0.6% for both coefficients. The posterior distributions of β_0 and β_1 are depicted in Fig. 8 as histograms of the Monte Carlo samples together with kernel density estimates (in red). For comparison, the betareg package was used for beta regression in the frequentist statistical framework BIB001. The model had the same structure and the link function was identically the logit. The coefficient estimates were β̂_0 = −5.866 and β̂_1 = 0.115, respectively; the model precision was 2578.
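The Bayesian fit above was produced with R/OpenBUGS via BRugs; as a language-agnostic illustration (our sketch, with assumed vague priors rather than those reported in the paper), the unnormalized log-posterior of model (4) with the logit link can be written as:

```python
# A hedged sketch (ours, not the authors' R/OpenBUGS code): unnormalized
# log-posterior of the logit-link beta regression model (4). The normal and
# gamma priors below are assumed vague choices, not the paper's.
import numpy as np
from scipy import stats
from scipy.special import expit  # inverse logit

def log_posterior(beta0, beta1, phi, x, y):
    if phi <= 0.0:
        return -np.inf
    mu = expit(beta0 + beta1 * x)                 # logit(mu_i) = beta0 + beta1*x_i
    loglik = stats.beta.logpdf(y, mu * phi, (1.0 - mu) * phi).sum()
    logprior = (stats.norm.logpdf(beta0, 0.0, 10.0)
                + stats.norm.logpdf(beta1, 0.0, 10.0)
                + stats.gamma.logpdf(phi, a=0.01, scale=100.0))  # rate 0.01
    return loglik + logprior

# Example call on toy data (x: discrete time, y: proportions in (0, 1))
x = np.arange(1, 21, dtype=float)
y = np.clip(expit(-5.8 + 0.11 * x) + 0.001, 1e-4, 1 - 1e-4)
print(log_posterior(-5.8, 0.11, 2500.0, x, y))
```

This function could be plugged into a sampler such as the Metropolis-Hastings sketch above; equivalent models are directly expressible in OpenBUGS/JAGS or modern probabilistic programming tools.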
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We address the problem of visually detecting causal events and fitting them together into a coherent story of the action witnessed by the camera. We show that this can be done by reasoning about the motions and collisions of surfaces, using high-level causal constraints derived from psychological studies of infant visual behavior. These constraints are naive forms of basic physical laws governing substantiality, contiguity, momentum, and acceleration. We describe two implementations. One system parses instructional videos, extracting plans of action and key frames suitable for storyboarding. Since learning will play a role in making such systems robust, we introduce a new framework for higher-order hidden Markov models and demonstrate its use in a second system that segments stereo video into actions in near real-time. Rather than attempt accurate low-level vision, both systems use high-level causal analysis to integrate fast but sloppy pixel-based representations over time. The output is suitable for summary, indexing, and automated editing. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann's hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. 
Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data. <s> BIB003 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We show quite good face clustering is possible for a dataset of inaccurately and ambiguously labelled face images. Our dataset is 44,773 face images, obtained by applying a face finder to approximately half a million captioned news images. This dataset is more realistic than usual face recognition datasets, because it contains faces captured "in the wild" in a variety of configurations with respect to the camera, taking a variety of expressions, and under illumination of widely varying color. Each face image is associated with a set of names, automatically extracted from the associated caption. Many, but not all such sets contain the correct name. We cluster face images in appropriate discriminant coordinates. We use a clustering procedure to break ambiguities in labelling and identify incorrectly labelled faces. A merging procedure then identifies variants of names that refer to the same individual. The resulting representation can be used to label faces in news images or to organize news pictures by individuals present. An alternative view of our procedure is as a process that cleans up noisy supervised data. We demonstrate how to use entropy measures to evaluate such procedures. <s> BIB004 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We propose a method for describing human activities from video images based on concept hierarchies of actions. Major difficulty in transforming video images into textual descriptions is how to bridge a semantic gap between them, which is also known as inverse Hollywood problem. In general, the concepts of events or actions of human can be classified by semantic primitives. By associating these concepts with the semantic features extracted from video images, appropriate syntactic components such as verbs, objects, etc. are determined and then translated into natural language sentences. We also demonstrate the performance of the proposed method by several experiments. <s> BIB005 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> A theoretical framework for grounding language is introduced that provides a computational path from sensing and motor action to words and speech acts. The approach combines concepts from semiotics and schema theory to develop a holistic approach to linguistic meaning. Schemas serve as structured beliefs that are grounded in an agent's physical environment through a causal-predictive cycle of action and perception. Words and basic speech acts are interpreted in terms of grounded schemas. The framework reflects lessons learned from implementations of several language processing robots. It provides a basis for the analysis and design of situated, multimodal communication systems that straddle symbolic and non-symbolic realms. <s> BIB006 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Humans can prepare concise descriptions of pictures, focusing on what they find important. 
We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned using data. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche. <s> BIB007 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Mobile robots that interact with humans in an intuitive way must be able to follow directions provided by humans in unconstrained natural language. In this work we investigate how statistical machine translation techniques can be used to bridge the gap between natural language route instructions and a map of an environment built by a robot. Our approach uses training data to learn to translate from natural language instructions to an automatically-labeled map. The complexity of the translation process is controlled by taking advantage of physical constraints imposed by the map. As a result, our technique can efficiently handle uncertainty in both map labeling and parsing. Our experiments demonstrate the promising capabilities achieved by our approach. <s> BIB008 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Studying natural language, and especially how people describe the world around them can help us better understand the visual world. In turn, it can also help us in the quest to generate natural language that describes this world in a human manner. We present a simple yet effective approach to automatically compose image descriptions given computer vision based inputs and using web-scale n-grams. Unlike most previous work that summarizes or retrieves pre-existing text relevant to an image, our method composes sentences entirely from scratch. Experimental results indicate that it is viable to generate simple textual descriptions that are pertinent to the specific content of an image, while permitting creativity in the description -- making for more human-like annotations than previous approaches. <s> BIB009 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs (G3), dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command's hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as "Put the tire pallet on the truck." The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model.
We evaluate the robot's performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system's performance. We demonstrate that our system can successfully follow many natural language commands from the corpus. <s> BIB010 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> This contribution addresses generation of natural language descriptions for human actions and behaviour observed in video streams. The work starts with implementation of conventional image processing techniques to extract high-level features from video. Because human is often the most important and also interesting feature, description focuses on humans and their activities. Although feature extraction processes are erroneous at various levels, we explore approaches to put them together to produce a coherent description. Evaluation is made by calculating the overlap similarity score between human authored and machine generated descriptions. <s> BIB011 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> This contribution addresses generation of natural language descriptions for human actions, behaviour and their relations with other objects observed in video streams. The work starts with implementation of conventional image processing techniques to extract high level features from video. These features are converted into natural language descriptions using context free grammar. Although feature extraction processes are erroneous at various levels, we explore approaches to putting them together to produce a coherent description. Evaluation is made by calculating ROUGE scores between human annotated and machine generated descriptions. Further we introduce a task based evaluation by human subjects which provides qualitative evaluation of generated descriptions. <s> BIB012 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Humans use rich natural language to describe and communicate visual perceptions. In order to provide natural language descriptions for visual content, this paper combines two important ingredients. First, we generate a rich semantic representation of the visual content including e.g. object and activity labels. To predict the semantic representation we learn a CRF to model the relationships between different components of the visual input. And second, we propose to formulate the generation of natural language as a machine translation problem using the semantic representation as source language and the generated sentences as target language. For this we exploit the power of a parallel corpus of videos and textual descriptions and adapt statistical machine translation to translate between our two languages. We evaluate our video descriptions on the TACoS dataset, which contains video snippets aligned with sentence descriptions. Using automatic evaluation and human judgments we show significant improvements over several baseline approaches, motivated by prior work. Our translation approach also shows improvements over related work on an image description task. <s> BIB013 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We propose a system for human-robot interaction that learns both models for spatial prepositions and for object recognition. 
Our system grounds the meaning of an input sentence in terms of visual percepts coming from the robot's sensors in order to send an appropriate command to the PR2 or respond to spatial queries. To perform this grounding, the system recognizes the objects in the scene, determines which spatial relations hold between those objects, and semantically parses the input sentence. The proposed system uses the visual and spatial information in conjunction with the semantic parse to interpret statements that refer to objects (nouns), their spatial relationships (prepositions), and to execute commands (actions). The semantic parse is inherently compositional, allowing the robot to understand complex commands that refer to multiple objects and relations such as: “Move the cup close to the robot to the area in front of the plate and behind the tea box”. Our system correctly parses 94% of the 210 online test sentences, correctly interprets 91% of the correctly parsed sentences, and correctly executes 89% of the correctly interpreted sentences. <s> BIB014 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> The problem of describing images through natural language has gained importance in the computer vision community. Solutions to image description have either focused on a top-down approach of generating language through combinations of object detections and language models or bottom-up propagation of keyword tags from training images to test images through probabilistic or nearest neighbor techniques. In contrast, describing videos with natural language is a less studied problem. In this paper, we combine ideas from the bottom-up and top-down approaches to image description and propose a method for video description that captures the most relevant contents of a video in a natural language description. We propose a hybrid system consisting of a low level multimodal latent topic model for initial keyword annotation, a middle level of concept detectors and a high level module to produce final lingual descriptions. We compare the results of our system to human descriptions in both short and long forms on two datasets, and demonstrate that final system output has greater agreement with the human descriptions than any single level. <s> BIB015 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model. 
<s> BIB016 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics , which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph , i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions. <s> BIB017 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - "blue" + "red" is near images of red cars. Sample captions generated for 800 images are made available for comparison. <s> BIB018 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framework. Our approach can handle human questions of high complexity about realistic scenes and replies with range of answer like counts, object classes, instances and lists of them. The system is directly trained from question-answer pairs. We establish a first benchmark for this task that can be seen as a modern attempt at a visual turing test. <s> BIB019 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We present a system that produces sentential descriptions of video: who did what to whom, and where and how they did it. Action class is rendered as a verb, participant objects as noun phrases, properties of those objects as adjectival modifiers in those noun phrases, spatial relations between those participants as prepositional phrases, and characteristics of the event as prepositional-phrase adjuncts and adverbial modifiers. Extracting the information needed to render these linguistic entities requires an approach to event recognition that recovers object tracks, the trackto-role assignments, and changing body posture. 
<s> BIB020 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Humans can easily describe what they see in a coherent way and at varying level of detail. However, existing approaches for automatic video description focus on generating only single sentences and are not able to vary the descriptions’ level of detail. In this paper, we address both of these limitations: for a variable level of detail we produce coherent multi-sentence descriptions of complex videos. To understand the difference between detailed and short descriptions, we collect and analyze a video description corpus of three levels of detail. We follow a two-step approach where we first learn to predict a semantic representation (SR) from video and then generate natural language descriptions from it. For our multi-sentence descriptions we model across-sentence consistency at the level of the SR by enforcing a consistent topic. Human judges rate our descriptions as more readable, correct, and relevant than related work. <s> BIB021 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time. <s> BIB022 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> In this paper, we address the task of learning novel visual concepts, and their interactions with other concepts, from a few images with sentence descriptions. Using linguistic context and visual features, our method is able to efficiently hypothesize the semantic meaning of new words and add them to its word dictionary so that they can be used to describe images which contain these novel concepts. Our method has an image captioning module based on [38] with several improvements. In particular, we propose a transposed weight sharing scheme, which not only improves performance on image captioning, but also makes the model more suitable for the novel concept learning task. We propose methods to prevent overfitting the new concepts. In addition, three novel concept datasets are constructed for this new task, and are publicly available on the project page. In the experiments, we show that our method effectively learns novel visual concepts from a few examples without disturbing the previously learned concepts. The project page is: www.stat.ucla.edu/junhua. mao/projects/child_learning.html. <s> BIB023 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We propose the task of free-form and open-ended Visual Question Answering (VQA). 
Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance. <s> BIB024 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented. <s> BIB025 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images. This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context. We provide several analyses of the Visual Madlibs dataset and demonstrate its applicability to two new description generation tasks: focused description generation, and multiple-choice question-answering for images. Experiments using joint-embedding and deep learning methods show promising results on these tasks. <s> BIB026 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call "percepts" using Gated-Recurrent-Unit Recurrent Networks (GRUs).Our method relies on percepts that are extracted from all level of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low-spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts can leads to high-dimensionality video representations. 
To mitigate this effect and control the model number of parameters, we introduce a variant of the GRU model that leverages the convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to state-of-art on the YouTube2Text dataset using a simpler text-decoder model and without extra 3D CNN features. <s> BIB027 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD). <s> BIB028 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions. <s> BIB029 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus.
This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking. <s> BIB030 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We present the Word Mover's Distance (WMD), a novel distance function between text documents. Our work is based on recent results in word embeddings that learn semantically meaningful representations for words from local cooccurrences in sentences. The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to "travel" to reach the embedded words of another document. We show that this distance metric can be cast as an instance of the Earth Mover's Distance, a well studied transportation problem for which several highly efficient solvers have been developed. Our metric has no hyperparameters and is straight-forward to implement. Further, we demonstrate on eight real world document classification data sets, in comparison with seven state-of-the-art baselines, that the WMD metric leads to unprecedented low k-nearest neighbor document classification error rates. <s> BIB031 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. First, we develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method solves two clustering problems, one in text and one in video, applied one after each other and linked by joint constraints to obtain a single coherent sequence of steps in both modalities. Second, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains about 800,000 frames for five different tasks that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings. Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos. <s> BIB032 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. 
The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively. <s> BIB033 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Automatically describing video content with natural language is a fundamental challenge of computer vision. Recurrent Neural Networks (RNNs), which models sequence dynamics, has attracted increasing attention on visual interpretation. However, most existing approaches generate a word locally with the given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but the semantics (e.g., subjects, verbs or objects) are not true. This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which can simultaneously explore the learning of LSTM and visual-semantic embedding. The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter is to create a visual-semantic embedding space for enforcing the relationship between the semantics of the entire sentence and visual content. The experiments on YouTube2Text dataset show that our proposed LSTM-E achieves to-date the best published performance in generating natural sentences: 45.3% and 31.0% in terms of BLEU@4 and METEOR, respectively. Superior performances are also reported on two movie description datasets (M-VAD and MPII-MD). In addition, we demonstrate that LSTM-E outperforms several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets. <s> BIB034 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> When organizing the Microsoft Research Video To Language challenge [1], we found that, in our previously released dataset [10], some sentences annotated by AMT workers are identical in one video clip or very similar in one category. Therefore, to control the quality of data and annotations, as well as the competitions, we removed those simple and duplicated sentences and replaced them with refined ones. We finally released the fixed dataset in our challenge website [1]. Due to these modifications of the dataset, the performance cannot be well matched with what we reported in our CVPR paper [10]. Here, we have reported the new performance in the following tables which also appeared in our CVPR paper (referred to as Table 1, 2, 3, 4, 5, 6, and 7, respectively). If you are trying to reproduce or compare the baselines conducted on our MSR-VTT dataset, please refer to this supplementary material and the updated performance reported in this material. However, please cite our CVPR paper [10] if you want to use the MSR-VTT as your dataset. <s> BIB035 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Recent advances in image captioning task have led to increasing interests in video captioning task.
However, most works on video captioning are focused on generating single input of aggregated features, which hardly deviates from image captioning process and does not fully take advantage of dynamic contents present in videos. We attempt to generate video captions that convey richer contents by temporally segmenting the video with action localization, generating multiple captions from multiple frames, and connecting them with natural language processing techniques, in order to generate a story-like caption. We show that our proposed method can generate captions that are richer in contents and can compete with state-of-the-art method without explicitly using video-level features as input. <s> BIB036 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as which caption-generator best understands colors? and can caption-generators count? <s> BIB037 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Most natural videos contain numerous events. For example, in a video of a "man playing a piano", the video might also contain "another man dancing" or "a crowd clapping". We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with it's unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization. <s> BIB038 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> A Semantic Compositional Network (SCN) is developed for image captioning, in which semantic concepts (i.e., tags) are detected from the image, and the probability of each tag is used to compose the parameters in a long short-term memory (LSTM) network. The SCN extends each weight matrix of the LSTM to an ensemble of tag-dependent weight matrices. 
The degree to which each member of the ensemble is used to generate an image caption is tied to the image-dependent probability of the corresponding tag. In addition to captioning images, we also extend the SCN to generate captions for video clips. We qualitatively analyze semantic composition in SCNs, and quantitatively evaluate the algorithm on three benchmark datasets: COCO, Flickr30k, and Youtube2Text. Experimental results show that the proposed method significantly outperforms prior state-of-the-art approaches, across multiple evaluation metrics. <s> BIB039 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and/or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)—a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8% and 74.0% in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods. <s> BIB040 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> INTRODUCTION <s> Dense video captioning is a fine-grained video understanding task that involves two sub-problems: localizing distinct events in a long video stream, and generating captions for the localized events. We propose the Joint Event Detection and Description Network (JEDDi-Net), which solves the dense video captioning task in an end-to-end fashion. Our model continuously encodes the input video stream with three-dimensional convolutional layers, proposes variable-length temporal events based on pooled features, and generates their captions. Proposal features are extracted within each proposal segment through 3D Segment-of-Interest pooling from shared video feature encoding. In order to explicitly model temporal relationships between visual events and their captions in a single video, we also propose a two-level hierarchical captioning module that keeps track of context. On the large-scale ActivityNet Captions dataset, JEDDi-Net demonstrates improved results as measured by standard metrics. We also present the first dense captioning results on the TACoS-MultiLevel dataset. <s> BIB041
Describing a short video in natural language is a trivial task for most people, but a very challenging one for machines. Automatic video description involves understanding many entities and detecting their occurrences in a video using computer vision techniques. These entities include the background scene, humans, objects, human actions, human-object interactions, human-human interactions, other events, and the order in which events occur. All this information must then be articulated in comprehensible and grammatically correct text using Natural Language Processing (NLP) techniques. Over the past few years, the two traditionally independent fields of Computer Vision (CV) and NLP have joined forces to address the upsurge of research interest in understanding and describing images and videos. Special issues of journals have been published focusing on language in vision [9], and workshops uniting the two areas have been held regularly at both NLP and CV conferences [15]. Automatic video description has many applications in human-robot interaction, automatic video subtitling and video surveillance. It can be used to help the visually impaired by generating verbal descriptions of their surroundings through speech synthesis, or by automatically generating and reading out film descriptions; currently, these are produced through very costly and time-consuming manual processes. Another application is the description of sign language videos in natural language. Video description can also generate written procedures for humans or service robots by automatically converting the actions in a demonstration video into simple instructions, for example, assembling furniture, installing a CD-ROM, making coffee or changing a flat tyre BIB032, BIB001. The advancement of video description opens up enormous opportunities in many application domains. It is envisaged that in the near future we will be able to interact with robots in the same manner as with humans BIB013. If video description advances to the stage of comprehending events unfolding in the real world and rendering them in spoken words, service robots and smartphone apps will be able to understand human actions and other events, and to converse with humans in a much more meaningful and coherent manner. For example, they could answer a user's question as to where they left their wallet, or discuss what they should cook for dinner. In industrial settings, they could potentially remind a worker of any actions or procedures that are missing from a routine operation. The recent release of a dialogue dataset, Talk the Walk, has introduced yet another interesting application, where a natural language dialogue between a guide and a tourist helps the tourist reach a previously unseen location on a map using perception, action and interaction modeling. Leveraging the recent developments in deep neural networks for NLP and CV, and the increased availability of large multi-modal datasets, automatically generating stories from pixels is no longer science fiction. This growing body of work has mainly originated from the robotics community and can be broadly labeled as grounding language meaning from vision in robotic perception BIB006. Related research areas include connecting words to pictures BIB003, BIB004, narrating images in natural language sentences BIB007, BIB009, and understanding natural language instructions for robotic applications BIB014, BIB008, BIB010.
Another closely related field is Visual Information Retrieval (VIR), which takes a visual (image, drawing or sketch), textual (tags, keywords or complete sentences) or mixed visual-textual query to perform content-based search. Thanks to the release of the benchmark datasets MS COCO BIB016 and Flickr30k BIB017, research in image captioning and retrieval BIB022, BIB018, BIB023, and image question answering BIB024, BIB019, BIB025, BIB026 has also become very active. Automatically generating natural language sentences that describe the content of a video has two components: understanding the visual content, and describing it in grammatically correct natural language sentences. Figure 1 shows a simple deep learning based video captioning framework. The task of video description is more challenging than image captioning because not all objects in a video are relevant to the description; for example, some detected objects may play no role in the observed activity BIB020. Moreover, video description methods must additionally capture the speed and direction of the relevant objects, as well as the causality among events, actions, and objects. Finally, events in videos can be of varying lengths and may even overlap BIB038. See Figure 2 for an example: the piano recital spans almost the entire duration of the video, whereas the applause is a very short event that only takes place at the end. The example illustrates the differences between three related areas of research, namely image captioning, video captioning and dense video captioning. In this example, image captioning techniques recognize the event as mere clapping, whereas it is actually applause resulting from a previous event, the piano playing. Below, we define some terminology used in this paper.
• Visual Description: The unifying concept (see Fig. 3) encompassing the automatic generation of single or multiple natural language sentences that convey the information in still images or video clips.
• Video Captioning: Conveying the information of a video clip as a whole through a single automatically generated natural language sentence, based on the premise that short video clips usually contain one main event BIB027, BIB039, BIB040, BIB028, BIB029.
• Video Description: Automatically generating multiple natural language sentences that provide a narrative of a relatively longer video clip. The descriptions are more detailed and may take the form of paragraphs. Video description is sometimes also referred to as storytelling or paragraph generation BIB021, BIB033.
• Dense Video Captioning: Detecting and describing all, possibly overlapping, events of different lengths in a video, using one natural language sentence per event. As illustrated in Fig. 2, dense video captioning localizes events in time BIB038, BIB041 and generates sentences that are not necessarily coherent. Video description, on the other hand, gives a more detailed account of one or more events in a video clip using multiple coherent sentences, without having to localize individual events.
Video captioning research started with classical template based approaches in which the Subject (S), Verb (V), and Object (O) are detected separately and then joined using a sentence template. These approaches are referred to as SVO-Triplets BIB020, BIB005. However, the advent of deep learning and the tremendous advancements in CV and NLP have equally affected the area of video captioning. Hence, the latest approaches follow deep learning based architectures [133], BIB028 that encode the visual features with 2D/3D-CNNs and use LSTMs/GRUs to model the word sequence. The output of both approaches is either a single sentence BIB034, BIB035 or multiple sentences BIB020, BIB015, BIB011, BIB021, BIB036, BIB033 per video clip.
Fig. 2: Illustration of the differences between image captioning, video captioning and dense video captioning. Image (video frame) captioning describes each frame with a single sentence. Video captioning describes the complete video with one sentence. In dense video captioning, each event in the video is temporally detected and described by a single sentence, eventually resulting in multiple sentences that are localized in time but not necessarily coherent.
Early research on video description mostly focused on domain-specific short video clips with limited vocabularies of objects and activities BIB020, BIB015, BIB012, BIB005, BIB013. Description of open domain and relatively longer videos remains a challenge, as it needs large vocabularies and training data. Methods that follow the CNN-LSTM/GRU framework mainly differ in the types of CNNs and language models (vanilla RNNs, LSTMs, and GRUs) they employ, as well as in how they pass the extracted visual features to the language model (at the first time step only, or at all time steps).
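To make this difference concrete, the following minimal PyTorch sketch mean-pools pretrained CNN frame features into a single video embedding and feeds it to an LSTM decoder at the first time step only. All dimensions, names and toy inputs are illustrative assumptions rather than a reproduction of any specific published model.

```python
import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    """Toy CNN-feature encoder + LSTM decoder for video captioning."""
    def __init__(self, feat_dim=2048, embed_dim=256, hidden_dim=512, vocab=10000):
        super().__init__()
        self.visual_proj = nn.Linear(feat_dim, embed_dim)  # video -> word space
        self.embed = nn.Embedding(vocab, embed_dim)        # word embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab)            # per-step word scores

    def forward(self, frame_feats, captions):
        # frame_feats: (B, n_frames, feat_dim) from a pretrained 2D/3D CNN
        # captions:    (B, n_words) integer word ids
        video = self.visual_proj(frame_feats.mean(dim=1))  # mean-pool over frames
        words = self.embed(captions[:, :-1])               # teacher-forcing inputs
        # Visual features are injected at the first time step only.
        inputs = torch.cat([video.unsqueeze(1), words], dim=1)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                            # (B, n_words, vocab)

# Toy usage: 2 clips, 40 frames of 2048-d features, 12-word captions.
model = VideoCaptioner()
logits = model(torch.randn(2, 40, 2048), torch.randint(0, 10000, (2, 12)))
# Train by minimizing cross-entropy between logits and the caption word ids.
```

A variant that injects the video features at every time step would instead concatenate the video embedding to each word embedding before the LSTM.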
Later methods progressed by introducing additional transformations on top of the standard encoder-decoder framework. These transformations include attention mechanisms BIB029, where the model learns which parts of the video to focus on; sequence learning BIB028, which models the sequence of video frames together with the sequence of words in the corresponding sentence; semantic attributes BIB039, BIB040, which exploit visual semantics in addition to CNN features; and joint modeling of visual content with compositional text BIB034. More recently, the video based visual description problem has evolved towards dense video captioning and video storytelling, and new datasets have been introduced to progress along these lines.
When it comes to performance comparison, quantitative evaluation of video description systems is not straightforward. Currently, automatic evaluations are typically performed using machine translation and image captioning metrics, including Bilingual Evaluation Understudy (BLEU) BIB002, Recall Oriented Understudy for Gisting Evaluation (ROUGE), Metric for Evaluation of Translation with Explicit Ordering (METEOR), Consensus based Image Description Evaluation (CIDEr) BIB030, and the recently proposed Semantic Propositional Image Captioning Evaluation (SPICE) BIB037 and Word Mover's Distance (WMD) BIB031 metrics. Section 5.1 presents these measures in detail; here, we give a brief overview to establish the motivation for our survey. BLEU is a precision-based metric that accounts for exact matching of n-grams between the generated sentences and the ground-truth references. METEOR, on the other hand, first creates an alignment between the two sentences by comparing exact tokens, stemmed tokens and paraphrases; it also takes semantically similar matches into consideration using WordNet synonyms. ROUGE, similar to BLEU, has different n-gram based versions and computes recall between the generated and the reference sentences. CIDEr is a human-consensus-based evaluation metric that was developed specifically for evaluating image captioning methods but has also been used in video description tasks. WMD makes use of word embeddings (semantically meaningful vector representations of words) and compares two texts using the Earth Mover's Distance (EMD); this metric is relatively insensitive to word order and synonym changes in a sentence and, like CIDEr and METEOR, correlates highly with human judgments. Lastly, SPICE is a more recent metric that correlates better with human judgment of semantic quality than the previously reported metrics; it compares the semantic information of two sentences by matching their content in dependency parse trees.
These metrics capture very different performance measures for the same method and are not perfectly aligned with human judgments. Moreover, due to their hand-engineered nature, their scores are unstable when the candidate sentence is perturbed with synonyms, changed word order, length, or redundancy. Hence, there is a need for an evaluation metric that is learned from training data so as to score in harmony with human judgments of videos with diverse content. The current literature lacks a comprehensive and systematic survey that covers the different aspects of video description research, including methods, dataset characteristics, evaluation measures, benchmark results, and the related competitions and video Q&A challenges. We fill this gap and present a comprehensive survey of the literature. We first highlight the important applications and major trends of video description in Section 1 and then classify automatic video description methods into three groups, giving an overview of the models from each group in Section 2. In Section 3, we elaborate on the available video description datasets used for benchmarking. In Section 4, we present the details of video description competitions and challenges. In Section 5, we review the evaluation metrics that are used for quantitative analysis of the generated descriptions. In Section 6, benchmark results achieved by the aforementioned methods are compared and discussed. In Section 7, we discuss possible future directions, and finally, Section 8 concludes the survey with some insights into the findings.
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> This paper describes an approach to Japanese-English translation through internal expressions which are similar to those used in our recent approach to English-Japanese translation [2]. Attention is focused on construction of the internal expressions of Japanese sentences based on case structures of predicates and also conversion of the Japanese internal expressions to the English ones for generating good English sentences in conventional use. Finally, associated with translation, extraction of specified translated information from Japanese patent claim sentences is described briefly. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> Images of vehicles which move in traffic scenes recorded by a stationary camera have been detected and tracked without operator intervention. The resulting vehicle trajectories were projected from the image plane onto the street plane. A suitable system internal representation of about 90 German motion verbs was then exploited in order to automatically characterize trajectory segments in terms of natural language concepts. A multiresolution approach for feature matching has been developed which is robust enough to track vehicle images across hundreds of frames, despite considerable variations in size and projected velocity. Results from experiments with image sequences from real-world traffic scenes are presented. > <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> This book presents the most complete exposition of the theory of head-driven phrase structure grammar (HPSG), introduced in the authors' "Information-Based Syntax and Semantics." HPSG provides an integration of key ideas from the various disciplines of cognitive science, drawing on results from diverse approaches to syntactic theory, situation semantics, data type theory, and knowledge representation. The result is a conception of grammar as a set of declarative and order-independent constraints, a conception well suited to modelling human language processing. This self-contained volume demonstrates the applicability of the HPSG approach to a wide range of empirical problems, including a number which have occupied center-stage within syntactic theory for well over twenty years: the control of "understood" subjects, long-distance dependencies conventionally treated in terms of "wh"-movement, and syntactic constraints on the relationship between various kinds of pronouns and their antecedents. The authors make clear how their approach compares with and improves upon approaches undertaken in other frameworks, including in particular the government-binding theory of Noam Chomsky. <s> BIB003 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> No feature-based vision system can work unless good features can be identified and tracked from frame to frame. Although tracking itself is by and large a solved problem, selecting features that can be tracked well and correspond to physical points in the world is still hard. We propose a feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world. 
These methods are based on a new tracking algorithm that extends previous Newton-Raphson style search methods to work under affine image transformations. We test performance with several simulations and experiments. > <s> BIB004 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> We address the problem of visually detecting causal events and fitting them together into a coherent story of the action witnessed by the camera. We show that this can be done by reasoning about the motions and collisions of surfaces, using high-level causal constraints derived from psychological studies of infant visual behavior. These constraints are naive forms of basic physical laws governing substantiality, contiguity, momentum, and acceleration. We describe two implementations. One system parses instructional videos, extracting plans of action and key frames suitable for storyboarding. Since learning will play a role in making such systems robust, we introduce a new framework for higher-order hidden Markov models and demonstrate its use in a second system that segments stereo video into actions in near real-time. Rather than attempt accurate low-level vision, both systems use high-level causal analysis to integrate fast but sloppy pixel-based representations over time. The output is suitable for summary, indexing, and automated editing. <s> BIB005 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> In this paper we develop a representation for the temporal structure inherent in human actions and demonstrate an effective method for using that representation to detect the occurrence of actions. The temporal structure of the action, sub-actions, events, and sensor information is described using a constraint network based on Allen's interval algebra. We map these networks onto a simpler, S-valued domain (past, now, fut) network-a PNF-network-to allow fast detection of actions and sub-actions. The occurrence of an action is computed by considering the minimal domain of its PNF-network, under constraints imposed by the current state of the sensors and the previous states of the network. We illustrate the approach with examples, showing that a major advantage of PNF propagation is the detection and removal of in-consistent situations. <s> BIB006 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds. 
<s> BIB007 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> Face Recognition (FR) systems are increasingly gaining more importance. Face detection and tracking in a complex scene forms the first step in building a practical FR system. In this paper, a method to detect and track human faces in color image sequences is described. Skin color classification and morphological segmentation is used to detect face(s) in the first frame. These detected faces are tracked over subsequent frames by using the position of the faces in the first frame as the marker and detecting for skin in the localized region. Specific advantages of this approach are that skin color analysis method is simple and powerful, and the system can be used to detect/track multiple faces. <s> BIB008 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> The goal of this paper is to describe and demonstrate the application of Bayesian networks in a generic automatic video surveillance system. Taking image features of tracked moving regions from an image sequence as input, mobile object properties are first computed and noise is suppressed by statistical methods. The probability that a scenario occurs is then computed from these mobile object properties through several layers of naive Bayesian classifiers (or a Bayesian network). Several issues and solutions regarding the efficiency of the Bayesian network are discussed. For example, the parameters of the networks, which represent rare activities (typical of video surveillance applications), can be learned from image sequences of similar scenarios which are more common. We demonstrate the effectiveness of our approach by training the networks with 600 image frames belonging to one domain of interest and applying them to image sequences in a different domain. <s> BIB009 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the "integral image" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a "cascade" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection. <s> BIB010 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> In this paper, we present techniques for recognizing complex, multitasked activities from video. 
Visual information like image features and motion appearances, combined with domain-specific information, like object context is used initially to label events. Each action event is represented with a unique symbol, allowing for a sequence of interactions to be described as an ordered symbolic string. Then, a model of stochastic context-free grammar (SCFG), which is developed using underlying rules of an activity, is used to provide the structure for recognizing semantically meaningful behavior over extended periods. Symbolic strings are parsed using the Earley-Stolcke algorithm to determine the most likely semantic derivation for recognition. Parsing substrings allows us to recognize patterns that describe high-level, complex events taking place over segments of the video sequence. We introduce new parsing strategies to enable error detection and recovery in stochastic context-free grammar and methods of quantifying group and individual behavior in activities with separable roles. We show through experiments, with a popular card game, the recognition of high-level narratives of multi-player games and the identification of player strategies and behavior using computer vision. <s> BIB011 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. We present a context-based vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, main street), to categorize new environments (office, corridor, street) and to use that information to provide contextual priors for object recognition (e.g., tables are more likely in an office than a street). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides realtime feedback to the user. <s> BIB012 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> Dynamic Probabilistic Networks (DPNs) are exploited for modeling the temporal relationships among a set of different object temporal events in the scene for a coherent and robust scene-level behaviour interpretation. In particular, we develop a Dynamically Multi-Linked Hidden Markov Model (DML-HMM) to interpret group activities involving multiple objects captured in an outdoor scene. The model is based on the discovery of salient dynamic interlinks among multiple temporal events using DPNs. Object temporal events are detected and labeled using Gaussian Mixture Models with automatic model order selection. A DML-HMM is built using Schwarz's Bayesian Information Criterion based factorisation resulting in its topology being intrinsically determined by the underlying causality and temporal order among different object events. Our experiments demonstrate that its performance on modelling group activities in a noisy outdoor scene is superior compared to that of a Multi-Observation Hidden Markov Model (MOHMM), a Parallel Hidden Markov Model (PaHMM) and a Coupled Hidden Markov Model (CHMM). 
<s> BIB013 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> We propose a method for describing human activities from video images based on concept hierarchies of actions. Major difficulty in transforming video images into textual descriptions is how to bridge a semantic gap between them, which is also known as inverse Hollywood problem. In general, the concepts of events or actions of human can be classified by semantic primitives. By associating these concepts with the semantic features extracted from video images, appropriate syntactic components such as verbs, objects, etc. are determined and then translated into natural language sentences. We also demonstrate the performance of the proposed method by several experiments. <s> BIB014 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> A representational gap exists between low-level measurements (segmentation, object classification, tracking) and high-level understanding of video sequences. In this paper, we propose a novel representation of events in videos to bridge this gap, based on the CASE representation of natural languages. The proposed representation has three significant contributions over existing frameworks. First, we recognize the importance of causal and temporal relationships between subevents and extend CASE to allow the representation of temporal structure and causality between sub-events. Second, in order to capture both multi-agent and multithreaded events, we introduce a hierarchical CASE representation of events in terms of sub-events and case-lists. Last, for purposes of implementation we present the concept of a temporal event-tree, and pose the problem of event detection as subtree pattern matching. By extending CASE, a natural language representation, for the representation of events, the proposed work allows a plausible means of interface between users and the computer. We show two important applications of the proposed event representation for the automated annotation of standard meeting video sequences, and for event detection in extended videos of railroad crossings. <s> BIB015 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> Representation and recognition of events in a video is important for a number of tasks such as video surveillance, video browsing and content based video indexing. This paper describes the results of a "Challenge Project on Video Event Taxonomy" sponsored by the Advanced Research and Development Activity (ARDA) of the U.S. Government in the summer and fall of 2003. The project brought together more than 30 researchers in computer vision and knowledge representation and representatives of the user community. It resulted in the development of a formal language for describing an ontology of events, which we call VERL (Video Event Representation Language) and a companion language called VEML (Video Event Markup Language) to annotate instances of the events described in VERL. This paper provides a summary of VERL and VEML as well as the considerations associated with the specific design choices. <s> BIB016 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. 
After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds. <s> BIB017 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> This exploratory paper quests for a stochastic and context sensitive grammar of images. The grammar should achieve the following four objectives and thus serves as a unified framework of representation, learning, and recognition for a large number of object categories. (i) The grammar represents both the hierarchical decompositions from scenes, to objects, parts, primitives and pixels by terminal and nonterminal nodes and the contexts for spatial and functional relations by horizontal links between the nodes. It formulates each object category as the set of all possible valid configurations produced by the grammar. (ii) The grammar is embodied in a simple And-Or graph representation where each Or-node points to alternative sub-configurations and an And-node is decomposed into a number of components. This representation supports recursive top-down/bottom-up procedures for image parsing under the Bayesian framework and make it convenient to scale up in complexity. Given an input image, the image parsing task constructs a most probable parse graph on-the-fly as the output interpretation and this parse graph is a subgraph of the And-Or graph after making choice on the Or-nodes. (iii) A probabilistic model is defined on this And-Or graph representation to account for the natural occurrence frequency of objects and parts as well as their relations. This model is learned from a relatively small training set per category and then sampled to synthesize a large number of configurations to cover novel object instances in the test set. This generalization capability is mostly missing in discriminative machine learning methods and can largely improve recognition performance in experiments. (iv) To fill the well-known semantic gap between symbols and raw signals, the grammar includes a series of visual dictionaries and organizes them through graph composition. At the bottom-level the dictionary is a set of image primitives each having a number of anchor points with open bonds to link with other primitives. These primitives can be combined to form larger and larger graph structures for parts and objects. The ambiguities in inferring local primitives shall be resolved through top-down computation using larger structures. Finally these primitives forms a primal sketch representation which will generate the input image with every pixels explained. The proposal grammar integrates three prominent representations in the literature: stochastic grammars for composition, Markov (or graphical) models for contexts, and sparse coding with primitives (wavelets). It also combines the structure-based and appearance based methods in the vision literature. 
Finally the paper presents three case studies to illustrate the proposed grammar. <s> BIB018 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> This paper presents an integrated system for emotion detection. In this research effort, we have taken into account the fact that emotions are most widely represented with eye and mouth expressions. The proposed system uses color images and it is consisted of three modules. The first module implements skin detection, using Markov random fields models for image segmentation and skin detection. A set of several colored images with human faces have been considered as the training set. A second module is responsible for eye and mouth detection and extraction. The specific module uses the HLV color space of the specified eye and mouth region. The third module detects the emotions pictured in the eyes and mouth, using edge detection and measuring the gradient of eyes' and mouth's region figure. The paper provides results from the system application, along with proposals for further research. <s> BIB019 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem. However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose. <s> BIB020 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> In this paper we propose a framework that performs automatic semantic annotation of visual events (SAVE). This is an enabling technology for content-based video annotation, query and retrieval with applications in Internet video search and video data mining. The method involves identifying objects in the scene, describing their inter-relations, detecting events of interest, and representing them semantically in a human readable and query-able format. The SAVE framework is composed of three main components. The first component is an image parsing engine that performs scene content extraction using bottom-up image analysis and a stochastic attribute image grammar, where we define a visual vocabulary from pixels, primitives, parts, objects and scenes, and specify their spatio-temporal or compositional relations; and a bottom-up top-down strategy is used for inference. The second component is an event inference engine, where the video event markup language (VEML) is adopted for semantic representation, and a grammar-based approach is used for event analysis and detection. 
The third component is the text generation engine that generates text report using head-driven phrase structure grammar (HPSG). The main contribution of this paper is a framework for an end-to-end system that infers visual events and annotates a large collection of videos. Experiments with maritime and urban scenes indicate the feasibility of the proposed approach. <s> BIB021 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> System theoretic approaches to action recognition model the dynamics of a scene with linear dynamical systems (LDSs) and perform classification using metrics on the space of LDSs, e.g. Binet-Cauchy kernels. However, such approaches are only applicable to time series data living in a Euclidean space, e.g. joint trajectories extracted from motion capture data or feature point trajectories extracted from video. Much of the success of recent object recognition techniques relies on the use of more complex feature descriptors, such as SIFT descriptors or HOG descriptors, which are essentially histograms. Since histograms live in a non-Euclidean space, we can no longer model their temporal evolution with LDSs, nor can we classify them using a metric for LDSs. In this paper, we propose to represent each frame of a video using a histogram of oriented optical flow (HOOF) and to recognize human actions by classifying HOOF time-series. For this purpose, we propose a generalization of the Binet-Cauchy kernels to nonlinear dynamical systems (NLDS) whose output lives in a non-Euclidean space, e.g. the space of histograms. This can be achieved by using kernels defined on the original non-Euclidean space, leading to a well-defined metric for NLDSs. We use these kernels for the classification of actions in video sequences using (HOOF) as the output of the NLDS. We evaluate our approach to recognition of human actions in several scenarios and achieve encouraging results. <s> BIB022 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection.This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension. <s> BIB023 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. 
We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function. <s> BIB024 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> We describe a general method for building cascade classifiers from part-based deformable models such as pictorial structures. We focus primarily on the case of star-structured models and show how a simple algorithm based on partial hypothesis pruning can speed up object detection by more than one order of magnitude without sacrificing detection accuracy. In our algorithm, partial hypotheses are pruned with a sequence of thresholds. In analogy to probably approximately correct (PAC) learning, we introduce the notion of probably approximately admissible (PAA) thresholds. Such thresholds provide theoretical guarantees on the performance of the cascade method and can be computed from a small sample of positive examples. Finally, we outline a cascade detection algorithm for a general class of models defined by a grammar formalism. This class includes not only tree-structured pictorial structures but also richer models that can represent each part recursively as a mixture of other parts. <s> BIB025 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> This contribution addresses generation of natural language descriptions for human actions and behaviour observed in video streams. The work starts with implementation of conventional image processing techniques to extract high-level features from video. Because human is often the most important and also interesting feature, description focuses on humans and their activities. Although feature extraction processes are erroneous at various levels, we explore approaches to put them together to produce a coherent description. Evaluation is made by calculating the overlap similarity score between human authored and machine generated descriptions. <s> BIB026 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> Presented is a hybrid method to generate textual descriptions of video based on actions. The method includes an action classifier and a description generator. The aim for the action classifier is to detect and classify the actions in the video, such that they can be used as verbs for the description generator. The aim of the description generator is (1) to find the actors (objects or persons) in the video and connect these correctly to the verbs, such that these represent the subject, and direct and indirect objects, and (2) to generate a sentence based on the verb, subject, and direct and indirect objects. The novelty of our method is that we exploit the discriminative power of a bag-of-features action detector with the generative power of a rule-based action descriptor. Shown is that this approach outperforms a homogeneous setup with the rule-based action detector and action descriptor. 
<s> BIB027 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> We have developed a system that recognizes 48 human behaviors from video. The essential elements are (i) inference of the actors in the scene, (ii) assessment of event-related properties of actors and between actors, (iii) exploiting the event properties to recognize the behaviors. The performance of our recognizer approaches human performance, yet the performance for unseen variations of the behaviors needs to be improved. <s> BIB028 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> This contribution addresses generation of natural language descriptions for human actions, behaviour and their relations with other objects observed in video streams. The work starts with implementation of conventional image processing techniques to extract high level features from video. These features are converted into natural language descriptions using context free grammar. Although feature extraction processes are erroneous at various levels, we explore approaches to putting them together to produce a coherent description. Evaluation is made by calculating ROUGE scores between human annotated and machine generated descriptions. Further we introduce a task based evaluation by human subjects which provides qualitative evaluation of generated descriptions. <s> BIB029 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> The problem of describing images through natural language has gained importance in the computer vision community. Solutions to image description have either focused on a top-down approach of generating language through combinations of object detections and language models or bottom-up propagation of keyword tags from training images to test images through probabilistic or nearest neighbor techniques. In contrast, describing videos with natural language is a less studied problem. In this paper, we combine ideas from the bottom-up and top-down approaches to image description and propose a method for video description that captures the most relevant contents of a video in a natural language description. We propose a hybrid system consisting of a low level multimodal latent topic model for initial keyword annotation, a middle level of concept detectors and a high level module to produce final lingual descriptions. We compare the results of our system to human descriptions in both short and long forms on two datasets, and demonstrate that final system output has greater agreement with the human descriptions than any single level. <s> BIB030 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> We present a holistic data-driven technique that generates natural-language descriptions for videos. We combine the output of state-of-the-art object and activity detectors with "real-world" knowledge to select the most probable subject-verb-object triplet for describing a video. We show that this knowledge, automatically mined from web-scale text corpora, enhances the triplet selection algorithm by providing it contextual information and leads to a four-fold increase in activity identification. Unlike previous methods, our approach can annotate arbitrary videos without requiring the expensive collection and annotation of a similar training video corpus. 
We evaluate our technique against a baseline that does not use text-mined knowledge and show that humans prefer our descriptions 61% of the time. <s> BIB031 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> We present a system that produces sentential descriptions of video: who did what to whom, and where and how they did it. Action class is rendered as a verb, participant objects as noun phrases, properties of those objects as adjectival modifiers in those noun phrases, spatial relations between those participants as prepositional phrases, and characteristics of the event as prepositional-phrase adjuncts and adverbial modifiers. Extracting the information needed to render these linguistic entities requires an approach to event recognition that recovers object tracks, the trackto-role assignments, and changing body posture. <s> BIB032 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> This paper integrates techniques in natural language processing and computer vision to improve recognition and description of entities and activities in real-world videos. We propose a strategy for generating textual descriptions of videos by using a factor graph to combine visual detections with language statistics. We use state-of-the-art visual recognition systems to obtain confidences on entities, activities, and scenes present in the video. Our factor graph model combines these detection confidences with probabilistic knowledge mined from text corpora to estimate the most likely subject, verb, object, and place. Results on YouTube videos show that our approach improves both the joint detection of these latent, diverse sentence components and the detection of some individual components when compared to using the vision system alone, as well as over a previous n-gram language-modeling approach. The joint detection allows us to automatically generate more accurate, richer sentential descriptions of videos with a wide array of possible content. <s> BIB033 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Classical Methods <s> The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements. <s> BIB034
|
The SVO (Subject, Verb, Object) tuple based methods are among the first successful methods used specifically for video description. However, efforts to describe visual content in natural language were made long before, albeit not explicitly for captioning or description. The first ever attempt goes back to Koller et al. BIB002 in 1991, who developed a system that was able to characterize the motion of vehicles in real traffic scenes using natural language verbs. Later, in 1997, Brand et al. BIB005 dubbed this the "Inverse Hollywood Problem" (since in Hollywood a script, i.e., a description, is converted into a video, here the problem is the opposite) and summarized series of actions into semantic tags in order to develop a storyboard from instructional videos. They also developed a system, "video gister", that was able to heuristically parse videos into a series of key actions and generate a script describing the actions detected in the video. In addition, they generated key frames depicting the detected causal events and encoded event series as semantic representations, e.g., "add" is represented by enter, motion, and detach, while "remove" is represented by attach, move, and leave. Video gister was limited to a single human arm (actor) interacting with non-liquid objects and could understand only five actions (touch, put, get, add, remove). Returning to SVO tuple based methods, these tackle the video description generation task in two stages. The first stage, known as content identification, focuses on visual recognition and classification of the main objects in the video clip. These typically include the performer or actor, the action, and the object of that action. The second stage involves sentence generation, which maps the objects identified in the first stage to Subject, Verb and Object (hence the name SVO) and fills in handcrafted templates to produce grammatically sound sentences. These templates are created using grammar or rule-based systems, which are only effective in very constrained environments, i.e., short clips or videos with a limited number of objects and actions. Numerous methods have been proposed for detecting objects, humans, actions, and events in videos. Below we summarize the recognition techniques used in Stage I of the SVO tuple based approaches.
• Object Recognition: Object recognition in SVO approaches was typically performed using conventional methods, including model-based shape matching through edge detection or color matching BIB014 , HAAR feature matching BIB010 , context-based object recognition BIB012 , the Scale Invariant Feature Transform (SIFT) BIB007 , discriminatively trained part-based models BIB024 and the Deformable Parts Model (DPM) BIB020 , BIB025 .
• Human and Activity Detection: Human detection methods employed features such as Histograms of Oriented Gradients (HOG) BIB017 followed by an SVM classifier. For activity detection, features such as spatio-temporal interest points and Histograms of Oriented Optical Flow (HOOF) BIB022 , together with models such as Bayesian Networks (BN) BIB009 , Dynamic Bayesian Networks (DBNs) BIB013 , Hidden Markov Models (HMMs) , state machines BIB002 , and PNF Networks BIB006 , have been used by SVO approaches.
• Integrated Approaches: Instead of detecting the description-relevant entities separately, Stochastic Attribute Image Grammar (SAIG) BIB018 and Stochastic Context Free Grammars (SCFG) BIB011 allow for a compositional representation of the visual entities present in a video, an image or a scene based on their spatial and functional relations.
Using the visual grammar, the content of an image is first extracted as a parse graph. A parsing algorithm is then used to find the best-scoring entities that describe the video. In other words, not all entities present in a video are of equal relevance, which is a distinguishing feature of this class of methods compared to the aforementioned approaches. For Stage II, sentence generation, a variety of methods have been proposed, including the HALogen representation , Head-driven Phrase Structure Grammar (HPSG) BIB003 , and a planner with a surface realizer . The primary common task of these methods is to define templates. A template is a user-defined language structure containing placeholders. To function properly, a template comprises three parts: lexicons, grammar, and template rules. The lexicon represents the vocabulary that describes high-level video features. Template rules are user-defined rules guiding the selection of appropriate lexicons for sentence generation. Grammar defines the linguistic rules that describe the structure of expressions in a language, ensuring that a generated sentence is syntactically correct. Using production rules, grammar can generate a large number of configurations from a relatively small vocabulary. In template based approaches, a sentence is generated by fitting the most important entities to each of the categories required by the template, e.g., subject, verb, object, and place. Entities and actions recognized in the content identification stage are used as lexicons, and the grammar ensures the correctness of the generated sentence. Figure 4 presents examples of some popular templates used for sentence generation in template based approaches; a toy implementation is sketched below.
Fig. 4: An example of various templates used for sentence generation from videos. Subject, verb, and object are used to fill in these templates. The verb is obtained from action/activity detection methods using spatio-temporal features, whereas the subject and object are obtained from object detection methods using spatial features.
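To make the template mechanism concrete, the following is a minimal Python sketch of template-based sentence generation from a detected SVO tuple. The lexicon, templates, and rules here are hypothetical toy examples for illustration, not those of any particular published system.

```python
# Minimal sketch of template-based sentence generation from an SVO tuple.
# The lexicon, template rules, and checks below are illustrative toy
# examples, not the templates of any specific published system.

LEXICON = {
    "subjects": {"person", "man", "woman"},
    "verbs": {"walk": "walks", "lift": "lifts", "put": "puts down"},
    "objects": {"box", "cup", "table"},
}

TEMPLATES = {
    # Template rules select a template based on which slots were detected.
    ("subject", "verb", "object"): "A {subject} {verb} the {object}.",
    ("subject", "verb"): "A {subject} {verb}.",
}

def generate_sentence(subject, verb, obj=None):
    """Fill the best-matching template with detected SVO entities."""
    if subject not in LEXICON["subjects"] or verb not in LEXICON["verbs"]:
        return None  # lexicon check failed; no sentence is generated
    if obj is not None and obj not in LEXICON["objects"]:
        return None
    slots = ("subject", "verb", "object") if obj else ("subject", "verb")
    return TEMPLATES[slots].format(subject=subject,
                                   verb=LEXICON["verbs"][verb],  # conjugate
                                   object=obj)

print(generate_sentence("person", "lift", "box"))  # -> A person lifts the box.
```

In a real system the lexicon would be populated by the Stage I detectors, and the grammar component would be considerably richer, handling tense, agreement, and prepositional phrases.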
Figure 5 gives a timeline of how the classical methods evolved over time. Below, we survey SVO methods by grouping them into three categories: subject (human) focused methods, action and object focused methods, and methods that apply the SVO approach to open domain videos. Note that the boundaries between these categories are frequently blurred.
(1) Subject (Human) Focused: In 2002, Kojima et al. BIB014 proposed one of the earliest methods designed specifically for video captioning. This method focuses primarily on describing videos of one person performing a single action. To detect humans in a scene, they calculated the probability of a pixel coming from the background or the skin region using the values and distributions of pixel chromaticity. Once a human's head and hands are detected, the human posture is estimated by considering three kinds of geometric information, i.e., the positions of the head and hands and the direction of the head. For example, to obtain the head direction, the detected head image is compared against a list of pre-collected head models and a threshold is used to decide on the matching head direction. For object detection, they applied two-way matching, i.e., shape-based matching and pixel-based color matching against a list of predefined known objects. The detected actions all relate to object handling, and the difference image is used to detect actions such as putting an object down or lifting an object up. To generate sentence descriptions, pre-defined case frames and verb patterns, as proposed by Nishida et al. BIB001 , are used. A case frame is a type of frame expression used for representing the relationships between cases, which are classified into eight categories; the frequently used ones are agent, object, and locus. For example, "a person walks from the table to the door" is represented as [PRED:walk, AG:person, GO-LOC:by(door), SO-LOC:front(table)], where PRED is the predicate for the action, AG is the agent or actor, GO-LOC is the goal location and SO-LOC is the source location. A list of semantic primitives about movements is defined and organized using body action state transitions. For example, if motion is detected and the speed is fast, the activity state transitions from moving to running. They also distinguish durative actions (e.g., walk) from instantaneous actions (e.g., stand up). The major drawback of their approach is that it cannot be easily extended to more complex scenarios involving multiple actors, temporal information, or causal relationships between events. Its heavy reliance on the correctness of the manually created activity concept hierarchy and state transition model also prevents its use in practical situations. Hakeem et al. BIB015 addressed the shortcomings of Kojima et al.'s BIB014 work and proposed an extended case framework (CASE E) using hierarchical CASE representations. They incorporated multiple-agent events, temporal information, and causal relationships between events to describe the events in natural language. They introduced a case-list to incorporate multiple agents in AG, e.g., [PRED:move, AG:{person1, person2}, ...]. Moreover, they incorporated temporal information into CASE using temporal logic to encode the relationships between sub-events. As some events are conditional on other events, they also captured causal relationships between events. For example, in the sentence "a man played piano and the crowd applauded", the applause occurred because the piano was played. Khan et al. BIB026 introduced a framework to describe human-related content, such as actions (limited to five only) and emotions, in videos using natural language sentences. They implemented a suite of conventional image processing techniques, including face detection BIB008 , emotion detection BIB019 , action detection , non-human object detection BIB010 and scene classification , to extract the high level entities of interest from video frames. These include humans, objects, actions, gender, position and emotion. Since their approach encapsulates human-related actions, the human is rendered as the Subject and the objects upon which an action is performed are rendered as the Object. A template based approach is adopted to generate natural language sentences based on the detected entities. They evaluated the method on a dataset of 50 snippets, each spanning 5 to 20 seconds. Out of the 50, 20 snippets were human close-ups and 30 showed human activities such as stand, walk, sit, run and wave. The primary focus of their research was on activities involving a human interacting with some objects; hence, their method does not generate any description until a human is detected in the video. The method cannot identify actions with subtle movements (such as smoking and drinking) or interactions among humans.
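Before turning to the next category, the case frame representation underlying these subject-focused systems can be illustrated with a small Python sketch that renders a Nishida-style case frame into a sentence. The surface-realization rules below are simplified assumptions for exposition.

```python
# Minimal sketch: rendering a case frame into an English sentence.
# The case categories follow the examples above (PRED, AG, GO-LOC, SO-LOC);
# the rendering rules are simplified assumptions, purely for illustration.

def realize(case_frame):
    """Turn a case frame dictionary into a sentence."""
    parts = [f"A {case_frame['AG']} {case_frame['PRED']}s"]
    if "SO-LOC" in case_frame:                    # source location
        parts.append(f"from the {case_frame['SO-LOC']}")
    if "GO-LOC" in case_frame:                    # goal location
        parts.append(f"to the {case_frame['GO-LOC']}")
    return " ".join(parts) + "."

frame = {"PRED": "walk", "AG": "person",
         "SO-LOC": "table", "GO-LOC": "door"}
print(realize(frame))  # -> A person walks from the table to the door.
```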
(2) Action and Object Focused: Lee et al. BIB021 proposed a method for semantically annotating visual content in three sequential stages: image parsing, event inference, and language generation. An "image parsing engine" based on the stochastic attribute image grammar (SAIG) BIB018 is employed to produce a visual vocabulary, i.e., a list of visual entities present in the frame along with their relationships. This output is then fed into an "event inference engine", which extracts the semantic and contextual information of visual events, along with their relationships. Video Event Markup Language (VEML) BIB016 is used to represent the semantic information. In the final stage, head-driven phrase structure grammar (HPSG) BIB003 is used to generate a text description from the semantic representation. Compared to Kojima et al. BIB014 , grammar-based methods can infer and annotate a wider range of scenes and events. Ten streams of urban traffic and maritime scenes over a period of 120 minutes, containing more than 400 moving objects, are used for evaluation. Detected events include "entering the scene, moving, stopping, turning, approaching traffic intersection, watercraft approaching maritime markers and land areas and scenarios where one object follows the other" BIB021 . Recall and precision rates are employed to evaluate the accuracy of the detected events with respect to manually labeled ground truth. Due to poor estimation of the motion direction from a low number of perspective views, their method does not perform well on "turning" events. Hanckmann et al. BIB027 proposed a method to automatically describe events involving multiple actions (7 on average) performed by one or more individuals. Unlike Khan et al. BIB026 , human-human interactions are taken into account in addition to human-object interactions. Bag-of-features action detectors (48 in total) BIB028 are used for detecting and classifying the actions in a video. The description generator subsequently describes the verbs relating the actions to the scene entities: it finds the appropriate actors among objects or persons and connects them to the appropriate verbs. In contrast to Khan et al. BIB026 , who assume that the subject is always a person, Hanckmann et al. BIB027 generalize subjects to include vehicles as well. Furthermore, the set of human actions is much richer: compared to the five verbs in Khan et al. BIB026 , they have 48 verbs capturing a diverse range of actions such as approach, arrive, bounce, carry, and catch. Barbu et al. BIB032 generated sentence descriptions for short videos from highly constrained domains consisting of 70 object classes, 48 action classes and a vocabulary of 118 words. They rendered a detected object as a noun and an action as a verb. Adjectives are used for object properties and prepositions for their spatial relationships. Their approach comprises three steps. First, object detection BIB025 is carried out on each frame, limiting detections to 12 per frame to avoid over-detection. Second, object tracking BIB004 is performed to increase the precision. Third, the optimal set of detections is chosen using dynamic programming. Verb labels corresponding to the actions in the videos are then produced using Hidden Markov Models (HMMs). After obtaining the verb, all tracks are merged to generate template based sentences that comply with grammar rules.
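The HMM-based verb labeling step can be illustrated with a short sketch: one HMM is kept per verb, and the verb whose model assigns the highest likelihood to the observed (discretized) track features is selected. The forward algorithm is implemented directly for self-containment, and all model parameters below are hypothetical toy values rather than those of Barbu et al. BIB032 .

```python
import numpy as np

# Minimal sketch of HMM-based verb selection: score an observation sequence
# (quantized track features) under one HMM per verb and pick the best verb.

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete sequence."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict next state, weigh by emission
        s = alpha.sum()
        logp += np.log(s)               # accumulate scale to avoid underflow
        alpha /= s
    return logp

# One toy 2-state HMM per verb; columns of B are 3 quantized speed symbols.
verbs = {
    "walk": (np.array([0.6, 0.4]),
             np.array([[0.7, 0.3], [0.3, 0.7]]),
             np.array([[0.8, 0.1, 0.1], [0.2, 0.6, 0.2]])),
    "run":  (np.array([0.5, 0.5]),
             np.array([[0.5, 0.5], [0.2, 0.8]]),
             np.array([[0.1, 0.2, 0.7], [0.1, 0.3, 0.6]])),
}

track = [2, 2, 1, 2, 2]  # a fast-moving person track, mostly high speed
print(max(verbs, key=lambda v: forward_loglik(track, *verbs[v])))  # -> run
```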
Despite the reasonably accurate lingual descriptions generated for videos in constrained environments, the aforementioned methods have trouble scaling to the increased number of objects and actions in open domain and large video corpora. To incorporate all the relevant concepts, these methods require customized detectors for each entity. Furthermore, the text generated by existing methods of the time mostly took the form of lists of keywords put together using grammars and templates, without any semantic verification. To address the lack of semantic verification, Das et al. BIB030 proposed a hybrid method that produces content of higher relevance than simple keyword annotation methods. Borrowing ideas from image captioning techniques, this hybrid model comprises three hierarchical steps. First, in a bottom-up step, keywords are predicted using low level video features: a proposal distribution over the training vocabulary is found using multimodal latent topic models, and then, using grammar rules and part-of-speech (POS) tagging, the most probable subjects, objects and verbs are selected. Second, in a top-down step, a set of concepts is detected and stitched together; a tripartite graph template is then used to convert the stitched concepts into a natural language description. Finally, for semantic verification, they produce a ranked set of natural language sentences by comparing the predicted keywords with the detected concepts. Quantitative evaluation of this hybrid method shows that it was able to generate more relevant content compared to its predecessors BIB032 , BIB029 .
(3) SVO Methods for Open Domain Videos: While most of the previously mentioned works are restricted to constrained domains, Krishnamoorthy et al. BIB031 led the early work on describing open domain videos. They used selected open domain YouTube videos; however, the subjects and objects were limited to the 20 entities available in the classifier training set. Their main contribution is the introduction of text mining on web-scale text corpora to aid the selection of the best SVO tuple and improve sentence coherence. In addition to focusing on open domain videos and utilizing web-scale text corpora, Guadarrama et al. and Thomason et al. BIB033 started dealing with relatively larger vocabularies. Compared to Krishnamoorthy et al. BIB031 , instead of using only the 20 objects in the PASCAL dataset BIB023 , all videos of the YouTube corpora are used for the detection of 241 objects, 45 subjects, and 218 verbs. To describe short YouTube videos, Guadarrama et al. proposed a novel language driven approach. They introduced "zero-shot" verb recognition for selecting verbs unseen in the training set. For example, if the subject is "person", the object is "car" and the model-predicted verb is "move", then the most suitable verb would be "drive". Thomason et al. BIB033 used visual recognition techniques on YouTube videos to obtain probabilistic estimates of subjects, verbs, and objects. Their approach is illustrated in Figure 6.
Fig. 6: Illustration of the Thomason et al. BIB033 approach, where confidences are obtained by integrating probabilities from the visual recognition system with statistics from out-of-domain English text corpora to determine the most likely SVOP tuple. The red block shows a low probability given to a correct object by the visual system, which is rectified by the high probability from the linguistic model.
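A minimal sketch of the intuition behind Figure 6 follows: visual detection confidences are combined with corpus-derived bigram statistics to re-rank candidate SVOP tuples. This simple product-of-scores stand-in only illustrates the idea, since the actual model BIB033 is a factor graph; all numbers below are hypothetical.

```python
import itertools

# Toy visual confidences per sentence component (hypothetical values).
visual = {
    "subject": {"person": 0.9, "dog": 0.1},
    "verb":    {"ride": 0.4, "walk": 0.6},
    "object":  {"bike": 0.7, "ball": 0.3},
    "place":   {"street": 0.8, "kitchen": 0.2},
}

# Toy SV/VO/OP bigram statistics mined from text corpora (hypothetical).
lang = {
    ("person", "ride"): 0.5, ("person", "walk"): 0.5,
    ("dog", "ride"): 0.05,   ("dog", "walk"): 0.95,
    ("ride", "bike"): 0.9,   ("ride", "ball"): 0.1,
    ("walk", "bike"): 0.2,   ("walk", "ball"): 0.8,
    ("bike", "street"): 0.9, ("bike", "kitchen"): 0.1,
    ("ball", "street"): 0.5, ("ball", "kitchen"): 0.5,
}

def score(s, v, o, p):
    """Product of visual confidences and language bigram statistics."""
    vis = (visual["subject"][s] * visual["verb"][v]
           * visual["object"][o] * visual["place"][p])
    return vis * lang[(s, v)] * lang[(v, o)] * lang[(o, p)]

best = max(itertools.product(visual["subject"], visual["verb"],
                             visual["object"], visual["place"]),
           key=lambda t: score(*t))
print(best)  # -> ('person', 'ride', 'bike', 'street')
```

Note that "ride" wins over the visually more confident "walk" because the corpus statistics strongly favour the (ride, bike) pairing, mirroring how the language model rectifies weak visual evidence in Figure 6.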
The object and action classifiers were trained on ImageNet BIB034 . In addition to detecting subjects, verbs and objects, the places (12 scenes) where actions are performed, e.g., kitchen or playground, are also identified. To further improve the accuracy of assigning visually detected entities to the right category, language statistics obtained from four "out of domain" English text corpora (English Gigaword, the British National Corpus (BNC), ukWac and WaCkypedia EN) are used to enhance the confidence of word-category alignment for sentence generation. A small "in domain" corpus comprising human-annotated sentences for the video description dataset is also constructed and incorporated into the sentence generation stage. Co-occurring bigram (SV, VO, and OP) statistics for the candidate SVOP tuples are calculated using both the "out of domain" and the "in domain" corpora and are fed to a Factor Graph Model (FGM) to predict the most probable SVO and place combination. Finally, the detected SVOP tuple is used to generate an English sentence through a template based approach. In summary, classical methods focused mainly on detecting pre-defined entities and events separately and then described the detected entities and events using template based sentences. However, to describe open domain videos, or videos with more events and entities, classical methods must employ object and action detection techniques for each entity, which is unrealistic given the computational complexity. Moreover, template based descriptions are insufficient to cover all possible events in videos, given the linguistic complexity and diversity. Consequently, these methods failed to describe semantically rich videos.
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> We propose a method for describing human activities from video images based on concept hierarchies of actions. Major difficulty in transforming video images into textual descriptions is how to bridge a semantic gap between them, which is also known as inverse Hollywood problem. In general, the concepts of events or actions of human can be classified by semantic primitives. By associating these concepts with the semantic features extracted from video images, appropriate syntactic components such as verbs, objects, etc. are determined and then translated into natural language sentences. We also demonstrate the performance of the proposed method by several experiments. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> We describe an open-source toolkit for statistical machine translation whose novel contributions are (a) support for linguistically motivated factors, (b) confusion network decoding, and (c) efficient data formats for translation models and language models. In addition to the SMT decoder, the toolkit also includes a wide variety of tools for training, tuning and applying the system to many translation tasks. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> Humans use rich natural language to describe and communicate visual perceptions. In order to provide natural language descriptions for visual content, this paper combines two important ingredients. First, we generate a rich semantic representation of the visual content including e.g. object and activity labels. To predict the semantic representation we learn a CRF to model the relationships between different components of the visual input. And second, we propose to formulate the generation of natural language as a machine translation problem using the semantic representation as source language and the generated sentences as target language. For this we exploit the power of a parallel corpus of videos and textual descriptions and adapt statistical machine translation to translate between our two languages. We evaluate our video descriptions on the TACoS dataset, which contains video snippets aligned with sentence descriptions. Using automatic evaluation and human judgments we show significant improvements over several baseline approaches, motivated by prior work. Our translation approach also shows improvements over related work on an image description task. <s> BIB003 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> The problem of describing images through natural language has gained importance in the computer vision community. Solutions to image description have either focused on a top-down approach of generating language through combinations of object detections and language models or bottom-up propagation of keyword tags from training images to test images through probabilistic or nearest neighbor techniques. In contrast, describing videos with natural language is a less studied problem. In this paper, we combine ideas from the bottom-up and top-down approaches to image description and propose a method for video description that captures the most relevant contents of a video in a natural language description. 
We propose a hybrid system consisting of a low level multimodal latent topic model for initial keyword annotation, a middle level of concept detectors and a high level module to produce final lingual descriptions. We compare the results of our system to human descriptions in both short and long forms on two datasets, and demonstrate that final system output has greater agreement with the human descriptions than any single level. <s> BIB004 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> We present a holistic data-driven technique that generates natural-language descriptions for videos. We combine the output of state-of-the-art object and activity detectors with "real-world" knowledge to select the most probable subject-verb-object triplet for describing a video. We show that this knowledge, automatically mined from web-scale text corpora, enhances the triplet selection algorithm by providing it contextual information and leads to a four-fold increase in activity identification. Unlike previous methods, our approach can annotate arbitrary videos without requiring the expensive collection and annotation of a similar training video corpus. We evaluate our technique against a baseline that does not use text-mined knowledge and show that humans prefer our descriptions 61% of the time. <s> BIB005 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> Humans can easily describe what they see in a coherent way and at varying level of detail. However, existing approaches for automatic video description focus on generating only single sentences and are not able to vary the descriptions’ level of detail. In this paper, we address both of these limitations: for a variable level of detail we produce coherent multi-sentence descriptions of complex videos. To understand the difference between detailed and short descriptions, we collect and analyze a video description corpus of three levels of detail. We follow a two-step approach where we first learn to predict a semantic representation (SR) from video and then generate natural language descriptions from it. For our multi-sentence descriptions we model across-sentence consistency at the level of the SR by enforcing a consistent topic. Human judges rate our descriptions as more readable, correct, and relevant than related work. <s> BIB006 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> This paper integrates techniques in natural language processing and computer vision to improve recognition and description of entities and activities in real-world videos. We propose a strategy for generating textual descriptions of videos by using a factor graph to combine visual detections with language statistics. We use state-of-the-art visual recognition systems to obtain confidences on entities, activities, and scenes present in the video. Our factor graph model combines these detection confidences with probabilistic knowledge mined from text corpora to estimate the most likely subject, verb, object, and place. Results on YouTube videos show that our approach improves both the joint detection of these latent, diverse sentence components and the detection of some individual components when compared to using the vision system alone, as well as over a previous n-gram language-modeling approach. 
The joint detection allows us to automatically generate more accurate, richer sentential descriptions of videos with a wide array of possible content. <s> BIB007 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> This paper focuses on transcription generation in the form of subject, verb, object (SVO) triplets for videos in the wild, given off-the-shelf visual concept detectors. This problem is challenging due to the availability of sentence only annotations, the unreliability of concept detectors, and the lack of training samples for many words. Facing these challenges, we propose a Semantic Aware Transcription (SAT) framework based on Random Forest classifiers. It takes concept detection results as input, and outputs a distribution of English words. SAT uses video, sentence pairs for training. It hierarchically learns node splits by grouping semantically similar words, measured by a continuous skip-gram language model. This not only addresses the sparsity of training samples per word, but also yields semantically reasonable errors during transcription. SAT provides a systematic way to measure the relatedness of a concept detector to real words, which helps us understand the relationship between current visual detectors and words in a semantic space. Experiments on a large video dataset with 1,970 clips and 85,550 sentences are used to demonstrate our idea. <s> BIB008 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> Audio Description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset which contains transcribed ADs, which are temporally aligned to full length HD movies. In addition we also collected the aligned movie scripts which have been used in prior work and compare the two different sources of descriptions. In total the MPII Movie Description dataset (MPII-MD) contains a parallel corpus of over 68K sentences and video snippets from 94 HD movies. We characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are far more visual and describe precisely what is shown rather than what should happen according to the scripts created prior to movie production. <s> BIB009 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> In this work, we introduce a dataset of video annotated with high quality natural language phrases describing the visual content in a given segment of time. Our dataset is based on the Descriptive Video Service (DVS) that is now encoded on many digital media products such as DVDs. DVS is an audio narration describing the visual elements and actions in a movie for the visually impaired. It is temporally aligned with the movie and mixed with the original movie soundtrack. We describe an automatic DVS segmentation and alignment method for movies, that enables us to scale up the collection of a DVS-derived dataset with minimal human intervention. Using this method, we have collected the largest DVS-derived dataset for video description of which we are aware. Our dataset currently includes over 84.6 hours of paired video/sentences from 92 DVDs and is growing. 
<s> BIB010 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> Recently, joint video-language modeling has been attracting more and more attention. However, most existing approaches focus on exploring the language model upon on a fixed visual model. In this paper, we propose a unified framework that jointly models video and the corresponding text sentences. The framework consists of three parts: a compositional semantics language model, a deep video model and a joint embedding model. In our language model, we propose a dependency-tree structure model that embeds sentence into a continuous vector space, which preserves visually grounded meanings and word order. In the visual model, we leverage deep neural networks to capture essential semantic information from videos. In the joint embedding model, we minimize the distance of the outputs of the deep video model and compositional language model in the joint space, and update these two models jointly. Based on these three parts, our system is able to accomplish three tasks: 1) natural language generation, and 2) video retrieval and 3) language retrieval. In the experiments, the results show our approach outperforms SVM, CRF and CCA baselines in predicting Subject-Verb-Object triplet and natural sentence generation, and is better than CCA in video retrieval and language retrieval tasks. <s> BIB011 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> Most previous work on video description trains individual parts of speech independently. It is more appealing from a linguistic point of view, for word models for all parts of speech to be learned simultaneously from whole sentences, a hypothesis suggested by some linguists for child language acquisition. In this paper, we learn to describe video by discriminatively training positive sentential labels against negative ones in a weakly supervised fashion: the meaning representations (i.e., HMMs) of individual words in these labels are learned from whole sentences without any correspondence annotation of what those words denote in the video. Textual descriptions are then generated for new video using trained word models. <s> BIB012 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Statistical Methods <s> Abstract : The overall objective in guidance by semantics is to improve the sensing and actuation of unmanned and optionally manned platforms by incorporating high-level visual inference into the robot loop. Our specific goal in this project is to perform mapping and localization on a mobile platform using semantically meaningful sensor data. In our case, we have used a camera image co-registered with a laser scan to filter in scan points that fall on buildings in the scene or an RGBD sensor. This enables the robotic platform to only make use of readings that are known to be good, static landmarks, such as buildings as we have done in previous years. However, buildings are just one small type of semantic inference we can and plan to use. <s> BIB013
|
Naïve SVO tuple rule-based engineering approaches are inadequate for describing open domain videos and large datasets, such as YouTubeClips , TACoS-MultiLevel BIB006 , MPII-MD BIB009 , and M-VAD BIB010 . These datasets contain very large vocabularies as well as tens of hours of videos. There are three important differences between these open domain datasets and previous ones. Firstly, open domain videos contain an unforeseeably diverse set of subjects, objects, activities and places. Secondly, due to the sophisticated nature of human languages, such datasets are often annotated with multiple viable meaningful descriptions. Thirdly, the videos to be described are often long, potentially stretching over many hours, which makes multi-sentence or even paragraph-level descriptions more desirable. To avoid the tedious effort required by rule-based engineering methods, Rohrbach et al. BIB003 proposed a machine learning method to convert visual content into natural language, using parallel corpora of videos and associated annotations. Their method follows a two-step approach. First, it learns to represent the video with intermediate semantic labels using a maximum a posteriori (MAP) estimate. Then, it translates the semantic labels into natural language sentences using techniques borrowed from Statistical Machine Translation (SMT) BIB002 . In this machine translation approach, the intermediate semantic label representation is the source language, while the expected annotations are regarded as the target language. For the object and activity recognition stage, research moved from the earlier threshold-based detection BIB001 to manual feature engineering and traditional classifiers BIB004 , BIB005 , BIB007 . For the sentence generation stage, an uptake of machine learning methods can be observed in recent years to address the issue of large vocabularies. This is also evidenced by the trend in recent methods that use models for lexical entries learned in a weakly supervised BIB006 , BIB003 , BIB011 , BIB012 or fully supervised BIB013 , BIB005 , BIB008 fashion. However, the separation of the two stages makes this camp of methods incapable of capturing the interplay between visual features and linguistic patterns, let alone learning a transferable state space between visual artifacts and linguistic representations. In the next section, we review the deep learning methods and discuss how they address the scalability, language complexity and domain transferability issues faced by open domain video description.
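Before moving on, the two-step translation view can be made concrete with a toy Python sketch: the video is first mapped to intermediate semantic labels (the "source language"), which are then translated into a sentence through a phrase-table lookup in the spirit of SMT BIB002 . The labels, phrase table, and probabilities below are hypothetical illustrations, not the models learned by Rohrbach et al. BIB003 .

```python
# Toy sketch of the two-step approach: (1) predict intermediate semantic
# labels from the video, (2) "translate" them into a sentence via a phrase
# table, in the spirit of SMT. All entries here are hypothetical.

# Step 1 stand-in: assume the visual model's MAP estimate yields these labels.
semantic_labels = ("person", "cut", "cucumber", "kitchen")

# Step 2: phrase table mapping label n-grams to target phrases with
# translation probabilities, as a real SMT system would learn from data.
phrase_table = {
    ("person",): [("the person", 0.9), ("someone", 0.1)],
    ("cut", "cucumber"): [("slices a cucumber", 0.7),
                          ("cuts a cucumber", 0.3)],
    ("kitchen",): [("in the kitchen", 1.0)],
}

def translate(labels):
    """Greedy left-to-right decoding with longest-match phrase lookup."""
    out, i = [], 0
    while i < len(labels):
        for j in range(len(labels), i, -1):      # prefer longer phrases
            options = phrase_table.get(tuple(labels[i:j]))
            if options:
                out.append(max(options, key=lambda x: x[1])[0])
                i = j
                break
        else:
            i += 1                               # skip untranslatable label
    return " ".join(out).capitalize() + "."

print(translate(semantic_labels))
# -> The person slices a cucumber in the kitchen.
```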
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Learning Models <s> Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Learning Models <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Learning Models <s> In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
<s> BIB003 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Learning Models <s> Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier. <s> BIB004 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Learning Models <s> This paper presents a speech recognition system that directly transcribes audio data with text, without requiring an intermediate phonetic representation. The system is based on a combination of the deep bidirectional LSTM recurrent neural network architecture and the Connectionist Temporal Classification objective function. A modification to the objective function is introduced that trains the network to minimise the expectation of an arbitrary transcription loss function. This allows a direct optimisation of the word error rate, even in the absence of a lexicon or language model. The system achieves a word error rate of 27.3% on the Wall Street Journal corpus with no prior linguistic information, 21.9% with only a lexicon of allowed words, and 8.2% with a trigram language model. Combining the network with a baseline system further reduces the error rate to 6.7%. <s> BIB005 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Learning Models <s> We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. 
One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. <s> BIB006 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Learning Models <s> Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art. <s> BIB007
|
The whirlwind success of deep learning in almost all subfields of computer vision has also revolutionized video description. In particular, Convolutional Neural Networks (CNNs) BIB002 are the state of the art for modeling visual data and excel at tasks such as object recognition BIB002 , BIB003 , BIB006 . Long Short-Term Memory networks (LSTMs) BIB001 and the more general deep Recurrent Neural Networks (RNNs), on the other hand, now dominate sequence modeling, setting new benchmarks in machine translation , BIB004 , speech recognition BIB005 and the closely related task of image captioning , BIB007 . While conventional methods struggle to cope with large-scale, complex and diverse video description datasets, researchers have combined these deep networks in various configurations with promising performance. As shown in Figure 7 , deep learning approaches to video description can also be divided into two sequential stages, namely visual content extraction and text generation. However, in contrast to the SVO tuple methods of Section 2.1, where the first stage outputs lexical word tokens, here the first stage produces visual features represented by fixed or dynamic real-valued vectors. This is often referred to as the video encoding stage. CNNs, RNNs or LSTMs are used in this encoding stage to learn the visual features, which are then used in the second stage for text generation, also known as the decoding stage. For decoding, different flavours of RNNs are used, such as deep RNNs, bi-directional RNNs, LSTMs or Gated Recurrent Units (GRUs). The resulting description can be a single sentence or multiple sentences. Figure 8 illustrates a typical end-to-end video description system with encoder-decoder stages; a minimal code sketch of such a pipeline follows the grouping below. The encoding part is followed by transformations such as mean pooling, temporal encoding or attention mechanisms to represent the visual content. Some methods apply sequence-to-sequence learning and/or semantic attribute learning in their frameworks. The aforementioned mechanisms have been used in different combinations by contemporary methods. We group the literature based on the combinations of deep learning architectures used for the encoding and decoding stages, namely:
• CNN-RNN Video Description, where convolutional architectures are used for visual encoding and recurrent structures are used for decoding. This is the most common architecture employed in deep learning based video description methods;
• RNN-RNN Video Description, where recurrent networks are used for both stages; and
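As a concrete illustration of the encoder-decoder pipeline common to these groupings, below is a minimal PyTorch-style sketch in which mean-pooled CNN frame features condition an LSTM decoder that emits one word per step. The dimensions, the use of precomputed frame features, and the model itself are simplifying assumptions for exposition, not any specific published architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of a CNN-RNN video captioner: mean-pooled CNN frame
# features condition an LSTM decoder (teacher-forced during training).

class MeanPoolCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, embed_dim=256, hidden_dim=512, vocab=10000):
        super().__init__()
        self.visual_proj = nn.Linear(feat_dim, embed_dim)  # encode clip vector
        self.word_embed = nn.Embedding(vocab, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab)            # per-step word logits

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, num_frames, feat_dim), e.g. from a pretrained CNN
        # captions:    (batch, seq_len) ground-truth word indices
        clip = self.visual_proj(frame_feats.mean(dim=1))   # mean pooling
        words = self.word_embed(captions)
        # Feed the clip vector as the first "word" seen by the decoder.
        inputs = torch.cat([clip.unsqueeze(1), words], dim=1)
        hidden, _ = self.decoder(inputs)
        return self.out(hidden)                            # word distributions

model = MeanPoolCaptioner()
feats = torch.randn(2, 30, 2048)           # 2 clips, 30 frames of features each
caps = torch.randint(0, 10000, (2, 12))    # toy caption word indices
print(model(feats, caps).shape)            # torch.Size([2, 13, 10000])
```

Mean pooling discards temporal order; attention-based variants instead let the decoder compute a differently weighted sum over the frame features at every decoding step.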
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Detecting humans in films and videos is a challenging problem owing to the motion of the subjects, the camera and the background and to variations in pose, appearance, clothing, illumination and background clutter. We develop a detector for standing and moving people in videos with possibly moving cameras and backgrounds, testing several different motion coding schemes and showing empirically that orientated histograms of differential optical flow give the best overall performance. These motion-based descriptors are combined with our Histogram of Oriented Gradient appearance descriptors. The resulting detector is tested on several databases including a challenging test set taken from feature films and containing wide ranges of pose, motion and background variations, including moving cameras and backgrounds. We validate our results on two challenging test sets containing more than 4400 human examples. The combined detector reduces the false alarm rate by a factor of 10 relative to the best appearance-based detector, for example giving false alarm rates of 1 per 20,000 windows tested at 8% miss rate on our Test Set 1. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Local space-time features have recently become a popular video representation for action recognition. Several methods for feature localization and description have been proposed in the literature and promising recognition results were demonstrated for a number of action classes. The comparison of existing methods, however, is often limited given the different experimental settings used. The purpose of this paper is to evaluate and compare previously proposed space-time features in a common experimental setup. In particular, we consider four different feature detectors and six local feature descriptors and use a standard bag-of-features SVM approach for action recognition. We investigate the performance of these methods on a total of 25 action classes distributed over three datasets with varying difficulty. Among interesting conclusions, we demonstrate that regular sampling of space-time features consistently outperforms all tested space-time interest point detectors for human actions in realistic settings. We also demonstrate a consistent ranking for the majority of methods over different datasets and discuss their advantages and limitations. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective.
We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB003 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Humans use rich natural language to describe and communicate visual perceptions. In order to provide natural language descriptions for visual content, this paper combines two important ingredients. First, we generate a rich semantic representation of the visual content including e.g. object and activity labels. To predict the semantic representation we learn a CRF to model the relationships between different components of the visual input. And second, we propose to formulate the generation of natural language as a machine translation problem using the semantic representation as source language and the generated sentences as target language. For this we exploit the power of a parallel corpus of videos and textual descriptions and adapt statistical machine translation to translate between our two languages. We evaluate our video descriptions on the TACoS dataset, which contains video snippets aligned with sentence descriptions. Using automatic evaluation and human judgments we show significant improvements over several baseline approaches, motivated by prior work. Our translation approach also shows improvements over related work on an image description task. <s> BIB004 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score. <s> BIB005 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. 
Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier. <s> BIB006 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (≈ 2.5 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia. <s> BIB007 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Videos have become ubiquitous due to the ease of capturing and sharing via social platforms like Youtube, Facebook, Instagram, and others. The computer vision community has tried to tackle various video analysis problems independently. As a consequence, even though some really good hand-crafted features have been proposed there is a lack of generic features for video analysis. On the other hand, the image domain has progressed rapidly by using features from deep convolutional networks. These deep features are proving to be generic and perform well on variety of image tasks. In this work we propose Convolution 3D (C3D) feature, a generic spatio-temporal feature obtained by training a deep 3-dimensional convolutional network on a large annotated video dataset comprising objects, scenes, actions, and other frequently occurring concepts. We show that by using spatio-temporal convolutions the trained features encapsulate appearance and motion cues and perform well on different video classification tasks. C3D has three main advantages.
First, it is generic: achieving state-of-the-art results on object recognition, scene classification, sport classification, and action similarity labeling in videos. Second, it is compact: obtaining better accuracy than best hand-crafted features and best deep image features with a lower dimensional feature descriptor. Third, it is efficient to compute: 91 times faster than current hand-crafted features, and two orders of magnitude faster than current deep-learning based video classification methods. <s> BIB008 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition. <s> BIB009 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements. <s> BIB010 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions.
First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions. <s> BIB011 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. <s> BIB012 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be senstive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD). <s> BIB013 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] and M-VAD [31] allow to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long Short-Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the setting of movie description. 
In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these classifiers we generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD and M-VAD datasets. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task. <s> BIB014 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Automatically describing video content with natural language is a fundamental challenge of computer vision. Re-current Neural Networks (RNNs), which models sequence dynamics, has attracted increasing attention on visual interpretation. However, most existing approaches generate a word locally with the given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but the semantics (e.g., subjects, verbs or objects) are not true. This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which can simultaneously explore the learning of LSTM and visual-semantic embedding. The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter is to create a visual-semantic embedding space for enforcing the relationship between the semantics of the entire sentence and visual content. The experiments on YouTube2Text dataset show that our proposed LSTM-E achieves to-date the best published performance in generating natural sentences: 45.3% and 31.0% in terms of BLEU@4 and METEOR, respectively. Superior performances are also reported on two movie description datasets (M-VAD and MPII-MD). In addition, we demonstrate that LSTM-E outperforms several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets. <s> BIB015 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> This paper investigates how linguistic knowledge mined from large text corpora can aid the generation of natural language descriptions of videos. Specifically, we integrate both a neural language model and distributional semantics trained on large text corpora into a recent LSTM-based architecture for video description. We evaluate our approach on a collection of Youtube videos as well as two large movie description datasets showing significant improvements in grammaticality while modestly improving descriptive quality. <s> BIB016 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. 
The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively. <s> BIB017 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Most natural videos contain numerous events. For example, in a video of a "man playing a piano", the video might also contain "another man dancing" or "a crowd clapping". We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with it's unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization. <s> BIB018 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> CNN-RNN Video Description <s> Automatic generation of video captions is a fundamental challenge in computer vision. Recent techniques typically employ a combination of Convolutional Neural Networks (CNNs) and Recursive Neural Networks (RNNs) for video captioning. These methods mainly focus on tailoring sequence learning through RNNs for better caption generation, whereas off-the-shelf visual features are borrowed from CNNs. We argue that careful designing of visual features for this task is equally important, and present a visual feature encoding technique to generate semantically rich captions using Gated Recurrent Units (GRUs). Our method embeds rich temporal dynamics in visual features by hierarchically applying Short Fourier Transform to CNN features of the whole video. It additionally derives high level semantics from an object detector to enrich the representation with spatial dynamics of the detected objects. The final representation is projected to a compact space and fed to a language model. By learning a relatively simple language model comprising two GRU layers, we establish new state-of-the-art on MSVD and MSR-VTT datasets for METEOR and ROUGE_L metrics. <s> BIB019
|
Given its simplicity and success in computer vision, the CNN is still by far the most popular network structure used for encoding the visual content. Donahue et al. were the first to use deep neural networks to solve the video captioning problem. They proposed three architectures for video description. Their model assumes that CRF-based predictions of subjects, objects, and verbs are available after a full pass over the complete video, which allows the architecture to observe the entire video at each time step. The first architecture, LSTM encoder-decoder with CRF max, is motivated by the statistical machine translation (SMT) based video description approach of Rohrbach et al. BIB004 mentioned earlier in Section 2.2. Recognizing the state-of-the-art machine translation performance of LSTMs, the SMT module in BIB004 is replaced with a stacked LSTM comprising two layers for encoding and decoding. Similar to BIB006 , the first LSTM layer encodes the one-hot vector of the input sentence, allowing for variable-length inputs. The final hidden representation from the encoder stage is then fed into the decoder stage to generate a sentence, producing one word per time step. The second variant of the architecture, LSTM decoder with CRF max, incorporates max predictions. It encodes the semantic representation into a fixed-length vector and, similar to image description, the LSTM is able to see the whole visual content at every time step. An advantage of the LSTM is that it can incorporate probability vectors during training as well as testing. This virtue is exploited in the third variant of the architecture, LSTM decoder with CRF probabilities, which incorporates probability distributions instead of the max predictions used in the second variant. Although the LSTM outperformed the SMT-based approach of BIB004 , it was still not trainable in an end-to-end fashion.

In contrast to the work by Donahue et al. , where an intermediate role representation was adopted, Venugopalan et al. presented the first end-to-end trainable network architecture for generating natural language descriptions of videos. Their model simultaneously learns the semantic as well as the grammatical structure of the associated language. Moreover, whereas Donahue et al. presented results on domain-specific cooking videos comprising pre-defined objects and actors, Venugopalan et al. reported results on open-domain YouTube clips . To avoid supervised intermediate representations, they connected an LSTM directly to the output of the CNN: the CNN extracts visual features, whereas the LSTM models the sequence dynamics. They transformed a short video into a fixed-length visual input using a CNN model BIB007 that differs slightly from AlexNet BIB003 . The CNN model BIB007 was learned on the ILSVRC-2012 object classification dataset (comprising 1.2M images), a subset of ImageNet BIB010 , and provides a robust and efficient initialization for recognizing objects in videos without manual feature selection. They sampled every tenth frame of the video and extracted features for all sampled frames from the fc7 layer of the CNN. They then represented the complete video by averaging all the extracted frame-wise feature vectors into a single vector. This pooled feature vector is fed into a two-layered LSTM BIB005 , forming the input to the first LSTM layer; a minimal sketch of this mean-pooling pipeline is given below.
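To make the pooled-representation idea concrete, the following is a minimal sketch of this approach in PyTorch. It is an illustration under stated assumptions, not the authors' original (Caffe-based) implementation: the feature dimension 4096 mirrors the fc7 layer, while the hidden size, vocabulary size, sampling rate and training details are placeholders.

```python
import torch
import torch.nn as nn

# Minimal sketch of mean-pooled video captioning (in the spirit of the
# approach described above). Assumptions: 4096 mirrors the fc7 feature
# size; hidden and vocabulary sizes are illustrative placeholders.

class MeanPoolCaptioner(nn.Module):
    def __init__(self, feat_dim=4096, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        # Two stacked LSTM layers, as in the described architecture.
        self.lstm = nn.LSTM(feat_dim + hidden_dim, hidden_dim,
                            num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, n_frames, feat_dim) fc7-style features of
        # sampled frames; captions: (batch, seq_len) word indices.
        video_vec = frame_feats.mean(dim=1)        # temporal mean pooling
        words = self.embed(captions)               # (batch, seq, hidden)
        # Condition every decoding step on the same pooled video vector.
        video_rep = video_vec.unsqueeze(1).expand(-1, words.size(1), -1)
        h, _ = self.lstm(torch.cat([video_rep, words], dim=-1))
        return self.out(h)                         # word logits per step

model = MeanPoolCaptioner()
feats = torch.randn(2, 30, 4096)        # 2 videos, 30 sampled frames each
caps = torch.randint(0, 10000, (2, 12))
logits = model(feats, caps)             # (2, 12, 10000)
```

Because every decoding step sees the same pooled vector, the sketch also makes the core limitation visible: any temporal ordering among frames is discarded before decoding begins.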
A second LSTM layer is stacked on top of the first, where the hidden state of the first LSTM layer becomes the input to the second LSTM layer for caption generation. In essence, transforming multiple frame-based feature vectors into a single aggregated video-based vector reduces the video description problem to an image captioning one. This end-to-end model performed better than the previous video description systems of the time and was able to generate sentences without any templates. However, as a result of the simple averaging, valuable temporal information of the video, such as the order in which any two objects appear, is lost. Therefore, this approach is only suitable for generating captions for short clips with a single major action.

Open-domain videos are rich in complex interactions among actors and objects. Representing such videos using a single temporally averaged feature vector is, therefore, prone to producing clutter. Consequently, the resulting descriptions are bound to be inadequate because valuable temporal ordering information of events is not captured in the representation. With the success of C3D BIB008 in capturing spatio-temporal action dynamics in videos, Yao et al. BIB011 proposed a novel 3D-CNN to model the spatio-temporal information in videos. Their 3D-CNN is based on GoogLeNet BIB012 and pre-trained on an activity recognition dataset. It captures local fine motion information between consecutive frames. This local motion information is subsequently summarized and preserved in higher-level representations by modeling a video as a 3D spatio-temporal cuboid, which is further represented by a concatenation of HoG, HoF and MBH features BIB001 , BIB002 . These transformations not only help capture local motion features but also reduce the computation of the subsequent 3D CNN. For the global temporal structure, a temporal attention mechanism is proposed, adapted from soft attention BIB009 ; a minimal sketch of soft temporal attention is given below. Using the 3D CNN and attention mechanisms in the RNN, they were able to improve results.

Recently, GRU-EVE BIB019 was proposed as an effective and computationally efficient technique for video captioning. GRU-EVE uses a standard GRU for language modeling, but with Enriched Visual Encoding as follows: it applies the Short Fourier Transform to 2D/3D-CNN features in a hierarchical manner to encapsulate the spatio-temporal video dynamics, and further enriches the visual features with high-level semantics of the objects and actions detected in the video. Interestingly, the enriched features obtained by applying the Short Fourier Transform to 2D-CNN features alone BIB019 outperform C3D BIB008 features.
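The soft temporal attention used in these models can be sketched compactly. The following is a minimal, self-contained illustration in the spirit of BIB009 as adapted for video in BIB011 ; the layer sizes are assumptions, and the additive scoring network is one common choice rather than necessarily the exact formulation of the cited works.

```python
import torch
import torch.nn as nn

# Minimal sketch of soft temporal attention over frame features.
# All dimensions are illustrative assumptions.

class TemporalAttention(nn.Module):
    def __init__(self, feat_dim=1024, hidden_dim=512):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, hidden_dim)
        self.w_hidden = nn.Linear(hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, frame_feats, decoder_state):
        # frame_feats: (batch, n_frames, feat_dim)
        # decoder_state: (batch, hidden_dim), the current RNN hidden state.
        e = self.score(torch.tanh(
            self.w_feat(frame_feats)
            + self.w_hidden(decoder_state).unsqueeze(1)
        )).squeeze(-1)                    # (batch, n_frames) relevance scores
        alpha = torch.softmax(e, dim=-1)  # attention weights over time
        # Context vector: weighted sum of frame features. The decoder
        # recomputes a different context at every word-generation step.
        context = (alpha.unsqueeze(-1) * frame_feats).sum(dim=1)
        return context, alpha

attn = TemporalAttention()
ctx, weights = attn(torch.randn(2, 28, 1024), torch.randn(2, 512))
```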
Unlike the fixed video representation models discussed above, variable visual representation models are able to directly map input videos comprising different numbers of frames to variable-length word sequences (outputs), and are successful in modeling various complex temporal dynamics. Venugopalan et al. BIB013 proposed an architecture that addresses the variable representation problem at both the input (video frames) and the output (sentence) stage. For that purpose they used a two-layered LSTM framework, where the sequence of video frames is input to the first LSTM layer, the hidden state of the first layer forms the input to the second layer, and the output of the second LSTM layer is the associated caption. The LSTM parameters are shared between both stages. Although sequence-to-sequence learning had previously been used in machine translation BIB006 , this was the first method BIB013 to use a sequence-to-sequence approach in video captioning. Later methods have adopted a similar framework, with variations including attention mechanisms BIB011 , a common visual-semantic embedding BIB015 , or out-of-domain knowledge from either language models BIB016 or visual classifiers BIB014 .

While deep learning has achieved much better results than the previously used classifier-based approaches, most methods aim at producing one sentence from a video clip containing only one major event. In real-world applications, videos generally contain more than a single event, and describing such multi-event, semantically rich videos with only one sentence ends up overly simplified and hence uninformative. For example, instead of saying "someone sliced the potatoes with a knife, chopped the onions into pieces and put the onions and potatoes into the pot", a single-sentence generation method would probably say "someone is cooking". Yu et al. BIB017 proposed a hierarchical recurrent neural network (h-RNN) that applies attention mechanisms to both the temporal and spatial aspects. They focused on the sentence decoder and introduced a hierarchical framework comprising a sentence generator with a paragraph generator on top. First, a Gated Recurrent Unit (GRU) layer takes video features as input and generates a single short sentence. A second recurrent layer then generates paragraphs using the context and the sentence vectors obtained from the sentence generator. The paragraph generator thus captures the dependencies between sentences and generates a paragraph of related sentences; a skeletal sketch of this two-level decoder follows this paragraph.

Recently, Krishna et al. BIB018 introduced the concept of dense captioning of events in a video and employed action detection techniques to predict temporal intervals. They proposed a model that extracts multiple events in a single pass over a video and attempts to describe the detected events simultaneously. This is the first work of its kind to detect and describe multiple, possibly overlapping, events in a video. However, the model did not achieve significant improvement on the captioning benchmark.
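The sketch below shows only the structural idea of the two-level decoder of h-RNN BIB017 : a sentence-level GRU emits words, while a paragraph-level recurrence consumes each finished sentence's embedding and produces the initial state for the next sentence. The sizes, the mean-pooled sentence embedding, and the omission of attention are all simplifying assumptions.

```python
import torch
import torch.nn as nn

# Skeletal sketch of a hierarchical sentence/paragraph decoder in the
# spirit of h-RNN. Sizes and the pooling choice are assumptions.

class HierarchicalDecoder(nn.Module):
    def __init__(self, ctx_dim=1024, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.sent_gru = nn.GRU(ctx_dim + hidden_dim, hidden_dim,
                               batch_first=True)
        self.para_gru = nn.GRUCell(hidden_dim, hidden_dim)  # inter-sentence
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, video_ctx, sentences):
        # video_ctx: (batch, ctx_dim); sentences: list of (batch, len).
        batch = video_ctx.size(0)
        para_state = video_ctx.new_zeros(batch, self.para_gru.hidden_size)
        all_logits = []
        for words in sentences:
            emb = self.embed(words)
            ctx = video_ctx.unsqueeze(1).expand(-1, emb.size(1), -1)
            h0 = para_state.unsqueeze(0)        # init from paragraph state
            h, _ = self.sent_gru(torch.cat([ctx, emb], dim=-1), h0)
            all_logits.append(self.out(h))
            # Summarize the sentence (mean over steps), update paragraph RNN.
            para_state = self.para_gru(h.mean(dim=1), para_state)
        return all_logits

dec = HierarchicalDecoder()
logits = dec(torch.randn(2, 1024),
             [torch.randint(0, 10000, (2, 8)) for _ in range(3)])
```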
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> RNN -RNN Video Description <s> Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> RNN -RNN Video Description <s> We use Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations ("percepts") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> RNN -RNN Video Description <s> We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. 
The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively. <s> BIB003
|
Although not as popular as the CNN-RNN framework, another approach is to encode the visual information with an RNN as well. Srivastava et al. BIB002 use one LSTM to extract features from video frames (i.e., encoding) and then pass the resulting feature vector through another LSTM for decoding; a minimal sketch of this encode-then-decode pattern is given below. They also introduced variants of their model and predicted future sequences from the preceding frames. The authors adopted a machine translation model BIB001 for visual recognition but could not achieve a significant improvement in classification accuracy. Yu et al. BIB003 proposed a similar approach and used two RNN structures for the video description task. Their configuration is a hierarchical decoder with multiple Gated Recurrent Units (GRUs) for sentence generation. The output of this decoder is fed to a paragraph generator, which models the temporal dependencies between sentences while focusing on linguistic aspects. The authors improved the state-of-the-art results for video description; however, their method is inefficient for videos involving fine-grained activities and small interactive objects.
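A minimal sketch of the encode-then-decode pattern is shown below. The linear projection standing in for learned frame "percepts" and all dimensions are illustrative assumptions, not the configuration used in BIB002 .

```python
import torch
import torch.nn as nn

# Minimal sketch of the RNN-RNN pattern: one LSTM encodes the frame
# sequence into a fixed-length state, a second LSTM decodes words from it.
# The projection and all sizes are illustrative assumptions.

class RnnRnnCaptioner(nn.Module):
    def __init__(self, frame_dim=4096, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.proj = nn.Linear(frame_dim, hidden_dim)
        self.encoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frames, captions):
        # frames: (batch, n_frames, frame_dim); captions: (batch, seq_len).
        _, state = self.encoder(self.proj(frames))  # (h, c) summarizes video
        h, _ = self.decoder(self.embed(captions), state)
        return self.out(h)                          # word logits per step

model = RnnRnnCaptioner()
logits = model(torch.randn(2, 30, 4096), torch.randint(0, 10000, (2, 10)))
```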
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Reinforcement Learning Models <s> In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Reinforcement Learning Models <s> Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Reinforcement Learning Models <s> We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. <s> BIB003 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Reinforcement Learning Models <s> The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. 
To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks. <s> BIB004 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Reinforcement Learning Models <s> Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. The primary difficulty arises due to insufficient exploration, resulting in an agent being unable to learn robust value functions. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve problems. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning. A top-level value function learns a policy over intrinsic goals, and a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse, delayed feedback: (1) a complex discrete stochastic decision process, and (2) the classic ATARI game `Montezuma's Revenge'. <s> BIB005 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Reinforcement Learning Models <s> Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. 
We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. <s> BIB006 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Reinforcement Learning Models <s> We introduce FeUdal Networks (FuNs): a novel architecture for hierarchical reinforcement learning. Our approach is inspired by the feudal reinforcement learning proposal of Dayan and Hinton, and gains power and efficacy by decoupling end-to-end learning across multiple levels - allowing it to utilise different resolutions of time. Our framework employs a Manager module and a Worker module. The Manager operates at a lower temporal resolution and sets abstract goals which are conveyed to and enacted by the Worker. The Worker generates primitive actions at every tick of the environment. The decoupled structure of FuN conveys several benefits - in addition to facilitating very long timescale credit assignment it also encourages the emergence of sub-policies associated with different goals set by the Manager. These properties allow FuN to dramatically outperform a strong baseline agent on tasks that involve long-term credit assignment or memorisation. <s> BIB007 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Reinforcement Learning Models <s> Image captioning is a challenging problem owing to the complexity in understanding the image content and diverse ways of describing it in natural language. Recent advances in deep neural networks have substantially improved the performance of this task. Most state-of-the-art approaches follow an encoder-decoder framework, which generates captions using a sequential recurrent prediction model. However, in this paper, we introduce a novel decision-making framework for image captioning. We utilize a policy network and a value network to collaboratively generate captions. The policy network serves as a local guidance by providing the confidence of predicting the next word according to the current state. Additionally, the value network serves as a global and lookahead guidance by evaluating all possible extensions of the current state. In essence, it adjusts the goal of predicting the correct words towards the goal of generating captions similar to the ground truth captions. We train both networks using an actor-critic reinforcement learning model, with a novel reward defined by visual-semantic embedding. 
Extensive experiments and analyses on the Microsoft COCO dataset show that the proposed framework outperforms state-of-the-art approaches across different evaluation metrics. <s> BIB008 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Reinforcement Learning Models <s> Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset. <s> BIB009 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Reinforcement Learning Models <s> In video captioning task, the best practice has been achieved by attention-based models which associate salient visual components with sentences in the video. However, existing study follows a common procedure which includes a frame-level appearance modeling and motion modeling on equal interval frame sampling, which may bring about redundant visual information, sensitivity to content noise and unnecessary computation cost. We propose a plug-and-play PickNet to perform informative frame picking in video captioning. Based on a standard Encoder-Decoder framework, we develop a reinforcement-learning-based procedure to train the network sequentially, where the reward of each frame picking action is designed by maximizing visual diversity and minimizing textual discrepancy. If the candidate is rewarded, it will be selected and the corresponding latent representation of Encoder-Decoder will be updated for future trials. This procedure goes on until the end of the video sequence. Consequently, a compact frame subset can be selected to represent the visual information and perform video captioning without performance degradation. Experiment results shows that our model can use 6-8 frames to achieve competitive performance across popular benchmarks. <s> BIB010 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Deep Reinforcement Learning Models <s> Although end-to-end (E2E) learning has led to impressive progress on a variety of visual understanding tasks, it is often impeded by hardware constraints (e.g., GPU memory) and is prone to overfitting. When it comes to video captioning, one of the most challenging benchmark tasks in computer vision, those limitations of E2E learning are especially amplified by the fact that both the input videos and output captions are lengthy sequences. Indeed, state-of-the-art methods for video captioning process video frames by convolutional neural networks and generate captions by unrolling recurrent neural networks. If we connect them in an E2E manner, the resulting model is both memory-consuming and data-hungry, making it extremely hard to train. In this paper, we propose a multitask reinforcement learning approach to training an E2E video captioning model. 
The main idea is to mine and construct as many effective tasks (e.g., attributes, rewards, and the captions) as possible from the human captioned videos such that they can jointly regulate the search space of the E2E neural network, from which an E2E video captioning model can be found and generalized to the testing phase. To the best of our knowledge, this is the first video captioning model that is trained end-to-end from the raw video input to the caption output. Experimental results show that such a model outperforms existing ones to a large margin on two benchmark video captioning datasets. <s> BIB011
|
Deep Reinforcement Learning (DRL) has outperformed humans in many real-world games. In DRL, artificial agents learn from the environment through trial and error and adjust their policies purely from environmental rewards or punishments. DRL approaches have been popularized by Google DeepMind BIB003 , BIB004 since 2013. Due to the absence of a straightforward cost function, learning mechanisms in this setting are considerably harder to devise than in traditional supervised techniques. Two distinct challenges set reinforcement learning apart from conventional supervised approaches: (1) the model does not have full access to the function being optimized and has to query it through interaction; and (2) the interaction with the environment is state-based, where the present input depends on previous actions. The choice of reinforcement learning algorithm then depends on the scope of the problem at hand. For example, variants of the Hierarchical Reinforcement Learning (HRL) framework have been applied to Atari games BIB005 , BIB007 . Similarly, different variants of DRL have been used to meet the challenging requirements of image captioning BIB008 as well as video description BIB010 , BIB011 , BIB009 , [171].

Wang et al. [171] proposed a fully-differentiable neural network architecture using reinforcement learning for video description. Their method follows a general encoder-decoder framework. The encoding stage captures the video frame features using ResNet-152 BIB006 . The frame-level features are processed by a two-stage encoder, i.e., a low-level LSTM BIB001 followed by a high-level LSTM BIB002 . For decoding, they employed HRL to generate word-by-word natural language descriptions. The HRL agent comprises three components: a low-level worker that accomplishes the tasks set by the manager; a high-level manager that sets goals; and an internal critic that ascertains whether a task has been accomplished and informs the manager accordingly, helping the manager update its goals. The process iterates until the end-of-sentence token is reached. This method was demonstrated to capture more details of the video content and thus generate more fine-grained descriptions; however, it showed very little improvement over existing baseline methods.

In 2018, Chen et al. BIB010 proposed an RL-based model that selects key informative frames to represent a complete video, in an attempt to minimize noise and unnecessary computation. Key frames are selected such that they maximize visual diversity and minimize textual discrepancy; hence, a compact subset of 6-8 frames on average can represent a full video. Evaluated against several popular benchmarks, the model was shown to produce video captions without performance degradation but at a significantly reduced computational cost. The method did not use motion features for encoding, a design trade-off between speed and accuracy.

DRL-based methods are gaining popularity and have shown competitive results in video description. Due to their unconventional learning methodology, DRL methods are less likely to suffer from the paucity of labelled training data, hardware constraints and overfitting problems. Therefore, these methods are expected to flourish. A minimal sketch of the policy-gradient update that underlies several of these systems is given below.
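At the core of several of these methods (e.g., the mixed-loss training of BIB009 ) is a policy-gradient (REINFORCE) update that directly optimizes a sentence-level reward. The sketch below illustrates that update only; the decoder interface (init_state, step, bos_token) and the metric function are hypothetical stand-ins, and real systems typically add a learned or greedy-decoding baseline and mix in a cross-entropy loss.

```python
import torch

# Minimal sketch of the REINFORCE update common to the DRL captioners
# above: sample a caption from the decoder, score the whole sentence with
# a task metric (e.g., CIDEr), and scale the log-probabilities by
# (reward - baseline). `model` and `metric` are hypothetical stand-ins.

def reinforce_loss(model, frames, references, metric, max_len=20):
    words, log_probs = [], []
    state = model.init_state(frames)           # assumed helper
    tok = model.bos_token                      # assumed start token, (batch,)
    for _ in range(max_len):
        logits, state = model.step(tok, state) # assumed one-step decode
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()                    # sample, do not argmax
        log_probs.append(dist.log_prob(tok))
        words.append(tok)
    reward = metric(words, references)         # sentence-level score, (batch,)
    baseline = reward.mean()                   # simple variance reducer
    # Gradient ascent on expected reward == descent on this loss.
    advantage = (reward - baseline).detach()
    return -(advantage * torch.stack(log_probs).sum(dim=0)).mean()
```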
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> YouCook <s> While activity recognition is a current focus of research the challenging problem of fine-grained activity recognition is largely overlooked. We thus propose a novel database of 65 cooking activities, continuously recorded in a realistic setting. Activities are distinguished by fine-grained body motions that have low inter-class variability and high intra-class variability due to diverse subjects and ingredients. We benchmark two approaches on our dataset, one based on articulated pose tracks and the second using holistic video features. While the holistic approach outperforms the pose-based approach, our evaluation suggests that fine-grained activities are more difficult to detect and the body model can help in those cases. Providing high-resolution videos as well as an intermediate pose representation we hope to foster research in fine-grained activity recognition. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> YouCook <s> The problem of describing images through natural language has gained importance in the computer vision community. Solutions to image description have either focused on a top-down approach of generating language through combinations of object detections and language models or bottom-up propagation of keyword tags from training images to test images through probabilistic or nearest neighbor techniques. In contrast, describing videos with natural language is a less studied problem. In this paper, we combine ideas from the bottom-up and top-down approaches to image description and propose a method for video description that captures the most relevant contents of a video in a natural language description. We propose a hybrid system consisting of a low level multimodal latent topic model for initial keyword annotation, a middle level of concept detectors and a high level module to produce final lingual descriptions. We compare the results of our system to human descriptions in both short and long forms on two datasets, and demonstrate that final system output has greater agreement with the human descriptions than any single level. <s> BIB002
|
The YouCook dataset BIB002 consists of 88 YouTube cooking videos of different people cooking various recipes. The background (kitchen/scene) differs in most of the videos. This dataset represents a more challenging visual problem than the MP-II Cooking dataset BIB001 , which is recorded with a fixed camera viewpoint in the same kitchen and with the same background. The dataset is divided into six different cooking styles, for example grilling and baking. For machine learning, the training set contains 49 videos and the test set contains 39 videos. Frame-wise annotations of objects and actions are also provided for the training videos. The object categories of the dataset include "utensils", "bowls" and "food", among others. Amazon Mechanical Turk (AMT) was employed to collect multiple human-generated natural language descriptions of each video. Each AMT worker provided at least three sentences per video as a description, and on average 8 descriptions were collected per video. See Figure 9 (b) for example clips and descriptions.
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> TACoS <s> State-of-the-art human activity recognition methods build on discriminative learning which requires a representative training set for good performance. This leads to scalability issues for the recognition of large sets of highly diverse activities. In this paper we leverage the fact that many human activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. To share and transfer knowledge between composite activities we model them by a common set of attributes corresponding to basic actions and object participants. This attribute representation allows to incorporate script data that delivers new variations of a composite activity or even to unseen composite activities. In our experiments on 41 composite cooking tasks, we found that script data to successfully capture the high variability of composite activities. We show improvements in a supervised case where training data for all composite cooking tasks is available, but we are also able to recognize unseen composites by just using script data and without any manual video annotation. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> TACoS <s> While activity recognition is a current focus of research the challenging problem of fine-grained activity recognition is largely overlooked. We thus propose a novel database of 65 cooking activities, continuously recorded in a realistic setting. Activities are distinguished by fine-grained body motions that have low inter-class variability and high intra-class variability due to diverse subjects and ingredients. We benchmark two approaches on our dataset, one based on articulated pose tracks and the second using holistic video features. While the holistic approach outperforms the pose-based approach, our evaluation suggests that fine-grained activities are more difficult to detect and the body model can help in those cases. Providing high-resolution videos as well as an intermediate pose representation we hope to foster research in fine-grained activity recognition. <s> BIB002
|
Textually Annotated Cooking Scenes (TACoS) is a subset of MP-II Composites BIB001 that was further processed to provide coherent textual descriptions for high-quality videos. Note that MP-II Composites contains more videos but fewer activities than MP-II Cooking BIB002 : it contains 212 high-resolution videos covering 41 cooking activities. Videos in the MP-II Composites dataset vary in length from 1 to 23 minutes, with an average of 4.5 minutes. The TACoS dataset was constructed by filtering MP-II Composites, restricting it to activities that involve the manipulation of cooking ingredients and have at least 4 videos per activity. As a result, TACoS contains 26 fine-grained cooking activities in 127 videos. AMT workers were employed to align the sentences with the associated videos, for example: "preparing carrots", "cutting a cucumber" or "separating eggs".
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> TACoS-MultiLevel <s> Recent work has shown that the integration of visual information into text-based models can substantially improve model predictions, but so far only visual information extracted from static images has been used. In this paper, we consider the problem of grounding sentences describing actions in visual information extracted from videos . We present a general purpose corpus that aligns high quality videos with multiple natural language descriptions of the actions portrayed in the videos, together with an annotation of how similar the action descriptions are to each other. Experimental results demonstrate that a text-based model of similarity between actions improves substantially when combined with visual information from videos depicting the described actions. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> TACoS-MultiLevel <s> Humans can easily describe what they see in a coherent way and at varying level of detail. However, existing approaches for automatic video description focus on generating only single sentences and are not able to vary the descriptions’ level of detail. In this paper, we address both of these limitations: for a variable level of detail we produce coherent multi-sentence descriptions of complex videos. To understand the difference between detailed and short descriptions, we collect and analyze a video description corpus of three levels of detail. We follow a two-step approach where we first learn to predict a semantic representation (SR) from video and then generate natural language descriptions from it. For our multi-sentence descriptions we model across-sentence consistency at the level of the SR by enforcing a consistent topic. Human judges rate our descriptions as more readable, correct, and relevant than related work. <s> BIB002
|
TACoS-MultiLevel BIB002 corpus annotations were also collected via AMT workers on the TACoS corpus BIB001 . For each video in the TACoS corpus, three levels of description were collected: (1) a detailed description with at most 15 sentences per video; (2) a short description comprising 3-5 sentences per video; and (3) a single-sentence description of the video. Annotations are provided in the form of tuples of object, activity, tool, source and target, with a person always being the subject; an illustrative example is given below. See Figure 9 (e) for example clips and descriptions.
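As an illustration, one such annotation tuple could be represented as follows. The field names follow the text above, while the concrete values are hypothetical examples, not records taken from the corpus.

```python
# Illustration of the annotation tuple structure described above; the
# field names follow the text, the values are hypothetical examples.
annotation = {
    "subject": "person",        # the subject is always a person
    "activity": "slice",
    "object": "cucumber",
    "tool": "knife",
    "source": "cutting board",
    "target": "bowl",
}
```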
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> MPII-MD <s> Audio Description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset which contains transcribed ADs, which are temporally aligned to full length HD movies. In addition we also collected the aligned movie scripts which have been used in prior work and compare the two different sources of descriptions. In total the MPII Movie Description dataset (MPII-MD) contains a parallel corpus of over 68K sentences and video snippets from 94 HD movies. We characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are far more visual and describe precisely what is shown rather than what should happen according to the scripts created prior to movie production. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> MPII-MD <s> In this work, we introduce a dataset of video annotated with high quality natural language phrases describing the visual content in a given segment of time. Our dataset is based on the Descriptive Video Service (DVS) that is now encoded on many digital media products such as DVDs. DVS is an audio narration describing the visual elements and actions in a movie for the visually impaired. It is temporally aligned with the movie and mixed with the original movie soundtrack. We describe an automatic DVS segmentation and alignment method for movies, that enables us to scale up the collection of a DVS-derived dataset with minimal human intervention. Using this method, we have collected the largest DVS-derived dataset for video description of which we are aware. Our dataset currently includes over 84.6 hours of paired video/sentences from 92 DVDs and is growing. <s> BIB002
|
The MPII-Movie Description Corpus (MPII-MD) BIB001 contains transcribed audio descriptions extracted from 94 Hollywood movies. These movies are subdivided into 68,337 clips with an average length of 3.9 seconds, paired with 68,375 sentences, amounting to almost one sentence per clip. Every clip is paired with one sentence that is extracted from the movie script and the audio description data. The Audio Descriptions (ADs) were collected by first retrieving the audio streams from the movies using the online services MakeMkV and Subtitle Edit. These audio streams were then transcribed using a crowd-sourced transcription service, and the transcribed texts were aligned with the associated spoken sentences using their time stamps. In order to remove misalignments between the audio content and the visual content itself, each sentence was also manually aligned with the corresponding video clip; during this manual alignment, sentences describing content not present in the video clip were filtered out. The audio description track is an added feature of the dataset that describes the visual content for visually impaired persons. The total time span of the dataset videos is almost 73.6 hours and the vocabulary size is 653,467. Example clips and descriptions are shown in Figure 9 (f).

The Montreal Video Annotation Dataset (M-VAD) BIB002 is based on the Descriptive Video Service (DVS) and contains 48,986 video clips from 92 different movies. Each clip spans 6.2 seconds on average, and the complete dataset totals 84.6 hours. The total number of sentences is 55,904, with a few clips associated with more than one sentence. The vocabulary of the dataset spans about 17,609 words (nouns: 9,512; verbs: 2,571; adjectives: 3,560; adverbs: 857). The dataset split consists of 38,949, 4,888 and 5,149 video clips for training, validation and testing, respectively. See Figure 9 (g) for example clips and descriptions.
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> ActivityNet Entities <s> Most natural videos contain numerous events. For example, in a video of a "man playing a piano", the video might also contain "another man dancing" or "a crowd clapping". We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with it's unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> ActivityNet Entities <s> Video description is one of the most challenging problems in vision and language understanding due to the large variability both on the video and language side. Models, hence, typically shortcut the difficulty in recognition and generate plausible sentences that are based on priors but are not necessarily grounded in the video. In this work, we explicitly link the sentence to the evidence in the video by annotating each noun phrase in a sentence with the corresponding bounding box in one of the frames of a video. Our dataset, ActivityNet-Entities, augments the challenging ActivityNet Captions dataset with 158k bounding box annotations, each grounding a noun phrase. This allows training video description models with this data, and importantly, evaluate how grounded or "true" such model are to the video they describe. To generate grounded captions, we propose a novel video description model which is able to exploit these bounding box annotations. We demonstrate the effectiveness of our model on our dataset, but also show how it can be applied to image description on the Flickr30k Entities dataset. We achieve state-of-the-art performance on video description, video paragraph description, and image description and demonstrate our generated sentences are better grounded in the video. <s> BIB002
|
ActivityNet Entities dataset (or ANet-Entities) BIB002 is the first video dataset with entity grounding annotations. This dataset is built on the training and validation splits of the ActivityNet Captions dataset BIB001, but with different captions. In this dataset, noun phrases (NPs) of video descriptions have been grounded to bounding boxes in the video frames. The dataset comprises 14,281 annotated videos, 52k video segments with at least one noun phrase annotated per segment, and 158k annotated bounding boxes. The dataset employs the same training set (10k videos) as ActivityNet Captions, while the validation set of ActivityNet Captions is randomly and evenly split into ANet-Entities validation (2.5k) and testing (2.5k) sets.
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> ActivityNet Captions <s> This exploratory paper quests for a stochastic and context sensitive grammar of images. The grammar should achieve the following four objectives and thus serves as a unified framework of representation, learning, and recognition for a large number of object categories. (i) The grammar represents both the hierarchical decompositions from scenes, to objects, parts, primitives and pixels by terminal and nonterminal nodes and the contexts for spatial and functional relations by horizontal links between the nodes. It formulates each object category as the set of all possible valid configurations produced by the grammar. (ii) The grammar is embodied in a simple And-Or graph representation where each Or-node points to alternative sub-configurations and an And-node is decomposed into a number of components. This representation supports recursive top-down/bottom-up procedures for image parsing under the Bayesian framework and make it convenient to scale up in complexity. Given an input image, the image parsing task constructs a most probable parse graph on-the-fly as the output interpretation and this parse graph is a subgraph of the And-Or graph after making choice on the Or-nodes. (iii) A probabilistic model is defined on this And-Or graph representation to account for the natural occurrence frequency of objects and parts as well as their relations. This model is learned from a relatively small training set per category and then sampled to synthesize a large number of configurations to cover novel object instances in the test set. This generalization capability is mostly missing in discriminative machine learning methods and can largely improve recognition performance in experiments. (iv) To fill the well-known semantic gap between symbols and raw signals, the grammar includes a series of visual dictionaries and organizes them through graph composition. At the bottom-level the dictionary is a set of image primitives each having a number of anchor points with open bonds to link with other primitives. These primitives can be combined to form larger and larger graph structures for parts and objects. The ambiguities in inferring local primitives shall be resolved through top-down computation using larger structures. Finally these primitives forms a primal sketch representation which will generate the input image with every pixels explained. The proposal grammar integrates three prominent representations in the literature: stochastic grammars for composition, Markov (or graphical) models for contexts, and sparse coding with primitives (wavelets). It also combines the structure-based and appearance based methods in the vision literature. Finally the paper presents three case studies to illustrate the proposed grammar. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> ActivityNet Captions <s> Most natural videos contain numerous events. For example, in a video of a "man playing a piano", the video might also contain "another man dancing" or "a crowd clapping". We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. 
Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with it's unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization. <s> BIB002
|
ActivityNet Captions dataset BIB002 contains 100k dense natural language descriptions of about 20k videos from ActivityNet BIB001, corresponding to approximately 849 hours of video. On average, each description is composed of 13.48 words and covers about 36 seconds of video. There are multiple descriptions for every video and, when combined, these descriptions cover 94.6% of the content present in the entire video. In addition, 10% of the descriptions overlap temporally, which makes the dataset especially interesting and challenging for studying multiple events occurring at the same time. An example from this dataset is given in Figure 9 (h).
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> LSMDC <s> In this work, we introduce a dataset of video annotated with high quality natural language phrases describing the visual content in a given segment of time. Our dataset is based on the Descriptive Video Service (DVS) that is now encoded on many digital media products such as DVDs. DVS is an audio narration describing the visual elements and actions in a movie for the visually impaired. It is temporally aligned with the movie and mixed with the original movie soundtrack. We describe an automatic DVS segmentation and alignment method for movies, that enables us to scale up the collection of a DVS-derived dataset with minimal human intervention. Using this method, we have collected the largest DVS-derived dataset for video description of which we are aware. Our dataset currently includes over 84.6 hours of paired video/sentences from 92 DVDs and is growing. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> LSMDC <s> Audio Description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset which contains transcribed ADs, which are temporally aligned to full length HD movies. In addition we also collected the aligned movie scripts which have been used in prior work and compare the two different sources of descriptions. In total the MPII Movie Description dataset (MPII-MD) contains a parallel corpus of over 68K sentences and video snippets from 94 HD movies. We characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are far more visual and describe precisely what is shown rather than what should happen according to the scripts created prior to movie production. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> LSMDC <s> In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided. <s> BIB003
|
The Large Scale Movie Description Challenge (LSMDC) started in 2015 in conjunction with ICCV 2015, and continued as an ECCV workshop in 2016. The Challenge comprises a test set that is released publicly and a blind test set that is withheld. A server is provided to automatically evaluate results. The challenge consists of three primary tasks, i.e., Movie Description, Annotation/Retrieval and Fill-in-the-Blank. Since 2017, the MovieQA challenge has also been included in LSMDC in addition to the previous three tasks. The dataset for this challenge was first introduced in the ICCV 2015 workshop. The LSMDC dataset combines two benchmark datasets, M-VAD BIB001 and MPII-MD BIB002, which were initially collected independently (see Section 3.2). The two datasets were merged for this Challenge, with overlaps removed to avoid repetition of the same movie in the test and training sets. Further, the manual alignments performed on MPII-MD were also removed from the validation and test sets. The dataset was then augmented with clips only (without aligned annotations) from 20 additional movies to make up the blind test set of the Challenge. These additional clips were added for evaluation only. The final LSMDC dataset has 118,081 video clips extracted from 202 unique movies, with approximately one sentence per clip. Names of characters in the reference captions are replaced with the token word "SOMEONE". The dataset is further split into 91,908 training clips, 6,542 validation clips, 10,053 public test clips and a blind (withheld) test set of 9,578 clips. The average clip length is approximately 4.8 seconds. The training set captions consist of 22,829 unique words. A summary of the LSMDC dataset can be found in Table 2. A survey of benchmark results on video description (Section 6) shows that LSMDC has emerged as the most challenging dataset, as evidenced by the poor performance of several models. As mentioned in the dataset section (Section 3.2), natural language descriptions of movie clips are typically sourced from movie scripts and audio descriptions, so misalignments between captions and videos often occur when the text refers to objects that appeared just before or after the cutting point of a clip. Misalignment is certainly a key contributing factor to the poor performances observed on this dataset. The submission protocol of the challenge is similar to that of the MSCOCO Image Captioning Challenge BIB003 and uses the same protocol for automatic evaluation. Human evaluation is used to select the final winner. The latest results of automatic evaluation on LSMDC are publicly available.
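To make the anonymization step concrete, the sketch below shows one way such a name replacement could be implemented. This is an illustrative assumption, not the actual LSMDC pipeline: the name list, the example caption and the regex-based approach are all hypothetical.

```python
# Hypothetical sketch of the caption anonymization step: character names
# in reference captions are replaced with the token "SOMEONE". The name
# list and caption below are illustrative, not from the actual dataset.
import re

character_names = {"Harry", "Hermione", "Ron"}  # assumed per-movie name list

def anonymize(caption: str) -> str:
    pattern = r"\b(" + "|".join(map(re.escape, character_names)) + r")\b"
    return re.sub(pattern, "SOMEONE", caption)

print(anonymize("Harry hands the map to Ron."))
# -> "SOMEONE hands the map to SOMEONE."
```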
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> TRECVID <s> We describe three semantic text similarity systems developed for the *SEM 2013 STS shared task and the results of the corresponding three runs. All of them shared a word similarity feature that combined LSA word similarity and WordNet knowledge. The first, which achieved the best mean score of the 89 submitted runs, used a simple term alignment algorithm augmented with penalty terms. The other two runs, ranked second and fourth, used support vector regression models to combine larger sets of features. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> TRECVID <s> We present Direct Assessment, a method for manually assessing the quality of automatically-generated captions for video. Evaluating the accuracy of video captions is particularly difficult because for any given video clip there is no definitive ground truth or correct answer against which to measure. Automatic metrics for comparing automatic video captions against a manual caption such as BLEU and METEOR, drawn from techniques used in evaluating machine translation, were used in the TRECVid video captioning task in 2016 but these are shown to have weaknesses. The work presented here brings human assessment into the evaluation by crowdsourcing how well a caption describes a video. We automatically degrade the quality of some sample captions which are assessed manually and from this we are able to rate the quality of the human assessors, a factor we take into account in the evaluation. Using data from the TRECVid video-to-text task in 2016, we show how our direct assessment method is replicable and robust and should scale to where there many caption-generation techniques to be evaluated. <s> BIB002
|
Text Retrieval Conference (TREC) is a series of workshops emphasizing various subareas of Information Retrieval (IR) research. In particular, the TREC Video Retrieval Evaluation (TRECVID) [1] workshops, started in 2001, are dedicated to research efforts on content-based exploitation of digital videos. The primary areas of interest include "semantic indexing, video summarization, video copy detection, multimedia event detection and ad-hoc video search" [1]. Since TRECVID 2016, Video to Text Description (VTT) using natural language has also been included in the challenge tasks. The TRECVID 2017 VTT task used a dataset of over 50K automatically collected Twitter Vine videos, where each clip spans approximately 6 seconds. The task is performed on a manually annotated subset of 1,880 Twitter Vine videos. The dataset is further divided into four groups, G2, G3, G4 and G5, based on the number of descriptions (2 to 5) per video. Furthermore, each video is tagged as easy or hard according to the difficulty level in describing it. Example frames from the VTT dataset are shown in Figure 10. TRECVID uses metrics such as METEOR, BLEU and CIDEr (details in Section 5) for automatic evaluation, in addition to a newly introduced metric referred to as Semantic Text Similarity (STS) BIB001. As the name suggests, STS measures the semantic similarity of the generated and reference descriptions. Human evaluations are also employed to gauge the quality of the automatically generated descriptions, following the Direct Assessment (DA) method. Due to its high reliability, DA is now employed as the official ranking method for machine translation benchmark evaluations [28]. In DA-based video description evaluation, human assessors are shown video-sentence pairs and rate how well the sentence describes the events in the video on a scale of 0 to 100 BIB002.
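In Direct Assessment practice from machine translation, raw 0-100 ratings are typically standardized per assessor before aggregation to remove individual scoring biases. The following is a minimal sketch under that assumption; the ratings data and structure are made up for illustration.

```python
# Minimal sketch of Direct Assessment score aggregation: raw 0-100 ratings
# are standardized per assessor (z-scores) to remove individual scoring
# biases, then averaged per caption. All data below is illustrative.
import numpy as np

# ratings[assessor] = list of (caption_id, raw_score) pairs
ratings = {
    "assessor_1": [("c1", 80), ("c2", 60), ("c3", 90)],
    "assessor_2": [("c1", 55), ("c2", 30), ("c3", 70)],
}

z_by_caption = {}
for assessor, pairs in ratings.items():
    raw = np.array([score for _, score in pairs], dtype=float)
    z = (raw - raw.mean()) / (raw.std() or 1.0)  # per-assessor z-scores
    for (cid, _), z_score in zip(pairs, z):
        z_by_caption.setdefault(cid, []).append(z_score)

# Final DA score per caption: mean of standardized ratings
da_scores = {cid: float(np.mean(zs)) for cid, zs in z_by_caption.items()}
print(da_scores)
```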
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> EVALUATION METRICS <s> Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> EVALUATION METRICS <s> Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> EVALUATION METRICS <s> We present the Word Mover's Distance (WMD), a novel distance function between text documents. Our work is based on recent results in word embeddings that learn semantically meaningful representations for words from local cooccurrences in sentences. The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to "travel" to reach the embedded words of another document. We show that this distance metric can be cast as an instance of the Earth Mover's Distance, a well studied transportation problem for which several highly efficient solvers have been developed. Our metric has no hyperparameters and is straight-forward to implement. Further, we demonstrate on eight real world document classification data sets, in comparison with seven state-of-the-art baselines, that the WMD metric leads to unprecedented low k-nearest neighbor document classification error rates. <s> BIB003 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> EVALUATION METRICS <s> There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. 
Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as which caption-generator best understands colors? and can caption-generators count? <s> BIB004
|
Evaluations performed over machine generated captions/descriptions of videos can be divided into Automatic Evaluations and Human Evaluations. Automatic evaluations are performed using six different metrics which were originally designed for machine translation and image captioning. These metrics are BLEU BIB001, ROUGE-L, METEOR, CIDEr BIB002, WMD BIB003 and SPICE BIB004. Below, we discuss these metrics in detail, as well as their limitations and reliability. Human evaluations are performed because of the unsatisfactory performance of automatic metrics, given that there are numerous different ways to correctly describe the same video.
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Automatic Sentence Generation Evaluation <s> Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Automatic Sentence Generation Evaluation <s> Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Automatic Sentence Generation Evaluation <s> There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as which caption-generator best understands colors? and can caption-generators count? <s> BIB003
|
Evaluation of video descriptions, automatically or manually generated, is challenging because there is no specific ground truth or "right answer" that can be taken as a reference for benchmarking accuracy. A video can be correctly described by a wide variety of sentences that may differ not only syntactically but also in terms of semantic content. Consider, for instance, a sample from the MSVD dataset shown in Figure 11, where several ground truth captions are available for the same video clip. Note that each caption describes the clip in an equally valid but different way, with varied attention and levels of detail, ranging from "jet", "commercial airplane" to "South African jet", from "flying", "soaring" to "banking", and lastly from "air", "blue sky" to "clear sky". For automatic evaluation, when comparing the generated sentences with ground truth descriptions, three evaluation metrics are borrowed from machine translation, namely Bilingual Evaluation Understudy (BLEU) BIB001, Recall Oriented Understudy of Gisting Evaluation (ROUGE) and Metric for Evaluation of Translation with Explicit Ordering (METEOR). Consensus based Image Description Evaluation (CIDEr) BIB002 and Semantic Propositional Image Captioning Evaluation (SPICE) BIB003 are two other recently introduced metrics, specifically designed for image captioning tasks, that are also being used for automatic evaluation of video description. Table 3 gives an overview of the metrics included in this survey. In addition to these automatic evaluation metrics, human evaluations are also employed to determine the performance of automated video description algorithms.
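To illustrate how multiple references widen the space of acceptable answers, the sketch below scores one candidate caption against several references using NLTK's BLEU implementation, one common toolkit among several. The captions are invented for illustration and echo the MSVD example above.

```python
# Minimal sketch: scoring one candidate caption against multiple references
# with NLTK's BLEU implementation. The captions are illustrative only.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "a jet is flying in the air".split(),
    "a commercial airplane is soaring through a clear sky".split(),
    "a south african jet is banking in a blue sky".split(),
]
candidate = "an airplane is flying in a clear sky".split()

# BLEU takes the best n-gram match over all references, so adding more
# valid references can only help a correct candidate.
smooth = SmoothingFunction().method1  # avoids zero scores on short sentences
score = sentence_bleu(references, candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),  # BLEU-4
                      smoothing_function=smooth)
print(f"BLEU-4: {score:.3f}")
```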
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Metric for Evaluation of Translation with Explicit Ordering (METEOR, 2005) <s> Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Metric for Evaluation of Translation with Explicit Ordering (METEOR, 2005) <s> Image description is a new natural language generation task, where the aim is to generate a human-like description of an image. The evaluation of computer-generated text is a notoriously difficult problem, however, the quality of image descriptions has typically been measured using unigram BLEU and human judgements. The focus of this paper is to determine the correlation of automatic measures with human judgements for this task. We estimate the correlation of unigram and Smoothed BLEU, TER, ROUGE-SU4, and Meteor against human judgements on two data sets. The main finding is that unigram BLEU has a weak correlation, and Meteor has the strongest correlation with human judgements. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Metric for Evaluation of Translation with Explicit Ordering (METEOR, 2005) <s> Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking. <s> BIB003 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Metric for Evaluation of Translation with Explicit Ordering (METEOR, 2005) <s> We present the Word Mover's Distance (WMD), a novel distance function between text documents. Our work is based on recent results in word embeddings that learn semantically meaningful representations for words from local cooccurrences in sentences. The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to "travel" to reach the embedded words of another document. 
We show that this distance metric can be cast as an instance of the Earth Mover's Distance, a well studied transportation problem for which several highly efficient solvers have been developed. Our metric has no hyperparameters and is straight-forward to implement. Further, we demonstrate on eight real world document classification data sets, in comparison with seven state-of-the-art baselines, that the WMD metric leads to unprecedented low k-nearest neighbor document classification error rates. <s> BIB004 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Metric for Evaluation of Translation with Explicit Ordering (METEOR, 2005) <s> There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as which caption-generator best understands colors? and can caption-generators count? <s> BIB005
|
METEOR was proposed to address the shortcomings of BLEU BIB001. Instead of the exact lexical match required by BLEU, METEOR introduced semantic matching. It uses WordNet [51], a lexical database of the English language, to account for various match levels, including exact word matches, stemmed word matches, synonymy matching and paraphrase matching. METEOR score computation is based on how well the generated and reference sentences are aligned. Each sentence is taken as a set of unigrams, and alignment is done by mapping unigrams of the candidate and reference sentences. During mapping, a unigram in the candidate sentence (or reference sentence) should either map to a unigram in the reference sentence (or candidate sentence) or to nothing. If multiple alignment configurations are possible between the two sentences, the configuration with the fewest crossings is preferred. After finalizing the alignment, the METEOR score is calculated. First, the unigram-based precision is computed as P = m_cr / m_ct, where m_cr is the number of unigrams co-occurring in both the candidate and reference sentences and m_ct is the total number of unigrams in the candidate sentence. Then, the unigram-based recall is computed as R = m_cr / m_rt, where m_rt is the total number of unigrams in the reference sentence. Precision and recall are combined into a recall-weighted F-score:

F = 10PR / (R + 9P)

The precision, recall and F-score measures account for unigram-based congruity and do not cater for n-grams. The n-gram based similarities are used to calculate a penalty p for the alignment between the candidate and reference sentences, which accounts for non-adjacent mappings between the two. The penalty is calculated by grouping the unigrams into the minimum number of chunks, where a chunk consists of unigrams that are adjacent in the candidate as well as the reference sentence. If a generated sentence is an exact match to the reference sentence, there is only one chunk. The penalty is computed as

p = 0.5 (N_c / N_u)^3

where N_c represents the number of chunks and N_u the number of matched unigrams grouped into chunks. The METEOR score for the sentence is then computed as

METEOR = F (1 - p)

A corpus-level score can be computed with the same equations using aggregated values of the arguments P, R and p. In case of multiple reference sentences, the maximum METEOR score over the references is taken. To date, the correlation of METEOR with human judgments is better than that of BLEU. Moreover, Elliot et al. BIB002 found METEOR to be a better evaluation metric compared to contemporary metrics, a conclusion based on computing Spearman's correlation of automatic evaluation metrics against human judgments.

Table 3: Overview of the automatic evaluation metrics, the tasks they were designed for, and their underlying methodology.

Metric | Designed for | Methodology
BLEU BIB001 | Machine translation | n-gram precision
ROUGE | Document summarization | n-gram recall
METEOR | Machine translation | n-gram with synonym matching
CIDEr BIB003 | Image captioning | tf-idf weighted n-gram similarity
SPICE BIB005 | Image captioning | Scene-graph synonym matching
WMD BIB004 | Document similarity | Earth mover's distance on word2vec
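The simplified sketch below implements the formulas above with exact unigram matching only. It is not the full metric: METEOR additionally matches stems, synonyms and paraphrases via WordNet and computes an optimal alignment, whereas this sketch uses a greedy match and approximates chunk counting when words repeat.

```python
# Simplified METEOR-style score using exact unigram matches only.
def meteor_like(candidate: list, reference: list) -> float:
    ref_set = set(reference)
    matches = [w for w in candidate if w in ref_set]  # m_cr (greedy)
    m_cr = len(matches)
    if m_cr == 0:
        return 0.0
    P = m_cr / len(candidate)        # unigram precision
    R = m_cr / len(reference)        # unigram recall
    F = 10 * P * R / (R + 9 * P)     # recall-weighted harmonic mean

    # Count chunks: maximal runs of matched words whose reference
    # positions are contiguous (one chunk == perfect ordering).
    ref_pos = {w: i for i, w in enumerate(reference)}
    chunks, prev = 0, None
    for w in matches:
        pos = ref_pos[w]
        if prev is None or pos != prev + 1:
            chunks += 1
        prev = pos
    penalty = 0.5 * (chunks / m_cr) ** 3
    return F * (1 - penalty)

print(meteor_like("a jet is flying".split(),
                  "a jet is flying in the sky".split()))  # ~0.59
```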
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Consensus based Image Description Evaluation (CIDEr, 2015) <s> The term‐weighting function known as IDF was proposed in 1972, and has since been extremely widely used, usually as part of a TF*IDF function. It is often described as a heuristic, and many papers have been written (some based on Shannon's Information Theory) seeking to establish some theoretical basis for it. Some of these attempts are reviewed, and it is shown that the Information Theory approaches are problematic, but that there are good theoretical justifications of both IDF and TF*IDF in the traditional probabilistic model of information retrieval. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Consensus based Image Description Evaluation (CIDEr, 2015) <s> Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking. <s> BIB002
|
CIDEr BIB002 is a recently introduced evaluation metric for the image captioning task. It evaluates the consensus between a predicted sentence c_i and the reference sentences S_i = {s_i1, ..., s_im} of the corresponding image. It performs stemming, converting all words in the candidate as well as reference sentences into their root forms, e.g. stems, stemmer, stemming and stemmed to their root word stem. CIDEr treats each sentence as a set of n-grams containing 1 to 4 words. To encode the consensus between the predicted sentence and the reference sentences, it measures the co-existence frequency of n-grams in both. N-grams that are very common among the reference sentences of all the images are given lower weight, as they are likely to be less informative about the image content and more biased towards the lexical structure of the sentences. The weight for each n-gram is computed using Term Frequency Inverse Document Frequency (TF-IDF) BIB001. The term TF puts higher weight on n-grams frequently occurring in the reference sentences of the image, whereas IDF puts lower weight on n-grams commonly appearing across the whole dataset. The CIDEr_n score is then computed as the average cosine similarity between the candidate and the reference sentences:

CIDEr_n(c_i, S_i) = (1/m) Σ_j [g^n(c_i) · g^n(s_ij)] / (||g^n(c_i)|| ||g^n(s_ij)||)

where g^n(c_i) is the TF-IDF weighted vector over all n-grams of length n in the candidate sentence, ||g^n(c_i)|| denotes the magnitude of g^n(c_i), and similarly for g^n(s_ij). Further, CIDEr uses higher-order n-grams (the higher the order, the longer the sequence of words) to capture the grammatical properties and richer semantics of the text, combining the scores of different n-gram lengths as

CIDEr(c_i, S_i) = Σ_{n=1}^{N} w_n CIDEr_n(c_i, S_i)

with uniform weights w_n = 1/N and N = 4. The most popular version of CIDEr in image and video description evaluation is CIDEr-D, which incorporates a few modifications to the originally proposed CIDEr to prevent high scores for captions that badly fail in human judgments. First, stemming is removed to ensure that the correct forms of words are used; otherwise, multiple forms of verbs (singular, plural etc.) are mapped to the same token, producing high scores for incorrect sentences. Second, CIDEr-D ensures that a high score is not produced when high-confidence words are repeated in a sentence, as can happen with the original CIDEr even when the sentence does not make sense. This is achieved by introducing a Gaussian penalty over length differences between the candidate and reference sentences, and by clipping the n-gram counts to the number of occurrences in the reference sentence. The latter ensures that the desired sentence length cannot be achieved by repeating high-confidence words to obtain a high score. These changes make the metric more robust and ensure its high correlation with human judgment BIB002.
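A minimal sketch of the CIDEr_n computation for a single n-gram order is given below. It follows the TF-IDF weighted cosine similarity described above, but omits stemming, the multi-order combination and the CIDEr-D refinements; the function names and data layout are our own.

```python
# Sketch of the CIDEr_n idea for one n-gram order: represent each sentence
# as a TF-IDF weighted n-gram vector, then average the cosine similarity
# between the candidate and each reference.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def cider_n(candidate, references, corpus_refs, n=1):
    # Document frequency of each n-gram over all videos in the corpus;
    # corpus_refs is a list (per video) of lists of tokenized references.
    df = Counter()
    for refs in corpus_refs:
        df.update(set(g for r in refs for g in ngrams(r, n)))
    num_videos = len(corpus_refs)

    def tfidf_vec(tokens):
        tf = Counter(ngrams(tokens, n))
        total = sum(tf.values()) or 1
        return {g: (c / total) * math.log(num_videos / max(df[g], 1))
                for g, c in tf.items()}

    def cosine(u, v):
        dot = sum(u[g] * v.get(g, 0.0) for g in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    c_vec = tfidf_vec(candidate)
    return sum(cosine(c_vec, tfidf_vec(r)) for r in references) / len(references)
```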
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Word Mover's Distance (WMD, 2015) <s> We investigate the properties of a metric between two distributions, the Earth Mover's Distance (EMD), for content-based image retrieval. The EMD is based on the minimal cost that must be paid to transform one distribution into the other, in a precise sense, and was first proposed for certain vision problems by Peleg, Werman, and Rom. For image retrieval, we combine this idea with a representation scheme for distributions that is based on vector quantization. This combination leads to an image comparison framework that often accounts for perceptual similarity better than other previously proposed methods. The EMD is based on a solution to the transportation problem from linear optimization, for which efficient algorithms are available, and also allows naturally for partial matching. It is more robust than histogram matching techniques, in that it can operate on variable-length representations of the distributions that avoid quantization and other binning problems typical of histograms. When used to compare distributions with the same overall mass, the EMD is a true metric. In this paper we focus on applications to color and texture, and we compare the retrieval performance of the EMD with that of other distances. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Word Mover's Distance (WMD, 2015) <s> The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. ::: ::: An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Word Mover's Distance (WMD, 2015) <s> We present the Word Mover's Distance (WMD), a novel distance function between text documents. Our work is based on recent results in word embeddings that learn semantically meaningful representations for words from local cooccurrences in sentences. The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to "travel" to reach the embedded words of another document. We show that this distance metric can be cast as an instance of the Earth Mover's Distance, a well studied transportation problem for which several highly efficient solvers have been developed. Our metric has no hyperparameters and is straight-forward to implement. Further, we demonstrate on eight real world document classification data sets, in comparison with seven state-of-the-art baselines, that the WMD metric leads to unprecedented low k-nearest neighbor document classification error rates. <s> BIB003
|
The WMD BIB003 makes use of word embeddings, which are semantically meaningful vector representations of words learnt from text corpora, to measure the dissimilarity between two text documents. Two captions with different words may still have the same semantic meaning and, conversely, captions sharing many words may differ in meaning. WMD is modelled on the Earth Mover's Distance (EMD) BIB001, typically used in transportation problems to calculate travel cost, applied here to word2vec embeddings BIB002. In this metric, each caption or description is represented by a bag-of-words histogram that includes all but the start and stop words, and the magnitude of each histogram is normalized. To account for semantic similarities that exist between pairs of words, the WMD metric uses the Euclidean distance in the word2vec embedding space. The distance between two documents or captions is then defined as the minimum cumulative cost required to move all words from one caption to the other. Figure 12 illustrates an example WMD calculation. The WMD is cast as a special case of the EMD BIB001 and is solved by linear optimization. Compared to BLEU, ROUGE and CIDEr, WMD is less sensitive to word order or synonym swapping. Further, similar to CIDEr and METEOR, it gives high correlation with human judgments.
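The exact WMD requires solving a transportation linear program. The sketch below instead computes the well-known relaxed WMD lower bound, in which each word's mass simply moves to its single nearest word in the other caption. The `embed` lookup is a hypothetical word2vec-style function supplied by the caller; for the exact distance, gensim's `KeyedVectors.wmdistance` solves the underlying EMD.

```python
# Sketch of the relaxed Word Mover's Distance: each word's mass moves to
# its nearest word in the other caption, a known lower bound on exact WMD.
# `embed` is a hypothetical lookup returning a word2vec-style vector.
import numpy as np

def relaxed_wmd(caption_a, caption_b, embed):
    A = np.stack([embed(w) for w in caption_a])   # |a| x d
    B = np.stack([embed(w) for w in caption_b])   # |b| x d
    # Pairwise Euclidean distances between all word pairs
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Uniform bag-of-words weights (stop words assumed already removed)
    cost_ab = np.mean(D.min(axis=1))  # each word in a -> nearest in b
    cost_ba = np.mean(D.min(axis=0))  # each word in b -> nearest in a
    return max(cost_ab, cost_ba)      # tighter of the two relaxations
```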
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Reliability of Evaluation Metrics <s> Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Reliability of Evaluation Metrics <s> Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be senstive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD). <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Reliability of Evaluation Metrics <s> Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. 
Third, HRNE is able to uncover temporal transitions between frame chunks with different granularities, i.e., it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks. Notably, even using a single network with only RGB stream as input, HRNE beats all the recent systems which combine multiple inputs, such as RGB ConvNet plus 3D ConvNet. <s> BIB003 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Reliability of Evaluation Metrics <s> We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively. <s> BIB004
|
A good method to evaluate video descriptions is to compare the machine generated descriptions with the ground truth descriptions annotated by humans. However, as shown in Figure 11, the reference captions can vary among themselves and represent only a few samples out of all valid descriptions for the same video clip. Having more reference captions creates a better solution space and hence leads to more reliable evaluation. Another aspect of the evaluation problem is syntactic variation among candidate sentences. The same problem exists in the well studied field of machine translation, where a sentence in a source language can be translated into various sentences in a target language, and syntactically different sentences may still have the same semantic content. In a nutshell, evaluation metrics assess the suitability of a caption to the visual input by comparing how well the candidate caption matches the reference caption(s). The agreement of the metric scores with human judgments (i.e. the gold standard) improves with an increased number of reference captions BIB001. Numerous studies BIB003, BIB001, BIB002, BIB004 also found that CIDEr, WMD, SPICE and METEOR have higher correlations with human judgments and are regarded as superior amongst the contemporary metrics. WMD and SPICE are very recent automatic caption evaluation metrics and had not yet been studied extensively in the literature at the time of this survey.
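Such metric-to-human agreement is typically quantified at the system level, e.g. with Spearman's rank correlation between per-system metric scores and mean human ratings. A minimal sketch follows; all numbers are invented for illustration.

```python
# Illustrative computation of a metric's agreement with human judgment:
# Spearman's rank correlation between per-system metric scores and mean
# human ratings. All numbers below are made up for illustration.
from scipy.stats import spearmanr

metric_scores = [0.31, 0.27, 0.42, 0.38, 0.25]   # e.g. CIDEr per system
human_ratings = [3.1, 2.8, 4.2, 3.6, 2.9]        # mean human score per system

rho, p_value = spearmanr(metric_scores, human_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```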
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be senstive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD). <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. 
The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively. <s> BIB003 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal transitions between frame chunks with different granularities, i.e., it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks. Notably, even using a single network with only RGB stream as input, HRNE beats all the recent systems which combine multiple inputs, such as RGB ConvNet plus 3D ConvNet. <s> BIB004 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> The use of Recurrent Neural Networks for video captioning has recently gained a lot of attention, since they can be used both to encode the input video and to generate the corresponding description. In this paper, we present a recurrent video encoding scheme which can discover and leverage the hierarchical structure of the video. Unlike the classical encoder-decoder approach, in which a video is encoded continuously by a recurrent layer, we propose a novel LSTM cell which can identify discontinuity points between frames or segments and modify the temporal connections of the encoding layer accordingly. We evaluate our approach on three large-scale datasets: the Montreal Video Annotation dataset, the MPII Movie Description dataset and the Microsoft Video Description Corpus. Experiments show that our approach can discover appropriate hierarchical representations of input videos and improve the state of the art results on movie description datasets. <s> BIB005 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> This paper investigates how linguistic knowledge mined from large text corpora can aid the generation of natural language descriptions of videos. Specifically, we integrate both a neural language model and distributional semantics trained on large text corpora into a recent LSTM-based architecture for video description. We evaluate our approach on a collection of Youtube videos as well as two large movie description datasets showing significant improvements in grammaticality while modestly improving descriptive quality. 
<s> BIB006 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> Automatically describing video content with natural language is a fundamental challenge of computer vision. Re-current Neural Networks (RNNs), which models sequence dynamics, has attracted increasing attention on visual interpretation. However, most existing approaches generate a word locally with the given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but the semantics (e.g., subjects, verbs or objects) are not true. This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which can simultaneously explore the learning of LSTM and visual-semantic embedding. The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter is to create a visual-semantic embedding space for enforcing the relationship between the semantics of the entire sentence and visual content. The experiments on YouTube2Text dataset show that our proposed LSTM-E achieves to-date the best published performance in generating natural sentences: 45.3% and 31.0% in terms of BLEU@4 and METEOR, respectively. Superior performances are also reported on two movie description datasets (M-VAD and MPII-MD). In addition, we demonstrate that LSTM-E outperforms several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets. <s> BIB007 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and/or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)—a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8% and 74.0% in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods. <s> BIB008 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> This paper focuses on a novel and challenging vision task, dense video captioning, which aims to automatically describe a video clip with multiple informative and diverse caption sentences. The proposed method is trained without explicit annotation of fine-grained sentence to video region-sequence correspondence, but is only based on weak video-level sentence annotations. 
It differs from existing video captioning systems in three technical aspects. First, we propose lexical fully convolutional neural networks (Lexical-FCN) with weakly supervised multi-instance multi-label learning to weakly link video regions with lexical labels. Second, we introduce a novel submodular maximization scheme to generate multiple informative and diverse region-sequences based on the Lexical-FCN outputs. A winner-takes-all scheme is adopted to weakly associate sentences to region-sequences in the training phase. Third, a sequence-to-sequence learning based language model is trained with the weakly supervised information obtained through the association process. We show that the proposed method can not only produce informative and diverse dense captions, but also outperform state-of-the-art single video captioning methods by a large margin. <s> BIB009 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> Most natural videos contain numerous events. For example, in a video of a "man playing a piano", the video might also contain "another man dancing" or "a crowd clapping". We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization. <s> BIB010 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> Video captioning which automatically translates video clips into natural language sentences is a very important task in computer vision. By virtue of recent deep learning technologies, video captioning has made great progress. However, learning an effective mapping from the visual sequence space to the language space is still a challenging problem due to the long-term multimodal dependency modelling and semantic misalignment. Inspired by the facts that memory modelling poses potential advantages to long-term sequential problems [35] and working memory is the key factor of visual attention [33], we propose a Multimodal Memory Model (M3) to describe videos, which builds a visual and textual shared memory to model the long-term visual-textual dependency and further guide visual attention on described visual targets to solve visual-textual alignments. Specifically, similar to [10], the proposed M3 attaches an external memory to store and retrieve both visual and textual contents by interacting with video and sentence with multiple read and write operations. To evaluate the proposed model, we perform experiments on two public datasets: MSVD and MSR-VTT. The experimental results demonstrate that our method outperforms most of the state-of-the-art methods in terms of BLEU and METEOR.
<s> BIB011 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> In this paper, the problem of describing visual contents of a video sequence with natural language is addressed. Unlike previous video captioning work mainly exploiting the cues of video contents to make a language description, we propose a reconstruction network (RecNet) with a novel encoder-decoder-reconstructor architecture, which leverages both the forward (video to sentence) and backward (sentence to video) flows for video captioning. Specifically, the encoder-decoder makes use of the forward flow to produce the sentence description based on the encoded video semantic features. Two types of reconstructors are customized to employ the backward flow and reproduce the video features based on the hidden state sequence generated by the decoder. The generation loss yielded by the encoder-decoder and the reconstruction loss introduced by the reconstructor are jointly drawn into training the proposed RecNet in an end-to-end fashion. Experimental results on benchmark datasets demonstrate that the proposed reconstructor can boost the encoder-decoder models and leads to significant gains in video caption accuracy. <s> BIB012 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> Automatically describing a video with natural language is regarded as a fundamental challenge in computer vision. The problem nevertheless is not trivial especially when a video contains multiple events to be worthy of mention, which often happens in real videos. A valid question is how to temporally localize and then describe events, which is known as "dense video captioning." In this paper, we present a novel framework for dense video captioning that unifies the localization of temporal event proposals and sentence generation of each proposal, by jointly training them in an end-to-end manner. To combine these two worlds, we integrate a new design, namely descriptiveness regression, into a single shot detection structure to infer the descriptive complexity of each detected proposal via sentence generation. This in turn adjusts the temporal locations of each event proposal. Our model differs from existing dense video captioning methods since we propose a joint and global optimization of detection and captioning, and the framework uniquely capitalizes on an attribute-augmented video captioning architecture. Extensive experiments are conducted on ActivityNet Captions dataset and our framework shows clear improvements when compared to the state-of-the-art techniques. More remarkably, we obtain a new record: METEOR of 12.96% on ActivityNet Captions official test set. <s> BIB013 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> Automatic generation of video captions is a fundamental challenge in computer vision. Recent techniques typically employ a combination of Convolutional Neural Networks (CNNs) and Recursive Neural Networks (RNNs) for video captioning. These methods mainly focus on tailoring sequence learning through RNNs for better caption generation, whereas off-the-shelf visual features are borrowed from CNNs. We argue that careful designing of visual features for this task is equally important, and present a visual feature encoding technique to generate semantically rich captions using Gated Recurrent Units (GRUs).
Our method embeds rich temporal dynamics in visual features by hierarchically applying Short Fourier Transform to CNN features of the whole video. It additionally derives high level semantics from an object detector to enrich the representation with spatial dynamics of the detected objects. The final representation is projected to a compact space and fed to a language model. By learning a relatively simple language model comprising two GRU layers, we establish new state-of-the-art on MSVD and MSR-VTT datasets for METEOR and ROUGE_L metrics. <s> BIB014 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> BENCHMARK RESULTS <s> Dense video captioning is a fine-grained video understanding task that involves two sub-problems: localizing distinct events in a long video stream, and generating captions for the localized events. We propose the Joint Event Detection and Description Network (JEDDi-Net), which solves the dense video captioning task in an end-to-end fashion. Our model continuously encodes the input video stream with three-dimensional convolutional layers, proposes variable-length temporal events based on pooled features, and generates their captions. Proposal features are extracted within each proposal segment through 3D Segment-of-Interest pooling from shared video feature encoding. In order to explicitly model temporal relationships between visual events and their captions in a single video, we also propose a two-level hierarchical captioning module that keeps track of context. On the large-scale ActivityNet Captions dataset, JEDDi-Net demonstrates improved results as measured by standard metrics. We also present the first dense captioning results on the TACoS-MultiLevel dataset. <s> BIB015
|
We summarize the benchmark results of various techniques on each video description dataset. We group the methods by the dataset they reported results on and then order them chronologically. Moreover, for multiple variants of the same model, only the best reported results are included here. For a detailed analysis of each method and its variants, the original paper should be consulted. In addition, where multiple n-gram scores are reported for the BLEU metric, we have chosen only the BLEU@4 results, as this is the closest to human evaluations. Tables 5 to 11 present these results. Another key observation is that earlier works have mainly reported results in terms of subject, verb, object (SVO) and, in some cases, place (scene) detection accuracies in the video, whereas more recent works have started to report sentence-level matches using the automatic evaluation metrics. Considering the diverse nature of the datasets and the limitations of automatic evaluation metrics, we analyze the results of different methods using four popular metrics, namely BLEU, METEOR, CIDEr and ROUGE. Table 5 summarizes results for the MSVD dataset. GRU-EVE BIB014 achieves the best performance on the METEOR and ROUGE_L metrics and the second best on the CIDEr metric, whereas LSTM-TSA BIB008 and M3-IC BIB011 report the best BLEU scores. RecNet_local BIB012 has the best CIDEr score and the second best BLEU score. As shown in Table 6, on the TACoS-MultiLevel dataset, h-RNN BIB003 has the best results on all reported metrics, i.e., BLEU, METEOR and CIDEr; this method does not report a ROUGE score. On the more challenging M-VAD dataset, the reported results (Table 7) are overall very poor; within the presented results, so far only Temporal-Attention BIB001 and HRNE BIB004 have reported results on the BLEU metric, with a BLEU score of 0.7 each. All the papers using this dataset report METEOR results, and so far BAE BIB005 has produced the best METEOR score, followed by LSTM-TSA BIB008 . HRNE BIB004 and Glove+Deep Fusion Ensemble BIB006 share the third place for the METEOR score. MPII-MD is another very challenging dataset and, similar to M-VAD, still has very low benchmark results, as shown in Table 8. Only BAE BIB005 has reported a BLEU score for this dataset. LSTM-TSA BIB008 has achieved the best METEOR score, followed by LSTM-E BIB007 and S2VT BIB002 in second and third place, respectively. No paper using this dataset other than BAE BIB005 has reported CIDEr or ROUGE scores. Results on another popular dataset, MSR-VTT, are overall better than those on the M-VAD and MPII-MD datasets. As shown in Table 9, CST-GT-None has reported the highest score on all four metrics, i.e., BLEU, METEOR, CIDEr and ROUGE. DenseVidCap BIB009 and HRL [171] report the second and third best BLEU scores, respectively. GRU-EVE BIB014 reports the third best score on the METEOR and CIDEr metrics. Results on the recent and popular ActivityNet Captions dataset are presented in Table 10. This dataset was primarily introduced for dense video captioning and is gaining popularity very quickly. On this dataset, the DenseCap Model BIB010 stands at the top in terms of BLEU score. The best METEOR score is reported by LSTM-A+PG+R . The highest scores on the CIDEr and ROUGE metrics are achieved by DVC BIB013 and JEDDi-Net BIB015 , respectively. Finally, in Table 11, we report two results each for the LSMDC and Charades datasets and only one result for YouCook-II. YouCook-II is also a recent dataset and has not yet been widely reported on in the literature.
We summarize the best-performing methods for each dataset along with their published scores. The tables group methods by the dataset(s) they used. Hence, one can infer the difficulty level of the datasets by comparing the intra-dataset scores of the same methods, and the popularity of a particular dataset from the number of methods that have reported results on it.
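Since most entries in Tables 5 to 11 are BLEU@4, METEOR, CIDEr or ROUGE scores, a brief illustration of what a BLEU@4 number measures may help in reading them. The following is a minimal sketch using NLTK's sentence-level BLEU implementation; the captions are invented placeholders, and published benchmark figures are typically corpus-level rather than sentence-level scores.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Invented reference captions for one hypothetical video clip (tokenised).
references = [
    "a man is playing a piano".split(),
    "a person plays the piano".split(),
]
candidate = "a man plays a piano".split()

# BLEU@4 is the geometric mean of 1- to 4-gram precisions (equal weights),
# multiplied by a brevity penalty; smoothing avoids zero scores when a
# higher-order n-gram has no match in a short sentence.
bleu4 = sentence_bleu(
    references, candidate,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU@4 = {bleu4:.3f}")
```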
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> FUTURE AND EMERGING DIRECTIONS <s> When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover shortcomings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> FUTURE AND EMERGING DIRECTIONS <s> Deep neural perception and control networks have become key components of self-driving vehicles. User acceptance is likely to benefit from easy-to-interpret textual explanations which allow end-users to understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. We propose a new approach to introspective explanations which consists of two parts. First, we use a visual (spatial) attention model to train a convolutional network end-to-end from images to the vehicle control commands, i.e., acceleration and change of course. The controller's attention identifies image regions that potentially influence the network's output. Second, we use an attention-based video-to-text model to produce textual explanations of model actions. The attention maps of controller and explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong- and weak-alignment. Finally, we explore a version of our model that generates rationalizations, and compare with introspective explanations on the same video segments. We evaluate these models on a novel driving dataset with ground-truth human explanations, the Berkeley DeepDrive eXplanation (BDD-X) dataset. Code is available at https://github.com/JinkyuKimUCB/explainable-deep-driving. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> FUTURE AND EMERGING DIRECTIONS <s> We present the MAC network, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. MAC moves away from monolithic black-box neural architectures towards a design that encourages both transparency and versatility. The model approaches problems by decomposing them into a series of attention-based reasoning steps, each performed by a novel recurrent Memory, Attention, and Composition (MAC) cell that maintains a separation between control and memory. By stringing the cells together and imposing structural constraints that regulate their interaction, MAC effectively learns to perform iterative reasoning processes that are directly inferred from the data in an end-to-end approach.
We demonstrate the model's strength, robustness and interpretability on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy, halving the error rate of the previous best model. More importantly, we show that the model is computationally-efficient and data-efficient, in particular requiring 5x less data than existing models to achieve strong results. <s> BIB003 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> FUTURE AND EMERGING DIRECTIONS <s> We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being sufficiently grounded in vision to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person real-time chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and consists of dialog question-answer pairs from 10-round, human-human dialogs grounded in images from the COCO dataset. <s> BIB004
|
Automatic video description has come very far since the pioneering methods, especially after the adoption of deep learning. Although the performance of existing methods is still far below that of humans, the gap is diminishing at a steady rate and there is still ample room for algorithmic improvements. Here, we list several possible future and emerging directions that have the potential to advance this research area. Visual Reasoning: Although video VQA is still in its nascent stage, beyond VQA lies the visual reasoning problem, a very promising field to explore further. Here, the model is required not just to answer a particular question but to reason about why it chose that particular answer. For example, in a video showing a roadside with parking marks, the question is "Can a vehicle be parked here?", and the model answers correctly, "Yes". The next question is "Why?", to which the model reasons that there is a parking sign on the road, which means it is legal to park here. Another example is the explanations generated by self-driving cars BIB002 , where the system maintains the passengers' confidence by generating natural language descriptions of the reasons behind its decisions, e.g., slowing down or taking a turn. An example of a visual reasoning model is the MAC Network BIB003 , which is able to perform explicit multi-step reasoning and gives promising results on CLEVR BIB001 , a visual reasoning dataset. Visual Dialogue: Similar to audio dialogue (e.g., Siri, Google Assistant, Alexa and Echo), visual dialogue BIB004 is another promising and flourishing field, especially in an era where we look forward to interacting with robots. In visual dialogue, given a video, a model is asked a series of questions sequentially in a dialogue/conversation manner, and the model attempts to answer these questions, whether correctly or not; a minimal sketch of this interaction loop is given below. This differs from visual reasoning, where the model justifies the reasoning that led it to choose a particular answer.
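To make the visual dialogue setting concrete, the following minimal sketch shows only the task interface: at each round the agent receives the video (or its features), the accumulated question-answer history and a new question, and returns an answer. The `ToyDialogueAgent` and its canned answers are hypothetical placeholders, not an implementation from any cited work.

```python
from typing import List, Tuple

class ToyDialogueAgent:
    """Hypothetical stand-in for a trained visual dialogue model."""
    def answer(self, video_features, history: List[Tuple[str, str]],
               question: str) -> str:
        # A real agent would ground the question in the video features and
        # resolve pronouns ("it", "he") against the dialogue history.
        return f"(placeholder answer to round {len(history) + 1}: {question})"

def run_dialogue(agent, video_features, questions: List[str]):
    """Ask questions sequentially; each answer is conditioned on the history."""
    history: List[Tuple[str, str]] = []
    for question in questions:
        answer = agent.answer(video_features, history, question)
        history.append((question, answer))
    return history

for q, a in run_dialogue(ToyDialogueAgent(), video_features=None,
                         questions=["Can a vehicle be parked here?", "Why?"]):
    print(q, "->", a)
```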
|
Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Audio and Video: <s> We propose a method for visual question answering which combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. This allows more complex questions to be answered using the predominant neural network-based approach than has previously been possible. It particularly allows questions to be asked about the contents of an image, even when the image itself does not contain the whole answer. The method constructs a textual representation of the semantic content of an image, and merges it with textual information sourced from a knowledge base, to develop a deeper understanding of the scene viewed. Priming a recurrent neural network with this combined information, and the submitted question, leads to a very flexible visual question answering approach. We are specifically able to answer questions posed in natural language, that refer to information not contained in the image. We demonstrate the effectiveness of our model on two publicly available datasets, Toronto COCO-QA and MS COCO-VQA and show that it produces the best reported results in both cases. <s> BIB001 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Audio and Video: <s> The thud of a bouncing ball, the onset of speech as lips open—when visual and audio events occur together, it suggests that there might be a common, underlying event that produced both signals. In this paper, we argue that the visual and audio components of a video signal should be modeled jointly using a fused multisensory representation. We propose to learn such a representation in a self-supervised way, by training a neural network to predict whether video frames and audio are temporally aligned. We use this learned representation for three applications: (a) sound source localization, i.e. visualizing the source of sound in a video; (b) audio-visual action recognition; and (c) on/off-screen audio source separation, e.g. removing the off-screen translator’s voice from a foreign official’s speech. Code, models, and video results are available on our webpage: http://andrewowens.com/multisensory. <s> BIB002 </s> Video Description: A Survey of Methods, Datasets and Evaluation Metrics <s> Audio and Video: <s> In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. We perform analysis using the Places 205 and ADE20k datasets demonstrating that our models implicitly learn semantically coupled object and word detectors. <s> BIB003
|
While the majority of computer vision research has focused on describing videos without the help of audio, audio is naturally present in most videos. Audio can help in video description by providing background information, for instance, the sound of a train, the ocean or traffic when there is no visual cue of their presence. Audio can additionally provide semantic information, for example, who a person is or what they are saying on the other side of the phone. It can also provide clues about the story and context, and sometimes explicitly mentions the object or action, complementing the video information. Therefore, using audio in video description models is very likely to improve performance BIB003 , BIB002 ; one possible fusion scheme is sketched at the end of this subsection. External Knowledge: In video description, most of the time we are comparing performance with humans, who have extensive out-of-domain or prior knowledge. When humans watch a clip and describe it, most of the time they do not rely solely on the visual (or even the audio) content; instead, they additionally employ their background knowledge. Similarly, it would be an interesting and promising approach to augment video description techniques with prior external knowledge BIB001 . This approach has shown significantly better performance in visual question answering methods and is likely to improve video description accuracy. Addressing the Finite Model Capacity: Existing methods try to perform end-to-end training while using as much data as possible for better learning. However, this approach is inherently limited because, no matter how big the training dataset becomes, it will never cover the combinatorial complexity of real-world events. Therefore, learning to use data, rather than learning the data itself, is more important and may help improve the performance of upcoming systems.
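As one hedged illustration of how audio could complement visual features in a captioning model, the sketch below late-fuses clip-level audio and visual vectors to initialise a GRU caption decoder. It is a minimal PyTorch sketch under assumed feature dimensions, not the architecture of any method cited above.

```python
import torch
import torch.nn as nn

class AVFusionCaptioner(nn.Module):
    """Late fusion of clip-level visual and audio features for captioning."""
    # Assumed dimensions: 2048-d visual (e.g. pooled CNN), 128-d audio.
    def __init__(self, vis_dim=2048, aud_dim=128, hid=512, vocab=10000):
        super().__init__()
        self.fuse = nn.Linear(vis_dim + aud_dim, hid)  # fusion layer
        self.embed = nn.Embedding(vocab, hid)
        self.decoder = nn.GRU(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, vis_feat, aud_feat, tokens):
        # The fused clip representation initialises the decoder state.
        h0 = torch.tanh(self.fuse(torch.cat([vis_feat, aud_feat], dim=-1)))
        dec, _ = self.decoder(self.embed(tokens), h0.unsqueeze(0))
        return self.out(dec)  # per-step word logits

# Toy forward pass with random features and a 7-token caption prefix.
model = AVFusionCaptioner()
logits = model(torch.randn(2, 2048), torch.randn(2, 128),
               torch.randint(0, 10000, (2, 7)))
print(logits.shape)  # torch.Size([2, 7, 10000])
```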
|
Survey on Context-Aware Pervasive Learning Environments <s> I. INTRODUCTION <s> In the mobile learning context, it is helpful to consider context awareness and adaptivity as two sides of the same coin. The purpose of the adaptivity and context awareness is to better support a variety of learners, given that they may have very different skills and motivations to learn in varying contexts. The recent research on adaptivity and context awareness has turned towards supporting pervasive environments and this is coupled with the increasing trend in seeing learning environments from an informal learning perspective. In this paper are presented experiences of developing an adaptive and context aware mobile learning system, with examples of other systems underlining the development towards supporting pervasive learning environments. We then consider approaches for the future development of systems supporting pervasive learning environments. <s> BIB001 </s> Survey on Context-Aware Pervasive Learning Environments <s> I. INTRODUCTION <s> This paper introduces a new design and approach to enhance existing learning systems. Our model is unique compared to existing approaches as we view a pervasive educational (e.g., university campus) environment as comprising myriad mobile services, which are "hanging" over users (e.g., staffs, students, etc.), but different services are appropriate in different contexts. These services can be downloaded and cached on the user's device (e.g., a laptop or pocket pc device) for future re-use. Our design is also different in that it is not only able to deliver mobile learning services at right contexts, but it also provides control over the services running in particular contexts. <s> BIB002 </s> Survey on Context-Aware Pervasive Learning Environments <s> I. INTRODUCTION <s> This paper proposes a computer supported ubiquitous learning environment for language learning. Especially, this paper describes context-aware language-learning support system for learning Japanese mimicry and onomatopoeia (MIO) words, which is called JAMIOLAS (Japanese Mimicry and Onomatopoeia Learning Assisting System). This system provides learner the appropriate MIO expressions deriving the learner's situation and sensor data. Phidgets is used as sensors in order to get real-world data from the physical world, e.g., temperature, humidity, etc. This paper describes the implementation, the interface and usage scenario of JAMIOLAS. <s> BIB003
|
Mobile learning, or m-learning, has become popular and is currently being intensively researched. In this paper we consider m-learning to refer specifically to learning facilitated by mobile devices such as PDAs and mobile phones. The primary aim of m-learning is to provide the users with a learning environment which is not restricted to a specific location or time. Compared to a traditional classroom setting, m-learning increases the mobility of a learner, allowing him/her to learn while sitting in a bus, for example. Furthermore, networked mobile devices allow learners to perform co-operative learning tasks in a group. Pervasive learning is the latest trend in harnessing technology to support learning. In this form of learning, the mediator is a pervasive computing environment which consists of interconnected, embedded computing devices such as portable computers, wireless sensors, auxiliary input/output devices and servers. One could therefore consider pervasive learning as an extension to m-learning in which the roles of the intelligent environment and of the context are emphasised. The physical environment is central as it provides salient resources for learning. According to BIB001 , a pervasive learning environment is a setting in which students can become totally immersed in the learning process. They further note that pervasive computing is an immersive experience which mediates between the learner's mental (e.g. needs, preferences, prior knowledge), physical (e.g. objects, other learners) and virtual (e.g. content accessible with mobile devices, artefacts) contexts. The intersection of these contexts is referred to as a pervasive learning environment ( BIB001 ). Reference BIB002 regards a pervasive learning environment as a collection of mobile users, mobile services, mobile devices, contexts and policies, while the authors of BIB003 state that in pervasive learning, computers can obtain information about the context of learning from the learning environment in which small embedded devices, such as sensors, pads and badges, communicate together. Common to these definitions is the interplay of intelligent technology and the context in which the learner is situated (i.e. context-awareness). Other terms used to describe pervasive computing include situated computing, ubiquitous computing, embedded computing, ambient intelligence, and everyware. In this paper, pervasive learning environments are based on environments with embedded intelligence in the form of sensors, tags and interaction devices. Research has been conducted on building and evaluating pervasive learning environments; however, no survey has yet evaluated these environments. Such information is necessary not only for avoiding reinventing the wheel, but also for understanding the current state-of-the-art in this area. By recognising the commonly used technologies, methods and models, we can design and build pervasive learning systems more effectively. Our intention is to provide an overview of what kinds of pervasive learning environments have been developed, how they were built, what sensor technologies are used in these systems to make them context-aware, what learning models have been suggested for these environments, and what the roles of mobile devices are. By reviewing existing work, we seek to build a solid ground for further research on how different learning models can be efficiently utilised in pervasive learning environments and on the critical features of such an environment.
The role of mobile devices is an important factor from the perspective of our wider work, which aims to design and implement a flexible pervasive mobile learning system. This work also includes establishing and recognising the best learning models for such a system. The paper is organised as follows. We first define the methodology used in the survey and continue by describing the observations resulting from the analysis of the literature. Finally, we discuss the implications of the results and conclude with the findings.
|
Survey on Context-Aware Pervasive Learning Environments <s> C. Data Evaluation <s> This paper describes a computer supported collaborative learning (CSCL) in a ubiquitous computing environment. In the environment called CLUE, the learners provide and share individual experience and interaction corpus and discuss about them. This paper focuses on the design, implementation, and evaluation of knowledge awareness map. The map visualizes the relationship between the shared knowledge and the current and past interactions of learners. The map plays a very important role for finding peer helpers, and inducing collaboration. <s> BIB001 </s> Survey on Context-Aware Pervasive Learning Environments <s> C. Data Evaluation <s> The idea of pervasive learning is to create a network of devices, people and situation that allow learning experiences to play out. This idea is attractive, but is not easily implemented. This paper presents a pervasive learning architecture called GlobalEdu which is composed by educational services (ES) and pervasive personal pedagogical agent (P3A) implemented above the ISAM platform. It is a software architecture that manages a large scale pervasive environment. ES expands the ISAM services to educational domain to provide knowledge management, context management, control and communication. P3A is shall always be with the learner, assisting the process of learning, independent of user's device at the moment. <s> BIB002 </s> Survey on Context-Aware Pervasive Learning Environments <s> C. Data Evaluation <s> This paper proposes a distributed platform designed to support pervasive learning and interactivity on a university campus and to ease tasks related to learning and teaching. The platform exploits wireless technologies in order to provide service access anywhere and anytime. The platform architecture is described, and the functionalities of the modules composing it are discussed, together with the relationships among modules. <s> BIB003 </s> Survey on Context-Aware Pervasive Learning Environments <s> C. Data Evaluation <s> This paper looks at the card metaphor used successfully in the Equator Ambient Wood and Chawton House projects to structure and author content as part of innovative school field trips using wireless and ubiquitous technologies. The framework provided by the metaphor is laid out and observations made as to how it has been used by domain experts in creating educational experiences. The trade-off between formalisation and restricting pedagogy is examined and key benefits that the metaphor provides are given <s> BIB004
|
After the main body of the papers was collected, we proceeded to read through the remaining papers in order to perform a deeper analysis of the data and extract relevant information. For this purpose, we established a set of questions to be answered with that information. The questions are based on the research questions and are presented in Table II. In these questions, A, B and C refer to the research questions 1, 2 and 3, respectively. The question Q-B0 has two sequential questions, namely Q-B0a and Q-B0b, which are only answered if Q-B0 has a positive answer. We could not extract answers to all these questions from every paper, but the majority of the papers had sufficient information available. During the evaluation process we excluded 4 papers, as deeper analysis showed that they did not meet the inclusion rules, reducing the number of included papers to 14. However, as one of the papers presents 2 different systems, the total number of relevant works was 15. The observations based on the information extracted from these papers are presented in the following section. Papers that were part of the survey but are not explicitly referred to elsewhere in this article are: BIB002 , BIB003 , BIB001 and BIB004 .
|
Survey on Context-Aware Pervasive Learning Environments <s> III. OBSERVATIONS <s> Ubiquitous computing is a new information and communication technology that utilize a large number of cooperative small nodes with computing and/or communication capabilities such as handheld terminals, smart mobile phones, sensor network nodes, contactless smart cards, RFIDs (radio frequency identification), and so on. This paper proposes the concept of ubiquitous learning that enables anyone to learn at anytime and anywhere by fully utilizing ubiquitous computing technologies. As the pilot system of the ubiquitous learning, this paper briefly introduces digital ubiquitous museum that makes our daily living environments learning materials. <s> BIB001 </s> Survey on Context-Aware Pervasive Learning Environments <s> III. OBSERVATIONS <s> This paper proposes a computer supported ubiquitous learning environment for language learning. Especially, this paper describes context-aware language-learning support system for learning Japanese mimicry and onomatopoeia (MIO) words, which is called JAMIOLAS (Japanese Mimicry and Onomatopoeia Learning Assisting System). This system provides learner the appropriate MIO expressions deriving the learner's situation and sensor data. Phidgets is used as sensors in order to get real-world data from the physical world, e.g., temperature, humidity, etc. This paper describes the implementation, the interface and usage scenario of JAMIOLAS. <s> BIB002 </s> Survey on Context-Aware Pervasive Learning Environments <s> III. OBSERVATIONS <s> We explore the use of ubiquitous sensing in the home for context-sensitive microlearning. To assess how users would respond to frequent and brief learning interactions tied to context, a sensor-triggered mobile phone application was developed, with foreign language vocabulary as the learning domain. A married couple used the system in a home environment, during the course of everyday activities, for a four-week study period. Built-in and stick-on multi-modal sensors detected the participants' interactions with hundreds of objects, furniture, and appliances. Sensor activations triggered the audio presentation of English and Spanish phrases associated with object use. Phrases were presented on average 57 times an hour; this intense interaction was found to be acceptable even after extended use. Based on interview feedback, we consider design attributes that may have reduced the interruption burden and helped sustain user interest, and which may be applicable to other context-sensitive, always-on systems. <s> BIB003
|
After the data evaluation, we performed a deeper analysis of the extracted information. As a result, a set of observations was established; these are presented in the following subsections. A. Observation 1: From the reviewed works, the most commonly used sensor technology was RFID (Radio Frequency IDentification), as 9 out of 15 works mentioned it explicitly. The second most popular sensor technology was GPS, scoring 4 hits in total. Other explicitly mentioned sensors were light sensors, moisture sensors, wired trigger sensors, water flow sensors, piezoelectric "object usage" sensors, force sensors, temperature sensors, humidity sensors, infra-red distance sensors, motion sensors, touch sensors, cameras, 3D accelerometers and microphones. Two works did not explain what kind of sensors were used, as they merely presented the possibility of using sensor technologies in the respective systems. Of the 13 works that mentioned some sensors being used, 7 utilised more than one sensor type. RFID has been successfully used for sensing nearby persons, physical resources ( BIB003 ), locations of the user or objects ( BIB002 , BIB003 ), and users' actions ( BIB003 ); a minimal sketch of this tag-to-context pattern is given below. In addition to presenting a pervasive learning environment, BIB001 mentioned two ambitious ongoing projects in Japan, namely food traceability and location-aware computing. The goal of the former project is to attach RFID tags to all food products, thus increasing the visibility of the food production chains. The latter project aims to tag all places in Japan's national infrastructure, thus supporting efficient transportation, sightseeing and also pervasive learning. Most of the pervasive learning applications that utilised RFID technology used an RFID reader embedded in, or attached to (via Bluetooth or extension slots), a mobile device to read the tag information. This might be an indication that RFID will become the next big thing in mobile wireless near-field communication, just as Bluetooth did a few years ago.
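The tag-to-context pattern shared by most of these RFID-based systems can be sketched in a few lines: a tag read from the mobile device's reader is resolved against a registry of tagged entities, and the resulting context drives content selection. The tag identifiers and the registry below are invented placeholders, not data from any reviewed system.

```python
# Hypothetical registry of RFID tags placed in a learning environment.
TAG_REGISTRY = {
    "04A1B2": {"kind": "object",   "name": "microscope"},
    "04C3D4": {"kind": "location", "name": "chemistry lab"},
    "04E5F6": {"kind": "person",   "name": "peer learner"},
}

def on_tag_read(tag_id: str) -> str:
    """Resolve a tag read (e.g. from a Bluetooth-attached reader) to a context event."""
    entry = TAG_REGISTRY.get(tag_id)
    if entry is None:
        return "unknown tag: ignored"
    # A real system would now request content for this context from a server.
    return f"context event: near {entry['kind']} '{entry['name']}'"

print(on_tag_read("04A1B2"))  # context event: near object 'microscope'
```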
|
Survey on Context-Aware Pervasive Learning Environments <s> B. Observation 2 <s> In this paper, we describe an interactive guide system for kids in museums. The system uses a sensing board which can rapidly recognize types and locations of multiple objects, and creates an immersive environment by giving users visual and auditory feedback to their manipulations on the board. The purpose of the system is to attract users' interests in a real exhibition by allowing them to interact with the corresponding virtual exhibition on the board, and providing them with its information. We have evaluated the system in a museum. From the interviews and questionnaires, we have confirmed that it is easy for kids to use the system and it can raise their motivation for visiting real exhibitions. <s> BIB001 </s> Survey on Context-Aware Pervasive Learning Environments <s> B. Observation 2 <s> This paper describes a computer supported ubiquitous learning environment for language learning. This paper proposes two systems. The first is context-aware language-learning support system for Japanese polite expressions learning, which is called JAPELAS (Japanese polite expressions learning assisting system). This system provides learner the appropriate polite expressions deriving the learner's situation and personal information. The second system is called TANGO (Tag Added learNinG Objects) system, which detects the objects around learner using RFID tags, and provides the learner the educational information. This paper describes the preliminary evaluation of those two systems. <s> BIB002
|
Out of the 15 works, only 7 discussed learning models, and most of them did not explicitly suggest their suitability. However, we were able to extract the learning model types supported in each system by carefully analysing the descriptions of the system functionalities. As a result, we devised a list of learning models that could be used in pervasive learning environments. Many systems supported more than one of these models simultaneously, e.g. a system could be both group-based and problem-based. Table IV presents the extracted learning models and examples of how they were used. Reference BIB002 suggests that the most suitable learning models for pervasive learning are on-demand learning, hands-on or minds-on learning, and authentic learning. They further divide authentic learning into action, situated, incidental and experimental learning. The authors particularly emphasise the effectiveness of authentic, contextual learning for learning a foreign language. It is clear, however, that authentic learning is suitable for any kind of learning need where environment and context are major factors. BIB001 proposes a system which utilises an RFID-enhanced interactive sensor board for museums. The idea is that when an object is placed on the sensor board, a projected image on the board shows more information about that object. The board is able to recognise multiple objects simultaneously, so a group of learners can communicate and learn at the same time.
|
Survey on Context-Aware Pervasive Learning Environments <s> Authentic learning <s> This paper describes a computer supported ubiquitous learning environment for language learning. This paper proposes two systems. The first is context-aware language-learning support system for Japanese polite expressions learning, which is called JAPELAS (Japanese polite expressions learning assisting system). This system provides learner the appropriate polite expressions deriving the learner's situation and personal information. The second system is called TANGO (Tag Added learNinG Objects) system, which detects the objects around learner using RFID tags, and provides the learner the educational information. This paper describes the preliminary evaluation of those two systems. <s> BIB001 </s> Survey on Context-Aware Pervasive Learning Environments <s> Authentic learning <s> This paper proposes a computer supported ubiquitous learning environment for language learning. Especially, this paper describes context-aware language-learning support system for learning Japanese mimicry and onomatopoeia (MIO) words, which is called JAMIOLAS (Japanese Mimicry and Onomatopoeia Learning Assisting System). This system provides learner the appropriate MIO expressions deriving the learner's situation and sensor data. Phidgets is used as sensors in order to get real-world data from the physical world, e.g., temperature, humidity, etc. This paper describes the implementation, the interface and usage scenario of JAMIOLAS. <s> BIB002
|
Reference BIB001 proposes two different systems: JAPELAS for learning polite Japanese expressions through situations, and TANGO for learning vocabulary about the surrounding objects. According to the authors, both of these systems are particularly well suited for authentic learning, as language skills are best acquired in a real-world environment. The same authors have created the JAMIOLAS pervasive learning environment ( BIB002 ), which allows users to learn Japanese mimicry and onomatopoeic expressions through authentic situations. For example, when a user goes out and it rains, the system tells the user the onomatopoeia for rain. The authors explicitly refer to this learning activity as authentic learning; a small rule-based sketch of this behaviour is given below.
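A JAMIOLAS-style behaviour, turning raw sensor readings into a situation and then into an expression to teach, can be sketched as a small rule table. The thresholds, sensor names and onomatopoeia glosses below are illustrative assumptions rather than values from the cited system.

```python
# Illustrative rules: (predicate over sensor readings, expression to teach).
RULES = [
    (lambda s: s.get("rain", 0.0) > 0.5,        "zaa-zaa (heavy rain)"),
    (lambda s: s.get("humidity", 0.0) > 80,     "mushi-mushi (hot and humid)"),
    (lambda s: s.get("temperature", 99.0) < 5,  "buru-buru (shivering with cold)"),
]

def suggest_expression(sensors: dict) -> str:
    """Return the first expression whose situation matches the sensor data."""
    for predicate, expression in RULES:
        if predicate(sensors):
            return expression
    return "no matching situation"

print(suggest_expression({"rain": 0.8, "humidity": 60}))  # zaa-zaa (heavy rain)
```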
|
Survey on Context-Aware Pervasive Learning Environments <s> Problem-based learning <s> This paper describes a prototype application based on smartphone devices which supports independent learners within a mobile environment. The techniques employed allow children to rapidly gain access to a large repository of multimedia information through the use of a camera equipped mobile phone or smartphone. More specifically, the use of visual codes (Rohs M. et al., 2004) attached to locations and objects within a museum and park in Lancaster enable the retrieval of Web based information to be triggered by capturing images using the integrated CCD camera. Moreover, the location and orientation of the phone are used as contextual parameters in order to control the specific information to be retrieved by the system. The prototype described in this paper is currently under evaluation by groups of children in Lancaster in order to evaluate the use of this platform for teaching and learning. Our aim was to establish whether or not this user interaction technique could be harnessed for education based applications targeted at young children. <s> BIB001 </s> Survey on Context-Aware Pervasive Learning Environments <s> Problem-based learning <s> We explore the use of ubiquitous sensing in the home for context-sensitive microlearning. To assess how users would respond to frequent and brief learning interactions tied to context, a sensor-triggered mobile phone application was developed, with foreign language vocabulary as the learning domain. A married couple used the system in a home environment, during the course of everyday activities, for a four-week study period. Built-in and stick-on multi-modal sensors detected the participants' interactions with hundreds of objects, furniture, and appliances. Sensor activations triggered the audio presentation of English and Spanish phrases associated with object use. Phrases were presented on average 57 times an hour; this intense interaction was found to be acceptable even after extended use. Based on interview feedback, we consider design attributes that may have reduced the interruption burden and helped sustain user interest, and which may be applicable to other context-sensitive, always-on systems. <s> BIB002
|
One of the reviewed works describes a pervasive learning system for a university laboratory in which learners are provided with a set of learning activities to perform. The objects in the lab are equipped with RFID tags, so the system is aware of what the users are doing at any given moment and can therefore monitor the progress of the learning activities (a minimal sketch of such monitoring is given at the end of this subsection). Learning activities are represented as complex problems to be solved, hence problem-based learning. Despite several learning models being presented in the papers, few were tested or validated. Microlearning was tested by BIB002 by running a non-stop scenario for several weeks. Participants in the test were optimistic about the possible use of the technology and showed an increased level of knowledge of their foreign language vocabulary. However, as the scenario was executed with only two test subjects, this result does not yet validate the usage of microlearning in a sensor-enhanced pervasive learning environment, but neither does it disprove the positive effect of the technology on learning. The system presented by BIB001 (independent learning) was validated in two test settings: an initial user study with a small group of children, and an investigation of the overall performance of the system. The results of the former test suggested that the children enjoyed using the system and the overall feedback was positive. The school staff members were also supportive towards the usage of the system. The performance test concentrated on how the process of capturing an image and awaiting a response affected the usability of the system. The time of the process varied from 6 to 26 seconds, depending on the status of the GPRS connection. The performance test did not validate the learning model directly, but it did suggest that the system is usable. In the third validated system, the authors set up experiments in which groups of students assembled a part of a computer; one group used Google to retrieve information and the other group used the LORAMS system to watch videos previously recorded by other students who had had the same learning experience earlier. The results suggest that LORAMS helped the students of the latter group perform better than the first group.
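As a minimal sketch of the kind of progress monitoring described above for the RFID-equipped lab, the observed tag events can be matched, in order, against the steps of a learning activity. The activity steps and event names below are invented placeholders.

```python
# Hypothetical ordered steps of one lab activity, keyed by the RFID tag
# read when the learner handles the corresponding object.
ACTIVITY_STEPS = ["tag:beaker", "tag:burner", "tag:thermometer"]

def progress(observed_events):
    """Count how many activity steps have been completed, in order."""
    done = 0
    for event in observed_events:
        if done < len(ACTIVITY_STEPS) and event == ACTIVITY_STEPS[done]:
            done += 1  # the next expected step was observed
    return done, done == len(ACTIVITY_STEPS)

events = ["tag:beaker", "tag:door", "tag:burner"]  # one irrelevant read
print(progress(events))  # (2, False): two of three steps done, not finished
```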
|
Survey on Context-Aware Pervasive Learning Environments <s> C. Observation 3 <s> In this paper, we describe an interactive guide system for kids in museums. The system uses a sensing board which can rapidly recognize types and locations of multiple objects, and creates an immersive environment by giving users visual and auditory feedback to their manipulations on the board. The purpose of the system is to attract users' interests in a real exhibition by allowing them to interact with the corresponding virtual exhibition on the board, and providing them with its information. We have evaluated the system in a museum. From the interviews and questionnaires, we have confirmed that it is easy for kids to use the system and it can raise their motivation for visiting real exhibitions. <s> BIB001 </s> Survey on Context-Aware Pervasive Learning Environments <s> C. Observation 3 <s> Ubiquitous computing is a new information and communication technology that utilize a large number of cooperative small nodes with computing and/or communication capabilities such as handheld terminals, smart mobile phones, sensor network nodes, contactless smart cards, RFIDs (radio frequency identification), and so on. This paper proposes the concept of ubiquitous learning that enables anyone to learn at anytime and anywhere by fully utilizing ubiquitous computing technologies. As the pilot system of the ubiquitous learning, this paper briefly introduces digital ubiquitous museum that makes our daily living environments learning materials. <s> BIB002 </s> Survey on Context-Aware Pervasive Learning Environments <s> C. Observation 3 <s> The idea of pervasive learning is to create a network of devices, people and situation that allow learning experiences to play out. This idea is attractive, but is not easily implemented. This paper presents a pervasive learning architecture called GlobalEdu which is composed by educational services (ES) and pervasive personal pedagogical agent (P3A) implemented above the ISAM platform. It is a software architecture that manages a large scale pervasive environment. ES expands the ISAM services to educational domain to provide knowledge management, context management, control and communication. P3A is shall always be with the learner, assisting the process of learning, independent of user's device at the moment. <s> BIB003
|
All except one of the reviewed systems use a client-server architecture; the exception implements a touch-based and RFID-enabled sensor board in a museum BIB001 . In this stand-alone system, the sensor board is directly connected to a computer which also manages the video projector used to project an image onto the board. The projected video is adapted to user actions and the objects placed on the board. Of the client-server based systems, two also allowed ad-hoc peer-to-peer communication without server intervention. Details of hardware and software were not given in many of the reviewed papers and none presented a thorough technical description. Therefore, the following information may not correspond to all the state-of-the-art technologies used in pervasive learning environments. The operating systems of the mobile devices were Windows Mobile, Windows XP and Symbian OS. On the server side, XML was used for encapsulating data and messages; a representative client-server exchange is sketched below. Furthermore, BIB002 used the TRON (The Real-time Operating system Nucleus) operating system on the server. In the other systems the operating system was not explicitly mentioned. Communication between the server and the client was established by either GPRS or WLAN, and two papers mentioned the usage of the HTTP protocol. The programming environment on the server side was mentioned only twice (Java Servlets on Tomcat, and ASP.NET). Information about the programming language used on the client was available for all but seven of the systems; the languages were C++ (3), Java (2), Visual Basic (2), C# (1) and Flash BIB003 . One of the systems used both Flash and C++. Mobile devices were used as learning tools in all but one of the systems. The explicitly mentioned types of mobile devices were Tablet PCs (2), PDAs (6) and mobile phones (3). Based on this information we can conclude that PDAs may currently be the most popular client type in pervasive learning environments. However, due to the recent convergence of mobile phones and PDA devices, both device types could be used for the same purpose. Tablet PCs are somewhat clumsy for pervasive learning in systems where high mobility is required. We established the different roles of mobile devices based on the extracted information; these roles are presented in Table V together with their frequencies and descriptions. Frequency denotes how many times a role was present in the reviewed systems; it is worth noticing that in one system a mobile device can have several roles, but none of the systems supported all five. One system used a mobile device as an auxiliary tool for reading RFID tags, but users were also able to use the system without a mobile device. Two of the roles are, for example: Navigation tool (2): mobile devices were used for navigation; with the help of the device, a user is able to know his/her own location or the location of a specific object within the environment; in the reviewed systems, the navigation feature was based on either GPS or RFID. Notice receiving tool (2): in two systems, different types of announcements and notices, such as reminders and announcements submitted by the teacher, were delivered to users' mobile devices.
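Since nearly all of the systems follow a client-server design, with HTTP transport and XML payloads mentioned explicitly in some papers, the sketch below shows a representative exchange: the mobile client reports a sensed tag and receives content for that context. The endpoint URL and XML schema are invented for illustration, and only Python's standard library is used.

```python
import urllib.request
import xml.etree.ElementTree as ET

def request_content(server_url: str, learner_id: str, tag_id: str) -> str:
    """POST the sensed context to the server and parse the XML reply."""
    body = f'<context learner="{learner_id}" tag="{tag_id}"/>'.encode("utf-8")
    req = urllib.request.Request(server_url, data=body,
                                 headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(req) as resp:
        reply = ET.fromstring(resp.read())
    # Expected (invented) reply format: <content><text>...</text></content>
    return reply.findtext("text", default="")

# Usage against a hypothetical server:
# print(request_content("http://server.example/learn", "student42", "04A1B2"))
```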
|
Survey on Context-Aware Pervasive Learning Environments <s> E. Observation 5 <s> In this paper, we describe an interactive guide system for kids in museums. The system uses a sensing board which can rapidly recognize types and locations of multiple objects, and creates an immersive environment by giving users visual and auditory feedback to their manipulations on the board. The purpose of the system is to attract users' interests in a real exhibition by allowing them to interact with the corresponding virtual exhibition on the board, and providing them with its information. We have evaluated the system in a museum. From the interviews and questionnaires, we have confirmed that it is easy for kids to use the system and it can raise their motivation for visiting real exhibitions. <s> BIB001 </s> Survey on Context-Aware Pervasive Learning Environments <s> E. Observation 5 <s> We explore the use of ubiquitous sensing in the home for context-sensitive microlearning. To assess how users would respond to frequent and brief learning interactions tied to context, a sensor-triggered mobile phone application was developed, with foreign language vocabulary as the learning domain. A married couple used the system in a home environment, during the course of everyday activities, for a four-week study period. Built-in and stick-on multi-modal sensors detected the participants' interactions with hundreds of objects, furniture, and appliances. Sensor activations triggered the audio presentation of English and Spanish phrases associated with object use. Phrases were presented on average 57 times an hour; this intense interaction was found to be acceptable even after extended use. Based on interview feedback, we consider design attributes that may have reduced the interruption burden and helped sustain user interest, and which may be applicable to other context-sensitive, always-on systems. <s> BIB002 </s> Survey on Context-Aware Pervasive Learning Environments <s> E. Observation 5 <s> In this paper we present the concept and technical architecture of the SciMyst pervasive mobile game. Encouraged by the positive experiences of game deployment at the SciFest2007 science festival, in Joensuu, Finland, we discuss means to use SciMyst in the context of museums with an aim to boost visitor engagement and interaction with the surrounding environment. As the result, we propose an array of novel technologies to be used in the SciMyst framework for museums. <s> BIB003
The roles of the physical environment showed some variation, but in general three different roles were recognisable, albeit not explicitly presented. These roles and their respective frequencies were: context for learning (9), content for learning (7), and system resource (3). It is worth noticing that in one system an environment can have multiple roles; for example, there were five cases where the environment was both context and content for learning. Additionally, two of the reviewed systems, an interactive sensor board for museums BIB001 and an interactive toy set for children , did not utilise the environment, and one paper did not state the role of the environment at all. The environment is a context for learning when learning is situation-based and the system adapts according to the situations and contexts in which the user is present; this is also called contextual or situational learning. The environment provides content for learning when the system utilises the information within the environment as a learning resource. Finally, the environment is a system resource when some objects within the environment are triggers for system events (e.g. furniture with embedded sensors which trigger usage events BIB002 ).
IV. DISCUSSION
The evidence presented in Observation 1 suggests that RFID is the most prevalent sensor technology used in pervasive learning environments, in part due to the relatively cheap price of RFID tags (approx. 1€ each in the authors' countries) and readers (150€), compared to the cost of a basic wireless sensor node of at least 300€. RFID readers are already available in some mobile devices as integrated chips, including models by Nokia and Samsung, and we expect that RFID will become a mainstream technology in mobile devices within 5 years. This development will enable tagging any object in a pervasive learning environment, thus making the underlying system more aware of the environment. Observation 2 identifies several suitable learning models; however, these require proper validation and comparison. Many of the proposed learning models were not validated, and those that were did not provide reliable results, as the test scenarios were inadequate in terms of the numbers of test participants and repetitions. It was discouraging to discover that only a handful of papers explicitly discussed learning models, and this leads us to believe that the authors of the other papers either did not consider learning models at all or did not include that information. All the learning models followed an informal constructivist approach. Authentic learning was mentioned more than once, thus suggesting its potential for pervasive learning. Nevertheless, the results of Observation 2 indicate that learning model validations are required in this field before any of the models can be seriously recommended. Observation 3 concentrated on the technical implementations of pervasive learning environments and the roles of mobile devices in them. The use of client-server architectures in most of the systems shows that centralised control is used in preference to a distributed system. The benefits of using a centralised approach are the ease of installation and maintenance. However, a distributed system consisting of autonomous sensor nodes and one or more coordinating servers would be more fault-tolerant and load-balanced. Fault tolerance is particularly important in large systems which run constantly and have hundreds or thousands of resources.
The systems presented in the reviewed papers were quite small, thus the absence of distributed control is justified. The popularity of PDA devices (6) as clients over Tablet PCs (2) and mobile phones (3) can be explained by screen size, physical dimensions, and processing capabilities. Displays on mobile phones are often too small for viewing information other than text and low-quality images/video. On the other hand, Tablet PCs have large displays, but they are more difficult to carry around due to their large physical size. PDA devices often have larger displays than mobile phones and are smaller than Tablet PCs. Moreover, PDA devices have enough processing power for handling basic media types, while resources are often more limited on mobile phones. Despite the popularity of PDA devices, mobile phone and PDA technologies have been converging, and there is a similar trend of convergence going on between laptops and mobile phones/PDAs. These new devices are called Ultra Mobile PCs (UMPCs), and their size is smaller than Tablet PCs but bigger than mobile phones or PDAs. In addition to being highly portable, UMPC devices are capable of running a full-scale Windows XP operating system or an equivalent Linux distribution, thus making them suitable client devices for various software solutions supporting pervasive learning activities. Currently the problems of UMPCs are high price and relatively short battery life; however, we can expect these aspects to improve in the near future. According to Observation 3, there were five types of roles for mobile devices in the reviewed systems: data collection tool, content representation tool, communication tool, navigation tool and notice receiving tool. Since the content representation tool was the only role with a frequency of more than 10, many of the systems merely concentrated on providing context-sensitive content to the user. This indicates that there is work to be done to increase interaction between the environment and the users, as well as among the users. For example, the data gathered with a data collection tool can be saved and processed later to continue the learning experience at another location, e.g. at home or in a classroom. As another example, communication with peers can help users to establish and strengthen social relationships. Observation 4 concluded that only a few pervasive learning environments are truly multi-user systems that support communication among users. The lack of voice- and video-based communication was also noted, and we suggest that a reason may be the requirement for other running applications to be closed before using mobile phones' built-in voice call capabilities. Furthermore, creating a new reliable VoIP (Voice over IP) application is not a trivial task. Audio/video-based communication is more personal, instant and effective than forums or chats. If a pervasive learning environment is to be built on a principle of virtual collaboration, instant communication is possibly a good way to implement it. An alternative method is to provide a meeting request tool through which two or more users could meet physically after agreeing on it virtually. This kind of approach was used by BIB003 , where two users of the system met physically after one user had sent a help request to another.
In Observation 5, we distinguished three different roles for the physical environment in pervasive learning systems: context for learning, content for learning and system resource, and the frequency figures (9, 7 and 3, respectively) indicate that context and content are used most often. Usage of the environment as a system resource would be higher if more systems embedded wireless sensor networking components for sensing different aspects of the environment. The low frequency of the system resource role is related to the lack of interaction with the environment; if a system were able to closely observe the user's behaviour and the state of the physical environment, it would become more responsive and adaptive. This would in turn encourage users to interact more with the environment by using different objects and observing the consequences on the mobile device or in the physical environment.
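As a small illustration of the system resource role, the sketch below maps sensor activations on tagged objects to adapted content pushed to the learner's device. The object identifiers and content strings are hypothetical, loosely inspired by the sensor-triggered microlearning scenario of BIB002 :

```python
# Minimal sketch of the "environment as system resource" role: objects with
# embedded sensors trigger system events, and the system reacts by pushing
# adapted learning content to the mobile device. Object identifiers and
# content strings are hypothetical examples, not from any reviewed system.
from typing import Optional

CONTENT_BY_OBJECT = {
    "kitchen-fridge": "Vocabulary: 'el refrigerador' (the refrigerator)",
    "exhibit-trilobite": "Trilobites are extinct marine arthropods.",
}

def on_sensor_event(object_id: str) -> Optional[str]:
    """Map a sensor activation to the learning content it should trigger."""
    return CONTENT_BY_OBJECT.get(object_id)

# A sensor activation on the tagged fridge pushes the associated
# micro-lesson to the learner's device:
print(on_sensor_event("kitchen-fridge"))
```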
A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks <s> INTRODUCTION <s> Practical design and performance solutions for every ad hoc wireless network. Ad Hoc Wireless Networks comprise mobile devices that use wireless transmission for communication. They can be set up anywhere and any time because they eliminate the complexities of infrastructure setup and central administration, and they have enormous commercial and military potential. Now, there's a book that addresses every major issue related to their design and performance. Ad Hoc Wireless Networks: Architectures and Protocols presents state-of-the-art techniques and solutions, and supports them with easy-to-understand examples. The book starts off with the fundamentals of wireless networking (wireless PANs, LANs, MANs, WANs, and wireless Internet) and goes on to address such current topics as Wi-Fi networks, optical wireless networks, and hybrid wireless architectures. Coverage includes: medium access control, routing, multicasting, and transport protocols; QoS provisioning, energy management, security, multihop pricing, and much more; in-depth discussion of wireless sensor networks and ultra wideband technology; more than 200 examples and end-of-chapter problems. Ad Hoc Wireless Networks is an invaluable resource for every network engineer, technical manager, and researcher designing or building ad hoc wireless networks. <s> BIB001 </s> A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks <s> INTRODUCTION <s> Nodes in a mobile ad hoc network have limited battery power. If a node is used frequently for transmission or overhearing of data packets, more energy is consumed by that node, and after a certain amount of time the energy level may not be sufficient for data transmission, resulting in link failure. In this paper, we have considered two routing protocols, Dynamic Source Routing (DSR) and Minimum Maximum Battery Cost Routing (MMBCR), and studied their performances in terms of network lifetime for the same network scenario. Simulations are carried out using NS2. Finally, from the simulation results we have concluded that MMBCR gives a longer network lifetime by selecting the route with maximum battery capacity, thereby outperforming DSR. General Terms: Energy efficiency, MANETs, Routing Protocols. <s> BIB002
An ad hoc network is a multi-hop wireless network consisting of autonomous mobile nodes interconnected by a wireless medium without any fixed infrastructure. Its quick and easy deployment in situations where setting up a fixed-infrastructure network is impossible has increased its potential for use in critical scenarios such as battlefields, emergency disaster relief and conferences. A mobile ad hoc network (MANET) BIB001 BIB002 is characterized by mobile nodes which have the freedom to move in any direction and the ability to self-configure, self-maintain and self-organize within the network by means of radio links, without any fixed infrastructure such as base stations, fixed links, routers or centralized servers. Since no base station or central coordinator exists in the network, the responsibility of acting as a router during communication has to be taken on by each and every node participating in the network. Hence all nodes incorporate a routing mechanism in order to transmit a data packet from source to destination. Nodes are operated by batteries of limited capacity, and they all suffer from severe battery consumption, especially when they take part in data communication for various sources and destinations. Uninterrupted data transmission from a particular source to a destination requires continual updating of the path; if at any moment no path is found from source to destination, the route discovery process has to be invoked, and repeated route discovery may introduce heavy power consumption. A number of routing approaches have been proposed to reduce the various types of power consumption in wireless ad hoc networks, which not only prolongs the lifespan of individual nodes but also reduces network partitioning and enhances the performance of the network. A fixed-infrastructure wireless network, in contrast, is a static network whose components have to be set up permanently prior to the establishment of communication; this takes considerable time and also involves considerable cost. The best example of a fixed-infrastructure wireless network is the Global System for Mobile Communication (GSM), known as the second-generation mobile cellular system. In GSM, the network architecture comprises several base transceiver stations (BTS) which are clustered and connected to a base station controller (BSC). Several BSCs are connected to a mobile switching center (MSC). The MSC has access to several databases, including the Visitor Location Register (VLR) and the Home Location Register (HLR). It is also responsible for establishing, managing and clearing connections, as well as routing calls to the proper radio cells. Here, even though the nodes are mobile, they are limited to a fixed number of hops while communicating with other nodes. A MANET is completely different: the network is temporary, being meant for a specific purpose and a certain period of time, and it is based on multi-hop technology where data can be transmitted through a number of intermediate nodes from source to destination.
The rapid growth in demand for MANETs in recent years has challenged researchers to take up crucial issues such as bandwidth utilization, limited wireless transmission range, the hidden-terminal and exposed-terminal problems, packet loss due to transmission errors, mobility, stimulated route changes, security and battery constraints. One of the most important challenges of MANETs is the power constraint. Mobile ad hoc networks operate on battery power, which is usually consumed in two main ways: first, by transmitting data to a desired recipient, and second, by a mobile node offering itself as an intermediate forwarding node in the network. The power level of a node is also affected whenever a route is established between two end points. The trade-off between the frequency of route update dissemination and battery power utilization is one of the major design issues of ad hoc network protocols: high power consumption increases the battery depletion rate, which in turn reduces the node's lifetime and the network lifetime and causes network partitioning. Frequent network partitioning degrades performance through an increased number of retransmissions, packet loss, higher end-to-end delay and many other problems. Therefore, various energy-efficient routing protocols have been proposed to increase the lifetime of the nodes as well as the lifetime of the network, so that communication can be carried out without interruption. This article presents and analyzes different energy-efficient routing protocols designed for ad hoc wireless networks that are based on the mechanism of the traditional DSR routing protocol. The remainder of the article is organized as follows. Section 2 presents the two subdivisions of ad hoc routing protocols and their basic routing mechanisms; we emphasize the basic working principle of the DSR routing protocol, since all the energy-efficient routing protocols explained here are based on DSR. Section 3 sheds some light on the need for energy-aware routing protocols in MANETs and the different approaches to achieving that goal. Section 4 highlights the related work that has been done to make DSR an efficient energy-aware routing protocol. Finally, Section 5 concludes the article.
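To make the trade-off between hop count and battery depletion concrete, the sketch below contrasts plain minimum-hop route selection, as in traditional DSR, with an MMBCR-style choice of the route whose weakest relaying node retains the most battery. The topology and battery percentages are toy values, not simulation results:

```python
# Toy contrast between minimum-hop route selection (as in traditional DSR)
# and MMBCR-style selection, which maximises the minimum residual battery
# over the relaying nodes of each candidate route. Battery levels are in %.
battery = {"A": 90, "B": 20, "C": 80}

routes = [
    ["S", "B", "T"],        # 2 hops, but relays through B (20% battery)
    ["S", "A", "C", "T"],   # 3 hops, weakest relay still at 80% battery
]

def min_hop(candidates):
    """DSR-style choice: fewest hops, residual battery ignored."""
    return min(candidates, key=len)

def mmbcr(candidates, residual):
    """MMBCR-style choice: route whose weakest intermediate node
    has the largest residual battery capacity."""
    return max(candidates, key=lambda r: min(residual[n] for n in r[1:-1]))

print(min_hop(routes))         # ['S', 'B', 'T'] -> keeps draining node B
print(mmbcr(routes, battery))  # ['S', 'A', 'C', 'T'] -> spares node B
```

Under sustained traffic, the minimum-hop choice repeatedly drains node B, hastening link failure and network partitioning, whereas the battery-aware choice spreads the load across better-provisioned nodes.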
A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks <s> ROUTING PROCESS IN AD HOC NETWORKS <s> An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. Each node may adjust its power within a certain range that determines the set of possible one hop away neighbors. Traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach. The nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver. We propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain-out is maximized. The algorithms are local and amenable to distributed implementation. When there is a single power level, the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed such that the energy consumption is balanced among the nodes in proportion to their energy reserves, instead of routing to minimize the absolute consumed power. <s> BIB001 </s> A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks <s> ROUTING PROCESS IN AD HOC NETWORKS <s> Practical design and performance solutions for every ad hoc wireless network. Ad Hoc Wireless Networks comprise mobile devices that use wireless transmission for communication. They can be set up anywhere and any time because they eliminate the complexities of infrastructure setup and central administration, and they have enormous commercial and military potential. Now, there's a book that addresses every major issue related to their design and performance. Ad Hoc Wireless Networks: Architectures and Protocols presents state-of-the-art techniques and solutions, and supports them with easy-to-understand examples. The book starts off with the fundamentals of wireless networking (wireless PANs, LANs, MANs, WANs, and wireless Internet) and goes on to address such current topics as Wi-Fi networks, optical wireless networks, and hybrid wireless architectures. Coverage includes: medium access control, routing, multicasting, and transport protocols; QoS provisioning, energy management, security, multihop pricing, and much more; in-depth discussion of wireless sensor networks and ultra wideband technology; more than 200 examples and end-of-chapter problems. Ad Hoc Wireless Networks is an invaluable resource for every network engineer, technical manager, and researcher designing or building ad hoc wireless networks. <s> BIB002 </s> A Review of Energy Efficient Dynamic Source Routing Protocol for Mobile Ad Hoc Networks <s> ROUTING PROCESS IN AD HOC NETWORKS <s> Nodes in a mobile ad hoc network have limited battery power. If a node is used frequently for transmission or overhearing of data packets, more energy is consumed by that node, and after a certain amount of time the energy level may not be sufficient for data transmission, resulting in link failure.
In this paper, we have considered two routing protocols, Dynamic Source Routing (DSR) and Minimum Maximum Battery Cost Routing (MMBCR), and studied their performances in terms of network lifetime for the same network scenario. Simulations are carried out using NS2. Finally, from the simulation results we have concluded that MMBCR gives a longer network lifetime by selecting the route with maximum battery capacity, thereby outperforming DSR. General Terms: Energy efficiency, MANETs, Routing Protocols. <s> BIB003
In a MANET BIB002 BIB003 BIB001 , routing is the process of establishing a route and then forwarding packets from source to destination through some intermediate nodes when the destination node is not directly within the range of the sender node. Route establishment is itself a two-step process: first, route discovery, which finds the different routes from a source to a destination; and second, route selection, which selects a particular route among all the routes found for that source-destination pair. Traditional protocols and data structures are available to maintain the routes and to carry out this selection by choosing the path with the minimum distance from source to destination, where minimum distance is in terms of minimum hop count.
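As a minimal illustration of this two-step process, the sketch below enumerates loop-free candidate routes over a toy topology (route discovery) and then applies minimum-hop selection (route selection). Real protocols such as DSR discover routes with flooded route request/reply packets rather than a global view of the graph:

```python
# Minimal sketch of the two-step route establishment described above:
# (1) route discovery enumerates the loop-free paths from source to
# destination, and (2) route selection picks one by minimum hop count.
# The adjacency list is a toy topology used purely for illustration.
def discover_routes(graph, src, dst, path=None):
    """Enumerate all loop-free routes from src to dst (route discovery)."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for neighbour in graph.get(src, []):
        if neighbour not in path:            # avoid routing loops
            yield from discover_routes(graph, neighbour, dst, path)

def select_route(candidates):
    """Route selection: minimum distance in terms of hop count."""
    return min(candidates, key=len, default=None)

graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "T"], "C": ["T"]}
found = list(discover_routes(graph, "S", "T"))
print(found)                # all discovered routes
print(select_route(found))  # shortest-hop route: ['S', 'B', 'T']
```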
A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> Retinotopic sampling and the Gabor decomposition have a well-established role in computer vision in general as well as in face authentication. The concept of Retinal Vision we introduce aims at complementing these biologically inspired tools with models of higher-order visual process, specifically the Human Saccadic System. We discuss the Saccadic Search strategy, a general purpose attentional mechanism that identifies semantically meaningful structures in images by performing "jumps" (saccades) between relevant locations. Saccade planning relies on a priori knowledge encoded by SVM classifiers. The raw visual input is analysed by means of a log-polar retinotopic sensor, whose receptive fields consist in a vector of modified Gabor filters designed in the log-polar frequency plane. Applicability to complex cognitive tasks is demonstrated by facial landmark detection and authentication experiments over the M2VTS and Extended M2VTS (XM2VTS) databases. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators resulting in a feature set for representing and matching this region. A number of aspects are studied in this work, including the 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> The blood vessel structure of the sclera is unique to each person, and it can be remotely obtained nonintrusively in the visible wavelengths. Therefore, it is well suited for human identification (ID). In this paper, we propose a new concept for human ID: sclera recognition. This is a challenging research problem because images of sclera vessel patterns are often defocused and/or saturated and, most importantly, the vessel structure in the sclera is multilayered and has complex nonlinear deformations. This paper has several contributions. First, we proposed the new approach for human ID: sclera recognition. Second, we developed a new method for sclera segmentation which works for both color and grayscale images. 
Third, we designed a Gabor wavelet-based sclera pattern enhancement method to emphasize and binarize the sclera vessel patterns. Finally, we proposed a line-descriptor-based feature extraction, registration, and matching method that is illumination, scale, orientation, and deformation invariant and can mitigate the multilayered deformation effects and tolerate segmentation error. The experimental results show that sclera recognition is a promising new biometrics for positive human ID. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> This paper introduces a novel face recognition problem domain: the medically altered face for gender transformation. A data set of >1.2 million face images was constructed from wild videos obtained from You Tube of 38 subjects undergoing hormone replacement therapy (HRT) for gender transformation over a period of several months to three years. The HRT achieves gender transformation by severely altering the balance of sex hormones, which causes changes in the physical appearance of the face and body. This paper explores that the impact of face changes due to hormone manipulation and its ability to disguise the face and hence, its ability to effect match rates. Face disguise is achieved organically as hormone manipulation causes pathological changes to the body resulting in a modification of face appearance. This paper analyzes and evaluates face components versus full face algorithms in an attempt to identify regions of the face that are resilient to the HRT process. The experiments reveal that periocular face components using simple texture-based face matchers, local binary patterns, histogram of gradients, and patch-based local binary patterns out performs matching against the full face. Furthermore, the experiments reveal that a fusion of the periocular using one of the simple texture-based approaches (patched-based local binary patterns) out performs two Commercial Off The Shelf Systems full face systems: 1) PittPatt SDK and 2) Cognetic FaceVACs v8.5. The evaluated periocular-fused patch-based face matcher outperforms PittPatt SDK v5.2.2 by 76.83% and Cognetic FaceVACS v8.5 by 56.23% for rank-1 accuracy. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> Recent studies in biometrics have shown that the periocular region of the face is sufficiently discriminative for robust recognition, and particularly effective in certain scenarios such as extreme occlusions, and illumination variations where traditional face recognition systems are unreliable. In this paper, we first propose a fully automatic, robust and fast graph-cut based eyebrow segmentation technique to extract the eyebrow shape from a given face image. We then propose an eyebrow shape-based identification system for periocular face recognition. Our experiments have been conducted over large datasets from the MBGC and AR databases and the resilience of the proposed approach has been evaluated under varying data conditions. The experimental results show that the proposed eyebrow segmentation achieves high accuracy with an F-Measure of 99.4% and the identification system achieves rates of 76.0% on the AR database and 85.0% on the MBGC database. <s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> In this paper, we propose to combine sclera and periocular features for identity verification. 
The proposal is particularly useful in applications related to face recognition when the face is partially occluded with only periocular region revealed. Due to its relatively new exposition in the literature of biometrics, particular attention will be paid to sclera feature extraction in this work. For periocular feature extraction, structured random projections were adopted to extract compressed vertical and horizontal components of image features. The binary sclera features are eventually fused with the periocular features at a score level. Extensive experiments have been performed on UBIRIS v1 session1 and session2 databases to assess the verification performance before and after fusion. Around 5% of equal error rate performance was observed to be enhanced by fusing sclera with periocular features comparing with that before fusion. <s> BIB006 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> The concept of periocular biometrics emerged to improve the robustness of iris recognition to degraded data. Being a relatively recent topic, most of the periocular recognition algorithms work in a holistic way and apply a feature encoding/matching strategy without considering each biological component in the periocular area. This not only augments the correlation between the components in the resulting biometric signature, but also increases the sensitivity to particular data covariates. The main novelty in this paper is to propose a periocular recognition ensemble made of two disparate components: 1) one expert analyses the iris texture and exhaustively exploits the multispectral information in visible-light data and 2) another expert parameterizes the shape of eyelids and defines a surrounding dimensionless region-of-interest, from where statistics of the eyelids, eyelashes, and skin wrinkles/furrows are encoded. Both experts work on disjoint regions of the periocular area and meet three important properties. First, they produce practically independent responses, which is behind the better performance of the ensemble when compared to the best individual recognizer. Second, they do not share particularly sensitivity to any image covariate, which accounts for augmenting the robustness against degraded data. Finally, it should be stressed that we disregard information in the periocular region that can be easily forged (e.g., shape of eyebrows), which constitutes an active anticounterfeit measure. An empirical evaluation was conducted on two public data sets (FRGC and UBIRIS.v2), and points for consistent improvements in performance of the proposed ensemble over the state-of-the-art periocular recognition algorithms. <s> BIB007 </s> A Survey on Periocular Biometrics Research <s> Detection and segmentation of the periocular region <s> Face recognition performance degrades significantly under occlusions that occur intentionally or unintentionally due to head gear or hair style. In many incidents captured by surveillance videos, the offenders cover their faces leaving only the periocular region visible. We present an extensive study on periocular region based person identification in video. While, previous techniques have handpicked a single best frame from videos, we formulate, for the first time, periocular region based person identification in video as an image-set classification problem. For thorough analysis, we perform experiments on periocular regions extracted automatically from RGB videos, NIR videos and hyperspectral image cubes. 
Each image-set is represented by four heterogeneous feature types and classified with six state-of-the-art image-set classification algorithms. We propose a novel two stage inverse Error Weighted Fusion algorithm for feature and classifier score fusion. The proposed two stage fusion is superior to single stage fusion. Comprehensive experiments were performed on four standard datasets, MBGC NIR and visible spectrum (Phillips et al., 2005), CMU Hyperspectral (Denes et al., 2002) and UBIPr (Padole and Proenca, 2012). We obtained average rank-1 recognition rates of 99.8, 98.5, 97.2, and 99.5% respectively which are significantly higher than the existing state of the art. Our results demonstrate the feasibility of image-set based periocular biometrics for real world applications. <s> BIB008
Initial studies focused on feature extraction only (with the periocular region manually extracted), but automatic detection and segmentation have increasingly become a research target in themselves. Some works have first applied a full-face detector such as the Viola-Jones (VJ) detector, e.g. BIB002 or Juefei-Xu and Savvides (2012), but successful extraction of the periocular region in this way relies on an accurate detection of the whole face. Using iris segmentation techniques may not be reliable under challenging conditions either. On the other hand, eye detection can be a decisive pre-processing task to ensure successful segmentation of the iris texture in difficult images, as in the study by . Here, they used correlation filters to detect the eye center over the difficult FOCS database of subjects walking through a portal, achieving a 95% success rate. However, despite this good result in indicating the eye position, the accuracy of the iris segmentation algorithms evaluated was between 51% and 90%. Correlation filters were also used for eye detection in BIB004 , although after applying the VJ face detector. Table 2 summarizes existing research dealing with the task of locating the eye position directly, without relying on full-face or iris detectors. and BIB008 used the VJ detector of face sub-parts. BIB008 also experimented with the CMU hyperspectral database, which has images captured simultaneously at multiple wavelengths. Since the eye is centered in all bands, accuracy can be boosted by collectively detecting the eye over all bands. BIB001 made use of Gabor features for eye detection and face tracking purposes by performing saccades across the image, whereas Bigun (2014, 2015) proposed the use of symmetry filters tuned to detect circular symmetries. The latter has the advantage of not needing training, and detection is possible with a few 1D convolutions due to the separability of the detection filters, which are built from derivatives of a Gaussian. BIB005 proposed a Local Eyebrow Active Shape Model (LE-ASM) to detect the eyebrow region directly from a given face image, with eyebrow pixels segmented afterwards using graph-cut based segmentation. ASMs were also used by Juefei-Xu and Savvides (2012) to automatically extract the periocular region, albeit after the application of a VJ full-face detector. Recently, a method was proposed to label seven components of the periocular region (iris, sclera, eyelashes, eyebrows, hair, skin and glasses) by using seven classifiers at the pixel level, with each classifier specialized in one component. Pixel features used for classification included the following texture and shape descriptors: RGB/HSV/YCbCr values, Local Binary Patterns (LBP), entropy and Gabor features. Some works have proposed the extraction of features from the sclera region only, therefore requiring an algorithm to specifically segment this region. For this purpose, BIB006 , BIB007 and BIB003 used the HSV/YCbCr color spaces. In these works, however, sclera detection is guided by a prior detection of the iris boundaries.
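As an illustration of the colour-space approach to sclera segmentation mentioned above, the sketch below thresholds an eye image in HSV space. The threshold values are illustrative guesses rather than those of the cited works, which in addition guide the segmentation by a prior detection of the iris boundaries:

```python
# Rough sketch of colour-based sclera candidate segmentation in the spirit
# of the HSV/YCbCr approaches cited above. Thresholds are illustrative
# guesses; the cited works further constrain the result using previously
# detected iris boundaries.
import cv2
import numpy as np

def sclera_candidates(bgr_image: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # The sclera is roughly white: any hue, low saturation, high value.
    lower = np.array([0, 0, 150], dtype=np.uint8)
    upper = np.array([180, 60, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening removes small speckles (e.g. specular
    # highlights) that also pass the colour threshold.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# mask = sclera_candidates(cv2.imread("periocular.png"))
```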
A Survey on Periocular Biometrics Research <s> Recognition using periocular features <s> The periocular region is the part of the face immediately surrounding the eye, and researchers have recently begun to investigate how to use the periocular region for recognition. Understanding how humans recognize faces helped computer vision researchers develop algorithms for face recognition. Likewise, understanding how humans analyze periocular images could benefit researchers developing algorithms for periocular recognition. We presented pairs of periocular images to testers and asked them to determine whether the two images were from the same person or from different people. Our testers correctly determined the relationship between the two images in over 90% of the queries. We asked them to describe what features in the images were helpful to them in making their decisions. We found that eyelashes, tear ducts, shape of the eye, and eyelids were used most frequently in determining whether two images were from the same person. The outer corner of the eye and the shape of the eye were used a higher proportion of the time for incorrect responses than they were for correct responses, suggesting that those two features are not as useful. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Recognition using periocular features <s> Periocular biometrics is the recognition of individuals based on the appearance of the region around the eye. Periocular recognition may be useful in applications where it is difficult to obtain a clear picture of an iris for iris biometrics, or a complete picture of a face for face biometrics. Previous periocular research has used either visible-light (VL) or near-infrared (NIR) light images, but no prior research has directly compared the two illuminations using images with similar resolution. We conducted an experiment in which volunteers were asked to compare pairs of periocular images. Some pairs showed images taken in VL, and some showed images taken in NIR light. Participants labeled each pair as belonging to the same person or to different people. Untrained participants with limited viewing times correctly classified VL image pairs with 88% accuracy, and NIR image pairs with 79% accuracy. For comparison, we presented pairs of iris images from the same subjects. In addition, we investigated differences between performance on light and dark eyes and relative helpfulness of various features in the periocular region under different illuminations. We calculated performance of three computer algorithms on the periocular images. Performance for humans and computers was similar. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Recognition using periocular features <s> Automated and accurate biometrics identification using periocular imaging has wide range of applications from human surveillance to improving performance for iris recognition systems, especially under less-constrained imaging environment. Restricted Boltzmann Machine is a generative stochastic neural network that can learn the probability distribution over its set of inputs. As a convolutional version of Restricted Boltzman Machines, CRBM aim to accommodate large image sizes and greatly reduce the computational burden. However in the best of our knowledge, the unsupervised feature learning methods have not been explored in biometrics area except for the face recognition. This paper explores the effectiveness of CRBM model for the periocular recognition. 
We perform experiments on periocular image database from the largest number of subjects (300 subjects as test subjects) and simultaneously exploit key point features for improving the matching accuracy. The experimental results are presented on publicly available database, the Ubripr database, and suggest effectiveness of RBM feature learning for automated periocular recognition with the large number of subjects. The results from the investigation in this paper also suggest that the supervised metric learning can be effectively used to achieve superior performance than the conventional Euclidean distance metric for the periocular identification. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Recognition using periocular features <s> In biometrics research, the periocular region has been regarded as an interesting trade-off between the face and the iris, particularly in unconstrained data acquisition setups. As in other biometric traits, the current challenge is the development of more robust recognition algorithms. Having investigated the suitability of the ‘elastic graph matching’ (EGM) algorithm to handle non-linear distortions in the periocular region because of facial expressions, the authors observed that vertices locations often not correspond to displacements in the biological tissue. Hence, they propose a ‘globally coherent’ variant of EGM (GC-EGM) that avoids sudden local angular movements of vertices while maintains the ability to faithfully model non-linear distortions. Two main adaptations were carried out: (i) a new term for measuring vertices similarity and (ii) a new term in the edges-cost function penalises changes in orientation between the model and test graphs. Experiments were carried out both in synthetic and real data and point for the advantages of the proposed algorithm. Also, the recognition performance when using the EGM and GC-EGM was compared, and statistically significant improvements in the error rates were observed when using the GC-EGM variant. . <s> BIB004
Several feature extraction methods have been proposed for periocular recognition, with a taxonomy shown in Figure 3. Existing features can be classified into: i) global features, which are extracted from the whole image or region of interest (ROI), and ii) local features, which are extracted from a set of discrete points, or key points, only. Table 3 gives an overview, in chronological order, of existing works for periocular recognition. The most widely used approaches include Local Binary Patterns (LBP) and, to a lesser extent, Histogram of Oriented Gradients (HOG) and Scale-Invariant Feature Transform (SIFT) key points. Over the course of the years, many other descriptors have been proposed. This section provides a brief description of the features used for periocular recognition (Sections 4.1 and 4.2), followed by a review of the works mentioned in Table 3 (Section 4.3), highlighting their most important results or contributions. Due to page limitations, we omit references to the original works where these features were first presented (unless they were originally proposed for periocular recognition in the mentioned reference); we refer to the references indicated for further information about the presented feature extraction techniques. Some preprocessing steps have also been used to cope with the difficulties found in unconstrained scenarios, such as pose correction by Active Appearance Models (AAM) (Juefei-Xu et al., 2011), illumination normalization BIB003 , correction of deformations due to expression change by Elastic Graph Matching (EGM) BIB004 , or color device-specific calibration . The use of subspace representation methods after feature extraction is also becoming a popular way either to improve performance or to reduce the feature set, as mentioned next in this section. There are also periocular studies with human experts. BIB001 BIB002 evaluated the ability of (untrained) human observers to compare pairs of periocular images with both VW and NIR illumination, obtaining better results with the VW modality. They also tested three computer experts (LBP, HOG and SIFT), finding that the performance of humans and machines was similar.
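To give a flavour of the two most widespread global descriptors, the following sketch computes a uniform-LBP occurrence histogram and a HOG vector with scikit-image. The parameter values (number of neighbours, radius, cell and block geometry) are illustrative choices, not those of any particular surveyed system:

```python
# Minimal sketch of two widely used global descriptors for periocular
# recognition, LBP and HOG, computed with scikit-image. Parameter values
# are illustrative choices only.
import numpy as np
from skimage.feature import local_binary_pattern, hog

def lbp_histogram(gray: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Uniform LBP codes summarised as a normalised occurrence histogram."""
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    return hist / max(hist.sum(), 1)

def hog_descriptor(gray: np.ndarray) -> np.ndarray:
    """HOG vector encoding local gradient orientation statistics."""
    return hog(gray, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(1, 1))

# Two periocular images could then be matched by, e.g., the chi-squared
# distance between their LBP histograms or the Euclidean distance between
# their HOG vectors.
```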
A Survey on Periocular Biometrics Research <s> Textural-based features <s> Retinotopic sampling and the Gabor decomposition have a well-established role in computer vision in general as well as in face authentication. The concept of Retinal Vision we introduce aims at complementing these biologically inspired tools with models of higher-order visual process, specifically the Human Saccadic System. We discuss the Saccadic Search strategy, a general purpose attentional mechanism that identifies semantically meaningful structures in images by performing "jumps" (saccades) between relevant locations. Saccade planning relies on a priori knowledge encoded by SVM classifiers. The raw visual input is analysed by means of a log-polar retinotopic sensor, whose receptive fields consist in a vector of modified Gabor filters designed in the log-polar frequency plane. Applicability to complex cognitive tasks is demonstrated by facial landmark detection and authentication experiments over the M2VTS and Extended M2VTS (XM2VTS) databases. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> In this paper, we perform a detailed investigation of various features that can be extracted from the periocular region of human faces for biometric identification. The emphasis of this study is to explore the BEST feature extraction approach used in stand-alone mode without any generative or discriminative subspace training. Simple distance measures are used to determine the verification rate (VR) on a very large dataset. Several filter-based techniques and local feature extraction methods are explored in this study, where we show an increase of 15% verification performance at 0.1% false accept rate (FAR) compared to raw pixels with the proposed Local Walsh-Transform Binary Pattern encoding. Additionally, when fusing our best feature extraction method with Kernel Correlation Feature Analysis (KCFA) [36], we were able to obtain VR of 61.2%. Our experiments are carried out on the large validation set of the NIST FRGC database [6], which contains facial images from environments with uncontrolled illumination. Verification experiments based on a pure 1–1 similarity matrix of 16028×8014 (~128 million comparisons) carried out on the entire database, where we find that we can achieve a raw VR of 17.0% at 0.1% FAR using our proposed Local Walsh-Transform Binary Pattern approach. This result, while may seem low, is more than the NIST reported baseline VR on the same dataset (12% at 0.1% FAR), when PCA was trained on the entire facial features for recognition [6]. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Given an image from a biometric sensor, it is important for the feature extraction module to extract an original set of features that can be used for identity recognition. This form of feature extraction has been referred to as Type I feature extraction. For some biometric systems, Type I feature extraction is used exclusively. However, a second form of feature extraction does exist and is concerned with optimizing/minimizing the original feature set given by a Type I feature extraction method. This second form of feature extraction has been referred to as Type II feature extraction (feature selection). 
In this paper, we present a genetic-based Type II feature extraction system, referred to as GEFE (Genetic & Evolutionary Feature Extraction), for optimizing the feature sets returned by Local Binary Pattern Type I feature extraction for periocular biometric recognition. Our results show that not only does GEFE dramatically reduce the number of features needed but the evolved feature sets also have higher recognition rates. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> In this paper, we will present a novel framework of utilizing periocular region for age invariant face recognition. To obtain age invariant features, we first perform preprocessing schemes, such as pose correction, illumination and periocular region normalization. And then we apply robust Walsh-Hadamard transform encoded local binary patterns (WLBP) on preprocessed periocular region only. We find the WLBP feature on periocular region maintains consistency of the same individual across ages. Finally, we use unsupervised discriminant projection (UDP) to build subspaces on WLBP featured periocular images and gain 100% rank-1 identification rate and 98% verification rate at 0.1% false accept rate on the entire FG-NET database. Compared to published results, our proposed approach yields the best recognition and identification results. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> We consider the problem of matching highly non-ideal ocular images where the iris information cannot be reliably used. Such images are characterized by non-uniform illumination, motion and de-focus blur, off-axis gaze, and non-linear deformations. To handle these variations, a single feature extraction and matching scheme is not sufficient. Therefore, we propose an information fusion framework where three distinct feature extraction and matching schemes are utilized in order to handle the significant variability in the input ocular images. The Gradient Orientation Histogram (GOH) scheme extracts the global information in the image; the modified Scale Invariant Feature Transform (SIFT) extracts local edge anomalies in the image; and a Probabilistic Deformation Model (PDM) handles nonlinear deformations observed in image pairs. The simple sum rule is used to combine the match scores generated by the three schemes. Experiments on the extremely challenging Face and Ocular Challenge Series (FOCS) database and a subset of the Face Recognition Grand Challenge (FRGC) database confirm the efficacy of the proposed approach to perform ocular recognition. <s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum. A number of aspects are studied, including: 1) grid adaptation to dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, 4) rotation compensation between query and test images, and 5) comparison with an iris machine expert. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, thus allowing to remove this step for computational efficiency.
Also, the performance is not affected substantially if we use a grid of fixed dimensions, or it is even better in certain situations, avoiding the need of accurate detection of the iris region. <s> BIB006 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> The periocular biometric comes into the spotlight recently due to several advantageous characteristics such as easily available and provision of crucial face information. However, many existing works are dedicated to extracting image features using texture based techniques such as local binary pattern (LBP). In view of the simplicity and effectiveness offered, this paper proposes to investigate into projection-based methods for periocular identity verification. Several well established projection-based methods such as principal component analysis, its variants and linear discriminant analysis will be adopted in our performance evaluation based on a subset of FERET face database. Our empirical results show that supervised learning methods significantly outperform those unsupervised learning methods and LBP in terms of equal error rate performance. <s> BIB007 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Iris recognition from at-a-distance face images has high applications in wide range of applications such as remote surveillance and for civilian identification. This paper presents a completely automated joint iris and periocular recognition approach from the face images acquired at-a-distance. Each of the acquired face images are used to detect and segment periocular images which are then employed for the iris segmentation. We employ complex texture descriptors using Leung-Mallik filters which can acquire multiple periocular features for more accurate recognition. Experimental results presented in this paper achieve 8.1% improvement in recognition accuracy over the best performing approach among SIFT, LBP and HoG presented in the literature. The combination of simultaneously segmented iris and periocular images achieves average rank-one recognition accuracy of 84.5%, i.e., an improvement of 52% than those from only using iris features, on independent test images from 131 subjects. In order to ensure the repeatability of the experiments, the CASIA.v4-distance, i.e., a publicly available database was employed and all the 142 subjects/images were considered in this work. <s> BIB008 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> In challenging image acquisition settings where the performance of iris recognition algorithms degrades due to poor segmentation of the iris, image blur, specular reflections, and occlusions from eye lids and eye lashes, the periocular region has been shown to offer better recognition rates. However, the definition of a periocular region is subject to interpretation. This paper investigates the question of what is the best periocular region for recognition by identifying sub-regions of the ocular image when using near-infrared (NIR) or visible light (VL) sensors. To determine the best periocular region, we test two fundamentally different algorithms on challenging periocular datasets of contrasting build on four different periocular regions. Our results indicate that system performance does not necessarily improve as the ocular region becomes larger. 
Rather, in NIR images the eye shape is more important than the brow or cheek as the image has little to no skin texture (leading to a smaller accepted region), while in VL images the brow is very important (requiring a larger region). <s> BIB009 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Iris and Periocular biometrics has proved its effectiveness in accurately verifying the subject of interest. Recent improvements in visible spectrum Iris and Periocular verification have further boosted its application to unconstrained scenarios. However existing visible Iris verification systems suffer from low quality samples because of the limited depth-of-field exhibited by the conventional Iris capture systems. In this work, we propose a robust Iris and Periocular verification scheme in visible spectrum using Light Field Camera (LFC). Since the light field camera can provide multiple focus images in single capture, we are motivated to investigate its applicability for robust Iris and Periocular verification by exploring its all-in-focus property. Further, the use of all-in-focus property will extend the depth-of-focus and overcome the problem of focus that plays a predominant role in robust Iris and Periocular verification. We first collect a new Iris and Periocular biometric database using both light field and conventional camera by simulating real life scenarios. We then propose a new scheme for feature extraction and classification by exploring the combination of Local Binary Patterns (LBP) and Sparse Reconstruction Classifier (SRC). Extensive experiments are carried out on the newly collected database to bring out the merits and demerits on applicability of light field camera for Iris and Periocular verification. Finally, we also present the results on combining the information from Iris and Periocular biometrics using weighted sum rule. <s> BIB010
The proposed framework reports top periocular performance against each dataset, as measured by rank-1 accuracy: (1) MORPH Album 1, 33.2%; (2) FRGC, 97.51%; (3) Georgia Tech, 92.4%; and (4) Notre Dame Twins, 98.03%. Furthermore, this work shows that the proposed periocular matcher (using only a small section of the face, about the eyes) compares favorably to a commercial full-face matcher. <s> BIB011 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Human identification based on iris biometrics requires high resolution iris images of a cooperative subject. Such images cannot be obtained in non-intrusive applications such as surveillance. However, the full region around the eye, known as the periocular region, can be acquired non-intrusively and used as a biometric. In this paper we investigate the use of periocular region for person identification. Current techniques have focused on choosing a single best frame, mostly manually, for matching. In contrast, we formulate, for the first time, person identification based on periocular regions as an image set classification problem. We generate periocular region image sets from the Multi Bio-metric Grand Challenge (MBGC) NIR videos. Periocular regions of the right eyes are mirrored and combined with those of the left eyes to form an image set. Each image set contains periocular regions of a single subject. For imageset classification, we use six state-of-the-art techniques and report their comparative recognition and verification performances. Our results show that image sets of periocular regions achieve significantly higher recognition rates than currently reported in the literature for the same database. <s> BIB012 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Visible spectrum iris verification has drawn substantial attention due to the feasibility, convenience and also accepted per-formance. This further allows one to perform the iris verification in an unconstrained environment at-a-distance and on the move. The integral part of the visible iris recognition rely on the accurate texture representation algorithm that can effectively capture the uniqueness of the texture even in the challenging conditions like reflection, illumination among others. In this paper, we explore a new scheme for the robust visible iris verification based on Binarized Statistical Image Features (BSIF). The core idea of the BSIF descriptor is to compute the binary code for each pixel by projecting them on the subspace which is learned from natural images using Independent Component Analysis (ICA). Thus, the BSIF is expected to encode the texture features more robustly when compared to contemporary schemes like Local Binary Patterns and its variants. The extensive experiments are carried out on the visible iris dataset captured using both Light field and conventional camera. The proposed feature extraction method is also extended for enhanced periocular recognition. Finally, we also present a comparative analysis with popular state-of-the-art iris recognition scheme. <s> BIB013 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Automated and accurate biometrics identification using periocular imaging has wide range of applications from human surveillance to improving performance for iris recognition systems, especially under less-constrained imaging environment. 
Restricted Boltzmann Machine is a generative stochastic neural network that can learn the probability distribution over its set of inputs. As a convolutional version of Restricted Boltzmann Machines, CRBMs aim to accommodate large image sizes and greatly reduce the computational burden. However, to the best of our knowledge, unsupervised feature learning methods have not been explored in the biometrics area except for face recognition. This paper explores the effectiveness of the CRBM model for periocular recognition. We perform experiments on a periocular image database from the largest number of subjects (300 subjects as test subjects) and simultaneously exploit key point features for improving the matching accuracy. The experimental results are presented on a publicly available database, the UBIPr database, and suggest the effectiveness of RBM feature learning for automated periocular recognition with a large number of subjects. The results from the investigation in this paper also suggest that supervised metric learning can be effectively used to achieve superior performance over the conventional Euclidean distance metric for periocular identification. <s> BIB014 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Recently, periocular biometrics has drawn a lot of attention of researchers and some efforts have been presented in the literature. In this paper, we propose a novel and robust approach for periocular recognition. In the approach, the face is detected in still face images, which is then aligned and normalized. We utilized the entire strip containing both eyes as the periocular region. For feature extraction, we computed the magnitude responses of the image filtered with a filter bank of complex Gabor filters. Feature dimensions are reduced by applying Direct Linear Discriminant Analysis (DLDA). The reduced feature vector is classified using a Parzen Probabilistic Neural Network (PPNN). The experimental results demonstrate a promising verification and identification accuracy; also, the robustness of the proposed approach is ascertained by providing a comprehensive comparison with some of the well known state-of-the-art methods using publicly available face databases: MBGC v2.0, GTDB, IITK and PUT. <s> BIB015 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> The concept of periocular biometrics emerged to improve the robustness of iris recognition to degraded data. Being a relatively recent topic, most of the periocular recognition algorithms work in a holistic way and apply a feature encoding/matching strategy without considering each biological component in the periocular area. This not only augments the correlation between the components in the resulting biometric signature, but also increases the sensitivity to particular data covariates. The main novelty in this paper is to propose a periocular recognition ensemble made of two disparate components: 1) one expert analyses the iris texture and exhaustively exploits the multispectral information in visible-light data and 2) another expert parameterizes the shape of eyelids and defines a surrounding dimensionless region-of-interest, from where statistics of the eyelids, eyelashes, and skin wrinkles/furrows are encoded. Both experts work on disjoint regions of the periocular area and meet three important properties. First, they produce practically independent responses, which is behind the better performance of the ensemble when compared to the best individual recognizer.
Second, they do not share particular sensitivity to any image covariate, which accounts for augmenting the robustness against degraded data. Finally, it should be stressed that we disregard information in the periocular region that can be easily forged (e.g., shape of eyebrows), which constitutes an active anticounterfeit measure. An empirical evaluation was conducted on two public data sets (FRGC and UBIRIS.v2), and points to consistent improvements in performance of the proposed ensemble over the state-of-the-art periocular recognition algorithms. <s> BIB016 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> In this paper, we propose a novel and robust approach for periocular recognition. Specifically, we propose the fusion of Local Phase Quantization (LPQ) and Gabor wavelet descriptors to improve recognition performance and achieve robustness. We have utilized publicly available challenging still face image databases: MBGC v2.0, GTDB, PUT and Caltech. In the approach, the face is detected and normalized using eye centres. The region around the left and right eyes, including the eyebrow, is extracted as left periocular and right periocular. The LPQ descriptor is then applied to extract the phase statistics features computed locally in a rectangular window. The descriptor is invariant to blur and also to uniform illumination changes. We also computed the Gabor magnitude response of the image, which encodes shape information over a broader range of scales. To reduce the dimensionality of the operators and to extract discriminative features, we further utilized DLDA (Direct Linear Discriminant Analysis). The experimental analysis demonstrates that the combination of LPQ and Gabor scores provides a significant improvement in performance and robustness over either applied individually. <s> BIB017 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> We consider the problem of matching face against iris images using ocular information. In biometrics, face and iris images are typically acquired using sensors operating in visible (VIS) and near-infrared (NIR) spectra, respectively. This presents a challenging problem of matching images corresponding to different biometric modalities, imaging spectra, and spatial resolutions. We propose the usage of ocular traits that are common between face and iris images (viz., iris and ocular region) to perform matching. Iris matching is performed using a commercial software, while ocular regions are matched using three different techniques: Local Binary Patterns (LBP), Normalized Gradient Correlation (NGC), and Joint Dictionary-based Sparse Representation (JDSR). Experimental results on a database containing 1358 images of 704 subjects indicate that the ocular region can provide better performance than the iris biometric under a challenging cross-modality matching scenario. <s> BIB018 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Partially constrained human recognition through the periocular region has emerged as a new paradigm in biometric security. This article proposes Phase Intensive Global Pattern (PIGP): a novel global feature based on the variation of intensity of pixel-neighbours with respect to different phases. The feature thus extracted is claimed to be rotation invariant and hence useful to identify humans from images with face-tilt.
The performance of the proposed feature is evaluated on the UBIRISv2 database, which is a very large standard dataset with unconstrained periocular images captured under visible spectrum. The proposed work has been compared with Circular Local Binary Pattern (CLBP) and Walsh Transform, and experimentally found to yield higher accuracy, though with increased computational complexity and an increased size of the feature vector. <s> BIB019 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> This paper introduces a novel face recognition problem domain: the medically altered face for gender transformation. A data set of >1.2 million face images was constructed from wild videos obtained from YouTube of 38 subjects undergoing hormone replacement therapy (HRT) for gender transformation over a period of several months to three years. The HRT achieves gender transformation by severely altering the balance of sex hormones, which causes changes in the physical appearance of the face and body. This paper explores the impact of face changes due to hormone manipulation, its ability to disguise the face and, hence, its ability to affect match rates. Face disguise is achieved organically as hormone manipulation causes pathological changes to the body resulting in a modification of face appearance. This paper analyzes and evaluates face components versus full face algorithms in an attempt to identify regions of the face that are resilient to the HRT process. The experiments reveal that periocular face components using simple texture-based face matchers, local binary patterns, histogram of gradients, and patch-based local binary patterns outperform matching against the full face. Furthermore, the experiments reveal that a fusion of the periocular using one of the simple texture-based approaches (patch-based local binary patterns) outperforms two Commercial Off The Shelf (COTS) full face systems: 1) PittPatt SDK and 2) Cognetic FaceVACS v8.5. The evaluated periocular-fused patch-based face matcher outperforms PittPatt SDK v5.2.2 by 76.83% and Cognetic FaceVACS v8.5 by 56.23% for rank-1 accuracy. <s> BIB020 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> In this paper, we propose to combine sclera and periocular features for identity verification. The proposal is particularly useful in applications related to face recognition when the face is partially occluded with only the periocular region revealed. Due to its relatively new exposition in the literature of biometrics, particular attention will be paid to sclera feature extraction in this work. For periocular feature extraction, structured random projections were adopted to extract compressed vertical and horizontal components of image features. The binary sclera features are eventually fused with the periocular features at a score level. Extensive experiments have been performed on the UBIRIS v1 session1 and session2 databases to assess the verification performance before and after fusion. Around 5% of equal error rate performance was observed to be enhanced by fusing sclera with periocular features compared with that before fusion.
<s> BIB021 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Announcement of an iris and periocular dataset, with 10 different mobile setups. Mobile biometric recognition approach based on iris and periocular information. Improvements from a sensor-specific color calibration technique are reported. Biometric recognition feasibility over mobile cross-sensor setups is shown. Preferable mobile setups are pointed out. In recent years, the usage of mobile devices has increased substantially, as have their capabilities and applications. Extending biometric technologies to these gadgets is desirable because it would facilitate biometric recognition almost anytime, anywhere, and by anyone. The present study focuses on biometric recognition in mobile environments using iris and periocular information as the main traits. Our study makes three main contributions, as follows. (1) We demonstrate the utility of an iris and periocular dataset, which contains images acquired with 10 different mobile setups and the corresponding iris segmentation data. This dataset allows us to evaluate iris segmentation and recognition methods, as well as periocular recognition techniques. (2) We report the outcomes of device-specific calibration techniques that compensate for the different color perceptions inherent in each setup. (3) We propose the application of well-known iris and periocular recognition strategies based on classical encoding and matching techniques, as well as demonstrating how they can be combined to overcome the issues associated with mobile environments. <s> BIB022 </s> A Survey on Periocular Biometrics Research <s> Textural-based features <s> Face recognition performance degrades significantly under occlusions that occur intentionally or unintentionally due to head gear or hair style. In many incidents captured by surveillance videos, the offenders cover their faces leaving only the periocular region visible. We present an extensive study on periocular region based person identification in video. While previous techniques have handpicked a single best frame from videos, we formulate, for the first time, periocular region based person identification in video as an image-set classification problem. For thorough analysis, we perform experiments on periocular regions extracted automatically from RGB videos, NIR videos and hyperspectral image cubes. Each image-set is represented by four heterogeneous feature types and classified with six state-of-the-art image-set classification algorithms. We propose a novel two stage inverse Error Weighted Fusion algorithm for feature and classifier score fusion. The proposed two stage fusion is superior to single stage fusion. Comprehensive experiments were performed on four standard datasets, MBGC NIR and visible spectrum (Phillips et al., 2005), CMU Hyperspectral (Denes et al., 2002) and UBIPr (Padole and Proenca, 2012). We obtained average rank-1 recognition rates of 99.8, 98.5, 97.2, and 99.5% respectively which are significantly higher than the existing state of the art. Our results demonstrate the feasibility of image-set based periocular biometrics for real world applications. <s> BIB023
|
BGM: Bayesian Graphical Models were used by . They adapted an iris matcher based on correlation filters applied to non-overlapping image patches. Patches of gallery and probe images are cross-correlated, and the output is used to feed a Bayesian graphical model (BGM) trained to consider non-linear deformations and occlusions between images. BGMs were also used by BIB009 and BIB005 , although called PDM or Probabilistic Deformation Models in these works. BSIF: Binarized Statistical Image Features BIB013 BIB010 computes a binary code for each pixel by linearly projecting image patches onto a subspace, whose basis vectors are learnt from natural images using Independent Component Analysis (ICA). Since it is based on natural images, it is expected that BSIF encodes texture features more robustly than other methods that also produce binary codes, such as LBPs. CRBM: Convolutional Restricted Boltzmann Machines are a convolutional version of the Restricted Boltzmann Machines, previously used in handwriting recognition, image classification, and face verification. CRBM, proposed for periocular recognition by BIB014 , is a generative stochastic neural network that learns a probability distribution over a set of inputs generated by filters which capture edge orientation and spatial connections between image patches. DCT: Discrete Cosine Transform (Juefei-Xu et al., 2010) expresses data points by a sum of cosine functions oscillating at different frequencies (which in 2D correspond to horizontal and vertical frequencies). The 2D-DCT is computed in image blocks of size N × N (with N=3,5,7...) and the N^2 coefficients are assigned as features to the center pixel of the block. DWT: Discrete Wavelet Transform was used by Juefei-Xu et al. (2010) and BIB015 with respect to the Haar wavelet, which, in 2D, leads to an approximation of image details in three orientations: horizontal, vertical and diagonal. Force Field Transform (Juefei-Xu et al., 2010) employs an analogy to gravitational force. Each pixel exerts a 'force' on its neighbors inversely proportional to the distance between them, weighted by the pixel value. The net force at one point is the aggregate of the forces exerted by all other pixels within a 5 × 5 neighborhood. Gabor filters are texture filters selective in frequency and orientation. A set of different frequencies and orientations is usually employed (see the code sketch below). For example, BIB001 and BIB006 BIB016 employed five frequencies and six orientations equally spaced in the log-polar frequency plane, achieving full coverage of the spectrum. BIB002 employed one frequency and four orientations, BIB017 employed one frequency and one orientation only, and BIB015 employed five frequencies and six orientations. Lastly, Cao and Schmid (2014) used two frequencies and eight orientations, with Gabor responses further encoded by LBP operators (below). GIST perceptual descriptors BIB022 consist of five perceptual dimensions related to scene description, correlated with the second-order statistics and spatial arrangement of structured image components: naturalness, which quantizes the vertical and horizontal edge distribution; openness, presence or lack of reference points; roughness, size of the largest prominent object; expansion, depth of the space gradient; and ruggedness, which quantizes the contour orientation that deviates from the horizontal.
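To make the Gabor filter bank described above concrete, the following minimal Python sketch (assuming OpenCV and NumPy) builds a bank with five frequencies and six orientations and samples the response magnitude at a sparse grid of points. The wavelength progression, kernel size and bandwidth are illustrative assumptions rather than the exact parameters of the cited works.

    import cv2
    import numpy as np

    def gabor_bank(n_freq=5, n_orient=6, ksize=31, base_wavelength=4.0):
        # Bank of even/odd (cosine/sine) Gabor kernel pairs, so the complex
        # response magnitude can be computed; parameter values are illustrative.
        bank = []
        for k in range(n_freq):
            lambd = base_wavelength * (2 ** k)   # wavelength doubles per scale (assumption)
            sigma = 0.56 * lambd                 # bandwidth tied to wavelength (assumption)
            for j in range(n_orient):
                theta = j * np.pi / n_orient     # orientations equally spaced in [0, pi)
                even = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 1.0, 0)
                odd = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 1.0, np.pi / 2)
                bank.append((even, odd))
        return bank

    def gabor_magnitude_features(img, grid_points, bank):
        # Gabor magnitude sampled at a sparse grid of (y, x) points.
        img = img.astype(np.float32)
        feats = []
        for even, odd in bank:
            re = cv2.filter2D(img, cv2.CV_32F, even)
            im = cv2.filter2D(img, cv2.CV_32F, odd)
            mag = np.sqrt(re ** 2 + im ** 2)
            feats.extend(mag[y, x] for (y, x) in grid_points)
        return np.asarray(feats)

Feature vectors extracted this way at fixed grid positions can be compared with simple distance measures, in line with the grid-based analysis described in this section.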
HOG: Histogram of Oriented Gradients. In HOG, the gradient orientation and magnitude are computed at each pixel. The histogram of orientations is then built, with each bin accumulating the corresponding gradient magnitudes. In PHOG, or Pyramid of Histogram of Oriented Gradients, instead of using image patches, HOG is extracted from the whole image. Then, the image is split up several times like a quad-tree and all sub-images get their own HOG. JDSR: Joint Dictionary-based Sparse Representation BIB018 computes a compact dictionary using a set of training images. A new image is represented as a sparse linear combination of the dictionary elements. A similar approach is SRC, or Sparse Representation Classification BIB010 . An image is represented as a sparse linear combination of training images plus sparse errors due to perturbations. Images can be in original raw form or represented in any feature space. The features used included Eigenfaces, Laplacianfaces, Randomfaces, Fisherfaces, and downsampled versions of the raw image. BIB010 also tested BSIF and LBP features. Laws masks were used by BIB002 . Five 1D masks capturing shapes of level, edge, spot, wave and ripple were employed. In 2D, masks are 1D-convolved in all possible combinations with an image, thus producing 25 local features. LBP: Local Binary Patterns were first introduced for texture classification, since they can identify spots, line ends, edges, corners and other patterns (a minimal sketch is given below). For each pixel p, a 3 × 3 neighborhood is considered. Every neighbor p_i (i=1...8) is assigned a binary value of 1 if p_i > p, or 0 otherwise. The binary values are then concatenated into an 8-bit binary number, and the decimal equivalent is assigned to characterize the texture at p, leading to 2^8 = 256 possible labels. The LBP values of all pixels within a given patch are then quantized into an 8-bin histogram. LBP is one of the most popular periocular matching techniques in the literature (Table 3), with many variants proposed. One is Uniform LBP or ULBP, used to reduce the length of the feature vector and achieve rotation invariance. An LBP is called uniform if it contains at most two bitwise transitions from 0 to 1 or vice-versa. A separate label is used for each uniform pattern, and all the non-uniform patterns are labeled with a single label, yielding 59 different labels instead of the 256 of regular LBP. The neighborhood can also be made larger to allow multi-resolution representations of the local texture pattern, leading to a circle of radius R, also called Circular LBP or CLBP BIB019 . To avoid a large number of binary values as R increases, only neighbors separated by a certain angular distance may be chosen. In Three-Patch LBP or TPLBP/3PLBP BIB011 BIB020 , pixel p is compared with the central pixel of two (non-adjacent) patches situated across a circle of radius R. Application of 3PLBP to multiple image scales across a Gaussian pyramid leads to the Hierarchical Three-Patch LBP or H3PLBP BIB011 . Further extension to two circles R1 and R2 results in Four-Patch LBP or FPLBP, involving four patches instead of three in the comparison. The use of subspace representation methods applied to LBPs is also very popular to reduce the feature set or improve performance, for example: BIB003 , BIB004 BIB007 , BIB012 BIB023 and BIB014 . Other works have also proposed to apply LBP on top of other feature extraction methods, for example Juefei-Xu et al. (2010); Juefei-Xu and Savvides (2012), BIB019 or Cao and Schmid (2014) .
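The following is a minimal Python sketch of the basic 3 × 3 LBP operator just described, pooling the per-pixel labels of a patch into a normalized histogram. The 256-bin pooling shown here is one common choice; histogram granularity varies across the cited works.

    import numpy as np

    def lbp_histogram(patch):
        # Basic LBP: compare the 8 neighbors of each pixel against the center,
        # concatenate the resulting bits into a label in [0, 255], then pool
        # all labels of the patch into a histogram.
        p = patch.astype(np.int32)
        c = p[1:-1, 1:-1]  # center pixels (borders are skipped)
        # 8 neighbor offsets, enumerated clockwise from the top-left corner
        shifts = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        code = np.zeros_like(c)
        for bit, (dy, dx) in enumerate(shifts):
            neigh = p[dy:dy + c.shape[0], dx:dx + c.shape[1]]
            code |= (neigh > c).astype(np.int32) << bit
        hist = np.bincount(code.ravel(), minlength=256)
        return hist / hist.sum()

Variants such as ULBP or CLBP only change how labels are defined and grouped; the compare-threshold-pool structure stays the same.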
LMF: Leung-Mallik filters are a set of filters constructed from Gaussian, Gaussian-derivative and Laplacian-of-Gaussian kernels at different orientations and scales. In the experiments by BIB008 , filter responses from an image training set were clustered by k-means to construct a texton dictionary. The clusters (textons) producing the lowest EER were then used to classify test images. LoG: the Laplacian of Gaussian filter is an edge detector, used by BIB002 for periocular recognition. LPQ: Local Phase Quantization extracts phase statistics of local patches by selective frequency filters in the Fourier domain. The phases of the four low-frequency coefficients are quantized in four bins (a code sketch is given below, after the remaining descriptors). NGC: Normalized Gradient Correlation BIB018 computes in the Fourier domain the normalized correlation between the gradients of two images in pair-wise patches. PIGP: Phase Intensive Global Pattern BIB019 computes the intensity variation of pixel-neighborhoods with respect to different phases by convolution with a bank of 3 × 3 filters. The filters have a 'U' shape when seen in 3D, with different rotations corresponding to the different phases. Four different angles between 0 and 3π/4 in steps of π/4 were considered. SRP: Structured Random Projections BIB021 encode horizontal and vertical directional features by means of 1D horizontal and vertical binary vectors (projection elements). Such elements have a single group of contiguous '1' values, with the location of the '1's randomly determined. The number k of projection elements and the length l of contiguous '1's are to be fixed experimentally, with k=10 and l=3, 6, ..., 150 tested. Walsh masks are convolution filters which only contain +1 and -1 values, thus capturing the binary characteristics of an image in terms of contrast. N different 1D-filters of N elements are produced (N=3,5,7...) and combined in all possible pairs, yielding N^2 2D-filters. Walsh masks were used by BIB002 , Juefei-Xu and Savvides (2012) and BIB019 to compute the Walsh-Hadamard Transform based LBPs (WLBP), which consists of extracting LBPs from the input image after it has been filtered with Walsh masks.
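As an illustration of the LPQ encoding mentioned above, the sketch below (assuming NumPy and SciPy) computes local Fourier coefficients at four low frequencies with separable complex filters and quantizes each phase by quadrant, i.e., by the signs of the real and imaginary parts, yielding an 8-bit code per pixel. The window size and the pooling into a 256-bin histogram are assumptions rather than the exact choices of the cited works.

    import numpy as np
    from scipy.signal import convolve2d

    def lpq_histogram(img, win=7):
        # Separable complex filters evaluating the local Fourier transform at
        # the four lowest non-trivial frequencies (a,0), (0,a), (a,a), (a,-a).
        x = np.arange(win) - (win - 1) / 2
        a = 1.0 / win
        w0 = np.ones((1, win))                     # all-pass (DC) window
        w1 = np.exp(-2j * np.pi * a * x)[None, :]  # complex exponential at frequency a
        freqs = [(w1, w0), (w0, w1), (w1, w1), (w1, np.conj(w1))]
        code = np.zeros(img.shape, dtype=np.uint8)
        bit = 0
        for wx, wy in freqs:
            # Filter rows with wx, then columns with wy (separable 2D filtering)
            f = convolve2d(convolve2d(img.astype(float), wx, mode='same'), wy.T, mode='same')
            code |= (f.real > 0).astype(np.uint8) << bit
            bit += 1
            code |= (f.imag > 0).astype(np.uint8) << bit
            bit += 1
        hist = np.bincount(code.ravel(), minlength=256)
        return hist / hist.sum()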
|
A Survey on Periocular Biometrics Research <s> Shape-based features <s> A wide variety of applications in forensic, government, and commercial fields require reliable personal identification. However, the recognition performance is severely affected when encountering non-ideal images caused by motion blur, poor contrast, various expressions, or illumination artifacts. In this paper, we investigated the use of shape-based eyebrow features under non-ideal imaging conditions for biometric recognition and gender classification. We extracted various shape-based features from the eyebrow images and compared three different classification methods: Minimum Distance Classifier (MD), Linear Discriminant Analysis Classifier (LDA) and Support Vector Machine Classifier (SVM). The methods were tested on images from two publicly available facial image databases: the Multiple Biometric Grand Challenge (MBGC) database and the Face Recognition Grand Challenge (FRGC) database. Obtained recognition rates of 90% using the MBGC database and 75% using the FRGC database, as well as gender classification recognition rates of 96% and 97% for each database respectively, suggest the shape-based eyebrow features may be used for biometric recognition and soft biometric classification. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Shape-based features <s> The concept of periocular biometrics emerged to improve the robustness of iris recognition to degraded data. Being a relatively recent topic, most of the periocular recognition algorithms work in a holistic way and apply a feature encoding/matching strategy without considering each biological component in the periocular area. This not only augments the correlation between the components in the resulting biometric signature, but also increases the sensitivity to particular data covariates. The main novelty in this paper is to propose a periocular recognition ensemble made of two disparate components: 1) one expert analyses the iris texture and exhaustively exploits the multispectral information in visible-light data and 2) another expert parameterizes the shape of eyelids and defines a surrounding dimensionless region-of-interest, from where statistics of the eyelids, eyelashes, and skin wrinkles/furrows are encoded. Both experts work on disjoint regions of the periocular area and meet three important properties. First, they produce practically independent responses, which is behind the better performance of the ensemble when compared to the best individual recognizer. Second, they do not share particular sensitivity to any image covariate, which accounts for augmenting the robustness against degraded data. Finally, it should be stressed that we disregard information in the periocular region that can be easily forged (e.g., shape of eyebrows), which constitutes an active anticounterfeit measure. An empirical evaluation was conducted on two public data sets (FRGC and UBIRIS.v2), and points to consistent improvements in performance of the proposed ensemble over the state-of-the-art periocular recognition algorithms. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Shape-based features <s> Recent studies in biometrics have shown that the periocular region of the face is sufficiently discriminative for robust recognition, and particularly effective in certain scenarios such as extreme occlusions, and illumination variations where traditional face recognition systems are unreliable.
In this paper, we first propose a fully automatic, robust and fast graph-cut based eyebrow segmentation technique to extract the eyebrow shape from a given face image. We then propose an eyebrow shape-based identification system for periocular face recognition. Our experiments have been conducted over large datasets from the MBGC and AR databases and the resilience of the proposed approach has been evaluated under varying data conditions. The experimental results show that the proposed eyebrow segmentation achieves high accuracy with an F-Measure of 99.4% and the identification system achieves rates of 76.0% on the AR database and 85.0% on the MBGC database. <s> BIB003
|
Eyelid shape descriptors BIB002 extract several properties from the polynomial encoding each eyelid, including: the accumulated curvature at point i (out of t), defined as ∑_{j=1}^{i} ∂²y_j/∂x² (illustrated in the sketch below); the shape context, represented by the histogram h_i of (x_i − x_j, y_i − y_j) at each point (x_i, y_i), ∀ j ≠ i; and the Elliptical Fourier Descriptors (EFD) parameterizing the y_i coordinates of the eyelids. Proenca (2014) also applied LBP to the eyelid region only. Eyebrow shape was studied by Dong and Woodard (2011) and BIB003 . BIB001 encoded rectangularity, eccentricity, isoperimetric quotient, area percentage of different sub-regions, and critical points (comprising the right/left-most points, the highest point and the centroid). BIB003 proposed the use of shape context histograms encoding the distribution of eyebrow points relative to a given (reference) point, and Procrustes analysis to represent the eyebrow shape asymmetry.
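To illustrate the accumulated-curvature descriptor above, the sketch below (in Python) fits a polynomial y(x) to sampled eyelid boundary points and accumulates its second derivative point by point. The polynomial degree is an assumption, as the fitting details are not specified here.

    import numpy as np

    def eyelid_accumulated_curvature(xs, ys, degree=4):
        # Fit a polynomial y(x) to the eyelid boundary samples, then return
        # the accumulated curvature sum_{j<=i} d2y/dx2(x_j) at every point i.
        poly = np.poly1d(np.polyfit(xs, ys, degree))
        d2 = poly.deriv(2)
        return np.cumsum(d2(xs))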
|
A Survey on Periocular Biometrics Research <s> Color-based features <s> We evaluate the utility of the periocular region appearance cues for biometric identification. Even though periocular region is considered to be a highly discriminative part of a face, its utility as an independent modality or as a soft biometric is still an open ended question. It is our goal to establish a performance metric for the periocular region features so that their potential use in conjunction with iris or face can be evaluated. In this approach, we employ the local appearance based feature representation, where the image is divided into spatially salient patches, and histograms of texture and color are computed for each patch. The images are matched by computing the distance between the corresponding feature representations using various distance metrics. We report recognition results on images captured in the visible and near-infrared (NIR) spectrum. For the color periocular region data consisting of about 410 subjects and the NIR images of 85 subjects, we obtain the Rank-1 recognition rate of 91% and 87% respectively. Furthermore, we also demonstrate that recognition performance of the periocular region images is comparable to that of face. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Color-based features <s> This paper investigates the effectiveness of local appearance features such as Local Binary Patterns, Histograms of Oriented Gradient, Discrete Cosine Transform, and Local Color Histograms extracted from periocular region images for soft classification on gender and ethnicity. These features are classified by Artificial Neural Network or Support Vector Machine. Experiments are performed on visible and near-IR spectrum images derived from FRGC and MBGC datasets. For 4232 FRGC images of 404 subjects, we obtain baseline gender and ethnicity classifications of 97.3% and 94%. For 350 MBGC images of 60 subjects, we obtain baseline gender and ethnicity results of 90% and 89%. <s> BIB002
|
LCH: Local Color Histograms from image patches were used by BIB001 . They experimented with the RGB and HSV spaces and their sub-spaces, finding that the RG (red-green) color space outperformed the others, with a 4 × 4 histogram giving better results than coarser or finer resolutions. Thus, each 4 × 4 histogram provides a 16-element feature vector per patch (a sketch is given below). LCH were also used by BIB002 for gender and ethnicity classification using periocular data (Section 7).
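A minimal sketch of such a per-patch color histogram follows. Whether the cited work bins raw R and G channel values or a normalized chromaticity is not stated here, so binning the raw channels is an assumption.

    import numpy as np

    def local_color_histogram(patch, bins=4):
        # 2D histogram over the R and G channels of a uint8 RGB patch,
        # flattened to a bins*bins vector (16 elements for bins=4).
        r = patch[..., 0].ravel()
        g = patch[..., 1].ravel()
        hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 256], [0, 256]])
        hist = hist.ravel()
        return hist / (hist.sum() + 1e-8)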
|
A Survey on Periocular Biometrics Research <s> Local features <s> In challenging image acquisition settings where the performance of iris recognition algorithms degrades due to poor segmentation of the iris, image blur, specular reflections, and occlusions from eye lids and eye lashes, the periocular region has been shown to offer better recognition rates. However, the definition of a periocular region is subject to interpretation. This paper investigates the question of what is the best periocular region for recognition by identifying sub-regions of the ocular image when using near-infrared (NIR) or visible light (VL) sensors. To determine the best periocular region, we test two fundamentally different algorithms on challenging periocular datasets of contrasting build on four different periocular regions. Our results indicate that system performance does not necessarily improve as the ocular region becomes larger. Rather, in NIR images the eye shape is more important than the brow or cheek as the image has little to no skin texture (leading to a smaller accepted region), while in VL images the brow is very important (requiring a larger region). <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Local features <s> We concentrate on utilization of the facial periocular region for biometric identification. Although this region has superior discriminative characteristics, as compared to mouth and nose, it has not been frequently used as an independent modality for personal identification. We employ a feature-based representation, where the associated periocular image is divided into left and right sides, and descriptor vectors are extracted from these using popular feature extraction algorithms SIFT, SURF, BRISK, ORB, and LBP. We also concatenate descriptor vectors. Utilizing FLANN and Brute Force matchers, we report recognition rates and ROC. For the periocular region image data, obtained from the widely used FERET database consisting of 865 subjects, we obtain a Rank-1 recognition rate of 96.8% for full frontal and different facial expressions in same session cases. We include a summary of existing methods, and show that the proposed method produces lower/comparable error rates with respect to the current state of the art. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Local features <s> We present a new system for biometric recognition using periocular images. The feature extraction method employed describes neighborhoods around key points by projection onto harmonic functions which estimate the presence of a series of various symmetric curve families around such key points. The isocurves of such functions are highly symmetric w.r.t. the key points and the estimated coefficients have well defined geometric interpretations. The descriptors used are referred to as Symmetry Assessment by Feature Expansion (SAFE). Extraction is done across a set of discrete points of the image, uniformly distributed in a rectangular-shaped grid positioned in the eye centre. Experiments are done with two databases of iris data, one acquired with a close-up iris camera, and another in visible light with a webcam. The two databases have been annotated manually, meaning that the radius and centre of the pupil and sclera circles are available, which are used as input for the experiments. Results show that this new system has a performance comparable with other periocular recognition approaches.
We particularly carry out comparative experiments with another periocular system based on Gabor features extracted from the same set of grid points, with the fusion of the two systems resulting in an improved performance. We also evaluate an iris texture matcher, providing fusion results with the periocular systems as well. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Local features <s> Partially constrained human recognition through the periocular region has emerged as a new paradigm in biometric security. This article proposes Phase Intensive Global Pattern (PIGP): a novel global feature based on the variation of intensity of pixel-neighbours with respect to different phases. The feature thus extracted is claimed to be rotation invariant and hence useful to identify humans from images with face-tilt. The performance of the proposed feature is evaluated on the UBIRISv2 database, which is a very large standard dataset with unconstrained periocular images captured under visible spectrum. The proposed work has been compared with Circular Local Binary Pattern (CLBP) and Walsh Transform, and experimentally found to yield higher accuracy, though with increased computational complexity and an increased size of the feature vector. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Local features <s> The article proposes a novel multi-scale local feature based periocular recognition technique which is capable of extracting high-dimensional subtle features existent in the iris region as well as low-dimensional gross features in the periphery skin region of the iris. A set of filter banks of different scales is employed to exploit the phase-intensive patterns in a visible spectrum periocular image of a subject captured from a distance in a partial non-cooperative scenario. The proposed technique is verified with experiments on near-infrared illumination databases like BATH and CASIA-IrisV3-Lamp. Experiments have been further extended to images from visible spectrum ocular databases like UBIRISv2 and low-resolution eye regions extracted from the FERETv4 face database to establish that the proposed feature performs comparably better than existing local features. To find the robustness of the proposed approach, the low resolution visible spectrum images of the mentioned databases are converted to grayscale images. The proposed approach yields unique patterns from these grayscale images. The ability to find coarse-to-fine features in multi-scale and different phases is accountable for the improved robustness of the proposed approach.
|
In local approaches, a sparse set of characteristic points (called key points) is detected first. Local features encode properties of the neighborhood around key points only, leading to local key point descriptors. Since the number of detected key points is not necessarily the same in each image, the resulting feature vector may not be of constant length. Therefore, the matching algorithm has to compare each key point of one image against all key points of the other image to find a pair match, thus increasing the computation time. The output from the matching function is typically the number of matched points, although a distance measurement between pairs may also be returned. To achieve scale invariance, key points are usually detected at different scales. Different key point detection algorithms exist, with some of the feature extraction methods of this section also having their own key point extraction method. For example, detection of key points with the SIFT feature extractor relies on a difference of Gaussians (DoG) function in the scale space, whereas detection with SURF is based on the Hessian matrix, but relying on integral images to speed up computations. Newer algorithms such as BRISK and ORB claim to provide an even faster alternative to SIFT or SURF key point extraction methods. BIB002 employs one key point extraction method (SURF), and then computes the SIFT, SURF, BRISK and ORB descriptors from these key points. Other periocular works like BIB002 , BIB003 and extract key point descriptors at selected sampling points in the center of image patches only, resembling the grid-like analysis of global approaches (Figure 1, right) but using local features. This way, no key point detection is carried out, and the obtained feature vector is of fixed size. The following local descriptors have been proposed in the literature for periocular recognition. BRISK: the Binary Robust Invariant Scalable Keypoints descriptor is a binary string built by concatenating the results of simple brightness comparison tests. BRISK applies a sampling pattern of N=60 locations equally spaced on circles concentric with the key point. The origin of the sampling pattern is rotated according to the gradient angle around the key point to achieve rotation invariance. The intensity of all possible short-distance pixel pairs p_i and p_j of the sampling pattern is then compared, assigning a binary value of 1 if p_i > p_j, and 0 otherwise. The resulting feature vector at each key point has 512 bits. BRISK is employed for periocular recognition by BIB002 . ORB: Oriented FAST and Rotated BRIEF is based on the FAST corner detector and the visual descriptor BRIEF (Binary Robust Independent Elementary Features). As in BRISK, BRIEF also uses binary tests between pixels. Pixel pairs are considered from an image patch of size S × S. The original BRIEF deals poorly with rotation, so in ORB it is proposed to steer the descriptor according to the dominant rotation of the key point (obtained from the first-order moments). The parameters employed in ORB are S=31 and a vector length of 256 bits per key point. ORB was used for periocular recognition by BIB002 . A matching sketch for such binary descriptors is given below.
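As a usage illustration for binary key point descriptors such as BRISK and ORB, the following sketch (assuming OpenCV's Python bindings) detects ORB key points in two periocular images and scores the comparison by the number of cross-checked Hamming-distance matches, following the number-of-matched-points convention described at the start of this section. The function name and parameter values are illustrative.

    import cv2

    def orb_match_score(img1, img2, n_features=500):
        # Detect ORB key points/descriptors and count mutually-nearest matches
        # under the Hamming distance (appropriate for binary descriptors).
        orb = cv2.ORB_create(nfeatures=n_features)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        if d1 is None or d2 is None:
            return 0
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        return len(matcher.match(d1, d2))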
PILP: Phase Intensive Local Pattern was used by BIB005 , following the work in BIB004 where they presented PIGP (Phase Intensive Global Pattern). PILP uses a filter bank similar to that of PIGP, but for key point extraction rather than feature encoding. The size of the filters varies from 3 × 3 to 9 × 9 to cope with scale variations. This way, key points are the local extrema among pixels in their own window and in the windows of neighboring phases. Feature extraction is then done by computing a gradient orientation histogram in the neighborhood of each key point, in a similar way to the SIFT descriptor, described below. SAFE: Symmetry Assessment by Feature Expansion BIB003 describes neighborhoods around key points by projection onto harmonic functions which estimate the presence of various symmetric curve families. The iso-curves of such functions are highly symmetric w.r.t. the key points, and the estimated coefficients have well-defined geometric interpretations. The detected patterns resemble shapes such as parabolas, circles, spirals, etc. Detection is done in concentric circular bands of different radii around key points, with radii log-equidistantly sampled. Extracted features therefore quantify the presence of pattern families in annular rings around each key point. SIFT: Scale Invariant Feature Transform. Together with LBP, SIFT is the most popular matching technique employed in the literature (Table 3). SIFT encodes local orientation via histograms of gradients around key points. The dominant orientation of a key point is first obtained from the peak of the gradient orientation histogram in a 16×16 window. The key point feature vector of dimension 4×4×8 = 128 is then obtained by computing 8-bin gradient orientation histograms (relative to the dominant orientation to achieve rotation invariance) in 4 × 4 sub-regions around the key point. m-SIFT (modified SIFT) is a SIFT matcher where additional constraints are imposed on the angle and distance of matched key points BIB001 (a sketch is given below). SURF: Speeded Up Robust Features was aimed at providing a detector and feature extractor faster than SIFT and other local feature algorithms. Feature extraction is done over a 4 × 4 sub-region around the key point (relative to the dominant orientation) using Haar wavelet responses. SURF is employed for periocular recognition by Juefei-Xu et al. (2010), BIB002 and BIB005 .
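The sketch below illustrates SIFT matching with an added orientation-consistency constraint in the spirit of m-SIFT. It assumes OpenCV >= 4.4, where SIFT_create is available; the ratio test, the median-based dominant rotation and the angular tolerance are illustrative assumptions rather than the exact constraints of BIB001 .

    import cv2
    import numpy as np

    def msift_match_score(img1, img2, ratio=0.75, angle_tol=20.0):
        # SIFT matching with Lowe's ratio test plus a constraint on the
        # orientation difference of matched key points (m-SIFT-style).
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        if d1 is None or d2 is None:
            return 0
        good = []
        for pair in cv2.BFMatcher().knnMatch(d1, d2, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append(pair[0])
        if not good:
            return 0
        # Keep matches whose orientation difference agrees with the dominant
        # rotation between the two images (tolerance value is hypothetical).
        diffs = np.array([(k1[m.queryIdx].angle - k2[m.trainIdx].angle) % 360 for m in good])
        dominant = np.median(diffs)
        consistent = np.abs((diffs - dominant + 180) % 360 - 180) < angle_tol
        return int(consistent.sum())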
|
A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Retinotopic sampling and the Gabor decomposition have a well-established role in computer vision in general as well as in face authentication. The concept of Retinal Vision we introduce aims at complementing these biologically inspired tools with models of higher-order visual process, specifically the Human Saccadic System. We discuss the Saccadic Search strategy, a general purpose attentional mechanism that identifies semantically meaningful structures in images by performing "jumps" (saccades) between relevant locations. Saccade planning relies on a priori knowledge encoded by SVM classifiers. The raw visual input is analysed by means of a log-polar retinotopic sensor, whose receptive fields consist in a vector of modified Gabor filters designed in the log-polar frequency plane. Applicability to complex cognitive tasks is demonstrated by facial landmark detection and authentication experiments over the M2VTS and Extended M2VTS (XM2VTS) databases. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> A fundamental challenge in face recognition lies in determining which facial characteristics are important in the identification of faces. Several studies have indicated the significance of certain facial features in this regard, particularly internal ones such as the eyes and mouth. Surprisingly, however, one rather prominent facial feature has received little attention in this domain: the eyebrows. Past work has examined the role of eyebrows in emotional expression and nonverbal communication, as well as in facial aesthetics and sexual dimorphism. However, it has not been made clear whether the eyebrows play an important role in the identification of faces. Here, we report experimental results which suggest that for face recognition the eyebrows may be at least as influential as the eyes. Specifically, we find that the absence of eyebrows in familiar faces leads to a very large and significant disruption in recognition performance. In fact, a significantly greater decrement in face recognition is observ... <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Periocular biometric refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric does not require high user cooperation and close capture distance unlike other ocular biometrics (e.g., iris, retina, and sclera). We study the feasibility of using periocular images of an individual as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators resulting in a feature set that can be used for matching. The effect of fusing these feature sets is also studied. The experimental results show a 77% rank-1 recognition accuracy using 958 images captured from 30 different subjects. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> We evaluate the utility of the periocular region appearance cues for biometric identification. Even though periocular region is considered to be a highly discriminative part of a face, its utility as an independent modality or as a soft biometric is still an open ended question. 
It is our goal to establish a performance metric for the periocular region features so that their potential use in conjunction with iris or face can be evaluated. In this approach, we employ the local appearance based feature representation, where the image is divided into spatially salient patches, and histograms of texture and color are computed for each patch. The images are matched by computing the distance between the corresponding feature representations using various distance metrics. We report recognition results on images captured in the visible and near-infrared (NIR) spectrum. For the color periocular region data consisting of about 410 subjects and the NIR images of 85 subjects, we obtain Rank-1 recognition rates of 91% and 87% respectively. Furthermore, we also demonstrate that the recognition performance of the periocular region images is comparable to that of face. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Given an image from a biometric sensor, it is important for the feature extraction module to extract an original set of features that can be used for identity recognition. This form of feature extraction has been referred to as Type I feature extraction. For some biometric systems, Type I feature extraction is used exclusively. However, a second form of feature extraction does exist and is concerned with optimizing/minimizing the original feature set given by a Type I feature extraction method. This second form of feature extraction has been referred to as Type II feature extraction (feature selection). In this paper, we present a genetic-based Type II feature extraction system, referred to as GEFE (Genetic & Evolutionary Feature Extraction), for optimizing the feature sets returned by Local Binary Pattern Type I feature extraction for periocular biometric recognition. Our results show that not only does GEFE dramatically reduce the number of features needed but the evolved feature sets also have higher recognition rates. <s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In this paper, we perform a detailed investigation of various features that can be extracted from the periocular region of human faces for biometric identification. The emphasis of this study is to explore the BEST feature extraction approach used in stand-alone mode without any generative or discriminative subspace training. Simple distance measures are used to determine the verification rate (VR) on a very large dataset. Several filter-based techniques and local feature extraction methods are explored in this study, where we show an increase of 15% verification performance at 0.1% false accept rate (FAR) compared to raw pixels with the proposed Local Walsh-Transform Binary Pattern encoding. Additionally, when fusing our best feature extraction method with Kernel Correlation Feature Analysis (KCFA) [36], we were able to obtain a VR of 61.2%. Our experiments are carried out on the large validation set of the NIST FRGC database [6], which contains facial images from environments with uncontrolled illumination. Verification experiments were based on a pure 1–1 similarity matrix of 16028×8014 (~128 million comparisons) carried out on the entire database, where we find that we can achieve a raw VR of 17.0% at 0.1% FAR using our proposed Local Walsh-Transform Binary Pattern approach.
This result, while it may seem low, is more than the NIST reported baseline VR on the same dataset (12% at 0.1% FAR), when PCA was trained on the entire facial features for recognition [6]. <s> BIB006 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> The performance of iris recognition is affected if the iris is captured at a distance. Further, images captured in the visible spectrum are more susceptible to noise than if captured in the near infrared spectrum. This research proposes periocular biometrics as an alternative to iris recognition if the iris images are captured at a distance. We propose a novel algorithm to recognize periocular images in the visible spectrum and study the effect of capture distance on the performance of periocular biometrics. The performance of the algorithm is evaluated on more than 11,000 images of the UBIRIS v2 database. The results show promise towards using the periocular region for recognition when the information is not sufficient for iris recognition. <s> BIB007 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators resulting in a feature set for representing and matching this region. A number of aspects are studied in this work, including the 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers. <s> BIB008 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In this paper, we will present a novel framework of utilizing the periocular region for age invariant face recognition. To obtain age invariant features, we first perform preprocessing schemes, such as pose correction, illumination and periocular region normalization. We then apply robust Walsh-Hadamard transform encoded local binary patterns (WLBP) on the preprocessed periocular region only. We find the WLBP feature on the periocular region maintains consistency of the same individual across ages. Finally, we use unsupervised discriminant projection (UDP) to build subspaces on WLBP featured periocular images and gain 100% rank-1 identification rate and 98% verification rate at 0.1% false accept rate on the entire FG-NET database. Compared to published results, our proposed approach yields the best recognition and identification results.
<s> BIB009 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> A wide variety of applications in forensic, government, and commercial fields require reliable personal identification. However, the recognition performance is severely affected when encountering non-ideal images caused by motion blur, poor contrast, various expressions, or illumination artifacts. In this paper, we investigated the use of shape-based eyebrow features under non-ideal imaging conditions for biometric recognition and gender classification. We extracted various shape-based features from the eyebrow images and compared three different classification methods: Minimum Distance Classifier (MD), Linear Discriminant Analysis Classifier (LDA) and Support Vector Machine Classifier (SVM). The methods were tested on images from two publicly available facial image databases: the Multiple Biometric Grand Challenge (MBGC) database and the Face Recognition Grand Challenge (FRGC) database. Obtained recognition rates of 90% using the MBGC database and 75% using the FRGC database, as well as gender classification recognition rates of 96% and 97% for each database respectively, suggest the shape-based eyebrow features may be used for biometric recognition and soft biometric classification. <s> BIB010 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Iris recognition from at-a-distance face images has a wide range of applications, such as remote surveillance and civilian identification. This paper presents a completely automated joint iris and periocular recognition approach from the face images acquired at-a-distance. Each of the acquired face images is used to detect and segment periocular images which are then employed for the iris segmentation. We employ complex texture descriptors using Leung-Mallik filters which can acquire multiple periocular features for more accurate recognition. Experimental results presented in this paper achieve 8.1% improvement in recognition accuracy over the best performing approach among SIFT, LBP and HoG presented in the literature. The combination of simultaneously segmented iris and periocular images achieves average rank-one recognition accuracy of 84.5%, i.e., an improvement of 52% over those from only using iris features, on independent test images from 131 subjects. In order to ensure the repeatability of the experiments, the CASIA.v4-distance, i.e., a publicly available database was employed and all the 142 subjects/images were considered in this work. <s> BIB011 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> The periocular biometric comes into the spotlight recently due to several advantageous characteristics, such as easy availability and the provision of crucial face information. However, many existing works are dedicated to extracting image features using texture based techniques such as local binary pattern (LBP). In view of the simplicity and effectiveness offered, this paper proposes to investigate projection-based methods for periocular identity verification. Several well established projection-based methods such as principal component analysis, its variants and linear discriminant analysis will be adopted in our performance evaluation based on a subset of the FERET face database.
Our empirical results show that supervised learning methods significantly outperform unsupervised learning methods and LBP in terms of equal error rate performance. <s> BIB012 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum. A number of aspects are studied, including: 1) grid adaptation to dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, 4) rotation compensation between query and test images, and 5) comparison with an iris machine expert. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, thus allowing this step to be removed for computational efficiency. Also, the performance is not affected substantially if we use a grid of fixed dimensions, or it is even better in certain situations, avoiding the need for accurate detection of the iris region. <s> BIB013 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In this paper we proposed a novel multimodal biometric approach using iris and periocular biometrics to improve the performance of iris recognition in the case of non-ideal iris images. Though iris recognition has the highest accuracy among all the available biometrics, the noise introduced at the image acquisition stage still degrades the recognition accuracy. The periocular region can act as a supporting biometric in case the iris is obstructed by several noises. The periocular region is the part of the face immediately surrounding the eye. The approach is based on the fusion of features of the iris and periocular region. The approach has shown significant improvement in the performance of iris recognition. The evaluation was done on a test database created from the images of the UBIRIS V2 and CASIA iris interval databases. We achieved identification accuracy up to 96% on the test database. <s> BIB014 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> As biometrics has evolved, the iris has remained a preferred trait because its uniqueness, lifetime stability and regular shape contribute to good segmentation and recognition performance. However, commercially deployed systems are characterized by strong acquisition constraints based on active subject cooperation, which is not always achievable or even reasonable for extensive deployment in everyday scenarios. Research on new techniques has been focused on lowering these constraints without significantly impacting performance while increasing system usability, and new approaches have rapidly emerged. Here we propose a novel fusion of different recognition approaches and describe how it can contribute to more reliable noncooperative iris recognition by compensating for degraded images captured in less constrained acquisition setups and protocols under visible wavelengths and varying lighting conditions. The proposed method was tested at the NICE.II (Noisy Iris Challenge Evaluation - Part 2) contest, and its performance was corroborated by a third-place finish.
<s> BIB015 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Among the available biometric traits such as face, iris and fingerprint, active research is being carried out in the direction of unconstrained biometrics. Periocular recognition has proved its effectiveness and is regarded as complementary to iris recognition. The main objectives of this paper are three-fold: 1) to announce the availability of a periocular dataset, which has variability in terms of scale change (due to camera-subject distance), pose variation and non-uniform illumination; 2) to investigate the performance of periocular recognition methods in the presence of various degradation factors; 3) to propose a new initialization strategy for the definition of the periocular region-of-interest (ROI), based on the geometric mean of the eye corners. Our experiments confirm that performance can be consistently improved by this initialization method, when compared to the classical technique. <s> BIB016 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> This work develops a novel face-based matcher composed of a multi-resolution hierarchy of patch-based feature descriptors for periocular recognition - recognition based on the soft tissue surrounding the eye orbit. The novel patch-based framework for periocular recognition is compared against other feature descriptors and a commercial full-face recognition system on a set of four uniquely challenging face corpora. The framework, hierarchical three-patch local binary pattern, is compared against the three-patch local binary pattern and the uniform local binary pattern on the soft tissue area around the eye orbit. Each challenge set was chosen for its particular non-ideal face representations that may be summarized as matching against pose, illumination, expression, aging, and occlusions. The MORPH corpus consists of two mug shot datasets labeled Album 1 and Album 2. The Album 1 corpus is the more challenging of the two due to its incorporation of print photographs (legacy) captured with a variety of cameras from the late 1960s to 1990s. The second challenge dataset is the FRGC still image set. Corpus three, the Georgia Tech face database, is a small corpus but one that contains faces under pose, illumination, expression, and eye region occlusions. The final challenge dataset chosen is the Notre Dame Twins database, which comprises 100 sets of identical twins and 1 set of triplets. The proposed framework reports top periocular performance on each dataset, as measured by rank-1 accuracy: (1) MORPH Album 1, 33.2%; (2) FRGC, 97.51%; (3) Georgia Tech, 92.4%; and (4) Notre Dame Twins, 98.03%. Furthermore, this work shows that the proposed periocular matcher (using only a small section of the face, about the eyes) compares favorably to a commercial full-face matcher. <s> BIB017 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Human identification based on iris biometrics requires high resolution iris images of a cooperative subject. Such images cannot be obtained in non-intrusive applications such as surveillance. However, the full region around the eye, known as the periocular region, can be acquired non-intrusively and used as a biometric. In this paper we investigate the use of the periocular region for person identification.
Current techniques have focused on choosing a single best frame, mostly manually, for matching. In contrast, we formulate, for the first time, person identification based on periocular regions as an image-set classification problem. We generate periocular region image sets from the Multi Biometric Grand Challenge (MBGC) NIR videos. Periocular regions of the right eyes are mirrored and combined with those of the left eyes to form an image set. Each image set contains periocular regions of a single subject. For image-set classification, we use six state-of-the-art techniques and report their comparative recognition and verification performances. Our results show that image sets of periocular regions achieve significantly higher recognition rates than currently reported in the literature for the same database. <s> BIB018 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In challenging image acquisition settings where the performance of iris recognition algorithms degrades due to poor segmentation of the iris, image blur, specular reflections, and occlusions from eyelids and eyelashes, the periocular region has been shown to offer better recognition rates. However, the definition of a periocular region is subject to interpretation. This paper investigates the question of what the best periocular region for recognition is, by identifying sub-regions of the ocular image when using near-infrared (NIR) or visible light (VL) sensors. To determine the best periocular region, we test two fundamentally different algorithms on challenging periocular datasets of contrasting build on four different periocular regions. Our results indicate that system performance does not necessarily improve as the ocular region becomes larger. Rather, in NIR images the eye shape is more important than the brow or cheek, as the image has little to no skin texture (leading to a smaller accepted region), while in VL images the brow is very important (requiring a larger region). <s> BIB019 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Iris and periocular biometrics have proved their effectiveness in accurately verifying the subject of interest. Recent improvements in visible spectrum iris and periocular verification have further boosted their application to unconstrained scenarios. However, existing visible iris verification systems suffer from low quality samples because of the limited depth-of-field exhibited by conventional iris capture systems. In this work, we propose a robust iris and periocular verification scheme in the visible spectrum using a Light Field Camera (LFC). Since the light field camera can provide multiple focus images in a single capture, we are motivated to investigate its applicability for robust iris and periocular verification by exploring its all-in-focus property. Further, the use of the all-in-focus property will extend the depth-of-focus and overcome the problem of focus that plays a predominant role in robust iris and periocular verification. We first collect a new iris and periocular biometric database using both light field and conventional cameras by simulating real-life scenarios. We then propose a new scheme for feature extraction and classification by exploring the combination of Local Binary Patterns (LBP) and a Sparse Reconstruction Classifier (SRC).
Extensive experiments are carried out on the newly collected database to bring out the merits and demerits of the applicability of the light field camera for iris and periocular verification. Finally, we also present results on combining the information from iris and periocular biometrics using a weighted sum rule. <s> BIB020 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> We concentrate on the utilization of the facial periocular region for biometric identification. Although this region has superior discriminative characteristics, as compared to the mouth and nose, it has not been frequently used as an independent modality for personal identification. We employ a feature-based representation, where the associated periocular image is divided into left and right sides, and descriptor vectors are extracted from these using the popular feature extraction algorithms SIFT, SURF, BRISK, ORB, and LBP. We also concatenate descriptor vectors. Utilizing FLANN and Brute Force matchers, we report recognition rates and ROC. For the periocular region image data, obtained from the widely used FERET database consisting of 865 subjects, we obtain a Rank-1 recognition rate of 96.8% for full frontal and different facial expressions in same-session cases. We include a summary of existing methods, and show that the proposed method produces lower/comparable error rates with respect to the current state of the art. <s> BIB021 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Automated and accurate biometric identification using periocular imaging has a wide range of applications, from human surveillance to improving performance for iris recognition systems, especially under less-constrained imaging environments. The Restricted Boltzmann Machine is a generative stochastic neural network that can learn the probability distribution over its set of inputs. As a convolutional version of Restricted Boltzmann Machines, CRBMs aim to accommodate large image sizes and greatly reduce the computational burden. However, to the best of our knowledge, unsupervised feature learning methods have not been explored in the biometrics area except for face recognition. This paper explores the effectiveness of the CRBM model for periocular recognition. We perform experiments on a periocular image database with the largest number of subjects (300 subjects as test subjects) and simultaneously exploit key point features for improving the matching accuracy. The experimental results are presented on a publicly available database, the UBIPr database, and suggest the effectiveness of RBM feature learning for automated periocular recognition with a large number of subjects. The results from the investigation in this paper also suggest that supervised metric learning can be effectively used to achieve superior performance over the conventional Euclidean distance metric for periocular identification. <s> BIB022 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> This paper introduces a novel face recognition problem domain: the medically altered face for gender transformation. A data set of >1.2 million face images was constructed from wild videos obtained from YouTube of 38 subjects undergoing hormone replacement therapy (HRT) for gender transformation over a period of several months to three years.
The HRT achieves gender transformation by severely altering the balance of sex hormones, which causes changes in the physical appearance of the face and body. This paper explores the impact of face changes due to hormone manipulation, its ability to disguise the face and, hence, its ability to affect match rates. Face disguise is achieved organically as hormone manipulation causes pathological changes to the body resulting in a modification of face appearance. This paper analyzes and evaluates face components versus full face algorithms in an attempt to identify regions of the face that are resilient to the HRT process. The experiments reveal that periocular face components using simple texture-based face matchers, local binary patterns, histogram of gradients, and patch-based local binary patterns, outperform matching against the full face. Furthermore, the experiments reveal that a fusion of the periocular using one of the simple texture-based approaches (patch-based local binary patterns) outperforms two commercial off-the-shelf full-face systems: 1) PittPatt SDK and 2) Cognetic FaceVACs v8.5. The evaluated periocular-fused patch-based face matcher outperforms PittPatt SDK v5.2.2 by 76.83% and Cognetic FaceVACS v8.5 by 56.23% for rank-1 accuracy. <s> BIB023 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> The concept of periocular biometrics emerged to improve the robustness of iris recognition to degraded data. Being a relatively recent topic, most of the periocular recognition algorithms work in a holistic way and apply a feature encoding/matching strategy without considering each biological component in the periocular area. This not only augments the correlation between the components in the resulting biometric signature, but also increases the sensitivity to particular data covariates. The main novelty in this paper is to propose a periocular recognition ensemble made of two disparate components: 1) one expert analyses the iris texture and exhaustively exploits the multispectral information in visible-light data and 2) another expert parameterizes the shape of eyelids and defines a surrounding dimensionless region-of-interest, from where statistics of the eyelids, eyelashes, and skin wrinkles/furrows are encoded. Both experts work on disjoint regions of the periocular area and meet three important properties. First, they produce practically independent responses, which is behind the better performance of the ensemble when compared to the best individual recognizer. Second, they do not share particular sensitivity to any image covariate, which accounts for augmenting the robustness against degraded data. Finally, it should be stressed that we disregard information in the periocular region that can be easily forged (e.g., shape of eyebrows), which constitutes an active anticounterfeit measure. An empirical evaluation was conducted on two public data sets (FRGC and UBIRIS.v2), and points to consistent improvements in performance of the proposed ensemble over the state-of-the-art periocular recognition algorithms. <s> BIB024 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Iris recognition has become an important technology in our society. Visual patterns of the human iris provide rich texture information for personal identification.
However, it is highly challenging to match intra-class iris images with large variations in unconstrained environments because of noise, illumination variation, heterogeneity and so on. To track current state-of-the-art algorithms in iris recognition, we organized the first ICB Competition on Iris Recognition in 2013 (ICIR2013 for short). In this competition, 8 participants from 6 countries submitted 13 algorithms in total. All the algorithms were trained on a public database (e.g. CASIA-Iris-Thousand [3]) and evaluated on an unpublished database. The testing results in terms of False Non-match Rate (FNMR) when the False Match Rate (FMR) is 0.0001 are taken to rank the submitted algorithms. <s> BIB025 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> We present a new system for biometric recognition using periocular images. The feature extraction method employed describes neighborhoods around key points by projection onto harmonic functions which estimate the presence of a series of various symmetric curve families around such key points. The isocurves of such functions are highly symmetric w.r.t. the key points and the estimated coefficients have well-defined geometric interpretations. The descriptors used are referred to as Symmetry Assessment by Feature Expansion (SAFE). Extraction is done across a set of discrete points of the image, uniformly distributed in a rectangular-shaped grid positioned at the eye centre. Experiments are done with two databases of iris data, one acquired with a close-up iris camera, and another in visible light with a webcam. The two databases have been annotated manually, meaning that the radius and centre of the pupil and sclera circles are available, which are used as input for the experiments. Results show that this new system has a performance comparable with other periocular recognition approaches. We particularly carry out comparative experiments with another periocular system based on Gabor features extracted from the same set of grid points, with the fusion of the two systems resulting in an improved performance. We also evaluate an iris texture matcher, providing fusion results with the periocular systems as well. <s> BIB026 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In this paper, we propose a novel and robust approach for periocular recognition. Specifically, we propose the fusion of Local Phase Quantization (LPQ) and Gabor wavelet descriptors to improve recognition performance and achieve robustness. We have utilized publicly available challenging still face image databases: MBGC v2.0, GTDB, PUT and Caltech. In this approach, the face is detected and normalized using the eye centres. The region around the left and right eyes, including the eyebrow, is extracted as the left and right periocular regions. The LPQ descriptor is then applied to extract phase statistics features computed locally in a rectangular window. The descriptor is invariant to blur and also to uniform illumination changes. We also computed the Gabor magnitude response of the image, which encodes shape information over a broader range of scales. To reduce the dimensionality of the operators and to extract discriminative features, we further utilized DLDA (Direct Linear Discriminant Analysis). The experimental analysis demonstrates that the combination of LPQ and Gabor scores provides a significant improvement in performance and robustness over applying them individually.
<s> BIB027 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Recently, periocular biometrics has drawn a lot of attention from researchers and some efforts have been presented in the literature. In this paper, we propose a novel and robust approach for periocular recognition. In this approach, the face is detected in still face images and then aligned and normalized. We utilized the entire strip containing both eyes as the periocular region. For feature extraction, we computed the magnitude responses of the image filtered with a filter bank of complex Gabor filters. Feature dimensions are reduced by applying Direct Linear Discriminant Analysis (DLDA). The reduced feature vector is classified using a Parzen Probabilistic Neural Network (PPNN). The experimental results demonstrate promising verification and identification accuracy; the robustness of the proposed approach is also ascertained by providing a comprehensive comparison with some of the well-known state-of-the-art methods using publicly available face databases: MBGC v2.0, GTDB, IITK and PUT. <s> BIB028 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Partially constrained human recognition through the periocular region has emerged as a new paradigm in biometric security. This article proposes the Phase Intensive Global Pattern (PIGP): a novel global feature based on the variation of intensity of a pixel's neighbours with respect to different phases. The feature thus extracted is claimed to be rotation invariant and hence useful to identify humans from images with face tilt. The performance of the proposed feature is evaluated on the UBIRISv2 database, which is a very large standard dataset with unconstrained periocular images captured in the visible spectrum. The proposed work has been compared with Circular Local Binary Pattern (CLBP) and Walsh Transform, and experimentally found to yield higher accuracy, though with increased computation complexity and increased size of the feature vector. <s> BIB029 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> In this paper, we propose to combine sclera and periocular features for identity verification. The proposal is particularly useful in applications related to face recognition when the face is partially occluded with only the periocular region revealed. As sclera features are relatively new in the biometrics literature, particular attention will be paid to sclera feature extraction in this work. For periocular feature extraction, structured random projections were adopted to extract compressed vertical and horizontal components of image features. The binary sclera features are eventually fused with the periocular features at a score level. Extensive experiments have been performed on the UBIRIS v1 session1 and session2 databases to assess the verification performance before and after fusion. An improvement of around 5% in equal error rate was observed when fusing sclera with periocular features, compared with the performance before fusion. <s> BIB030 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Recent studies in biometrics have shown that the periocular region of the face is sufficiently discriminative for robust recognition, and particularly effective in certain scenarios such as extreme occlusion and illumination variation, where traditional face recognition systems are unreliable.
In this paper, we first propose a fully automatic, robust and fast graph-cut based eyebrow segmentation technique to extract the eyebrow shape from a given face image. We then propose an eyebrow shape-based identification system for periocular face recognition. Our experiments have been conducted over large datasets from the MBGC and AR databases and the resilience of the proposed approach has been evaluated under varying data conditions. The experimental results show that the proposed eyebrow segmentation achieves high accuracy with an F-Measure of 99.4% and the identification system achieves rates of 76.0% on the AR database and 85.0% on the MBGC database. <s> BIB031 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> This paper introduces the challenge of cross-spectral periocular matching. The proposed algorithm utilizes neural networks to learn the variabilities caused by two different spectra. Two neural networks are first trained on each spectrum individually and then combined such that, by using the cross-spectral training data, they jointly learn the cross-spectral variability. To evaluate the performance, a cross-spectral periocular database is prepared that contains images pertaining to the visible, night vision and near-infrared spectra. The proposed combined neural network architecture, on the cross-spectral database, shows improved performance compared to existing feature descriptors and cross-domain algorithms. <s> BIB032 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> We consider the problem of matching face against iris images using ocular information. In biometrics, face and iris images are typically acquired using sensors operating in the visible (VIS) and near-infrared (NIR) spectra, respectively. This presents a challenging problem of matching images corresponding to different biometric modalities, imaging spectra, and spatial resolutions. We propose the usage of ocular traits that are common between face and iris images (viz., iris and ocular region) to perform matching. Iris matching is performed using commercial software, while ocular regions are matched using three different techniques: Local Binary Patterns (LBP), Normalized Gradient Correlation (NGC), and Joint Dictionary-based Sparse Representation (JDSR). Experimental results on a database containing 1358 images of 704 subjects indicate that the ocular region can provide better performance than the iris biometric under a challenging cross-modality matching scenario. <s> BIB033 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Visible spectrum iris verification has drawn substantial attention due to its feasibility, convenience and accepted performance. This further allows one to perform iris verification in an unconstrained environment, at-a-distance and on the move. An integral part of visible iris recognition relies on an accurate texture representation algorithm that can effectively capture the uniqueness of the texture even under challenging conditions such as reflections and illumination variation. In this paper, we explore a new scheme for robust visible iris verification based on Binarized Statistical Image Features (BSIF). The core idea of the BSIF descriptor is to compute the binary code for each pixel by projecting it onto a subspace learned from natural images using Independent Component Analysis (ICA).
Thus, the BSIF is expected to encode the texture features more robustly when compared to contemporary schemes like Local Binary Patterns and their variants. Extensive experiments are carried out on the visible iris dataset captured using both light field and conventional cameras. The proposed feature extraction method is also extended for enhanced periocular recognition. Finally, we also present a comparative analysis with a popular state-of-the-art iris recognition scheme. <s> BIB034 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Face recognition performance degrades significantly under occlusions that occur intentionally or unintentionally due to head gear or hair style. In many incidents captured by surveillance videos, the offenders cover their faces leaving only the periocular region visible. We present an extensive study on periocular region based person identification in video. While previous techniques have handpicked a single best frame from videos, we formulate, for the first time, periocular region based person identification in video as an image-set classification problem. For thorough analysis, we perform experiments on periocular regions extracted automatically from RGB videos, NIR videos and hyperspectral image cubes. Each image-set is represented by four heterogeneous feature types and classified with six state-of-the-art image-set classification algorithms. We propose a novel two-stage inverse Error Weighted Fusion algorithm for feature and classifier score fusion. The proposed two-stage fusion is superior to single-stage fusion. Comprehensive experiments were performed on four standard datasets, MBGC NIR and visible spectrum (Phillips et al., 2005), CMU Hyperspectral (Denes et al., 2002) and UBIPr (Padole and Proenca, 2012). We obtained average rank-1 recognition rates of 99.8, 98.5, 97.2, and 99.5% respectively, which are significantly higher than the existing state of the art. Our results demonstrate the feasibility of image-set based periocular biometrics for real-world applications. <s> BIB035 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> The article proposes a novel multi-scale local feature based periocular recognition technique which is capable of extracting high-dimensional subtle features existent in the iris region as well as low-dimensional gross features in the periphery skin region of the iris. A set of filter banks of different scales is employed to exploit the phase-intensive patterns in visible spectrum periocular images of a subject captured from a distance in a partially non-cooperative scenario. The proposed technique is verified with experiments on near-infrared illumination databases like BATH and CASIA-IrisV3-Lamp. Experiments have been further extended to images from visible spectrum ocular databases like UBIRISv2 and low-resolution eye regions extracted from the FERETv4 face database to establish that the proposed feature performs comparatively better than existing local features. To assess the robustness of the proposed approach, the low-resolution visible spectrum images of the mentioned databases are converted to grayscale images. The proposed approach yields unique patterns from these grayscale images. The ability to find coarse-to-fine features at multiple scales and different phases accounts for the improved robustness of the proposed approach.
<s> BIB036 </s> A Survey on Periocular Biometrics Research <s> Literature review of periocular recognition works <s> Announcement of an iris and periocular dataset, with 10 different mobile setups. Mobile biometric recognition approach based on iris and periocular information. Improvements from a sensor-specific color calibration technique are reported. Biometric recognition feasibility over mobile cross-sensor setups is shown. Preferable mobile setups are pointed out. In recent years, the usage of mobile devices has increased substantially, as have their capabilities and applications. Extending biometric technologies to these gadgets is desirable because it would facilitate biometric recognition almost anytime, anywhere, and by anyone. The present study focuses on biometric recognition in mobile environments using iris and periocular information as the main traits. Our study makes three main contributions, as follows. (1) We demonstrate the utility of an iris and periocular dataset, which contains images acquired with 10 different mobile setups and the corresponding iris segmentation data. This dataset allows us to evaluate iris segmentation and recognition methods, as well as periocular recognition techniques. (2) We report the outcomes of device-specific calibration techniques that compensate for the different color perceptions inherent in each setup. (3) We propose the application of well-known iris and periocular recognition strategies based on classical encoding and matching techniques, as well as demonstrating how they can be combined to overcome the issues associated with mobile environments. <s> BIB037
|
Periocular recognition started to gain popularity after the studies by BIB003 BIB008. Some pioneering works can be traced back to 2002 BIB001, although the authors did not yet call the local eye area 'periocular'. The approach by BIB008 combined global and local features, namely LBP, HOG and SIFT. The reported performance of that study was fairly good, setting the framework for the use of the periocular modality. Many works have taken this approach as inspiration, with LBPs and their variations being particularly prevalent in the literature (Woodard et al., 2010a,b; BIB011 BIB017 BIB021 ). The studies of (Woodard et al., 2010a,b) used NIR data for the first time (MBGC portal video), although they selected usable frames (of higher quality), which mostly appear in the earlier part of the video, where scale change is not substantial. Experiments over NIR portal data from the more difficult FOCS database were also presented, but with a different descriptor (BGM). BIB017 also evaluated the impact of covariates such as pose, expression, template aging, glasses and eyelid occlusion. Some works have also employed other features in addition to LBPs BIB011 BIB021. BIB004 employed LCH (RG color histograms), reporting the best accuracy up to that date with the FRGC database of VW images. BIB011 proposed Leung-Mallik filters (LMF) as texture descriptors over the CASIA v4 Distance database of NIR images. BIB021 evaluated LBP, SIFT, and other local descriptors including SURF, BRISK and ORB over the FERET database. The use of subspace representation methods applied to raw pixels or LBP features is also becoming a popular way either to improve performance or to reduce the feature set BIB005 BIB012 BIB018 BIB022 BIB035. LBP has also been used in other works analyzing, for example, the impact of plastic surgery or gender transformation BIB023 on periocular recognition (see Section 7). Inspired by BIB003, BIB006 extended the experiments with additional global and local features to a significantly larger set of the FRGC database with less ideal images (hence the lower accuracy w.r.t. previous studies): WLBP, Laws masks, DCT, DWT, Force Field transform, SURF, Gabor filters and LoG filters. They later addressed the problem of aging degradation on periocular recognition using the FG-NET database BIB009, reported to be an issue even at small time lapses BIB008. To obtain age-invariant features, they first applied preprocessing schemes such as pose correction by Active Appearance Models (AAM) and illumination and periocular region normalization. In a later work, Juefei-Xu and Savvides (2012) also applied WLBPs to study periocular recognition with data from a pan-tilt-zoom (PTZ) camera. As in the study above, they employed different schemes to correct illumination and pose variations. The aforementioned work by BIB001 with Gabor filters served as inspiration for BIB013 BIB024 to carry out periocular experiments with several iris databases in NIR and VW, as well as a comparison with the iris modality (Section 6). A variation of this algorithm was fused with the SIFT descriptor, obtaining a leading position in the First ICB Competition on Iris Recognition, ICIR2013 BIB025. They later proposed a matcher based on Symmetry Assessment by Feature Expansion (SAFE) descriptors BIB026, which describe neighborhoods around key points by estimating the presence of various symmetric curve families. Gabor filters were also used by BIB027 in their work presenting Local Phase Quantization (LPQ) as descriptors for periocular recognition.
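Since patch-based LBP histograms recur as the backbone descriptor throughout the works above, the following minimal sketch illustrates that generic pipeline. It is an illustrative reconstruction, not the implementation of any cited paper: the library (scikit-image), the 8-neighbour uniform LBP, the 8x8 patch grid and the chi-square distance are all assumptions chosen for clarity.

```python
# Illustrative patch-based LBP matching pipeline, in the spirit of the
# LBP-based periocular matchers discussed above. All parameter choices
# (P=8, R=1, 8x8 grid, chi-square distance) are assumptions for clarity.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_descriptor(gray, grid=(8, 8), P=8, R=1):
    """Concatenate normalized uniform-LBP histograms over a grid of patches."""
    codes = local_binary_pattern(gray, P, R, method='uniform')
    n_bins = P + 2                      # number of uniform-LBP labels
    h, w = gray.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                          j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins))
            feats.append(hist / max(hist.sum(), 1))   # per-patch normalization
    return np.concatenate(feats)

def chi2_distance(f1, f2, eps=1e-10):
    """Chi-square distance between two histogram descriptors (lower = closer)."""
    return 0.5 * np.sum((f1 - f2) ** 2 / (f1 + f2 + eps))
```

Verification then reduces to thresholding this distance, and identification to a nearest-neighbour search over the gallery; the LBP variants cited above mostly differ in the descriptor itself (WLBP, CLBP, three-patch LBP) and in how patch scores are weighted or fused.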
BIB028 also employed Gabor features over four different VW databases, with features reduced by Direct Linear Discriminant Analysis (DLDA) and further classified by a Parzen Probabilistic Neural Network (PPNN). BIB007 evaluated CLBP and GIST descriptors. They used the UBIRIS v2 database of uncontrolled VW iris images, which includes a number of intentionally introduced perturbations (see Section 2). A number of subsequent works have also made use of UBIRIS v2 BIB014 BIB015 BIB029 BIB024 BIB036. BIB014 used UBIRIS v2 in their comparison of iris and periocular modalities (Section 6), obtaining better results than BIB007 using just LBPs, although over a smaller set of images. Santos and Hoyle (2012) used LBPs and SIFT, as in BIB003, in their study combining iris and periocular modalities (Section 6). BIB029 proposed global PIGP features, outperforming the Rank-1 performance of any previous study using UBIRIS v2. They later proposed local PILP features BIB036, reporting the best Rank-1 periocular performance to date with UBIRIS v2. Proenca (2014) studied the fusion of iris and periocular biometrics (Section 6). Periocular features were extracted from the eyelid region only, consisting of the fusion of LBPs and eyelid shape descriptors. In a subsequent study, they proposed a method to label seven components of the periocular region (see Section 3), with the purpose of demonstrating that regions such as hair or glasses should be avoided, since they are unreliable for recognition (Section 5). They also proposed to use the center of mass of the cornea as reference point to define the periocular ROI, rather than the pupil center, which is much more sensitive to changes in gaze. Finally, BIB030 used the first version of UBIRIS in their study presenting directional projections or Structured Random Projections (SRP) as periocular features. Other shape features have also been proposed, such as eyebrow shape, with surprisingly accurate results as a stand-alone trait. Indeed, eyebrows have been used by forensic analysts for years to aid in facial recognition BIB031, and have been suggested to be the most salient and stable features in a face BIB002. BIB010 studied several geometrical shape properties over the MBGC/FRGC databases. They also used the extracted eyebrow features for gender classification (see Section 7). BIB031 proposed an eyebrow shape-based identification system, together with an eyebrow segmentation technique (presented in Section 3). BIB016 presented the first periocular database in VW range specifically acquired for periocular research (UBIPr). They also proposed to compute the ROI w.r.t. the midpoint of the eye corners (instead of the pupil center), which is less sensitive to gaze variations, leading to a significant improvement (EER from ∼30% to ∼20%). Subsequent studies have managed to improve performance over the UBIPr database using a variety of features BIB019 BIB022. The UBIPr database was also used by BIB035 in their extensive study evaluating data in VW (UBIPr, MBGC), NIR (MBGC) and multi-spectral (CMU-H database) range, with the reported Rank-1 results being the best published performance to date for the four databases employed. A new database of challenging periocular images in VW range (CSIP) was presented recently by BIB037, the first one captured with smartphones to be made public. The paper proposed a device-specific calibration method to compensate for the chromatic disparity that results from the variability of camera sensors and lenses used by different mobile phones.
They also compared and fused the periocular and iris modalities (Section 6). Another database captured specifically for cross-spectral periocular research (IMP) was also recently presented by BIB032, containing data in VW, NIR and night modalities. To match cross-spectral images, they proposed neural networks (NN) to learn the variability caused by different spectra, with several variations of LBP and HOG tested as features. Cross-spectral recognition was also addressed by BIB033 using a proprietary database of NIR and VW images. Finally, BIB020 and BIB034 presented a database in VW range acquired with a new type of camera, a Light Field Camera (LFC), which provides multiple images at different focus settings in a single capture. The LFC overcomes one important disadvantage of sensors in VW range, namely guaranteeing a well-focused image. Unfortunately, the database has not been made available. Individuals were also acquired with a conventional digital camera, with superior performance observed for the LFC. New periocular features were also presented in the two studies. BIB020 proposed Sparse Representation Classification (SRC), previously used in face recognition. BIB034 proposed Binarized Statistical Image Features (BSIF) for periocular recognition, further utilized as features of the SRC method described. Both BIB020 and BIB034 tested the fusion of iris and periocular modalities as well (Section 6).
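As a rough illustration of the BSIF encoding mentioned above: each pixel is binarized against the responses of a small filter bank and the resulting code image is histogrammed. In the original method the filters are learned with ICA on natural image patches; the random, zero-mean filters below are stand-ins so the sketch runs on its own, and this should not be taken as the published descriptor.

```python
# BSIF-style encoding sketch: bit b of a pixel's code is set when the
# response to filter b is positive; the code image is then histogrammed.
# Random zero-mean filters stand in for the ICA-learned bank of real BSIF.
import numpy as np
from scipy.signal import convolve2d

def bsif_like_histogram(gray, filters):
    """filters: (n_bits, k, k) array; returns a normalized 2**n_bits histogram."""
    codes = np.zeros(gray.shape, dtype=np.int64)
    for b, f in enumerate(filters):
        resp = convolve2d(gray.astype(float), f, mode='same', boundary='symm')
        codes += (resp > 0).astype(np.int64) << b   # set bit b where response > 0
    hist = np.bincount(codes.ravel(), minlength=2 ** len(filters))
    return hist / hist.sum()

rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 7, 7))              # placeholder filter bank
filters -= filters.mean(axis=(1, 2), keepdims=True)   # zero-mean, like learned filters
```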
|
A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> The principle that underlies the recognition of persons by their iris patterns is the failure of a test of statistical independence on texture phase structure as encoded by multiscale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 bits/mm^2 over the iris, enabling real-time decisions about personal identity with extremely high confidence. Algorithms first described by the author in 1993 have now been tested in several independent field trials and are becoming widely licensed. This presentation reviews how the algorithms work and presents the results of 9.1 million comparisons among different eye images acquired in trials in Britain, the USA, Korea, and Japan. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> This survey covers the historical development and current state of the art in image understanding for iris biometrics. Most research publications can be categorized as making their primary contribution to one of the four major modules in iris biometrics: image acquisition, iris segmentation, texture analysis and matching of texture representations. Other important research includes experimental evaluations, image databases, applications and systems, and medical conditions that may affect the iris. We also suggest a short list of recommended readings for someone new to the field to quickly grasp the big picture of iris biometrics. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> We evaluate the utility of the periocular region appearance cues for biometric identification. Even though the periocular region is considered to be a highly discriminative part of a face, its utility as an independent modality or as a soft biometric is still an open-ended question. It is our goal to establish a performance metric for the periocular region features so that their potential use in conjunction with iris or face can be evaluated. In this approach, we employ the local appearance-based feature representation, where the image is divided into spatially salient patches, and histograms of texture and color are computed for each patch. The images are matched by computing the distance between the corresponding feature representations using various distance metrics. We report recognition results on images captured in the visible and near-infrared (NIR) spectrum. For the color periocular region data consisting of about 410 subjects and the NIR images of 85 subjects, we obtain Rank-1 recognition rates of 91% and 87%, respectively. Furthermore, we also demonstrate that recognition performance of the periocular region images is comparable to that of face. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait.
Global and local information is extracted from the periocular region using texture and point operators, resulting in a feature set for representing and matching this region. A number of aspects are studied in this work, including: 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> Periocular biometrics is the recognition of individuals based on the appearance of the region around the eye. Periocular recognition may be useful in applications where it is difficult to obtain a clear picture of an iris for iris biometrics, or a complete picture of a face for face biometrics. Previous periocular research has used either visible-light (VL) or near-infrared (NIR) light images, but no prior research has directly compared the two illuminations using images with similar resolution. We conducted an experiment in which volunteers were asked to compare pairs of periocular images. Some pairs showed images taken in VL, and some showed images taken in NIR light. Participants labeled each pair as belonging to the same person or to different people. Untrained participants with limited viewing times correctly classified VL image pairs with 88% accuracy, and NIR image pairs with 79% accuracy. For comparison, we presented pairs of iris images from the same subjects. In addition, we investigated differences between performance on light and dark eyes and the relative helpfulness of various features in the periocular region under different illuminations. We calculated the performance of three computer algorithms on the periocular images. Performance for humans and computers was similar. <s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> The periocular biometric has come into the spotlight recently due to several advantageous characteristics, such as easy availability and provision of crucial face information. However, many existing works are dedicated to extracting image features using texture-based techniques such as local binary patterns (LBP). In view of the simplicity and effectiveness offered, this paper proposes to investigate projection-based methods for periocular identity verification. Several well-established projection-based methods such as principal component analysis, its variants and linear discriminant analysis will be adopted in our performance evaluation based on a subset of the FERET face database.
<s> BIB006 </s> A Survey on Periocular Biometrics Research <s> Best regions for periocular recognition <s> In challenging image acquisition settings where the performance of iris recognition algorithms degrades due to poor segmentation of the iris, image blur, specular reflections, and occlusions from eye lids and eye lashes, the periocular region has been shown to offer better recognition rates. However, the definition of a periocular region is subject to interpretation. This paper investigates the question of what is the best periocular region for recognition by identifying sub-regions of the ocular image when using near-infrared (NIR) or visible light (VL) sensors. To determine the best periocular region, we test two fundamentally different algorithms on challenging periocular datasets of contrasting build on four different periocular regions. Our results indicate that system performance does not necessarily improve as the ocular region becomes larger. Rather in NIR images the eye shape is more important than the brow or cheek as the image has little to no skin texture (leading to a smaller accepted region), while in VL images the brow is very important (requiring a larger region). <s> BIB007
|
Most periocular algorithms work in a holistic way, defining a ROI around the eye (usually a rectangle) which is fully used for feature extraction. Such a holistic approach implies that some components not relevant for identity recognition, such as hair or glasses, might erroneously bias the process. It can also be the case that a feature is not equally discriminative in all parts of the periocular region. The study by BIB005 identified which ocular elements humans find more useful for periocular recognition. With NIR images, eyelashes, tear ducts, eye shape and eyelids were identified as the most useful, while skin was the least useful. For VW data, on the contrary, blood vessels and skin were reported more helpful than eye shape and eyelashes. Similar studies have been done with automatic algorithms BIB007, with results in consonance with the human study, despite using several machine algorithms based on different features and different databases. With NIR images, regions around the iris (including the inner tear duct and lower eyelash) were the most useful, while cheek and skin texture were the least important. With VW images, on the other hand, the skin texture surrounding the eye was found very important, with the eyebrow/brow region (when present) also favored in the visible range. This is in line with the assumption largely accepted in the literature that the iris texture is more suited to NIR illumination BIB001, whereas the periocular modality is best for VW illumination BIB005 BIB003. This seems to be explained by the fact that NIR illumination reveals the details of the iris texture, while the skin reflects most of the light, appearing over-illuminated (see, for example, 'BioSec' or other NIR iris examples in Figure 2); on the other hand, the skin texture is clearly visible in VW range, but only irises with moderate levels of pigmentation image reasonably well in this range BIB002. BIB004 carried out experiments by masking parts of the periocular area over VW images of the FRGC database. They found that inclusion of the eyebrows is beneficial for identification performance, with differences in Rank-1 accuracy of 8-19%, depending on the machine expert. Similarly, they observed that occluding ocular information (iris and sclera) deteriorates the performance, with reductions in Rank-1 accuracy of up to 41%. In the same direction, BIB006 focused on the inclusion of a significant part of the cheek region over VW images of the FERET database, finding that it does not contain significant discriminative information while it increases the image size. Including the eyebrows and the ocular region was also found to be beneficial in this study, corroborating the results of BIB004. Recently, a method was proposed to label seven components of the periocular region: iris, sclera, eyelashes, eyebrows, hair, skin and glasses. The usefulness of such segmentation is demonstrated by avoiding hair and glasses in the feature encoding and matching stages, obtaining performance improvements with the fusion of LBP, HOG and SIFT features BIB004 over the UBIRIS v2 database of VW images (EER reduced from 12.8% to 9.5%).
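The ROI definitions compared above can be made concrete with a small sketch. The version below anchors the ROI at the midpoint of the two eye corners, the strategy reported to be less gaze-sensitive than pupil-centred ROIs; the corner coordinates are assumed to come from a prior detector, and the width/height factors are illustrative assumptions, not values from any cited work.

```python
# Sketch of a periocular ROI anchored at the midpoint of the eye corners,
# scaled by the inter-corner distance. Corner locations are assumed given;
# the 4.0/2.4 size factors are illustrative assumptions.
import numpy as np

def periocular_roi(image, inner_corner, outer_corner, w_scale=4.0, h_scale=2.4):
    (x1, y1), (x2, y2) = inner_corner, outer_corner
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # midpoint: robust to gaze changes
    d = np.hypot(x2 - x1, y2 - y1)              # inter-corner distance sets the scale
    half_w, half_h = w_scale * d / 2.0, h_scale * d / 2.0
    top, left = max(int(cy - half_h), 0), max(int(cx - half_w), 0)
    bottom = min(int(cy + half_h), image.shape[0])
    right = min(int(cx + half_w), image.shape[1])
    return image[top:bottom, left:right]
```

Given a component labelling such as the seven-class segmentation above, the unreliable classes (hair, glasses) can simply be zeroed out inside this ROI before feature encoding.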
|
A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> The principle that underlies the recognition of persons by their iris patterns is the failure of a test of statistical independence on texture phase structure as encoded by multiscale quadrature wavelets. The combinatorial complexity of this phase information across different persons spans about 249 degrees of freedom and generates a discrimination entropy of about 3.2 bits/mm^2 over the iris, enabling real-time decisions about personal identity with extremely high confidence. Algorithms first described by the author in 1993 have now been tested in several independent field trials and are becoming widely licensed. This presentation reviews how the algorithms work and presents the results of 9.1 million comparisons among different eye images acquired in trials in Britain, the USA, Korea, and Japan. <s> BIB001 </s> A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> This survey covers the historical development and current state of the art in image understanding for iris biometrics. Most research publications can be categorized as making their primary contribution to one of the four major modules in iris biometrics: image acquisition, iris segmentation, texture analysis and matching of texture representations. Other important research includes experimental evaluations, image databases, applications and systems, and medical conditions that may affect the iris. We also suggest a short list of recommended readings for someone new to the field to quickly grasp the big picture of iris biometrics. <s> BIB002 </s> A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information is extracted from the periocular region using texture and point operators, resulting in a feature set for representing and matching this region. A number of aspects are studied in this work, including: 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers. <s> BIB003 </s> A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> We consider the problem of matching highly non-ideal ocular images where the iris information cannot be reliably used. Such images are characterized by non-uniform illumination, motion and de-focus blur, off-axis gaze, and non-linear deformations.
To handle these variations, a single feature extraction and matching scheme is not sufficient. Therefore, we propose an information fusion framework where three distinct feature extraction and matching schemes are utilized in order to handle the significant variability in the input ocular images. The Gradient Orientation Histogram (GOH) scheme extracts the global information in the image; the modified Scale Invariant Feature Transform (SIFT) extracts local edge anomalies in the image; and a Probabilistic Deformation Model (PDM) handles nonlinear deformations observed in image pairs. The simple sum rule is used to combine the match scores generated by the three schemes. Experiments on the extremely challenging Face and Ocular Challenge Series (FOCS) database and a subset of the Face Recognition Grand Challenge (FRGC) database confirm the efficacy of the proposed approach to perform ocular recognition. <s> BIB004 </s> A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> Iris recognition from at-a-distance face images has applicability in a wide range of applications such as remote surveillance and civilian identification. This paper presents a completely automated joint iris and periocular recognition approach from face images acquired at-a-distance. Each of the acquired face images is used to detect and segment periocular images, which are then employed for the iris segmentation. We employ complex texture descriptors using Leung-Mallik filters, which can acquire multiple periocular features for more accurate recognition. Experimental results presented in this paper achieve 8.1% improvement in recognition accuracy over the best performing approach among SIFT, LBP and HoG presented in the literature. The combination of simultaneously segmented iris and periocular images achieves an average rank-one recognition accuracy of 84.5%, i.e., an improvement of 52% over using iris features alone, on independent test images from 131 subjects. In order to ensure the repeatability of the experiments, CASIA.v4-distance, a publicly available database, was employed and all the 142 subjects/images were considered in this work. <s> BIB005 </s> A Survey on Periocular Biometrics Research <s> Comparison and fusion with other modalities <s> As biometrics has evolved, the iris has remained a preferred trait because its uniqueness, lifetime stability and regular shape contribute to good segmentation and recognition performance. However, commercially deployed systems are characterized by strong acquisition constraints based on active subject cooperation, which is not always achievable or even reasonable for extensive deployment in everyday scenarios. Research on new techniques has been focused on lowering these constraints without significantly impacting performance while increasing system usability, and new approaches have rapidly emerged. Here we propose a novel fusion of different recognition approaches and describe how it can contribute to more reliable noncooperative iris recognition by compensating for degraded images captured in less constrained acquisition setups and protocols under visible wavelengths and varying lighting conditions. The proposed method was tested at the NICE.II (Noisy Iris Challenge Evaluation - Part 2) contest, and its performance was corroborated by a third-place finish. <s> BIB006
|
Periocular biometrics has rapidly evolved to compete with face or iris recognition. The periocular region appears in face and iris images, so comparison and/or fusion with these modalities has also been proposed. This section gives an overview of these works, with a summary provided in Table 4. Under difficult conditions, such as acquisition portals BIB004, distant acquisition BIB005, smartphones, webcams or digital cameras, the periocular modality is shown to be clearly superior to the iris modality, mostly due to the small size of the iris or the use of visible illumination. Visible illumination is predominant in relaxed or uncooperative setups due to the impossibility of using NIR illumination. Iris texture is more suited to the NIR spectrum, since this type of lighting reveals the details of the iris texture BIB001, while the skin reflects most of the light, appearing over-illuminated. On the other hand, the skin texture is clearly visible in VW range, but only irises with moderate levels of pigmentation image reasonably well in this range BIB002. Nevertheless, despite the poor performance shown by the iris in the visible spectrum, fusion with periocular is shown to improve performance in many cases as well BIB006. Similar trends are observed with face: under difficult conditions, such as blur or downsampling, the periocular modality performs considerably better. The same holds for partial face occlusion, where the performance of full-face matchers is severely degraded BIB003.
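A hedged sketch of the score-level fusion scheme most often reported in these studies: normalize each matcher's scores and combine them with a weighted sum. The min-max normalization bounds and the 0.6 weight below are assumptions for illustration; in practice both would be estimated on a separate development set.

```python
# Minimal weighted-sum score fusion of periocular and iris matchers.
# Normalization bounds and the 0.6 weight are illustrative; in practice
# both are tuned on development data.
import numpy as np

def min_max_norm(scores, lo, hi):
    """Map raw matcher scores to [0, 1] using development-set bounds."""
    return np.clip((np.asarray(scores, dtype=float) - lo) / (hi - lo), 0.0, 1.0)

def fuse_scores(periocular, iris, bounds_p, bounds_i, w_periocular=0.6):
    sp = min_max_norm(periocular, *bounds_p)
    si = min_max_norm(iris, *bounds_i)
    return w_periocular * sp + (1.0 - w_periocular) * si
```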
Iris Modality
Table 4. Overview of existing works on comparison and fusion of the periocular modality with other biometric modalities. The acronyms of this table are fully defined in the text or in the referenced papers. Features with best accuracy are those giving the best fusion results; if no fusion results are available, they indicate the best features of each individual modality. The following acronyms are not defined elsewhere: 'w-sum'='weighted sum', 'LR'='logistic regression', 'NN'='Neural Networks', 'TERELM'='Total Error Rate Minimization', 'LG'='Log-Gabor'. [Only fragments of the table rows are recoverable: FRGC with blur (kernel=7 pix, σ=1.5): periocular 77.86% vs. face 31.09%; FRGC with downsampling (40%): 97.76% vs. 70.40%; FRGC with uncontrolled lighting: 11.17% vs. 12.18%; BIB003 (HOG, LBP, SIFT vs. FaceVACS) on FRGC (1704 VW images): 87.32% vs. 99.77%; FRGC with partial face: ∼84% vs. 39.55%.]

Woodard et al. (2010a) evaluated NIR portal videos of the MBGC database. The periocular modality showed considerable superiority, and performance was further improved by the fusion, demonstrating the benefits of fusing periocular and iris information in non-ideal conditions. BIB004 also used NIR portal data, from the FOCS database. Despite using other feature extraction methods, they likewise concluded that the periocular modality is considerably superior to the iris modality on such difficult data. BIB005 utilized VW images from the UBIRIS v2 database, which has several perturbations deliberately introduced. As in the above studies on NIR data, combining periocular and iris features improved the overall performance on difficult VW data too. BIB006 used a virtual database, with VW periocular data from UBIRIS v2 and NIR iris data from CASIA Interval. Fusion was carried out at the feature level, with the vectors from the two modalities pooled together. They also tested a simple mean fusion rule at the score level, which resulted in a smaller performance improvement. BIB007 used at-a-distance images from the CASIA v4 Distance database, obtaining a considerable performance improvement w.r.t. the individual modalities. BIB009 used a VW Light Field Camera (LFC), which provides multiple images at different focus in a single capture; individuals were also acquired with a conventional digital camera. A superior performance with the LFC camera was observed for both modalities, reinforced even further by the fusion. The same databases were used in a posterior study by BIB010, obtaining even better performance. BIB013 used their new CSIP database, acquired with 4 different mobile telephones in 10 different setups. Using a sensor-specific color correction technique, they achieved a cross-sensor periocular EER of 15.5%. Despite the poor performance of Gabor wavelets applied to the iris modality (34.4%), they achieved a 14.5% EER with the fusion of the two modalities. Another work evaluated a Gabor-based periocular system and a set of four iris matchers over five different databases, three in NIR and two in VW range, observing that the performance of the iris matchers was, in general, much better than the periocular matcher on NIR data, and the opposite on VW data. This is in tune with the literature, which indicates that the iris modality is more suited to NIR illumination BIB001, whereas the periocular modality is best suited for VW illumination BIB008 BIB002.
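The two fusion levels mentioned above (pooling feature vectors at the feature level versus combining normalized match scores with a simple mean rule at the score level) can be summarized with a short sketch. The feature dimensions, score ranges and values below are hypothetical placeholders and do not correspond to any cited system.

```python
# Sketch of the two fusion levels discussed above (hypothetical inputs).
import numpy as np

# Feature-level fusion: vectors from the two modalities are pooled
# into a single template before matching (as in the virtual-database study).
periocular_feat = np.random.rand(128)   # placeholder periocular descriptor
iris_feat = np.random.rand(64)          # placeholder iris descriptor
fused_template = np.concatenate([periocular_feat, iris_feat])

# Score-level fusion: each matcher produces its own similarity score,
# normalized to a common range and then combined (here, the mean rule).
def min_max_norm(score, lo, hi):
    return (score - lo) / (hi - lo)

peri_score = min_max_norm(0.62, lo=0.0, hi=1.0)
iris_score = min_max_norm(41.0, lo=0.0, hi=100.0)  # e.g., a 0-100 matcher
fused_score = np.mean([peri_score, iris_score])    # simple mean rule
print(f"fused score: {fused_score:.3f}")
```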
With regard to the fusion, despite the poor performance of the iris matchers on VW data, their fusion with the periocular system resulted in important performance improvements. This is remarkable given the adverse acquisition conditions and the small resolution of the VW databases used. The study was further extended with the SAFE matcher BIB011 and a SIFT matcher; here, the availability of more machine experts made it possible to obtain performance improvements through fusion also on NIR databases, something not observed in the previous studies. BIB012 proposed the fusion of an iris matcher based on multi-lobe differential filters (MLDF) with a periocular expert that parameterizes the shape of the eyelids, over VW data of the FRGC and UBIRIS v2 databases, with an average EER improvement of 20%.
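Since most results in this section are reported as Equal Error Rates, the sketch below shows one standard way to estimate the EER from sets of genuine and impostor scores; the scores are synthetic placeholders.

```python
# Estimating the Equal Error Rate (EER) from genuine/impostor scores.
# The scores are synthetic placeholders.
import numpy as np

def eer(genuine, impostor):
    """EER: operating point where false accept rate == false reject rate."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))  # closest crossing of the two curves
    return (far[idx] + frr[idx]) / 2.0

rng = np.random.default_rng(1)
genuine = rng.normal(2.0, 1.0, 500)    # higher scores for genuine pairs
impostor = rng.normal(0.0, 1.0, 500)
print(f"EER ~ {eer(genuine, impostor):.3f}")
```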
Face Modality
Smeraldi and Bigün (2002) presented a face recognition expert based on Gabor filters applied to each facial landmark (eyes and mouth), with a different classifier employed at each landmark. Face authentication was performed by fusing the three classifiers' outputs. In this way, the face expert is really a fusion of two eye (periocular) experts and one mouth expert. BIB001 used LBP on the FRGC database, extracted both from the periocular region and from the full face. Beyond the best accuracy obtained (first sub-row in Table 4), the interest lies in the impact of input image quality: at extreme values of blur or down-sampling, periocular recognition performed significantly better than face recognition. On the other hand, both face and periocular performance under uncontrolled lighting were very poor, indicating that LBPs are not well suited to this scenario. Another study of the effect of non-ideal conditions was carried out by BIB002. They masked the face region below the nose to simulate partial face occlusion, showing that face performance is severely degraded in the presence of occlusion, whereas the periocular modality is much more robust. Jillela and Ross (2012) studied the problem of matching face images before and after plastic surgery. The rank-one recognition performance reported for the fusion of periocular and face matchers (Rank-1: 87.4%) was the highest accuracy observed in the literature on the utilized database up to the publication of the study. As full-face matchers, they used two COTS systems: PittPatt and VeriLook. BIB003 extracted features from different regions of the face (periocular, nose, mouth) and from the full face to study the impact of face changes due to gender transformation. They found that the periocular region greatly outperformed the other face components (nose, mouth) and the full face. They also observed (not reported in Table 4) that their periocular approach outperformed two Commercial Off-The-Shelf (COTS) full-face systems: PittPatt (by 76.83% in Rank-1 accuracy) and Cognitec FaceVACS (by 56.23%).
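As an illustration of the LBP-based quality experiments described above, the sketch below computes uniform LBP histograms on a clean and an artificially blurred image and compares them with a chi-square distance. The image, the LBP parameters (P=8, R=1) and the blur level are illustrative assumptions, not the exact settings of BIB001.

```python
# Sketch: LBP histograms under blur, in the spirit of the quality study
# above. All parameters are illustrative, not those of the cited works.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2  # P+1 uniform pattern labels plus one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def chi2_distance(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

gray = np.random.rand(128, 128)              # placeholder periocular crop
blurred = gaussian_filter(gray, sigma=1.5)   # simulated blur degradation

d = chi2_distance(lbp_histogram(gray), lbp_histogram(blurred))
print(f"chi-square distance, clean vs. blurred: {d:.4f}")
```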
Soft-biometrics, gender transformation and plastic surgery analysis
Besides the task of personal recognition, a number of other tasks have also been proposed using features from the periocular region, as shown in Table 5. Soft-biometrics refers to the classification of an individual into broad categories such as gender, ethnicity, age, height, weight, hair color, etc. While these cannot be used to uniquely identify a subject, they can reduce the search space or provide additional information to boost recognition performance. Due to the popularity of facial recognition, face images have frequently been used to obtain both gender and ethnicity information, with high accuracy (>96%; for a summary see BIB004). Recently, it has also been suggested that periocular features can be used for soft-biometrics classification BIB005 BIB004 BIB001 BIB002. With accuracies comparable to those obtained using the entire face, this indicates the effectiveness of the periocular region by itself for soft-biometrics purposes. BIB002 addressed gender classification using a database of 936 low-resolution images collected from the web (Flickr), reporting an 85% classification accuracy. BIB004 studied gender and ethnicity classification on the FRGC and MBGC databases, with an accuracy of 89% or higher in both classification tasks. In a previous paper, they also showed that fusing the soft-biometrics information with texture features from the periocular region can improve recognition performance BIB001. BIB005 studied the problem of gender classification with images from the FERET database, reporting a classification accuracy of 90%. An interesting study by BIB003 made use of shape features from the eyebrow region only, with very good results on the MBGC/FRGC databases comprising NIR/VW data (gender classification rates of 96% and 97%, respectively). Other studies concern the effect of plastic surgery or gender transformation on recognition performance, as presented in Section 6.3 (see Figure 4 as well). BIB007 studied the impact of gender transformation via Hormone Replacement Therapy (HRT), which causes gradual changes in the physical appearance of the face and body over the course of the treatment. A database of >1.2 million face images from YouTube videos was built, with data from 38 subjects undergoing HRT over a period of several months to three years. The periocular region greatly outperformed the other face components (nose, mouth) and the full face; moreover, face matchers began to fail after only a few months of HRT treatment. BIB006 studied the matching of face images before and after plastic surgery, proposing a fusion recognition approach that combines face and periocular information and outperforms previous studies where only full-face matchers were used.

Table 5. Overview of existing works on soft-biometrics, gender transformation and plastic surgery analysis using periocular features. The acronyms of this table are fully defined in the text or in the referenced papers. The following acronyms are not defined elsewhere: 'SVM'='Support Vector Machines'.
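A minimal version of such a soft-biometrics pipeline, in the spirit of the SVM-based gender classification with 5-fold cross-validation reported in BIB001, is sketched below; the feature vectors and labels are random placeholders standing in for periocular texture descriptors.

```python
# Sketch: gender classification from periocular texture features with an
# SVM and 5-fold cross-validation. Features and labels are synthetic
# placeholders standing in for LBP/HOG descriptors of periocular crops.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_samples, n_features = 400, 59          # e.g., one uniform-LBP histogram
X = rng.random((n_samples, n_features))
y = rng.integers(0, 2, n_samples)        # 0 = female, 1 = male (placeholder)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```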
Conclusions and future work
Periocular recognition has emerged as a promising trait for unconstrained biometrics after demands for increased robustness of face or iris systems, showing a surprisingly high discrimination ability. The fast-growing uptake of face technologies in social networks and smartphones, as well as the widespread use of surveillance cameras, arguably increases the interest in periocular biometrics. The periocular region has been shown to be more tolerant to variability in expression and occlusion, and to have a greater capability of matching partial faces. It also finds applicability in other areas such as forensic analysis (crime-scene images where perpetrators intentionally mask part of their faces). In such situations, identifying a suspect when only the periocular region is visible is one of the toughest real-world challenges in biometrics. Even in this difficult case, the periocular region can aid in the reconstruction of the whole face. This paper reviews the state of the art in periocular biometrics research. Our target is to provide comprehensive coverage of the existing literature, giving insight into the most relevant issues and challenges. We start by presenting the existing databases utilized in periocular research. Acquisition setups comprise digital cameras, webcams, video cameras, smartphones, or close-up iris sensors. A small number of databases contain video data of subjects walking through an acquisition portal, or in hallways or atria. There are also databases for particular problems, such as aging, plastic surgery effects, gender transformation effects, expression changes, or cross-spectral matching. However, the use of databases acquired with personal devices such as smartphones or tablets is limited, with recognition accuracy still some steps behind. The same can be said about surveillance cameras (Juefei-Xu and Savvides, 2012). New sensors are being proposed, such as Light Field Cameras, which capture multiple images at different focus in a single capture BIB004 BIB006, guaranteeing a well-focused image. Since the periocular modality requires less constrained acquisition than other ocular or face modalities, it is likely that the research community will move towards exploring ocular recognition at a distance and on the move in more detail than previous studies BIB012. Automatic detection and/or segmentation of the periocular region has been increasingly addressed as well, avoiding the need to segment the iris or detect the full face first (Table 2). Recently, the use of eye corners as reference points to define the periocular ROI has been suggested, instead of the eye center, since eye corners are less sensitive to gaze variations and also appear in closed eyes BIB002 BIB007 BIB008. We further review the features employed for periocular recognition, which comprise the majority of works in the literature; they can be classified into global and local approaches (Figure 3). Some works have also addressed the task of assessing whether some regions of the periocular area are more useful than others for recognition purposes.
This has been done both by asking humans BIB003 and by using several machine algorithms BIB005 , with humans and machines agreeing on the usefulness of the different parts. Automatic segmentation of periocular parts can help to discard those that are not useful, as well as other elements such as hair or glasses that can also deteriorate recognition performance, as shown in the first work presenting an algorithm to segment components of the periocular region. Since the periocular area appears in face and iris images, comparison and fusion with these modalities has also been proposed, with a review of related works given as well (Table 4). Fusion of multiple modalities using ocular data is a promising path forward that is receiving increasing attention BIB012 due to unconstrained environments where switching between available modalities may be necessary BIB001 . Soft-biometrics is another area where the periocular modality has found applicability, with periocular features showing accuracies comparable to those obtained by using the entire face for the tasks of gender and ethnicity classification (Table 5). The periocular modality has also been shown to aid or outperform face matchers in cases of plastic surgery or gender transformation. Other issues receiving increasing attention are cross-modality BIB009 , cross-spectral (Cao and Schmid, 2014) BIB010 BIB011 , hyperspectral BIB013 , and cross-sensor matching. The periocular modality also has the potential to allow ocular recognition at large stand-off distances (Cao and Schmid, 2014), with applications in surveillance. Samples captured with different sensors have to be matched if, for example, people are allowed to use their own smartphones or when surveillance cameras are employed, or when new or improved sensors have to co-exist with existing ones (cross-sensor), not to mention if the sensors work in different spectral ranges (cross-spectral). Iris images are traditionally acquired in the NIR spectrum, whereas face images are normally captured with VW sensors. The exchange of biometric information between different law enforcement agencies worldwide also poses similar problems. These are examples of scenarios where, if biometrics is extensively deployed, data acquired from heterogeneous sources will have to co-exist BIB001 . These issues are of high interest in new scenarios arising from the widespread use of biometric technologies and the availability of multiple sensors and vendor solutions. An important direction, therefore, is to enable heterogeneous periocular data to work together BIB012 .
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> The paper presents results of measurements and simulations concerning the application of the European GSM system in high speed trains travelling at up to 500 km/h. The aim is to answer the question to what extent GSM (performance specified up to 250 km/h) can cope with the high velocities which are demanded for future railways. Measurements along railway lines have shown that a railway mobile radio channel results in better performance (Rice channel) than standard mobile radio channels (Rayleigh or weak Rice channel, see GSM-Recs). BER and block error rate of GSM traffic channels up to 500 km/h are simulated. Comparison of the results at 250 km/h and 500 km/h shows that the GSM high velocity problem can be solved either by increasing the SNR by about 2 dB or by increasing the Rice parameter c by about 6 dB (numerical values for profile RA=rural area; railway channel with c=6 dB against standard channel with c=0 dB), i.e. the BER at 500 km/h (railway channel) is not worse than the BER at 250 km/h (standard channel). A simple example shows that the benefit in the transmission of telegrams consisting of blocks of decoded bits can be much higher. The desired channel performance, i.e. a strong direct path (high Rice parameter), can be achieved by careful radio coverage planning along the railway line. This means a GSM standard receiver is sufficient to cope with the GSM high velocity problem and no additional means are needed. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> This paper examines the railway environment from the point of view of the provision of 2nd generation voice services. It examines the radio environment including fading, Doppler, transients, and penetration loss into carriages, as well as special situations such as cuttings and tunnels. The paper reports on the operation of the IS-95 and GSM voice services at high speeds, both assuming track side base stations and non-track side base stations. The paper draws conclusions on the different types of environment encountered both for conventional and high speed rail lines, and the effect these factors have on the overall link budget. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> The paper analyzes the special characteristics of GSM mobile communications in systems where the mobile stations move at speeds up to 500 km/h. One of the principal propagation problems of the train track's special environment is the speed of the MS. Another problem is the short loss of communications during the handover process. The propagation environment of the train is very special. Typically, a high speed track is full of cuttings, tunnels, bridges, etc.; for this reason, the planning of the coverage must be done very carefully. The facilities and capabilities of GSM technology would be very interesting for railway applications, but GSM-R technology has not been commercially proved for high speed applications, so that a great effort is required to adapt GSM technology to these new applications. Although the changes in BTS and terminal are very small, the planning and network design is completely different from that of commercial networks. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I.
INTRODUCTION <s> With the development of high-speed railways and the public's growing demand for data traffic, much more attention is being paid to providing high data rate and highly reliable services under high-mobility circumstances. Due to its higher data rate and lower system latency, long-term evolution (LTE) has been chosen by the International Union of Railways as the next-generation evolution of the railway mobile communication system. However, there are still many problems to be solved in the high mobility applications of LTE, especially the higher handover failure probability, which seriously degrades the reliability of railway communication. This article proposes an optimized handover scheme, in which coordinated multiple point transmission technology and a dual vehicle station coordination mechanism are applied to improve the traditional hard handover performance of LTE. The scheme enables the high speed train to receive signals from both adjacent base stations and obtain diversity gain when it moves through the overlapping areas, so it improves the quality of the received signal and provides reliable communication between the train and ground eNodeBs. Numerical analysis and simulation results show that the proposed scheme can decrease the outage probability remarkably during handover and guarantee the reliability of train-to-ground communication. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> Traffic telematics applications are currently under intense research and development for making transportation safer, more efficient, and more environmentally friendly. Reliable traffic telematics applications and services require vehicle-to-vehicle wireless communications that can provide robust connectivity, typically at data rates between 1 and 10 Mb/s. The development of such VTV communications systems and standards requires, in turn, accurate models for the VTV propagation channel. A key characteristic of VTV channels is their temporal variability and inherent non-stationarity, which has a major impact on data packet transmission reliability and latency. This article provides an overview of existing VTV channel measurement campaigns in a variety of important environments, and the channel characteristics (such as delay spreads and Doppler spreads) therein. We also describe the most commonly used channel modeling approaches for VTV channels: statistical as well as geometry-based channel models have been developed based on measurements and intuitive insights. Extensive references are provided. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> We present a survey of approaches for providing broadband Internet access to trains. We examine some of the barriers that hinder the use of broadband Internet on trains and then discuss some of the opportunities for broadband deployment to trains. This survey considers some of the basic concepts for providing broadband Internet access and then reviews associated network architectures. The review of network architectures shows that we can subdivide networks for providing broadband Internet access to trains into the train-based network, the access network (for connecting the train to the service provider(s)), and the aggregation network (for collecting user packets generated in the access network for transmission to the Internet).
Furthermore, our review shows that the current trend is to provide Internet access to passengers on trains using IEEE 802.11; however, a clear method for connecting trains to the global Internet has yet to emerge. A summary of implementation efforts in Europe and North America serves to highlight some of the schemes that have been used thus far to connect trains to the Internet. We conclude by discussing some of the models developed, from a technical perspective, for testing the viability of deploying Internet access to trains. <s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> This paper proposes a communication system using Wi-Fi (IEEE 802.11g) to link between the Internet and high speed rail systems traveling at around 300 km/h. In order to adapt Wi-Fi for high speed mobile communication, we optimized its coverage on a rail track with a developed directional antenna, which has a communication range of around 500 m with 10 mW. With the antenna, however, a mobile entity had to switch over antennae (a layer 2 handover (L2HO)) every 6 to 7 seconds. Furthermore, Mobile IP handovers (a layer 3 handover (L3HO)) had to be appropriately controlled to avoid a simultaneous handover of Layers 2 and 3, which results in a fatal communication disruption. Therefore, the designed system in this paper separated the L3HO from the L2HO. As a result, a maximum of 25 Mbps with an average of 16 Mbps for the UDP throughput and an average L2HO time of 110 ms were realized while travelling at 270 km/h. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> Distributed antenna technology, as one of the important next-generation wireless communication technologies, has aroused extensive attention. The technology has been applied in high-speed movement environments. Due to the high density coverage of distributed antennas, almost any location in the area has a line-of-sight (LOS) path to at least one fixed antenna. However, it may correspondingly result in smaller overlaps between adjacent cells and a higher probability of handover failure in high-speed movement scenarios. In order to solve these problems, this paper proposes a novel handover scheme based on on-vehicle dual antennas for the high-speed railway distributed antenna system (DAS). On-vehicle antennas, which collaborate with each other, are mounted on the top of the high-speed train (one at the front end and the other at the rear end). The proposed scheme utilizes distributed transceivers and centralized processing technology. The numerical analysis results show that the novel scheme can pre-trigger handover appropriately, guarantee a higher handover success rate, and increase the system throughput by around 50%. In addition, the scheme is feasible and easy to implement. <s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> Due to the rapid development of high-speed railways all over the world, it is very promising to deliver a public broadband wireless network to passengers aboard high-speed trains. However, deploying a conventional cellular network along the railway will lead to low coverage efficiency and radio resource waste, where a large part of the service coverage area may not have any user traffic at all. A model for analyzing the coverage efficiency of the conventional network and the Radio over Fiber (RoF) network is presented in this paper.
Based on certain parameters of the Chinese high-speed railway scenario, simulation results demonstrate that the coverage efficiency of the RoF network improves greatly compared with the conventional network. In addition, the number of Remote Antenna Units (RAUs) mounted along the railway may be less than the number of antennas installed on the roof of the high-speed train due to initial infrastructure cost. The optimal match of the antennas and RAUs is proposed to maximize the coverage efficiency in this paper. The conclusions provide observations to guide RoF cell planning for high-speed railways. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> This tutorial paper provides a comprehensive overview of the recent development in broadband wireless communications for high speed trains. Starting with the introduction of the two-hop network structure, radio-over-fiber (RoF) based cell planning is described in detail. Moreover, based on the analysis of differences between conventional cellular systems and the one for high speed trains, promising techniques are recommended to improve the performance of handover, which is one of the main challenges in high speed train communications. Finally, in order to combat the fast fading caused by the high mobility, robust algorithms are needed in physical layer signal processing, including synchronization, channel estimation, modulation/demodulation, and so on. <s> BIB010 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> In this paper, we propose a new concept called the mobile Femtocell (MFemtocell) network, which can be considered as a practical implementation of mobile relays (more precisely, moving networks). MFemtocells can be deployed in moving vehicles, such as trains, buses, or private cars, to provide enhanced user throughput, extended coverage, and a reduction of the signaling overhead and dropped calls. We investigate the spectral efficiency of cellular systems with MFemtocell deployment and two resource partitioning schemes. Simulation results demonstrate that with the deployment of MFemtocells, the spectral efficiency and average user throughput can be significantly increased while the signaling overhead is reduced. <s> BIB011 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> With the deployment of high speed train (HST) systems increasing worldwide and their popularity with travelers growing, providing broadband wireless communications (BWC) in HSTs is becoming crucial. In this paper, a tutorial is presented on recent research into BWC provision for HSTs. The basic HST BWC network architecture is described. Two potential cellular architectures, microcells and distributed antenna systems (DASs) based cells, are introduced. In particular, the DAS is discussed in conjunction with radio over fiber (RoF) technology for BWC for HSTs. The technical challenges in providing DAS-based BWC for HSTs, such as handoff and RoF, are discussed and outlined. <s> BIB012 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> 3GPP has completed a study on coordinated multipoint transmission and reception techniques to facilitate cooperative communications across multiple transmission and reception points (e.g., cells) for the LTE-Advanced system.
In CoMP operation, multiple points coordinate with each other in such a way that the transmission signals from/to other points do not incur serious interference, or can even be exploited as a meaningful signal. The goal of the study is to evaluate the potential performance benefits of CoMP techniques and the implementation aspects, including the complexity of the standards support for CoMP. This article discusses some of the deployment scenarios in which CoMP techniques will likely be most beneficial and provides an overview of CoMP schemes that might be supported in LTE-Advanced given the modern silicon/DSP technologies and backhaul designs available today. In addition, practical implementation and operational challenges are discussed. We also assess the performance benefits of CoMP in these deployment scenarios with traffic varying from low to high load. <s> BIB013 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> The recent advent of high speed trains introduces new mobility patterns in wireless environments. The LTE-A (Long Term Evolution of 3GPP - Advanced) networks have largely tackled the Doppler effect problem in the physical layer and are able to keep wireless service with 100 Mbps throughput within a cell at speeds up to 350 km/h. Yet the much more frequent handovers across cells greatly increase the possibility of service interruptions, and the problem is prominent for multimedia communications that demand both high throughput and continuous connections. In this paper, we present a novel LTE-based solution to support high throughput and continuous multimedia services for high speed train passengers. Our solution is based on a Cell Array that smartly organizes the cells along a railway, together with a femto cell service that aggregates traffic demands within individual train cabins. Given that the movement direction and speed of a high-speed train are generally known, our Cell Array effectively predicts the upcoming LTE cells in service, and enables a seamless handover that will not interrupt multimedia streams. To accommodate the extreme channel variations, we further propose a scheduling and resource allocation mechanism to maximize the service rate based on periodical signal quality changes. Our simulation under diverse network and railway/train configurations demonstrates that the proposed solution achieves much lower handover latency and higher data throughput, as compared to existing solutions. It is also resilient to network and traffic dynamics, thus enabling uninterrupted quality multimedia services for passengers in high speed trains. <s> BIB014 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> Provision of high data rate services on trains has attracted great attention recently. In this paper, the issues of time division duplex (TDD) DAS, including the coverage of remote antenna units (RAUs), the echo channel effect, and the system deployment cost, are analyzed. The timing drift problem that arises when solving the echo channel effect with the moving-cell concept in DAS is also depicted in detail. Furthermore, the frequency response, propagation model, and time dispersion parameters of the RoF-DAS channel are analyzed with simulation. This paper provides an analyzed RoF-DAS channel profile for high-speed railway communication systems for future research. <s> BIB015 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I.
INTRODUCTION <s> Long Term Evolution (LTE) is considered to be the natural evolution for the current Global System for Mobile Communications-Railways (GSM-R) in high speed railway environments, not only for its technical advantages and increased performance, but also due to the current evolution of public communication systems. In railway environments, mission critical services, operation assistance services, and passenger services must be supported by reliable mobile communication systems. Reliability and availability are key concerns for railway operators and, as a consequence, railway operators are usually conservative adopters of information and communication technologies (ICT). This paper describes the feasibility of LTE as a successor to GSM-R for new railway mobile communication systems. We identify key features of LTE as a technology and analyze its ability to support both the migration of current railway services and the provisioning of potential future ones. We describe the key challenges to address specific requirements for railway communication services, including the provisioning of voice service in LTE networks, handover performance, multicast multimedia transmission, and the provisioning of group communications service and railway emergency calls. <s> BIB016 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> Global System for Mobile Communications-Railways (GSM-R) has been the de facto standard for wireless communications in the field of High Speed Railway (HSR). However, due to the increasing requirements associated with HSR, Long Term Evolution for Railways (LTE-R) has been presented as the following wireless communication system. In this paper, a complete performance analysis of LTE for HSR is carried out, giving results for both the Physical (PHY) and Medium Access Control (MAC) layers in order to identify appropriate Quality of Service (QoS) requirements. An adequate Rician channel model with a time-varying Doppler frequency offset has been integrated in a Downlink LTE simulator using the Wireless Mobile SIMulator (WM-SIM) platform. The effect of Inter-Carrier Interference (ICI) degrades the Bit Error Rate (BER), whereas the reduced coherence time makes channel state information obsolete, reducing the performance of adaptive modulation and coding (AMC). For this reason, open-loop Multiple Input Multiple Output (MIMO) techniques are attractive. Results illustrate the benefits of LTE in the HSR scenario. <s> BIB017 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> When a train speeds up to 350 km/h, providing continuous wireless coverage is challenging due to a number of critical issues, e.g., frequent handovers and drop-offs. To address this problem, this paper proposes a novel handover scheme based on dual antennas and a Mobile Relay Station (MRS) for the High Speed Railway (HSR) Distributed Antenna System (DAS). The scheme enables the dual antennas controlled by the MRS to receive signals from multiple Remote Antenna Units (RAUs), thus obtaining diversity gain when the train moves within one logic cell. When the train runs through the edge of a logic cell, a dual-antenna handover scheme is adopted to enhance the handover performance. Thereby, the proposal improves the quality of the received signal and provides reliable communication for the train-to-ground network.
The numerical analysis and simulation results show that the proposed handover scheme can reduce the handover frequency dramatically and provide seamless access for HSR compared to the standard LTE handover scheme. <s> BIB018 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> The fourth generation wireless communication systems have been deployed or are soon to be deployed in many countries. However, with an explosion of wireless mobile devices and services, there are still some challenges that cannot be accommodated even by 4G, such as the spectrum crisis and high energy consumption. Wireless system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore have started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also discussed. <s> BIB019 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> High-speed railway (HSR) brings convenience to people's lives and is generally considered as one of the most sustainable developments for ground transportation. One of the important parts of HSR construction is the signaling system, which is also called the "operation control system," where wireless communications play a key role in the transmission of train control data. We discuss in detail the main differences in scientific research for wireless communications between the HSR operation scenarios and the conventional public land mobile scenarios. The latest research progress in wireless channel modeling in viaduct, cutting, and tunnel scenarios is discussed. The characteristics of the nonstationary channel and of the line-of-sight (LOS) sparse and LOS multiple-input-multiple-output channels, which are the typical channels in HSR scenarios, are analyzed. Some novel concepts such as composite transportation, and key challenging techniques such as train-to-train communication, vacuum maglev train techniques, security for HSR, and fifth-generation wireless communications related techniques for future HSR development, are also discussed for safer, more comfortable, and more secure HSR operation. <s> BIB020 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> High-speed railways (HSRs) have been widely introduced to meet the increasing demand for passenger rail travel. While providing more and more convenience to people, the huge cost of the HSR has laid a big burden on government finances. Reducing the cost of HSR has become necessary and urgent. Optimizing the arrangement of base stations (BSs) by improving the prediction of the communication link is one of the most effective methods, as it could reduce the number of BSs to a reasonable level. However, it requires a carefully developed propagation model, which has been largely neglected before in the research on the HSR. In this paper, we propose a standardized path loss/shadow fading model for HSR channels based on an extensive measurement campaign in 4594 HSR cells.
The measurements are conducted using a practically deployed and operative GSM-Railway (GSM-R) system to reflect the real conditions of the HSR channels. The proposed model is validated by measurements conducted on a different operative HSR line. Finally, a heuristic method to design the BS separation distance is proposed, and it is found that using an improved propagation model can theoretically save around 2/5 of the cost of the BSs. <s> BIB021 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> I. INTRODUCTION <s> In this paper, the bit error rate (BER) performance of spatial modulation (SM) systems under a novel 3-D vehicle-to-vehicle (V2V) multiple-input multiple-output (MIMO) channel model is investigated both theoretically and by simulations. The impact of vehicle traffic density, the Doppler effect, and 3-D and 2-D V2V MIMO channel models on the BER performance is thoroughly investigated. Simulation results show that the performance of SM is mainly affected by the spatial correlation of the underlying channel model. Compared with other MIMO technologies, the SM system can offer a better tradeoff between spectral efficiency and system complexity. <s> BIB022
HIGH-MOBILITY scenarios, e.g., high-speed train (HST) and vehicle-to-vehicle (V2V) scenarios, are expected to be typical scenarios for the fifth generation (5G) wireless communication systems BIB019 . Unlike V2V communication channels, which have been thoroughly investigated in the literature BIB005 - BIB022 , a comprehensive study of HST communication channels is still missing. With the rapid development of HSTs, an increasing volume of wireless communication data is required to be transferred to train passengers. HST users demand high network capacity and reliable communication services regardless of their locations or speeds. To satisfy these demands, HST wireless communication systems have to overcome many challenges resulting from the high speed of the train, which can easily exceed 250 km/h, such as fast handover, fast travel through diverse scenarios, and large Doppler spreads BIB002 , BIB006 , besides some challenges inherited from conventional trains, such as high penetration losses, limited visibility in tunnels, and the harsh electromagnetic environment BIB020 . Since 1998, the Global System for Mobile Communication Railway (GSM-R) has been widely adopted as the European standard for train communications and control. However, GSM-R can only provide a data rate of up to 200 kbps BIB001 , besides the fact that it is mainly used for train control rather than providing communications for train passengers BIB003 . Therefore, GSM-R cannot meet the requirements for future high speed data transmissions, and the International Union of Railways has recommended that GSM-R be replaced by Long-Term Evolution-Railway (LTE-R) BIB016 - BIB017 , which is a broadband railway wireless communication system based on LTE-Advanced (LTE-A) . Nevertheless, both systems still adopt the conventional cellular architecture, where mobile stations (MSs) inside trains communicate directly with outdoor base stations (BSs). Such an architecture leads to spotty coverage and high penetration losses of wireless signals traveling through the metal carriages of HSTs. In addition, the received signals at MSs on board will experience fast-changing channels, resulting in high signaling overhead and a high possibility of dropped calls and handover failures BIB007 . The aforementioned problems can be mitigated by deploying other cellular architectures, such as distributed antenna system (DAS) BIB012 - BIB008 , coordinated multipoint (CoMP) BIB013 , , mobile relay station (MRS) BIB010 - BIB014 (or mobile femtocell BIB019 , BIB011 , ) technologies, or a combination of these architectures, e.g., DAS with MRS BIB018 or CoMP with MRS BIB004 . In a DAS, distributed antenna elements are connected to a BS via wires or fibers (radio over fiber (RoF)) , BIB009 to provide a considerable gain in coverage and capacity in comparison with the conventional cellular architecture. The spatially separated antenna elements can be used to transmit the same signal at different locations to provide spatial diversity against the fading. Combined with spatial diversity, frequency reuse in the DAS is an effective technique to increase system capacity. The enhancement in spectral efficiency of DASs in comparison with conventional systems was presented in BIB012 . In BIB015 , the authors analyzed the deployment of DAS over HST communication systems and some of the resulting problems, such as the coverage of the remote antenna units (RAUs) and the echo channel effect.
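A major motivation for the MRS architecture mentioned above (and detailed in the next paragraph) is that the end-to-end rate of a relayed link is set by its weaker hop. The following minimal sketch is our own illustration rather than anything taken from the survey or its references; the bandwidth and SNR values are assumptions. It uses Shannon capacities for a two-hop BS-MRS-MS link:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR) of a single link."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

def two_hop_throughput_bps(bw_hz, snr_bs_mrs, snr_mrs_ms, half_duplex=True):
    """Decode-and-forward bound: the weaker hop limits the end-to-end rate;
    a half-duplex relay additionally halves the usable time per hop."""
    bottleneck = min(shannon_capacity_bps(bw_hz, snr_bs_mrs),
                     shannon_capacity_bps(bw_hz, snr_mrs_ms))
    return 0.5 * bottleneck if half_duplex else bottleneck

db_to_linear = lambda db: 10.0 ** (db / 10.0)

# Assumed example: 10 MHz bandwidth, 20 dB outdoor BS-MRS SNR, 30 dB indoor MRS-MS SNR
print(two_hop_throughput_bps(10e6, db_to_linear(20), db_to_linear(30)) / 1e6, "Mbps")
```

Under these assumed numbers the outdoor BS-MRS hop is the bottleneck, which is consistent with the architectural argument that the BS should concentrate on serving the MRS at a high rate while the short indoor hop is comparatively benign.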
In CoMP systems, the transmissions of neighboring BSs are coordinated in the downlink, while the received signals in the uplink are jointly processed. This reduces the inter-cell interference and improves the cell-edge throughput. CoMP systems also provide an enhanced channel capacity by exploiting the statistically independent properties of the channels resulting from the wide spatial separation of antenna elements. Adopting the mobile femtocell architecture in HST communication systems can be performed by deploying dedicated MRSs on the surface of the train to extend the coverage of the outdoor BS into train carriages. As a result, we will have two channels: an outdoor channel between the BS and the MRS, and an indoor one between the MRS and an MS of a train passenger, as illustrated in Fig. 1 . In this case, the BS will mainly communicate with the MRS at high data rates instead of communicating with large numbers of MSs directly. An MRS and its associated MSs within a train carriage are all viewed as a single unit by the BS, while the MSs see the relevant MRS as a regular BS. It follows that an MRS can perform a group handover on behalf of all its associated MSs, which can greatly reduce the frequent handover burden of the HST system . Since the complexity of radio resource allocation (i.e., transmit power, data rates, scheduling, power and frequency allocation, and antenna selection) in a BS is related to the number of active users BIB012 , the radio resource management complexity in one BS will be reduced significantly when dealing with a "group of users" rather than individuals. This promising MRS technology has been adopted by the IMT-Advanced (IMT-A) and WINNER II channel models. Moreover, the transmitter (Tx) and receiver (Rx) of an HST wireless communication system encounter different channel conditions due to differences in the surrounding geographical environments. The HST environment can generally be classified into the following main scenarios: open space, viaduct, cutting, hilly terrain, tunnels, and stations. Considering some unique setups of the aforementioned scenarios and some other special HST scenarios, the HST environment can be further classified into 12 scenarios . HSTs can operate across one or more of these scenarios during their travel. The propagation characteristics change significantly with the change of environments and the distance between the Tx and Rx, even in the same terrain. Scenarios have a close relationship with channel modeling and measurements. Most standard channel models in the literature, like UMTS , COST 2100 , and IMT-2000 , failed to introduce any of the HST scenarios. The moving networks scenario in the WINNER II channel model and the rural macro-cell (RMa) scenario in the IMT-A channel model have only considered a rural environment for HSTs, while neglecting other HST scenarios. The aforementioned propagation scenarios will be introduced and explained in detail in Section II. The features of HST channels, e.g., non-stationarity and large Doppler shifts, significantly differ from those of low-mobility mobile cellular communication channels. Therefore, many measurement campaigns have been conducted in the literature to understand the underlying physical phenomena in HST propagation environments. Accurate channel models that are able to mimic key characteristics of wireless channels play an important role in designing and testing HST communication systems.
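As a back-of-the-envelope illustration of the large Doppler shifts mentioned above (our own sketch, not taken from any cited measurement; the speeds and carrier frequencies are assumptions chosen to match typical HST figures), the maximum Doppler shift of a receiver moving at speed v with carrier frequency f_c is f_D = (v/c) f_c:

```python
C = 3.0e8  # speed of light in m/s

def max_doppler_hz(speed_kmh: float, carrier_hz: float) -> float:
    """Maximum Doppler shift f_D = (v / c) * f_c for a receiver at speed v."""
    v_ms = speed_kmh / 3.6  # km/h -> m/s
    return v_ms / C * carrier_hz

print(round(max_doppler_hz(500, 930e6)))   # GSM-R band at 500 km/h: ~431 Hz
print(round(max_doppler_hz(350, 2.35e9)))  # 2.35 GHz band at 350 km/h: ~762 Hz
```

Note also that the instantaneous shift swings from +f_D to -f_D within seconds as the train passes a BS, which is one reason HST channels are treated as non-stationary.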
Realistic and reliable large-scale fading channel models, i.e., path loss (PL) and shadow fading (SF) models, are indispensable for efficient and trustworthy network deployment and optimization. Small-scale fading channel models are crucial in physical layer design in order to develop and test different transmission schemes, such as transmission/reception diversity, error correction coding, interleaving, and equalization algorithms. Inaccurate channel models may lead to over-optimistic or over-pessimistic performance evaluation results that will result in misjudgments in product development. Moreover, inaccurate channel models may lead to inaccurate link budgets that will result in huge errors in the estimated maximum distance between adjacent BSs. Consequently, this will cause poor coverage and increased dropped calls due to failed handovers between BSs when the distance is overestimated, and an unnecessarily overlapped coverage area with unjustified installation and maintenance costs of the extra installed BSs when the distance is underestimated BIB021 . In the literature, several large-scale and small-scale fading HST channel models have been proposed. This article will focus on the recent advances in HST channel measurements and modeling and their future challenges. The rest of this paper is organized as follows. In Section II, an overview of HST channel measurements is provided. The state-of-the-art of HST channel models is presented in Section III. Future research directions in HST channel measurements and models are outlined in Section IV. Finally, concluding remarks are highlighted in Section V.
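The link-budget sensitivity described above can be made concrete with a short sketch (our own illustration; the reference loss, budget, and exponents are assumed values, with the exponent 4.3 loosely echoing the cutting measurements reviewed in Section II) that inverts a log-distance PL model to estimate the maximum usable BS-Rx distance:

```python
import math

def path_loss_db(d_m: float, n: float, pl0_db: float = 31.5, d0_m: float = 1.0) -> float:
    """Log-distance model PL(d) = PL(d0) + 10*n*log10(d/d0); lognormal
    shadow fading would add a zero-mean Gaussian term in dB on top."""
    return pl0_db + 10.0 * n * math.log10(d_m / d0_m)

def max_range_m(max_pl_db: float, n: float, pl0_db: float = 31.5, d0_m: float = 1.0) -> float:
    """Invert the model for the largest distance whose PL stays within budget."""
    return d0_m * 10.0 ** ((max_pl_db - pl0_db) / (10.0 * n))

print(round(path_loss_db(1000, 3.5), 1), "dB at 1 km")  # sanity check: 136.5 dB
budget_db = 140.0  # assumed maximum tolerable path loss
for n in (3.5, 4.3):  # assumed exponents, e.g., viaduct-like vs. cutting-like
    print("n =", n, "->", round(max_range_m(budget_db, n)), "m")
```

Moving the assumed exponent from 3.5 to 4.3 shrinks the estimated range from about 1259 m to about 334 m, nearly a factor of four, which is exactly the kind of error that translates into mis-planned BS separations.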
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> We present and analyse the results of wideband radio channel measurements performed in tunnels. Both a high speed train tunnel and a smaller test tunnel have been investigated, with both antennas and leaky feeders as fixed radiators. The results show typical features of the tunnel radio channel, with typically low delay spread combined with significant slow fading of the LOS signal due to interferences. The delay spread may increase substantially during the fading dips. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper covers some of the work carried out in the planning of the global system for mobile communication for railway (GSM-R) of the tunnels on the new high-speed lines in Spain. Solutions based on distributed antenna systems have been tested by installing several 900-MHz transmitters inside and outside of a 4000-m tunnel and measuring the propagation in different conditions. The measurements have been used to model the effects of tunnel propagation, including curves, trains passing from the outside to the inside, and the effect of two trains passing inside the tunnel. All cases have been tested by comparing solutions using isofrequency and multifrequency distributed transmitters inside the tunnel. The improvements in signal-to-noise ratio and the reduction of the blocking effects of two trains passing have demonstrated the advantages of using isofrequency distributed antenna systems in tunnels. Finally, a complete propagation model combining both modal analysis and ray tracing has been applied to predict the propagation loss inside and outside these tunnels, and the results have been compared with the measurements. The model has proven to be very useful for radio planning in new railway networks. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper focuses on propagation path loss modeling in viaduct and plain scenarios of the High-speed Railway. The data used for modeling come from measurements on the Zhengzhou-Xi'an passenger dedicated line with a maximum moving speed of 340 km/h. Based on the measurement data, tuned free-space path loss models in these two scenarios are proposed. The performance of the tuned models is compared with that of the Hata model. The evaluation of the models is in terms of the mean error, root mean square error, and standard deviation of the residuals between the models and the measurements. The simulation results and related analysis show better performance of the proposed tuned models compared with the conventional Hata model. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper presents the results of path loss measurements in the "Zhengzhou-Xi'an" high-speed railway environment in the 930 MHz band. A transmitter directional antenna height of 20-30 meters above the rail surface and a receiver omni-directional antenna height of 3.5 meters were used on high-speed viaducts with heights of 10-30 meters above the ground. An automatic acquisition system was utilized in the measurements. The model makes distinctions among different terrains.
The results of the measurements provide practical values for the path loss exponent and the standard deviation of shadowing, affected by the viaduct factor, in the suburban, open area, mountain area, and urban propagation regions where the high-speed trains travel. Based on the measurement data, an empirical path loss model was developed, which could be used for predicting the path loss for future railway communication systems and provide facilities for network optimization. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> A high performance wireless network is essential for railway communication and control systems. Research on the fading characteristics in the railway environment is of great importance for the design of the railway wireless network. In this paper, measurements are taken in a railway terrain cuttings area using track side base stations of the GSM-R network. The fitted path loss model, shadow fading, and dynamic range of the small scale fading are obtained and compared to the results of the viaduct scenario. The propagation environment of the terrain cuttings turns out to be worse than the viaduct area. The path loss exponent is found to be 4.3. The shadow loss can be reasonably described by a log-normal distribution. It is also found that the bridges over the cuttings can cause an extra loss of about 5 dB. The dynamic range of the small scale fading is from 27 dB to 40 dB with a mean value of about 33 dB. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Near-ground channel characterization is an important issue in most military applications of wireless sensor networks. However, the channel at the ground level lacks characterization. In this paper, we present a path loss model for three near-ground scenarios. The path loss values for each scenario were captured through extensive measurements, and then a least-square linear regression was performed. This indicates that the log-distance-based model is still suitable for path loss modeling in near-ground scenarios, and the prediction accuracy of the two-slope model is superior to that of the one-slope model. The validity of the proposed model was further verified by comparisons between the predicted and measured far-field path losses. Finally, compared to the generic models, the proposed model is more effective for path loss prediction in near-ground scenarios. <s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Accurate characterization of the radio channel in tunnels is of great importance for new signaling and train control communications systems. To model this environment, measurements have been taken at 2.4 GHz in a real environment in the Madrid subway. The measurements were carried out with four base station transmitters installed in a 2-km tunnel and using a mobile receiver installed on a standard train. First, with an optimum antenna configuration, all the propagation characteristics of a complex subway environment, including near shadowing, path loss, shadow fading, fast fading, level crossing rate (LCR), and average fade duration (AFD), have been measured and computed. Thereafter, comparisons of the propagation characteristics in a double-track tunnel (9.8-m width) and a single-track tunnel (4.8-m width) have been made.
Finally, all the measurement results have been shown in a complete table for accurate statistical modeling. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> LTE is expected to replace GSM as the basis technology for railway communications. Recently, special attention has been devoted to HST, as this particular environment (mainly due to the high speed condition) can severely impact wireless system performance. Although several channel models have been derived during the last few years, most of them are not accurate enough as they are not supported by measurement campaigns. In this paper, the main requirements for HST environments are analyzed and a flexible, cost-affordable, and easily-scalable software and hardware architecture for a test bed suitable for assessing LTE at high speeds is proposed. <s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> The high-speed railway propagation channel has a significant effect on the design and performance analysis of wireless railway control systems. An important feature of high-speed railway communications is the usage of directional transmitting antennas, due to which the receiver may experience strong attenuation of the line-of-sight (LOS) path under the base station (BS). This leads to a drop, and strong variations, of the signal strength under the BS. While the physical origin of the signal variations is different from conventional shadowing, it can be described by similar statistical methods. However, the effect has been largely neglected in the literature. In this paper, we first define the region of the bottom of the BS, and then present a simple shadow fading model based on measurements performed on high-speed railways at 930 MHz. It is found that the bottom area of the BS has a range of 400 m - 800 m; the standard deviation of the shadowing also follows a Gaussian distribution; and the double exponential model fits the autocovariance of the shadow fading very well. We find that the directivity of the transmitting antenna leads to a higher standard deviation of shadowing and a smaller decorrelation distance under the BS compared to the region away from the BS. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> High-speed railway (HSR), as an important deployment scenario for both present and future mobile wideband radio communication systems, has attracted more and more attention all over the world with the rapidly increasing demand for high data rate communication services while traveling. For the purpose of capturing the wideband channel characteristics of HSR, a channel measurement campaign was conducted at the center frequency of 2.35 GHz with 50 MHz bandwidth in the U-shape cutting scenario of the Zhengzhou-Xi'an HSR line in China. Based on the field measured data, we analyze the small scale characteristics in detail, which mainly include the path number, root mean square delay spread (rms DS), and Doppler shift. It is found that the distribution of the path number is well fitted by a Gamma distribution. The statistics of the rms DS in the U-shape cutting scenario are larger than the results in other HSR scenarios. In addition, an increasing tendency of the rms DS against the transmitter-to-receiver distance is observed and can be modeled by a linear function.
Finally, the Doppler frequency shift is verified and meets the theoretical value. <s> BIB010 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper focuses on the fading characteristics of the wireless channel on the High-Speed Railway (HSR) in the hilly terrain scenario. Due to the rapid speed, the fading characteristics of the HSR channel are highly correlated with time, or the Transmit-Receive distance, and have their own special properties. To investigate the fading characteristics, a measurement is conducted on the Guangzhou-Shenzhen passenger-dedicated line in China at a speed of 295 km/h in the data-collection area at 2.4 GHz. From the measured data, the amplitude of each path is estimated by using the Space-Alternating Generalized Expectation-Maximization (SAGE) algorithm along with other parameters of the channel impulse responses. Then the fading parameters, including path loss, shadow fading, and the K-factor, are analysed. With the numerical results of the measurement and analysis, the fading characteristics have been revealed and modelled. This work is expected to promote HSR communication system design and improvement. <s> BIB011 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Compared with the leaky feeder, distributed antenna systems (DAS) are treated as a more economical and promising solution to support radio coverage in tunnels. Based on measurements performed in realistic subway tunnels in Madrid at 2.4 GHz, a statistical model for the propagation in tunnels is presented. Two groups of measurements (conducted in subway tunnels and railway tunnels, respectively) are employed to validate the model. The results in this paper could be helpful for network planning and interference analysis in the design of DAS in tunnels. <s> BIB012 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper presents the nonisotropic scattering characteristics of the mobile radio channel in an alternant tree-blocked viaduct scenario on a high-speed railway (HSR), obtained by real field measurements at 2.35 GHz. An angle of arrival (AOA) probability density function (PDF) is proposed for the nonuniform AOA at the mobile caused by stochastically distributed scatterers. Two Von Mises angular distributions with broad applicability are used to represent the line of sight (LOS) component and part of the scattering component in the AOA model. Based on such a PDF, the statistical characteristics of the Ricean factor, K, and the AOA of the scattering component are modeled in the LOS and obstructed line of sight (OLOS) cases, respectively. The results may give a meaningful and accurate channel model and could be utilized in HSR viaduct scenario evaluation. <s> BIB013 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> A semiempirical multiple-input multiple-output (MIMO) channel model is proposed for high-speed railway (HSR) viaduct scenarios. The proposed MIMO model is based on the combination of realistic single-input single-output (SISO) channel measurement results and a theoretical geometry-based stochastic model (GBSM). Temporal fading characteristics involving the K-factor and Doppler power spectral density (PSD) are derived from the wideband measurement under an obstructed viaduct on the Zhengzhou-Xi'an HSR in China.
The GBSM, composed of a one-ring model and an elliptical model, is employed to describe the entire propagation environment. Environment-related parameters in the GBSM are determined by the measured temporal fading properties, and a close agreement is achieved between the model results and the measured data. Finally, a deterministic simulation model is established to perform the analysis of the space-time correlation function, the space-Doppler PSD, and the channel capacity for the measured scenario. This model is more realistic and particularly beneficial for the performance evaluation of MIMO systems in HSR environments. <s> BIB014 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> This paper presents results for delay and Doppler spread characterization in the high-speed railway (HSR) hilly scenario. To investigate the propagation characteristics in this specific terrain, a measurement campaign is conducted along the "Guangzhou-Shenzhen" HSR in China. A wideband channel sounder with 40 MHz bandwidth is used to collect raw data in the 2.4 GHz band. The delay spread and Doppler frequency features are analyzed based on the measured data. It is found that there are abundant multipath components (MPCs) in this scenario. We present the relationship between the delay spreads and the transceiver distances. The measured route can be divided into four areas with different delay and Doppler characteristics. Finally, a tapped delay line (TDL) model is proposed to parameterize the channel responses in the HSR hilly environment, which is supposed to provide criteria for evaluations of the radio interface and the development of wireless communication systems. <s> BIB015 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Train stations are one of the most common structures along a high-speed railway. They can block the line of sight (LOS), generate multiple reflected and scattered waves, and aggravate the fading behavior; however, these effects have rarely been investigated. This paper presents a group of 930-MHz measurements conducted at train stations of high-speed railways in China. The whole process of a train passing stations has been measured with two typical types of stations. The results indicate that, when the station is far from the transmitter (Tx), the semi-closed station (in which the awnings cover both the platforms and the rails) influences the propagation much more seriously than the open station (in which the awnings only cover the platforms, supporting a clear free space over the tracks). When the station is near the Tx, whether the train keeps the LOS and stays inside the station determines the propagation for both types of stations. All the propagation characteristics, including extra propagation loss, shadow fading, small-scale fading, level crossing rate (LCR), average fade duration (AFD), and fading depth (FD), have been measured and computed for the first time. Specific findings on the propagation characteristics in the train station scenario are provided. Afterward, by filling the gap of the train station scenario, a table is made to establish a comprehensive understanding of the main scenarios on the high-speed railway. Furthermore, comparisons of the propagation characteristics between the train station scenario and ten standard scenarios are made to emphasize the significance of modeling exclusively for the train station scenario.
Finally, rules for the influence of four conditions are quantitatively revealed. The measured results and quantitative analysis are significant for leading the simulation and design of signaling and train control communications systems toward reality. <s> BIB016 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Train stations are one of the largest and most unavoidable obstructions for electromagnetic wave propagation on a high-speed railway. They can bring about severe extra propagation loss and, therefore, lead to poor coverage or handover failure. However, their influence has rarely been investigated before. Based on rich experimental results of 930 MHz measurements conducted at train stations of the high-speed railway in China, this paper proposes, for the first time, two empirical models for the extra propagation loss owing to train stations. The extra loss depends on four conditions: the distance between the transmitter (Tx) and the train station, the type of the train station, the track carrying the train, and the propagation mechanism zones. Hence, the models are established for every case of all the combinations of these four conditions. The validation shows that the proposed models accurately predict the extra propagation loss and support an effective way to involve the influence of the train station in the simulation and design of signaling and train control communications systems. <s> BIB017 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> Bridges that cross a railway's right-of-way are one of the most common obstacles for wave propagation along a high-speed railway. They can lead to poor coverage or handover failure but have rarely been investigated before. To describe the influence of this nonnegligible structure on propagation, measurements have been taken at 930 MHz along a real high-speed railway in China. Based on different mechanisms, the entire propagation process is presented by four zones in the case of an independent crossing bridge (ICB) and two zones in the case of groups of crossing bridges. First, all the propagation characteristics, including extra propagation loss, shadow fading, small-scale fading, and fading depth, have been measured and extracted. The results are shown in a complete table for accurate statistical modeling. Then, two empirical models, i.e., ICB and crossing bridges group (CBG), are established for the first time to describe the extra loss owing to the crossing bridges. The proposed models improve on the state-of-the-art models for this problem, achieving root mean square errors (RMSE) of 3.0 and 3.7 dB, respectively. <s> BIB018 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> II. HST CHANNEL MEASUREMENTS <s> In this paper, a recently conducted measurement campaign for high-speed-train (HST) channels is introduced, where the downlink signals of an in-service Universal Mobile Telecommunications System (UMTS) deployed along an HST railway between Beijing and Shanghai were acquired. The channel impulse responses (CIRs) are extracted from the data received in the common pilot channels (CPICHs). Within 1318 km, 144 base stations (BSs) were detected. Multipath components (MPCs) estimated from the CIRs are clustered and associated across the time slots.
The results show that, limited by the sounding bandwidth of 3.84 MHz, most of the channels contain a single line-of-sight (LoS) cluster, and the rest consists of several LoS clusters due to distributed antennas, leaking cable, or neighboring BSs sharing the same CPICH. A new geometry-based random-cluster model is established for the clusters' behavior in delay and Doppler domains. Different from conventional models, the time-evolving behaviors of clusters are characterized by random geometrical parameters, i.e., the relative position of BS to railway, and the train speed. The distributions of these parameters, and the per-cluster path loss, shadowing, delay, and Doppler spreads, are extracted from the measurement data. <s> BIB019
|
Special attention has been given to HST channel measurements in recent years. Due to the high speed of the train and the hostile HST environments, conducting accurate channel measurements for HST communication systems is challenging and needs to address particular hardware and software requirements, e.g., robustness, scalability, hardware redundancy, and traceability BIB008 . Many measurement campaigns - BIB019 for different HST environments have been presented in the literature. Here, we briefly review and classify the important measurements for HST communications according to the scenarios, cellular architecture, measurement setup parameters (i.e., antenna configuration, carrier frequency, and bandwidth), and measured channel statistics, as shown in Table I. 1) In the open space scenario, also called plain scenario , the Rx is moving at a very high speed in a rural area where the BS antenna is much higher than the surroundings BIB009 . This environment focuses on large cells and continuous coverage, where the link between the fixed Tx and the moving Rx normally has a dominant line-of-sight (LoS) component. However, after a certain distance, called the breakpoint distance, the impact of the sparse scatterers will be noticed at the Rx in the form of non-LoS (NLoS) components. As a result, the slopes of the PL and the Ricean K-factor change noticeably at the breakpoint, leading to a dual-slope PL model . It has been shown that there is a strong link between the breakpoint distance and the antenna height. For a certain site, as the antenna height decreases, the breakpoint moves closer to the Tx. This is because a bigger Fresnel zone is intercepted by the ground, usually covered by vegetation, when the antenna height is lower. Furthermore, due to the influences of different environments, slight variations in the breakpoint distance can be noticed in different scenarios. Therefore, it can be concluded that the breakpoint distance is mainly determined by the antenna height, while it is only slightly affected by the environment BIB006 (a numerical sketch of this relationship is given after the scenario list). Based on the geographic nature and the distribution/height of the surrounding scatterers, the open scenarios can be further classified into rural , urban, and suburban scenarios, as illustrated in Fig. 2. 2) The viaduct scenario is very common for HSTs BIB003 - BIB013 . The main purpose of viaducts is to ensure the smoothness of the rail and the high speed of the train. In this scenario, the radio reflection, scattering, and diffraction caused by nearby scatterers, e.g., trees and buildings, can also be reduced significantly. The viaduct height and the relative BS height have a great influence on the received signal. Because of the relatively high altitude of the viaduct in comparison with the surrounding terrain, the LoS component is dominant in this scenario. However, the sparsity of the scatterers in the environment around the viaduct will still influence the received signal at the Rx BIB004 . Based on the relative altitude between the scatterers and the viaduct, this scenario can be further classified into high viaduct and low viaduct scenarios. In the former, most scatterers located within 50 m from the viaduct are lower than the surface of the viaduct, and therefore their impact on the propagation characteristics is negligible. In the low viaduct scenario BIB013 , BIB014 , some of the nearby scatterers are higher than the surface of the viaduct and consequently introduce rich reflection and scattering components that may result in severe shadow fading and/or extra path loss .
3) The cutting scenario is another common scenario for HST wireless communications BIB005 - BIB010 . It represents an environment where the HST passes through a U-shaped geographical cut surface between hills. The cutting is widely used in HST construction to ensure the smoothness of the rail and to help the train maintain a high speed when passing through hills. The propagation of radio waveforms in this scenario is significantly affected by the steep walls on both sides. The LoS component can be observed along the route of the HST in this scenario. Here, we can distinguish between two cutting scenarios: deep cutting, where the receive antenna mounted on top of the train is lower than the upper eave of the cutting, and low cutting, where the upper eave is lower than the top of the receive antenna. 4) In the hilly terrain scenario BIB011 , BIB015 , the surrounding environment is densely scattered with objects distributed irregularly and non-uniformly. With high-altitude transmit antennas and low-altitude obstacles, the LoS component is observable and can be detected along the entire railway. However, multipath components scattered/reflected from the surrounding obstacles will cause serious constructive or destructive effects on the received signal and therefore influence the channel's fading characteristics. 5) The tunnel scenario represents an environment where the HST passes through tunnels BIB001 , BIB002 with different lengths, ranging from hundreds of meters to several kilometers. The length, size, and shape of the tunnels and the encountered waveguide phenomena have a significant impact on the communication channel. Because of the long confined space, the bounding walls of the tunnel, and the poor smoothness of the interior walls, the propagation characteristics of signals in tunnels are quite different from those in other scenarios. To overcome the problem of the limited visibility encountered in tunnels and to design an optimal wireless communication network, leaky feeders and distributed antenna systems (DAS) are often deployed. However, as HSTs may require long tunnels, the leaky feeder solution is very expensive, especially at high operating frequencies, and its maintenance is considerably complex BIB007 . As a result, DAS is more practical BIB012 . It can provide considerable gains in coverage and capacity, and it provides spatial diversity against fading by using antenna elements at different locations. It also has advantages in future applications, such as larger distances between repeaters and easier maintenance after the line is opened. 6) The station scenario represents the railway facility where HSTs stop regularly to load/unload passengers BIB016 , BIB017 . HST stations can be classified according to their size or architecture. Based on the size of the station, which reflects the estimated communication traffic, the station scenario can be categorized into small to medium size stations, large stations, and marshalling stations . From the architecture perspective, which affects the propagation characteristics inside the station, three HST station scenarios can be recognized, i.e., open station, semi-closed station, and closed station BIB017 , as illustrated in Fig. 2. Table II briefly summarizes the description and key parameters of the aforementioned scenarios. These scenarios are the most frequently encountered ones in HST environments. However, recent measurement campaigns have shed some light on other special HST scenarios, such as crossing bridges BIB018 .
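Since the breakpoint distance recurs throughout the measurement discussion, a small numerical sketch may help (this is the sketch referred to in the open space scenario above). It evaluates the standard first-Fresnel-zone breakpoint approximation, d_bp ≈ 4 h_Tx h_Rx / λ, at the GSM-R carrier of 930 MHz. The antenna heights follow the 20-30 m BS and 3.5 m receive-antenna values quoted in the BIB004 measurement setup; the code itself is only an illustration, not part of any cited campaign.

```python
C = 299_792_458.0  # speed of light in m/s

def breakpoint_distance(h_tx, h_rx, f_c):
    """First-Fresnel-zone breakpoint approximation: d_bp = 4*h_tx*h_rx/lambda.
    Antenna heights in metres, carrier frequency f_c in Hz."""
    return 4.0 * h_tx * h_rx * f_c / C

# BS antenna heights of 20-30 m and a 3.5 m receive antenna, as in BIB004.
for h_tx in (20.0, 25.0, 30.0):
    d_bp = breakpoint_distance(h_tx, h_rx=3.5, f_c=930e6)
    print(f"h_tx = {h_tx:4.1f} m -> d_bp = {d_bp:6.0f} m")
```

Lowering the BS antenna from 30 m to 20 m pulls the breakpoint from roughly 1.3 km to roughly 0.9 km, consistent with the observation above that the breakpoint moves closer to the Tx as the antenna height decreases.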
Besides the previous "individual" scenarios, HSTs may encounter more than one scenario (the so-called combination scenario ) in one cell. Two combination scenarios are reported in the literature. The first one is a combination of tunnel and viaduct, where viaducts are usually used as transitions between tunnels in mountain environments. The frequent transition between tunnels and viaducts will increase the severity of fading at the transition points, causing a drop in the communication quality. The second combination is between cutting scenarios, i.e., deep and low cuttings, and the rural scenario. The frequent and fast transition between these scenarios can degrade the quality of the communication link and make signal prediction quite challenging.
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> In this paper, the statistical channel properties and channel modeling of indoor part for high-speed train communication system is presented based on wideband channel measurement at 2.35GHz. Two configurations of base station (BS) antennas, the omni-directional ceiling antenna and the planar antenna on the wall, are used in the measurement, in order to compare different channel characteristics and facilitate the future wideband system deployment. Channel properties, such as Path Loss (PL), Delay Spread (DS) and Ricean K-factor, are analyzed and modeled. The empirical log-distance PL models are derived. It is found that PL with planar antenna at BS is 10dB bigger than that with omni-directional antenna. The latter is even smaller than the PL of the free space. The distributions of DS under these two configurations are both well fitted with the lognormal distribution, and the mean values of them are similar. However, K-factors in decibel are quite different, although both follow well with the normal distribution. The mean values of K-factor with the omni-directional antenna and the planar antenna at BS are 10.41 dB and 4.09 dB, respectively. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper focuses on propagation path loss modeling in viaduct and plain scenarios of the High-speed Railway. The data used for modeling comes from measurement on Zhengzhou-Xi'an passenger dedicated line with the maximum moving speed of 340Km/h. Based on the measurement data, tuned Free-space path loss models in these two scenarios are proposed. The performance of the tuned models is compared with that of the Hata model. The evaluation of the models is in terms of mean error, root mean square error and standard deviation of the residuals between the models and measurement. The simulation results and related analysis show better performance of the proposed tuned models compared with the conventional Hata model. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> A network with high quality of service (QoS) is required for railway wireless communication and control systems. Research on radio-wave propagation in railway environment has great significance for the design and optimization of the railway wireless network. In this paper, measurements are taken in railway viaduct area using track side base stations of the GSM-R network. Comparison between the measured path loss values and the estimated values by a few prediction models shows a large deviation. Thus a new path loss prediction model for viaduct area is derived from statistical analysis of the measurement results in this paper. The novel proposed model has proven to be accurate for the planning of the railway wireless network. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper presents the results of path loss measurements in "Zhengzhou-Xi'an" high-speed railway environment at 930 MHz band. A transmitter directional antenna height of 20~30 meters above the rail surface and a receiver omni-directional antenna height of 3.5 meters were used on the high-speed viaducts height of 10~30 meters above the ground. An automatic acquisition system was utilized in the measurements. 
The model makes distinctions among different terrain. The results of measurements provide practical values for path loss exponent and standard deviation of shadowing affected by the viaduct factor in suburban, open area, mountain area and urban propagation regions where the high-speed trains travel. Based on the measurement data, the empirical path loss model was developed, which could be used for predicting the path loss for the future railway communication systems, and provide the facilities for network optimization. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> As a very important parameter in link budget and channel modeling, the Ricean K factor in the viaduct and cutting scenarios along the high speed railway is estimated by using a moment-based estimator. The practical measurement is taken in the train at a speed of more than 250 km/h. The measured distributions are compared with the Ricean distributions and it's seen that the estimation of Kis accurate. Channel conditions of the two special scenarios are analyzed based on the measurement and estimation results. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> A high performance wireless network is essential for for the railway communication and control systems. Research on the fading characteristics in railway environment is of great importance for the design of the railway wireless network. In this paper, measurements are taken in railway terrain cuttings area using track side base stations of the GSM-R network. The fitted path loss model, shadow fading, and dynamaic range of the small scale fading are obtained and compared to the results of viaduct scenario. The propagation environment of the terrain cuttings turns out to be worse than the viaduct area. The path loss exponent is found to be 4.3. The shadow loss can be reasonably described by a log-normal distribution. It is also found that the bridges over the cuttings can cause extra loss of about 5 dB. The dynamaic range of the small scale fading is from 27 dB to 40 dB with a mean value of about 33 dB. <s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> Presented is the statistical analysis of radio wave propagation in a high-speed railway cutting scenario, derived from 930 MHz measurements taken along the ‘Zhengzhou-Xi'an’ high-speed railway of China. The crown width and bottom width of the cutting are well-covered by the proposed models. The Ricean distribution offers a good fit and the K-factor is found to be lognormal, with a mean value of 1.88 dB and standard deviation of 3.29 dB. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> Narrow band measurements at 930.2 MHz are carried out in two kinds of viaduct scenarios on the Zhengzhou-Xi'an high-speed railway at the speed of 300 km/h. The first-order and second-order statistics of the measured data, such as level crossing rate (LCR) and average fade duration (AFD), are compared with theoretical values of Rayleigh, Rice and Nakagami models. An emulation system is set up in the State Key Lab of Rail Traffic Control and Safety using a Propsim C8Radio Channel Emulator. Two new models based on WINNER II D2a channel model are proposed for viaduct scenarios according to the emulation results. 
<s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper presents a novel and practical study on the position-based radio propagation channel for High-Speed Railway by performing extensive measurements at 2.35 GHz in China. The specification on the path loss model is developed. In particular, small scale fading properties such as K-factor, Doppler frequency feature and time delay spread are parameterized, which show dynamic variances depending on the train location and the transceiver separation. Finally, the statistical position-based channel models are firstly established to characterize the High-Speed Railway channel, which significantly promotes the evaluation and verification of wireless communications in relative scenarios. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper presents a set of 930 MHz measurements conducted along the “Zhengzhou-Xi'an” high-speed rail of China, to characterize short-term fading behavior of the rail viaduct scenario. Three measurement cases covering viaducts with different heights are reported. The analysis results include fade depth (FD), Ricean distribution fit and K-factor modeling, level crossing rates (LCR), and average fade duration (AFD). A small value of fade depth, around 15 dB, is observed. The Ricean distribution offers good fit in this line-of-sight (LOS) propagation scenario, and the K-factor estimated using moment-based method is modeled as a piecewise function, whose break point equals to the reference distance. It is found that the viaduct height H greatly affects the severity of fading and the feature parameters. The results are applicable to the design of high-speed rail communication systems and the modeling of the rail viaduct fading channels. <s> BIB010 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper proposes a distance-dependent Ricean K-factor model for a line-of-sight (LOS) viaduct scenario in the high-speed rail (HSR) of China. Extensive narrowband measurements conducted at 930 MHz are utilized. The propagation environment can be categorized into two cases: moderate suburban and dense suburban. The estimated K-factors are modeled as a piecewise-linear function of distance. The statistical fluctuations of K-factors are well considered by introducing the standard deviation to the expression. A detailed comparison between the piecewise-linear K-factor model and that of other literature validates the proposed model. Our results will be useful in the modeling of HSR viaduct channels and the performance analysis such as channel capacity and throughput for HSR wireless communication systems. <s> BIB011 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> In this paper, based on the measured data at Wuhan-Guangzhou high speed railway, the statistics characteristics of high speed mobile fading channel including long-term and short-term fading in this environment are presented. The measurement campaigns were conducted at GSM-R downlink band. The experimental data was analyzed to provide path loss model and short-term fading statistics including probability density functions (PDF) of signal amplitude, delay statistics, number of paths and path power statistics. 
From these statistics, it is shown that the path loss index at this railway area is n=2∼5 with standard deviation ranging from 3 to 6dB; the test PDF of the short-term fading in such channels approximately fit Nakagami distributions better; the cumulative density functions of average delay and root mean square delay demonstrated that the mean and root mean square delay are not larger than respectively 1.37µs and 1.69µs and maximum delays less than 6.7µs occur most frequently. At last, the PDFs of paths and PDFs of relative amplitude of paths with different threshold levels were computed and drawn. <s> BIB012 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> The paper describes the measurement campaigns for the broadband channel properties under the high- speed condition, which have been carried out on Zhengzhou to Xi'an (ZX) High-Speed Railway and Beijing to Tianjin (BT) High-Speed Railway. WCDMA with the bandwidth of 3.84MHz is employed as the excitation signal that is transmitted from the base station along the railway and received by the TSMQ by ROHDE & SCHWARZ inside the train. Different scenarios including plain, U-shape cutting, station and hilly terrain are chosen in the measurements and the parameters about the channel multipath properties are extracted, analyzed and briefly reported here. These results are informative for the system designers in the future wireless communication of High-Speed Railway. <s> BIB013 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> An efficient channel sounding method using cellular communication systems is proposed for high-speed railway (HSR) propagation environments. This channel measurement technique can be used conveniently to characterize different HSR scenarios, which can significantly improve the measurement efficiency. Based on downlink signals of wideband code division multiple access (WCDMA) and the long term evolution (LTE), principles and methodologies of HSR channel sounding are presented. Using the WCDMA signal, a measurement campaign is conducted in real-world HSR scenarios and statistical characterizations are provided using a radio network analyzer. Due to the limits of the radio network analyzer, afterwards, a software defined radio (SDR)-based channel data recorder is developed allowing users to collect the signals from different wireless cellular systems. Especially, the estimation accuracies are validated in lab by the faded signals emitted from a vector signal generator. The results show that the channel data recorder provides a particularly good match to the configured fading channels. Therefore, this measurement method can be employed to investigate the HSR channel, and to establish the channel models under the various HSR scenarios. <s> BIB014 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> Accurate characterization of the radio channel in tunnels is of great importance for new signaling and train control communications systems. To model this environment, measurements have been taken at 2.4 GHz in a real environment in Madrid subway. The measurements were carried out with four base station transmitters installed in a 2-km tunnel and using a mobile receiver installed on a standard train. 
First, with an optimum antenna configuration, all the propagation characteristics of a complex subway environment, including near shadowing, path loss, shadow fading, fast fading, level crossing rate (LCR), and average fade duration (AFD), have been measured and computed. Thereafter, comparisons of propagation characteristics in a double-track tunnel (9.8-m width) and a single-track tunnel (4.8-m width) have been made. Finally, all the measurement results have been shown in a complete table for accurate statistical modeling. <s> BIB015 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> The high-speed railway propagation channel has significant effect on the design and performance analysis of wireless railway control systems. An important feature of the high-speed railway communications is the usage of directional transmitting antennas, due to which the receiver may experience strong attenuation of the line-of-sight (LOS) path under the base station (BS). This leads to a drop, and strong variations, of the signal strength under the BS. While the physical origin of the signal variations is different from conventional shadowing, it can be described by similar statistical methods. However, the effect has been largely neglected in the literature. In this paper we first define the region of the bottom of the BS, and then present a simple shadow fading model based on the measurements performed in high-speed railways at 930 MHz. It is found that the bottom area of the BS has a range of 400 m – 800 m; the standard deviation of the shadowing also follows a Gaussian distribution; the double exponential model fits the autocovariance of the shadow fading very well. We find that the directivity of the transmitting antenna leads to a higher standard deviation of shadowing and a smaller decorrelation distance under the BS compared to the region away from the BS. <s> BIB016 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> This paper focuses on the fading characteristics of wireless channel on High-Speed Railway (HSR) in hilly terrain scenario. Due to the rapid speed, the fading characteristics of HSR channel are highly correlated with time or Transmit-Receive distance and have their own special property. To investigate the fading characteristics, the measurement is conducted on the Guangzhou-Shenzhen passenger-dedicated line in China with the speed of 295 km/h in the data-collection area at 2.4 GHz. From the measured data, the amplitude of each path is estimated by using the Subspace-Alternating Generalized Expectation-Maximization (SAGE) algorithm along with other parameters of channel impulse responses. Then the fading parameters, including path loss, shadow fading, and K-factor, are analysed. With the numerical results in the measurement and analysis, the fading characteristics have been revealed and modelled. It is supposed that this work has a promotion for HSR communication system design and improvement. <s> BIB017 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> Cuttings are widely used in high-speed railway (HSR) transportation to ensure the flatness of rails. The special structure of cuttings results in rich reflection and scattering, and creates dense multipath components. 
This paper presents a series of measurements of the propagation channel at 930 MHz conducted along the “Zhengzhou-Xi'an” HSR of China, to characterize the small-scale fading behavior of rail-cutting scenarios as a function of the geometry of cuttings, including crown width and bottom width. Raw data are collected in six cuttings (five cuttings are used for developing the model, while the other one is used for validation) in rural and suburban environments. We propose a set of effective methods to statistically model the spatial/temporal variations – including fade depth (FD), level crossing rate (LCR), average fade duration (AFD), and Ricean K-factor – as a function of the structural parameters of cuttings. Akaike's Information Criterion (AIC)-based evaluation indicates that the Ricean distribution is the best to describe small-scale fading. In addition, the rich multipath and directionality of the transmitting antennas lead to a non-monotonous dependence of the K-factor on the distance between transmitter and receiver. The autocovariance function of the deviation of the extracted K-factors from the proposed model is presented and the coherence length is investigated. Our results show that even though a cutting is a scenario with severe fading, a “wide” cutting (i.e., with both wide crown and bottom widths) is conducive to the reduction of the severity of fading. <s> BIB018 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> For the design and performance evaluation of broadband wireless communication systems in High-Speed Railway (HSR) environments, it is of crucial importance to have accurate and realistic propagation channel model. Based on real measurement data in U-Shape Groove (USG) scenarios at 2.35 GHz on Zhengzhou-Xi'an (ZX) HSR in China, the channel fading characteristics such as path loss, shadowing, K factor, time dispersivity and Doppler effects are specialized. These technical guidelines will promote the development of the wireless communication system under HSR. <s> BIB019 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Channel Statistics <s> With the rapid development of high speed railway (HSR), propagation characteristics of channels in HSR scenarios are therefore in urgent demand. We conducted numerous single input single output (SISO) measurements at 2.6 GHz with a bandwidth of 20 MHz along the Harbin-Dalian passenger dedicated railway line. Here, first analytical results in hilly terrains are provided. A double-slope path loss model fits measured data well and shadow fading is extracted to be log-normal distributed. Statistical results of small-scale fading are presented and compared in near regions and far regions relative to the transmitter, including the mean excess delay, root-mean-square (RMS) delay spread and the number of paths. Meanwhile, the delay Doppler spectrum is given out and verified. Finally, tapped-delay-line (TDL) channel model is established in detail based on the measured data. It is supposed that these results and models have a promotion for the further evaluation, simulation and design of the wireless communication system in HSR. <s> BIB020 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B.
Channel Statistics <s> Based on wideband radio channel measurements with a bandwidth of up to 50 MHz at 2.35 GHz in a Ushape cutting environment, we analyze the Ricean K-factor for high-speed railway communications. Three types of the K-factor, consisting of narrowband, wideband and delay K-factor, are extracted according to the measured channel responses by using the channel partitioning and combining method. Due to the rich reflecting and scattering components in the U-shape cutting scenario, the K-factor dramatically changes with the frequency. A distance-based statistical narrowband K-factor model covering the frequency variability is proposed. The channel bandwidth dependent property of the wideband K-factor is observed and then a bandwidth-based statistical wideband K-factor model is developed. Moreover, it is found that the K-factor just exists at the beginning of the delay bins in the deep U-shape cutting scenario. These results are provided for use in system design and channel modeling of high-speed railway communications. <s> BIB021
|
Channel statistics are essential for the analysis and design of a communication system. Most HST measurement campaigns have concentrated on large-scale fading statistics, i.e., path loss (PL) and shadowing. The measurement campaign presented in studied the PL in HST channels when the Tx and Rx were located inside the same HST carriage and when they were located in different carriages. The measured results showed that the waves traveling inside the same train carriage are dominant compared to the ones reflected from scatterers outside the HST, due to the high penetration loss of wireless signals traveling through the metal body of the carriages. In contrast, the waves reflected from outer scatterers are dominant compared to the waves traveling inside the train carriages when the communication devices are located in different carriages, due to the high insulation between these carriages. In BIB001 , the PL of indoor wideband HST channels was also investigated using two different indoor Tx configurations, i.e., an omni-directional antenna mounted on the ceiling of the HST and a planar antenna mounted on the wall of the carriage. Measurements showed that the channel between the Tx planar antenna and the Rx can suffer 10 dB greater loss compared with the one between the Tx omni-directional antenna and the Rx. The aforementioned results from both measurement campaigns are very useful for the design of HSTs and of measurement scenarios. However, more measurements for indoor scenarios in HSTs are needed before these observations can be considered conclusive. PLs of HST channels in open space and hilly terrain scenarios were reported in , , BIB016 and BIB017 , BIB020 . Measurement data reported in both hilly terrain campaigns showed a breakpoint in the estimated PLs. A dominant and strong LoS component can easily be observed before the breakpoint, while the impact of scatterers starts and grows beyond the breakpoint distance. The breakpoint distance depends on the clearance of the first Fresnel zone and can be calculated from the Tx and Rx antenna heights and the wavelength of the transmitted signal . Therefore, different breakpoint distances were reported in the aforementioned hilly terrain measurements, i.e., 778 m in BIB017 and 500 m in BIB020 . Since the viaduct is a common HST scenario, the PL of HST viaduct channels has been thoroughly studied in the literature, e.g., BIB002 , BIB003 , BIB004 , BIB009 , BIB010 , . Most of these measurements highlighted the impact of the height of the viaduct and the relative height of the BS on the estimated PL. In general, two main observations can be drawn from the aforementioned viaduct measurements. First, the higher the BS antenna, the smaller the PL exponent for a given viaduct height. Second, the viaduct reduces the severity of the channel fading; in other words, the higher the viaduct, the lower the fading severity. Both observations are physically meaningful, considering that raising the BS and the viaduct above the surrounding obstacles leads to a clearer LoS and reduces the impact of scatterers on the received signal. The measurements of HST channels in cutting scenarios reported in BIB006 , BIB007 , BIB018 , BIB019 have demonstrated the impact of the cutting structure, i.e., the depth and the widths of the top and bottom of the cutting, on the estimated PLs.
A shallow cutting, or low cutting, will lead to a strong LoS condition, while a deep cutting will lead to a large PL exponent due to the reflections from the cutting's slopes. A comparison between the PL of cutting and viaduct scenarios was carried out in BIB006 . It was suggested that the propagation conditions in cutting scenarios can be worse than those of viaduct ones because of the reflected and scattered components caused by the slopes of the cutting. It is important to note that such a conclusion is highly dependent on the dimensions of the studied viaduct and cutting, as we have highlighted the impact of those dimensions on the estimated PL earlier. On the other hand, shadowing has generally been modeled as log-normally distributed in different HST scenarios. Various channel statistics studied in HST channel measurement campaigns are shown in Table I . The Ricean K-factor is a very important parameter in link budget calculations and channel modeling. Therefore, many papers presented the estimation of K-factors in different scenarios, e.g., open space , viaduct BIB002 , BIB005 , BIB011 , BIB009 - , cutting BIB006 - BIB019 , and hilly terrain BIB017 . The previous discussions of the dominance of the LoS component, the breakpoint distance, and the impact of the viaduct and cutting structure are also related to the K-factor. For example, it was shown in that a higher viaduct leads to a higher value of the K-factor. In the same context, it was shown that lower viaducts lead to more surrounding scatterers, which results in an increase in the severity of the fading and considerable fluctuation of the K values. Moreover, the measurement in showed that, while the K-factor is a linear function of distance, the slopes of the K values are different before and after the breakpoint. Similar comprehensive studies on K-factors of HST channels, but in cutting scenarios, were reported in BIB018 , BIB021 . The analysis showed that wide cuttings increase the possibility of dominant LoS components, which leads to higher K values. Distance-dependent linear K models for different cutting dimensions before and after the breakpoint distance were proposed in BIB018 . In BIB008 , BIB010 , , BIB018 , the spatial/temporal variations, e.g., fade depth (FD), level crossing rate (LCR), and average fade duration (AFD), were investigated. FD is a measure of the variation of the channel energy about its local mean due to small-scale fading, and it is calculated as the difference between the signal levels at the 1% and 50% cumulative probability levels. Measurements in viaduct scenarios have shown that FD is independent of the viaduct's height but is affected by the number and closeness of surrounding scatterers that are higher than the viaduct BIB008 , . LCR is defined as the expected rate at which the received signal crosses a specified level in a positive-going or negative-going direction, while AFD is defined as the average period of time for which the received signal stays below this specified level, i.e., the threshold. LCR and AFD were statistically modeled as functions of the structural parameters of the viaduct and cutting scenarios in , BIB018 . The results showed that the severity of fading in viaduct scenarios is greatly reduced compared with that in open space . Power delay profiles (PDPs) were investigated in , , BIB012 , BIB013 , BIB014 . In BIB015 , a measurement was carried out in a tunnel scenario and the signal propagation characteristics at the breakpoint were discussed.
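To tie the PL and K-factor observations above together, the following sketch implements a generic dual-slope log-distance PL model and a piecewise-linear, distance-dependent K-factor of the kind proposed in BIB011 and BIB018 . All numerical values below (intercept, exponents, breakpoint, and slopes) are illustrative placeholders, not fitted parameters from the cited measurements.

```python
import numpy as np

def dual_slope_pl(d, d_bp, pl0, n1, n2, d0=1.0):
    """Generic dual-slope log-distance path loss (dB): exponent n1 up to
    the breakpoint d_bp, exponent n2 beyond it, continuous at d_bp."""
    d = np.asarray(d, dtype=float)
    pl_at_bp = pl0 + 10.0 * n1 * np.log10(d_bp / d0)
    return np.where(d <= d_bp,
                    pl0 + 10.0 * n1 * np.log10(d / d0),
                    pl_at_bp + 10.0 * n2 * np.log10(d / d_bp))

def piecewise_k_db(d, d_bp, k_bp, s1, s2):
    """Piecewise-linear Ricean K-factor (dB) versus distance, anchored at
    the breakpoint, with slopes s1/s2 (dB per metre) before/after it."""
    d = np.asarray(d, dtype=float)
    return np.where(d <= d_bp, k_bp + s1 * (d - d_bp), k_bp + s2 * (d - d_bp))

# Illustrative evaluation at three Tx-Rx distances (placeholder parameters).
d = np.array([100.0, 500.0, 2000.0])
print(dual_slope_pl(d, d_bp=800.0, pl0=32.0, n1=2.0, n2=4.0))
print(piecewise_k_db(d, d_bp=800.0, k_bp=4.0, s1=-0.005, s2=-0.001))
```

In this form, the breakpoint enters both models as the single anchor point, which matches the measured behavior reported above: the PL exponent and the slope of the K values both change when the Rx crosses the breakpoint.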
The stationarity interval of HST channels, defined as the maximum time duration over which the channel satisfies the wide-sense stationary (WSS) condition, was investigated in based on measurements. It was shown that conventional channel models assume stationarity intervals much larger than the actually measured ones. In , the non-stationarity of an HST channel in a cutting scenario was investigated using a metric called the non-stationarity index. The non-stationarity index was defined as the distance between the auto-correlation of a real time-variant transfer function and the auto-correlation of this transfer function under the WSS assumption. The reported measurement data showed that the non-stationarity index increases when the Doppler frequency shift varies rapidly. In the future, more channel statistics, especially those related to small-scale fading parameters, need to be investigated through measurements.
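As an illustration of the non-stationarity index just described, the sketch below compares windowed (local) autocorrelations of a channel time series against the single global autocorrelation a WSS channel would exhibit. This is only one plausible reading of the metric, with an arbitrary window length and synthetic test signals; the cited work may define the distance and the estimation procedure differently.

```python
import numpy as np

def autocorr(x, max_lag):
    """Normalized autocorrelation of a complex series for lags 0..max_lag-1."""
    x = x - x.mean()
    r = np.array([np.vdot(x[:len(x) - k], x[k:]) for k in range(max_lag)])
    return r / r[0]

def nonstationarity_index(h, win, max_lag):
    """Average distance between local (windowed) autocorrelations and the
    global autocorrelation the channel would have under the WSS assumption."""
    r_wss = autocorr(h, max_lag)
    dists = [np.linalg.norm(autocorr(h[s:s + win], max_lag) - r_wss)
             / np.linalg.norm(r_wss)
             for s in range(0, len(h) - win + 1, win)]
    return float(np.mean(dists))

# Toy check: a drifting Doppler shift (chirp) is flagged as less stationary
# than a constant Doppler shift of comparable magnitude.
t = np.arange(4000) * 1e-3                            # 4 s at 1 kHz sampling
h_wss = np.exp(2j * np.pi * 50.0 * t)                 # fixed 50 Hz Doppler
h_drift = np.exp(2j * np.pi * (50.0 + 20.0 * t) * t)  # drifting Doppler
print(nonstationarity_index(h_wss, win=500, max_lag=50))
print(nonstationarity_index(h_drift, win=500, max_lag=50))
```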
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> We present and analyse the results of wideband radio channel measurements performed in tunnels. Both a high speed train tunnel and a smaller test tunnel have been investigated with both antennas and leaky feeders as fixed radiators. The results show typical features of the tunnel radio channel with typically low delay spread combined to significant slow fading of the LOS signal due to interferences. The delay spread may increase substantially during the fading dips. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper covers some of the work carried out in the planning of the global system for mobile communication for railway (GSM-R) of the tunnels on the new high-speed trains in Spain. Solutions based on distributed antenna systems have been tested by installing several 900-MHz transmitters inside and outside of a 4000-m tunnel and measuring the propagation in different conditions. The measurements have been used to model the effects of tunnel propagation, including curves, trains passing from the outside to the inside, and the effect of two trains passing inside the tunnel. All cases have been tested by comparing solutions using isofrequency and multifrequency distributed transmitters inside the tunnel. The improvements of signal-to-noise ratio and the reduction of the blocking effects of two trains passing have demonstrated the advantages of using isofrequency distributed antenna systems in tunnels. Finally, a complete propagation model combining both modal analysis and ray tracing has been applied to predict the propagation loss inside and outside these tunnels, and results have been compared with the measurements. The model has proven to be very useful for radio planning in new railway networks. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper presents an empirical path loss model derived from the 930MHz measurements along "Zhengzhou-Xi'an" high-speed railway in China. All the measurements were taken on the viaduct with the height of 23 meters above the ground surface. It applies to distances and base station antenna effective heights h not well-covered by existing models. The Least Squares Method (LS) is utilized in the curve fitting. The path loss exponent n determined by the slop of the linear fitting curve is statistically modeled, with the considering of base station antenna effective height h. Based on the log-normal shadowing model, a novel path loss model was developed. The proposed path loss model applies to high-speed railway viaduct scenarios, with base antenna heights from 15 to 30 m, base-to-train distances from 0.5 to 4 km. Compared with Hata and Winner II models, it raises path loss prediction accuracy for 3~10 dB and reduces the standard deviation by 1~3 dB. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper focuses on the shadow fading characteristic in viaduct scenario of the High-speed Railway. Measurement is done on Beijing to Shanghai High-speed Railway. Based on the measurement data, distribution and autocorrelation of the shadow fading are researched. 
Statistical values of the shadow fading standard deviation and the decorrelation distance are analyzed. It is shown that the lognormal distribution suits most groups of the measurement data well. Evaluation of the exponential shadow fading autocorrelation model and the double exponential shadow fading autocorrelation model are made in terms of mean error, standard deviation and correlation coefficient of the residuals between the models and the measurement data. Simulation results show better performance of the double exponential model compared with the exponential model. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> Narrow band measurements at 930.2 MHz are carried out in two kinds of viaduct scenarios on the Zhengzhou-Xi'an high-speed railway at the speed of 300 km/h. The first-order and second-order statistics of the measured data, such as level crossing rate (LCR) and average fade duration (AFD), are compared with theoretical values of Rayleigh, Rice and Nakagami models. An emulation system is set up in the State Key Lab of Rail Traffic Control and Safety using a Propsim C8Radio Channel Emulator. Two new models based on WINNER II D2a channel model are proposed for viaduct scenarios according to the emulation results. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> A high performance wireless network is essential for for the railway communication and control systems. Research on the fading characteristics in railway environment is of great importance for the design of the railway wireless network. In this paper, measurements are taken in railway terrain cuttings area using track side base stations of the GSM-R network. The fitted path loss model, shadow fading, and dynamaic range of the small scale fading are obtained and compared to the results of viaduct scenario. The propagation environment of the terrain cuttings turns out to be worse than the viaduct area. The path loss exponent is found to be 4.3. The shadow loss can be reasonably described by a log-normal distribution. It is also found that the bridges over the cuttings can cause extra loss of about 5 dB. The dynamaic range of the small scale fading is from 27 dB to 40 dB with a mean value of about 33 dB. <s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper proposes a distance-dependent Ricean K-factor model for a line-of-sight (LOS) viaduct scenario in the high-speed rail (HSR) of China. Extensive narrowband measurements conducted at 930 MHz are utilized. The propagation environment can be categorized into two cases: moderate suburban and dense suburban. The estimated K-factors are modeled as a piecewise-linear function of distance. The statistical fluctuations of K-factors are well considered by introducing the standard deviation to the expression. A detailed comparison between the piecewise-linear K-factor model and that of other literature validates the proposed model. Our results will be useful in the modeling of HSR viaduct channels and the performance analysis such as channel capacity and throughput for HSR wireless communication systems. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. 
Measurement's Setup Parameters <s> This paper presents a set of 930 MHz measurements conducted along the “Zhengzhou-Xi'an” high-speed rail of China, to characterize short-term fading behavior of the rail viaduct scenario. Three measurement cases covering viaducts with different heights are reported. The analysis results include fade depth (FD), Ricean distribution fit and K-factor modeling, level crossing rates (LCR), and average fade duration (AFD). A small value of fade depth, around 15 dB, is observed. The Ricean distribution offers good fit in this line-of-sight (LOS) propagation scenario, and the K-factor estimated using moment-based method is modeled as a piecewise function, whose break point equals to the reference distance. It is found that the viaduct height H greatly affects the severity of fading and the feature parameters. The results are applicable to the design of high-speed rail communication systems and the modeling of the rail viaduct fading channels. <s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> In this paper, based on the measured data at Wuhan-Guangzhou high speed railway, the statistics characteristics of high speed mobile fading channel including long-term and short-term fading in this environment are presented. The measurement campaigns were conducted at GSM-R downlink band. The experimental data was analyzed to provide path loss model and short-term fading statistics including probability density functions (PDF) of signal amplitude, delay statistics, number of paths and path power statistics. From these statistics, it is shown that the path loss index at this railway area is n=2∼5 with standard deviation ranging from 3 to 6dB; the test PDF of the short-term fading in such channels approximately fit Nakagami distributions better; the cumulative density functions of average delay and root mean square delay demonstrated that the mean and root mean square delay are not larger than respectively 1.37µs and 1.69µs and maximum delays less than 6.7µs occur most frequently. At last, the PDFs of paths and PDFs of relative amplitude of paths with different threshold levels were computed and drawn. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper presents a novel and practical study on the position-based radio propagation channel for High-Speed Railway by performing extensive measurements at 2.35 GHz in China. The specification on the path loss model is developed. In particular, small scale fading properties such as K-factor, Doppler frequency feature and time delay spread are parameterized, which show dynamic variances depending on the train location and the transceiver separation. Finally, the statistical position-based channel models are firstly established to characterize the High-Speed Railway channel, which significantly promotes the evaluation and verification of wireless communications in relative scenarios. <s> BIB010 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> The paper describes the measurement campaigns for the broadband channel properties under the high- speed condition, which have been carried out on Zhengzhou to Xi'an (ZX) High-Speed Railway and Beijing to Tianjin (BT) High-Speed Railway. 
WCDMA with the bandwidth of 3.84MHz is employed as the excitation signal that is transmitted from the base station along the railway and received by the TSMQ by ROHDE & SCHWARZ inside the train. Different scenarios including plain, U-shape cutting, station and hilly terrain are chosen in the measurements and the parameters about the channel multipath properties are extracted, analyzed and briefly reported here. These results are informative for the system designers in the future wireless communication of High-Speed Railway. <s> BIB011 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> An efficient channel sounding method using cellular communication systems is proposed for high-speed railway (HSR) propagation environments. This channel measurement technique can be used conveniently to characterize different HSR scenarios, which can significantly improve the measurement efficiency. Based on downlink signals of wideband code division multiple access (WCDMA) and the long term evolution (LTE), principles and methodologies of HSR channel sounding are presented. Using the WCDMA signal, a measurement campaign is conducted in real-world HSR scenarios and statistical characterizations are provided using a radio network analyzer. Due to the limits of the radio network analyzer, afterwards, a software defined radio (SDR)-based channel data recorder is developed allowing users to collect the signals from different wireless cellular systems. Especially, the estimation accuracies are validated in lab by the faded signals emitted from a vector signal generator. The results show that the channel data recorder provides a particularly good match to the configured fading channels. Therefore, this measurement method can be employed to investigate the HSR channel, and to establish the channel models under the various HSR scenarios. <s> BIB012 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> The validity of the maximum capacity criterion applied to realize high-rank line-of-sight (LoS) multiple-input multiple-output (MIMO) channels is investigated for high speed railway scenarios. Performance is evaluated by ergodic capacity. Numerical results demonstrate that by simply adjusting antenna spacing according to the maximum capacity criterion, significant capacity gains are achievable. We find relatively low sensitivity of the system to displacements from the optimal point and angle in relatively short range. Thus, we present two proposals to reconfigure antenna arrays so as to maximize LoS MIMO capacity in the high speed railway scenarios <s> BIB013 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> The high-speed railway propagation channel has significant effect on the design and performance analysis of wireless railway control systems. An important feature of the high-speed railway communications is the usage of directional transmitting antennas, due to which the receiver may experience strong attenuation of the line-of-sight (LOS) path under the base station (BS). This leads to a drop, and strong variations, of the signal strength under the BS. While the physical origin of the signal variations is different from conventional shadowing, it can be described by similar statistical methods. However, the effect has been largely neglected in the literature. 
In this paper we first define the region of the bottom of the BS, and then present a simple shadow fading model based on the measurements performed in high-speed railways at 930 MHz. It is found that the bottom area of the BS has a range of 400 m – 800 m; the standard deviation of the shadowing also follows a Gaussian distribution; the double exponential model fits the autocovariance of the shadow fading very well. We find that the directivity of the transmitting antenna leads to a higher standard deviation of shadowing and a smaller decorrelation distance under the BS compared to the region away from the BS. <s> BIB014 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> Cuttings are widely used in high-speed railway (HSR) transportation to ensure the flatness of rails. The special structure of cuttings results in rich reflection and scattering, and creates dense multipath components. This paper presents a series of measurements of the propagation channel at 930 MHz conducted along the “Zhengzhou-Xi'an” HSR of China, to characterize the small-scale fading behavior of rail-cutting scenarios as a function of the geometry of cuttings, including crown width and bottom width. Raw data are collected in six cuttings (five cuttings are used for developing the model, while the other one is used for validation) in rural and suburban environments. We propose a set of effective methods to statistically model the spatial/temporal variations – including fade depth (FD), level crossing rate (LCR), average fade duration (AFD), and Ricean K-factor – as a function of the structural parameters of cuttings. Akaike's Information Criterion (AIC)-based evaluation indicates that the Ricean distribution is the best to describe small-scale fading. In addition, the rich multipath and directionality of the transmitting antennas lead to a non-monotonous dependence of the K-factor on the distance between transmitter and receiver. The autocovariance function of the deviation of the extracted K-factors from the proposed model is presented and the coherence length is investigated. Our results show that even though a cutting is a scenario with severe fading, a “wide” cutting (i.e., with both wide crown and bottom widths) is conducive to the reduction of the severity of fading. <s> BIB015 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> In this paper, empirical results characterizing the joint statistical properties of the shadow fading (SF), the root-mean-square (rms) delay spread (DS), and the Ricean K-factor are presented. Measurement data from high-speed railway in viaduct scenario have been analyzed. It is found that a log-normal distribution accurately fits the distribution function of all the investigated parameters. The spatial autocorrelation function of SF, rms DS, and Ricean K-factor can be modeled with an exponential decay function. However, the spatial autocorrelation functions of all three variables are better characterized by a composite of double exponential decaying functions. A positive cross correlation is found between the SF and the Ricean K-factor, while both parameters are negatively correlated with rms DS. All essential parameters required for the implementation of a simulation model considering the joint statistical properties of SF, rms DS, and the Ricean K-factor are provided.
<s> BIB016 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> For the design and performance evaluation of broadband wireless communication systems in High-Speed Railway (HSR) environments, it is of crucial importance to have accurate and realistic propagation channel model. Based on real measurement data in U-Shape Groove (USG) scenarios at 2.35 GHz on Zhengzhou-Xi'an (ZX) HSR in China, the channel fading characteristics such as path loss, shadowing, K factor, time dispersivity and Doppler effects are specialized. These technical guidelines will promote the development of the wireless communication system under HSR. <s> BIB017 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper focuses on the fading characteristics of wireless channel on High-Speed Railway (HSR) in hilly terrain scenario. Due to the rapid speed, the fading characteristics of HSR channel are highly correlated with time or Transmit-Receive distance and have their own special property. To investigate the fading characteristics, the measurement is conducted on the Guangzhou-Shenzhen passenger-dedicated line in China with the speed of 295 km/h in the data-collection area at 2.4 GHz. From the measured data, the amplitude of each path is estimated by using the Subspace-Alternating Generalized Expectation-Maximization (SAGE) algorithm along with other parameters of channel impulse responses. Then the fading parameters, including path loss, shadow fading, and K-factor, are analysed. With the numerical results in the measurement and analysis, the fading characteristics have been revealed and modelled. It is supposed that this work has a promotion for HSR communication system design and improvement. <s> BIB018 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> In this paper, a recently conducted measurement campaign for high-speed-train (HST) channels is introduced, where the downlink signals of an in-service Universal Mobile Terrestrial System (UMTS) deployed along an HST railway between Beijing and Shanghai were acquired. The channel impulse responses (CIRs) are extracted from the data received in the common pilot channels (CPICHs). Within 1318 km, 144 base stations (BSs) were detected. Multipath components (MPCs) estimated from the CIRs are clustered and associated across the time slots. The results show that, limited by the sounding bandwidth of 3.84 MHz, most of the channels contain a single line-of-sight (LoS) cluster, and the rest consists of several LoS clusters due to distributed antennas, leaking cable, or neighboring BSs sharing the same CPICH. A new geometry-based random-cluster model is established for the clusters' behavior in delay and Doppler domains. Different from conventional models, the time-evolving behaviors of clusters are characterized by random geometrical parameters, i.e., the relative position of BS to railway, and the train speed. The distributions of these parameters, and the per-cluster path loss, shadowing, delay, and Doppler spreads, are extracted from the measurement data. <s> BIB019
|
Carrier Frequency and Bandwidth: Most of the measurement campaigns in the literature were conducted at the carrier frequency of 930 MHz in GSM-R systems BIB014 - BIB007 , BIB003 - BIB004 , BIB005 , BIB008 , , BIB006 - BIB015 , BIB002 , BIB009 , . Correspondingly, all of the aforementioned measurements were for narrowband channels with a bandwidth of 200 kHz. Wideband channel measurements with higher bandwidths, i.e., 10-100 MHz, and higher carrier frequencies, i.e., 2.1-5.2 GHz, were reported in - , BIB010 , BIB016 , BIB017 - BIB018 , BIB001 , BIB011 - BIB019 . Antenna Configuration: The majority of HST measurement campaigns so far have focused on single-input single-output (SISO) systems - , BIB014 - BIB004 , BIB005 - , BIB006 - BIB018 , BIB001 , BIB002 , BIB009 , , BIB011 , BIB012 . Multiple-input multiple-output (MIMO) systems, where both ends are equipped with multiple antennas, are essential for providing the higher capacity required by future high-speed data transmission BIB013 . Channel measurement, particularly MIMO channel measurement at high moving speeds, remains a challenging task. So far, only very few measurement campaigns were conducted using multiple antennas at either the Tx, i.e., multiple-input single-output (MISO) systems , , or the Rx, i.e., single-input multiple-output (SIMO) systems . Hence, HST MIMO wideband channel measurement campaigns with higher carrier frequencies and larger bandwidths than those of GSM-R are needed for future HST communication system developments.
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> III. HST CHANNEL MODELS <s> In this paper, the statistical channel properties and channel modeling of indoor part for high-speed train communication system is presented based on wideband channel measurement at 2.35 GHz. Two configurations of base station (BS) antennas, the omni-directional ceiling antenna and the planar antenna on the wall, are used in the measurement, in order to compare different channel characteristics and facilitate the future wideband system deployment. Channel properties, such as Path Loss (PL), Delay Spread (DS) and Ricean K-factor, are analyzed and modeled. The empirical log-distance PL models are derived. It is found that PL with planar antenna at BS is 10 dB bigger than that with omni-directional antenna. The latter is even smaller than the PL of the free space. The distributions of DS under these two configurations are both well fitted with the lognormal distribution, and the mean values of them are similar. However, K-factors in decibel are quite different, although both follow well with the normal distribution. The mean values of K-factor with the omni-directional antenna and the planar antenna at BS are 10.41 dB and 4.09 dB, respectively. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> III. HST CHANNEL MODELS <s> A network with high quality of service (QoS) is required for railway wireless communication and control systems. Research on radio-wave propagation in railway environment has great significance for the design and optimization of the railway wireless network. In this paper, measurements are taken in railway viaduct area using track side base stations of the GSM-R network. Comparison between the measured path loss values and the estimated values by a few prediction models shows a large deviation. Thus a new path loss prediction model for viaduct area is derived from statistical analysis of the measurement results in this paper. The novel proposed model has proven to be accurate for the planning of the railway wireless network. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> III. HST CHANNEL MODELS <s> This paper presents an empirical path loss model derived from the 930 MHz measurements along "Zhengzhou-Xi'an" high-speed railway in China. All the measurements were taken on the viaduct with the height of 23 meters above the ground surface. It applies to distances and base station antenna effective heights h not well-covered by existing models. The Least Squares Method (LS) is utilized in the curve fitting. The path loss exponent n determined by the slope of the linear fitting curve is statistically modeled, considering the base station antenna effective height h. Based on the log-normal shadowing model, a novel path loss model was developed. The proposed path loss model applies to high-speed railway viaduct scenarios, with base antenna heights from 15 to 30 m, base-to-train distances from 0.5 to 4 km. Compared with Hata and Winner II models, it raises path loss prediction accuracy by 3~10 dB and reduces the standard deviation by 1~3 dB. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> III. HST CHANNEL MODELS <s> Based on the narrowband 930-MHz measurements taken along the “Zhengzhou-Xi'an” high-speed railway in China, an empirical path loss model is proposed.
It is applicable to high-speed railway viaduct scenarios, considering the influences of viaduct height H and base station antenna relative height h, which are not well-covered by existing large-scale models. The path loss exponents are investigated, based on which the influence of viaduct on propagation is discussed. The fading depth up to 15.96 dB and the Ricean K-factor with mean value of 3.79 dB are obtained. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> III. HST CHANNEL MODELS <s> A high performance wireless network is essential for the railway communication and control systems. Research on the fading characteristics in railway environment is of great importance for the design of the railway wireless network. In this paper, measurements are taken in railway terrain cuttings area using track side base stations of the GSM-R network. The fitted path loss model, shadow fading, and dynamic range of the small scale fading are obtained and compared to the results of viaduct scenario. The propagation environment of the terrain cuttings turns out to be worse than the viaduct area. The path loss exponent is found to be 4.3. The shadow loss can be reasonably described by a log-normal distribution. It is also found that the bridges over the cuttings can cause extra loss of about 5 dB. The dynamic range of the small scale fading is from 27 dB to 40 dB with a mean value of about 33 dB. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> III. HST CHANNEL MODELS <s> Presented is the statistical analysis of radio wave propagation in a high-speed railway cutting scenario, derived from 930 MHz measurements taken along the ‘Zhengzhou-Xi'an’ high-speed railway of China. The crown width and bottom width of the cutting are well-covered by the proposed models. The Ricean distribution offers a good fit and the K-factor is found to be lognormal, with a mean value of 1.88 dB and standard deviation of 3.29 dB. <s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> III. HST CHANNEL MODELS <s> This paper presents a novel and practical study on the position-based radio propagation channel for High-Speed Railway by performing extensive measurements at 2.35 GHz in China. The specification on the path loss model is developed. In particular, small scale fading properties such as K-factor, Doppler frequency feature and time delay spread are parameterized, which show dynamic variances depending on the train location and the transceiver separation. Finally, the statistical position-based channel models are firstly established to characterize the High-Speed Railway channel, which significantly promotes the evaluation and verification of wireless communications in relevant scenarios. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> III. HST CHANNEL MODELS <s> In this paper, a channel modeling method based on random-propagation-graph is elaborated, validated, and applied to characterizing time-variant channels observed in typical environments for high-speed railway wireless communications. The advantage of the proposed method is that the frequency-tempo-spatial channel coefficients, as well as the multi-dimensional channel impulse responses in delay, Doppler frequency, direction of arrival (i.e.
azimuth and elevation of arrival) and direction of departure are calculated analytically for specific environments. The validation of the proposed method is performed by comparing the statistics of two large-scale parameters obtained with those described in the well-established standards. Finally, stochastic geometry-based models in the same format as the well-known spatial channel model enhanced (SCME) are generated by using the proposed method for the high-speed scenarios in the rural, urban, and suburban environments. <s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> Based on the realistic channel measurement on High-Speed Railway (HSR) in viaduct scenarios at 2.35 GHz, the dynamic evolution of multipath components is investigated from the birth-death process point of view. Due to the distinction in the amount of resolvable multipath signals, the channel is divided into five segments and can be completely parameterized by several sets of statistical parameters associated with the type of environment and scenario. Then the four-state Markov chain, describing the birth-death number variation of the detected propagation waves, is employed to specialize the temporal stochastic properties. Furthermore, the steady probabilities and transition probabilities are provided which will facilitate the development and evaluation of wireless communication systems under HSR. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> For the design and performance evaluation of broadband wireless communication systems in High-Speed Railway (HSR) environments, it is of crucial importance to have an accurate and realistic propagation channel model. Based on real measurement data in U-Shape Groove (USG) scenarios at 2.35 GHz on Zhengzhou-Xi'an (ZX) HSR in China, the channel fading characteristics such as path loss, shadowing, K factor, time dispersivity and Doppler effects are specialized. These technical guidelines will promote the development of the wireless communication system under HSR. <s> BIB010 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Measurement's Setup Parameters <s> This paper focuses on the fading characteristics of wireless channel on High-Speed Railway (HSR) in hilly terrain scenario. Due to the rapid speed, the fading characteristics of HSR channel are highly correlated with time or Transmit-Receive distance and have their own special property. To investigate the fading characteristics, the measurement is conducted on the Guangzhou-Shenzhen passenger-dedicated line in China with the speed of 295 km/h in the data-collection area at 2.4 GHz. From the measured data, the amplitude of each path is estimated by using the Subspace-Alternating Generalized Expectation-Maximization (SAGE) algorithm along with other parameters of channel impulse responses. Then the fading parameters, including path loss, shadow fading, and K-factor, are analysed. With the numerical results in the measurement and analysis, the fading characteristics have been revealed and modelled. This work is expected to promote HSR communication system design and improvement. <s> BIB011
|
HST channel models in the literature can be categorized as large-scale fading models , BIB001 , , BIB002 , BIB003 - BIB004 , BIB007 , , BIB005 , BIB006 , BIB010 , BIB011 and small-scale fading models , , , BIB008 - BIB009 . The state of the art of HST channel models has not been systematically surveyed yet. Therefore, we first categorize the PL models in Table III . In Table IV , the most important HST small-scale fading channel models are briefly reviewed and classified according to modeling approach, scenario, stationarity, antenna configuration, frequency selectivity (FS), scatterer region, and cellular architecture.
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Large-Scale Fading Models <s> The paper presents results of measurements and simulations concerning the application of the European GSM system in high speed trains travelling at up to 500 km/h. The aim is to answer the question to what extent GSM (performance specified up to 250 km/h) can cope with the high velocities which are demanded for future railways. Measurements along railway lines have shown that a railway mobile radio channel results in better performance (Rice channel) than standard mobile radio channels (Rayleigh or weak Rice channel, see GSM-Recs). BER and block error rate of GSM traffic channels up to 500 km/h are simulated. Comparison of the results at 250 km/h and 500 km/h shows that the GSM high velocity problem can be solved either by increasing the SNR by about 2 dB or by increasing the Rice parameter c by about 6 dB (numerical values for profile RA=rural area; railway channel with c=6 dB against standard channel with c=0 dB), i.e. the BER at 500 km/h (railway channel) is not worse than the BER at 250 km/h (standard channel). A simple example shows that the benefit in the transmission of telegrams consisting of blocks of decoded bits can be much higher, The desired channel performance, i.e. a strong direct path (high Rice parameter), can be achieved by careful radio coverage planning along the railway line. This means a GSM standard receiver is sufficient to cope with the GSM high velocity problem and no additional means are needed. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Large-Scale Fading Models <s> Vehicle-to-vehicle communications have recently received much attention due to some new applications, such as wireless mobile ad hoc networks, relay-based cellular networks, and intelligent transportation systems for dedicated short range communications. The underlying V2V channels, as a foundation for the understanding and design of V2V communication systems, have not yet been sufficiently investigated. This article aims to review the state-of-the-art in V2V channel measurements and modeling. Some important V2V channel measurement campaigns and models are briefly described and classified. Finally, some challenges of V2V channel measurements and modeling are addressed for future studies. <s> BIB002
|
PL estimation is essential for wireless link budget computation and wireless network planning. PL and shadow fading channel models for various HST scenarios have been developed based on measurement campaigns reported in the open literature BIB002 - BIB001 . These PL models are typically expressed (in dB) as

PL(d) = A + 10 n log10(d) + X_σ

where d is the distance between the Tx and Rx in meters (m), n is the PL exponent, A is the intercept, and X_σ is the SF component. Note that the SF follows a log-normal distribution, i.e., X_σ is a zero-mean Gaussian random variable in dB, the standard deviation of which for each model is given in Table III .
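To make this concrete, the following Python sketch draws PL samples from the log-distance model with log-normal SF. It is a minimal illustration rather than one of the surveyed models, and the values of A, n, and sigma_sf below are placeholders, not the fitted parameters listed in Table III .

```python
import numpy as np

def path_loss_db(d, A, n, sigma_sf, rng=None):
    """Log-distance path loss with log-normal shadow fading (all in dB).

    d        : Tx-Rx distance(s) in meters
    A        : intercept in dB (placeholder; fitted from measurements in practice)
    n        : path loss exponent
    sigma_sf : shadow fading standard deviation in dB (cf. Table III)
    """
    rng = rng or np.random.default_rng(42)
    d = np.atleast_1d(np.asarray(d, dtype=float))
    # A Gaussian variable in dB is equivalent to a log-normal one in linear scale
    shadowing = rng.normal(0.0, sigma_sf, size=d.shape)
    return A + 10.0 * n * np.log10(d) + shadowing

# Illustrative (not measured) parameters for a viaduct-like setting
print(path_loss_db([500.0, 1000.0, 2000.0], A=30.0, n=3.5, sigma_sf=3.0))
```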
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Cellular Architectures and Scenarios <s> A new deterministic approach for wave propagation modeling in high-speed train tunnels is presented. The model is based on a new ray launching method and results in the polarimetric and complex channel impulse response as well as the Doppler diagram for radio links between on-train stations and tunnel-fixed stations. Different channel simulations under certain propagation conditions are presented. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Cellular Architectures and Scenarios <s> A new deterministic approach for wave propagation modeling in high-speed train tunnels is presented. The model is based on a new ray launching method and results in the polarimetric and complex channel impulse response as well as the Doppler diagram for radio links between on-train stations and tunnel-fixed stations. Different channel simulations under certain propagation conditions are presented. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Cellular Architectures and Scenarios <s> For the design of an OFDM train communications system it is essential to characterise and consider the channel parameters. The transmission channel of a high-speed train scenario is frequency selective as well as time variant. Thus, delay spread and Doppler spread are investigated as crucial parameters for the OFDM system performance. Using a ray-tracing tool realistic impulse responses of the transmission channels are simulated. The investigated system includes three base stations operating in common frequency mode along a railway track and one mobile station situated on a high-speed train. For the mobile station different antenna patterns are included in the simulation model. The results are compared and assessed with respect to delay spread, Doppler spread and receive power. When using directional antennas a distinct reduction in Doppler spread is achieved. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Cellular Architectures and Scenarios <s> In this paper, the statistical channel properties and channel modeling of indoor part for high-speed train communication system is presented based on wideband channel measurement at 2.35GHz. Two configurations of base station (BS) antennas, the omni-directional ceiling antenna and the planar antenna on the wall, are used in the measurement, in order to compare different channel characteristics and facilitate the future wideband system deployment. Channel properties, such as Path Loss (PL), Delay Spread (DS) and Ricean K-factor, are analyzed and modeled. The empirical log-distance PL models are derived. It is found that PL with planar antenna at BS is 10dB bigger than that with omni-directional antenna. The latter is even smaller than the PL of the free space. The distributions of DS under these two configurations are both well fitted with the lognormal distribution, and the mean values of them are similar. However, K-factors in decibel are quite different, although both follow well with the normal distribution. The mean values of K-factor with the omni-directional antenna and the planar antenna at BS are 10.41 dB and 4.09 dB, respectively. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. 
Cellular Architectures and Scenarios <s> In this paper, a channel modeling method based on random-propagation-graph is elaborated, validated, and applied to characterizing time-variant channels observed in typical environments for high-speed railway wireless communications. The advantage of the proposed method is that the frequency-tempo-spatial channel coefficients, as well as the multi-dimensional channel impulse responses in delay, Doppler frequency, direction of arrival (i.e. azimuth and elevation of arrival) and direction of departure are calculated analytically for specific environments. The validation of the proposed method is performed by comparing the statistics of two large-scale parameters obtained with those described in the well-established standards. Finally, stochastic geometry-based models in the same format as the well-known spatial channel model enhanced (SCME) are generated by using the proposed method for the high-speed scenarios in the rural, urban, and suburban environments. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Cellular Architectures and Scenarios <s> This paper focuses on the Geometry based Stochastic Channel Model (GSCM) and its application in high speed railway (HSR) multiple-input multiple-output (MIMO) systems. Different probability distribution functions (PDFs) of scatterers are studied and simulation results show that scatterers in Gaussian distribution make the best approximation to the realistic power delay profile (PDP) and power azimuth spectra (PAS). Additionally, the impacts of different scattering area shapes on the PDP and PAS in high speed railway scenario are simulated. Based on the conclusions above and existing measurement data, a novel channel model is established with local and far scatterer clusters for high speed railway scenario. The simulation results verify that our model is realistic. <s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Cellular Architectures and Scenarios <s> How to provide reliable, cost-effective wireless services for high-speed railway (HSR) users attracts increasing attention due to the fast deployment of HSRs worldwide. A key issue is to develop reasonably accurate and mathematically tractable models for HSR wireless communication channels. Finite-state Markov chains (FSMCs) have been extensively investigated to describe wireless channels. However, different from traditional wireless communication channels, HSR communication channels have the unique features such as very high speed, deterministic mobility pattern and frequent handoff events, which are not described by the existing FSMC models. In this paper, based on the Winner II physical layer channel model parameters, we propose a novel FSMC channel model for HSR communication systems, considering the path loss, fast fading and shadowing with high mobility. Extensive simulation results are given, which validate the accuracy of the proposed FSMC channel model. The model is not only ready for performance analysis, protocol design and optimization for HSR communication systems, but also provides an effective tool for faster HSR communication network simulation. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. 
Cellular Architectures and Scenarios <s> This paper proposes a non-stationary wideband geometry-based stochastic model (GBSM) for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed model has the ability to investigate the non-stationarity of HST environment caused by the high speed movement of the receiver. Based on the proposed model, the space-time-frequency (STF) correlation function (CF) and STF local scattering function (LSF) are derived for different taps. Numerical results show the non-stationarity of the proposed channel model. <s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Cellular Architectures and Scenarios <s> Based on the realistic channel measurement on High-Speed Railway (HSR) in viaduct scenarios at 2.35 GHz, the dynamic evolution of multipath components is investigated from the birth-death process point of view. Due to the distinction in the amount of resolvable multipath signals, the channel is divided into five segments and can be completely parameterized by several sets of statistical parameters associated with the type of environment and scenario. Then the four-state Markov chain, describing the birth-death number variation of the detected propagation waves, is employed to specialize the temporal stochastic properties. Furthermore, the steady probabilities and transition probabilities are provided which will facilitate the development and evaluation of wireless communication systems under HSR. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Cellular Architectures and Scenarios <s> The geometry-based stochastic channel models are proposed in this paper for the terrain cutting, suburb, and urban scenarios in high-speed railway. First, the special scenarios in high-speed railway are described. And the channel models based on the geometry scenarios are introduced. Some channel parameters are based on measurement data. Then, the space-time correlation functions in analytical form are obtained in suburb and urban scenarios. Finally, the space correlation characteristics in three scenarios are compared. <s> BIB010
|
As mentioned earlier, adopting the conventional cellular architecture in HST wireless communication systems may lead to several problems in terms of providing reliable and fast communication to HST passengers. Therefore, other cellular architectures, such as DAS, CoMP, and MRS, need to be considered. In the literature, most of the proposed channel models have considered the conventional architecture, where fixed BSs are installed on the track-side to provide wireless coverage to HST passengers inside carriages , BIB005 , BIB003 , BIB001 - BIB006 , BIB007 . When the MRS solution is considered, there are two channels: an outdoor channel between the BS and the MRS, and an indoor one between the MRS and the train passengers. The properties of radio channels inside the carriages resemble those of indoor environments, and hence they can be modeled using existing indoor channel models BIB004 . Therefore, , , BIB008 - , BIB009 have focused on modeling the outdoor channel because of the challenges that this channel faces due to the high velocity of the Rx. HST scenarios have been presented in detail earlier in this paper in Section II. While most of these scenarios can only be encountered in railway environments, the open space scenario is similar to the rural or urban scenarios found in conventional V2I or V2V communication systems. Therefore, most of the current HST channel models, developed from V2I and V2V channel models by taking into account the effect of the high velocity of the Rx on the channel parameters, have been proposed for the open space scenario , , , BIB005 , BIB003 , BIB006 - , BIB007 . Channel models for tunnel, cutting, and viaduct scenarios were studied in BIB002 , BIB010 , and BIB009 . In summary, more HST channel models that consider other cellular architectures, such as DAS, are needed in the future. In addition, future HST channel models should cover more HST scenarios.
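As a simple illustration of how this two-channel decomposition can be used in system-level evaluation, the sketch below combines an outdoor (BS-MRS) SNR and an indoor (MRS-passenger) SNR into an end-to-end SNR. It assumes an amplify-and-forward MRS and uses the standard two-hop relay expression; this is an assumption made for illustration, not a result taken from the surveyed models.

```python
import numpy as np

def af_end_to_end_snr(snr_bs_mrs, snr_mrs_user):
    """End-to-end SNR (linear scale) of a two-hop amplify-and-forward link:
    gamma = g1 * g2 / (g1 + g2 + 1)."""
    g1 = np.asarray(snr_bs_mrs, dtype=float)
    g2 = np.asarray(snr_mrs_user, dtype=float)
    return g1 * g2 / (g1 + g2 + 1.0)

# Hypothetical outdoor SNR of 15 dB and indoor SNR of 20 dB
g1, g2 = 10 ** (15 / 10), 10 ** (20 / 10)
print(10 * np.log10(af_end_to_end_snr(g1, g2)))  # approx. 13.8 dB end-to-end
```

As expected, the end-to-end SNR is dominated by the weaker of the two hops, which is one reason the challenging outdoor BS-MRS channel receives most of the modeling attention.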
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Modeling Approaches of HST Small-Scale Fading Models <s> A new deterministic approach for wave propagation modeling in high-speed train tunnels is presented. The model is based on a new ray launching method and results in the polarimetric and complex channel impulse response as well as the Doppler diagram for radio links between on-train stations and tunnel-fixed stations. Different channel simulations under certain propagation conditions are presented. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Modeling Approaches of HST Small-Scale Fading Models <s> In this paper, a channel modeling method based on random-propagation-graph is elaborated, validated, and applied to characterizing time-variant channels observed in typical environments for high-speed railway wireless communications. The advantage of the proposed method is that the frequency-tempo-spatial channel coefficients, as well as the multi-dimensional channel impulse responses in delay, Doppler frequency, direction of arrival (i.e. azimuth and elevation of arrival) and direction of departure are calculated analytically for specific environments. The validation of the proposed method is performed by comparing the statistics of two large-scale parameters obtained with those described in the well-established standards. Finally, stochastic geometry-based models in the same format as the well-known spatial channel model enhanced (SCME) are generated by using the proposed method for the high-speed scenarios in the rural, urban, and suburban environments. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Modeling Approaches of HST Small-Scale Fading Models <s> The geometry-based stochastic channel models are proposed in this paper for the terrain cutting, suburb, and urban scenarios in high-speed railway. First, the special scenarios in high-speed railway are described. And the channel models based on the geometry scenarios are introduced. Some channel parameters are based on measurement data. Then, the space-time correlation functions in analytical form are obtained in suburb and urban scenarios. Finally, the space correlation characteristics in three scenarios are compared. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Modeling Approaches of HST Small-Scale Fading Models <s> How to provide reliable, cost-effective wireless services for high-speed railway (HSR) users attracts increasing attention due to the fast deployment of HSRs worldwide. A key issue is to develop reasonably accurate and mathematically tractable models for HSR wireless communication channels. Finite-state Markov chains (FSMCs) have been extensively investigated to describe wireless channels. However, different from traditional wireless communication channels, HSR communication channels have the unique features such as very high speed, deterministic mobility pattern and frequent handoff events, which are not described by the existing FSMC models. In this paper, based on the Winner II physical layer channel model parameters, we propose a novel FSMC channel model for HSR communication systems, considering the path loss, fast fading and shadowing with high mobility. Extensive simulation results are given, which validate the accuracy of the proposed FSMC channel model. 
The model is not only ready for performance analysis, protocol design and optimization for HSR communication systems, but also provides an effective tool for faster HSR communication network simulation. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> C. Modeling Approaches of HST Small-Scale Fading Models <s> Based on the realistic channel measurement on High-Speed Railway (HSR) in viaduct scenarios at 2.35 GHz, the dynamic evolution of multipath components is investigated from the birth-death process point of view. Due to the distinction in the amount of resolvable multipath signals, the channel is divided into five segments and can be completely parameterized by several sets of statistical parameters associated with the type of environment and scenario. Then the four-state Markov chain, describing the birth-death number variation of the detected propagation waves, is employed to specialize the temporal stochastic properties. Furthermore, the steady probabilities and transition probabilities are provided which will facilitate the development and evaluation of wireless communication systems under HSR. <s> BIB005
|
In terms of modeling approaches, the current HST channel models in the literature, presented in Table IV , can be classified as deterministic BIB002 - BIB001 and stochastic channel models. The latter can be further classified into geometry-based stochastic models (GBSMs) , , BIB003 - and non-geometrical stochastic models (NGSMs) BIB004 , BIB005 , as illustrated in Fig. 3 .
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> A new deterministic approach for wave propagation modeling in high-speed train tunnels is presented. The model is based on a new ray launching method and results in the polarimetric and complex channel impulse response as well as the Doppler diagram for radio links between on-train stations and tunnel-fixed stations. Different channel simulations under certain propagation conditions are presented. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> For the design of an OFDM train communications system it is essential to characterise and consider the channel parameters. The transmission channel of a high-speed train scenario is frequency selective as well as time variant. Thus, delay spread and Doppler spread are investigated as crucial parameters for the OFDM system performance. Using a ray-tracing tool realistic impulse responses of the transmission channels are simulated. The investigated system includes three base stations operating in common frequency mode along a railway track and one mobile station situated on a high-speed train. For the mobile station different antenna patterns are included in the simulation model. The results are compared and assessed with respect to delay spread, Doppler spread and receive power. When using directional antennas a distinct reduction in Doppler spread is achieved. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> For the simulation of practical frequency-diversity wireless communication systems, such as frequency-hopping systems, multicarrier code-division multiple-access systems, and orthogonal frequency-division multiplexing systems, it is often desirable to produce multiple Rayleigh fading processes with given frequency correlation properties. In this paper, a novel stochastic wide-sense stationary sum-of-sinusoids channel simulator is proposed to emulate frequency-correlated wideband fading channels, where the frequency correlation properties are controlled by only adjusting the constant phases. Closed-form expressions are provided for all the parameters of the simulation model. This enables us to investigate analytically the overall correlation properties (not only the correlation coefficients) of the simulated processes with respect to both time separation and frequency separation. It is shown that the wideband channel simulator will be reduced to a narrowband Rayleigh fading-channel simulator by removing the frequency selectivity. Furthermore, the COST 207 typical-urban and rural-area channels are applied to evaluate the performance of the resulting wideband and narrowband channel simulators, respectively. The correlation properties of the simulation models approach the desired ones of the underlying reference models as the number of exponential functions tends to infinity, while very good approximations are achieved with the chosen limited number of exponential functions <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> The delay caused by the reflected ray in broadband communication has a great influence on the communications in subway tunnel. This paper presents measurements taken in subway tunnels at 2.4 GHz, with 5 MHz bandwidth. 
According to the propagation characteristics of the tunnel, the measurements were carried out with a frequency domain channel sounding technique, in three typical scenarios: line of sight (LOS), non-line-of-sight (NLOS) and far line of sight (FLOS), which lead to different delay distributions. First, the IFFT was chosen to obtain the channel impulse response (CIR) h(t) from the measured three-dimensional transfer functions. The power delay profile (PDP) was investigated to give an overview of the broadband channel model. Thereafter, a long delay caused by the obturation of the tunnel is observed and investigated in all the scenarios. The measurements show that reflections can be largely retained by the tunnel, which leads to a long-delay cluster in which reflections, rather than the direct ray, make the main contribution to radio wave propagation. Four important parameters, namely the distribution of the whole PDP power, the first-peak arrival time, the reflection-cluster duration, and the PDP power distribution of the reflection cluster, were studied to give a detailed description of the long-delay characteristic in tunnels. This can be used to ensure high capacity communication in tunnels. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> In this paper, a channel modeling method based on random-propagation-graph is elaborated, validated, and applied to characterizing time-variant channels observed in typical environments for high-speed railway wireless communications. The advantage of the proposed method is that the frequency-tempo-spatial channel coefficients, as well as the multi-dimensional channel impulse responses in delay, Doppler frequency, direction of arrival (i.e. azimuth and elevation of arrival) and direction of departure are calculated analytically for specific environments. The validation of the proposed method is performed by comparing the statistics of two large-scale parameters obtained with those described in the well-established standards. Finally, stochastic geometry-based models in the same format as the well-known spatial channel model enhanced (SCME) are generated by using the proposed method for the high-speed scenarios in the rural, urban, and suburban environments. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> This paper focuses on the Geometry based Stochastic Channel Model (GSCM) and its application in high speed railway (HSR) multiple-input multiple-output (MIMO) systems. Different probability distribution functions (PDFs) of scatterers are studied and simulation results show that scatterers in Gaussian distribution make the best approximation to the realistic power delay profile (PDP) and power azimuth spectra (PAS). Additionally, the impacts of different scattering area shapes on the PDP and PAS in high speed railway scenario are simulated. Based on the conclusions above and existing measurement data, a novel channel model is established with local and far scatterer clusters for high speed railway scenario. The simulation results verify that our model is realistic. <s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> This paper proposes a non-stationary wideband geometry-based stochastic model (GBSM) for multiple-input multiple-output (MIMO) high-speed train (HST) channels.
The proposed model has the ability to investigate the non-stationarity of HST environment caused by the high speed movement of the receiver. Based on the proposed model, the space-time-frequency (STF) correlation function (CF) and STF local scattering function (LSF) are derived for different taps. Numerical results show the non-stationarity of the proposed channel model. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> The geometry-based stochastic channel models are proposed in this paper for the terrain cutting, suburb, and urban scenarios in high-speed railway. First, the special scenarios in high-speed railway are described. And the channel models based on the geometry scenarios are introduced. Some channel parameters are based on measurement data. Then, the space-time correlation functions in analytical form are obtained in suburb and urban scenarios. Finally, the space correlation characteristics in three scenarios are compared. <s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> In this paper, a non-stationary wideband geometry-based stochastic model (GBSM) is proposed for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed model employs multiple confocal ellipses model, where the received signal is a superposition of the line-of-sight (LoS) and single-bounced rays. Because of the time-varying feature of angles of arrival (AoAs), angles of departure (AoDs), and LoS angle, the proposed GBSM has the ability to investigate the non-stationarity of HST environment caused by the high speed movement of the receiver. From the proposed model, the local spatial cross-correlation function (CCF) and the local temporal autocorrelation (ACF) are derived for different taps. Numerical results and analysis show that the proposed channel model is capable of characterizing the time-variant HST wireless channel. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> How to provide reliable, cost-effective wireless services for high-speed railway (HSR) users attracts increasing attention due to the fast deployment of HSRs worldwide. A key issue is to develop reasonably accurate and mathematically tractable models for HSR wireless communication channels. Finite-state Markov chains (FSMCs) have been extensively investigated to describe wireless channels. However, different from traditional wireless communication channels, HSR communication channels have the unique features such as very high speed, deterministic mobility pattern and frequent handoff events, which are not described by the existing FSMC models. In this paper, based on the Winner II physical layer channel model parameters, we propose a novel FSMC channel model for HSR communication systems, considering the path loss, fast fading and shadowing with high mobility. Extensive simulation results are given, which validate the accuracy of the proposed FSMC channel model. The model is not only ready for performance analysis, protocol design and optimization for HSR communication systems, but also provides an effective tool for faster HSR communication network simulation. 
<s> BIB010 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> Based on the realistic channel measurement on High-Speed Railway (HSR) in viaduct scenarios at 2.35 GHz, the dynamic evolution of multipath components is investigated from the birth-death process point of view. Due to the distinction in the amount of resolvable multipath signals, the channel is divided into five segments and can be completely parameterized by several sets of statistical parameters associated with the type of environment and scenario. Then the four-state Markov chain, describing the birth-death number variation of the detected propagation waves, is employed to specialize the temporal stochastic properties. Furthermore, the steady probabilities and transition probabilities are provided which will facilitate the development and evaluation of wireless communication systems under HSR. <s> BIB011 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> This paper presents a novel and practical study on the position-based radio propagation channel for High-Speed Railway by performing extensive measurements at 2.35 GHz in China. The specification on the path loss model is developed. In particular, small scale fading properties such as K-factor, Doppler frequency feature and time delay spread are parameterized, which show dynamic variances depending on the train location and the transceiver separation. Finally, the statistical position-based channel models are firstly established to characterize the High-Speed Railway channel, which significantly promotes the evaluation and verification of wireless communications in relevant scenarios. <s> BIB012 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> In a realistic high-speed railway environment, the track, terrain, vegetation, cuttings, barriers, pylons, buildings, and crossing bridges are the main sources of reflection, diffraction, and scattering. Moreover, the radiation pattern and the polarization of the transmitting and receiving antennas considerably influence the propagation. This paper presents a deterministic modeling approach covering all the effects in a realistic high-speed railway environment for the first time. The antenna influence and the mechanisms of transmission, scattering, and reflection are evaluated by developing a 3D ray-optical tool. The diffraction loss is obtained by the multi-edge diffraction models using raster databases. This approach compensates the limitation of the existing empirical and stochastic models used for the high-speed railway, and promotes the deterministic modeling towards the realistic environment. Therefore, it allows a detailed and realistic evaluation and verification of the train control communications systems. <s> BIB013 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> Semi-deterministic modeling with low data resolution requirement and low computation time is always of interest. By conjunctively utilizing the extended Hata model and the Deygout model, this letter presents a hybrid model for viaduct and cutting scenarios of high-speed railway. The proposed model achieves higher accuracy than empirical and statistical models, but uses totally free sources.
It can be easily implemented for the network planning, and therefore, it meets the demand for fast development of high-speed railway. <s> BIB014 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> A field channel measurement is carried out in high-speed railways (HSRs) along the “Luoyang South-Lintong East” line of China, and the finite-state Markov channel (FSMC) modeling is exploited to characterize the small-scale fading channels. The large-scale path loss can be predicted relatively precisely since the line-of-sight (LOS) propagation component dominates the wireless channel in HSR, while the small-scale fadings will be a key to the future wireless network for HSR. Hence, this paper proposes a first-order FSMC modeling to describe the fast small-scale fadings in two typical HSR scenarios, i.e., viaduct and terrain cutting. Firstly, the sliding window method is used to remove the large-scale effect of the field data. Then the Rayleigh, Rician and Nakagami distributions are respectively tested to fit the envelope of small-scale fadings, and the results show that Rician distribution can effectively capture the statistical property of HSR channels. Then, a first-order FSMC is proposed based on the Rician distribution. Finally, the experimental results reveal that four-state FSMC modeling provides an effective way to reflect the dynamic nature of the fast fadings in HSR. <s> BIB015 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> High-speed railway (HSR) brings convenience to people's lives and is generally considered as one of the most sustainable developments for ground transportation. One of the important parts of HSR construction is the signaling system, which is also called the “operation control system,” where wireless communications play a key role in the transmission of train control data. We discuss in detail the main differences in scientific research for wireless communications between the HSR operation scenarios and the conventional public land mobile scenarios. The latest research progress in wireless channel modeling in viaducts, cuttings, and tunnels scenarios is discussed. The characteristics of nonstationary channel and the line-of-sight (LOS) sparse and LOS multiple-input-multiple-output channels, which are the typical channels in HSR scenarios, are analyzed. Some novel concepts such as composite transportation and key challenging techniques such as train-to-train communication, vacuum maglev train techniques, the security for HSR, and the fifth-generation wireless communications related techniques for future HSR development for safer, more comfortable, and more secure HSR operation are also discussed. <s> BIB016 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> The recent development of high-speed trains (HSTs) introduces new challenges to wireless communication systems for HSTs. For demonstrating the feasibility of these systems, accurate channel models which can mimic key characteristics of HST wireless channels are essential. In this paper, we focus on HST channel models for the tunnel scenario, which is different from other HST channel environments, such as rural area and viaducts.
Considering unique characteristics of tunnel channel environments, we extend the existing multi-mode waveguide tunnel channel model to be time dependent, obtain the channel impulse responses, and then further investigate certain key tunnel channel characteristics such as temporal autocorrelation function (ACF) and power spectrum density (PSD). The impact of time on ACFs and PSDs, and the impact of frequency on the received power are revealed via numerical results. <s> BIB017 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> This paper proposes a generic non-stationary wideband geometry-based stochastic model (GBSM) for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed generic model can be applied on the three most common HST scenarios, i.e., open space, viaduct, and cutting scenarios. A good agreement between the statistical properties of the proposed generic model and those of relevant measurement data from the aforementioned scenarios demonstrates the utility of the proposed channel model. <s> BIB018 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> High-speed railways (HSRs) have been widely introduced to meet the increasing demand for passenger rail travel. While they provide more and more convenience to people, the huge cost of the HSR has laid a big burden on government finances. Reducing the cost of HSR has been necessary and urgent. Optimizing arrangement of base stations (BS) by improving prediction of the communication link is one of the most effective methods, which could reduce the number of BSs to a reasonable number. However, it requires a carefully developed propagation model, which has been largely neglected before in the research on the HSR. In this paper, we propose a standardized path loss/shadow fading model for HSR channels based on an extensive measurement campaign in 4594 HSR cells. The measurements are conducted using a practically deployed and operative GSM-Railway (GSM-R) system to reflect the real conditions of the HSR channels. The proposed model is validated by the measurements conducted in a different operative HSR line. Finally, a heuristic method to design the BS separation distance is proposed, and it is found that using an improved propagation model can theoretically save around 2/5 of the cost of the BSs. <s> BIB019 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> 1) Deterministic Channel Models: <s> The high-speed railway (HSR) propagation channel has a significant impact on the design and performance analysis of wireless railway control systems. This paper derives a stochastic model for the HSR wireless channel at 930 MHz. The model is based on a large number of measurements in 100 cells using a practically deployed and operative communication system. We use the Akaike information criterion to select the distributions of the parameters, including the variations from cell to cell. The model incorporates the impact of directional base station (BS) antennas, includes several previously investigated HSR deployment scenarios as special cases, and is parameterized for practical HSR cell sizes, which can be several kilometers. The proposed model provides a consistent prediction of the propagation in HSR environments and allows a straightforward and time-saving implementation for simulation. <s> BIB020
|
Deterministic channel models are usually based on a detailed description of the specific propagation environment and antenna configuration. The amplitudes, phases, and delays of the propagated waves are obtained using intensive simulations that incorporate details of the propagation environment, such as roads, buildings, trees, and houses. Therefore, deterministic models are physically meaningful and potentially accurate. Geometry-based deterministic models (GBDMs) based on the ray-tracing method were proposed in BIB002 - BIB001 to model HST propagation channels in different HST scenarios. In BIB001 , a three-dimensional (3D) ray-tracing approach for wave propagation modeling in HST tunnels was presented. The proposed model yields the complex channel impulse response and incorporates channel information, e.g., the waveguide effect observed in tunnels and the impact of another train passing in the opposite direction on the Doppler shift and time delay. The authors in BIB013 adopted a similar approach to model HST channels in various scenarios. Both BIB001 and BIB013 used measurement results to verify the proposed channel models. Another HST channel model based on a 3D ray-tracing approach was presented in BIB002 to analyze channel characteristics, e.g., the FS and time variance (Doppler spread). The objects on both sides of the railway track, e.g., trees, buildings, or barriers, were modeled using rectangular boxes, the dimensions of which were statistically generated. Since the propagation characteristics of electromagnetic (EM) waves in tunnels are significantly different from those in other HST environments, a multi-mode waveguide channel model was proposed in BIB017 . The proposed model, a hybrid model that combines the geometrical optical model and the waveguide model, can characterize the wave propagation in both the near and far regions of the source. However, the aforementioned model failed to discuss the far LoS (FLOS) phenomenon observed inside tunnels BIB004 or to provide a mechanism to determine the breakpoint between different propagation regions in tunnels BIB016 . A GBDM based on random propagation-graph was proposed in BIB005 to characterize time-variant HST channels in open space scenarios. Similar to the ray-tracing method, the propagation-graph approach predicts channel impulse responses through a thorough search of the propagation paths connecting the Tx and Rx. This modeling approach takes into account the geometry of the simulated environment, e.g., the distribution, mobility, and visibility of the scatterers. Despite their high accuracy, GBDMs require detailed descriptions of the propagation environments and extensive computational resources to be implemented. To avoid the high complexity of implementing GBDMs while maintaining sufficient accuracy, semi-deterministic models for HST viaduct and cutting scenarios were proposed in BIB014 . However, the proposed models only considered large-scale fading and neglected the effect of small-scale fading parameters on the received signal. 2) GBSMs: In GBSMs, the impulse responses of HST channels are characterized by the laws of wave propagation applied to specific Tx, Rx, and scatterer geometries that are predefined in a stochastic fashion according to certain probability distributions. Different types of GBSMs differ mainly in the proposed scatterer distributions.
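Although GBDMs and GBSMs obtain the ray parameters in different ways (ray tracing or graph search versus draws from an assumed scatterer distribution), both ultimately superimpose ray contributions into a time-variant CIR. The sketch below illustrates this common step for a few hypothetical rays; the amplitudes, delays, and AoAs are invented for illustration only, and a real model would generate them from the environment description or the scatterer geometry.

```python
import numpy as np

C = 3.0e8  # speed of light in m/s

def ray_sum_cir(t, amps, delays, aoas_rad, phases, v_mps, fc_hz):
    """Time-variant CIR h(t, tau) as a superposition of discrete rays.

    Ray i contributes amps[i] * exp(j*(phases[i] + 2*pi*fD_i*t)) at delay
    delays[i], where fD_i = (v/lambda) * cos(aoa_i) is the Doppler shift
    induced by the train speed v. Returns (delays, complex gains at time t).
    """
    lam = C / fc_hz
    doppler = (v_mps / lam) * np.cos(aoas_rad)  # per-ray Doppler shift in Hz
    gains = amps * np.exp(1j * (phases + 2.0 * np.pi * doppler * t))
    return delays, gains

# Three hypothetical rays (LoS plus two reflections) at 930 MHz, 350 km/h
delays, gains = ray_sum_cir(
    t=0.01,
    amps=np.array([1.0, 0.3, 0.2]),
    delays=np.array([0.0, 0.4e-6, 1.1e-6]),  # delays in seconds
    aoas_rad=np.deg2rad([10.0, 60.0, 120.0]),
    phases=np.zeros(3),
    v_mps=350.0 / 3.6,
    fc_hz=930e6,
)
```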
Based on the position of the effective scatterers, GBSMs can be further classified into regular-shaped GBSMs (RS-GBSMs), such as one-ring BIB006 , two-ring, and ellipse models BIB007 - , and irregular-shaped GBSMs (IS-GBSMs) , , BIB008 . RS-GBSMs assume that all the effective scatterers are placed on regular shapes, and therefore different RS-GBSMs have different shapes of scatterer distributions, e.g., one-ring, two-ring, and ellipses for two-dimensional (2D) models and one-sphere, two-sphere, and elliptic-cylinders for 3D ones. RS-GBSMs often result in closed-form solutions or at least mathematically tractable formulas. The general principle of designing RS-GBSMs involves the following steps. First, a geometrical model is adopted, assuming that the scatterers are located on regular shapes. Then, a stochastic reference model with an infinite number of scatterers is developed based on the adopted geometrical model. However, the reference model cannot be used for simulations, and therefore a corresponding simulation model with a finite number of effective scatterers is needed. The parameters of the simulation model are computed by using proper parameter computation methods, e.g., the extended method of exact Doppler spread (EMEDS), the modified method of equal areas (MMEA) BIB003 , or the L_p-norm method (LPNM) . In BIB006 , a one-ring RS-GBSM was proposed to model HST channels in open space scenarios. The scatterers were assumed to be distributed on a ring around the MS, and different PDFs of the scatterers were analyzed. Considering the narrowband GSM-R for an HST communication system, a 3D one-sphere RS-GBSM was proposed in BIB008 for open space scenarios. The proposed model used the von Mises distribution to describe the azimuth angles, and the space-time (ST) cross-correlation function (CCF) was derived. However, both of the aforementioned models assumed that the HST channel satisfies the WSS condition, which has been proven incorrect by measurements . To fill this gap, non-stationary RS-GBSMs were proposed in BIB007 - BIB018 for wideband MIMO HST channels, considering the deployment of an MRS on the top of the train. Fig. 4 illustrates the proposed RS-GBSMs, which consist of multiple confocal ellipses with single-bounced rays and the LoS component. The model was first introduced in BIB007 , BIB009 , where it considered the distance between the Tx and Rx as time-varying to capture the non-stationarity of the HST channel. Then, the model was further developed in by considering other time-varying model parameters, i.e., angles of departure (AoDs) and angles of arrival (AoAs). By adopting some key scenario-specific channel parameters, this model was further extended in BIB018 to be applicable to the three most common HST scenarios, i.e., open space, viaduct, and cutting scenarios BIB019 , and hence is the first generic HST channel model. To demonstrate its applicability, the proposed generic non-stationary HST channel model was verified by measurements in terms of the stationary time for the open space scenario and the autocorrelation function (ACF), LCR, and stationary distance for the viaduct and cutting scenarios BIB018 .
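As a minimal sketch of the RS-GBSM principle, the following snippet simulates a one-ring model with von-Mises-distributed AoAs and equal-power rays, from which the temporal ACF can be estimated. The scatterer angles and phases are drawn at random here (a Monte Carlo simulation model); a deterministic simulation model would instead compute them with EMEDS, MMEA, or the LPNM.

```python
import numpy as np

def one_ring_gbsm(n_scat, fc_hz, v_mps, duration_s, n_samp, kappa, mu, seed=0):
    """Minimal one-ring GBSM: effective scatterers on a ring around the MS,
    AoAs drawn from a von Mises distribution, equal ray powers.
    Returns the time axis and the complex fading process."""
    rng = np.random.default_rng(seed)
    lam = 3.0e8 / fc_hz
    aoa = rng.vonmises(mu, kappa, n_scat)           # azimuth AoAs in radians
    phases = rng.uniform(0.0, 2.0 * np.pi, n_scat)  # i.i.d. initial phases
    t = np.linspace(0.0, duration_s, n_samp)
    doppler = (v_mps / lam) * np.cos(aoa)           # per-scatterer Doppler shift
    # Sum of complex exponentials, one per effective scatterer
    h = np.exp(1j * (phases[:, None] + 2.0 * np.pi * doppler[:, None] * t)).sum(0)
    return t, h / np.sqrt(n_scat)

t, h = one_ring_gbsm(n_scat=64, fc_hz=2.35e9, v_mps=350.0 / 3.6,
                     duration_s=0.05, n_samp=2048, kappa=3.0, mu=0.0)
# Empirical (biased) temporal ACF of the simulated fading process
acf = np.correlate(h, h, mode="full")[len(h) - 1:] / len(h)
```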
IS-GBSMs place effective scatterers with predefined properties at random locations drawn from statistical distributions that are usually obtained or approximated from measurements BIB020 . Unlike in RS-GBSMs, the random locations of the scatterers do not form regular shapes; the signal contributions of the effective scatterers are determined by a greatly simplified ray-tracing method, and the total signal is obtained by summing these contributions into the complex impulse response. IS-GBSMs for HST channels were introduced in the RMa scenario of the WINNER II channel model and the moving networks scenario of the IMT-A channel model . In both cases, the train speed can be up to 350 km/h and MRS technology is employed. In BIB008 , an IS-GBSM was proposed for HST channels in cutting scenarios, assuming the scatterers to be uniformly distributed on the surfaces of the two slopes of the cutting. However, the aforementioned channel models neglect the non-stationarity of HST channels and assume that the WSS assumption still applies. Moreover, GBSMs are too complex for upper-layer protocol design and performance analysis, where less complex channel models are preferred. 3) NGSMs: NGSMs characterize the physical parameters of a HST propagation channel in a completely stochastic manner by providing their underlying probability distribution functions without assuming an underlying geometry. An NGSM based on finite-state Markov chains for HST wireless communication channels was proposed in BIB010 . The proposed model is able to capture the characteristics of time-varying HST wireless channels by using Markov chains to track the channel state variation across different received signal-to-noise ratio (SNR) intervals (a toy example is sketched below). However, the model has not been verified with real-field measurements and thus deserves further investigation. The authors in BIB011 followed a similar approach to model the dynamic evolution of multipath components, i.e., the birth-death process, using a four-state Markov chain. The four states are no birth/death, births only, deaths only, and both births and deaths. The transition matrix of the birth-death process was calculated based on the measurements presented in BIB012 . Based on measurements of HST channels in viaduct and cutting scenarios, a finite-state Markov channel was also proposed in BIB015 . Simulation results showed that the Ricean distribution can well characterize the measured amplitude of the small-scale fading in both HST scenarios and that an NGSM can effectively capture the dynamic nature of the fast fading in HST channels.
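The following minimal sketch shows the NGSM idea of tracking the channel state with a finite-state Markov chain; the three SNR states and the transition matrix are invented for illustration and are not the measured matrices of BIB010 or BIB015 .

```python
import numpy as np

# Hypothetical SNR states (dB) and transition matrix -- illustrative only.
states = np.array([0.0, 10.0, 20.0])
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])   # rows sum to 1

def simulate_fsmc(n_steps, P, rng=np.random.default_rng(1)):
    """Simulate a finite-state Markov chain over channel states."""
    s = 1                      # start in the middle state
    path = [s]
    for _ in range(n_steps - 1):
        s = rng.choice(len(P), p=P[s])   # next state from row s of P
        path.append(s)
    return np.array(path)

snr_track = states[simulate_fsmc(1000, P)]
print(snr_track[:10])          # time-varying SNR trajectory
```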
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Nonstationarity of HST Channels <s> A new deterministic approach for wave propagation modeling in high-speed train tunnels is presented. The model is based on a new ray launching method and results in the polarimetric and complex channel impulse response as well as the Doppler diagram for radio links between on-train stations and tunnel-fixed stations. Different channel simulations under certain propagation conditions are presented. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Nonstationarity of HST Channels <s> In this paper, a channel modeling method based on random-propagation-graph is elaborated, validated, and applied to characterizing time-variant channels observed in typical environments for high-speed railway wireless communications. The advantage of the proposed method is that the frequency-tempo-spatial channel coefficients, as well as the multi-dimensional channel impulse responses in delay, Doppler frequency, direction of arrival (i.e. azimuth and elevation of arrival) and direction of departure are calculated analytically for specific environments. The validation of the proposed method is performed by comparing the statistics of two large-scale parameters obtained with those described in the well-established standards. Finally, stochastic geometry-based models in the same format as the well-known spatial channel model enhanced (SCME) are generated by using the proposed method for the high-speed scenarios in the rural, urban, and suburban environments. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Nonstationarity of HST Channels <s> How to provide reliable, cost-effective wireless services for high-speed railway (HSR) users attracts increasing attention due to the fast deployment of HSRs worldwide. A key issue is to develop reasonably accurate and mathematically tractable models for HSR wireless communication channels. Finite-state Markov chains (FSMCs) have been extensively investigated to describe wireless channels. However, different from traditional wireless communication channels, HSR communication channels have the unique features such as very high speed, deterministic mobility pattern and frequent handoff events, which are not described by the existing FSMC models. In this paper, based on the Winner II physical layer channel model parameters, we propose a novel FSMC channel model for HSR communication systems, considering the path loss, fast fading and shadowing with high mobility. Extensive simulation results are given, which validate the accuracy of the proposed FSMC channel model. The model is not only ready for performance analysis, protocol design and optimization for HSR communication systems, but also provides an effective tool for faster HSR communication network simulation. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Nonstationarity of HST Channels <s> This paper proposes a non-stationary wideband geometry-based stochastic model (GBSM) for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed model has the ability to investigate the non-stationarity of HST environment caused by the high speed movement of the receiver. 
Based on the proposed model, the space-time-frequency (STF) correlation function (CF) and STF local scattering function (LSF) are derived for different taps. Numerical results show the non-stationarity of the proposed channel model. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Nonstationarity of HST Channels <s> In a realistic high-speed railway environment, the track, terrain, vegetation, cuttings, barriers, pylons, buildings, and crossing bridges are the main sources of reflection, diffraction, and scattering. Moreover, the radiation pattern and the polarization of the transmitting and receiving antennas considerably influence the propagation. This paper presents a deterministic modeling approach covering all the effects in a realistic highspeed railway environment for the first time. The antenna influence and the mechanisms of transmission, scattering, and reflection are evaluated by developing a 3D ray-optical tool. The diffraction loss is obtained by the multi-edge diffraction models using raster databases. This approach compensates the limitation of the existent empirical and stochastic models used for the high-speed railway, and promotes the deterministic modeling towards to the realistic environment. Therefore, it allows a detailed and realistic evaluation and verification of the train control communications systems. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> A. Nonstationarity of HST Channels <s> This paper proposes a generic non-stationary wideband geometry-based stochastic model (GBSM) for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed generic model can be applied on the three most common HST scenarios, i.e., open space, viaduct, and cutting scenarios. A good agreement between the statistical properties of the proposed generic model and those of relevant measurement data from the aforementioned scenarios demonstrates the utility of the proposed channel model. <s> BIB006
|
Measurements in the literature have demonstrated that HST channels are non-stationary, since the stationarity conditions, measured by the stationary interval, hold only for a very short period of time compared with other types of channels, e.g., V2I and V2V channels . This is mainly caused by the very high speed of the trains and the rapid changes in the surrounding environment. Although the non-stationarity of HST channels has been implicitly considered in GBDMs BIB002 - BIB005 , BIB001 , these models are mainly site-specific and cannot be easily generalized to a variety of scenarios. The non-stationarity of HST channels has been considered in the NGSM proposed in BIB003 by implementing a birth-death process to simulate the appearance and disappearance of scatterers, and in the RS-GBSMs in BIB004 - BIB006 by providing time-variant functions of several channel model parameters, i.e., angular parameters, Doppler frequency, Ricean K-factor, and the distance between the Tx and Rx (the sketch below illustrates this effect for the LoS Doppler shift). However, the proposed models were verified by real-field measurements only in BIB006 , and therefore more comprehensive investigations are required to validate their accuracy. Future non-stationary channel models should consider more time-variant model parameters, such as cluster powers and delays, and investigate the effect of the drift of scatterers across delay taps on the non-stationarity of HST channels and the resulting correlation between these taps.
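The essence of the non-stationarity can be seen from the simple LoS geometry below: as the train passes a trackside BS, the Doppler shift sweeps from roughly +f_max to -f_max, so the channel statistics change over time. The speed, carrier frequency, and geometry values are illustrative assumptions, not parameters of any cited model.

```python
import numpy as np

# Illustrative geometry: train passes a trackside BS at constant speed.
v, fc, c = 97.0, 2.6e9, 3e8        # ~350 km/h, assumed 2.6 GHz carrier
d_min, x_bs = 30.0, 500.0          # BS 30 m from the track, 500 m ahead (assumed)

t = np.linspace(0, 10, 1001)       # 10 s of travel
x = v * t                          # train position along the track
r = np.hypot(x_bs - x, d_min)      # time-varying Tx-Rx distance
f_dopp = (v * fc / c) * (x_bs - x) / r   # LoS Doppler: sweeps +f_max -> -f_max

print(f_dopp[0], f_dopp[-1])       # passes through zero as the train passes the BS
```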
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Statistical Properties of HST Channels <s> The geometry-based stochastic channel models are proposed in this paper for the terrain cutting, suburb, and urban scenarios in high-speed railway. First, the special scenarios in high-speed railway are described. And the channel models based on the geometry scenarios are introduced. Some channel parameters are based on measurement data. Then, the space-time correlation functions in analytical form are obtained in suburb and urban scenarios. Finally, the space correlation characteristics in three scenarios are compared. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> B. Statistical Properties of HST Channels <s> This paper proposes a generic non-stationary wideband geometry-based stochastic model (GBSM) for multiple-input multiple-output (MIMO) high-speed train (HST) channels. The proposed generic model can be applied on the three most common HST scenarios, i.e., open space, viaduct, and cutting scenarios. A good agreement between the statistical properties of the proposed generic model and those of relevant measurement data from the aforementioned scenarios demonstrates the utility of the proposed channel model. <s> BIB002
|
Investigating the statistical properties of HST channels is essential for understanding and analyzing HST communication systems. Several channel statistics obtained from measurements are summarized in Table I . However, most of the proposed HST channel models in the literature fail to provide the corresponding theoretical analysis. In BIB001 , the ST CCF was derived based on the proposed stationary narrowband HST channel model. In , a novel theoretical framework that characterizes non-stationary mobile fading channels in terms of their system functions and correlation functions was proposed. Based on this theoretical framework, different time-variant statistical properties of the RS-GBSMs in , BIB002 were derived, i.e., time-variant space CCFs, time-variant ACFs, time-variant space-Doppler (SD) power spectrum densities (PSDs), local scattering functions (LSFs) , and LCRs BIB002 . It is highly desirable to investigate the statistical properties of other HST channel models and to further develop the aforementioned theoretical framework to include more statistical properties. The sketch below shows how such time-variant statistics can be estimated from simulated channel coefficients.
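A sliding-window (local) estimate is one simple way to compute time-variant statistics from simulated or measured channel coefficients. The sketch below estimates the local ACF of a toy non-stationary fading process in two different time windows; the process, window length, and lags are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 2000.0
t = np.arange(0, 4, 1 / fs)
# Toy non-stationary fading proxy: a dominant path whose Doppler drifts over
# time, plus diffuse Gaussian scatter (all values are illustrative).
f_d = 400.0 - 150.0 * t
h = np.exp(1j * 2 * np.pi * np.cumsum(f_d) / fs) + \
    0.3 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

def local_acf(h, center, win, max_lag):
    """Windowed (time-variant) autocorrelation around sample `center`."""
    seg = h[center - win // 2: center + win // 2]
    return np.array([np.mean(seg[lag:] * np.conj(seg[:len(seg) - lag]))
                     for lag in range(max_lag)])

acf_early = local_acf(h, center=1000, win=400, max_lag=50)
acf_late = local_acf(h, center=7000, win=400, max_lag=50)
# The two local ACFs differ, reflecting the drifting Doppler (non-stationarity).
print(np.round(acf_early[:5].real, 2))
print(np.round(acf_late[:5].real, 2))
```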
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> D. 3D HST Channel Models <s> A new deterministic approach for wave propagation modeling in high-speed train tunnels is presented. The model is based on a new ray launching method and results in the polarimetric and complex channel impulse response as well as the Doppler diagram for radio links between on-train stations and tunnel-fixed stations. Different channel simulations under certain propagation conditions are presented. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> D. 3D HST Channel Models <s> Many channel models for MIMO systems have appeared in the literature. However, with the exception of a few recent results, they are largely focussed on two dimensional (2D) propagation, i.e., propagation in the horizontal plane, and the impact of elevation angle is not considered. The assumption of 2D propagation breaks down when in some propagation environments the elevation angle distribution is significant. Consequently, the estimation of ergodic capacity assuming a 2D channel coefficient alone can lead to erroneous results. In this paper, for cross polarized channels, we define a composite channel model and channel coefficient that takes into account both 2D and 3D propagation. Using this composite channel coefficient we assess the ergodic channel capacity and discuss its sensitivity to a variety of different azimuth and elevation power distributions and other system parameters. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> D. 3D HST Channel Models <s> For the design of an OFDM train communications system it is essential to characterise and consider the channel parameters. The transmission channel of a high-speed train scenario is frequency selective as well as time variant. Thus, delay spread and Doppler spread are investigated as crucial parameters for the OFDM system performance. Using a ray-tracing tool realistic impulse responses of the transmission channels are simulated. The investigated system includes three base stations operating in common frequency mode along a railway track and one mobile station situated on a high-speed train. For the mobile station different antenna patterns are included in the simulation model. The results are compared and assessed with respect to delay spread, Doppler spread and receive power. When using directional antennas a distinct reduction in Doppler spread is achieved. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> D. 3D HST Channel Models <s> Recently there have been proposals to extend MIMO processing to the elevation dimension in addition to the azimuth direction. To accurately assess the promised gains of these "3D-MIMO" techniques, a channel model is needed that accurately accounts for the elevation angles of the rays. In addition it would be desirable for the 3D channel model to be a simple extension of an already defined 2D channel model to allow for ease of implementation and to assist the 3GPP standardization effort in the 3D MIMO area. In this paper we propose an extension of the ITU 2D channel model to 3D by adding a distance dependent elevation spread based on observations from ray tracing. Through system-level simulations we observe that the behavior of 3D MIMO is greatly impacted by the modeling of the 3D channel. 
<s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> D. 3D HST Channel Models <s> Recently there have been many proposals to generate a complete channel model that covers wide range of carrier frequencies and take into consideration different aspects of channel propagation statistics. Many of these models focus on two dimensional propagation, i.e. propagation in the azimuth plane only. The assumption of 2D propagation may lead to inaccurate estimation of channel capacity and system level performance. In addition, few studies have focused on the propagation characteristics in the 800 MHz band. In this paper a complete 3D channel model is generated and examined through 3D ray tracer tool. The paper proposes detailed channel related parameters for urban macro and micro-cell environments at carrier frequencies of 800 MHz and 2.6 GHz. The paper analyzes the channel in terms of best-fit normal parameters for large scale parameters, path loss models, cross-correlation of large scale parameters, and de-correlation distance for both line-of-sight and none line-of-sight conditions. The paper uses the generated statistics to extend the current 2D 3GPP/ITU channel model to 3D model and compare the propagation statistics generated by this model with the ray tracer predictions. <s> BIB005
|
Apart from the GBDMs that use a 3D ray-tracing tool to model HST channels BIB003 - BIB001 , HST channel models have generally been proposed under the assumption that propagation waves travel in two dimensions, thereby ignoring the impact of the elevation angle on channel statistics. In reality, radio waves propagate in three dimensions and scatterers are dispersed in elevation, i.e., the vertical plane, as well as in azimuth, i.e., the horizontal plane. Recently, the 3GPP has developed a 3D channel model for urban microcell and urban macrocell scenarios following the framework of the WINNER II channel model . The proposed 3D 3GPP channel model introduced the zenith AoD and zenith AoA, which are modeled by inverse Laplacian functions (a sampling sketch is given below). The 3D extensions of the SCM and the WINNER II/WINNER+ channel models were proposed in BIB002 and , respectively, and an extension of the IMT-A channel model to the elevation plane was proposed in BIB004 , BIB005 . However, none of the aforementioned channel models considered any of the HST scenarios. Thus, 3D channel measurements and models are necessary, especially when the HST is close to the BS, where considering elevation angles can reveal the impact of ground-reflected waves on the received signal.
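As a small illustration of this Laplacian angular modeling, the sketch below draws zenith angles by inverting the Laplace CDF; the mean angle and spread are placeholder values, not calibrated 3GPP 3D-model parameters.

```python
import numpy as np

def sample_zenith_angles(n, mu=90.0, b=5.0, rng=np.random.default_rng(3)):
    """Draw zenith angles (degrees) from a Laplacian via the inverse CDF.

    mu (mean zenith angle) and b (spread) are illustrative placeholders.
    """
    u = rng.uniform(-0.5, 0.5, n)
    angles = mu - b * np.sign(u) * np.log(1 - 2 * np.abs(u))  # inverse Laplace CDF
    return np.clip(angles, 0.0, 180.0)   # keep angles physically valid

zoa = sample_zenith_angles(1000)
print(zoa.mean(), zoa.std())   # clustered around mu with Laplacian tails
```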
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> E. HST-to-HST Communications <s> Railway telematics applications are currently attracting attention and are under intense research. Reliable railway telematics applications increasingly tend to require a subsidiary means to help existent control system make train operation safer and more efficient. Since 2006, train-to-train communication has been studied to respond to such requirements. A key characteristic of train-to-train communication is that operation control to avoid possible accidents is conducted among trains without help of a base station. This paper proposes a novel train-to-train communication model in a physical layer based on multihop and cooperation, taking a high-speed railway propagation channel into account. The mechanism of this model lies in the idea that a source train uses trains on other tracks as relays to transmit signals to destination train on the same track. Based on occurrence of these potential relays, such mechanism can be divided into three cases. In each case, BER is applied to evaluate properties of the proposed communication model. Simulation results show that BER of the train-to-train communication model decreases to when SNR is 10 dB and that the minimum receiving voltage of this model is −84 dBm, which is 8 dBm lower than the standards established by the International Union of Railways (UIC) in a high-speed railway scenario. <s> BIB001
|
HST-to-HST communication has been proposed to enable HSTs to exchange control and traffic information, such as road obstacles and accidents, via wireless communication links. A HST-to-HST communication model based on multihop transmission and cooperation was proposed in BIB001 . In the proposed model, a source train uses trains on other tracks as relays to transmit signals to the destination train on the same track. Based on the proposed HST-to-HST channel model, the bit error rate (BER) performance was investigated in BIB001 using the suburban scenario of the COST 207 channel model, and the outage capacity was analyzed in using the Nakagami-m channel model (a toy outage computation is sketched below). Despite its importance as a safety measure to avoid accidents, the development of HST-to-HST channel models is still in its preliminary phase and further investigations are required.
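In the spirit of the cited outage analysis, the following Monte Carlo sketch estimates the outage probability of a single Nakagami-m fading link; the fading parameter m, the rate threshold, and the SNR values are assumptions, and the sketch does not model the multihop relaying of BIB001 .

```python
import numpy as np

def outage_probability(snr_db, rate_bps_hz, m=2.0, omega=1.0, n=200_000,
                       rng=np.random.default_rng(4)):
    """Monte Carlo outage probability over a Nakagami-m fading link.

    The power gain of Nakagami-m fading is Gamma(m, omega/m)-distributed;
    all parameter values here are illustrative.
    """
    g = rng.gamma(shape=m, scale=omega / m, size=n)   # Nakagami-m power gain
    snr = 10 ** (snr_db / 10) * g
    capacity = np.log2(1 + snr)                        # instantaneous capacity
    return np.mean(capacity < rate_bps_hz)            # P(C < R)

for snr_db in (5, 10, 15):
    print(snr_db, outage_probability(snr_db, rate_bps_hz=2.0))
```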
|
Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> The demand of broadband high-mobility communication increased dramatically with the rapid development of high speed railway system. A beamforming platform through changing transmitter antenna direction base on real time receiver Global Positioning System (GPS) information was proposed to improve communication quality. Experiments were carried out over Taiwan High Speed Railway (THSR) train to analysis the tracking capability and path loss model. The results show that the received signal strength indicator (RSSI), carrier to interference plus noise ratio (CINR) and throughput were improved. <s> BIB001 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> While mobile broadband performance measured from moving vehicles in metropolitan areas has drawn significant attentions in recent studies, similar investigations have not been conducted for regional areas. Compared to metropolitan cities, regional suburbs are often serviced by wireless technologies with significantly lower data rates and less dense deployments. Conversely, vehicle speeds are usually much higher in the regional areas. In this paper, we seek to provide some insights to user experience of mobile broadband in terms of TCP throughput when travelling in a regional train. We find that (1) using a single broadband provider may lead to a large number of blackouts, which could be reduced drastically by simultaneously subscribing to multiple providers (provider blackouts are not highly correlated), (2) the choice of train route may have a more significant effect on broadband experience than the time-of-day of a particular trip, and (3) the speed of the train itself has no deterministic effect on TCP throughput. <s> BIB002 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> The impact of antenna array geometry on MIMO (Multiple-input Multiple-output) system in high speed railway scenario is investigated in this paper. The capacity of different antenna arrays and the effect of ULA (uniform linear array) azimuthal orientation on capacity are studied with a double-directional channel model including antenna effects and Doppler shift. <s> BIB003 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> Due to frequent handovers in broadband wireless communications in high-speed rail, communication interruption during handover could seriously degrade the experiences of passengers on the train. Aiming to reduce the interruption time, this paper proposes a seamless handover scheme based on a dual-layer and dual-link system architecture, where a Train Relay Station is employed to execute handover for all users in a train and two antennas are mounted at the front and rear of a train. In the proposed scheme, the front antenna executes handover while the rear antenna is still communicating with BS, so that the communication can keep non-interruptive throughout the handover. Moreover, bi-casting is adopted to eliminate the data forwarding delay between the serving BS and target BS. A complete handover protocol is designed and the performance of the proposed scheme is analyzed. 
It can be seen from analytical results that the handover failure probability decreases as cell overlap increases and the communication interruption probability decreases with the decrease of train handover location and the increase of cell overlap. The simulation results show that in the proposed scheme, the communication interruption probability is smaller than 1% when the handover location is properly selected and the system throughput is not affected by handover. In conclusion, both theoretical and simulation results show that the proposed scheme can efficiently perform seamless handover for high-speed rail with low implementation overhead. <s> BIB004 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> The recent advent of high speed trains introduces new mobility patterns in wireless environments. The LTE-A (Long Term Evolution of 3GPP - Advanced) networks have largely tackled the Doppler effect problem in the physical layer and are able to keep wireless service with 100Mpbs throughput within a cell in speeds up to 350 km/h. Yet the much more frequent handovers across cells greatly increases the possibility of service interruptions, and the problem is prominent for multimedia communications that demand both high-throughput and continuous connections. In this paper, we present a novel LTE-based solution to support high throughput and continuous multimedia services for high speed train passengers. Our solution is based on a Cell Array that smartly organizes the cells along a railway, together with a femto cell service that aggregates traffic demands within individual train cabins. Given that the movement direction and speed of a high-speed train are generally known, our Cell Array effectively predicts the upcoming LTE cells in service, and enables a seamless handover that will not interrupt multimedia streams. To accommodate the extreme channel variations, we further propose a scheduling and resource allocation mechanism to maximize the service rate based on periodical signal quality changes. Our simulation under diverse network and railway/train configurations demonstrates that the proposed solution achieves much lower handover latency and higher data throughput, as compared to existing solutions. It also well resists to network and traffic dynamics, thus enabling uninterrupted quality multimedia services for passengers in high speed trains. <s> BIB005 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> A new time-domain transmit beamforming algorithm is proposed for cancelling inter-channel-interference (ICI) due to Doppler frequency shift under high speed train communication scenario. Simulation results show that by employing the algorithm a high speed train communication system is capable of providing continuous 100Mbps data rate for passengers at a speed of 450km/h. This would guarantee continuous data-intensive services for today's high speed train passengers. <s> BIB006 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> With the deployment of high speed train (HST) systems increasing worldwide and their popularity with travelers growing, providing broadband wireless communications (BWC) in HSTs is becoming crucial. In this paper, a tutorial is presented on recent research into BWC provision for HSTs. The basic HST BWC network architecture is described. 
Two potential cellular architectures, microcells and distributed antenna systems (DASs) based cells, are introduced. In particular, the DAS is discussed in conjunction with radio over fiber (RoF) technology for BWC for HSTs. The technical challenges in providing DAS-based BWC for HSTs, such as handoff and RoF are discussed and outlined. <s> BIB007 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> In high speed train (HST) system, real-time multimedia entertainments are very important applications in which a data stream often contains packets with different quality of service requirements. For example, video stream encoded with scalability contains the base layer packets with high quality (HQ) bit error rate (BER) requirement and enhancement layers' packets with low quality (LQ) BER requirement. When a conventional allocation approach, which only considers one BER constraint for one data stream, is applied to orthogonal frequency division multiple access (OFDMA) systems, the BER constraint will be the strictest one among multiple requirements from different types of packets, which leads to inefficient allocation when each data stream has multiple BER requirements. This paper aims to develop novel resource allocation approach by considering multiple BER requirements for different types of packets in one data stream. In order to not only simplify the resource allocation, but also to compensate for the channel estimation error caused by Doppler shift in the HST environment, a proper number of contiguous subcarriers are grouped into chunks and spectrum is allocated chunk by chunk. Simulation results show that the developed resource allocation scheme outperforms the conventional scheme, particularly when the BER ratio of HQ packets to LQ packets is larger than one. Furthermore, in order to reduce the complexity of resource allocation further, an empirical allocation scheme is proposed to allocate better chunks to HQ packets. It is shown that the performance of the empirical allocation scheme is quite close to that of the optimal scheme. <s> BIB008 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. System Performance <s> In this paper, we propose a reliable downlink (DL) transmission scheme exploiting both location- and speed- related information in a high-speed railway scenario, which relies on a multi-functional antenna array (MFAA) combining space-time block coding (STBC) with adaptive receive beamforming techniques. Firstly, the state-of-the-art STBC and adaptive beamforming techniques are reviewed and analyzed in the context of both block-fading and time-varying channels. Then we propose to employ an antenna array on board of a high-speed train to form two beams for receiving the STBC signals from the DL transmit antennas in order to improve the reliability of the system. It is demonstrated that in the context of combined schemes, receive beamforming is more beneficial than transmit beamforming under high-speed railway linear topology to achieve low bit error rate (BER). Hence it is more attractive to employ receive beamforming antennas on the top of the train. <s> BIB009 </s> Channel Measurements and Models for High-Speed Train Communication Systems: A Survey <s> F. 
System Performance <s> In this paper, the bit error rate (BER) performance of a new multiple-input-multiple-output technique, named spatial modulation (SM), is studied under a novel non-stationary wideband high-speed train (HST) channel model in different scenarios. Time-varying parameters obtained from measurement results are used to configure the channel model to make all results more realistic. A novel statistic property called the stationary interval in terms of the space-time correlation function is proposed to describe the channel model’s time-varying behavior. The accurate theoretical BER expression of SM systems is derived under the time-varying wideband HST channel model with the non-ideal channel estimation assumption. The simulation results demonstrate that the BER performance of SM systems shows a time-varying behavior due to the non-stationary property of the employed HST channel model. The system performance can maintain a relative stationary status within the specified stationary interval. It can also be observed that the BER performance of SM systems under the HST channel model is mainly affected by the correlation between sub-channels, inter-symbol-interference, Doppler shift, and channel estimation errors. <s> BIB010
|
Investigating the performance of HST communication systems is the basis for system design and network planning. In BIB004 , the HST communication system performance was investigated using data throughput to evaluate a seamless dual-link handover scheme. Another handover scheme was proposed in BIB005 , and the system performance was evaluated by tracking the changes of throughput and signal-to-interference-plus-noise ratio (SINR) over time. The variation of SINR with HST velocity was investigated in BIB006 to evaluate a transmit beamforming algorithm proposed for canceling the inter-channel interference (ICI) in HST communication systems. The performance of a HST communication system that implements beamforming was also evaluated in BIB001 using measured throughput, SINR, and received signal strength indicator levels. The deployment of DASs in HST communication systems was evaluated in BIB007 using spectrum efficiency as the performance metric. In BIB008 , BER was used to evaluate a proposed radio resource allocation scheme for orthogonal frequency-division multiple access (OFDMA) HST systems. The BER performance of a HST communication system was also investigated in BIB009 , where combined beamforming and Alamouti downlink transmission schemes were proposed. The mobile broadband performance experienced on regional HSTs was investigated in BIB002 by monitoring the fluctuation of system throughput caused by the varying distance between the BS and the HST, multipath fading, and co-channel interference conditions. A temporal proportional fair power allocation scheme for HST wireless communication systems was proposed in . The proposed scheme was designed to achieve a trade-off between power efficiency and fairness over time. HST channel capacity was analyzed in BIB003 to study the impact of different antenna array configurations on MIMO HST communication systems. In BIB010 , the BER performance of spatial modulation (SM) systems was studied using the non-stationary HST MIMO channel model proposed in , under different HST scenarios. It was shown that the correlation between sub-channels, inter-symbol interference, Doppler shift, and channel estimation errors are the main factors that affect the BER performance of SM systems under the HST channel model. More comprehensive system performance analyses that evaluate other schemes and consider further performance indicators, e.g., capacity and quality of service (QoS), are required in the future.
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> Most designers know that yellow text presented against a blue background reads clearly and easily, but how many can explain why, and what really are the best ways to help others and ourselves clearly see key patterns in a bunch of data? When we use software, access a website, or view business or scientific graphics, our understanding is greatly enhanced or impeded by the way the information is presented. ::: ::: This book explores the art and science of why we see objects the way we do. Based on the science of perception and vision, the author presents the key principles at work for a wide range of applications--resulting in visualization of improved clarity, utility, and persuasiveness. The book offers practical guidelines that can be applied by anyone: interaction designers, graphic designers of all kinds (including web designers), data miners, and financial analysts. ::: ::: ::: ::: Complete update of the recognized source in industry, research, and academic for applicable guidance on information visualizing. ::: ::: Includes the latest research and state of the art information on multimedia presentation. ::: ::: More than 160 explicit design guidelines based on vision science. ::: ::: A new final chapter that explains the process of visual thinking and how visualizations help us to think about problems. ::: ::: Packed with over 400 informative full color illustrations, which are key to understanding of the subject. ::: ::: Table of Contents ::: ::: ::: Chapter 1. Foundations for an Applied Science of Data Visualization ::: ::: Chapter 2. The Environment, Optics, Resolution, and the Display ::: ::: Chapter 3. Lightness, Brightness, Contrast and Constancy ::: ::: Chapter 4. Color ::: ::: Chapter 5. Visual Salience and Finding Information ::: ::: Chapter 6. Static and Moving Patterns ::: ::: Chapter 7. Space Perception ::: ::: Chapter 8. Visual Objects and Data Objects ::: ::: Chapter 9. Images, Narrative, and Gestures for Explanation ::: ::: Chapter 10. Interacting with Visualizations ::: ::: Chapter 11. Visual Thinking Processes <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> We discuss techniques for the visualization of medical volume data dedicated for their clinical use. We describe the need for rapid dynamic interaction facilities with such visualizations and discuss emphasis techniques in more detail. Another crucial aspect of medical visualization is the integration of 2d and 3d visualizations. In order to organize this discussion, we introduce 6 "Golden" rules for medical visualizations. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> One of the most important goals in volume rendering is to be able to visually separate and selectively enable specific objects of interest contained in a single volumetric data set, which can be approached by using explicit segmentation information. We show how segmented data sets can be rendered interactively on current consumer graphics hardware with high image quality and pixel-resolution filtering of object boundaries. In order to enhance object perception, we employ different levels of object distinction. First, each object can be assigned an individual transfer function, multiple of which can be applied in a single rendering pass. 
Second, different rendering modes such as direct volume rendering, iso-surfacing, and non-photorealistic techniques can be selected for each object. A minimal number of rendering passes is achieved by processing sets of objects that share the same rendering mode in a single pass. Third, local compositing modes such as alpha blending and MIP can be selected for each object in addition to a single global mode, thus enabling high-quality two-level volume rendering on GPUs. <s> BIB003 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> The need to improve medical diagnosis and reduce invasive surgery is dependent upon seeing into a living human system. The use of diverse types of medical imaging and endoscopic instruments has provided significant breakthroughs, but not without limiting the surgeon's natural, intuitive and direct 3D perception into the human body. This paper presents a method for the use of augmented reality (AR) for the convergence of improved perception of 3D medical imaging data (mimesis) in context to the patient's own anatomy (in-situ) incorporating the physician's intuitive multi- sensory interaction and integrating direct manipulation with endoscopic instruments. Transparency of the video images recorded by the color cameras of a video see-through, stereoscopic head- mounted-display (HMD) is adjusted according to the position and line of sight of the observer, the shape of the patient's skin and the location of the instrument. The modified video image of the real scene is then blended with the previously rendered virtual anatomy. The effectiveness has been demonstrated in a series of experiments at the Chirurgische Klinik in Munich, Germany with cadaver and in-vivo studies. The results can be applied for designing medical AR training and educational applications. <s> BIB004 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> Insight into the dynamics of blood-flow considerably improves the understanding of the complex cardiovascular system and its pathologies. Advances in MRI technology enable acquisition of 4D blood-flow data, providing quantitative blood-flow velocities over time. The currently typical slice-by-slice analysis requires a full mental reconstruction of the unsteady blood-flow field, which is a tedious and highly challenging task, even for skilled physicians. We endeavor to alleviate this task by means of comprehensive visualization and interaction techniques. In this paper we present a framework for pre-clinical cardiovascular research, providing tools to both interactively explore the 4D blood-flow data and depict the essential blood-flow characteristics. The framework encompasses a variety of visualization styles, comprising illustrative techniques as well as improved methods from the established field of flow visualization. Each of the incorporated styles, including exploded planar reformats, flow-direction highlights, and arrow-trails, locally captures the blood-flow dynamics and may be initiated by an interactively probed vessel cross-section. Additionally, we present the results of an evaluation with domain experts, measuring the value of each of the visualization styles and related rendering parameters. 
<s> BIB005 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> This book provides an introduction to human visual perception suitable for readers studying or working in the fields of computer graphics and visualization, cognitive science, and visual neuroscience. It focuses on how computer graphics images are generated, rather than solely on the organization of the visual system itself; therefore, the text provides a more direct tie between image generation and the resulting perceptual phenomena. It covers such topics as the perception of material properties, illumination, the perception of pictorial space, image statistics, perception and action, and spatial cognition. <s> BIB006 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we "see” details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics. <s> BIB007 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Introduction <s> Line drawing techniques are important methods to illustrate shapes. Existing feature line methods, e.g., suggestive contours, apparent ridges, or photic extremum lines, solely determine salient regions and illustrate them with separate lines. Hatching methods convey the shape by drawing a wealth of lines on the whole surface. Both approaches are often not sufficient for a faithful visualization of organic surface models, e.g., in biology or medicine. In this paper, we present a novel object-space line drawing algorithm that conveys the shape of such surface models in real-time. Our approach employs contour- and feature-based illustrative streamlines to convey surface shape (ConFIS). For every triangle, precise streamlines are calculated on the surface with a given curvature vector field. Salient regions are detected by determining maxima and minima of a scalar field. Compared with existing feature lines and hatching methods, ConFIS uses the advantages of both categories in an effective and flexible manner. We demonstrate this with different anatomical and artificial surface models. In addition, we conducted a qualitative evaluation of our technique to compare our results with exemplary feature line and hatching methods. <s> BIB008
|
The purpose of medical-image-data visualization is to support "the inspection, analysis and interpretation of patient data" and, more specifically, to enable "physicians to explore patient data rapidly and accurately with minimal cognitive effort" [ABK*15]. Medical image data, such as CT and MRI, are physical measurements which exhibit noise and inhomogeneities. The anatomical structures represented in the image data have organic shapes and may be quite complex (e. g., highly curved, branching). The spatial relations between the anatomical surfaces are often quite complex, which makes medical visualization problems unique. Although slice-based 2D visualizations dominate in the field of radiological diagnosis, there are many tasks-such as in treatment planning and in dealing with complex fractures-where 3D visualizations are employed (see BIB002 for a discussion of medical 2D and 3D visualizations). A large variety of medical visualization techniques are available . These techniques include basic surface and volume rendering techniques, tagged volume rendering to enable the selective emphasis of relevant objects, and smart visibility techniques BIB005 to reveal important structures that may otherwise be occluded. Illustrative visualization techniques may be used to represent surface details faithfully BIB008 . They may be combined with surface and volume rendering techniques BIB003 , display additional elements or details , and generally facilitate the use of abstraction [RBGV08]. Special techniques were developed to clearly display elongated branching structures such as vasculature [JQD*08, KOCC14]. The rendering of fiber tracts extracted from Diffusion Tensor Imaging has developed into its own research direction, and a lot of research has been devoted to displaying blood flow [LGV*16, vPBB*10]. The above-mentioned techniques require users of visualization systems to adjust several parameters, such as color, texture, or transparency, to effectively represent tissue properties. Moreover, the final appearance depends on preprocessing (e. g., noise removal, vesselness filtering) and postprocessing (e. g., mesh smoothing or simplification). Consequently, the variety of methods, the resulting broad range of parameters, and the large number of possible parameter values-not to mention the impressive number of possible combinations-can be overwhelming for developers who want to create 3D visualizations for specific medical tasks. Perception guidance. In general, visualization design decisions may benefit from visual perception research. For example, there is an extensive literature on contrast and shape perception, on the effectiveness of depth cues, on attentional guidance for the goal-directed emphasis of important structures, and on other low-level (i. e., based on simple visual information such as edges, contrast, color, motion, etc.), bottom-up (i. e., data-driven) processes that explain why some objects in a larger scene may be immediately recognized without special effort. Moreover, it is clear that shading, shadows, and surface texture contribute to the perception of 3D shapes from images . While this basic research is an essential background for designing medical visualizations, it is by far not sufficient. Research in visual perception is (for good reasons) often focused on simple geometries and simple layouts with a few objects, and interaction is usually not taken into account.
Thus, the results cannot be easily generalized to complex visualizations of irregular anatomical structures that are interactively explored by experts who know the particular anatomical region well. Both Healey and Enns BIB007 as well as Ware BIB001 provide a comprehensive summary of visual perception research and its consequences for information visualization. Similarly, Bartz and colleagues discussed perception research and its consequences for computer graphics as well as virtual and augmented reality. Likewise, Thompson and colleagues BIB006 discuss visual perception at length, with a focus on its applications to computer graphics. Pouli and colleagues have examined image statistics and their relationship to both perception and computer graphics. This survey extends these other reviews, in particular by adding an explicit focus on medical visualization. Thus, we discuss perceptual experiments that take realistic medical visualization scenarios into account, and we discuss the details of designing evaluation experiments in order to help the reader design experiments for concrete medical visualization problems. Medical Tasks. In order to place this survey into an application-relevant context, it is necessary to consider the general functions that medical visualizations serve. In clinical practice, physicians analyze medical image data in a very goal-directed manner based on knowledge of clinical symptoms and previous examinations. They also use these images and derived visualizations to communicate with colleagues. Finally, they sometimes, albeit much less often, freely explore medical image data without a clear hypothesis. There are a number of general tasks for which 3D medical visualizations are used. They provide an overview when there is a rare anatomical variant or complex fracture. They are used for treatment planning; for example, making decisions about resectability (can a tumor be resected at all?), the amount of surgery, and access paths. For these tasks, faithful representations of local regions, including vasculature, are required. The display of fiber tracts is essential for neurosurgery planning. Physicians are interested in local shape variations, for example in order to assess bones and possible rheumatic modifications [ZCBM14] or to assess the malignancy of a tumor. Possible infiltrations, such as the specific relation between a tumor and its surrounding structures, are also often essential. The investigation of anatomical details for selecting an implant has a similar level of complexity. These tasks require a thorough understanding of the relevant structures-including their appearance and shape-which makes it essential to take perceptual findings into account. Scope and Organization. This state-of-the-art report (STAR) focuses on medical visualization techniques that display one dataset. Multimodal visualization, comparative visualization of datasets over time, and special data-such as functional MRI or perfusion data-are not considered here, since there are very few perception-based studies for them. Blood flow and fiber tract visualization are considered, since there are a number of perceptually motivated techniques for them. Medical augmented reality is also not considered, although perception-based research is highly relevant there (see, e. g., Bichlmeier et al. BIB004 ). Furthermore, we restrict ourselves to true 3D visualizations and do not discuss projections, such as colon flattening, or curved planar reformation [KFW*02].
This decision is motivated by the unique advantages and problems of 3D visualizations (e. g., occlusion). Furthermore, glyph-based medical visualization is not considered here, as it augments the anatomical 3D structures with artificial shapes. Moreover, we do not discuss the influence of display types such as stereo monitors [BHS*14]. The remainder of this STAR is structured as follows. In Sect. 2, we provide the basic findings of visual perception research that are relevant for medical visualization, with a particular focus on depth and shape perception. In Sect. 3, we introduce a number of perceptually motivated, 3D, medical-visualization techniques, including volume rendering, vascular visualization, blood flow, and fiber tract visualization. In Sect. 4, we discuss general issues in experimental design, with a focus on evaluating (medical) visualization techniques. This should not only help the reader to understand existing studies but also provide guidance for designing new studies (and ensure that the results are valid). In Sect. 5, we return to a selection of the techniques described in Sect. 3 in order to discuss how they were evaluated with respect to perceptual effectiveness. Since there is clearly a need for future research, we discuss a research agenda in Sect. 6.
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Depth Perception <s> The haloed line effect is a technique where when a line in three-dimensional space passes in front of another line, a gap is produced in the projection of the more distant line. The gap is produced as if an opaque halo surrounded the closer line. This method for approximate hidden-line-elimination is advantageous because explicit surface equations are not necessary. The relative depth of lines, axes, curves and lettering is easily perceived. This technique is especially suitable for the display of finite element grids, three-dimensional contour maps and ruled surfaces. When the lines or curves on a surface are closer than the gap size, the gaps produced close up to produce a complete hidden-line-elimination. A simple but efficient implementation is described which can be used in the rendering of a variety of three-dimensional situations. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Depth Perception <s> This study investigates human performance when using semitransparent tools in interactive 3D computer graphics environments. The article briefly reviews techniques for presenting depth information and examples of applying semitransparency in computer interface design. We hypothesize that when the user moves a semitransparent surface in a 3D environment, the “partial-occlusion” effect introduced through semitransparency acts as an effective cue in target localization—an essential component in many 3D interaction tasks. This hypothesis was tested in an experiment in which subjects were asked to capture dynamic targets (virtual fish) with two versions of a 3D box cursor, one with and one without semitransparent surfaces. Results showed that the partial-occlusion effect through semitransparency significantly improved users' performance in terms of trial completion time, error rate, and error magnitude in both monoscopic and stereoscopic displays. Subjective evaluations supported the conclusions drawn from performance measures. The experimental results and their implications are discussed, with emphasis on the relative, discrete nature of the partial-occlusion effect and on interactions between different depth cues. The article concludes with proposals of a few future research issues and applications of semitransparency in human-computer interaction. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Depth Perception <s> Volumetric data commonly has high depth complexity which makes it difficult to judge spatial relationships accurately. There are many different ways to enhance depth perception, such as shading, contours, and shadows. Artists and illustrators frequently employ halos for this purpose. In this technique, regions surrounding the edges of certain structures are darkened or brightened which makes it easier to judge occlusion. Based on this concept, we present a flexible method for enhancing and highlighting structures of interest using GPU-based direct volume rendering. Our approach uses an interactively defined halo transfer function to classify structures of interest based on data value, direction, and position. A feature-preserving spreading algorithm is applied to distribute seed values to neighboring locations, generating a controllably smooth field of halo intensities. These halo intensities are then mapped to colors and opacities using a halo profile function. 
Our method can be used to annotate features at interactive frame rates. <s> BIB003 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Depth Perception <s> We present a technique for the illustrative rendering of 3D line data at interactive frame rates. We create depth-dependent halos around lines to emphasize tight line bundles while less structured lines are de-emphasized. Moreover, the depth-dependent halos combined with depth cueing via line width attenuation increase depth perception, extending techniques from sparse line rendering to the illustrative visualization of dense line data. We demonstrate how the technique can be used, in particular, for illustrating DTI fiber tracts but also show examples from gas and fluid flow simulations and mathematics as well as describe how the technique extends to point data. We report on an informal evaluation of the illustrative DTI fiber tract visualizations with domain experts in neurosurgery and tractography who commented positively about the results and suggested a number of directions for future work. <s> BIB004 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Depth Perception <s> This book provides an introduction to human visual perception suitable for readers studying or working in the fields of computer graphics and visualization, cognitive science, and visual neuroscience. It focuses on how computer graphics images are generated, rather than solely on the organization of the visual system itself; therefore, the text provides a more direct tie between image generation and the resulting perceptual phenomena. It covers such topics as the perception of material properties, illumination, the perception of pictorial space, image statistics, perception and action, and spatial cognition. <s> BIB005
|
The study of depth perception is a core research area in visual perception, with studies dating back to the late 1800s. It is clear that the speed and accuracy with which 3D scenes are perceived depend on depth cues [RHFL10].

Classes of Depth Cues. Monoscopic depth cues can be seen with a single eye. Shadows, perspective projection, partial occlusion, and shading are essential monoscopic depth cues. Motion parallax is one of the main motion-based, monoscopic depth cues. It exploits the image changes that occur when a 3D object or scene moves relative to the observer. There are a number of other motion-based cues (e. g., the kinetic depth effect), all of which are collected under the term shape-from-motion. Stereoscopic depth cues employ the fact that the two eyes have slightly different views of the world. The two primary stereoscopic cues are binocular disparity (i. e., the difference in the location of an object in the two retinal images) and convergence (i. e., the angular deviation of the two eyes from straight ahead required to fixate on an object). In addition to categorizing depth cues based on how many eyes they use (monoscopic versus stereoscopic), one can categorize them based on the class of information they use. In general, there are motion-based cues, surface-texture cues, and illumination-based cues. This last category is often referred to as shape-from-shading [Hor70, BCD * 12] and follows the "Dark is Deep" paradigm: the darkness of a small patch of a 2D image is directly related to the depth of that area in the 3D scene [TM83, Ram88, LB00].

Depth Cues in Stylization. In a photograph of the real world, a large number of monoscopic depth cues work together to provide explicit, metric information about the 3D layout of the scene, including information specifying that the input is a 2D image of a 3D scene. Careful attention to as many of these cues as possible allows us to synthesize photorealistic images. Using a subset of the cues still provides an effective way of clearly specifying the 3D structure of a scene without requiring full photorealism. Indeed, artists selectively use various image cues to create a stylized version of a scene. Naturally, computer graphics researchers have adopted and adapted the artists' stylized depth techniques. For example, the distance to a point on an object can be explicitly encoded by adapting line widths, by adapting the parameters of hatching techniques, or by indicating layering through halos BIB001 BIB003 BIB004. Fig. 1 shows how some of these illustrative depth cues are used in medical visualization. The depth cues used here are based on real-world phenomena: silhouettes arising from grazing lighting (Fig. 1, left) and shadows from a camera-mounted light source (Fig. 1, right). Both of these cues are known to work acceptably well in humans and are also used in computer vision models of shape-from-shading [BCD * 12].

Beyond the effect of individual depth cues, there are a number of studies that examine the interaction between cues. For example, Zhai and colleagues BIB002 found that stereo projection and semitransparent volume cursors reinforced each other and enabled faster and more accurate selection of objects compared to monoscopic rendering and opaque volume cursors. For more on depth perception research, the reader is directed to the overview books by Thompson and colleagues BIB005 and by Goldstein.
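To make the stylized encodings above concrete, the following minimal sketch maps a normalized depth value both to luminance ("Dark is Deep") and to stroke width, in the spirit of the line-width and halo techniques cited above. The function names and parameter values are our own illustration, not taken from the cited systems.

```python
# Minimal sketch of two monoscopic depth cues: "dark is deep" luminance
# attenuation and depth-dependent line width. Illustrative only; the cited
# systems use more elaborate, scene-dependent mappings.
import numpy as np

def dark_is_deep(luminance_near, luminance_far, depth):
    """Attenuate luminance with normalized depth in [0, 1]: deeper is darker."""
    depth = np.clip(depth, 0.0, 1.0)
    return (1.0 - depth) * luminance_near + depth * luminance_far

def depth_dependent_line_width(width_near, width_far, depth):
    """Thin out stroke width with distance, as in illustrative line rendering."""
    depth = np.clip(depth, 0.0, 1.0)
    return (1.0 - depth) * width_near + depth * width_far

if __name__ == "__main__":
    depths = np.linspace(0.0, 1.0, 5)
    print(dark_is_deep(0.9, 0.2, depths))                # darker with depth
    print(depth_dependent_line_width(3.0, 0.5, depths))  # thinner with depth
```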
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> 1. The striate cortex was studied in lightly anaesthetized macaque and spider monkeys by recording extracellularly from single units and stimulating the retinas with spots or patterns of light. Most cells can be categorized as simple, complex, or hypercomplex, with response properties very similar to those previously described in the cat. On the average, however, receptive fields are smaller, and there is a greater sensitivity to changes in stimulus orientation. A small proportion of the cells are colour coded.2. Evidence is presented for at least two independent systems of columns extending vertically from surface to white matter. Columns of the first type contain cells with common receptive-field orientations. They are similar to the orientation columns described in the cat, but are probably smaller in cross-sectional area. In the second system cells are aggregated into columns according to eye preference. The ocular dominance columns are larger than the orientation columns, and the two sets of boundaries seem to be independent.3. There is a tendency for cells to be grouped according to symmetry of responses to movement; in some regions the cells respond equally well to the two opposite directions of movement of a line, but other regions contain a mixture of cells favouring one direction and cells favouring the other.4. A horizontal organization corresponding to the cortical layering can also be discerned. The upper layers (II and the upper two-thirds of III) contain complex and hypercomplex cells, but simple cells are virtually absent. The cells are mostly binocularly driven. Simple cells are found deep in layer III, and in IV A and IV B. In layer IV B they form a large proportion of the population, whereas complex cells are rare. In layers IV A and IV B one finds units lacking orientation specificity; it is not clear whether these are cell bodies or axons of geniculate cells. In layer IV most cells are driven by one eye only; this layer consists of a mosaic with cells of some regions responding to one eye only, those of other regions responding to the other eye. Layers V and VI contain mostly complex and hypercomplex cells, binocularly driven.5. The cortex is seen as a system organized vertically and horizontally in entirely different ways. In the vertical system (in which cells lying along a vertical line in the cortex have common features) stimulus dimensions such as retinal position, line orientation, ocular dominance, and perhaps directionality of movement, are mapped in sets of superimposed but independent mosaics. The horizontal system segregates cells in layers by hierarchical orders, the lowest orders (simple cells monocularly driven) located in and near layer IV, the higher orders in the upper and lower layers. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> SUMMARY In natural vision, information overspecifies the relative distances between objects and their layout in three dimensions. Directed perception applies (Cutting, 1986), rather than direct or indirect perception, because any single source of information (or cue) might be adequate to reveal relative depth (or local depth order), but many are present and useful to observers. 
Such overspecification presents the theoretical problem of how perceivers use this multiplicity of information to arrive at a unitary appreciation of distance between objects in the environment. This article examines three models of directed perception: selection, in which only one source of information is used; addition, in which all sources are used in simple combination; and multiplication, in which interactions among sources can occur. To establish perceptual overspecification, we created stimuli with four possible sources of monocular spatial information, using all combinations of the presence or absence of relative size, height in the projection plane, occlusion, and motion parallax. Visual stimuli were computer generated and consisted of three untextured parallel planes arranged in depth. Three tasks were used: one of magnitude estimation of exocentric distance within a stimulus, one of dissimilarity judgment in how a pair of stimuli revealed depth, and one of choice judgment within a pair as to which one revealed depth best. Grouped and individual results of the one direct and two indirect scaling tasks suggest that perceivers use these sources of information in an additive fashion. That is, one source (or cue) is generally substitutable for another, and the more sources that are present, the more depth is revealed. This pattern of results suggests independent use of information by four separate, functional subsystems within the visual system, here called minimodules. Evidence for and advantages of mmimodularity are discussed. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Three experiments were conducted to test Hoffman and Richards's (1984) hypothesis that, for purposes of visual recognition, the human visual system divides three-dimensional shapes into parts at negative minima of curvature. In the first two experiments, subjects observed a simulated object (surface of revolution) rotating about a vertical axis, followed by a display of four alternative parts. They were asked to select a part that was from the object. Two of the four parts were divided at negative minima of curvature and two at positive maxima. When both a minima part and a maxima part from the object were presented on each trial (experiment 1), most of the correct responses were minima parts (101 versus 55). When only one part from the object—either a minima part or a maxima part—was shown on each trial (experiment 2), accuracy on trials with correct minima parts and correct maxima parts did not differ significantly. However, some subjects indicated that they reversed figure and ground, thereby changing ... <s> BIB003 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> There are many applications that can benefit from the simultaneous display of multiple layers of data. The objective in these cases is to render the layered surfaces in a such way that the outer structures can be seen and seen through at the same time. The paper focuses on the particular application of radiation therapy treatment planning, in which physicians need to understand the three dimensional distribution of radiation dose in the context of patient anatomy. 
We describe a promising technique for communicating the shape and position of the transparent skin surface while at the same time minimally occluding underlying isointensity dose surfaces and anatomical objects: adding a sparse, opaque texture comprised of a small set of carefully chosen lines. We explain the perceptual motivation for explicitly drawing ridge and valley curves on a transparent surface, describe straightforward mathematical techniques for detecting and rendering these lines, and propose a small number of reasonably effective methods for selectively emphasizing the most perceptually relevant lines in the display. <s> BIB004 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Transparency can be a useful device for simultaneously depicting multiple superimposed layers of information in a single image. However, in computer-generated pictures-as in photographs and in directly viewed actual objects-it can often be difficult to adequately perceive the three-dimensional shape of a layered transparent surface or its relative depth distance from underlying structures. Inspired by artists' use of line to show shape, we have explored methods for automatically defining a distributed set of opaque surface markings that intend to portray the three-dimensional shape and relative depth of a smoothly curving layered transparent surface in an intuitively meaningful (and minimally occluding) way. This paper describes the perceptual motivation, artistic inspiration and practical implementation of an algorithm for "texturing" a transparent surface with uniformly distributed opaque short strokes, locally oriented in the direction of greatest normal curvature, and of length proportional to the magnitude of the surface curvature in the stroke direction. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. <s> BIB005 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Line drawings produced by contours traced on a surface can produce a vivid impression of the surface shape. The stability of this perception is notable considering that the information provided by the surface contours is quite ambiguous. We have studied the stability of line drawing perception from psychophysical and computational standpoints. For a given family of simple line drawings, human observers could perceive the drawings as depicting either an elliptic (egg-shaped) or hyperbolic (saddle-shaped) smooth surface patch. Rotation of the image along the line of sight and change in aspect ratio of the line drawing could bias the observer toward either interpretation. The results were modeled by a simple Bayesian observer that computes the probability to choose either interpretation given the information in the image and prior preferences. The model’s decision rule is noncommitting: for a given input image its responses are still probabilistic, reflecting variability in the modeled observers’ judgements. A good fit to the data was obtained when three observer assumptions were introduced: a preference for convex surfaces, a preference for surface contours aligned with the principal lines of curvature, and a preference for a surface orientation consistent with an object viewed from above. We discuss how these assumptions might reflect regularities of the visual world. 
<s> BIB006 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Li and Zaidi (Li, A., and Zaidi, Q. (2000) Vision Research, 40, 217–242) showed that the veridical perception of the 3-dimensional (3D) shape of a corrugated surface from texture cues is entirely dependent on the visibility of critical patterns of oriented energy. These patterns are created by perspective projection of surface markings oriented along lines of maximum 3D curvature. In images missing these orientation modulations, observers confused concavities with convexities, and leftward slants with rightward slants. In this paper, it is shown that these results were a direct consequence of the physical information conveyed by different oriented components of the texture pattern. For texture patterns consisting of single gratings of arbitrary spatial frequency and orientation, equations are derived from perspective geometry that describe the local spatial frequency and orientation for any slant at any height above and below eye level. The analysis shows that only gratings oriented within a few degrees of the axis of maximum curvature exhibit distinct patterns of orientation modulations for convex, concave, and leftward and rightward slanted portions of a corrugated surface. All other gratings exhibit patterns of frequency and orientation modulations that are distinct for curvatures on the one hand and slants on the other, but that are nearly identical for curvatures of different sign, and nearly identical for slants of different direction. The perceived shape of surfaces was measured in a 5AFC paradigm (concave, convex, leftward slant, rightward slant, and flat-frontoparallel). Observers perceived all five shapes correctly only for gratings oriented within a few degrees of the axis of maximum curvature. For all other oriented gratings, observers could distinguish curvatures from slants, but could not distinguish signs of curvature or directions of slant. These results demonstrate that human observers utilize the shape information provided by texture components along both critical and non-critical orientations. <s> BIB007 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Under typical viewing conditions, we find it easy to distinguish between different materials, such as metal, plastic, and paper. Recognizing materials from their surface reflectance properties (such as lightness and gloss) is a nontrivial accomplishment because of confounding effects of illumination. However, if subjects have tacit knowledge of the statistics of illumination encountered in the real world, then it is possible to reject unlikely image interpretations, and thus to estimate surface reflectance even when the precise illumination is unknown. A surface reflectance matching task was used to measure the accuracy of human surface reflectance estimation. The results of the matching task demonstrate that subjects can match surface reflectance properties reliably and accurately in the absence of context, as long as the illumination is realistic. Matching performance declines when the illumination statistics are not representative of the real world. Together these findings suggest that subjects do use stored assumptions about the statistics of real-world illumination to estimate surface reflectance.
Systematic manipulations of pixel and wavelet properties of illuminations reveal that the visual system’s assumptions about illumination are of intermediate complexity (e.g., presence of edges and bright light sources), rather than of high complexity (e.g., presence of recognizable objects in the environment). <s> BIB008 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> This paper presents a shading model for volumetric data which enhances the perception of surfaces within the volume. The model incorporates uniform diffuse illumination, which arrives equally from all directions at each surface point in the volume. This illumination is attenuated by occlusions in the local vicinity of the surface point, resulting in shadows in depressions and crevices. Experiments by other authors have shown that perception of a surface is superior under uniform diffuse lighting, compared to illumination from point source lighting. <s> BIB009 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> We describe the results of two comprehensive controlled observer experiments intended to yield insight into the following question: If we could design the ideal texture pattern to apply to an arbitrary smoothly curving surface in order to enable its 3D shape to be most accurately and effectively perceived, what would the characteristics of that texture pattern be? We begin by reviewing the results of our initial study in this series, which were presented at the 2003 IEEE Symposium on Information Visualization, and offer an expanded analysis of those findings. We continue by presenting the results of a follow-on study in which we sought to more specifically investigate the separate and combined influences on shape perception of particular texture components, with the goal of obtaining a clearer view of their potential information carrying capacities. In each study, we investigated the observers' ability to identify the intrinsic shape category of a surface patch (elliptical, hyperbolic, cylindrical, or flat) and its extrinsic surface orientation (convex, concave, both, or neither). In our first study, we compared performance under eight different texture type conditions, plus two projection conditions (perspective or orthographic) and two viewing conditions (head-on or oblique). We found that: 1) shape perception was better facilitated, in general, by the bidirectional "principal direction grid" pattern than by any of the seven other patterns tested; 2) shape type classification accuracy remained high under the orthographic projection condition for some texture types when the viewpoint was oblique; 3) perspective projection was required for accurate surface orientation classification; and 4) shape classification accuracy was higher when the surface patches were oriented at a (generic) oblique angle to the line of sight than when they were oriented (in a nongeneric pose) to face the viewpoint straight on. In our second study, we compared performance under eight new texture type conditions, redesigned to facilitate gathering insight into the cumulative effects of specific individual directional components in a wider variety of multidirectional texture patterns. 
We found that shape classification accuracy was equivalently good under a variety of test patterns that included components following either the first or first and second principal directions, in addition to other directions, suggesting that a principal direction grid texture is not the only possible "best option" for enhancing shape representation. <s> BIB010 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> Textures are commonly used to enhance the representation of shape in non-photorealistic rendering applications such as medical drawings. Textures that have elongated linear elements appear to be superior to random textures in that they can, by the way they conform to the surface, reveal the surface shape. We observe that shape following hache marks commonly used in cartography and copper-plate illustration are locally similar to the effect of the lines that can be generated by the intersection of a set of parallel planes with a surface. We use this as a basis for investigating the relationships between view direction, texture orientation and surface orientation in affording surface shape perception. We report two experiments using parallel plane textures. The results show that textures constructed from planes more nearly orthogonal to the line of sight tend to be better at revealing surface shape. Also, viewing surfaces from an oblique view is much better for revealing surface shape than viewing them from directly above. <s> BIB011 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> This book provides an introduction to human visual perception suitable for readers studying or working in the fields of computer graphics and visualization, cognitive science, and visual neuroscience. It focuses on how computer graphics images are generated, rather than solely on the organization of the visual system itself; therefore, the text provides a more direct tie between image generation and the resulting perceptual phenomena. It covers such topics as the perception of material properties, illumination, the perception of pictorial space, image statistics, perception and action, and spatial cognition. <s> BIB012 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Shape Perception <s> This paper provides a tutorial and survey for a specific kind of illustrative visualization technique: feature lines. We examine different feature line methods. For this, we provide the differential geometry behind these concepts and adapt this mathematical field to the discrete differential geometry. All discrete differential geometry terms are explained for triangulated surface meshes. These utilities serve as basis for the feature line methods. We provide the reader with all knowledge to re-implement every feature line method. Furthermore, we summarize the methods and suggest a guideline for which kind of surface which feature line algorithm is best suited. Our work is motivated by, but not restricted to, medical and biological surface models. <s> BIB013
|
The visual perception of 3D shapes is quite complex, in part due to the loss of information when the 3D object is projected to a 2D (retinal) image. Since the pattern of light on the retina is affected by an intricate interaction between the illumination and the geometry, orientation, and texture of the object, the same pattern of light sensations on the retina could have been caused by different 3D shapes. Thus, visual shape perception is inherently ambiguous. The ambiguity of diffusely shaded images, which is called the bas-relief ambiguity, cannot be resolved by any change in lighting [BKY97]. Despite this ambiguity, shape-from-shading is believed to be evolutionarily one of the earliest depth mechanisms and is very effective [ZTCS99]. The visual system relies on past experience and on several assumptions to resolve the ambiguities. For example, surfaces tend to be perceived as convex [CSD * 09]. These assumptions are not always appropriate and can cause incorrect perception of surface category and local orientation. Moreover, the most frequently used model of the human visual system assumes a single light source which is above and to the right. This assumption has significant consequences for many perceptual phenomena beyond shape perception. There is, however, some evidence that the human visual system may in fact inherently assume a number of (locally independent) light sources (see, e. g., [GKB * 99]). Moreover, the visual system is remarkably insensitive to illumination inconsistencies under certain conditions. There is also evidence that the correct perception of material properties requires more realistic lighting conditions, such as multiple light sources BIB008.

The perception of 3D shapes occurs at different spatial scales. At least two levels need to be distinguished: a local scale, where the shape of individual objects is assessed, and a global scale, where spatial relations, including depth relations and proximity of objects, are assessed. Indeed, there is considerable evidence that the human visual system represents the entire scene in a linear scale space with a large number of scales, where each scale is a copy of the scene convolved with a Gaussian kernel, and subsequent scales increase the size of the kernel. Thus, research on the influence of depth cues should take these different scales into account.

Shape-From-Shading. The changes in brightness along a surface can provide shape information. Depending on the illumination model, shadow areas represent strong discontinuities in brightness (for point light sources) or smooth transitions, such as soft shadows (area light sources). For complex anatomical surfaces, such as the brain with its many creases, advanced shadow generation using diffuse lighting improves depth perception BIB009. The influence of the illumination model on perception was recently studied [HBM * 14].

Shape-From-Texture. Most surfaces are textured. This can be seen as a violation of the assumption that neighboring parts of a surface affect light in the same way, and it poses a problem for both edge-detection-based segmentation and shape-from-shading techniques. Texture can, however, provide information about shape. Although a considerable amount of information exists about the large-scale structure of images, most of the information about textures is implicit (such as the structure of the Fourier transform of an image). One of the earliest examinations of texture is from Gibson.
The most influential model of texture structure comes from Julesz and Caelli; it models texture elements as Gabor patches (a sinusoid convolved with a 2D Gaussian). Interestingly, Gabor patches bear a strong resemblance to the receptive field structure of human vision. Texture is particularly useful in determining the local curvature of a surface BIB012. For example, surface textures that represent principal curvature directions (PCDs) improve shape perception: observers tend to interpret lines on a surface as curvature directions BIB006. In visualization, texture has been used to represent essential properties of shape. Lines on a surface may help the viewer to separate it into meaningful substructures. If shapes are familiar, viewers look for features that enable such a separation. Interrante and colleagues have shown that a certain type of line, frequently used by illustrators, supports this separation BIB003 BIB004. These lines are called valley lines and represent regions of a curved surface where the curvature along the PCD has a local minimum (i. e., the locations where the surface is flattest). These regions are heavily affected by occlusion of surrounding structures and are thus drawn with dark colors. If there are not enough features that can be displayed with valley lines, ridge lines may be added, representing regions with a local maximum of the curvature along the PCD (i. e., the regions where the surface curvature is highest; see BIB013 for mathematical descriptions of these lines and algorithms to compute them; a simplified sketch follows at the end of this subsection). Such a sparse representation of a surface may be useful for displaying an outer surface in a multi-layer visualization (e. g., to display an organ surface and a deep-seated tumor as well as surrounding risk structures). This is a promising alternative to a semi-transparent display, where ordinal depth cues, such as occlusion and shading, are hardly recognizable for a transparent surface BIB004. There is some debate about whether texture cues can be interpreted correctly when a 3D model is displayed in orthographic projection (a typical situation in medical visualization). Li and Zaidi found that "the surface must be viewed with a noticeable amount of perspective projection" BIB007. Kim and colleagues BIB010, however, found that curvature-directed lines convey shape even with orthographic projection. Using only ridge lines may be "uninformative" if most of them are almost aligned with the viewing direction. Thus, a combination of ridge and valley lines yields better performance BIB011.

Shape-From-Silhouettes. Most physiological studies on the neural basis of early visual processing show that one of the first steps in the visual cortex is to extract edges BIB001. Edges are critical for segmenting an object from its background, and as such they are important for both human vision and visualization. The explicit display of silhouettes [IFH * 03], i. e., the boundary between an object and the background, supports object recognition. The display of silhouettes is particularly effective in low-contrast regions with a high density of objects. In medical visualization, this gives rise to the incorporation of edge detection and boundary emphasis techniques [KWTM03].

Combining Cues. Depth and shape perception benefit from combining several depth cues that tend to reinforce each other instead of being just redundant BIB002 BIB005. As an example, the combination of silhouettes and surface textures is effective.
However, combining cues does not always improve perception and may even hamper it, as in the case of various feature lines.
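As a concrete illustration of the ridge and valley lines discussed above, here is a minimal sketch reduced to a 1D surface profile for brevity; the mesh-based algorithms surveyed in BIB013 generalize this to curvature along the principal curvature directions of a triangle mesh. All names are our own and merely illustrative.

```python
# Classify curvature extrema of a 1D surface profile: local curvature
# maxima act as ridge points, local minima as valley points.
import numpy as np

def curvature_profile(y, dx):
    """Signed curvature kappa = y'' / (1 + y'^2)^(3/2) of a sampled profile."""
    dy = np.gradient(y, dx)
    d2y = np.gradient(dy, dx)
    return d2y / (1.0 + dy ** 2) ** 1.5

def ridge_valley_indices(kappa):
    """Indices of local curvature maxima (ridges) and minima (valleys)."""
    ridges, valleys = [], []
    for i in range(1, len(kappa) - 1):
        if kappa[i] > kappa[i - 1] and kappa[i] > kappa[i + 1]:
            ridges.append(i)
        elif kappa[i] < kappa[i - 1] and kappa[i] < kappa[i + 1]:
            valleys.append(i)
    return ridges, valleys

if __name__ == "__main__":
    x = np.linspace(0.0, 4.0 * np.pi, 400)
    y = np.sin(x)                             # toy undulating profile
    kappa = curvature_profile(y, x[1] - x[0])
    ridges, valleys = ridge_valley_indices(kappa)
    print(len(ridges), "ridge points,", len(valleys), "valley points")
```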
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Volume Visualization <s> The sources of visual information that must be present to correctly interpret spatial relations in images, the relative importance of different visual information sources with regard to metric judgments of spatial relations in images, and the ways that the task in which the images are used affect the visual information's usefulness are discussed. Cue theory, which states that the visual system computes the distances of objects in the environment based on information from the posture of the eyes and from the patterns of light projected onto the retinas by the environment, is presented. Three experiments in which the influence of pictorial cues on perceived spatial relations in computer-generated images was assessed are discussed. Each experiment examined the accuracy with which subjects matched the position, orientation, and size of a test object with a standard by interactively translating, rotating, and scaling the test object. > <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Volume Visualization <s> Accurately and automatically conveying the structure of a volume model is a problem which has not been fully solved by existing volume rendering approaches. Physics-based volume rendering approaches create images which may match the appearance of translucent materials in nature but may not embody important structural details. Transfer function approaches allow flexible design of the volume appearance but generally require substantial hand-tuning for each new data set in order to be effective. We introduce the volume illustration approach, combining the familiarity of a physics-based illumination model with the ability to enhance important features using non-photorealistic rendering techniques. Since the features to be enhanced are defined on the basis of local volume characteristics rather than volume sample values, the application of volume illustration techniques requires less manual tuning than the design of a good transfer function. Volume illustration provides a flexible unified framework for enhancing the structural perception of volume models through the amplification of features and the addition of illumination effects. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Volume Visualization <s> Lighting has a crucial impact on the appearance of 3D objects and on the ability of an image to communicate information about a 3D scene to a human observer. This paper presents a new automatic lighting design approach for comprehensible rendering of 3D objects. Given a geometric model of a 3D object or scene, the material properties of the surfaces in the model, and the desired viewing parameters, our approach automatically determines the values of various lighting parameters by optimizing a perception-based image quality objective function. This objective function is designed to quantify the extent to which an image of a 3D scene succeeds in communicating scene information, such as the 3D shapes of the objects, fine geometric details, and the spatial relationships between the objects. Our results demonstrate that the proposed approach is an effective lighting design tool, suitable for users without expertise or knowledge in visual perception or in lighting design. 
<s> BIB003 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Volume Visualization <s> In this paper, we present a user study in which we have investigated the influence of seven state-of-the-art volumetric illumination models on the spatial perception of volume rendered images. Within the study, we have compared gradient-based shading with half angle slicing, directional occlusion shading, multidirectional occlusion shading, shadow volume propagation, spherical harmonic lighting as well as dynamic ambient occlusion. To evaluate these models, users had to solve three tasks relying on correct depth as well as size perception. Our motivation for these three tasks was to find relations between the used illumination model, user accuracy and the elapsed time. In an additional task, users had to subjectively judge the output of the tested models. After first reviewing the models and their features, we will introduce the individual tasks and discuss their results. We discovered statistically significant differences in the testing performance of the techniques. Based on these findings, we have analyzed the models and extracted those features which are possibly relevant for the improved spatial comprehension in a relational task. We believe that a combination of these distinctive features could pave the way for a novel illumination model, which would be optimized based on our findings. <s> BIB004 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Volume Visualization <s> Visualizing complex volume data usually renders selected parts of the volume semitransparently to see inner structures of the volume or provide a context. This presents a challenge for volume rendering methods to produce images with unambiguous depth-ordering perception. Existing methods use visual cues such as halos and shadows to enhance depth perception. Along with other limitations, these methods introduce redundant information and require additional overhead. This paper presents a new approach to enhancing depth-ordering perception of volume rendered images without using additional visual cues. We set up an energy function based on quantitative perception models to measure the quality of the images in terms of the effectiveness of depth-ordering and transparency perception as well as the faithfulness of the information revealed. Guided by the function, we use a conjugate gradient method to iteratively and judiciously enhance the results. Our method can complement existing systems for enhancing volume rendering results. The experimental results demonstrate the usefulness and effectiveness of our approach. <s> BIB005
|
Corcoran and colleagues classify perceptually motivated volume visualization techniques into those which support depth perception (see Fig. 2) and those which support shape perception (see Fig. 3). The shape perception techniques that are based on shading are shown in a separate diagram (see Fig. 4). In the following, we will discuss these techniques in greater detail.

Ebert and Rheingans BIB002 showed that the depth-based weighting between object color and background color does not need to be linear; exponential functions can also be employed. In their application scenarios, the background color is often blue. This is inspired by artists who use blue backgrounds to depict an aerial perspective. Svakhine et al. enhance depth perception for large- and small-scale features by employing color-based techniques which also mimic the effects of aerial perspective. To give the user more control over how features are emphasized, Svakhine et al. introduce a depth filtering function, which allows depth enhancement to be constrained to a subset of the overall depth range.

Illumination-Based Techniques. The second group of physics-based techniques focuses on illumination. These techniques exploit the peculiarities of light transport and the fact that the human visual system has evolved to interpret the effects resulting from the underlying physics. Thus, shadowing, shading, and other effects play an important role in this group of techniques. Volume rendering with advanced illumination-based techniques was recently introduced in commercial medical diagnosis software (e. g., SIEMENS syngo.via Frontier) and is referred to as cinematic rendering. In addition to lighting effects, light source placement affects shape perception considerably. While lighting design for polygonal surface rendering was studied in depth (see BIB003 for a seminal contribution), it recently attracted interest in (medical) volume visualization [TLD * 12, ZWM13, ZCBM14]. Aerial perspective could also be considered an illumination-based technique, as it is based on the attenuation of light. Due to the striking similarity to chromadepth, however, we have classified it as a chromadepth-based technique. In the following, we will briefly discuss other illumination-based techniques as they are often applied in 3D medical visualization. While most of the techniques follow the widespread gradient-based illumination model proposed by Levoy, a large number of illumination models that consider shadowing, ambient occlusion, and halos have recently been proposed.

Shadowing Effects. Due to the importance of shadowing effects in depth perception BIB001, shadows are often taken into account in perceptually-motivated volume rendering. Due to the computational complexity of these lighting effects, algorithms are often constrained to single scattering and to the use of a point or a directional light. To optimize the required computations, several approaches have been proposed in the area of medical visualization.

To guide automatic light placement, several perceptually motivated measures have been proposed. Lighting goodness assesses the quality of lighting, basically by analyzing differences between an unilluminated image and an illuminated one. Lighting similarity measures whether a light source is highly representative, which is desired for the placement of several light sources to ensure that they complement each other well. Finally, light stability refers to the differences that result when a light source position slightly changes. Ideally, the depiction of an object's shape is robust against small positional changes.
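As a rough illustration of these three measures, the sketch below operationalizes them on grayscale renderings stored as 2D numpy arrays with values in [0, 1]. This is our own simplification; the exact formulations in the literature differ, and `render` is a hypothetical callback supplied by the host application.

```python
# Simplified lighting measures; illustrative only.
import numpy as np

def lighting_goodness(illuminated, unilluminated):
    # How much image structure the light source adds over an unlit rendering.
    return float(np.mean(np.abs(illuminated - unilluminated)))

def lighting_similarity(image_a, image_b):
    # How strongly two light sources duplicate each other; complementary
    # sources should yield a low value (images assumed to be in [0, 1]).
    return float(1.0 - np.mean(np.abs(image_a - image_b)))

def light_stability(render, light_pos, eps=0.01, trials=8):
    # Average image change caused by small perturbations of the light
    # position; lower values mean a more stable depiction of shape.
    base = render(light_pos)
    diffs = []
    for _ in range(trials):
        jittered = light_pos + eps * np.random.randn(*np.shape(light_pos))
        diffs.append(np.mean(np.abs(render(jittered) - base)))
    return float(np.mean(diffs))
```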
Tao and colleagues combine these measures into a single metric to optimally place an initial light source and then add additional sources as long as they improve the recognizability of shapes (according to their metric of shape perception). In their perceptual experiment, participants were asked to compare pairs of images with respect to the recognizability of surface details. In most cases, images where the light sources were optimized were rated better than images with randomly placed light sources. The new metric turned out to be superior to optimizations based on other metrics (e. g., from Gumhold and colleagues). These results also hold for medical volume data. The following setup for light sources has been shown to be perceptually effective:

• a key light with high intensity at the top left of the scene,
• an auxiliary fill light placed in front of the scene, and
• a back light that emphasizes the silhouettes.

The back light should be blue and the other light sources should be white. The key light should have the highest intensity and the back light the lowest (see the configuration sketch at the end of this subsection). One drawback of this configuration is that some thin structures may be overexposed BIB005. One possible remedy is to use a global tone mapping. This configuration was used in a case study on the analysis of rheumatoid changes. The lighting configuration was perceptually evaluated and discussed with respect to a specific diagnostic task, namely the detection of small erosions from rheumatoid arthritis [ZCBM14]. Zheng and colleagues compared local and global illumination and found that local illumination depicts excessive detail, whereas global illumination leads to a softer appearance, resulting in a lower rate of false positives. With this type of lighting and global illumination, the number of diagnostic errors decreased considerably and participants were twice as fast. It is also important to mention that the participants (who were all physicians) wanted to see both the globally illuminated data and the locally illuminated data. In addition to surface orientation and category assessment tasks, Zheng and colleagues employed lighting-specific metrics BIB005 to measure, under different lighting conditions, the degree to which the luminance histogram was nearly equalized and the degree to which edges (based on an edge detector) were salient.

Perceptual Benefits. Several studies have been conducted to investigate the effects of advanced volume illumination techniques on depth and shape perception. Lindemann and Ropinski BIB004 compared seven state-of-the-art volumetric illumination techniques with respect to depth and size perception as well as subjective preference. They presented participants with volume-rendered images generated using different illumination models and asked the participants to perform depth, size, and beauty-judgment tasks. The results indicate that global illumination improves the perceptual qualities of volume-rendered images. In particular, directional occlusion shading [SPH * 09] improved depth perception significantly. Interestingly, participants nonetheless had a subjective preference for the simple gradient-based shading technique. Šoltészová and colleagues investigated the influence of shadow chromaticity through depth testing and found that it influenced the perceptual qualities of volume-rendered images. In another work by Šoltészová and colleagues [STPV12], shape perception for complex slanted shapes, such as they occur in anatomy, was analyzed.
Like previous authors, they found a systematic error in estimating surface slant. They also discovered that upwards-pointing normals are underestimated less than downwards-pointing normals. This finding enabled them to automatically adjust the shading scheme to correct for these errors. In a follow-up experiment, they showed that shape orientation was indeed perceived more precisely after the correction. More recently, Diaz and colleagues [DRN * 16] investigated the influence of global volume illumination techniques in desktop-based VR systems and found a positive effect on depth perception.
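The three-light configuration referenced above can also be written down declaratively. The sketch below is our own minimal rendition with illustrative positions, colors, and intensities; it is not a configuration taken verbatim from the cited studies.

```python
# Declarative sketch of the perceptually motivated three-light setup:
# a strong white key light at the top left, a weaker white fill light in
# front of the scene, and a dim blue back light to emphasize silhouettes.
from dataclasses import dataclass

@dataclass
class Light:
    name: str
    position: tuple   # scene-space position (x, y, z), camera looking down -z
    color: tuple      # RGB in [0, 1]
    intensity: float

THREE_LIGHT_RIG = [
    Light("key",  (-1.0, 1.0, 1.0),  (1.0, 1.0, 1.0), 1.0),
    Light("fill", (0.0, 0.0, 1.5),   (1.0, 1.0, 1.0), 0.5),
    Light("back", (0.0, 0.5, -1.5),  (0.2, 0.3, 1.0), 0.25),
]

if __name__ == "__main__":
    for light in THREE_LIGHT_RIG:
        print(light)
```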
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> A novel stereoscopic depth encoding/decoding process has been developed which considerably simplifies the creation and presentation of stereoscopic images in a wide range of display media. The patented chromostereoscopic process is unique because the encoding of depth information is accomplished in a single image. The depth encoded image can be viewed with the unaided eye as a normal two dimensional image. The image attains the appearance of depth, however, when viewed by means of the inexpensive and compact depth decoding passive optical system. The process is compatible with photographic, printed, video, slide projected, computer graphic, and laser generated color images. The range of perceived depth in a given image can be selected by the viewer through the use of "tunable depth" decoding optics, allowing infinite and smooth tuning from exaggerated normal depth through zero depth to exaggerated inverse depth. The process is insensitive to the head position of the viewer. Depth encoding is accomplished by mapping the desired perceived depth of an image component into spectral color. Depth decoding is performed by an optical system which shifts the spatial positions of the colors in the image to create left and right views. The process is particularly well suited to the creation of stereoscopic laser shows. Other applications are also being pursued. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We propose a new rendering technique that produces 3-D images with enhanced visual comprehensibility. Shape features can be readily understood if certain geometric properties are enhanced. To achieve this, we develop drawing algorithms for discontinuities, edges, contour lines, and curved hatching. All of them are realized with 2-D image processing operations instead of line tracking processes, so that they can be efficiently combined with conventional surface rendering algorithms.Data about the geometric properties of the surfaces are preserved as Geometric Buffers (G-buffers). Each G-buffer contains one geometric property such as the depth or the normal vector of each pixel. By using G-buffers as intermediate results, artificial enhancement processes are separated from geometric processes (projection and hidden surface removal) and physical processes (shading and texture mapping), and performed as postprocesses. This permits a user to rapidly examine various combinations of enhancement techniques without excessive recomputation, and easily obtain the most comprehensible image.Our method can be widely applied for various purposes. Several of these, edge enhancement, line drawing illustrations, topographical maps, medical imaging, and surface analysis, are presented in this paper. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Authors and Editors. Acknowledgements. Introduction. PART I. BASICS. Generalized Steps. Studio Basics. Archival Considerations. Light on Form. PAT II. RENDERING TECHNIQUES. Line and Ink. Pencil. Carbon Dust. Watercolor and Wash. Gouache and Acrylics. Airbrush. Murals and Dioramas. Model Building. Introduction to Computer Graphics. From 2-D to 3-D. PART III. SUBJECT MATTER. Illustrating Molecules. Illustrating Earth Sciences. Illustrating Astronomy. Illustrating Plants. Illustrating Fossils. Illustrating Invertebrates. 
Illustrating Fishes. Illustrating Amphibians and Reptiles. Illustrating Birds. Illustrating Mammals. Illustrating Animals in Their Habitats. Illustrating Humans and Their Artifacts. Illustrating Medical Subjects. PART IV. BEYOND BASICS. Using the Microscope. Charts and Diagrams. Cartography for the Scientific Illustrator. Copy Photography. The Printing Process. PART V. THE BUSINESS OF SCIENTIFIC ILLUSTRATION. Copyright. Contracts. Operating a Freelance Business. Index of Illustrators. Index. About the Editors. <s> BIB003 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> There are many applications that can benefit from the simultaneous display of multiple layers of data. The objective in these cases is to render the layered surfaces in a such way that the outer structures can be seen and seen through at the same time. The paper focuses on the particular application of radiation therapy treatment planning, in which physicians need to understand the three dimensional distribution of radiation dose in the context of patient anatomy. We describe a promising technique for communicating the shape and position of the transparent skin surface while at the same time minimally occluding underlying isointensity dose surfaces and anatomical objects: adding a sparse, opaque texture comprised of a small set of carefully chosen lines. We explain the perceptual motivation for explicitly drawing ridge and valley curves on a transparent surface, describe straightforward mathematical techniques for detecting and rendering these lines, and propose a small number of reasonably effective methods for selectively emphasizing the most perceptually relevant lines in the display. <s> BIB004 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Transparency can be a useful device for simultaneously depicting multiple superimposed layers of information in a single image. However, in computer-generated pictures-as in photographs and in directly viewed actual objects-it can often be difficult to adequately perceive the three-dimensional shape of a layered transparent surface or its relative depth distance from underlying structures. Inspired by artists' use of line to show shape, we have explored methods for automatically defining a distributed set of opaque surface markings that intend to portray the three-dimensional shape and relative depth of a smoothly curving layered transparent surface in an intuitively meaningful (and minimally occluding) way. This paper describes the perceptual motivation, artistic inspiration and practical implementation of an algorithm for "texturing" a transparent surface with uniformly distributed opaque short strokes, locally oriented in the direction of greatest normal curvature, and of length proportional to the magnitude of the surface curvature in the stroke direction. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. <s> BIB005 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. 
Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding "halos" that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow. <s> BIB006 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Transparency can be a useful device for depicting multiple overlapping surfaces in a single image. The challenge is to render the transparent surfaces in such a way that their 3D shape can be readily understood and their depth distance from underlying structures clearly perceived. This paper describes our investigations into the use of sparsely-distributed discrete, opaque texture as an artistic device for more explicitly indicating the relative depth of a transparent surface and for communicating the essential features of its 3D shape in an intuitively meaningful and minimally occluding way. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. We describe the perceptual motivation and artistic inspiration for defining a stroke texture that is locally oriented in the direction of greatest normal curvature (and in which individual strokes are of a length proportional to the magnitude of the curvature in the direction they indicate), and we discuss two alternative methods for applying this texture to isointensity surfaces defined in a volume. We propose an experimental paradigm for objectively measuring observers' ability to judge the shape and depth of a layered transparent surface, in the course of a task which is relevant to the needs of radiotherapy treatment planning, and use this paradigm to evaluate the practical effectiveness of our approach through a controlled observer experiment based on images generated from actual clinical data. <s> BIB007 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We discuss volume line integral convolution (LIC) techniques for effectively visualizing 3D flow, including using visibility-impeding halos and efficient asymmetric filter kernels. Specifically, we suggest techniques for selectively emphasizing critical regions of interest in a flow; facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines; efficiently incorporating an indication of orientation into a flow representation; and conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations. 
<s> BIB008 </s> Accurately and automatically conveying the structure of a volume model is a problem which has not been fully solved by existing volume rendering approaches. Physics-based volume rendering approaches create images which may match the appearance of translucent materials in nature but may not embody important structural details. Transfer function approaches allow flexible design of the volume appearance but generally require substantial hand-tuning for each new data set in order to be effective. We introduce the volume illustration approach, combining the familiarity of a physics-based illumination model with the ability to enhance important features using non-photorealistic rendering techniques. Since the features to be enhanced are defined on the basis of local volume characteristics rather than volume sample values, the application of volume illustration techniques requires less manual tuning than the design of a good transfer function. Volume illustration provides a flexible unified framework for enhancing the structural perception of volume models through the amplification of features and the addition of illumination effects. <s> BIB009 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Li and Zaidi (Li, A., and Zaidi, Q. (2000) Vision Research, 40, 217–242) showed that the veridical perception of the 3-dimensional (3D) shape of a corrugated surface from texture cues is entirely dependent on the visibility of critical patterns of oriented energy. These patterns are created by perspective projection of surface markings oriented along lines of maximum 3D curvature. In images missing these orientation modulations, observers confused concavities with convexities, and leftward slants with rightward slants. In this paper, it is shown that these results were a direct consequence of the physical information conveyed by different oriented components of the texture pattern. For texture patterns consisting of single gratings of arbitrary spatial frequency and orientation, equations are derived from perspective geometry that describe the local spatial frequency and orientation for any slant at any height above and below eye level. The analysis shows that only gratings oriented within a few degrees of the axis of maximum curvature exhibit distinct patterns of orientation modulations for convex, concave, and leftward and rightward slanted portions of a corrugated surface. All other gratings exhibit patterns of frequency and orientation modulations that are distinct for curvatures on the one hand and slants on the other, but that are nearly identical for curvatures of different sign, and nearly identical for slants of different direction. The perceived shape of surfaces was measured in a 5AFC paradigm (concave, convex, leftward slant, rightward slant, and flat-frontoparallel). Observers perceived all five shapes correctly only for gratings oriented within a few degrees of the axis of maximum curvature. For all other oriented gratings, observers could distinguish curvatures from slants, but could not distinguish signs of curvature or directions of slant. These results demonstrate that human observers utilize the shape information provided by texture components along both critical and non-critical orientations.
<s> BIB010 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We propose a simple and effective method for detecting view-and scale-independent ridge-valley lines defined via first- and second-order curvature derivatives on shapes approximated by dense triangle meshes. A high-quality estimation of high-order surface derivatives is achieved by combining multi-level implicit surface fitting and finite difference approximations. We demonstrate that the ridges and valleys are geometrically and perceptually salient surface features, and, therefore, can be potentially used for shape recognition, coding, and quality evaluation purposes. <s> BIB011 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Textures are commonly used to enhance the representation of shape in non-photorealistic rendering applications such as medical drawings. Textures that have elongated linear elements appear to be superior to random textures in that they can, by the way they conform to the surface, reveal the surface shape. We observe that shape following hache marks commonly used in cartography and copper-plate illustration are locally similar to the effect of the lines that can be generated by the intersection of a set of parallel planes with a surface. We use this as a basis for investigating the relationships between view direction, texture orientation and surface orientation in affording surface shape perception. We report two experiments using parallel plane textures. The results show that textures constructed from planes more nearly orthogonal to the line of sight tend to be better at revealing surface shape. Also, viewing surfaces from an oblique view is much better for revealing surface shape than viewing them from directly above. <s> BIB012 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We describe the results of two comprehensive controlled observer experiments intended to yield insight into the following question: If we could design the ideal texture pattern to apply to an arbitrary smoothly curving surface in order to enable its 3D shape to be most accurately and effectively perceived, what would the characteristics of that texture pattern be? We begin by reviewing the results of our initial study in this series, which were presented at the 2003 IEEE Symposium on Information Visualization, and offer an expanded analysis of those findings. We continue by presenting the results of a follow-on study in which we sought to more specifically investigate the separate and combined influences on shape perception of particular texture components, with the goal of obtaining a clearer view of their potential information carrying capacities. In each study, we investigated the observers' ability to identify the intrinsic shape category of a surface patch (elliptical, hyperbolic, cylindrical, or flat) and its extrinsic surface orientation (convex, concave, both, or neither). In our first study, we compared performance under eight different texture type conditions, plus two projection conditions (perspective or orthographic) and two viewing conditions (head-on or oblique). 
We found that: 1) shape perception was better facilitated, in general, by the bidirectional "principal direction grid" pattern than by any of the seven other patterns tested; 2) shape type classification accuracy remained high under the orthographic projection condition for some texture types when the viewpoint was oblique; 3) perspective projection was required for accurate surface orientation classification; and 4) shape classification accuracy was higher when the surface patches were oriented at a (generic) oblique angle to the line of sight than when they were oriented (in a nongeneric pose) to face the viewpoint straight on. In our second study, we compared performance under eight new texture type conditions, redesigned to facilitate gathering insight into the cumulative effects of specific individual directional components in a wider variety of multidirectional texture patterns. We found that shape classification accuracy was equivalently good under a variety of test patterns that included components following either the first or first and second principal directions, in addition to other directions, suggesting that a principal direction grid texture is not the only possible "best option" for enhancing shape representation. <s> BIB013 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We introduce a flexible combination of volume, surface, and line rendering. We employ object-based edge detection because this allows a flexible parametrization of the generated lines. Our techniques were developed mainly for medical applications using segmented patient-individual volume datasets. In addition, we present an evaluation of the generated visualizations with 8 medical professionals and 25 laypersons. Integration of lines in conventional rendering turned out to be appropriate. <s> BIB014 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> Three-dimensional shape can be drawn using a variety of feature lines, but none of the current definitions alone seem to capture all visually-relevant lines. We introduce a new definition of feature lines based on two perceptual observations. First, human perception is sensitive to the variation of shading, and since shape perception is little affected by lighting and reflectance modification, we should focus on normal variation. Second, view-dependent lines better convey smooth surfaces. From this we define view-dependent curvature as the variation of the surface normal with respect to a viewing screen plane, and apparent ridges as the loci of points that maximize a view-dependent curvature. We present a formal definition of apparent ridges and an algorithm to render line drawings of 3D meshes. We show that our apparent ridges encompass or enhance aspects of several other feature lines. <s> BIB015 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> We present a psychophysical experiment to determine the effectiveness of perceptual shape cues for rigidly moving objects in an interactive, highly dynamic task. We use standard non-photorealistic (NPR) techniques to carefully separate and study shape cues common to many rendering systems. Our experiment is simple to implement, engaging and intuitive for participants, and sensitive enough to detect significant differences between individual shape cues. We demonstrate our experimental design with a user study. 
In that study, participants are shown 16 moving objects, 4 of which are designated targets, rendered in different shape-from-X styles. Participants select targets projected onto a touch-sensitive table. We find that simple Lambertian shading offers the best shape cue in our user study, followed by contours and, lastly, texturing. Further results indicate that multiple shape cues should be used with care, as these may not behave additively. <s> BIB016 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Techniques <s> This paper provides a tutorial and survey for a specific kind of illustrative visualization technique: feature lines. We examine different feature line methods. For this, we provide the differential geometry behind these concepts and adapt this mathematical field to the discrete differential geometry. All discrete differential geometry terms are explained for triangulated surface meshes. These utilities serve as basis for the feature line methods. We provide the reader with all knowledge to re-implement every feature line method. Furthermore, we summarize the methods and suggest a guideline for which kind of surface which feature line algorithm is best suited. Our work is motivated by, but not restricted to, medical and biological surface models. <s> BIB017
|
Illustrative techniques do not aim to mimic the real world, but instead borrow from art and illustration BIB003 . This class of techniques often helps to guide the viewer's attention in a goal-directed manner, emphasizing important aspects and suppressing or omitting others. Selected examples, such as boundary emphasis, toon shading, feature lines, and texturing, will be discussed in detail below.

Boundary Emphasis. Boundary emphasis, usually in the form of a contour, has shown much promise in enhancing volume rendering, presumably because silhouettes play a central role in object recognition. Early methods evaluated (only) the angle between the surface normal n and the view vector v, emphasizing regions where the dot product of these vectors is close to zero. Unfortunately, the width of the contour cannot be controlled in this technique. Kindlmann and colleagues [KWTM03] solved this by analyzing the normal curvature in the viewing direction and then using this value to regulate contour thickness. While this method produces perceptually meaningful renditions, it requires curvature values (second-order derivatives). A more computationally effective solution was introduced by Bruckner and Gröller. Although this latter method is not accurate (curvature is only approximated by the change of normal directions), it is sufficient for creating expressive visualizations from volume data.

Toon Shading. Many forms of medical image data, such as CT, MRI, and PET, have no inherent color. Thus, color may be used to enhance shape perception. A widespread strategy is to map the surface direction (approximated as normalized gradients in direct volume rendering) to a cool-to-warm color scale. This illustrative-rendering technique was introduced by Gooch and colleagues and is also used in medical visualization [JQD*08]. In RGB space, the cool color uses a non-zero blue component, while the warm color is based on yellow and uses the red and green components:

c = ((1 + L · N)/2) K_warm + (1 − (1 + L · N)/2) K_cool,

with L being the light vector, N being the surface normal or normalized gradient, K_cool = (0, 0, T_cool), and K_warm = (T_warm, T_warm, 0).
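To make this mapping concrete, the following minimal Python sketch evaluates the cool-to-warm assignment for a single surface point. It is not taken from the cited implementations; the function name and the default temperature values are illustrative assumptions.

```python
import numpy as np

def gooch_shade(normal, light_dir, t_cool=0.6, t_warm=0.6):
    """Cool-to-warm (Gooch-style) shading for one surface point."""
    n = np.asarray(normal, float)
    l = np.asarray(light_dir, float)
    n = n / np.linalg.norm(n)
    l = l / np.linalg.norm(l)
    k_cool = np.array([0.0, 0.0, t_cool])      # cool: non-zero blue component
    k_warm = np.array([t_warm, t_warm, 0.0])   # warm: yellow (red + green)
    w = (1.0 + np.dot(l, n)) / 2.0             # map L.N from [-1, 1] to [0, 1]
    return w * k_warm + (1.0 - w) * k_cool

print(gooch_shade([0, 0, 1], [0, 0, 1]))   # lit head-on -> warm (yellowish)
print(gooch_shade([0, 0, -1], [0, 0, 1]))  # facing away -> cool (bluish)
```

Surfaces turned toward the light thus receive warm tones and averted surfaces cool tones, which conveys orientation even for data without inherent color.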
Chromadepth. The selection of blue as the background color in distance color blending [ER00] is consistent with perceptual considerations, since the light-sensitive cells that respond to blue colors primarily have a slow response time. Furthermore, the lens of the eye refracts light of different wavelengths at different angles. The refraction of blue light at the eye's lens can thus result in an offset of the retinal image, which makes blue objects seem to be further away than, for instance, red objects. A blue background therefore naturally supports the focus on the foreground, which is typically rendered in red. This effect, called chromadepth, is employed for stereo perception (with diffraction grating glasses). It can also be used, however, for depth perception without glasses BIB001 , if the depth value is mapped to the rainbow color scale (red is proximal, blue is distal). Due to these benefits, chromadepth-based techniques have also been applied in medical visualization [RSH06, BGP*11, SPV11]. One central application of chromadepth is to improve the depiction of shadows. The realistic simulation of shadows darkens the affected regions so strongly that there is often next to no contrast, effectively hiding any information present there. Šoltészová and colleagues noticed that illustrators often do not mix the object color with black, as shadowing algorithms do. Instead, they prefer to mix the original color with blue, such that shadowed regions exhibit both a luminance and a color contrast. Šoltészová and colleagues suggested that shadowiness is mapped, using an appropriate transfer function, to a blueish color and to opacity. The specific color scale is derived from the perceptually motivated CIELAB color space, where Euclidean distances roughly correspond to our perception of color differences. With this shadow transfer function, they effectively compensate for the lower luminance range in the shadow region and thus reveal more details by avoiding black concavities. This is an inspiring idea, as it mixes a depth cue from real-world perception (shadow) with an artificial depth cue (since the color assignment clearly deviates from physical illumination). The method was applied to a variety of medical datasets, including CT and ultrasound data. Fig. 6 illustrates the difference between chromadepth and conventional shadows. This kind of shadow generation is similar to illustrative cool-warm color shading.
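A minimal sketch of the depth-to-color mapping underlying chromadepth follows, assuming a simple HSV rainbow ramp from red (near) to blue (far); the exact color scale used by the cited techniques may differ.

```python
import colorsys

def chromadepth_color(depth, near, far):
    """Map depth to a rainbow hue: red = proximal, blue = distal.

    Hue runs from 0.0 (red) to ~0.67 (blue) in HSV; a simple stand-in
    for the rainbow scale described in the text.
    """
    t = min(max((depth - near) / (far - near), 0.0), 1.0)
    return colorsys.hsv_to_rgb(2.0 / 3.0 * t, 1.0, 1.0)

print(chromadepth_color(0.1, 0.0, 1.0))  # near -> reddish
print(chromadepth_color(0.9, 0.0, 1.0))  # far  -> bluish
```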
Halo Effects. Halos can be thought of as the opposite of shadows: shadows arise when occluding structures decrease the amount of illumination received by adjacent objects, while halos are rim-like structures that brighten the area around an object. Since halo effects are designed to support depth perception, the foreground features are usually emphasized with a bright surrounding halo BIB009 . The background object is made less prominent by making its surroundings more opaque or darker. When the halo color is dark, halos closely resemble shadowing effects. This well-known artistic technique was first applied in visualization in the context of flow visualization BIB006 BIB008 . There, the halo effect was computed per voxel by adding halo influences in the neighborhood. Fig. 7 shows an example where halos are applied to medical volume rendering.

Feature Lines. As mentioned above, object outlines and boundary emphasis techniques can improve spatial perception. In addition to the outer object boundaries, a variety of lines exist to represent discontinuities in visibility, surface normal, curvature, and illumination. Generally, two classes of feature lines exist:
• view-independent lines and
• view-dependent lines.
View-independent features are solely influenced by the shape of an object, and as such they are the same for different vantage points. These include crease lines based on "large" dihedral angles between adjacent faces and the previously mentioned ridge and valley lines (e.g., BIB004 BIB011 ) that are derived from second-order features (curvature) and are very sensitive to noise. In contrast, view-dependent feature lines take the view direction (and sometimes the illumination) into account. Among the view-dependent feature lines, suggestive contours [DFRS03] and apparent ridges BIB015 have been frequently used in medical visualization. Suggestive contours [DFRS03] characterize regions of a surface that would be silhouette regions if the viewpoint of the camera were to change slightly. Thus, they provide continuity during interactive exploration. Apparent ridges BIB015 are view-dependent versions of the static ridge-and-valley line concept: they extend the definition of ridges with a view-dependent curvature term. In interactive exploration, apparent ridges thus adapt to the viewing direction and slide over a surface instead of being constant. In contrast to suggestive contours, apparent ridges also include lines in convex regions. Both suggestive contours and apparent ridges have a relevance threshold that can be adjusted for drawing or suppressing lines. Ridge-and-valley lines, in contrast, are subject to noise BIB017 , "seem to exaggerate curvature" [CGL*08], make features "look overly sharp" BIB015 or "like surface markings" [DFRS03], and, being locked to the surface, they are easily occluded by the very features they represent. Only for mathematically ideal shapes with unrealistically sharp features (3D models of implants are a typical medical example) can static ridge-and-valley lines be equivalent to view-dependent concepts, such as apparent ridges. For organic shapes, in particular models obtained from medical scans, a large amount of smoothing is needed to avoid problems with view-independent lines. For specific recommendations on which view-dependent or view-independent line concept should be used, we refer the reader to the survey of Lawonn and Preim BIB017 . Silhouettes [IFH*03], which are view-dependent lines on the surface of an object, were employed along with surface and volume rendering BIB014 to display context objects in a sparse manner and to direct attention to the focus objects (see Fig. 9). Corcoran and colleagues adjusted two-level volume rendering to incorporate object-space silhouettes and suggestive contours. Overall, shape perception was improved with both feature line techniques.

Figure 9: The focus objects, a liver tumor and the vascular trees of the liver, are displayed as colored, opaque objects. The liver surface is a near-focus structure rendered transparently but also colored. Other organs and skeletal structures are rendered with silhouettes. In the right image, skeletal structures are additionally rendered as strongly transparent shaded surfaces (from BIB014 ).

By far the most comprehensive evaluation of the perceptual effectiveness of feature lines was performed by Cole and colleagues [CSD*09], who conducted an experiment with 275,000 gauge figure measurements using Amazon's Mechanical Turk. They investigated all major feature lines (including apparent ridges and suggestive contours) and compared them with shaded images and with illustrations created by an artist. Among the twelve models used in the study were four (partially) complex anatomical structures (including the cervical bone, a vertebra) and two less complex models (a tooth and a femur bone). The major results of that study are:
• There are statistically significant differences between almost all pairs of feature line techniques.
• All feature line techniques were less effective than shading (for all 12 models).
• Shape perception was poor for the anatomical models with any type of feature line (even with ridge and valley lines, where the mean deviation was 35°, compared to 24° with shading).
As a consequence, the sole use of feature lines for displaying single anatomical structures is perceptually not recommended.
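As a concrete example of the view-independent case, the following sketch flags crease edges of a triangle mesh by thresholding the dihedral angle between adjacent faces; the 40° threshold and all names are illustrative choices, not values from the cited methods.

```python
import numpy as np

def crease_edges(vertices, faces, angle_deg=40.0):
    """Flag view-independent crease edges of a triangle mesh.

    An edge is a crease if the angle between the normals of its two
    adjacent faces exceeds the threshold.
    """
    v = np.asarray(vertices, float)
    normals = {}
    edge_faces = {}
    for fi, (a, b, c) in enumerate(faces):
        n = np.cross(v[b] - v[a], v[c] - v[a])
        normals[fi] = n / np.linalg.norm(n)
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    cos_t = np.cos(np.radians(angle_deg))
    creases = []
    for edge, fs in edge_faces.items():
        if len(fs) == 2:
            cos_a = np.clip(np.dot(normals[fs[0]], normals[fs[1]]), -1, 1)
            if cos_a < cos_t:  # normal deviation exceeds the threshold
                creases.append(edge)
    return creases

# Two triangles sharing edge (0, 1), folded by 90 degrees.
verts = [(0, 0, 0), (1, 0, 0), (0.5, 1, 0), (0.5, 0, 1)]
faces = [(0, 1, 2), (1, 0, 3)]
print(crease_edges(verts, faces))  # -> [(0, 1)]
```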
Hatching. Shape representation using feature lines can yield images that are too sparse when the shapes have only a few landmarks, as is the case for the liver and the kidney. When an appropriate surface parameterization exists, hatching textures may improve shape perception. The strokes of such a hatching texture are more regularly distributed over a surface. The strokes are fully opaque, whereas the remaining elements of the texture are fully transparent. Obtaining an adequate surface representation is challenging, especially if surface models are derived from (noisy) medical image data. Usually, mesh smoothing must be performed. The perceptual benefit of hatching strokes is influenced by the extent to which they "follow the shape," especially for organic (curved) shapes BIB010 . One of the earliest applications of this principle comes from Saito and Takahashi BIB002 , who applied regular hatching lines (latitude and longitude lines) to curved surfaces. Hatching has been shown to improve shape perception when it is used in combination with conventional shading with a local illumination model. It was also successfully used (based on experiments) for multi-layer medical visualizations BIB004 . It is unclear how well hatching works in isolation, as this has rarely been investigated. Hatching, like feature lines, may be stylized (i.e., parameters may be mapped to line style, width, brightness, or even color hue). This can be used to discriminate objects (e.g., by different hues) or to encode depth explicitly. So far, there has been no experimental comparison of feature lines, hatching, and shaded surfaces for anatomical surface models with respect to shape perception. Currently, one may suppose that the joint use of shading and appropriate hatching yields better performance than shading alone or feature lines alone. How a joint use of feature lines and shading would perform is also not known. The only comparison of feature lines, hatching, and shading that we are aware of was performed for moving objects with very simple shapes that do not resemble anatomy BIB016 . Interrante and colleagues BIB005 discussed another strategy that is more concretely rooted in perceptual research: they created strokes that indicate the local curvature of the surface. For this purpose, they computed the two principal curvature directions (PCDs) and their respective scalar values. This computation results in two vector fields: a vector field representing vectors with maximum curvature and a second field with orthogonal vectors representing minimum curvature. The actual placement of the strokes is essential to the successful use of curvature-based hatching. The strokes provide essential shape cues in regions where there is considerable curvature. In flat regions, maximum curvature directions are unreliable, and therefore no hatching strokes should be generated there. Thresholding is thus necessary to avoid perceptual problems. Sweet and Ware BIB012 examined the perceptual effectiveness of parallel lines on surfaces in all three directions separately and compared it with a regular grid composed of parallel lines in two directions. In their large study, the average angular deviation was 20 degrees for surfaces that had only shading information. All types of line-based enhancements improved accuracy scores. The best results were achieved with a regular grid texture (the angular deviation was reduced to 12 degrees). The regular grid texture even produced significantly better performance than overlays with horizontal or vertical lines alone. Fig. 10 depicts three of the six viewing conditions.
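As a sketch of the curvature-directed hatching just described, the following code derives the direction of maximum curvature for a height field z = f(x, y) from its first and second derivatives via the shape operator, and applies the flat-region thresholding mentioned above. The threshold value is an illustrative assumption; mesh-based PCD estimation in the cited work is more involved.

```python
import numpy as np

def hatching_direction(fx, fy, fxx, fxy, fyy, kappa_min=0.05):
    """Principal curvature direction for a height field z = f(x, y).

    Returns the (u, v) direction of maximum curvature, or None in
    near-flat regions where the direction is unreliable.
    """
    w = np.sqrt(1.0 + fx * fx + fy * fy)
    I = np.array([[1 + fx * fx, fx * fy], [fx * fy, 1 + fy * fy]])  # first fundamental form
    II = np.array([[fxx, fxy], [fxy, fyy]]) / w                     # second fundamental form
    S = np.linalg.solve(I, II)                                      # shape operator
    kappas, dirs = np.linalg.eig(S)
    kappas, dirs = kappas.real, dirs.real   # S is similar to a symmetric matrix
    i = np.argmax(np.abs(kappas))
    if abs(kappas[i]) < kappa_min:          # flat region: skip hatching here
        return None
    return dirs[:, i] / np.linalg.norm(dirs[:, i])

# Cylinder-like surface z = x**2: maximum curvature runs along u (x).
print(hatching_direction(fx=0.0, fy=0.0, fxx=2.0, fxy=0.0, fyy=0.0))
```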
Hatching Textures for Nested Anatomical Surfaces. Hatching textures are particularly useful for multilayered visualizations, especially when they are used to depict the outer shape in a manner such that the display of the inner shapes is only minimally occluded. Thus, instead of a semi-transparent outer surface, a small set of opaque strokes, indicating the surface location and its curvature, represents the outer surface. Interrante and colleagues applied this strategy to medical surface models (e.g., to indicate the dose distribution of simulated radiation treatment planning in anatomical models). In their first system, they used a hatching texture created from ridge and valley lines BIB004 . Unfortunately, not all dose distributions could be conveyed with these sparse feature lines. More evenly spaced curvature-directed hatching lines better revealed the outer surface BIB007 . In a series of experiments, they showed that hatching textures with lines that follow the PCDs conveyed the local orientation of smooth curved surfaces with convex and concave regions better than Phong shading BIB005 BIB007 BIB013 . Fig. 11 shows an example of the stimuli from these experiments.

Hatching Medical 3D Visualizations. Hatching techniques in medical visualization may be adapted to the specific anatomical objects. The display of muscles, for example, benefits from hatching textures representing their fiber structures [DCLK03, TPB*08]. Elongated structures, such as vasculature and long bones, are hatched orthogonally to their local centerline (following the tradition of medical illustrations). These papers discuss generating high-quality surface and volume textures, but do not perform any perceptual experiments or evaluations.
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Vascular Visualization <s> Transparency can be a useful device for depicting multiple overlapping surfaces in a single image. The challenge is to render the transparent surfaces in such a way that their 3D shape can be readily understood and their depth distance from underlying structures clearly perceived. This paper describes our investigations into the use of sparsely-distributed discrete, opaque texture as an artistic device for more explicitly indicating the relative depth of a transparent surface and for communicating the essential features of its 3D shape in an intuitively meaningful and minimally occluding way. The driving application for this work is the visualization of layered surfaces in radiation therapy treatment planning data, and the technique is illustrated on transparent isointensity surfaces of radiation dose. We describe the perceptual motivation and artistic inspiration for defining a stroke texture that is locally oriented in the direction of greatest normal curvature (and in which individual strokes are of a length proportional to the magnitude of the curvature in the direction they indicate), and we discuss two alternative methods for applying this texture to isointensity surfaces defined in a volume. We propose an experimental paradigm for objectively measuring observers' ability to judge the shape and depth of a layered transparent surface, in the course of a task which is relevant to the needs of radiotherapy treatment planning, and use this paradigm to evaluate the practical effectiveness of our approach through a controlled observer experiment based on images generated from actual clinical data. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Vascular Visualization <s> A large variety of techniques has been developed to visualize vascular structures. These techniques differ in the necessary preprocessing effort, in the computational effort to create the visualizations, in the accuracy with respect to the underlying image data and in the visual quality of the result. In this overview, we compare 3D visualization methods and discuss their applicability for diagnosis, therapy planning and educational purposes. We consider direct volume rendering as well as surface rendering. <s> BIB002
|
Many different 3D vessel visualization techniques have been developed to support treatment planning. One family of vessel visualization techniques employs direct volume rendering and uses a transfer function to emphasize vascular structures [JQD*08, KGNP12]. While most of these techniques serve to enhance preoperatively acquired images, the technique by [WSzBD*14] is aimed at incorporating depth cues for improving interventional images of vascular structures. A second family of techniques reconstructs a surface mesh of vascular structures with explicit, implicit, or parametric visualization techniques (see, e.g., the survey of Preim and Oeltze BIB002 ). In the present survey, we do not consider the different geometric approaches, but assume that a smooth and accurate surface mesh is available. We do, however, discuss different ways of displaying this surface mesh (e.g., with illustrative methods).

Figure 11: The inner surface represents a tumor and the outer surface an isosurface resulting from the dose simulation in radiation treatment planning. Both are shown together in order to assess whether the tumor is likely to be completely destroyed. The outer surface is rendered as a strongly transparent isosurface enhanced with curvature-directed strokes (from BIB001 , © IEEE, reprinted with permission).

Vascular visualization has the same requirements as other 3D visualizations, as well as a few additional ones (this is particularly true when the visualizations will be used for treatment planning) [RHP*06]:
• the spatial distance between vessel segments is essential (e.g., indications of when one segment occludes another);
• the discrimination of vascular systems is needed, since vessel segments can belong to the arterial or the venous system;
• the spatial distance between lesions (e.g., tumors) and vessel segments is essential, especially if the vessel segments exhibit a larger diameter; and
• during treatment planning, the exploration of vascular trees should be possible. During surgery, on the other hand, static images are desired in order to better reveal the important information at a glance.
The visualization techniques described in the following are driven by these requirements. All of them are illustrative. Since vascular structures are particularly complex shapes, it comes as no surprise that the basic, perceptually motivated techniques (recall Sect. 2), such as chromadepth shading, distance color shading, toon shading, and halos, are used [RSH06, JQD*08]. The effect of distance color blending (with blue as the distant color) is shown in Fig. 12.
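The following minimal sketch illustrates distance color blending as just described, linearly blending a fragment color toward blue with increasing depth; the blend curve and the maximum blend factor are illustrative assumptions, not parameters from the cited techniques.

```python
import numpy as np

def distance_color_blend(color, depth, near, far,
                         blue=(0.0, 0.0, 1.0), max_blend=0.6):
    """Blend a fragment color toward blue with increasing depth."""
    t = np.clip((depth - near) / (far - near), 0.0, 1.0) * max_blend
    return (1.0 - t) * np.asarray(color, float) + t * np.asarray(blue, float)

# A red vessel fragment far from the viewer shifts toward blue.
print(distance_color_blend((1.0, 0.2, 0.2), depth=0.9, near=0.0, far=1.0))
```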
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Blood Flow Visualization <s> Currently, most researchers in visualization pay very little attention to vision science. The exception is when the effective use of color is the subject. Little research in flow visualization includes a discussion of the related perceptual theory. Nor does it include an evaluation of effectiveness of the display techniques that are generated. This is so, despite Laidlaw's paper showing that such an evaluation is relatively straightforward. Of course, it's not always necessary to relate visualization research to perceptual theory. If the purpose of the research is to increase the efficiency of an algorithm, then the proper test is one of efficiency, not of perceptual validity. But when a new representation of data is the subject of research, addressing how perceptually effective it is - either by means of a straightforward empirical comparison with existing methods or analytically, relating the new mapping to perceptual theory - should be a matter of course. A strong interdisciplinary approach, including the disciplines of perception, design, and computer science will produce better science and better design in that empirically and theoretically validated visual display techniques will result. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Blood Flow Visualization <s> Cerebral aneurysms are a vascular dilatation induced by a pathological change of the vessel wall and often require treatment to avoid rupture. Therefore, it is of main interest, to estimate the risk of rupture, to gain a deeper understanding of aneurysm genesis, and to plan an actual intervention, the surface morphology and the internal blood flow characteristics. Visual exploration is primarily used to understand such complex and variable type of data. Since the blood flow data is strongly influenced by the surrounding vessel morphology both have to be visually combined to efficiently support visual exploration. Since the flow is spatially embedded in the surrounding aneurysm surface, occlusion problems have to be tackled. Thereby, a meaningful visual reduction of the aneurysm surface that still provides morphological hints is necessary. We accomplish this by applying an adapted illustrative rendering style to the aneurysm surface. Our contribution lies in the combination and adaption of several rendering styles, which allow us to reduce the problem of occlusion and avoid most of the disadvantages of the traditional semi-transparent surface rendering, like ambiguities in perception of spatial relationships. In interviews with domain experts, we derived visual requirements. Later, we conducted an initial survey with 40 participants (13 medical experts of them), which leads to further improvements of our approach. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Blood Flow Visualization <s> The investigation of hemodynamic information for the assessment of cardiovascular diseases CVDs gained importance in recent years. Improved flow measuring modalities and computational fluid dynamics CFD simulations yield in reliable blood flow information. For a visual exploration of the flow information, domain experts are used to investigate the flow information combined with its enclosed vessel anatomy. Since the flow is spatially embedded in the surrounding vessel surface, occlusion problems have to be resolved. 
A visual reduction of the vessel surface that still provides important anatomical features is required. We accomplish this by applying an adaptive surface visualization inspired by the suggestive contour measure. Furthermore, an illustration is employed to highlight the animated pathlines and to emphasize nearby surface regions. Our approach combines several visualization techniques to improve the perception of surface shape and depth. Thereby, we ensure appropriate visibility of the embedded flow information, which can be depicted with established or advanced flow visualization techniques. We apply our approach to cerebral aneurysms and aortas with simulated and measured blood flow. In informal user feedback with nine domain experts, we confirm the advantages of our approach compared with existing methods, e.g. semi-transparent surface rendering. Additionally, we assessed the applicability and usefulness of the pathline animation with highlighting nearby surface regions. <s> BIB003
|
For any kind of blood flow visualization, measured or simulated, it is essential that the patterns of the flow can be studied along with the morphology of the surrounding vessels. Changes, such as narrowings or dilatations of vascular structures, cause vortices or helical flow patterns. These subtle patterns may be true representations of the patient's state, but they might also be due to artifacts. Three-dimensional flow is often represented with streamlines (which may be illuminated) or, in the case of unsteady blood flow, with pathlines (see [VPvP*14] for a survey). Color is used to convey the velocity magnitude and thus cannot be used to enhance shape and depth perception (e.g., with toon shading or distance color blending). Due to the complexity of the underlying information, perceptually motivated blood flow visualization techniques primarily employ illustrative concepts. The simultaneous visualization of vascular structures and embedded flow is an instance of a multi-layered visualization problem.

Ghosted Views. Ghosted views are a type of smart visibility technique. Often, the region where real flow is represented defines a 3D mesh (e.g., a hull), and the transparency of the vessel is adjusted such that the flow becomes visible. Regions of the vessel surface that do not occlude flow are rendered opaque. Gasteiger and colleagues BIB002 developed such a ghosted view technique, where the transparency is adjusted in a view-dependent manner so that vessel contours remain clearly visible. Ghosted views can also be combined with feature lines that indicate where a pathology starts and which vessels drain and feed the pathologic dilatation. Moreover, an optional depth enhancement has been introduced with a fog simulation and a simple approximate shadow representation. This gives rise to three possible visualizations: simple global transparency adjustment, ghosting, and ghosting with additional depth enhancements (see Fig. 17). While Gasteiger and colleagues BIB002 only assessed the subjective preference for the techniques, a full perception-based study of this technique has been performed [BGCP11] and will be described in Sect. 5.3. The combination of blood flow and vascular structures was later refined and adapted to animated, time-dependent flow BIB003 . Illustrative techniques were developed to provide simplified abstract flow representations [BMGS13, vPBB*10], motivated by artist-created flow illustrations. Occluding contours emphasize major arteries and their branchings if drawn over a strongly transparent surface. Illustrative arrow glyphs were employed to display aggregated flow (using clustering). Long arrow glyphs are beneficial for the perception of the flow direction BIB001 .
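A minimal sketch of the view-dependent transparency idea behind such ghosted views is given below, assuming a simple Fresnel-like falloff; the exponent and minimum opacity are illustrative parameters, not those of the cited technique.

```python
import numpy as np

def ghosted_opacity(normal, view_dir, alpha_min=0.05, power=3.0):
    """View-dependent opacity for a ghosted vessel surface.

    Front-facing regions become nearly transparent so that embedded
    flow stays visible, while grazing regions (contours) stay opaque.
    """
    n = np.asarray(normal, float)
    v = np.asarray(view_dir, float)
    n = n / np.linalg.norm(n)
    v = v / np.linalg.norm(v)
    facing = abs(np.dot(n, v))            # 1 = head-on, 0 = silhouette
    return alpha_min + (1.0 - alpha_min) * (1.0 - facing) ** power

print(ghosted_opacity([0, 0, 1], [0, 0, 1]))  # head-on -> almost transparent
print(ghosted_opacity([1, 0, 0], [0, 0, 1]))  # contour -> opaque
```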
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Projection and Illumination of Stream Tubes <s> This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding "halos" that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Projection and Illumination of Stream Tubes <s> We present a threads and halos representation for interactive volume rendering of vector-field structure and describe a number of additional components that combine to create effective visualizations of multivalued 3D scientific data. After filtering linear structures, such as flow lines, into a volume representation, we use a multilayer volume rendering approach to simultaneously display this derived volume along with other data values. We demonstrate the utility of threads and halos in clarifying depth relationships within dense renderings and we present results from two scientific applications: visualization of second-order tensor valued magnetic resonance imaging (MRI) data and simulated 3D fluid flow data. In both application areas, the interactivity of the visualizations proved to be important to the domain scientists. Finally, we describe a PC-based implementation of our framework along with domain specific transfer functions, including an exploratory data culling tool, that enable fast data exploration. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Projection and Illumination of Stream Tubes <s> Many rendering algorithms can be understood as numerical solvers for the light-transport equation. Local illumination is probably the most widely implemented rendering algorithm: it is simple, fast, and encoded in 3D graphics hardware. It is not, however, derived as a solution to the light-transport equation. We show that the light-transport equation can be re-interpreted to produce local illumination by using vector-valued light and matrix-valued reflectance. This result fills an important gap in the theory of rendering. Using this framework, local and global illumination result from merely changing the values of parameters in the governing equation, permitting the equation and its algorithmic implementation to remain fixed. <s> BIB003
|
Weigle and Banks created artificial datasets resembling fiber tracts visualized with stream tubes. To investigate shape perception (at a local scale), they varied the projection (orthographic vs. perspective) and the illumination model (local vs. global), the latter having been introduced by Beason and Banks BIB003 . The illumination model includes shadow generation and multiple reflections that can be precomputed and thus be used in interactive settings. To fully exploit the perceptual potential of global illumination, several light sources need to be placed carefully (serving as key lights and fill lights). Overall, they found that global illumination and perspective projection improved the assessment of depth, with highly significant results and a moderate effect size. Global illumination improved depth perception under both orthographic and perspective projection. Thus, the effects of realistic perspective and illumination are cumulative. In addition to, or instead of, using (local) illumination of the tubular fiber tract structures, researchers have also investigated the use of graphical techniques (i.e., illustrative visualization techniques) that have an effect similar to global illumination (recall Sect. 3.1.1) but can be computed more rapidly. In particular, Wenger and colleagues BIB002 employed tube halos, motivating their use with the perceptual improvements shown in the previously mentioned work on flow visualization BIB001 . Generally, this use of similar visualization approaches illustrates the close connection of fiber tract visualization to that of other types of dense line data (e.g., streamlines) extracted from flow simulations. As an alternative, Klein and colleagues [KRH*06] removed tube shading entirely and instead applied distance-encoded contours and tube shadows to improve spatial perception, freeing up the tube surface for the visualization of additional data properties. No studies have been conducted to evaluate the perceptual benefits of either of these visualization techniques.
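The halo idea can be sketched in screen space: a pixel receives a visibility-impeding halo when a nearby pixel is significantly closer to the viewer, i.e., at a depth discontinuity. The following code is a simplified stand-in for the cited per-voxel halo computations; the radius and depth-gap threshold are illustrative.

```python
import numpy as np

def halo_mask(depth, radius=2, min_gap=0.05):
    """Screen-space halo mask from a depth buffer.

    A pixel is haloed if some pixel within `radius` is significantly
    nearer. np.roll wraps at borders, which is acceptable for a sketch.
    """
    mask = np.zeros_like(depth, dtype=bool)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
            mask |= (depth - shifted) > min_gap   # neighbor is nearer
    return mask

# A near tube (depth 0.3) crossing a far plane (depth 0.8): the plane
# pixels next to the tube get haloed (e.g., darkened on compositing).
d = np.full((7, 7), 0.8)
d[3, :] = 0.3
print(halo_mask(d).astype(int))
```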
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> An alternative to computing line shadows is the use of illustrative <s> Rendering large numbers of dense line bundles in three dimensions is a common need for many visualization techniques, including streamlines and fiber tractography. Unfortunately, depiction of spatial relations inside these line bundles is often difficult but critical for understanding the represented structures. Many approaches evolved for solving this problem by providing special illumination models or tube-like renderings. Although these methods improve spatial perception of individual lines or related sets of lines, they do not solve the problem for complex spatial relations between dense bundles of lines. In this paper, we present a novel approach that improves spatial and structural perception of line renderings by providing a novel ambient occlusion approach suited for line rendering in real time. <s> BIB001
|
All line-based approaches discussed so far, however, only work well for sparse datasets or for selections of fiber tracts in which tract bundles clearly emerge. Otherwise, the dense nature of fiber tract data, coupled with unstructured fiber tract orientations toward the outside of a dataset (at least for brain data), makes it difficult to perceive the overall 3D structure. Eichelbaum and colleagues BIB001 thus developed LineAO (Fig. 19), a physics-based technique that specifically supports the perception of complex 3D structures in dense fiber tract datasets. Eichelbaum and colleagues claim that their sampling of dense linear structures can capture both the global structure and the local detail. Moreover, the technique can also be combined with line illumination and the visualization of additional data properties along the tracts. LineAO's primary limitation is that it is not well suited for use with sparse data, due to the lack of occlusion in such data BIB001 .
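The following crude sketch conveys the intuition of occlusion-based darkening for dense line geometry by counting nearby line vertices; it is explicitly not the LineAO algorithm, merely a density-based approximation for illustration, with made-up parameters.

```python
import numpy as np

def density_ao(points, radius=0.1, k=0.02):
    """Ambient-occlusion-like darkening for dense line geometry.

    Darkens each line vertex by the number of other vertices within
    `radius`, a rough stand-in for true occlusion integration.
    Returns per-vertex brightness factors in (0, 1].
    """
    p = np.asarray(points, float)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    neighbors = (d < radius).sum(axis=1) - 1   # exclude the point itself
    return 1.0 / (1.0 + k * neighbors)

pts = np.random.rand(200, 3)                   # vertices of dense lines
shade = density_ao(pts)
print(shade.min(), shade.max())                # crowded regions are darker
```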
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Common Direct Tasks <s> A specific form for the internal representation of local surface orientation is proposed, which is similar to Gibson's (1950) “amount and direction of slant”. Slant amount is usually quantifed by the angle σ between the surface normal and the line of sight (0°≦σ≦90°). Slant direction corresponds to the direction of the gradient of distance from the viewer to the surface, and may be defined by the image direction τ to which the surface normal would project (0°≦τ≦360°). Since the direction of slant is specified by the tilt of the projected surface normal, it is referred to as surface tilt (Stevens, 1979; Marr, 1982). The two degrees of freedom of orientation are therefore quantified by slant, an angle measured perpendicular to the image plane, and tilt, an angle measured in the image plane. The slanttilt form provides several computational advantages relative to some other proposals and is consistent with various psychological phenomena. Slant might be encoded by various means, e.g. by the cosine of the angle, by the tangent, or linearly by the angle itself. Experimental results are reported that suggest that slant is encoded by an internal parameter that varies linearly with slant angle, with resolution of roughly one part in 100. Thus we propose that surface orientation is encoded in human vision by two quantities, one varying linearly with slant angle, the other varying linearly with tilt angle. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Common Direct Tasks <s> Textures are commonly used to enhance the representation of shape in non-photorealistic rendering applications such as medical drawings. Textures that have elongated linear elements appear to be superior to random textures in that they can, by the way they conform to the surface, reveal the surface shape. We observe that shape following hache marks commonly used in cartography and copper-plate illustration are locally similar to the effect of the lines that can be generated by the intersection of a set of parallel planes with a surface. We use this as a basis for investigating the relationships between view direction, texture orientation and surface orientation in affording surface shape perception. We report two experiments using parallel plane textures. The results show that textures constructed from planes more nearly orthogonal to the line of sight tend to be better at revealing surface shape. Also, viewing surfaces from an oblique view is much better for revealing surface shape than viewing them from directly above. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Common Direct Tasks <s> Recovering 3D shape from shading is an ill-posed problem that the visual system can solve only by making use of additional information such as the position of the light source. Previous research has shown that people tend to assume light is above and slightly to the left of the object [Sun and Perona 1998]. We present a study to investigate whether the visual system also assumes the angle between the light direction and the viewing direction. We conducted a shape perception experiment in which subjects estimated surface orientation on smooth, virtual 3D shapes displayed monocularly using local Lambertian shading without cast shadows. 
We varied the angle between the viewing direction and the light direction within a range +/- 66 deg (above/below), and subjects indicated local surface orientation by rotating a gauge figure to appear normal to the surface [Koenderink et al. 1992]. Observer settings were more accurate and precise when the light was positioned above rather than below the viewpoint. Additionally, errors were minimized when the angle between the light direction and the viewing direction was 20--30 deg. Measurements of surface slant and tilt error support this result. These findings confirm the light-from-above prior and provide evidence that the angle between the viewing direction and the light direction is assumed to be 20--30 deg above the viewpoint. <s> BIB003
|
In addition to actual manipulation tasks, common tasks include some form of verbal report, a forced choice among a short list of items, or a rating along a fixed scale (such as a Likert scale). Rating is usually done on a Likert scale, which uses (mostly) an odd number of response options (5, 7, or 9) and anchors the two ends of the scale, usually with opposing terms. The most common Likert scale is a 7-point scale with 1 meaning "strongly agree" and 7 meaning "strongly disagree". Typical quantitative tasks from perception research may be adapted to medical applications. Table 1 summarizes important tasks and specific measures with a focus on shape and depth perception. Orientation matching tasks are rather complex and require a more detailed discussion. The most common orientation matching task asks participants to place gauge figures (disks centered around an orthogonal line) at selected positions on a surface. Participants are asked to manipulate the orientation of each gauge figure so that its base plane is tangent to the surface and its orthogonal line thus matches the normal vector at that point of the surface. The curved surface is thus probed at different positions. Gauge figure tasks were pioneered by Stevens BIB001 and are widely used to assess the influence of visualization techniques on shape perception (e.g., [BGCP11, CSD*09, KVDK92, SPV11]). Cole and colleagues [CSD*09], for example, used a repeated-measures shape task to determine which technique provided better shape perception as well as to measure how certain the participants were. Cole and colleagues also pioneered gauge string tasks, where a number of gauges (15 in their case) were placed on a horizontal line to analyze shape perception in a local region in depth and to correlate the results with differential geometric properties, such as the occurrence of inflection points. Placing gauge figures is not easy, in particular because the gauge occludes parts of the surface. O'Shea and colleagues BIB003 have discussed guidelines for gauge figure tasks, suggesting that gauge figures should be
• drawn in red,
• drawn with a small line width to reduce occlusions,
• initially oriented randomly, and
• shown in perspective projection.
Moreover, it is useful to present the gauge figure in its current orientation enlarged at the boundary of the image (where the currently interesting part of the surface is not occluded) BIB002 . It is also important that the gauge figure does not interact with the surface (including occlusion effects!), since this would give direct feedback as to the correct location of the surface. Participants need to practice placing gauge figures and should be shown examples of correct and incorrect placements [CSD*09].
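The accuracy measure typically reported for such gauge figure tasks is the angular deviation between the participant's gauge orientation and the true surface normal, as in the sketch below; the example settings are made up.

```python
import numpy as np

def angular_deviation_deg(gauge_normal, surface_normal):
    """Angular error (degrees) between a gauge setting and ground truth.

    This is the kind of measure behind the mean deviations of
    20-35 degrees cited in the text.
    """
    g = np.asarray(gauge_normal, float)
    s = np.asarray(surface_normal, float)
    g = g / np.linalg.norm(g)
    s = s / np.linalg.norm(s)
    return np.degrees(np.arccos(np.clip(np.dot(g, s), -1.0, 1.0)))

# Hypothetical gauge settings vs. true normals at two probe points.
settings = [([0, 0.1, 1], [0, 0, 1]), ([0.3, 0, 1], [0, 0, 1])]
errors = [angular_deviation_deg(g, s) for g, s in settings]
print(np.mean(errors))   # mean angular deviation across probe points
```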
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Eye Tracking-Based Research <s> This article presents a method for automating rendering parameter selection to simplify tedious user interaction and improve the usability of visualization systems. Our approach acquires the important/interesting regions of a dataset through simple user interaction with an eye tracker. Based on this importance information, we automatically compute reasonable rendering parameters using a set of heuristic rules, which are adapted from visualization experience and psychophysical experiments. A user study has been conducted to evaluate these rendering parameters, and while the parameter selections for a specific visualization result are subjective, our approach provides good preliminary results for general users while allowing additional control adjustment. Furthermore, our system improves the interactivity of a visualization system by significantly reducing the required amount of parameter selections and providing good initial rendering parameters for newly acquired datasets of similar types. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Eye Tracking-Based Research <s> This meta-analysis integrates 296 effect sizes reported in eye-tracking research on expertise differences in the comprehension of visualizations. Three theories were evaluated: Ericsson and Kintsch’s (Psychol Rev 102:211–245, 1995) theory of long-term working memory, Haider and Frensch’s (J Exp Psychol Learn Mem Cognit 25:172–190, 1999) information-reduction hypothesis, and the holistic model of image perception of Kundel et al. (Radiology 242:396–402, 2007). Eye movement and performance data were cumulated from 819 experts, 187 intermediates, and 893 novices. In support of the evaluated theories, experts, when compared with non-experts, had shorter fixation durations, more fixations on task-relevant areas, and fewer fixations on task-redundant areas; experts also had longer saccades and shorter times to first fixate relevant information, owing to superiority in parafoveal processing and selective attention allocation. Eye movements, reaction time, and performance accuracy were moderated by characteristics of visualization (dynamics, realism, dimensionality, modality, and text annotation), task (complexity, time-on-task, and task control), and domain (sports, medicine, transportation, other). These findings are discussed in terms of their implications for theories of visual expertise in professional domains and their significance for the design of learning environments. <s> BIB002 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Eye Tracking-Based Research <s> Eye tracking can be a suitable evaluation method for determining which regions and objects of a stimulus a human viewer perceived. Analysts can use eye tracking as a complement to other evaluation methods for a more holistic assessment of novel visualization techniques beyond time and error measures. Up to now, most stimuli in eye tracking are either static stimuli or videos. Since interaction is an integral part of visualization, an evaluation should include interaction. In this paper, we present an extensive literature review on evaluation methods for interactive visualizations. Based on the literature review we propose ideas for analyzing eye movement data from interactive stimuli. This requires looking critically at challenges induced by interactive stimuli. 
The first step is to collect data using different study methods. In our case, we look at using eye tracking, interaction logs, and thinking-aloud protocols. In addition, this requires a thorough synchronization of the mentioned study methods. To analyze the collected data new analysis techniques have to be developed. We investigate existing approaches and how we can adapt them to new data types as well as sketch ideas how new approaches can look like. <s> BIB003
|
The use of eye tracking has become quite popular for evaluating user interfaces, web sites, and 2D visualizations BIB003 . Modern eye trackers can deliver precise and reliable results about foveal vision (i.e., the regions of a 2D image observed in high resolution). Eye motion evaluation focuses on scan paths and fixation regions in 2D screen coordinates. The disadvantages of eye tracking include that eye movements are often unintentional, that we may fail to recognize an object even if we have looked at it for a long time, and that eye motion is only weakly correlated with cognitive processes. Eye motion also does not indicate at which distance (e.g., which layer of a semitransparent 3D model) a person is focusing. Furthermore, peripheral vision cannot be detected with eye tracking [KDX*12]. Eye tracking has been used in the visualization of medical image data, in particular to analyze how physicians inspect X-ray images, for example, in mammography (see, e.g., [BHKS13, Kru00]). With respect to 3D medical visualization, Burgert and colleagues [BOJ*07] investigated 3D renderings of the neck anatomy with enlarged lymph nodes. Experienced participants had significantly fewer saccadic movements and looked longer at the relevant regions, while novices tended to look around more. Eye tracking has also been used to automatically adjust volume rendering parameters in order to highlight regions of interest determined from the viewer's gaze BIB001 . The central and by far most reliable result, however, is that novices and experts have different eye-motion behavior. In a meta-study of eye motion when looking at visualizations, Gegenfurtner and colleagues BIB002 found that, when a large enough number of participants is used, experts show shorter fixation durations and longer saccades. They also have more fixations in the relevant areas and take less time before the first fixation on relevant information. This is the same pattern found in many non-medical tasks, such as chess or driving [BYW*11], and seems to reflect the degree of expertise in the relevant task. Lu and colleagues BIB001 provide a comprehensive overview of eye tracking-based research in visualization.
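Fixation regions are commonly extracted from raw gaze samples with a dispersion-based algorithm; the following I-DT-style sketch is a generic illustration, with thresholds that are illustrative (in practice they depend on tracker rate and viewing geometry).

```python
import numpy as np

def _dispersion(window):
    # Horizontal plus vertical extent of the gaze samples in the window.
    return np.ptp(window[:, 0]) + np.ptp(window[:, 1])

def fixations_idt(gaze, max_dispersion=30.0, min_samples=6):
    """Dispersion-based (I-DT style) fixation detection.

    gaze: (n, 2) array of screen coordinates (e.g., pixels).
    Returns (start, end) sample index pairs of detected fixations.
    """
    gaze = np.asarray(gaze, float)
    fixations, i, n = [], 0, len(gaze)
    while i + min_samples <= n:
        j = i + min_samples
        if _dispersion(gaze[i:j]) <= max_dispersion:
            while j < n and _dispersion(gaze[i:j + 1]) <= max_dispersion:
                j += 1                      # grow the window while compact
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1                          # slide past saccade samples
    return fixations

# Two fixation clusters separated by a saccade.
g = np.array([(100, 100)] * 8 + [(400, 300)] * 8, float)
g += np.random.rand(16, 2)   # a little tracker noise
print(fixations_idt(g))      # -> [(0, 7), (8, 15)]
```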
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Illustrative Visualization of Vascular Surfaces <s> Depth assessment of 3D vascular models visualized on 2D displays is often difficult, especially in complex workspace conditions such as in the operating room. To address these limitations, we propose a new visualization technique for 3D vascular models. Our technique is tailored to static monoscopic 2D representations, as they are often used during surgery. To improve depth assessment, we propose a combination of supporting lines, view-aligned quads, and illustrative shadows. In addition, a hatching scheme that uses different line styles depending on a distance measure is applied to encode vascular shape as well as the distance to tumors. The resulting visualization can be displayed on monoscopic 2D monitors and on 2D printouts without the requirement to use color or intensity gradients. A qualitative study with 15 participants and a quantitative study with 50 participants confirm that the proposed visualization technique significantly improves depth assessment of complex 3D vascular models. <s> BIB001
|
Two evaluations were performed in order to assess the perceptual consequences of a new illustrative vascular visualization technique BIB001 (recall Sect. 3.2). The first, qualitative study served as a pilot study for designing a more thorough quantitative experiment. Type of evaluation: Web-based questionnaire. Participants: 50 (8 physicians, 19 females, 17-48 years of age). The largest group (24) were computer scientists. Stimuli: 24 vascular trees along with a tumor, with two selected and emphasized points. Static images were presented without a facility to change the viewpoint. In total, 1200 depth comparisons were evaluated. Tasks: The participants saw vascular trees and a tumor, as in a typical treatment planning situation. Two points were selected and the participants were asked to assess their depth order. Major results: Participants were more accurate in depth assessment when illustrative shadows and supportive lines were present than when chromadepth shadows and Phong shading were used. The effects are statistically significant and the effect sizes are at least moderate. The increased accuracy corresponds to improved confidence. At the same time, participants needed more time for the depth assessment with illustrative visualization (mean: 14 seconds compared to 11 seconds with chromadepth shadows). Thus, the interpretation of illustrative shadows and supportive lines takes time and cognitive effort. An interesting result is that the differences in accuracy depend on the locations of the points used for depth comparison. The effect was stronger when one or both points were far away (more than the average depth of the model). The experiment also confirmed the effectiveness of chromadepth shadows compared to Phong shading. Further remarks: The results suggest that illustrative shadows and supportive lines are effective for exploring vascular trees. It is important, however, to keep several limitations in mind. First, there was no control over the display conditions. Second, the participants had a large range of expertise and background knowledge about vascular trees, vessel visualizations, and computer graphics. Third, rotation was not possible; thus, in an interactive exploration with 3D rotation, the advantage of the additional techniques may be (strongly) reduced BIB001. The effects of supportive lines and illustrative shadows were not analyzed separately.
|
A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Combined visualization of instruments and anatomical data. <s> Cable conduit installation equipped with L-and T-connecting members which are provided with releasable means for sliding on and interengagement with the installation conduit and/or means for sliding on and underlapping or overlapping with the conduit covering. <s> BIB001 </s> A Survey of Perceptually Motivated 3D Visualization of Medical Image Data <s> Combined visualization of instruments and anatomical data. <s> Currently, most researchers in visualization pay very little attention to vision science. The exception is when the effective use of color is the subject. Little research in flow visualization includes a discussion of the related perceptual theory. Nor does it include an evaluation of effectiveness of the display techniques that are generated. This is so, despite Laidlaw's paper showing that such an evaluation is relatively straightforward. Of course, it's not always necessary to relate visualization research to perceptual theory. If the purpose of the research is to increase the efficiency of an algorithm, then the proper test is one of efficiency, not of perceptual validity. But when a new representation of data is the subject of research, addressing how perceptually effective it is - either by means of a straightforward empirical comparison with existing methods or analytically, relating the new mapping to perceptual theory - should be a matter of course. A strong interdisciplinary approach, including the disciplines of perception, design, and computer science will produce better science and better design in that empirically and theoretically validated visual display techniques will result. <s> BIB002
|
Often medical image data is visualized along with instruments, such as biopsy needles, stents, electrodes, and implants of all kinds. The precise location of instruments relative to anatomical structures needs to be conveyed. We know of no perception-based studies that compare different visualization techniques for such problems.

Perception-guided visualization of blood flow. Compared to the large variety of blood flow visualizations [VPvP * 14], only a few techniques are perceptually motivated and only one was evaluated in a quantitative study. This evaluation relates to the nested visualization problem of displaying vascular structures and embedded flow. There is relevant research on flow perception (e. g., how to convey flow direction and orientation effectively BIB002), which can be used for guidance. Designing perceptually effective blood flow visualizations is particularly challenging for unsteady flow and has to consider motion perception as well.

Exaggerated shading. One perceptually motivated technique for displaying shape is exaggerated shading (ES), where subtle local changes of the geometry are performed to enhance features [RBD06, ZCF * 10]. The deliberate emphasis of surface features may be beneficial for educational applications.

Multimodal medical visualization based on combined scanners, such as PET/CT and PET/MRI scanners, is increasingly important. The visualization challenge is to fuse these images such that the essential information from both datasets is visible and the overall visualization conveys shape and depth information correctly. Many multimodal visualization techniques have been developed, but there is no empirical, quantitative comparison between them.

The role of reflection. Certain established depth and shape cues have not been considered in medical visualization so far. For instance, specular reflections, as they may also occur on some body organs, reveal a lot of information on spatial relations BIB001. The effect of specular reflection is worth investigating (e. g., for virtual colonoscopy, where a procedure is simulated that includes real-life wetness and reflections).

Patient-doctor communication. Medical visualizations, in particular perceptually motivated, illustrative visualizations, have great potential for patient-doctor communication and for interdisciplinary discussions (e. g., in a tumor board). In both settings, users include people who are not familiar with slice-based visualizations and who benefit from visualizations that emphasize important features and abstract from unnecessary details. Only very few papers mention these use cases, and even fewer assess whether medical visualization techniques are indeed useful for them.

Perceptual consequences of interaction. In this survey article, we discussed the influence of enhanced visualization techniques on shape and depth perception. Ultimately, an enhanced understanding of the spatial relations is desired. Advanced interaction techniques, such as cutting, (selective) clipping, lens-based exploration, and virtual resection, contribute to this spatial understanding of 3D medical visualizations. It remains to be investigated how variants of these interaction techniques and combinations with the visualization techniques influence spatial understanding.

More clinically relevant studies.
The most important goal of medical visualization is to support diagnostic and treatment decisions in clinical practice, where 3D visualization techniques are incorporated in complex software assistants. To understand the consequences of decisions relating to visualization techniques, experiments with clinically used software assistants (or very similar research prototypes) are required. Clinical decision situations, such as tumor board meetings, should be simulated to investigate, for example, whether the assessment of tumor infiltration changes as a consequence of advanced light source placement or global illumination. More studies are needed that focus on specific clinical tasks with medical experts as participants. Such studies can reveal the influence of improved perception on cognitive processes, such as the selection of a treatment option. The ultimate goal is to understand whether the use of advanced visualization techniques matters for clinical decisions.

Explore relations to other areas. Medical visualization has some special requirements based on the peculiarities of medical image data and the complex anatomical shapes to be depicted, often along with instruments or simulation results. There are, however, similarities to other areas, such as the visualization of plants and animals, which also exhibit organic shapes, as well as molecular visualization. Thus, an analysis of visualization techniques developed in these areas may inspire future medical visualization development.
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Naive Bayes and SVM. The main advantages of the method are that it is simple, computationally efficient and intrinsically invariant. We present results for simultaneously classifying seven semantic visual categories. These results clearly demonstrate that the method is robust to background clutter and produces good categorization accuracy even without exploiting geometric information. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images. The algorithm was first proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
<s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. 
When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> We propose a simple and straightforward way of creating powerful image representations via cross-dimensional weighting and aggregation of deep convolutional neural network layer outputs. We first present a generalized framework that encompasses a broad family of approaches and includes cross-dimensional pooling and weighting steps. We then propose specific non-parametric schemes for both spatial- and channel-wise weighting that boost the effect of highly active spatial responses and at the same time regulate burstiness effects. We experiment on different public datasets for image search and show that our approach outperforms the current state-of-the-art for approaches based on pre-trained networks. We also provide an easy-to-use, open source implementation that reproduces our results. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> INTRODUCTION <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. 
Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB010
|
Content-based image retrieval (CBIR) has been a long-standing research topic in the computer vision community. The study of CBIR truly started in the early 1990s. Images were indexed by visual cues, such as texture and color, and a myriad of algorithms and image retrieval systems were proposed. A straightforward strategy is to extract global descriptors. This idea dominated the image retrieval community in the 1990s and early 2000s. Yet, a well-known problem is that global signatures may fail to be invariant to image changes such as illumination, translation, occlusion, and truncation. These variations compromise retrieval accuracy and limit the application scope of global descriptors. This problem gave rise to local feature based image retrieval. The focus of this survey is instance-level image retrieval. In this task, given a query image depicting a particular object/scene/architecture, the aim is to retrieve images containing the same object/scene/architecture, which may be captured under different views or illumination, or with occlusions. Instance retrieval departs from class retrieval in that the latter aims at retrieving images of the same class as the query. In the following, if not specified, we use "image retrieval" and "instance retrieval" interchangeably. The milestones of instance retrieval in the past years are presented in Fig. 1, in which the eras of the SIFT-based and CNN-based methods are highlighted. The majority of traditional methods can be considered to end in 2000, when Smeulders et al. presented a comprehensive survey of CBIR "at the end of the early years". Three years later (2003), the Bag-of-Words (BoW) model was introduced to the image retrieval community BIB001, and in 2004 it was applied to image classification BIB002, both relying on the SIFT descriptor BIB003. The retrieval community has since witnessed the prominence of the BoW model for over a decade, during which many improvements were proposed. In 2012, Krizhevsky et al. BIB004, with AlexNet, achieved state-of-the-art recognition accuracy in ILSVRC 2012, exceeding the previous best results by a large margin. Since then, research focus has begun to shift to deep learning based methods BIB005, BIB006, BIB007, BIB008, especially the convolutional neural network (CNN). The SIFT-based methods mostly rely on the BoW model. BoW was originally proposed for modeling documents, because a text is naturally parsed into words. It builds a word histogram for a document by accumulating word responses into a global vector. In the image domain, the introduction of the scale-invariant feature transform (SIFT) BIB003 made the BoW model feasible BIB001. Originally, SIFT comprised a detector and a descriptor, but they are now often used in isolation; in this survey, if not specified, SIFT refers to the 128-dim descriptor, a common practice in the community. With a pre-trained codebook (vocabulary), local features are quantized to visual words. An image can thus be represented in a similar form to a document, and classic weighting and indexing schemes can be leveraged. In recent years, the popularity of SIFT-based models seems to have been overtaken by the convolutional neural network, a hierarchical structure that has been shown to outperform hand-crafted features in many vision tasks. In retrieval, competitive performance compared to the BoW models has been reported, even with short CNN vectors BIB008, BIB009, BIB010.
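To make the document analogy concrete, the sketch below builds tf-idf-weighted BoW vectors and an inverted index from already-quantized descriptors (the detection, description, and quantization steps are covered in the pipeline section later). This is an illustrative toy implementation, not the exact scheme of any cited system; the vocabulary size and gallery data are made up.

```python
import numpy as np
from collections import defaultdict

def build_index(word_ids_per_image, vocab_size):
    """tf-idf BoW vectors plus an inverted index (visual word -> image ids)."""
    n = len(word_ids_per_image)
    df = np.zeros(vocab_size)                       # document frequency
    for ids in word_ids_per_image:
        df[np.unique(ids)] += 1
    idf = np.log(n / np.maximum(df, 1.0))
    bows = np.zeros((n, vocab_size))
    inverted = defaultdict(set)
    for i, ids in enumerate(word_ids_per_image):
        tf = np.bincount(ids, minlength=vocab_size).astype(float)
        v = tf * idf
        bows[i] = v / (np.linalg.norm(v) + 1e-12)   # L2 normalization
        for w in np.unique(ids):
            inverted[int(w)].add(i)                 # posting lists
    return bows, inverted, idf

def query(ids, bows, inverted, idf):
    """Score only images sharing at least one visual word with the query."""
    tf = np.bincount(ids, minlength=len(idf)).astype(float)
    q = tf * idf
    q /= np.linalg.norm(q) + 1e-12
    candidates = set().union(*(inverted[int(w)] for w in np.unique(ids)))
    return sorted(((bows[i] @ q, i) for i in candidates), reverse=True)

# Toy gallery of 4 images over a vocabulary of 8 visual words.
gallery = [np.array([0, 0, 1, 3]), np.array([1, 2, 2]),
           np.array([4, 5, 6]), np.array([0, 3, 3, 7])]
bows, inv, idf = build_index(gallery, vocab_size=8)
print(query(np.array([0, 3]), bows, inv, idf))      # images 0 and 3 rank first
```

Because BoW histograms are sparse for large vocabularies, the inverted index lets a query touch only the images that share at least one visual word with it, which is what makes this scheme scalable.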
The CNN-based retrieval models usually compute compact representations and employ the Euclidean distance or some approximate nearest neighbor (ANN) search method for retrieval. The current literature may directly employ pre-trained CNN models or perform fine-tuning for specific retrieval tasks. A majority of these methods feed the image into the network only once to obtain the descriptor. Some are based on patches which are passed to the network multiple times, in a manner similar to SIFT; we classify them as hybrid methods in this survey.
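As a minimal sketch of this single-pass, compact-descriptor pipeline (assuming PyTorch with a recent torchvision and an ImageNet-pretrained backbone; plain global average pooling stands in here for the more elaborate pooling schemes discussed later):

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# ImageNet-pretrained backbone used as a fixed feature extractor
# (the weights enum assumes torchvision >= 0.13).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()          # keep the 2048-D pooled feature
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def describe(path):
    """One forward pass -> one L2-normalized global descriptor."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    f = backbone(x).squeeze(0)
    return f / f.norm()

def rank(query_vec, gallery_vecs):
    """Euclidean nearest-neighbor ranking over the gallery matrix."""
    d = torch.cdist(query_vec.unsqueeze(0), gallery_vecs).squeeze(0)
    return torch.argsort(d)
```

On unit-normalized descriptors, Euclidean and cosine rankings coincide; at scale, the exact torch.cdist search would be replaced by an ANN index.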
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy.
<s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. 
The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> This paper considers a family of metrics to compare images based on their local descriptors. It encompasses the VLAD descriptor and matching techniques such as Hamming Embedding. Making the bridge between these approaches leads us to propose a match kernel that takes the best of existing techniques by combining an aggregation procedure with a selective match kernel. Finally, the representation underpinning this kernel is approximated, providing a large scale image search both precise and scalable, as shown by our experiments on several benchmarks. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> We consider the design of a single vector representation for an image that embeds and aggregates a set of local patch descriptors such as SIFT. More specifically we aim to construct a dense representation, like the Fisher Vector or VLAD, though of small or intermediate size. We make two contributions, both aimed at regularizing the individual contributions of the local descriptors in the final representation. The first is a novel embedding method that avoids the dependency on absolute distances by encoding directions. The second contribution is a "democratization" strategy that further limits the interaction of unrelated descriptors in the aggregation stage. These methods are complementary and give a substantial performance boost over the state of the art in image search with short or mid-size vectors, as demonstrated by our experiments on standard public image retrieval benchmarks. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13.
We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Recent works show that image comparison based on local descriptors is corrupted by visual bursts, which tend to dominate the image similarity. 
The existing strategies, like power-law normalization, improve the results by discounting the contribution of visual bursts to the image similarity. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. 
We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> CATEGORIZATION METHODOLOGY <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB017
|
According to the different visual representations, this survey categorizes the retrieval literature into two broad types: SIFT-based and CNN-based. The SIFT-based methods are further organized into three classes: using large, medium-sized, or small codebooks. We note that the codebook size is closely related to the choice of encoding methods. The CNN-based methods are categorized into those using pre-trained or fine-tuned CNN models, as well as hybrid methods. Their similarities and differences are summarized in Table 1.

The SIFT-based methods had been predominantly studied before 2012 BIB007 (notable works have also appeared in recent years BIB009, BIB013). This line of methods usually uses one type of detector, e.g., Hessian-Affine, and one type of descriptor, e.g., SIFT. Encoding maps a local feature into a vector. Based on the size of the codebook used during encoding, we classify SIFT-based methods into three categories, as below.

Sivic and Zisserman BIB001 proposed Video Google in 2003, marking the beginning of the BoW model. Then, the hierarchical k-means and approximate k-means were proposed by Stewénius and Nistér and by Philbin et al. BIB002, respectively, marking the use of large codebooks in retrieval. In 2008, Jégou et al. BIB003 proposed Hamming Embedding, a milestone in using medium-sized codebooks. Then, compact visual representations for retrieval were proposed by Perronnin et al. BIB005 and Jégou et al. BIB006 in 2010. Although SIFT-based methods were still moving forward, CNN-based methods began to gradually take over, following the pioneering work of Krizhevsky et al. BIB007. In 2014, Razavian et al. BIB010 proposed a hybrid method extracting multiple CNN features from an image. Babenko et al. BIB011 were the first to fine-tune a CNN model for generic instance retrieval. Both BIB014 and BIB015 employ the column features from pre-trained CNN models, and BIB015 inspires later state-of-the-art methods. These milestones are the representative works of the categorization scheme in this survey.

For SIFT-based methods, hand-crafted local invariant features are extracted, and according to the codebook sizes, different encoding and indexing strategies are leveraged. For CNN-based methods, pre-trained or fine-tuned CNN models and hybrid methods are the primary types; fixed-length compact vectors are usually produced, combined with approximate nearest neighbor (ANN) methods.

Using small codebooks. The visual words are fewer than several thousand. Compact vectors are generated BIB005, BIB006 before dimension reduction and coding.

Using medium-sized codebooks. Given the sparsity of BoW and the low discriminative ability of visual words, the inverted index and binary signatures are used BIB003. The trade-off between accuracy and efficiency is a major influencing factor BIB008.

Using large codebooks. Given the sparse BoW histograms and the high discriminative ability of visual words, the inverted index and memory-friendly signatures are used. Approximate methods are used in codebook generation and encoding BIB002.

The CNN-based methods extract features using CNN models; compact (fixed-length) representations are usually built. There are three classes:

Hybrid methods. Image patches are fed into the CNN multiple times for feature extraction BIB010. Encoding and indexing are similar to SIFT-based methods BIB012.

Using pre-trained CNN models.
Features are extracted in a single pass through a CNN pre-trained on a large-scale dataset like ImageNet BIB004. Compact encoding/pooling techniques are then used BIB014, BIB015.

Using fine-tuned CNN models. The CNN model (e.g., pre-trained on ImageNet) is fine-tuned on a training set in which the images share similar distributions with the target database BIB011. CNN features can be extracted in an end-to-end manner through a single forward pass of the model, and the resulting visual representations exhibit improved discriminative ability BIB016, BIB017.
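Among the SIFT-based classes above, the medium-sized-codebook methods rely on short binary signatures to refine matches inside a visual word. As a rough, simplified illustration of the idea behind Hamming Embedding BIB003, the sketch below projects each descriptor with a fixed random matrix, thresholds each projected dimension at its per-visual-word median (learned offline), and accepts a within-cell match only if the Hamming distance between signatures is small. The signature length, threshold, and data are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((32, 128))   # fixed random projection: 128-D -> 32 bits

def learn_medians(train_desc_per_word):
    """Per-visual-word medians of the projected training descriptors."""
    return {w: np.median(d @ P.T, axis=0) for w, d in train_desc_per_word.items()}

def signature(desc, word, medians):
    """32-bit binary signature of a descriptor assigned to a visual word."""
    return (desc @ P.T > medians[word]).astype(np.uint8)

def he_match(sig_a, sig_b, max_hamming=8):
    """Accept a visual-word match only if the signatures are close."""
    return int(np.count_nonzero(sig_a != sig_b)) <= max_hamming

# Toy example: descriptors quantized to the same visual word (id 0).
train = {0: rng.standard_normal((100, 128))}
med = learn_medians(train)
d1 = rng.standard_normal(128)
d2 = d1 + 0.05 * rng.standard_normal(128)      # near-duplicate descriptor
d3 = rng.standard_normal(128)                  # unrelated descriptor
print(he_match(signature(d1, 0, med), signature(d2, 0, med)))  # likely True
print(he_match(signature(d1, 0, med), signature(d3, 0, med)))  # likely False
```

The binary test rejects most false matches that hard quantization alone would accept, while costing only a few bytes per posting in the inverted file.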
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pipeline <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pipeline <s> The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to "visual words" selected from a discrete vocabulary. This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pipeline <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison.
<s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pipeline <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB004
|
The pipeline of SIFT-based retrieval is introduced in Fig. 2.

Local Feature Extraction. Suppose we have a gallery G consisting of N images. Given a feature detector, we extract local descriptors from the regions around sparse interest points or dense patches. We denote the local descriptors of the D detected regions in an image as {f_i}_{i=1}^D, f_i ∈ R^p.

Codebook Training. SIFT-based methods train a codebook offline. Each visual word in the codebook lies in the center of a subspace, called the "Voronoi cell". A larger codebook corresponds to a finer partitioning, resulting in more discriminative visual words, and vice versa. Suppose that a pool of local descriptors F = {f_i}_{i=1}^M is computed from an unlabeled training set. The baseline approach, i.e., k-means, partitions the M points into K clusters; the K visual words thus constitute a codebook of size K.

Feature Encoding. A local descriptor f_i ∈ R^p is mapped into a feature embedding g_i ∈ R^l through the feature encoding process, f_i → g_i. When k-means clustering is used, f_i can be encoded according to its distances to the visual words. For large codebooks, hard BIB001 and soft quantization BIB002 are good choices. In the former, the resulting embedding g_i has only one non-zero entry; in the latter, f_i can be quantized to a small number of visual words. A global signature is produced after a sum-pooling of all the embeddings of local features. For medium-sized codebooks, additional binary signatures can be generated to preserve the original information. When using small codebooks, popular encoding schemes include the vector of locally aggregated descriptors (VLAD) BIB003, the Fisher vector (FV) BIB004, etc.
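A compact sketch of this pipeline on synthetic descriptors follows, assuming scikit-learn's k-means for codebook training (real systems use hierarchical or approximate k-means for large codebooks). The bow function performs hard quantization into a word histogram, and the vlad function aggregates residuals to the assigned centroids in the spirit of BIB003; descriptor dimensions and codebook size are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train = rng.standard_normal((5000, 128))     # pool of training descriptors F
kmeans = KMeans(n_clusters=64, n_init=4, random_state=0).fit(train)
C = kmeans.cluster_centers_                  # codebook of K = 64 visual words

def bow(desc):
    """Hard quantization: each local descriptor votes for one visual word."""
    words = kmeans.predict(desc)
    return np.bincount(words, minlength=len(C)).astype(float)

def vlad(desc):
    """Aggregate residuals to the assigned centroids, then normalize."""
    words = kmeans.predict(desc)
    v = np.zeros_like(C)
    for f, w in zip(desc, words):
        v[w] += f - C[w]                     # residual to the visual word
    v = np.sign(v) * np.sqrt(np.abs(v))      # power-law (SSR) normalization
    v = v.ravel()
    return v / (np.linalg.norm(v) + 1e-12)   # L2 normalization

img_desc = rng.standard_normal((300, 128))   # descriptors of one image
print(bow(img_desc).shape, vlad(img_desc).shape)   # (64,) and (8192,)
```

The resulting VLAD vector has K × p dimensions (here 64 × 128 = 8192) and would typically be PCA-reduced before indexing.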
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so-called extremal regions, is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> Stable local feature detection and representation is a fundamental component of many image registration and object recognition algorithms. Mikolajczyk and Schmid (June 2003) recently evaluated a variety of approaches and identified the SIFT [D. G. Lowe, 1999] algorithm as being the most resistant to common image deformations. This paper examines (and improves upon) the local image descriptor used by SIFT. Like SIFT, our descriptors encode the salient aspects of the image gradient in the feature point's neighborhood; however, instead of using SIFT's smoothed weighted histograms, we apply principal components analysis (PCA) to the normalized gradient patch.
Our experiments demonstrate that the PCA-based local descriptors are more distinctive, more robust to image deformations, and more compact than the standard SIFT representation. We also present results showing that using these descriptors in an image retrieval application results in increased accuracy and faster matching. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al., April 2002], steerable filters [Freeman, W and Adelson, E, Sept. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al., 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al., 1996], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. Moments and steerable filters show the best performance among the low dimensional descriptors. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset.
To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In this survey, we give an overview of invariant interest point detectors, how they evolved over time, how they work, and what their respective strengths and weaknesses are. We begin with defining the properties of the ideal local feature detector. This is followed by an overview of the literature over the past four decades organized in different categories of feature extraction methods. We then provide a more detailed analysis of a selection of methods which had a particularly significant impact on the research field. We conclude with a summary and promising future research directions. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In state-of-the-art image retrieval systems, an image is represented by a bag of visual words obtained by quantizing high-dimensional local image descriptors, and scalable schemes inspired by text retrieval are then applied for large scale image indexing and retrieval. Bag-of-words representations, however: 1) reduce the discriminative power of image features due to feature quantization; and 2) ignore geometric relationships among visual words. Exploiting such geometric constraints, by estimating a 2D affine transformation between a query image and each candidate image, has been shown to greatly improve retrieval precision but at high computational cost. In this paper we present a novel scheme where image features are bundled into local groups.
Each group of bundled features becomes much more discriminative than a single feature, and within each group simple and robust geometric constraints can be efficiently enforced. Experiments in Web image search, with a database of more than one million images, show that our scheme achieves a 49% improvement in average precision over the baseline bag-of-words approach. Retrieval performance is comparable to existing full geometric verification approaches while being much less computationally expensive. When combined with full geometric verification we achieve a 77% precision improvement over the baseline bag-of-words approach, and a 24% improvement over full geometric verification alone. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> State of the art methods for image and object retrieval exploit both appearance (via visual words) and local geometry (spatial extent, relative pose). In large scale problems, memory becomes a limiting factor - local geometry is stored for each feature detected in each image and requires storage larger than the inverted file and term frequency and inverted document frequency weights together. We propose a novel method for learning discretized local geometry representation based on minimization of average reprojection error in the space of ellipses. The representation requires only 24 bits per feature without drop in performance. Additionally, we show that if the gravity vector assumption is used consistently from the feature description to spatial verification, it improves retrieval performance and decreases the memory footprint. The proposed method outperforms state of the art retrieval algorithms in a standard image retrieval benchmark. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> Many visual search and matching systems represent images using sparse sets of "visual words": descriptors that have been quantized by assignment to the best-matching symbol in a discrete vocabulary. Errors in this quantization procedure propagate throughout the rest of the system, either harming performance or requiring correction using additional storage or processing. This paper aims to reduce these quantization errors at source, by learning a projection from descriptor space to a new Euclidean space in which standard clustering techniques are more likely to assign matching descriptors to the same cluster, and nonmatching descriptors to different clusters. To achieve this, we learn a non-linear transformation model by minimizing a novel margin-based cost function, which aims to separate matching descriptors from two classes of non-matching descriptors. Training data is generated automatically by leveraging geometric consistency. Scalable, stochastic gradient methods are used for the optimization. For the case of particular object retrieval, we demonstrate impressive gains in performance on a ground truth dataset: our learnt 32-D descriptor without spatial re-ranking outperforms a baseline method using 128-D SIFT descriptors with spatial re-ranking. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in differing images.
The algorithm was first proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as the foundation for SIFT which has played an important role in robotic and machine vision in the past decade. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> We describe a scalable approach to 3D smooth object retrieval which searches for and localizes all the occurrences of a user-outlined object in a dataset of images in real time. The approach is illustrated on sculptures. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> The bag-of-features (BOF) image representation [7] is popular in large-scale image retrieval. With BOF, the memory to store the inverted index file and the search complexity are both approximately linearly increased with the number of images. To address the retrieval efficiency and the memory constraint problem, besides some improvement work based on BOF, there come alternative approaches which aggregate local descriptors in one image into a single vector using Fisher Vector [6] or Vector of Local Aggregated Descriptor (VLAD) [1]. It has been shown in [1] that with as few as 16 bytes to represent an image, the retrieval performance is still comparable to that of the BOF representation. In this paper, we illustrate that Fisher Vector, VLAD and BOF can be uniformly derived in two steps: (i) Encoding – separately map each local descriptor into a code, and (ii) Pooling – aggregate all codes from one image into a single vector.
Motivated by the success of these two-step approaches, we propose to use the sparse coding (SC) framework to aggregate local features for image retrieval. The SC framework was first introduced by [10] for the task of image classification. It is a classical two-step approach: Step 1: Encoding. Each local descriptor x from an image is encoded into an N-dimensional vector u = [u_1, u_2, ..., u_N] by fitting a linear model with sparsity (L1) constraint: <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> This paper proposes a pooling strategy for local descriptors to produce a vector representation that is orientation-invariant yet implicitly incorporates the relative angles between features measured by their dominant orientation. This pooling is associated with a similarity metric that ensures that all the features have undergone a comparable rotation. This approach is especially effective when combined with dense oriented features, in contrast to existing methods that either rely on oriented features extracted on key points or on non-oriented dense features. The interest of our approach in a retrieval scenario is demonstrated on popular benchmarks comprising up to 1 million database images. <s> BIB017 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. <s> BIB018 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> The objective of this work is to learn descriptors suitable for the sparse feature detectors used in viewpoint invariant matching. We make a number of novel contributions towards this goal. First, it is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem selecting the regions using sparsity. Second, it is shown that descriptor dimensionality reduction can also be formulated as a convex optimisation problem, using Mahalanobis matrix nuclear norm regularisation. Both formulations are based on discriminative large margin learning constraints.
As the third contribution, we evaluate the performance of the compressed descriptors, obtained from the learnt real-valued descriptors by binarisation. Finally, we propose an extension of our learning formulations to a weakly supervised case, which allows us to learn the descriptors from unannotated image collections. It is demonstrated that the new learning methods improve over the state of the art in descriptor learning on the annotated local patches data set of Brown et al. and unannotated photo collections of Philbin et al. <s> BIB019 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> This paper focuses on the image retrieval task. We propose the use of dense feature points computed on several color channels to improve the retrieval system. To validate our approach, an evaluation of various SIFT extraction strategies is performed. Detected SIFT features are compared with dense SIFT. Dense color descriptors, C-SIFT and T-SIFT, are then utilized. A comparison between standard and rotation invariant features is further achieved. Finally, several encoding strategies are studied: Bag of Visual Words (BOW), Fisher vectors, and vector of locally aggregated descriptors (VLAD). The presented approaches are evaluated on several datasets and we show a large improvement over the baseline. <s> BIB020 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Local Feature Extraction <s> We consider a pipeline for image classification or search based on coding approaches like bag of words or Fisher vectors. In this context, the most common approach is to extract the image patches regularly in a dense manner on several scales. This paper proposes and evaluates alternative choices to extract patches densely. Beyond simple strategies derived from regular interest region detectors, we propose approaches based on superpixels, edges, and a bank of Zernike filters used as detectors. The different approaches are evaluated on recent image retrieval and fine-grained classification benchmarks. Our results show that the regular dense detector is outperformed by other methods in most situations, leading us to improve the state-of-the-art in comparable setups on standard retrieval and fine-grained benchmarks. As a byproduct of our study, we show that existing methods for blob and superpixel extraction achieve high accuracy if the patches are extracted along the edges and not around the detected regions. <s> BIB021 </s>
Local invariant features aim at accurate matching of local structures between images BIB007. SIFT-based methods usually share a similar feature extraction step composed of a feature detector and a descriptor. Local Detector. Interest point detectors aim to reliably localize a set of stable local regions under various imaging conditions. In the retrieval community, finding affine-covariant regions has been preferred. Such detectors are called "covariant" because the shapes of the detected regions change with the affine transformations, so that the region content (the descriptors) can be invariant. This kind of detector differs from keypoint-centric detectors such as the Hessian detector, and from those focusing on scale-invariant regions such as the difference of Gaussians (DoG) detector BIB012. Elliptical regions adapted to the local intensity patterns are produced by affine detectors. This ensures that the same local structure is covered under deformations caused by viewpoint changes, a problem often encountered in instance retrieval.
[Figure caption: A general pipeline of SIFT- and CNN-based retrieval models. Features are computed from hand-crafted detectors for SIFT, and densely applied filters or image patches for CNN. In both methods, under small codebooks, encoding/pooling is employed to produce compact vectors. In SIFT-based methods, the inverted index is necessary under large/medium-sized codebooks. The CNN features can also be computed in an end-to-end way using fine-tuned CNN models.]
In the milestone work BIB001, the Maximally Stable Extremal Region (MSER) detector BIB002 and the affine-extended Harris-Laplace detector are employed, both of which are affine-invariant region detectors. MSER is used in several later works BIB009. Starting from BIB006, the Hessian-affine detector has been widely adopted in retrieval. It has been shown to be superior to the difference of Gaussians detector BIB008, BIB018, due to its advantage in reliably detecting local structures under large viewpoint changes. To fix the orientation ambiguity of these affine-covariant regions, the gravity assumption is made BIB010. This practice, which dismisses the orientation estimation, is employed by later works BIB015, BIB019 and demonstrates consistent improvement on architecture datasets where the objects are usually upright. Other non-affine detectors have also been tested in retrieval, such as the Laplacian of Gaussian (LoG) and Harris detectors used in BIB016. For objects with smooth surfaces BIB013, few interest points can be detected, so the object boundaries are good candidates for local description. On the other hand, some works employ dense region detectors. In a comparison between densely sampled image patches and detected patches, Sicre et al. BIB020 report the superiority of the former. To recover the rotation invariance of dense sampling, the dominant angle of patches is estimated in BIB017. A comprehensive comparison of various dense sampling strategies, the interest point detectors, and those in between can be found in BIB021. Local Descriptor. With a set of detected regions, descriptors encode the local content. SIFT BIB012 has been used as the default descriptor. The 128-dim vector has been shown to outperform competing descriptors in matching accuracy BIB004. In an extension, PCA-SIFT BIB003 reduces the dimension from 128 to 36 to speed up the matching process, at the cost of more time in feature computation and a loss of distinctiveness.
Another improvement is RootSIFT BIB015, calculated in two steps: 1) ℓ1-normalize the SIFT descriptor, 2) take the square root of each element. RootSIFT is now used as a routine in SIFT-based retrieval. Apart from SIFT, SURF BIB005 is also widely used. It combines a Hessian matrix-based detector with a local descriptor built from local gradient histograms. The integral image is used for acceleration. SURF has a matching accuracy comparable with SIFT and is faster to compute. Comparisons between SIFT, PCA-SIFT, and SURF are available in the literature. To further accelerate matching, binary descriptors BIB014 replace the Euclidean distance with the Hamming distance during matching. Apart from hand-crafted descriptors, learning schemes have also been proposed to improve the discriminative ability of local descriptors. For example, Philbin et al. BIB011 propose a non-linear transformation so that the projected SIFT descriptor yields smaller distances for true matches. Simonyan et al. BIB019 improve this process by learning both the pooling region and a linear descriptor projection.
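As an illustration, the RootSIFT transform amounts to a few lines of numpy; the sketch below implements the two steps above and is not the authors' released code.

import numpy as np

def root_sift(descriptors, eps=1e-12):
    # l1-normalize each SIFT descriptor, then take the element-wise square root.
    # Comparing RootSIFT vectors with the Euclidean distance is equivalent to
    # comparing the original SIFT histograms with the Hellinger kernel.
    l1 = np.abs(descriptors).sum(axis=1, keepdims=True) + eps
    return np.sqrt(descriptors / l1)

# Example: transform 100 non-negative SIFT-like vectors.
rng = np.random.default_rng(0)
rs = root_sift(rng.random((100, 128)))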
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Small Codebooks <s> We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user-outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Small Codebooks <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Small Codebooks <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Small Codebooks <s> This paper addresses the construction of a short-vector (128D) image representation for large-scale image and particular object retrieval.
In particular, the method of joint dimensionality reduction of multiple vocabularies is considered. We study a variety of vocabulary generation techniques: different k-means initializations, different descriptor transformations, different measurement regions for descriptor extraction. Our extensive evaluation shows that different combinations of vocabularies, each partitioning the descriptor space in a different yet complementary manner, result in a significant performance improvement, which exceeds the state-of-the-art. <s> BIB004 </s>
A small codebook has several thousand, several hundred, or fewer visual words, so the computational complexity of codebook generation and encoding is moderate. Representative works include BoW BIB001, VLAD BIB002 and FV BIB003. We mainly discuss VLAD and FV, and refer readers to BIB004 for a comprehensive evaluation of the compact BoW vectors.
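To make the FV representation discussed in the following subsections more concrete, here is a hedged sketch of the first-order (mean) part of the Fisher vector on top of scikit-learn's GaussianMixture; a full FV also includes second-order (variance) terms, and all names here are illustrative.

import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_means(descriptors, gmm):
    # First-order FV statistics: soft-assignment-weighted, whitened residuals
    # to each Gaussian mean, giving a K * p dimensional vector.
    q = gmm.predict_proba(descriptors)                       # (D, K) soft assignments
    diff = descriptors[:, None, :] - gmm.means_[None, :, :]  # (D, K, p)
    sigma = np.sqrt(gmm.covariances_)                        # (K, p), diagonal case
    fv = (q[:, :, None] * diff / sigma[None]).sum(axis=0)    # (K, p)
    fv /= descriptors.shape[0] * np.sqrt(gmm.weights_)[:, None]
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                   # power normalization
    return (fv / max(np.linalg.norm(fv), 1e-12)).ravel()     # l2 normalization

# Toy usage: fit a 16-component diagonal GMM, then encode 200 descriptors.
rng = np.random.default_rng(0)
gmm = GaussianMixture(n_components=16, covariance_type="diag", random_state=0)
gmm.fit(rng.normal(size=(1000, 64)))
fv = fisher_vector_means(rng.normal(size=(200, 64)), gmm)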
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> Semantic hashing [1] seeks compact binary codes of data-points so that the Hamming distance between codewords correlates with semantic similarity. In this paper, we show that the problem of finding a best code for a given dataset is closely related to the problem of graph partitioning and can be shown to be NP hard. By relaxing the original problem, we obtain a spectral method whose solutions are simply a subset of thresholded eigenvectors of the graph Laplacian. By utilizing recent results on convergence of graph Laplacian eigenvectors to the Laplace-Beltrami eigenfunctions of manifolds, we show how to efficiently calculate the code of a novel data-point. Taken together, both learning the code and applying it to a novel point are extremely simple. Our experiments show that our codes outperform the state-of-the-art. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics.
However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper are made publicly available. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation <s> For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set.
In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. <s> BIB006
Clustering complexity depends heavily on the codebook size. In works based on VLAD BIB003 or FV BIB004, the codebook sizes are typically small, e.g., 64, 128, or 256. For VLAD, flat k-means is employed for codebook generation. For FV, the Gaussian mixture model (GMM), i.e., u_λ(x) = Σ_{i=1}^{K} w_i u_i(x), where K is the number of Gaussian mixtures, is trained using maximum likelihood estimation. The GMM describes the feature space with a mixture of K Gaussian distributions and can be denoted as λ = {w_i, μ_i, Σ_i; i = 1, ..., K}, where w_i, μ_i and Σ_i represent the mixture weight, the mean vector and the covariance matrix of Gaussian u_i, respectively. Approximate methods are critical in assigning data into a large number of clusters. In the retrieval community, two representative works are hierarchical k-means (HKM) and approximate k-means (AKM) BIB001, as illustrated in Figs. 1 and 3. Proposed in 2006, HKM applies standard k-means on the training features hierarchically. It first partitions the points into a few clusters (e.g., k ≪ K) and then recursively partitions each cluster into further clusters. In every recursion, each point is assigned to one of the k clusters, and the depth of the cluster tree is O(log K), where K is the target cluster number. The computational cost of HKM is therefore O(kM log K), where M is the number of training samples. This is much smaller than the complexity of flat k-means, O(MK), when K is large (a large codebook). The other milestone in large codebook generation is AKM BIB001. This method indexes the K cluster centers using a forest of random k-d trees so that the assignment step can be performed efficiently with ANN search. In AKM, the cost of assignment can be written as O(K log K + vM log K) = O(vM log K), where v is the number of nearest cluster candidates to be accessed in the k-d trees. So the computational complexity of AKM is on par with HKM and is significantly smaller than that of flat k-means when K is large. Experiments show that AKM is superior to HKM BIB001 due to its lower quantization error (see Section 3.4.2). In most AKM-based methods, the default choice for ANN search is FLANN BIB006.
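The AKM idea can be sketched in a few lines: replace the exact assignment step of k-means with a tree-based nearest-neighbor query over the current centers. The sketch below uses scipy's k-d tree with a non-zero eps as a stand-in for FLANN's randomized k-d forest, so it is a simplification under that assumption (a single k-d tree degrades on raw 128-dim SIFT, which is why FLANN uses multiple randomized trees).

import numpy as np
from scipy.spatial import cKDTree

def approx_kmeans(points, K, iters=10, seed=0):
    # k-means whose assignment step runs through a k-d tree over the K centers,
    # in the spirit of AKM: roughly O(M log K) per iteration instead of O(MK).
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=K, replace=False)].copy()
    for _ in range(iters):
        tree = cKDTree(centers)
        # eps > 0 permits approximate nearest neighbors, trading accuracy for speed.
        _, assign = tree.query(points, k=1, eps=0.5)
        for j in range(K):
            members = points[assign == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)   # recompute each center
    return centers, assign

# Toy usage: quantize 10,000 2-D points into a codebook of K = 100 words.
pts = np.random.default_rng(1).normal(size=(10000, 2))
centers, assign = approx_kmeans(pts, K=100)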
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups.
In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The traditional SPM approach based on bag-of-features (BoF) requires nonlinear classifiers to achieve good image classification performance. This paper presents a simple but effective coding scheme called Locality-constrained Linear Coding (LLC) in place of the VQ coding in traditional SPM. LLC utilizes the locality constraints to project each descriptor into its local-coordinate system, and the projected coordinates are integrated by max pooling to generate the final representation. With a linear classifier, the proposed approach performs remarkably better than the traditional nonlinear SPM, achieving state-of-the-art performance on several benchmarks. Compared with the sparse coding strategy [22], the objective function used by LLC has an analytical solution. In addition, the paper proposes a fast approximated LLC method by first performing a K-nearest-neighbor search and then solving a constrained least square fitting problem, bearing computational complexity of O(M + K²). Hence even with very large codebooks, our system can still process multiple frames per second. This efficiency significantly adds to the practical values of LLC for real applications. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The paper addresses large scale image retrieval with short vector representations.
We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different phases. We show and explicitly exploit relations between i) mean subtraction and the negative evidence, i.e., a visual word that is mutually missing in two descriptions being compared, and ii) the axis de-correlation and the co-occurrences phenomenon. Finally, we propose an effective way to alleviate the quantization artifacts through a joint dimensionality reduction of multiple vocabularies. The proposed techniques are simple, yet significantly and consistently improve over the state of the art on compact image representations. Complementary experiments in image classification show that the methods are generally applicable. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> Bag-of-Words lies at the heart of modern object category recognition systems. After descriptors are extracted from images, they are expressed as vectors representing visual word content, referred to as mid-level features. In this paper, we review a number of techniques for generating mid-level features, including two variants of Soft Assignment, Locality-constrained Linear Coding, and Sparse Coding. We also isolate the underlying properties that affect their performance. Moreover, we investigate various pooling methods that aggregate mid-level features into vectors representing images. Average pooling, Max-pooling, and a family of likelihood inspired pooling strategies are scrutinised. We demonstrate how both coding schemes and pooling methods interact with each other. We generalise the investigated pooling methods to account for the descriptor interdependence and introduce an intuitive concept of improved pooling. We also propose a coding-related improvement to increase its speed. Lastly, state-of-the-art performance in classification is demonstrated on Caltech101, Flower17, and ImageCLEF11 datasets. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The objective of this paper is large scale object instance retrieval, given a query image. A starting point of such systems is feature detection and description, for example using SIFT. The focus of this paper, however, is towards very large scale retrieval where, due to storage requirements, very compact image descriptors are required and no information about the original SIFT descriptors can be accessed directly at run time. We start from VLAD, the state-of-the-art compact descriptor introduced by Jegou et al. for this purpose, and make three novel contributions: first, we show that a simple change to the normalization method significantly improves retrieval performance, second, we show that vocabulary adaptation can substantially alleviate problems caused when images are added to the dataset after initial vocabulary learning. These two methods set a new state-of-the-art over all benchmarks investigated here for both mid-dimensional (20k-D to 30k-D) and small (128-D) descriptors. Our third contribution is a multiple spatial VLAD representation, MultiVLAD, that allows the retrieval and localization of objects that only extend over a small part of an image (again without requiring use of the original image SIFT descriptors). <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> Recent works on image retrieval have proposed to index images by compact representations encoding powerful local descriptors, such as the closely related VLAD and Fisher vector.
By combining such a representation with a suitable coding technique, it is possible to encode an image in a few dozen bytes while achieving excellent retrieval results. This paper revisits some assumptions proposed in this context regarding the handling of "visual burstiness", and shows that ad-hoc choices are implicitly done which are not desirable. Focusing on VLAD without loss of generality, we propose to modify several steps of the original design. Albeit simple, these modifications significantly improve VLAD and make it compare favorably against the state of the art. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> Winning method of the fine-grain image classification challenge 2013. Late combination of two indexing and classification strategies. Good practices for fine-grain image classification. Key features: descriptors filtering, spatial coordinates coding, active learning. This paper describes the joint submission of Inria and Xerox for their joint participation in the FGCOMP'2013 challenge. Although the proposed system follows most of the standard Fisher classification pipeline, we describe a few key features and good practices that significantly improve the accuracy when specifically considering fine-grain classification tasks. In particular, we consider the late fusion of two systems both based on Fisher vectors, but for which we make drastically different design choices that make them very complementary. Moreover, we propose a simple yet effective filtering strategy, which significantly boosts the performance for several class domains. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> Image search systems based on local descriptors typically achieve orientation invariance by aligning the patches on their dominant orientations. Albeit successful, this choice introduces too much invariance because it does not guarantee that the patches are rotated consistently. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> We consider the design of a single vector representation for an image that embeds and aggregates a set of local patch descriptors such as SIFT. More specifically we aim to construct a dense representation, like the Fisher Vector or VLAD, though of small or intermediate size. We make two contributions, both aimed at regularizing the individual contributions of the local descriptors in the final representation. The first is a novel embedding method that avoids the dependency on absolute distances by encoding directions. The second contribution is a "democratization" strategy that further limits the interaction of unrelated descriptors in the aggregation stage. These methods are complementary and give a substantial performance boost over the state of the art in image search with short or mid-size vectors, as demonstrated by our experiments on standard public image retrieval benchmarks. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> State-of-the-art patch-based image representations involve a pooling operation that aggregates statistics computed from local descriptors. Standard pooling operations include sum- and max-pooling. Sum-pooling lacks discriminability because the resulting representation is strongly influenced by frequent yet often uninformative descriptors, but only weakly influenced by rare yet potentially highly-informative ones.
Max-pooling equalizes the influence of frequent and rare descriptors but is only applicable to representations that rely on count statistics, such as the bag-of-visual-words (BOV) and its soft- and sparse-coding extensions. We propose a novel pooling mechanism that achieves the same effect as max-pooling but is applicable beyond the BOV and especially to the state-of-the-art Fisher Vector -- hence the name Generalized Max Pooling (GMP). It involves equalizing the similarity between each patch and the pooled representation, which is shown to be equivalent to re-weighting the per-patch statistics. We show on five public image classification benchmarks that the proposed GMP can lead to significant performance gains with respect to heuristic alternatives. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> The bag-of-words (BoW) model treats images as sets of local descriptors and represents them by visual word histograms. The Fisher vector (FV) representation extends BoW, by considering the first and second order statistics of local descriptors. In both representations local descriptors are assumed to be identically and independently distributed (iid), which is a poor assumption from a modeling perspective. It has been experimentally observed that the performance of BoW and FV representations can be improved by employing discounting transformations such as power normalization. In this paper, we introduce non-iid models by treating the model parameters as latent variables which are integrated out, rendering all local regions dependent. Using the Fisher kernel principle we encode an image by the gradient of the data log-likelihood w.r.t. the model hyper-parameters. Our models naturally generate discounting effects in the representations; suggesting that such transformations have proven successful because they closely correspond to the representations obtained for non-iid models. To enable tractable computation, we rely on variational free-energy bounds to learn the hyper-parameters and to compute approximate Fisher kernels. Our experimental evaluation results validate that our models lead to performance improvements comparable to using power normalization, as employed in state-of-the-art feature aggregation methods. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> This paper revisits the vector of locally aggregated descriptors (VLAD), which aggregates the residuals of local descriptors to their cluster centers. Since VLAD usually adopts a small-size codebook, the clusters are coarse and residuals not discriminative. To address this problem, this paper proposes to generate a number of residual codebooks descended from the original clusters. After quantizing local descriptors with these codebooks, we pool the resulting secondary residuals as well as the primary ones to obtain the fine residuals. We show that, with two-step aggregation, the fine-residual VLAD has the same dimension as the original. Experiments on two image search benchmarks confirm the improved discriminative power of our method: we observe consistent superiority to the baseline and competitive performance to the state-of-the-arts. <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Encoding <s> Visual search and image retrieval underpin numerous applications, however the task is still challenging predominantly due to the variability of object appearance and ever increasing size of the databases, often exceeding billions of images.
Prior art methods rely on aggregation of local scale-invariant descriptors, such as SIFT, via mechanisms including Bag of Visual Words (BoW), Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vectors (FV). However, their performance is still short of what is required. This paper presents a novel method for deriving a compact and distinctive representation of image content called Robust Visual Descriptor with Whitening (RVD-W). It significantly advances the state of the art and delivers world-class performance. In our approach local descriptors are rank-assigned to multiple clusters. Residual vectors are then computed in each cluster, normalized using a direction-preserving normalization function and aggregated based on the neighborhood rank. Importantly, the residual vectors are de-correlated and whitened in each cluster before aggregation, leading to a balanced energy distribution in each dimension and significantly improved performance. We also propose a new post-PCA normalization approach which improves separability between the matching and non-matching global descriptors. This new normalization benefits not only our RVD-W descriptor but also improves existing approaches based on FV and VLAD aggregation. Furthermore, we show that the aggregation framework developed using hand-crafted SIFT features also performs exceptionally well with Convolutional Neural Network (CNN) based features. The RVD-W pipeline outperforms state-of-the-art global descriptors on both the Holidays and Oxford datasets. On the large scale datasets, Holidays1M and Oxford1M, the SIFT-based RVD-W representation obtains a mAP of 45.1 and 35.1 percent, while CNN-based RVD-W achieves a mAP of 63.5 and 44.8 percent, all yielding superior performance to the state-of-the-art. <s> BIB017 </s>
|
Due to the small codebook size, relatively complex and information-preserving encoding techniques can be applied. We mainly describe FV, VLAD and their improvements in this section. With a pre-trained GMM model, FV describes the averaged first- and second-order differences between local features and the GMM centers. Its dimension is 2pK, where p is the dimension of the local descriptors and K is the codebook size of GMM. FV usually undergoes power normalization BIB002 , BIB003 to suppress the burstiness problem (to be described in Section 3.4.3). In this step, each component of FV undergoes a non-linear transformation parameterized by a: x_i := sign(x_i)·|x_i|^a. Then ℓ2 normalization is employed. FV has since been improved in several aspects. For example, Koniusz et al. BIB008 augment each descriptor with its spatial coordinates and associated tunable weights. In BIB011 , larger codebooks (up to 4,096) are generated and demonstrate superior classification accuracy to smaller codebooks, at the cost of computational efficiency. To correct the assumption that local regions are identically and independently distributed (iid), Cinbis et al. BIB015 propose non-iid models that discount the burstiness effect and yield improvement over the power normalization. The VLAD encoding scheme proposed by Jégou et al. BIB004 can be thought of as a simplified version of FV. It quantizes a local feature to its nearest visual word in the codebook and records the difference between them. Nearest neighbor search can be performed exactly because of the small codebook size. The residual vectors are then aggregated by sum pooling followed by normalizations. The dimension of VLAD is pK. Comparisons of some important encoding techniques are presented in , BIB006 . Again, the improvement of VLAD comes from multiple aspects. In BIB007 , Jégou and Chum suggest the usage of PCA and whitening (denoted as PCAw in Table 5 ) to de-correlate visual word co-occurrences, and the training of multiple codebooks to reduce quantization loss. In BIB009 , Arandjelović et al. extend VLAD in three aspects: 1) intra-normalization, which normalizes the residual sum within each coarse cluster, 2) vocabulary adaptation to address the dataset transfer problem, and 3) multi-VLAD for small object discovery. Concurrent to BIB009 , Delhumeau et al. BIB010 propose to normalize each residual vector instead of the residual sums; they also advocate local PCA within each Voronoi cell which, unlike BIB006 , performs no dimension reduction. A recent work BIB017 employs soft assignment and empirically learns optimal weights for each rank to improve over the hard quantization. Note that some general techniques benefit various embedding methods, such as VLAD, FV, BoW, locality-constrained linear coding (LLC) BIB005 and monomial embeddings. To improve the discriminative ability of embeddings, Tolias et al. BIB012 propose the orientation covariant embedding to encode the dominant orientation of the SIFT regions jointly with the SIFT descriptor. It achieves a similar covariance property to weak geometric consistency (WGC) BIB001 by using geometric cues within regions of interest, so that matching points with similar dominant orientations are up-weighted and vice versa. The triangulation embedding BIB013 only considers the direction instead of the magnitude of the input vectors. Jégou et al. BIB013 also present a democratic aggregation that limits the interference between the mapped vectors.
Sharing a similar idea with democratic aggregation, Murray and Perronnin BIB014 propose the generalized max pooling (GMP) optimized by equalizing the similarity between the pooled vector and each coding representation. The computational complexity of BoW, VLAD and FV is similar. We neglect the offline training and SIFT extraction steps. During visual word assignment, each feature should compute its distance (or soft assignment coefficient) with all the visual words (or Gaussians) for VLAD (or FV). This step thus has a complexity of O(pK). In the other steps, complexity does not exceed O(pK). Considering the sum-pooling of the embeddings, the encoding process has an overall complexity of O(pKD), where D is the number of features in an image. Triangulation embedding BIB013 , a variant of VLAD, has a similar complexity. The complexity of multi-VLAD BIB009 is O(pKD), too, but it has a more costly matching process. Hierarchical VLAD BIB016 has a complexity of O(pKK'D), where K' is the size of the secondary codebook. In the aggregation stage, both GMP BIB014 and democratic aggregation BIB013 have high complexity. The complexity of GMP is O(P^2K), where P is the dimension of the feature embedding, while the computational cost of democratic aggregation comes from the Sinkhorn algorithm.
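To make the small-codebook encoding pipeline described above concrete, the following minimal sketch (plain numpy; all names are ours and purely illustrative, not taken from any published implementation) computes a VLAD vector via hard assignment, residual sum-pooling, power normalization and ℓ2 normalization:

```python
import numpy as np

def vlad_encode(feats, centers, a=0.5):
    """feats: (D, p) local descriptors; centers: (K, p) pre-trained codebook."""
    K, p = centers.shape
    # Hard-assign each local descriptor to its nearest visual word.
    dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    vlad = np.zeros((K, p))
    for k in range(K):
        members = feats[assign == k]
        if len(members):
            # Sum-pool the residuals to the cluster center.
            vlad[k] = (members - centers[k]).sum(axis=0)
    vlad = vlad.ravel()                         # final dimension is pK
    # Power normalization x := sign(x)|x|^a to suppress burstiness,
    # followed by l2 normalization, as described above.
    vlad = np.sign(vlad) * np.abs(vlad) ** a
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad
```

Replacing the hard argmin assignment with GMM posteriors and accumulating second-order statistics as well would turn this sketch into an FV-style encoder of dimension 2pK.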
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R^d, the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> Semantic hashing [1] seeks compact binary codes of data-points so that the Hamming distance between codewords correlates with semantic similarity. In this paper, we show that the problem of finding a best code for a given dataset is closely related to the problem of graph partitioning and can be shown to be NP hard. By relaxing the original problem, we obtain a spectral method whose solutions are simply a subset of thresholded eigenvectors of the graph Laplacian. By utilizing recent results on convergence of graph Laplacian eigenvectors to the Laplace-Beltrami eigenfunctions of manifolds, we show how to efficiently calculate the code of a novel data-point. Taken together, both learning the code and applying it to a novel point are extremely simple. Our experiments show that our codes outperform the state-of-the-art. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The Euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches.
The scalability of our approach is validated on a data set of two billion vectors. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> The paper addresses large scale image retrieval with short vector representations. We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different phases. We show and explicitly exploit relations between i) mean subtrac- tion and the negative evidence, i.e., a visual word that is mutually miss- ing in two descriptions being compared, and ii) the axis de-correlation and the co-occurrences phenomenon. Finally, we propose an effective way to alleviate the quantization artifacts through a joint dimensionality re- duction of multiple vocabularies. The proposed techniques are simple, yet significantly and consistently improve over the state of the art on compact image representations. Complementary experiments in image classification show that the methods are generally applicable. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> This paper deals with content-based large-scale image retrieval using the state-of-the-art framework of VLAD and Product Quantization proposed by Jegou as a starting point. Demonstrating an excellent accuracy-efficiency trade-off, this framework has attracted increased attention from the community and numerous extensions have been proposed. In this work, we make an in-depth analysis of the framework that aims at increasing our understanding of its different processing steps and boosting its overall performance. Our analysis involves the evaluation of numerous extensions (both existing and novel) as well as the study of the effects of several unexplored parameters. 
We specifically focus on: a) employing more efficient and discriminative local features; b) improving the quality of the aggregated representation; and c) optimizing the indexing scheme. Our thorough experimental evaluation provides new insights into extensions that consistently contribute, and others that do not, to performance improvement, and sheds light onto the effects of previously unexplored parameters of the framework. As a result, we develop an enhanced framework that significantly outperforms the previous best reported accuracy results on standard benchmarks and is more efficient. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> This paper introduces a group testing framework for detecting large similarities between high-dimensional vectors, such as descriptors used in state-of-the-art description of multimedia documents.At the crossroad of multimedia information retrieval and signal processing, we produce a set of group representations that jointly encode several vectors into a single one, in the spirit of group testing approaches. By comparing a query vector to several of these intermediate representations, we screen the large values taken by the similarities between the query and all the vectors, at a fraction of the cost of exhaustive similarity calculation. Unlike concurrent indexing methods that suffer from the curse of dimensionality, our method exploits the properties of high-dimensional spaces. It therefore complements other strategies for approximate nearest neighbor search. Our preliminary experiments demonstrate the potential of group testing for searching large databases of multimedia objects represented by vectors. We obtain a large improvement in terms of the theoretical complexity, at the cost of a small or negligible decrease of accuracy.We hope that this preliminary work will pave the way to subsequent works for multimedia retrieval with limited resources. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> We consider the image retrieval problem of finding the images in a dataset that are most similar to a query image. Our goal is to reduce the number of vector operations and memory for performing a search without sacrificing accuracy of the returned images. We adopt a group testing formulation and design the decoding architecture using either dictionary learning or eigendecomposition. The latter is a plausible option for small-to-medium sized problems with high-dimensional global image descriptors, whereas dictionary learning is applicable in large-scale scenarios. We evaluate our approach for global descriptors obtained from both SIFT and CNN features. Experiments with standard image search benchmarks, including the Yahoo100M dataset comprising 100 million images, show that our method gives comparable (and sometimes superior) accuracy compared to exhaustive search while requiring only 10% of the vector operations and memory. Moreover, for the same search complexity, our method gives significantly better accuracy compared to approaches based on dimensionality reduction or locality sensitive hashing. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> Nearest neighbor search is a problem of finding the data points from the database such that the distances from them to the query point are the smallest. Learning to hash is one of the major solutions to this problem and has been widely studied recently. 
In this paper, we present a comprehensive survey of the learning to hash algorithms, categorize them according to the manners of preserving the similarities into: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, as well as quantization, and discuss their relations. We separate quantization from pairwise similarity preserving as the objective function is very different though quantization, as we show, can be derived from preserving the pairwise similarities. In addition, we present the evaluation protocols, and the general performance analysis, and point out that the quantization algorithms perform superiorly in terms of search accuracy, search time cost, and space cost. Finally, we introduce a few emerging topics. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> ANN Search <s> We study an indexing architecture to store and search in a database of high-dimensional vectors from the perspective of statistical signal processing and decision theory. This architecture is composed of several memory units, each of which summarizes a fraction of the database by a single representative vector. The potential similarity of the query to one of the vectors stored in the memory unit is gauged by a simple correlation with the memory unit's representative vector. This representative optimizes the test of the following hypothesis: the query is independent from any vector in the memory unit versus the query is a simple perturbation of one of the stored vectors. Compared to exhaustive search, our approach finds the most similar database vectors significantly faster without a noticeable reduction in search quality. Interestingly, the reduction of complexity is provably better in high-dimensional spaces. We empirically demonstrate its practical interest in a large-scale image search scenario with off-the-shelf state-of-the-art descriptors. <s> BIB011
|
Due to the high dimensionality of the VLAD/FV embeddings, efficient compression and ANN search methods have been employed BIB004 , BIB006 . For example, principal component analysis (PCA) is usually adopted for dimension reduction, and it is shown that retrieval accuracy even increases after PCA BIB005 . For hashing-based ANN methods, Perronnin et al. BIB003 use standard binary encoding techniques such as locality sensitive hashing BIB001 and spectral hashing BIB002 . Nevertheless, when tested on SIFT and GIST feature datasets, spectral hashing is shown to be outperformed by Product Quantization (PQ) BIB004 . Among quantization-based ANN methods, PQ is also demonstrated to outperform other popular ANN methods such as FLANN BIB006 . A detailed discussion of VLAD and PQ can be found in BIB007 . PQ has since been improved in a number of works. In , Douze et al. propose to re-order the cluster centroids so that adjacent centroids have small Hamming distances. This method is compatible with Hamming distance based ANN search, which offers significant speedup for PQ. We refer readers to BIB010 for a survey of ANN approaches. We also mention an emerging ANN technique, i.e., group testing BIB008 , BIB009 , BIB011 . In a nutshell, the database is decomposed into groups, each represented by a group vector. Comparisons between the query and group vectors reveal how likely it is that a group contains a true match. Since there are far fewer group vectors than database vectors, search time is reduced. Iscen et al. BIB009 propose to directly find the best group vectors summarizing the database without explicitly forming the groups, which reduces the memory consumption.
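To make the PQ idea concrete, here is a minimal numpy sketch (our own illustrative names; it assumes m sub-codebooks have already been trained offline by running k-means on the corresponding sub-vectors) of PQ encoding and asymmetric distance computation (ADC):

```python
import numpy as np

def pq_encode(x, codebooks):
    """x: (p,) vector; codebooks: (m, ks, p/m) sub-codebooks, ks <= 256."""
    m, ks, sub_dim = codebooks.shape
    codes = np.empty(m, dtype=np.uint8)         # compact code: m bytes
    for i in range(m):
        sub = x[i * sub_dim:(i + 1) * sub_dim]
        # Index of the nearest sub-centroid in the i-th sub-codebook.
        codes[i] = ((codebooks[i] - sub) ** 2).sum(axis=1).argmin()
    return codes

def pq_adc(query, codes_db, codebooks):
    """codes_db: (N, m) PQ codes of the database vectors."""
    m, ks, sub_dim = codebooks.shape
    # Distance table between each query sub-vector and every sub-centroid.
    table = np.empty((m, ks))
    for i in range(m):
        sub = query[i * sub_dim:(i + 1) * sub_dim]
        table[i] = ((codebooks[i] - sub) ** 2).sum(axis=1)
    # Approximate squared distance to each database vector is a sum of
    # m table lookups -- the codes are never decompressed.
    return table[np.arange(m), codes_db].sum(axis=1)
```

Each database vector is thus stored in only m bytes, and ranking N vectors costs N·m table lookups after the single O(ks·p) table construction per query.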
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Large Codebooks <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Retrieval Using Large Codebooks <s> A novel similarity measure for bag-of-words type large scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method and is more discriminative than both L2-based soft assignment and Hamming embedding. We show experimentally that the novel similarity function achieves mean average precision that is superior to any result published in the literature on a number of standard datasets. At the same time, retrieval with the proposed similarity function is faster than the reference method. <s> BIB002
|
A large codebook may contain 1 million , BIB001 or more BIB002 , visual words. Some major steps undergo important changes compared with small codebooks.
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to ldquovisual wordsrdquo selected from a discrete vocabulary.This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n2 ~ n3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scaleup the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably reduces the complexity of SVMs to O(n) in training and a constant in testing. 
In a number of image categorization experiments, we find that, in terms of classification accuracy, the suggested linear SPM based on sparse coding of SIFT descriptors always significantly outperforms the linear SPM kernel on histograms, and is even better than the nonlinear SPM kernels, leading to state-of-the-art performance on several benchmarks by using a single type of descriptors. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> A novel similarity measure for bag-of-words type large scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method and is more discriminative than both L2-based soft assignment and Hamming embedding. ::: ::: We show experimentally that the novel similarity function achieves mean average precision that is superior to any result published in the literature on a number of standard datasets. At the same time, retrieval with the proposed similarity function is faster than the reference method. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> Bag-of-words models are among the most widely used and successful representations in multimedia retrieval. However, the quantization error which is introduced when mapping keypoints to visual words is one of the main drawbacks of the bag-of-words model. Although some techniques, such as soft-assignment to bags [23] and query expansion [27], have been introduced to deal with the problem, the performance gain is always at the cost of longer query response time, which makes them difficult to apply to large-scale multimedia retrieval applications. In this paper, we propose a simple "constrained keypoint quantization" method which can effectively reduce the overall quantization error of the bag-of-words representation and greatly improve the retrieval efficiency at the same time. The central idea of the proposed quantization method is that if a keypoint is far away from all visual words, we simply remove it. 
At first glance, this simple strategy seems naive and dangerous. However, we show that the proposed method has a solid theoretical background. Our experimental results on three widely used datasets for near duplicate image and video retrieval confirm that by removing a large amount of keypoints which have high quantization error, we obtain comparable or even better retrieval performance while dramatically boosting retrieval efficiency. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding (Quantization) <s> Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words. The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method. <s> BIB007
|
Feature encoding is interleaved with codebook clustering, because ANN search is critical in both components. The ANN techniques underlying some classic methods like AKM and HKM can be used in both the clustering and encoding steps. Under a large codebook, the key trade-off is between quantization error and computational complexity. In the encoding step, information-preserving encoding methods such as FV BIB004 and sparse coding BIB003 are mostly infeasible due to their computational complexity. It therefore remains challenging to reduce the quantization error while keeping the quantization process efficient. For the ANN methods, the earliest solution is to quantize a local feature along the hierarchical tree structure . Quantized tree nodes in different levels are assigned different weights. However, due to the highly imbalanced tree structure, this method is outperformed by the k-d tree based quantization method BIB001 : one visual word is assigned to each local feature, using a k-d tree built from the codebook for fast ANN search. In an improvement to this hard quantization scheme, Philbin et al. BIB002 propose soft quantization by quantizing a feature into several nearest visual words. The weight of each assigned visual word decreases with its distance from the feature as exp(−d^2/(2σ^2)), where d is the distance between the descriptor and the cluster center and σ is a scale parameter. While soft quantization is based on the Euclidean distance, Mikulik et al. BIB005 propose to find relevant visual words for each visual word through an unsupervised set of matching features. Built on a probabilistic model, these alternative words tend to contain the descriptors of matching features. To reduce the memory cost of soft quantization BIB002 and the number of query visual words, Cai et al. BIB006 suggest that when a local feature is far away from even the nearest visual word, this feature can be discarded without a performance drop. To further accelerate quantization, scalar quantization BIB007 suggests that local features be quantized without an explicitly trained codebook. A floating-point vector is binarized, and the first tens of dimensions of the resulting binary vector are directly converted to a decimal number as a visual word. In the case of large quantization error and low recall, scalar quantization uses bit flipping to generate hundreds of visual words for a local feature.
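The soft quantization scheme of BIB002 can be sketched in a few lines. This is a hedged illustration: the variable names, the number of nearest words r and the bandwidth σ below are our own choices, not values prescribed by the paper.

```python
import numpy as np

def soft_assign(desc, centers, r=3, sigma=1.0):
    """desc: (p,) descriptor; centers: (K, p) codebook.
    Returns the r nearest visual words and their normalized weights."""
    d2 = ((centers - desc) ** 2).sum(axis=1)    # squared distances d^2
    nearest = np.argsort(d2)[:r]                # r nearest visual words
    # Weight decreases with distance as exp(-d^2 / (2 * sigma^2)).
    w = np.exp(-d2[nearest] / (2.0 * sigma ** 2))
    return nearest, w / w.sum()
```

In an inverted-file system, the descriptor then contributes to the postings of all r words with the corresponding fractional weights, at the cost of an r-fold increase in index size.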
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> Burstiness, a phenomenon initially observed in text retrieval, is the property that a given visual element appears more times in an image than a statistically independent model would predict. In the context of image search, burstiness corrupts the visual similarity measure, i.e., the scores used to rank the images. In this paper, we propose a strategy to handle visual bursts for bag-of-features based image search systems. Experimental results on three reference datasets show that our method significantly and consistently outperforms the state of the art. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> There has been recent progress on the problem of recognizing specific objects in very large datasets. The most common approach has been based on the bag-of-words (BOW) method, in which local image features are clustered into visual words. This can provide significant savings in memory compared to storing and matching each feature independently. In this paper we take an additional step to reducing memory requirements by selecting only a small subset of the training features to use for recognition. This is based on the observation that many local features are unreliable or represent irrelevant clutter. We are able to select “useful” features, which are both robust and distinctive, by an unsupervised preprocessing step that identifies correctly matching features among the training images. We demonstrate that this selection approach allows an average of 4% of the original features per image to provide matching performance that is as accurate as the full set. In addition, we employ a graph to represent the matching relationships between images. Doing so enables us to effectively augment the feature set for each image through merging of useful features of neighboring images. We demonstrate adjacent and 2-adjacent augmentation, both of which give a substantial boost in performance. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> Detecting logos in photos is challenging. A reason is that logos locally resemble patterns frequently seen in random images. We propose to learn a statistical model for the distribution of incorrect detections output by an image matching algorithm. It results in a novel scoring criterion in which the weight of correlated keypoint matches is reduced, penalizing irrelevant logo detections. In experiments on two very different logo retrieval benchmarks, our approach largely improves over the standard matching criterion as well as other state-of-the-art approaches. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. 
We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> The Inverse Document Frequency (IDF) is prevalently utilized in the Bag-of-Words based image search. The basic idea is to assign less weight to terms with high frequency, and vice versa. However, the estimation of visual word frequency is coarse and heuristic. Therefore, the effectiveness of the conventional IDF routine is marginal, and far from optimal. To tackle this problem, this paper introduces a novel IDF expression by the use of Lp-norm pooling technique. Carefully designed, the proposed IDF takes into account the term frequency, document frequency, the complexity of images, as well as the codebook information. Optimizing the IDF function towards optimal balancing between TF and pIDF weights yields the so-called Lp-norm IDF (pIDF). We show that the conventional IDF is a special case of our generalized version, and two novel IDFs, i.e. the average IDF and the max IDF, can also be derived from our formula. Further, by counting for the term-frequency in each image, the proposed Lp-norm IDF helps to alleviate the visual word burstiness phenomenon. Our method is evaluated through extensive experiments on three benchmark datasets (Oxford 5K, Paris 6K and Flickr 1M). We report a performance improvement of as large as 27.1% over the baseline approach. Moreover, since the Lp-norm IDF is computed offline, no extra computation or memory cost is introduced to the system at all. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> Repeated structures such as building facades, fences or road markings often represent a significant challenge for place recognition. Repeated structures are notoriously hard for establishing correspondences using multi-view geometry. They violate the feature independence assumed in the bag-of-visual-words representation which often leads to over-counting evidence and significant degradation of retrieval performance. In this work we show that repeated structures are not a nuisance but, when appropriately represented, they form an important distinguishing feature for many places. We describe a representation of repeated structures suitable for scalable retrieval and geometric verification. The retrieval is based on robust detection of repeated image structures and a suitable modification of weights in the bag-of-visual-word model. We also demonstrate that the explicit detection of repeated patterns is beneficial for robust visual word matching for geometric verification. 
Place recognition results are shown on datasets of street-level imagery from Pittsburgh and San Francisco demonstrating significant gains in recognition performance compared to the standard bag-of-visual-words baseline as well as the more recently proposed burstiness weighting and Fisher vector encoding. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> This paper deals with a novel concept of an exponential IDF in the BM25 formulation and compares the search accuracy with that of the BM25 with the original IDF in a content-based video retrieval (CBVR) task. Our video retrieval method is based on a bag of keypoints (local visual features) and the exponential IDF estimates the keypoint importance weights more accurately than the original IDF. The exponential IDF is capable of suppressing the keypoints from frequently occurring background objects in videos, and we found that this effect is essential for achieving improved search accuracy in CBVR. Our proposed method is especially designed to tackle instance video search, one of the CBVR tasks, and we demonstrate its effectiveness in significantly enhancing the instance search accuracy using the TRECVID2012 video retrieval dataset. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Weighting <s> Recent works show that image comparison based on local descriptors is corrupted by visual bursts, which tend to dominate the image similarity. The existing strategies, like power-law normalization, improve the results by discounting the contribution of visual bursts to the image similarity. <s> BIB008
|
TF-IDF. The visual words in codebook C are typically assigned specific weights, called the term frequency and inverse document frequency (TF-IDF), which are integrated with the BoW encoding. TF is defined as TF(c_i, j) = o_i^j, where o_i^j is the number of occurrences of visual word c_i within image j. TF is thus a local weight. IDF, on the other hand, determines the contribution of a given visual word through global statistics. The classic IDF weight of visual word c_i is calculated as IDF(c_i) = log(N / n_i), where N is the number of gallery images, and n_i is the number of images in which word c_i appears. The TF-IDF weight for visual word c_i in image j is w_i^j = TF(c_i, j) · IDF(c_i). Improvements. A major problem associated with visual word weighting is burstiness BIB001 . It refers to the phenomenon whereby repetitive structures appear in an image and tend to dominate the image similarity. Jégou et al. BIB001 propose several TF variants to deal with burstiness. An effective strategy consists in exerting a square root operation on TF. Instead of grouping features with the same word index, Revaud et al. BIB003 propose detecting keypoint groups that frequently occur in irrelevant images and down-weighting them in the scoring function. While the above two methods detect bursty groups after quantization, Shi et al. BIB008 propose detecting them in the descriptor stage. The detected bursty descriptors undergo average pooling and are fed into the BoW architectures. From the aspect of IDF, Zheng et al. BIB005 propose the Lp-norm IDF to tackle burstiness, and Murata et al. BIB007 design the exponential IDF which is later incorporated into the BM25 formula. While most works try to suppress burstiness, Torii et al. BIB006 view it as a distinguishing feature for architectural scenes and design a new similarity measurement following burstiness detection. Another feature weighting strategy is feature augmentation on the database side BIB004 , BIB002 . Both methods construct an image graph offline, with edges indicating whether two images share the same object. For BIB002 , only features that pass the geometric verification are preserved, which reduces the memory cost. Then, the feature of the base image is augmented with all the visual words of its connected images. This method is improved in BIB004 by only adding those visual words which are estimated to be visible in the augmented image, so that noisy visual words can be excluded.
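A minimal sketch of the TF-IDF weighting defined above is given below (plain numpy; names are illustrative). It first counts document frequencies n_i over the gallery and then weights each BoW histogram:

```python
import numpy as np

def tfidf_weights(word_ids_per_image, K):
    """word_ids_per_image: list of 1-D arrays of non-negative visual word
    ids, one array per gallery image; K: codebook size."""
    N = len(word_ids_per_image)
    n = np.zeros(K)                             # n_i: images containing c_i
    for ids in word_ids_per_image:
        n[np.unique(ids)] += 1
    idf = np.log(N / np.maximum(n, 1))          # IDF(c_i) = log(N / n_i)
    weighted = []
    for ids in word_ids_per_image:
        tf = np.bincount(ids, minlength=K)      # TF: occurrence counts o_i^j
        weighted.append(tf * idf)               # w_i^j = TF(c_i, j) * IDF(c_i)
    return weighted
```

The square-root burstiness discounting of BIB001 would simply replace tf by np.sqrt(tf) before the IDF multiplication.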
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> The Inverted Index <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> The Inverted Index <s> Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n2 ~ n3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scaleup the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably reduces the complexity of SVMs to O(n) in training and a constant in testing. In a number of image categorization experiments, we find that, in terms of classification accuracy, the suggested linear SPM based on sparse coding of SIFT descriptors always significantly outperforms the linear SPM kernel on histograms, and is even better than the nonlinear SPM kernels, leading to state-of-the-art performance on several benchmarks by using a single type of descriptors. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> The Inverted Index <s> In this paper we address the problem of image retrieval from millions of database images. We improve the vocabulary tree based approach by introducing contextual weighting of local features in both descriptor and spatial domains. Specifically, we propose to incorporate efficient statistics of neighbor descriptors both on the vocabulary tree and in the image spatial domain into the retrieval. These contextual cues substantially enhance the discriminative power of individual local features with very small computational overhead. We have conducted extensive experiments on benchmark datasets, i.e., the UKbench, Holidays, and our new Mobile dataset, which show that our method reaches state-of-the-art performance with much less computation. Furthermore, the proposed method demonstrates excellent scalability in terms of both retrieval accuracy and efficiency on large-scale experiments using 1.26 million images from the ImageNet database as distractors. 
<s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> The Inverted Index <s> A new data structure for efficient similarity search in very large dataseis of high-dimensional vectors is introduced. This structure called the inverted multi-index generalizes the inverted index idea by replacing the standard quantization within inverted indices with product quantization. For very similar retrieval complexity and preprocessing time, inverted multi-indices achieve a much denser subdivision of the search space compared to inverted indices, while retaining their memory efficiency. Our experiments with large dataseis of SIFT and GIST vectors demonstrate that because of the denser subdivision, inverted multi-indices are able to return much shorter candidate lists with higher recall. Augmented with a suitable reranking procedure, multi-indices were able to improve the speed of approximate nearest neighbor search on the dataset of 1 billion SIFT vectors by an order of magnitude compared to the best previously published systems, while achieving better recall and incurring only few percent of memory overhead. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> The Inverted Index <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. <s> BIB005
|
The inverted index is designed to enable efficient storage and retrieval and is usually used under large/medium-sized codebooks. Its structure is illustrated in Fig. 4 . The inverted index is a one-dimensional structure where each entry corresponds to a visual word in the codebook. An inverted list is attached to each word entry, and the entries stored in each inverted list are called indexed features or postings. The inverted index takes advantage of the sparse nature of the visual word histogram under a large codebook. In the literature, new retrieval methods are usually required to be compatible with the inverted index. In the baseline , BIB001 , the image ID and term frequency (TF) are stored in a posting. When other information is integrated, it should be small in size. For example, in BIB003 , quantized metadata such as the descriptor contextual weight, descriptor density, mean relative log scale and mean orientation difference are stored in each posting. Similarly, quantized spatial information such as the orientation can also be stored , BIB002 . In coindexing , when the inverted index is enlarged with globally consistent neighbors, semantically isolated images are deleted to reduce memory consumption. In BIB004 , the original one-dimensional inverted index is extended to a two-dimensional structure (the inverted multi-index) for ANN search, which learns a codebook for each SIFT sub-vector. Later, it is applied to instance retrieval by BIB005 to fuse local color and SIFT descriptors.
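The baseline structure can be sketched as follows. This is a simplified illustration in Python (real systems store postings in compact binary arrays rather than Python lists, and fold in the metadata discussed above):

```python
from collections import Counter, defaultdict

class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(list)       # word id -> [(image ID, TF)]

    def add_image(self, img_id, word_ids):
        # One posting per distinct visual word occurring in the image.
        for w, tf in Counter(word_ids).items():
            self.postings[w].append((img_id, tf))

    def query(self, word_ids, idf):
        """idf: per-word IDF weights, indexable by word id."""
        scores = defaultdict(float)
        # Only the inverted lists of the query's words are traversed,
        # exploiting the sparsity of BoW histograms under a large codebook.
        for w, q_tf in Counter(word_ids).items():
            for img_id, tf in self.postings[w]:
                scores[img_id] += q_tf * tf * idf[w] ** 2
        return sorted(scores.items(), key=lambda x: -x[1])
```

Because a query only touches the lists of its own visual words, the search cost grows with list lengths rather than with the total number of database images.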
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to ldquovisual wordsrdquo selected from a discrete vocabulary.This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> This article improves recent methods for large scale image search. We first analyze the bag-of-features approach in the framework of approximate nearest neighbor search. This leads us to derive a more precise representation based on Hamming embedding (HE) and weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within an inverted file and are efficiently exploited for all images in the dataset. We then introduce a graph-structured quantizer which significantly speeds up the assignment of the descriptors to visual words. 
A comparison with the state of the art shows the interest of our approach when high accuracy is needed. Experiments performed on three reference datasets and a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short-list of images, is shown to be complementary to our weak geometric consistency constraints. Our approach is shown to outperform the state-of-the-art on the three datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> A novel similarity measure for bag-of-words type large scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method and is more discriminative than both L2-based soft assignment and Hamming embedding. We show experimentally that the novel similarity function achieves mean average precision that is superior to any result published in the literature on a number of standard datasets. At the same time, retrieval with the proposed similarity function is faster than the reference method. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> Bag-of-words models are among the most widely used and successful representations in multimedia retrieval. However, the quantization error which is introduced when mapping keypoints to visual words is one of the main drawbacks of the bag-of-words model. Although some techniques, such as soft-assignment to bags [23] and query expansion [27], have been introduced to deal with the problem, the performance gain is always at the cost of longer query response time, which makes them difficult to apply to large-scale multimedia retrieval applications. In this paper, we propose a simple "constrained keypoint quantization" method which can effectively reduce the overall quantization error of the bag-of-words representation and greatly improve the retrieval efficiency at the same time. The central idea of the proposed quantization method is that if a keypoint is far away from all visual words, we simply remove it. At first glance, this simple strategy seems naive and dangerous. However, we show that the proposed method has a solid theoretical background. Our experimental results on three widely used datasets for near duplicate image and video retrieval confirm that by removing a large amount of keypoints which have high quantization error, we obtain comparable or even better retrieval performance while dramatically boosting retrieval efficiency. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> This paper considers a family of metrics to compare images based on their local descriptors. It encompasses the VLAD descriptor and matching techniques such as Hamming Embedding. Making the bridge between these approaches leads us to propose a match kernel that takes the best of existing techniques by combining an aggregation procedure with a selective match kernel. Finally, the representation underpinning this kernel is approximated, providing a large scale image search both precise and scalable, as shown by our experiments on several benchmarks.
<s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Codebook Generation and Quantization <s> Feature fusion has been proven effective [35, 36] in image search. Typically, it is assumed that the to-be-fused heterogeneous features work well by themselves for the query. However, in a more realistic situation, one does not know in advance whether a feature is effective or not for a given query. As a result, it is of great importance to identify feature effectiveness in a query-adaptive manner. <s> BIB008
|
Considering the relatively small computational cost compared with large codebooks (Section 3.4.1), flat k-means can be adopted for codebook generation BIB006 , BIB003 . It is also shown in BIB007 , BIB008 that clustering with AKM BIB001 yields very competitive retrieval accuracy. For quantization, nearest neighbor search can be used to find the nearest visual word in the codebook; in practice, even strict approximate nearest neighbor (ANN) algorithms produce competitive retrieval results. Compared with the extensive study on quantization under large codebooks (Section 3.4.2) BIB002 , BIB004 , BIB005 , relatively few works focus on the quantization problem under a medium-sized codebook.
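To make the above pipeline concrete, the following is a minimal sketch, assuming NumPy and scikit-learn, of medium-sized codebook generation with flat k-means followed by hard quantization via nearest-neighbor assignment. It is an illustration rather than the setup of any cited work: the random descriptor arrays, the 1,000-word codebook (real systems typically use on the order of 20k words), and all parameter values are assumptions, and MiniBatchKMeans merely stands in for the exact or approximate (AKM) k-means variants discussed above.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)

# Stand-in for 128-D SIFT descriptors pooled from a training image set.
train_descriptors = rng.random((50_000, 128)).astype(np.float32)

# Flat k-means; medium-sized codebooks typically hold ~20k visual words,
# kept at 1,000 here so the example runs quickly. AKM would replace the
# exact assignment step inside k-means with approximate NN search.
kmeans = MiniBatchKMeans(n_clusters=1000, batch_size=4096,
                         n_init=3, random_state=0)
kmeans.fit(train_descriptors)
codebook = kmeans.cluster_centers_            # shape: (1000, 128)

# Quantization: each descriptor of a new image is assigned to its
# nearest visual word by nearest-neighbor search against the codebook.
image_descriptors = rng.random((500, 128)).astype(np.float32)
word_ids = kmeans.predict(image_descriptors)  # one visual word per descriptor
```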
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows further improvement of the accuracy. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This article improves recent methods for large scale image search. We first analyze the bag-of-features approach in the framework of approximate nearest neighbor search. This leads us to derive a more precise representation based on Hamming embedding (HE) and weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within an inverted file and are efficiently exploited for all images in the dataset. We then introduce a graph-structured quantizer which significantly speeds up the assignment of the descriptors to visual words. A comparison with the state of the art shows the interest of our approach when high accuracy is needed. Experiments performed on three reference datasets and a dataset of one million images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short-list of images, is shown to be complementary to our weak geometric consistency constraints. Our approach is shown to outperform the state-of-the-art on the three datasets. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper introduces a video copy detection system which efficiently matches individual frames and then verifies their spatio-temporal consistency. The approach for matching frames relies on a recent local feature indexing method, which is at the same time robust to significant video transformations and efficient in terms of memory usage and computation time. We match either keyframes or uniformly sampled frames. To further improve the results, a verification step robustly estimates a spatio-temporal model between the query video and the potentially corresponding video segments. Experimental results evaluate the different parameters of our system and measure the trade-off between accuracy and efficiency.
We show that our system obtains excellent results for the TRECVID 2008 copy detection task. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper introduces the contextual dissimilarity measure, which significantly improves the accuracy of bag-of-features-based image search. Our measure takes into account the local distribution of the vectors and iteratively estimates distance update terms in the spirit of Sinkhorn's scaling algorithm, thereby modifying the neighborhood structure. Experimental results show that our approach gives significantly better results than a standard distance and outperforms the state of the art in terms of accuracy on the Nistér-Stewénius and Lola data sets. This paper also evaluates the impact of a large number of parameters, including the number of descriptors, the clustering method, the visual vocabulary size, and the distance measure. The optimal parameter choice is shown to be quite context-dependent. In particular, using a large number of descriptors is interesting only when using our dissimilarity measure. We have also evaluated two novel variants: multiple assignment and rank aggregation. They are shown to further improve accuracy at the cost of higher memory usage and lower efficiency. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper proposes an asymmetric Hamming Embedding scheme for large scale image search based on local descriptors. The comparison of two descriptors relies on a vector-to-binary code comparison, which limits the quantization error associated with the query compared with the original Hamming Embedding method. The approach is used in combination with an inverted file structure that offers high efficiency, comparable to that of a regular bag-of-features retrieval system. The comparison is performed on two popular datasets. Our method consistently improves the search quality over the symmetric version. The trade-off between memory usage and precision is evaluated, showing that the method is especially useful for short binary signatures. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> In this paper, we propose a novel image classification framework based on patch matching. More precisely, we adapt the Hamming Embedding technique, first introduced for image search, to improve the bag-of-words representation. This matching technique allows the fast comparison of descriptors based on their binary signatures, which refines the matching rule based on visual words and thereby limits the quantization error. Then, in order to allow the use of efficient and suitable linear kernel-based SVM classification, we propose a mapping method to cast the scores output by the Hamming Embedding matching technique into a proper similarity space. Comparative experiments of our proposed approach and other existing encoding methods on two challenging datasets, PASCAL VOC 2007 and Caltech-256, report the interest of the proposed scheme, which outperforms all methods based on patch matching and even provides competitive results compared with the state-of-the-art coding techniques. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications.
Feature quantization plays a crucial role in the BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost: visual word generation (codebook construction) is time consuming, especially with a large amount of features; 2) limited reliability: different collections of images may produce totally different codebooks, and quantization error is hard to control; 3) update inefficiency: once the codebook is constructed, it is not easy to update. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words. The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relative 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper considers a family of metrics to compare images based on their local descriptors. It encompasses the VLAD descriptor and matching techniques such as Hamming Embedding. Making the bridge between these approaches leads us to propose a match kernel that takes the best of existing techniques by combining an aggregation procedure with a selective match kernel. Finally, the representation underpinning this kernel is approximated, providing a large scale image search both precise and scalable, as shown by our experiments on several benchmarks. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> Many recent object retrieval systems rely on local features for describing an image. The similarity between a pair of images is measured by aggregating the similarity between their corresponding local features. In this paper we present a probabilistic framework for modeling the feature to feature similarity measure. We then derive a query adaptive distance which is appropriate for global similarity evaluation. Furthermore, we propose a function to score the individual contributions into an image to image similarity within the probabilistic framework. Experimental results show that our method improves the retrieval accuracy significantly and consistently. Moreover, our result compares favorably to the state-of-the-art. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> This paper proposes a query expansion technique for image search that is faster and more precise than the existing ones.
An enriched representation of the query is obtained by exploiting the binary representation offered by the Hamming Embedding image matching approach: The initial local descriptors are refined by aggregating those of the database, while new descriptors are produced from the images that are deemed relevant. The technique has two computational advantages over other query expansion techniques. First, the size of the enriched representation is comparable to that of the initial query. Second, the technique is effective even without using any geometry, in which case searching a database comprising 105k images typically takes 80 ms on a desktop machine. Overall, our technique significantly outperforms the visual query expansion state of the art on popular benchmarks. It is also the first query expansion technique shown effective on the UKB benchmark, which has few relevant images per query. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hamming Embedding and Its Improvements <s> We consider the design of a single vector representation for an image that embeds and aggregates a set of local patch descriptors such as SIFT. More specifically we aim to construct a dense representation, like the Fisher Vector or VLAD, though of small or intermediate size. We make two contributions, both aimed at regularizing the individual contributions of the local descriptors in the final representation. The first is a novel embedding method that avoids the dependency on absolute distances by encoding directions. The second contribution is a "democratization" strategy that further limits the interaction of unrelated descriptors in the aggregation stage. These methods are complementary and give a substantial performance boost over the state of the art in image search with short or mid-size vectors, as demonstrated by our experiments on standard public image retrieval benchmarks. <s> BIB011
|
The discriminative ability of visual words in medium-sized codebooks lies in between that of small and large codebooks, so it is important to compensate for the information loss incurred during quantization. To this end, a milestone work, Hamming embedding (HE), has been dominantly employed. Proposed by Jégou et al. BIB001 , HE greatly improves the discriminative ability of visual words under medium-sized codebooks. HE first maps a SIFT descriptor f ∈ R^p from the p-dimensional space to a p_b-dimensional space, x = Pf = (x_1, ..., x_{p_b}), where P ∈ R^{p_b×p} is a projection matrix and x is a low-dimensional vector. Matrix P is obtained by creating a matrix of random Gaussian values, applying a QR factorization to it, and taking the first p_b rows of the resulting orthogonal matrix. To binarize x, Jégou et al. propose to compute the median vector x̄_i = (x̄_{1,i}, ..., x̄_{p_b,i}) of the low-dimensional vectors using the descriptors falling in each Voronoi cell c_i. Given a descriptor f and its projected vector x, HE computes its visual word c_t, and the HE binary vector is computed as b_j(x) = 1 if x_j > x̄_{j,t}, and b_j(x) = 0 otherwise, where b(x) = (b_1(x), ..., b_{p_b}(x)) is the resulting HE vector of dimension p_b. The binary feature b(x) serves as a secondary check for feature matching: a pair of local features is regarded as a true match when two criteria are satisfied, 1) identical visual words and 2) a small Hamming distance between their HE signatures. The extension of HE BIB002 estimates the matching strength between features f_1 and f_2, decreasing with the Hamming distance, by an exponential function w(f_1, f_2) = exp(−H(b(x_1), b(x_2))² / γ²), where b(x_1) and b(x_2) are the HE binary vectors of f_1 and f_2, respectively, H(·,·) computes the Hamming distance between two binary vectors, and γ is a weighting parameter. As shown in Fig. 6, HE BIB001 and its weighted version BIB002 improved accuracy considerably in 2008 and 2010. Applications of HE include video copy detection BIB003 , image classification BIB006 and re-ranking BIB010 . For example, in image classification, patch matching similarity is efficiently estimated by HE, which is integrated into a linear kernel-based SVM BIB006 . In image re-ranking, Tolias et al. BIB010 use lower HE thresholds to find strict correspondences which resemble those found by RANSAC, and the resulting image subset is more likely to contain true positives for query reformulation. Improvements over HE have been proposed in a number of works, especially from the view of the match kernel BIB008 . To reduce the information loss on the query side, Jain et al. BIB005 propose a vector-to-binary distance comparison, which exploits the vector-to-hyperplane distance while retaining the efficiency of the inverted index. Further, Qin et al. BIB009 design a higher-order match kernel within a probabilistic framework and adaptively normalize the local feature distances by the distance distribution of false matches. This method is similar in spirit to BIB004 , in which the word-word distance, instead of the feature-feature distance BIB009 , is normalized according to the neighborhood distribution of each visual word. While BIB004 regularizes the average distance from a word to its neighbors to be almost constant, the idea of democratizing the contributions of individual embeddings has later been employed in BIB011 . In BIB008 , Tolias et al. show that VLAD and HE share similar natures and propose a new match kernel which trades off between local feature aggregation and feature-to-feature matching, using a matching function similar to that of BIB009 .
They also demonstrate that using more bits (e.g., 128) in HE is superior to the original 64-bit scheme, at the cost of decreased efficiency. Even more bits (256) are used in BIB007 , but this method may be prone to relatively low recall.
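To make the HE pipeline described above concrete, here is a minimal sketch in NumPy covering the random orthogonal projection, the per-cell median thresholds, binarization, and the thresholded exponential weighting of the weighted version. The 64-bit signature length follows the original scheme, while the Hamming threshold (24) and γ (16) are illustrative placeholders, not the tuned values of BIB001 , BIB002 .

```python
import numpy as np

rng = np.random.default_rng(0)
p, p_b = 128, 64          # SIFT dimension, HE signature length in bits

# Projection matrix P: QR-factorize a random Gaussian matrix and keep
# the first p_b rows of the orthogonal factor.
Q, _ = np.linalg.qr(rng.normal(size=(p, p)))
P = Q[:p_b]               # shape: (p_b, p)

def he_thresholds(cell_descriptors):
    """Median vector of the projected descriptors assigned to one
    Voronoi cell; one p_b-dimensional threshold vector per visual word."""
    return np.median(cell_descriptors @ P.T, axis=0)

def he_signature(f, tau):
    """Binarize a projected descriptor against its cell's medians."""
    return (P @ f > tau).astype(np.uint8)

def he_match_weight(b1, b2, h_max=24, gamma=16.0):
    """Thresholded exponential (Gaussian) weighting of the Hamming
    distance, in the spirit of weighted HE; h_max and gamma are
    illustrative values, not those of the cited papers."""
    h = np.count_nonzero(b1 != b2)
    return np.exp(-(h ** 2) / gamma ** 2) if h <= h_max else 0.0
```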
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> The aim of salient feature detection is to find distinctive local events in images. Salient features are generally determined from the local differential structure of images. They focus on the shape-saliency of the local neighborhood. The majority of these detectors are luminance-based, which has the disadvantage that the distinctiveness of the local color information is completely ignored in determining salient image features. To fully exploit the possibilities of salient point detection in color images, color distinctiveness should be taken into account in addition to shape distinctiveness. In this paper, color distinctiveness is explicitly incorporated into the design of saliency detection. The algorithm, called color saliency boosting, is based on an analysis of the statistics of color image derivatives. Color saliency boosting is designed as a generic method easily adaptable to existing feature detectors. Results show that substantial improvements in information content are acquired by targeting color salient features. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> We investigate whether dimensionality reduction using a latent generative model is beneficial for the task of weakly supervised scene classification. In detail, we are given a set of labeled images of scenes (for example, coast, forest, city, river, etc.), and our objective is to classify a new image into one of these categories. Our approach consists of first discovering latent "topics" using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature here applied to a bag of visual words representation for each image, and subsequently, training a multiway classifier on the topic distribution vector for each image. We compare this approach to that of representing each image by a bag of visual words vector directly and training a multiway classifier on these vectors. To this end, we introduce a novel vocabulary using dense color SIFT descriptors and then investigate the classification performance under changes in the size of the visual vocabulary, the number of latent topics learned, and the type of discriminative classifier used (k-nearest neighbor or SVM). We achieve superior classification performance to recent publications that have used a bag of visual word representation, in all cases, using the authors' own data sets and testing protocols. We also investigate the gain in adding spatial information. We show applications to image retrieval with relevance feedback and to scene classification in videos. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way.
The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> This paper investigates the use of color information when used within a state-of-the-art large scale image search system. We introduce a simple yet effective and efficient color signature generation procedure. It is used either to produce global or local descriptors. As a global descriptor, it outperforms several state-of-the-art color description methods, in particular the bag-of-words method based on color SIFT. As a local descriptor, our signature is used jointly with SIFT descriptors (no color) to provide complementary information. This significantly improves the recognition rate, outperforming the state of the art on two image search benchmarks. We provide an open source package of our signature (http://www.kooaba.com/en/learnmore/labs/). <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> Visual reranking has been widely deployed to refine the quality of conventional content-based image retrieval engines. The current trend lies in employing a crowd of retrieved results stemming from multiple feature modalities to boost the overall performance of visual reranking. However, a major challenge pertaining to current reranking methods is how to take full advantage of the complementary property of distinct feature modalities.
Given a query image and one feature modality, a regular visual reranking framework treats the top-ranked images as pseudo positive instances which are inevitably noisy, difficult to reveal this complementary property, and thus lead to inferior ranking performance. This paper proposes a novel image reranking approach by introducing a Co-Regularized Multi-Graph Learning (Co-RMGL) framework, in which the intra-graph and inter-graph constraints are simultaneously imposed to encode affinities in a single graph and consistency across different graphs. Moreover, weakly supervised learning driven by image attributes is performed to denoise the pseudo-labeled instances, thereby highlighting the unique strength of individual feature modality. Meanwhile, such learning can yield a few anchors in graphs that vitally enable the alignment and fusion of multiple graphs. As a result, an edge weight matrix learned from the fused graph automatically gives the ordering to the initially retrieved results. We evaluate our approach on four benchmark image retrieval datasets, demonstrating a significant performance gain over the state-of-the-arts. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> This paper aims for generic instance search from one example where the instance can be an arbitrary 3D object like shoes, not just near-planar and one-sided instances like buildings and logos. Firstly, we evaluate state-of-the-art instance search methods on this problem. We observe that what works for buildings loses its generality on shoes. Secondly, we propose to use automatically learned category-specific attributes to address the large appearance variations present in generic instance search. On the problem of searching among instances from the same category as the query, the category-specific attributes outperform existing approaches by a large margin. On a shoe dataset containing 6624 shoe images recorded from all viewing angles, we improve the performance from 36.73 to 56.56 using category-specific attributes. 
Thirdly, we extend our methods to search objects without restricting to the specifically known category. We show the combination of category-level information and the category-specific attributes is superior to combining category-level information with low-level features such as Fisher vector. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Fusion <s> Feature fusion has been proven effective [35, 36] in image search. Typically, it is assumed that the to-be-fused heterogeneous features work well by themselves for the query. However, in a more realistic situation, one does not know in advance whether a feature is effective or not for a given query. As a result, it is of great importance to identify feature effectiveness in a query-adaptive manner. <s> BIB009
|
Local-Local Fusion. A problem with the SIFT feature is that it only describes the local gradient distribution; other discriminative information encoded in an image is left unexploited. In Fig. 5B, a pair of false matches cannot be rejected by HE due to their similarity in the SIFT space, but the fusion of other local (or regional) features may correct this problem. A good choice for local-local fusion is to couple SIFT with color descriptors. The usage of color-SIFT descriptors can partially address the trade-off between invariance and discriminative ability. Descriptors such as HSV-SIFT BIB002 , HueSIFT BIB001 and OpponentSIFT BIB003 have been evaluated on several recognition benchmarks BIB003 . Both HSV-SIFT and HueSIFT are scale-invariant and shift-invariant. OpponentSIFT describes all the channels in the opponent color space using the SIFT descriptor and is largely robust to light color changes. In BIB003 , OpponentSIFT is recommended when no prior knowledge about the datasets is available. In more recent works, binary color signatures are stored in the inverted index BIB007 , BIB004 . Despite the good retrieval accuracy on some datasets, a potential problem is that intensive variation in illumination may compromise the effectiveness of color. Local-Global Fusion. Local and global features describe images from different aspects and can be complementary. In Fig. 5C, when local (and regional) cues are not enough to reject a false match pair, it is effective to further incorporate visual information from a larger context scale. Early and late fusion are two possible ways. In early fusion, the image neighborhood relationship mined by global features such as FC8 in AlexNet BIB005 is fused into the SIFT-based inverted index. In late fusion, Zhang et al. build an offline graph for each type of feature, which is subsequently fused during the online query. In an improvement of this scheme, Deng et al. BIB006 add weakly supervised anchors to aid graph fusion. Both methods operate on the rank level. For score-level fusion, automatically learned category-specific attributes are combined with pre-trained category-level information BIB008 . Zheng et al. BIB009 propose query-adaptive late fusion, extracting a number of features (local or global, good or bad) and weighting them in a query-adaptive manner; a sketch of score-level late fusion is given below.
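As an illustration of score-level late fusion, the sketch below normalizes per-feature score lists and weights each feature by the shape of its ranked score curve. The weighting heuristic is a crude stand-in for the query-adaptive effectiveness estimate of BIB009 , not a reimplementation of it; the function name and all parameters are assumptions.

```python
import numpy as np

def late_fuse(score_lists, top_k=10):
    """Score-level late fusion of several features for one query.

    Each entry of score_lists holds similarity scores of all database
    images under one feature. The weight below (contrast between the
    head and the tail of the sorted score curve) is only a crude proxy
    for per-query feature effectiveness, not the estimator of BIB009.
    """
    fused = np.zeros_like(score_lists[0], dtype=np.float64)
    for scores in score_lists:
        s = (scores - scores.min()) / (np.ptp(scores) + 1e-12)  # min-max
        ranked = np.sort(s)[::-1]
        # An "L-shaped" curve (few high scores, flat tail) suggests the
        # feature is discriminative for this particular query.
        weight = ranked[:top_k].mean() - ranked[len(ranked) // 2:].mean()
        fused += max(weight, 0.0) * s
    return fused  # rank database images by fused score, descending

# Example: fuse a local-feature score list with a global-feature one.
rng = np.random.default_rng(0)
fused_scores = late_fuse([rng.random(1000), rng.random(1000)])
```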
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints, is studied. A new set of image elements that are put into correspondence, the so-called extremal regions, is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> In this paper, we draw an analogy between image retrieval and text retrieval and propose a visual phrase-based approach to retrieve images containing desired objects. The visual phrase is defined as a pair of adjacent local image patches and is constructed using data mining. We devise methods on how to construct visual phrases from images and how to encode the visual phrase for indexing and retrieval. Our experiments demonstrate that visual phrase-based retrieval approach can be very efficient and can be 20% more effective than its visual word-based counterpart. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Given a query image of an object, our objective is to retrieve all instances of that object in a large (1M+) image database. We adopt the bag-of-visual-words architecture which has proven successful in achieving high precision at low recall. Unfortunately, feature detection and quantization are noisy processes and this can result in variation in the particular visual words that appear in different images of the same object, leading to missed results. In the text retrieval literature a standard method for improving performance is query expansion. A number of the highly ranked documents from the original query are reissued as a new query. In this way, additional relevant terms can be added to the query. This is a form of blind relevance feedback and it can fail if 'outlier' (false positive) documents are included in the reissued query. In this paper we bring query expansion into the visual domain via two novel contributions.
Firstly, strong spatial constraints between the query image and each result allow us to accurately verify each return, suppressing the false positives which typically ruin text-based query expansion. Secondly, the verified images can be used to learn a latent feature model to enable the controlled construction of expanded queries. We illustrate these ideas on the 5000 annotated image Oxford building database together with more than 1M Flickr images. We show that the precision is substantially boosted, achieving total recall in many cases. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> A visual word lexicon can be constructed by clustering primitive visual features, and a visual object can be described by a set of visual words. Such a "bag-of-words" representation has led to many significant results in various vision tasks including object recognition and categorization. However, in practice, the clustering of primitive visual features tends to result in synonymous visual words that over-represent visual patterns, as well as polysemous visual words that bring large uncertainties and ambiguities in the representation. This paper aims at generating a higher-level lexicon, i.e. visual phrase lexicon, where a visual phrase is a meaningful spatially co-occurrent pattern of visual words. This higher-level lexicon is much less ambiguous than the lower-level one. The contributions of this paper include: (1) a fast and principled solution to the discovery of significant spatial co-occurrent patterns using frequent itemset mining; (2) a pattern summarization method that deals with the compositional uncertainties in visual phrases; and (3) a top-down refinement scheme of the visual word lexicon by feeding back discovered phrases to tune the similarity measure through metric learning. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Known object recognition is the task of recognizing specific objects, such as cereal boxes or soda cans. Millions of such objects exist, and finding a computationally feasible method for recognition can be difficult. 
Ideally, the computational costs should scale with the complexity of the testing image, and not the size of the object database. To accomplish this goal we propose a method for detection and recognition based on triplets of feature descriptors. Each feature is given a label based on a modified K-means clustering algorithm. Object matching is then done by inverse lookup within a table of possible triplets. The ambiguity of the matches is further reduced by having each triplet vote on its proposed object center. For planar objects, the proposed object centers should cluster at a single point. In general, assuming orthographic projection, the proposed centers will lie along a line. If enough triplets are in agreement on a specific object’s center, the object is labeled as detected. Our algorithm has been evaluated on a new database with 118 training objects and various testing scenarios. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> There has been recent progress on the problem of recognizing specific objects in very large datasets. The most common approach has been based on the bag-of-words (BOW) method, in which local image features are clustered into visual words. This can provide significant savings in memory compared to storing and matching each feature independently. In this paper we take an additional step to reducing memory requirements by selecting only a small subset of the training features to use for recognition. This is based on the observation that many local features are unreliable or represent irrelevant clutter. We are able to select “useful” features, which are both robust and distinctive, by an unsupervised preprocessing step that identifies correctly matching features among the training images. We demonstrate that this selection approach allows an average of 4% of the original features per image to provide matching performance that is as accurate as the full set. In addition, we employ a graph to represent the matching relationships between images. Doing so enables us to effectively augment the feature set for each image through merging of useful features of neighboring images. We demonstrate adjacent and 2-adjacent augmentation, both of which give a substantial boost in performance. 
<s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> The Bag-of-visual Words (BoW) image representation has been applied for various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to the words in texts. However, massive experiments show that the commonly used visual words are not as expressive as the text words, which is not desirable because it hinders their effectiveness in various applications. In this paper, Descriptive Visual Words (DVWs) and Descriptive Visual Phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to the frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, novel descriptive visual element set can be composed by the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs from classic visual words for various applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive to certain scenes or objects are identified as the DVWs and DVPs. Experiments show that the DVWs and DVPs are compact and descriptive, thus are more comparable with the text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including image retrieval, image re-ranking, and object recognition. The DVW and DVP combination outperforms the classic visual words by 19.5% and 80% in image retrieval and object recognition tasks, respectively. The DVW and DVP based image re-ranking algorithm: DWPRank outperforms the state-of-the-art VisualRank by 12.4% in accuracy and about 11 times faster in efficiency. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> We describe an algorithm for similar-image search which is designed to be efficient for extremely large collections of images. For each query, a small response set is selected by a fast prefilter, after which a more accurate ranker may be applied to each image in the response set. We consider a class of prefilters comprising disjunctions of conjunctions (“ORs of ANDs”) of Boolean features. AND filters can be implemented efficiently using skipped inverted files, a key component of Web-scale text search engines. These structures permit search in time proportional to the response set size. The prefilters are learned from training examples, and refined at query time to produce an approximately bounded response set. We cast prefiltering as an optimization problem: for each test query, select the OR-of-AND filter which maximizes training-set recall for an adjustable bound on response set size. This may be efficiently implemented by selecting from a large pool of candidate conjunctions of Boolean features using a linear program relaxation. Tests on object class recognition show that this relatively simple filter is nevertheless powerful enough to capture some semantic information. 
<s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> In state-of-the-art image retrieval systems, an image is represented by a bag of visual words obtained by quantizing high-dimensional local image descriptors, and scalable schemes inspired by text retrieval are then applied for large scale image indexing and retrieval. Bag-of-words representations, however: 1) reduce the discriminative power of image features due to feature quantization; and 2) ignore geometric relationships among visual words. Exploiting such geometric constraints, by estimating a 2D affine transformation between a query image and each candidate image, has been shown to greatly improve retrieval precision but at high computational cost. In this paper we present a novel scheme where image features are bundled into local groups. Each group of bundled features becomes much more discriminative than a single feature, and within each group simple and robust geometric constraints can be efficiently enforced. Experiments in Web image search, with a database of more than one million images, show that our scheme achieves a 49% improvement in average precision over the baseline bag-of-words approach. Retrieval performance is comparable to existing full geometric verification approaches while being much less computationally expensive. When combined with full geometric verification we achieve a 77% precision improvement over the baseline bag-of-words approach, and a 24% improvement over full geometric verification alone. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Given a large-scale collection of images our aim is to efficiently associate images which contain the same entity, for example a building or object, and to discover the significant entities. To achieve this, we introduce the Geometric Latent Dirichlet Allocation (gLDA) model for unsupervised discovery of particular objects in unordered image collections. This explicitly represents images as mixtures of particular objects or facades, and builds rich latent topic models which incorporate the identity and locations of visual words specific to the topic in a geometrically consistent way. Applying standard inference techniques to this model enables images likely to contain the same object to be probabilistically grouped and ranked. Additionally, to reduce the computational cost of applying the gLDA model to large datasets, we propose a scalable method that first computes a matching graph over all the images in a dataset. This matching graph connects images that contain the same object, and rough image groups can be mined from this graph using standard clustering techniques. The gLDA model can then be applied to generate a more nuanced representation of the data. We also discuss how "hub images" (images representative of an object or landmark) can easily be extracted from our matching graph representation. We evaluate our techniques on the publicly available Oxford buildings dataset (5K images) and show examples of automatically mined objects. The methods are evaluated quantitatively on this dataset using a ground truth labeling for a number of Oxford landmarks. To demonstrate the scalability of the matching graph method, we show qualitative results on two larger datasets of images taken of the Statue of Liberty (37K images) and Rome (1M+ images).
<s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Most effective particular object and image retrieval approaches are based on the bag-of-words (BoW) model. All state-of-the-art retrieval results have been achieved by methods that include a query expansion that brings a significant boost in performance. We introduce three extensions to automatic query expansion: (i) a method capable of preventing tf-idf failure caused by the presence of sets of correlated features (confusers), (ii) an improved spatial verification and re-ranking step that incrementally builds a statistical model of the query object and (iii) we learn relevant spatial context to boost retrieval performance. The three improvements of query expansion were evaluated on standard Paris and Oxford datasets according to a standard protocol, and state-of-the-art results were achieved. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> In this paper we introduce visual phrases, complex visual composites like “a person riding a horse”. Visual phrases often display significantly reduced visual complexity compared to their component objects, because the appearance of those objects can change profoundly when they participate in relations. We introduce a dataset suitable for phrasal recognition that uses familiar PASCAL object categories, and demonstrate significant experimental gains resulting from exploiting visual phrases. We show that a visual phrase detector significantly outperforms a baseline which detects component objects and reasons about relations, even though visual phrase training sets tend to be smaller than those for objects. We argue that any multi-class detection system must decode detector outputs to produce final results; this is usually done with non-maximum suppression. We describe a novel decoding procedure that can account accurately for local context without solving difficult inference problems. We show this decoding procedure outperforms the state of the art. Finally, we show that decoding a combination of phrasal and object detectors produces real improvements in detector results. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets. 
<s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Accurate matching of local features plays an essential role in visual object search. Instead of matching individual features separately, using the spatial context, e.g., bundling a group of co-located features into a visual phrase, has shown to enable more discriminative matching. Despite previous work, it remains a challenging problem to extract appropriate spatial context for matching. We propose a randomized approach to deriving visual phrase, in the form of spatial random partition. By averaging the matching scores over multiple randomized visual phrases, our approach offers three benefits: 1) the aggregation of the matching scores over a collection of visual phrases of varying sizes and shapes provides robust local matching; 2) object localization is achieved by simple thresholding on the voting map, which is more efficient than subimage search; 3) our algorithm lends itself to easy parallelization and also allows a flexible trade-off between accuracy and speed by adjusting the number of partition times. Both theoretical studies and experimental comparisons with the state-of-the-art methods validate the advantages of our approach. <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> One fundamental problem in object retrieval with the bag-of-visual words (BoW) model is its lack of spatial information. Although various approaches are proposed to incorporate spatial constraints into the BoW model, most of them are either too strict or too loose so that they are only effective in limited cases. We propose a new spatially-constrained similarity measure (SCSM) to handle object rotation, scaling, view point change and appearance deformation. The similarity measure can be efficiently calculated by a voting-based method using inverted files. Object retrieval and localization are then simultaneously achieved without post-processing. Furthermore, we introduce a novel and robust re-ranking method with the k-nearest neighbors of the query for automatically refining the initial search results. Extensive performance evaluations on six public datasets show that SCSM significantly outperforms other spatial models, while k-NN re-ranking outperforms most state-of-the-art approaches using query expansion. <s> BIB017 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Exploiting local feature shape has made geometry indexing possible, but at a high cost of index space, while a sequential spatial verification and re-ranking stage is still indispensable for large scale image retrieval. In this work we investigate an accelerated approach for the latter problem. We develop a simple spatial matching model inspired by Hough voting in the transformation space, where votes arise from single feature correspondences. Using a histogram pyramid, we effectively compute pair-wise affinities of correspondences without ever enumerating all pairs. Our Hough pyramid matching algorithm is linear in the number of correspondences and allows for multiple matching surfaces or non-rigid objects under one-to-one mapping. We achieve re-ranking one order of magnitude more images at the same query time with superior performance compared to state of the art methods, while requiring the same index space. We show that soft assignment is compatible with this matching scheme, preserving one-to-one mapping and further increasing performance. 
<s> BIB018 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Hough voting in a geometric transformation space allows us to realize spatial verification, but remains sensitive to feature detection errors because of the inflexible quantization of single feature correspondences. To handle this problem, we propose a new method, called adaptive dither voting, for robust spatial verification. For each correspondence, instead of hard-mapping it to a single transformation, the method augments its description by using multiple dithered transformations that are deterministically generated by the other correspondences. The method reduces the probability of losing correspondences during transformation quantization, and provides high robustness as regards mismatches by imposing three geometric constraints on the dithering process. We also propose exploiting the non-uniformity of a Hough histogram as the spatial similarity to handle multiple matching surfaces. Extensive experiments conducted on four datasets show the superiority of our method. The method outperforms its state-of-the-art counterparts in both accuracy and scalability, especially when it comes to the retrieval of small, rotated objects. <s> BIB019 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Spatial verification is a key step in boosting the performance of object-based image retrieval. It serves to eliminate unreliable correspondences between salient points in a given pair of images, and is typically performed by analyzing the consistency of spatial transformations between the image regions involved in individual correspondences. In this paper, we consider the pairwise geometric relations between correspondences and propose a strategy to incorporate these relations at significantly reduced computational cost, which makes it suitable for large-scale object retrieval. In addition, we combine the information on geometric relations from both the individual correspondences and pairs of correspondences to further improve the verification accuracy. Experimental results on three reference datasets show that the proposed approach results in a substantial performance improvement compared to the existing methods, without making concessions regarding computational efficiency. <s> BIB020 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> Spatial verification is a crucial part of every image retrieval system, as it accounts for the fact that geometric feature configurations are typically ignored by the Bag-of-Words representation. Since spatial verification quickly becomes the bottleneck of the retrieval process, runtime efficiency is extremely important. At the same time, spatial verification should be able to reliably distinguish between related and unrelated images. While methods based on RANSAC’s hypothesize-and-verify framework achieve high accuracy, they are not particularly efficient. Conversely, verification approaches based on Hough voting are extremely efficient but not as accurate. In this paper, we develop a novel spatial verification approach that uses an efficient voting scheme to identify promising transformation hypotheses that are subsequently verified and refined. Through comprehensive experiments, we show that our method is able to achieve a verification accuracy similar to state-of-the-art hypothesize-and-verify approaches while providing faster runtimes than state-of-the-art voting-based methods. 
<s> BIB021 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Geometric Matching <s> In object recognition, the Bag-of-Words model assumes: i) extraction of local descriptors from images, ii) embedding the descriptors by a coder to a given visual vocabulary space which results in mid-level features, iii) extracting statistics from mid-level features with a pooling operator that aggregates occurrences of visual words in images into signatures, which we refer to as First-order Occurrence Pooling. This paper investigates higher-order pooling that aggregates over co-occurrences of visual words. We derive Bag-of-Words with Higher-order Occurrence Pooling based on linearisation of Minor Polynomial Kernel, and extend this model to work with various pooling operators. This approach is then effectively used for fusion of various descriptor types. Moreover, we introduce Higher-order Occurrence Pooling performed directly on local image descriptors as well as a novel pooling operator that reduces the correlation in the image signatures. Finally, First-, Second-, and Third-order Occurrence Pooling are evaluated given various coders and pooling operators on several widely used benchmarks. The proposed methods are compared to other approaches such as Fisher Vector Encoding and demonstrate improved results. <s> BIB022
|
A frequent concern with the BoW model is the lack of geometric constraints among local features. Geometric verification can be used as a critical pre-processing step in various scenarios, such as query expansion BIB003 , BIB013 , feature selection BIB008 , database-side feature augmentation BIB015 , BIB008 , large-scale object mining BIB012 , etc. The most well-known method for global spatial verification is RANSAC BIB004 . It repeatedly estimates affine transformations from sampled correspondences and verifies each hypothesis by the number of inliers that fit the transformation. RANSAC is effective in re-ranking a subset of top-ranked images but has efficiency problems. As a result, how to efficiently and accurately incorporate spatial cues into the SIFT-based framework has been extensively studied. A good choice is to discover the spatial context among local features. For example, visual phrases BIB002 , BIB009 , BIB014 , BIB016 are generated among individual visual words to provide a stricter matching criterion. Visual word co-occurrences in the entire image are estimated BIB010 and aggregated BIB022 , while in BIB011 , BIB005 , BIB006 visual word clusters within local neighborhoods are discovered. Visual phrases can also be constructed from adjacent image patches BIB002 , random spatial partitioning BIB016 , and localized stable regions BIB011 such as MSER BIB001 . Another strategy uses voting to check geometric consistency. In the voting space, a bin with a larger value is more likely to represent the true transformation. An important work is weak geometric consistency (WGC) BIB007 , which focuses on the difference in scale and orientation between matched features. The space of differences is quantized into bins, and Hough voting is used to locate the subset of correspondences with similar scale or orientation differences. Many later works can be viewed as extensions of WGC. For example, the method of Zhang et al. can be viewed as WGC using x, y offsets instead of scale and orientation. This method is invariant to object translations, but may be sensitive to scale and rotation changes due to the rigid coordinate quantization. To regain scale and rotation invariance, Shen et al. BIB017 quantize the angle and scale of the query region after applying several transformations. A drawback of BIB017 is that query time and memory cost are both increased. To enable efficient voting and alleviate quantization artifacts, Hough pyramid matching (HPM) BIB018 distributes the matches over a hierarchical partition of the transformation space. HPM trades off between flexibility and accuracy and is very efficient. Quantization artifacts can also be reduced by allowing a single correspondence to vote for multiple bins BIB019 . HPM and BIB019 are much faster than RANSAC and can be viewed as extensions of the weak geometric consistency proposed along with Hamming Embedding BIB007 toward rotation and scale invariance. In BIB020 , a rough global estimate of orientation and scale changes is made by voting, which is used to verify the transformation obtained from the matched features. A recent method BIB021 combines the advantages of hypothesis-based methods such as RANSAC BIB004 and voting-based methods BIB018 , BIB019 , BIB020 . Promising hypotheses are identified by voting and later verified and refined. This method inherits efficiency from voting and supports query expansion since it outputs an explicit transformation and a set of inliers.
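To make the voting idea concrete, below is a minimal sketch of WGC-style consistency scoring. It assumes tentative SIFT correspondences whose scale and orientation attributes are already available; the bin counts, histogram ranges, and the min-of-peaks score are illustrative simplifications rather than the exact settings of BIB007.

```python
import numpy as np

def wgc_score(matches, n_scale_bins=8, n_angle_bins=8):
    """Score one candidate image by WGC-style Hough voting.

    `matches` is an (N, 4) array; each row holds
    (query_scale, query_angle, db_scale, db_angle) of one tentative
    correspondence (angles in radians, scales positive).
    """
    q_scale, q_angle, d_scale, d_angle = matches.T

    # Log-scale difference and orientation difference per correspondence.
    dscale = np.log2(d_scale / q_scale)           # assumed to lie in [-4, 4]
    dangle = (d_angle - q_angle) % (2 * np.pi)    # in [0, 2*pi)

    # Quantize the differences and vote in two 1-D Hough histograms.
    scale_hist = np.histogram(dscale, bins=n_scale_bins, range=(-4, 4))[0]
    angle_hist = np.histogram(dangle, bins=n_angle_bins, range=(0, 2 * np.pi))[0]

    # Geometrically consistent matches concentrate in one bin; the smaller
    # of the two peaks bounds the number of correspondences agreeing on a
    # single similarity transformation.
    return min(scale_hist.max(), angle_hist.max())
```

A candidate image whose tentative matches agree on a single scale/orientation change thus scores higher than one with the same number of randomly scattered matches, which is exactly the effect that makes voting far cheaper than full RANSAC verification.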
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> Given a query image of an object, our objective is to retrieve all instances of that object in a large (1M+) image database. We adopt the bag-of-visual-words architecture which has proven successful in achieving high precision at low recall. Unfortunately, feature detection and quantization are noisy processes and this can result in variation in the particular visual words that appear in different images of the same object, leading to missed results. In the text retrieval literature a standard method for improving performance is query expansion. A number of the highly ranked documents from the original query are reissued as a new query. In this way, additional relevant terms can be added to the query. This is a form of blind relevance feedback and it can fail if 'outlier' (false positive) documents are included in the reissued query. In this paper we bring query expansion into the visual domain via two novel contributions. Firstly, strong spatial constraints between the query image and each result allow us to accurately verify each return, suppressing the false positives which typically ruin text-based query expansion. Secondly, the verified images can be used to learn a latent feature model to enable the controlled construction of expanded queries. We illustrate these ideas on the 5000 annotated image Oxford building database together with more than 1M Flickr images. We show that the precision is substantially boosted, achieving total recall in many cases. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> Most effective particular object and image retrieval approaches are based on the bag-of-words (BoW) model. All state-of-the-art retrieval results have been achieved by methods that include a query expansion that brings a significant boost in performance.
We introduce three extensions to automatic query expansion: (i) a method capable of preventing tf-idf failure caused by the presence of sets of correlated features (confusers), (ii) an improved spatial verification and re-ranking step that incrementally builds a statistical model of the query object and (iii) we learn relevant spatial context to boost retrieval performance. The three improvements of query expansion were evaluated on standard Paris and Oxford datasets according to a standard protocol, and state-of-the-art results were achieved. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> This paper introduces a simple yet effective method to improve visual word based image retrieval. Our method is based on an analysis of the k-reciprocal nearest neighbor structure in the image space. At query time the information obtained from this process is used to treat different parts of the ranked retrieval list with different distance measures. This leads effectively to a re-ranking of retrieved images. As we will show, this has two benefits: first, using different similarity measures for different parts of the ranked list allows for compensation of the “curse of dimensionality”. Second, it allows for dealing with the uneven distribution of images in the data space. Dealing with both challenges has very beneficial effect on retrieval accuracy. Furthermore, a major part of the process happens offline, so it does not affect speed at retrieval time. Finally, the method operates on the bag-of-words level only, thus it could be combined with any additional measures on e.g. either descriptor level or feature geometry making room for further improvement. We evaluate our approach on common object retrieval benchmarks and demonstrate a significant improvement over standard bag-of-words retrieval. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> One fundamental problem in object retrieval with the bag-of-visual words (BoW) model is its lack of spatial information. Although various approaches are proposed to incorporate spatial constraints into the BoW model, most of them are either too strict or too loose so that they are only effective in limited cases. 
We propose a new spatially-constrained similarity measure (SCSM) to handle object rotation, scaling, view point change and appearance deformation. The similarity measure can be efficiently calculated by a voting-based method using inverted files. Object retrieval and localization are then simultaneously achieved without post-processing. Furthermore, we introduce a novel and robust re-ranking method with the k-nearest neighbors of the query for automatically refining the initial search results. Extensive performance evaluations on six public datasets show that SCSM significantly outperforms other spatial models, while k-NN re-ranking outperforms most state-of-the-art approaches using query expansion. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> This paper proposes a query expansion technique for image search that is faster and more precise than the existing ones. An enriched representation of the query is obtained by exploiting the binary representation offered by the Hamming Embedding image matching approach: The initial local descriptors are refined by aggregating those of the database, while new descriptors are produced from the images that are deemed relevant. The technique has two computational advantages over other query expansion techniques. First, the size of the enriched representation is comparable to that of the initial query. Second, the technique is effective even without using any geometry, in which case searching a database comprising 105k images typically takes 80 ms on a desktop machine. Overall, our technique significantly outperforms the visual query expansion state of the art on popular benchmarks. It is also the first query expansion technique shown effective on the UKB benchmark, which has few relevant images per query. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. 
Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Query Expansion <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB010
|
As a post-processing step, query expansion (QE) significantly improves retrieval accuracy. In a nutshell, a number of top-ranked images from the original rank list are employed to issue a new query, which is in turn used to obtain a new rank list. QE allows additional discriminative features to be added to the original query, thus improving recall. In instance retrieval, Chum et al. BIB001 are the first to exploit this idea. They propose average query expansion (AQE), which averages the features of the top-ranked images to issue the new query. Usually, spatial verification BIB002 is employed for re-ranking and for obtaining the ROIs from which the local features undergo average pooling. AQE is used by many later works BIB008 , BIB009 , BIB010 as a standard tool. The recursive AQE and the scale-band recursive QE are effective improvements but incur more computational cost BIB001 . Four years later, Chum et al. BIB003 improve QE from the perspectives of learning background confusers, expanding the query region, and incremental spatial verification. In BIB005 , a linear SVM is trained online using the top-ranked and bottom-ranked images as positive and negative training samples, respectively. The learned weight vector is used to compute the average query. Other important extensions include "hello neighbor" based on reciprocal neighbors BIB004 , QE with rank-based weighting BIB006 , Hamming QE BIB007 (see Section 3.5), etc.
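For concreteness, a minimal sketch of AQE on global image descriptors is given below. It assumes L2-normalized vectors and omits the spatial verification of the top-ranked images used in BIB001; all names are illustrative.

```python
import numpy as np

def average_query_expansion(query_vec, db_vecs, k=10):
    """Minimal AQE in the spirit of Chum et al. BIB001.

    `query_vec`: (D,) L2-normalized global descriptor of the query.
    `db_vecs`:   (N, D) L2-normalized descriptors of the database images.
    Returns database indices ranked with the expanded query.
    """
    # Initial search: with normalized vectors, cosine similarity is a
    # plain dot product.
    sims = db_vecs @ query_vec
    top_k = np.argsort(-sims)[:k]

    # Issue a new query: the mean of the query and its top-k neighbors.
    # (In BIB001 the neighbors are first spatially verified; omitted here.)
    expanded = query_vec + db_vecs[top_k].sum(axis=0)
    expanded /= np.linalg.norm(expanded)

    # Second-round search with the expanded query improves recall.
    return np.argsort(-(db_vecs @ expanded))
```

The averaging step is why QE boosts recall: features of relevant database images that the query itself lacks are folded into the reissued query, at the cost of one extra search round.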
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> We propose a novel hashing scheme for image retrieval, clustering and automatic object discovery. Unlike commonly used bag-of-words approaches, the spatial extent of image features is exploited in our method. The geometric information is used both to construct repeatable hash keys and to increase the discriminability of the description. Each hash key combines visual appearance (visual words) with semi-local geometric information. Compared with the state-of-the-art min-hash, the proposed method has both higher recall (probability of collision for hashes on the same object) and lower false positive rates (random collisions). The advantages of geometric min-hashing approach are most pronounced in the presence of viewpoint and scale change, significant occlusion or small physical overlap of the viewing fields. We demonstrate the power of the proposed method on small object discovery in a large unordered collection of images and on a large scale image clustering problem. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> We propose a scalable logo recognition approach that extends the common bag-of-words model and incorporates local geometry in the indexing process. Given a query image and a large logo database, the goal is to recognize the logo contained in the query, if any. We locally group features in triples using multi-scale Delaunay triangulation and represent triangles by signatures capturing both visual appearance and local geometry. Each class is represented by the union of such signatures over all instances in the class. We see large scale recognition as a sub-linear search problem where signatures of the query image are looked up in an inverted index structure of the class models. We evaluate our approach on a large-scale logo recognition dataset with more than four thousand classes. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> Detecting logos in photos is challenging. A reason is that logos locally resemble patterns frequently seen in random images. We propose to learn a statistical model for the distribution of incorrect detections output by an image matching algorithm. It results in a novel scoring criterion in which the weight of correlated keypoint matches is reduced, penalizing irrelevant logo detections. In experiments on two very different logo retrieval benchmarks, our approach largely improves over the standard matching criterion as well as other state-of-the-art approaches. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> We present a scalable logo recognition technique based on feature bundling. Individual local features are aggregated with features from their spatial neighborhood into bundles. These bundles carry more information about the image content than single visual words. The recognition of logos in novel images is then performed by querying a database of reference images. We further propose a novel WGC-constrained RANSAC and a technique that boosts recall for object retrieval by synthesizing images from original query or reference images. We demonstrate the benefits of these techniques for both small object retrieval and logo recognition. Our logo recognition system clearly outperforms the current state-of-the-art with a recall of 83% at a precision of 99%. 
<s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> Visual object retrieval aims at retrieving, from a collection of images, all those in which a given query object appears. It is inherently asymmetric: the query object is mostly included in the database image, while the converse is not necessarily true. However, existing approaches mostly compare the images with symmetrical measures, without considering the different roles of query and database. This paper first measures the extent of asymmetry on large-scale public datasets reflecting this task. Considering the standard bag-of-words representation, we then propose new asymmetrical dissimilarities accounting for the different inlier ratios associated with query and database images. These asymmetrical measures depend on the query, yet they are compatible with an inverted file structure, without noticeably impacting search efficiency. Our experiments show the benefit of our approach, and show that the visual object retrieval task is better treated asymmetrically, in the spirit of state-of-the-art text retrieval. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> Retrieving objects from a large image collection is challenging due to the so-called background-interference, i.e., matching between query object and reference images is usually confused by cluttered background, especially when objects are small. In this paper, we propose an object retrieval technique addressing this problem by partitioning the images. Specifically, several object proposals are partitioned from the images by jointly optimizing their objectness and coverage. The proposal set with maximum objectness score and minimum redundancy is obtained. Therefore, the interference of cluttered background is greatly reduced. Next, the objects are retrieved based on the partitioned proposals, separately and independently to the background. Our method is featured by the fine partitioning, which not only removes interferences from background, but also significantly reduces the number of objects to index. In this way, the effectiveness and efficiency are both achieved, which better suits big data retrieval. Subsequently, feature coding on partitioned objects generates much meaningful representation, and object level connectivity also introduces novel clues into the reranking. Extensive experiments on three popular object retrieval benchmark datasets (Oxford Buildings, Paris, Holiday) show the effectiveness of our method in retrieving small objects out of big data. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Small Object Retrieval <s> Query expansion is a popular method to improve the quality of image retrieval with both conventional and CNN representations. It has been so far limited to global image similarity. This work focuses on diffusion, a mechanism that captures the image manifold in the feature space. The diffusion is carried out on descriptors of overlapping image regions rather than on a global image descriptor like in previous approaches. An efficient off-line stage allows optional reduction in the number of stored regions. In the on-line stage, the proposed handling of unseen queries in the indexing stage removes additional computation to adjust the precomputed data. We perform diffusion through a sparse linear system solver, yielding practical query times well below one second.
Experimentally, we observe a significant boost in performance of image retrieval with compact CNN descriptors on standard benchmarks, especially when the query object covers only a small part of the image. Small objects have been a common failure case of CNN-based retrieval. <s> BIB007
|
Retrieving objects that cover only a small portion of an image is a challenging task due to 1) the few detected local features and 2) the large amount of background noise. The Instance Search task in the TRECVID campaign and the task of logo retrieval are important venues/applications for this problem. Generally speaking, both TRECVID and logo retrieval can be tackled with similar pipelines. For keypoint-based methods, the spatial context among the local features is important for discriminating target objects from others, especially in the case of rigid objects. Examples include BIB001 , BIB002 , BIB004 . Other effective methods include burstiness handling BIB003 (discussed in Section 3.4.3), considering the different inlier ratios between the query and target objects BIB005 , etc. In a second type of methods, effective region proposals BIB006 or multi-scale image patches BIB007 can be used as object region candidates. In BIB007 , a recent state-of-the-art method, a regional diffusion mechanism based on neighborhood graphs is proposed to further improve the recall of small objects.
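As a rough sketch of this proposal-based strategy (not the exact method of BIB006 or the regional diffusion of BIB007), matching the query against pooled region candidates instead of whole images can be written as follows; the max-over-regions scoring and all names are illustrative assumptions.

```python
import numpy as np

def search_with_proposals(query_vec, db_region_vecs, db_region_ids):
    """Rank images by their best-matching region candidate.

    `query_vec`:      (D,) L2-normalized descriptor of the small query object.
    `db_region_vecs`: (R, D) L2-normalized descriptors pooled from region
                      proposals of all database images.
    `db_region_ids`:  (R,) integer index of the image each region belongs to.
    """
    region_sims = db_region_vecs @ query_vec
    n_images = db_region_ids.max() + 1

    # Image score = similarity of its best region, so a small object only
    # has to match one proposal instead of the whole cluttered image.
    image_scores = np.full(n_images, -np.inf)
    np.maximum.at(image_scores, db_region_ids, region_sims)
    return np.argsort(-image_scores)
```

The per-region index is larger than a per-image one, which is why BIB006 explicitly optimizes the proposal set for minimum redundancy before indexing.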
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark Krizhevsky et al. [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers.
We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper is made publicly available. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. 
A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. 
Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the feed-forward units activation of the trained network, at a certain layer of the network, is used as a generic representation of an input image for a task with relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. It includes parameters for training of the source ConvNet such as its architecture, distribution of the training data, etc. and also the parameters of feature extraction such as layer of the trained ConvNet, dimensionality reduction, etc. Then, by optimizing these factors, we show that significant improvements can be achieved on various (17) visual recognition tasks.
We further show that these visual recognition tasks can be categorically ordered based on their similarity to the source task such that a correlation between the performance of tasks and their similarity to the source task w.r.t. the proposed factors is observed. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Pre-Trained CNN Models <s> The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across various CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin. <s> BIB013
|
Popular CNN Architectures. Several CNN models serve as good choices for extracting features, including AlexNet BIB002 , VGGNet BIB004 , GoogleNet BIB009 and ResNet BIB011 , which are listed in Table 2 . Briefly, a CNN can be viewed as a set of non-linear functions and is composed of a number of layers such as convolution, pooling, non-linearities, etc. A CNN has a hierarchical structure: from bottom to top layers, the image undergoes convolution with filters, and the receptive field of these image filters increases. Filters in the same layer have the same size but different parameters. AlexNet BIB002 , with about 60 million parameters, was proposed earliest among these networks; it has five convolutional layers and three fully connected (FC) layers. It has 96 filters of size 11 × 11 × 3 in the first layer and 256 filters of size 3 × 3 × 192 in the fifth layer. Zeiler et al. BIB003 observe that the filters are sensitive to certain visual patterns and that these patterns evolve from low-level bars in bottom layers to high-level objects in top layers. For low-level, simple visual stimuli, the CNN filters act like the detectors in local hand-crafted features, but for high-level, complex stimuli, the CNN filters exhibit distinct characteristics that depart from SIFT-like detectors. AlexNet has been shown to be outperformed by newer networks such as VGGNet, which has the largest number of parameters. GoogleNet and ResNet won the ILSVRC 2014 and 2015 challenges, respectively, showing that CNNs become more effective with more layers. A full review of these networks is beyond the scope of this paper, and we refer readers to BIB002 , BIB005 , BIB004 for details. Datasets for Pre-Training. Several large-scale recognition datasets are used for CNN pre-training. Among them, the ImageNet dataset BIB001 is most commonly used. It contains 1.2 million images of 1,000 semantic classes and is usually thought of as being generic. Another data source for pre-training is the Places-205 dataset BIB006 , which is twice as large as ImageNet but has five times fewer classes. It is a scene-centric dataset depicting various indoor and outdoor scenes. A hybrid dataset combining the Places-205 and the ImageNet datasets has also been used for pre-training BIB006 . The resulting HybridNet is evaluated for instance retrieval in BIB009 , BIB011 , BIB012 . The Transfer Issue. Comprehensive evaluations of various CNNs on instance retrieval have been conducted in several recent works BIB012 , BIB013 . The transfer effect is the main concern: it is argued in BIB012 that instance retrieval, as a target task, lies farthest from the source task, i.e., ImageNet classification. These studies reveal some critical insights into the transfer process. First, during model transfer, features extracted from different layers exhibit different retrieval performance. Experiments confirm that the top layers may exhibit lower generalization ability than the layers below them. For example, for AlexNet pre-trained on ImageNet, it is shown that FC6, FC7, and FC8 are in descending order regarding retrieval accuracy BIB012 . It is also shown in BIB010 , BIB013 that the pool5 feature of AlexNet and VGGNet is even superior to FC6 when proper encoding techniques are employed. Second, the source training set is relevant to retrieval accuracy on different datasets. For example, Azizpour et al. BIB012 report that HybridNet yields the best performance on Holidays after PCA. They also observe that AlexNet pre-trained on ImageNet is superior to PlacesNet and HybridNet on the Ukbench dataset, which contains common objects instead of architectures or scenes. The similarity between the source and target data thus plays a critical role in instance retrieval when using a pre-trained CNN model.
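To make the layer-choice discussion concrete, the following is a minimal sketch of extracting a pool5-style descriptor from an ImageNet-pretrained VGG16 with PyTorch/torchvision. It is an illustrative setup under stated assumptions, not the exact protocol of the works above; the function name and the choice of max pooling are assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Truncate an ImageNet-pretrained VGG16 after its last conv block ("pool5"),
# discarding the FC layers whose features transfer less well to retrieval.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone = vgg.features.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def pool5_descriptor(path):
    """Global descriptor: max-pooled pool5 activations, L2-normalized."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    fmap = backbone(x)                       # (1, 512, 7, 7) feature maps
    vec = fmap.amax(dim=(2, 3)).squeeze(0)   # pool over the spatial grid
    return vec / vec.norm()
```

Swapping the spatial max pooling for average pooling, or taking FC6/FC7 activations from `vgg.classifier` instead, reproduces the kinds of layer comparisons reported in BIB012 and BIB013.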
|
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Many visual search and matching systems represent images using sparse sets of "visual words": descriptors that have been quantized by assignment to the best-matching symbol in a discrete vocabulary. Errors in this quantization procedure propagate throughout the rest of the system, either harming performance or requiring correction using additional storage or processing. This paper aims to reduce these quantization errors at source, by learning a projection from descriptor space to a new Euclidean space in which standard clustering techniques are more likely to assign matching descriptors to the same cluster, and nonmatching descriptors to different clusters. To achieve this, we learn a non-linear transformation model by minimizing a novel margin-based cost function, which aims to separate matching descriptors from two classes of non-matching descriptors. Training data is generated automatically by leveraging geometric consistency. Scalable, stochastic gradient methods are used for the optimization. For the case of particular object retrieval, we demonstrate impressive gains in performance on a ground truth dataset: our learnt 32-D descriptor without spatial re-ranking outperforms a baseline method using 128-D SIFT descriptors with spatial re-ranking. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Over the last decade, the availability of public image repositories and recognition benchmarks has enabled rapid progress in visual object category and instance detection. Today we are witnessing the birth of a new generation of sensing technologies capable of providing high quality synchronized videos of both color and depth, the RGB-D (Kinect-style) camera. With its advanced sensing capabilities and the potential for mass adoption, this technology represents an opportunity to dramatically increase robotic object recognition, manipulation, navigation, and interaction capabilities. In this paper, we introduce a large-scale, hierarchical multi-view object dataset collected using an RGB-D camera.
The dataset contains 300 objects organized into 51 categories and has been made publicly available to the research community so as to enable rapid progress based on this promising technology. This paper describes the dataset collection procedure and introduces techniques for RGB-D based object recognition and detection, demonstrating that combining color and depth information substantially improves quality of results. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image.
In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Latest results indicate that features learned via convolutional neural networks outperform previous descriptors on classification tasks by a large margin. It has been shown that these networks still work well when they are applied to datasets or recognition tasks different from those they were trained on. However, descriptors like SIFT are not only used in recognition but also for many correspondence problems that rely on descriptor matching. In this paper we compare features from various layers of convolutional neural nets to standard SIFT descriptors. We consider a network that was trained on ImageNet and another one that was trained without supervision. Surprisingly, convolutional neural networks clearly outperform SIFT on descriptor matching. 
This paper has been merged with arXiv:1406.6909 <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> The objective of this work is to learn descriptors suitable for the sparse feature detectors used in viewpoint invariant matching. We make a number of novel contributions towards this goal. First, it is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem selecting the regions using sparsity. Second, it is shown that descriptor dimensionality reduction can also be formulated as a convex optimisation problem, using Mahalanobis matrix nuclear norm regularisation. Both formulations are based on discriminative large margin learning constraints. As the third contribution, we evaluate the performance of the compressed descriptors, obtained from the learnt real-valued descriptors by binarisation. Finally, we propose an extension of our learning formulations to a weakly supervised case, which allows us to learn the descriptors from unannotated image collections. It is demonstrated that the new learning methods improve over the state of the art in descriptor learning on the annotated local patches data set of Brown et al. and unannotated photo collections of Philbin et al. . <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. 
Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Convolutional Neural Network (CNN) features have been successfully employed in recent works as an image descriptor for various vision tasks. But the inability of the deep CNN features to exhibit invariance to geometric transformations and object compositions poses a great challenge for image search. In this work, we demonstrate the effectiveness of the objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. This representation provides robustness to spatial layout of the objects in the scene and achieves invariance to general geometric transformations, such as translation, rotation and scaling. 
The proposed approach also leads to a compact representation of the scene, making each image occupy a smaller memory footprint. Experiments show that the proposed representation achieves state of the art retrieval results on a set of challenging benchmark image datasets, while maintaining a compact representation. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> In this paper we present an efficient and accurate method to aggregate a set of Deep Convolutional Neural Network (CNN) responses, extracted from a set of image windows. CNN features are usually computed on the whole frame or with a dense multi scale approach. There is evidence that using multiple windows yields a better image representation nonetheless it is still not clear how windows should be sampled and how CNN responses should be aggregated. Instead of sampling the image densely in scale and space we show that selecting a few hundred windows is enough to obtain an effective image signature. We show how to use Fisher Vectors and PCA to obtain a short and highly descriptive signature that can be used effectively for image retrieval. We test our method on two relevant computer vision tasks: image retrieval and image tagging. We report state-of-the art results for both tasks on three standard datasets. <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks. <s> BIB017 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design. <s> BIB018 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Patch-level descriptors underlie several important computer vision tasks, such as stereo-matching or content-based image retrieval. We introduce a deep convolutional architecture that yields patch-level descriptors, as an alternative to the popular SIFT descriptor for image retrieval. 
The proposed family of descriptors, called Patch-CKN, adapt the recently introduced Convolutional Kernel Network (CKN), an unsupervised framework to learn convolutional architectures. We present a comparison framework to benchmark current deep convolutional approaches along with Patch-CKN for both patch and image retrieval, including our novel "RomePatches" dataset. Patch-CKN descriptors yield competitive results compared to supervised CNN alternatives on patch and image retrieval. <s> BIB019 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> This paper considers the task of image search using the Bag-of-Words (BoW) model. In this model, the precision of visual matching plays a critical role. Conventionally, local cues of a keypoint, e.g., SIFT, are employed. However, such strategy does not consider the contextual evidences of a keypoint, a problem which would lead to the prevalence of false matches. To address this problem and enable accurate visual matching, this paper proposes to integrate discriminative cues from multiple contextual levels, i.e., local, regional, and global, via probabilistic analysis. "True match" is defined as a pair of keypoints corresponding to the same scene location on all three levels (Fig. 1). Specifically, the Convolutional Neural Network (CNN) is employed to extract features from regional and global patches. We show that CNN feature is complementary to SIFT due to its semantic awareness and compares favorably to several other descriptors such as GIST, HSV, etc. To reduce memory usage, we propose to index CNN features outside the inverted file, communicated by memory-efficient pointers. Experiments on three benchmark datasets demonstrate that our method greatly promotes the search accuracy when CNN feature is integrated. We show that our method is efficient in terms of time cost compared with the BoW baseline, and yields competitive accuracy with the state-of-the-arts. <s> BIB020 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across various CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin. <s> BIB021 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Image representations derived from pre-trained Convolutional Neural Networks (CNNs) have become the new state of the art in computer vision tasks such as instance retrieval. This work explores the suitability for instance retrieval of image- and region-wise representations pooled from an object detection CNN such as Faster R-CNN. 
We take advantage of the object proposals learned by a Region Proposal Network (RPN) and their associated CNN features to build an instance search pipeline composed of a first filtering stage followed by a spatial reranking. We further investigate the suitability of Faster R-CNN features when the network is fine-tuned for the same objects one wants to retrieve. We assess the performance of our proposed system with the Oxford Buildings 5k, Paris Buildings 6k and a subset of TRECVid Instance Search 2013, achieving competitive results. <s> BIB022 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB023 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. <s> BIB024 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> We introduce a novel Deep Network architecture that implements the full feature point handling pipeline, that is, detection, orientation estimation, and feature description. While previous works have successfully tackled each one of these problems individually, we show how to learn to do all three in a unified manner while preserving end-to-end differentiability. We then demonstrate that our Deep pipeline outperforms state-of-the-art methods on a number of benchmark datasets, without the need of retraining. <s> BIB025 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Extraction <s> State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. 
Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. <s> BIB026
FC Descriptors. The most straightforward idea is to extract the descriptor from the fully-connected layer of the network BIB005, BIB006, BIB020, e.g., the 4,096-dim FC6 or FC7 descriptor in AlexNet. The FC descriptor is generated after layers of convolutions with the input image, has a global receptive field, and can thus be viewed as a global feature. It yields fair retrieval accuracy under Euclidean distance and can be improved with power normalization BIB001.

Intermediate Local Features. Many recent retrieval methods BIB012, BIB013, BIB021 focus on local descriptors in the intermediate layers. In these methods, lower-level convolutional filters (kernels) are used to detect local visual patterns. Viewed as local detectors, these filters have a smaller receptive field and are densely applied over the entire image. Compared with the global FC feature, local detectors are more robust to image transformations such as truncation and occlusion, in ways similar to the local invariant detectors (Section 3.2). Local descriptors are tightly coupled with these intermediate local detectors, i.e., they are the responses of the input image to these convolution operations. In other words, after the convolutions, the resulting activation maps can be viewed as a feature ensemble, which is called the "column feature" in this survey. For example, in AlexNet BIB004 there are n = 96 detectors (convolutional filters) in the first convolutional layer. These filters produce n = 96 heat maps of size 27 × 27 (after max pooling). Each pixel in the maps has a receptive field of 19 × 19 and records the response of the image w.r.t. the corresponding filter BIB012, BIB013, BIB021. The column feature is therefore of size 1 × 1 × 96 (Fig. 2) and can be viewed as a description of a certain patch in the original image. Each dimension of this descriptor denotes the level of activation of the corresponding detector and resembles the SIFT descriptor to some extent. The column feature first appears in BIB007, where Razavian et al. perform max-pooling over regularly partitioned windows on the feature maps and then concatenate the results across all filter responses, yielding column-like features. In BIB014, column features from multiple layers of the network are concatenated, forming the "hypercolumn" feature.

In hybrid methods, the feature extraction process consists of patch detection and description steps. For the first step, the literature has seen three major types of region detectors. The first is grid image patches. For example, in BIB008 a two-scale sliding window strategy is employed to generate patches. In BIB005, the dataset images are first cropped and rotated, and then divided into patches of different scales, the union of which covers the whole image. The second type is invariant keypoint/region detectors. For instance, difference-of-Gaussian feature points are used in ; the MSER region detector is leveraged in BIB009. Third, region proposals also provide useful information on the locations of potential objects. Mopuri et al. BIB015 employ selective search to generate image patches, while EdgeBox BIB010 is used in BIB016. In BIB022, the region proposal network (RPN) BIB026 is applied to locate potential objects in an image. The use of CNN features as region descriptors is validated in BIB009, which shows that CNN is superior to SIFT in image matching except on blurred images. Given the image patches, hybrid CNN methods usually employ the FC or pooled intermediate CNN features, as sketched below.
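As a rough illustration of this hybrid pipeline, the following is a minimal sketch, assuming PyTorch and a recent torchvision. The two-scale grid and the choice of AlexNet's FC6 layer are illustrative assumptions, not the exact choices of the works cited above.

```python
# Hedged sketch of the hybrid pipeline: sample grid patches at two
# scales and describe each with an FC-layer activation (AlexNet FC6
# here, purely as an illustrative choice).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def grid_patches(img, scales=(1, 2)):
    """Crop an s x s grid at each scale; img is a (3, H, W) normalized
    tensor, and every patch is resized to the 224 x 224 network input."""
    _, H, W = img.shape
    patches = []
    for s in scales:
        ph, pw = H // s, W // s
        for i in range(s):
            for j in range(s):
                p = img[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw].unsqueeze(0)
                patches.append(F.interpolate(
                    p, size=(224, 224), mode="bilinear", align_corners=False))
    return torch.cat(patches)            # (N, 3, 224, 224)

def fc6_descriptors(batch):
    """Describe each patch by its 4,096-dim (post-ReLU) FC6 activation."""
    with torch.no_grad():
        x = model.features(batch)
        x = model.avgpool(x).flatten(1)
        # classifier[:3] = Dropout + Linear (FC6) + ReLU in torchvision's
        # AlexNet layout; Dropout is inactive in eval mode.
        return model.classifier[:3](x)   # (N, 4096)
```

Swapping grid_patches for a keypoint detector or a region-proposal generator reproduces the other two detector families while keeping the same description step.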
Examples using the FC descriptors include BIB005, BIB008, BIB015, BIB017. In these works, the 4,096-dim FC features are extracted from multi-scale image regions BIB005, BIB008, BIB017 or object proposals BIB015. On the other hand, Razavian et al. BIB007 also use the intermediate descriptors after max-pooling as region descriptors.

Reconstructed table fragment (instance-level training/fine-tuning datasets; the first dataset name and the column headers were lost in extraction, so the headers below are inferred):

    Dataset      Source   #Images     #Classes   Content
    (unnamed)    BIB023   163,671     713        landmark
    Tokyo TM     BIB024   112,623     n.a.       landmark
    MV RGB-D     BIB003   250,000     300        household object
    Product      BIB018   101,945×2   n.a.       furniture

The above methods use pre-trained models for patch feature extraction. Based on hand-crafted detectors, patch descriptors can also be learned through CNNs in either a supervised BIB019 or an unsupervised manner , which improves over previous works on SIFT descriptor learning BIB011, BIB002. Yi et al. BIB025 further propose an end-to-end learning method integrating the region detector, orientation estimator and feature descriptor in a single pipeline.
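Before moving to encoding and pooling, the sketch below makes the column feature concrete: it extracts the activation maps of the first convolutional stage of a pre-trained network and reshapes each spatial position into a local descriptor. It assumes PyTorch, a recent torchvision, and an illustrative image path; note that torchvision's AlexNet has 64 conv1 filters rather than the 96 of the original architecture, so the sizes differ from the numbers quoted earlier.

```python
# Minimal sketch of extracting "column features" from an intermediate
# convolutional layer, using torchvision's pre-trained AlexNet.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("query.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    # features[:3] is conv1 + ReLU + max-pooling in torchvision's AlexNet
    # (an assumption about this particular implementation's layer layout).
    fmap = model.features[:3](img)                 # (1, 64, 27, 27)

_, c, h, w = fmap.shape
# Each of the h*w spatial positions gives one c-dimensional column feature:
columns = fmap.squeeze(0).reshape(c, h * w).t()    # (729, 64)
```

Deeper layers can be used in the same way by slicing further into model.features; the trade-off is exactly the one discussed above, between semantic strength and localization.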
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> The paper addresses large scale image retrieval with short vector representations. We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different phases. We show and explicitly exploit relations between i) mean subtrac- tion and the negative evidence, i.e., a visual word that is mutually miss- ing in two descriptions being compared, and ii) the axis de-correlation and the co-occurrences phenomenon. Finally, we propose an effective way to alleviate the quantization artifacts through a joint dimensionality re- duction of multiple vocabularies. The proposed techniques are simple, yet significantly and consistently improve over the state of the art on compact image representations. Complementary experiments in image classification show that the methods are generally applicable. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. 
Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> Deep Convolutional Neural Networks (DCNN) have established a remarkable performance benchmark in the field of image classification, displacing classical approaches based on hand-tailored aggregations of local descriptors. Yet DCNNs impose high computational burdens both at training and at testing time, and training them requires collecting and annotating large amounts of training data. Supervised adaptation methods have been proposed in the literature that partially re-learn a transferred DCNN structure from a new target dataset. Yet these require expensive bounding-box annotations and are still computationally expensive to learn. In this paper, we address these shortcomings of DCNN adaptation schemes by proposing a hybrid approach that combines conventional, unsupervised aggregators such as Bag-of-Words (BoW), with the DCNN pipeline by treating the output of intermediate layers as densely extracted local descriptors. We test a variant of our approach that uses only intermediate DCNN layers on the standard PASCAL VOC 2007 dataset and show performance significantly higher than the standard BoW model and comparable to Fisher vector aggregation but with a feature that is 150 times smaller. A second variant of our approach that includes the fully connected DCNN layers significantly outperforms Fisher vector schemes and performs comparably to DCNN approaches adapted to Pascal VOC 2007, yet at only a small fraction of the training and testing cost. 
<s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It has also been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregation approaches developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptors. In this paper we investigate possible ways to aggregate local deep features to produce compact global descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides arguably the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> We propose a simple and straightforward way of creating powerful image representations via cross-dimensional weighting and aggregation of deep convolutional neural network layer outputs. We first present a generalized framework that encompasses a broad family of approaches and includes cross-dimensional pooling and weighting steps. We then propose specific non-parametric schemes for both spatial- and channel-wise weighting that boost the effect of highly active spatial responses and at the same time regulate burstiness effects. We experiment on different public datasets for image search and show that our approach outperforms the current state-of-the-art for approaches based on pre-trained networks. We also provide an easy-to-use, open source implementation that reproduces our results. 
<s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> This work proposes a simple instance retrieval pipeline based on encoding the convolutional features of CNN using the bag of words aggregation scheme (BoW). Assigning each local array of activations in a convolutional layer to a visual word produces an assignment map, a compact representation that relates regions of an image with a visual word. We use the assignment map for fast spatial reranking, obtaining object localizations that are used for query expansion. We demonstrate the suitability of the BoW representation based on local CNN features for instance retrieval, achieving competitive performance on the Oxford and Paris buildings benchmarks. We show that our proposed system for CNN feature aggregation with BoW outperforms state-of-the-art techniques using sum pooling at a subset of the challenging TRECVid INS benchmark. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across various CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Pooling <s> An increasing number of computer vision tasks can be tackled with deep features, which are the intermediate outputs of a pre-trained Convolutional Neural Network. 
Despite the astonishing performance, deep features extracted from low-level neurons are still below satisfaction, arguably because they cannot access the spatial context contained in the higher layers. In this paper, we present InterActive, a novel algorithm which computes the activeness of neurons and network connections. Activeness is propagated through a neural network in a top-down manner, carrying high-level context and improving the descriptive power of low-level and mid-level neurons. Visualization indicates that neuron activeness can be interpreted as spatial-weighted neuron responses. We achieve state-of-the-art classification performance on a wide range of image datasets. <s> BIB013
When column features are extracted, an image is represented by a set of descriptors. To aggregate these descriptors into a global representation, two strategies are currently adopted: encoding and direct pooling (Fig. 2).

Encoding. A set of column features resembles a set of SIFT features, so standard encoding schemes can be employed directly. The most commonly used methods are VLAD BIB001 and FV BIB002; a brief review of both can be found in Section 3.3.2. A milestone work is BIB005, in which column features are encoded into VLAD for the first time. This idea was later extended to CNN model fine-tuning BIB010. BoW encoding can also be leveraged, as is the case in BIB006: the column features within each layer are aggregated into a BoW vector, which is then concatenated across layers. An exception to these fixed-length representations is BIB011, in which the column features are quantized with a codebook of size 25k and an inverted index is employed for efficiency.

Pooling. A major difference between the CNN column feature and SIFT is that the former has an explicit meaning in each dimension, i.e., the response of a particular region of the input image to a filter. Therefore, apart from the encoding schemes above, direct pooling techniques can also produce discriminative features. A milestone work in this direction is the maximum activations of convolutions (MAC) descriptor proposed by Tolias et al. BIB007. Without distorting or cropping images, MAC computes a global descriptor with a single forward pass: it takes the maximum value of each intermediate feature map and concatenates these values within a convolutional layer. In its multi-region version, the integral image and an approximate maximum operator are used for fast computation; the regional MAC descriptors are then sum-pooled together with a series of normalization and PCA-whitening operations BIB003. Several other works BIB004, BIB012, BIB008 employ similar ideas, applying max or average pooling on the intermediate feature maps, with Razavian et al. BIB004 being the first. It has been observed that the last convolutional layer (e.g., pool5 in VGGNet) usually yields superior accuracy after pooling compared with the FC descriptors and the other convolutional layers BIB012. Apart from direct feature pooling, it is also beneficial to assign specific weights to the feature maps within each layer before pooling. In BIB008, Babenko et al. inject the prior knowledge that objects tend to be located toward image centers, and impose a 2-D Gaussian mask on the feature maps before sum pooling. Xie et al. BIB013 improve the MAC representation BIB007 by propagating high-level semantics and spatial context to low-level neurons, improving the descriptive ability of these bottom-layer activations. With a more general weighting strategy, Kalantidis et al. BIB009 perform both feature map-wise and channel-wise weighting, aiming to highlight highly active spatial responses while reducing burstiness effects.
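To ground these aggregation schemes, below is a minimal Python sketch (assuming PyTorch) of a bare-bones VLAD encoder for column features, MAC pooling, and sum pooling under a centered 2-D Gaussian mask in the spirit of BIB008. The codebook, the power-law normalization, and the Gaussian width are simplified assumptions rather than the cited papers' exact recipes.

```python
import torch

def vlad(columns, centroids):
    """VLAD-encode local descriptors (N, D) against a (K, D) codebook."""
    assign = torch.cdist(columns, centroids).argmin(dim=1)  # hard assignment
    K, D = centroids.shape
    v = torch.zeros(K, D)
    for k in range(K):
        members = columns[assign == k]
        if len(members) > 0:
            v[k] = (members - centroids[k]).sum(dim=0)      # residuals
    v = torch.sign(v) * v.abs().sqrt()                      # power-law norm
    v = v.flatten()
    return v / v.norm()                                     # global L2 norm

def mac(fmap):
    """Maximum activations of convolutions: (C, H, W) -> L2-normed (C,)."""
    v = fmap.amax(dim=(1, 2))
    return v / v.norm()

def gaussian_sum_pool(fmap, sigma_frac=1.0 / 3):
    """Sum pooling with a centered 2-D Gaussian mask: (C, H, W) -> (C,)."""
    c, h, w = fmap.shape
    ys = torch.arange(h, dtype=torch.float32) - (h - 1) / 2
    xs = torch.arange(w, dtype=torch.float32) - (w - 1) / 2
    sigma = sigma_frac * min(h, w)                          # assumed width
    mask = torch.exp(-(ys[:, None]**2 + xs[None, :]**2) / (2 * sigma**2))
    v = (fmap * mask).sum(dim=(1, 2))
    return v / v.norm()
```

In practice, the pooled or encoded vectors are typically PCA-whitened and L2-normalized again before Euclidean comparison, following BIB003.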