Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> Differential evolution (DE) has recently proven to be an efficient method for optimizing real-valued multi-modal objective functions. Besides its good convergence properties and suitability for parallelization, DE's main assets are its conceptual simplicity and ease of use. This paper describes several variants of DE and elaborates on the choice of DE's control parameters, which corresponds to the application of fuzzy rules. Finally, the design of a howling removal unit with DE is described to provide a real-world example for DE's applicability. <s> BIB001 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> The clearing procedure is a niching method inspired by the principle stated by J.H. Holland (1975) - that of sharing limited resources within subpopulations of individuals characterized by some similarities - but instead of evenly sharing the available resources among the individuals of a subpopulation, the clearing procedure supplies these resources only to the best individuals of each subpopulation. The clearing is naturally adapted to elitist strategies. This can significantly improve the performance of genetic algorithms (GAs) applied to multimodal optimization. Moreover, the clearing procedure allows a GA to efficiently reduce the genetic drift when used with an appropriate selection operator. Some experimental results are presented for a massively multimodal deceptive function optimization. <s> BIB002 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> Many real-world search and optimization problems involve inequality and/or equality constraints and are thus posed as constrained optimization problems. In trying to solve constrained optimization problems using genetic algorithms (GAs) or classical optimization methods, penalty function methods have been the most popular approach, because of their simplicity and ease of implementation. 
However, since the penalty function approach is generic and applicable to any type of constraint (linear or nonlinear), their performance is not always satisfactory. Thus, researchers have developed sophisticated penalty functions specific to the problem at hand and the search algorithm used for optimization. However, the most difficult aspect of the penalty function approach is to find appropriate penalty parameters needed to guide the search towards the constrained optimum. In this paper, GA's population-based approach and ability to make pair-wise comparison in tournament selection operator are exploited to devise a penalty function approach that does not require any penalty parameter. Careful comparisons among feasible and infeasible solutions are made so as to provide a search direction towards the feasible region. Once sufficient feasible solutions are found, a niching method (along with a controlled mutation operator) is used to maintain diversity among feasible solutions. This allows a real-parameter GA's crossover operator to continuously find better feasible solutions, gradually leading the search near the true optimum solution. GAs with this constraint handling approach have been tested on nine problems commonly used in the literature, including an engineering design problem. In all cases, the proposed approach has been able to repeatedly find solutions closer to the true optimum solution than that reported earlier. <s> BIB003 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> This paper introduces a new technique called species conservation for evolving parallel subpopulations. The technique is based on the concept of dividing the population into several species according to their similarity. Each of these species is built around a dominating individual called the species seed. Species seeds found in the current generation are saved (conserved) by moving them into the next generation. 
Our technique has proved to be very effective in finding multiple solutions of multimodal optimization problems. We demonstrate this by applying it to a set of test problems, including some problems known to be deceptive to genetic algorithms. <s> BIB004 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> Two test problems on multiobjective optimization (one simple general problem and the second one on an engineering application of cantilever design problem) are solved using differential evolution (DE). DE is a population based search algorithm, which is an improved version of the genetic algorithm (GA). Simulations carried out involved solving (1) both problems using the penalty function method, and (2) the first problem using the weighing factor method and finding the Pareto optimum set for the chosen problem. DE was found to be robust and faster in optimization. To consolidate the power of DE, the classical Himmelblau function, with bounds on variables, is also solved using both DE and GA. DE was found to give the exact optimum value within fewer generations compared to simple GA. <s> BIB005 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> Multimodal optimization is an important area of active research within the evolutionary computation community. The ability of algorithms to discover and maintain multiple optima is of great importance - in particular when several global optima exist or when other high-quality solutions might be of interest. The differential evolution algorithm (DE) is extended with a crowding scheme making it capable of tracking and maintaining multiple optima. The introduced CrowdingDE algorithm is compared with a DE using the well-known sharing scheme that penalizes similar candidate solutions. In conclusion, the introduced CrowdingDE outperformed the sharing-based DE algorithm on fourteen commonly used benchmark problems.
<s> BIB006 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> In this paper, we proposed Fittest Individual Refinement (FIR), a crossover based local search method for Differential Evolution (DE). The FIR scheme accelerates DE by enhancing its search capability through exploration of the neighborhood of the best solution in successive generations. The proposed memetic version of DE (augmented by FIR) is expected to obtain an acceptable solution with a lower number of evaluations, particularly for higher dimensional functions. Using two different implementations, DEfirDE and DEfirSPX, we showed that the proposed FIR increases the convergence velocity of DE for well known benchmark functions as well as improves the robustness of DE against variation of population. Experiments using a multimodal landscape generator showed that our proposed algorithms consistently outperformed their parent algorithms. A performance comparison with reported results of well known real coded memetic algorithms is also presented. <s> BIB007 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> The differential evolution is a floating-point encoded evolutionary algorithm for global optimization over continuous spaces. This algorithm so far uses empirically chosen fixed search parameters. This study is to make the search more responsive to changes in the problem. This paper proposes a new adaptive form of DE having a lower number of search parameters required to be set by the user a priori. The fuzzy differential evolution algorithm uses fuzzy logic controllers whose inputs incorporate the relative function values and individuals of the successive generations to adapt the search parameters for the mutation operation and the crossover operation. Standard test functions are used for demonstration. This new algorithm results in faster convergence for these functions.
<s> BIB008 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> Differential evolution (DE) is a simple and efficient algorithm for function optimization over continuous spaces. It has reportedly outperformed many types of evolutionary algorithms and other search heuristics when tested over both benchmark and real-world problems. However, the performance of DE deteriorates severely if the fitness function is noisy and continuously changing. In this paper two improved DE algorithms have been proposed that can efficiently find the global optima of noisy functions. This is achieved firstly by weighing the difference vector by a random scale factor and secondly by employing two novel selection strategies as opposed to the conventional one used in the original versions of DE. An extensive performance comparison of the newly proposed scheme, the original DE (DE/Rand/1/Exp), the canonical PSO and the standard real-coded EA has been presented using well-known benchmarks corrupted by zero-mean Gaussian noise. It has been found that the proposed method outperforms the others in a statistically significant way. <s> BIB009 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> This paper presents an approach of using differential evolution (DE) to solve dynamic optimization problems. Careful setting of parameters is necessary for DE algorithms to successfully solve optimization problems. This paper describes DynDE, a multipopulation DE algorithm developed specifically to solve dynamic optimization problems that doesn't need any parameter control strategy for the F or CR parameters. Experimental evidence has been gathered to show that this new algorithm is capable of efficiently solving the moving peaks benchmark. <s> BIB010 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> Differential evolution (DE) and evolutionary programming (EP) are two major algorithms in evolutionary computation. 
They have been applied with success to many real-world numerical optimization problems. Neighborhood search (NS) is a main strategy underpinning EP. There have been analyses of different NS operators’ characteristics. Although DE might be similar to the evolutionary process in EP, it lacks the relevant concept of neighborhood search. In this chapter, DE with neighborhood search (NSDE) is proposed based on the generalization of the NS strategy. The advantages of the NS strategy in DE are analyzed theoretically. These analyses mainly focus on the change of search step size and population diversity after using neighborhood search. Experimental results have shown that DE with neighborhood search has significant advantages over other existing algorithms on a broad range of different benchmark functions. NSDE’s scalability is also evaluated on a number of benchmark problems, whose dimension ranges from 50 to 200. <s> BIB011 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> Evolutionary algorithms (EAs) have been applied with success to many numerical and combinatorial optimization problems in recent years. However, they often lose their effectiveness and advantages when applied to large and complex problems, e.g., those with high dimensions. Although cooperative coevolution has been proposed as a promising framework for tackling high-dimensional optimization problems, only limited studies were reported by decomposing a high-dimensional problem into single variables (dimensions). Such methods of decomposition often failed to solve nonseparable problems, for which tight interactions exist among different decision variables. In this paper, we propose a new cooperative coevolution framework that is capable of optimizing large scale nonseparable problems. A random grouping scheme and adaptive weighting are introduced in problem decomposition and coevolution. Instead of conventional evolutionary algorithms, a novel differential evolution algorithm is adopted.
Theoretical analysis is presented in this paper to show why and how the new framework can be effective for optimizing large nonseparable problems. Extensive computational studies are also carried out to evaluate the performance of the newly proposed algorithm on a large number of benchmark functions with up to 1000 dimensions. The results show clearly that our framework and algorithm are effective as well as efficient for large scale evolutionary optimisation problems. We are unaware of any other evolutionary algorithms that can optimize 1000-dimension nonseparable problems as effectively and efficiently as we have done. <s> BIB012 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> The differential evolution (DE) algorithm is an excellent optimization tool for complex high-dimensional multimodal problems. However, it requires a very large number of problem function evaluations. In many engineering optimization problems, like design optimization or structure parameters identification, a single fitness evaluation is very expensive or time consuming. Therefore, standard evolutionary computation methods are not practical for such applications. Applying models as a surrogate of the real fitness function is a quite popular approach to handle this restriction. However, those early methodologies do suffer from some limitations, the most serious of which is the extra tuning parameter. A novel surrogate-assisted DE evolutionary optimization framework based on Gaussian process for solving computationally expensive problems is presented. The study results indicate that the Gaussian process assisted differential evolution optimization procedure (GPDE) clearly outperforms standard DE evolutionary strategies on benchmark functions. Results are also presented for an application to a real-world problem: displacement back analysis for identification of rock mass parameters of a tunnel.
<s> BIB013 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> In this paper, an optimization algorithm is formulated and its performance assessment for large scale global optimization is presented. The proposed algorithm is named DEwSAcc and is based on Differential Evolution (DE) algorithm, which is a floating-point encoding evolutionary algorithm for global optimization over continuous spaces. The original DE is extended by log-normal self-adaptation of its control parameters and combined with cooperative co-evolution as a dimension decomposition mechanism. Experimental results are given for seven high-dimensional test functions proposed for the Special Session on Large Scale Global Optimization at 2008 IEEE World Congress on Computational Intelligence. <s> BIB014 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> Spatially-structured populations are one approach to increasing genetic diversity in an evolutionary algorithm (EA). However, they are susceptible to convergence to a single peak in a multimodal fitness landscape. Niching methods, such as fitness sharing, allow an EA to maintain multiple solutions in a single population, however they have rarely been used in conjunction with spatially-structured populations. This paper introduces local sharing, a method that applies sharing to the overlapping demes of a spatially-structured population. The combination of these two methods succeeds in maintaining multiple solutions in problems that have previously proved difficult for sharing alone (and vice-versa). <s> BIB015 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields.
However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants. <s> BIB016 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> In this paper we investigate a Self-Adaptive Differential Evolution algorithm (jDE) where F and CR control parameters are self-adapted and a multi-population method with aging mechanism is used. The performance of the jDE algorithm is evaluated on the set of benchmark functions provided for the CEC 2009 special session on evolutionary computation in dynamic and uncertain environments. <s> BIB017 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> Multi-modal optimization refers to locating not only one optimum but a set of locally optimal solutions. Niching is an important technique to solve multi-modal optimization problems. 
The ability to discover and maintain multiple niches is the key capability of these algorithms. In this paper, the differential evolution with an ensemble of restricted tournament selection (ERTS-DE) algorithm is introduced to perform multimodal optimization. The algorithm is tested on 15 newly designed scalable benchmark multi-modal optimization problems and compared with the crowding differential evolution (Crowding-DE) in the literature. As shown by the experimental results, the proposed algorithm outperforms the Crowding-DE on the novel scalable benchmark problems. <s> BIB018 </s> Differential Evolution in Wireless Communications: A Review <s> 2.4 <s> Differential evolution (DE) is a simple and efficient evolutionary algorithm for global optimization. In distributed differential evolution (DDE), the population is divided into several sub-populations and each sub-population evolves independently for enhancing algorithmic performance. Through sharing elite individuals between sub-populations, effective information is spread. However, the information exchanged through individuals is still too limited. To address this issue, a competition-based strategy is proposed in this paper to achieve comprehensive interaction between sub-populations. Two operators named opposition-invasion and cross-invasion are designed to realize the invasion from good performing sub-populations to bad performing sub-populations. By utilizing the opposite invading sub-population, the search efficiency at promising regions is improved by opposition-invasion. In cross-invasion, information from both invading and invaded sub-populations is combined and population diversity is maintained. Moreover, the proposed algorithm is implemented in a parallel master-slave manner. Extensive experiments are conducted on 15 widely used large-scale benchmark functions.
Experimental results demonstrate that the proposed competition-based DDE (DDE-CB) could achieve competitive or even better performance compared with several state-of-the-art DDE algorithms. The effect of the proposed competition-based strategy in cooperation with well-known DDE variants is also verified. <s> BIB019
Major strengths of differential evolution: DE can be applied to tackle real-world problems that are unimodal/multimodal, linear/non-linear, differentiable/non-differentiable, convex/non-convex, continuous/non-continuous and symmetrical/asymmetrical in nature. These are categorized as follows: Multiobjective optimisation (MO): Real-world problems are often complex because of the composition of the different variables used in modelling them. This implies that some problems have several criteria and objectives, which must be evaluated simultaneously in order to obtain the solution . DE has proven to be well suited for tackling this type of optimisation problem. The most prominent modified versions of DE used in solving MO problems are Pareto DE and non-Pareto DE BIB005 . Constrained optimisation (CO): DE is very good at solving real-world problems that come with conditions known as constraints. The most common are boundary constraints BIB001 (boundary value problems in numerical analysis or numerical optimisation) and inequality constraints BIB003 . Large-scale optimisation (LSO): The search abilities of most EC algorithms fail at very high dimensions. This is due, firstly, to the complexity of the problem and the fatigue of the search strategy and, secondly, to the exponential increase in the solution space, which lengthens the time the search takes to yield an optimum solution. This problem is present in almost all EC algorithms, and DE is no exception. However, DE experts have provided means by which DE can be used to solve large-scale optimisation problems.
Some of the surveyed approaches are fitness function refinement BIB007 , the use of chaotic systems and the simplex search method , co-evolution , the self-adaptive method BIB011 , the random grouping scheme BIB012 , surrogate assistance BIB013 , a hybrid of co-evolution and log-normal self-adaptation BIB014 , the fuzzy adaptive method BIB008 , strategy adaptation BIB016 and the competition-based strategy BIB019 . Optimisation in dynamic and uncertain environments: EC algorithms generally suffer from the uncertainties present in optimisation problems. These uncertainties can be indexed by time, place or measurement, and can manifest as noise in the fitness function, the effect of the computing environment on the parameters, the fitness function being approximated, or the optimal candidate solution varying over time and location. Researchers have designed different strategies to address these issues in the DE environment. Intermittently varying the scale factor combined with improved selection strategies BIB009 , optimisation of objective functions that are slow to evaluate and also change with time BIB010 , and the introduction of an aging mechanism to handle unstable fitness functions BIB017 are some of the available strategies. Multimodal optimisation and niching: Most objective functions encountered in real life are multimodal in nature, and caution must be exercised in handling them because several near-optimal solutions may be available. Researchers have proposed different niching methods to tackle this issue. Niching ensures that multiple groups are maintained within the same population in order to track different optimum solutions. Some niching techniques include, but are not limited to, the following: fitness sharing BIB015 , clearing BIB002 , crowding BIB006 , speciation BIB004 and restricted tournament selection BIB018 .
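The canonical scheme underlying all of the variants surveyed above is DE/rand/1/bin: a difference-vector mutation, binomial crossover, and greedy one-to-one selection. The crowding niching technique BIB006 changes only the selection step, making the trial compete against its nearest neighbour instead of its parent. A minimal sketch in Python (the function and parameter names are illustrative, not taken from any cited implementation):

```python
import numpy as np

def de_optimize(fobj, bounds, pop_size=30, F=0.8, CR=0.9,
                generations=200, crowding=False, seed=0):
    """Minimize fobj with DE/rand/1/bin; optional crowding replacement."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T          # bounds: [(lo, hi), ...]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fobj(x) for x in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: three distinct individuals other than the target i.
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # Binomial crossover with at least one gene from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = fobj(trial)
            # Crowding: compete with the nearest individual, not parent i.
            target = (int(np.argmin(np.linalg.norm(pop - trial, axis=1)))
                      if crowding else i)
            if f_trial <= fit[target]:
                pop[target], fit[target] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Example: 5-D sphere function, whose optimum is the origin.
x, fx = de_optimize(lambda v: float(np.sum(v * v)), [(-5.0, 5.0)] * 5)
```

With `crowding=True` the same loop maintains multiple basins on a multimodal landscape, since a trial vector can only displace the member it most resembles.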
Differential Evolution in Wireless Communications: A Review <s> 2.5 <s> This paper proposes a novel implementation of memetic structure for continuous optimization problems. The proposed algorithm, namely Differential Evolution with Concurrent Fitness Based Local Search (DEcfbLS), enhances the DE performance by including a local search concurrently applied on multiple individuals of the population. The selection of the individuals undergoing local search is based on a fitness-based adaptive rule. The most promising individuals are rewarded with a local search operator that moves along the axes and complements the normal search moves of the DE structure. The application of local search is performed with a shallow termination rule. This design has been performed in order to overcome the limitations within the search logic of the original DE algorithm. The proposed algorithm has been tested on various problems in multiple dimensions. Numerical results show that the proposed algorithm is a promising candidate to take part in the competition on Real-Parameter Single Objective Optimization at CEC-2013. A comparison against modern meta-heuristics confirms that the proposed algorithm robustly displays a good performance on the testbed under consideration. <s> BIB001 </s> Differential Evolution in Wireless Communications: A Review <s> 2.5 <s> In this work a differential evolution algorithm is adapted to solve dynamic constrained optimization problems. The approach is based on a mechanism to detect changes in the objective function and/or the constraints of the problem so as to let the algorithm promote diversity in the population while pursuing the new feasible optimum. This is done by combining two popular differential evolution variants and using a memory of best solutions found during the search.
Moreover, random-immigrants are added to the population at each generation and a simple hill-climber-based local search operator is applied to promote a faster convergence to the new feasible global optimum. The approach is compared against other recently proposed algorithms on a recently proposed benchmark. The results show that the proposed algorithm provides a very competitive performance when solving different types of dynamic constrained optimization problems. <s> BIB002 </s> Differential Evolution in Wireless Communications: A Review <s> 2.5 <s> To improve the performance of the differential evolution (DE) algorithm, we present an evolving surrogate model-based differential evolution (ESMDE) method. In ESMDE, the surrogate model is constructed using population members of the current generation. The surrogate model assists in producing competitive offspring during the evolution, and it evolves with the population to better represent the search basin. Differential evolution (DE) is a simple and effective approach for solving numerical optimization problems. However, the performance of DE is sensitive to the choice of mutation and crossover strategies and their associated control parameters. Therefore, to achieve optimal performance, a time-consuming parameter tuning process is required. In DE, the use of different mutation and crossover strategies with different parameter settings can be appropriate during different stages of the evolution. Therefore, to achieve optimal performance using DE, various adaptation, self-adaptation, and ensemble techniques have been proposed. Recently, a classification-assisted DE algorithm was proposed to overcome trial and error parameter tuning and efficiently solve computationally expensive problems.
In this paper, we present an evolving surrogate model-based differential evolution (ESMDE) method, wherein a surrogate model constructed based on the population members of the current generation is used to assist the DE algorithm in order to generate competitive offspring using the appropriate parameter setting during different stages of the evolution. As the population evolves over generations, the surrogate model also evolves over the iterations and better represents the basin of search by the DE algorithm. The proposed method employs a simple Kriging model to construct the surrogate. The performance of ESMDE is evaluated on a set of 17 bound-constrained problems. The performance of the proposed algorithm is compared to state-of-the-art self-adaptive DE algorithms: the classification-assisted DE algorithm, regression-assisted DE algorithm, and ranking-assisted DE algorithm. <s> BIB003 </s> Differential Evolution in Wireless Communications: A Review <s> 2.5 <s> Differential Evolution (DE) is arguably one of the most powerful and versatile evolutionary optimizers for the continuous parameter spaces in recent times. Almost 5 years have passed since the first comprehensive survey article was published on DE by Das and Suganthan in 2011. Several developments have been reported on various aspects of the algorithm in these 5 years and the research on and with DE has now reached an impressive state. Considering the huge progress of research with DE and its applications in diverse domains of science and technology, we find that it is high time to provide a critical review of the latest literature published and also to point out some important future avenues of research. The purpose of this paper is to summarize and organize the information on these current developments on DE.
Beginning with a comprehensive foundation of the basic DE family of algorithms, we proceed through the recent proposals on parameter adaptation of DE, DE-based single-objective global optimizers, DE adopted for various optimization scenarios including constrained, large-scale, multi-objective, multi-modal and dynamic optimization, hybridization of DE with other optimizers, and also the multi-faceted literature on applications of DE. The paper also presents a dozen interesting open problems and future research issues on DE. <s> BIB004 </s> Differential Evolution in Wireless Communications: A Review <s> 2.5 <s> Differential evolution (DE) is a simple and effective evolutionary algorithm that can be used to solve various optimization problems. In general, the population of DE tends to fall into stagnation or premature convergence so that it is unable to converge to the global optimum. To solve this issue, this paper proposes a tracking mechanism (TM) to promote population convergence when the population falls into stagnation and a backtracking mechanism (BTM) to re-enhance the population diversity when the population is trapped in the state of premature convergence. More specifically, when the population falls into stagnation, the TM is triggered so that the individuals who fall into the stagnant situation will evolve toward the excellent individuals in the population to promote population convergence. When the population goes into the premature convergence status, the BTM is activated so that the premature individuals go back to one of the previous statuses so as to restore the population diversity. The TM and BTM work together as a general framework and they are embedded into six classic DEs and nine state-of-the-art DE variants.
The experimental results on 30 CEC2014 test functions demonstrate that the TM and BTM are able to effectively overcome the issues of stagnation and premature convergence, respectively, and therefore, enhance the performance of the DE significantly. Moreover, the experimental results also verify that the TM working together with the BTM as a general framework is better than other similar general frameworks. <s> BIB005 </s> Differential Evolution in Wireless Communications: A Review <s> 2.5 <s> Each type of problem, such as unimodal/multimodal, linear/non-linear, convex/non-convex, and symmetrical/asymmetrical, has its own characteristics. Although various differential evolution (DE) variants have been proposed, several studies indicate that a DE variant may only exhibit high solution efficiency in solving a specific type of problem, but may perform poorly in others. Therefore, an important decision is made to automatically select a suitable DE variant among several chosen algorithms for solving a particular type of problem during the evolutionary process. To achieve this objective, an auto-selection mechanism (ASM) is introduced in this study. In the ASM, rankings attained using Friedman's test are adopted to assess the performances of DE variants. A learning strategy is employed to update the choice probabilities of DE variants, and an additional selection probability is used to alleviate the greedy selection issue. Three sets of benchmark test functions proposed in BBOB2012, IEEE CEC2005, and IEEE CEC2014 are used to evaluate the effectiveness of the ASM. The performance of the proposed algorithm is also compared with that of nine state-of-the-art DE variants and four non-DE algorithms. Statistical analysis results demonstrate that the ASM is an efficient and effective method that can take full advantage of multiple algorithms. Furthermore, the ASM is utilized to estimate the parameters of a heavy oil thermal cracking model.
Experimental results indicate that the proposed algorithm outperforms the other compared algorithms in this case. <s> BIB006
Challenges and future research areas of differential evolution
Despite the advances made in modifying DE, the method still faces several challenges, some of which are presented below.
• DE still struggles to tackle objective functions that are not linearly separable .
• DE has been observed to fail to move the population adequately across large distances in the solution space, especially when the candidate solutions form clusters .
• Rotation invariance remains an issue BIB002 .
• DE is often plagued by a low convergence rate, owing to the randomised mutation operators and the competition between the population and its individuals BIB005 .
• DE has yet to convincingly prove that it can handle computationally expensive problems better than other evolutionary computation methods .
• It remains unclear which population-size adaptation strategy will yield optimum performance in the DE environment BIB001 .
• Learning-based approaches (supervised, reinforcement and unsupervised learning) have not been fully incorporated into DE BIB003 .
• The problem of parameter settings indicates that more research is needed in this direction BIB004 .
• The search continues for an evolutionary computation method that can guarantee a 100% optimal solution.
• The following have yet to be fully developed or estimated for DE: computational complexity, convergence-rate estimation, expected first hitting time, the necessary and sufficient conditions that guarantee convergence, and a unified formulation in the theoretical development BIB004 .
• The ranking performance of DE in solving multi-objective optimisation, constrained optimisation, large-scale optimisation, optimisation in dynamic and uncertain environments, and multimodal optimisation cannot yet be stated.
• Finally, there is no automatic method of selecting a DE variant for a given problem, since studies have shown that DE variants are designed to improve one aspect of DE and, as such, may perform very well on a specific type of problem but poorly on others. Fan et al. BIB006 proposed an auto-selection mechanism (ASM) to tackle this challenge.
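Several of the challenges listed above (the randomised mutation operators, the settings of the scale factor F and crossover rate CR, and the choice of population size) refer to the canonical DE loop. As a minimal, hedged sketch — not a reproduction of any particular variant from the cited works — the classic DE/rand/1/bin scheme can be written as:

```python
import numpy as np

def de_rand_1_bin(fitness, bounds, pop_size=20, F=0.5, CR=0.9,
                  generations=200, seed=0):
    """Minimise `fitness` with the classic DE/rand/1/bin scheme."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    low, high = np.array(bounds).T
    pop = rng.uniform(low, high, size=(pop_size, dim))
    cost = np.array([fitness(x) for x in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: perturb a random base vector a with the scaled
            # difference of two other randomly chosen population members.
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), low, high)
            # Binomial crossover: mix target and mutant component-wise,
            # guaranteeing at least one mutant component survives.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy one-to-one selection between trial and target.
            trial_cost = fitness(trial)
            if trial_cost <= cost[i]:
                pop[i], cost[i] = trial, trial_cost

    best = np.argmin(cost)
    return pop[best], cost[best]

# Example: minimise the 5-dimensional sphere function.
x_best, f_best = de_rand_1_bin(lambda x: float(np.sum(x ** 2)),
                               bounds=[(-5.0, 5.0)] * 5)
```

Every DE variant in the literature alters one of the three steps above (mutation, crossover, selection) or adapts F, CR and the population size online — which is precisely where the open questions on parameter settings and convergence guarantees arise.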
Differential Evolution in Wireless Communications: A Review <s> Energy optimisation <s> In a machine-to-machine network, the throughput performance plays a very important role. Recently, an attractive energy harvesting technology has shown great potential to the improvement of the network throughput, as it can provide consistent energy for wireless devices to transmit data. Motivated by that, an efficient energy harvesting-based medium access control (MAC) protocol is designed in this paper. In this protocol, different devices first harvest energy adaptively and then contend the transmission opportunities with energy level related priorities. Then, a new model is proposed to obtain the optimal throughput of the network, together with the corresponding hybrid differential evolution algorithm, where the involved variables are energy-harvesting time, contending time, and contending probability. Analytical and simulation results show that the network based on the proposed MAC protocol has greater throughput than that of the traditional methods. In addition, as expected, our scheme has less transmission delay, further enhancing its superiority. <s> BIB001 </s> Differential Evolution in Wireless Communications: A Review <s> Energy optimisation <s> Location information for wireless sensor nodes is needed in most of the routing protocols for distributed sensor networks to determine the distance between two particular nodes in order to estimate the energy consumption. Differential evolution obtains a suboptimal solution based on three features included in the objective function: area, energy, and redundancy. The use of obstacles is considered to check how these barriers affect the behavior of the whole solution. The obstacles are considered like new restrictions aside of the typical restrictions of area boundaries and the overlap minimization. 
At each generation, the best element is tested to check whether the node distribution is able to create a minimum spanning tree and then to arrange the nodes using the smallest distance from the initial position to the suboptimal end position based on the Hungarian algorithm. This work presents results for different scenarios delimited by walls and testing whether it is possible to obtain a suboptimal solution with inner obstacles. Also, a case with an area delimited by a star shape is presented showing that the algorithm is able to fill the whole area, even if such area is delimited for the peaks of the star. <s> BIB002 </s> Differential Evolution in Wireless Communications: A Review <s> Energy optimisation <s> Cognitive radio (CR) networks have drawn great attention in wireless communication fields. Efficient and reliable communication is a must to provide good services and assure a high-quality life for human beings. Resource allocation is one of the key problems in information transmission of CR networks. This paper studies power allocation in cognitive multiple input and multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. Power allocation is modeled as a minimization problem with three practical constraints. To deal with the problem, a population-adaptive differential evolution (PADE) algorithm is proposed. All algorithmic parameters are adaptively controlled in PADE. In numerical experiment, three test cases are simulated to study the performance of the proposed algorithm. Particle swarm optimization, differential evolution (DE), an adaptive DE, and artificial bee colony algorithms are taken as baseline. The results show that PADE presents the best performance among all test algorithms over all test cases. The proposed PADE algorithm can also be used to tackle other resource allocation problems. 
<s> BIB003 </s> Differential Evolution in Wireless Communications: A Review <s> Energy optimisation <s> An intelligent energy controller is proposed to manage operation of wireless sensor nodes equipped with energy harvesting devices. The energy controller uses Takagi-Sugeno fuzzy logic and has inputs for the state of the energy buffer and forecasts of solar energy available for harvest. Two different forecasting horizons were investigated, current and next-day, using ideal and pressure-based forecasts. Differential evolution is used to optimize the controller. To validate the evolved controller, a wireless sensor network is simulated using real field-collected environmental data. The optimization goal is to best utilize the solar energy available for harvest while preserving a backup energy reserve. Performing the highest number of operations possible while leaving the energy reserve intact increases deployment time and reliability. The controller using current and next-day energy forecasts made better use of the available energy, indicated by a lower fitness function. However, while it took more measurements when compared to the controller only using the current-day forecast, it also used more reserve energy while still remaining at only a small fraction of the total available reserve. Reserve energy usage using the pressure-based forecast was higher for both forecasting horizons compared to the ideal energy forecast, pointing to further performance improvements possible for a more accurate forecast. <s> BIB004 </s> Differential Evolution in Wireless Communications: A Review <s> Energy optimisation <s> The primary challenge in organizing sensor networks is energy efficacy. This requisite for energy efficacy is because sensor nodes capacities are limited and replacing them is not viable. This restriction further decreases network lifetime. Node lifetime varies depending on the requisites expected of its battery. 
Hence, primary element in constructing sensor networks is resilience to deal with decreasing lifetime of all sensor nodes. Various network infrastructures as well as their routing protocols for reduction of power utilization as well as to prolong network lifetime are studied. After analysis, it is observed that network constructions that depend on clustering are the most effective methods in terms of power utilization. Clustering divides networks into inter-related clusters such that every cluster has several sensor nodes with a Cluster Head (CH) at its head. Sensor gathered information is transmitted to data processing centers through CH hierarchy in clustered environments. The current study utilizes Multi-Objective Particle Swarm Optimization (MOPSO)-Differential Evolution (DE) (MOPSO-DE) technique for optimizing clustering. <s> BIB005 </s> Differential Evolution in Wireless Communications: A Review <s> Energy optimisation <s> Abstract The major concerns in Wireless Sensor Networks (WSN) are energy efficiency as they utilize small sized batteries, which can neither be replaced nor be recharged. Hence, the energy must be optimally utilized in such battery operated networks. One of the traditional approaches to improve the energy efficiency is through clustering. In this paper, a hybrid differential evolution and simulated annealing (DESA) algorithm for clustering and choice of cluster heads is proposed. As cluster heads are usually overloaded with high number of sensor nodes, it tends to rapid death of nodes due to improper election of cluster heads. Hence, this paper aimed at prolonging the network lifetime of the network by preventing earlier death of cluster heads. The proposed DESA reduces the number of dead nodes than Low Energy Adaptive Clustering Hierarchy (LEACH) by 70%, Harmony Search Algorithm (HSA) by 50%, modified HSA by 40% and differential evolution by 60%. 
<s> BIB006 </s> Differential Evolution in Wireless Communications: A Review <s> Energy optimisation <s> Abstract An enormously increasing number of mobile communications devices and IoT sensors have driven rapid advance in wireless and cellular network technologies. Owing to limited energy resources, 5G technology has been expected to be designed as a ‘green’ network system. To achieve the requirement of future ‘green’ 5G networks to serve a huge number of mobile devices, this work investigates the problem of deployment and sleep control of a ‘green’ heterogeneous cellular network along a highway composed of base stations (BSs), legacy relay stations (RSs), and small cells (SCs), with two objectives: minimizing the energy consumption to decrease the impact of limited energy; as well as minimizing the electromagnet pollution from radiation of the three device types to avoid the potential harm to creatures. For decision variables, the deployment and sleep control of legacy RSs and SCs affect the total power consumption, and their coverage affects the total electromagnet pollution. First, this work creates a mathematical model for the optimization problem, and then proposes a hybrid algorithm of genetic algorithm (GA) and differential evolution (DE) with three local search operators to solve the problem, in which GA and DE can effectively handle discrete and continues decision variables, respectively. Simulation of the concerned green cellular networks verifies performance of the proposed algorithm. <s> BIB007 </s> Differential Evolution in Wireless Communications: A Review <s> Energy optimisation <s> Wireless sensor network (WSN) consists of densely distributed nodes that are deployed to observe and react to events within the sensor field. In WSNs, energy management and network lifetime optimization are major issues in the designing of routing protocols. 
Clustering is an efficient data gathering technique that effectively reduces the energy consumption by organizing nodes into groups. However, in clustering protocols, cluster heads (CHs) bear additional load for coordinating various activities within the cluster. Improper selection of CHs causes increased energy consumption and also degrades the performance of WSN. Therefore, proper CH selection and their load balancing using efficient routing protocol is a critical aspect for the long run operation of WSN. Clustering a network with proper load balancing is an NP-hard problem. To solve such problems having vast search area, optimization algorithm is the preeminent possible solution. In this paper, differential evolution based clustering algorithm for WSNs named threshold-sensitive energy-efficient delay-aware routing protocol (TEDRP), is proposed to prolong network lifetime. Dual-hop communication between CHs and BS is utilized to achieve load balancing of distant CHs and energy minimization. The paper also considers stability-aware model of TEDRP named stable TEDRP (STEDRP) with an intend to extend the stability period of the network. In STEDRP, energy aware heuristics is applied for CH selection in order to improve the stability period. The results demonstrate that the proposed protocols significantly outperform existing protocols in terms of energy consumption, system lifetime and stability period. <s> BIB008 </s> Differential Evolution in Wireless Communications: A Review <s> Energy optimisation <s> The power consumption of wireless access networks is an important issue. In this paper, the power consumption of Long Term Evolution (LTE) base stations is optimized. We consider the city of Ghent, Belgium with 75 possible LTE base station locations. We optimize the network towards two objectives: the coverage maximization and the power consumption minimization. We propose a new Barebones Self-adaptive Differential Evolution. 
The results of the proposed method indicate the advantages and applicability of our approach. <s> BIB009
Energy is required to transmit data in wireless networks, and estimating energy consumption is important for network planning BIB002 . Energy-consumption optimisation is a predictor of overall network performance and remains the most important constraint. DE has been used to achieve efficient energy optimisation in WSNs and power allocation in orthogonal frequency division multiplexing (OFDM) systems BIB003 , thereby decreasing the gross impact of the limited available energy BIB007 . To maintain a consistent energy supply, energy-harvesting technology has been proposed to improve network throughput; DE is used to obtain the optimal throughput that sustains consistent energy BIB001 and to extend the lifetime of individual nodes in WSNs BIB004 . Delaying the forwarding of data packets is one strategy for efficient energy consumption, and it can be achieved by obtaining the optimum clustering and routing solution in wireless sensor networks (WSNs) using DE . Clustering organises nodes into distinct groups so that data is transmitted in hierarchical order, which helps improve power utilisation. DE has been used for cluster optimisation as an effective energy-optimisation strategy BIB005 , guaranteeing network longevity BIB008 and an optimum packet delivery ratio [118] . A hybrid of DE and simulated annealing has been used as a clustering algorithm; it achieves efficient energy utilisation by reducing the loss of cluster heads and sustaining network lifetime BIB006 . DE has also been applied to power-consumption minimisation in Long Term Evolution (LTE) base stations BIB009 .
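As a hedged illustration of the clustering use described above — a toy sketch, not any specific protocol from the cited works — the snippet below runs a plain DE/rand/1/bin loop to place K hypothetical cluster-head sites so that the summed node-to-head and head-to-sink distances, a crude proxy for transmission energy, are minimised. The node positions, sink location and all parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, size=(60, 2))  # hypothetical sensor positions
sink = np.array([50.0, 50.0])              # hypothetical base-station position
K = 4                                      # number of cluster-head sites

def energy_cost(flat):
    """Proxy for transmission energy: each node talks to its nearest
    cluster-head site, and each site relays to the sink."""
    heads = flat.reshape(K, 2)
    d = np.linalg.norm(nodes[:, None, :] - heads[None, :, :], axis=2)
    return d.min(axis=1).sum() + np.linalg.norm(heads - sink, axis=1).sum()

# Plain DE/rand/1/bin over the 2K head coordinates (F=0.6, CR=0.9).
pop = rng.uniform(0, 100, size=(30, 2 * K))
cost = np.array([energy_cost(x) for x in pop])
init_best = cost.min()
for _ in range(300):
    for i in range(len(pop)):
        idx = [j for j in range(len(pop)) if j != i]
        a, b, c = pop[rng.choice(idx, 3, replace=False)]
        mutant = np.clip(a + 0.6 * (b - c), 0, 100)   # mutation
        mask = rng.random(2 * K) < 0.9                # binomial crossover
        mask[rng.integers(2 * K)] = True
        trial = np.where(mask, mutant, pop[i])
        tc = energy_cost(trial)
        if tc <= cost[i]:                             # greedy selection
            pop[i], cost[i] = trial, tc
best_heads = pop[np.argmin(cost)].reshape(K, 2)
```

In the cited clustering protocols the fitness is richer (residual battery energy, load balance, delay), and discrete variants of DE select cluster heads from the node set itself rather than placing free coordinates, but the population-based search structure is the same.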
Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Due to the huge popularity of wireless networks, future designs will not only consider the provided capacity, but also the induced exposure, the corresponding power consumption, and the economic cost. As these requirements are contradictory, it is not straightforward to design optimal wireless networks. Those contradicting demands have to satisfy certain requirements in practice. In this paper, a combination of two algorithms, a genetic algorithm and a quasi-particle swarm optimization, is developed, yielding a novel hybrid algorithm that generates further optimizations of indoor wireless network planning solutions, which is named hybrid indoor genetic optimization algorithm. The algorithm is compared with a heuristic network planner and composite differential evolution algorithm for three scenarios and two different environments. Results show that our hybrid-algorithm is effective for optimization of wireless networks which satisfy four demands: maximum coverage for a user-defined capacity, minimum power consumption, minimal cost, and minimal human exposure. <s> BIB001 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> In this paper, we present a multi-objective optimization approach for indoor wireless network planning subject to constraints for exposure minimization, coverage maximization and power consumption minimization. We consider heterogeneous networks consisting of WiFi access points (APs) and long term evolution (LTE) femtocells. We propose a design framework based on multi-objective biogeography-based optimization (MOBBO). We apply the MOBBO algorithm to network planning design cases in a real office environment. 
To validate this approach we compare results with other multi-objective algorithms like the nondominated sorting genetic algorithm-II (NSGA-II) and the generalized differential evolution (GDE3) algorithm. The results of the proposed method indicate the advantages and applicability of the multi-objective approach. <s> BIB002 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Mesh node placement problem is one of the major design issues in Wireless Mesh Network (WMN). Mesh networking is one of the cost effective solution for broadband internet connectivity. Gateway is one of the active devices in the backbone network to supply internet service to the users. Multiple gateways will be needed for high density networks. The budget and the time to setup these networks are important parameters to be considered. Given the number of gateways and routers with the number of clients in the service area, an optimization problem is formulated such that the installation cost is minimized satisfying the QOS constraints. In this paper a traffic weight algorithm is used for the placement of gateways based on the traffic demand. A cost minimization model is proposed and evaluated using three global optimization search algorithms such as Simulated Annealing (SA), Differential Evolution (DE) and Fuzzy DE (FDE). The simulation result shows that FDE method achieves best minimum compared with other two algorithms. 
<s> BIB003 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> To maximize the network utilization, spectrum allocation technique fairly allocates the channels to secondary users. SA problem is solved by Differential Evolution algorithm and compared the performance with PSO and FA. DE improved the quality of solution and time complexity by 29.9%, 242.32% and 19.04%, 46.3% compared to PSO and FA. We propose FPGA based coprocessor for DE-SA IP and interfaced to PowerPC. The coprocessor accelerates SA task by 76.79-105x and 5.19-6.91x compared to float and fixed DE-SA software. Cognitive radio is an emerging technology in wireless communications for dynamically accessing under-utilized spectrum resources. In order to maximize the network utilization, vacant channels are assigned to cognitive users without interference to primary users. This is performed in the spectrum allocation (SA) module of the cognitive radio cycle. Spectrum allocation is a NP hard problem, thus the algorithmic time complexity increases with the cognitive radio network parameters. This paper addresses this by solving the SA problem using Differential Evolution (DE) algorithm and compared its quality of solution and time complexity with Particle Swarm Optimization (PSO) and Firefly algorithms. In addition to this, an Intellectual Property (IP) of DE based SA algorithm is developed and it is interfaced with PowerPC440 processor of Xilinx Virtex-5 FPGA via Auxiliary Processor Unit (APU) to accelerate the execution speed of spectrum allocation task. The acceleration of this coprocessor is compared with the equivalent floating and fixed point arithmetic implementation of the algorithm in the PowerPC440 processor. The simulation results show that the DE algorithm improves quality of solution and time complexity by 29.9% and 242.32%, 19.04% and 46.3% compared to PSO and Firefly algorithms.
Furthermore, the implementation results show that the coprocessor accelerates the SA task by 76.79-105× and 5.19-6.91× compared to floating and fixed point implementation of the algorithm in PowerPC processor. It is also observed that the power consumption of the coprocessor is 26.5 mW. <s> BIB004 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Highlights: We propose use of CMODE for resource allocation in OFDMA systems. We use CMODE for both joint as well as separate subcarrier and power allocation. Proposed solutions achieve better capacity as compared to traditional methods. Because of lower complexity the proposed schemes are faster as compared to traditional methods. Orthogonal frequency division multiple access (OFDMA) is a promising technique, which can provide high downlink capacity for the future wireless systems. The total capacity of OFDMA systems can be maximized by adaptively assigning subcarriers to the user with the best gain for that subcarrier, with power subsequently distributed by water-filling. In this paper, we propose the use of a differential evolution combined with multi-objective optimization (CMODE) algorithm to allocate the resources to the users in a downlink OFDMA system. Specifically, we propose two approaches for resource allocation in downlink OFDMA systems using CMODE algorithm. In the first approach, CMODE algorithm is used only for subcarrier allocation (OSA), while in the second approach, the CMODE algorithm is used for joint subcarrier and power allocation (JSPA). The CMODE algorithm is population-based where a set of potential solutions evolves to arrive at a near-optimal solution for the problem under study. During the past decade, solving constrained optimization problems with evolutionary algorithms has received considerable attention among researchers and practitioners.
CMODE combines multi-objective optimization with differential evolution (DE) to deal with constrained optimization problems. The comparison of individuals in CMODE is based on multi-objective optimization, while DE serves as the search engine. In addition, infeasible solution replacement mechanism based on multi-objective optimization is used in CMODE, with the purpose of guiding the population towards the promising solutions and the feasible region simultaneously. It is shown that both the proposed approaches obtain higher sum capacities as compared to that obtained by previous works, with comparable computational complexity. It is also shown that the JSPA approach provides near optimal results at the slightly higher computational cost. <s> BIB005 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> The reliability and real time of industrial wireless sensor networks (IWSNs) are the absolute requirements for industrial systems, which are two foremost obstacles for the large-scale applications of IWSNs. This paper studies the multi-objective node placement problem to guarantee the reliability and real time of IWSNs from the perspective of systems. A novel multi-objective node deployment model is proposed in which the reliability, real time, costs and scalability of IWSNs are addressed. Considering that the optimal node placement is an NP-hard problem, a new multi-objective binary differential evolution harmony search (MOBDEHS) is developed to tackle it, which is inspired by the mechanism of harmony search and differential evolution. Three large-scale node deployment problems are generated as the benchmarks to verify the proposed model and algorithm. The experimental results demonstrate that the developed model is valid and can be used to design large-scale IWSNs with guaranteed reliability and real-time performance efficiently.
Moreover, the comparison results indicate that the proposed MOBDEHS is an effective tool for multi-objective node placement problems and superior to Pareto-based binary differential evolution algorithms, nondominated sorting genetic algorithm II (NSGA-II) and modified NSGA-II. <s> BIB006 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Genetic algorithms (GAs) and simulated annealing (SA) have emerged as leading methods for search and optimization problems in heterogeneous wireless networks. In this paradigm, various access technologies need to be interconnected; thus, vertical handovers are necessary for seamless mobility. In this paper, the hybrid algorithm for real-time vertical handover using different objective functions has been presented to find the optimal network to connect with a good quality of service in accordance with the user’s preferences. As it is, the characteristics of the current mobile devices recommend using fast and efficient algorithms to provide solutions near to real-time. These constraints have moved us to develop intelligent algorithms that avoid slow and massive computations. This was to, specifically, solve two major problems in GA optimization, i.e. premature convergence and slow convergence rate, and the facilitation of simulated annealing in the merging populations phase of the search. The hybrid algorithm was expected to improve on the pure GA in two ways, i.e., improved solutions for a given number of evaluations, and more stability over many runs. This paper compares the formulation and results of four recent optimization algorithms: artificial bee colony (ABC), genetic algorithm (GA), differential evolution (DE), and particle swarm optimization (PSO). Moreover, a cost function is used to sustain the desired QoS during the transition between networks, which is measured in terms of the bandwidth, BER, ABR, SNR, and monetary cost. 
Simulation results indicated that choosing the SA rules would minimize the cost function and the GA–SA algorithm could decrease the number of unnecessary handovers, and thereby prevent the ‘Ping-Pong’ effect. <s> BIB007 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Multicast routing improves the efficiency of a network by effectively utilizing the available network bandwidth. In multichannel multiradio wireless mesh networks the channel allocation strategy plays a vital role along with multicast tree construction. However, the multicast routing problem in multichannel multiradio wireless mesh networks is proven to be NP-hard. With this paper, we propose a Quality of Service Channel Assignment and multicast Routing (Q-CAR) algorithm. The proposed algorithm jointly solves the channel assignment and multicast tree construction problem by intelligent computational methods. We use a slightly modified differential evolution approach for assigning channels to links. We design a genetic algorithm based multicast tree construction strategy which determines a delay, jitter bounded low cost multicast tree. Moreover, we define a multi objective fitness function for the tree construction algorithm which optimizes interference as well as tree cost. Finally, we compare the performance of Q-CAR with QoS Multicast Routing and Channel Assignment(QoS-MRCA) and intelligent Quality of service multicast routing and Channel Assignment(i-QCA) algorithm in multichannel multiradio wireless mesh network (simulated) environments. Our experimental results distinctly show the outstanding performance of the proposed algorithm. <s> BIB008 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> The wireless mesh network (WMN) is a challenging technology that offers high quality services to the end users. 
With growing demand for real-time services in the wireless networks, quality-of-service-based routing offers vital challenges in WMNs. In this paper, a discrete multi-objective differential evolution (DMODE) approach for finding optimal route from a given source to a destination with multiple and competing objectives is proposed. The objective functions are maximization of packet delivery ratio and minimization of delay. For maintaining good diversity, the concepts of weight mapping crossover (WMX)-based recombination and dynamic crowding distances are implemented in the DMODE algorithm. The simulation is carried out in NS-2 and it is observed that DMODE substantially improves the packet delivery ratio and significantly minimizes the delay for various scenarios. The performance of DMODE, DEPT and NSGA-II is compared with respect to multi-objective performance measures namely as `spread'. The results demonstrate that DMODE generates true and well-distributed Pareto-optimal solutions for the multi-objective routing problem in a single run. <s> BIB009 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> The exponential growth in data traffic due to the modernization of smart devices has resulted in the need for a high-capacity wireless network in the future. To successfully deploy 5G network, it must be capable of handling the growth in the data traffic. The increasing amount of traffic volume puts excessive stress on the important factors of the resource allocation methods such as scalability and throughput. In this paper, we define a network planning as an optimization problem with the decision variables such as transmission power and transmitter (BS) location in 5G networks. The decision variables lent themselves to interesting implementation using several heuristic approaches, such as differential evolution (DE) algorithm and Real-coded Genetic Algorithm (RGA). 
The key contribution of this paper is that we modified RGA-based method to find the optimal configuration of BSs not only by just offering an optimal coverage of underutilized BSs but also by optimizing the amounts of power consumption. A comparison is also carried out to evaluate the performance of the conventional approach of DE and standard RGA with our modified RGA approach. The experimental results showed that our modified RGA can find the optimal configuration of 5G/LTE network planning problems, which is better performed than DE and standard RGA. <s> BIB010 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Highlights: EAs can work effectively to design useful FLCs for MAC in WBANs. Three coding schemes are proposed to balance performance and interpretability. Our approach provides a good balance between network reliability and performance. Improved the efficacy of the designed process using surrogate. Two different design targets are defined to meet varied generality requirements. Soft computing techniques including fuzzy logic have been successfully applied to wireless body area networks (WBANs). However, most of the existing research works rely on manual design of the fuzzy logic controller (FLC). To address this issue, in this paper, we propose an evolutionary approach to automate the design of FLCs for cross layer medium access control in WBANs. With the goal of improving network reliability while keeping the communication delay at a low level, we have particularly studied the usefulness of three coding schemes with different levels of flexibility during the evolutionary design process. The influence of fitness functions that measure the effectiveness of each possible FLC design has also been examined carefully in order to achieve a good balance between reliability and performance. Moreover, we have utilised surrogate models to improve the efficiency of the design process.
In consideration of practical usefulness, we have further identified two main design targets. The first target is to design effective FLCs for a specific network configuration. The second target focuses on designing FLCs to function across a wide range of network settings. In order to examine the usefulness of our design approach, we have utilised two widely used evolutionary algorithms, i.e. particle swarm optimisation (PSO) and differential evolution (DE). The FLC designed by our approach is also shown to outperform some related algorithms as well as the IEEE 802.15.4 standard. <s> BIB011 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> A hybrid algorithm for real-time vertical handover using different objective functions is presented to find the optimal network to connect to, with good quality of service, in accordance with the user's preferences. Markov processes are widely used in performance modelling of wireless and mobile communication systems. We address the problem of optimal wireless network selection during vertical handover, based on the received information, by embedding the decision problem in a Markov decision process (MDP) combined with a genetic algorithm (GA); the GA finds a set of optimal decisions that ensures the best trade-off among QoS criteria based on their priority levels. Then, we merge the improved genetic algorithm (IGA) with simulated annealing (SA), leading methods for search and optimization problems in heterogeneous wireless networks. We formulate the vertical handoff decision problem as an MDP, with the objectives of maximizing the expected total reward and minimizing the average number of handoffs. A reward function is constructed to assess the QoS during each connection, and the AHP method is applied in an iterative way, by which we can work out a stationary deterministic handoff decision policy.
The characteristics of current mobile devices call for fast and efficient algorithms that provide near real-time solutions. These constraints have moved us to develop intelligent algorithms that avoid slow and massive computations. This paper compares the formulation and results of five recent optimization algorithms: artificial bee colony, GA, differential evolution, particle swarm optimization and a hybrid of GA and SA (GA---SA). Simulation results indicated that choosing the SA rules minimizes the cost function, and that the IGA---SA algorithm decreases the number of unnecessary handovers, thereby preventing the `Ping-Pong' effect. <s> BIB012 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Many research efforts are deployed today in order to design techniques that allow continuous metaheuristics to also solve binary problems. However, since no work has thoroughly studied these techniques, such a task remains difficult: the techniques are still ambiguous and misunderstood. The bat algorithm (BA) is a continuous algorithm that has recently been adapted using one of these techniques. However, that work suffered from several shortfalls. This paper conducts a systematic study in order to investigate the efficiency and usefulness of discretising continuous metaheuristics. This is done by proposing five binary variants of the BA (BBAs) based on the principal mapping techniques existing in the literature. As a benchmark, two optimisation problems in cellular networks, the antenna positioning problem (APP) and the reporting cell problem (RCP), are used. The proposed BBAs are evaluated using several types, sizes and complexities of data. Two of the top-ranked algorithms designed to solve the APP and the RCP, the population-based incremental learning (PBIL) and the differential evolution (DE) algorithm, are taken as the comparison basis.
Several statistical tests are conducted as well. <s> BIB013 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> The significant forecasted increase in the number of devices and mobile data requirements has posed stringent requirements for future wireless communication networks. Massive MIMO is one of the chief candidates for future 5G wireless communication systems, but to fully reap the true benefits many research problems still need to be solved or require further analysis. Among many, the problem of estimating the channel between the user terminals and each BS antenna holds a significant place. In this paper, we deal with the accurate and timely acquisition of massive Channel State Information as an optimization problem that is solved using heuristic optimization techniques, i.e. Genetic Algorithm, Particle Swarm Optimization and Differential Evolution. Results have been obtained by exploiting the parallel processing property bestowed when using matched filtering and beamforming for precoding and decoding, respectively. Monte Carlo simulations are presented for the purpose of performance comparison among the aforementioned optimization techniques based on the Mean Squared Error criterion. <s> BIB014 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Using immune algorithms is generally a time-intensive process, especially for problems with a large number of variables. In this paper, we propose a distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm that is implemented using the message passing interface (MPI). The proposed algorithm is composed of three layers: objective, group and individual layers. First, for each objective in the multi-objective problem to be addressed, a subpopulation is used for optimization, and an archive population is used to optimize all the objectives.
Second, the large set of variables is divided into several groups. Finally, individual evaluations are allocated across many core processing units, and calculations are performed in parallel. Consequently, the computation time is greatly reduced. The proposed algorithm integrates the idea of immune algorithms, which tend to explore sparse areas in the objective space, and uses simulated binary crossover for mutation. The proposed algorithm is employed to optimize the 3D terrain deployment of a wireless sensor network, which is a self-organizing network. In experiments, compared with several state-of-the-art multi-objective evolutionary algorithms, namely the Cooperative Coevolutionary Generalized Differential Evolution 3, the Cooperative Multi-objective Differential Evolution and the Nondominated Sorting Genetic Algorithm III, the proposed algorithm addresses the deployment optimization problem efficiently and effectively. <s> BIB015 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Random deployment of sensor nodes is susceptible to initial communication holes, even when the network is densely populated. However, eliminating holes using structural deployment poses its own difficulties. In either case, the resulting coverage holes can degrade overall network performance and lifetime. Many solutions utilizing Relay Nodes (RNs) have been proposed to alleviate this problem. In this regard, one of the recent solutions proposed using artificial bee colony (ABC) to deploy RNs. This paper proposes RN deployment using two other evolutionary techniques - gravitational search algorithm (GSA) and differential evolution (DE) - and compares them with the existing ABC-based solution. These popular optimization tools are deployed to optimize the positions of relay nodes for lifetime maximization. The proposed algorithms guarantee satisfactory RN utilization while maintaining the desired connectivity level.
It is shown that DE-based deployment improves network lifetime more than the other optimization heuristics considered. <s> BIB016 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> In order to alleviate the spectrum scarcity problem in wireless networks, achieve efficient allocation of spectrum resources and balance users' access to spectrum, a hybrid mutation artificial bee colony algorithm based on the artificial bee colony algorithm is presented. The presented algorithm aims to enhance the efficiency of global searching by improving the way leaders search the nectar source using the differential evolution algorithm. In addition, the onlookers' searching method is improved by the bat algorithm to guarantee the convergence efficiency of the algorithm and the precision of the result: it is assumed that the onlookers are equipped with bats' echolocation to get close to the nectar source by adjusting the rate of pulse emission and loudness when searching for the nectar source. The simulation results show that the proposed algorithm has faster convergence, higher efficiency and more optimal solutions compared with other algorithms. <s> BIB017 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Group-based topologies offer good performance in wireless sensor networks (WSNs) by avoiding bottleneck issues, reducing points of failure and simplifying management. Border node collaboration is an important issue, as those nodes play a critical role in inter-group routing. An efficient scheme between those nodes is therefore required. In this paper, the overall problem is formulated as an optimization problem. The search process is based on two evolutionary algorithms combined separately with the Minimum Spanning Tree (MST) algorithm. Each iteratively evolves a population until the best solutions are found.
The aim is to minimize the number of links and the communication cost. Simulations confirm the potential of those mechanisms to find an efficient optimized tree connecting all border nodes; in particular, the logic-gate-based algorithm presents an original and competent procedure for the considered problem compared to the Differential Evolution algorithm. <s> BIB018 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Wireless Mesh Networks (WMNs) have received greater attention in the wireless communication field. Conventional node deployment allows random distribution of mesh routers, which increases the number of mesh routers and hence the design cost. In order to achieve an optimal placement of mesh nodes, the node placement problem is considered as an optimization problem. Here, the problem is formulated as a facility location problem. A Fuzzy Differential Evolution (FDE) approach is proposed along with a traffic weight (TW) assignment method for optimal placement of mesh nodes and allotting gateways. Design Cost (DC) and Transmission Cost (TC) are the two minimization objectives, which are solved using the proposed method. The simulation results show that, on average, the DC using the FDE approach is 10% lower than with the TW algorithm, 2.8% lower than SA, and 1.2% lower than DE methods. A network performance metric called failure rate (FR) and the TC objective are considerably reduced using the FDE-based placement. The performance of the network is evaluated with multiple CBR flows, and the simulation results show increases of 10% and 5% in the throughput and packet delivery rate compared to the existing approaches.
<s> BIB019 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> When the traditional anchor-aided location algorithm is used to select the mobile beacon path in a sensor network, the energy imbalance of nodes under non-dense conditions is not analysed; thus the optimal network node cannot be selected, and the selection error of the beacon's optimal path is large. A path selection algorithm for mobile beacons in a sensor network under non-dense distribution is proposed. Using the mobile beacon based wireless sensor network location algorithm, the weighted centroid algorithm and the extended Kalman filter (EKF) are used to obtain accurate location results for the unknown nodes around the mobile beacon in a sensor network under non-dense distribution conditions. The optimal node energy partition of the unknown node is obtained by the chaotic differential evolution method, the optimal location of the optimal energy node in the wireless sensor network is calculated using the dynamic escape particle swarm optimization method, and the optimal beacon path is extracted. The experimental results show that the proposed algorithm can enhance the clustering performance of the optimal node in the wireless sensor network and has better performance for dynamic node selection in wireless sensor networks, with faster convergence and shorter running time. <s> BIB020 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Due to the adverse effects of hostile deployment environments, sensor nodes (SNs) deployed in the network may drain or get damaged. This leads to loss of connectivity between the sensors and the sink node. Hence it is necessary to design the wireless sensor network (WSN) in a manner that the network is capable of coping with the failure of a few nodes/links.
Using a single-hop data communication model reduces the lifetime of the network, as the SNs are resource-constrained (in terms of battery). Relay nodes (RNs) facilitate improving network lifetime, fault tolerance and/or connectivity. Finding the minimum number of relay nodes for a fault-tolerant, fully connected network is an NP-hard problem. In this paper, we follow a two-phase relay node placement procedure to achieve the objective. In the first phase, we cluster the SNs using the mean shift algorithm and place RNs as the cluster heads. In the second phase, we use metaheuristic algorithms like Moth Flame Optimization (MFO), Differential Evolution (DE), the Bat algorithm (BA) and the Biogeography Based Optimization algorithm (BBO) to place the RNs such that the required fault tolerance is achieved along with a fully connected network. Extensive simulation results prove that the MFO algorithm performs better for the fault-tolerant relay node placement problem. <s> BIB021 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Wireless communication technologies such as Near Field Communication (NFC) have found their way into our everyday life. The antenna structures used in such systems have to comply with several standards to achieve all the requirements defined for the specific application. However, in practice there are scenarios for such antennas which are not considered in standards or design guides, but which strongly influence the antenna behavior. In the present paper the synthesis of an antenna used for NFC cards in a contactless payment system under multi-card conditions is presented. The optimization relies on the differential evolution (DE) strategy. The computation of the forward problem is based on the partial element equivalent circuit (PEEC) method.
<s> BIB022 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> A novel wideband four-arm sinuous antenna with dual circular polarizations (CPs) and unidirectional radiation is proposed. Different from conventional designs, this sinuous antenna is realized in a conical form, and no ground plane or absorptive cavity is required to obtain unidirectional radiation. The beamforming network for dual circularly polarized operation consists of a wideband quadrature coupler and two wideband baluns, and an auxiliary feeding patch is introduced to facilitate the connection between the baluns and the sinuous arms. The design of the baluns and coupler is inspired by the printed exponentially tapered microstrip balun and the broadside-coupled microstrip coupler, respectively. The dynamic differential evolution algorithm is employed to optimize the geometry of the coupler for optimal performance. For both polarizations, the presented antenna has wide impedance bandwidth, good axial ratio, moderate realized gain, and front-to-back ratio within 2–5 GHz. An antenna prototype is fabricated and tested. The agreement between simulation and measurement results validates the proposed antenna framework. The demonstrated antenna has the advantages of wide bandwidth, dual CPs, unidirectional radiation, light weight, and low cost, and is promising for applications in wireless systems. <s> BIB023 </s> Differential Evolution in Wireless Communications: A Review <s> Quality of service improvement <s> Wireless networking is experiencing tremendous growth in new communication standards and computer applications. Currently, wireless networks exist in various forms, providing different facilities. However, due to some limitations compared with their wired counterparts, wireless networks face several major challenges, one of which is optimum bandwidth allocation.
The focus of optimum bandwidth allocation is to reduce losses while satisfying quality of service (QoS) requirements. In wireless networks, the term bandwidth allocation refers to the distribution of bandwidth resources among different users, which affects the serviceability of the entire system. Though many studies related to bandwidth allocation have been reported, only sub-optimal solutions have been provided so far. In this research, we propose to use the differential evolution (DE) algorithm to allocate bandwidth through a bandwidth reservation scheme in the Cellular IP network, in order to improve the QoS to an acceptable level. DE belongs to a class of evolutionary algorithms (EAs), like particle swarm optimization and genetic algorithms. A DE-based method is used which looks for any free bandwidth in the cell or in adjacent cells and provides it to the cell where it is required. If it fails to find free available bandwidth, it searches for bandwidth held on standby for non-real-time users and allocates it to real-time users, which helps improve the QoS in terms of connection/call dropping probability for real-time users. Simulation results show that the proposed method performs better as compared to previously used EA models for bandwidth allocation. <s> BIB024
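The borrowing behaviour described in this bandwidth reservation scheme can be sketched in a few lines. The data layout and names below are illustrative assumptions, not the cited authors' implementation.

```python
def reserve_bandwidth(cells, cell_id, demand):
    """Sketch of the borrowing idea in the DE-based reservation scheme:
    serve from the cell's free bandwidth, then from a neighbouring cell,
    then pre-empt capacity held on standby for non-real-time users."""
    cell = cells[cell_id]
    if cell["free"] >= demand:                   # free capacity in the cell
        cell["free"] -= demand
        return True
    for n in cell["neighbours"]:                 # borrow from adjacent cells
        if cells[n]["free"] >= demand:
            cells[n]["free"] -= demand
            return True
    if cell["non_realtime_reserved"] >= demand:  # pre-empt non-real-time share
        cell["non_realtime_reserved"] -= demand
        return True
    return False                                 # call/connection dropped
```

In the cited scheme such an allocation rule is embedded in a DE search that tunes the reservation so as to minimise the dropping probability for real-time users; this sketch only shows the allocation order.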
In improving network quality, it is always desirable to optimize the constraints that yield maximum quality of service (QoS). DE has been applied to the optimisation of network coverage, power consumption and cost, and to the minimisation of human exposure to the network BIB001 BIB011 . Examples include heterogeneous networks consisting of WiFi access points BIB002 ; multi-objective node deployment to ensure reliable and efficient real-time performance BIB006 BIB015 and lifetime maximisation BIB016 ; optimum allocation of spectrum in wireless networks BIB017 ; and minimisation of the number of links in WSNs BIB018 . DE has been applied to minimize installation cost subject to QoS constraints in Wireless Mesh Networks (WMNs) BIB003 , to minimise the overall mobility management cost in wireless cellular networks , and to minimise design and transmission costs BIB019 . The following QoS constraints have been optimised using DE: bit error rate (BER), bandwidth, associativity-based routing (ABR), monetary cost and signal-to-noise ratio (SNR) BIB007 . For instance, a hybrid of DE and a genetic algorithm was used to minimise the BER and the multi-path effect of the channel, thereby increasing convergence speed BIB020 . DE was also used to minimize the BER in Multi-User Multiple Input Multiple Output (MU-MIMO) systems and, in general, to solve beamforming problems subject to different variables and constraints . DE was used in the optimum allocation of bandwidth in the Cellular IP network, thereby improving the QoS BIB024 . The consequences of these optimisations include the minimisation of unnecessary handovers, that is the "Ping Pong" effect BIB012 , minimisation of end-to-end video reconstruction distortion, and resilience strategies which guarantee that a network can withstand the failure of a few nodes or links.
Relay node placement is one such resilience strategy, and DE has been used to find the optimum number of relay nodes that improves connectivity and minimizes network downtime BIB021 . Multicast routing is often the preferred strategy for quality service delivery, especially in multichannel multiradio wireless mesh networks. DE has been shown to be efficient in finding optimal routing performance BIB008 , maximising packet delivery ratio , minimising delay BIB009 and optimally reassigning vacant channels to cognitive users without network deterioration BIB004 . Assignment can also take the form of allocation in downlink systems BIB005 . Transmission rate, transmitter location and network throughput of different wireless networks have been optimized using DE BIB010 . DE has also helped optimise network throughput in networks with dynamic topological structure . To further improve network performance, DE has been applied to the reporting cell problem (RCP) and the antenna positioning problem (APP) BIB013 , to antenna synthesis in Near Field Communication (NFC) technologies BIB022 , and to the optimisation of channel state information BIB014 and coupler geometry BIB023 .
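Most of the applications surveyed above instantiate the same canonical optimiser with a problem-specific fitness function. The following is a minimal sketch of the DE/rand/1/bin variant; the population size, scale factor F and crossover rate CR are illustrative defaults, not values taken from any of the cited works.

```python
import random

def de_optimize(fitness, bounds, pop_size=20, F=0.5, CR=0.9,
                generations=200, seed=1):
    """Minimise `fitness` over the box `bounds` using DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [fitness(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: base vector plus scaled difference of two others (rand/1).
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # Binomial crossover; j_rand guarantees one parameter from the mutant.
            j_rand = rng.randrange(dim)
            trial = [mutant[d] if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            # Clip to bounds, then greedy one-to-one selection.
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            t_cost = fitness(trial)
            if t_cost <= cost[i]:
                pop[i], cost[i] = trial, t_cost
    best = min(range(pop_size), key=lambda k: cost[k])
    return pop[best], cost[best]
```

Minimising a transmit-power or placement cost then reduces to passing the corresponding fitness function and box constraints to `de_optimize`.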
Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> Localization is considered one of the most significant research issues in Wireless Sensor Networks (WSNs). The objective of localization is to determine the physical co-ordinates of sensor nodes distributed over the sensing field. Location information plays a vital role in coverage, deployment of sensor nodes, routing and target tracking applications. Initially, the localization of sensor nodes can be performed by Mobile Anchor Positioning (MAP), a range-free localization method. To further enhance the location accuracy obtained by MAP, we propose three algorithms, viz. Differential Evolution with MAP (DE-MAP), Ant Colony Optimization with MAP (ACO-MAP) and Simulated Annealing-Differential Evolution with MAP (SA-DE-MAP). The scope of this work is to compare the performance of these three algorithms. Root Mean Square Error (RMSE) has been used as the metric for comparing performance. Simulation results demonstrate that, of the proposed algorithms, the SA-DE-MAP algorithm achieves better performance in minimizing the localization error when compared to the DE-MAP and ACO-MAP algorithms. <s> BIB001 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> Sensor network localization based on connectivity can be modeled as a nonconvex optimization problem. However, current models only consider the convex constraints, i.e., connections among the nodes. The proposed method considers not only the connection constraints but also the disconnection constraints, which are nonconvex in nature. It is argued that the connectivity-based localization problem should be represented as an optimization problem with both convex and nonconvex constraints.
In this paper, an algorithm combining a modified differential evolution (DE) algorithm and heuristics is presented for the situation in which the communication range value is unknown. The developed algorithm has a new crossover procedure, with refined procedures to produce a new generation of individuals/candidates. A “single node treatment” procedure is also designed for the search procedure to formulate a new set of coordinate locations to jump out of local minima. The final solution can reach the most suitable configuration of the unknown nodes (nodes whose locations are unknown) because all the information on the constraints has been used. Simulation results have shown that better solutions can be obtained when compared with other convex-constraint methods. The proposed method also obtains better results than other general nonconvex optimization methods. <s> BIB002 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> In order to solve the problem of connecting the wireless sensor network with the Internet in Cyber-physical systems, a gateway deployment algorithm based on differential evolution is proposed. This algorithm uses the differential evolution algorithm to optimize the minimum coverage radius and gateway load balancing. With the improvements of adaptive opposition-based search and dynamic parameter adjustment, this algorithm can maintain the diversity of the whole swarm and solve the geometric K-center problem. Simulation results show that this algorithm has good global explorative ability and convergence speed, and can benefit the network QoS level of Cyber-physical systems by obtaining good load balancing and a minimum coverage radius. <s> BIB003 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> The Differential Evolution (DE) algorithm is well known in evolutionary computation.
However, DE with the DE/best/1 mutation has some drawbacks, such as premature convergence and entrapment in local optima. To address these drawbacks, we improve the DE/best/1 mutation operator and propose a sine cosine mutation based differential evolution algorithm, named SCDE. In the proposed method, a new sine cosine mutation operator inspired by the sine cosine algorithm (SCA) is adopted to balance exploration and exploitation. In the experimental simulation, the proposed algorithm is compared with three state-of-the-art algorithms on well-known benchmark test functions. The results on the test functions and performance metrics show that the proposed algorithm is able to avoid local optima and converge towards the global optimum. In addition, the proposed algorithm is used to solve the sensor node location problem in wireless sensor networks. Results show that our algorithm is effective. <s> BIB004 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> The modernization of smart devices has resulted in exponential growth in data traffic, demanding high-capacity wireless networks. 5G networks must be capable of handling the excessive stress associated with resource allocation methods for their successful deployment. We also need to address the problem of energy consumption during the dense deployment process. Dense deployment results in severe power consumption because base stations must fulfil the demands of the increasing traffic load. This paper proposes an improved Artificial Bee Colony (ABC) algorithm which uses a set of variables, such as the transmission power and location of each base station (BS), to improve the accuracy of localization of a user equipment (UE) for efficient energy consumption at BSes.
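The DE/best/1 drawback noted in the SCDE work above stems from always perturbing the current best vector, whereas DE/rand/1 trades convergence speed for diversity. Below is a sketch of the two operators, plus an illustrative sine-based schedule for the scale factor F; the schedule is an assumption in the spirit of SCDE, not its exact operator.

```python
import math
import random

def de_best_1(pop, best, i, F, rng):
    """DE/best/1: perturb the best vector; fast but greedy, so the
    population can collapse prematurely onto a local optimum."""
    a, b = rng.sample([j for j in range(len(pop)) if j != i], 2)
    return [best[d] + F * (pop[a][d] - pop[b][d]) for d in range(len(best))]

def de_rand_1(pop, i, F, rng):
    """DE/rand/1: perturb a randomly chosen base vector; slower to
    converge but better at preserving population diversity."""
    a, b, c = rng.sample([j for j in range(len(pop)) if j != i], 3)
    return [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(len(pop[0]))]

def sine_scheduled_F(F0, gen, max_gen):
    """Illustrative sine schedule: F starts at F0 (exploration) and decays
    to 0 as generations progress (exploitation). Not the exact SCDE rule."""
    return F0 * math.sin(math.pi * (1.0 - gen / max_gen) / 2.0)
```

The practical consequence is the usual trade-off: best/1 converges quickly on unimodal landscapes, while rand/1 (or a scheduled F) resists the premature convergence discussed above.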
To estimate the optimal configuration of BSes and reduce the power requirement of connected UEs, we enhanced the ABC algorithm, naming it the Modified ABC (MABC) algorithm, and compared it with the latest work on the Real-Coded Genetic Algorithm (RCGA) and the Differential Evolution (DE) algorithm. The proposed algorithm not only determines the optimal coverage of underutilized BSes but also optimizes power utilization with green networks in mind. Performance comparisons were conducted to show that the proposed approach is more effective than the legacy algorithms, ABC, RCGA, and DE. <s> BIB005 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> The firefly algorithm (FA) has shown good performance on many engineering optimisation problems. A recent study has pointed out that FA suffers from slow convergence. To enhance the performance of FA, this paper presents a dual-population-based FA (called DPFA). In DPFA, the entire population consists of two sub-populations. A memetic FA (MFA) and the standard differential evolution are used to generate new solutions in the different sub-populations. To verify the performance of DPFA, we test it on nine benchmark functions. Simulation results show that DPFA outperforms MFA and other improved FA algorithms. Finally, we use the proposed DPFA to solve wireless sensor network coverage optimisation problems. Results show that DPFA can also achieve promising solutions. <s> BIB006 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> Providing field coverage is a key task in many sensor network applications. In certain scenarios, the sensor field may have coverage holes due to random initial deployment of sensors; thus, the desired level of coverage cannot be achieved.
A hybrid wireless sensor network is a cost-effective solution to this problem, achieved by repositioning a portion of the mobile sensors in the network to meet the network coverage requirement. This paper investigates how to redeploy mobile sensor nodes to improve network coverage in hybrid wireless sensor networks. We propose a two-phase coverage-enhancing algorithm for hybrid wireless sensor networks. In phase one, we use a differential evolution algorithm to compute candidate target positions for the mobile sensor nodes that could potentially improve coverage. In the second phase, we use an optimization scheme on the candidate target positions calculated in phase one to reduce the accumulated potential moving distance of the mobile sensors, such that the exact mobile sensor nodes that need to be moved, as well as their final target positions, can be determined. Experimental results show that the proposed algorithm provides significant improvement in terms of area coverage rate, average moving distance, area coverage–distance rate and the number of moved mobile sensors, when compared with other approaches. <s> BIB007 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> Localization technology has been a core component of the Internet of Things (IoT), especially for Wireless Sensor Networks (WSNs). Among all localization technologies, the Distance Vector-Hop (DV-Hop) algorithm is a very frequently used algorithm for WSNs. DV-Hop estimates distance through the hop-count between nodes, in which the value of hop-count is discrete; a serious consequence is that some nodes have the same estimated distance when their hop-counts with respect to an identical node are equal. In this paper, we refine the hop-count value using the number of common one-hop nodes between adjacent nodes.
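Coverage objectives like the area coverage rate used in the redeployment work above are typically evaluated by sampling the field on a grid under a binary disc sensing model. A minimal sketch follows; the grid resolution and field dimensions are arbitrary illustrative choices.

```python
import math

def coverage_rate(sensors, radius, area=(100.0, 100.0), grid=50):
    """Fraction of grid sample points inside `area` covered by at least one
    sensor, under a binary disc sensing model (a common simplification)."""
    w, h = area
    covered = 0
    for gx in range(grid):
        for gy in range(grid):
            # Sample at the centre of each grid cell.
            px = (gx + 0.5) * w / grid
            py = (gy + 0.5) * h / grid
            if any(math.hypot(px - sx, py - sy) <= radius for sx, sy in sensors):
                covered += 1
    return covered / (grid * grid)
```

A DE individual encoding candidate sensor positions can be scored directly with a function of this kind, which is how coverage maximisation is typically cast as an optimisation problem.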
The discrete values of hop-count are converted to more accurate continuous values by our proposed method. Therefore, the error in the estimated distance can be effectively reduced. Furthermore, we formulate the location estimation process as a minimization problem based on the weighted squared errors of the estimated distances. We apply the Differential Evolution (DE) algorithm to acquire the global optimum solution, which corresponds to the estimated location of the unknown nodes. The proposed localization algorithm based on improved DV-Hop and DE is called DECHDV-Hop. We conduct substantial experiments to evaluate the effectiveness of DECHDV-Hop, including comparisons with DV-Hop, GADV-Hop and PSODV-Hop in four different network simulation situations. Experimental results demonstrate that DECHDV-Hop achieves much higher localization accuracy than the other algorithms in these network situations. <s> BIB008 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> The localization of sensor nodes is an essential problem for many applications in wireless sensor networks. Considering that mobile sensors change their locations frequently over time, the Monte Carlo localization algorithm utilizes the moving characteristics of nodes and employs the probability distribution function (PDF) of the previous time slot to estimate the current location using a weighted particle filter. However, it has the problem of an insufficient number of valid samples, which affects the node's localization accuracy. In this paper, the differential evolution method is introduced into the Monte Carlo localization algorithm. The sample weight is taken as the objective function, and the differential evolution algorithm is implemented in the sampling stage. Finally, the node position is estimated by making the samples close to the actual location of the node instead of being filtered out.
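The weighted squared-error objective that DECHDV-Hop minimises with DE can be written down directly. The uniform default weights below are a placeholder; the cited paper derives its own weighting from hop counts.

```python
import math

def localization_cost(pos, anchors, est_dists, weights=None):
    """Weighted squared error between the distances from candidate position
    `pos` to known anchors and the hop-count-based estimated distances.
    DV-Hop variants minimise an objective of this form with DE."""
    x, y = pos
    if weights is None:
        weights = [1.0] * len(anchors)  # placeholder: uniform weighting
    cost = 0.0
    for (ax, ay), d_est, w in zip(anchors, est_dists, weights):
        d = math.hypot(x - ax, y - ay)
        cost += w * (d - d_est) ** 2
    return cost
```

Feeding this cost into a DE optimiser over candidate (x, y) positions yields the estimated location of an unknown node.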
The simulation results demonstrate that the proposed algorithm provides a better position estimation with less localization error. <s> BIB009 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> Nature-inspired algorithms have the characteristics to learn and decide and to be adaptable, intelligent, and robust, and so they can be used for solving complex problems. This paper deals with one such algorithm named hybrid genetic algorithm–differential evolution for localization in wireless sensor network. This algorithm is used to estimate the position of sensor node. A novel hybrid algorithm is analyzed, designed, and implemented. This algorithm provides better accuracy and is simple to implement. <s> BIB010 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> In wireless sensor network (WSN) there are many sensors and tiny devices which are used to sense the real-time environmental circumstances. The sensed data will be meaningless if each node in WSN doesn't know its location in the real world. There are many cost-effective techniques for localization used to locate the sensor node. Among those techniques, the range-based localization techniques are known for their accuracy in predicting sensor node location. Differential Evolution Algorithm (DEA) is popular optimization technique as it has good convergence properties but it has few control parameters, which are fixed throughout the entire iteration process and it is not an easy task to tune that control parameters. So, in this paper, we propose Adaptive Differential Evolution Algorithm (ADEA) for obtaining adaptive control over the parameters. 
In DEA we consider appropriate solutions to have a higher probability of reproduction compared to inappropriate ones, but there is always a possibility that population elements that look inappropriate at each stage may contain more useful information than appropriate ones. So, we propose the Invasive Weed Optimization Algorithm (IWO), which reaches an optimal solution more easily by giving inappropriate ones a chance to survive and reproduce, similar to the mechanism that happens in nature. <s> BIB011 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> In this study, a novel multi-tier framework is proposed for randomly deployed WMSNs. Low-cost directional Passive Infrared Sensors (PIR sensors) are randomly deployed across a Region of Interest (RoI) and are activated according to the Differential Evolution (DE) algorithm proposed for coverage optimization. The proposed DE and Genetic Algorithms are applied to optimize coverage maximization using a minimum number of sensors. Results obtained using the two approaches are tested and compared. Only the scalar sensors that are yielded by the coverage optimization process are kept active throughout the network lifetime, while the multimedia sensors are kept silent. When an event is detected by a scalar sensor, the corresponding multimedia sensor(s), in whose effective coverage field of view (FoV) the target falls, is then activated to capture the event (target point/scene). An analysis of the network's total energy expenditure and a comparison of the proposed framework to current approaches and frameworks is made. Simulation results show that the proposed architecture achieves a remarkable network lifetime prolongation while extending the coverage area.
<s> BIB012 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> Lifetime requirements and coverage demands are emphasized in wireless sensor networks. An area coverage algorithm based on differential evolution is developed in this study to obtain a given covera... <s> BIB013 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> Abstract Sensor distribution is a significant issue in wireless sensor networks and has been frequently sub-optimally solved by several heuristic algorithms. This research applies multi-objective differential evolution algorithm to jointly optimize the sensors distribution over diverse area shapes, increase the coverage area and reduce the network energy at the same time. A case base and different scenarios with constraints are considered. The restrictions are based on the boundaries of the delimited areas to prevent their centers to be close to the given boundaries, and on the area of interest by reducing the overlap among the covered areas of the nodes. At the end, the shortest distance between the initial node positions and the final node positions is determined finding which node should go in which position using the Hungarian algorithm. Finally, a minimum spanning tree among the nodes is also obtained. The results for different sensor network sizes from 9 up to 56 sensors and different sizes of target areas are presented (fitness, coverage area, energy and needed generations). The computed results show that the right combination of the control parameters leads to an optimized energy and a total coverage area of at least 87% of the target area. <s> BIB014 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> Wireless sensor networks (WSNs) are applied more and more widely in real life. 
In actual scenarios, 3-D directional wireless sensor nodes are constantly employed; thus, research on the real-time deployment optimization issue of 3-D directional WSNs based on terrain big data has greater practical significance. Based on this, we study the deployment optimization issue of directional WSNs in 3-D terrain through comprehensive consideration of coverage, lifetime, connectivity of sensor nodes, connectivity of cluster headers, and reliability of directional WSNs. We present a modified differential evolution algorithm by adopting crossover rate sorting and polynomial-based mutation on the basis of the cooperative coevolutionary framework, and apply it to address the deployment problem of 3-D directional WSNs. In addition, to reduce computation time, we implement message passing interface (MPI) parallelism. As revealed by the experimental results, the modified algorithm proposed in this paper achieves better performance with respect to both optimization results and operation time. <s> BIB015 </s> Differential Evolution in Wireless Communications: A Review <s> Localisation and coverage area maximisation <s> The aim of this article is to study the two-objective coverage problem of wireless sensor networks (WSNs) by means of the differential evolution algorithm. Firstly, in order to reduce the computing redundancy of multi-objective optimization, namely to reduce the number of individuals which participate in non-dominated solution sorting, we introduce a fast two-objective differential evolution algorithm (FTODE). The FTODE contains a fast non-dominated solution sorting method and a uniform crowding distance calculation method. The fast sorting method handles only the highest-rank individuals rather than all individuals in the current population. Meanwhile, while sorting individuals, it can select some individuals for the next generation and thereby reduce the time complexity.
The uniform crowding distance calculation can enhance the diversity of the population because it retains the outline of the optimal solution set by choosing individuals uniformly. Secondly, we use the FTODE framework to study the two-objective coverage problem of WSNs. The two objectives are formulated as the minimum number of sensors used and the maximum coverage rate. For this specific problem, decimal integer encoding is used and a recombination operation is introduced into FTODE, which is performed after initialization and guarantees that at least one critical target’s sensor is divided into different disjoint sets. Finally, the simulation experiment shows that the FTODE provides competitive results in terms of time complexity and performance, and it also obtains better solutions than comparison algorithms on the two-objective coverage problem of WSNs. <s> BIB016
Localisation in wireless networks often involves determining the location of sensed data in wireless sensors and devices. Location information is crucial for coverage, sensor node deployment, target tracking and routing. DE was applied as a localisation algorithm to enhance the quality of information and to ensure convergence when determining the optimal distances between nodes . Location quality can be enhanced using DE BIB001 BIB004 and specifically in base stations (BS) BIB005 . Apart from improving the accuracy of location estimation, DE has been shown to reduce time complexity, thereby leading to localisation error reduction BIB008 . A further reduction of the localisation error was achieved by a hybrid of DE and the Monte Carlo localisation algorithm, in which the sample weight is taken as the objective function BIB009 . A hybrid of DE and the genetic algorithm has been used as a localisation algorithm to estimate the location of nodes in WSN BIB010 . Moreover, DE-based localisation can be improved by adaptive control of the parameters to ensure adequate tuning BIB011 . Generally, coverage problems in WSN are modelled as optimisation problems and can be solved using evolutionary algorithms such as DE BIB006 . DE also features prominently in solving connection-based localisation problems in wireless sensor networks, where connections can be modelled as a non-convex optimisation problem that DE handles easily BIB002 . DE has been used to find the minimum subset of sensor nodes that covers all the targets in wireless multimedia sensor networks BIB012 , hence solving the target coverage problem BIB007 , nudging redundant active nodes into sleep mode BIB013 , and reducing the number of individuals that participate in non-dominated solution sorting BIB016 . DE has also been used to optimise sensor distribution over diverse area shapes, thereby increasing the coverage area BIB014 .
Coverage radius and load balancing were optimized using DE, which acts as the gateway deployment algorithm BIB003 . DE was also used as a deployment algorithm in the optimisation of variables defined for directional WSNs BIB015 .
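The localisation studies above share the same DE core loop. As a self-contained illustration (not taken from any of the cited works), the sketch below applies the classic DE/rand/1/bin scheme to a toy range-based localisation objective: estimating a node position from distances to four anchors. All anchor positions, ranges and control-parameter values are hypothetical.

```python
import random

# Toy localisation problem: recover a node position (x, y) from noise-free
# range measurements to four anchors (all values invented for illustration).
ANCHORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
TRUE_POS = (3.0, 7.0)
RANGES = [((ax - TRUE_POS[0]) ** 2 + (ay - TRUE_POS[1]) ** 2) ** 0.5
          for ax, ay in ANCHORS]

def range_error(pos):
    """Sum of squared differences between estimated and measured ranges."""
    return sum((((ax - pos[0]) ** 2 + (ay - pos[1]) ** 2) ** 0.5 - r) ** 2
               for (ax, ay), r in zip(ANCHORS, RANGES))

def de_rand_1_bin(obj, bounds, pop_size=20, f=0.5, cr=0.9, gens=200, seed=1):
    """Minimise `obj` over box `bounds` with DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct population members, none equal to i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)   # guarantees one mutated component
            trial = [pop[a][d] + f * (pop[b][d] - pop[c][d])
                     if rng.random() < cr or d == jrand else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            ft = obj(trial)
            if ft <= fit[i]:             # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

est, err = de_rand_1_bin(range_error, [(0.0, 10.0), (0.0, 10.0)])
```

With noise-free ranges the estimate converges to the true position; in practice measured ranges are noisy, and the minimiser is only an estimate of the node location.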
Differential Evolution in Wireless Communications: A Review <s> 3.4 <s> A hybrid computational intelligence algorithm, built by integrating the salient features of two different heuristic techniques to solve a multiconstrained Quality of Service Routing (QoSR) problem in Mobile Ad Hoc Networks (MANETs), is presented. QoSR is always a tricky problem: determining an optimum route that satisfies a variety of necessary constraints in a MANET. The problem is also declared NP-hard due to the constant topology variation of MANETs. Thus a solution technique that embarks upon the challenges of the QoSR problem needs to be underpinned. This paper proposes a hybrid algorithm by modifying the Cuckoo Search Algorithm (CSA) with a new position updating mechanism. This updating mechanism is derived from the differential evolution (DE) algorithm, where the candidates learn from diversified search regions. Thus the CSA acts as the main search procedure guided by the updating mechanism derived from DE, called tuned CSA (TCSA). Numerical simulations on MANETs are performed to demonstrate the effectiveness of the proposed TCSA method by determining an optimum route that satisfies various Quality of Service (QoS) constraints. The results are compared with some of the existing techniques in the literature, thereby establishing the superiority of the proposed method. <s> BIB001 </s> Differential Evolution in Wireless Communications: A Review <s> 3.4 <s> In this paper, a novel relative position and orientation (R-P&O) measurement method for large-volume components is proposed. Based on the method, the parallel distances between the cooperative point pairs (CPPs) are collected by multiple pairs of wireless ranging sensors which are installed on respective components and finally turned into the R-P&O.
Accordingly, a measurement model is built and an algorithm is designed to solve the model, in which the radial basis function neural network (RBFNN) produces a preliminary solution by offline training and the differential evolution (DE) strategy finds the accurate solution by online heuristic searching. Furthermore, the crucial parameters and the performance of the algorithm are analyzed through simulating a virtual alignment process which proves that the RBFNN-DE algorithm can quickly and accurately find the global optimal solution in the whole effective workspace. Besides the theory study, a ranging device based on ultrasound has been developed along with a calibration method. Depending on the device, an experiment of actual alignment is implemented to verify the algorithm. Experimental results indicate that the error of R-P&O is no more than 4.1 mm and 0.32° when the ranging error is 0.1 mm, compared with the measurement result of indoor GPS (iGPS). <s> BIB002 </s> Differential Evolution in Wireless Communications: A Review <s> 3.4 <s> Air pollution obtains a key concern in India owing to faster economic development, urbanisation and industrialisation connected with increased energy demands. But these methods are expensive and provide low resolution sensing data. Also the monitoring system has high communication overhead, power consuming and time. To solve the above problem a clustered wireless sensor network-based air pollution monitoring system with swarm intelligence is discussed. Initially, the sensor nodes in the networks are grouped into clusters and the cluster head is selected using the glowworm swarm optimisation (GSO) algorithm and Cuckoo search algorithm (CSA). Then the air quality index (AQI)-based fuzzy rule is formed using fuzzy inference system (FIS). 
Then data aggregation is performed using the improved artificial fish swarm algorithm (IAFSA) and the hybrid bat algorithm (HBA) to find the optimal path for efficient data transmission by reducing the communication overhead. The bat fitness function is calculated using differential evolution (DE). The results show that the proposed method outperforms the existing one in terms of network energy utilisation, delay, throughput and aggregation latency. <s> BIB003 </s> Differential Evolution in Wireless Communications: A Review <s> 3.4 <s> To study the optimization problem of wireless sensor networks (WSNs) based on differential evolution, the single-objective differential evolution algorithm is applied and combined with an advantage-disadvantage crossover strategy. Firstly, the path optimization problem in WSNs is analyzed, and the optimization model is established. Then, the differential evolution algorithm is used as the search tool to solve for the minimum energy consumption in the path optimization model, that is, the optimal path problem. Finally, a comparison experiment is carried out against the classical genetic algorithm (GA), particle swarm optimization (PSO) and the standard differential evolution (DE) algorithm. The results show that the performance of the differential evolution algorithm based on the crossover strategy is superior to, or no worse than, that of the several compared algorithms. It can be seen that the differential evolution algorithm based on the advantage-disadvantage crossover strategy is effective.
Updating mechanism The DE algorithm can be applied as a position-updating mechanism in which candidate positions learn from a large, diversified search region, as in online heuristic searching BIB002 and in search equations for the purpose of reliable data collection . The aim is to determine the optimum path that satisfies the different quality of service (QoS) constraints in Mobile Ad Hoc Networks (MANETs) BIB001 . DE with an advantage-disadvantage crossover strategy has also been used as a search tool for solving optimisation problems in WSNs, proving superior to the genetic algorithm and particle swarm optimisation BIB004 . In addition, DE was used to compute the fitness function in a hybrid algorithm aimed at finding the optimal path for efficient data transmission in a WSN-based air pollution monitoring system BIB003 .
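The DE-derived position update used in hybrids such as TCSA BIB001 can be isolated as a single function that a host metaheuristic calls on its candidates. The sketch below is a generic DE/rand/1/bin update step; all names, dimensions and parameter values are illustrative, not from the cited papers.

```python
import random

def de_position_update(target, r1, r2, r3, f=0.5, cr=0.9, rng=random):
    """Build a trial vector from `target` and three distinct population
    members (r1, r2, r3) using the DE/rand/1/bin rule."""
    dim = len(target)
    jrand = rng.randrange(dim)   # guarantees at least one donor component
    return [r1[d] + f * (r2[d] - r3[d])
            if rng.random() < cr or d == jrand else target[d]
            for d in range(dim)]

# Illustrative use on a small random population of 4-dimensional candidates.
rng = random.Random(0)
pop = [[rng.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(5)]
trial = de_position_update(pop[0], pop[1], pop[2], pop[3], rng=rng)
```

The host algorithm (e.g. a cuckoo-search loop) would then keep `trial` only if it improves the objective, preserving DE's greedy one-to-one selection.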
Differential Evolution in Wireless Communications: A Review <s> Security <s> As an important element of the Internet of Things (IoT) system, the wireless sensor network (WSN) has gradually become popular in many application fields. However, due to the openness of WSN, attackers can easily eavesdrop, intercept, and rebroadcast data packets. WSN has also faced many other security issues. The intrusion detection system (IDS) plays a pivotal part in data security protection of WSN. It can identify malicious activities that attempt to violate network security goals. Therefore, the development of effective intrusion detection technologies is very important. However, many dimensions of the datasets of IDS are irrelevant or redundant. This causes low detection speed and poor performance. Feature selection is thus introduced to reduce dimensions in IDS. At the same time, many evolutionary computing (EC) techniques have been employed in feature selection. However, these techniques usually have just one Candidate Solution Generation Strategy (CSGS) and often fall into local optima when dealing with feature selection problems. The self-adaptive differential evolution (SaDE) algorithm is adopted in our paper to deal with feature selection problems for IDS. The adaptive mechanism and four effective CSGSs are used in SaDE. Through this method, an appropriate CSGS can be selected adaptively to generate new individuals during the evolutionary process. Besides, we have also improved the control parameters of SaDE. The K-Nearest Neighbour (KNN) classifier is used for performance assessment of feature selection. The KDDCUP99 dataset is employed in the experiments, and experimental results demonstrate that SaDE is more promising than the algorithms it is compared with.
<s> BIB001 </s> Differential Evolution in Wireless Communications: A Review <s> Security <s> In Multimedia Sensor Networks (WMSNs) the devices are interconnected in the wireless manner that is able to ubiquitously retrieve multimedia content such as video and audio streams, still images, and scalar sensor data from environments. Many research works have been undertaken to enhance the quality of multimedia contents and energy consumption, but not security. Till, security problems are critical issue in WMSNs. Trust inference is one of the methods that solve security problems in wireless sensor networks (WSNs) and mobile ad hoc networks (MANET). In this paper, we propose an energy efficient trusted cluster (E2TC) based routing protocol for WMSNs for overcome multi-objective problems, which can eventually maximize the network lifetime. The proposed routing protocol consist of two algorithms, first the multi-dimension differential evolution based trust (MDET) inference model used to compute trust value for each node in the network, then the energy efficient cluster formation is performed using load balance enhanced chemical reaction optimization (LBCRO) algorithm. The routing path between the source nodes to destination nodes framed by computed trust values. The result obtained through Network simulator tool and shows that the proposed routing protocol performs better than existing protocols in terms of energy consumption, QoS metrics, and network lifetime. <s> BIB002 </s> Differential Evolution in Wireless Communications: A Review <s> Security <s> With the development of the Internet of Things (IoT) technology, a vast amount of the IoT data is generated by mobile applications from mobile devices. Cloudlets provide a paradigm that allows the mobile applications and the generated IoT data to be offloaded from the mobile devices to the cloudlets for processing and storage through the access points (APs) in the Wireless Metropolitan Area Networks (WMANs). 
Since most of the IoT data is relevant to personal privacy, it is necessary to pay attention to data transmission security. However, it is still a challenge to realize the goal of optimizing the data transmission time, energy consumption and resource utilization with the privacy preservation considered for the cloudlet-enabled WMAN. In this paper, an IoT-oriented offloading method, named IOM, with privacy preservation is proposed to solve this problem. The task-offloading strategy with privacy preservation in WMANs is analyzed and modeled as a constrained multi-objective optimization problem. Then, the Dijkstra algorithm is employed to evaluate the shortest path between APs in WMANs, and the nondominated sorting differential evolution algorithm (NSDE) is adopted to optimize the proposed multi-objective problem. Finally, the experimental results demonstrate that the proposed method is both effective and efficient. <s> BIB003 </s> Differential Evolution in Wireless Communications: A Review <s> Security <s> The problem of binary hypothesis testing is considered in a bandwidth-constrained low-power wireless sensor network operating over insecure links. To prevent passive eavesdropping from enemy fusion center (EFC), the sensor observations are randomly flipped according to pre-deployed flipping rates before transmission. Accordingly, a constrained optimization problem is formulated to minimize the fusion error of ally fusion center (AFC) while maintain EFC’s error at high level. We demonstrated that the fusion error is a non-convex function of the flipping rates, thus an immune based differential evolution algorithm is designed to search the optimal flipping rates, such that the EFC always gets high error probability at the cost of a small degeneration of the AFC’s fusion performance. 
Furthermore, the optimal thresholds of the fusion rules are calculated based on the statistics of the sensor data, which further degrades the detection performance of the EFC, since it is not aware of the statistics of the sensor observations after data flipping, resulting in a threshold that does not match the observations. Simulation results demonstrated that the AFC can appropriately recover the original state of nature, while the EFC is prevented from detecting the target regardless of the signal-to-noise ratio and the number of sensors. <s> BIB004 </s> Differential Evolution in Wireless Communications: A Review <s> Security <s> Abstract In this paper, the optimal power schedule is studied for the wireless communication network under Denial-of-Service (DoS) attacks. Different from Nash Equilibrium (NE), a Stackelberg Equilibrium (SE) framework is proposed to analyze the game between the defender and the attacker under two different types of incomplete information. In the first scenario, the defender only knows the statistical characteristics about whether the attacker exists or not, which means that the defender knows the probability that there is only background noise in the environment. In the second situation, the defender does not know the total power of the attacker exactly and only knows the total power with a certain probability. Moreover, the detailed steps of designing both players’ optimal strategies are given and analyzed. The Adaptive Penalty Function (APF) approach and the Differential Evolution (DE) algorithm are combined to deal with the corresponding nonlinear and non-convex optimization issues. Finally, examples are provided to illustrate the results proposed in this work.
Security issues are among the several challenges facing WSN, and the intrusion detection system (IDS) is indispensable to WSN security. The aim of IDS is to detect malicious activities that violate the predefined network protocols. The multi-dimensional nature of IDS datasets causes data redundancy, which leads to poor performance and slow detection speed. In order to address this dimensionality issue, feature selection is often used in IDS, which can be effectively optimised by the application of DE BIB001 . Apart from IDS, trust inference is another method of addressing security issues in WSN. DE was applied to compute trust values for each individual node in the WSN BIB002 . Another security concern in WSN stems from data aggregation under the enormous connectivity of the devices in the network, which leaves the network vulnerable to security threats at the aggregation nodes. To solve this problem, DE was used to compute the trusted aggregation node among multiple nodes . DE was combined with an artificial immune system to optimise the distribution and effectiveness of the detector generator in WSN intrusion detection . The strategy of maintaining network reliability while achieving privacy preservation in WMAN can be handled using an IoT-oriented offloading method, in which DE optimizes the variables while preserving privacy BIB003 . Random flipping is often recommended for preventing security attacks on WSN over an insecure link; optimum flipping rates can be obtained using DE to minimize the fusion error and ensure secure data transmission BIB004 . DE was also applied to obtain the optimal power schedule in wireless networks, thereby mitigating denial-of-service (DoS) attacks BIB005 .
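To make the feature-selection use of DE concrete, the sketch below applies a binary-mapped DE (continuous genes thresholded at 0.5) to pick informative features on a small synthetic dataset, with leave-one-out 1-NN error plus a per-feature penalty as the fitness. This is a minimal illustration under invented data, not the SaDE method of BIB001.

```python
import random

# Synthetic toy data: the class label depends only on feature 0;
# features 1-4 are pure noise (all values invented for illustration).
_rng = random.Random(3)
DATA = [([i % 2 * 2.0 + _rng.gauss(0, 0.1)] +
         [_rng.gauss(0, 1.0) for _ in range(4)], i % 2)
        for i in range(30)]

def loocv_error(mask):
    """Leave-one-out 1-NN error rate using only features where mask is True."""
    if not any(mask):
        return 1.0
    wrong = 0
    for i, (xi, yi) in enumerate(DATA):
        _, yhat = min((sum((a - b) ** 2
                           for a, b, m in zip(xi, xj, mask) if m), yj)
                      for j, (xj, yj) in enumerate(DATA) if j != i)
        wrong += yhat != yi
    return wrong / len(DATA)

def fitness(vec):
    mask = [v > 0.5 for v in vec]                 # binary mapping of genes
    return loocv_error(mask) + 0.01 * sum(mask)   # penalise large subsets

def de_feature_select(dim=5, pop_size=12, f=0.5, cr=0.9, gens=50, seed=7):
    r = random.Random(seed)
    pop = [[r.random() for _ in range(dim)] for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = r.sample([j for j in range(pop_size) if j != i], 3)
            jr = r.randrange(dim)
            trial = [min(max(pop[a][d] + f * (pop[b][d] - pop[c][d]), 0.0), 1.0)
                     if r.random() < cr or d == jr else pop[i][d]
                     for d in range(dim)]
            ft = fitness(trial)
            if ft <= fit[i]:                      # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return [v > 0.5 for v in pop[best]]

mask = de_feature_select()   # should retain the informative feature 0
```

In a real IDS setting the fitness would instead wrap a classifier such as KNN evaluated on the intrusion dataset, as in the approaches the section surveys.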
Differential Evolution in Wireless Communications: A Review <s> 3.6 <s> Highlights: Fuzzy control is used to manage energy consumption in wireless sensor devices. Control is based on energy available in storage and from the environment. Differential evolution tailors system parameters for the intended deployment site. Validity of the proposed approach is confirmed using real weather forecast data. A comparison with other common optimization approaches is provided. Environmentally-powered wireless sensors use ambient energy from their environment to support their own energy needs. As such, they must operate without significant maintenance or user supervision. Due to the stochastic availability of ambient energy, its harvesting, storage and consumption must be managed by an efficient and robust controller that maintains data collection and transmission rates at desired levels, while maximizing the useful operational time of the system. To accomplish this task, the control system must observe the state of charge of an internal energy storage device, and consider the amount of energy available for harvest in the future. At the same time, the complexity of the controller must be limited so that it can be implemented on the simple embedded system of the sensor hardware. This paper presents a comprehensive synthesis of the desired behavior of such controllers, and describes procedures for their design and optimization through an evolutionary fuzzy approach. The main contribution is the formalization of design objectives and development of the fitness function that drives the optimization process. Additional contributions include a comprehensive evaluation of several soft computing optimization approaches, thorough analysis of the optimized controller, its comparison to baseline control strategies, and validation of its operation with real energy availability forecasts.
<s> BIB001 </s> Differential Evolution in Wireless Communications: A Review <s> 3.6 <s> The water distribution system (WDS) plays a vital role in supplying water to populations living in cities and urban areas. The expensive infrastructure of the WDS drives researchers to seek the least-cost design. This paper first presents the mathematical model to determine the optimal design for the WDS with two conflicting objectives: minimization of construction cost and minimization of total head loss in the network. To deal with the large-scale problems of real-world practice, a metaheuristic approach is required. Therefore, this study proposes a Differential Evolution (DE) algorithm with encoding and decoding procedures to handle the complexity of decision making in designing pipe sizes for all arcs in the water distribution network. The experiments are executed using scenarios from a real case study. The results obtained show that the proposed DE is able to find a good quality front with a set of non-dominated solutions in a single run without prejudice.
The evolutionary technique known as the differential evolution (DE) algorithm is employed by both the base and inner layers to examine the performance of the framework in efficient mission timing and its resilience against environmental disturbances. Relying on the reactive nature of the framework and the fast computational performance of the DE algorithm, the simulations show promising results, and this new framework guarantees safe and efficient deployment in a turbulent, uncertain marine environment, passing through a proper sequence of stations while considering various constraints in a complex environment. <s> BIB003
Related field applications DE is applied when the studied problem is modelled as a network with a given objective function to be minimised or maximised. Another setting is when DE is combined with other methods and applied in fields related to wireless communication. DE was applied to determine the optimal pipe design that fits the network distribution in a water distribution system, subject to cost and total head-loss constraints BIB002 . DE was applied to predict gas concentration while WSN systems were used to collect the data . DE was used in the energy optimisation of environmentally driven WSN BIB001 . DE is also used in path planning for unmanned underwater vehicles (UUVs) BIB003 .
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> Error associated with the remote sensing and GIS data acquisition, processing, analysis, conversion, and final product presentation can have a significant impact on the confidence of decisions made using the data. The goal of this paper is to provide a broad overview of spatial data error sources, and to identify priority research topics which will reduce impediments and enhance the quality of integrated remote sensing and GIS data. Potential sources of error will be identified at each data integration process step. Impacts of error propagation on decision making and implementation processes will be assessed, and priority error quantification research topics will be recommended. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like "Notre Dame" or "Trevi Fountain." This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world's well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community. 
<s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> This paper discusses the historical evolution of imaging spectroscopy in Earth observation as well as directional (or multiangular) research leading to current achievements in spectrodirectional remote sensing. It elaborates on the evolution from two separate research areas into a common approach to quantify the interaction of light with the Earth surface. The contribution of spectrodirectional remote sensing towards an improved understanding of the Earth System is given by discussing the benefits of converging from individual pixel analysis to process models in the land-biosphere domain. The paper concludes with an outlook of research focus and upcoming areas of interest emphasizing towards multidisciplinary approaches using integrated system solutions based on remote and in situ sensing, data assimilation, and state space estimation algorithms. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> Rangeland comprises as much as 70% of the Earth's land surface area. Much of this vast space is in very remote areas that are expensive and often impossible to access on the ground. Unmanned Aerial Vehicles (UAVs) have great potential for rangeland management. UAVs have several advantages over satellites and piloted aircraft: they can be deployed quickly and repeatedly; they are less costly and safer than piloted aircraft; they are flexible in terms of flying height and timing of missions; and they can obtain imagery at sub-decimeter resolution. This hyperspatial imagery allows for quantification of plant cover, composition, and structure at multiple spatial scales. 
Our experiments have shown that this capability, from an off-the-shelf mini-UAV, is directly applicable to operational agency needs for measuring and monitoring. For use by operational agencies to carry out their mandated responsibilities, various requirements must be met: an affordable and reliable platform; a capability for autonomous, low altitude flights; takeoff and landing in small areas surrounded by rugged terrain; and an easily applied data analysis methodology. A number of image processing and orthorectification challenges have been or are currently being addressed, but the potential to depict the land surface commensurate with field data perspectives across broader spatial extents is unrivaled. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> PREFACE LIST OF SYMBOLS AND ABBREVIATIONS REFERENCES APPENDICES ANSWERS TO SAMPLE PROBLEMS INDEX <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> Precision agriculture (PA) is the application of geospatial techniques and sensors (e.g., geographic information systems, remote sensing, GPS) to identify variations in the field and to deal with them using alternative strategies. In particular, high-resolution satellite imagery is now more commonly used to study these variations for crop and soil conditions. However, the availability and the often prohibitive costs of such imagery would suggest an alternative product for this particular application in PA. 
Specifically, images taken by low altitude remote sensing platforms, or small unmanned aerial systems (UAS), are shown to be a potential alternative given their low cost of operation in environmental monitoring, high spatial and temporal resolution, and their high flexibility in image acquisition programming. Not surprisingly, there have been several recent studies in the application of UAS imagery for PA. The results of these studies would indicate that, to provide a reliable end product to farmers, advances in platform design, production, standardization of image georeferencing and mosaicing, and information extraction workflow are required. Moreover, it is suggested that such endeavors should involve the farmer, particularly in the process of field design, image acquisition, image interpretation and analysis. <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> *Ecologists require spatially explicit data to relate structure to function. To date, heavy reliance has been placed on obtaining such data from remote-sensing instruments mounted on spacecraft or manned aircraft, although the spatial and temporal resolutions of the data are often not suited to local-scale ecological investigations. Recent technological innovations have led to an upsurge in the availability of unmanned aerial vehicles (UAVs) – aircraft remotely operated from the ground – and there are now many lightweight UAVs on offer at reasonable costs. Flying low and slow, UAVs offer ecologists new opportunities for scale-appropriate measurements of ecological phenomena. Equipped with capable sensors, UAVs can deliver fine spatial resolution data at temporal resolutions defined by the end user. 
Recent innovations in UAV platform design have been accompanied by improvements in navigation and the miniaturization of measurement technologies, allowing the study of individual organisms and their spatiotemporal dynamics at close range. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> Imaging using lightweight, unmanned airborne vehicles (UAVs) is one of the most rapidly developing fields in remote sensing technology. The new, tunable, Fabry-Perot interferometer-based (FPI) spectral camera, which weighs less than 700 g, makes it possible to collect spectrometric image blocks with stereoscopic overlaps using light-weight UAV platforms. This new technology is highly relevant, because it opens up new possibilities for measuring and monitoring the environment, which is becoming increasingly important for many environmental challenges. Our objectives were to investigate the processing and use of this new type of image data in precision agriculture. We developed the entire processing chain from raw images up to georeferenced reflectance images, digital surface models and biomass estimates. The processing integrates photogrammetric and quantitative remote sensing approaches. We carried out an empirical assessment using FPI spectral imagery collected at an agricultural wheat test site in the summer of 2012. Poor weather conditions during the campaign complicated the data processing, but this is one of the challenges that are faced in operational applications. The <s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> Abstract We discuss the evolution and state-of-the-art of the use of Unmanned Aerial Systems (UAS) in the field of Photogrammetry and Remote Sensing (PaRS). 
UAS, Remotely-Piloted Aerial Systems, Unmanned Aerial Vehicles or simply, drones are a hot topic comprising a diverse array of aspects including technology, privacy rights, safety and regulations, and even war and peace. Modern photogrammetry and remote sensing identified the potential of UAS-sourced imagery more than thirty years ago. In the last five years, these two sister disciplines have developed technology and methods that challenge the current aeronautical regulatory framework and their own traditional acquisition and processing methods. Naivety and ingenuity have combined off-the-shelf, low-cost equipment with sophisticated computer vision, robotics and geomatic engineering. The results are cm-level resolution and accuracy products that can be generated even with cameras costing a few-hundred euros. In this review article, following a brief historic background and regulatory status analysis, we review the recent unmanned aircraft, sensing, navigation, orientation and general data processing developments for UAS photogrammetry and remote sensing with emphasis on the nano-micro-mini UAS segment. <s> BIB009 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> Abstract This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. First, we describe and apply methods to radiometrically characterize and calibrate these cameras. Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion.
The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52 – 59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture. The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand. <s> BIB010 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> The fields of tropical biology and conservation face significant transformations due to rapid technological developments in remote sensing. Other fields (e.g. Archeology) are experiencing this momentous change even more rapidly. 
In this article, we review some of the challenges that the fields of tropical biology and conservation face during the first quarter of the twenty-first century from the perspective of various remote sensing technologies, and discuss the transformations that they may bring to these disciplines. In addition, we review two emerging technologies driving paradigm changes in the nexus of ecology, remote sensing, and analytics: near-surface remote sensing and Wireless Sensor Networks. These two technologies, arising from the eScience paradigm, offer unique opportunities to integrate field observations at hyper-temporal and spatial resolutions that were not possible as recently as 5 years ago. <s> BIB011 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> In just the past five years, the field of Earth observation has evolved from the relatively staid approaches of government space agencies into a plethora of sensing opportunities afforded by CubeSats, Unmanned Aerial Vehicles (UAVs), and smartphone technologies that have been embraced by both for-profit companies and individual researchers. Over the previous decades, space agency efforts have brought forth well-known and immensely useful satellites such as the Landsat series and the Gravity Research and Climate Experiment (GRACE) system, with costs typically on the order of one billion dollars per satellite and with concept-to-launch timelines on the order of two decades (for new missions). More recently, the proliferation of smartphones has helped to miniaturise sensors and energy requirements, facilitating advances in the use of CubeSats that can be launched by the dozens, while providing 3–5 m resolution sensing of the Earth on a daily basis. 
Start-up companies that did not exist five years ago now operate more satellites in orbit than any space agency and at costs that are a mere fraction of an agency mission. With these advances come new space-borne measurements, such as high-definition video for understanding real-time cloud formation, storm development, flood propagation, precipitation tracking, or for constructing digital surfaces using structure-from-motion techniques. Closer to the surface, measurements from small unmanned drones and tethered balloons have mapped snow depths, floods, and estimated evaporation at sub-meter resolution, pushing back on spatiotemporal constraints and delivering new process insights. At ground level, precipitation has been measured using signal attenuation between antennae mounted on cell phone towers, while the proliferation of mobile devices has enabled citizenscience to record photos of environmental conditions, estimate daily average temperatures from battery state, and enable the measurement of other hydrologically important variables such as channel depths using commercially available wireless devices. Global internet access is being pursued via high altitude balloons, solar planes, and hundreds of planned satellite launches, providing a means to exploit the Internet of Things as a new measurement domain. Such global access will enable real-time collection of data from billions of smartphones or from remote research platforms. This future will produce petabytes of data that can only be accessed via cloud storage and will require new analytical approaches to interpret. The extent to which today's hydrologic models can usefully ingest such massive data volumes is not clear. Nor is it clear whether this deluge of data will be usefully exploited, either because the measurements are superfluous, inconsistent, not accurate enough, or simply because we lack the capacity to process and analyse them. 
What is apparent is that the tools and techniques afforded by this array of novel and game-changing sensing platforms presents our community with a unique opportunity to develop new insights that advance fundamental aspects of the hydrological sciences. To accomplish this will require more than just an application of the technology: in some cases, it will demand a radical rethink on how we utilise and exploit these new observation platforms to enhance our understanding of the Earth system. <s> BIB012 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> Traditional imagery—provided, for example, by RGB and/or NIR sensors—has proven to be useful in many agroforestry applications. However, it lacks the spectral range and precision to profile materials and organisms that only hyperspectral sensors can provide. This kind of high-resolution spectroscopy was firstly used in satellites and later in manned aircraft, which are significantly expensive platforms and extremely restrictive due to availability limitations and/or complex logistics. More recently, UAS have emerged as a very popular and cost-effective remote sensing technology, composed of aerial platforms capable of carrying small-sized and lightweight sensors. Meanwhile, hyperspectral technology developments have been consistently resulting in smaller and lighter sensors that can currently be integrated in UAS for either scientific or commercial purposes. The hyperspectral sensors’ ability for measuring hundreds of bands raises complexity when considering the sheer quantity of acquired data, whose usefulness depends on both calibration and corrective tasks occurring in pre- and post-flight stages. 
Further steps regarding hyperspectral data processing must be performed towards the retrieval of relevant information, which provides the true benefits for assertive interventions in agricultural crops and forested areas. Considering the aforementioned topics and the goal of providing a global view focused on hyperspectral-based remote sensing supported by UAV platforms, a survey including hyperspectral sensors, inherent data processing and applications focusing both on agriculture and forestry—wherein the combination of UAV and hyperspectral sensors plays a center role—is presented in this paper. Firstly, the advantages of hyperspectral data over RGB imagery and multispectral data are highlighted. Then, hyperspectral acquisition devices are addressed, including sensor types, acquisition modes and UAV-compatible sensors that can be used for both research and commercial purposes. Pre-flight operations and post-flight pre-processing are pointed out as necessary to ensure the usefulness of hyperspectral data for further processing towards the retrieval of conclusive information. With the goal of simplifying hyperspectral data processing—by isolating the common user from the processes’ mathematical complexity—several available toolboxes that allow a direct access to level-one hyperspectral data are presented. Moreover, research works focusing the symbiosis between UAV-hyperspectral for agriculture and forestry applications are reviewed, just before the paper’s conclusions. 
<s> BIB013 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> The aim of this study is twofold: first, to present a survey of the actual and most advanced methods related to the use of unmanned aerial systems UASs that emerged in the past few years due to the technological advancements that allowed the miniaturization of components, leading to the availability of small-sized unmanned aerial vehicles UAVs equipped with Global Navigation Satellite Systems GNSS and high quality and cost-effective sensors; second, to advice the target audience – mostly farmers and foresters – how to choose the appropriate UAV and imaging sensor, as well as suitable approaches to get the expected and needed results of using technological tools to extract valuable information about agroforestry systems and its dynamics, according to their parcels’ size and crop’s types.Following this goal, this work goes beyond a survey regarding UAS and their applications, already made by several authors. It also provides recommendations on how to choose both the best sensor and UAV, in according with the required application. Moreover, it presents what can be done with the acquired sensors’ data through theuse of methods, procedures, algorithms and arithmetic operations. Finally, some recent applications in the agroforestry research area are presented, regarding the main goal of each analysed studies, the used UAV, sensors, and the data processing stage to reach conclusions. <s> BIB014 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Introduction <s> Abstract With the increasing availability of spectral sensors and consumer-grade data processing software, a democratization of imaging spectroscopy is taking place. 
In particular, novel lightweight 2D spectral imagers in combination with UAVs are increasingly being adapted for imaging spectroscopy. In contrast to traditional line-scanners, these sensors capture spectral information as a 2D image within every exposure. With computer vision algorithms embedded in consumer grade software packages, these data can be processed to hyperspectral digital surface models that hold spectral and 3D spatial information in very high resolution. To understand the spectral signal, however, one must comprehend the complexity of the capturing and data processing process in imaging spectroscopy with 2D imagers. This study establishes the theoretical background to comprehend the properties of spectral data acquired with 2D imagers and investigates how different data processing schemes influence the data. To improve the interpretability of a spectral signal derived for an area of interest (AOI), the specific field of view is introduced as a concept to understand the composition of pixels and their angular properties used to characterize a specific AOI within a remote sensing scene. These considerations are applied to a multi-temporal field study carried out under different illumination conditions in a barley field phenotyping experiment. It is shown that data processing significantly affects the angular properties of the spectral data and influences the apparent spectral signature. The largest differences are found in the red domain, where the signal differs by approximately 10% relative to a single nadir image. Even larger differences of approximately 14% are found in comparison with ground-based non-imaging field spectrometer measurements. The differences are explained by investigating the interaction between the angular properties of the data and canopy anisotropy, which are wavelength and growth stage dependent. 
Additionally, it is shown that common vegetation indices cannot normalize the differences and that the retrieval of chlorophyll is affected. In conclusion, this study helps to understand the process of imaging spectroscopy with 2D imagers and provides recommendations for future missions. <s> BIB015
Over the past decade, the number of applications of unmanned aerial vehicles (UAVs, also referred to as drones, unmanned aerial/aircraft systems (UAS), or remotely piloted aircraft systems (RPAS)) has exploded. Already in 2008, unmanned robots were envisioned to bring about a new era in agriculture. Recent studies have shown that UAV remote sensing techniques are revolutionizing forest studies BIB011 , spatial ecology BIB007 , ecohydrology BIB012 [5] and other environmental monitoring applications. The main driver for this revolution is the fast pace of technological advances and the miniaturization of sensors, airframes, and software [7] . A wide range of UAV platforms and sensors have been developed in the last decade. They have given individual scientists, small teams, and the commercial sector the opportunity to repeatedly obtain low-cost imagery at ultra-high spatial resolutions (1 cm to 1 m), tailored to specific areas, products, and delivery times BIB007 BIB009 BIB004 . Moreover, computing power and easy-to-use consumer-grade software packages, which include modern computer vision and photogrammetry algorithms such as structure from motion (SfM) BIB002 , are becoming cheaper and available to many users. Before the era of UAVs, the majority of spectral datasets were produced by external data suppliers (companies or institutions) in a standardized way, using a few types of sensors on board satellites and manned aircraft. Today, research teams own or even build their own sensing systems and process their data themselves, without the need for external data suppliers. Technology is developing rapidly and offering new types of sensors. This diversification makes data quality assurance considerations even more critical, in particular for quantitative and spectral remote sensing approaches, given the complexity of the geometric and radiometric corrections required for accurate spectroscopy-focused environmental remote sensing.
Spectral remote sensing gathers information by measuring the radiance emitted (e.g., in the case of chlorophyll fluorescence), reflected, and transmitted from particles, objects, or surfaces. However, this information is influenced by environmental conditions (mainly the illumination conditions) and modified by the sensor, the measurement protocol, and the data-processing procedure. Thus, it is critical to understand the full sensing process, since undesired effects during data acquisition and processing may have a significant impact on the confidence of decisions made using the data BIB001 . Moreover, such an understanding is also a prerequisite for later using pixels to understand the biological processes of the Earth system (cf. BIB003 ). Recently, several papers have reviewed the literature on UAV technology and its application in Earth observation [7-10, BIB013 BIB014 BIB006 ]. With the issues potentially arising from an increasing diversity of small spectral sensors for UAV remote sensing, there is also a growing need to spread knowledge on sensor technology, data acquisition protocols, and data processing. Thus, the objective of this review is to describe and discuss recent spectral UAV sensing technology, its integration on UAV platforms, and geometric and radiometric data-processing procedures for spectral data captured by UAV sensing systems, based on the literature but also on more than a decade of our own experiences. Our aim is to follow the signal through the sensing process ( Figure 1 ) and discuss important steps to acquire reliable data with UAV spectral sensing. Additionally, we reflect on the current revolution in remote sensing to identify trends and potentials.

Figure 1 . The path of information from a particle (e.g., pigments within the leaf), object, or surface to the data product. The spectral signal is influenced by the environment, the sensor, the measurement protocol, and data processing on the path to its representation as a pixel in a data product. In combination with metadata, this representation becomes information.

This review is structured as follows. Different technical solutions for spectral UAV sensor technology are described in Section 2. The geometric and radiometric processing steps are elaborated in Sections 3 and 4, respectively. In Section 5, we build a more complete picture of significant

2D spectral imagers record spectral data in two spatial dimensions within every exposure. This has opened up new ways of imaging spectroscopy BIB015 , since computer vision algorithms can be used to compose a scene from individual images, and spectral and 3D information can be retrieved from the same data and composed into (hyper)spectral digital surface models BIB010 BIB008 .
Since BIB010 first attempted to categorize 2D imagers (then commonly referred to as image-frame cameras or central perspective images BIB005 ), new technologies have appeared. Today, 2D imagers exist that record the spectral bands either sequentially or all at once within a snapshot. In addition, multi-camera systems record spectral bands synchronously with several cameras. In the following sections, these different technologies are reviewed.
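As a minimal illustration of the radiometric corrections discussed above, many UAV workflows convert raw sensor digital numbers (DN) to surface reflectance with the empirical line method: a linear mapping fitted against calibration panels of known reflectance imaged in the scene. The panel DN and reflectance values below are hypothetical; this is a sketch of the general technique, not any specific sensor's procedure.

```python
def fit_empirical_line(panel_dns, panel_reflectances):
    """Least-squares fit of reflectance = gain * DN + offset
    from reference panels of known reflectance (per band)."""
    n = len(panel_dns)
    mx = sum(panel_dns) / n
    my = sum(panel_reflectances) / n
    sxx = sum((x - mx) ** 2 for x in panel_dns)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(panel_dns, panel_reflectances))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# Hypothetical dark (5% reflectance) and bright (50%) panels in one band.
gain, offset = fit_empirical_line([1200.0, 9300.0], [0.05, 0.50])
reflectance = gain * 5000.0 + offset  # convert a pixel DN to reflectance
```

In practice, the fit is done per spectral band, and panels should bracket the scene's brightness range so the mapping is interpolated rather than extrapolated.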
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Multi-Camera 2D Imagers <s> Abstract The characterization of field properties related to water in heterogeneous open canopies is limited by the lack of spatial resolution of satellite-based imagery and by the physical constraints of point observations on the ground. We apply here models based on canopy temperature estimated from high resolution airborne imagery to calculate tree canopy conductance (Gc) and the crop water stress index (CWSI) of heterogeneous olive orchards. The Gc model requires the simulation of net radiation (Rn) and the aerodynamic resistance (ra) as a function of windspeed and canopy structure. In both cases, the Rn and ra models were tested against measurements and published data for olive orchards. Modeled values of Gc of trees varying in water status correlated well with Gc estimates obtained from stomatal conductance measurements in the same trees. The model used to calculate the Crop Water Stress Index (CWSI) took into account, not only the vapor pressure deficit but the Rn and the windspeed as well, parameters known to affect the temperature differences between the air and the tree canopy. The calculated CWSI for water deficit and well irrigated olive trees correlated with the water potential measured on the same trees. The methodology applied in this manuscript was used to validate the estimation of theoretical baselines needed for the CWSI calculations, comparing against traditional empirical baseline determination. High resolution thermal imagery obtained with the Airborne Hyperspectral Scanner (AHS), and from an Unmanned Aerial Vehicle (UAV) for two years was used to map Gc and CWSI of an olive orchard where different irrigation treatments were applied. 
The methodology developed here enables the spatial analysis of water use within heterogeneous orchards, and the field characterization of water stress, leading to potential applications in the improvement of orchard irrigation management using high resolution thermal remote sensing imagery. <s> BIB001 </s> Abstract This paper presents a methodology for water stress detection in crop canopies using a radiative transfer modelling approach and the Photochemical Reflectance Index (PRI). Airborne imagery was acquired with a 6-band multispectral camera yielding 15 cm spatial resolution and 10 nm FWHM over 3 crops comprising two tree-structured orchards and a corn field. The methodology is based on the PRI as a water stress indicator, and a radiative transfer modelling approach to simulate PRI baselines for non-stress conditions as a function of leaf structure, chlorophyll concentration (Cab), and canopy leaf area index (LAI). The simulation work demonstrates that canopy PRI is affected by structural parameters such as LAI, Cab, leaf structure, background effects, viewing angle and sun position. The modelling work accounts for such leaf biochemical and canopy structural inputs to simulate the PRI-based water stress thresholds for non-stress conditions. Water stress levels are quantified by comparing the image-derived PRI and the simulated non-stress PRI (sPRI) obtained through radiative transfer. PRI simulation was conducted using the coupled PROSPECT-SAILH models for the corn field, and the PROSPECT leaf model coupled with FLIGHT 3D radiative transfer model for the olive and peach orchards.
Results obtained confirm that PRI is a pre-visual indicator of water stress, yielding good relationships for the three crops studied with canopy temperature, an indicator of stomatal conductance (r2 = 0.65 for olive, r2 = 0.8 for peach, and r2 = 0.72 for maize). PRI values of deficit irrigation treatments in olive and peach were consistently higher than the modelled PRI for the study sites, yielding relationships with water potential (r2 = 0.84) that enabled the identification of stressed crowns accounting for within-field LAI and Cab variability. The methodology presented here for water stress detection is based on the visible part of the spectrum, and therefore it has important implications for remote sensing applications in agriculture. This method may be a better alternative to using the thermal region, which has limitations for the operational acquisition of high spatial resolution thermal imagery. <s> BIB002 </s> Abstract This paper deals with the monitoring of water status and the assessment of the effect of stress on citrus fruit quality using structural and physiological remote sensing indices. Four flights were conducted over a citrus orchard in 2009 using an unmanned aerial vehicle (UAV) carrying a multispectral camera with six narrow spectral bands in the visible and near infrared.
Physiological indices such as the Photochemical Reflectance Index (PRI570), a new structurally robust PRI formulation that uses the 515 nm band as the reference (PRI515), and a chlorophyll ratio (R700/R670) were compared against the Normalized Difference Vegetation Index (NDVI), Renormalized Difference Vegetation Index (RDVI) and Modified Triangular Vegetation Index (MTVI) canopy structural indices for their performance in tracking water status and the effects of sustained water stress on fruit quality at harvest. The irrigation setup in the commercial orchard was compared against a treatment scheduled to satisfy full requirements (based on estimated crop evapotranspiration) using two regulated deficit irrigation (RDI) strategies. The water status of the trees throughout the experiment was monitored with frequent field measurements of stem water potential (Ψx), while titratable acidity (TA) and total soluble solids (TSS) were measured at harvest on selected trees from each irrigation treatment. The high spatial resolution of the multispectral imagery (30 cm pixel size) enabled identification of pure tree crown components, extracting the tree reflectance from shaded, sunlit and aggregated pixels. The physiological and structural indices were then calculated from each tree at the following levels: (i) pure sunlit tree crown, (ii) entire crown, aggregating the within-crown shadows, and (iii) simulating a lower resolution pixel, including tree crown, sunlit and shaded soil pixels. The resulting analysis demonstrated that both PRI formulations were able to track water status, except when water stress altered canopy structure. In such cases, PRI570 was more affected than PRI515 by the structural changes caused by sustained water stress throughout the season. Both PRI formulations were proven to serve as pre-visual water stress indicators linked to fruit quality TSS and TA parameters (r2 = 0.69 for PRI515 vs TSS; r2 = 0.58 vs TA).
In contrast, the chlorophyll (R700/R670) and structural indices (NDVI, RDVI, MTVI) showed poor relationships with fruit quality and water status levels (r2 = 0.04 for NDVI vs TSS; r2 = 0.19 vs TA). The two PRI formulations showed strong relationships with the field-measured fruit quality parameters in September, the beginning of stage III, which appeared to be the period most sensitive to water stress and the most critical for assessing fruit quality in citrus. Both PRI515 and PRI570 showed similar performance for the two scales assessed (sunlit crown and entire crown), demonstrating that within-crown component separation is not needed in citrus tree crowns where the shaded vegetation component is small. However, the simulation conducted through spatial resampling on tree + soil aggregated pixels revealed that the physiological indices were highly affected by soil reflectance and between-tree shadows, showing that for TSS vs PRI515 the relationship dropped from r2 = 0.69 to r2 = 0.38 when aggregating soil + crown components. This work confirms a previous study that demonstrated the link between PRI570, water stress, and fruit quality, while also making progress in assessing the new PRI formulation (PRI515), the within-crown shadow effects on the physiological indices, and the need for high resolution imagery to target individual tree crowns for the purpose of evaluating the effects of water stress on fruit quality in citrus. <s> BIB003 </s> Unmanned aerial vehicles (UAVs) represent a quickly evolving technology, broadening the availability of remote sensing tools to small-scale research groups across a variety of scientific fields.
Development of UAV platforms requires broad technical skills covering platform development, data post-processing, and image analysis. UAV development is constrained by a need to balance technological accessibility, flexibility in application and quality in image data. In this study, the quality of UAV imagery acquired by a miniature 6-band multispectral imaging sensor was improved through the application of practical image-based sensor correction techniques. Three major components of sensor correction were focused upon: noise reduction, sensor-based modification of incoming radiance, and lens distortion. Sensor noise was reduced through the use of dark offset imagery. Sensor modifications through the effects of filter transmission rates, the relative monochromatic efficiency of the sensor and the effects of vignetting were removed through a combination of spatially/spectrally dependent correction factors. Lens distortion was reduced through the implementation of the Brown–Conrady model. Data post-processing serves dual roles in data quality improvement, and the identification of platform limitations and sensor idiosyncrasies. The proposed corrections improve the quality of the raw multispectral imagery, facilitating subsequent quantitative image analysis. <s> BIB004 </s> Abstract This work advances the evaluation and interpretation of the Photochemical Reflectance Index (PRI) as an indicator of water stress, over a range of canopy structures and pigment content levels. Very high resolution (VHR) narrow-band multispectral (10 cm) and thermal (20 cm) imagery was acquired diurnally, in four airborne campaigns conducted over an experimental vineyard site undergoing three different irrigation treatments.
Field measurements of leaf stomatal conductance (Gs) and leaf water potential (Ψleaf) were acquired concurrently with the airborne campaigns and compared against the Crop Water Stress Index (CWSI), a widely accepted, thermal-based indicator of water stress, and against narrow-band multispectral indices calculated from pure-vegetation pixels. The study proposes a new formulation, a normalized PRI (PRInorm), in which the standard PRI index is normalized by an index that is sensitive to canopy structure (Renormalized Difference Vegetation Index, RDVI) and by a red edge index that is sensitive to chlorophyll content (R700/R670). The hypothesis investigated is that the new index, calculated as PRInorm = PRI/[RDVI · R700/R670], not only detects xanthophyll pigment changes as a function of water stress, but also normalizes for the chlorophyll content level and canopy leaf area reduction induced by stress. Results demonstrated that when comparing PRInorm against stomatal conductance (r2 = 0.79; r2 = 0.77; r2 = 0.52 and 0.49, respectively). Further, when using the four flights conducted during the diurnal experiment, the relationships with stomatal conductance also showed the superior performance of PRInorm (r2 = 0.68) as opposed to PRI (r2 = 0.4). The proposed normalized PRI was highly related (r2 = 0.75; r2 = 0.58) than that obtained for PRInorm. In summary, this study demonstrates that PRInorm isolated better than PRI the physiological changes against a changing background of altered pigments and structure, tracking more precisely the diurnal dynamics of the stomatal aperture. Simulations conducted, using leaf and canopy radiative transfer models to elucidate these results, showed that PRInorm is more linearly related to canopy pigment content than the standard PRI, and was more capable of differentiating between stress levels, providing better insight into the results of this diurnal study.
<s> BIB005 </s> An automatic thresholding algorithm was developed in an OBIA framework. The algorithm was tested in UAV images acquired on different herbaceous row crops. The main objective was to accurately discriminate vegetation vs bare soil. Classification accuracies of about 90% were achieved. Two cameras were tested on board the UAV: visible, and visible + infrared. In precision agriculture, detecting the vegetation in herbaceous crops in early season is a first and crucial step prior to addressing further objectives such as counting plants for germination monitoring, or detecting weeds for early season site specific weed management. The ultra-high resolution of UAV images, and the powerful tools provided by Object Based Image Analysis (OBIA), are key to achieving this objective. The present research work develops an innovative thresholding OBIA algorithm based on Otsu's method, and studies how the results of this algorithm are affected by the different segmentation parameters (scale, shape and compactness). Along with the general description of the procedure, it was specifically applied for vegetation detection in remotely-sensed images captured with two sensors (a conventional visible camera and a multispectral camera) mounted on an Unmanned Aerial Vehicle (UAV) and acquired over fields of three different herbaceous crops (maize, sunflower and wheat). The tests analyzed the performance of the OBIA algorithm for classifying vegetation coverage as affected by different automatically selected thresholds calculated in the images of two vegetation indices: the Excess Green (ExG) and the Normalized Difference Vegetation Index (NDVI).
The segmentation scale parameter affected the vegetation index histograms, which led to changes in the automatic estimation of the optimal threshold value for the vegetation indices. The other parameters involved in the segmentation procedure (i.e., shape and compactness) showed minor influence on the classification accuracy. As object size increased, the classification error diminished until an optimum was reached. After this optimal value, increasing object size produced bigger errors. <s> BIB006 </s> The problem of constructing a weed mapping model via machine learning techniques is assessed. The combination of spectral properties with vegetation indexes and crop rows helps the prediction. A semi-supervised classifier has been proved to perform well for the classification problem assessed, with very little information provided by the user. An extended experimental design for weed mapping could be performed considering other crops. This paper presents a system for weed mapping, using imagery provided by unmanned aerial vehicles (UAVs). Weed control in precision agriculture is based on the design of site-specific control treatments according to weed coverage. A key component is precise and timely weed maps, and one of the crucial steps is weed monitoring, by ground sampling or remote detection. Traditional remote platforms, such as piloted planes and satellites, are not suitable for early weed mapping, given their low spatial and temporal resolutions. Nonetheless, the ultra-high spatial resolution provided by UAVs can be an efficient alternative. The proposed method for weed mapping partitions the image and complements the spectral information with other sources of information.
Apart from the well-known vegetation indexes, which are commonly used in precision agriculture, a method for crop row detection is proposed. Given that crops are always organised in rows, this kind of information simplifies the separation between weeds and crops. Finally, the system incorporates classification techniques for the characterisation of pixels as crop, soil and weed. Different machine learning paradigms are compared to identify the best performing strategies, including unsupervised, semi-supervised and supervised techniques. The experiments study the effect of the flight altitude and the sensor used. Our results show that an excellent performance is obtained using very few labelled data complemented with unlabelled data (semi-supervised approach), which motivates the use of weed maps to design site-specific weed control strategies just when farmers implement the early post-emergence weed control. <s> BIB007 </s> The study introduces a prototype multispectral camera system for aerial estimation of above-ground biomass and nitrogen (N) content in winter wheat (Triticum aestivum L.). The system is fully programmable and designed as a lightweight payload for unmanned aircraft systems (UAS). It is based on an industrial multi-sensor camera and a customizable image processing routine. The system was tested in a split fertilized N field trial at different growth stages in between the end of stem elongation and the end of anthesis. The acquired multispectral images were processed to normalized difference vegetation index (NDVI) and red-edge inflection point (REIP) orthoimages for an analysis with simple linear regression models.
The best results for the estimation of above-ground biomass were achieved with the NDVI (R2 = 0.72–0.85, RMSE = 12.3%–17.6%), whereas N content was estimated best with the REIP (R2 = 0.58–0.89, RMSE = 7.6%–11.7%). Moreover, NDVI and REIP predicted grain yield at a high level of accuracy (R2 = 0.89–0.94, RMSE = 9.0%–12.1%). Grain protein content could be predicted best with the REIP (R2 = 0.76–0.86, RMSE = 3.6%–4.7%), with the limitation of prediction inaccuracies for N-deficient canopies. <s> BIB008 </s> Abstract Research into remote sensing tools for monitoring physiological stress caused by biotic and abiotic factors is critical for maintaining healthy and highly-productive plantation forests. Significant research has focussed on assessing forest health using remotely sensed data from satellites and manned aircraft. Unmanned aerial vehicles (UAVs) may provide new tools for improved forest health monitoring by providing data with very high temporal and spatial resolutions. These platforms also pose unique challenges and methods for health assessments must be validated before use. In this research, we simulated a disease outbreak in mature Pinus radiata D. Don trees using targeted application of herbicide. The objective was to acquire a time-series simulated disease expression dataset to develop methods for monitoring physiological stress from a UAV platform. Time-series multi-spectral imagery was acquired using a UAV flown over a trial at regular intervals. Traditional field-based health assessments of crown health (density) and needle health (discolouration) were carried out simultaneously by experienced forest health experts.
Our results showed that multi-spectral imagery collected from a UAV is useful for identifying physiological stress in mature plantation trees even during the early stages of tree stress. We found that physiological stress could be detected earliest in data from the red edge and near infra-red bands. In contrast to previous findings, red edge data did not offer earlier detection of physiological stress than the near infra-red data. A non-parametric approach was used to model physiological stress based on spectral indices and was found to provide good classification accuracy (weighted kappa = 0.694). This model can be used to map physiological stress based on high-resolution multi-spectral data. <s> BIB009 </s> Abstract Unmanned Aerial Vehicle (UAV) remote sensing has opened the door to new sources of data to effectively characterize vegetation metrics at very high spatial resolution and at flexible revisit frequencies. Successful estimation of the leaf area index (LAI) in precision agriculture with a UAV image has been reported in several studies. However, in most forests, the challenges associated with the interference from a complex background and a variety of vegetation species have hindered research using UAV images. To the best of our knowledge, very few studies have mapped the forest LAI with a UAV image. In addition, the drawbacks and advantages of estimating the forest LAI with UAV and satellite images at high spatial resolution remain a knowledge gap in existing literature. Therefore, this paper aims to map LAI in a mangrove forest with a complex background and a variety of vegetation species using a UAV image and compare it with a WorldView-2 image (WV2).
In this study, three representative NDVIs, average NDVI (AvNDVI), vegetated specific NDVI (VsNDVI), and scaled NDVI (ScNDVI), were acquired with UAV and WV2 to predict the plot level (10 × 10 m) LAI. The results showed that AvNDVI achieved the highest accuracy for WV2 (R2 = 0.778, RMSE = 0.424), whereas ScNDVI obtained the optimal accuracy for UAV (R2 = 0.817, RMSE = 0.423). In addition, an overall comparison of the WV2- and UAV-derived LAIs indicated that UAV obtained a better accuracy than WV2 in the plots that were covered with homogeneous mangrove species or in the low LAI plots, which was because UAV can effectively eliminate the influence from the background and the vegetation species owing to its high spatial resolution. However, WV2 obtained a slightly higher accuracy than UAV in the plots covered with a variety of mangrove species, which was attributed to the UAV sensor's less favourable spectral response function (SRF) compared with WV2 for mangrove LAI estimation. <s> BIB010 </s> Abstract Flavescence dorée is a grapevine disease affecting European vineyards which has severe economic consequences, and containing its spread is therefore considered a major challenge for viticulture. Flavescence dorée is subject to mandatory pest control including removal of the infected vines and, in this context, automatic detection of Flavescence dorée symptomatic vines by unmanned aerial vehicle (UAV) remote sensing could constitute a key diagnosis instrument for growers. The objective of this paper is to evaluate the feasibility of discriminating the Flavescence dorée symptoms in red and white cultivars from healthy vine vegetation using UAV multispectral imagery.
Exhaustive ground truth data and UAV multispectral imagery (visible and near-infrared domain) were acquired in September 2015 over four selected vineyards in Southwest France. Spectral signatures of healthy and symptomatic plants were studied with a set of 20 variables computed from the UAV images (spectral bands, vegetation indices and biophysical parameters) using univariate and multivariate classification approaches. Best results were achieved with red cultivars (using both univariate and multivariate approaches). For white cultivars, results were not satisfactory with either the univariate or the multivariate approach. Nevertheless, external accuracy assessment shows that, despite misclassification between Flavescence dorée and healthy pixels, an operational Flavescence dorée mapping technique using UAV-based imagery can still be proposed. <s> BIB011
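Most of the studies cited above rely on a small family of narrow-band spectral indices (NDVI, RDVI, PRI and its variants). As a point of reference, a minimal sketch of these calculations from per-band reflectances is given below; the function and variable names are illustrative, and only the PRInorm formulation is taken directly from the cited abstract (BIB005). This is a sketch, not any author's released implementation.

```python
import math

# Narrow-band vegetation indices used in the studies above.
# Inputs are reflectances (0-1) at the wavelengths (nm) named in the arguments.

def ndvi(r800, r670):
    """Normalized Difference Vegetation Index."""
    return (r800 - r670) / (r800 + r670)

def rdvi(r800, r670):
    """Renormalized Difference Vegetation Index (structure-sensitive)."""
    return (r800 - r670) / math.sqrt(r800 + r670)

def pri(r531, r570):
    """Photochemical Reflectance Index (PRI570 formulation)."""
    return (r531 - r570) / (r531 + r570)

def pri_norm(r531, r570, r670, r700, r800):
    """PRInorm = PRI / [RDVI * (R700/R670)], as defined in BIB005."""
    return pri(r531, r570) / (rdvi(r800, r670) * (r700 / r670))
```

The PRI515 variant (BIB003) simply swaps the 570 nm reference band for 515 nm; the same pattern applies.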
A multi-camera 2D imager uses several integrated cameras to record a multispectral or hyperspectral image, typically by placing a filter with a specific wavelength configuration in front of each detector. The first popular camera of this type for UAV applications was the Tetracam MCA, which had four or six cameras. This first, bulky model (2.7 kg), carried on board a helicopter UAV, was used for water stress detection and precision agriculture studies BIB001 BIB003 BIB002 . Further miniaturization of the MCA into the mini-MCA camera enabled its use from lightweight platforms for vegetation detection in herbaceous crops BIB006 and weed mapping BIB007 . However, due to its technical configuration, calibration and post-processing of the data were complex BIB004 . Additionally, the camera had a rolling shutter, meaning that not all parts of the image are recorded at the same time. For moving scenes (e.g., due to the movement of the sensor), this results in rolling-shutter effects that distort the images; rolling-shutter cameras are therefore not suited for taking images during UAV movement. Tetracam's newer Macaw model uses a global shutter instead. Recently, similar but more compact systems have appeared on the market. Among them are the Parrot Sequoia and the MicaSense RedEdge(-M) [51, 52] with four and five spectral bands (blue, green, red, red edge, near-infrared), and the MAIA camera , with nine bands captured by separate imaging sensors that operate simultaneously. Such cameras were used to assess forest health BIB009 , leaf area index in mangrove forests BIB010 and grapevine disease infestation BIB011 . In addition, self-built multi-camera spectral 2D systems have been used to identify water stress BIB005 as well as crop biomass and nitrogen content BIB008 .
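The calibration burden mentioned above BIB004 stems largely from per-camera radiometric corrections such as dark-offset subtraction and vignetting removal. The sketch below shows the basic form of these two steps on a single band; the toy arrays are placeholders for illustration, not actual mini-MCA calibration data:

```python
import numpy as np

def correct_band(raw, dark, flat):
    """Dark-offset subtraction followed by flat-field (vignetting) correction.

    raw  : raw digital numbers for one spectral band
    dark : dark-offset frame (recorded with the lens capped)
    flat : image of a uniformly lit target, encoding per-pixel sensitivity
    """
    signal = raw.astype(float) - dark    # remove sensor dark offset
    gain = flat.astype(float) - dark     # vignetting / sensitivity pattern
    gain /= gain.max()                   # normalize to unity at brightest pixel
    return signal / gain                 # flatten the fall-off

# Toy example: a uniform scene viewed through 20% fall-off in one corner.
dark = np.full((4, 4), 10.0)
vignette = np.ones((4, 4))
vignette[0, 0] = 0.8
flat = dark + 100.0 * vignette           # flat-field of a uniform target
raw = dark + 50.0 * vignette             # uniform scene, vignetted
corrected = correct_band(raw, dark, flat)  # ~50.0 everywhere after correction
```

Lens distortion (the Brown–Conrady step described in BIB004) would follow as a separate geometric resampling and is omitted here.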
<s> Sequential 2D Imagers <s> The transition from film imaging to digital imaging in photogrammetric data capture is opening interesting possibilities for photogrammetric processes. A great advantage of digital sensors is their radiometric potential. This article presents a state-of-the-art review on the radiometric aspects of digital photogrammetric images. The analysis is based on a literature research and a questionnaire submitted to various interest groups related to the photogrammetric process. An important contribution to this paper is a characterization of the photogrammetric image acquisition and image product generation systems. The questionnaire revealed many weaknesses in current processes, but the future prospects of radiometrically quantitative photogrammetry are promising. <s> BIB001 </s> VTT Technical Research Centre of Finland has developed a lightweight Fabry-Perot interferometer-based hyperspectral imager weighing only 400 g, which makes it compatible with various small UAV platforms. The concept of the hyperspectral imager has been published in SPIE Proc. 7474 and 7668. This UAV spectral imager is capable of recording 5 Mpix multispectral data in the wavelength range of 500–900 nm at resolutions of 10–40 nm, Full-Width-Half-Maximum (FWHM). An internal memory buffer allows 16 Mpix of image data to be stored during one image burst. The user can configure the system to take either three 5 Mpix images or up to 54 VGA resolution images with each triggering. Each image contains data from one, two or three wavelength bands which can be separated during post processing.
This allows a maximum of 9 spectral bands to be stored in high spatial resolution mode or up to 162 spectral bands in VGA-mode during each image burst. Image data is stored in a compact flash memory card which provides the mass storage for the imager. The field of view of the system is 26° × 36° and the ground pixel size at 150 m flying altitude is around 40 mm in high-resolution mode. The design, calibration and test flight results will be presented. <s> BIB002 </s> Imaging using lightweight, unmanned airborne vehicles (UAVs) is one of the most rapidly developing fields in remote sensing technology. The new, tunable, Fabry-Perot interferometer-based (FPI) spectral camera, which weighs less than 700 g, makes it possible to collect spectrometric image blocks with stereoscopic overlaps using light-weight UAV platforms. This new technology is highly relevant, because it opens up new possibilities for measuring and monitoring the environment, which is becoming increasingly important for many environmental challenges. Our objectives were to investigate the processing and use of this new type of image data in precision agriculture. We developed the entire processing chain from raw images up to georeferenced reflectance images, digital surface models and biomass estimates. The processing integrates photogrammetric and quantitative remote sensing approaches. We carried out an empirical assessment using FPI spectral imagery collected at an agricultural wheat test site in the summer of 2012. Poor weather conditions during the campaign complicated the data processing, but this is one of the challenges that are faced in operational applications.
<s> BIB003 </s> VTT Technical Research Centre of Finland has developed a spectral imager for the short-wave infrared (SWIR) wavelength range. The spectral imager is based on a tunable Fabry-Perot interferometer (FPI) accompanied by a commercial InGaAs camera. The FPI consists of two dielectric-coated mirrors separated by a tunable air gap. Tuning the air gap also tunes the transmitted wavelength, so the FPI acts as a tunable band-pass filter. The FPI is piezo-actuated and uses three piezo-actuators in a closed capacitive feedback loop for air gap tuning. The FPI has multiple-order transmission bands, which limit the free spectral range. The spectral imager therefore contains two FPIs in a stack, to make it possible to cover the spectral range of 1000–1700 nm. However, in the first tests the imager was used with one FPI and the spectral range was limited to 1100–1600 nm. The spectral resolution of the imager is approximately 15 nm (FWHM). The field of view (FOV) across the flight direction is 30°. The imaging resolution of the spectral imager is 256 × 320 pixels. The focal length of the optics is 12 mm and the F-number is 3.2. This imager was tested in summer 2014 on an unmanned aerial vehicle (UAV), for which the size and mass of the imager were critical. The total mass of the imager is approximately 1200 grams. In the test campaign the spectral imager was used for forest and agricultural imaging. Because the results of the UAV test flights are promising, this technology may in future also be applied to satellite applications.
<s> BIB004 </s> Abstract This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. First, we describe and apply methods to radiometrically characterize and calibrate these cameras. Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion. The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52–59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture.
The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand. <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sequential 2D Imagers <s> Recently, miniaturised hyperspectral sensors operable from small unmanned airborne vehicle platforms have entered the market. The emerging hyperspectral imaging technologies, based on frame cameras and tuneable filters, are attractive alternatives to hyperspectral pushbroom sensors. This paper addresses the geometric calibration process of a hyperspectral frame camera based on a Fabry–Perot interferometer. However, the addition of more optical elements in front of the image sensor can affect the parameters related to the internal geometry of the camera, and a deficiency in knowledge regarding these parameters can have a critical effect on the accuracy of 3D measurements in photogrammetric applications. The experiments focused on assessing the self-calibrating bundle adjustment to verify the behaviour of the interior parameters, considering different spectral bands. The results indicated that the applied self-calibration method can accurately characterise the interior parameters of this camera and that one set of parameters is required for each internal sensor. <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sequential 2D Imagers <s> Miniaturized hyperspectral imaging sensors are becoming available to small unmanned airborne vehicle (UAV) platforms. 
Imaging concepts based on frame format offer an attractive alternative to conventional hyperspectral pushbroom scanners because they enable enhanced processing and interpretation potential by allowing for acquisition of the 3-D geometry of the object and multiple object views together with the hyperspectral reflectance signatures. The objective of this investigation was to study the performance of novel visible and near-infrared (VNIR) and short-wave infrared (SWIR) hyperspectral frame cameras based on a tunable Fabry–Perot interferometer (FPI) in measuring a 3-D digital surface model and the surface moisture of a peat production area. UAV image blocks were captured with ground sample distances (GSDs) of 15, 9.5, and 2.5 cm with the SWIR, VNIR, and consumer RGB cameras, respectively. Georeferencing showed consistent behavior, with accuracy levels better than GSD for the FPI cameras. The best accuracy in moisture estimation was obtained when using the reflectance difference of the SWIR band at 1246 nm and of the VNIR band at 859 nm, which gave a root mean square error (rmse) of 5.21 pp (pp is the mass fraction in percentage points) and a normalized rmse of 7.61%. The results are encouraging, indicating that UAV-based remote sensing could significantly improve the efficiency and environmental safety aspects of peat production. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sequential 2D Imagers <s> A recent revolution in miniaturised sensor technology has provided markets with novel hyperspectral imagers operating in the frame format principle. 
In the case of unmanned aerial vehicle (UAV) based remote sensing, the frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one of the approaches for capturing multi- or hyperspectral frame images. The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as UAV, because the full spectrum recording is carried out in the time-sequential principle. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands, without orientation, are then matched to the oriented bands accounting the 3D scene to provide exterior orientations, and afterwards, hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Perot interferometer (FPI) on board a multicopter and supported by a high spatial resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5-pixel size. The empirical assessment proved the performance and showed that, with the novel method, most parts of the band misalignments were less than the pixel size. 
Furthermore, it was shown that the performance of the band alignment was dependent on the spatial distance from the reference band. <s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sequential 2D Imagers <s> Drone-borne hyperspectral imaging is a new and promising technique for fast and precise acquisition, as well as delivery of high-resolution hyperspectral data to a large variety of end-users. Drones can overcome the scale gap between field and air-borne remote sensing, thus providing high-resolution and multi-temporal data. They are easy to use, flexible and deliver data within cm-scale resolution. So far, however, drone-borne imagery has prominently and successfully been almost solely used in precision agriculture and photogrammetry. Drone technology currently mainly relies on structure-from-motion photogrammetry, aerial photography and agricultural monitoring. Recently, a few hyperspectral sensors became available for drones, but complex geometric and radiometric effects complicate their use for geology-related studies. Using two examples, we first show that precise corrections are required for any geological mapping. We then present a processing toolbox for frame-based hyperspectral imaging systems adapted for the complex correction of drone-borne hyperspectral imagery. The toolbox performs sensor- and platform-specific geometric distortion corrections. Furthermore, a topographic correction step is implemented to correct for rough terrain surfaces. We recommend the c-factor-algorithm for geological applications. To our knowledge, we demonstrate for the first time the applicability of the corrected dataset for lithological mapping and mineral exploration. 
<s> BIB009 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sequential 2D Imagers <s> The aim of this research was to develop a methodology involving aerial surveying using an unmanned aerial system (UAS), processing and analysis of images obtained by a hyperspectral camera, achieving results that enable discrimination and recognition of sugarcane plants infected with mosaic virus. It was necessary to characterize the spectral response of healthy and infected sugarcane plants in order to define the correct mode of operation for the hyperspectral camera, which provides many spectral band options for imaging but limits each image to 25 spectral bands. Spectral measurements of the leaves of infected and healthy sugarcane with a spectroradiometer were used to produce a spectral library. Once the most appropriate spectral bands had been selected, it was possible to configure the camera and carry out aerial surveying. The empirical line approach was adopted to obtain hemispherical conical reflectance factor values with a radiometric block adjustment to produce a mosaic suitable for the analysis. A classification based on spectral information divergence was applied and the results were evaluated by Kappa statistics. Areas of sugarcane infected with mosaic were identified from these hyperspectral images acquired by UAS and the results obtained had a high degree of accuracy. <s> BIB010 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sequential 2D Imagers <s> Viewing and illumination geometry has a strong influence on optical measurements of natural surfaces due to their anisotropic reflectance properties. 
Typically, cameras on-board unmanned aerial vehicles (UAVs) are affected by this because of their relatively large field of view (FOV) and thus large range of viewing angles. In this study, we investigated the magnitude of reflectance anisotropy effects in the 500–900 nm range, captured by a frame camera mounted on a UAV during a standard mapping flight. After orthorectification and georeferencing of the images collected by the camera, we calculated the viewing geometry of all observations of each georeferenced ground pixel, forming a dataset with multi-angular observations. We performed UAV flights on two days during the summer of 2016 over an experimental potato field where different zones in the field received different nitrogen fertilization treatments. These fertilization levels caused variation in potato plant growth and thereby differences in structural properties such as leaf area index (LAI) and canopy cover. We fitted the Rahman–Pinty–Verstraete (RPV) model through the multi-angular observations of each ground pixel to quantify, interpret, and visualize the anisotropy patterns in our study area. The Θ parameter of the RPV model, which controls the proportion of forward and backward scattering, showed strong correlation with canopy cover, where in general an increase in canopy cover resulted in a reduction of backward scattering intensity, indicating that reflectance anisotropy contains information on canopy structure. In this paper, we demonstrated that anisotropy data can be extracted from measurements using a frame camera, collected during a typical UAV mapping flight. Future research will focus on how to use the anisotropy signal as a source of information for estimation of physical vegetation properties. 
<s> BIB011 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sequential 2D Imagers <s> Small unmanned aerial vehicle (UAV) based remote sensing is a rapidly evolving technology. Novel sensors and methods are entering the market, offering completely new possibilities to carry out remote sensing tasks. Three-dimensional (3D) hyperspectral remote sensing is a novel and powerful technology that has recently become available to small UAVs. This study investigated the performance of UAV-based photogrammetry and hyperspectral imaging in individual tree detection and tree species classification in boreal forests. Eleven test sites with 4151 reference trees representing various tree species and developmental stages were collected in June 2014 using a UAV remote sensing system equipped with a frame format hyperspectral camera and an RGB camera in highly variable weather conditions. Dense point clouds were measured photogrammetrically by automatic image matching using high resolution RGB images with a 5 cm point interval. Spectral features were obtained from the hyperspectral image blocks, the large radiometric variation of which was compensated for by using a novel approach based on radiometric block adjustment with the support of in-flight irradiance observations. Spectral and 3D point cloud features were used in the classification experiment with various classifiers. The best results were obtained with Random Forest and Multilayer Perceptron (MLP) which both gave 95% overall accuracies and an F-score of 0.93. Accuracy of individual tree identification from the photogrammetric point clouds varied between 40% and 95%, depending on the characteristics of the area. Challenges in reference measurements might also have reduced these numbers. 
Results were promising, indicating that hyperspectral 3D remote sensing was operational from a UAV platform even in very difficult conditions. These novel methods are expected to provide a powerful tool for automating various environmental close-range remote sensing tasks in the very near future. <s> BIB012 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sequential 2D Imagers <s> In addition to single-angle reflectance data, multi-angular observations can be used as an additional information source for the retrieval of properties of an observed target surface. In this paper, we studied the potential of multi-angular reflectance data for the improvement of leaf area index (LAI) and leaf chlorophyll content (LCC) estimation by numerical inversion of the PROSAIL model. The potential for improvement of LAI and LCC was evaluated for both measured data and simulated data. The measured data was collected on 19 July 2016 by a frame-camera mounted on an unmanned aerial vehicle (UAV) over a potato field, where eight experimental plots of 30 × 30 m were designed with different fertilization levels. Dozens of viewing angles, covering the hemisphere up to around 30° from nadir, were obtained by a large forward and sideways overlap of collected images. Simultaneously to the UAV flight, in situ measurements of LAI and LCC were performed. Inversion of the PROSAIL model was done based on nadir data and based on multi-angular data collected by the UAV. Inversion based on the multi-angular data performed slightly better than inversion based on nadir data, indicated by the decrease in RMSE from 0.70 to 0.65 m2/m2 for the estimation of LAI, and from 17.35 to 17.29 μg/cm2 for the estimation of LCC, when nadir data were used and when multi-angular data were used, respectively. 
In addition to inversions based on measured data, we simulated several datasets at different multi-angular configurations and compared the accuracy of the inversions of these datasets with the inversion based on data simulated at nadir position. In general, the results based on simulated (synthetic) data indicated that when more viewing angles, more well distributed viewing angles, and viewing angles up to larger zenith angles were available for inversion, the most accurate estimations were obtained. Interestingly, when using spectra simulated at multi-angular sampling configurations as were captured by the UAV platform (view zenith angles up to 30°), already a huge improvement could be obtained when compared to solely using spectra simulated at nadir position. The results of this study show that the estimation of LAI and LCC by numerical inversion of the PROSAIL model can be improved when multi-angular observations are introduced. However, for the potato crop, PROSAIL inversion for measured data only showed moderate accuracy and slight improvements. <s> BIB013 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sequential 2D Imagers <s> Forests are the most diverse terrestrial ecosystems and their biological diversity includes trees, but also other plants, animals, and micro-organisms. One-third of the forested land is in boreal zone; therefore, changes in biological diversity in boreal forests can shape biodiversity, even at global scale. Several forest attributes, including size variability, amount of dead wood, and tree species richness, can be applied in assessing biodiversity of a forest ecosystem. Remote sensing offers complimentary tool for traditional field measurements in mapping and monitoring forest biodiversity. 
Recent development of small unmanned aerial vehicles (UAVs) enable the detailed characterization of forest ecosystems through providing data with high spatial but also temporal resolution at reasonable costs. The objective here is to deepen the knowledge about assessment of plot-level biodiversity indicators in boreal forests with hyperspectral imagery and photogrammetric point clouds from a UAV. We applied individual tree crown approach (ITC) and semi-individual tree crown approach (semi-ITC) in estimating plot-level biodiversity indicators. Structural metrics from the photogrammetric point clouds were used together with either spectral features or vegetation indices derived from hyperspectral imagery. Biodiversity indicators like the amount of dead wood and species richness were mainly underestimated with UAV-based hyperspectral imagery and photogrammetric point clouds. Indicators of structural variability (i.e., standard deviation in diameter-at-breast height and tree height) were the most accurately estimated biodiversity indicators with relative RMSE between 24.4% and 29.3% with semi-ITC. The largest relative errors occurred for predicting deciduous trees (especially aspen and alder), partly due to their small amount within the study area. Thus, especially the structural diversity was reliably predicted by integrating the three-dimensional and spectral datasets of UAV-based point clouds and hyperspectral imaging, and can therefore be further utilized in ecological studies, such as biodiversity monitoring. <s> BIB014 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sequential 2D Imagers <s> Abstract Climate-related extended outbreaks and range shifts of destructive bark beetle species pose a serious threat to urban boreal forests in North America and Fennoscandia. 
Recent developments in low-cost remote sensing technologies offer an attractive means for early detection and management of environmental change. They are of great interest to the actors responsible for monitoring and managing forest health. The objective of this investigation was to develop, assess, and compare automated remote sensing procedures based on novel, low-cost hyperspectral imaging technology for the identification of bark beetle infestations at the individual tree level in urban forests. A hyperspectral camera based on a tunable Fabry-Perot interferometer was operated from a small, unmanned airborne vehicle (UAV) platform and a small Cessna-type aircraft platform. This study compared aspects of using UAV datasets with a spatial extent of a few hectares (ha) and a ground sample distance (GSD) of 10–12 cm to the aircraft data covering areas of several km2 and having a GSD of 50 cm. An empirical assessment of the automated identification of mature Norway spruce (Picea abies L. Karst.) trees suffering from infestation (representing different colonization phases) by the European spruce bark beetle (Ips typographus L.) was carried out in the urban forests of Lahti, a city in southern Finland. Individual spruces were classified as healthy, infested, or dead. For the entire test area, the best aircraft data results for overall accuracy were 79% (Cohen’s kappa: 0.54) when using three crown color classes (green as healthy, yellow as infested, and gray as dead). For two color classes (healthy, dead) in the same area, the best overall accuracy was 93% (kappa: 0.77). The finer resolution UAV dataset provided better results, with an overall accuracy of 81% (kappa: 0.70), compared to the aircraft results of 73% (kappa: 0.56) in a smaller sub-area. 
The results showed that novel, low-cost remote sensing technologies based on individual tree analysis and calibrated remote sensing imagery offer great potential for affordable and timely assessments of the health condition of vulnerable urban forests. <s> BIB015
Sequential band systems record bands, or sets of bands, sequentially in time, with a time lag between two consecutive spectral bands. These systems have often been called image-frame sensors BIB005 BIB003 BIB001 . An example of such a system is the Rikola hyperspectral imager by Senop Oy [61], which is based on a tunable Fabry-Pérot interferometer (FPI). The camera weighs 720 g. The desired spectral bands are obtained by scanning the spectral range with different air gap values within the FPI BIB003 BIB002 . The current commercial camera has approximately 1010 × 1010 pixels, providing a vertical and horizontal FOV of 36.5° [61], BIB006 . In total, 380 spectral bands can be selected with a 1-nm spectral step in the spectral range of approximately 500-900 nm, but in typical UAV operation, 50-100 bands are collected. The FWHM increases with the wavelength if the order of interference and the reflectance of the mirrors remain the same. However, in the practical implementation of the Rikola camera, the resulting FWHMs are similar in the visible and near-infrared (NIR) ranges: approximately 5-12 nm. The Rikola HSI records up to 32 individual bands per second; so, for example, a hypercube with 60 freely selectable bands can be captured within a 2-s interval. Recently, a short-wave infrared (SWIR) range prototype camera was developed, with a spectral range of 0.9-1.7 µm and an image size of 320 × 256 pixels BIB007 BIB004 . Benefits of sequential 2D imagers are their comparatively high spatial resolution and the flexibility to choose spectral bands. At the same time, the more bands that are chosen, the longer it takes to record all of them. In mobile applications, the bands in individual cubes have spatial offsets that need to be corrected in post-processing (cf. Section 3.3.2) BIB003 BIB008 BIB009 . The frame rate, exposure time, number of bands, and flying height limit the flight speed in tunable filter-based systems.
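The two practical constraints described above can be made concrete with a short, hypothetical calculation: the FPI transmits constructive-interference peaks at λ = 2nd/m for air gap d, refractive index n, and integer order m (which is why multiple orders can fall inside the sensitive range), and the time-sequential recording of a cube translates directly into an along-track ground offset between the first and last band. All numeric values below (air gap, band count, flight speed) are illustrative assumptions, not the Rikola camera's calibration.

```python
# Sketch of two FPI-camera properties: multiple transmission orders per
# air gap, and the along-track offset accumulated while one cube records.

def fpi_peak_wavelengths_nm(air_gap_nm, n=1.0, lo=500.0, hi=900.0):
    """Constructive-interference peaks lambda = 2*n*d/m within [lo, hi] nm."""
    peaks = []
    m = 1
    while True:
        lam = 2.0 * n * air_gap_nm / m
        if lam < lo:          # higher orders only fall further below the range
            break
        if lam <= hi:
            peaks.append(round(lam, 1))
        m += 1
    return peaks

def band_offset_m(n_bands, bands_per_second, speed_m_s):
    """Ground distance flown while one spectral cube is being recorded."""
    return speed_m_s * n_bands / bands_per_second

# A 900-nm air gap transmits both the 2nd-order (900 nm) and 3rd-order
# (600 nm) peaks, so an order-sorting mechanism must isolate a single band.
print(fpi_peak_wavelengths_nm(900.0))   # -> [900.0, 600.0]

# 60 bands at 32 bands/s take ~1.9 s; at 4 m/s the platform moves 7.5 m,
# which is the band misalignment that post-processing has to correct.
print(band_offset_m(60, 32, 4.0))       # -> 7.5
```

The second function makes explicit why the frame rate, number of bands, and flight speed jointly bound the achievable band co-registration quality.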
The FPI cameras have been used in various environmental remote sensing studies, including precision agriculture BIB003 BIB010 BIB011 BIB013 , peat production area moisture monitoring BIB007 , tree species classification, forest stand parameter estimation, biodiversity assessment BIB012 BIB014 , mineral exploration BIB009 , and detection of insect damage in forests BIB015 .
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Snapshot 2D Imagers <s> The snapshot advantage is a large increase in light collection efficiency available to high-dimensional measurement systems that avoid filtering and scanning. After discussing this advantage in the context of imaging spectrometry, where the greatest effort towards developing snapshot systems has been made, we describe the types of measurements where it is applicable. We then generalize it to the larger context of high-dimensional measurements, where the advantage increases geometrically with measurement dimensionality. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Snapshot 2D Imagers <s> Within the field of spectral imaging, the vast majority of instruments used are scanning devices. Recently, several snapshot spectral imaging systems have become commercially available, providing new functionality for users and opening up the field to a wide array of new applications. A comprehensive survey of the available snapshot technologies is provided, and an attempt has been made to show how the new capabilities of snapshot approaches can be fully utilized. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Snapshot 2D Imagers <s> Abstract This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. First, we describe and apply methods to radiometrically characterize and calibrate these cameras. 
Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion. The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52 – 59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture. The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand. <s> BIB003
Snapshot systems record all of the bands at the same time BIB003 BIB001 BIB002 , which has the advantage that no spatial co-registration needs to be carried out BIB003 . Currently, multi-point and filter-on-chip snapshot systems exist for UAVs. The major advantage of snapshot 2D imagers is that the spatial patterns in each image frame can be used in an SfM workflow. Through the selection of an optimal spectral band or the use of the raw 2D hyperspectral mosaic, the SfM process allows for the extraction and matching of image features. The resulting bundle adjustment will then calculate the position and orientation for each image frame without the need for GNSS/IMU sensors (although the image-matching phase can be assisted with GNSS/IMU observations), which reduces the complexity of the setup (Figure 4). 
Since this approach derives the relative position and orientation of the images, a scene with relative scaling can be generated. For several applications, this is already sufficient, and the approach is appealing, since one can forgo the additional weight and complexity of a GNSS/INS approach. Still, with the aid of an accurate on-board GNSS receiver or GCPs, geometrically accurate orthomosaics can be created (with a typical absolute accuracy of 1-2 pixels). One of the issues with this approach is that 2D imagers tend to have a lower spatial resolution, which can affect the number of matching features found in the SfM process. This can result in poor performance in image matching in complex terrain/vegetation, which has a direct impact on the quality of the spectral orthomosaic. This can be compensated by merging the low-resolution hyperspectral information with, e.g., a higher resolution panchromatic image BIB003 . An additional benefit of 2D imagers is that an initial bundle adjustment can be followed by an optional dense matching approach, which then allows the generation of high-resolution 3D hyperspectral point clouds and surface models (cf. Section 5.4). Remote Sens. 2018, 10, x FOR PEER REVIEW 14 of 42
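The relative scaling mentioned above can be turned into metric scale with only a few GCPs. A minimal sketch (an illustrative assumption, not a procedure taken from the cited workflows; the function name and inputs are hypothetical) estimates the scale factor of a relative SfM reconstruction by comparing pairwise distances between GCPs in model coordinates with the same distances measured in world coordinates:

```python
# Recover the metric scale of a relative SfM model from ground control
# points (GCPs): the ratio of world-space to model-space pairwise
# distances is the missing scale factor; the median makes the estimate
# robust to a single badly measured GCP.
import itertools
import math

def pairwise_scale(model_pts, world_pts):
    """Median ratio of world to model pairwise GCP distances."""
    ratios = []
    for a, b in itertools.combinations(range(len(model_pts)), 2):
        dm = math.dist(model_pts[a], model_pts[b])  # model (relative) units
        dw = math.dist(world_pts[a], world_pts[b])  # metres
        if dm > 0:
            ratios.append(dw / dm)
    ratios.sort()
    return ratios[len(ratios) // 2]

# Three GCPs: the relative model is the world scene shrunk by a factor of 10.
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
world = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 20.0, 0.0)]
print(pairwise_scale(model, world))  # -> 10.0
```

A full georeferencing step would additionally solve for rotation and translation (e.g. a similarity transform over all GCPs), but the scale factor alone already converts relative point clouds and surface models into metric units.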
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Multi-point spectrometer <s> Abstract This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. First, we describe and apply methods to radiometrically characterize and calibrate these cameras. Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion. The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52 – 59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture. 
The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Multi-point spectrometer <s> Leaf area index (LAI) is an important indicator of plant growth and yield that can be monitored by remote sensing. Several models were constructed using datasets derived from SRS and STR sampling methods to determine the optimal model for soybean (multiple strains) LAI inversion for the whole crop growth period and a single growth period. Random forest (RF), artificial neural network (ANN), and support vector machine (SVM) regression models were compared with a partial least-squares regression (PLS) model. The RF model yielded the highest precision, accuracy, and stability with V-R2, SDR2, V-RMSE, and SDRMSE values of 0.741, 0.031, 0.106, and 0.005, respectively, over the whole growth period based on STR sampling. The ANN model had the highest precision, accuracy, and stability (0.452, 0.132, 0.086, and 0.009, respectively) over a single growth phase based on STR sampling. The precision, accuracy, and stability of the RF, ANN, and SVM models were improved by inclusion of STR sampling. The RF model is suitable for estimating LAI when sample plots and variation are relatively large (i.e., the whole growth period or more than one growth period). The ANN model is more appropriate for estimating LAI when sample plots and variation are relatively low (i.e., a single growth period). 
<s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Multi-point spectrometer <s> Abstract With the increasing availability of spectral sensors and consumer-grade data processing software, a democratization of imaging spectroscopy is taking place. In particular, novel lightweight 2D spectral imagers in combination with UAVs are increasingly being adapted for imaging spectroscopy. In contrast to traditional line-scanners, these sensors capture spectral information as a 2D image within every exposure. With computer vision algorithms embedded in consumer grade software packages, these data can be processed to hyperspectral digital surface models that hold spectral and 3D spatial information in very high resolution. To understand the spectral signal, however, one must comprehend the complexity of the capturing and data processing process in imaging spectroscopy with 2D imagers. This study establishes the theoretical background to comprehend the properties of spectral data acquired with 2D imagers and investigates how different data processing schemes influence the data. To improve the interpretability of a spectral signal derived for an area of interest (AOI), the specific field of view is introduced as a concept to understand the composition of pixels and their angular properties used to characterize a specific AOI within a remote sensing scene. These considerations are applied to a multi-temporal field study carried out under different illumination conditions in a barley field phenotyping experiment. It is shown that data processing significantly affects the angular properties of the spectral data and influences the apparent spectral signature. The largest differences are found in the red domain, where the signal differs by approximately 10% relative to a single nadir image. 
Even larger differences of approximately 14% are found in comparison with ground-based non-imaging field spectrometer measurements. The differences are explained by investigating the interaction between the angular properties of the data and canopy anisotropy, which are wavelength and growth stage dependent. Additionally, it is shown that common vegetation indices cannot normalize the differences and that the retrieval of chlorophyll is affected. In conclusion, this study helps to understand the process of imaging spectroscopy with 2D imagers and provides recommendations for future missions. <s> BIB003
Multi-point spectrometers use a beam splitter to divide the 2D image into sections, of which the signal is spread in the spectral domain . An example is the Cubert Firefleye . The camera and its controlling computer weigh about 1 kg and record an image cube of 50 × 50 pixels of spectral data from 450-900 nm with an FWHM of 5 nm (460 nm) to 25 nm (860 nm). Simultaneously, a one-megapixel grey-scale image with the same extent is taken and can be used to collate the images into a full scene BIB001 . Multi-point snapshot cameras have been used to derive chlorophyll BIB003 , plant height BIB001 , and leaf area index BIB002 in crops. The advantage of multi-point 2D imagers is that they record all of the spectral information for each point in the image at the same time, and typical integration times are very short due to the high light throughput. However, the disadvantage is the relatively low spatial resolution of the spectral information.
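To illustrate how such an image cube is typically exploited for the vegetation retrievals cited above (chlorophyll, LAI), a small sketch computing an NDVI-type index from a (rows, cols, bands) cube follows. The band centres, cube values, and choice of index are assumptions for illustration, not the retrieval schemes of the cited studies.

```python
import numpy as np

def nearest_band(wavelengths, target_nm):
    """Index of the band whose centre wavelength is closest to target_nm."""
    return int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))

def ndvi(cube, wavelengths, red_nm=670.0, nir_nm=800.0):
    """NDVI from a (rows, cols, bands) reflectance cube."""
    red = cube[:, :, nearest_band(wavelengths, red_nm)]
    nir = cube[:, :, nearest_band(wavelengths, nir_nm)]
    return (nir - red) / np.maximum(nir + red, 1e-9)

# toy cube mimicking a 50 x 50 pixel multi-point spectrometer frame
wl = np.arange(450.0, 901.0, 4.0)          # assumed band centres, 450-900 nm
cube = np.full((50, 50, wl.size), 0.1)
cube[:, :, nearest_band(wl, 800.0)] = 0.5  # strong NIR reflectance
print(ndvi(cube, wl).mean())
```

Because every pixel carries the full spectrum, such indices can be computed per pixel without any band-to-band co-registration step, which is one of the practical advantages of snapshot imagers.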
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Mosaic filter-on-chip cameras <s> Imec has developed a unique hyperspectral sensor concept in which the spectral unit is monolithically integrated on top of a standard CMOS sensor at wafer level, hence enabling the design of compact, low cost and high speed spectral cameras with a high design flexibility. This paper presents the various demonstrated prototype sensors, with different filter arrangements and performance, linked to different usage modes and application domains. It also reviews the key aspects and challenges of imec's hyperspectral technology. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Mosaic filter-on-chip cameras <s> Aerial hyperspectral remote sensing technologies provide effective methods for the exploration and study of plant and crop properties. In this study a custom made hexacopter was equipped with a small scale hyperspectral imaging (HSI) camera capable of measuring 16 bands in the visible range of the light. From single HSI images geo-rectified and registered maps were calculated and a selection of spectral indices (SIs) calculated from the provided data. The SIs were correlated to crop traits such as leaf nitrogen (Nconc), chlorophyll (CHLtot) and total pigment concentration (Pigmtot), canopy cover (CC) and leaf area index (LAI), measured in the field. The relationships to Nconc and CHLtot are discussed in detail with respect to measurement constraints, such as the interrelationships to LAI and application for precision farming or breeding experiments.
<s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Mosaic filter-on-chip cameras <s> Perception systems for outdoor robotics have to deal with varying environmental conditions. Variations in illumination in particular, are currently the biggest challenge for vision-based perception. In this paper we present an approach for radiometric characterization of multispectral cameras. To enable spatio-temporal mapping we also present a procedure for in-situ illumination estimation, resulting in radiometric calibration of the collected images. In contrast to current approaches, we present a purely data driven, parameter free approach, based on maximum likelihood estimation which can be performed entirely on the field, without requiring specialised laboratory equipment. Our routine requires three simple datasets which are easily acquired using most modern multispectral cameras. We evaluate the framework with a cost-effective snapshot multispectral camera. The results show that our method enables the creation of quantitatively accurate relative reflectance images with challenging on field calibration datasets under a variety of ambient conditions. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Mosaic filter-on-chip cameras <s> Single-sensor color cameras, which classically use a color filter array to sample RGB channels, have recently been extended to the multispectral domain. To sample more than three wavelength bands, such systems use a multispectral filter array that provides a raw image in which a single channel value is available at each pixel. A demosaicing procedure is then needed to estimate a fully defined multispectral image.
In this paper, we review multispectral demosaicing methods and propose a new one based on the pseudo-panchromatic image (PPI). Pixel values in the PPI are computed as the average spectral values. Experimental results show that our method provides estimated images of better quality than classical ones. <s> BIB004
In the mosaic filter-on-chip technology, each pixel carries a spectral filter with a certain transmission, similar to the principle of a Bayer pattern in an RGB camera. The combined information of the pixels within a mosaic or tile then represents the spectral information of the area seen by the tile. The technology is based on a thin wafer on top of a monochromatic complementary metal-oxide semiconductor (CMOS) sensor, in which the wafer contains band pass filters that isolate spectral wavelengths according to the Fabry-Pérot interference principle [81] BIB001 . The wafers are produced in a range of spatial configurations, including linear, mosaic, and tile-based filters. This technique was developed by Imec using the FPI filters to provide different spectral bands [81] . Currently, the chip is available for the range of 470-630 nm in 16 bands (4 × 4 pattern) with a spatial resolution of 512 × 256 pixels, and in the range of 600-1000 nm in 25 bands (5 × 5 pattern) with a spatial resolution of 409 × 216 pixels. The FWHM is below 15 nm for both systems. The two chips are integrated into cameras by several companies, and weigh below 400 g (e.g., ). So far, only first attempts with this new technology have been published BIB002 BIB003 , including a study on how to optimize the demosaicing of the images BIB004 . Recently, Imec has announced a SWIR version of the camera . The advantage of filter-on-chip cameras is that they record all of the bands at the same time. Additionally, they are very light and can be carried by small UAVs. The disadvantage is that each band is measured just once within each tile, and thus accurate spectral information for one band is only available once every few pixels. Currently, this is tackled by slightly defocusing the camera and by interpolation techniques.
They have a higher spatial resolution than multi-point spectrometers, but the radiometric performance of the filter-on-chip technology has not yet reached the quality of the established sensing principles used in point and line scanning devices. This mainly results from the technical challenges of the manufacturing process (i.e., strong variation of the thickness of the filter elements between adjacent pixels) and the novelty of the technique. Figure 2 shows images captured by a sequential 2D imager, a multi-point spectrometer, and a filter-on-chip snapshot camera.
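The tiled sampling described above, where each band is measured only once per tile, can be illustrated with a minimal demosaicing sketch that simply collects the one measured value per band within every tile. The band-to-pixel mapping is an assumption for illustration; real pipelines interpolate (or use schemes such as the PPI-based method of BIB004) rather than block-subsample.

```python
import numpy as np

def demosaic_mosaic(raw, pattern=4):
    """Demosaic a snapshot mosaic frame by block subsampling.

    raw: (rows, cols) raw sensor frame where pixel (i, j) is assumed to
         sample band (i % pattern) * pattern + (j % pattern).
    Returns a (rows // pattern, cols // pattern, pattern**2) cube where
    each output pixel takes the single measured value of every band
    within its tile (real pipelines interpolate instead).
    """
    rows, cols = raw.shape
    nb = pattern * pattern
    cube = np.empty((rows // pattern, cols // pattern, nb))
    for i in range(pattern):
        for j in range(pattern):
            band = i * pattern + j
            cube[:, :, band] = raw[i::pattern, j::pattern]
    return cube

# toy raw frame: each pixel's value equals its band index
raw = np.fromfunction(lambda i, j: (i % 4) * 4 + (j % 4), (8, 8))
cube = demosaic_mosaic(raw)
print(cube.shape)  # (2, 2, 16)
```

The sketch makes the resolution trade-off explicit: a 4 × 4 pattern reduces the spatial resolution of every band by a factor of four in each dimension, which is exactly why interpolation or slight defocusing is needed in practice.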
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spatiospectral filter-on-chip cameras <s> Abstract. This paper gives an overview of the new COmpact hyperSpectral Imaging (COSI) system recently developed at the Flemish Institute for Technological Research (VITO, Belgium) and suitable for remotely piloted aircraft systems. A hyperspectral dataset captured from a multirotor platform over a strawberry field is presented and explored in order to assess spectral bands co-registration quality. Thanks to application of line based interference filters deposited directly on the detector wafer the COSI camera is compact and lightweight (total mass of 500g), and captures 72 narrow (FWHM: 5nm to 10 nm) bands in the spectral range of 600-900 nm. Covering the region of red edge (680 nm to 730 nm) allows for deriving plant chlorophyll content, biomass and hydric status indicators, making the camera suitable for agriculture purposes. Additionally to the orthorectified hypercube digital terrain model can be derived enabling various analyses requiring object height, e.g. plant height in vegetation growth monitoring. Geometric data quality assessment proves that the COSI camera and the dedicated data processing chain are capable to deliver very high resolution data (centimetre level) where spectral information can be correctly derived. Obtained results are comparable or better than results reported in similar studies for an alternative system based on the Fabry–Perot interferometer. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spatiospectral filter-on-chip cameras <s> Abstract. Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. 
Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600–900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2 nd generation, commercially available ButterflEYE camera offering extended spectral range (475–925 nm), and we discuss future work. <s> BIB002
To address the latter, a modified version of the filter-on-chip camera called COSI Cam has been developed . This sensor no longer uses a small number of spectral filters in a tiled or pixel-wise mosaic arrangement. Instead, a larger number of narrow band filters are used, which are sampled densely enough to provide continuous spectral sampling. The filters are arranged in a line-wise fashion, with n lines (a small number, e.g., five or eight) of the same filter next to each other, followed by n lines of the spectrally adjacent filter band. In this arrangement, filters on adjacent pixels vary only slightly in thickness, leading to much cleaner spectral responses than the 4 × 4 and 5 × 5 patterns. The COSI Cam prototype was the first camera using such a chip, capturing more than 100 spectral bands in the range of 600 nm-900 nm.
In a further development, by using two types of filter material on the chip, a larger spectral range of 475 nm-925 nm was achieved (ButterflEYE LS; BIB002 ) with a spectral sampling of less than 2.5 nm. Physically, these cameras are filter-on-chip cameras, but their filter arrangement requires a different mode of operation, which involves scanning over an area and is similar to the operation of a pushbroom camera. Therefore, these sensors are also referred to as spatiospectral scanners. The 2D sensor can be seen as a large array of 1D sensors, each capturing a different spectral band (in fact, n duplicate lines per band). To capture all of the spectral bands at every location, a new image has to be captured every time the platform has moved the equivalent of n lines. This is achieved by limiting the flying speed and operating the camera at a high frame rate (typically 30 frames per second (fps)), which means a larger portion of the flying time is used for collecting information. A specialized processing workflow then generates the full image cube for the scene BIB002 . Due to their improved design, the radiometric quality of the spatiospectral cameras is better than that of the classical filter-on-chip design. At the same time, their data enable the reconstruction of the 3D geometry, similar to other 2D imagers, due to the 2D spatial information within the images. Sima et al. BIB001 showed that a good spatial co-registration can be achieved, which also allows the extraction of digital surface models (DSMs). The drawback of these systems is that they require a large storage capacity and a lower flying speed to obtain full coverage over the target of interest. A further challenge of this kind of sensor is that each band exhibits different anisotropy effects as a result of having a different view angle to the object.
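The coupling between frame rate, flying speed, and the n duplicate filter lines can be expressed as simple arithmetic: full coverage requires a new frame before the platform advances the ground footprint of the n lines. The example numbers (30 fps, 2-cm ground sampling distance, n = 5) are assumptions for illustration, not specifications of the COSI Cam.

```python
def max_flying_speed(frame_rate_hz, gsd_m, n_lines):
    """Maximum ground speed so every band covers every location.

    A new frame is needed each time the platform advances the ground
    footprint of the n duplicate filter lines, i.e.
        speed <= frame_rate * n_lines * gsd.
    """
    return frame_rate_hz * n_lines * gsd_m

# assumed example values: 30 fps camera, 2-cm GSD, 5 duplicate lines
print(max_flying_speed(30.0, 0.02, 5))  # 3.0 m/s
```

This is why spatiospectral scanners constrain mission planning: halving the ground sampling distance, or doubling the number of bands on a fixed sensor height (fewer duplicate lines per band), halves the permissible flying speed.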
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Integration of Sensors and Geometric Processing <s> Micro-unmanned aerial vehicles often collect a large amount of images when mapping an area at an ultrahigh resolution. A direct georeferencing technique potentially eliminates the need for ground control points. In this paper, we developed a camera-global positioning system (GPS) module to allow the synchronization of camera exposure with the airframe's position as recorded by a GPS with 10-20-cm accuracy. Lever arm corrections were applied to the camera positions to account for the positional difference between the GPS antenna and the camera center. Image selection algorithms were implemented to eliminate blurry images and images with excessive overlap. This study compared three different software methods (Photoscan, Pix4D web service, and an in-house Bundler method). We evaluated each based on processing time, ease of use, and the spatial accuracy of the final mosaic produced. Photoscan showed the best performance as it was the fastest and the easiest to use and had the best spatial accuracy (average error of 0.11 m with a standard deviation of 0.02 m). This accuracy is limited by the accuracy of the differential GPS unit (10-20 cm) used to record camera position. Pix4D achieved a mean spatial error of 0.24 m with a standard deviation of 0.03 m, while the Bundler method had the worst mean spatial accuracy of 0.76 m with a standard deviation of 0.15 m. The lower performance of the Bundler method was due to its poor performance in estimating camera focal length, which, in turn, introduced large errors in the Z-axis for the translation equations. 
<s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Integration of Sensors and Geometric Processing <s> In unmanned aerial vehicle (UAV) photogrammetric surveys, the camera can be pre-calibrated or can be calibrated "on-the-job" using structure-from-motion and a self-calibrating bundle adjustment. This study investigates the impact on mapping accuracy of UAV photogrammetric survey blocks, the bundle adjustment and the 3D reconstruction process under a range of typical operating scenarios for centimetre-scale natural landform mapping (in this case, a coastal cliff). We demonstrate the sensitivity of the process to calibration procedures and the need for careful accuracy assessment. For this investigation, vertical (nadir or near-nadir) and oblique photography were collected with 80%–90% overlap and with accurately-surveyed (σ ≤ 2 mm) and densely-distributed ground control. This allowed various scenarios to be tested and the impact on mapping accuracy to be assessed. This paper presents the results of that investigation and provides guidelines that will assist with operational decisions regarding camera calibration and ground control for UAV photogrammetry. The results indicate that the use of either a robust pre-calibration or a robust self-calibration results in accurate model creation from vertical-only photography, and additional oblique photography may improve the results. The results indicate that if a dense array of high accuracy ground control points are deployed and the UAV photography includes both vertical and oblique images, then either a pre-calibration or an on-the-job self-calibration will yield reliable models (pre-calibration RMSEXY = 7.1 mm and on-the-job self-calibration RMSEXY = 3.2 mm). When oblique photography was excluded from the on-the-job self-calibration solution, the accuracy of the model deteriorated (by 3.3 mm horizontally and 4.7 mm vertically). When the accuracy of the ground control was then degraded to replicate typical operational practice (σ = 22 mm), the accuracy of the model further deteriorated (e.g., on-the-job self-calibration RMSEXY went from 3.2–7.0 mm). Additionally, when the density of the ground control was reduced, the model accuracy also further deteriorated (e.g., on-the-job self-calibration RMSEXY went from 7.0–7.3 mm). However, our results do indicate that loss of accuracy due to sparse ground control can be mitigated by including oblique imagery. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Integration of Sensors and Geometric Processing <s> Alpine areas pose challenges for many existing remote sensing methods for snow depth retrieval, thus leading to uncertainty in water forecasting and budgeting. Herein, we present the results of a field campaign conducted in Tasmania, Australia in 2013 from which estimates of snow depth were derived using a low-cost photogrammetric approach on-board a micro unmanned aircraft system (UAS). Using commercial off-the-shelf (COTS) sensors mounted on a multi-rotor UAS and photogrammetric image processing techniques, the results demonstrate that snow depth can be accurately retrieved by differencing two surface models corresponding to the snow-free and snow-covered scenes, respectively. In addition to accurate snow depth retrieval, we show that high-resolution (50 cm) spatially continuous snow depth maps can be created using this methodology.
Two types of photogrammetric bundle adjustment (BA) routines are implemented in this study to determine the optimal estimates of sensor position and orientation, in addition to 3D scene information: conventional BA (which relies on measured ground control points) and direct BA (which does not require ground control points). Error sources that affect the accuracy of the BA and subsequent snow depth reconstruction are discussed. The results indicate the UAS is capable of providing high-resolution and high-accuracy (<10 cm) estimates of snow depth over a small alpine area (~0.7 ha) with significant snow accumulation (depths greater than one meter) at a fraction of the cost of full-size aerial survey approaches. The RMSE of estimated snow depths using the conventional BA approach is 9.6 cm, whereas the direct BA is characterized by larger error, with an RMSE of 18.4 cm. If a simple affine transformation is applied to the point cloud derived from the direct BA, the overall RMSE is reduced to 8.8 cm. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Integration of Sensors and Geometric Processing <s> This study investigates the potential of unmanned aerial vehicles (UAVs) to measure and monitor structural properties of forests. Two remote sensing techniques, airborne laser scanning (ALS) and structure from motion (SfM) were tested to capture three-dimensional structural information from a small multi-rotor UAV platform. A case study is presented through the analysis of data collected from a 30 × 50 m plot in a dry sclerophyll eucalypt forest with a spatially varying canopy cover. The study provides an insight into the capabilities of both technologies for assessing absolute terrain height, the horizontal and vertical distribution of forest canopy elements, and information related to individual trees.
Results indicate that both techniques are capable of providing information that can be used to describe the terrain surface and canopy properties in areas of relatively low canopy closure. However, the SfM photogrammetric technique underperformed ALS in capturing the terrain surface under increasingly denser canopy cover, resulting in point density of less than 1 ground point per m2 and mean difference from ALS terrain surface of 0.12 m. This shortcoming caused errors that were propagated into the estimation of canopy properties, including the individual tree height (root mean square error of 0.92 m for ALS and 1.30 m for SfM). Differences were also seen in the estimates of canopy cover derived from the SfM (50%) and ALS (63%) point clouds. Although ALS is capable of providing more accurate estimates of the vertical structure of forests across the larger range of canopy densities found in this study, SfM was still found to be an adequate low-cost alternative for surveying of forest stands. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Integration of Sensors and Geometric Processing <s> Small-sized unmanned aircraft systems (UAS) are restricted to use only lightweight microelectromechanical systems (MEMS)-based inertial measurement units (IMUs) due to their limited payload capacity. Still, some UAS-based geospatial remote sensing applications, such as airborne spectroscopy or laser scanning, require high accuracy pose (position and orientation) determination of the onboard sensor payload.
This study presents ground-based experiments investigating the pose accuracy of two MEMS-based IMUs: the single-antenna MTi-G-700 (Xsens, Enschede, Netherlands) and the dual-antenna/dual-frequency Spatial Dual IMU (Advanced Navigation, Sydney, Australia)/global navigation satellite system (GNSS). A tightly coupled and postprocessed pose solution from a fiber-optic gyroscope (FOG)-based NovAtel synchronized position attitude navigation (SPAN) IMU (NovAtel, Calgary, Canada) served as a reference to evaluate the performance of the two IMUs under investigation. Results revealed a better position solution for the Spatial Dual, and the MTi-G-700 achieved a better roll/pitch accuracy. Most importantly, the heading solution from the dual-antenna configuration of the Spatial Dual was found to be more stable than the heading obtained with the reference SPAN IMU. <s> BIB005
Accurate geometric processing is a crucial task in the data processing workflow for UAV datasets. Fundamental steps include the determination of the interior characteristics of the sensor system (interior orientation) and the exterior orientation of the data sequence (position and rotation of the sensor during data capture), as well as an object geometric model, in order to establish the geometric relationship between the object and the recorded radiance value. Accurate position and orientation information is required to compute the location of each pixel on the ground. Full-size airborne hyperspectral sensors follow the pushbroom design, and some can use a survey-grade differential GNSS receiver (typically with multi-constellation and dual-frequency capabilities) and IMU to determine the position and orientation (pitch, roll, and heading) of the image lines. When post-processed against a GNSS base station established over a survey mark at a short baseline (within 5-10 km), a positioning accuracy of 1-4 cm can be achieved for the on-board GNSS antenna, after which the propagation of all pose-related errors typically results in 5-10 cm direct georeferencing accuracy of UAS image data. This can be further improved by using ground control points (GCPs) measured with a differential GNSS rover on the ground or a total station survey BIB002 BIB004 BIB003 BIB001 . One of the challenges in hyperspectral data collection from UAVs is the limitation in weight and size of the total sensor payload. Survey-grade GNSS and IMU sensors tend to be relatively heavy, bulky, and expensive, e.g., fiber optic gyro (FOG) IMUs providing an absolute orientation accuracy of <0.05° BIB005 . Developments in microelectromechanical systems (MEMS) have resulted in small and lightweight IMUs suitable for UAV applications; however, the absolute accuracy of these MEMS IMUs has traditionally been relatively poor (typically ~1° in pitch, roll, and yaw) BIB005 .
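The lever-arm correction mentioned in BIB001, translating the GNSS antenna position to the camera projection centre using the platform attitude, can be sketched as follows. The rotation convention (z-y-x Euler angles) and the example offset are assumptions for illustration, not the specific implementation of the cited study.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-mapping-frame rotation (z-y-x convention, radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def camera_position(antenna_pos, lever_arm_body, roll, pitch, yaw):
    """GNSS antenna position corrected to the camera projection centre."""
    R = rotation_matrix(roll, pitch, yaw)
    return np.asarray(antenna_pos) + R @ np.asarray(lever_arm_body)

# assumed example: camera 10 cm below the antenna, level flight
pos = camera_position([100.0, 200.0, 50.0], [0.0, 0.0, -0.10], 0.0, 0.0, 0.0)
print(pos[2])  # 49.9
```

The correction matters because the body-frame offset rotates with the platform: the same 10-cm lever arm produces a different mapping-frame displacement at every attitude, which is why accurate time-synchronized attitude is needed even for this seemingly static offset.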
The impact of a 1° error at a flying height of 50 m above ground level (AGL) is a 0.87-m geometric offset for a pixel on the ground. If we consider the key benefit of UAV remote sensing to be the ability to collect sub-decimeter resolution imagery, then such a large error is potentially unacceptable, and the combined error in pitch, roll, heading, and position can make this even worse. There is therefore an important requirement either for an optimal combination of sensors to determine the accurate position and orientation (pose) of the spectral sensor during acquisition (which also requires accurate time synchronization), or for an appropriate geometric processing strategy based on image matching and ground control points (GCPs).
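The magnitude of such attitude-induced errors can be checked with a minimal flat-terrain, nadir-viewing approximation (a sketch for illustration, not part of any cited workflow): the ground offset is the flying height multiplied by the tangent of the angular error.

```python
import math

def ground_offset(angular_error_deg: float, agl_m: float) -> float:
    """Horizontal displacement of a nadir pixel on flat ground caused by
    an attitude error of the given size at the given height AGL."""
    return agl_m * math.tan(math.radians(angular_error_deg))

# The 1 degree / 50 m AGL case from the text gives ~0.87 m.
print(round(ground_offset(1.0, 50.0), 2))  # → 0.87
```

For small angles this is close to the linear approximation (height × error in radians), so the offset scales proportionally with both flying height and attitude error.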
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Point Spectrometer Data <s> In this paper the main problems and the available solutions are addressed for the generation of 3D models from terrestrial images. Close range photogrammetry has dealt for many years with manual or automatic image measurements for precise 3D modelling. Nowadays 3D scanners are also becoming a standard source for input data in many application areas, but image-based modelling still remains the most complete, economical, portable, flexible and widely used approach. In this paper the full pipeline is presented for 3D modelling from terrestrial image data, considering the different approaches and analysing all the steps involved. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Point Spectrometer Data <s> Accurately determining field-of-view has rarely been considered in field spectroscopy where specifications for fore optics used are generally limited and the influence of the spectroradiometer rarely considered. The issue can be compounded with full wavelength spectroradiometric systems which include multiple spectrometers. In these systems, the size and alignment of the viewing optics and technology adopted to transfer light from the fore optic to individual spectrometers may cause significant nonuniformity of spectral response across the area of measurement support, and this area may not align with that assumed from the specification that is supplied for the fore optic. 
When recording spectra from heterogeneous earth surface targets, it is important to have the area of measurement support accurately defined as individual reflecting surfaces may be present in varying proportions within this area, and these proportions need to be determined to relate spectral reflectance or spectral radiance to state variables or target classifications being considered. The area of measurement support and the spatial and spectral responsivity of an ASD Field Spec Pro FR spectroradiometer and a SVC GER 3700 spectroradiometer have been determined by measuring the directional response function (DRF) of each instrument. This research highlights several areas of concern and makes recommendations for the improvement of field spectroradiometers and field spectroscopy methodologies. These results are specific to the spectroradiometer/fore optic combinations investigated and at the measurement distances specified. Although similar characteristics can be expected for other instruments/fore optics of the same design, and at other measurement distances, the DRFs will vary from those reported here. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Point Spectrometer Data <s> In this study we present a hyperspectral flying goniometer system, based on a rotary-wing unmanned aerial vehicle (UAV) equipped with a spectrometer mounted on an active gimbal. We show that this approach may be used to collect multiangular hyperspectral data over vegetated environments. The pointing and positioning accuracy are assessed using structure from motion and vary from σ = 1° to 8° in pointing and σ = 0.7 to 0.8 m in positioning. We use a wheat dataset to investigate the influence of angular effects on the NDVI, TCARI and REIP vegetation indices. 
Angular effects caused significant variations in the indices: NDVI = 0.83–0.95; TCARI = 0.04–0.116; REIP = 729–735 nm. Our analysis highlights the necessity of considering angular effects in optical sensors when observing vegetation. We compare the measurements of the UAV goniometer to the angular modules of the SCOPE radiative transfer model. Model and measurements are in high accordance (r2 = 0.88) in the infrared region at angles close to nadir; in contrast, the comparisons show discrepancies at low tilt angles (r2 = 0.25). This study demonstrates that the UAV goniometer is a promising approach for the fast and flexible assessment of angular effects. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Point Spectrometer Data <s> This study describes the development of a small hyperspectral Unmanned Aircraft System (HyUAS) for measuring Visible and Near-Infrared (VNIR) surface reflectance and sun-induced fluorescence, co-registered with high-resolution RGB imagery, to support field spectroscopy surveys and calibration and validation of remote sensing products. The system, namely HyUAS, is based on a multirotor platform equipped with a cost-effective payload composed of a VNIR non-imaging spectrometer and an RGB camera. The spectrometer is connected to a custom entrance optics receptor developed to tune the instrument field-of-view and to obtain systematic measurements of instrument dark-current. The geometric, radiometric and spectral characteristics of the instruments were characterized and calibrated through dedicated laboratory tests. The overall accuracy of HyUAS data was evaluated during a flight campaign in which surface reflectance was compared with ground-based reference measurements. HyUAS data were used to estimate spectral indices and far-red fluorescence for different land covers. 
RGB images were processed as a high-resolution 3D surface model using structure from motion algorithms. The spectral measurements were accurately geo-located and projected on the digital surface model. The overall results show that: (i) rigorous calibration enabled radiance and reflectance spectra from HyUAS with RRMSE < 10% compared with ground measurements; (ii) the low-flying UAS setup allows retrieving fluorescence in absolute units; (iii) the accurate geo-location of spectra on the digital surface model greatly improves the overall interpretation of reflectance and fluorescence data. In general, the HyUAS was demonstrated to be a reliable system for supporting high-resolution field spectroscopy surveys allowing one to collect systematic measurements at very detailed spatial resolution with a valuable potential for vegetation monitoring studies. Furthermore, it can be considered a useful tool for collecting spatially-distributed observations of reflectance and fluorescence that can be further used for calibration and validation activities of airborne and satellite optical images in the context of the upcoming FLEX mission and the VNIR spectral bands of optical Earth observation missions (i.e., Landsat, Sentinel-2 and Sentinel-3). <s> BIB004
While point spectrometers offer high spectral resolution, their data contain no spatial reference. Thus, precise positioning and orientation information, as well as an accurate digital surface model, are necessary to project the measurement points onto the surface. One approach is to use precise GNSS/IMU equipment, which is still expensive. An alternative is to align and capture data simultaneously with a 2D imager, for example a monochrome or RGB machine vision camera (e.g., BIB004 ). Computer vision algorithms such as SfM can then be used to derive the orientation and position of the images and the associated point spectrometer measurements, assuming that the images contain sufficient features and are captured with enough overlap BIB001 . While the second approach is cheaper than the first, both add additional payload to be carried by the sensing system. For both approaches, accurate time synchronization between the spectroradiometer and the GNSS/IMU or camera is required. Furthermore, each spectroradiometer has a certain FOV determined by the slit or fore optic, which can be constrained with additional accessories, such as a Gershun tube or collimating lens. Finally, the integration time of the spectroradiometer also affects the size of the footprint. For a spectroradiometer with a relatively long integration time (e.g., 1 s) on a moving UAV platform, the circular footprint will be 'dragged out' into an elongated shape. Additionally, in off-nadir measurements, the circular footprint elongates to an elliptical shape BIB003 . The combined effects of the position, orientation, FOV, and integration time of the spectroradiometer, the flying height and speed of the UAV, and the surface topography determine the location and size/shape of the spectral footprint. 
Finally, one should also consider that the measurements of a field spectrometer are center-weighted within the FOV, and that the configuration of the fiber and fore optic might influence the measured signal BIB002 .
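The footprint geometry described above can be approximated for a nadir-pointing spectrometer over flat terrain. The function below is a minimal sketch, and the example values (FOV, height, speed, integration time) are illustrative assumptions, not specifications of any instrument discussed here.

```python
import math

def footprint(fov_deg: float, agl_m: float,
              speed_mps: float, t_int_s: float):
    """Approximate footprint of a nadir-pointing point spectrometer:
    a circle whose diameter follows from the FOV and flying height,
    'dragged out' along track by platform motion during integration."""
    diameter = 2.0 * agl_m * math.tan(math.radians(fov_deg) / 2.0)
    along_track = diameter + speed_mps * t_int_s  # elongated footprint length
    return diameter, along_track

# Hypothetical example: 25 deg FOV, 30 m AGL, 3 m/s, 1 s integration time.
d, a = footprint(25.0, 30.0, 3.0, 1.0)
```

The along-track elongation equals speed × integration time (here 3 m), added to the static footprint diameter, which is why integration time must be considered alongside FOV and flying height.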
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Pushbroom Scanner Data <s> A method for the radiometric correction of wide field-of-view airborne imagery has been developed that accounts for the angular dependence of the path radiance and atmospheric transmittance functions to remove atmospheric and topographic effects. The first part of processing is the parametric geocoding of the scene to obtain a geocoded, orthorectified image and the view geometry (scan and azimuth angles) for each pixel as described in part 1 of this jointly submitted paper. The second part of the processing performs the combined atmospheric/ topographic correction. It uses a database of look-up tables of the atmospheric correction functions (path radiance, atmospheric transmittance, direct and diffuse solar flux) calculated with a radiative transfer code. Additionally, the terrain shape obtained from a digital elevation model is taken into account. The issues of the database size and accuracy requirements are critically discussed. The method supports all common types of imaging airborne optical instrument... <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Pushbroom Scanner Data <s> Autonomous micro aerial vehicles (MAVs) will soon play a major role in tasks such as search and rescue, environment monitoring, surveillance, and inspection. They allow us to easily access environments to which no humans or other vehicles can get access. This reduces the risk for both the people and the environment. For the above applications, it is, however, a requirement that the vehicle is able to navigate without using GPS, or without relying on a preexisting map, or without specific assumptions about the environment. 
This will allow operations in unstructured, unknown, and GPS-denied environments. We present a novel solution for the task of autonomous navigation of a micro helicopter through a completely unknown environment by using solely a single camera and inertial sensors onboard. Many existing solutions suffer from the problem of drift in the xy plane or from the dependency on a clean GPS signal. The novelty in the here-presented approach is to use a monocular simultaneous localization and mapping (SLAM) framework to stabilize the vehicle in six degrees of freedom. This way, we overcome the problem of both the drift and the GPS dependency. The pose estimated by the visual SLAM algorithm is used in a linear optimal controller that allows us to perform all basic maneuvers such as hovering, set point and trajectory following, vertical takeoff, and landing. All calculations including SLAM and controller are running in real time and online while the helicopter is flying. No offline processing or preprocessing is done. We show real experiments that demonstrate that the vehicle can fly autonomously in an unknown and unstructured environment. To the best of our knowledge, the here-presented work describes the first aerial vehicle that uses onboard monocular vision as a main sensor to navigate through an unknown GPS-denied environment and independently of any external artificial aids. © 2011 Wiley Periodicals, Inc. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Pushbroom Scanner Data <s> Unmanned Aerial Vehicles (UAVs) are an exciting new remote sensing tool capable of acquiring high resolution spatial data. Remote sensing with UAVs has the potential to provide imagery at an unprecedented spatial and temporal resolution. 
The small footprint of UAV imagery, however, makes it necessary to develop automated techniques to geometrically rectify and mosaic the imagery such that larger areas can be monitored. In this paper, we present a technique for geometric correction and mosaicking of UAV photography using feature matching and Structure from Motion (SfM) photogrammetric techniques. Images are processed to create three dimensional point clouds, initially in an arbitrary model space. The point clouds are transformed into a real-world coordinate system using either a direct georeferencing technique that uses estimated camera positions or via a Ground Control Point (GCP) technique that uses automatically identified GCPs within the point cloud. The point cloud is then used to generate a Digital Terrain Model (DTM) required for rectification of the images. Subsequent georeferenced images are then joined together to form a mosaic of the study area. The absolute spatial accuracy of the direct technique was found to be 65–120 cm whilst the GCP technique achieves an accuracy of approximately 10–15 cm. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Pushbroom Scanner Data <s> One of the key advantages of a low-flying unmanned aircraft system UAS is its ability to acquire digital images at an ultrahigh spatial resolution of a few centimeters. Remote sensing of quantitative biochemical and biophysical characteristics of small-sized spatially fragmented vegetation canopies requires, however, not only high spatial, but also high spectral i.e., hyperspectral resolution. In this paper, we describe the design, development, airborne operations, calibration, processing, and interpretation of image data collected with a new hyperspectral unmanned aircraft system HyperUAS. 
HyperUAS is a remotely controlled multirotor prototype carrying onboard a lightweight pushbroom spectroradiometer coupled with a dual frequency GPS and an inertial movement unit. The prototype was built to remotely acquire imaging spectroscopy data of 324 spectral bands (162 bands in a spectrally binned mode) with bandwidths between 4 and 5 nm at an ultrahigh spatial resolution of 2-5 cm. Three field airborne experiments, conducted over agricultural crops and over natural ecosystems of Antarctic mosses, proved operability of the system in standard field conditions, but also in a remote and harsh, low-temperature environment of East Antarctica. Experimental results demonstrate that HyperUAS is capable of delivering georeferenced maps of quantitative biochemical and biophysical variables of vegetation and of actual vegetation health state at an unprecedented spatial resolution of 5 cm. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Pushbroom Scanner Data <s> An Unmanned Aerial Vehicle (UAV) is an aircraft without a human pilot on board. UAVs allow close-range photogrammetric acquisitions potentially useful for building large-scale cartography and acquisitions of building geometry. This is particularly useful in emergency situations where major accessibility problems limit the possibility of using conventional surveys. Presently, however, flights of this class of UAV are planned based only on the pilot's experience and they often acquire three or more times the number of images needed. This is clearly a time-consuming and autonomy-reducing procedure, which is certainly detrimental when extensive surveys are needed. For this reason, new software to plan the UAV's survey will be illustrated. 
<s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Pushbroom Scanner Data <s> During the last years commercial hyperspectral imaging sensors have been miniaturized and their performance has been demonstrated on Unmanned Aerial Vehicles (UAV). However currently the commercial hyperspectral systems still require minimum payload capacity of approximately 3 kg, forcing usage of rather large UAVs. In this article we present a lightweight hyperspectral mapping system (HYMSY) for rotor-based UAVs, the novel processing chain for the system, and its potential for agricultural mapping and monitoring applications. The HYMSY consists of a custom-made pushbroom spectrometer (400–950 nm, 9 nm FWHM, 25 lines/s, 328 px/line), a photogrammetric camera, and a miniature GPS-Inertial Navigation System. The weight of HYMSY in ready-to-fly configuration is only 2.0 kg and it has been constructed mostly from off-the-shelf components. The processing chain uses a photogrammetric algorithm to produce a Digital Surface Model (DSM) and provides high accuracy orientation of the system over the DSM. The pushbroom data is georectified by projecting it onto the DSM with the support of photogrammetric orientations and the GPS-INS data. Since an up-to-date DSM is produced internally, no external data are required and the processing chain is capable to georectify pushbroom data fully automatically. The system has been adopted for several experimental flights related to agricultural and habitat monitoring applications. For a typical flight, an area of 2–10 ha was mapped, producing a RGB orthomosaic at 1–5 cm resolution, a DSM at 5–10 cm resolution, and a hyperspectral datacube at 10–50 cm resolution. 
<s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Pushbroom Scanner Data <s> Low-cost Unmanned Airborne Vehicles (UAVs) equipped with consumer-grade imaging systems have emerged as a potential remote sensing platform that could satisfy the needs of a wide range of civilian applications. Among these applications, UAV-based agricultural mapping and monitoring have attracted significant attention from both the research and professional communities. The interest in UAV-based remote sensing for agricultural management is motivated by the need to maximize crop yield. Remote sensing-based crop yield prediction and estimation are primarily based on imaging systems with different spectral coverage and resolution (e.g., RGB and hyperspectral imaging systems). Due to the data volume, RGB imaging is based on frame cameras, while hyperspectral sensors are primarily push-broom scanners. To cope with the limited endurance and payload constraints of low-cost UAVs, the agricultural research and professional communities have to rely on consumer-grade and light-weight sensors. However, the geometric fidelity of derived information from push-broom hyperspectral scanners is quite sensitive to the available position and orientation established through a direct geo-referencing unit onboard the imaging platform (i.e., an integrated Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS). This paper presents an automated framework for the integration of frame RGB images, push-broom hyperspectral scanner data and consumer-grade GNSS/INS navigation data for accurate geometric rectification of the hyperspectral scenes. 
The approach relies on utilizing the navigation data, together with a modified Speeded-Up Robust Feature (SURF) detector and descriptor, for automating the identification of conjugate features in the RGB and hyperspectral imagery. The SURF modification takes into consideration the available direct geo-referencing information to improve the reliability of the matching procedure in the presence of repetitive texture within a mechanized agricultural field. Identified features are then used to improve the geometric fidelity of the previously ortho-rectified hyperspectral data. Experimental results from two real datasets show that the geometric rectification of the hyperspectral data was improved by almost one order of magnitude. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Pushbroom Scanner Data <s> Hyperspectral cameras sample many different spectral bands at each pixel, enabling advanced detection and classification algorithms. However, their limited spatial resolution and the need to measure the camera motion to create hyperspectral images makes them unsuitable for nonsmooth moving platforms such as unmanned aerial vehicles UAVs. We present a procedure to build hyperspectral images from line sensor data without camera motion information or extraneous sensors. Our approach relies on an accompanying conventional camera to exploit the homographies between images for mosaic construction. We provide experimental results from a low-altitude UAV, achieving high-resolution spectroscopy with our system. 
<s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Pushbroom Scanner Data <s> Plants like mosses can be sensitive stress markers of subtle shifts in Arctic and Antarctic environmental conditions, including climate change. Traditional ground-based monitoring of fragile polar vegetation is, however, invasive, labour intensive and physically demanding. High-resolution multispectral satellite observations are an alternative, but even their recent highest achievable spatial resolution is still inadequate, resulting in a significant underestimation of plant health due to spectral mixing and associated reflectance impurities. To resolve these obstacles, we have developed a new method that uses low-altitude unmanned aircraft system (UAS) hyperspectral images of sub-decimeter spatial resolution. Machine-learning support vector regressions (SVR) were employed to infer Antarctic moss vigour from quantitative remote sensing maps of plant canopy chlorophyll content and leaf density. The same maps were derived for comparison purposes from the WorldView-2 high spatial resolution (2.2 m) multispectral satellite data. We found SVR algorithms to be highly efficient in estimating plant health indicators with acceptable root mean square errors ( RMSE ). The systematic RMSE s for chlorophyll content and leaf density were 3.5–6.0 and 1.3–2.0 times smaller, respectively, than the unsystematic errors. However, application of correctly trained SVR machines on space-borne multispectral images considerably underestimated moss chlorophyll content, while stress indicators retrieved from UAS data were found to be comparable with independent field measurements, providing statistically significant regression coefficients of determination (median r 2 = .50, p t test = .0072). 
This study demonstrates the superior performance of a cost-efficient UAS mapping platform, which can be deployed even under the continuous cloud cover that often obscures optical high-altitude airborne and satellite observations. Antarctic moss vigour maps of appropriate resolution could provide timely and spatially explicit warnings of environmental stress events, including those triggered by climate change. Since our polar vegetation health assessment method is based on physical principles of quantitative spectroscopy, it could be adapted to other short-stature and fragmented plant communities (e.g. tundra grasslands), including alpine and desert regions. It therefore shows potential to become an operational component of any ecological monitoring sensor network. <s> BIB009
Pushbroom sensors need to move to build up a spatial image of a scene. Typically, these sensors collect 20-100 frames per second (depending on integration time and camera specifications). The slit width, lens focal length, and integration time determine the spatial resolution of the pixels in the along-track direction (i.e., the flight direction). The number of pixels on the sensor array (i.e., the number of columns) and the focal length of the sensor determine the spatial resolution of the pixels in the across-track direction. To accurately map the spatial location of each pixel in the scene, several parameters need to be provided or determined: camera/lens distortion parameters, sensor location (XYZ), sensor absolute orientation (pitch, roll, and heading), and a surface model of the terrain. Pushbroom sensors are particularly sensitive to flight dynamics in pitch, roll, and heading, which makes robust geometric correction or orthorectification challenging; for dynamic UAV airframes, such as multi-rotors, this is especially difficult. Lucieer et al. BIB004 and Malenovský et al. BIB009 developed and used an early hyperspectral multi-rotor prototype in Antarctica that did not use GNSS/IMU observations, but rather relied on a dense network of GCPs for geometric rectification based on triangulation/rubber-sheeting. This prototype was later upgraded to include synchronized GNSS/IMU data ( Figure 3 ) in order to enable orthorectification using the PARGE geometric rectification software BIB001 [103] . With the use of a limited number of GCPs and/or on-board GNSS coordinates, machine vision imagery can be used to determine the position and orientation of a hyperspectral sensor without the need for complex and expensive GNSS/IMU sensors. 
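The along- and across-track sampling relationships described above can be sketched as follows for a nadir-viewing scanner over flat terrain; the function is a simplified approximation, and all numeric values in the example are hypothetical rather than specifications of any sensor named in this review.

```python
def pushbroom_gsd(agl_m: float, focal_mm: float, pixel_pitch_um: float,
                  n_pixels: int, slit_width_um: float,
                  speed_mps: float, frame_rate_hz: float):
    """Approximate ground sampling of a nadir pushbroom scanner over flat
    terrain. Across-track: detector pixel pitch projected through the lens.
    Along-track: the larger of the projected slit width and the distance
    flown between successive frames."""
    across = agl_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)
    slit_proj = agl_m * (slit_width_um * 1e-6) / (focal_mm * 1e-3)
    line_spacing = speed_mps / frame_rate_hz
    along = max(slit_proj, line_spacing)
    swath = across * n_pixels  # total across-track ground coverage
    return across, along, swath

# Hypothetical example: 50 m AGL, 12 mm lens, 7.4 um pitch, 640 columns,
# 25 um slit, 3 m/s ground speed, 50 frames per second.
across, along, swath = pushbroom_gsd(50.0, 12.0, 7.4, 640, 25.0, 3.0, 50.0)
```

In this example the along-track sampling is limited by the projected slit width rather than the frame spacing, illustrating why slit width, focal length, and frame rate must be balanced against flying height and speed.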
The main advantage of machine vision imagery is that it can be used in rigorous photogrammetric modeling BIB005 , SfM BIB003 , or simultaneous localization and mapping (SLAM) BIB002 workflows to extract 3D terrain information and pose information simultaneously. Suomalainen et al. BIB006 developed a hyperspectral pushbroom system with a synchronized GNSS/IMU unit for orthorectification; they used a photogrammetric approach based on SfM to improve the accuracy of the on-board navigation-grade GNSS receiver and to derive a digital surface model for orthorectification. Habib et al. BIB007 and Ramirez-Paredes et al. BIB008 presented approaches for the georectification of hyperspectral pushbroom imagery that were purely based on imagery and image matching. These approaches are attractive, as a fully image-based workflow reduces the complexity of sensor integration on board the UAV. However, in order to achieve high absolute accuracy, accurate GCP measurements still need to be obtained, or an accurate on-board GNSS needs to be employed. In addition, to match the frame rate of a hyperspectral sensor, a large volume of machine vision data has to be stored and processed (potentially thousands of images per flight). Recently, sensor manufacturers have started to produce turnkey hyperspectral pushbroom sensor packages that include the imaging spectrometer, data logging unit, and GNSS/IMU sensors in a small and lightweight package, e.g., the Headwall Photonics nano-Hyperspec. One of the major issues with complete packages such as these is the quality of the GNSS and IMU data. The nano-Hyperspec, for example, carries a GNSS/IMU with a navigation-grade GNSS receiver delivering an absolute accuracy of 5-10 m. In addition, the IMU can measure yaw, but deriving an absolute heading from yaw requires an absolute baseline measurement, which is usually derived from the GNSS flight path and/or a 3D magnetometer. 
The heading derived from the flight path will provide the general flight direction; however, the UAV airframe can have a completely different absolute heading (i.e., a multi-rotor can have a yaw direction that differs from the flight direction). These heading measurements are notoriously inaccurate, which can result in major georectification errors. A dual-antenna tightly coupled GNSS/IMU solution can overcome these issues, but such systems tend to be heavier and more expensive. Two GNSS antennae at a relatively short baseline, e.g., ~1 m, can offer an absolute heading accuracy of 0.1°. Machine vision data can be used to assist pose estimation and facilitate more accurate georectification through feature image matching and co-registration [103] .
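The benefit of a dual-antenna baseline for heading can be quantified with a small-angle sketch: the achievable heading accuracy is roughly the angle subtended by the relative positioning error between the two antennae over the baseline length. The 2-mm relative accuracy used below is an assumed RTK-grade value, not a figure from the cited systems.

```python
import math

def heading_accuracy_deg(baseline_m: float, rel_pos_err_m: float) -> float:
    """Approximate heading accuracy of a dual-antenna GNSS configuration:
    the angle subtended by the antennae's relative position error over the
    baseline separating them."""
    return math.degrees(math.atan2(rel_pos_err_m, baseline_m))

# An assumed ~2 mm relative error over a 1 m baseline gives ~0.1 degrees,
# consistent with the heading accuracy quoted in the text.
print(round(heading_accuracy_deg(1.0, 0.002), 2))  # → 0.11
```

Because the subtended angle shrinks with baseline length, a longer antenna separation directly improves heading accuracy, which is why dual-antenna systems trade size and weight for heading stability.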
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Sequential and Multi-Camera 2D Imagers <s> Introductory Concepts. Elementary Photogrammetry. Photogrammetric Sensing Systems. Mathematical Concepts in Photogrammetry. Resection, Intersection, and Triangulation. Digital Photogrammetry. Photogrammetric Instruments. Photogrammetric Products. Close-Range Photogrammetry. Analysis of Multispectral and Hyperspectral Image Data. Active Sensing Systems. Appendix A: Mathematics for Photogrammetry. Appendix B: Least Squares Adjustment. Appendix C: Linearization of Photogrammetric Condition Equations. Appendix D: Mathematical Description of Linear Features. Appendix E: Further Consideration of the Rotation Matrix. Apendix F: Orbital Photogrammetry. Appendix G: Software for Photogrammetric Applications. Index. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Sequential and Multi-Camera 2D Imagers <s> Image Registration is the first step towards using remote sensed images for any purpose. Despite numerous techniques being developed for image registration, only a handful has proved to be useful for registration of remote sensing images due to their characteristic of being computationally heavy. Recent flux in technology has prompted a legion of approaches that may suit divergent remote sensing applications. This paper presents a comprehensive survey of such literatures including recently developed techniques. 
<s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Sequential and Multi-Camera 2D Imagers <s> Using unmanned aircraft systems (UAS) as remote sensing platforms offers the unique ability for repeated deployment for acquisition of high temporal resolution data at very high spatial resolution. Multispectral remote sensing applications from UAS are reported in the literature less commonly than applications using visible bands, although light-weight multispectral sensors for UAS are being used increasingly. In this paper, we describe challenges and solutions associated with efficient processing of multispectral imagery to obtain orthorectified, radiometrically calibrated image mosaics for the purpose of rangeland vegetation classification. We developed automated batch processing methods for file conversion, band-to-band registration, radiometric correction, and orthorectification. An object-based image analysis approach was used to derive a species-level vegetation classification for the image mosaic with an overall accuracy of 87%. We obtained good correlations between: (1) ground and airborne spectral reflectance (R 2 = 0.92); and (2) spectral reflectance derived from airborne and WorldView-2 satellite data for selected vegetation and soil targets. UAS-acquired multispectral imagery provides quality high resolution information for rangeland applications with the potential for upscaling the data to larger areas using high resolution satellite imagery. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Sequential and Multi-Camera 2D Imagers <s> A new aerial platform has risen recently for image acquisition, the Unmanned Aerial Vehicle (UAV). 
This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and computational requirements for the further mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the index NDVI. These results suggest that an agreement among spectral and spatial resolutions is needed to optimise the flight mission according to every agronomical objective as affected by the size of the smaller object to be discriminated (weed plants or weed patches).
<s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Sequential and Multi-Camera 2D Imagers <s> Imaging using lightweight, unmanned airborne vehicles (UAVs) is one of the most rapidly developing fields in remote sensing technology. The new, tunable, Fabry-Perot interferometer-based (FPI) spectral camera, which weighs less than 700 g, makes it possible to collect spectrometric image blocks with stereoscopic overlaps using light-weight UAV platforms. This new technology is highly relevant, because it opens up new possibilities for measuring and monitoring the environment, which is becoming increasingly important for many environmental challenges. Our objectives were to investigate the processing and use of this new type of image data in precision agriculture. We developed the entire processing chain from raw images up to georeferenced reflectance images, digital surface models and biomass estimates. The processing integrates photogrammetric and quantitative remote sensing approaches. We carried out an empirical assessment using FPI spectral imagery collected at an agricultural wheat test site in the summer of 2012. Poor weather conditions during the campaign complicated the data processing, but this is one of the challenges that are faced in operational applications. The <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Sequential and Multi-Camera 2D Imagers <s> In recent times, the use of Unmanned Aerial Vehicles (UAVs) as tools for environmental remote sensing has become more commonplace. Compared to traditional airborne remote sensing, UAVs can provide finer spatial resolution data (up to 1 cm/pixel) and higher temporal resolution data. 
For the purposes of vegetation monitoring, the use of multiple sensors such as near infrared and thermal infrared cameras is of benefit. Collecting data with multiple sensors, however, requires an accurate spatial co-registration of the various UAV image datasets. In this study, we used an Oktokopter UAV to investigate the physiological state of Antarctic moss ecosystems using three sensors: (i) a visible camera (1 cm/pixel), (ii) a 6 band multispectral camera (3 cm/pixel), and (iii) a thermal infrared camera (10 cm/pixel). Imagery from each sensor was geo-referenced and mosaicked with a combination of commercially available software and our own algorithms based on the Scale Invariant Feature Transform (SIFT). The validation of the mosaic’s spatial co-registration revealed a mean root mean squared error (RMSE) of 1.78 pixels. A thematic map of moss health, derived from the multispectral mosaic using a Modified Triangular Vegetation Index (MTVI2), and an indicative map of moss surface temperature were then combined to demonstrate sufficient accuracy of our co-registration methodology for UAV-based monitoring of Antarctic moss beds. <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Sequential and Multi-Camera 2D Imagers <s> Abstract MiniMCA (Miniature Multiple Camera Array) is a lightweight, frame-based, multi-lens multispectral sensor, which is suitable for mounting on unmanned aerial systems (UAS) to acquire high spatial and temporal resolution imagery for various remote sensing applications. Since MiniMCA has a significant band misregistration effect, an automatic and precise band-to-band registration (BBR) method is proposed in this study. Based on the principle of sensor plane-to-plane projection, a modified projective transformation (MPT) model is developed.
All coefficients of the MPT are estimated from indoor camera calibration, together with corrections for two systematic errors. Therefore, we can transfer all bands into the same image space. Quantitative error analysis shows that the proposed BBR scheme is scene independent and can achieve 0.33 pixels of accuracy, demonstrating that the proposed method is accurate and reliable. Meanwhile, it is difficult to mark ground control points (GCPs) on the MiniMCA images, as their spatial resolution is low when the flight height is higher than 400 m. In this study, a higher resolution RGB camera is adopted to produce a digital surface model (DSM) and assist MiniMCA ortho-image generation. After precise BBR, only one reference band of the MiniMCA imagery is necessary for aerial triangulation because all bands share the same exterior and interior orientation parameters. This means that all the MiniMCA imagery can be ortho-rectified through the same exterior and interior orientation parameters of the reference band. The result of the proposed ortho-rectification procedure shows that the co-registration error between the MiniMCA reference band and the RGB ortho-images is less than 0.6 pixels. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Sequential and Multi-Camera 2D Imagers <s> A recent revolution in miniaturised sensor technology has provided markets with novel hyperspectral imagers operating in the frame format principle. In the case of unmanned aerial vehicle (UAV) based remote sensing, the frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one of the approaches for capturing multi- or hyperspectral frame images.
The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as UAV, because the full spectrum recording is carried out in the time-sequential principle. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands, without orientation, are then matched to the oriented bands accounting the 3D scene to provide exterior orientations, and afterwards, hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Perot interferometer (FPI) on board a multicopter and supported by a high spatial resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5-pixel size. The empirical assessment proved the performance and showed that, with the novel method, most parts of the band misalignments were less than the pixel size. Furthermore, it was shown that the performance of the band alignment was dependent on the spatial distance from the reference band. 
<s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Georeferencing of Sequential and Multi-Camera 2D Imagers <s> Drone-borne hyperspectral imaging is a new and promising technique for fast and precise acquisition, as well as delivery of high-resolution hyperspectral data to a large variety of end-users. Drones can overcome the scale gap between field and air-borne remote sensing, thus providing high-resolution and multi-temporal data. They are easy to use, flexible and deliver data within cm-scale resolution. So far, however, drone-borne imagery has prominently and successfully been almost solely used in precision agriculture and photogrammetry. Drone technology currently mainly relies on structure-from-motion photogrammetry, aerial photography and agricultural monitoring. Recently, a few hyperspectral sensors became available for drones, but complex geometric and radiometric effects complicate their use for geology-related studies. Using two examples, we first show that precise corrections are required for any geological mapping. We then present a processing toolbox for frame-based hyperspectral imaging systems adapted for the complex correction of drone-borne hyperspectral imagery. The toolbox performs sensor- and platform-specific geometric distortion corrections. Furthermore, a topographic correction step is implemented to correct for rough terrain surfaces. We recommend the c-factor-algorithm for geological applications. To our knowledge, we demonstrate for the first time the applicability of the corrected dataset for lithological mapping and mineral exploration. <s> BIB009
The multispectral and hyperspectral sensors based on multiple cameras or tunable filters produce non-aligned spectral bands. The straightforward approach would be to determine the exterior orientations of each band individually using SfM. If the number of bands is large, for example 20 or 100, the separate orientation of each band can result in a significant computational challenge, and therefore, solutions based on image registration are more feasible BIB002 . The transformation can be two-dimensional (such as rigid body, Helmert, affine, polynomial, or projective) or three-dimensional, based on the collinearity model and accounting for the object's 3D structure, i.e., the orthorectification BIB001 . Jhan et al. BIB007 presented an approach utilizing the relative calibration information and projective transformations for the Mini-MCA lightweight camera, which is composed of six individual, rigidly assembled cameras. They used a laboratory calibration process to determine the relative orientations of the individual cameras with respect to the master camera in the multi-camera system; the relative orientations of the master camera (red band) and an additional RGB camera were also determined. The RGB camera was oriented with bundle-block adjustment, and the exterior orientations of the rest of the bands were calculated based on the relative orientations and the exterior orientations of the reference camera. An accuracy of 0.33 pixels was reported in the registered images. Several researchers reported accuracies on the level of approximately two pixels when using approaches based on 2D transformations with the Mini-MCA camera BIB003 BIB004 BIB006 . In the case of tunable filters such as the Rikola camera, each band has a unique exterior orientation. Honkavaara et al. BIB008 showed that the geometric challenges increase with decreasing flight height and increasing flight speed, time difference between the bands, and height differences among the objects. In several studies, good results have been reported when using 2D image transformations in flat environments BIB005 . If the object has great height differences, such as forest, rugged terrain, and built areas, 2D image transformations do not give accurate solutions in general; however, good results were also reported in a rugged environment using 2D image transformations when combined with an image capture strategy in which the platform stops while taking each hypercube BIB009 . Image registration based on physical exterior orientation parameters and orthorectification should be used when operating these tunable filter sensors from mobile platforms in environments where the object of interest has significant height differences.
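Several of the approaches above register a slave band to the master band's pixel grid with a 2D projective transformation (homography). The following is a minimal numpy sketch, not code from the cited works; the 3x3 matrix H, mapping master-frame pixel coordinates to slave-band coordinates, is assumed to be known already, e.g., from laboratory calibration or feature matching:

```python
import numpy as np

def warp_projective(band, H, out_shape):
    """Resample a slave `band` into the master band's pixel grid.

    H is a 3x3 homography mapping homogeneous master-frame pixel
    coordinates (x, y, 1) to slave-band coordinates. Nearest-neighbour
    sampling, zero fill outside the slave image (illustrative sketch).
    """
    rows, cols = out_shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    pts = np.stack([xx.ravel(), yy.ravel(), np.ones(rows * cols)])
    # Map every master-frame pixel into the slave band and round
    sx, sy, sw = H @ pts
    ix = np.rint(sx / sw).astype(int)
    iy = np.rint(sy / sw).astype(int)
    valid = (ix >= 0) & (ix < band.shape[1]) & (iy >= 0) & (iy < band.shape[0])
    out = np.zeros(rows * cols, dtype=band.dtype)
    out[valid] = band[iy[valid], ix[valid]]
    return out.reshape(out_shape)
```

With a pure-translation H the sketch simply shifts the band, while a full projective H additionally absorbs the perspective differences between rigidly mounted lenses; production workflows would use a proper interpolation kernel instead of nearest-neighbour sampling.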
Honkavaara et al. BIB008 developed a rigorous and efficient approach to calculate co-registered orthophoto mosaics of tunable filter images. The process includes the determination of the orientations of three to five reference bands using SfM, the subsequent matching of the unoriented bands to the reference bands, the calculation of their exterior orientations, and the orthorectification of all of the bands. Registration errors of less than a pixel were obtained in forested environments. The authors emphasized the need for a proper block design in order to achieve the desired precision.
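The registration accuracies quoted in this section (0.33 pixels, approximately two pixels, sub-pixel errors) are typically computed as a root mean squared error over matched check points between the reference band and a registered band. A hypothetical helper illustrating the metric, not code from the cited works:

```python
import numpy as np

def coregistration_rmse(pts_reference, pts_registered):
    """RMSE, in pixels, between matched check points.

    Both inputs are (n, 2) arrays of (x, y) pixel coordinates:
    points in the reference band and the corresponding points
    found in the registered band.
    """
    d = np.asarray(pts_reference, float) - np.asarray(pts_registered, float)
    return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))
```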
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> General Procedure for Generating Reflectance Maps from UAVs <s> Report issued by the U.S. National Bureau of Standards discussing specifications of reflectance and proposed nomenclature. As stated in the introduction, "this monograph presents a unified approach to the specification of reflectance in relation to the beam geometry of both the incident and the reflected flux in any reflectometer or in any application of measured reflectance data" (p. 1). This report includes illustrations. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> General Procedure for Generating Reflectance Maps from UAVs <s> A method for the radiometric correction of wide field-of-view airborne imagery has been developed that accounts for the angular dependence of the path radiance and atmospheric transmittance functions to remove atmospheric and topographic effects. The first part of processing is the parametric geocoding of the scene to obtain a geocoded, orthorectified image and the view geometry (scan and azimuth angles) for each pixel as described in part 1 of this jointly submitted paper. The second part of the processing performs the combined atmospheric/ topographic correction. It uses a database of look-up tables of the atmospheric correction functions (path radiance, atmospheric transmittance, direct and diffuse solar flux) calculated with a radiative transfer code. Additionally, the terrain shape obtained from a digital elevation model is taken into account. The issues of the database size and accuracy requirements are critically discussed. The method supports all common types of imaging airborne optical instrument... 
<s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> General Procedure for Generating Reflectance Maps from UAVs <s> Abstract The remote sensing community puts major efforts into calibration and validation of sensors, measurements, and derived products to quantify and reduce uncertainties. Given recent advances in instrument design, radiometric calibration, atmospheric correction, algorithm development, product development, validation, and delivery, the lack of standardization of reflectance terminology and products becomes a considerable source of error. This article provides full access to the basic concept and definitions of reflectance quantities, as given by Nicodemus et al. [Nicodemus, F.E., Richmond, J.C., Hsia, J.J., Ginsberg, I.W., and Limperis, T. (1977). Geometrical Considerations and Nomenclature for Reflectance. In: National Bureau of Standards, US Department of Commerce, Washington, D.C. URL: http://physics.nist.gov/Divisions/Div844/facilities/specphoto/pdf/geoConsid.pdf .] and Martonchik et al. [Martonchik, J.V., Bruegge, C.J., and Strahler, A. (2000). A review of reflectance nomenclature used in remote sensing. Remote Sensing Reviews, 19, 9–20.]. Reflectance terms such as BRDF, HDRF, BRF, BHR, DHR, black-sky albedo, white-sky albedo, and blue-sky albedo are defined, explained, and exemplified, while separating conceptual from measurable quantities. We use selected examples from the peer-reviewed literature to demonstrate that very often the current use of reflectance terminology does not fulfill physical standards and can lead to systematic errors. Secondly, the paper highlights the importance of a proper usage of definitions through quantitative comparison of different reflectance products with special emphasis on wavelength dependent effects. 
Reflectance quantities acquired under hemispherical illumination conditions (i.e., all outdoor measurements) depend not only on the scattering properties of the observed surface, but as well on atmospheric conditions, the object's surroundings, and the topography, with distinct expression of these effects in different wavelengths. We exemplify differences between the hemispherical and directional illumination quantities, based on observations (i.e., MISR), and on reflectance simulations of natural surfaces (i.e., vegetation canopy and snow cover). In order to improve the current situation of frequent ambiguous usage of reflectance terms and quantities, we suggest standardizing the terminology in reflectance product descriptions and that the community carefully utilizes the proposed reflectance terminology in scientific publications. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> General Procedure for Generating Reflectance Maps from UAVs <s> 1. Introduction 2. Historical Perspective and photo Mensuration 3. Radiometry and radiation Propagation 4. The Governing Equation for Radiance Reaching the Sensor 5. Sensors 6. Atmospheric Calibration - Solutions to the Governing Equation 7. Digital Imaging Processing for Image Exploitation 8. Information Dissemination 9. Weak Links in the Chain 10. Image Modeling <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> General Procedure for Generating Reflectance Maps from UAVs <s> The radiometric correction of airborne imagery aims at providing unbiased spectral information about the Earth's surface. Correction steps include system calibration, geometric correction, and the compensation for atmospheric effects. 
Such preprocessed data are affected by the bidirectional reflectance distribution function (BRDF), which requires an additional compensation step. We present a novel method for a surface-cover-dependent BRDF effects correction (BREFCOR). It uses a continuous index based on bottom-of-atmosphere reflectances to tune the Ross–Thick Li–Sparse BRDF model. This calibrated model is then used to correct for observation-angle-dependent anisotropy. The method shows its benefits specifically for wide-field-of-view airborne systems where BRDF effects strongly affect image quality. Evaluation results are shown for sample data from a multispectral photogrammetric Leica ADS camera system and for HYSPEX imaging spectroscopy data. The scalability of the procedure for various kinds of sensor configurations allows for its operational use as part of standard processing systems. <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> General Procedure for Generating Reflectance Maps from UAVs <s> In this study we present a hyperspectral flying goniometer system, based on a rotary-wing unmanned aerial vehicle (UAV) equipped with a spectrometer mounted on an active gimbal. We show that this approach may be used to collect multiangular hyperspectral data over vegetated environments. The pointing and positioning accuracy are assessed using structure from motion and vary from σ = 1° to 8° in pointing and σ = 0.7 to 0.8 m in positioning. We use a wheat dataset to investigate the influence of angular effects on the NDVI, TCARI and REIP vegetation indices. Angular effects caused significant variations on the indices: NDVI = 0.83–0.95; TCARI = 0.04–0.116; REIP = 729–735 nm. Our analysis highlights the necessity to consider angular effects in optical sensors when observing vegetation. 
We compare the measurements of the UAV goniometer to the angular modules of the SCOPE radiative transfer model. Model and measurements are in high accordance (r2 = 0.88) in the infrared region at angles close to nadir; in contrast the comparison show discrepancies at low tilt angles (r2 = 0.25). This study demonstrates that the UAV goniometer is a promising approach for the fast and flexible assessment of angular effects. <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> General Procedure for Generating Reflectance Maps from UAVs <s> Abstract With the increasing availability of spectral sensors and consumer-grade data processing software, a democratization of imaging spectroscopy is taking place. In particular, novel lightweight 2D spectral imagers in combination with UAVs are increasingly being adapted for imaging spectroscopy. In contrast to traditional line-scanners, these sensors capture spectral information as a 2D image within every exposure. With computer vision algorithms embedded in consumer grade software packages, these data can be processed to hyperspectral digital surface models that hold spectral and 3D spatial information in very high resolution. To understand the spectral signal, however, one must comprehend the complexity of the capturing and data processing process in imaging spectroscopy with 2D imagers. This study establishes the theoretical background to comprehend the properties of spectral data acquired with 2D imagers and investigates how different data processing schemes influence the data. To improve the interpretability of a spectral signal derived for an area of interest (AOI), the specific field of view is introduced as a concept to understand the composition of pixels and their angular properties used to characterize a specific AOI within a remote sensing scene. 
These considerations are applied to a multi-temporal field study carried out under different illumination conditions in a barley field phenotyping experiment. It is shown that data processing significantly affects the angular properties of the spectral data and influences the apparent spectral signature. The largest differences are found in the red domain, where the signal differs by approximately 10% relative to a single nadir image. Even larger differences of approximately 14% are found in comparison with ground-based non-imaging field spectrometer measurements. The differences are explained by investigating the interaction between the angular properties of the data and canopy anisotropy, which are wavelength and growth stage dependent. Additionally, it is shown that common vegetation indices cannot normalize the differences and that the retrieval of chlorophyll is affected. In conclusion, this study helps to understand the process of imaging spectroscopy with 2D imagers and provides recommendations for future missions. <s> BIB007
Radiometric processing transforms the readings of a sensor into useful data. First, sensor-related radiometric and spectral calibration needs to be carried out. Second, a transformation to top-of-canopy reflectance based on radiometric reference panels and/or an empirical line method (ELM), secondary reference devices, or atmospheric modeling needs to be carried out. Third, the effects of object reflectance anisotropy (the bidirectional reflectance distribution function, BRDF) and shadows can be normalized. Schott BIB004 calls this entire multi-step process the image chain approach. The different calibration schemes are outlined in Figure 5. These steps can be carried out sequentially as independent steps, which has been the typical approach in classical airborne and spaceborne workflows, for example, as implemented in the ATmospheric CORrection (ATCOR) software BIB002 . UAVs also introduce some novel aspects to be considered. The desired output is usually reflectance. However, in typical situations, the output is, strictly speaking, the hemispherical conical reflectance factor (HCRF; BIB001 BIB003 ), because the IFOV of each pixel captures a conical beam. The pixels of imaging spectrometers have a relatively small IFOV; therefore, their measurements can be considered an approximation of hemispherical directional reflectance factors (HDRF; BIB007 BIB005 ). Finally, multi-angular measurements across large parts of the hemisphere can be used to approximate the BRDF of a surface (e.g., BIB006 ). Moreover, if measurements from the hemisphere are averaged, the bihemispherical reflectance, which is also called albedo (blue-sky albedo in the MODIS product suite; BIB003 ), can be approximated. Consequently, the albedo is also approximated if the information of pixels with a wide range of different viewing geometries (e.g., from multiple images) is averaged, as is often done during orthomosaic generation from 2D images BIB007 .
In every case, the data needs to be calibrated. In the following subsections, the sensor calibration (Section 4.2) and the image data calibration (Section 4.3) processes are described in detail.
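The empirical line method mentioned above fits a per-band linear model, reflectance = gain x DN + offset, from at least two reference panels of known reflectance (typically a dark and a bright panel). A minimal numpy sketch under that assumption; the array layout is illustrative, not taken from the cited works:

```python
import numpy as np

def empirical_line(panel_dn, panel_reflectance):
    """Fit reflectance = gain * DN + offset for every band.

    panel_dn, panel_reflectance: arrays of shape (n_panels, n_bands),
    with n_panels >= 2 (e.g., a dark and a bright reference panel).
    Returns per-band gain and offset vectors.
    """
    dn = np.asarray(panel_dn, dtype=float)
    refl = np.asarray(panel_reflectance, dtype=float)
    n_bands = dn.shape[1]
    gains = np.empty(n_bands)
    offsets = np.empty(n_bands)
    for b in range(n_bands):
        # Degree-1 least-squares fit: polyfit returns [slope, intercept]
        gains[b], offsets[b] = np.polyfit(dn[:, b], refl[:, b], 1)
    return gains, offsets
```

Applying the fitted gain and offset to every pixel DN of the corresponding band then yields the reflectance factor map; with more than two panels, the least-squares fit also gives some robustness against panel measurement noise.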
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> The Nature of Remote Sensing: Introduction. Remote Sensing. Information Extraction from Remote-Sensing Images. Spectral Factors in Remote Sensing. Spectral Signatures. Remote-Sensing Systems. Optical Sensors. Temporal Characteristics. Image Display Systems. Data Systems. Summary. Exercises. References. Optical Radiation Models: Introduction. Visible to Short Wave Infrared Region. Solar Radiation. Radiation Components. Surface-Reflected. Unscattered Component. Surface-Reflected. Atmosphere-Scattered Component. Path-Scattered Component. Total At-Sensor. Solar Radiance. Image Examples in the Solar Region. Terrain Shading. Shadowing. Atmospheric Correction. Midwave to Thermal Infrared Region. Thermal Radiation. Radiation Components. Surface-Emitted Component. Surface-Reflected. Atmosphere-Emitted Component. Path-Emitted Component. Total At-Sensor. Emitted Radiance. Total Solar and Thermal Upwelling Radiance. Image Examples in the Thermal Region. Summary. Exercises. References. Sensor Models: Introduction. Overall Sensor Model. Resolution. The Instrument Response. Spatial Resolution. Spectral Resolution. Spectral Response. Spatial Response. Optical PSFopt. Image Motion PSFIM. Detector PSFdet. Electronics PSFel. Net PSFnet. Comparison of Sensor PSFs. PSF Summary for TM. Imaging System Simulation. Amplification. Sampling and Quantization. Simplified Sensor Model. Geometric Distortion. Orbit Models. Platform Attitude Models. Scanner Models. Earth Model. Line and Whiskbroom ScanGeometry. Pushbroom Scan Geometry. Topographic Distortion. Summary. Exercises. References. Data Models: Introduction. A Word on Notation. Univariate Image Statistics. Histogram. Normal Distribution. Cumulative Histogram. Statistical Parameters. Multivariate Image Statistics. 
Reduction to Univariate Statistics. Noise Models. Statistical Measures of Image Quality. Contrast. Modulation. Signal-to-Noise Ratio (SNR). Noise Equivalent Signal. Spatial Statistics. Visualization of Spatial Covariance. Covariance with Semivariogram. Separability and Anisotropy. Power Spectral Density. Co-occurrence Matrix. Fractal Geometry. Topographic and Sensor Effects. Topography and Spectral Statistics. Sensor Characteristics and Spectral Stastistics. Sensor Characteristics and Spectral Scattergrams. Summary. Exercises. References. Spectral Transforms: Introduction. Feature Space. Multispectral Ratios. Vegetation Indexes. Image Examples. Principal Components. Standardized Principal Components (SPC) Transform. Maximum Noise Fraction (MNF) Transform. Tasseled Cap Tranformation. Contrast Enhancement. Transformations Based on Global Statistics. Linear Transformations. Nonlinear Transformations. Normalization Stretch. Reference Stretch. Thresholding. Adaptive Transformation. Color Image Contrast Enhancement. Min-max Stretch. Normalization Stretch. Decorrelation Stretch. Color Spacer Transformations. Summary. Exercises. References. Spatial Transforms: Introduction. An Image Model for Spatial Filtering. Convolution Filters. Low Pass and High Pass Filters. High Boost Filters. Directional Filters. The Border Region. Characterization of Filtered Images. The Box Filter Algorithm. Cascaded Linear Filters. Statistical Filters. Gradient Filters. Fourier Synthesis. Discrete Fourier Transforms in 2-D. The Fourier Components. Filtering with the Fourier Transform. Transfer Functions. The Power Spectrum. Scale Space Transforms. Image Resolution Pyramids. Zero-Crossing Filters. Laplacian-of-Gaussian (LoG) Filters. Difference-of-Gaussians (DoG) Filters.Wavelet Transforms. Summary. Exercises. References. Correction and Calibration: Introduction. Noise Correction. Global Noise. Sigma Filter. Nagao-Matsuyama Filter. Local Noise. Periodic Noise. Distriping 359. 
Global, Linear Detector Matching. Nonlinear Detector Matching. Statistical Modification to Linear and Nonlinear Detector Matching. Spatial Filtering Approaches. Radiometric Calibration. Sensor Calibration. Atmospheric Correction. Solar and Topographic Correction. Image Examples. Calibration and Normalization of Hyperspectral Imagery. AVIRIS Examples. Distortion Correction. Polynomial Distortion Models. Ground Control Points (GCPs). Coordinate Transformation. Map Projections. Resampling. Summary. Exercises. References. Registration and Image Fusion: Introduction. What is Registration? Automated GCP Location. Area Correlation. Other Spatial Features. Orthorectification. Low-Resolution DEM. High-Resolution DEM. Hierarchical Warp Stereo. Multi-Image Fusion. Spatial Domain Fusion. High Frequency Modulation. Spectral Domain Fusion. Fusion Image Examples. Summary. Exercises. References. Thematic Classification: Introduction. The Importance of Image Scale. The Notion of Similarity. Hard Versus Soft Classification. Training the Classifier. Supervised Training. Unsupervised Training. K-Means Clustering Algorithm. Clustering Examples. Hybrid Supervised/Unsupervised Training. Non-Parametric Classification Algorithms. Level-Slice. Nearest-Mean. Artificial Neural Networks (ANNs). Back-Propagation Algorithm. Nonparametric Classification Examples. Parametric Classification Algorithms. Estimation of Model-Parameters. Discriminant Functions. The Normal Distribution Model. Relation to the Nearest-Mean Classifier. Supervised Classification Examples and Comparison to Nonparametric Classifiers. Segmentation. Region Growing. Region Labeling. Sub-Pixel Classification. The Linear Mixing Model. Unmixing Model. Hyperspectral Image Analysis. Visualization of the Image Cube. Feature Extraction. Image Residuals. Pre-Classification Processing and Feature Extraction. Classification Algorithms. Exercises. Error Analysis. Multitemporal Images. Summary. References. Index. 
<s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> Two critical limitations for using current satellite sensors in real-time crop management are the lack of imagery with optimum spatial and spectral resolutions and an unfavorable revisit time for most crop stress-detection applications. Alternatives based on manned airborne platforms are lacking due to their high operational costs. A fundamental requirement for providing useful remote sensing products in agriculture is the capacity to combine high spatial resolution and quick turnaround times. Remote sensing sensors placed on unmanned aerial vehicles (UAVs) could fill this gap, providing low-cost approaches to meet the critical requirements of spatial, spectral, and temporal resolutions. This paper demonstrates the ability to generate quantitative remote sensing products by means of a helicopter-based UAV equipped with inexpensive thermal and narrowband multispectral imaging sensors. During summer of 2007, the platform was flown over agricultural fields, obtaining thermal imagery in the 7.5-13-µm region (40-cm resolution) and narrowband multispectral imagery in the 400-800-nm spectral region (20-cm resolution). Surface reflectance and temperature imagery were obtained, after atmospheric corrections with MODTRAN. Biophysical parameters were estimated using vegetation indices, namely, normalized difference vegetation index, transformed chlorophyll absorption in reflectance index/optimized soil-adjusted vegetation index, and photochemical reflectance index (PRI), coupled with SAILH and FLIGHT models. As a result, the image products of leaf area index, chlorophyll content (Cab), and water stress detection from PRI index and canopy temperature were produced and successfully validated. 
This paper demonstrates that results obtained with a low-cost UAV system for agricultural applications yielded comparable estimations, if not better, than those obtained by traditional manned airborne sensors. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> The remote detection of water stress in a citrus orchard was investigated using leaf-level measurements of chlorophyll fluorescence and Photochemical Reflectance Index (PRI) data, seasonal time-series of crown temperature and PRI, and high-resolution airborne imagery. The work was conducted in an orchard where a regulated deficit irrigation (RDI) experiment generated a gradient in water stress levels. Stomatal conductance (Gs) and water potential (Ψ) were measured over the season on each treatment block. The airborne data consisted of thermal and hyperspectral imagery acquired at the time of maximum stress differences among treatments, prior to the re-watering phase, using a miniaturized thermal camera and a micro-hyperspectral imager on board an unmanned aerial vehicle (UAV). The hyperspectral imagery was acquired at 40 cm resolution and 260 spectral bands in the 400-885 nm spectral range at 6.4 nm full width at half maximum (FWHM) spectral resolution and 1.85 nm sampling interval, enabling the identification of pure crowns for extracting radiance and reflectance hyperspectral spectra from each tree. The FluorMOD model was used to investigate the retrieval of chlorophyll fluorescence by applying the Fraunhofer Line Depth (FLD) principle using three spectral bands (FLD3), which demonstrated that fluorescence retrieval was feasible with the configuration of the UAV micro-hyperspectral instrument flown over the orchard. 
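The three-band FLD (FLD3) principle invoked above assumes that reflectance and fluorescence are spectrally flat across the absorption feature, so that radiance measured inside and beside the line can be solved for the fluorescence term. A minimal sketch follows; the wavelengths around the O2-A feature and all numeric values are illustrative (not from the cited study), and the downwelling signal is assumed to be expressed in radiance units, e.g., measured off a reference panel:

```python
def fld3(wl_in, e_in, l_in, left, right):
    """Three-band Fraunhofer Line Depth (FLD3) fluorescence retrieval.

    wl_in, e_in, l_in : wavelength, downwelling signal, and target radiance
                        in the absorption band
    left, right       : (wavelength, downwelling, radiance) of the two
                        shoulder bands, linearly interpolated to wl_in

    Assumes reflectance and fluorescence are spectrally flat across the line.
    """
    wl_l, e_l, l_l = left
    wl_r, e_r, l_r = right
    w_l = (wl_r - wl_in) / (wl_r - wl_l)   # linear interpolation weights
    w_r = (wl_in - wl_l) / (wl_r - wl_l)
    e_out = w_l * e_l + w_r * e_r
    l_out = w_l * l_l + w_r * l_r
    # FLD estimator: solves L = r*E + F at the in/out bands for F
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

# Synthetic check around the O2-A feature (~760 nm): with flat reflectance
# r = 0.5 and true fluorescence F = 0.01, the estimator recovers F exactly.
r, f_true = 0.5, 0.01
e_l, e_in, e_r = 1.00, 0.20, 0.98          # deep absorption line at 760 nm
f_est = fld3(760.0, e_in, r * e_in + f_true,
             (758.0, e_l, r * e_l + f_true),
             (762.0, e_r, r * e_r + f_true))
```

The interpolation of the two shoulders is what distinguishes FLD3 from the classic two-band FLD; it compensates for a linearly varying background across the feature.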
Results demonstrated the link between seasonal PRI and crown temperature acquired from instrumented trees and field measurements of stomatal conductance and water potential. The sensitivity of PRI and Tc-Ta time-series to water stress levels demonstrated a time delay of PRI vs Tc-Ta during the recovery phase after re-watering started. At the time of the maximum stress difference among treatment blocks, the airborne imagery acquired from the UAV platform demonstrated that the crown temperature yielded the best coefficient of determination for Gs (r 2 <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> Unmanned aerial vehicles (UAVs) represent a quickly evolving technology, broadening the availability of remote sensing tools to small-scale research groups across a variety of scientific fields. Development of UAV platforms requires broad technical skills covering platform development, data post-processing, and image analysis. UAV development is constrained by a need to balance technological accessibility, flexibility in application and quality in image data. In this study, the quality of UAV imagery acquired by a miniature 6-band multispectral imaging sensor was improved through the application of practical image-based sensor correction techniques. Three major components of sensor correction were focused upon: noise reduction, sensor-based modification of incoming radiance, and lens distortion. Sensor noise was reduced through the use of dark offset imagery. Sensor modifications through the effects of filter transmission rates, the relative monochromatic efficiency of the sensor and the effects of vignetting were removed through a combination of spatially/spectrally dependent correction factors. Lens distortion was reduced through the implementation of the Brown–Conrady model. 
Data post-processing serves dual roles in data quality improvement, and the identification of platform limitations and sensor idiosyncrasies. The proposed corrections improve the quality of the raw multispectral imagery, facilitating subsequent quantitative image analysis. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> A novel hyperspectral measurement system for unmanned aerial vehicles (UAVs) in the visible to near infrared (VIS/NIR) range (350-800 nm) was developed based on the Ocean Optics STS microspectrometer. The ultralight device relies on small open source electronics and weighs a ready-to-fly 216 g. The airborne spectrometer is wirelessly synchronized to a second spectrometer on the ground for simultaneous white reference collection. In this paper, the performance of the system is investigated and specific issues such as dark current correction or second order effects are addressed. Full width at half maximum was between 2.4 and 3.0 nm depending on the spectral band. The functional system was tested in flight at a 10-m altitude against a current field spectroscopy gold standard device Analytical Spectral Devices Field Spec 4 over an agricultural site. A highly significant correlation was found in reflection comparing both measurement approaches. Furthermore, the aerial measurements have a six times smaller standard deviation than the hand held measurements. Thus, the present spectrometer opens a possibility for low-cost but high-precision field spectroscopy from UAVs. 
<s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> One of the key advantages of a low-flying unmanned aircraft system (UAS) is its ability to acquire digital images at an ultrahigh spatial resolution of a few centimeters. Remote sensing of quantitative biochemical and biophysical characteristics of small-sized, spatially fragmented vegetation canopies requires, however, not only high spatial, but also high spectral (i.e., hyperspectral) resolution. In this paper, we describe the design, development, airborne operations, calibration, processing, and interpretation of image data collected with a new hyperspectral unmanned aircraft system (HyperUAS). HyperUAS is a remotely controlled multirotor prototype carrying onboard a lightweight pushbroom spectroradiometer coupled with a dual-frequency GPS and an inertial movement unit. The prototype was built to remotely acquire imaging spectroscopy data of 324 spectral bands (162 bands in a spectrally binned mode) with bandwidths between 4 and 5 nm at an ultrahigh spatial resolution of 2-5 cm. Three field airborne experiments, conducted over agricultural crops and over natural ecosystems of Antarctic mosses, proved operability of the system in standard field conditions, but also in the remote, harsh, low-temperature environment of East Antarctica. Experimental results demonstrate that HyperUAS is capable of delivering georeferenced maps of quantitative biochemical and biophysical variables of vegetation and of actual vegetation health state at an unprecedented spatial resolution of 5 cm. 
<s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> During the last years commercial hyperspectral imaging sensors have been miniaturized and their performance has been demonstrated on Unmanned Aerial Vehicles (UAV). However currently the commercial hyperspectral systems still require minimum payload capacity of approximately 3 kg, forcing usage of rather large UAVs. In this article we present a lightweight hyperspectral mapping system (HYMSY) for rotor-based UAVs, the novel processing chain for the system, and its potential for agricultural mapping and monitoring applications. The HYMSY consists of a custom-made pushbroom spectrometer (400–950 nm, 9 nm FWHM, 25 lines/s, 328 px/line), a photogrammetric camera, and a miniature GPS-Inertial Navigation System. The weight of HYMSY in ready-to-fly configuration is only 2.0 kg and it has been constructed mostly from off-the-shelf components. The processing chain uses a photogrammetric algorithm to produce a Digital Surface Model (DSM) and provides high accuracy orientation of the system over the DSM. The pushbroom data is georectified by projecting it onto the DSM with the support of photogrammetric orientations and the GPS-INS data. Since an up-to-date DSM is produced internally, no external data are required and the processing chain is capable to georectify pushbroom data fully automatically. The system has been adopted for several experimental flights related to agricultural and habitat monitoring applications. For a typical flight, an area of 2–10 ha was mapped, producing a RGB orthomosaic at 1–5 cm resolution, a DSM at 5–10 cm resolution, and a hyperspectral datacube at 10–50 cm resolution. 
<s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. First, we describe and apply methods to radiometrically characterize and calibrate these cameras. Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion. The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52 – 59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture. 
The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand. <s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> Hyperspectral imaging (HSI) is an exciting and rapidly expanding area of instruments and technology in passive remote sensing. Due to quickly changing applications, the instruments are evolving to suit new uses and there is a need for consistent definition, testing, characterization and calibration. This paper seeks to outline a broad prescription and recommendations for basic specification, testing and characterization that must be done on Visible Near Infra-Red grating-based sensors in order to provide calibrated absolute output and performance or at least relative performance that will suit the user’s task. The primary goal of this paper is to provide awareness of the issues with performance of this technology and make recommendations towards standards and protocols that could be used for further efforts in emerging procedures for national laboratory and standards groups. <s> BIB009 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> The Calibration Home Base (CHB) at the Remote Sensing Technology Institute of the German Aerospace Center (DLR-IMF) is an optical laboratory designed for the calibration of imaging spectrometers for the VNIR/SWIR wavelength range. 
Radiometric, spectral and geometric characterization is realized in the CHB in a precise and highly automated fashion. This allows performing a wide range of time-consuming measurements in an efficient way. The implementation of ISO 9001 standards ensures a traceable quality of results. DLR-IMF will support the calibration and characterization campaign of the future German spaceborne hyperspectral imager EnMAP. In the context of this activity, a procedure for the correction of imaging artifacts, such as due to stray light, is currently being developed by DLR-IMF. The goal is the correction of in-band stray light as well as ghost images down to a level of a few digital numbers in the whole wavelength range 420-2450 nm. DLR-IMF owns a Norsk Elektro Optikk HySpex airborne imaging spectrometer system that has been thoroughly characterized. This system will be used to test stray light calibration procedures for EnMAP. Hyperspectral snapshot sensors offer the possibility to simultaneously acquire hyperspectral data in two dimensions. Recently, these rather new spectrometers have attracted much interest in the remote sensing community. Different designs are currently used for local area observation, such as by use of small unmanned aerial vehicles (sUAV). In this context the CHB's measurement capabilities are currently being extended such that a standard measurement procedure for these new sensors will be implemented. <s> BIB010 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> This study describes the development of a small hyperspectral Unmanned Aircraft System (HyUAS) for measuring Visible and Near-Infrared (VNIR) surface reflectance and sun-induced fluorescence, co-registered with high-resolution RGB imagery, to support field spectroscopy surveys and calibration and validation of remote sensing products. 
The system, namely HyUAS, is based on a multirotor platform equipped with a cost-effective payload composed of a VNIR non-imaging spectrometer and an RGB camera. The spectrometer is connected to a custom entrance optics receptor developed to tune the instrument field-of-view and to obtain systematic measurements of instrument dark-current. The geometric, radiometric and spectral characteristics of the instruments were characterized and calibrated through dedicated laboratory tests. The overall accuracy of HyUAS data was evaluated during a flight campaign in which surface reflectance was compared with ground-based reference measurements. HyUAS data were used to estimate spectral indices and far-red fluorescence for different land covers. RGB images were processed as a high-resolution 3D surface model using structure from motion algorithms. The spectral measurements were accurately geo-located and projected on the digital surface model. The overall results show that: (i) rigorous calibration enabled radiance and reflectance spectra from HyUAS with RRMSE < 10% compared with ground measurements; (ii) the low-flying UAS setup allows retrieving fluorescence in absolute units; (iii) the accurate geo-location of spectra on the digital surface model greatly improves the overall interpretation of reflectance and fluorescence data. In general, the HyUAS was demonstrated to be a reliable system for supporting high-resolution field spectroscopy surveys allowing one to collect systematic measurements at very detailed spatial resolution with a valuable potential for vegetation monitoring studies. 
Furthermore, it can be considered a useful tool for collecting spatially-distributed observations of reflectance and fluorescence that can be further used for calibration and validation activities of airborne and satellite optical images in the context of the upcoming FLEX mission and the VNIR spectral bands of optical Earth observation missions (i.e., Landsat, Sentinel-2 and Sentinel-3). <s> BIB011 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> Hyperspectral remote sensing is used in precision agriculture to remotely and quickly acquire crop phenotype information. This paper describes the generation of a digital orthophoto map (DOM) and radiometric calibration for images taken by a miniaturized snapshot hyperspectral camera mounted on a lightweight unmanned aerial vehicle (UAV). The snapshot camera is a relatively new type of hyperspectral sensor that can acquire an image cube with one spectral and two spatial dimensions at one exposure. The images acquired by the hyperspectral snapshot camera need to be mosaicked together to produce a DOM and radiometrically calibrated before analysis. However, the spatial resolution of hyperspectral cubes is too low to mosaic the images together. Furthermore, there are no systematic radiometric calibration methods or procedures for snapshot hyperspectral images acquired from low-altitude carrier platforms. In this study, we obtained hyperspectral imagery using a snapshot hyperspectral sensor mounted on a UAV. We quantitatively evaluated the radiometric response linearity (RRL) and radiometric response variation (RRV) and proposed a method to correct the RRV effect. 
We then introduced a method to interpolate position and orientation system (POS) information and generate a DOM with low spatial resolution and a digital elevation model (DEM) using a 3D mesh model built from panchromatic images with high spatial resolution. The relative horizontal geometric precision of the DOM was validated by comparison with a DOM generated from a digital RGB camera. A crop surface model (CSM) was produced from the DEM, and crop height for 48 sampling plots was extracted and compared with the corresponding field-measured crop height to verify the relative precision of the DEM. Finally, we applied two absolute radiometric calibration methods to the generated DOM and verified their accuracy via comparison with spectra measured with an ASD Field Spec Pro spectrometer (Analytical Spectral Devices, Boulder, CO, USA). The DOM had high relative horizontal accuracy, and compared with the digital camera-derived DOM, spatial differences were below 0.05 m (RMSE = 0.035). The determination coefficient for a regression between DEM-derived and field-measured crop height was 0.680. The radiometric precision was 5% for bands between 500 and 945 nm, and the reflectance curve in the infrared spectral region did not decrease as in previous research. The pixel and data sizes for the DOM corresponding to a field area of approximately 85 m × 34 m were small (0.67 m and approximately 13.1 megabytes, respectively), which is convenient for data transmission, preprocessing and analysis. The proposed method for radiometric calibration and DOM generation from hyperspectral cubes can be used to yield hyperspectral imagery products for various applications, particularly precision agriculture. 
<s> BIB012 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensor-Related Calibration <s> Drone-borne hyperspectral imaging is a new and promising technique for fast and precise acquisition, as well as delivery of high-resolution hyperspectral data to a large variety of end-users. Drones can overcome the scale gap between field and air-borne remote sensing, thus providing high-resolution and multi-temporal data. They are easy to use, flexible and deliver data within cm-scale resolution. So far, however, drone-borne imagery has prominently and successfully been almost solely used in precision agriculture and photogrammetry. Drone technology currently mainly relies on structure-from-motion photogrammetry, aerial photography and agricultural monitoring. Recently, a few hyperspectral sensors became available for drones, but complex geometric and radiometric effects complicate their use for geology-related studies. Using two examples, we first show that precise corrections are required for any geological mapping. We then present a processing toolbox for frame-based hyperspectral imaging systems adapted for the complex correction of drone-borne hyperspectral imagery. The toolbox performs sensor- and platform-specific geometric distortion corrections. Furthermore, a topographic correction step is implemented to correct for rough terrain surfaces. We recommend the c-factor-algorithm for geological applications. To our knowledge, we demonstrate for the first time the applicability of the corrected dataset for lithological mapping and mineral exploration. <s> BIB013
Radiometric sensor calibration determines the radiometric response of an individual sensor BIB001 . The calibration process includes several phases: a relative radiometric calibration, which aims for a uniform output across the pixels and over time; a spectral calibration, which determines the spectral response of the bands; and an absolute radiometric calibration, which determines the transformation from pixel values to the physical unit of radiance. A comprehensive review of calibration procedures for high-resolution radiance measurements can be found in Jablonski et al. BIB009 and Yoon and Kacker. In the following, we focus on the sensor calibration procedures that are required to generate reflectance maps from spectral UAV data. These procedures provide sensor-specific calibration factors that are applied to the captured spectrometric datasets. While the calibration steps are essentially the same for point, line, and 2D imagers, the complexity increases with the data dimension, since every pixel needs to be characterized and calibrated. In many cases, researchers have implemented their own calibration procedures for the small spectrometers or spectral imagers used on UAVs, since either the systems have been experimental setups or the manufacturers of small-format sensors have not provided calibration files or suitable calibration procedures. The examples in this section are taken from studies with the point spectrometers Ocean Optics STS-VIS BIB005 and USB 4000 BIB011 , the pushbroom system Headwall Photonics Micro-Hyperspec BIB006 BIB003 , other custom-made pushbroom sensors BIB007 , the 2D imagers Cubert Firefleye BIB008 BIB010 BIB012 and Rikola FPI BIB013 , and the Tetracam MCA and mini-MCA models BIB004 BIB002 .
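The calibration phases described above can be sketched, for a single band, as a dark-signal subtraction, a division by the relative per-pixel response, and an absolute gain applied per unit integration time. The array names and calibration factors below are illustrative assumptions, not the calibration of any specific sensor:

```python
import numpy as np

def calibrate_to_radiance(dn, dark, flat, gain, t_int):
    """Convert raw digital numbers (DN) of one spectral band to radiance.

    dn    : raw frame (2D array of DN)
    dark  : dark-signal frame recorded at the same integration time
    flat  : relative per-pixel response (flat field / PRNU, mean ~ 1.0)
    gain  : absolute calibration factor (radiance per DN per second)
    t_int : integration time in seconds
    """
    dn_corr = (dn.astype(float) - dark) / flat   # relative calibration
    return gain * dn_corr / t_int                # absolute calibration

# Synthetic check: simulate a uniform target and invert the measurement.
rng = np.random.default_rng(0)
true_L = 0.12                                    # radiance of a uniform target
gain, t_int = 0.05, 0.01
dark = np.full((4, 4), 100.0)                    # constant dark signal
flat = 1.0 - 0.2 * rng.random((4, 4))            # per-pixel sensitivity
dn = true_L * t_int / gain * flat + dark         # simulated raw frame
L = calibrate_to_radiance(dn, dark, flat, gain, t_int)
```

For a point spectrometer the same three steps apply per spectral sample rather than per pixel; for 2D imagers every band of the cube carries its own dark, flat, and gain terms.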
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Relative Radiometric Calibration <s> We discuss calibration and removal of “vignetting” (radial falloff) and exposure (gain) variations from sequences of images. Even when the response curve is known, spatially varying ambiguities prevent us from recovering the vignetting, exposure, and scene radiances uniquely. However, the vignetting and exposure variations can nonetheless be removed from the images without resolving these ambiguities or the previously known scale and gamma ambiguities. Applications include panoramic image mosaics, photometry for material reconstruction, image-based rendering, and preprocessing for correlation-based vision algorithms. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Relative Radiometric Calibration <s> In many computer vision systems, it is assumed that the image brightness of a point directly reflects the scene radiance of the point. However, the assumption does not hold in most cases due to nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics and textures of 3D models where colors look inconsistent and notable boundaries exist. In this paper, we propose a full radiometric calibration algorithm that includes robust estimation of the radiometric response function, exposures, and vignetting. By decoupling the effect of vignetting from the response function estimation, we approach each process in a manner that is robust to noise and outliers. We verify our algorithm with both synthetic and real data, which shows significant improvement compared to existing methods. 
We apply our estimation results to radiometrically align images for seamless mosaics and 3D model textures. We also use our method to create high dynamic range (HDR) mosaics that are more representative of the scene than normal mosaics. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Relative Radiometric Calibration <s> Unmanned aerial vehicles (UAVs) represent a quickly evolving technology, broadening the availability of remote sensing tools to small-scale research groups across a variety of scientific fields. Development of UAV platforms requires broad technical skills covering platform development, data post-processing, and image analysis. UAV development is constrained by a need to balance technological accessibility, flexibility in application and quality in image data. In this study, the quality of UAV imagery acquired by a miniature 6-band multispectral imaging sensor was improved through the application of practical image-based sensor correction techniques. Three major components of sensor correction were focused upon: noise reduction, sensor-based modification of incoming radiance, and lens distortion. Sensor noise was reduced through the use of dark offset imagery. Sensor modifications through the effects of filter transmission rates, the relative monochromatic efficiency of the sensor and the effects of vignetting were removed through a combination of spatially/spectrally dependent correction factors. Lens distortion was reduced through the implementation of the Brown–Conrady model. Data post-processing serves dual roles in data quality improvement, and the identification of platform limitations and sensor idiosyncrasies. The proposed corrections improve the quality of the raw multispectral imagery, facilitating subsequent quantitative image analysis. 
<s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Relative Radiometric Calibration <s> A novel hyperspectral measurement system for unmanned aerial vehicles (UAVs) in the visible to near infrared (VIS/NIR) range (350-800 nm) was developed based on the Ocean Optics STS microspectrometer. The ultralight device relies on small open source electronics and weighs a ready-to-fly 216 g. The airborne spectrometer is wirelessly synchronized to a second spectrometer on the ground for simultaneous white reference collection. In this paper, the performance of the system is investigated and specific issues such as dark current correction or second order effects are addressed. Full width at half maximum was between 2.4 and 3.0 nm depending on the spectral band. The functional system was tested in flight at a 10-m altitude against a current field spectroscopy gold standard device Analytical Spectral Devices Field Spec 4 over an agricultural site. A highly significant correlation was found in reflection comparing both measurement approaches. Furthermore, the aerial measurements have a six times smaller standard deviation than the hand held measurements. Thus, the present spectrometer opens a possibility for low-cost but high-precision field spectroscopy from UAVs. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Relative Radiometric Calibration <s> During the last years commercial hyperspectral imaging sensors have been miniaturized and their performance has been demonstrated on Unmanned Aerial Vehicles (UAV). However currently the commercial hyperspectral systems still require minimum payload capacity of approximately 3 kg, forcing usage of rather large UAVs. 
In this article we present a lightweight hyperspectral mapping system (HYMSY) for rotor-based UAVs, the novel processing chain for the system, and its potential for agricultural mapping and monitoring applications. The HYMSY consists of a custom-made pushbroom spectrometer (400–950 nm, 9 nm FWHM, 25 lines/s, 328 px/line), a photogrammetric camera, and a miniature GPS-Inertial Navigation System. The weight of HYMSY in ready-to-fly configuration is only 2.0 kg and it has been constructed mostly from off-the-shelf components. The processing chain uses a photogrammetric algorithm to produce a Digital Surface Model (DSM) and provides high accuracy orientation of the system over the DSM. The pushbroom data is georectified by projecting it onto the DSM with the support of photogrammetric orientations and the GPS-INS data. Since an up-to-date DSM is produced internally, no external data are required and the processing chain is capable of georectifying pushbroom data fully automatically. The system has been adopted for several experimental flights related to agricultural and habitat monitoring applications. For a typical flight, an area of 2–10 ha was mapped, producing an RGB orthomosaic at 1–5 cm resolution, a DSM at 5–10 cm resolution, and a hyperspectral datacube at 10–50 cm resolution. <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Relative Radiometric Calibration <s> One of the key advantages of a low-flying unmanned aircraft system (UAS) is its ability to acquire digital images at an ultrahigh spatial resolution of a few centimeters. Remote sensing of quantitative biochemical and biophysical characteristics of small-sized spatially fragmented vegetation canopies requires, however, not only high spatial, but also high spectral, i.e., hyperspectral, resolution.
In this paper, we describe the design, development, airborne operations, calibration, processing, and interpretation of image data collected with a new hyperspectral unmanned aircraft system (HyperUAS). HyperUAS is a remotely controlled multirotor prototype carrying onboard a lightweight pushbroom spectroradiometer coupled with a dual frequency GPS and an inertial movement unit. The prototype was built to remotely acquire imaging spectroscopy data of 324 spectral bands (162 bands in a spectrally binned mode) with bandwidths between 4 and 5 nm at an ultrahigh spatial resolution of 2–5 cm. Three field airborne experiments, conducted over agricultural crops and over natural ecosystems of Antarctic mosses, proved operability of the system in standard field conditions, but also in a remote and harsh, low-temperature environment of East Antarctica. Experimental results demonstrate that HyperUAS is capable of delivering georeferenced maps of quantitative biochemical and biophysical variables of vegetation and of actual vegetation health state at an unprecedented spatial resolution of 5 cm. <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Relative Radiometric Calibration <s> During the last years commercial hyperspectral imaging sensors have been miniaturized and their performance has been demonstrated on Unmanned Aerial Vehicles (UAV). However currently the commercial hyperspectral systems still require minimum payload capacity of approximately 3 kg, forcing usage of rather large UAVs.
The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52 – 59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture. The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Relative Radiometric Calibration <s> Drone-borne hyperspectral imaging is a new and promising technique for fast and precise acquisition, as well as delivery of high-resolution hyperspectral data to a large variety of end-users. 
Drones can overcome the scale gap between field and air-borne remote sensing, thus providing high-resolution and multi-temporal data. They are easy to use, flexible and deliver data within cm-scale resolution. So far, however, drone-borne imagery has prominently and successfully been almost solely used in precision agriculture and photogrammetry. Drone technology currently mainly relies on structure-from-motion photogrammetry, aerial photography and agricultural monitoring. Recently, a few hyperspectral sensors became available for drones, but complex geometric and radiometric effects complicate their use for geology-related studies. Using two examples, we first show that precise corrections are required for any geological mapping. We then present a processing toolbox for frame-based hyperspectral imaging systems adapted for the complex correction of drone-borne hyperspectral imagery. The toolbox performs sensor- and platform-specific geometric distortion corrections. Furthermore, a topographic correction step is implemented to correct for rough terrain surfaces. We recommend the c-factor-algorithm for geological applications. To our knowledge, we demonstrate for the first time the applicability of the corrected dataset for lithological mapping and mineral exploration. <s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Relative Radiometric Calibration <s> Hyperspectral remote sensing is used in precision agriculture to remotely and quickly acquire crop phenotype information. This paper describes the generation of a digital orthophoto map (DOM) and radiometric calibration for images taken by a miniaturized snapshot hyperspectral camera mounted on a lightweight unmanned aerial vehicle (UAV). 
The snapshot camera is a relatively new type of hyperspectral sensor that can acquire an image cube with one spectral and two spatial dimensions at one exposure. The images acquired by the hyperspectral snapshot camera need to be mosaicked together to produce a DOM and radiometrically calibrated before analysis. However, the spatial resolution of hyperspectral cubes is too low to mosaic the images together. Furthermore, there are no systematic radiometric calibration methods or procedures for snapshot hyperspectral images acquired from low-altitude carrier platforms. In this study, we obtained hyperspectral imagery using a snapshot hyperspectral sensor mounted on a UAV. We quantitatively evaluated the radiometric response linearity (RRL) and radiometric response variation (RRV) and proposed a method to correct the RRV effect. We then introduced a method to interpolate position and orientation system (POS) information and generate a DOM with low spatial resolution and a digital elevation model (DEM) using a 3D mesh model built from panchromatic images with high spatial resolution. The relative horizontal geometric precision of the DOM was validated by comparison with a DOM generated from a digital RGB camera. A surface crop model (CSM) was produced from the DEM, and crop height for 48 sampling plots was extracted and compared with the corresponding field-measured crop height to verify the relative precision of the DEM. Finally, we applied two absolute radiometric calibration methods to the generated DOM and verified their accuracy via comparison with spectra measured with an ASD Field Spec Pro spectrometer (Analytical Spectral Devices, Boulder, CO, USA). The DOM had high relative horizontal accuracy, and compared with the digital camera-derived DOM, spatial differences were below 0.05 m (RMSE = 0.035). The determination coefficient for a regression between DEM-derived and field-measured crop height was 0.680. 
The radiometric precision was 5% for bands between 500 and 945 nm, and the reflectance curve in the infrared spectral region did not decrease as in previous research. The pixel and data sizes for the DOM corresponding to a field area of approximately 85 m × 34 m were small (0.67 m and approximately 13.1 megabytes, respectively), which is convenient for data transmission, preprocessing and analysis. The proposed method for radiometric calibration and DOM generation from hyperspectral cubes can be used to yield hyperspectral imagery products for various applications, particularly precision agriculture. <s> BIB009 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Relative Radiometric Calibration <s> Perception systems for outdoor robotics have to deal with varying environmental conditions. Variations in illumination in particular, are currently the biggest challenge for vision-based perception. In this paper we present an approach for radiometric characterization of multispectral cameras. To enable spatio-temporal mapping we also present a procedure for in-situ illumination estimation, resulting in radiometric calibration of the collected images. In contrast to current approaches, we present a purely data driven, parameter free approach, based on maximum likelihood estimation which can be performed entirely on the field, without requiring specialised laboratory equipment. Our routine requires three simple datasets which are easily acquired using most modern multispectral cameras. We evaluate the framework with a cost-effective snapshot multispectral camera. The results show that our method enables the creation of quatitatively accurate relative reflectance images with challenging on field calibration datasets under a variety of ambient conditions. <s> BIB010
Figure 5. The full data processing workflow to create a reflectance data product. First, sensor-related calibration procedures are carried out. Relative calibration (RC1) and spectral calibration (SC) transform the digital numbers (DN) of the sensor to normalized DN (DNn). Further, absolute radiometric calibration (RC2) can be carried out to generate at-sensor radiance (Ls). Second, the data is transformed to reflectance factors (R) with the empirical line method (ELM), based on a second radiometrically calibrated reference device on the ground, the UAV, or models. Geometric processing (GP) estimates the relative position and orientation of the measurements and composes the data into a scene. Radiometric block adjustment can be used at different steps in the process to optimize the radiometry of the scene and correct for bidirectional reflectance distribution function (BRDF) effects. Additional modules may then transform the reflectance factors in the scene to reflectance quantities (cf. Section 4.4), and shadow and topography effects may be corrected. Independent radiometric reference targets are used to validate the data. The processing procedures are tracked in metadata to allow an accurate interpretation of the results.

Relative radiometric calibration transforms the output of the sensor to normalized DNs (DNn), which have a uniform response over the entire image during the time of operation. This transformation includes dark signal correction as well as photo response and optical path non-uniformity normalization. The dark signal noise mainly consists of readout noise and thermal noise, which are related to sensor temperature and integration time; it is corrected by estimating the dark signal non-uniformity (DSNU).
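As a concrete illustration of the dark signal correction described above, the sketch below estimates the DSNU as a per-pixel average over a stack of closed-shutter frames and subtracts it from a raw image. The array sizes, offset values, and function names are illustrative assumptions, not taken from any of the cited systems.

```python
import numpy as np

def estimate_dsnu(dark_frames):
    """Estimate the dark signal non-uniformity (DSNU) as the per-pixel
    mean over a stack of closed-shutter frames acquired at the same
    integration time and sensor temperature as the science images."""
    return np.mean(np.stack(dark_frames, axis=0), axis=0)

def dark_correct(raw, dsnu):
    """Subtract the per-pixel dark signal; negative values are clipped."""
    return np.clip(raw.astype(np.float64) - dsnu, 0.0, None)

# Illustrative data: a 4x4 "sensor" with a 100 DN dark offset plus noise
rng = np.random.default_rng(0)
dark_frames = [100.0 + rng.normal(0.0, 1.0, (4, 4)) for _ in range(50)]
dsnu = estimate_dsnu(dark_frames)
corrected = dark_correct(np.full((4, 4), 600.0), dsnu)  # roughly 500 DN everywhere
```

Averaging many dark frames suppresses the random (temporal) noise component, so the remaining per-pixel pattern approximates the fixed DSNU; in practice the stack would be acquired per integration-time/temperature setting, as discussed above.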
Practical approaches for DSNU compensation are the thermal characterization of the DSNU in the laboratory at multiple integration times, the correction based on continuous measurement of the dark current during operation utilizing so-called "black pixels" within the sensor, or taking closed-shutter images. When no dark pixels are available but temperature readings are, the DSNU can be characterized at multiple temperatures and integration times BIB004. For sensors where neither dark pixels nor temperature readings are available, the DSNU might be estimated by taking pictures with the lens blocked under the same conditions as during image capture BIB005 BIB007 BIB008. Preferably, this should be combined with an analysis of the DSNU variability during operation or with integration time BIB003. The optical path of a camera alters the incoming radiant flux (vignetting, cf. BIB001 BIB002), and different pixels transform it non-uniformly into an electric signal. To normalize these effects, either the optical pathway can be modeled or image-based techniques can be applied. For the latter, which is both simpler and more accurate, a uniform target such as an integrating sphere or a homogeneously illuminated Lambertian surface is measured, and a look-up table (LUT) or sensor model is created for every pixel. Suomalainen et al. BIB005 and Lucieer et al. BIB006 performed non-uniformity normalization for their pushbroom systems by taking a series of images of a large integrating sphere illuminated with a quartz-tungsten-halogen lamp. Kelcey and Lucieer BIB003 and Nocerino et al. [135] determined a per-pixel correction factor look-up table (LUT) using a uniform, spectrally homogeneous, Lambertian flat-field surface for the mini-MCA and the MAIA multispectral cameras, respectively. Aasen et al., Büttner and Röser, and Yang et al.
BIB009 used an integrating sphere to perform the non-uniformity normalization and determined the sensor's linear response range by measuring at different integration times. Aasen et al. BIB007 and Yang et al. BIB009 determined the vignetting correction with a Lambertian panel in the field. Khanna et al. BIB010 presented a simplified approach to the non-uniformity and photo response normalization by using computer vision techniques. Finally, while dark signal correction is relatively easy as long as a sensor provides temperature readings, the vignetting correction might be challenging in practice. Large integrating spheres are expensive, and small spheres might not provide homogeneous illumination across the sensor's FOV. On the other hand, when using a Lambertian surface, such as a radiometric reference panel, it is challenging to illuminate the whole target homogeneously.
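The image-based non-uniformity normalization described above can be sketched in the same spirit: a per-pixel correction factor LUT is derived from a flat-field image of a uniform target (e.g., an integrating sphere) and applied to dark-corrected scene images. The synthetic radial falloff below is a stand-in for real vignetting; all names and values are illustrative assumptions.

```python
import numpy as np

def flat_field_lut(flat):
    """Build a per-pixel correction look-up table (LUT) from a
    dark-corrected image of a uniform, homogeneously illuminated
    Lambertian target: pixels that report less signal, e.g. due to
    vignetting, receive a gain > 1."""
    flat = flat.astype(np.float64)
    return flat.mean() / flat

def apply_lut(image, lut):
    """Normalize photo response and optical path non-uniformity."""
    return image.astype(np.float64) * lut

# Illustrative data: radial falloff (vignetting) toward the corners
y, x = np.mgrid[-1:1:64j, -1:1:64j]
falloff = 1.0 - 0.3 * (x**2 + y**2)   # bright center, darker corners
flat = 1000.0 * falloff               # flat-field frame of a uniform target
scene = 500.0 * falloff               # scene signal with the same falloff
corrected = apply_lut(scene, flat_field_lut(flat))  # uniform after correction
```

Normalizing to the mean of the flat field yields a relative correction only; tying the LUT to absolute radiance would additionally require the radiometric calibration (RC2) step of the workflow.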
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spectral Calibration <s> Digital cameras have spectral and radiometric properties superior to analogue film cameras. Due to its radiometrically stable construction the ADS40 sensor is capable of making images for cartography as well as remote sensing applications. For the increased size of current projects, for sensor fusion, as well as for change detection purposes it is necessary to produce comparable images for different flight conditions (weather, camera system, etc.). This is not possible with classical film cameras since comparable images require the absolute radiometric calibration of the imaging system, before atmospheric correction, reflectance calibration and BRDF correction can take place. The methods for satellite laboratory calibration can be also used for digital airborne cameras. A laboratory calibration of the ADS40 is made with a calibrated integrating sphere in order to determine dark signal, lens falloff, and radiometric gain for each sensor line. For the ADS40 a linear radiometric model is sufficiently accurate. The knowledge of the system spectral response allows a more accurate calculation of the radiometric calibration coefficients and is a check for system integrity. In order to provide a regional camera service, radiometric calibration facilities are being established on several service sites of the world. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spectral Calibration <s> SpecCal software for the spectral calibration of high-resolution spectrometers is presented in this manuscript. 
The software, written in IDL 7.1, allows estimation of the channel central wavelength and the full width at half maximum of a selected spectrometer at several wavelengths across the VNIR range (350–1050 nm). This is achieved through comparison of the position and width of specific solar and terrestrial absorption features, as observed in the measured data, with those observed in simulated MODTRAN4 irradiance data. SpecCal is operated from a user-friendly graphical user interface that allows semiautomatic application of the spectral calibration algorithm at several wavelengths. The proposed software may be exploited as a useful in situ vicarious spectral calibration tool for field spectrometers operating in the VNIR range, which makes it possible to quickly analyze the spectral characteristics of the instruments and their possible variations with time. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spectral Calibration <s> A novel hyperspectral measurement system for unmanned aerial vehicles (UAVs) in the visible to near infrared (VIS/NIR) range (350–800 nm) was developed based on the Ocean Optics STS microspectrometer. The ultralight device relies on small open source electronics and weighs a ready-to-fly 216 g. The airborne spectrometer is wirelessly synchronized to a second spectrometer on the ground for simultaneous white reference collection. In this paper, the performance of the system is investigated and specific issues such as dark current correction or second order effects are addressed. Full width at half maximum was between 2.4 and 3.0 nm depending on the spectral band. The functional system was tested in flight at a 10-m altitude against a current field spectroscopy gold standard device Analytical Spectral Devices Field Spec 4 over an agricultural site.
A highly significant correlation was found in reflection comparing both measurement approaches. Furthermore, the aerial measurements have a six times smaller standard deviation than the hand held measurements. Thus, the present spectrometer opens a possibility for low-cost but high-precision field spectroscopy from UAVs. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spectral Calibration <s> One of the key advantages of a low-flying unmanned aircraft system (UAS) is its ability to acquire digital images at an ultrahigh spatial resolution of a few centimeters. Remote sensing of quantitative biochemical and biophysical characteristics of small-sized spatially fragmented vegetation canopies requires, however, not only high spatial, but also high spectral, i.e., hyperspectral, resolution. In this paper, we describe the design, development, airborne operations, calibration, processing, and interpretation of image data collected with a new hyperspectral unmanned aircraft system (HyperUAS). HyperUAS is a remotely controlled multirotor prototype carrying onboard a lightweight pushbroom spectroradiometer coupled with a dual frequency GPS and an inertial movement unit. The prototype was built to remotely acquire imaging spectroscopy data of 324 spectral bands (162 bands in a spectrally binned mode) with bandwidths between 4 and 5 nm at an ultrahigh spatial resolution of 2–5 cm. Three field airborne experiments, conducted over agricultural crops and over natural ecosystems of Antarctic mosses, proved operability of the system in standard field conditions, but also in a remote and harsh, low-temperature environment of East Antarctica.
Experimental results demonstrate that HyperUAS is capable of delivering georeferenced maps of quantitative biochemical and biophysical variables of vegetation and of actual vegetation health state at an unprecedented spatial resolution of 5 cm. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spectral Calibration <s> During the last years commercial hyperspectral imaging sensors have been miniaturized and their performance has been demonstrated on Unmanned Aerial Vehicles (UAV). However currently the commercial hyperspectral systems still require minimum payload capacity of approximately 3 kg, forcing usage of rather large UAVs. In this article we present a lightweight hyperspectral mapping system (HYMSY) for rotor-based UAVs, the novel processing chain for the system, and its potential for agricultural mapping and monitoring applications. The HYMSY consists of a custom-made pushbroom spectrometer (400–950 nm, 9 nm FWHM, 25 lines/s, 328 px/line), a photogrammetric camera, and a miniature GPS-Inertial Navigation System. The weight of HYMSY in ready-to-fly configuration is only 2.0 kg and it has been constructed mostly from off-the-shelf components. The processing chain uses a photogrammetric algorithm to produce a Digital Surface Model (DSM) and provides high accuracy orientation of the system over the DSM. The pushbroom data is georectified by projecting it onto the DSM with the support of photogrammetric orientations and the GPS-INS data. Since an up-to-date DSM is produced internally, no external data are required and the processing chain is capable of georectifying pushbroom data fully automatically. The system has been adopted for several experimental flights related to agricultural and habitat monitoring applications.
For a typical flight, an area of 2–10 ha was mapped, producing an RGB orthomosaic at 1–5 cm resolution, a DSM at 5–10 cm resolution, and a hyperspectral datacube at 10–50 cm resolution. <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spectral Calibration <s> Hyperspectral imaging has been widely applied in remote sensing scientific fields. For this study, hyperspectral imaging data covering the spectral region from 400 to 1000 nm were collected from an unmanned aerial vehicle visible/near-infrared imaging hyperspectrometer (UAV-VNIRIS). Theoretically, the spectral calibration parameters of the UAV-VNIRIS measured in the laboratory should be refined when applied to the hyperspectral data obtained from the UAV platform due to variations between the laboratory and actual flight environments. Therefore, accurate spectral calibration of the UAV-VNIRIS is essential to further applications of the hyperspectral data. Shifts in both the spectral center wavelength position and the full-width at half-maximum (FWHM) were retrieved using two different methods (Methods I and II) based on spectrum matching of atmospheric absorption features at oxygen bands near 760 nm and water vapor bands near 820 and 940 nm. Comparison of the spectral calibration results of these two methods over the calibration targets showed that the derived center wavelength and FWHM shifts are similar. For the UAV-VNIRIS observed data used here, the shifts in center wavelength derived from both Methods I and II over the three absorption bands are less than 0.13 nm, and less than 0.22 nm in terms of FWHM. The findings of this paper revealed: 1) the UAV-VNIRIS payload on the UAV platform performed well in terms of spectral calibration; and 2) the applied methods are effective for on-orbit spectral calibration of the hyperspectrometer.
<s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spectral Calibration <s> Hyperspectral imaging (HSI) is an exciting and rapidly expanding area of instruments and technology in passive remote sensing. Due to quickly changing applications, the instruments are evolving to suit new uses and there is a need for consistent definition, testing, characterization and calibration. This paper seeks to outline a broad prescription and recommendations for basic specification, testing and characterization that must be done on Visible Near Infra-Red grating-based sensors in order to provide calibrated absolute output and performance or at least relative performance that will suit the user’s task. The primary goal of this paper is to provide awareness of the issues with performance of this technology and make recommendations towards standards and protocols that could be used for further efforts in emerging procedures for national laboratory and standards groups. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spectral Calibration <s> The Calibration Home Base (CHB) at the Remote Sensing Technology Institute of the German Aerospace Center (DLR-IMF) is an optical laboratory designed for the calibration of imaging spectrometers for the VNIR/SWIR wavelength range. Radiometric, spectral and geometric characterization is realized in the CHB in a precise and highly automated fashion. This allows performing a wide range of time consuming measurements in an efficient way. The implementation of ISO 9001 standards ensures a traceable quality of results. DLR-IMF will support the calibration and characterization campaign of the future German spaceborne hyperspectral imager EnMAP. 
In the context of this activity, a procedure for the correction of imaging artifacts, such as due to stray light, is currently being developed by DLR-IMF. Goal is the correction of in-band stray light as well as ghost images down to a level of a few digital numbers in the whole wavelength range 420-2450 nm. DLR-IMF owns a Norsk Elektro Optikks HySpex airborne imaging spectrometer system that has been thoroughly characterized. This system will be used to test stray light calibration procedures for EnMAP. Hyperspectral snapshot sensors offer the possibility to simultaneously acquire hyperspectral data in two dimensions. Recently, these rather new spectrometers have arisen much interest in the remote sensing community. Different designs are currently used for local area observation such as by use of small unmanned aerial vehicles (sUAV). In this context the CHB's measurement capabilities are currently extended such that a standard measurement procedure for these new sensors will be implemented. <s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spectral Calibration <s> Hyperspectral remote sensing is used in precision agriculture to remotely and quickly acquire crop phenotype information. This paper describes the generation of a digital orthophoto map (DOM) and radiometric calibration for images taken by a miniaturized snapshot hyperspectral camera mounted on a lightweight unmanned aerial vehicle (UAV). The snapshot camera is a relatively new type of hyperspectral sensor that can acquire an image cube with one spectral and two spatial dimensions at one exposure. The images acquired by the hyperspectral snapshot camera need to be mosaicked together to produce a DOM and radiometrically calibrated before analysis. However, the spatial resolution of hyperspectral cubes is too low to mosaic the images together. 
Furthermore, there are no systematic radiometric calibration methods or procedures for snapshot hyperspectral images acquired from low-altitude carrier platforms. In this study, we obtained hyperspectral imagery using a snapshot hyperspectral sensor mounted on a UAV. We quantitatively evaluated the radiometric response linearity (RRL) and radiometric response variation (RRV) and proposed a method to correct the RRV effect. We then introduced a method to interpolate position and orientation system (POS) information and generate a DOM with low spatial resolution and a digital elevation model (DEM) using a 3D mesh model built from panchromatic images with high spatial resolution. The relative horizontal geometric precision of the DOM was validated by comparison with a DOM generated from a digital RGB camera. A surface crop model (CSM) was produced from the DEM, and crop height for 48 sampling plots was extracted and compared with the corresponding field-measured crop height to verify the relative precision of the DEM. Finally, we applied two absolute radiometric calibration methods to the generated DOM and verified their accuracy via comparison with spectra measured with an ASD Field Spec Pro spectrometer (Analytical Spectral Devices, Boulder, CO, USA). The DOM had high relative horizontal accuracy, and compared with the digital camera-derived DOM, spatial differences were below 0.05 m (RMSE = 0.035). The determination coefficient for a regression between DEM-derived and field-measured crop height was 0.680. The radiometric precision was 5% for bands between 500 and 945 nm, and the reflectance curve in the infrared spectral region did not decrease as in previous research. The pixel and data sizes for the DOM corresponding to a field area of approximately 85 m × 34 m were small (0.67 m and approximately 13.1 megabytes, respectively), which is convenient for data transmission, preprocessing and analysis. 
The proposed method for radiometric calibration and DOM generation from hyperspectral cubes can be used to yield hyperspectral imagery products for various applications, particularly precision agriculture. <s> BIB009 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Spectral Calibration <s> This study describes the development of a small hyperspectral Unmanned Aircraft System (HyUAS) for measuring Visible and Near-Infrared (VNIR) surface reflectance and sun-induced fluorescence, co-registered with high-resolution RGB imagery, to support field spectroscopy surveys and calibration and validation of remote sensing products. The system, namely HyUAS, is based on a multirotor platform equipped with a cost-effective payload composed of a VNIR non-imaging spectrometer and an RGB camera. The spectrometer is connected to a custom entrance optics receptor developed to tune the instrument field-of-view and to obtain systematic measurements of instrument dark-current. The geometric, radiometric and spectral characteristics of the instruments were characterized and calibrated through dedicated laboratory tests. The overall accuracy of HyUAS data was evaluated during a flight campaign in which surface reflectance was compared with ground-based reference measurements. HyUAS data were used to estimate spectral indices and far-red fluorescence for different land covers. RGB images were processed as a high-resolution 3D surface model using structure from motion algorithms. The spectral measurements were accurately geo-located and projected on the digital surface model. 
The overall results show that: (i) rigorous calibration enabled radiance and reflectance spectra from HyUAS with RRMSE < 10% compared with ground measurements; (ii) the low-flying UAS setup allows retrieving fluorescence in absolute units; (iii) the accurate geo-location of spectra on the digital surface model greatly improves the overall interpretation of reflectance and fluorescence data. In general, the HyUAS was demonstrated to be a reliable system for supporting high-resolution field spectroscopy surveys allowing one to collect systematic measurements at very detailed spatial resolution with a valuable potential for vegetation monitoring studies. Furthermore, it can be considered a useful tool for collecting spatially-distributed observations of reflectance and fluorescence that can be further used for calibration and validation activities of airborne and satellite optical images in the context of the upcoming FLEX mission and the VNIR spectral bands of optical Earth observation missions (i.e., Landsat, Sentinel-2 and Sentinel-3). <s> BIB010
The spectral response gives the system's radiometric response as a function of wavelength for each band and spatial pixel BIB007 BIB001 . Monochromators or line emission lamps are usually used for spectral calibration. Either the complete measured spectral response or some functional form is used in calculations; typically, a Gaussian defined by the central wavelength and the full width at half maximum (FWHM) is used as the model spectral response. In addition, the smile effect, which causes a shift of the central wavelength as a function of the position of the pixel in the focal plane, and the keystone, which causes a bending of spatial lines across the spectral axis BIB007 , need to be corrected. Thus, the spectral response must be measured in the spectral as well as in the spatial detector dimension. Smile and keystone characterization is necessary in particular for hyperspectral sensors BIB007 . Information on the spectral response functions of lightweight spectral sensors is still rather limited. To date, mostly monochromators BIB003 BIB008 BIB009 or HgAr, Xe, and Ne gas emission lamps BIB004 BIB005 have been used for spectral calibration. Garzonio et al. BIB010 characterized their Ocean Optics USB 4000 point spectrometer for measuring fluorescence, which requires rigorous calibration. They also used emission lamps, integrated vibration tests that simulated real flight situations, and found that the system had good spectral stability. A different approach uses Fraunhofer lines and atmospheric absorption lines for spectral calibration BIB006 , provided that the spectral resolution of the sensor is sufficient. Busetto et al. BIB002 published software that estimates the spectral shift of a given data set relative to moderate resolution atmospheric transmission (MODTRAN) simulations .
This approach is particularly important, since it can be used during the flight campaign, where the spectral performance can differ from that observed in the laboratory.
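The Gaussian model spectral response described above can be illustrated with a short sketch. This is a minimal example with hypothetical band parameters (670 nm centre, 6.4 nm FWHM), not the characterization procedure of any specific sensor discussed here:

```python
import numpy as np

def gaussian_srf(wl, center, fwhm):
    """Model spectral response: a Gaussian defined by its central
    wavelength and full width at half maximum (FWHM), both in nm."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    srf = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return srf / srf.sum()  # normalize the weights to sum to 1

def band_average(wl, spectrum, center, fwhm):
    """Convolve a finely sampled spectrum to one sensor band."""
    return float(np.sum(spectrum * gaussian_srf(wl, center, fwhm)))

wl = np.arange(400.0, 1000.0, 0.5)          # nm, hypothetical sampling grid
flat = np.ones_like(wl)                     # a spectrally flat signal
value = band_average(wl, flat, 670.0, 6.4)  # stays ~1 for a flat input
```

In the same framework, a smile correction amounts to shifting `center` as a function of the across-track pixel position before the convolution.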
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Absolute Radiometric Calibration <s> The Nature of Remote Sensing: Introduction. Remote Sensing. Information Extraction from Remote-Sensing Images. Spectral Factors in Remote Sensing. Spectral Signatures. Remote-Sensing Systems. Optical Sensors. Temporal Characteristics. Image Display Systems. Data Systems. Summary. Exercises. References. Optical Radiation Models: Introduction. Visible to Short Wave Infrared Region. Solar Radiation. Radiation Components. Surface-Reflected, Unscattered Component. Surface-Reflected, Atmosphere-Scattered Component. Path-Scattered Component. Total At-Sensor Solar Radiance. Image Examples in the Solar Region. Terrain Shading. Shadowing. Atmospheric Correction. Midwave to Thermal Infrared Region. Thermal Radiation. Radiation Components. Surface-Emitted Component. Surface-Reflected, Atmosphere-Emitted Component. Path-Emitted Component. Total At-Sensor Emitted Radiance. Total Solar and Thermal Upwelling Radiance. Image Examples in the Thermal Region. Summary. Exercises. References. Sensor Models: Introduction. Overall Sensor Model. Resolution. The Instrument Response. Spatial Resolution. Spectral Resolution. Spectral Response. Spatial Response. Optical PSFopt. Image Motion PSFIM. Detector PSFdet. Electronics PSFel. Net PSFnet. Comparison of Sensor PSFs. PSF Summary for TM. Imaging System Simulation. Amplification. Sampling and Quantization. Simplified Sensor Model. Geometric Distortion. Orbit Models. Platform Attitude Models. Scanner Models. Earth Model. Line and Whiskbroom Scan Geometry. Pushbroom Scan Geometry. Topographic Distortion. Summary. Exercises. References. Data Models: Introduction. A Word on Notation. Univariate Image Statistics. Histogram. Normal Distribution. Cumulative Histogram. Statistical Parameters. Multivariate Image Statistics.
Reduction to Univariate Statistics. Noise Models. Statistical Measures of Image Quality. Contrast. Modulation. Signal-to-Noise Ratio (SNR). Noise Equivalent Signal. Spatial Statistics. Visualization of Spatial Covariance. Covariance with Semivariogram. Separability and Anisotropy. Power Spectral Density. Co-occurrence Matrix. Fractal Geometry. Topographic and Sensor Effects. Topography and Spectral Statistics. Sensor Characteristics and Spectral Statistics. Sensor Characteristics and Spectral Scattergrams. Summary. Exercises. References. Spectral Transforms: Introduction. Feature Space. Multispectral Ratios. Vegetation Indexes. Image Examples. Principal Components. Standardized Principal Components (SPC) Transform. Maximum Noise Fraction (MNF) Transform. Tasseled Cap Transformation. Contrast Enhancement. Transformations Based on Global Statistics. Linear Transformations. Nonlinear Transformations. Normalization Stretch. Reference Stretch. Thresholding. Adaptive Transformation. Color Image Contrast Enhancement. Min-max Stretch. Normalization Stretch. Decorrelation Stretch. Color Space Transformations. Summary. Exercises. References. Spatial Transforms: Introduction. An Image Model for Spatial Filtering. Convolution Filters. Low Pass and High Pass Filters. High Boost Filters. Directional Filters. The Border Region. Characterization of Filtered Images. The Box Filter Algorithm. Cascaded Linear Filters. Statistical Filters. Gradient Filters. Fourier Synthesis. Discrete Fourier Transforms in 2-D. The Fourier Components. Filtering with the Fourier Transform. Transfer Functions. The Power Spectrum. Scale Space Transforms. Image Resolution Pyramids. Zero-Crossing Filters. Laplacian-of-Gaussian (LoG) Filters. Difference-of-Gaussians (DoG) Filters. Wavelet Transforms. Summary. Exercises. References. Correction and Calibration: Introduction. Noise Correction. Global Noise. Sigma Filter. Nagao-Matsuyama Filter. Local Noise. Periodic Noise. Destriping.
Global, Linear Detector Matching. Nonlinear Detector Matching. Statistical Modification to Linear and Nonlinear Detector Matching. Spatial Filtering Approaches. Radiometric Calibration. Sensor Calibration. Atmospheric Correction. Solar and Topographic Correction. Image Examples. Calibration and Normalization of Hyperspectral Imagery. AVIRIS Examples. Distortion Correction. Polynomial Distortion Models. Ground Control Points (GCPs). Coordinate Transformation. Map Projections. Resampling. Summary. Exercises. References. Registration and Image Fusion: Introduction. What is Registration? Automated GCP Location. Area Correlation. Other Spatial Features. Orthorectification. Low-Resolution DEM. High-Resolution DEM. Hierarchical Warp Stereo. Multi-Image Fusion. Spatial Domain Fusion. High Frequency Modulation. Spectral Domain Fusion. Fusion Image Examples. Summary. Exercises. References. Thematic Classification: Introduction. The Importance of Image Scale. The Notion of Similarity. Hard Versus Soft Classification. Training the Classifier. Supervised Training. Unsupervised Training. K-Means Clustering Algorithm. Clustering Examples. Hybrid Supervised/Unsupervised Training. Non-Parametric Classification Algorithms. Level-Slice. Nearest-Mean. Artificial Neural Networks (ANNs). Back-Propagation Algorithm. Nonparametric Classification Examples. Parametric Classification Algorithms. Estimation of Model-Parameters. Discriminant Functions. The Normal Distribution Model. Relation to the Nearest-Mean Classifier. Supervised Classification Examples and Comparison to Nonparametric Classifiers. Segmentation. Region Growing. Region Labeling. Sub-Pixel Classification. The Linear Mixing Model. Unmixing Model. Hyperspectral Image Analysis. Visualization of the Image Cube. Feature Extraction. Image Residuals. Pre-Classification Processing and Feature Extraction. Classification Algorithms. Exercises. Error Analysis. Multitemporal Images. Summary. References. Index.
<s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Absolute Radiometric Calibration <s> A novel hyperspectral measurement system for unmanned aerial vehicles (UAVs) in the visible to near infrared (VIS/NIR) range (350-800 nm) was developed based on the Ocean Optics STS microspectrometer. The ultralight device relies on small open source electronics and weighs a ready-to-fly 216 g. The airborne spectrometer is wirelessly synchronized to a second spectrometer on the ground for simultaneous white reference collection. In this paper, the performance of the system is investigated and specific issues such as dark current correction or second order effects are addressed. Full width at half maximum was between 2.4 and 3.0 nm depending on the spectral band. The functional system was tested in flight at a 10-m altitude against a current field spectroscopy gold standard device Analytical Spectral Devices Field Spec 4 over an agricultural site. A highly significant correlation was found in reflection comparing both measurement approaches. Furthermore, the aerial measurements have a six times smaller standard deviation than the hand held measurements. Thus, the present spectrometer opens a possibility for low-cost but high-precision field spectroscopy from UAVs. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Absolute Radiometric Calibration <s> The airborne prism experiment (APEX) is an imaging spectrometer developed by a joint Swiss–Belgian consortium composed of institutes (University of Zurich, Flemish Institute for Technological Research) and industries (RUAG, OIP, Netcetera), supported by the European Space Agency's PRODEX programme. 
APEX is designed to support the development of future space-borne Earth observation systems by simulating, calibrating or validating existing or planned optical satellite missions. Therefore, periodic extensive calibration of APEX is one major objective within the project. APEX calibration under laboratory conditions is done at its dedicated calibration and characterization facility at the German Aerospace Center (DLR) in Oberpfaffenhofen, Germany. While environmental influences under laboratory conditions are reduced to a minimum, the effects of atmospheric absorption and the properties of the underlying calibration infrastructure may still influence the measurements and subsequently the accuracy of the sensor spectral response estimations. It is demonstrated that even a lightpath of ∼2 m through the atmosphere or the monochromator grating can have significant impact on the spectral response estimation of the sensor. A normalization approach described in this letter is able to compensate for these effects. The correction algorithm is exemplarily demonstrated on actual measurements for the short wavelength-IR range channel. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Absolute Radiometric Calibration <s> Hyperspectral imaging (HSI) is an exciting and rapidly expanding area of instruments and technology in passive remote sensing. Due to quickly changing applications, the instruments are evolving to suit new uses and there is a need for consistent definition, testing, characterization and calibration. This paper seeks to outline a broad prescription and recommendations for basic specification, testing and characterization that must be done on Visible Near Infra-Red grating-based sensors in order to provide calibrated absolute output and performance or at least relative performance that will suit the user’s task. 
The primary goal of this paper is to provide awareness of the issues with performance of this technology and make recommendations towards standards and protocols that could be used for further efforts in emerging procedures for national laboratory and standards groups. <s> BIB004
Absolute radiometric calibration determines the coefficients for the transformation between digital numbers (DN) and radiance in physical units [W m −2 sr −1 nm −1 ] for each spectral band. Typically, a linear model with gain and offset parameters is appropriate BIB001 . Two procedures have been published to accomplish this. The first approach uses a radiometrically calibrated integrating sphere. Büttner and Röser used a sphere equipped with an optometer for measuring the total radiance, regularly calibrated against the German national standard (PTB), and stated that the calibration was valid for the spectral range from 380 nm to 1100 nm with a relative uncertainty of 5%. The second approach is to cross-calibrate a new device against an already radiometrically calibrated device. Burkart et al. BIB002 cross-calibrated their Ocean Optics STS point spectrometer with a radiometrically calibrated ASD FieldSpec Pro 4 by aligning the FOVs of both devices such that they pointed at almost the same area on a white reference panel. Calibration coefficients were obtained by collecting several spectra at different solar zenith angles, covering different light levels, and fitting a linear relationship between ASD radiance values and STS digital counts, the latter normalized for the different instrument integration times. A similar approach was used by Del Pozo for the 2D imager Tetracam mini-MCA. Both procedures require that the spectral response functions of the sensors are known, since the spectral bands of the reference and the sensor need to be convolved to a common level to derive the band-specific calibration coefficients. Absolute calibration is often a challenging process, because the source must be traceable to radiance standards. The next section shows that absolute radiometric calibration can be omitted in cases where only reflectance is needed, and radiance is not.
In addition, many factors influence the system's radiometric response, such as the shutter, stray light effects, and the impacts of temperature and pressure BIB004 BIB003 . For applications that require very precise data, such as solar-induced fluorescence estimation, these factors also need to be considered.
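The linear gain/offset model and the cross-calibration against a reference spectrometer can be sketched as follows. This is a minimal illustration with synthetic numbers, not the exact procedure of Burkart et al.; the dark-signal subtraction and integration-time normalization follow the description above:

```python
import numpy as np

def fit_radiometric_calibration(dn, t_int, dark, ref_radiance):
    """Fit the per-band linear model L = gain * DN_rate + offset,
    where DN_rate is the dark-corrected signal per unit integration
    time and ref_radiance comes from a calibrated reference sensor."""
    dn_rate = (dn - dark) / t_int  # counts per second
    A = np.vstack([dn_rate, np.ones_like(dn_rate)]).T
    gain, offset = np.linalg.lstsq(A, ref_radiance, rcond=None)[0]
    return gain, offset

# Synthetic measurements at several light levels (illustrative numbers).
dn = np.array([1200.0, 2300.0, 3400.0, 4500.0])  # raw digital numbers
t_int = np.array([0.01, 0.01, 0.01, 0.01])       # integration times, s
dark = 200.0                                     # dark signal, counts
ref = np.array([0.05, 0.105, 0.16, 0.215])       # W m-2 sr-1 nm-1

gain, offset = fit_radiometric_calibration(dn, t_int, dark, ref)
radiance = gain * (dn - dark) / t_int + offset   # apply the calibration
```

In practice, this fit is repeated for every spectral band after convolving the reference spectra to the band's spectral response.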
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Reflectance Generation Based on Incident Irradiance <s> We present a methodology for correcting the global UV spectral measurements of a Brewer MKIII spectroradiometer for the error introduced by the deviation of the angular response of the instrument from the ideal response. This methodology is applicable also to other Brewer spectroradiometers that are currently in operation. The various stages of the methodology are described in detail, together with the uncertainties involved in each stage. Finally, global spectral UV measurements with and without the application of the correction are compared with collocated measurements of another spectroradiometer and with model calculations, demonstrating the efficiency of the method. Depending on wavelength and on the aerosol loading, the cosine correction factors range from 2% to 7%. The uncertainties involved in the calculation of these correction factors were found to be relatively small, ranging from ∼0.2% to ∼2%. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Reflectance Generation Based on Incident Irradiance <s> Abstract. The libRadtran software package is a suite of tools for radiative transfer calculations in the Earth's atmosphere. Its main tool is the uvspec program. It may be used to compute radiances, irradiances and actinic fluxes in the solar and terrestrial part of the spectrum. The design of uvspec allows simple problems to be easily solved using defaults and included data, hence making it suitable for educational purposes. At the same time the flexibility in how and what input may be specified makes it a powerful and versatile tool for research tasks.
The uvspec tool and additional tools included with libRadtran are described and realistic examples of their use are given. The libRadtran software package is available from http://www.libradtran.org. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Reflectance Generation Based on Incident Irradiance <s> An experiment to determine the most accurate and repeatable method for generating instrument inter‐calibration functions (ICFs) is described, based upon data collected with a dual‐beam GER1500 spectroradiometer system. The quality of reflectance data collected using a dual‐beam spectroradiometer system is reliant upon accurate inter‐calibration of the sensor pairs to take into account differences in their radiant sensitivity and spectral characteristics. A cos‐conical field‐based method for inter‐calibrating dual‐beam spectroradiometers was tested alongside laboratory inter‐calibration procedures. The field‐based method produced the most accurate results when a field‐derived ICF collected close in time was used to correct the spectral scan. A regression model to predict the ICF at a range of wavelengths was tested, using inputs of solar zenith angle, cosine of solar zenith angle and broadband diffuse‐to‐global irradiance ratios. The linear multiple regression model described up to 78% of the variability i... <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Reflectance Generation Based on Incident Irradiance <s> The design, operation, and properties of the Finnish Geodetic Institute Field Goniospectrometer (FIGIFIGO) are presented. FIGIFIGO is a portable instrument for the measurement of surface Bidirectional Reflectance Factor (BRF) for samples with diameters of 10 – 50 cm. 
A set of polarising optics enable the measurement of linearly polarised BRF over the full solar spectrum (350 – 2,500 nm). FIGIFIGO is designed mainly for field operation using sunlight, but operation in a laboratory environment is also possible. The acquired BRF have an accuracy of 1 – 5% depending on wavelength, sample properties, and measurement conditions. The angles are registered at accuracies better than 2°. During 2004 – 2008, FIGIFIGO has been used in the measurement of over 150 samples, all around northern Europe. The samples concentrate mostly on boreal forest understorey, snow, urban surfaces, and reflectance calibration surfaces. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Reflectance Generation Based on Incident Irradiance <s> article i nfo The remote detection of water stress in a citrus orchard was investigated using leaf-level measurements of chlorophyll fluorescence and Photochemical Reflectance Index (PRI) data, seasonal time-series of crown tem- perature and PRI, and high-resolution airborne imagery. The work was conducted in an orchard where a reg- ulated deficit irrigation (RDI) experiment generated a gradient in water stress levels. Stomatal conductance (Gs) and water potential (Ψ) were measured over the season on each treatment block. The airborne data consisted on thermal and hyperspectral imagery acquired at the time of maximum stress differences among treatments, prior to the re-watering phase, using a miniaturized thermal camera and a micro-hyperspectral imager on board an unmanned aerial vehicle (UAV). 
The hyperspectral imagery was acquired at 40 cm resolution and 260 spectral bands in the 400-885 nm spectral range at 6.4 nm full width at half maximum (FWHM) spectral resolution and 1.85 nm sampling interval, enabling the identification of pure crowns for extracting radiance and reflectance hyperspectral spectra from each tree. The FluorMOD model was used to investigate the retrieval of chlorophyll fluorescence by applying the Fraunhofer Line Depth (FLD) principle using three spectral bands (FLD3), which demonstrated that fluorescence retrieval was feasible with the configuration of the UAV micro-hyperspectral instrument flown over the orchard. Results demonstrated the link between seasonal PRI and crown temperature acquired from instrumented trees and field measurements of stomatal conductance and water potential. The sensitivity of PRI and Tc-Ta time-series to water stress levels demonstrated a time delay of PRI vs Tc-Ta during the recovery phase after re-watering started. At the time of the maximum stress difference among treatment blocks, the airborne imagery acquired from the UAV platform demonstrated that the crown temperature yielded the best coefficient of determination for Gs (r 2 <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Reflectance Generation Based on Incident Irradiance <s> A novel hyperspectral measurement system for unmanned aerial vehicles (UAVs) in the visible to near infrared (VIS/NIR) range (350-800 nm) was developed based on the Ocean Optics STS microspectrometer. The ultralight device relies on small open source electronics and weighs a ready-to-fly 216 g. The airborne spectrometer is wirelessly synchronized to a second spectrometer on the ground for simultaneous white reference collection.
In this paper, the performance of the system is investigated and specific issues such as dark current correction or second order effects are addressed. Full width at half maximum was between 2.4 and 3.0 nm depending on the spectral band. The functional system was tested in flight at a 10-m altitude against a current field spectroscopy gold standard device Analytical Spectral Devices Field Spec 4 over an agricultural site. A highly significant correlation was found in reflection comparing both measurement approaches. Furthermore, the aerial measurements have a six times smaller standard deviation than the hand held measurements. Thus, the present spectrometer opens a possibility for low-cost but high-precision field spectroscopy from UAVs. <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Reflectance Generation Based on Incident Irradiance <s> Field spectroradiometers integrated in automated systems at Eddy Covariance (EC) sites are a powerful tool for monitoring and upscaling vegetation physiology and carbon and water fluxes. However, exposure to varying environmental conditions can affect the functioning of these sensors, especially if these cannot be completely insulated and stabilized. This can cause inaccuracy in the spectral measurements and hinder the comparison between data acquired at different sites. This paper describes the characterization of key sensor models in a double beam spectroradiometer necessary to calculate the Hemispherical-Conical Reflectance Factor (HCRF). Dark current, temperature dependence, non-linearity, spectral calibration and cosine receptor directional responses are modeled in the laboratory as a function of temperature, instrument settings, radiation measured or illumination angle. 
These models are used to correct the spectral measurements acquired continuously by the same instrument integrated outdoors in an automated system (AMSPEC-MED). Results suggest that part of the instrumental issues cancel out mutually or can be controlled by the instrument configuration, so that changes induced in HCRF reached about 0.05 at maximum. However, these corrections are necessary to ensure the inter-comparison of data with other ground or remote sensors and to discriminate instrumentally induced changes in HCRF from those related to vegetation physiology and directional effects. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Reflectance Generation Based on Incident Irradiance <s> Abstract. We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.7, 8.1, and 13.5% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
<s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Reflectance Generation Based on Incident Irradiance <s> Abstract. Albedo is a fundamental parameter in earth sciences, and many analyses utilize the Moderate Resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF)/albedo (MCD43) algorithms. While derivative albedo products have been evaluated over Greenland, we present a novel, direct comparison with nadir surface reflectance collected from an unmanned aerial system (UAS). The UAS was flown from Summit, Greenland, on 210 km transects coincident with the MODIS sensor overpass on board the Aqua and Terra satellites on 5 and 6 August 2010. Clear-sky acquisitions were available from the overpasses within 2 h of the UAS flights. The UAS was equipped with upward- and downward-looking spectrometers (300–920 nm) with a spectral resolution of 10 nm, allowing for direct integration into the MODIS bands 1, 3, and 4. The data provide a unique opportunity to directly compare UAS nadir reflectance with the MODIS nadir BRDF-adjusted surface reflectance (NBAR) products. The data show UAS measurements are slightly higher than the MODIS NBARs for all bands but agree within their stated uncertainties. Differences in variability are observed as expected due to different footprints of the platforms. The UAS data demonstrate potentially large sub-pixel variability of MODIS reflectance products and the potential to explore this variability using the UAS as a platform. It is also found that, even at the low elevations flown typically by a UAS, reflectance measurements may be influenced by haze if present at and/or below the flight altitude of the UAS. This impact could explain some differences between data from the two platforms and should be considered in any use of airborne platforms. 
<s> BIB009 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Reflectance Generation Based on Incident Irradiance <s> Small unmanned aerial vehicle (UAV) based remote sensing is a rapidly evolving technology. Novel sensors and methods are entering the market, offering completely new possibilities to carry out remote sensing tasks. Three-dimensional (3D) hyperspectral remote sensing is a novel and powerful technology that has recently become available to small UAVs. This study investigated the performance of UAV-based photogrammetry and hyperspectral imaging in individual tree detection and tree species classification in boreal forests. Eleven test sites with 4151 reference trees representing various tree species and developmental stages were collected in June 2014 using a UAV remote sensing system equipped with a frame format hyperspectral camera and an RGB camera in highly variable weather conditions. Dense point clouds were measured photogrammetrically by automatic image matching using high resolution RGB images with a 5 cm point interval. Spectral features were obtained from the hyperspectral image blocks, the large radiometric variation of which was compensated for by using a novel approach based on radiometric block adjustment with the support of in-flight irradiance observations. Spectral and 3D point cloud features were used in the classification experiment with various classifiers. The best results were obtained with Random Forest and Multilayer Perceptron (MLP) which both gave 95% overall accuracies and an F-score of 0.93. Accuracy of individual tree identification from the photogrammetric point clouds varied between 40% and 95%, depending on the characteristics of the area. Challenges in reference measurements might also have reduced these numbers. 
Results were promising, indicating that hyperspectral 3D remote sensing was operational from a UAV platform even in very difficult conditions. These novel methods are expected to provide a powerful tool for automating various environmental close-range remote sensing tasks in the very near future. <s> BIB010
The spectrometer radiances can be transformed into reflectance with the aid of incident irradiance measurements. The incident irradiance can either be estimated using atmospheric radiative transfer models (ARTMs) or measured with an irradiance spectrometer. ARTMs simulate the incoming irradiance from the sun to the top of the canopy as well as the influence of the atmosphere on the signal on its way from the canopy to the sensor. Input parameters include the time, date, location, temperature, humidity, and the aerosol optical depth measured with, e.g., a sun photometer. ARTMs can thus generate the irradiance needed to calculate reflectance from the radiance received by the sensor. One example is Zarco-Tejada et al. BIB005 , who used the SMARTS model, parameterized with the aerosol optical depth at 550 nm measured with a Micro-Tops II sun photometer (Solar LIGHT Co., Philadelphia, PA, USA) in the study areas at the time of the flight, for hyperspectral pushbroom imagery acquired at 575 m above ground level. The drawback of ARTMs for reflectance calculation is the need for sufficient parameterization of the atmosphere. This is particularly challenging for flights over larger areas, where the atmosphere might be heterogeneous, and under illumination varying due to clouds. Because of these challenges, technologies that measure the incident irradiance with a secondary spectrometer are of great interest in UAV spectrometry. Possible methods include stationary irradiance or radiance recordings (e.g., of a reference panel or with a cosine receptor on the ground) or a mobile irradiance sensor with cosine receptor optics mounted on the UAV. Burkart et al. BIB006 used two cross-calibrated Ocean Optics STS-VIS point spectrometers: one measured the radiance reflected from the object on board a multi-rotor UAV, while the second measured a Spectralon panel on the ground.
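Once the incident irradiance is available (modeled or measured), the reflectance-factor computation itself is a simple per-band ratio. A minimal sketch with hypothetical band values, assuming a Lambertian target:

```python
import numpy as np

def radiance_to_reflectance(radiance, irradiance):
    """Hemispherical-directional reflectance factor (HDRF) of a
    Lambertian target: R = pi * L / E, where L is the at-sensor
    radiance and E the incident irradiance on the same band grid."""
    return np.pi * np.asarray(radiance, dtype=float) / np.asarray(irradiance, dtype=float)

# Hypothetical values for three bands
L = np.array([30.0, 45.0, 60.0])        # at-sensor radiance, W m^-2 sr^-1 nm^-1
E = np.array([900.0, 1100.0, 1200.0])   # ARTM-modelled incident irradiance, W m^-2 nm^-1
R = radiance_to_reflectance(L, E)       # unitless reflectance factors
```

In practice, both quantities must be resampled to a common spectral grid and the atmospheric path between canopy and sensor must also be accounted for, as described above.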
This method is also referred to as a "continuous panel method" and is similar to classical dual ground spectrometer setups, which provide reflectance factors by taking consecutive measurements of the target and a Lambertian reference panel (e.g., BIB004 ). Burkhart et al. BIB009 used a dual-spectrometer approach with two TriOS RAMSES point spectrometers. They calculated the relative sensitivity of the radiance and irradiance sensors and fitted a third-order polynomial to this ratio. They then transformed the radiance measurements of the downward-facing device to reflectance using the simultaneous irradiance measurements of an upward-facing spectrometer equipped with a cosine receptor on board the UAV. Lately, consumer-grade multispectral sensors such as the Parrot Sequoia and Maia have also been shipped with an irradiance sensor [51] . An upward-looking sensor equipped with a cosine receptor foreoptic is required to measure the hemispherical irradiance. In reality, the angular response of a cosine receptor deviates from a cosine shape (cf. Sections 3.3.7 and 4 in BIB007 ). The cosine error correction depends on the atmospheric state at the time of the measurements, and the deviations typically become larger as the incidence angle increases, meaning that the measured irradiance is underestimated compared with an instrument with a perfect angular response. This is particularly important at low sun elevations (e.g., at high latitudes, and in the morning and evening). The underestimation may be corrected for, provided that both the sky conditions during the measurements and the angular response of the instrument are known. Bais et al. BIB001 reported a deviation from a perfect cosine response of less than 2%. A further requirement is that the upward-looking detector must be properly leveled to allow accurate measurements of the downwelling irradiance.
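The cosine error correction can be sketched as follows, assuming a simplified two-component sky (direct beam plus isotropic diffuse radiation) and hypothetical response values. This illustrates the principle only; it is not the correction procedure of the cited studies:

```python
def cosine_corrected_irradiance(e_meas, direct_fraction, r_direct, r_diffuse):
    """Correct a measured irradiance for an imperfect cosine response.

    r_direct  -- instrument response to the direct beam relative to an
                 ideal receptor, i.e., f(theta0) / cos(theta0)
    r_diffuse -- sky-integrated relative response for the (assumed
                 isotropic) diffuse component
    A perfect receptor has r_direct = r_diffuse = 1.0; values below 1
    mean the sensor underestimates the irradiance.
    """
    effective_response = direct_fraction * r_direct + (1.0 - direct_fraction) * r_diffuse
    return e_meas / effective_response

# Hypothetical clear-sky case: 70% direct radiation; the receptor reads
# 5% low for the direct beam and 3% low for the diffuse sky.
e_true = cosine_corrected_irradiance(e_meas=800.0, direct_fraction=0.7,
                                     r_direct=0.95, r_diffuse=0.97)
```

This makes explicit why the correction needs both the sky state (the direct/diffuse partitioning) and the characterized angular response of the instrument.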
LibRadtran radiative transfer modeling BIB002 for a solar zenith angle of 55.66° and a sensor tilted by 10° showed differences of approximately 20% depending on whether the sensor was facing toward or away from the sun BIB009 . When flying under cloud cover, the influence is weaker BIB008 . Examples of the impact of illumination changes on broadband irradiance measurements when using an irradiance sensor fixed to the UAV frame during eight flights carried out under sunny, partially cloudy, and cloudy conditions were presented by Nevalainen et al. BIB010 . In sunny conditions, tilting the sensor toward and away from the sun caused strong variations in the irradiance recordings. Both the stationary and the mobile approach allow the illumination changes to be determined during data capture. However, the mobile solution allows the irradiance changes to be tracked at the place of measurement, which is beneficial in case of non-homogeneous sky conditions (although, due to the oblique illumination from the sun, the ground could still receive a different amount of energy than the reference device on the UAV). Ideally, the second spectrometer (and, where applicable, the radiometric reference panel) is radiometrically and spectrally cross-calibrated with the primary spectrometer. Often, this is done by simultaneously cross-calibrating both devices against a third, radiometrically calibrated device BIB006 BIB009 BIB003 .
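The sensitivity of a tilted cosine receptor to the direct beam can be illustrated with simple geometry. This is a sketch only: the roughly 20% figure cited above comes from full radiative-transfer runs that also include the diffuse sky component, which moderates the purely geometric effect shown here:

```python
import numpy as np

def direct_beam_factor(sza_deg, tilt_deg):
    """Cosine of the angle between the sun and the sensor normal for a
    sensor tilted in the solar principal plane (positive tilt = toward
    the sun)."""
    return np.cos(np.radians(sza_deg - tilt_deg))

sza = 55.66                               # solar zenith angle from the text
toward = direct_beam_factor(sza, 10.0)    # sensor tilted toward the sun
away = direct_beam_factor(sza, -10.0)     # sensor tilted away from the sun
level = direct_beam_factor(sza, 0.0)      # perfectly levelled sensor

# Relative change of the direct-beam component alone; a real total
# irradiance also contains a diffuse part that moderates the effect.
relative_change_toward = float(toward / level - 1.0)
relative_change_away = float(away / level - 1.0)
```

The asymmetry between the two tilt directions grows with the solar zenith angle, which is why the effect is most pronounced at low sun elevations and under clear skies.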
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Empirical Line Method (ELM) <s> To have lasting quantitative value, remotely sensed data must be calibrated to physical units of reflectance. The empirical line method offers a logistically simple means of generating acceptable estimates of surface reflectance. A review and case-study identify the ease with which this method can be applied, but also some of the pitfalls that can be encountered if it is not planned and implemented properly. A number of theoretical assumptions and practical considerations should be taken into account before applying this approach. It is suggested that the empirical line method allows the calibration of remotely sensed data to reflectance with errors of only a few percent. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Empirical Line Method (ELM) <s> Chemically treated canvas tarps of large dimension (8 by 8 m) can be deployed within the field of view of airborne digital sensors to provide a stable ground reference for converting image digital number (DN) to surface reflectance factor (ρ). However, the accuracy of such tarp-based conversion is dependent upon a good knowledge of tarp ρ at a variety of solar and view angles (θs and θv), and upon good care and proper deployment of tarps. In this study, a set of tarps of ρ ranging from 0.04 to 0.64 were evaluated to determine the magnitude of error in measured tarp ρ associated with variations in θs and θv and for reasonable levels of tarp dirtiness. Results showed that, for operational values of θs and θv and for reasonable levels of tarp dirtiness, the variation of measured tarp ρ from the factory-designated ρ could easily be greater than 50 percent.
On the other hand, we found that, if tarps were deployed correctly and kept clean through careful use and periodic cleaning, and if tarp ρ was determined through calibration equations that account for both θs and θv, the greatest sources of error were minimized. General calibration equations were derived and provided here; these will be useful for applications with tarps of the same factory-designated ρ values as those used in this study. Furthermore, equations were provided to allow calibration coefficients to be determined from the value of factory-designated ρ for the visible and near-infrared spectral bands. The major limitation of tarps as calibration sources was related to the difficulty associated with deploying heavy, cumbersome tarps under normal field conditions characterized by moderate wind, dust, heat, and possibly mud. This study should provide tarp users with the information necessary to properly deploy tarps and process results for accurate image interpretation. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Empirical Line Method (ELM) <s> Two critical limitations for using current satellite sensors in real-time crop management are the lack of imagery with optimum spatial and spectral resolutions and an unfavorable revisit time for most crop stress-detection applications. Alternatives based on manned airborne platforms are lacking due to their high operational costs. A fundamental requirement for providing useful remote sensing products in agriculture is the capacity to combine high spatial resolution and quick turnaround times. Remote sensing sensors placed on unmanned aerial vehicles (UAVs) could fill this gap, providing low-cost approaches to meet the critical requirements of spatial, spectral, and temporal resolutions.
This paper demonstrates the ability to generate quantitative remote sensing products by means of a helicopter-based UAV equipped with inexpensive thermal and narrowband multispectral imaging sensors. During summer of 2007, the platform was flown over agricultural fields, obtaining thermal imagery in the 7.5–13-μm region (40-cm resolution) and narrowband multispectral imagery in the 400–800-nm spectral region (20-cm resolution). Surface reflectance and temperature imagery were obtained, after atmospheric corrections with MODTRAN. Biophysical parameters were estimated using vegetation indices, namely, normalized difference vegetation index, transformed chlorophyll absorption in reflectance index/optimized soil-adjusted vegetation index, and photochemical reflectance index (PRI), coupled with SAILH and FLIGHT models. As a result, the image products of leaf area index, chlorophyll content (Cab), and water stress detection from PRI index and canopy temperature were produced and successfully validated. This paper demonstrates that results obtained with a low-cost UAV system for agricultural applications yielded comparable estimations, if not better, than those obtained by traditional manned airborne sensors. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Empirical Line Method (ELM) <s> The design, operation, and properties of the Finnish Geodetic Institute Field Goniospectrometer (FIGIFIGO) are presented. FIGIFIGO is a portable instrument for the measurement of surface Bidirectional Reflectance Factor (BRF) for samples with diameters of 10–50 cm. A set of polarising optics enable the measurement of linearly polarised BRF over the full solar spectrum (350–2,500 nm). FIGIFIGO is designed mainly for field operation using sunlight, but operation in a laboratory environment is also possible.
The acquired BRFs have an accuracy of 1–5% depending on wavelength, sample properties, and measurement conditions. The angles are registered at accuracies better than 2°. During 2004–2008, FIGIFIGO has been used in the measurement of over 150 samples, all around northern Europe. The samples concentrate mostly on boreal forest understorey, snow, urban surfaces, and reflectance calibration surfaces. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Empirical Line Method (ELM) <s> In this study, the performances and accuracies of three methods for converting airborne hyperspectral spectrometer data to reflectance factors were characterized and compared. The "reflectance mode (RM)" method, which calibrates a spectrometer against a white reference panel prior to mounting on an aircraft, resulted in spectral reflectance retrievals that were biased and distorted. The magnitudes of these bias errors and distortions varied significantly, depending on time of day and length of the flight campaign. The "linear-interpolation (LI)" method, which converts airborne spectrometer data by taking a ratio of linearly-interpolated reference values from the preflight and post-flight reference panel readings, resulted in precise, but inaccurate reflectance retrievals. These reflectance spectra were not distorted, but were subject to bias errors of varying magnitudes dependent on the flight duration length. The "continuous panel (CP)" method uses a multi-band radiometer to obtain continuous measurements over a reference panel throughout the flight campaign, in order to adjust the magnitudes of the linear-interpolated reference values from the preflight and post-flight reference panel readings. Airborne hyperspectral reflectance retrievals obtained using this method were found to be the most accurate and reliable reflectance calibration method.
The performances of the CP method in retrieving accurate reflectance factors were consistent throughout time of day and for various flight durations. Based on the dataset analyzed in this study, the uncertainty of the CP method has been estimated to be 0.0025 ± 0.0005 reflectance units for the wavelength regions not affected by atmospheric absorptions. The RM method can produce reasonable results only for a very short-term flight (e.g., < 15 minutes) conducted around a local solar noon. The flight duration should be kept shorter than 30 minutes for the LI method to produce results with reasonable accuracies. An important advantage of the CP method is that the method can be used for long-duration flight campaigns (e.g., 1-2 hours). Although this study focused on reflectance calibration of airborne spectrometer data, the methods evaluated in this study and the results obtained are directly applicable to ground spectrometer measurements. <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Empirical Line Method (ELM) <s> One of the key advantages of a low-flying unmanned aircraft system (UAS) is its ability to acquire digital images at an ultrahigh spatial resolution of a few centimeters. Remote sensing of quantitative biochemical and biophysical characteristics of small-sized spatially fragmented vegetation canopies requires, however, not only high spatial, but also high spectral (i.e., hyperspectral) resolution. In this paper, we describe the design, development, airborne operations, calibration, processing, and interpretation of image data collected with a new hyperspectral unmanned aircraft system (HyperUAS). HyperUAS is a remotely controlled multirotor prototype carrying onboard a lightweight pushbroom spectroradiometer coupled with a dual frequency GPS and an inertial movement unit.
The prototype was built to remotely acquire imaging spectroscopy data of 324 spectral bands (162 bands in a spectrally binned mode) with bandwidths between 4 and 5 nm at an ultrahigh spatial resolution of 2–5 cm. Three field airborne experiments, conducted over agricultural crops and over natural ecosystems of Antarctic mosses, proved operability of the system in standard field conditions, but also in a remote and harsh, low-temperature environment of East Antarctica. Experimental results demonstrate that HyperUAS is capable of delivering georeferenced maps of quantitative biochemical and biophysical variables of vegetation and of actual vegetation health state at an unprecedented spatial resolution of 5 cm. <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Empirical Line Method (ELM) <s> Abstract Mapping vegetation in crop fields is an important step in remote sensing applications for precision agriculture. Traditional aerial platforms such as planes and satellites are not suitable for these applications due to their low spatial and temporal resolutions. In this article, a UAV equipped with a commercial camera (visible spectrum) was used for ultra-high resolution image acquisition over a wheat field in the early-season period. From these images, six visible spectral indices (CIVE, ExG, ExGR, Woebbecke Index, NGRDI, VEG) and two combinations of these indices were calculated and evaluated for vegetation fraction mapping, to study the influence of flight altitude (30 and 60 m) and days after sowing (DAS) from 35 to 75 DAS on the classification accuracy. The ExG and VEG indices achieved the best accuracy in the vegetation fraction mapping, with values ranging from 87.73% to 91.99% at a 30 m flight altitude and from 83.74% to 87.82% at a 60 m flight altitude.
These indices were also spatially and temporally consistent, allowing accurate vegetation mapping over the entire wheat field at any date. This provides evidence that visible spectral indices derived from images acquired using a low-cost camera onboard a UAV flying at low altitudes are a suitable tool to use to discriminate vegetation in wheat fields in the early season. This opens the doors for the utilisation of this technology in precision agriculture applications such as early site specific weed management in which accurate vegetation fraction mapping is essential for crop-weed classification. <s> BIB007 </s> The use of small unmanned aircraft systems (sUAS) to acquire very high-resolution multispectral imagery has attracted growing attention recently; however, no systematic, feasible, and convenient radiometric calibration method has been specifically developed for sUAS remote sensing. In this research, we used a modified color infrared (CIR) digital single-lens reflex (DSLR) camera as the sensor and the DJI S800 hexacopter sUAS as the platform to collect imagery. Results show that the relationship between the natural logarithm of measured surface reflectance and image raw, unprocessed digital numbers (DNs) is linear and the y-intercept of the linear equation can be theoretically interpreted as the minimal possible surface reflectance that can be detected by each sensor waveband. The empirical line calibration equation for every single band image can be built using the y-intercept as one data point, and the natural log-transformed measured reflectance and image DNs of a gray calibration target as another point in the coordinate system. Image raw DNs are therefore converted to reflectance using the calibration equation.
The Mann–Whitney U test results suggest that the difference between the measured and the predicted reflectance values of 13 tallgrass sampling quadrats is not statistically significant. The method theory developed in this study can be employed for other sUAS-based remote sensing applications. <s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Empirical Line Method (ELM) <s> Abstract This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. First, we describe and apply methods to radiometrically characterize and calibrate these cameras. Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion. The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product.
We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52 – 59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture. The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand. <s> BIB009 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Empirical Line Method (ELM) <s> Hyperspectral remote sensing is used in precision agriculture to remotely and quickly acquire crop phenotype information. This paper describes the generation of a digital orthophoto map (DOM) and radiometric calibration for images taken by a miniaturized snapshot hyperspectral camera mounted on a lightweight unmanned aerial vehicle (UAV). The snapshot camera is a relatively new type of hyperspectral sensor that can acquire an image cube with one spectral and two spatial dimensions at one exposure. The images acquired by the hyperspectral snapshot camera need to be mosaicked together to produce a DOM and radiometrically calibrated before analysis. However, the spatial resolution of hyperspectral cubes is too low to mosaic the images together. Furthermore, there are no systematic radiometric calibration methods or procedures for snapshot hyperspectral images acquired from low-altitude carrier platforms. In this study, we obtained hyperspectral imagery using a snapshot hyperspectral sensor mounted on a UAV. 
We quantitatively evaluated the radiometric response linearity (RRL) and radiometric response variation (RRV) and proposed a method to correct the RRV effect. We then introduced a method to interpolate position and orientation system (POS) information and generate a DOM with low spatial resolution and a digital elevation model (DEM) using a 3D mesh model built from panchromatic images with high spatial resolution. The relative horizontal geometric precision of the DOM was validated by comparison with a DOM generated from a digital RGB camera. A crop surface model (CSM) was produced from the DEM, and crop height for 48 sampling plots was extracted and compared with the corresponding field-measured crop height to verify the relative precision of the DEM. Finally, we applied two absolute radiometric calibration methods to the generated DOM and verified their accuracy via comparison with spectra measured with an ASD Field Spec Pro spectrometer (Analytical Spectral Devices, Boulder, CO, USA). The DOM had high relative horizontal accuracy, and compared with the digital camera-derived DOM, spatial differences were below 0.05 m (RMSE = 0.035). The determination coefficient for a regression between DEM-derived and field-measured crop height was 0.680. The radiometric precision was 5% for bands between 500 and 945 nm, and the reflectance curve in the infrared spectral region did not decrease as in previous research. The pixel and data sizes for the DOM corresponding to a field area of approximately 85 m × 34 m were small (0.67 m and approximately 13.1 megabytes, respectively), which is convenient for data transmission, preprocessing and analysis. The proposed method for radiometric calibration and DOM generation from hyperspectral cubes can be used to yield hyperspectral imagery products for various applications, particularly precision agriculture.
<s> BIB010 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Empirical Line Method (ELM) <s> Abstract Investigating spatio-temporal variations of species composition in grassland is an essential step in evaluating grassland health conditions, understanding the evolutionary processes of the local ecosystem, and developing grassland management strategies. Space-borne remote sensing images (e.g., MODIS, Landsat, and Quickbird) with spatial resolutions varying from less than 1 m to 500 m have been widely applied for vegetation species classification at spatial scales from community to regional levels. However, the spatial resolutions of these images are not fine enough to investigate grassland species composition, since grass species are generally small in size and highly mixed, and vegetation cover is greatly heterogeneous. Unmanned Aerial Vehicle (UAV) as an emerging remote sensing platform offers a unique ability to acquire imagery at very high spatial resolution (centimetres). Compared to satellites or airplanes, UAVs can be deployed quickly and repeatedly, and are less limited by weather conditions, facilitating advantageous temporal studies. In this study, we utilize an octocopter, on which we mounted a modified digital camera (with near-infrared (NIR), green, and blue bands), to investigate species composition in a tall grassland in Ontario, Canada. Seven flight missions were conducted during the growing season (April to December) in 2015 to detect seasonal variations, and four of them were selected in this study to investigate the spatio-temporal variations of species composition. To quantitatively compare images acquired at different times, we establish a processing flow of UAV-acquired imagery, focusing on imagery quality evaluation and radiometric correction. The corrected imagery is then applied to an object-based species classification. 
Maps of species distribution are subsequently used for a spatio-temporal change analysis. Results indicate that UAV-acquired imagery is an incomparable data source for studying fine-scale grassland species composition, owing to its high spatial resolution. The overall accuracy is around 85% for images acquired at different times. Species composition is spatially attributed by topographical features and soil moisture conditions. Spatio-temporal variation of species composition implies the growing process and succession of different species, which is critical for understanding the evolutionary features of grassland ecosystems. Strengths and challenges of applying UAV-acquired imagery for vegetation studies are summarized at the end. <s> BIB011 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Empirical Line Method (ELM) <s> Abstract With the increasing availability of spectral sensors and consumer-grade data processing software, a democratization of imaging spectroscopy is taking place. In particular, novel lightweight 2D spectral imagers in combination with UAVs are increasingly being adapted for imaging spectroscopy. In contrast to traditional line-scanners, these sensors capture spectral information as a 2D image within every exposure. With computer vision algorithms embedded in consumer grade software packages, these data can be processed to hyperspectral digital surface models that hold spectral and 3D spatial information in very high resolution. To understand the spectral signal, however, one must comprehend the complexity of the capturing and data processing process in imaging spectroscopy with 2D imagers. This study establishes the theoretical background to comprehend the properties of spectral data acquired with 2D imagers and investigates how different data processing schemes influence the data. 
To improve the interpretability of a spectral signal derived for an area of interest (AOI), the specific field of view is introduced as a concept to understand the composition of pixels and their angular properties used to characterize a specific AOI within a remote sensing scene. These considerations are applied to a multi-temporal field study carried out under different illumination conditions in a barley field phenotyping experiment. It is shown that data processing significantly affects the angular properties of the spectral data and influences the apparent spectral signature. The largest differences are found in the red domain, where the signal differs by approximately 10% relative to a single nadir image. Even larger differences of approximately 14% are found in comparison with ground-based non-imaging field spectrometer measurements. The differences are explained by investigating the interaction between the angular properties of the data and canopy anisotropy, which are wavelength and growth stage dependent. Additionally, it is shown that common vegetation indices cannot normalize the differences and that the retrieval of chlorophyll is affected. In conclusion, this study helps to understand the process of imaging spectroscopy with 2D imagers and provides recommendations for future missions. <s> BIB012
The ELM is a commonly used image calibration method in which a set of radiometric reference panels with a known spectral reflectance is used to calculate reflectance factors (HDRFs). After the relative or absolute radiometric calibration of the images, a line is fitted with the least-squares method between the image DNs and the measured target reflectance factors BIB001 . The sensor may be radiometrically calibrated; if not, the method combines the linear conversions from DNs to reflectance factors into a single linear transformation. Implementations of the method can be found in software packages such as ENVI (Exelis Visual Information Solutions, Boulder, CO, USA). Several researchers have applied the ELM in their UAV operations (e.g., BIB010 BIB003 ). Lucieer et al. BIB006 used five near-Lambertian gray panels with 5% to 70% reflectance, built with a special paint that provided a reasonably flat spectral response. Yang et al. BIB010 used five artificial near-Lambertian tarps placed on flat ground for the ELM. Wang and Myint BIB008 described the ELM for transforming raw RGB images to reflectance based on nine characterized gray panels with different intensities (although they did not perform a relative radiometric correction, as would be recommended). Additionally, the ELM has been used with (modified) consumer-grade RGB cameras BIB011 BIB007 . Several UAV studies have also used a simplified ELM with only one panel; the reflectance factors are calculated by ratioing the target and the white reference measurements BIB009 BIB010 BIB004 . However, Aasen and Bolten BIB012 found some issues when using this simplified ELM for UAVs. When the sensor and the UAV are placed above the panel, a large part of the hemisphere is (invisibly) shaded. This can introduce a severe wavelength-dependent bias to the measurements that also affects the retrieval of vegetation parameters. The bias is strongest under cloudy conditions and can account for up to 15% BIB012 .
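The least-squares fit at the core of the ELM can be sketched in a few lines, assuming hypothetical panel values and a single-band image whose DNs have already been radiometrically corrected:

```python
import numpy as np

# Known panels: mean image DN over each panel vs. field-measured HDRF
panel_dn = np.array([520.0, 4100.0, 9800.0, 15200.0, 21000.0])
panel_refl = np.array([0.05, 0.16, 0.35, 0.52, 0.70])

# Least-squares line: np.polyfit returns [slope, intercept] for deg=1
gain, offset = np.polyfit(panel_dn, panel_refl, deg=1)

# Apply the line to a (tiny, hypothetical) single-band image
image_dn = np.array([[800.0, 12000.0],
                     [18000.0, 3000.0]])
image_refl = gain * image_dn + offset
```

With more than two panels, the residuals of the fit (or panels held out from it) give a direct estimate of the calibration uncertainty and of sensor linearity, as discussed below.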
Thus, we recommend not using the simplified ELM for UAV research when the UAV hovers above the panel at a short distance. The ELM is simple and accurate if all of its assumptions are met. When the ELM is used in its simple form, many factors can deteriorate the accuracy, such as variations in atmospheric conditions over the area of interest, topographic variations, and atmospheric BRDF. A minimum of two reference targets covering the range of reflectance values of interest should be used; for vegetation, the range is typically 0-50%. Adding more than two targets reduces uncertainties, enables an assessment of sensor linearity, and allows the ELM results to be evaluated by reserving some panels for verification only. The calibration targets should be flat and leveled, without obstructions, and large enough (preferably more than five times the image GSD) to reduce adjacency effects by selecting only the middle part of the panel. Targets should have uniform intensity and near-Lambertian reflectance characteristics. If it is not possible to deploy targets, the reflectance of ground objects with an appropriate reflectance range and near-Lambertian behavior, such as gravel, sand, or asphalt surfaces, can be measured and used instead. The assumption of Lambertian reflectance can be relaxed if the reflectance anisotropy of the target is characterized and accounted for BIB002 . Miura and Huete BIB005 stated that the ELM was suitable for flight times shorter than 30 min under stable weather conditions (clear sky) when the ELM results from panel measurements at the beginning and end of a flight were linearly interpolated. The main disadvantage, however, is that the ELM cannot adapt to illumination changes during the flight, since the panels are not present in every image. Thus, the ELM alone is not useful under variable conditions.
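The linear interpolation of pre- and post-flight calibrations mentioned above can be sketched as follows; the timestamps and coefficient values are hypothetical placeholders:

```python
# Hypothetical ELM coefficients from panel captures at the start (t = 0 s)
# and at the end (t = 1500 s) of a flight; all values are illustrative.
t_start, t_end = 0.0, 1500.0
gain_start, offset_start = 1.55e-5, 0.004
gain_end, offset_end = 1.62e-5, 0.006

def elm_coefficients(t):
    """Linearly interpolate the ELM gain/offset to an image timestamp t (seconds)."""
    w = (t - t_start) / (t_end - t_start)
    gain = (1.0 - w) * gain_start + w * gain_end
    offset = (1.0 - w) * offset_start + w * offset_end
    return gain, offset
```

Each image in the flight is then converted with the coefficients interpolated to its own acquisition time; under stable clear-sky conditions this absorbs the slow drift in illumination between the two panel captures, but it cannot compensate for abrupt changes such as passing clouds.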
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Atmospheric Correction <s> A method for the radiometric correction of wide field-of-view airborne imagery has been developed that accounts for the angular dependence of the path radiance and atmospheric transmittance functions to remove atmospheric and topographic effects. The first part of processing is the parametric geocoding of the scene to obtain a geocoded, orthorectified image and the view geometry (scan and azimuth angles) for each pixel as described in part 1 of this jointly submitted paper. The second part of the processing performs the combined atmospheric/ topographic correction. It uses a database of look-up tables of the atmospheric correction functions (path radiance, atmospheric transmittance, direct and diffuse solar flux) calculated with a radiative transfer code. Additionally, the terrain shape obtained from a digital elevation model is taken into account. The issues of the database size and accuracy requirements are critically discussed. The method supports all common types of imaging airborne optical instrument... <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Atmospheric Correction <s> The remote detection of water stress in a citrus orchard was investigated using leaf-level measurements of chlorophyll fluorescence and Photochemical Reflectance Index (PRI) data, seasonal time-series of crown temperature and PRI, and high-resolution airborne imagery. The work was conducted in an orchard where a regulated deficit irrigation (RDI) experiment generated a gradient in water stress levels. Stomatal conductance (Gs) and water potential (Ψ) were measured over the season on each treatment block. 
The airborne data consisted of thermal and hyperspectral imagery acquired at the time of maximum stress differences among treatments, prior to the re-watering phase, using a miniaturized thermal camera and a micro-hyperspectral imager on board an unmanned aerial vehicle (UAV). The hyperspectral imagery was acquired at 40 cm resolution and 260 spectral bands in the 400-885 nm spectral range at 6.4 nm full width at half maximum (FWHM) spectral resolution and 1.85 nm sampling interval, enabling the identification of pure crowns for extracting radiance and reflectance hyperspectral spectra from each tree. The FluorMOD model was used to investigate the retrieval of chlorophyll fluorescence by applying the Fraunhofer Line Depth (FLD) principle using three spectral bands (FLD3), which demonstrated that fluorescence retrieval was feasible with the configuration of the UAV micro-hyperspectral instrument flown over the orchard. Results demonstrated the link between seasonal PRI and crown temperature acquired from instrumented trees and field measurements of stomatal conductance and water potential. The sensitivity of PRI and Tc-Ta time-series to water stress levels demonstrated a time delay of PRI vs Tc-Ta during the recovery phase after re-watering started. At the time of the maximum stress difference among treatment blocks, the airborne imagery acquired from the UAV platform demonstrated that the crown temperature yielded the best coefficient of determination for Gs (r 2 <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Atmospheric Correction <s> Abstract. Albedo is a fundamental parameter in earth sciences, and many analyses utilize the Moderate Resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF)/albedo (MCD43) algorithms. 
While derivative albedo products have been evaluated over Greenland, we present a novel, direct comparison with nadir surface reflectance collected from an unmanned aerial system (UAS). The UAS was flown from Summit, Greenland, on 210 km transects coincident with the MODIS sensor overpass on board the Aqua and Terra satellites on 5 and 6 August 2010. Clear-sky acquisitions were available from the overpasses within 2 h of the UAS flights. The UAS was equipped with upward- and downward-looking spectrometers (300–920 nm) with a spectral resolution of 10 nm, allowing for direct integration into the MODIS bands 1, 3, and 4. The data provide a unique opportunity to directly compare UAS nadir reflectance with the MODIS nadir BRDF-adjusted surface reflectance (NBAR) products. The data show UAS measurements are slightly higher than the MODIS NBARs for all bands but agree within their stated uncertainties. Differences in variability are observed as expected due to different footprints of the platforms. The UAS data demonstrate potentially large sub-pixel variability of MODIS reflectance products and the potential to explore this variability using the UAS as a platform. It is also found that, even at the low elevations flown typically by a UAS, reflectance measurements may be influenced by haze if present at and/or below the flight altitude of the UAS. This impact could explain some differences between data from the two platforms and should be considered in any use of airborne platforms. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Atmospheric Correction <s> In the last decade, significant progress has been made in estimating Solar-Induced chlorophyll Fluorescence (SIF) by passive remote sensing techniques that exploit the oxygen absorption spectral regions. 
Although the O2–B and the deep O2–A absorption bands present a high sensitivity to detect SIF, these regions are also largely influenced by atmospheric effects. Therefore, an accurate Atmospheric Correction (AC) process is required to measure SIF from oxygen bands. In this regard, the suitability of a two-step approach, i.e., first an AC and second a Spectral Fitting technique to disentangle SIF from reflected light, has been evaluated. One of the advantages of the two-step approach resides in the derived intermediate products provided prior to SIF estimation, such as surface apparent reflectance. Results suggest that errors introduced in the AC, e.g., related to the characterization of aerosol optical properties, are propagated into systematic residual errors in the apparent reflectance. However, of interest is that these errors can be easily detected in the oxygen bands thanks to the high spectral resolution required to measure SIF. To illustrate this, the predictive power of the apparent reflectance spectra to detect and correct inaccuracies in the aerosols characterization is assessed by using a simulated database with SCOPE and MODTRAN radiative transfer models. In 75% of cases, the aerosol optical thickness, the Angstrom coefficient and the scattering asymmetry factor are corrected with a relative error below of 0.5%, 8% and 3%, respectively. To conclude with, and in view of future SIF monitoring satellite missions such as FLEX, the analysis of the apparent reflectance can entail a valuable quality indicator to detect and correct errors in the AC prior to the SIF estimation. <s> BIB004
Section 4.3.1 described how ARTMs can be used to simulate the irradiance to calculate reflectance. Additionally, ARTMs are widely used to correct multispectral and hyperspectral satellite and airborne imagery BIB001 and high-altitude UAV images BIB002 for the atmospheric influence along the path from the object to the sensor. Recent modeling studies suggest that atmospheric correction is important for very precise radiometric measurements, e.g., to estimate solar-induced chlorophyll fluorescence BIB004 . For reflectance studies, a detailed analysis is still missing; thus, atmospheric influences should be considered in each application BIB003 . A simple approach to normalize the influence of the atmosphere is to use the ELM.
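Conceptually, the per-band correction draws path radiance, path transmittance, and ground irradiance from an RTM-generated look-up table and inverts the at-sensor radiance to a surface reflectance factor. A simplified sketch of this standard single-layer inversion (function name and all numeric values are hypothetical; adjacency and multiple-scattering terms are deliberately omitted):

```python
import math

def surface_reflectance(l_sensor, l_path, transmittance, e_ground):
    """Invert at-sensor radiance to a surface reflectance factor:
    rho = pi * (L_sensor - L_path) / (tau * E_ground).

    l_sensor, l_path : radiances [W m-2 sr-1 um-1] (path radiance from an RTM LUT)
    transmittance    : object-to-sensor path transmittance (unitless, 0..1)
    e_ground         : irradiance reaching the surface [W m-2 um-1]
    """
    return math.pi * (l_sensor - l_path) / (transmittance * e_ground)
```

For low-altitude UAV flights the path is short, so transmittance is close to 1 and path radiance is small; the formula then reduces to the usual radiance-to-reflectance conversion, which is one reason atmospheric correction is often skipped in UAV work.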
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> A simple equation has been developed for describing the bidirectional reflectance of some vegetative canopies and bare soil surfaces. The equation describes directional reflectance as a function of zenith and azimuth view angles and solar azimuth angle. The equation works for simulated and field measured red and IR reflectance under clear sky conditions. Hemispherical reflectance can be calculated as a function of the simple equation coefficients by integrating the equation over the hemisphere of view angles. A single equation for estimating soil bidirectional reflectance was obtained using the relationships between solar zenith angles and the simple equation coefficients for medium and rough soil distributions. The equation has many useful applications such as providing a lower level boundary condition in complex plant canopy models and providing an additional tool for studying bidirectional effects on pointable sensors. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> Abstract An analytical reflectance model for a statistically homogeneous plant canopy has been developed. The most specific characteristics of the model are: 1) considering both the single and the multiple scattering of radiation in the canopy and on the soil and 2) accounting for the specular reflection of radiation on leaves and canopy hot spot. For the inversion of the model the technique suggested by Goel and Strebel (1983) has been applied. The reflectance model fits well the results of measurements both of the seasonal course of the nadir reflectance and of the angular distribution of the directional reflectance of the winter wheat and barley canopies. 
<s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> A method for the radiometric correction of wide field-of-view airborne imagery has been developed that accounts for the angular dependence of the path radiance and atmospheric transmittance functions to remove atmospheric and topographic effects. The first part of processing is the parametric geocoding of the scene to obtain a geocoded, orthorectified image and the view geometry (scan and azimuth angles) for each pixel as described in part 1 of this jointly submitted paper. The second part of the processing performs the combined atmospheric/ topographic correction. It uses a database of look-up tables of the atmospheric correction functions (path radiance, atmospheric transmittance, direct and diffuse solar flux) calculated with a radiative transfer code. Additionally, the terrain shape obtained from a digital elevation model is taken into account. The issues of the database size and accuracy requirements are critically discussed. The method supports all common types of imaging airborne optical instrument... <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> Abstract The remote sensing community puts major efforts into calibration and validation of sensors, measurements, and derived products to quantify and reduce uncertainties. Given recent advances in instrument design, radiometric calibration, atmospheric correction, algorithm development, product development, validation, and delivery, the lack of standardization of reflectance terminology and products becomes a considerable source of error. This article provides full access to the basic concept and definitions of reflectance quantities, as given by Nicodemus et al. 
[Nicodemus, F.E., Richmond, J.C., Hsia, J.J., Ginsberg, I.W., and Limperis, T. (1977). Geometrical Considerations and Nomenclature for Reflectance. In: National Bureau of Standards, US Department of Commerce, Washington, D.C. URL: http://physics.nist.gov/Divisions/Div844/facilities/specphoto/pdf/geoConsid.pdf .] and Martonchik et al. [Martonchik, J.V., Bruegge, C.J., and Strahler, A. (2000). A review of reflectance nomenclature used in remote sensing. Remote Sensing Reviews, 19, 9–20.]. Reflectance terms such as BRDF, HDRF, BRF, BHR, DHR, black-sky albedo, white-sky albedo, and blue-sky albedo are defined, explained, and exemplified, while separating conceptual from measurable quantities. We use selected examples from the peer-reviewed literature to demonstrate that very often the current use of reflectance terminology does not fulfill physical standards and can lead to systematic errors. Secondly, the paper highlights the importance of a proper usage of definitions through quantitative comparison of different reflectance products with special emphasis on wavelength dependent effects. Reflectance quantities acquired under hemispherical illumination conditions (i.e., all outdoor measurements) depend not only on the scattering properties of the observed surface, but as well on atmospheric conditions, the object's surroundings, and the topography, with distinct expression of these effects in different wavelengths. We exemplify differences between the hemispherical and directional illumination quantities, based on observations (i.e., MISR), and on reflectance simulations of natural surfaces (i.e., vegetation canopy and snow cover). In order to improve the current situation of frequent ambiguous usage of reflectance terms and quantities, we suggest standardizing the terminology in reflectance product descriptions and that the community carefully utilizes the proposed reflectance terminology in scientific publications. 
<s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> Using unmanned aircraft systems (UAS) as remote sensing platforms offers the unique ability for repeated deployment for acquisition of high temporal resolution data at very high spatial resolution. Multispectral remote sensing applications from UAS are reported in the literature less commonly than applications using visible bands, although light-weight multispectral sensors for UAS are being used increasingly. . In this paper, we describe challenges and solutions associated with efficient processing of multispectral imagery to obtain orthorectified, radiometrically calibrated image mosaics for the purpose of rangeland vegetation classification. We developed automated batch processing methods for file conversion, band-to-band registration, radiometric correction, and orthorectification. An object-based image analysis approach was used to derive a species-level vegetation classification for the image mosaic with an overall accuracy of 87%. We obtained good correlations between: (1) ground and airborne spectral reflectance (R 2 = 0.92); and (2) spectral reflectance derived from airborne and WorldView-2 satellite data for selected vegetation and soil targets. UAS-acquired multispectral imagery provides quality high resolution information for rangeland applications with the potential for upscaling the data to larger areas using high resolution satellite imagery. <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> Imaging using lightweight, unmanned airborne vehicles (UAVs) is one of the most rapidly developing fields in remote sensing technology. 
The new, tunable, Fabry-Perot interferometer-based (FPI) spectral camera, which weighs less than 700 g, makes it possible to collect spectrometric image blocks with stereoscopic overlaps using light-weight UAV platforms. This new technology is highly relevant, because it opens up new possibilities for measuring and monitoring the environment, which is becoming increasingly important for many environmental challenges. Our objectives were to investigate the processing and use of this new type of image data in precision agriculture. We developed the entire processing chain from raw images up to georeferenced reflectance images, digital surface models and biomass estimates. The processing integrates photogrammetric and quantitative remote sensing approaches. We carried out an empirical assessment using FPI spectral imagery collected at an agricultural wheat test site in the summer of 2012. Poor weather conditions during the campaign complicated the data processing, but this is one of the challenges that are faced in operational applications. The <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> Directional effects in airborne imaging spectrometer (IS) data are mainly caused by anisotropic reflectance behavior of surfaces, commonly described by bi-directional reflectance distribution functions (BRDF). The radiometric and spectral accuracy of IS data is known to be highly influenced by such effects, which prevents consistent comparison of products. Several models were developed to approximate surface reflectance anisotropy for multi-angular observations. Few studies were carried out using such models for airborne flight lines where only a single observation is available for each ground location. 
In the present work, we quantified and corrected reflectance anisotropy on a single airborne HyMap flight line using a Ross-Li model. We stratified the surface in two vegetation structural types (different in vertical structuring) using spectral angle mapping, to generate a structure dependent set of angular observations. We then derived a suite of products [indices (structure insensitive pigment index, normalized difference vegetation index, simple ratio index, and anthocyanin reflectance index) and inversion-based (SAIL/PROSPECT-leaf area index, Cw, Cdm, Cab)] from corrected and uncorrected images. Non-parametric analysis of variance (Kruskal-Wallis test) showed throughout significant improvements in products from corrected images. Data correction resulting in airborne nadir BRDF adjusted reflectance (aNBAR) showed uncertainty reductions from 60 to 100% (p-value = 0.05) as compared to uncorrected and nadir observations. Using sparse IS data acquisitions, the use of fully parametrized BRDF models is limited. Our normalization scheme is straightforward and can be applied with illumination and observation geometry being the only a priori information. We recommend aNBAR generation to precede any higher level airborne IS product generation based on reflectance data. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> The radiometric correction of airborne imagery aims at providing unbiased spectral information about the Earth's surface. Correction steps include system calibration, geometric correction, and the compensation for atmospheric effects. Such preprocessed data are affected by the bidirectional reflectance distribution function (BRDF), which requires an additional compensation step. We present a novel method for a surface-cover-dependent BRDF effects correction (BREFCOR). 
It uses a continuous index based on bottom-of-atmosphere reflectances to tune the Ross–Thick Li–Sparse BRDF model. This calibrated model is then used to correct for observation-angle-dependent anisotropy. The method shows its benefits specifically for wide-field-of-view airborne systems where BRDF effects strongly affect image quality. Evaluation results are shown for sample data from a multispectral photogrammetric Leica ADS camera system and for HYSPEX imaging spectroscopy data. The scalability of the procedure for various kinds of sensor configurations allows for its operational use as part of standard processing systems. <s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> Miniaturized hyperspectral imaging sensors are becoming available to small unmanned airborne vehicle (UAV) platforms. Imaging concepts based on frame format offer an attractive alternative to conventional hyperspectral pushbroom scanners because they enable enhanced processing and interpretation potential by allowing for acquisition of the 3-D geometry of the object and multiple object views together with the hyperspectral reflectance signatures. The objective of this investigation was to study the performance of novel visible and near-infrared (VNIR) and short-wave infrared (SWIR) hyperspectral frame cameras based on a tunable Fabry–Perot interferometer (FPI) in measuring a 3-D digital surface model and the surface moisture of a peat production area. UAV image blocks were captured with ground sample distances (GSDs) of 15, 9.5, and 2.5 cm with the SWIR, VNIR, and consumer RGB cameras, respectively. Georeferencing showed consistent behavior, with accuracy levels better than GSD for the FPI cameras. 
The best accuracy in moisture estimation was obtained when using the reflectance difference of the SWIR band at 1246 nm and of the VNIR band at 859 nm, which gave a root mean square error (rmse) of 5.21 pp (pp is the mass fraction in percentage points) and a normalized rmse of 7.61%. The results are encouraging, indicating that UAV-based remote sensing could significantly improve the efficiency and environmental safety aspects of peat production. <s> BIB009 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> Field spectroscopy is increasingly used in various fields of science: either as a research tool in its own right or in support of airborne- or space-based optical instruments for calibration or validation purposes. Yet, while the use of the instruments appears deceptively simple, the processes of light and surface interactions are complex to be measured in full and are further complicated by the multidimensionality of the measurement process. This study exemplifies the cross validation of in situ point spectroscopy and airborne imaging spectroscopy data across all processing stages within the spectroscopy information hierarchy using data from an experiment focused on vegetation. In support of this endeavor, this study compiles the fundamentals of spectroscopy, the challenges inherent to field and airborne spectroscopy, and the best practices proposed by the field spectroscopy community. This combination of theory and case study shall enable the reader to develop an understanding of 1) some of the commonly involved sources of errors and uncertainties, 2) the techniques to collect high-quality spectra under natural illumination conditions, and 3) the importance of appropriate metadata collection to increase the long-term usability and value of spectral data. 
<s> BIB010 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> Abstract With the increasing availability of spectral sensors and consumer-grade data processing software, a democratization of imaging spectroscopy is taking place. In particular, novel lightweight 2D spectral imagers in combination with UAVs are increasingly being adapted for imaging spectroscopy. In contrast to traditional line-scanners, these sensors capture spectral information as a 2D image within every exposure. With computer vision algorithms embedded in consumer grade software packages, these data can be processed to hyperspectral digital surface models that hold spectral and 3D spatial information in very high resolution. To understand the spectral signal, however, one must comprehend the complexity of the capturing and data processing process in imaging spectroscopy with 2D imagers. This study establishes the theoretical background to comprehend the properties of spectral data acquired with 2D imagers and investigates how different data processing schemes influence the data. To improve the interpretability of a spectral signal derived for an area of interest (AOI), the specific field of view is introduced as a concept to understand the composition of pixels and their angular properties used to characterize a specific AOI within a remote sensing scene. These considerations are applied to a multi-temporal field study carried out under different illumination conditions in a barley field phenotyping experiment. It is shown that data processing significantly affects the angular properties of the spectral data and influences the apparent spectral signature. The largest differences are found in the red domain, where the signal differs by approximately 10% relative to a single nadir image. 
Even larger differences of approximately 14% are found in comparison with ground-based non-imaging field spectrometer measurements. The differences are explained by investigating the interaction between the angular properties of the data and canopy anisotropy, which are wavelength and growth stage dependent. Additionally, it is shown that common vegetation indices cannot normalize the differences and that the retrieval of chlorophyll is affected. In conclusion, this study helps to understand the process of imaging spectroscopy with 2D imagers and provides recommendations for future missions. <s> BIB011 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> BRDF Correction <s> Unmanned airborne vehicles (UAV) equipped with novel, miniaturized, 2D frame format hyper- and multispectral cameras make it possible to conduct remote sensing measurements cost-efficiently, with greater accuracy and detail. In the mapping process, the area of interest is covered by multiple, overlapping, small-format 2D images, which provide redundant information about the object. Radiometric correction of spectral image data is important for eliminating any external disturbance from the captured data. Corrections should include sensor, atmosphere and view/illumination geometry (bidirectional reflectance distribution function—BRDF) related disturbances. An additional complication is that UAV remote sensing campaigns are often carried out under difficult conditions, with varying illumination conditions and cloudiness. We have developed a global optimization approach for the radiometric correction of UAV image blocks, a radiometric block adjustment. The objective of this study was to implement and assess a combined adjustment approach, including comprehensive consideration of weighting of various observations. 
An empirical study was carried out using imagery of winter wheat crops captured with a hyperspectral 2D frame format camera. The dataset included four separate flights captured during a 2.5 h time period under sunny weather conditions. As outputs, we calculated orthophoto mosaics using the most nadir images and sampled multiple-view hyperspectral spectra for vegetation sample points utilizing multiple images in the dataset. The method provided an automated tool for radiometric correction, efficiently compensating for radiometric disturbances in the images. The global homogeneity factor improved from 12-16% to 4-6% with the corrections, and a reduction in disturbances could be observed in the spectra of the object points sampled from multiple overlapping images. Residuals in the grey and white reflectance panels were less than 5% of the reflectance for most of the spectral bands. <s> BIB012
When analyzing imaging data measured with wide-angle FOVs, the anisotropy of the surface can cause significant radiometric differences within individual images and between neighboring images BIB011 BIB012 . Wide-angle FOVs are very common in UAV imaging spectroscopy because they provide larger spatial coverage. This introduces unwanted effects when mosaicking the images and affects the spectral signature of objects within the scene BIB011 . BRDF correction is defined as the process of compensating for the influence of anisotropy, so that the image reflectance values correspond to the reflectance factor in the (mostly) nadir direction. The commonly used BRDF models can be classified as physical, empirical, or semi-empirical . In classical remote sensing, a BRDF correction is usually carried out by means of empirical models BIB003 BIB008 BIB007 . The BRDF correction is calculated by determining the BRDF model of the target of interest and then computing a multiplicative correction factor for BRDF compensation . It may also include the calculation of, and normalization to, reflectance factors for a desired geometry BIB004 BIB010 . The BRDF correction using the simple empirical models of Walthall et al. BIB001 and Nilson and Kuusk BIB002 has been used by Beisl and by Honkavaara et al. BIB006 BIB009 BIB012 in the radiometric block adjustment approach. Various statistical methods are also popular for correcting BRDF effects. Laliberte et al. BIB005 used the dodging method to compensate for uneven lighting conditions across a photo frame caused by BRDF effects, fragmented cloud cover, vignetting, and other factors. The process is based on calculating global statistics for a group of images to balance the radiometry both within individual images and across groups of imagery. Their results showed less than 2% residual root mean square errors (RMSEs) (in reflectance) when calculating the linear fit between the reflectance mosaic and reference reflectance.
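As an illustration of the empirical approach, the Walthall et al. BIB001 model can be fitted to multi-angular observations of one surface class and used as a multiplicative normalization towards nadir. All angular samples and reflectance values below are hypothetical; the model form is rho = a*theta^2 + b*theta*cos(phi) + c, so the fitted constant c is the modeled nadir reflectance factor:

```python
import numpy as np

# Hypothetical multi-angular samples of one surface class: view zenith (rad),
# relative view-sun azimuth (rad), and observed reflectance factor.
theta_v = np.array([0.0, 0.2, 0.4, 0.2, 0.4])
phi = np.array([0.0, 0.0, 0.0, np.pi, np.pi])
refl = np.array([0.30, 0.33, 0.38, 0.28, 0.27])

# Fit the Walthall model rho = a*theta^2 + b*theta*cos(phi) + c by least squares.
design = np.column_stack([theta_v**2, theta_v * np.cos(phi), np.ones_like(theta_v)])
a, b, c = np.linalg.lstsq(design, refl, rcond=None)[0]

def nadir_normalize(rho_obs, tv, ph):
    """Multiplicative BRDF correction of an observation towards nadir (c / model)."""
    modeled = a * tv**2 + b * tv * np.cos(ph) + c
    return rho_obs * c / modeled
```

In a block adjustment setting, such angular samples come from the many overlapping views of the same ground points, which is exactly what redundant 2D frame imagery provides.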
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Topographic Correction <s> SUMMARYThe effects of topography on the radiometric properties of multispectral scanner (MSS) data are examined in the context of the remote sensing of forests in mountainous regions. The two test areas considered for this study are located in the coastal mountains of British Columbia, one at the Anderson River near Boston Bar and the other at Gun Lake near Bralorne. The predominant forest type at the former site is Douglas fir, whereas forest types at the latter site are primarily lodgepole pine and ponderosa pine. Both regions have rugged topography, with elevations ranging from 330 to 1100 metres above sea level at Anderson River and from 750 to 1300 metres above sea level at Gun Lake.Lambertian and non-Lambertian illumination corrections are formulated, taking into account atmospheric effects as well as topographic variations. Terrain slope and aspect values are determined from a digital elevation model and atmospheric parameters are obtained from a model atmosphere computation for the solar angles an... <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Topographic Correction <s> Steephill and mountain slopes severely affect remote sensing of vegetation. The irradiation on a slope varies strongly with slope azimuth relative to the sun, and the reflectance of the slope varies with the angles of incidence and exitance relative to the slope normal. Topographic correction involves standardizing imagery for these two effects. We use an atmospheric model with a Digital Elevation Model (DEM) to calculate direct and diffuse illumination, and a simple function of incidence and exitance angles to calculate vegetation-canopy reflectance on terrain slope. 
The reflectance correction has been derived from the physics of visible direct radiation on a vegetation canopy, but has proved applicable to infrared wavelengths and only requires solar position, slope and aspect. We applied the reflectance and illumination correction to a SPOT 4 image of New Zealand to remove topographic variation. In all spectral bands, the algorithm markedly reduced the coefficients of variation of vegetation groups on r... <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Topographic Correction <s> Drone-borne hyperspectral imaging is a new and promising technique for fast and precise acquisition, as well as delivery of high-resolution hyperspectral data to a large variety of end-users. Drones can overcome the scale gap between field and air-borne remote sensing, thus providing high-resolution and multi-temporal data. They are easy to use, flexible and deliver data within cm-scale resolution. So far, however, drone-borne imagery has prominently and successfully been almost solely used in precision agriculture and photogrammetry. Drone technology currently mainly relies on structure-from-motion photogrammetry, aerial photography and agricultural monitoring. Recently, a few hyperspectral sensors became available for drones, but complex geometric and radiometric effects complicate their use for geology-related studies. Using two examples, we first show that precise corrections are required for any geological mapping. We then present a processing toolbox for frame-based hyperspectral imaging systems adapted for the complex correction of drone-borne hyperspectral imagery. The toolbox performs sensor- and platform-specific geometric distortion corrections. Furthermore, a topographic correction step is implemented to correct for rough terrain surfaces. We recommend the c-factor-algorithm for geological applications. 
To our knowledge, we demonstrate for the first time the applicability of the corrected dataset for lithological mapping and mineral exploration. <s> BIB003
The topography can have a large influence on the local illumination within an image . The radiance of the same material varies depending on whether it is located on a slope oriented toward or away from the incident sunlight. For a correction, a DSM and the Sun's elevation and azimuth angles at the time of acquisition are needed. Several correction methods exist . Jakob et al. BIB003 implemented and tested some of the common topographic correction methods with UAV-based imagery for geological applications. The methods comprised Lambertian methods, such as the cosine method BIB001 , the gamma method BIB002 , and the percent method, as well as non-Lambertian methods, such as the Minnaert method and the c-factor method by Teillet et al. BIB001 . They recommended the c-factor method.
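As an illustration of the non-Lambertian approach, the c-factor method regresses the observed reflectance against the cosine of the local solar incidence angle, cos i = cos θz·cos S + sin θz·sin S·cos(φsun − φaspect), where θz is the solar zenith angle, S the slope, and φaspect the aspect; it then sets c = b/m from the fitted line ρ = m·cos i + b and rescales each pixel to the illumination of a horizontal surface. A minimal NumPy sketch (function names are illustrative; a real implementation would fit the regression per land-cover class and per band):

```python
import numpy as np

def illumination(slope, aspect, sun_zenith, sun_azimuth):
    """Cosine of the local solar incidence angle (all angles in radians)."""
    return (np.cos(sun_zenith) * np.cos(slope)
            + np.sin(sun_zenith) * np.sin(slope)
            * np.cos(sun_azimuth - aspect))

def c_factor_correction(refl, cos_i, sun_zenith):
    """Non-Lambertian c-factor correction: fit refl = m*cos_i + b,
    set c = b/m, and rescale each pixel to the illumination of a
    horizontal surface."""
    m, b = np.polyfit(cos_i.ravel(), refl.ravel(), 1)
    c = b / m
    return refl * (np.cos(sun_zenith) + c) / (cos_i + c)
```

If the reflectance depends on cos i purely linearly, the corrected values become independent of slope and aspect, which is the intended behavior of the method.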
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Shadow Correction <s> A de‐shadowing technique is presented for multispectral and hyperspectral imagery over land acquired by satellite/airborne sensors. The method requires a channel in the visible and at least one spectral band in the near‐infrared (0.8–1 µm) region, but performs much better if bands in the short‐wave infrared region (around 1.6 and 2.2 µm) are available as well. The algorithm consists of these major components: (i) calculation of the covariance matrix and zero‐reflectance matched filter vector, (ii) derivation of the unscaled and scaled shadow function, (iii) histogram thresholding of the unscaled shadow function to define the core shadow areas, (iv) region growing to include the surroundings of the core shadow areas for a smooth shadow/clear transition, and (v) de‐shadowing of the pixels in the final shadow mask. The critical parameters of the method are discussed. Example images from different climates and landscapes are presented to demonstrate the successful performance of the shadow removal process ove... <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Shadow Correction <s> Digital airborne photogrammetric cameras have evolved from imagers to well-calibrated radiometric measurement devices. As such, the radiative transfer based processing of the acquired data to surface reflectance products has become feasible. Such processing allows for automatic and consistent compensation of the effects of the atmosphere and the topography, which is known from remote sensing applications as the atmospheric correction task. 
The motivation is both a qualitative improvement of the outputs of the automatic processing chains and the possibility to develop remote sensing data products from the imagery. This paper presents the operational implementation of a radiative-transfer based radiometric correction method of Leica's ADS-80 image products. The method is developed on the basis of the ATCOR-4 technology. The ATCOR-4 atmospheric correction software inverts the MODTRAN-5 radiative transfer code for atmospheric compensation of trace gas and aerosol influences as well as for topographic correction of the illumination field. The focus of the processing is twofold: for image products, the correction of topographic dependency of atmospheric scattering, depending on flight altitude, terrain height, and viewing angle is envisaged. For remote sensing products, the output shall be optimized for automatic quantitative processing, including the correction of irradiance variations and cast shadow effects. The implementation of these two procedures has been successfully tested for both types of applications. Validation results in comparison to in-field measurements indicate a reliable accuracy of the reflectance spectra thus produced. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Shadow Correction <s> Abstract Automatic shadow detection is a very important pre-processing step for many remote sensing applications, particularly for images acquired with high spatial resolution. In complex urban environments, shadows may occupy a significant portion of the image. Ignoring these regions would lead to errors in various applications, such as atmospheric correction and classification. To better understand the radiative impact of shadows, a physical study was conducted through the simulation of a synthetic urban canyon scene.
Its results helped to explain the most common assumptions made on shadows from a physical point of view in the literature. With this understanding, state-of-the-art methods on shadow detection were surveyed and categorized into six classes: histogram thresholding, invariant color models, object segmentation, geometrical methods, physics-based methods, unsupervised and supervised machine learning methods. Among them, some methods were selected and tested on a large dataset of multispectral and hyperspectral airborne images with high spatial resolution. The dataset chosen contains a large variety of typical occidental urban scenes. The results were compared based on accurate reference shadow masks. In these experiments, histogram thresholding on RGB and NIR channels performed the best with an average accuracy of 92.5%, followed by physics-based methods, such as Richter’s method with 90.0%. Finally, this paper analyzes and discusses the limits of these algorithms, concluding with some recommendations for shadow detection. <s> BIB003
Shadows are cast by 3D objects within the scene and by clouds. The approaches for treating shadowed areas include de-shadowing and analyzing the shadowed and sun-illuminated areas separately. Adeline et al. BIB003 categorized shadow detection methods into six classes: histogram thresholding, invariant color models, object segmentation, geometrical methods, physics-based methods, and unsupervised and supervised machine learning methods. Geometrical methods require a 3D model of the objects and information on the solar elevation and direction to calculate the positions of shadows. Due to various uncertainties, the accuracy of geometrical methods is not sufficient in most cases, especially with high-resolution images BIB002 ; therefore, image-based methods are needed. Adeline et al. BIB003 used simulated data to obtain accurate reference shadow masks. In their experiments, histogram thresholding on RGB and NIR channels performed the best, followed by physics-based methods. De-shadowing based on physical radiation modeling relies on the assumption that all shadowed areas are illuminated by diffuse irradiance only; this shadow correction provided good results for hyperspectral airborne and satellite images BIB001 and for the ADS high-resolution photogrammetric multispectral scanner BIB002 . To the authors' knowledge, no such studies exist for high-resolution UAV approaches; thus, further studies are needed in this field.
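A minimal example of the best-performing class in that comparison, histogram thresholding, can be sketched with Otsu's method applied to a NIR band, where shadowed pixels form the low-radiance mode of the histogram (an illustrative NumPy implementation, not the exact procedure evaluated in BIB003):

```python
import numpy as np

def otsu_threshold(band, bins=256):
    """Otsu's histogram threshold: maximize the between-class variance."""
    hist, edges = np.histogram(band.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)             # cumulative class probability
    mu = np.cumsum(p * centers)   # cumulative class mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]

def shadow_mask(nir_band):
    """Shadowed pixels have low NIR radiance: threshold below Otsu."""
    return nir_band < otsu_threshold(nir_band)
```

In practice, such a mask would typically be combined with a visible-band criterion and morphological post-processing before de-shadowing or separate analysis.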
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Block Adjustment <s> The advent of routine collection of high-quality digital photography provides for traditional uses, as well as “remote sensing” uses such as the monitoring of environmental indicators. A well-devised monitoring system, based on consistent data and methods, provides the opportunity to track and communicate changes in features of interest in a way that has not previously been possible. Data that are geometrically and radiometrically consistent are fundamental to establishing systems for monitoring. In this paper, we focus on models for the radiometric calibration of mosaics consisting of thousands of images. We apply the models to the data acquired by the Australian Commonwealth Scientific and Industrial Research Organisation and its partners as part of regular systematic acquisitions over the city of Perth for a project known as Urban Monitor. One goal of the project, and hence the model development, is to produce annually updated mosaics calibrated to reflectance at 0.2-m ground sample distance for an area of approximately 9600 km2. This equates to terabytes of data and, for frame-based instruments, tens of thousands of images. For the experiments considered in this paper, this requires mosaicking estimates derived from 3000 digital photographic frames, and the methods will shortly be expanded to 30 000+ frames. A key part of the processing is the removal of spectral variation due to the viewing geometry, typically attributed to the bidirectional reflectance distribution function (BRDF) of the land surface. A variety of techniques based on semiempirical BRDF kernels have been proposed in the literature for correcting the BRDF effect in single frames, but mosaics with many frames provide unique challenges. 
This paper presents and illuminates a complete empirical radiometric calibration method for digital aerial frame mosaics, based on a combined model that uses kernel-based techniques for BRDF correction and incorporates additive and multiplicative terms for correcting other effects, such as variations due to the sensor and atmosphere. Using ground truth, which consists of laboratory-measured white, gray, and black targets that were placed in the field at the time of acquisition, we calculate the fundamental limitations of each model, leading to an optimal result for each model type. We demonstrate estimates of ground reflectance that are accurate to approximately 10%, 5%, and 3% absolute reflectances for ground targets having reflectances of 90%, 40%, and 4%, respectively. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Block Adjustment <s> Imaging using lightweight, unmanned airborne vehicles (UAVs) is one of the most rapidly developing fields in remote sensing technology. The new, tunable, Fabry-Perot interferometer-based (FPI) spectral camera, which weighs less than 700 g, makes it possible to collect spectrometric image blocks with stereoscopic overlaps using light-weight UAV platforms. This new technology is highly relevant, because it opens up new possibilities for measuring and monitoring the environment, which is becoming increasingly important for many environmental challenges. Our objectives were to investigate the processing and use of this new type of image data in precision agriculture. We developed the entire processing chain from raw images up to georeferenced reflectance images, digital surface models and biomass estimates. The processing integrates photogrammetric and quantitative remote sensing approaches. 
We carried out an empirical assessment using FPI spectral imagery collected at an agricultural wheat test site in the summer of 2012. Poor weather conditions during the campaign complicated the data processing, but this is one of the challenges that are faced in operational applications. The <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Block Adjustment <s> Unmanned airborne vehicles (UAV) equipped with novel, miniaturized, 2D frame format hyper- and multispectral cameras make it possible to conduct remote sensing measurements cost-efficiently, with greater accuracy and detail. In the mapping process, the area of interest is covered by multiple, overlapping, small-format 2D images, which provide redundant information about the object. Radiometric correction of spectral image data is important for eliminating any external disturbance from the captured data. Corrections should include sensor, atmosphere and view/illumination geometry (bidirectional reflectance distribution function—BRDF) related disturbances. An additional complication is that UAV remote sensing campaigns are often carried out under difficult conditions, with varying illumination conditions and cloudiness. We have developed a global optimization approach for the radiometric correction of UAV image blocks, a radiometric block adjustment. The objective of this study was to implement and assess a combined adjustment approach, including comprehensive consideration of weighting of various observations. An empirical study was carried out using imagery captured using a hyperspectral 2D frame format camera of winter wheat crops. The dataset included four separate flights captured during a 2.5 h time period under sunny weather conditions. 
As outputs, we calculated orthophoto mosaics using the most nadir images and sampled multiple-view hyperspectral spectra for vegetation sample points utilizing multiple images in the dataset. The method provided an automated tool for radiometric correction, efficiently compensating for radiometric disturbances in the images. The global homogeneity factor improved from 12–16% to 4–6% with the corrections, and a reduction in disturbances could be observed in the spectra of the object points sampled from multiple overlapping images. Residuals in the grey and white reflectance panels were less than 5% of the reflectance for most of the spectral bands. <s> BIB003
Radiometric block adjustment can be used in cases where the area of interest is covered by multiple overlapping images, such as image blocks captured with 2D or pushbroom imaging sensors. In the photogrammetric (geometric) processing of image blocks, the block adjustment is used to determine the best geometric fit over the entire image block. Radiometric block adjustment is based on a similar idea. The approach is to model the radiometric imaging process, i.e., the model between the object reflectance and the image DN, and then to solve the parameters of this model using optimization techniques utilizing the redundant information from the multiple overlapping images BIB002 BIB003 . The outputs of the process are the parameters of the radiometric model, which can be used in the subsequent processing to produce radiometrically corrected image products, such as reflectance mosaics, reflectance point clouds, or reflectance observations of objects of interest BIB002 BIB003 . Similar approaches have previously been used with aircraft images BIB001 . The model between a DN and reflectance by BIB002 accounts for the variability of the radiance measurement and the BRDF effects, and determines the absolute transformation from DN to reflectance using the ELM. In the adjustment process, a set of radiometric tie points is determined, and observation equations are formed utilizing the DN observations of each radiometric tie point in multiple images. In addition to the radiometric tie points, other observations can also be included: in the current implementation, radiometric control points (e.g., reflectance panels, c.f. Section 4.4.3) and a priori values of the relative differences in the irradiance of different images can be included as observations BIB003 . Relevant model parameters are selected for each adjustment task.
For example, during overcast conditions, it is not necessary to use the BRDF parameters, whereas under stable conditions, the relative correction parameters are not usually necessary. Furthermore, a comprehensive weighting strategy is used to reach optimal results in the combined adjustment mode BIB003 . The reflectance outputs calculated by this procedure are, by definition, hemispherical-directional reflectance factors (HDRFs). Many of the steps in Figure 5 are thus integrated into the radiometric block adjustment process.
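The core idea can be illustrated with a heavily simplified model in which each image j has a single multiplicative correction factor a_j and each radiometric tie point k an unknown reflectance R_k, so that DN_jk ≈ a_j·R_k. Taking logarithms turns this into a sparse linear system solved by least squares, with a radiometric control point fixing the absolute scale. This is a sketch only; the models in BIB002 BIB003 additionally include BRDF terms, additive offsets, and observation weighting, and the names below are hypothetical:

```python
import numpy as np

def block_adjust(obs, n_images, n_points, control):
    """Solve ln DN_jk = ln a_j + ln R_k by linear least squares.

    obs:     list of (image_j, point_k, dn) tie-point observations
    control: dict {point_k: reflectance} of radiometric control points
             anchoring the absolute scale (otherwise a_j and R_k are
             only determined up to a common factor)."""
    n_unk = n_images + n_points
    rows, rhs = [], []
    for j, k, dn in obs:                 # one equation per DN observation
        r = np.zeros(n_unk)
        r[j] = 1.0                       # coefficient of ln a_j
        r[n_images + k] = 1.0            # coefficient of ln R_k
        rows.append(r)
        rhs.append(np.log(dn))
    for k, refl in control.items():      # control-point equations
        r = np.zeros(n_unk)
        r[n_images + k] = 1.0
        rows.append(r)
        rhs.append(np.log(refl))
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(x[:n_images]), np.exp(x[n_images:])  # a_j, R_k
```

The redundancy of the overlapping images is what makes the system solvable: each tie point observed in several images ties the per-image factors together, and a single control point fixes the gauge.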
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Unmanned aerial vehicles (UAVs) represent a quickly evolving technology, broadening the availability of remote sensing tools to small-scale research groups across a variety of scientific fields. Development of UAV platforms requires broad technical skills covering platform development, data post-processing, and image analysis. UAV development is constrained by a need to balance technological accessibility, flexibility in application and quality in image data. In this study, the quality of UAV imagery acquired by a miniature 6-band multispectral imaging sensor was improved through the application of practical image-based sensor correction techniques. Three major components of sensor correction were focused upon: noise reduction, sensor-based modification of incoming radiance, and lens distortion. Sensor noise was reduced through the use of dark offset imagery. Sensor modifications through the effects of filter transmission rates, the relative monochromatic efficiency of the sensor and the effects of vignetting were removed through a combination of spatially/spectrally dependent correction factors. Lens distortion was reduced through the implementation of the Brown–Conrady model. Data post-processing serves dual roles in data quality improvement, and the identification of platform limitations and sensor idiosyncrasies. The proposed corrections improve the quality of the raw multispectral imagery, facilitating subsequent quantitative image analysis. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Imaging using lightweight, unmanned airborne vehicles (UAVs) is one of the most rapidly developing fields in remote sensing technology. 
The new, tunable, Fabry-Perot interferometer-based (FPI) spectral camera, which weighs less than 700 g, makes it possible to collect spectrometric image blocks with stereoscopic overlaps using light-weight UAV platforms. This new technology is highly relevant, because it opens up new possibilities for measuring and monitoring the environment, which is becoming increasingly important for many environmental challenges. Our objectives were to investigate the processing and use of this new type of image data in precision agriculture. We developed the entire processing chain from raw images up to georeferenced reflectance images, digital surface models and biomass estimates. The processing integrates photogrammetric and quantitative remote sensing approaches. We carried out an empirical assessment using FPI spectral imagery collected at an agricultural wheat test site in the summer of 2012. Poor weather conditions during the campaign complicated the data processing, but this is one of the challenges that are faced in operational applications. The <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> A novel hyperspectral measurement system for unmanned aerial vehicles (UAVs) in the visible to near infrared (VIS/NIR) range (350-800 nm) was developed based on the Ocean Optics STS microspectrometer. The ultralight device relies on small open source electronics and weighs a ready-to-fly 216 g. The airborne spectrometer is wirelessly synchronized to a second spectrometer on the ground for simultaneous white reference collection. In this paper, the performance of the system is investigated and specific issues such as dark current correction or second order effects are addressed. Full width at half maximum was between 2.4 and 3.0 nm depending on the spectral band. 
The functional system was tested in flight at a 10-m altitude against a current field spectroscopy gold standard device Analytical Spectral Devices Field Spec 4 over an agricultural site. A highly significant correlation was found in reflection comparing both measurement approaches. Furthermore, the aerial measurements have a six times smaller standard deviation than the hand held measurements. Thus, the present spectrometer opens a possibility for low-cost but high-precision field spectroscopy from UAVs. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Abstract We present the Airborne Prism Experiment (APEX), its calibration and subsequent radiometric measurements as well as Earth science applications derived from this data. APEX is a dispersive pushbroom imaging spectrometer covering the solar reflected wavelength range between 372 and 2540 nm with nominal 312 (max. 532) spectral bands. APEX is calibrated using a combination of laboratory, in-flight and vicarious calibration approaches. These are complemented by using a forward and inverse radiative transfer modeling approach, suitable to further validate APEX data. We establish traceability of APEX radiances to a primary calibration standard, including uncertainty analysis. We also discuss the instrument simulation process ranging from initial specifications to performance validation. In a second part, we present Earth science applications using APEX. They include geometric and atmospheric compensated as well as reflectance anisotropy minimized Level 2 data. Further, we discuss retrieval of aerosol optical depth as well as vertical column density of NOx, a radiance data-based coupled canopy–atmosphere model, and finally measuring sun-induced chlorophyll fluorescence (Fs) and infer plant pigment content. The results report on all APEX specifications including validation. 
APEX radiances are traceable to a primary standard with 625 for all spectral bands. Radiance based vicarious calibration is traceable to a secondary standard with ≤ 6.5% uncertainty. Except for inferring plant pigment content, all applications are validated using in-situ measurement approaches and modeling. Even relatively broad APEX bands (FWHM of 6 nm at 760 nm) can assess Fs with modeling agreements as high as R 2 = 0.87 (relative RMSE = 27.76%). We conclude on the use of high resolution imaging spectrometers and suggest further development of imaging spectrometers supporting science grade spectroscopy measurements. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Variations in photosynthesis still cause substantial uncertainties in predicting photosynthetic CO2 uptake rates and monitoring plant stress. Changes in actual photosynthesis that are not related to greenness of vegetation are difficult to measure by reflectance based optical remote sensing techniques. Several activities are underway to evaluate the sun-induced fluorescence signal on the ground and on a coarse spatial scale using space-borne imaging spectrometers. Intermediate-scale observations using airborne-based imaging spectroscopy, which are critical to bridge the existing gap between small-scale field studies and global observations, are still insufficient. Here we present the first validated maps of sun-induced fluorescence in that critical, intermediate spatial resolution, employing the novel airborne imaging spectrometer HyPlant. HyPlant has an unprecedented spectral resolution, which allows for the first time quantifying sun-induced fluorescence fluxes in physical units according to the Fraunhofer Line Depth Principle that exploits solar and atmospheric absorption bands. 
Maps of sun-induced fluorescence show a large spatial variability between different vegetation types, which complement classical remote sensing approaches. Different crop types largely differ in emitting fluorescence that additionally changes within the seasonal cycle and thus may be related to the seasonal activation and deactivation of the photosynthetic machinery. We argue that sun-induced fluorescence emission is related to two processes: (i) the total absorbed radiation by photosynthetically active chlorophyll; and (ii) the functional status of actual photosynthesis and vegetation stress. <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Abstract This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. First, we describe and apply methods to radiometrically characterize and calibrate these cameras. Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion. The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. 
The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52 – 59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture. The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand. <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> In this study we present a hyperspectral flying goniometer system, based on a rotary-wing unmanned aerial vehicle (UAV) equipped with a spectrometer mounted on an active gimbal. We show that this approach may be used to collect multiangular hyperspectral data over vegetated environments. The pointing and positioning accuracy are assessed using structure from motion and vary from σ = 1° to 8° in pointing and σ = 0.7 to 0.8 m in positioning. We use a wheat dataset to investigate the influence of angular effects on the NDVI, TCARI and REIP vegetation indices. Angular effects caused significant variations on the indices: NDVI = 0.83–0.95; TCARI = 0.04–0.116; REIP = 729–735 nm. Our analysis highlights the necessity to consider angular effects in optical sensors when observing vegetation. 
We compare the measurements of the UAV goniometer to the angular modules of the SCOPE radiative transfer model. Model and measurements are in high accordance (r2 = 0.88) in the infrared region at angles close to nadir; in contrast the comparison show discrepancies at low tilt angles (r2 = 0.25). This study demonstrates that the UAV goniometer is a promising approach for the fast and flexible assessment of angular effects. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Raman spectroscopy is an important tool in understanding chemical components of various materials. However, the excessive weight and energy consumption of a conventional CCD-based Raman spectrometer forbids its applications under extreme conditions, including unmanned aircraft vehicles (UAVs) and Mars/Moon rovers. In this article, we present a highly sensitive, shot-noise–limited, and ruggedized Raman signal acquisition using a time-correlated photon-counting system. Compared with conventional Raman spectrometers, over 95% weight, 65% energy consumption, and 70% cost could be removed through this design. This technique allows space- and UAV-based Raman spectrometers to robustly perform hyperspectral Raman acquisitions without excessive energy consumption. <s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Reflectance anisotropy is a signal that contains information on the optical and structural properties of a surface and can be studied by performing multi-angular reflectance measurements that are often done using cumbersome goniometric measurements. 
In this paper we describe an innovative and fast method where we use a hyperspectral pushbroom spectrometer mounted on a multirotor unmanned aerial vehicle (UAV) to perform such multi-angular measurements. By hovering the UAV above a surface while rotating it around its vertical axis, we were able to sample the reflectance anisotropy within the field of view of the spectrometer, covering all view azimuth directions up to a 30° view zenith angle. We used this method to study the reflectance anisotropy of barley, potato, and winter wheat at different growth stages. The reflectance anisotropy patterns of the crops were interpreted by analysis of the parameters obtained by fitting of the Rahman-Pinty-Verstraete (RPV) model at a 5-nm interval in the 450–915 nm range. To demonstrate the results of our method, we firstly present measurements of barley and winter wheat at two different growth stages. On the first measuring day, barley and winter wheat had structurally comparable canopies and displayed similar anisotropic reflectance patterns. On the second measuring day the anisotropy of crops differed significantly due to the crop-specific development of grain heads in the top layer of their canopies. Secondly, we show how the anisotropy is reduced for a potato canopy when it grows from an open row structure to a closed canopy. In this case, especially the backward scattering intensity was strongly diminished due to the decrease in shadowing effects that were caused by the potato rows that were still present on the first measuring day. The results of this study indicate that the presented method is capable of retrieving anisotropic reflectance characteristics of vegetation canopies and that it is a feasible alternative for field goniometer measurements. 
<s> BIB009 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Hyperspectral imaging (HSI) is an exciting and rapidly expanding area of instruments and technology in passive remote sensing. Due to quickly changing applications, the instruments are evolving to suit new uses and there is a need for consistent definition, testing, characterization and calibration. This paper seeks to outline a broad prescription and recommendations for basic specification, testing and characterization that must be done on Visible Near Infra-Red grating-based sensors in order to provide calibrated absolute output and performance or at least relative performance that will suit the user’s task. The primary goal of this paper is to provide awareness of the issues with performance of this technology and make recommendations towards standards and protocols that could be used for further efforts in emerging procedures for national laboratory and standards groups. <s> BIB010 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Both inductive and deductive approaches based on hyperspectral remote sensing data require abundant observation opportunities under various conditions due to the high dimensionality of the hyperspectral data. With the recent advent of low-cost lightweight unmanned aerial vehicles (UAVs), UAVs for low-altitude aerial observation are becoming commodities rather than special equipment. Therefore, the appearance of low-cost hyperspectral imagers is anticipated for aerial hyperspectral sensing via UAVs. In this paper, we describe the development of a low-cost hyperspectral imager based on a whiskbroom scanning mechanism. 
The main components of the developed system include an optical fiber bundle, a swing mirror, and compact spectrometers. An image formed by an objective lens is quantized into a set of pixels by a two-dimensional array of quartz fiber-optic cables at one end of an optical fiber bundle. The quantized image travels to the other end of the bundle, inside of which a swing mirror is used for cross-track scanning. The light in each pixel of the quantized image is then measured using a compact spectrometer. Calculated reflectances in close-range measurements of color checkered patterns were spatially and spectrally accurate. In an aerial measurement of a coastal area from a 20-m altitude via a lightweight UAV, a hyperspectral image with a 0.5-m spatial resolution and an 8-m swath was acquired. Based on pattern matching using cross correlation, classification of three classes of marine macrophyte beds, agar, coralline, and sand realized overall accuracies of 0.755 (diffuse dominant illumination) and 0.719 (direct sunlight dominant illumination). <s> BIB011 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Low-altitude hyperspectral observation systems using aerial observation with unmanned aerial vehicles (UAVs) have advantages over satellite systems with respect to frequency, accuracy, and spatial resolution. Although low-cost lightweight UAVs have become available in recent years, the current price ranges of lightweight pushbroom and snapshot hyperspectral sensors remain high. For sustainable operation of UAV-mounted hyperspectral sensing, the challenge in production has been shifted from the size and weight to the cost of the lightweight hyperspectral sensors. In this paper, we develop a low-cost, lightweight whiskbroom hyperspectral imaging system. The gross weight of the sensor is 1200 g. 
The spectral range of the 256-band spectrometer extends from 340 to 750 nm with a 14-nm spectral resolution. The viewing angle across the flight direction is controlled by the rotation of an eight-sided polygon mirror. When the exposure time, flight altitude, flight speed, and focal length of the optical lens are 3.2 ms, 10 m, 10 m/s, and 8 mm, respectively, then the estimated values of the swath and the area coverage per second are 13.4 m and 134.1 $\hbox{m}^2$/s, respectively. The spatial resolution is 0.97 (m) (flight direction) $\times$ 0.46 (m) (scanning direction). In preliminary close-range measurements with a 3.2-ms exposure time per area and a 1224-ms rotation period of the polygon mirror, the reflected light from 12 areas of a printed checkered color pattern in the moving direction were measured. We found that the calculated reflectance based on the measurements is spatially consistent and spectrally accurate. <s> BIB012 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Drone-borne hyperspectral imaging is a new and promising technique for fast and precise acquisition, as well as delivery of high-resolution hyperspectral data to a large variety of end-users. Drones can overcome the scale gap between field and air-borne remote sensing, thus providing high-resolution and multi-temporal data. They are easy to use, flexible and deliver data within cm-scale resolution. So far, however, drone-borne imagery has prominently and successfully been almost solely used in precision agriculture and photogrammetry. Drone technology currently mainly relies on structure-from-motion photogrammetry, aerial photography and agricultural monitoring. Recently, a few hyperspectral sensors became available for drones, but complex geometric and radiometric effects complicate their use for geology-related studies. 
Using two examples, we first show that precise corrections are required for any geological mapping. We then present a processing toolbox for frame-based hyperspectral imaging systems adapted for the complex correction of drone-borne hyperspectral imagery. The toolbox performs sensor- and platform-specific geometric distortion corrections. Furthermore, a topographic correction step is implemented to correct for rough terrain surfaces. We recommend the c-factor-algorithm for geological applications. To our knowledge, we demonstrate for the first time the applicability of the corrected dataset for lithological mapping and mineral exploration. <s> BIB013 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Viewing and illumination geometry has a strong influence on optical measurements of natural surfaces due to their anisotropic reflectance properties. Typically, cameras on-board unmanned aerial vehicles (UAVs) are affected by this because of their relatively large field of view (FOV) and thus large range of viewing angles. In this study, we investigated the magnitude of reflectance anisotropy effects in the 500–900 nm range, captured by a frame camera mounted on a UAV during a standard mapping flight. After orthorectification and georeferencing of the images collected by the camera, we calculated the viewing geometry of all observations of each georeferenced ground pixel, forming a dataset with multi-angular observations. We performed UAV flights on two days during the summer of 2016 over an experimental potato field where different zones in the field received different nitrogen fertilization treatments. These fertilization levels caused variation in potato plant growth and thereby differences in structural properties such as leaf area index (LAI) and canopy cover. 
We fitted the Rahman–Pinty–Verstraete (RPV) model through the multi-angular observations of each ground pixel to quantify, interpret, and visualize the anisotropy patterns in our study area. The Θ parameter of the RPV model, which controls the proportion of forward and backward scattering, showed strong correlation with canopy cover, where in general an increase in canopy cover resulted in a reduction of backward scattering intensity, indicating that reflectance anisotropy contains information on canopy structure. In this paper, we demonstrated that anisotropy data can be extracted from measurements using a frame camera, collected during a typical UAV mapping flight. Future research will focus on how to use the anisotropy signal as a source of information for estimation of physical vegetation properties. <s> BIB014 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Small unmanned aerial vehicle (UAV) based remote sensing is a rapidly evolving technology. Novel sensors and methods are entering the market, offering completely new possibilities to carry out remote sensing tasks. Three-dimensional (3D) hyperspectral remote sensing is a novel and powerful technology that has recently become available to small UAVs. This study investigated the performance of UAV-based photogrammetry and hyperspectral imaging in individual tree detection and tree species classification in boreal forests. Eleven test sites with 4151 reference trees representing various tree species and developmental stages were collected in June 2014 using a UAV remote sensing system equipped with a frame format hyperspectral camera and an RGB camera in highly variable weather conditions. Dense point clouds were measured photogrammetrically by automatic image matching using high resolution RGB images with a 5 cm point interval. 
Spectral features were obtained from the hyperspectral image blocks, the large radiometric variation of which was compensated for by using a novel approach based on radiometric block adjustment with the support of in-flight irradiance observations. Spectral and 3D point cloud features were used in the classification experiment with various classifiers. The best results were obtained with Random Forest and Multilayer Perceptron (MLP) which both gave 95% overall accuracies and an F-score of 0.93. Accuracy of individual tree identification from the photogrammetric point clouds varied between 40% and 95%, depending on the characteristics of the area. Challenges in reference measurements might also have reduced these numbers. Results were promising, indicating that hyperspectral 3D remote sensing was operational from a UAV platform even in very difficult conditions. These novel methods are expected to provide a powerful tool for automating various environmental close-range remote sensing tasks in the very near future. <s> BIB015 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Sensors <s> Traditional imagery—provided, for example, by RGB and/or NIR sensors—has proven to be useful in many agroforestry applications. However, it lacks the spectral range and precision to profile materials and organisms that only hyperspectral sensors can provide. This kind of high-resolution spectroscopy was firstly used in satellites and later in manned aircraft, which are significantly expensive platforms and extremely restrictive due to availability limitations and/or complex logistics. More recently, UAS have emerged as a very popular and cost-effective remote sensing technology, composed of aerial platforms capable of carrying small-sized and lightweight sensors. 
Meanwhile, hyperspectral technology developments have been consistently resulting in smaller and lighter sensors that can currently be integrated in UAS for either scientific or commercial purposes. The hyperspectral sensors’ ability for measuring hundreds of bands raises complexity when considering the sheer quantity of acquired data, whose usefulness depends on both calibration and corrective tasks occurring in pre- and post-flight stages. Further steps regarding hyperspectral data processing must be performed towards the retrieval of relevant information, which provides the true benefits for assertive interventions in agricultural crops and forested areas. Considering the aforementioned topics and the goal of providing a global view focused on hyperspectral-based remote sensing supported by UAV platforms, a survey including hyperspectral sensors, inherent data processing and applications focusing both on agriculture and forestry—wherein the combination of UAV and hyperspectral sensors plays a center role—is presented in this paper. Firstly, the advantages of hyperspectral data over RGB imagery and multispectral data are highlighted. Then, hyperspectral acquisition devices are addressed, including sensor types, acquisition modes and UAV-compatible sensors that can be used for both research and commercial purposes. Pre-flight operations and post-flight pre-processing are pointed out as necessary to ensure the usefulness of hyperspectral data for further processing towards the retrieval of conclusive information. With the goal of simplifying hyperspectral data processing—by isolating the common user from the processes’ mathematical complexity—several available toolboxes that allow a direct access to level-one hyperspectral data are presented. Moreover, research works focusing the symbiosis between UAV-hyperspectral for agriculture and forestry applications are reviewed, just before the paper’s conclusions. <s> BIB016
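The "sheer quantity of acquired data" noted in the survey above can be made concrete with a back-of-the-envelope calculation of uncompressed cube size; the scene dimensions and bit depth below are illustrative and not taken from any reviewed sensor:

```python
def cube_size_gb(rows, cols, bands, bytes_per_sample=2):
    """Uncompressed size of a hyperspectral image cube in gigabytes (GiB)."""
    return rows * cols * bands * bytes_per_sample / 1024**3

# A modest 1000 x 1000 pixel scene with 270 bands, stored as 16-bit
# integers, already occupies roughly half a gigabyte:
size = cube_size_gb(1000, 1000, 270)
```

Multiplying this by the number of flight lines in a campaign quickly explains why on-board storage and post-processing throughput are recurring concerns in the reviewed studies.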
During the last 10 years, the number of (commercial) sensors tailored for UAV sensing systems has rapidly increased. Today, sensors are able to capture data faster and with much higher spatial resolution, which allows flying higher and faster, and covering a much larger area. While most of the UAV sensors still cannot compete with their bigger counterparts carried on airplanes such as the CASI [181] , airborne prism experiment (APEX; BIB004 ), HyPlant BIB005 , the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS; [184] ) or NASA Goddard's LiDAR, Hyperspectral, and Thermal Airborne Imager (G-LiHT; ), UAV point and pushbroom sensing systems, in particular, have been demonstrated to fill a unique niche for a large variety of applications and research purposes. We expect this trend to continue, since manufacturers of professional airborne sensors such as HySpex and Specim have now also started to build UAV sensors . At the same time, UAV remote sensing has grown to form its own discipline with research particularly directed to investigating and improving the quality of small and lightweight sensors BIB003 BIB001 and further developing data processing algorithms to fit the ultra-high resolution data, including quality assurance approaches BIB006 BIB013 . Moreover, innovative approaches empowered by the new technology are developed that go beyond the classical capabilities of remote sensing platforms, such as rapid BRDF quantification BIB007 BIB014 BIB009 and simultaneous spectral and 3D mapping BIB006 BIB002 BIB015 . Table 2 reflects these developments by summarizing key publications on novel sensors, concepts, or methods for calibration, integration, or data pre-processing for UAV spectral sensors and data. Furthermore, the interested reader is referred to BIB016 for a comprehensive list of spectral imaging sensors that extend the examples in this manuscript. 
Due to the variety of sensors and their configurations, the classical distinction between hyperspectral and multispectral is becoming blurred. Thus, every published study should report the specific band configuration of the sensor used. Since so many spectral sensors with different configurations have appeared, it has become difficult to compare results between studies; making the sensor configuration transparent is the first step in addressing this issue. However, a prerequisite for such information is a comprehensive characterization and calibration of the sensing system. While the interested reader is referred to the calibration studies in Table 2 and Jablonski et al. BIB010 , we see the main responsibility as lying with the camera manufacturers. This also includes the correct usage of common terminology (e.g., spectral sampling interval versus FWHM). It is important to note that there is most likely no sensor that is able to meet all needs. When selecting a sensor, users are usually confronted with a trade-off between spatial resolution, spectral resolution, and coverage. Generally, a higher spatial resolution entails a lower spectral resolution, due to physical constraints in sensor design. Moreover, larger areas are mostly covered by flying higher, which in turn coarsens the ground sampling distance. We expect that more UAV sensors will become available in the near future and identify two trends. One trend is toward more complex and expensive cameras that capture many bands or implement new techniques to capture spectral information (e.g., BIB011 BIB012 BIB008 ) with ever smaller and more lightweight sensors. These sensors allow researchers to conduct research on spectral sensing and identify promising bands for different applications.
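The altitude-versus-resolution trade-off follows directly from the standard pinhole-camera relation GSD = H · p / f (flight altitude times pixel pitch over focal length). A minimal sketch, with all camera parameters hypothetical:

```python
def ground_sampling_distance(altitude_m, pixel_pitch_um, focal_length_mm):
    """Ground sampling distance (m/pixel) of a nadir-looking frame camera.

    Standard pinhole relation: GSD = H * p / f.
    """
    pixel_pitch_m = pixel_pitch_um * 1e-6
    focal_length_m = focal_length_mm * 1e-3
    return altitude_m * pixel_pitch_m / focal_length_m

# Doubling the flight altitude doubles (i.e., coarsens) the GSD:
gsd_50m = ground_sampling_distance(50, 5.5, 8)    # ~3.4 cm/pixel
gsd_100m = ground_sampling_distance(100, 5.5, 8)  # ~6.9 cm/pixel
```

The same relation shows why covering a larger area by flying higher necessarily costs spatial detail unless the sensor itself changes.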
Another trend is toward more consumer-oriented cameras that are relatively easy to use and allow standardized tasks to be carried out, such as the acquisition of NDVI imagery. Table 2. Key publications on novel sensors, concepts, or methods for calibration (C), integration (I), or data pre-processing (P) for UAV spectral sensors and data. RGB: red-green-blue.
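For reference, the NDVI mentioned above is the normalized difference of near-infrared and red reflectance, bounded to [-1, 1]. A minimal per-pixel sketch (band reflectance values are illustrative):

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Normalized Difference Vegetation Index, computed per pixel.

    NDVI = (NIR - Red) / (NIR + Red); eps avoids division by zero
    over dark pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Dense vegetation reflects strongly in the NIR and absorbs red light,
# so it scores high; bare soil scores near zero:
vegetation = ndvi(0.45, 0.05)
bare_soil = ndvi(0.30, 0.25)
```

Which sensor bands map to "NIR" and "red" depends on the specific band configuration, which is exactly why that configuration should be reported, as argued above.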
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Structure from Motion (SfM) <s> Sensor miniaturisation, improved battery technology and the availability of low-cost yet advanced Unmanned Aerial Vehicles (UAV) have provided new opportunities for environmental remote sensing. The UAV provides a platform for close-range aerial photography. Detailed imagery captured from micro-UAV can produce dense point clouds using multi-view stereopsis (MVS) techniques combining photogrammetry and computer vision. This study applies MVS techniques to imagery acquired from a multi-rotor micro-UAV of a natural coastal site in southeastern Tasmania, Australia. A very dense point cloud ( < 1–3 cm point spacing) is produced in an arbitrary coordinate system using full resolution imagery, whereas other studies usually downsample the original imagery. The point cloud is sparse in areas of complex vegetation and where surfaces have a homogeneous texture. Ground control points collected with Differential Global Positioning System (DGPS) are identified and used for georeferencing via a Helmert transformation. This study compared georeferenced point clouds to a Total Station survey in order to assess and quantify their geometric accuracy. The results indicate that a georeferenced point cloud accurate to 25–40 mm can be obtained from imagery acquired from ~50 m. UAV-based image capture provides the spatial and temporal resolution required to map and monitor natural landscapes. This paper assesses the accuracy of the generated point clouds based on field survey points. Based on our key findings we conclude that sub-decimetre terrain change (in this case coastal erosion) can be monitored. 
<s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Structure from Motion (SfM) <s> Abstract. The accurate determination of the height of agricultural crops helps to predict yield, biomass etc. These relationships are of great importance not only for crop production but also in grassland management, because the available biomass and food quality are valuable information. However there is no cost efficient and automatic system for the determination of the crop height available. 3D-point clouds generated from high resolution UAS imagery offer a new alternative. Two different approaches for crop height determination are presented. The "difference method" were the canopy height is determined by taking the difference between a current UAS-surface model and an existing digital terrain model (DTM) is the most suited and most accurate method. In situ measurements, vegetation indices and yield observations correlate well with the determined UAS crop heights. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Structure from Motion (SfM) <s> Image matching has a history of more than 50 years, with the first experiments performed with analogue procedures for cartographic and mapping purposes. The recent integration of computer vision algorithms and photogrammetric methods is leading to interesting procedures which have increasingly automated the entire image-based 3D modelling process. Image matching is one of the key steps in 3D modelling and mapping. This paper presents a critical review and analysis of four dense image-matching algorithms, available as open-source and commercial software, for the generation of dense point clouds. 
The eight datasets employed include scenes recorded from terrestrial and aerial blocks, acquired with convergent and normal (parallel axes) images, and with different scales. Geometric analyses are reported in which the point clouds produced with each of the different algorithms are compared with one another and also to ground-truth data. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Structure from Motion (SfM) <s> In unmanned aerial vehicle (UAV) photogrammetric surveys, the camera can be pre-calibrated or can be calibrated "on-the-job" using structure-from-motion and a self-calibrating bundle adjustment. This study investigates the impact on mapping accuracy of UAV photogrammetric survey blocks, the bundle adjustment and the 3D reconstruction process under a range of typical operating scenarios for centimetre-scale natural landform mapping (in this case, a coastal cliff). We demonstrate the sensitivity of the process to calibration procedures and the need for careful accuracy assessment. For this investigation, vertical (nadir or near-nadir) and oblique photography were collected with 80%–90% overlap and with accurately-surveyed (σ ≤ 2 mm) and densely-distributed ground control. This allowed various scenarios to be tested and the impact on mapping accuracy to be assessed. This paper presents the results of that investigation and provides guidelines that will assist with operational decisions regarding camera calibration and ground control for UAV photogrammetry. The results indicate that the use of either a robust pre-calibration or a robust self-calibration results in accurate model creation from vertical-only photography, and additional oblique photography may improve the results.
The results indicate that if a dense array of high accuracy ground control points are deployed and the UAV photography includes both vertical and oblique images, then either a pre-calibration or an on-the-job self-calibration will yield reliable models (pre-calibration RMSEXY = 7.1 mm and on-the-job self-calibration RMSEXY = 3.2 mm). When oblique photography was excluded from the on-the-job self-calibration solution, the accuracy of the model deteriorated (by 3.3 mm horizontally and 4.7 mm vertically). When the accuracy of the ground control was then degraded to replicate typical operational practice (σ = 22 mm), the accuracy of the model further deteriorated (e.g., on-the-job self-calibration RMSEXY went from 3.2–7.0 mm). Additionally, when the density of the ground control was reduced, the model accuracy also further deteriorated (e.g., on-the-job self-calibration RMSEXY went from 7.0–7.3 mm). However, our results do indicate that loss of accuracy due to sparse ground control can be mitigated by including oblique imagery. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Structure from Motion (SfM) <s> Recent advances in structure from motion (SfM) and dense matching algorithms enable surface reconstruction from unmanned aerial vehicle (UAV) images with high spatial resolution, allowing for new insights into earth surface processes. However, accuracy issues are inherent in parallel-axes UAV image configurations. In this study, the quality of digital elevation models (DEMs) is assessed using images from a simulated UAV flight. Five different SfM tools and three different cameras are compared.
If ground control points (GCPs) are not integrated into the adjustment process with parallel-axes image configurations, significant dome-effect systematic errors are observed, which can be reduced based on calibration parameters retrieved from a testfield captured with convergent images immediately before or after the UAV flight. A comparison between DEMs of a soil surface generated from UAV images and terrestrial laser-scanning data show that natural surfaces can be very accurately reconstructed from UAV images, even when GCPs are missing and simple geometric camera models are considered. <s> BIB005
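The "difference method" for crop height determination described above reduces to subtracting a digital terrain model from a current UAV-derived surface model (CHM = DSM − DTM). A minimal sketch with illustrative elevation values; function and variable names are our own:

```python
import numpy as np

def canopy_height_model(dsm, dtm, min_height=0.0):
    """Crop height via the 'difference method': CHM = DSM - DTM.

    dsm: surface model from a current UAV flight (top of canopy), m a.s.l.
    dtm: bare-earth terrain model, m a.s.l.
    Small negative differences (reconstruction noise) are clipped to zero.
    """
    chm = np.asarray(dsm, dtype=float) - np.asarray(dtm, dtype=float)
    return np.clip(chm, min_height, None)

dsm = np.array([[101.2, 101.5], [100.9, 100.8]])
dtm = np.array([[100.0, 100.1], [100.0, 100.9]])
heights = canopy_height_model(dsm, dtm)  # per-pixel crop heights in metres
```

In practice, the quality of the result hinges on the accuracy of both models, which ties back to the calibration and ground-control issues discussed in the abstracts above.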
SfM can provide a simpler solution for on-board sensor integration. Several open source (e.g., MicMac, VisualSFM, PMVS/CMVS, OpenMVG) and commercial software packages (e.g., Pix4D, Agisoft Photoscan) exist to carry out the SfM process. Several authors have investigated the performance of these packages and compared them to each other for different applications (e.g., BIB004 BIB001 BIB002 BIB005 BIB003 ). However, the development of algorithms and software is advancing fast, and the performance of the different solutions might change. The high geometric fidelity of the bundle adjustment in most SfM solutions means that the camera pose can be used to determine the position and orientation of the spectral sensor (provided that the SfM sensor and spectral sensor are accurately synchronized). To achieve a high absolute accuracy, either GCPs or on-board GNSS data are still required. SfM is the favored approach for 2D imagers, as it allows for a small and lightweight solution on-board the UAV. The advantages of this approach are that a lightweight and small machine vision camera can replace a high-grade on-board GNSS/IMU. In addition, 3D point clouds and DSMs can be derived as part of the SfM process (spectral and structural data products from one flight). Finally, high relative accuracy of the 3D model and orthophoto can be derived through the SfM process. The disadvantages of the SfM approach include the requirement of substantial on-board storage capacity for high-rate machine vision data. In addition, the post-processing of SfM data is computationally demanding. SfM requires high overlap between flight strips, which influences flight planning (and limits the size of the area that can be covered in a single flight). Finally, the high absolute accuracy of an SfM solution still requires accurate GCPs or on-board GNSS data.
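The overlap requirement's effect on flight planning can be approximated with simple footprint geometry: the along-track footprint of a nadir frame camera is H · s / f, and the forward overlap follows from the distance flown between triggers. A sketch under these assumptions (all values hypothetical):

```python
def image_footprint_m(altitude_m, sensor_size_mm, focal_length_mm):
    """Ground footprint of one image dimension for a nadir frame camera."""
    return altitude_m * sensor_size_mm / focal_length_mm

def forward_overlap(altitude_m, sensor_size_mm, focal_length_mm, base_m):
    """Fraction of along-track overlap between consecutive exposures.

    base_m: distance flown between two triggers (speed * trigger interval).
    """
    footprint = image_footprint_m(altitude_m, sensor_size_mm, focal_length_mm)
    return max(0.0, 1.0 - base_m / footprint)

# At 80 m altitude with a 24 mm lens and a 15.6 mm sensor height
# (52 m footprint), triggering every 10.4 m gives 80% forward overlap:
overlap = forward_overlap(80, 15.6, 24, 10.4)
```

The same calculation, applied across-track, fixes the flight-line spacing, and together they bound the area that can be covered in a single flight at a given overlap.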
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> An experiment to determine the most accurate and repeatable method for generating instrument inter‐calibration functions (ICFs) is described, based upon data collected with a dual‐beam GER1500 spectroradiometer system. The quality of reflectance data collected using a dual‐beam spectroradiometer system is reliant upon accurate inter‐calibration of the sensor pairs to take into account differences in their radiant sensitivity and spectral characteristics. A cos‐conical field‐based method for inter‐calibrating dual‐beam spectroradiometers was tested alongside laboratory inter‐calibration procedures. The field‐based method produced the most accurate results when a field‐derived ICF collected close in time was used to correct the spectral scan. A regression model to predict the ICF at a range of wavelengths was tested, using inputs of solar zenith angle, cosine of solar zenith angle and broadband diffuse‐to‐global irradiance ratios. The linear multiple regression model described up to 78% of the variability i... <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> Ground calibration targets (GCT) fulfil an essential role in vicarious calibration and atmospheric correction methodologies. However, assumptions are often made about the temporal stability of GCT reflectance. This letter presents results from a multi‐year study aimed at testing the temporal stability of a typical weathered concrete GCT in southern England. Very accurate measurements of hemispherical‐directional reflectance factors in the 400–1000 nm range were collected using a mobile dual‐beam spectroradiometer. 
Results demonstrated that the calibration surface was subject to seasonal growth of a biological material, which caused the reflectance factor to vary by a factor of two during the year (range = 16.4% reflectance at 670 nm). The spectral effect of this was most noticeable in field spectra collected in April. As environmental conditions became drier throughout the summer, concrete reflectance factors increased. Over multiple seasons the same patterns in reflectance factors repeated, indicating th... <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> In this study, the performances and accuracies of three methods for converting airborne hyperspectral spectrometer data to reflectance factors were characterized and compared. The “reflectance mode (RM)” method, which calibrates a spectrometer against a white reference panel prior to mounting on an aircraft, resulted in spectral reflectance retrievals that were biased and distorted. The magnitudes of these bias errors and distortions varied significantly, depending on time of day and length of the flight campaign. The “linear-interpolation (LI)” method, which converts airborne spectrometer data by taking a ratio of linearly-interpolated reference values from the preflight and post-flight reference panel readings, resulted in precise, but inaccurate reflectance retrievals. These reflectance spectra were not distorted, but were subject to bias errors of varying magnitudes dependent on the flight duration length. The “continuous panel (CP)” method uses a multi-band radiometer to obtain continuous measurements over a reference panel throughout the flight campaign, in order to adjust the magnitudes of the linear-interpolated reference values from the preflight and post-flight reference panel readings. 
Airborne hyperspectral reflectance retrievals obtained using this method were found to be the most accurate and reliable reflectance calibration method. The performances of the CP method in retrieving accurate reflectance factors were consistent throughout time of day and for various flight durations. Based on the dataset analyzed in this study, the uncertainty of the CP method has been estimated to be 0.0025 ± 0.0005 reflectance units for the wavelength regions not affected by atmospheric absorptions. The RM method can produce reasonable results only for a very short-term flight (e.g., < 15 minutes) conducted around a local solar noon. The flight duration should be kept shorter than 30 minutes for the LI method to produce results with reasonable accuracies. An important advantage of the CP method is that the method can be used for long-duration flight campaigns (e.g., 1-2 hours). Although this study focused on reflectance calibration of airborne spectrometer data, the methods evaluated in this study and the results obtained are directly applicable to ground spectrometer measurements. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> Abstract This paper describes a study aimed at quantifying uncertainty in field measurements of vegetation canopy hemispherical conical reflectance factors (HCRF). The use of field spectroradiometers is common for this purpose, but the reliability of such measurements is still in question. In this paper we demonstrate the impact of various measurement uncertainties on vegetation canopy HCRF, using a combined laboratory and field experiment employing three spectroradiometers of the same broad specification (GER 1500). 
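The reflectance-mode, linear-interpolation, and continuous-panel methods compared above all reduce to ratioing a target reading against a reference-panel reading valid at the acquisition time. As an illustrative sketch only (not the cited study's implementation; function and variable names, and the panel reflectance value, are hypothetical), the LI method can be written as:

```python
import numpy as np

def li_reflectance(dn_target, t_target, dn_ref_pre, t_pre,
                   dn_ref_post, t_post, panel_reflectance=0.99):
    """Linear-interpolation (LI) reflectance retrieval.

    The white-panel reading valid at the time of each target
    measurement is estimated by interpolating linearly between the
    pre-flight and post-flight panel readings; the reflectance
    factor is then the ratio of target to interpolated reference,
    scaled by the (near-Lambertian) panel reflectance.
    """
    # Interpolation weight for the acquisition time of the target
    w = (t_target - t_pre) / (t_post - t_pre)
    dn_ref = (1.0 - w) * np.asarray(dn_ref_pre) + w * np.asarray(dn_ref_post)
    return panel_reflectance * np.asarray(dn_target) / dn_ref
```

The CP method would replace the linearly interpolated `dn_ref` with values adjusted by the continuous panel-radiometer record, which is what removes the flight-duration-dependent bias described above.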
The results show that all three instruments performed similarly in the laboratory when a stable radiance source was measured (noise-equivalent radiance, NEΔL, in W m−2 sr−1 nm−1, in the range of 400–1000 nm). In contrast, field-derived standard uncertainties (u = SD of 10 consecutive measurements of the same surface measured in ideal atmospheric conditions) significantly differed from the lab-based uncertainty characterisation for two targets: a control (75% Spectralon panel) and a cropped grassland surface. Results indicated that field measurements made by a single instrument of the vegetation surface were reproducible to within ± 0.015 HCRF and of the control surface to within ± 0.006 HCRF (400–1000 nm, ± 1σ). Field measurements made by all instruments of the vegetation surface were reproducible to within ± 0.019 HCRF and of the control surface to within ± 0.008 HCRF (400–1000 nm, ± 1σ). Statistical analysis revealed that even though the field conditions were carefully controlled and the absolute values of u were small, different instruments yielded significantly different reflectance values for the same target. The results also show that laboratory-derived uncertainty quantities do not present a useful means of quantifying all uncertainties in the field. The paper demonstrates a simple method for u characterisation, using internationally accepted terms, in field scenarios. This provides an experiment-specific measure of u that helps to put measurements in context and forms the basis for comparison with other studies. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> The advent of routine collection of high-quality digital photography provides for traditional uses, as well as “remote sensing” uses such as the monitoring of environmental indicators.
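The field-based standard uncertainty u used above (the standard deviation of n consecutive measurements of the same surface) is straightforward to compute; the following minimal sketch (hypothetical names) applies it per band to repeated HCRF spectra:

```python
import numpy as np

def standard_uncertainty(hcrf_repeats, axis=0):
    """Standard uncertainty u of repeated HCRF measurements,
    following the convention above: u = sample standard deviation
    (ddof=1) of n consecutive measurements of the same surface.

    hcrf_repeats: array of shape (n_repeats, n_bands) or (n_repeats,).
    """
    hcrf_repeats = np.asarray(hcrf_repeats, dtype=float)
    return np.std(hcrf_repeats, axis=axis, ddof=1)
```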
A well-devised monitoring system, based on consistent data and methods, provides the opportunity to track and communicate changes in features of interest in a way that has not previously been possible. Data that are geometrically and radiometrically consistent are fundamental to establishing systems for monitoring. In this paper, we focus on models for the radiometric calibration of mosaics consisting of thousands of images. We apply the models to the data acquired by the Australian Commonwealth Scientific and Industrial Research Organisation and its partners as part of regular systematic acquisitions over the city of Perth for a project known as Urban Monitor. One goal of the project, and hence the model development, is to produce annually updated mosaics calibrated to reflectance at 0.2-m ground sample distance for an area of approximately 9600 km2. This equates to terabytes of data and, for frame-based instruments, tens of thousands of images. For the experiments considered in this paper, this requires mosaicking estimates derived from 3000 digital photographic frames, and the methods will shortly be expanded to 30 000+ frames. A key part of the processing is the removal of spectral variation due to the viewing geometry, typically attributed to the bidirectional reflectance distribution function (BRDF) of the land surface. A variety of techniques based on semiempirical BRDF kernels have been proposed in the literature for correcting the BRDF effect in single frames, but mosaics with many frames provide unique challenges. This paper presents and illuminates a complete empirical radiometric calibration method for digital aerial frame mosaics, based on a combined model that uses kernel-based techniques for BRDF correction and incorporates additive and multiplicative terms for correcting other effects, such as variations due to the sensor and atmosphere. 
Using ground truth, which consists of laboratory-measured white, gray, and black targets that were placed in the field at the time of acquisition, we calculate the fundamental limitations of each model, leading to an optimal result for each model type. We demonstrate estimates of ground reflectance that are accurate to approximately 10%, 5%, and 3% absolute reflectances for ground targets having reflectances of 90%, 40%, and 4%, respectively. <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> Imaging using lightweight, unmanned airborne vehicles (UAVs) is one of the most rapidly developing fields in remote sensing technology. The new, tunable, Fabry-Perot interferometer-based (FPI) spectral camera, which weighs less than 700 g, makes it possible to collect spectrometric image blocks with stereoscopic overlaps using light-weight UAV platforms. This new technology is highly relevant, because it opens up new possibilities for measuring and monitoring the environment, which is becoming increasingly important for many environmental challenges. Our objectives were to investigate the processing and use of this new type of image data in precision agriculture. We developed the entire processing chain from raw images up to georeferenced reflectance images, digital surface models and biomass estimates. The processing integrates photogrammetric and quantitative remote sensing approaches. We carried out an empirical assessment using FPI spectral imagery collected at an agricultural wheat test site in the summer of 2012. Poor weather conditions during the campaign complicated the data processing, but this is one of the challenges that are faced in operational applications. 
The <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> *Ecologists require spatially explicit data to relate structure to function. To date, heavy reliance has been placed on obtaining such data from remote-sensing instruments mounted on spacecraft or manned aircraft, although the spatial and temporal resolutions of the data are often not suited to local-scale ecological investigations. Recent technological innovations have led to an upsurge in the availability of unmanned aerial vehicles (UAVs) – aircraft remotely operated from the ground – and there are now many lightweight UAVs on offer at reasonable costs. Flying low and slow, UAVs offer ecologists new opportunities for scale-appropriate measurements of ecological phenomena. Equipped with capable sensors, UAVs can deliver fine spatial resolution data at temporal resolutions defined by the end user. Recent innovations in UAV platform design have been accompanied by improvements in navigation and the miniaturization of measurement technologies, allowing the study of individual organisms and their spatiotemporal dynamics at close range. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> Directional effects in airborne imaging spectrometer (IS) data are mainly caused by anisotropic reflectance behavior of surfaces, commonly described by bi-directional reflectance distribution functions (BRDF). The radiometric and spectral accuracy of IS data is known to be highly influenced by such effects, which prevents consistent comparison of products. Several models were developed to approximate surface reflectance anisotropy for multi-angular observations. 
Few studies were carried out using such models for airborne flight lines where only a single observation is available for each ground location. In the present work, we quantified and corrected reflectance anisotropy on a single airborne HyMap flight line using a Ross-Li model. We stratified the surface in two vegetation structural types (different in vertical structuring) using spectral angle mapping, to generate a structure dependent set of angular observations. We then derived a suite of products [indices (structure insensitive pigment index, normalized difference vegetation index, simple ratio index, and anthocyanin reflectance index) and inversion-based (SAIL/PROSPECT-leaf area index, Cw, Cdm, Cab)] from corrected and uncorrected images. Non-parametric analysis of variance (Kruskal-Wallis test) showed throughout significant improvements in products from corrected images. Data correction resulting in airborne nadir BRDF adjusted reflectance (aNBAR) showed uncertainty reductions from 60 to 100% (p-value = 0.05) as compared to uncorrected and nadir observations. Using sparse IS data acquisitions, the use of fully parametrized BRDF models is limited. Our normalization scheme is straightforward and can be applied with illumination and observation geometry being the only a priori information. We recommend aNBAR generation to precede any higher level airborne IS product generation based on reflectance data. <s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> Abstract This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. 
First, we describe and apply methods to radiometrically characterize and calibrate these cameras. Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion. The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52 – 59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture. The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand. 
<s> BIB009 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> Reflectance anisotropy is a signal that contains information on the optical and structural properties of a surface and can be studied by performing multi-angular reflectance measurements that are often done using cumbersome goniometric measurements. In this paper we describe an innovative and fast method where we use a hyperspectral pushbroom spectrometer mounted on a multirotor unmanned aerial vehicle (UAV) to perform such multi-angular measurements. By hovering the UAV above a surface while rotating it around its vertical axis, we were able to sample the reflectance anisotropy within the field of view of the spectrometer, covering all view azimuth directions up to a 30° view zenith angle. We used this method to study the reflectance anisotropy of barley, potato, and winter wheat at different growth stages. The reflectance anisotropy patterns of the crops were interpreted by analysis of the parameters obtained by fitting of the Rahman-Pinty-Verstraete (RPV) model at a 5-nm interval in the 450–915 nm range. To demonstrate the results of our method, we firstly present measurements of barley and winter wheat at two different growth stages. On the first measuring day, barley and winter wheat had structurally comparable canopies and displayed similar anisotropic reflectance patterns. On the second measuring day the anisotropy of crops differed significantly due to the crop-specific development of grain heads in the top layer of their canopies. Secondly, we show how the anisotropy is reduced for a potato canopy when it grows from an open row structure to a closed canopy. 
In this case, especially the backward scattering intensity was strongly diminished due to the decrease in shadowing effects that were caused by the potato rows that were still present on the first measuring day. The results of this study indicate that the presented method is capable of retrieving anisotropic reflectance characteristics of vegetation canopies and that it is a feasible alternative for field goniometer measurements. <s> BIB010 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> Small unmanned aerial vehicle (UAV) based remote sensing is a rapidly evolving technology. Novel sensors and methods are entering the market, offering completely new possibilities to carry out remote sensing tasks. Three-dimensional (3D) hyperspectral remote sensing is a novel and powerful technology that has recently become available to small UAVs. This study investigated the performance of UAV-based photogrammetry and hyperspectral imaging in individual tree detection and tree species classification in boreal forests. Eleven test sites with 4151 reference trees representing various tree species and developmental stages were collected in June 2014 using a UAV remote sensing system equipped with a frame format hyperspectral camera and an RGB camera in highly variable weather conditions. Dense point clouds were measured photogrammetrically by automatic image matching using high resolution RGB images with a 5 cm point interval. Spectral features were obtained from the hyperspectral image blocks, the large radiometric variation of which was compensated for by using a novel approach based on radiometric block adjustment with the support of in-flight irradiance observations. Spectral and 3D point cloud features were used in the classification experiment with various classifiers. 
The best results were obtained with Random Forest and Multilayer Perceptron (MLP) which both gave 95% overall accuracies and an F-score of 0.93. Accuracy of individual tree identification from the photogrammetric point clouds varied between 40% and 95%, depending on the characteristics of the area. Challenges in reference measurements might also have reduced these numbers. Results were promising, indicating that hyperspectral 3D remote sensing was operational from a UAV platform even in very difficult conditions. These novel methods are expected to provide a powerful tool for automating various environmental close-range remote sensing tasks in the very near future. <s> BIB011 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> Unmanned airborne vehicles (UAV) equipped with novel, miniaturized, 2D frame format hyper- and multispectral cameras make it possible to conduct remote sensing measurements cost-efficiently, with greater accuracy and detail. In the mapping process, the area of interest is covered by multiple, overlapping, small-format 2D images, which provide redundant information about the object. Radiometric correction of spectral image data is important for eliminating any external disturbance from the captured data. Corrections should include sensor, atmosphere and view/illumination geometry (bidirectional reflectance distribution function—BRDF) related disturbances. An additional complication is that UAV remote sensing campaigns are often carried out under difficult conditions, with varying illumination conditions and cloudiness. We have developed a global optimization approach for the radiometric correction of UAV image blocks, a radiometric block adjustment. 
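Conceptually, a radiometric block adjustment solves for relative correction parameters of all images simultaneously, using tie points observed in overlapping images. The following toy least-squares sketch (a purely multiplicative model with hypothetical names, far simpler than the cited adjustments, which also weight irradiance observations and model BRDF effects) illustrates the idea:

```python
import numpy as np

def block_adjust(obs):
    """Relative radiometric block adjustment, multiplicative model.

    obs: list of (image_index, point_index, dn) tie-point observations,
    where the same ground point is seen in several overlapping images.
    Solves dn ~ a_i * r_j in the log domain by linear least squares,
    with image 0 fixed as the reference (a_0 = 1).
    Returns per-image correction factors a and point reflectances r.
    """
    n_img = max(i for i, _, _ in obs) + 1
    n_pt = max(j for _, j, _ in obs) + 1
    A = np.zeros((len(obs), (n_img - 1) + n_pt))
    b = np.zeros(len(obs))
    for row, (i, j, dn) in enumerate(obs):
        if i > 0:
            A[row, i - 1] = 1.0          # coefficient of log a_i
        A[row, (n_img - 1) + j] = 1.0    # coefficient of log r_j
        b[row] = np.log(dn)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    a = np.concatenate(([1.0], np.exp(x[:n_img - 1])))
    r = np.exp(x[n_img - 1:])
    return a, r
```

Because the system is tied together by shared ground points, an illumination change that scales one image's signal is absorbed into that image's factor a_i rather than into the estimated reflectances.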
The objective of this study was to implement and assess a combined adjustment approach, including comprehensive consideration of the weighting of various observations. An empirical study was carried out using imagery of winter wheat crops captured with a hyperspectral 2D frame format camera. The dataset included four separate flights captured during a 2.5 h time period under sunny weather conditions. As outputs, we calculated orthophoto mosaics using the most nadir images and sampled multiple-view hyperspectral spectra for vegetation sample points utilizing multiple images in the dataset. The method provided an automated tool for radiometric correction, efficiently compensating for radiometric disturbances in the images. The global homogeneity factor improved from 12–16% to 4–6% with the corrections, and a reduction in disturbances could be observed in the spectra of the object points sampled from multiple overlapping images. Residuals in the grey and white reflectance panels were less than 5% of the reflectance for most of the spectral bands. <s> BIB012 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> ABSTRACT The objective of this investigation was to study and optimize a hyperspectral unmanned aerial vehicle (UAV)-based remote-sensing system for the Brazilian environment. Comprised mainly of forest and sugarcane, the study area was located in the western region of the State of Sao Paulo. A novel hyperspectral camera based on a tunable Fabry–Perot interferometer was mounted aboard a UAV due to its flexibility and capability to acquire data with a high temporal and spatial resolution. Five approaches designed to produce mosaics of hyperspectral images, which represent the hemispherical directional reflectance factor of targets in the Brazilian environment, are presented and evaluated.
The method considers the irradiance variation during image acquisition and the effects of the bidirectional reflectance distribution function. The main goal was achieved by comparing the spectral responses of radiometric reference targets acquired with a spectroradiometer in the field with those produced by the five differ... <s> BIB013 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> Abstract With the increasing availability of spectral sensors and consumer-grade data processing software, a democratization of imaging spectroscopy is taking place. In particular, novel lightweight 2D spectral imagers in combination with UAVs are increasingly being adapted for imaging spectroscopy. In contrast to traditional line-scanners, these sensors capture spectral information as a 2D image within every exposure. With computer vision algorithms embedded in consumer grade software packages, these data can be processed to hyperspectral digital surface models that hold spectral and 3D spatial information in very high resolution. To understand the spectral signal, however, one must comprehend the complexity of the capturing and data processing process in imaging spectroscopy with 2D imagers. This study establishes the theoretical background to comprehend the properties of spectral data acquired with 2D imagers and investigates how different data processing schemes influence the data. To improve the interpretability of a spectral signal derived for an area of interest (AOI), the specific field of view is introduced as a concept to understand the composition of pixels and their angular properties used to characterize a specific AOI within a remote sensing scene. These considerations are applied to a multi-temporal field study carried out under different illumination conditions in a barley field phenotyping experiment. 
It is shown that data processing significantly affects the angular properties of the spectral data and influences the apparent spectral signature. The largest differences are found in the red domain, where the signal differs by approximately 10% relative to a single nadir image. Even larger differences of approximately 14% are found in comparison with ground-based non-imaging field spectrometer measurements. The differences are explained by investigating the interaction between the angular properties of the data and canopy anisotropy, which are wavelength and growth stage dependent. Additionally, it is shown that common vegetation indices cannot normalize the differences and that the retrieval of chlorophyll is affected. In conclusion, this study helps to understand the process of imaging spectroscopy with 2D imagers and provides recommendations for future missions. <s> BIB014 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Radiometric Processing <s> In addition to single-angle reflectance data, multi-angular observations can be used as an additional information source for the retrieval of properties of an observed target surface. In this paper, we studied the potential of multi-angular reflectance data for the improvement of leaf area index (LAI) and leaf chlorophyll content (LCC) estimation by numerical inversion of the PROSAIL model. The potential for improvement of LAI and LCC was evaluated for both measured data and simulated data. The measured data was collected on 19 July 2016 by a frame-camera mounted on an unmanned aerial vehicle (UAV) over a potato field, where eight experimental plots of 30 × 30 m were designed with different fertilization levels. Dozens of viewing angles, covering the hemisphere up to around 30° from nadir, were obtained by a large forward and sideways overlap of collected images. 
Simultaneously to the UAV flight, in situ measurements of LAI and LCC were performed. Inversion of the PROSAIL model was done based on nadir data and based on multi-angular data collected by the UAV. Inversion based on the multi-angular data performed slightly better than inversion based on nadir data, indicated by the decrease in RMSE from 0.70 to 0.65 m2/m2 for the estimation of LAI, and from 17.35 to 17.29 μg/cm2 for the estimation of LCC, when nadir data were used and when multi-angular data were used, respectively. In addition to inversions based on measured data, we simulated several datasets at different multi-angular configurations and compared the accuracy of the inversions of these datasets with the inversion based on data simulated at nadir position. In general, the results based on simulated (synthetic) data indicated that when more viewing angles, more well distributed viewing angles, and viewing angles up to larger zenith angles were available for inversion, the most accurate estimations were obtained. Interestingly, when using spectra simulated at multi-angular sampling configurations as were captured by the UAV platform (view zenith angles up to 30°), already a huge improvement could be obtained when compared to solely using spectra simulated at nadir position. The results of this study show that the estimation of LAI and LCC by numerical inversion of the PROSAIL model can be improved when multi-angular observations are introduced. However, for the potato crop, PROSAIL inversion for measured data only showed moderate accuracy and slight improvements. <s> BIB015
Under stable illumination conditions, many radiometric calibration approaches are able to provide good-quality reflectance data from spectrometer observations. However, one of the benefits of UAVs stated in the literature is the ability to fly below clouds to capture data (e.g., BIB006 ). At the same time, a study by Hakala et al. reported that "fluctuating levels of cloudiness" influenced spectral UAV measurements by more than 100%. Section 4 reviewed the currently available options for transforming the spectral information captured by a sensor to at-object reflectance. Table 4 summarizes their suitability under different atmospheric conditions. Major challenges hindering the use of ARTM simulations for atmospheric correction are the uncertainties in the absolute radiometric calibration of the sensor and the inability to parameterize and model the atmospheric influence on the irradiance, in particular under unstable conditions. While a dual spectrometer approach requires cross-calibration of the sensors, it can compensate for illumination changes, but only if the secondary reference sensor and the UAV sensor are close enough. Miura and Huete BIB003 found that under clear and cloud-free conditions, an approach with a stationary second spectrometer on the ground outperforms methods where a reference measurement is taken only before or after the flight (and then interpolated). A properly stabilized secondary device carried by the UAV allows illumination changes to be corrected at the place of the measurement, provided the sun elevation is high enough for the cosine receptor to properly capture the illumination conditions. Although the implementation of this approach is more complex, it eases flight operations, since no ground equipment is needed.
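In its simplest form, the dual spectrometer approach ratios the target reading against a simultaneous reference reading, after applying an inter-calibration function (ICF) that accounts for the differing radiant sensitivities of the two instruments. A minimal per-band sketch (hypothetical names; real systems additionally need temporal matching, stabilization, and cosine-receptor corrections):

```python
import numpy as np

def reflectance_dual(dn_target, dn_reference, icf, panel_reflectance=0.99):
    """Dual-beam reflectance factor.

    dn_target:    reading of the primary (target-viewing) spectrometer
    dn_reference: simultaneous reading of the secondary spectrometer
                  viewing a reference panel (or irradiance receptor)
    icf:          inter-calibration function relating the two sensors,
                  determined beforehand by cross-calibration
    """
    return (panel_reflectance * np.asarray(icf)
            * np.asarray(dn_target) / np.asarray(dn_reference))
```

Because the reference reading is taken at the same instant as the target reading, a sudden illumination change scales numerator and denominator alike and cancels in the ratio, which is the core advantage over pre-/post-flight panel methods.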
We encourage manufacturers to build integrated dual spectrometer systems that provide accurate recordings under the dynamic conditions met in UAV remote sensing; challenges include rapid illumination changes, platform vibrations and movement, temperature effects, and others. For more information on cross-calibration and reflectance factors retrieved with radiometric reference panels and multiple spectrometers, the interested reader is referred to the works of Anderson et al. BIB007 BIB001 BIB004 BIB002 . Further challenges that remain unresolved with dual spectrometer systems include disturbances caused by object reflectance anisotropy, by shadows captured in a measurement (e.g., in part of an image) but not seen by the irradiance sensor, and by object topography. The ELM is an easy and straightforward approach for the radiometric correction of datasets acquired under constant illumination conditions, provided there is sufficient space to place the reference panels. ELM is particularly challenging in forest studies (e.g., BIB011 ), since the panels may have to be deployed in small openings inside the forest. Here, the illumination conditions do not correspond to those at the top of the canopy, because the surrounding canopy scatters and blocks part of the direct and diffuse sky radiance. In this case, radiometric block adjustment can be beneficial for carrying the calibration obtained from panels placed in an open area over to the area of interest, as long as the datasets are connected, preferably supported by an irradiance sensor on-board the UAV BIB011 . Radiometric block adjustment has been applied in studies with different settings to produce uniform image mosaics BIB012 BIB005 BIB013 . In principle, the method can compensate for illumination changes during the flight campaign based on the information contained in the images.
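The ELM mentioned above fits, per band, a linear mapping between the digital numbers observed over reference panels of known reflectance and that reflectance, and then applies the mapping to the whole dataset. A minimal sketch (hypothetical names; assumes at least two panels and constant illumination between panel and scene acquisition):

```python
import numpy as np

def elm_fit(panel_dns, panel_reflectances):
    """Empirical line method: per-band gain/offset mapping sensor DNs
    to surface reflectance, fitted from >= 2 reference panels.

    panel_dns:          array (n_panels, n_bands) of panel digital numbers
    panel_reflectances: array (n_panels, n_bands) of known reflectances
    """
    dns = np.asarray(panel_dns, dtype=float)
    rho = np.asarray(panel_reflectances, dtype=float)
    gains, offsets = [], []
    for band in range(dns.shape[1]):
        g, o = np.polyfit(dns[:, band], rho[:, band], 1)  # rho = g*DN + o
        gains.append(g)
        offsets.append(o)
    return np.array(gains), np.array(offsets)

def elm_apply(dn, gains, offsets):
    """Convert digital numbers to reflectance with the fitted lines."""
    return gains * np.asarray(dn, dtype=float) + offsets
```

With only two panels the fit is exact through both points; additional panels spanning the expected reflectance range make the regression more robust.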
Thus, no further equipment is needed on-board the UAV for the irradiance measurement, but irradiance recordings can also be integrated into the same adjustment process. In several studies, the method has provided the best uniformity over the entire image dataset when compared with approaches based on ground irradiance spectra measurement and on-board irradiance measurement BIB006 BIB013 . In Honkavaara et al. BIB006 and Hakala et al. , datasets were captured in illumination conditions varying from cloudy to sunny. The performances of approaches based on an irradiance radiometer on-board the UAV, an irradiance spectrometer on the ground, and the radiometric block adjustment were compared, and the radiometric block adjustment provided the best results. Similar conclusions were drawn by Miyoshi et al. BIB013 . The radiometric block adjustment also provides uniform mosaics for datasets composed of different flights with varying solar azimuth and elevation BIB012 . Although the radiometric block adjustment can compensate for illumination fluctuations, it is important to note that the radiometric resolution might be decreased when the sensor is underexposed, and thus, differences in reflectance may not be sufficiently resolved for certain analyses BIB009 . BRDF correction has mostly been carried out by means of empirical modeling. However, different surfaces (e.g., vegetation types) have different anisotropic behaviors, which makes empirical modeling challenging. With the multiple overlapping images provided by 2D imagers, it is now possible to retrieve the BRDF of different surfaces and use it to normalize the data. Additionally, incorporating structural information is seen as a way forward BIB008 . As noted by Aasen et al. BIB014 BIB009 , the combination of 3D and spectral information derived by 2D imagers is potentially suited for this purpose. In addition, the anisotropy itself can also be used as a source of information BIB015 BIB010 .
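Kernel-driven BRDF normalization, one form of the empirical modeling discussed above, adjusts each observation to a common (e.g., nadir) viewing geometry using fitted kernel weights. The sketch below uses only the isotropic and Ross-Thick volumetric terms (the Li geometric kernel of the full Ross-Li model is omitted for brevity; coefficient values and names are hypothetical and would be fitted per surface class):

```python
import numpy as np

def ross_thick(theta_s, theta_v, phi):
    """Ross-Thick volumetric scattering kernel (angles in radians):
    K = ((pi/2 - xi) cos(xi) + sin(xi)) / (cos(theta_s) + cos(theta_v)) - pi/4,
    with xi the phase angle between sun and view directions."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v)) - np.pi / 4)

def nadir_normalize(refl, theta_s, theta_v, phi, f_iso, f_vol):
    """Adjust an off-nadir reflectance to the nadir viewing geometry
    using fitted isotropic (f_iso) and volumetric (f_vol) weights."""
    modeled_obs = f_iso + f_vol * ross_thick(theta_s, theta_v, phi)
    modeled_nadir = f_iso + f_vol * ross_thick(theta_s, 0.0, 0.0)
    return refl * modeled_nadir / modeled_obs
```

In practice, the weights are estimated per band and per structural class from the multi-angular samples that overlapping 2D-imager acquisitions provide, which is exactly where the combination of 3D and spectral information becomes useful.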
New radiometric correction tools have been implemented in software packages for UAV image data processing. For example, Pix4D and Agisoft Photoscan offer options for radiometric correction, including sensor calibration-related corrections, irradiance-related correction utilizing irradiance information stored in the image EXIF file, and sun direction-related correction for some cameras. Moreover, there is the option of radiometric calibration using reflectance panels. Additionally, some cameras, such as the Parrot Sequoia, have an integrated irradiance sensor and GNSS receiver. These are much-needed developments, since they ease the use of spectral sensors and help exploit the potential of UAVs to fly below the clouds. Table 4. Overview of the top-of-canopy reflectance generation procedures and their applicability to different atmospheric (e.g., cloudiness) and irradiance (e.g., different intensities due to diurnal sun angle change) conditions. Additionally, the applicability to point (P), pushbroom (PP), and 2D imagers (2D) is indicated. +: suitable; -: not suitable.
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> Abstract Remote sensing imagery needs to be converted into tangible information which can be utilised in conjunction with other data sets, often within widely used Geographic Information Systems (GIS). As long as pixel sizes remained typically coarser than, or at the best, similar in size to the objects of interest, emphasis was placed on per-pixel analysis, or even sub-pixel analysis for this conversion, but with increasing spatial resolutions alternative paths have been followed, aimed at deriving objects that are made up of several pixels. This paper gives an overview of the development of object based methods, which aim to delineate readily usable objects from imagery while at the same time combining image processing and GIS functionalities in order to utilize spectral and contextual information in an integrative way. The most common approach used for building objects is image segmentation, which dates back to the 1970s. Around the year 2000 GIS and image processing started to grow together rapidly through object based image analysis (OBIA - or GEOBIA for geospatial object based image analysis). In contrast to typical Landsat resolutions, high resolution images support several scales within their images. Through a comprehensive literature review several thousand abstracts have been screened, and more than 820 OBIA-related articles comprising 145 journal papers, 84 book chapters and nearly 600 conference papers, are analysed in detail. It becomes evident that the first years of the OBIA/GEOBIA developments were characterised by the dominance of ‘grey’ literature, but that the number of peer-reviewed journal articles has increased sharply over the last four to five years. 
The pixel paradigm is beginning to show cracks and the OBIA methods are making considerable progress towards a spatially explicit information extraction workflow, such as is required for spatial planning as well as for many monitoring programmes. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> The remote detection of water stress in a citrus orchard was investigated using leaf-level measurements of chlorophyll fluorescence and Photochemical Reflectance Index (PRI) data, seasonal time-series of crown temperature and PRI, and high-resolution airborne imagery. The work was conducted in an orchard where a regulated deficit irrigation (RDI) experiment generated a gradient in water stress levels. Stomatal conductance (Gs) and water potential (Ψ) were measured over the season on each treatment block. The airborne data consisted of thermal and hyperspectral imagery acquired at the time of maximum stress differences among treatments, prior to the re-watering phase, using a miniaturized thermal camera and a micro-hyperspectral imager on board an unmanned aerial vehicle (UAV). The hyperspectral imagery was acquired at 40 cm resolution and 260 spectral bands in the 400-885 nm spectral range at 6.4 nm full width at half maximum (FWHM) spectral resolution and 1.85 nm sampling interval, enabling the identification of pure crowns for extracting radiance and reflectance hyperspectral spectra from each tree. The FluorMOD model was used to investigate the retrieval of chlorophyll fluorescence by applying the Fraunhofer Line Depth (FLD) principle using three spectral bands (FLD3), which demonstrated that fluorescence retrieval was feasible with the configuration of the UAV micro-hyperspectral instrument flown over the orchard.
Results demonstrated the link between seasonal PRI and crown temperature acquired from instrumented trees and field measurements of stomatal conductance and water potential. The sensitivity of PRI and Tc-Ta time-series to water stress levels demonstrated a time delay of PRI vs Tc-Ta during the recovery phase after re-watering started. At the time of the maximum stress difference among treatment blocks, the airborne imagery acquired from the UAV platform demonstrated that the crown temperature yielded the best coefficient of determination for Gs (r2 <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> Imaging using lightweight, unmanned airborne vehicles (UAVs) is one of the most rapidly developing fields in remote sensing technology. The new, tunable, Fabry-Perot interferometer-based (FPI) spectral camera, which weighs less than 700 g, makes it possible to collect spectrometric image blocks with stereoscopic overlaps using light-weight UAV platforms. This new technology is highly relevant, because it opens up new possibilities for measuring and monitoring the environment, which is becoming increasingly important for many environmental challenges. Our objectives were to investigate the processing and use of this new type of image data in precision agriculture. We developed the entire processing chain from raw images up to georeferenced reflectance images, digital surface models and biomass estimates. The processing integrates photogrammetric and quantitative remote sensing approaches. We carried out an empirical assessment using FPI spectral imagery collected at an agricultural wheat test site in the summer of 2012. Poor weather conditions during the campaign complicated the data processing, but this is one of the challenges that are faced in operational applications.
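The Fraunhofer Line Depth (FLD) principle mentioned above retrieves sun-induced fluorescence from measurements inside and just outside a narrow absorption line, assuming reflectance and fluorescence are equal at both bands. Below is a minimal sketch of the standard two-band FLD equation; the three-band FLD3 variant used in the cited study refines the estimates of the "outside" quantities, and the function name here is illustrative.

```python
def fld_fluorescence(e_in, e_out, l_in, l_out):
    """Standard two-band FLD retrieval of fluorescence F.

    Model: at-sensor radiance L = r * E / pi + F, with reflectance r and
    fluorescence F assumed equal inside (e_in, l_in) and outside
    (e_out, l_out) the absorption line. Eliminating r between the two
    equations and solving for F gives the expression below.
    """
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)
```

Because the solar irradiance drops sharply inside the absorption line while the fluorescence emission does not, the two measurements are enough to separate the reflected and emitted components.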
The <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> With increasing demand to support and accelerate progress in breeding for novel traits, the plant research community faces the need to accurately measure increasingly large numbers of plants and plant parameters. The goal is to provide quantitative analyses of plant structure and function relevant for traits that help plants better adapt to low-input agriculture and resource-limited environments. We provide an overview of the inherently multidisciplinary research in plant phenotyping, focusing on traits that will assist in selecting genotypes with increased resource use efficiency. We highlight opportunities and challenges for integrating noninvasive or minimally invasive technologies into screening protocols to characterize plant responses to environmental challenges for both controlled and field experimentation. Although technology evolves rapidly, parallel efforts are still required because large-scale phenotyping demands accurate reporting of at least a minimum set of information concerning experimental protocols, data management schemas, and integration with modeling. The journey toward systematic plant phenotyping has only just begun. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> One of the key advantages of a low-flying unmanned aircraft system UAS is its ability to acquire digital images at an ultrahigh spatial resolution of a few centimeters. 
Remote sensing of quantitative biochemical and biophysical characteristics of small-sized spatially fragmented vegetation canopies requires, however, not only high spatial, but also high spectral (i.e., hyperspectral) resolution. In this paper, we describe the design, development, airborne operations, calibration, processing, and interpretation of image data collected with a new hyperspectral unmanned aircraft system (HyperUAS). HyperUAS is a remotely controlled multirotor prototype carrying onboard a lightweight pushbroom spectroradiometer coupled with a dual frequency GPS and an inertial movement unit. The prototype was built to remotely acquire imaging spectroscopy data of 324 spectral bands (162 bands in a spectrally binned mode) with bandwidths between 4 and 5 nm at an ultrahigh spatial resolution of 2–5 cm. Three field airborne experiments, conducted over agricultural crops and over natural ecosystems of Antarctic mosses, proved operability of the system in standard field conditions, but also in a remote and harsh, low-temperature environment of East Antarctica. Experimental results demonstrate that HyperUAS is capable of delivering georeferenced maps of quantitative biochemical and biophysical variables of vegetation and of actual vegetation health state at an unprecedented spatial resolution of 5 cm. <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> During the last years commercial hyperspectral imaging sensors have been miniaturized and their performance has been demonstrated on Unmanned Aerial Vehicles (UAV). However currently the commercial hyperspectral systems still require minimum payload capacity of approximately 3 kg, forcing usage of rather large UAVs.
In this article we present a lightweight hyperspectral mapping system (HYMSY) for rotor-based UAVs, the novel processing chain for the system, and its potential for agricultural mapping and monitoring applications. The HYMSY consists of a custom-made pushbroom spectrometer (400–950 nm, 9 nm FWHM, 25 lines/s, 328 px/line), a photogrammetric camera, and a miniature GPS-Inertial Navigation System. The weight of HYMSY in ready-to-fly configuration is only 2.0 kg and it has been constructed mostly from off-the-shelf components. The processing chain uses a photogrammetric algorithm to produce a Digital Surface Model (DSM) and provides high accuracy orientation of the system over the DSM. The pushbroom data is georectified by projecting it onto the DSM with the support of photogrammetric orientations and the GPS-INS data. Since an up-to-date DSM is produced internally, no external data are required and the processing chain is capable to georectify pushbroom data fully automatically. The system has been adopted for several experimental flights related to agricultural and habitat monitoring applications. For a typical flight, an area of 2–10 ha was mapped, producing a RGB orthomosaic at 1–5 cm resolution, a DSM at 5–10 cm resolution, and a hyperspectral datacube at 10–50 cm resolution. <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> Constraints in field phenotyping capability limit our ability to dissect the genetics of quantitative traits, particularly those related to yield and stress tolerance (e.g., yield potential as well as increased drought, heat tolerance, and nutrient efficiency, etc.). The development of effective field-based high-throughput phenotyping platforms (HTPPs) remains a bottleneck for future breeding advances. 
However, progress in sensors, aeronautics, and high-performance computing are paving the way. Here, we review recent advances in field HTPPs, which should combine at an affordable cost, high capacity for data recording, scoring and processing, and non-invasive remote sensing methods, together with automated environmental data collection. Laboratory analyses of key plant parts may complement direct phenotyping under field conditions. Improvements in user-friendly data management together with a more powerful interpretation of results should increase the use of field HTPPs, therefore increasing the efficiency of crop genetic improvement to meet the needs of future generations. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these development and its implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. 
The ramifications of the different theoretical foundations between the ‘per-pixel paradigm’ and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm. <s> BIB008 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> In this study we present a hyperspectral flying goniometer system, based on a rotary-wing unmanned aerial vehicle (UAV) equipped with a spectrometer mounted on an active gimbal. We show that this approach may be used to collect multiangular hyperspectral data over vegetated environments. The pointing and positioning accuracy are assessed using structure from motion and vary from σ = 1° to 8° in pointing and σ = 0.7 to 0.8 m in positioning. We use a wheat dataset to investigate the influence of angular effects on the NDVI, TCARI and REIP vegetation indices. Angular effects caused significant variations on the indices: NDVI = 0.83–0.95; TCARI = 0.04–0.116; REIP = 729–735 nm. Our analysis highlights the necessity to consider angular effects in optical sensors when observing vegetation. We compare the measurements of the UAV goniometer to the angular modules of the SCOPE radiative transfer model. Model and measurements are in high accordance (r2 = 0.88) in the infrared region at angles close to nadir; in contrast the comparison show discrepancies at low tilt angles (r2 = 0.25). This study demonstrates that the UAV goniometer is a promising approach for the fast and flexible assessment of angular effects. 
<s> BIB009 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> Abstract This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. First, we describe and apply methods to radiometrically characterize and calibrate these cameras. Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion. The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52 – 59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture. 
The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand. <s> BIB010 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> An automatic thresholding algorithm was developed in an OBIA framework.The algorithm was tested in UAV images acquired on different herbaceous row crops.The main objective was to accurately discriminate vegetation vs bare soil.Classification accuracies about 90% were achieved.Two cameras were tested on board the UAV: visible, and visible+infrared. In precision agriculture, detecting the vegetation in herbaceous crops in early season is a first and crucial step prior to addressing further objectives such as counting plants for germination monitoring, or detecting weeds for early season site specific weed management. The ultra-high resolution of UAV images, and the powerful tools provided by the Object Based Image Analysis (OBIA) are the key in achieving this objective. The present research work develops an innovative thresholding OBIA algorithm based on the Otsu's method, and studies how the results of this algorithm are affected by the different segmentation parameters (scale, shape and compactness). Along with the general description of the procedure, it was specifically applied for vegetation detection in remotely-sensed images captured with two sensors (a conventional visible camera and a multispectral camera) mounted on an Unmanned Aerial Vehicle (UAV) and acquired over fields of three different herbaceous crops (maize, sunflower and wheat). 
The tests analyzed the performance of the OBIA algorithm for classifying vegetation coverage as affected by different automatically selected thresholds calculated in the images of two vegetation indices: the Excess Green (ExG) and the Normalized Difference Vegetation Index (NDVI). The segmentation scale parameter affected the vegetation index histograms, which led to changes in the automatic estimation of the optimal threshold value for the vegetation indices. The other parameters involved in the segmentation procedure (i.e., shape and compactness) showed minor influence on the classification accuracy. Increasing the object size, the classification error diminished until an optimum was reached. After this optimal value, increasing object size produced bigger errors. <s> BIB011 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> In this study we combined selected vegetation indices (VIs) and plant height information to estimate biomass in a summer barley experiment. The VIs were calculated from ground-based hyperspectral data and unmanned aerial vehicle (UAV)-based red green blue (RGB) imaging. In addition, the plant height information was obtained from UAV-based multi-temporal crop surface models (CSMs). The test site is a summer barley experiment comprising 18 cultivars and two nitrogen treatments located in Western Germany. We calculated five VIs from hyperspectral data. The normalised ratio index (NRI)-based index GnyLi (Gnyp et al., 2014) showed the highest correlation (R2 = 0.83) with dry biomass. In addition, we calculated three visible band VIs: the green red vegetation index (GRVI), the modified GRVI (MGRVI) and the red green blue VI (RGBVI), where the MGRVI and the RGBVI are newly developed VI.
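The automatic thresholding step described above can be illustrated with a plain Otsu computation on an Excess Green histogram. This is a per-pixel sketch that omits the object-based segmentation (scale, shape, compactness) machinery of the cited OBIA algorithm; the function names are illustrative.

```python
import numpy as np

def excess_green(r, g, b):
    """ExG = 2g - r - b on chromaticity-normalized bands."""
    s = r + g + b
    return (2 * g - r - b) / np.where(s == 0, 1, s)

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold that maximizes the between-class
    variance of a (roughly bimodal) histogram, here vegetation vs. soil."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    w0 = np.cumsum(p)              # probability of class 0 up to each bin
    mu = np.cumsum(p * centers)    # cumulative mean up to each bin
    mu_t = mu[-1]                  # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]
```

Pixels with ExG above the threshold are labeled vegetation; in the OBIA setting the same histogram analysis is applied to segment-level index values, which is why the segmentation scale shifts the optimal threshold.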
We found that the visible band VIs have potential for biomass prediction prior to heading stage. A robust estimate for biomass was obtained from the plant height models (R2 = 0.80–0.82). In a cross validation test, we compared plant height, selected VIs and their combination with plant height information. Combining VIs and plant height information by using multiple linear regression or multiple non-linear regression models performed better than the VIs alone. The visible band GRVI and the newly developed RGBVI are promising but need further investigation. However, the relationship between plant height and biomass produced the most robust results. In summary, the results indicate that plant height is competitive with VIs for biomass estimation in summer barley. Moreover, visible band VIs might be a useful addition to biomass estimation. The main limitation is that the visible band VIs work for early growing stages only. <s> BIB012 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> Miniaturized hyperspectral imaging sensors are becoming available to small unmanned airborne vehicle (UAV) platforms. Imaging concepts based on frame format offer an attractive alternative to conventional hyperspectral pushbroom scanners because they enable enhanced processing and interpretation potential by allowing for acquisition of the 3-D geometry of the object and multiple object views together with the hyperspectral reflectance signatures. The objective of this investigation was to study the performance of novel visible and near-infrared (VNIR) and short-wave infrared (SWIR) hyperspectral frame cameras based on a tunable Fabry–Perot interferometer (FPI) in measuring a 3-D digital surface model and the surface moisture of a peat production area.
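The visible-band indices named above have simple closed forms. The definitions below follow the formulations commonly attributed to this line of work (GRVI as a green-red normalized difference, with MGRVI and RGBVI as squared-band variants); they should be checked against the original paper before use.

```python
def grvi(g, r):
    """Green Red Vegetation Index: (G - R) / (G + R)."""
    return (g - r) / (g + r)

def mgrvi(g, r):
    """Modified GRVI, a squared-band variant: (G^2 - R^2) / (G^2 + R^2)."""
    return (g ** 2 - r ** 2) / (g ** 2 + r ** 2)

def rgbvi(r, g, b):
    """Red Green Blue Vegetation Index: (G^2 - B*R) / (G^2 + B*R)."""
    return (g ** 2 - b * r) / (g ** 2 + b * r)
```

All three require only the RGB bands of a standard UAV orthomosaic, which is why they can complement hyperspectral indices; combining them with CSM-derived plant height, as in the study above, is then an ordinary multiple regression.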
UAV image blocks were captured with ground sample distances (GSDs) of 15, 9.5, and 2.5 cm with the SWIR, VNIR, and consumer RGB cameras, respectively. Georeferencing showed consistent behavior, with accuracy levels better than GSD for the FPI cameras. The best accuracy in moisture estimation was obtained when using the reflectance difference of the SWIR band at 1246 nm and of the VNIR band at 859 nm, which gave a root mean square error (rmse) of 5.21 pp (pp is the mass fraction in percentage points) and a normalized rmse of 7.61%. The results are encouraging, indicating that UAV-based remote sensing could significantly improve the efficiency and environmental safety aspects of peat production. <s> BIB013 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> This study describes the development of a small hyperspectral Unmanned Aircraft System (HyUAS) for measuring Visible and Near-Infrared (VNIR) surface reflectance and sun-induced fluorescence, co-registered with high-resolution RGB imagery, to support field spectroscopy surveys and calibration and validation of remote sensing products. The system, namely HyUAS, is based on a multirotor platform equipped with a cost-effective payload composed of a VNIR non-imaging spectrometer and an RGB camera. The spectrometer is connected to a custom entrance optics receptor developed to tune the instrument field-of-view and to obtain systematic measurements of instrument dark-current. The geometric, radiometric and spectral characteristics of the instruments were characterized and calibrated through dedicated laboratory tests. The overall accuracy of HyUAS data was evaluated during a flight campaign in which surface reflectance was compared with ground-based reference measurements. 
HyUAS data were used to estimate spectral indices and far-red fluorescence for different land covers. RGB images were processed as a high-resolution 3D surface model using structure from motion algorithms. The spectral measurements were accurately geo-located and projected on the digital surface model. The overall results show that: (i) rigorous calibration enabled radiance and reflectance spectra from HyUAS with RRMSE < 10% compared with ground measurements; (ii) the low-flying UAS setup allows retrieving fluorescence in absolute units; (iii) the accurate geo-location of spectra on the digital surface model greatly improves the overall interpretation of reflectance and fluorescence data. In general, the HyUAS was demonstrated to be a reliable system for supporting high-resolution field spectroscopy surveys allowing one to collect systematic measurements at very detailed spatial resolution with a valuable potential for vegetation monitoring studies. Furthermore, it can be considered a useful tool for collecting spatially-distributed observations of reflectance and fluorescence that can be further used for calibration and validation activities of airborne and satellite optical images in the context of the upcoming FLEX mission and the VNIR spectral bands of optical Earth observation missions (i.e., Landsat, Sentinel-2 and Sentinel-3). <s> BIB014 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> Agriculture has seen many revolutions, whether the domestication of animals and plants a few thousand years ago, the systematic use of crop rotations and other improvements in farming practice a few hundred years ago, or the “green revolution” with systematic breeding and the widespread use of man-made fertilizers and pesticides a few decades ago. 
We suggest that agriculture is undergoing a fourth revolution triggered by the exponentially increasing use of information and communication technology (ICT) in agriculture. Autonomous, robotic vehicles have been developed for farming purposes, such as mechanical weeding, application of fertilizer, or harvesting of fruits. The development of unmanned aerial vehicles with autonomous flight control (1), together with the development of lightweight and powerful hyperspectral snapshot cameras that can be used to calculate biomass development and fertilization status of crops (2, 3), opens the field for sophisticated farm management advice. Moreover, decision-tree models are available now that allow farmers to differentiate between plant diseases based on optical information (4). Virtual fence technologies (5) allow cattle herd management based on remote-sensing signals and sensors or actuators attached to the livestock. Taken together, these technical improvements constitute a technical revolution that will generate disruptive changes in agricultural practices. This trend holds for farming not only in developed countries but also in developing countries, where deployments in ICT (e.g., use of mobile phones, access to the Internet) are being adopted at a rapid pace and could become the game-changers in the future (e.g., in the form of seasonal drought forecasts, climate-smart agriculture). Such profound changes in practice come not only with opportunities but also big challenges. It is crucial to point them out at an early stage of this …
<s> BIB015 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> Small unmanned aerial vehicle (UAV) based remote sensing is a rapidly evolving technology. Novel sensors and methods are entering the market, offering completely new possibilities to carry out remote sensing tasks. Three-dimensional (3D) hyperspectral remote sensing is a novel and powerful technology that has recently become available to small UAVs. This study investigated the performance of UAV-based photogrammetry and hyperspectral imaging in individual tree detection and tree species classification in boreal forests. Eleven test sites with 4151 reference trees representing various tree species and developmental stages were collected in June 2014 using a UAV remote sensing system equipped with a frame format hyperspectral camera and an RGB camera in highly variable weather conditions. Dense point clouds were measured photogrammetrically by automatic image matching using high resolution RGB images with a 5 cm point interval. Spectral features were obtained from the hyperspectral image blocks, the large radiometric variation of which was compensated for by using a novel approach based on radiometric block adjustment with the support of in-flight irradiance observations. Spectral and 3D point cloud features were used in the classification experiment with various classifiers. The best results were obtained with Random Forest and Multilayer Perceptron (MLP) which both gave 95% overall accuracies and an F-score of 0.93. Accuracy of individual tree identification from the photogrammetric point clouds varied between 40% and 95%, depending on the characteristics of the area. Challenges in reference measurements might also have reduced these numbers.
Results were promising, indicating that hyperspectral 3D remote sensing was operational from a UAV platform even in very difficult conditions. These novel methods are expected to provide a powerful tool for automating various environmental close-range remote sensing tasks in the very near future. <s> BIB016 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> Abstract With the increasing availability of spectral sensors and consumer-grade data processing software, a democratization of imaging spectroscopy is taking place. In particular, novel lightweight 2D spectral imagers in combination with UAVs are increasingly being adapted for imaging spectroscopy. In contrast to traditional line-scanners, these sensors capture spectral information as a 2D image within every exposure. With computer vision algorithms embedded in consumer grade software packages, these data can be processed to hyperspectral digital surface models that hold spectral and 3D spatial information in very high resolution. To understand the spectral signal, however, one must comprehend the complexity of the capturing and data processing process in imaging spectroscopy with 2D imagers. This study establishes the theoretical background to comprehend the properties of spectral data acquired with 2D imagers and investigates how different data processing schemes influence the data. To improve the interpretability of a spectral signal derived for an area of interest (AOI), the specific field of view is introduced as a concept to understand the composition of pixels and their angular properties used to characterize a specific AOI within a remote sensing scene. These considerations are applied to a multi-temporal field study carried out under different illumination conditions in a barley field phenotyping experiment. 
It is shown that data processing significantly affects the angular properties of the spectral data and influences the apparent spectral signature. The largest differences are found in the red domain, where the signal differs by approximately 10% relative to a single nadir image. Even larger differences of approximately 14% are found in comparison with ground-based non-imaging field spectrometer measurements. The differences are explained by investigating the interaction between the angular properties of the data and canopy anisotropy, which are wavelength and growth stage dependent. Additionally, it is shown that common vegetation indices cannot normalize the differences and that the retrieval of chlorophyll is affected. In conclusion, this study helps to understand the process of imaging spectroscopy with 2D imagers and provides recommendations for future missions. <s> BIB017 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> ABSTRACTRemote sensing from unmanned aircraft systems (UAS) was expected to be an important new technology to assist farmers with precision agriculture, especially crop nutrient management. There are three advantages using UAS platforms compared to manned aircraft platforms with the same sensor for precision agriculture: (1) smaller ground sample distances, (2) incident light sensors for image calibration, and (3) canopy height models created from structure-from-motion point clouds. These developments hold promise for future data products. In order to better match vendor capabilities with farmer requirements, we classify applications into three general niches: (1) scouting for problems, (2) monitoring to prevent yield losses, and (3) planning crop management operations. 
The three different niches have different requirements for sensor calibration and have different costs of operation. Planning crop management operations may have the most environmental and economic benefits. However, a USDA Economic Resear... <s> BIB018 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> Forests are the most diverse terrestrial ecosystems and their biological diversity includes trees, but also other plants, animals, and micro-organisms. One-third of the forested land is in boreal zone; therefore, changes in biological diversity in boreal forests can shape biodiversity, even at global scale. Several forest attributes, including size variability, amount of dead wood, and tree species richness, can be applied in assessing biodiversity of a forest ecosystem. Remote sensing offers complimentary tool for traditional field measurements in mapping and monitoring forest biodiversity. Recent development of small unmanned aerial vehicles (UAVs) enable the detailed characterization of forest ecosystems through providing data with high spatial but also temporal resolution at reasonable costs. The objective here is to deepen the knowledge about assessment of plot-level biodiversity indicators in boreal forests with hyperspectral imagery and photogrammetric point clouds from a UAV. We applied individual tree crown approach (ITC) and semi-individual tree crown approach (semi-ITC) in estimating plot-level biodiversity indicators. Structural metrics from the photogrammetric point clouds were used together with either spectral features or vegetation indices derived from hyperspectral imagery. Biodiversity indicators like the amount of dead wood and species richness were mainly underestimated with UAV-based hyperspectral imagery and photogrammetric point clouds. 
Indicators of structural variability (i.e., standard deviation in diameter-at-breast height and tree height) were the most accurately estimated biodiversity indicators with relative RMSE between 24.4% and 29.3% with semi-ITC. The largest relative errors occurred for predicting deciduous trees (especially aspen and alder), partly due to their small amount within the study area. Thus, especially the structural diversity was reliably predicted by integrating the three-dimensional and spectral datasets of UAV-based point clouds and hyperspectral imaging, and can therefore be further utilized in ecological studies, such as biodiversity monitoring. <s> BIB019 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Data Products from UAV Sensing Systems <s> ABSTRACTThe objective of this investigation was to study and optimize a hyperspectral unmanned aerial vehicle (UAV)-based remote-sensing system for the Brazilian environment. Comprised mainly of forest and sugarcane, the study area was located in the western region of the State of Sao Paulo. A novel hyperspectral camera based on a tunable Fabry–Perot interferometer was mounted aboard a UAV due to its flexibility and capability to acquire data with a high temporal and spatial resolution. Five approaches designed to produce mosaics of hyperspectral images, which represent the hemispherical directional reflectance factor of targets in the Brazilian environment, are presented and evaluated. The method considers the irradiance variation during image acquisition and the effects of the bidirectional reflectance distribution function. The main goal was achieved by comparing the spectral responses of radiometric reference targets acquired with a spectroradiometer in the field with those produced by the five differ... <s> BIB020
Depending on the sensor configuration, different data products can be retrieved from UAV spectral sensing systems. Point spectroradiometers can measure distinct points in space (e.g., BIB014 ) or integrate the signal over an area of interest to obtain a coarse spatial representation of the spectral properties. With a specialized flying pattern and tilting of the sensor, a multi-angular characterization of an area can be generated BIB009 . Pushbroom systems allow generating a 2D spectral representation of the surface (e.g., BIB005 BIB006 BIB002 ). These hyperspectral images can be overlaid on a digital surface model derived from LiDAR (e.g., ) or from RGB SfM. As a result, a spectral digital surface model is generated that represents the surface in 3D space, linked with the hyperspectral information emitted and reflected by the objects covered by the surface BIB010 . The 2D imagers directly allow the generation of spectral and 3D information at the same time, and thus derive spectral 3D point clouds and their derivative spectral digital surface models (c.f. Section 3.3 and e.g., BIB010 BIB003 BIB013 ). Moreover, since the spectral information is implicitly connected to the 3D information of every point, 3D spectral point clouds can be generated with the same approach (e.g., Figure 6 ).
Additionally, the information from 2D imagers can also be used to generate 2D orthomosaics. In comparison with pushbroom systems, their orthorectification is likely to be more precise, since the geometry of the scene has implicitly been taken into account during the mosaicing process (if SfM was used). At the same time, the viewing geometries within the orthomosaics of 2D imagers are more complex than those in scenes from pushbroom sensing systems. This is due to the two-dimensionality of the data and the high overlap between image frames. In every case where multiple images (or lines, for pushbroom systems) overlap, a decision needs to be made on how to 'blend' or mosaic the data into a seamless orthomosaic. This decision has a significant impact on the final data product, as a study by Aasen and Bolten BIB017 shows. We think that the ultra-high resolution of UAV images can be used for the precise measurements needed by smart farming BIB015 , agricultural BIB018 , and phenotyping applications BIB017 BIB007 BIB004 , and, with the powerful tools provided by object-based image analysis (OBIA; BIB001 BIB008 ), for classification tasks (e.g., BIB011 ). Additionally, UAVs allow mapping in terrain that is hard to access with proximal sensing methods, such as mangroves or high canopies such as forests BIB016 BIB019 BIB020 . In addition, the combination of high-resolution 3D data and spectral data enables new segmentation, complementing, and combination approaches for data analysis , as shown e.g., in forests BIB016 BIB019 and agriculture BIB012 . Still, approaches and algorithms that analyze the large amounts of multi-dimensional, high-resolution UAV data are a major bottleneck that should be a focus of future work.
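The impact of the mosaicing decision can be illustrated with a minimal sketch (all numbers are hypothetical): several overlapping frames observe the same ground point under different view zenith angles, and "most nadir" selection versus naive averaging yield different apparent reflectances.

```python
import numpy as np

# Hypothetical example: three overlapping image frames observe the same
# ground point under different view zenith angles (degrees). Canopy
# anisotropy makes the recorded reflectance angle-dependent.
view_zenith = np.array([2.0, 14.0, 27.0])        # deg, per overlapping frame
reflectance = np.array([0.052, 0.057, 0.063])    # red-band HDRF per frame

# Strategy 1: "most nadir" mosaicing -- keep the observation whose view
# direction is closest to nadir.
most_nadir = reflectance[np.argmin(view_zenith)]

# Strategy 2: naive blending -- average all overlapping observations,
# which mixes angular properties into the final pixel value.
blended = reflectance.mean()

print(f"most-nadir: {most_nadir:.4f}, blended: {blended:.4f}")
```

The two strategies produce different apparent signatures for the same pixel, which is why the processing scheme itself should be reported as metadata (c.f. Section 5.2).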
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Quality Assurance and Metadata Information <s> The correct interpretation of scientific information from global, long-term series of remote sensing products requires the ability to discriminate between product artifacts and changes in the Earth processes being monitored. A suite of global land surface products is made from Moderate Resolution Imaging Spectroradiometer (MODIS) instrument data. Quality assessment (QA) is an integral part of this production chain and focuses on evaluating and documenting the scientific quality of the products with respect to their intended performance. This paper describes the QA approach adopted by the MODIS Land (MODLAND) Science Team and coordinated by the MODIS Land Data Operational Product Evaluation (LDOPE) facility. The described methodology represents a new approach for assessing and ensuring the performance of land remote sensing products that are generated on a systematic basis. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Quality Assurance and Metadata Information <s> The organised storage of spectral data described by metadata is important for long-term use and data sharing with other scientists. Metadata describing the sampling environment, geometry and measurement process serves to evaluate the suitability of existing data sets for new applications. There is a need for spectral databases that serve as repositories for spectral field campaign and reference signatures, including appropriate metadata parameters. Such systems must be (a) highly automated in order to encourage users entering their spectral data collections and (b) provide flexible data retrieval mechanisms based on subspace projections in metadata spaces. 
The recently redesigned SPECCHIO system stores spectral and metadata in a relational database based on a non-redundant data model and offers efficient data import, automated metadata generation, editing and retrieval via a Java application. RSL is disseminating the database and software to the remote sensing community in order to foster the use and further development of spectral databases. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Quality Assurance and Metadata Information <s> Abstract This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. First, we describe and apply methods to radiometrically characterize and calibrate these cameras. Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion. The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. 
The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52 – 59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture. The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the image cube are spatially co-registered beforehand. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Quality Assurance and Metadata Information <s> In this study we present a hyperspectral flying goniometer system, based on a rotary-wing unmanned aerial vehicle (UAV) equipped with a spectrometer mounted on an active gimbal. We show that this approach may be used to collect multiangular hyperspectral data over vegetated environments. The pointing and positioning accuracy are assessed using structure from motion and vary from σ = 1° to 8° in pointing and σ = 0.7 to 0.8 m in positioning. We use a wheat dataset to investigate the influence of angular effects on the NDVI, TCARI and REIP vegetation indices. Angular effects caused significant variations on the indices: NDVI = 0.83–0.95; TCARI = 0.04–0.116; REIP = 729–735 nm. Our analysis highlights the necessity to consider angular effects in optical sensors when observing vegetation. We compare the measurements of the UAV goniometer to the angular modules of the SCOPE radiative transfer model. 
Model and measurements are in high accordance (r2 = 0.88) in the infrared region at angles close to nadir; in contrast the comparison show discrepancies at low tilt angles (r2 = 0.25). This study demonstrates that the UAV goniometer is a promising approach for the fast and flexible assessment of angular effects. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Quality Assurance and Metadata Information <s> Hyperspectral imaging (HSI) is an exciting and rapidly expanding area of instruments and technology in passive remote sensing. Due to quickly changing applications, the instruments are evolving to suit new uses and there is a need for consistent definition, testing, characterization and calibration. This paper seeks to outline a broad prescription and recommendations for basic specification, testing and characterization that must be done on Visible Near Infra-Red grating-based sensors in order to provide calibrated absolute output and performance or at least relative performance that will suit the user’s task. The primary goal of this paper is to provide awareness of the issues with performance of this technology and make recommendations towards standards and protocols that could be used for further efforts in emerging procedures for national laboratory and standards groups. <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Quality Assurance and Metadata Information <s> Remote-sensing applications using the remotely piloted aerial system RPAS are becoming more frequent. RPAS is used in different contexts and in several areas, such as environmental studies, cultural heritage, civil engineering, forestry, and cartography. 
To process the images resulting from the RPAS, different types of image-based 3D modelling software proprietary or open source are used. MicMac is an open-source software which allows generating georeferenced information which can be manipulated or visualized under a geographical information system GIS environment. So, the integration between the MicMac procedures within a GIS software could be very useful. The main objective of this work was to create an open-source GIS application based on MicMac photogrammetric tools to obtain the orthophotographs, point clouds, and digital surface models. To test the application developed, two distinct areas were considered: one in a more natural environment Aguda beach near Porto city, Portugal and another in an urban environment in the city of Coimbra, Portugal. High-resolution data sets were obtained with a ground sampling distance GSD of approximately 4.5 cm. Shaded relief image and dense point cloud were generated. This open-source application can be automated and can create all the files required to run the functionalities from MicMac to obtain the georeferenced information, within a GIS software, bringing photogrammetric data generation to a wider user community. Moreover, integrating this application with the GIS software has several advantages like generating more georeferenced information, such as vegetation indices, or even creating the land use land cover map. Creation of shapefiles with the projection centre of the camera, the area covered by each photograph, and taking account of the number of images that appear in each location are also useful in performing certain tasks. 
<s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Quality Assurance and Metadata Information <s> Abstract With the increasing availability of spectral sensors and consumer-grade data processing software, a democratization of imaging spectroscopy is taking place. In particular, novel lightweight 2D spectral imagers in combination with UAVs are increasingly being adapted for imaging spectroscopy. In contrast to traditional line-scanners, these sensors capture spectral information as a 2D image within every exposure. With computer vision algorithms embedded in consumer grade software packages, these data can be processed to hyperspectral digital surface models that hold spectral and 3D spatial information in very high resolution. To understand the spectral signal, however, one must comprehend the complexity of the capturing and data processing process in imaging spectroscopy with 2D imagers. This study establishes the theoretical background to comprehend the properties of spectral data acquired with 2D imagers and investigates how different data processing schemes influence the data. To improve the interpretability of a spectral signal derived for an area of interest (AOI), the specific field of view is introduced as a concept to understand the composition of pixels and their angular properties used to characterize a specific AOI within a remote sensing scene. These considerations are applied to a multi-temporal field study carried out under different illumination conditions in a barley field phenotyping experiment. It is shown that data processing significantly affects the angular properties of the spectral data and influences the apparent spectral signature. The largest differences are found in the red domain, where the signal differs by approximately 10% relative to a single nadir image. 
Even larger differences of approximately 14% are found in comparison with ground-based non-imaging field spectrometer measurements. The differences are explained by investigating the interaction between the angular properties of the data and canopy anisotropy, which are wavelength and growth stage dependent. Additionally, it is shown that common vegetation indices cannot normalize the differences and that the retrieval of chlorophyll is affected. In conclusion, this study helps to understand the process of imaging spectroscopy with 2D imagers and provides recommendations for future missions. <s> BIB007 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Quality Assurance and Metadata Information <s> Unmanned airborne vehicles (UAV) equipped with novel, miniaturized, 2D frame format hyper- and multispectral cameras make it possible to conduct remote sensing measurements cost-efficiently, with greater accuracy and detail. In the mapping process, the area of interest is covered by multiple, overlapping, small-format 2D images, which provide redundant information about the object. Radiometric correction of spectral image data is important for eliminating any external disturbance from the captured data. Corrections should include sensor, atmosphere and view/illumination geometry (bidirectional reflectance distribution function—BRDF) related disturbances. An additional complication is that UAV remote sensing campaigns are often carried out under difficult conditions, with varying illumination conditions and cloudiness. We have developed a global optimization approach for the radiometric correction of UAV image blocks, a radiometric block adjustment. The objective of this study was to implement and assess a combined adjustment approach, including comprehensive consideration of weighting of various observations. 
An empirical study was carried out using imagery captured using a hyperspectral 2D frame format camera of winter wheat crops. The dataset included four separate flights captured during a 2.5 h time period under sunny weather conditions. As outputs, we calculated orthophoto mosaics using the most nadir images and sampled multiple-view hyperspectral spectra for vegetation sample points utilizing multiple images in the dataset. The method provided an automated tool for radiometric correction, compensating for efficiently radiometric disturbances in the images. The global homogeneity factor improved from 12–16% to 4–6% with the corrections, and a reduction in disturbances could be observed in the spectra of the object points sampled from multiple overlapping images. Residuals in the grey and white reflectance panels were less than 5% of the reflectance for most of the spectral bands. <s> BIB008
The signal of an object is influenced at different stages before it is stored as a digital value in a data product. In the last sections, we looked at different sensors and their calibration, geometric and radiometric processing, as well as influences of the environment, i.e., illumination conditions and the atmosphere. The keys to transforming data into useful information are auxiliary data and metadata. They support the interpretation of scientific data and, in general, help to ensure long-term usability. Metadata provide a basis for the assessment of data quality and make it possible to share and compare data between scientists BIB002 . Metadata can be pixel-, image (measurement)-, or scene-specific. Important pixel-specific metadata include the signal-to-noise ratio BIB005 and an approximation of the radiometric resolution BIB003 , which gives an indication of the quality of a pixel value stored in a data product. While both can be derived on the image level during the relative radiometric calibration (c.f. Section 4.3.1), their estimation for pixels in a scene can be complex, since these quantities are modified when, e.g., the information of two pixels is composed. For every measurement, the measurement time (to reconstruct the Sun's position) and the illumination conditions should be recorded. The latter would include qualitative information on the sky condition (clear or cloud-covered) and a direct-diffuse ratio, which could be derived from a shaded and a non-shaded reference panel. Additionally, the measurement geometry of the FOV or, in the case of imaging sensors, the IFOV of every pixel needs to be stored, since, in interaction with the illumination conditions, the measurement geometry has a significant influence on the data BIB004 BIB007 . In imaging data, this is sometimes visible along the transition of mosaiced images. In this context, the influence of the data-processing scheme also needs to be taken into account BIB007 .
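The direct-diffuse ratio mentioned above could, for instance, be approximated from two readings of the same reference panel, one fully illuminated and one with the direct beam blocked. A minimal sketch with hypothetical values:

```python
# Sketch: estimating the direct-to-diffuse irradiance ratio from a shaded
# and a non-shaded reference panel. Values are hypothetical and assume the
# same panel, band, and integration time for both readings.
L_sunlit = 0.48   # panel under full illumination: direct + diffuse
L_shaded = 0.08   # same panel with the direct beam blocked: diffuse only

E_diffuse = L_shaded               # proportional to diffuse irradiance
E_direct = L_sunlit - L_shaded     # direct component by subtraction
ratio = E_direct / E_diffuse       # direct-to-diffuse ratio
print(f"direct/diffuse ratio: {ratio:.2f}")
```

Such a value, recorded per measurement together with the sky condition, would qualify as the "advised" illumination metadata of Table 5.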
Thus, metadata that describe how the data were processed need to be generated for every scene. They should include the software and its version, as well as the parameters that were set during the processing. Software packages such as Agisoft Photoscan, Pix4D mapper, and the open-source tool MicMac BIB006 generate a report file after processing. These files could be provided as supplementary data in every publication. Other scene-based metadata include information on the method/protocol used to derive top-of-canopy reflectance (c.f. Section 4.4), as well as the sensors used in the study (including their band configuration and model number or manufacturing year, since some UAV sensors are manually manufactured and constantly improved, potentially making each unit unique). As described above, pixel and image (measurement)-specific metadata can improve the interpretability of the data and should be saved with the data, as is already done in airborne and satellite remote sensing [215, BIB001 . To the best of our knowledge, a standard procedure for UAV remote sensing does not yet exist, but some researchers have implemented such metadata in their work (e.g., BIB008 for the viewing geometry; BIB003 for the radiometric resolution, to calculate the uncertainty of the output HDRF observations via the image signal-to-noise ratio and the standard deviation of the reflectance transformation; BIB007 to trace the information from the individual images into the data product). Scene-specific metadata can be stored in an additional file, similar to ENVI header files. Ideally, quantitative metadata parameters should also have an uncertainty assigned to them. Table 5 summarizes the mandatory and optional metadata for UAV remote sensing. We argue that at least the mandatory scene-based metadata should be stated in every publication or its supplementary material. With increasing resolution, additional factors need to be considered that were not visible in remote sensing data of coarser resolution.
One example is wind and wind gusts, which can influence the spectral signature. Further studies need to evaluate such effects. Table 5. Numeric (n) or qualitative (q), mandatory (m) or advised (a) auxiliary and metadata for spectral data processing. Although the direct and diffuse illumination ratio is important, it is listed as advised, since it is not easy to measure.
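As an illustration only (the field names and values below are our own, not an established standard), such a scene-level metadata record could be serialized as a sidecar file next to the data product, similar to an ENVI header:

```python
import json
from datetime import datetime, timezone

# Hypothetical scene-level metadata record collecting the fields discussed
# above; all names and values are illustrative placeholders.
scene_metadata = {
    "acquisition_time_utc": datetime(2018, 6, 21, 10, 30,
                                     tzinfo=timezone.utc).isoformat(),
    "sky_condition": "clear",                 # qualitative (q)
    "sensor": {
        "model": "hypothetical 2D imager",    # placeholder name
        "manufacturing_year": 2017,           # sensors may be unique builds
        "bands_nm": [450, 550, 670, 800],
    },
    "reflectance_protocol": "radiometric block adjustment",
    "processing": {
        "software": "ExampleSfM",             # placeholder name
        "version": "1.4.2",
        "mosaicing": "most-nadir",
    },
    # Quantitative parameters ideally carry an uncertainty (advised, a).
    "direct_diffuse_ratio": {"value": 5.0, "uncertainty": 0.5},
}

print(json.dumps(scene_metadata, indent=2))
```

Storing such a record with every scene would make the mandatory scene-based metadata easy to attach to a publication's supplementary material.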
Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Comparability between Sensing Systems <s> Abstract. Unmanned aerial vehicles (UAVs) equipped with lightweight spectral sensors facilitate non-destructive, near-real-time vegetation analysis. In order to guarantee robust scientific analysis, data acquisition protocols and processing methodologies need to be developed and new sensors must be compared with state-of-the-art instruments. Four different types of optical UAV-based sensors (RGB camera, converted near-infrared camera, six-band multispectral camera and high spectral resolution spectrometer) were deployed and compared in order to evaluate their applicability for vegetation monitoring with a focus on precision agricultural applications. Data were collected in New Zealand over ryegrass pastures of various conditions and compared to ground spectral measurements. The UAV STS spectrometer and the multispectral camera MCA6 (Multiple Camera Array) were found to deliver spectral data that can match the spectral measurements of an ASD at ground level when compared over all waypoints (UAV STS: R2=0.98; MCA6: R2=0.92). Variability was highest in the near-infrared bands for both sensors while the band multispectral camera also overestimated the green peak reflectance. Reflectance factors derived from the RGB (R2=0.63) and converted near-infrared (R2=0.65) cameras resulted in lower accordance with reference measurements. The UAV spectrometer system is capable of providing narrow-band information for crop and pasture management. The six-band multispectral camera has the potential to be deployed to target specific broad wavebands if shortcomings in radiometric limitations can be addressed. Large-scale imaging of pasture variability can be achieved by either using a true colour or a modified near-infrared camera. 
Data quality from UAV-based sensors can only be assured, if field protocols are followed and environmental conditions allow for stable platform behaviour and illumination. <s> BIB001 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Comparability between Sensing Systems <s> Abstract. Albedo is a fundamental parameter in earth sciences, and many analyses utilize the Moderate Resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF)/albedo (MCD43) algorithms. While derivative albedo products have been evaluated over Greenland, we present a novel, direct comparison with nadir surface reflectance collected from an unmanned aerial system (UAS). The UAS was flown from Summit, Greenland, on 210 km transects coincident with the MODIS sensor overpass on board the Aqua and Terra satellites on 5 and 6 August 2010. Clear-sky acquisitions were available from the overpasses within 2 h of the UAS flights. The UAS was equipped with upward- and downward-looking spectrometers (300–920 nm) with a spectral resolution of 10 nm, allowing for direct integration into the MODIS bands 1, 3, and 4. The data provide a unique opportunity to directly compare UAS nadir reflectance with the MODIS nadir BRDF-adjusted surface reflectance (NBAR) products. The data show UAS measurements are slightly higher than the MODIS NBARs for all bands but agree within their stated uncertainties. Differences in variability are observed as expected due to different footprints of the platforms. The UAS data demonstrate potentially large sub-pixel variability of MODIS reflectance products and the potential to explore this variability using the UAS as a platform. It is also found that, even at the low elevations flown typically by a UAS, reflectance measurements may be influenced by haze if present at and/or below the flight altitude of the UAS. 
This impact could explain some differences between data from the two platforms and should be considered in any use of airborne platforms. <s> BIB002 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Comparability between Sensing Systems <s> Vegetation properties can be estimated using optical sensors, acquiring data on board of different platforms. For instance, ground-based and Unmanned Aerial Vehicle (UAV)-borne spectrometers can measure reflectance in narrow spectral bands, while different modelling approaches, like regressions fitted to vegetation indices, can relate spectra with crop traits. Although monitoring frameworks using multiple sensors can be more flexible, they may result in higher inaccuracy due to differences related to the sensors characteristics, which can affect information sampling. Also organic production systems can benefit from continuous monitoring focusing on crop management and stress detection, but few studies have evaluated applications with this objective. In this study, ground-based and UAV spectrometers were compared in the context of organic potato cultivation. Relatively accurate estimates were obtained for leaf chlorophyll (RMSE = 6.07 µg·cm-2), leaf area index (RMSE = 0.67 m²·m-2), canopy chlorophyll (RMSE = 0.24 g·m-2) and ground cover (RMSE = 5.5%) using five UAV-based data acquisitions, from 43 to 99 days after planting. These retrievals are slightly better than those derived from ground-based measurements (RMSE = 7.25 µg·cm-2, 0.85 m²·m-2, 0.28 g·m-2 and 6.8%, respectively), for the same period. Excluding observations corresponding to the first acquisition increased retrieval accuracy and made outputs more comparable between sensors, due to relatively low vegetation cover on this date. 
Intercomparison of vegetation indices indicated that indices based on the contrast between spectral bands in the visible and near-infrared, like OSAVI, MCARI2 and CIg provided, at certain extent, robust outputs that could be transferred between sensors. Information sampling at plot level by both sensing solutions resulted in comparable discriminative potential concerning advanced stages of late blight incidence. These results indicate that optical sensors, and their integration, have great potential for monitoring this specific organic cropping system. <s> BIB003 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Comparability between Sensing Systems <s> Field spectroscopy is increasingly used in various fields of science: either as a research tool in its own right or in support of airborne- or space-based optical instruments for calibration or validation purposes. Yet, while the use of the instruments appears deceptively simple, the processes of light and surface interactions are complex to be measured in full and are further complicated by the multidimensionality of the measurement process. This study exemplifies the cross validation of in situ point spectroscopy and airborne imaging spectroscopy data across all processing stages within the spectroscopy information hierarchy using data from an experiment focused on vegetation. In support of this endeavor, this study compiles the fundamentals of spectroscopy, the challenges inherent to field and airborne spectroscopy, and the best practices proposed by the field spectroscopy community. 
This combination of theory and case study shall enable the reader to develop an understanding of 1) some of the commonly involved sources of errors and uncertainties, 2) the techniques to collect high-quality spectra under natural illumination conditions, and 3) the importance of appropriate metadata collection to increase the long-term usability and value of spectral data. <s> BIB004 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Comparability between Sensing Systems <s> This paper demonstrates the ability to generate quantitative remote sensing products by means of an unmanned aerial vehicle (UAV) equipped with one unaltered and one near infrared-modified commercial off-the-shelf (COTS) camera. Radiometrically calibrated orthomosaics were generated for 17 dates, from which digital numbers were corrected to surface reflectance and to normalized difference vegetation index (NDVI). Validation against ground measurements showed that 84%–90% of the variation in the ground reflectance and 95%–96% of the variation in the ground NDVI could be explained by the UAV-retrieved reflectance and NDVI, respectively. Comparisons against Landsat 8 data showed relationships of $0.73\leq R^{2} \geq 0.84$ for reflectance and $0.86\leq R^{2} \geq 0.89$ for NDVI. It was not possible to generate a fully consistent time series of reflectance, due to variable illumination conditions during acquisition on some dates. However, the calculation of NDVI resulted in a more stable UAV time series, which was consistent with a Landsat series of NDVI extracted over a deciduous and evergreen woodland. The results confirm that COTS cameras, following calibration, can yield accurate reflectance estimates (under stable within-flight illumination conditions), and that consistent NDVI time series can be acquired in very variable illumination conditions. 
Such methods have significant potential in providing flexible, low-cost approaches to vegetation monitoring at fine spatial resolution and for user-controlled revisit periods. <s> BIB005 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Comparability between Sensing Systems <s> Abstract Unmanned Aerial Vehicle (UAV) remote sensing has opened the door to new sources of data to effectively characterize vegetation metrics at very high spatial resolution and at flexible revisit frequencies. Successful estimation of the leaf area index (LAI) in precision agriculture with a UAV image has been reported in several studies. However, in most forests, the challenges associated with the interference from a complex background and a variety of vegetation species have hindered research using UAV images. To the best of our knowledge, very few studies have mapped the forest LAI with a UAV image. In addition, the drawbacks and advantages of estimating the forest LAI with UAV and satellite images at high spatial resolution remain a knowledge gap in existing literature. Therefore, this paper aims to map LAI in a mangrove forest with a complex background and a variety of vegetation species using a UAV image and compare it with a WorldView-2 image (WV2). In this study, three representative NDVIs, average NDVI (AvNDVI), vegetated specific NDVI (VsNDVI), and scaled NDVI (ScNDVI), were acquired with UAV and WV2 to predict the plot level (10 × 10 m) LAI. The results showed that AvNDVI achieved the highest accuracy for WV2 (R 2 = 0.778, RMSE = 0.424), whereas ScNDVI obtained the optimal accuracy for UAV (R 2 = 0.817, RMSE = 0.423). 
In addition, the overall comparison of the WV2- and UAV-derived LAIs indicated that UAV obtained a better accuracy than WV2 in the plots that were covered with homogeneous mangrove species or in the low LAI plots, which was because UAV can effectively eliminate the influence from the background and the vegetation species owing to its high spatial resolution. However, WV2 obtained a slightly higher accuracy than UAV in the plots covered with a variety of mangrove species, which was because the UAV sensor provides a less favourable spectral response function (SRF) than WV2 in terms of the mangrove LAI estimation. <s> BIB006 </s> Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows <s> Comparability between Sensing Systems <s> Abstract With the increasing availability of spectral sensors and consumer-grade data processing software, a democratization of imaging spectroscopy is taking place. In particular, novel lightweight 2D spectral imagers in combination with UAVs are increasingly being adapted for imaging spectroscopy. In contrast to traditional line-scanners, these sensors capture spectral information as a 2D image within every exposure. With computer vision algorithms embedded in consumer grade software packages, these data can be processed to hyperspectral digital surface models that hold spectral and 3D spatial information in very high resolution. To understand the spectral signal, however, one must comprehend the complexity of the capturing and data processing process in imaging spectroscopy with 2D imagers. This study establishes the theoretical background to comprehend the properties of spectral data acquired with 2D imagers and investigates how different data processing schemes influence the data.
To improve the interpretability of a spectral signal derived for an area of interest (AOI), the specific field of view is introduced as a concept to understand the composition of pixels and their angular properties used to characterize a specific AOI within a remote sensing scene. These considerations are applied to a multi-temporal field study carried out under different illumination conditions in a barley field phenotyping experiment. It is shown that data processing significantly affects the angular properties of the spectral data and influences the apparent spectral signature. The largest differences are found in the red domain, where the signal differs by approximately 10% relative to a single nadir image. Even larger differences of approximately 14% are found in comparison with ground-based non-imaging field spectrometer measurements. The differences are explained by investigating the interaction between the angular properties of the data and canopy anisotropy, which are wavelength and growth stage dependent. Additionally, it is shown that common vegetation indices cannot normalize the differences and that the retrieval of chlorophyll is affected. In conclusion, this study helps to understand the process of imaging spectroscopy with 2D imagers and provides recommendations for future missions. <s> BIB007
UAVs have been envisaged to bridge the gap between classical ground, full-size aircraft, and satellite sensing systems. While UAV sensing systems are not per se different from other airborne sensing systems, differences between sensing systems may exist in sensor performance (due to miniaturization), calibration, data processing, measurement geometries (integrated FOV of non-imaging devices versus IFOV of imaging devices, nadir versus oblique), spatial and spectral resolution, and measurement timings (fixed time with satellite versus flexible UAV). Several researchers have investigated the comparability of non-imaging ground and imaging and non-imaging UAV spectral data. Most found offsets, which they attributed to calibration issues BIB003 BIB001 . Aasen and Bolten BIB007 systematically looked at the issue and found that the differences rather resulted from differences in the angular properties of the data. They defined the term specific field of view (SFOV) as a concept to understand the composition of pixels and their angular properties used to characterize a specific area of interest. This SFOV is influenced by the sensor's FOV and the data processing, which explains the differences in the data captured by different sensors, possibly mounted on different platforms and processed in different ways BIB007 . Another study investigated the cross-validation of field and airborne spectroscopy data, identified common sources of errors and uncertainties as well as techniques to collect high-quality spectra under natural illumination conditions, and highlighted the importance of appropriate metadata BIB004 . Other studies have compared UAV data to satellite observations, e.g., a comparison of Landsat 8 with two calibrated Panasonic DMC-LX5 digital cameras (one modified to NIR) showed that reflectance was not always consistent due to variable illumination conditions BIB005 . Burkhart et al.
BIB002 compared UAV nadir reflectance along a 210 km flight track with the MODIS nadir BRDF-adjusted surface reflectance (NBAR) products over a dry snow region near Summit, Greenland. The UAV measurements were slightly higher than the MODIS NBARs for all of the bands, but agreed within their stated uncertainties. Tian et al. BIB006 compared UAV and WorldView-2 imagery for mapping the leaf area index. They found that the high resolution of UAV images made it possible to eliminate influences from the background in low leaf area index situations. In addition, many other factors might affect the comparability. One example is the measurement duration of a method, which might introduce additional artefacts: satellites may sample many square kilometres in an instant, while it may take a whole day to sample the same area with ground-based measurements; the latter would introduce artefacts from the diurnal illumination change. Research on such subjects has only just started.
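As an aside on how such cross-sensor comparisons are carried out in practice, narrowband (e.g., UAV-measured) spectra are typically integrated into a broadband satellite band via that band's spectral response function (SRF). The sketch below illustrates the idea; the wavelength grid, the spectrum, and the Gaussian SRF are synthetic assumptions of ours, not values from the cited studies:

```python
import numpy as np

def band_reflectance(wavelengths, reflectance, srf):
    """SRF-weighted mean reflectance on a uniform wavelength grid:
    simulates what a broadband sensor band would observe for a
    narrowband (e.g., UAV-measured) reflectance spectrum."""
    return float(np.sum(reflectance * srf) / np.sum(srf))

# Synthetic example: a 10 nm grid, a gently sloping spectrum and a
# Gaussian "green" band SRF centred at 550 nm (all assumed values).
wl = np.arange(400.0, 700.0, 10.0)
refl = 0.05 + 0.0001 * (wl - 400.0)
srf = np.exp(-0.5 * ((wl - 550.0) / 20.0) ** 2)
r_band = band_reflectance(wl, refl, srf)  # close to the reflectance at 550 nm
```

In practice, a published SRF of the target sensor band would replace the Gaussian, and the narrowband spectrum must cover the full support of that band.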
A Review of Text Classification Approaches for E-mail Management <s> INTRODUCTION <s> Classification of large datasets is an important data mining problem. Many classification algorithms have been proposed in the literature, but studies have shown that so far no algorithm uniformly outperforms all other algorithms in terms of quality. In this paper, we present a unifying framework called Rain Forest for classification tree construction that separates the scalability aspects of algorithms for constructing a tree from the central features that determine the quality of the tree. The generic algorithm is easy to instantiate with specific split selection methods from the literature (including C4.5, CART, CHAID, FACT, ID3 and extensions, SLIQ, SPRINT and QUEST). In addition to its generality, in that it yields scalable versions of a wide range of classification algorithms, our approach also offers performance improvements of over a factor of three over the SPRINT algorithm, the fastest scalable classification algorithm proposed previously. In contrast to SPRINT, however, our generic algorithm requires a certain minimum amount of main memory, proportional to the set of distinct values in a column of the input relation. Given current main memory costs, this requirement is readily met in most if not all workloads. <s> BIB001 </s> A Review of Text Classification Approaches for E-mail Management <s> INTRODUCTION <s> A realistic classification model for spam filtering should not only take account of the fact that spam evolves over time, but also that labeling a large number of examples for initial training can be expensive in terms of both time and money. This paper address the problem of separating legitimate emails from unsolicited ones with active and online learning algorithm, using a Support Vector Machines (SVM) as the base classifier.
We evaluate its effectiveness using a set of goodness criteria on TREC2006 spam filtering benchmark datasets, and promising results are reported. <s> BIB002 </s> A Review of Text Classification Approaches for E-mail Management <s> INTRODUCTION <s> Text categorization-assignment of natural language texts to one or more predefined categories based on their content-is an important component in many information organization and management tasks. Different automatic learning algorithms for text categorization have different classification accuracy. Very accurate text classifiers can be learned automatically from training examples. <s> BIB003 </s> A Review of Text Classification Approaches for E-mail Management <s> INTRODUCTION <s> Email has become one of the fastest and most economical forms of communication. However, the increase of email users has resulted in the dramatic increase of spam emails. As spammers always try to find a way to evade existing filters, new filters need to be developed to catch spam. Ontologies allow for machine-understandable semantics of data. It is important to share information with each other for more effective spam filtering. Thus, it is necessary to build ontology and a framework for efficient email filtering. Using ontology that is specially designed to filter spam, bunch of unsolicited bulk email could be filtered out on the system. This paper proposes to find an efficient spam email filtering method using adaptive ontology <s> BIB004 </s> A Review of Text Classification Approaches for E-mail Management <s> INTRODUCTION <s> E-mail spam has become an epidemic problem that can negatively affect the usability of electronic mail as a communication means. Besides wasting users' time and effort to scan and delete the massive amount of junk e-mails received; it consumes network bandwidth and storage space, slows down e-mail servers, and provides a medium to distribute harmful and/or offensive content.
Several machine learning approaches have been applied to this problem. In this paper, we explore a new approach based on fuzzy similarity that can automatically classify e-mail messages as spam or legitimate. We study its performance for various conjunction and disjunction operators for several datasets. The results are promising as compared with a naive Bayesian classifier. Classification accuracy above 97% and low false positive rates are achieved in many test cases. <s> BIB005
Text Classification (TC) is the task of automatically sorting a set of documents into categories, such as topics, from a predefined set. The task falls at the crossroads of information retrieval (IR) and Machine Learning (ML). It has witnessed a booming interest in the last ten years from researchers and developers alike due to its ever-expanding horizon of applications such as document classification, text summarization, essay scoring and user-specific presentation of textual material . Email affects every user of the Internet. However, emails also bloat and flood the inbox quickly, leading to a morass of unorganized information. Even though many email providers allow the creation of folders and sub-folders where emails can be routed based on the sender's address, date, subject, etc., the whole process is largely manual. There is an urgent need for automatically segregating emails based on their relevance to the user. As a basic need, spam filtering classifies messages into two categories, viz. spam and non-spam. Besides being undesired, spam email consumes a lot of network bandwidth. This is not a typical TC application: over time, spammers resort to deceptive and deluging methods to get around antispam software, thereby leading to a gradual degeneration of the filter's efficacy. To counter this, innovative TC approaches with good generalization, continuous adaptive learning and context sensitivity need to be applied. Extending this concept to the general case of filtering emails into several categories based on their relevance to the user, we can investigate TC approaches for personalized management of all emails. Predominantly, statistical approaches have been applied for text classification. These approaches are based on word occurrences, i.e., the frequency of one or more words in a given document.
Several algorithms based on this method have been reported and have given good results in web applications [2] BIB004 BIB001 [9] BIB002 [12] BIB003 . An alternative approach is context-based text classification, which takes into account how a word w1 influences the occurrence of another word w2 in the document. Thus, the presence or absence of w1 affects a classification based on w2. Even though some recent papers BIB005 have reported techniques and algorithms for finding relevancy among words, significant work has not yet been carried out in the field of context-based text classification for email applications. In this paper we present a survey focusing on statistical as well as some recent context-based approaches for TC, with a focus on spam filtering and email applications. Performance Measures: The following parameters are important performance indices for spam filtering. A false positive is a result that classifies a legitimate email as a spam email. A false negative is a result that classifies a spam email as a legitimate email. A false-positive error that diverts a legitimate email to spam is generally considered more serious than a false-negative. Now, out of all the spam emails, let a of them be categorized correctly as spam (true positives) and the remaining b be categorized as legitimate (false negatives). Likewise, out of all legitimate emails, let c of them be erroneously categorized as spam (false positives) and the remaining d be categorized as legitimate (true negatives). Let N be the sum total of a, b, c and d. The following scores are then defined: accuracy = (a + d)/N, spam precision = a/(a + c) and spam recall = a/(a + b). The naïve Bayes classifier estimates the probability that a message containing a word W is spam as Pr(S|W) = Pr(W|S)·Pr(S) / (Pr(W|S)·Pr(S) + Pr(W|H)·Pr(H)), where Pr(S) is the overall probability that any given message is spam, Pr(W|S) is the probability that W appears in spam messages, Pr(H) is the overall probability that any given message is ham (not spam), and Pr(W|H) is the probability that W appears in ham messages. During its training phase, a naïve Bayes classifier learns the posterior word probabilities.
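These definitions translate directly into code. The sketch below is our own illustration — the counts and probabilities are hypothetical, not taken from any cited study — and computes the scores from the counts a, b, c and d together with the word-level Bayes posterior:

```python
def spam_scores(a, b, c, d):
    """Scores from the counts defined above: a = spam caught (true positives),
    b = spam missed (false negatives), c = legitimate flagged as spam
    (false positives), d = legitimate passed (true negatives)."""
    n = a + b + c + d
    return {
        "accuracy": (a + d) / n,
        "spam_precision": a / (a + c),
        "spam_recall": a / (a + b),
        "false_positive_rate": c / (c + d),
    }

def pr_spam_given_word(pr_w_s, pr_s, pr_w_h, pr_h):
    """Bayes' rule for the probability that a message containing word W is spam."""
    return (pr_w_s * pr_s) / (pr_w_s * pr_s + pr_w_h * pr_h)

scores = spam_scores(a=90, b=10, c=5, d=95)  # hypothetical counts
p = pr_spam_given_word(pr_w_s=0.8, pr_s=0.5, pr_w_h=0.1, pr_h=0.5)
```

With these invented counts the filter would reach 92.5% accuracy, and a word eight times more frequent in spam than in ham pushes the posterior to roughly 0.89.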
The main strength of the naïve Bayes algorithm lies in its simplicity. Since the features are assumed to be mutually independent, only the variances of the individual class variables need to be determined, rather than the entire set of covariances. This makes naïve Bayes one of the most efficient models for email filtering. It is also robust: it continuously improves its accuracy and adapts to each user's preferences as the user flags incorrect classifications, allowing continuous, corrective retraining of the model. In , the authors constructed a corpus, Ling-Spam, with 2411 non-spam and 481 spam messages and used a parameter λ to impose a greater penalty on false positives. They demonstrated that the weighted accuracy of a naïve Bayesian email filter can exceed 99%. Variations of the basic algorithm, for example using word positions and multi-word N-grams as attributes, have also yielded good results . However, the naïve Bayes classifier is susceptible to Bayesian poisoning, a situation where a spammer mixes a large amount of legitimate text or video data into a message to get around the filter's probabilistic detection mechanism.
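A minimal word-based naïve Bayes filter of the kind described above can be sketched as follows. This is a toy illustration with Laplace smoothing and an invented four-message corpus, not the filters evaluated in the cited works:

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Toy multinomial naïve Bayes spam filter with Laplace smoothing."""

    def fit(self, messages, labels):  # labels are "spam" or "ham"
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.class_counts = Counter(labels)
        for text, label in zip(messages, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        return self

    def predict(self, text):
        log_post = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values())
            # log prior of the class
            log_post[label] = math.log(self.class_counts[label] / sum(self.class_counts.values()))
            for w in text.lower().split():
                # Laplace (add-one) smoothing keeps unseen words from zeroing the product
                p_w = (self.word_counts[label][w] + 1) / (total + len(self.vocab))
                log_post[label] += math.log(p_w)
        return max(log_post, key=log_post.get)

# Invented four-message training corpus
clf = NaiveBayesFilter().fit(
    ["win money now", "cheap money offer", "meeting at noon", "project meeting notes"],
    ["spam", "spam", "ham", "ham"],
)
pred = clf.predict("win cheap money")  # classified as spam
```

Working in log space avoids numerical underflow when messages contain many words, and the smoothing term is precisely what Bayesian poisoning attacks: flooding a message with ham-like words shifts the summed log posterior towards the legitimate class.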
A Review of Text Classification Approaches for E-mail Management <s> B. <s> Classification of large datasets is an important data mining problem. Many classification algorithms have been proposed in the literature, but studies have shown that so far no algorithm uniformly outperforms all other algorithms in terms of quality. In this paper, we present a unifying framework called Rain Forest for classification tree construction that separates the scalability aspects of algorithms for constructing a tree from the central features that determine the quality of the tree. The generic algorithm is easy to instantiate with specific split selection methods from the literature (including C4.5, CART, CHAID, FACT, ID3 and extensions, SLIQ, SPRINT and QUEST). In addition to its generality, in that it yields scalable versions of a wide range of classification algorithms, our approach also offers performance improvements of over a factor of three over the SPRINT algorithm, the fastest scalable classification algorithm proposed previously. In contrast to SPRINT, however, our generic algorithm requires a certain minimum amount of main memory, proportional to the set of distinct values in a column of the input relation. Given current main memory costs, this requirement is readily met in most if not all workloads. <s> BIB001 </s> A Review of Text Classification Approaches for E-mail Management <s> B. <s> Email has become one of the fastest and most economical forms of communication. However, the increase of email users has resulted in the dramatic increase of spam emails. As spammers always try to find a way to evade existing filters, new filters need to be developed to catch spam. Ontologies allow for machine-understandable semantics of data. It is important to share information with each other for more effective spam filtering. Thus, it is necessary to build ontology and a framework for efficient email filtering.
Using ontology that is specially designed to filter spam, bunch of unsolicited bulk email could be filtered out on the system. This paper proposes to find an efficient spam email filtering method using adaptive ontology <s> BIB002
Decision tree A Decision Tree (DT) is a predictive model that expands a tree of decisions and their possible consequences, including chance event outcomes and resource costs. The outcomes can be discrete or, as in the case of regression trees, continuous. Each leaf represents a unique classification, and branches represent the conjunctions of features that lead to the classifications at the various leaves. Popular decision tree based learning methods are CART, ID3, C4.5 and Naïve Tree [5] . 1) CART: Classification and Regression Tree (CART) based methods progressively split the set of training examples into smaller and smaller subsets on the basis of possible answers to a series of questions posed by the designer. When all samples in a subset acquire the same category label, that subset becomes pure; such a condition terminates that portion of the tree. Text documents are typically characterized by very high dimensional feature spaces. Such excessive detailing or noisy training data runs the risk of overfitting. In order to avoid overfitting and improve generalization accuracy, it is necessary to employ some pruning technique. CART uses the Gini impurity measure to pick only the most appropriate feature at each split [5] . 2) ID3: The ID3 algorithm computes entropy-based information gain for optimized feature selection. The recursive feature selection algorithm continues until there is only one class remaining in the data, or there are no features left. 3) C4.5: C4.5 takes as input the tree generated by ID3 and attempts to reduce it by applying rule post-pruning. The algorithm converts the tree into a set of if-then rules, and then prunes each rule by removing preconditions if the accuracy of the rule increases without them. The rules are then sorted according to their accuracy on the training set and applied in that order during classification.
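To make the split criteria concrete, the following toy sketch computes the Gini impurity used by CART and the entropy-based information gain used by ID3 for a candidate binary split; the example labels are our own invention, not data from the cited experiments:

```python
import math
from collections import Counter

def gini(labels):
    """Gini impurity used by CART: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy of the class distribution, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    """Entropy reduction (as in ID3) when `parent` is split into `left` and `right`."""
    n = len(parent)
    return entropy(parent) - (len(left) / n) * entropy(left) - (len(right) / n) * entropy(right)

# Invented example: 10 emails, and a candidate split that separates them perfectly
parent = ["spam"] * 5 + ["ham"] * 5
gain = information_gain(parent, ["spam"] * 5, ["ham"] * 5)  # maximal gain of 1 bit
```

A split that leaves both children pure earns the maximal gain (here 1 bit), while a split that reproduces the parent distribution in each child earns zero; feature selection in ID3 simply picks the attribute with the highest gain at each node.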
4) Naïve Tree (NT): Kohavi proposes a hybrid algorithm that combines the elegance of a recursive tree-based partitioning technique such as C4.5 with the robustness of naïve Bayes categorizers applied at each leaf . By applying various datasets as inputs to NT, C4.5 and naïve Bayes, the average accuracy of NT is shown to be 84.47%, against 81.91% for C4.5 and 81.69% for naïve Bayes. In general, the tree learned by NT is also smaller than that of C4.5. Thus NT turns out to be more accurate, faster and more scalable than its constituents. The main strength of DT-based algorithms is their ability to generate understandable rules without complex computations. The information gain provides a clear indication of which features are most important for classification. DTs can also handle missing data by assuming it is randomly distributed within the dataset. In BIB002 , the authors use a UCI Machine Learning Lab dataset containing 4600 emails, of which 39.4% are spam emails and 60.6% are legitimate emails. The decision tree classifier filters the spam messages with a good overall accuracy of 97.17%. One of the weaknesses of decision trees is that for a continuous attribute, the information gain at many candidate split points within each variable has to be computed, adding to the computational cost. The process of growing a decision tree incurs the additional cost of sorting all candidate fields before the best split can be found. Pruning too bears the cost of generating and comparing several sub-trees. For these reasons, an issue with decision trees is how to ensure that their performance scales well with the size of the training data. The work in BIB001 proposes a framework for improving the scalability of any given DT method. Fast DT algorithms have been developed [9] that have a time complexity of O(m·n), as compared with O(m·n²) for C4.5, where m is the number of instances or records and n is the number of attributes.
A Review of Text Classification Approaches for E-mail Management <s> C. <s> Two recently implemented machine-learning algorithms, RIPPER and sleeping-experts for phrases, are evaluated on a number of large text categorization problems. These algorithms both construct classifiers that allow the “context” of a word w to affect how (or even whether) the presence or absence of w will contribute to a classification. However, RIPPER and sleeping-experts differ radically in many other respects: differences include different notions as to what constitutes a context, different ways of combining contexts to construct a classifier, different methods to search for a combination of contexts, and different criteria as to what contexts should be included in such a combination. In spite of these differences, both RIPPER and sleeping-experts perform extremely well across a wide variety of categorization problems, generally outperforming previously applied learning methods. We view this result as a confirmation of the usefulness of classifiers that represent contextual information. <s> BIB001 </s> A Review of Text Classification Approaches for E-mail Management <s> C. <s> This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. Our approach is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identification from unconstrained spoken customer responses. <s> BIB002 </s> A Review of Text Classification Approaches for E-mail Management <s> C.
<s> The natural language processing community has recently experienced a growth of interest in domain independent shallow semantic parsing--the process of assigning a Who did What to Whom, When, Where, Why, How etc. structure to plain text. This process entails identifying groups of words in a sentence that represent these semantic arguments and assigning specific labels to them. It could play a key role in NLP tasks like Information Extraction, Question Answering and Summarization. We propose a machine learning algorithm for semantic role parsing, extending the work of Gildea and Jurafsky (2002), Surdeanu et al. (2003) and others. Our algorithm is based on Support Vector Machines which we show give large improvement in performance over earlier classifiers. We show performance improvements through a number of new features designed to improve generalization to unseen data, such as automatic clustering of verbs. We also report on various analytic studies examining which features are most important, comparing our classifier to other machine learning algorithms in the literature, and testing its generalization to new test set from different genre. On the task of assigning semantic labels to the PropBank (Kingsbury, Palmer, & Marcus, 2002) corpus, our final system has a precision of 84% and a recall of 75%, which are the best results currently reported for this task. Finally, we explore a completely different architecture which does not requires a deep syntactic parse. We reformulate the task as a combined chunking and classification problem, thus allowing our algorithm to be applied to new languages or genres of text for which statistical syntactic parsers may not be available. <s> BIB003 </s> A Review of Text Classification Approaches for E-mail Management <s> C.
<s> The Proposition Bank project takes a practical approach to semantic representation, adding a layer of predicate-argument information, or semantic role labels, to the syntactic structures of the Penn Treebank. The resulting resource can be thought of as shallow, in that it does not represent coreference, quantification, and many other higher-order phenomena, but also broad, in that it covers every instance of every verb in the corpus and allows representative statistics to be calculated. We discuss the criteria used to define the sets of semantic roles used in the annotation process and to analyze the frequency of syntactic/semantic alternations in the corpus. We describe an automatic system for semantic role tagging trained on the corpus and discuss the effect on its performance of various types of information, including a comparison of full syntactic parsing with a flat representation and the contribution of the empty ''trace'' categories of the treebank. <s> BIB004 </s> A Review of Text Classification Approaches for E-mail Management <s> C. <s> Text classification (TC) is the task to automatically classify documents based on learned document features. Many popular TC models use simple occurrence of words in a document as features. They also commonly assume word occurrences to be statistically independent in their design. Although it is obvious that such assumption does not hold in general, these TC models have been robust and efficient in their task. Some recent studies have shown context-sensitive TC approaches, which take into consideration contexts in the form of word co-occurrences, have been able to perform better in general. On the other hand, there have been many studies in the use of complex linguistic or semantic features instead of simple word occurrences as features for information retrieval and classification tasks.
While these complex features may intuitively have more relevance to the tasks concerned, results of these studies on their effectiveness have been mixed and not been conclusive. In this paper we present our investigation on the use of some complex linguistic features with context-sensitive TC method. Our experiment results show some potential advantages of such approach. <s> BIB005 </s> A Review of Text Classification Approaches for E-mail Management <s> C. <s> A realistic classification model for spam filtering should not only take account of the fact that spam evolves over time, but also that labeling a large number of examples for initial training can be expensive in terms of both time and money. This paper address the problem of separating legitimate emails from unsolicited ones with active and online learning algorithm, using a Support Vector Machines (SVM) as the base classifier. We evaluate its effectiveness using a set of goodness criteria on TREC2006 spam filtering benchmark datasets, and promising results are reported. <s> BIB006 </s> A Review of Text Classification Approaches for E-mail Management <s> C. <s> Text categorization-assignment of natural language texts to one or more predefined categories based on their content-is an important component in many information organization and management tasks. Different automatic learning algorithms for text categorization have different classification accuracy. Very accurate text classifiers can be learned automatically from training examples. <s> BIB007 </s> A Review of Text Classification Approaches for E-mail Management <s> C. <s> Term weighting systems are of crucial importance in Information Extraction and Information Retrieval applications. Common approaches to term weighting are based either on statistical or on natural language analysis. In this paper, we present a new algorithm that capitalizes from the advantages of both the strategies by adopting a machine learning approach.
In the proposed method, the weights are computed by a parametric function, called Context Function, that models the semantic influence exercised amongst the terms of the same context. The Context Function is learned from examples, allowing the use of statistical and linguistic information at the same time. The novel algorithm was successfully tested on crossword clues, which represent a case of Single-Word Question Answering. <s> BIB008 </s> A Review of Text Classification Approaches for E-mail Management <s> C. <s> Requiring only category names as user input is a highly attractive, yet hardly explored, setting for text categorization. Earlier bootstrapping results relied on similarity in LSA space, which captures rather coarse contextual similarity. We suggest improving this scheme by identifying concrete references to the category name's meaning, obtaining a special variant of lexical expansion. <s> BIB009
Support Vector Machine (SVM) An SVM is a supervised learning method based on structural risk minimization [5]. It subjects every category to a separate binary classifier. SVM's forte is that it is relatively immune to the dimensionality of the feature space, focusing instead on maximizing the margin between positive and negative training documents. It avoids using many training documents, employing only those near the classification border to construct an irregular border separating positive and negative examples. By employing a suitable kernel function, it can learn polynomial classifiers, radial basis functions and three-layered sigmoid neural nets, thus acquiring universal learning ability. 1) Soft Margin SVM: Since a sharp separation is not always possible, the soft margin SVM chooses a hyperplane that splits the examples as cleanly as possible, while still maximizing the distance to the nearest cleanly split examples. 2) Combined Classifiers: In , Tretyakov tried combining two filters, both showing a low probability of reporting false positives. Such a combination filter reports a message as spam if either of the constituent filters categorizes it as spam; the combination is used to yield better precision. A combination of soft margin SVM and naïve Bayes filter was tested on the PU1 corpus. It reported 94.4% correct classifications, 12.7% false negatives and 0.0% false positives. In comparison, the accuracy of the basic soft margin SVM alone was 98.1%, with 1.6% false positives and 2.3% false negatives. Parameter tuning of the soft SVM reduced the false positives to 0.0%, but this markedly degraded accuracy to 90.8% and raised false negatives to 21%. We thus observe that the combined SVM tackles the more serious problem of false positives while still maintaining accuracy at an acceptable level.
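The OR-combination rule described above (report spam if either constituent filter flags the message) can be sketched in a few lines. The two filters used here, a keyword blacklist and a tiny multinomial Naive Bayes, are illustrative stand-ins, not the soft margin SVM and naïve Bayes pair actually tested on PU1:

```python
# Sketch of the OR-combination of two spam filters: a message is labelled
# spam if EITHER filter flags it. The blacklist and the tiny Naive Bayes
# below are toy stand-ins chosen for illustration only.
import math
from collections import Counter

SPAM_WORDS = {"free", "winner", "viagra", "offer"}  # assumed toy blacklist

def keyword_filter(message):
    """Flag spam if any blacklisted keyword appears in the message."""
    return any(w in message.lower().split() for w in SPAM_WORDS)

class TinyNaiveBayes:
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.docs = {"spam": 0, "ham": 0}

    def train(self, message, label):
        self.docs[label] += 1
        self.counts[label].update(message.lower().split())

    def is_spam(self, message):
        total_docs = sum(self.docs.values())
        vocab = len(set(self.counts["spam"]) | set(self.counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.counts[label].values())
            score = math.log(self.docs[label] / total_docs)
            for w in message.lower().split():
                # Laplace-smoothed word likelihood
                score += math.log((self.counts[label][w] + 1) / (total + vocab))
            scores[label] = score
        return scores["spam"] > scores["ham"]

def combined_filter(nb, message):
    # OR rule: spam if either constituent filter says spam
    return keyword_filter(message) or nb.is_spam(message)

# Toy usage
nb = TinyNaiveBayes()
for msg, lab in [("free money winner", "spam"),
                 ("meeting at noon", "ham"),
                 ("project update attached", "ham")]:
    nb.train(msg, lab)
flag_spam = combined_filter(nb, "free offer today")
flag_ham = combined_filter(nb, "lunch meeting tomorrow")
```

The OR rule only lowers false negatives; the paper's observation is that when both constituent filters rarely produce false positives, the combination keeps false positives low as well.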
The main strength of the SVM is its ability to perform well even when a plethora of features is used; it tunes itself and maintains accuracy and generalization, so there is no compelling need to find the optimum number of features. In BIB006 , an SVM employed for spam filtering and tested on the public corpora Trec06p/full and Trec06c/full [12] and the private corpora X2 and B2 described in the paper gave encouraging results, with an average accuracy of 91.89%, 3.95% false negatives and 2.64% false positives. Comparing various inductive-learning-based classifiers in BIB007 using the Reuters-21578 corpus , the authors give the best report card to the linear SVM in terms of accuracy and training time. However, its drawbacks are the choice of an appropriate kernel function, high memory requirements and training time that increases with the size of the training data. Discussion on context-sensitive techniques: The research and results summarized above indicate certain strengths of context-sensitive TC over context-independent methods. 1) Instead of relying on an externally input, static set of constructs, the use of contextual information makes TC robust and more immune to noisy data. One can tap the vast knowledge accumulated and techniques available in the domain of AI-based learning methods. ML techniques specialized for TC have been reported, such as IREP [21, 22, and 23], AdaBoost BIB002 , weight-learning algorithms BIB001 BIB008 and SVM BIB003 . A plethora of generic and domain-specific corpora, carefully annotated [12, and ontological documents [35, BIB004 are available for training and testing classifiers. These methods and tools can be tapped for categorizing email messages. 2) Context is an intuitive and human-oriented way of interpreting text and can naturally be introduced in a variety of ways. Its application can be generic or tailored to specific problem domains.
These include implicit context as captured by LSA BIB009 , lexical sense as implied by sparse matrices BIB001 , syntactic meanings as in POS phrases BIB005 , semantic meanings BIB003 and term weighting BIB008 . Adaptive information retrieval systems also make use of user profiles . The user's web interactions and feedback, such as deleting a spam message or transferring a message from one folder to another, can be examined to build and dynamically adapt his or her profile and frame it as user-centric context for organizing emails. It is indeed both a challenge and a potential opportunity to cull useful contexts from the rich space of context-oriented features. 3) Experimental results reported in all the papers discussed indicate positive directions for context-sensitive TC. These methods outperform others either by improving the quality of results with reduced error rates, or by bringing larger corpora within the ambit of solvable problems, being effective on large noisy data. In general, context-sensitive methods perform well across a large class of TC problems. 4) The variety of techniques and interpretations of context opens great scope for combining these techniques to exploit and reinforce the advantages of each. For example, rule-based methods can derive antecedents which become initial input phrases for a group-of-words-based method. 5) As the number of email users explodes, it will become necessary to use innovative methods to automatically recognize and organize messages. Statistical methods have long been used for email filtering but have reached a saturation point where they are unable to foil spammers' circumventing tactics. Context-based classification techniques can be explored for next-generation email management. Real-time performance, adaptive learning and sensitivity to user profiles are important criteria for email management.
The TC model employed must make simple statistical assumptions and give linear-time performance. Techniques such as symbolic representation of features and attributes, and efficient weight-learning algorithms, help reduce the search space. Minimal input from the user, such as category names, should suffice to categorize incoming emails. Classifiers trained on ontology-driven semantics can be useful for domain-specific classification. Dynamically adaptable learners will be needed to tune the classifier to changes in the user's profile.
A Review of Text Classification Approaches for E-mail Management <s> D. <s> We present a feature selection method by fuzzy inference and its application to spam-mail filtering in this work. The proposed fuzzy inference method outperforms information gain and chi squared test methods as a feature selection method in terms of error rate. In the case of junk mails, since the mail body has little text information, it provides insufficient hints to distinguish spam mails from legitimate ones. To address this problem, we follow hyperlinks contained in the email body, fetch contents of a remote web page, and extract hints from both original email body and fetched web pages. A two-phase approach is applied to filter spam mails in which definite hint is used first, and then less definite textual information is used. In our experiment, the proposed two-phase method achieved an improvement of recall by 32.4% on the average over the 1st phase or the 2nd phase only works. <s> BIB001 </s> A Review of Text Classification Approaches for E-mail Management <s> D. <s> E-mail spam has become an epidemic problem that can negatively affect the usability of electronic mail as a communication means. Besides wasting users' time and effort to scan and delete the massive amount of junk e-mails received; it consumes network bandwidth and storage space, slows down e-mail servers, and provides a medium to distribute harmful and/or offensive content. Several machine learning approaches have been applied to this problem. In this paper, we explore a new approach based on fuzzy similarity that can automatically classify e-mail messages as spam or legitimate. We study its performance for various conjunction and disjunction operators for several datasets. The results are promising as compared with a naive Bayesian classifier. Classification accuracy above 97% and low false positive rates are achieved in many test cases. <s> BIB002
Fuzzy logic Fuzzy logic uses linguistic variables, overlapping classes and approximate reasoning to model a classification problem . The works in [16, 17, and 18] show that fuzzy logic lends itself well to spam detection, since the spam and non-spam classes indeed overlap across a fuzzy boundary. Sayed et al employ fuzzy spam detection by first pre-processing the documents (removing stop words such as 'he', 'the' and 'it' as well as HTML tags), building a fuzzy model of the overlapping categories {spam, valid} with membership functions derived from the training set, and classifying input messages by calculating the fuzzy similarity between the received message and each category BIB002 . The authors tested their classifier with various fuzzy conjunction and disjunction operators using 4 datasets, two for training and two for testing. Averaging over the 4 cases, the best results were obtained for the Bounded Difference operator, with an accuracy of 97.2%, spam recall of 90.5% and spam precision of 97.6%. In BIB001 , Kim et al retained hyperlinks because spammers can minimize the text of a message while listing hyperlinks. They demonstrate that feature selection by fuzzy inference is superior to conventional methods such as Information Gain, indicating that the linguistic modeling in fuzzy logic is well suited for both feature extraction and TC. A good feature of fuzzy-similarity-based spam filtering is that it scans the content of a message to predict its category rather than relying on a fixed, pre-specified set of keywords; it can therefore adapt to spammer tactics and dynamically build its knowledge base. The fuzzy association method avoids ambiguity in English word usage by capturing the relationships or associations among different index terms or keywords in the documents . However, fuzzy modeling has its pitfalls, in that there are many ways to interpret fuzzy rules, to combine the outputs of several rules and to defuzzify the output.
The performance of the email filtering engine therefore needs to be optimized by experimentally fine-tuning all the relevant parameters.
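A minimal sketch of the fuzzy-similarity idea described above: word membership degrees per category are estimated from training frequencies, and a message is assigned to the category with the highest fuzzy similarity. The membership estimate (`count / peak count`) and the bounded-difference/bounded-sum operator pair are illustrative assumptions, not the exact formulation of the surveyed paper:

```python
# Sketch of fuzzy-similarity e-mail classification with pluggable
# conjunction (t-norm) and disjunction (t-conorm) operators.
from collections import Counter

def memberships(training_docs):
    """Map each word to a [0, 1] membership degree per category."""
    mu = {}
    for category, docs in training_docs.items():
        counts = Counter(w for d in docs for w in d.lower().split())
        peak = max(counts.values())
        mu[category] = {w: c / peak for w, c in counts.items()}
    return mu

def similarity(message, mu_cat,
               conj=lambda a, b: max(0.0, a + b - 1.0),  # bounded difference t-norm
               disj=lambda a, b: min(1.0, a + b)):       # bounded sum t-conorm
    """Fuzzy similarity between a message (degree 1 per word) and a category."""
    words = set(message.lower().split())
    num = den = 0.0
    for w in words:
        m = mu_cat.get(w, 0.0)
        num += conj(1.0, m)   # intersection of message and category memberships
        den += disj(1.0, m)   # union of message and category memberships
    return num / den if den else 0.0

def classify(message, mu):
    return max(mu, key=lambda c: similarity(message, mu[c]))

# Toy usage: membership functions built from four tiny training documents
mu = memberships({
    "spam": ["free money free offer", "winner free prize"],
    "valid": ["meeting agenda notes", "project meeting schedule"],
})
```

Swapping in other operator pairs (e.g. min/max or product/probabilistic sum) only requires passing different `conj`/`disj` lambdas, which mirrors the paper's comparison across operators.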
A Review of Text Classification Approaches for E-mail Management <s> 2) <s> 1332840 Primer compositions DOW CORNINGCORP 6 Oct 1971 [30 Dec 1970] 46462/71 Heading C3T [Also in Divisions B2 and C4] A primer composition comprises 1 pbw of tetra ethoxy or propoxy silane or poly ethyl or propyl silicate or any mixture thereof, 0A75-2A5 pbw of bis(acetylacetonyl) diisopropyl titanate, 0A75- 5 pbw of a compound CF 3 CH 2 CH 2 Si[OSi(CH 3 ) 2 - X] 3 wherein each X is H or -CH 2 CH 2 Si- (OOCCH 3 ) 3 , at least one being the latter, and 1-20 pbw of a ketone, hydrocarbon or halohydrocarbon solvent boiling not above 150° C. In the examples 1 pbw each of bis(acetylacetonyl)diisopropyl titanate, polyethyl silicate and are dissolved in 10 pbw of acetone or in 9 pbw of light naphtha and 1 of methylisobutylketone. The solutions are used to prime Ti panels, to which a Pt-catalysed room-temperature vulcanizable poly-trifluoropropylmethyl siloxanebased rubber is then applied. <s> BIB001 </s> A Review of Text Classification Approaches for E-mail Management <s> 2) <s> Two recently implemented machine-learning algorithms, RIPPER and sleeping-experts for phrases, are evaluated on a number of large text categorization problems. These algorithms both construct classifiers that allow the “context” of a word w to affect how (or even whether) the presence or absence of w will contribute to a classification. However, RIPPER and sleeping-experts differ radically in many other respects: differences include different notions as to what constitutes a context, different ways of combining contexts to construct a classifier, different methods to search for a combination of contexts, and different criteria as to what contexts should be included in such a combination. In spite of these differences, both RIPPER and sleeping-experts perform extremely well across a wide variety of categorization problems, generally outperforming previously applied learning methods.
We view this result as a confirmation of the usefulness of classifiers that represent contextual information. <s> BIB002 </s> A Review of Text Classification Approaches for E-mail Management <s> 2) <s> Text classification (TC) is the task to automatically classify documents based on learned document features. Many popular TC models use simple occurrence of words in a document as features. They also commonly assume word occurrences to be statistically independent in their design. Although it is obvious that such assumption does not hold in general, these TC models have been robust and efficient in their task. Some recent studies have shown context-sensitive TC approaches, which take into consideration contexts in the form of word co-occurrences, have been able to perform better in general. On the other hand, there have been many studies in the use of complex linguistic or semantic features instead of simple word occurrences as features for information retrieval and classification tasks. While these complex features may intuitively have more relevance to the tasks concerned, results of these studies on their effectiveness have been mixed and not been conclusive. In this paper we present our investigation on the use of some complex linguistic features with context-sensitive TC method. Our experiment results show some potential advantages of such approach. <s> BIB003
Lexical Units: Lexical units are co-occurring word expressions associated with a meaning. In BIB002 Cohen and Singer propose a sleeping-experts algorithm in which a set of active lexical units called experts predict a document's classification. Experts are groups of co-occurring words bearing a prescribed order but allowing variable gaps (an arbitrary number of words) in between. A master algorithm adaptively learns appropriate weights for each expert during the learning phase and, during the test phase, makes an overall prediction based on the individual experts' predictions and a prescribed threshold. While there may be any number of such experts, only a few active ones actually post predictions on any given example; the remainder are said to be "sleeping" on that example. The paper BIB002 also presents the RIPPER algorithm, which constructs non-linear classifiers that learn lexical units as Boolean functions in the form of conjunctive conditions over words in a document. RIPPER proceeds in two stages. Stage 1 constructs an initial rule set using a variation of IREP (Incremental Reduced Error Pruning ), a context-sensitive algorithm that helps derive a compact set of rules that can be triggered to classify a new document. The algorithm IREP* constructs one rule at a time, removes all examples covered by a new rule, and randomly partitions the uncovered examples into two subsets: two-thirds form a growing set used to add clauses to a rule, and the remaining one-third forms a pruning set used to remove clauses. A rule is expanded by adding conditions that maximize the Relative Information Gain, a factor that measures the growth in the density of positive examples, and then pruned by removing those conditions that maximize the differential between positive and negative examples. Stage 2 optimizes the initial rule set to further improve its accuracy.
Each rule is either (a) revised by growing it further with additional literals, (b) replaced by a new rule that is first grown and then pruned so as to minimize the error of the entire rule set, or (c) retained as is. The final choice depends upon which course of action minimizes a critical parameter called the description length. An adjustable parameter called the loss ratio, defined as the cost of false negatives relative to false positives, trades off recall against precision to guide the learning process and minimize misclassification of new data. The results presented in BIB002 using the AP and TREC-AP corpora [12] for (i) RIPPER, (ii) sleeping-experts algorithms using four-word phrases (E4) and single-word phrases (E1), and (iii) a statistical linear classification algorithm called Rocchio BIB001 , are summarized in Table I for ease of reference. Tests on both corpora reveal that all context-based methods report fewer errors than the statistical approach Ro. Specifically, for the AP Title corpus, both Ri and E4 have higher recall than Ro, and E4 also has better precision than Ro. For the TREC-AP corpus, sleeping experts E4 reports the best recall and precision among all context-based methods. The authors also evaluated RIPPER (Ri), sleeping experts E4, E3 and E1 (with four-, three- and one-word phrases respectively) and Rocchio (Ro) on the Reuters-21578 corpus. Table II shows the micro-averaged breakeven performance index, the point at which precision equals recall. These results clearly indicate the superior performance of context-based methods compared with the statistical approach adopted in Rocchio. They also encouragingly indicate that the largest group of associated words (four, as in sleeping-experts algorithm E4) gives the best results. 3) Syntactic Constructs: NLP makes use of syntactic structures of tokens that encapsulate grammar rules. Structures such as Parts Of Speech (POS) and their combinations can be utilized to derive context.
In BIB003 , the authors studied the efficacy of using complex syntactic linguistic constructs as core features for context-based TC. They used various concatenations of lemma, POS and word dependency or modifier, and also used IREP to build a rule base as described earlier. A newly constructed rule is evaluated against the whole training set and added to the repository only if it reaches a stipulated threshold. The authors applied various combinations of complex syntactic feature sets to large and small classes taken from the Reuters-21578 dataset . Their experimental results reveal that the most complex features do outperform plain words as features, pointing towards their potential to improve the performance of context-sensitive text classification.
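The greedy rule-growing step used by IREP and RIPPER can be sketched with FOIL's relative information gain, the measure of growth in positive-example density mentioned above. The tiny dataset, the candidate pool and the helper names (`foil_gain`, `grow_rule`) are illustrative assumptions, and pruning against a held-out set is omitted:

```python
# Sketch of greedy rule growing: among candidate word conditions, add the
# one maximizing FOIL's relative information gain, until the rule covers
# no negative examples.
import math

def foil_gain(p0, n0, p1, n1):
    """Gain of refining a rule covering (p0 pos, n0 neg) to (p1 pos, n1 neg)."""
    if p1 == 0:
        return float("-inf")
    return p1 * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

def grow_rule(examples, candidates):
    """examples: list of (set_of_words, is_positive). Returns list of conditions."""
    rule = []
    covered = examples
    while True:
        p0 = sum(1 for _, pos in covered if pos)
        n0 = len(covered) - p0
        if p0 == 0 or n0 == 0 or not candidates:
            break   # rule is pure (or nothing left to try)
        best, best_gain = None, 0.0
        for cond in candidates:
            sub = [(w, pos) for w, pos in covered if cond in w]
            p1 = sum(1 for _, pos in sub if pos)
            g = foil_gain(p0, n0, p1, len(sub) - p1)
            if g > best_gain:
                best, best_gain = cond, g
        if best is None:
            break
        rule.append(best)   # conjoin the winning condition
        covered = [(w, pos) for w, pos in covered if best in w]
        candidates = candidates - {best}
    return rule

# Toy usage: "offer" alone separates positives from negatives here
examples = [
    ({"free", "offer"}, True),
    ({"free", "offer", "prize"}, True),
    ({"free", "deal"}, True),
    ({"free", "report"}, False),
    ({"meeting", "notes"}, False),
]
rule = grow_rule(examples, {"free", "offer", "prize", "meeting"})
```

Note how the gain rewards conditions that raise the density of positives among covered examples, weighted by how many positives remain covered, so a clean but narrow condition can beat a broad but impure one.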
A Review of Text Classification Approaches for E-mail Management <s> 4) Ontology and Semantic labels: <s> This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. Our approach is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identification from unconstrained spoken customer responses. <s> BIB001 </s> A Review of Text Classification Approaches for E-mail Management <s> 4) Ontology and Semantic labels: <s> The Proposition Bank project takes a practical approach to semantic representation, adding a layer of predicate-argument information, or semantic role labels, to the syntactic structures of the Penn Treebank. The resulting resource can be thought of as shallow, in that it does not represent coreference, quantification, and many other higher-order phenomena, but also broad, in that it covers every instance of every verb in the corpus and allows representative statistics to be calculated. We discuss the criteria used to define the sets of semantic roles used in the annotation process and to analyze the frequency of syntactic/semantic alternations in the corpus.
<s> BIB002 </s> A Review of Text Classification Approaches for E-mail Management <s> 4) Ontology and Semantic labels: <s> The natural language processing community has recently experienced a growth of interest in domain independent shallow semantic parsing--the process of assigning a Who did What to Whom, When, Where, Why, How etc. structure to plain text. This process entails identifying groups of words in a sentence that represent these semantic arguments and assigning specific labels to them. It could play a key role in NLP tasks like Information Extraction, Question Answering and Summarization. We propose a machine learning algorithm for semantic role parsing, extending the work of Gildea and Jurafsky (2002), Surdeanu et al. (2003) and others. Our algorithm is based on Support Vector Machines which we show give large improvement in performance over earlier classifiers. We show performance improvements through a number of new features designed to improve generalization to unseen data, such as automatic clustering of verbs. We also report on various analytic studies examining which features are most important, comparing our classifier to other machine learning algorithms in the literature, and testing its generalization to a new test set from a different genre. On the task of assigning semantic labels to the PropBank (Kingsbury, Palmer, & Marcus, 2002) corpus, our final system has a precision of 84% and a recall of 75%, which are the best results currently reported for this task. Finally, we explore a completely different architecture which does not require a deep syntactic parse. We reformulate the task as a combined chunking and classification problem, thus allowing our algorithm to be applied to new languages or genres of text for which statistical syntactic parsers may not be available. <s> BIB003
In , the authors propose superimposing concepts derived from background knowledge onto the classical word-vector feature representation of documents, which uses only word stems. Knowledge is derived from an ontology and from context in the form of related words, syntactical patterns, morphological transformations and word sense disambiguation. They use the AdaBoost boosting ML technique BIB001 , whereby simple rules learned by several weak learners are combined according to an additive model. The authors evaluated their approach with experiments on the Reuters, OHSUMED [33] and FAODOC [35] corpora and utilized the WordNet, MeSH and AGROVOC BIB002 ontologies. These experiments reveal consistent improvements in the micro-averaged as well as macro-averaged error rate, precision, recall, F1 measure and breakeven-point scores, compared with classification using term vectors alone. The authors analyze two kinds of concept integration responsible for the observed improvements: (1) lexical-level improvement through multiword-expression detection and synonym conflation, and (2) conceptual-level improvement using ontology structures to generalize, thereby deriving hypernyms and integrating them with word stems. Results further reveal that an appropriate choice of ontology significantly affects the quality and consistency of results. In BIB003 , semantic labels such as who did what to whom, when, where, why, how, etc. are tagged to the syntactic constituents surrounding a predicate. Such shallow semantic parsing of sentences extends well to applications such as question answering, summarization and information extraction. The authors employ SVMs to identify each non-copula verb or predicate in a sentence and tag syntactic constituents with distinct semantic arguments. SVM tuning comprises a pruning process which removes NULL constituents as identified by the first binary classifier. Next, N One-Versus-All (OVA) binary classifiers classify each of the N NON-NULL constituents.
For training and testing, the authors use the PropBank corpus [37], which provides sentences annotated with verb predicates and their syntactic arguments. In their baseline approach, they include features such as the predicate, the Path from constituent to predicate, the Position of a constituent with respect to the predicate, the Head word, etc. Results improved further when many innovative new features, such as verb clustering, named entities and head-word part of speech, were added. The SVM approach reports the best results, with 84% precision and 75% recall. However, it is also observed that the trained system performed poorly in terms of coverage on another corpus, mainly because of domain differences and because the range of some of the important features, such as the predicate and the Path, is very large. To enable classification that is independent of syntactic parsing, the authors reformulated the semantic labeling problem at the word-by-word level, tagging each word separately. Experiments reveal a distinct fall in the quality of results for the word-by-word approach compared with the constituent-by-constituent approach, reflecting that syntactic context reinforces the learning of semantics.
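The conceptual-level integration described above, superimposing hypernyms from an ontology onto the word-stem term vector, can be sketched in a few lines. The toy `HYPERNYMS` map is an assumed stand-in for a real ontology such as WordNet, MeSH or AGROVOC:

```python
# Sketch of hypernym expansion: each token contributes not only its own
# term-frequency feature but also the features of its ontology ancestors,
# so that e.g. "beef" and "apple" both activate the shared concept "food".
from collections import Counter

HYPERNYMS = {              # child concept -> parent concept (toy ontology)
    "beef": "meat", "pork": "meat",
    "meat": "food", "apple": "fruit", "fruit": "food",
}

def expand_with_hypernyms(tokens, max_depth=3):
    """Return term-frequency features over word stems plus ontology ancestors."""
    features = Counter(tokens)
    for tok in tokens:
        cur = tok
        for _ in range(max_depth):
            cur = HYPERNYMS.get(cur)
            if cur is None:
                break
            features[cur] += 1   # superimpose the more general concept
    return features

# Toy usage: both tokens generalize up to the shared "food" concept
features = expand_with_hypernyms(["beef", "apple"])
```

The depth cap controls how far generalization climbs the ontology; choosing it (and the ontology itself) matters, matching the observation above that the choice of ontology significantly affects result quality.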
The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Safety in Domestic Robots <s> A robot has to be safe and reliable. An unreliable robot may become the cause of unsafe conditions, high maintenance costs, inconvenience, etc. Over the years, in general safety and reliability areas various assessment methods have been developed, e.g. failure mode and effects analysis, fault tree analysis, and Markovian analysis. In view of these, this paper presents an overview of the most suitable robot safety and reliability assessment techniques. <s> BIB001 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Safety in Domestic Robots <s> SDR-4X is the latest prototype model, which is a small humanoid type robot. We reported the outline of this robot last year. In this paper we discuss more about mechanical system, which is important and original for a small biped entertainment robot, which will be used, in home environment. One technology is the design of actuators alignment in the body, which enables dynamic motion performance. Another technology is the actuator technology, which we originally developed, named intelligent servo actuator (ISA). We explain the specification and the important technical points. Next technology is the sensor system, which supports the high performance of the robot, especially the detection of outside objects, ability of stable walking motion and safe interaction with human. The robot is used in normal home environment, so we should strongly consider the falling-over of the robot. We propose the ideas against falling-over which makes the robot as safe as possible. <s> BIB002 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Safety in Domestic Robots <s> In the immediate future, metrics related to safety and dependability have to be found in order to successfully introduce robots in everyday environments.
The crucial issues needed to tackle the problem of a safe and dependable physical human-robot interaction (pHRI) were addressed in the EURON Perspective Research Project PHRIDOM (Physical Human-Robot Interaction in Anthropic Domains), aimed at charting the new "territory" of pHRI. While there are certainly also "cognitive" issues involved, due to the human perception of the robot (and vice versa), and other objective metrics related to fault detection and isolation, the discussion in this paper will focus on the peculiar aspects of "physical" interaction with robots. In particular, safety and dependability will be the underlying evaluation criteria for mechanical design, actuation, and control architectures. Mechanical and control issues will be discussed with emphasis on techniques that provide safety in an intrinsic way or by means of control components. Attention will be devoted to dependability, mainly related to sensors, control architectures, and fault handling and tolerance. After PHRIDOM, a novel research project has been launched under the Information Society Technologies Sixth Framework Programme of the European Commission. This "Specific Targeted Research or Innovation" project is dedicated to "Physical Human-Robot Interaction: depENDability and Safety" (PHRIENDS). PHRIENDS is about developing key components of the next generation of robots, including industrial robots and assist devices, designed to share the environment and to physically interact with people. The philosophy of the project proposes an integrated approach to the co-design of robots for safe physical interaction with humans, which revolutionizes the classical approach for designing industrial robots – rigid design for accuracy, active control for safety – by creating a new paradigm: design robots that are intrinsically safe, and control them to deliver performance.
This paper presents the state of the art in the field as surveyed by the PHRIDOM project, as well as it enlightens a number of challenges that will be undertaken within the PHRIENDS project. <s> BIB003 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Safety in Domestic Robots <s> The most critical challenge for Personal Robotics is to manage the issue of human safety and yet provide the physical capability to perform useful work. This paper describes a novel concept for a mobile, 2-armed, 25-degree-of-freedom system with backdrivable joints, low mechanical impedance, and a 5 kg payload per arm. System identification, design safety calculations and performance evaluation studies of the first prototype are included, as well as plans for a future development. <s> BIB004 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Safety in Domestic Robots <s> The DDT Project on rescue robots and related technologies was carried out in Japan’s fiscal years 2002–2006 by nationwide researchers, and was organized by International Rescue System Institute. The objective of this project was to develop practical technologies related to robotics as a countermeasure against earthquake disasters, and include robots, intelligent sensors, information equipment, and human interfaces that support emergency responses such as urban search and rescue, particularly victim search, information gathering, and communication. Typical technologies are teleoperated robots for victim search in hazardous disaster areas, and robotic systems with distributed sensors for gathering disaster information to support human decision making. This chapter introduces the objective of this project, and a brief overview of the research results.
<s> BIB005 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Safety in Domestic Robots <s> In this paper, we propose a sophisticated design of human symbiotic robots that provide physical supports to the elderly such as attendant care with high-power and kitchen supports with dexterity while securing contact safety even if physical contact occurs with them. First of all, we made clear functional requirements for such a new generation robot, amounting to fifteen items to consolidate five significant functions such as “safety”, “friendliness”, “dexterity”, “high-power” and “mobility”. In addition, we set task scenes in daily life where support by robot is useful for old women living alone, in order to deduce specifications for the robot. Based on them, we successfully developed a new generation of human symbiotic robot, TWENDY-ONE that has a head, trunk, dual arms with a compact passive mechanism, anthropomorphic dual hands with mechanical softness in joints and skins and an omni-wheeled vehicle. Evaluation experiments focusing on attendant care and kitchen supports using TWENDY-ONE indicate that this new robot will be extremely useful to enhance quality of life for the elderly in the near future where human and robot co-exist. <s> BIB006 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Safety in Domestic Robots <s> The authors introduce an activity of a new working group under ISO/TC184/SC2 which is approved to formulate a new international safety standard “Robots and robotic devices — Safety requirements — Non-medical personal care robot” associated with robots that are allowed to coexist in human environments for the purpose of providing humans with various services. The standard includes risk assessment and risk elimination/reduction information for the design stage of personal care robots. 
The paper briefly delivers the points in the risk assessment and risk elimination/reduction processes, which have been discussed in the working group: Beginning with the scope, the main framework and roadmap of formulating the standard are also described. <s> BIB007 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Safety in Domestic Robots <s> This paper is an overview of the work being performed by the ISO committee TC184/SC2 “Robots and Robotic Devices”. SC2 is developing safety standards for robotic applications in personal and medical care, as well as revising existing industrial robot standards with requirements for new applications. A key driver of the new standards is the need for safety guidelines for human robot interaction, as the new applications involve much more extensive HRI behavior than previous generations of industrial robots. The paper summarizes the content of a revision to ISO 10218 for industrial robots, the development of a new standard ISO/NP 13482 for service robots in personal care, and discusses future work in standards for medical care robots and other areas. <s> BIB008 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Safety in Domestic Robots <s> Power assist systems are expected to bring many benefits in various fields, and some of them have already been introduced in the welfare and manufacturing industries. As power assist systems demand manual contact with a human operator, it is important to prevent hazards that originate from system faults. The objectives of this paper are to introduce a strategy on safety function implementation by means of a case study for a power assist system and to propose an approach for safety function design. This paper describes details of the strategy for Skill-Assist — the power assist system adopted as the experimental platform. 
First, the safety integrity level (SIL) required for Skill-Assist was determined, following which top-down and bottom-up risk assessments were conducted. A safety-related system (SRS) with a fail-safe fault detection device and dual-channel voting architecture was then constructed based on the risk assessment result. A functional safety analysis was performed for the SRS and we found that... <s> BIB009
Recent advances in robotics have led to the growth of robotic application domains, such as medical, military, rescue BIB005, personal care BIB004-BIB006, and entertainment BIB002. Out of these categories, a personal-care robot is defined as a service robot with the purpose of either aiding or performing actions that contribute toward improving the quality of life of humans BIB008. A domestic robot is a personal-care robot, with or without manipulators, that operates in home environments and is often mobile. This cohabitation of domestic robots and humans in the same environment has raised the issue of safety among standardization bodies BIB008, BIB007, research communities [17], and robot manufacturers. As an attribute of dependability, safety is one of the fundamental properties that must be assured if the use of domestic robots is to flourish in the future BIB003. In general, safety in domestic robotics is a broad topic that demands ensuring safety for the robot itself, for the environment, and for the human user, with the last considered the most important requirement. In a robotic system where human interaction involves a certain risk, it is important to design robots carefully, considering the famous Murphy's law: "If something can go wrong, it will." The standard safety requirement used in robotics includes a three-step safety guideline: 1) risk assessment, 2) risk elimination and reduction, and 3) validation methods BIB008, BIB007. The primary risk assessment step identifies a list of tasks, environmental conditions, and potential hazards that should be considered during system design. Different techniques for performing risk assessment to identify and methodically analyze faults in robotic systems are presented in BIB001 as well as in the International Organization for Standardization (ISO) 12100 standard [27].
The subsequent risk elimination and reduction step is itself an iterative three-step process: safe design to avoid or minimize possible risks, protection mechanisms for risks that cannot be avoided by design, and, finally, a warning to the user in case both design and protection fail. The final validation step establishes methods that verify whether the desired safety requirements are satisfied by the developed system. Even though all three steps are equally important for designing robots that can be used in human environments, most of the safety-related work in domestic robotics over the past decade has focused on the risk elimination and validation steps in a selected part of the total robotic system. Therefore, this survey leaves out works related to risk assessment and, instead, covers publications that include the risk elimination and validation steps of the standard robotic safety requirement in domestic robotics. For a complex domestic robot that consists of different mechanical, sensing, actuation, control-system, perception, and motion-planning subsystems (Figure 1), the overall safety can be analyzed using the concept of functional safety BIB009. This systematic approach allows a safety evaluation of domestic robots based on the standardized functional safety of each subsystem as well as the interactions that exist between them. Typical functional safety standards that can be used for such an analysis are ISO 13849: Safety of Machinery: Safety-Related Parts of Control Systems and IEC 61508: Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems.
The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Safety Criteria and Metrics <s> Collision safety between humans and robots has drawn much attention since service robots are increasingly being used in human environments. A safe robot arm based on passive compliance can usually provide faster and more reliable responses for dynamic collision than an active one involving sensors and actuators. Since both positioning accuracy and collision safety of the robot arm are equally important, a robot arm should have very low stiffness when subjected to a collision force greater than the injury tolerance, but should otherwise maintain very high stiffness. To implement these requirements, a novel safe joint mechanism (SJM-II) which has much smaller size and lighter weight than the previous model, is proposed in this research. The SJM-II has the advantage of nonlinear spring which is achieved using only passive mechanical elements such as linear springs and a double-slider mechanism. Various analyses and experiments on static and dynamic collisions show that stiffness of the SJM-II is kept very high against an external torque less than the predetermined threshold torque, but abruptly drops when the input torque exceeds this threshold, thereby guaranteeing positioning accuracy and collision safety. Furthermore, a robot arm with two SJM-IIs is verified to achieve collision safety in 2D space. <s> BIB001 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Safety Criteria and Metrics <s> Enabling robots to safely interact with humans is an essential goal of robotics research. The developments achieved over the last years in mechanical design and control made it possible to have active cooperation between humans and robots in rather complex situations. In these terms, safe behavior of the robot even under worst-case situations is crucial and forms also a basis for higher level decisional aspects. 
In order to quantify what safe behavior really means, the definition of injury, as well as understanding its general dynamics are essential. This insight can then be applied to design and control robots such that injury due to robot-human impacts is explicitly taken into account. In this paper we approach the problem from a medical injury analysis point of view in order to formulate the relation between robot mass, velocity, impact geometry, and resulting injury qualified in medical terms. We transform these insights into processable representations and propose a motion supervisor that utilizes injury knowledge for generating safe robot motions. The algorithm takes into account the reflected inertia, velocity, and geometry at possible impact locations. The proposed framework forms a basis for generating truly safe velocity bounds that explicitly consider the dynamic properties of the manipulator and human injury. <s> BIB002
Domestic robots require meaningful criteria and metrics to analyze safety and to define injury levels of potentially hazardous conditions. Safety criteria define the desired design requirements, while quantitative safety metrics, defined based on those criteria, are essential for providing insightful safety improvement ideas, comparing successful system implementations, and assisting system accreditation. Safety metrics are, in general, used to identify what injury a robot might cause BIB002. The safety criteria are mostly part of an international standard that is deemed acceptable by the manufacturing industry as well as the research community. A standard framework used when dealing with safety in robotics is a risk- or injury-based safety requirement, which requires a system-level analysis of safety. The ISO uses this approach to release sets of safety requirements for robots, such as ISO 10218-1: Safety Requirements for Robots in Manufacturing Industry. These standards are updated when needed; in the case of ISO 10218-1, a revised standard was released that deals with the emerging requirement in industrial robotics to share a workspace with humans. An ISO committee has also addressed the issue of safety in personal robots and released an advanced draft of its work, ISO 13482: Safety Requirements: Non-Medical Personal Care Robot [32]. A number of hazards and risks are included in the safety standard for domestic robots, but contact-based injuries can be divided into two types: 1) quasistatic clamping and 2) dynamic loading. Different subclasses of these injuries exist, depending on the constraint on the human, the singularity state of the robot, and the sharpness of the contact area. The dynamic-loading collision between a robot and a human can be either a blunt impact or a sharp-edge contact, with possible injuries ranging from soft-tissue contusions and bruises to more serious bodily harm.
Collision analysis and modeling for the investigation of injury measurement were presented in BIB001, while the details of soft-tissue injuries, such as penetrations and stabs, were discussed using experimental tests. There is no universally accepted safety metric that measures these injuries, but a number of approaches have been presented. The common safety metrics used to measure collision and clamping risks in domestic robotics can be categorized into different groups based on the parameters they use: acceleration based, force based, energy/power based, or based on other parameters.
The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Acceleration Based <s> Play has physical, social, emotional and cognitive benefits for children.1 It has been suggested that opportunity for spontaneous play may be all that is needed to increase young children’s levels of physical activity,2 an appealing concept in view of our … <s> BIB001 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Acceleration Based <s> Safety is a critical characteristic for robots designed to operate in human environments. This paper presents the concept of hybrid actuation for the development of human- friendly robotic systems. The new design employs inherently safe pneumatic artificial muscles augmented with small electrical actuators, human-bone-inspired robotic links, and newly designed distributed compact pressure regulators. The modularization and integration of the robot components enable low complexity in the design and assembly. The hybrid actuation concept has been validated on a two-degree-of-freedom prototype arm. The experimental results show the significant improvement that can be achieved with hybrid actuation over an actuation system with pneumatic artificial muscles alone. Using the manipulator safety index (MSI), the paper discusses the safety of the new prototype and shows the robot arm safety characteristics to be comparable to those of a human arm. <s> BIB002 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Acceleration Based <s> A hermetic refrigeration compressor has a flat valve plate closing off a cylinder bore. The valve plate has an elongated recess on the outer side around the discharge port and a discharge valve assembly, comprising a flat reed valve and backing spring, fits within the recess beneath an overlying valve stop which engages the bottom of the recess at each end. 
The recess, the reed valve, the backing spring, and the valve stop are so configured that they can be assembled only in the correct configuration. The valve stop is held in place by an arcuate retaining spring having ends engaging notches in the valve plate and a projecting boss on a cylinder head defining a discharge valve plenum engages the retaining spring to press the spring and the valve plate into position within the recess to retain all of the parts in operating position. <s> BIB003 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Acceleration Based <s> It is evident that industrial robots are able to generate forces high enough to injure a human. To prevent this, robots have to work within a restricted space that includes the entire region reachable by any part of the robot. However, more and more robot applications require human intervention due to superior abilities for some tasks performance. In this paper we introduce danger/safety indices which indicate a level of the risk during interaction with robots, which are based on a robot's critical characteristics and on a human's physical and mental constrains. Collision model for a 1 DOF robot and "human" was developed. Case study with further simulations was provided for the PUMA 560 robot. <s> BIB004 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Acceleration Based <s> The DLR Lightweight Robot III (LWR-III) developed at the German Aerospace Center (DLR) is characterized by low inertial properties, torque sensing in each joint, and a load to weight ratio similar to humans. These properties qualify it for applications requiring high mobility and direct interaction with human users or uncertain environments. An essential requirement for such a robot is that it must under no circumstances pose a threat to the human operator. 
To actually quantify the potential injury risk emanating from the manipulator, impact test were carried out using standard automobile crash-test facilities at the ADAC (German automobile club). Furthermore, we introduce our analysis for soft-tissue injury based on swine experiments with the LWR-III. This paper gives an overview about the variety of investigations necessary to provide a safety analysis of a human-friendly robot based on biomechanical injury results. We believe this paper can provide a guideline for the robotics community for future qualifications of other robots and thus serve as a key component to bring robots in our everyday life. <s> BIB005
The most widely used safety metric in domestic robotics for injuries due to collision is the acceleration-based head injury criterion (HIC). The metric is derived from the human biomechanics data given in the Wayne State tolerance curve and is used in biomechanics studies and accident research in different fields, such as the automotive industry. It is a measure of the head acceleration for an impact that lasts for a certain duration and is given mathematically as

$$ \mathrm{HIC} = \max_{t_1,\,t_2}\;(t_2 - t_1)\left(\frac{1}{t_2 - t_1}\int_{t_1}^{t_2} a(\tau)\,d\tau\right)^{2.5}, \qquad (1) $$

where $a(\tau)$ is the head acceleration normalized with respect to gravity $g$, and the measurement duration $\Delta t = t_2 - t_1$ is often taken as 15 ms to investigate head concussion injuries. HIC has been used in robotics as a severity indicator for potential injury due to a blunt impact to the human head. Such collisions typically exhibit high-frequency behavior above the controller bandwidth and, thus, are mainly influenced by the link dynamics and, for stiff robots, also by the motor dynamics. HIC-based safety requirements have been used to identify dynamic constraints on a robot, and the resulting constraint information then defines a performance metric that allows a better tradeoff between performance and safety. The effect of different robot parameters on HIC has also been analyzed and experimentally verified. This insightful work included experimental results with different robots and concluded that, measured according to HIC, a robot of arbitrary mass cannot severely hurt a human head because of its low operating speed. Haddadin et al. BIB005 applied a number of safety criteria while investigating the safety of a manipulator at a standard crash-test facility. They conducted a meticulous safety analysis of the manipulator based on human biomechanics and presented quantitative experimental results using different safety metrics for the head, neck, and chest areas. For unconstrained blunt impacts, they used HIC as the metric for severe head injury.
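The HIC definition above can be sketched numerically. The following is a minimal illustration, not a certified implementation: the brute-force scan over sub-intervals, the trapezoidal integration, and the 15 ms default window are assumptions of this sketch, and a real evaluation would follow the relevant crash-test standard.

```python
def hic(t, a, window=0.015):
    """Head Injury Criterion of a sampled acceleration trace.

    t: sample times [s]; a: head acceleration in multiples of g;
    window: longest interval considered (15 ms for the HIC15 variant).
    Scans all sub-intervals [t_i, t_j] no longer than `window` and
    returns the maximum of (t_j - t_i) * (mean acceleration)**2.5.
    """
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > window + 1e-12:  # tolerance for floating-point time steps
                break
            # trapezoidal integral of a over [t_i, t_j]
            integral = sum(0.5 * (a[k] + a[k + 1]) * (t[k + 1] - t[k])
                           for k in range(i, j))
            best = max(best, dt * (integral / dt) ** 2.5)
    return best

# Illustrative trace: a constant 100 g pulse sustained for the full
# 15 ms window gives HIC = 0.015 * 100**2.5 = 1500.
t = [k * 0.001 for k in range(21)]  # 0..20 ms in 1 ms steps
a = [100.0] * len(t)
print(round(hic(t, a)))             # → 1500
```

The quadratic scan is fine for the short traces used in impact tests; production code would vectorize the inner integral.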
While reviewing different topics in physical human-robot interaction, the need was noted for a new type of safety index in robotics other than HIC, because the types of injury and the operating speeds in robotics differ from those in the automotive industry, where HIC is a standardized crash-test metric. Other metrics whose results are interpreted in terms of HIC have also been reported in the literature. A metric based on HIC known as the manipulator safety index (MSI), which is a function of the effective inertia of the manipulator, was proposed in BIB001. After identifying effective inertia as the main factor in manipulator safety, this index compares the safety of different manipulators by analyzing their effective inertia under constant impact velocity and interface stiffness. This metric was used to validate the safety of a manipulator after design modifications in BIB002 and BIB003. Three danger indexes whose results are interpreted based on HIC were developed and investigated in BIB004. The work investigated force-, distance-, and acceleration-related danger indexes on a model to give a quantitative measure of the severity and likelihood of injury. The authors proposed a danger index that is a linear combination of the above quantities and considers the speed, effective mass, stiffness, and impact force.
The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Force Based <s> We propose the world's first general method of evaluating safety for human-care robots. In the case of a careless collision between a robot and a human, impact force and impact stress are chosen as evaluation measures, and a danger-index is defined to quantitatively evaluate the effectiveness of each safety strategy used for design and control. As a result, this proposed method allows us to assess the contribution of each safety strategy to the overall safety performance of a human-care robot. In addition, a new type of three-dimensional robot simulation system for danger evaluation is constructed on a PC. The system simplifies the danger evaluation of both the design and control of various types of human-care robots to quantify the effectiveness of various safety strategies. <s> BIB001 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Force Based <s> If robots are to be introduced into the human world as assistants to aid a person in the completion of a manual task two key problems of today's robots must be solved. The human-robot interface must be intuitive to use and the safety of the user with respect to injuries inflicted by collisions with the robot must be guaranteed. In this paper we describe the formulation and implementation of a control strategy for robot manipulators which provides quantitative safety guarantees for the user of assistant-type robots. We propose a control scheme for robot manipulators that restricts the torque commands of a position control algorithm to values that comply to preset safety restrictions. These safety restrictions limit the potential impact force of the robot in the case of a collision with a person. Such accidental collisions may occur with any part of the robot and therefore the impact force not only of the robot's hand but of all surfaces is controlled by the scheme. 
The integration of a visual control inter... <s> BIB002 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Force Based <s> Collision safety between humans and robots has drawn much attention because service robots are increasingly being used in human environments. The design of a service robot usually requires reliable collision analysis based on appropriate safety criterion. Previous safety criteria are too restrictive or generous with respect to collision injury. This paper proposes a new safety criterion for physical human-robot interaction. Injury tolerance related to the fracture force of the thyroid and cricoid cartilage in the neck is more suitable to measure injury to humans from robots than criteria representing serious injury in car crash tests. To accurately evaluate robot collision safety, a novel collision model between a human and a robot is established which include the stiffness of the neck and covering, and the input torque of the robot. The injury criteria suggested in this paper were verified to estimate the safety of service robots. Various collision analyses based on this criterion are conducted, and thus the design parameters of robot arms can be adjusted to enhance safety. <s> BIB003 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Force Based <s> The DLR Lightweight Robot III (LWR-III) developed at the German Aerospace Center (DLR) is characterized by low inertial properties, torque sensing in each joint, and a load to weight ratio similar to humans. These properties qualify it for applications requiring high mobility and direct interaction with human users or uncertain environments. An essential requirement for such a robot is that it must under no circumstances pose a threat to the human operator. 
To actually quantify the potential injury risk emanating from the manipulator, impact test were carried out using standard automobile crash-test facilities at the ADAC (German automobile club). Furthermore, we introduce our analysis for soft-tissue injury based on swine experiments with the LWR-III. This paper gives an overview about the variety of investigations necessary to provide a safety analysis of a human-friendly robot based on biomechanical injury results. We believe this paper can provide a guideline for the robotics community for future qualifications of other robots and thus serve as a key component to bring robots in our everyday life. <s> BIB004
The other category of safety metrics for contact injuries is the force-based criteria, which consider that excessive force is the cause of potential injuries and should therefore be limited. Providing a detailed analysis of force-based criteria, Ikuta et al. BIB001 used the minimum impact force that can cause injury to define a unitless danger index for quantifying safety strategies. The danger index $\alpha$ of a robot is defined as

$$ \alpha = \frac{F}{F_c}, $$

where $F_c$ is the minimum critical force that can cause injury to a human and $F$ is the possible impact force of the robot. Quantifying safety with this extendable metric was used to achieve a safer design and an improved control strategy. On the mechanical design side, the index was used to relate safety to design modifications such as low mass, soft covering, joint compliance, and surface friction, or a combination of them. Three safety requirements that are essential in human-robot interaction are proposed in BIB002: 1) human-robot coexistence, 2) understandable and predictable motion by the robot, and 3) no injuries to the user. The author then defined a safety metric called the impact potential, based on the maximum impact force that a multiple-degrees-of-freedom (DOF) robotic manipulator might exert during a collision. For the set $P$ of possible impact surfaces on the robot, the impact potential is given as

$$ \mathrm{IP} = \sup_{p \in P} r_p, $$

where $r_p$ is the worst-case impact force at contact point $p$ on the surface of the robot. Due to the low HIC values observed even for heavier robots as a result of low collision velocities, BIB003 proposed using the minimum forces that cause damage to different body parts as a safety metric. Since different body parts have different tolerance limits, the limit for neck injuries was chosen as the working criterion, as it has the lowest value. A force-based safety criterion was also used to investigate the safety of a pneumatic muscle-actuated 2-DOF manipulator because HIC, according to the authors, does not provide an absolute measure of danger.
While analyzing the safety of a manipulator with respect to injuries at different parts of the body, BIB004 used the maximum bending torque as the neck injury metric and verified safety for quasistatic constrained impacts at different body parts using the maximum contact force as a metric, whose allowed tolerances for different body parts are known.
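Both force-based indices discussed above reduce to one-line computations. The sketch below uses the formulas as stated; the numbers in the usage example (a 250 N critical force, three candidate contact points) are illustrative assumptions, not tolerance values from the cited studies.

```python
def danger_index(impact_force, critical_force):
    """Ikuta-style danger index alpha = F / F_c.

    Values below 1 indicate that the possible impact force F stays under
    the minimum critical force F_c known to cause injury.
    """
    return impact_force / critical_force

def impact_potential(worst_case_forces):
    """Impact potential: supremum of the worst-case impact force r_p
    over a finite set of candidate contact points p on the robot."""
    return max(worst_case_forces)

# Illustrative numbers only: alpha = 100/250 = 0.4 (nominally safe),
# and the impact potential over three contact points is the largest force.
print(danger_index(100.0, 250.0))             # → 0.4
print(impact_potential([80.0, 120.0, 95.0]))  # → 120.0
```

In practice the candidate forces $r_p$ would come from a worst-case dynamic model of the manipulator, not from a fixed list.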
The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Energy/Power Based <s> Abstract Rahimi, M., 1986. Systems safety for robots: An energy barrier analysis. Journal of Occupational Accidents , 8: 127–138. The need for a comprehensive study of hazards caused by robot work environments is stressed. Based on previous data and robot accident reports, major factors contributing to robot accidents are classified and listed. System safety is introduced as an appropriate approach to analyze safety of semiautomated and automated robot systems. A general procedure for conducting system safety analysis is presented. Energy Barrier Analysis (EBA), a qualitative system safety technique, is applied to a general model of human—robot system. Major concepts of EBA are integrated in a stepwise approach for evaluating and designing robot safety systems. As the result of this application, specific solutions and recommendations are discussed. <s> BIB001 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Energy/Power Based <s> A significant majority of cervical spine biomechanics studies has applied the external loading in the form of compressive force vectors. In contrast, there is a paucity of data on the tensile loading of the neck structure. These data are important as the human neck not only resists compression but also has to withstand distraction due to factors such as the anatomical characteristics and loading asymmetry. Furthermore, evidence exists implicating tensile stresses to be a mechanism of cervical spinal cord injury. Recent advancements in vehicular restraint systems such as air bags may induce tension to the neck in adverse circumstances. Consequently, this study was designed to develop experimental methodologies to determine the biomechanics of the human cervical spinal structures under distractive forces. A part-to-whole approach was used in the study. 
Four experimental models from 15 unembalmed human cadavers were used to demonstrate the feasibility of the methodology. Structures included isolated cervical spinal cords, intervertebral disc units, skull to T3 preparations, and intact unembalmed human cadavers. Axial tensile forces were applied, and the failure load and distraction were recorded. Stiffness and energy absorbing characteristics were computed. Maximum forces for the spinal cord specimens were the lowest (278 N +/- 90). The forces increased for the intervertebral disc (569 N +/- 54). skull to T3 (1555 N +/- 459), and intact human cadaver (3373 N +/- 464) preparations, indicating the load-carrying capacities when additional components are included to the experimental model. The experimental methodologies outlined in the present study provide a basis for further investigation into the mechanism of injury and the clinical applicability of biomechanical parameters. <s> BIB002 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Energy/Power Based <s> Recently, several cases of mild traumatic brain injury (MTBI) to American professional football players have been reconstructed using instrumented Hybrid III anthropomorphic test dummies (ATDs). The translational and rotational acceleration responses of injured and uninjured players' heads have been documented. The acceleration data have been processed according to all current head injury assessment functions including the Gadd Severity Index (GSI), Head Injury Criterion (HIC) and GAMBIT among others. In this study, a new hypothesis is propounded that the threshold for head injury will be exceeded if the rate of change of kinetic energy of the head exceeds some limiting value. A functional relation is proposed, which includes all six degrees of motion and directional sensitivity characteristics, relating the rate of change of kinetic energy to the probability of head injury. 
The maximum value that the function achieves during impact is the maximum power input to the head and serves as an index by which the probability of head injury can be assessed. <s> BIB003 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Energy/Power Based <s> The mechanical properties of the adult human skull are well documented, but little information is available for the infant skull. To determine the age-dependent changes in skull properties, we tested human and porcine infant cranial bone in three-point bending. The measurement of elastic modulus in the human and porcine infant cranial bone agrees with and extends previous published data [McPherson, G. K., and Kriewall, T. J. (1980), J. Biomech., 13, pp. 9‐16] for human infant cranial bone. After confirming that the porcine and human cranial bone properties were comparable, additional tensile and three-point bending studies were conducted on porcine cranial bone and suture. Comparisons of the porcine infant data with previously published adult human data demonstrate that the elastic modulus, ultimate stress, and energy absorbed to failure increase, and the ultimate strain decreases with age for cranial bone. Likewise, we conclude that the elastic modulus, ultimate stress, and energy absorbed to failure increase with age for sutures. We constructed two finite element models of an idealized one-month old infant head, one with pediatric and the other adult skull properties, and subjected them to impact loading to investigate the contribution of the cranial bone properties on the intracranial tissue deformation pattern. The computational simulations demonstrate that the comparatively compliant skull and membranous suture properties of the infant brain case are associated with large cranial shape changes, and a more diffuse pattern of brain distortion than when the skull takes on adult properties. 
These studies are a fundamental initial step in predicting the unique mechanical response of the pediatric skull to traumatic loads associated with head injury and, thus, for defining head injury thresholds for children. <s> BIB004 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Energy/Power Based <s> Rescue robotics is an important steppingstone in the scientific challenge to create autonomous systems. There is a significant market for rescue robots, which have unique features that allow a fruitful combination of application-oriented developments and basic research. Unlike other markets for advanced robotics systems like service robots, the rescue robotics domain benefits from the fact that there is a human in the loop, which allows a stepwise transition from dumb teleoperated devices to truly autonomous systems. Current teleoperated devices are already very useful in this domain and they benefit from any bit of autonomy added. Human rescue workers are a scarce resource at disaster scenarios. A single operator should, hence, ideally supervise a multitude of robots. We present results from the rescue robots at the International University Bremen in a core area supporting autonomy, i.e., mapping. <s> BIB005
Empirical fits other than the HIC approximations have also been suggested for the Wayne State data; one of them reduces the power in (1) from 2.5 to two BIB003 . Assuming an effectively constant average acceleration a over the impact duration T, the tolerance curve can then be written as a^2 T = a V_D <= C, where V_D is the change in velocity of the head and C is a tolerance constant. According to BIB005 , possible injury to a human is proportional to the rate at which kinetic energy is transferred to the body during impact. Building on this observation, Newman et al. introduced a power-based safety metric called head impact power (HIP), derived from experimental impact reconstructions. The proposed HIP risk curve relates the probability of a concussion injury to the amount of power transferred during a collision. The rate of energy transfer was also suggested as a safety metric, the viscous criterion, for injuries to constrained organs [50] . According to this criterion, injury to human organs is proportional to the product of the compression and the rate of compression. Uncontrolled excess energy was likewise suggested as a cause of robot accidents BIB001 , and various experimental tests on the dynamic response of human biomechanics during impact were performed to define energy-based safety metrics usable in robotics. Energy limits that cause failure of the cranial bone in adult and infant subjects were identified in and BIB004 , respectively, where the skull-fracture energy was reported per unit volume of the skull for an adult and a six-month-old infant. The amount of energy that can cause fracture of the neck bones and spinal injuries was determined in BIB002 , which also reports an average energy value for damage to the spinal cord of an adult human. Since these energy-based tolerance values are obtained from severe fracture injuries, they cannot be used directly as acceptable safety thresholds for domestic robots.
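As an illustration, the HIP metric discussed above can be computed from recorded head kinematics as the peak of m * (a . v) plus the rotational terms I * alpha * omega. The sketch below is a minimal version, not Newman et al.'s exact processing pipeline: the head mass and principal inertias are illustrative placeholder values, and velocities are recovered by simple rectangle-rule integration.

```python
import numpy as np

def head_impact_power(t, lin_acc, ang_acc, m=4.5, inertia=(0.016, 0.024, 0.022)):
    """Peak head impact power (HIP) from sampled head kinematics.

    t        : (N,) time samples [s]
    lin_acc  : (N, 3) linear acceleration of the head centre of mass [m/s^2]
    ang_acc  : (N, 3) angular acceleration about the principal axes [rad/s^2]
    m        : head mass [kg] -- illustrative value, not from the cited study
    inertia  : principal moments of inertia [kg m^2] -- illustrative values

    HIP(t) = m * sum(a_i * v_i) + sum(I_i * alpha_i * omega_i); the metric is
    the maximum of this instantaneous power over the impact.
    """
    dt = np.gradient(t)
    # recover velocities by rectangle-rule integration (zero initial velocity)
    lin_vel = np.cumsum(lin_acc * dt[:, None], axis=0)
    ang_vel = np.cumsum(ang_acc * dt[:, None], axis=0)
    power = (m * np.sum(lin_acc * lin_vel, axis=1)
             + np.sum(np.asarray(inertia) * ang_acc * ang_vel, axis=1))
    return float(power.max())
```

A risk curve such as the one proposed by Newman et al. would then map the returned peak power to a probability of concussion.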
The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Other Parameter Based <s> In this paper, we discuss a method to achieve safe autonomous robot system coexistence (or Kyozon in Japanese). First, we clarify human pain tolerance and point out that a robot working next to an operator should be covered with a soft material. Thus, we propose a concept and a design method of covering a robot with a viscoelastic material to achieve both impact force attenuation and contact sensitivity, keeping within the human pain tolerance limit. We stress the necessity of a simple robot system from the viewpoint of reliability. We provide a method of sensing contact force without any force sensors by monitoring the direct drive motor current and velocity of the robot. Finally, we covered a two-link arm manipulator with the optimum soft covering material, and applied the developed manipulator system to practical coexistence tasks. <s> BIB001 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Other Parameter Based <s> Safety is a critical success factor for consumer acceptance of domestic robotic products. Some researchers have adopted the head injury criterion (HIC) as an absolute safety norm. However, this norm covers only part of the safety risk. In many cases skin damage (e.g. cuts, wounds, etc.) can be a more serious risk. This article shows how to work towards a novel absolute safety measure for evaluating the shape and material choices of a robotic design w.r.t. skin damage. The proposed safety norm evaluates the situation of an unintended uncontrolled collision of a robotic part against a human. Maximum curvatures of the exterior robotic shape are approximated as a sphere in contact with the human skin (locally approximated as a flat surface). This local spherical approximation of the impact contact is used to predict maximum tensile stress during impact of the robotic part on the human.
Robotic designs that include points for which the tensile strength of the skin is exceeded will cause at least skin fracture and are therefore considered intrinsically unsafe. While generally applicable, this paper specifically addresses how to apply the proposed norm in the case of safety evaluation of robotic manipulators. <s> BIB002 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Other Parameter Based <s> Our work is focused on cooperation of a small industrial robot and human operator where collision is expected only between the robot end-effector and the lower arm of the human worker. To study the effect of the impact between the robot and man, a passive mechanical lower arm (PMLA) was developed. The investigation presented in this paper evaluates whether the PMLA is a sufficiently accurate emulation system of a passive human lower arm. The same experiments were performed with the PMLA and with human volunteers. The results of both investigations were compared and evaluated to determine whether the PMLA can competently replace human volunteers in dangerous future investigations. <s> BIB003 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Other Parameter Based <s> Modeling of low severity soft-tissue injury due to unwanted collisions of a robot in collaborative settings is an important aspect to be treated in safe physical Human-Robot Interaction (pHRI). Up to now, safety evaluations for pHRI were mainly conducted by using safety criteria related to impact forces and head accelerations. These indicate severe injury in the robotics context and leave out low severity injury such as contusions and lacerations. However, for the design of an intrinsically safer robot arm, a reliable evaluation of the collision between a human and a robot that is based on skin injury criteria is essential.
In this paper, we propose a novel human-robot collision model with and without covering, which is based on the impact stress distribution. The reliability of the proposed collision model is verified by a comparison with various cadaver experiments taken from existing biomechanical literature. Since the stress characteristics acting on the human head can be analyzed with this new collision model, the occurrence of certain soft-tissue injury can be estimated. Furthermore, the method serves to select appropriate covering parameters, such as elastic modulus and thickness, by evaluating the chosen skin injury indices. <s> BIB004 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Other Parameter Based <s> Enabling robots to safely interact with humans is an essential goal of robotics research. The developments achieved over recent years in mechanical design and control made it possible to have active cooperation between humans and robots in rather complex situations. For this, safe robot behavior even under worst-case situations is crucial and forms also a basis for higher-level decisional aspects. For quantifying what safe behavior really means, the definition of injury, as well as understanding its general dynamics, are essential. This insight can then be applied to design and control robots such that injury due to robot-human impacts is explicitly taken into account. In this paper we approach the problem from a medical injury analysis point of view in order to formulate the relation between robot mass, velocity, impact geometry and resulting injury qualified in medical terms. We transform these insights into processable representations and propose a motion supervisor that utilizes injury knowledge for generating safe robot motions. The algorithm takes into account the reflected inertia, velocity, and geometry at possible impact locations.
The proposed framework forms a basis for generating truly safe velocity bounds that explicitly consider the dynamic properties of the manipulator and human injury. <s> BIB005
Other safety metrics proposed for use in domestic robotics are based on factors such as pain tolerance, maximum stress, and energy density limits. The human pain tolerance limit for clamping or sudden collisions was used as a metric for safe robot design in BIB001 : pain tolerance limits for different parts of the body were used to identify the admissible force during normal operation, and a soft robot covering was designed accordingly. A strong correlation between the pain felt by a human and the impact energy density was found in an experimental investigation of collisions between a robotic manipulator and a human BIB003 . Skin injury is the focus of BIB002 , which provides a safety metric that evaluates a robot design based on the shape and material of its cover. Using Hertzian contact models to represent the impact, the proposed safety norm identifies safe design choices by evaluating the maximum stress induced in the skin when a point on the robotic cover strikes a human body. Focusing on soft-tissue injuries, BIB004 also developed a Hertz-contact-based collision model between a covered robot and a human head to analyze laceration and contusion injuries; using the tensile stress and energy density limits of the skin as safety criteria, the authors then proposed allowable values of elastic modulus and thickness for the robot covering. Soft-tissue injuries that might result from sharp-edge contact between robot-operated tools and a human user were assessed using medical classifications in BIB005 . Instead of using a safety metric to define the observed injury level, this experimental study defined a risk curve that directly relates the observed injury to the mass, velocity, and geometry of the operating robot.
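The Hertzian check used in BIB002 and BIB004 can be sketched as follows. A sphere of radius R (the local curvature of the robot cover) pressed against a flat skin surface develops a peak contact pressure p0 = 3F/(2*pi*a^2), and the maximum radial tensile stress in the skin, (1 - 2*nu)*p0/3 at the edge of the contact circle, is compared against the skin's tensile strength. All material constants below (skin modulus, Poisson ratio, strength) are illustrative placeholders, not values from the cited studies.

```python
import math

def hertz_max_tensile_stress(F, R, E_robot, nu_robot, E_skin, nu_skin):
    """Peak radial tensile stress in the skin for a sphere-on-flat Hertz contact.

    F : contact force [N]; R : local radius of the robot cover [m];
    E_* / nu_* : elastic moduli [Pa] and Poisson ratios of the two bodies.
    """
    # effective contact modulus: 1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2
    E_star = 1.0 / ((1 - nu_robot**2) / E_robot + (1 - nu_skin**2) / E_skin)
    a = (3 * F * R / (4 * E_star)) ** (1.0 / 3.0)   # contact radius
    p0 = 3 * F / (2 * math.pi * a * a)              # peak contact pressure
    # maximum tensile (radial) stress occurs at the edge of the contact circle
    return (1 - 2 * nu_skin) / 3.0 * p0

def is_intrinsically_safe(F, R, E_robot, nu_robot,
                          E_skin=0.42e6, nu_skin=0.38, skin_strength=20e6):
    """True if the predicted tensile stress stays below an (assumed) skin strength."""
    return hertz_max_tensile_stress(F, R, E_robot, nu_robot, E_skin, nu_skin) < skin_strength
```

As the norm in BIB002 suggests, blunter cover geometry (larger R) lowers the predicted tensile stress for the same contact force.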
The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> A method of actively controlling the apparent stiffness of a manipulator end effector is presented. The approach allows the programmer to specify the three translational and three rotational stiffnesses of a frame located arbitrarily in hand coordinates. Control of the nominal position of the hand then permits simultaneous position and force control. Stiffness may be changed under program control to match varying task requirements. A rapid servo algorithm is made possible by transformation of the problem into joint space at run time. Application examples are given. <s> BIB001 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> A simple mechanical method for passively compensating for gravitationally induced joint torques is presented. This energy-conservative gravity-compensation method is suitable for a variety of manipulator designs. With cables and appropriate pulley profiles, changes in potential energy associated with link motion through a gravity field can be mapped to changes in strain energy storage in spring elements. The resulting system requires significant energy input only for acceleration and deceleration or to resist external forces. A testbed with both single- and double-link configurations has demonstrated the efficiency and accuracy of this gravity compensation method, as well as its robustness under dynamic loading conditions. > <s> BIB002 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> This paper describes the performance requirements and mechanical design of an arm designed and built at MIT for whole-arm manipulation.
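The active stiffness control scheme summarized in the first abstract above maps a programmer-specified Cartesian stiffness into joint torques through the manipulator Jacobian, tau = J^T K (x_d - x). A minimal planar sketch of this mapping follows; the link lengths and stiffness values are illustrative, and only the 2-DOF translational case is shown.

```python
import numpy as np

def planar_2link_jacobian(q, l1=0.3, l2=0.25):
    """Position Jacobian (2x2) of a planar two-link arm; link lengths illustrative."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def stiffness_control_torque(q, x, x_des, K_cart):
    """Joint torques that make the end effector behave like a Cartesian spring
    K_cart anchored at x_des: tau = J^T K (x_des - x)."""
    J = planar_2link_jacobian(q)
    return J.T @ (K_cart @ (x_des - x))
```

Because the mapping is re-evaluated at the current joint configuration, the same Cartesian stiffness specification yields different joint-space stiffness across the workspace, which is the run-time joint-space transformation the abstract refers to.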
Whole-arm manipulation began as a research objective to explore the benefits of manipulating objects with all surfaces of a robotic manipulator — not just the fingertips of an attached robotic hand. The need for robust environment contact by all surfaces of the robotic hardware prompted a re-evaluation of traditional manipulator design requirements and spurred the invention of new transmission mechanisms for robots. <s> BIB003 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> This paper describes the Active Electromechanical Compliance (AEC) system that was developed for the Jau-JPL anthropomorphic robot. The AEC system imitates the functionality of the human muscle's secondary function, which is to control the joint's stiffness: AEC is implemented through servo controlling the joint drive train's stiffness. The control strategy, controlling compliant joints in teleoperation, is described. It enables automatic hybrid position and force control through utilizing sensory feedback from joint and compliance sensors. This compliant control strategy is adaptable for autonomous robot control as well. Active compliance enables dual arm manipulations, human-like soft grasping by the robot hand, and opens the way to many new robotics applications. <s> BIB004 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> It is traditional to make the interface between an actuator and its load as stiff as possible. Despite this tradition, reducing interface stiffness offers a number of advantages, including greater shock tolerance, lower reflected inertia, more accurate and stable force control, less inadvertent damage to the environment, and the capacity for energy storage. As a trade-off, reducing interface stiffness also lowers zero motion force bandwidth. 
In this paper, the authors propose that for natural tasks, zero motion force bandwidth isn't everything, and incorporating series elasticity as a purposeful element within the actuator is a good idea. The authors use the term elasticity instead of compliance to indicate the presence of a passive mechanical spring in the actuator. After a discussion of the trade-offs inherent in series elastic actuators, the authors present a control system for their use under general force or impedance control. The authors conclude with test results from a revolute series-elastic actuator meant for the arms of the MIT humanoid robot Cog and for a small planetary rover. <s> BIB005 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> In this paper, we discuss a method to achieve safe autonomous robot system coexistence (or Kyozon in Japanese). First, we clarify human pain tolerance and point out that a robot working next to an operator should be covered with a soft material. Thus, we propose a concept and a design method of covering a robot with a viscoelastic material to achieve both impact force attenuation and contact sensitivity, keeping within the human pain tolerance limit. We stress the necessity of a simple robot system from the viewpoint of reliability. We provide a method of sensing contact force without any force sensors by monitoring the direct drive motor current and velocity of the robot. Finally, we covered a two-link arm manipulator with the optimum soft covering material, and applied the developed manipulator system to practical coexistence tasks. <s> BIB006 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> Industrial robots have found great potential in applications to assembly‐line automation. 
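The series elastic actuation concept described in the preceding abstract can be sketched in a few lines: the spring between motor and load doubles as a torque sensor, and a simple proportional loop on the commanded motor velocity regulates the transmitted torque. The spring constant and gain below are illustrative; real implementations add damping and feedforward terms.

```python
def sea_torque(theta_motor, theta_link, k_spring):
    """In a series elastic actuator the spring deflection doubles as a torque
    sensor: tau = k_s * (theta_motor - theta_link)."""
    return k_spring * (theta_motor - theta_link)

def sea_velocity_command(tau_des, theta_motor, theta_link, k_spring, kp=5.0):
    """Proportional torque controller for an SEA (sketch): drive the motor at
    a velocity proportional to the torque error."""
    tau_meas = sea_torque(theta_motor, theta_link, k_spring)
    return kp * (tau_des - tau_meas)
```

With the link held fixed, iterating `theta_motor += sea_velocity_command(...) * dt` drives the transmitted torque to the setpoint with time constant 1/(k_s * kp), illustrating the trade-off the abstract describes: a softer spring lowers the force bandwidth but also lowers the interface stiffness seen by the environment.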
Programmable robot‐based assembly systems are often needed, in particular for circumstances in which special assembly equipment is not available or well‐trained operators could not be employed economically. Robots with enough compliance can perform not only classic automation tasks, such as spot welding, cargo carrying, etc., but also tasks that demand the compliant motion capability of robots. Therefore, the research on robot compliance is especially important for parts assembly by robots, where robot compliant motions and manipulations are essential requirements. This paper presents a number of important issues in robot compliance research, including the specification of robot end‐effector compliance; properties of a robot compliance matrix at its end‐effector; discussions on passive compliance and active compliance and their comparisons; and derivation of the compliance at the end‐effector required for tasks. <s> BIB007 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> Abstract The addition of variable joint stiffness to an upper limb prosthesis can restore an important function of the natural arm. Design goals for the stiffness and force performance of prosthetic elbow joints are presented. This paper develops configurations of antagonistic actuators, for use in prosthetic arms, with improved energy efficiency, controllability, interaction properties, and size. Implementation of the required quadratic stiffness elements is investigated using rolamites. The rolamite band geometry is designed to generate the required force function. Rolamite design equations are extended to include effects of high force generation and stress concentrations. The effect of using rolamites to generate the spring forces is examined with respect to restrictions placed on the performance of the joint.
The performance of this approach to the implementation of a variable stiffness joint is found to be marginally useful in a prosthetic application. Further design improvements are suggested. <s> BIB008 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> We present a survey of the nominal motion generation schemes and of the associated simple control solutions for robots displaying flexibility effects. Two model classes are considered: robots with elastic joints but rigid links, and robots with flexible links. Model-based feedforward laws are derived for the two basic motion tasks of state-to-state transfer in given time and exact trajectory execution. In particular, we present a new solution to the finite-time reconfiguration problem for a one-link flexible arm. Finally, we use the developed commands into a simple feedback scheme that requires only standard sensors on the motors. <s> BIB009 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> We would like to give robots the ability to secure human safety in human-robot collisions capable of arising in our living and working environments. However, unfortunately, not much attention has been paid to the technologies of human robot symbiosis to date because almost all robots have been designed and constructed on the assumption that the robots are physically separated from humans. A robot with a new concept will be required to deal with human-robot contact. In this article, we propose a passively movable human-friendly robot that consists of an elastic material-covered manipulator, passive compliant trunk, and passively movable base. The compliant trunk is equipped with springs and dampers, and the passively movable base is constrained by friction developed between the contact surface of the base and the ground. 
During unexpected collisions, the trunk and base passively move in response to the produced collision force. We describe the validity of the movable base and compliant trunk for collision ... <s> BIB010 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> The paper describes the recent design and development efforts in DLR Robotics Lab towards the second generation of light-weight robots. The design of the light weight mechanics, integrated sensors and electronics is outlined. The fully sensory joint, with motor and link position sensors as well as joint torque sensors enables the implementation of effective vibration damping and advanced control strategies for compliant manipulation. The mechatronic approach incorporates a tight collaboration between mechanics, electronics and controller design. The authors hope that important steps towards a new generation of service and personal robots have been achieved. <s> BIB011 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> Many successful robotic manipulator designs have been introduced. However, there remains the challenge of designing a manipulator that possesses the inherent safety characteristics necessary for human-centered robotics. In this paper, we present a new actuation approach that has the requisite characteristics for inherent safety while maintaining the performance expected of modern designs. By drastically reducing the effective impedance of the manipulator while maintaining high frequency torque capability, we show that the competing design requirements of performance and safety can be successfully integrated into a single manipulation system. 
<s> BIB012 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> This paper is concerned with the design and control of actuators for machines and robots physically interacting with humans, implementing criteria established in our previous work [1] on optimal mechanical-control co-design for intrinsically safe, yet performant machines. In our Variable Impedance Actuation (VIA) approach, actuators control in real-time both the reference position and the mechanical impedance of the moving parts in the machine in such a way to optimize performance while intrinsically guaranteeing safety. In this paper we describe an implementation of such concepts, consisting of a novel electromechanical Variable Stiffness Actuation (VSA) motor. The design and the functioning principle of the VSA are reported, along with the analysis of its dynamic behavior. A novel scheme for feedback control of this device is presented, along with experimental results showing performance and safety of a one-link arm actuated by the VSA motor. <s> BIB013 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> We propose a new robot actuator, especially a robot actuator gear which has a very effective feature of backdrivability. We show the study of the new definition of backdrivability of an actuator gear which has the quantitative definition. From this definition we propose the method of making the gear which has a good backdrivability. Based on this method, the actuator gear was developed and we show the result from the experiment. The comparisons with the other types of actuator gear which are Harmonic drive gear and normal planetary gear are described. Finally the comparison has proved that the developed actuator gear has very effective backdrivability. 
<s> BIB014 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> Purpose – The paper seeks to present a new generation of torque‐controlled light‐weight robots (LWR) developed at the Institute of Robotics and Mechatronics of the German Aerospace Center.Design/methodology/approach – An integrated mechatronic design approach for LWR is presented. Owing to the partially unknown properties of the environment, robustness of planning and control with respect to environmental variations is crucial. Robustness is achieved in this context through sensor redundancy and passivity‐based control. In the DLR root concept, joint torque sensing plays a central role.Findings – In order to act in unstructured environments and interact with humans, the robots have design features and control/software functionalities which distinguish them from classical robots, such as: load‐to‐weight ratio of 1:1, torque sensing in the joints, active vibration damping, sensitive collision detection, compliant control on joint and Cartesian level.Practical implications – The DLR robots are excellent rese... <s> BIB015 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> The most critical challenge for Personal Robotics is to manage the issue of human safety and yet provide the physical capability to perform useful work. This paper describes a novel concept for a mobile, 2-armed, 25-degree-of- freedom system with backdrivable joints, low mechanical impedance, and a 5 kg payload per arm. System identification, design safety calculations and performance evaluation studies of the first prototype are included, as well as plans for a future development. 
<s> BIB016 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> Safety is a critical characteristic for robots designed to operate in human environments. This paper presents the concept of hybrid actuation for the development of human- friendly robotic systems. The new design employs inherently safe pneumatic artificial muscles augmented with small electrical actuators, human-bone-inspired robotic links, and newly designed distributed compact pressure regulators. The modularization and integration of the robot components enable low complexity in the design and assembly. The hybrid actuation concept has been validated on a two-degree-of-freedom prototype arm. The experimental results show the significant improvement that can be achieved with hybrid actuation over an actuation system with pneumatic artificial muscles alone. Using the manipulator safety index (MSI), the paper discusses the safety of the new prototype and shows the robot arm safety characteristics to be comparable to those of a human arm. <s> BIB017 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> Facing new tasks, the conventional rigid design of robotic joints has come to its limits. Operating in unknown environments current robots are prone to failure when hitting unforeseen rigid obstacles. Moreover, safety constraints are a major aspect for robots interacting with humans. In order to operate safely, existing robotic systems in this field are slow and have a lack of performance. To circumvent these limitations, a new robot joint with a variable stiffness approach (VS-Joint) is presented. It combines a compact and highly integrated design with high performance actuation. The VS- Joint features a highly dynamic stiffness adjustment along with a mechanically programmable system behavior. This allows an easy adaption to a big variety of tasks. 
A benefit of the joint is its intrinsic robustness against impacts and hard contacts, which permits faster trajectories and handling. Thus, it provides excellent attributes for the use in shoulder and elbow joints of an anthropomorphic robot arm. <s> BIB018 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> A hermetic refrigeration compressor has a flat valve plate closing off a cylinder bore. The valve plate has an elongated recess on the outer side around the discharge port and a discharge valve assembly, comprising a flat reed valve and backing spring, fits within the recess beneath an overlying valve stop which engages the bottom of the recess at each end. The recess, the reed valve, the backing spring, and the valve stop are so configured that they can be assembled only in the correct configuration. The valve stop is held in place by an arcuate retaining spring having ends engaging notches in the valve plate and a projecting boss on a cylinder head defining a discharge valve plenum engages the retaining spring to press the spring and the valve plate into position within the recess to retain all of the parts in operating position. <s> BIB019 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> In this paper we discuss the integration of active and passive approaches to robotic safety in an overall scheme for real-time manipulator control. The active control approach is based on the use of a supervisory visual system, which detects the presence and position of humans in the vicinity of the robot arm, and generates motion references. The passive control approach uses variable joint impedance which combines with velocity control to guarantee safety in worst-case conditions, i.e. unforeseen impacts. 
The implementation of these techniques in a 3-dof, variable impedance arm is described, and the effectiveness of their functional integration is demonstrated through experiments. <s> BIB020 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> Collision safety between humans and robots has drawn much attention since service robots are increasingly being used in human environments. A safe robot arm based on passive compliance can usually provide faster and more reliable responses for dynamic collision than an active one involving sensors and actuators. Since both positioning accuracy and collision safety of the robot arm are equally important, a robot arm should have very low stiffness when subjected to a collision force greater than the injury tolerance, but should otherwise maintain very high stiffness. To implement these requirements, a novel safe joint mechanism (SJM-II) which has much smaller size and lighter weight than the previous model, is proposed in this research. The SJM-II has the advantage of nonlinear spring which is achieved using only passive mechanical elements such as linear springs and a double-slider mechanism. Various analyses and experiments on static and dynamic collisions show that stiffness of the SJM-II is kept very high against an external torque less than the predetermined threshold torque, but abruptly drops when the input torque exceeds this threshold, thereby guaranteeing positioning accuracy and collision safety. Furthermore, a robot arm with two SJM-IIs is verified to achieve collision safety in 2D space. <s> BIB021 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> In this paper we present various new insights on the effect intrinsic joint elasticity has on safety in pHRI. 
We address the fact that the intrinsic safety of elastic mechanisms has been discussed rather one-sidedly in favor of these new designs, and we intend to give a more differentiated view on the problem. An important result is that intrinsic joint elasticity does not reduce the Head Injury Criterion or impact forces compared to conventional actuation with some considerable elastic behavior in the joint, if considering full scale robots. We also elaborate conditions under which intrinsically compliant actuation is potentially more dangerous than a rigid one. Furthermore, we present collision detection and reaction schemes for such mechanisms and verify their effectiveness experimentally. <s> BIB022 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> This paper presents a device that significantly improves the safety of ceiling-mounted robots whose end effector orientation remains constant with respect to the vertical direction (e.g. Scara-type robots). The device consists of a three-degree-of-freedom (DOF) parallel mechanism with the Delta architecture on which the revolute actuators have been replaced with torque limiters. The resulting Cartesian force limiting device (CFLD) is implemented as a mechanical connection between the robot and the effector. It is rigid unless excessive forces are applied on the end effector, for example during a collision. The magnitude of force that activates the mechanism is set by properly adjusting the threshold of the torque limiters. Furthermore, a collision can be rapidly detected with a limit switch placed on one of the links of the mechanism and a signal can be sent directly to brakes that will stop the robot, without passing through a controller and thus improving the reliability and reaction-time of the safety system.
By mechanically disconnecting the robot from its end effector, the device ensures that the person involved in the collision is only subjected to the inertia of the end effector and thus potential injuries are greatly reduced. This work is the extension of a previous 2-DOF CFLD that was sensitive only to horizontal forces. The new architecture reacts to collisions occurring in any direction and is geometrically optimized for the proposed application. <s> BIB023 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> An anthropomorphic hand arm system using variable stiffness actuation has been developed at DLR. It is aimed to reach its human archetype regarding size, weight and performance. The main focus of our development is put on robustness, dynamic performance and dexterity. Therefore, a paradigm change from impedance controlled, but mechanically stiff joints to robots using intrinsic variable compliance joints is carried out. <s> BIB024 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Mechanical Design and Actuation <s> Designing intrinsically elastic robot systems, making systematic use of their properties in terms of impact decoupling, and exploiting temporary energy storage and release during excitative motions is becoming an important topic in nowadays robot design and control. In this paper we treat two distinct questions that are of primary interest in this context. First, we elaborate an accurate estimation of the maximum contact force during simplified human/obstacle-robot collisions and how the relation between reflected joint stiffness, link inertia, human/obstacle stiffness, and human/obstacle inertia affect it. Overall, our analysis provides a safety oriented methodology for designing intrinsically elastic joints and clearly defines how its basic mechanical properties influence the overall collision behavior.
This can be used for designing safer and more robust robots. Secondly, we provide a closed form solution of reaching maximum link side velocity in minimum time with an intrinsically elastic joint, while keeping the maximum deflection constraint. This gives an analytical tool for determining suitable stiffness and maximum deflection values in order to be able to execute desired optimal excitation trajectories for explosive motions. <s> BIB025
The variations in use cases and performance requirements between domestic and industrial robots understandably lead to different designs. Robots designed for industrial purposes have a high stiffness to achieve the main performance requirement, which is accuracy, and consist of heavier links to handle heavy loads . Domestic robots are mostly designed with use cases that include performing humanlike activities in unstructured environments and, hence, have distinct mechanical design requirements BIB016 , BIB003 , BIB011 . Safety in mechanical design and actuation deals with the crucial issue of ensuring inherent safety, i.e., safety even in the unlikely case of loss of the entire control system. To achieve inherent safety, robotic arms mounted on domestic robots are designed to be lightweight and compliant so as to mitigate any possible injury that may arise in case of an uncontrolled collision with a human. The presence of compliant behavior in the manipulator might result in unwanted oscillations during motion and compromise system performance. Hence, advanced controllers should be used to compensate for the performance degradation in flexible robots BIB009 and enable an acceptable tradeoff between safety and performance . The most widely used performance metric in the mechanical design of robotic manipulators is the payload-to-weight ratio, which is defined as the ratio of the maximum payload that the robot can manipulate to its stand-alone weight. Mechanical designs in domestic robot manipulators are aimed at achieving a higher payload-to-weight ratio while being able to perform the tasks defined in their use cases BIB017 , BIB011 . The main safety-based design rationale behind the lightweight links in domestic robotics is reducing the impact force by lowering the kinetic energy of the link.
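The link-mass rationale reduces to simple physics: kinetic energy grows linearly with mass, so at the same speed a lighter link carries proportionally less energy into an uncontrolled impact. The following minimal sketch illustrates the two quantities discussed here; all numerical values are illustrative assumptions, not figures from the cited designs.

```python
def kinetic_energy(mass_kg: float, speed_m_s: float) -> float:
    """Translational kinetic energy E = 0.5 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

def payload_to_weight_ratio(payload_kg: float, robot_mass_kg: float) -> float:
    """Maximum payload the arm can lift divided by its own mass."""
    return payload_kg / robot_mass_kg

# At the same tip speed, a 2 kg domestic-style link carries one tenth of the
# impact energy of a 20 kg industrial-style link (illustrative masses).
e_heavy = kinetic_energy(20.0, 1.0)   # 10.0 J
e_light = kinetic_energy(2.0, 1.0)    # 1.0 J

# A 1:1 payload-to-weight ratio, as reported for later DLR arms, equals 1.0.
ratio = payload_to_weight_ratio(14.0, 14.0)
```

Lowering link mass thus attacks impact severity at its energetic source, independently of any control-system intervention.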
Compliance between the actuator and the end-effector is essential to decouple the actuator inertia and the link inertia so that only the inertia of the lightweight link is felt during uncontrolled impact. The dynamic relationship between the desired decoupling behavior, the maximum impact force, and the mechanical properties of flexible manipulators was recently investigated in BIB025 . Reference indicated that even a moderate compliance achieved using harmonic drives was able to yield the required decoupling, and further lowering of the compliance reduces the impact torque at the joint, thereby protecting the robot itself during collision. The compliance can be implemented as either active compliance using control BIB011 , BIB001 , BIB004 , passive compliance by inserting elastic elements at the joint actuation , or a combination of both in one manipulator, as used in BIB020 . Although active compliant manipulators offer satisfactory performance for nominal operation, current investigations in compliant actuation are trying to exploit the wide range of compliance and faster dynamic response rate offered by passive compliance , BIB007 . The first approach to have a compliant robot, called series elastic actuation (SEA), was done by inserting a passive compliant element between the joint and the actuator's gear train BIB005 . The authors presented a force-controlled actuation with less danger to the environment and less reflected actuator inertia during impact [ Figure 2(a) ]. A modified SEA actuation approach, variable impedance actuation (VIA), allows for tuning of the compliance in the transmission for improved performance and collision safety BIB021 , , BIB013 . This mechanism allows for adapting the mechanical impedance depending on the tasks to yield a wide range of manipulation capabilities by the robot [ Figure 2(b) ]. Various VIA designs have been proposed in the literature, which differ in their range of motion and stiffness BIB018 - BIB008 . 
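The decoupling argument can be made concrete through the notion of reflected inertia: seen from the link side of an N:1 gear, the motor rotor's inertia appears N squared times larger, so a stiff transmission makes the colliding party feel rotor plus link inertia, while an elastic element lets only the link inertia through during a fast impact. A minimal sketch, with assumed illustrative numbers rather than values from any cited robot:

```python
def reflected_rotor_inertia(rotor_inertia: float, gear_ratio: float) -> float:
    """Rotor inertia seen on the link side of an N:1 gear scales with N^2."""
    return rotor_inertia * gear_ratio ** 2

# Illustrative assumed values, not taken from any robot in the survey:
J_rotor = 1.2e-5   # kg*m^2, motor rotor inertia
N = 100            # harmonic-drive reduction ratio
J_link = 0.05      # kg*m^2, lightweight link inertia about the joint

# Stiff transmission: an impact feels the link plus reflected rotor inertia.
J_impact_rigid = J_link + reflected_rotor_inertia(J_rotor, N)
# Elastic transmission (SEA/VIA): the spring decouples the rotor during a
# fast impact, so only the link inertia is felt.
J_impact_elastic = J_link
```

Even with a small rotor, the squared gear ratio makes the reflected term dominate, which is why decoupling it is worth the added mechanical complexity.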
Although the potential inherent safety of SEA and VIA complies with the prioritized risk reduction of mechanical design over control system, as proposed in ISO 12100, the energy stored in the compliant element of VIA can lead to increased link speed and compromise safety, as shown in BIB022 . It should also be noted that the VIA design incorporates damping of the compliant joints to avoid unnecessary vibrations during operation. One of the earliest generations of manipulators designed for human interaction is the DLR lightweight robot with moderate joint compliance and suitable sensing and control capability BIB011 [ Figure 3(a) ]. The manipulator was planned to perform human-arm-like activities and mimicked the kinematics and sensing capability of a human arm. The manipulator has an active compliance, made possible by a joint torque control, and was able to have a payload-to-weight ratio of 1:2. New generations of the DLR lightweight robot included an advanced control system and achieved a payload-to-weight ratio of 1:1, while safety for interaction is evaluated using the HIC BIB015 . A new DLR hand arm system was also developed with the aim of matching its human equivalent in size, performance, and weight BIB024 . The design uses a number of variable stiffness actuation designs and exploits the energy-storing capability of compliant joints to perform highly dynamic tasks. Another actuation scheme designed to fit in the human-friendly robotics category is distributed macro-mini actuation (DM2). This novel actuation mechanism introduces two parallel actuators that handle the high- and low-frequency torque requirements BIB012 . In the first prototype that uses this mechanism, the low-frequency task manipulation torque actuation was handled by a larger electrical actuator at the base of the arm, while high-frequency disturbance rejection actions were performed by low-inertia motors at the joints.
Compliance is provided using low-reduction cable transmissions for the high-frequency actuation and SEA for the low-frequency actuation. A follow-up study by the research group introduced the Stanford Human Safety Robot, S2t, with the same distributed actuation concept but replaced the heavy electrical actuators with pneumatic muscles to have a hybrid actuation arm BIB017 . The authors reported an improved payload-to-weight ratio and control bandwidth while evaluating the safety requirements using the MSI. Further iterations of the S2t were indicated to have improved control, responsiveness, and range of motion BIB019 . Another mechanical design relevant for the safety of a robot is passive gravity compensation, as shown in BIB002 . The mechanism, which is common in machine design, uses geometrical analysis and springs to balance the gravitational energy with strain energy. Previously, passive gravity compensation was made possible using a counter mass that annuls the effect of gravity on the target manipulator. The spring-based system has an advantage over the counter mass in that it avoids the addition of inertia, which is unnecessary in domestic robotics. An extended arm actuation mechanism that uses passive gravity compensation is presented in BIB016 . Together with a backdrivable transmission, this design enhances safety and reduces the torque requirement at the joint actuators. Although most of the discussion in this section focused on manipulators that can be used on autonomous domestic robots, the idea similarly applies to the mechanical design of other robot parts, such as the trunk or mobile base. Aiming to emulate the natural reaction of a human's waist to collision, BIB010 designed a passive viscoelastic trunk with a passive movable base.
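The spring-based gravity compensation discussed in this section rests on a classical machine-design result: if a zero-free-length spring of stiffness k = m*g*l/(a*b) connects a point at height b above the pivot to a point at distance a along the link, the total potential energy becomes independent of the link angle, so no actuator torque is needed to hold the arm against gravity. The sketch below verifies this balancing condition numerically; all parameter values are illustrative assumptions.

```python
import math

def balancer_stiffness(m: float, g: float, l: float, a: float, b: float) -> float:
    """Stiffness of a zero-free-length balancing spring: k = m*g*l / (a*b)."""
    return m * g * l / (a * b)

def total_potential_energy(theta, m, g, l, a, b, k):
    # Gravity term (link angle theta measured from the upward vertical) plus
    # elastic term of a zero-free-length spring between a point at height b
    # above the pivot and a point at distance a along the link.
    spring_len_sq = a ** 2 + b ** 2 - 2 * a * b * math.cos(theta)
    return m * g * l * math.cos(theta) + 0.5 * k * spring_len_sq

# Assumed, illustrative link parameters:
m, g, l, a, b = 2.0, 9.81, 0.4, 0.1, 0.2
k = balancer_stiffness(m, g, l, a, b)
energies = [total_potential_energy(t, m, g, l, a, b, k) for t in (0.0, 0.5, 1.0, 1.5)]
# With the balancing stiffness the energy is identical at every angle, i.e.
# gravity is cancelled passively without adding counter-mass inertia.
```

Because the cos(theta) terms of the gravity and spring energies cancel exactly when k*a*b = m*g*l, the balance holds over the whole range of motion, unlike a counterweight it adds negligible inertia.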
Other mechanical design issues addressed with regard to safety include using a backdrivable transmission BIB014 , eliminating pinch points by covering dangerous areas of the robot, analyzing the flexibility of nonrigid links , adding force limiting devices BIB023 , and placing a compliant cushion covering BIB006 .
The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> For the application of robot manipulators to complex tasks, it is often necessary to control not only the position of a manipulator but also the force exerted by the hand on an object. For this purpose, Raibert and Craig have proposed the hybrid position/force control method. In this method, however, the manipulator dynamics has not been taken into account rigorously. The dynamic hybrid control method is proposed, which takes the manipulator dynamics into consideration. Constraints on the end effector are described by a set of constraint hypersurfaces. Then the basic equations for dynamic hybrid control are derived. It is shown that if the manipulator is not in a singular configuration, the desired position and force at the end effector can be simultaneously realized. Finally, a basic structure of the dynamic hybrid control system with a servo compensator is given. <s> BIB001 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> Impedance control specifications for robot manipulators are given in terms of a desired motion trajectory and a desired dynamic relationship between position errors and interaction forces. An adaptive implementation is proposed as an alternative to reduce the design sensitivity due to model-manipulator parameter mismatch. Two adaptive controllers that globally achieve the impedance objective for the general nonlinear dynamic model are presented. The controller structures consist of a nonlinear feedback of positions, velocities, and end-effector applied forces. Computer simulations were carried out to demonstrate stability and performance control. 
<s> BIB002 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> This paper reports on the existing robot force control algorithms and their composition based on the review of 75 papers on this subject. The objective is to provide a pragmatic exposition with speciality on their differences and different application conditions, and to give a guide of the existing robot force control algorithms. The previous work can be categorized into discussion, design and/or application of fundamental force control techniques, stability analysis of the various control algorithms, and the advanced methods. Advanced methods combine the fundamental force control techniques with advanced control algorithms such as adaptive, robust and learning control strategies. <s> BIB003 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> With the goal of filling the gap between theory and industrial applications, an implicit hybrid control scheme is proposed in this paper, designed to fit as much as possible the conventional industrial robot control architecture. The dynamic effects due to joint compliance, which are a major source of performance degradation in industrial robots, are fully taken into account. The scheme is based on a task description particularly suited for a direct integration in conventional robot programming tools and aims at exerting the force control action without affecting the trajectory tracking. Only the differential kinematic model (Jacobian) of the robot is needed in the design of the force control law, while the force control loop is charged with rejecting dynamic disturbances due to motion.
A thorough experimental validation of the strategy, both in terms of force regulation and trajectory tracking capabilities, is discussed, based on experiments performed on an industrial robot, endowed with a six-axis wrist force/torque sensor and with a laser distance sensor. <s> BIB004 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> Industrial robots are confronted with performing tasks where a contact with their environment occurs. Therefore, there is a need for control algorithms with position tracking performance and force control ability. Up to date many algorithms have been proposed which deal with robot motion and force control. In this paper a robust impedance control law based on an attractive theory of sliding mode is proposed. The control law guarantees a robot predefined impedance and therefore a force regulation based on the possessed impedance properties is feasible. The proposed impedance controller is used in a force-tracking task. Experimental results on a simple 1-DOF mechanism and 3-DOF direct drive robot mechanism are reported. <s> BIB005 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> Passivity-based control (PBC) is a well-established technique that has shown to be very powerful to design robust controllers for physical systems described by Euler-Lagrange (EL) equations of motion. For regulation problems of mechanical systems, which can be stabilized ''shaping'' only the potential energy, PBC preserves the EL structure and furthermore assigns a closed-loop energy function equal to the difference between the energy of the system and the energy supplied by the controller. Thus, we say that stabilization is achieved via energy balancing. 
Unfortunately, these nice properties of EL-PBC are lost when used in other applications which require shaping of the total energy, for instance, in electrical or electromechanical systems, or even some underactuated mechanical devices. Our main objective in this paper is to develop a new PBC theory which extends to a broader class of systems the aforementioned energy-balancing stabilization mechanism and the structure invariance. Towards this end, we depart from the EL description of the systems and consider instead port-controlled Hamiltonian models, which result from the network modelling of energy-conserving lumped-parameter physical systems with independent storage elements, and strictly contain the class of EL models. <s> BIB006 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> In this work, a cartesian impedance controller purposely designed for dexterous manipulation is described. Based on the main features of the DLR Hand II, concerning kinematic structure and sensory equipment of fingers, this control strategy allows to overcome the main problems encountered in fine manipulation, namely: effects of the friction (and unmodeled dynamics) on robot performances and occurrence of singularity conditions. The achieved control scheme bas been experimentally validated by testing it on a finger of the DLR Hand. <s> BIB007 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> If robots are to be introduced into the human world as assistants to aid a person in the completion of a manual task two key problems of today's robots must be solved. The human-robot interface must be intuitive to use and the safety of the user with respect to injuries inflicted by collisions with the robot must be guaranteed. 
In this paper we describe the formulation and implementation of a control strategy for robot manipulators which provides quantitative safety guarantees for the user of assistant-type robots. We propose a control scheme for robot manipulators that restricts the torque commands of a position control algorithm to values that comply with preset safety restrictions. These safety restrictions limit the potential impact force of the robot in the case of a collision with a person. Such accidental collisions may occur with any part of the robot and therefore the impact force not only of the robot's hand but of all surfaces is controlled by the scheme. The integration of a visual control inter... <s> BIB008 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> Holding an object and manipulating it in 6D is a key application for multifingered robot hands. In the past many algorithms were proposed based on a weighted pseudoinverse of the grasp map combined with an internal force control. The majority of these algorithms require robust contact detection/tracking and switching controllers. Employing the virtual object introduced by Stramigioli we present an object-level control law. We define a novel virtual object frame based on the robot hand configuration. Our control law takes a desired object frame and desired grasping forces as input, it is passive, has an intuitive physical meaning, and stability is even given in case a finger loses contact with the object. A damping design as a function of the desired object stiffness and the combined hand-object inertia is presented. The performance of the controller is proven in two experiments implemented on the DLR Hand II.
<s> BIB009 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> A robot manipulator sharing its workspace with humans should be able to quickly detect collisions and safely react for limiting injuries due to physical contacts. In the absence of external sensing, relative motions between robot and human are not predictable and unexpected collisions may occur at any location along the robot arm. Based on physical quantities such as total energy and generalized momentum of the robot manipulator, we present an efficient collision detection method that uses only proprioceptive robot sensors and provides also directional information for a safe robot reaction after collision. The approach is first developed for rigid robot arms and then extended to the case of robots with elastic joints, proposing different reaction strategies. Experimental results on collisions with the DLR-III lightweight manipulator are reported. <s> BIB010 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> Purpose – The paper seeks to present a new generation of torque‐controlled light‐weight robots (LWR) developed at the Institute of Robotics and Mechatronics of the German Aerospace Center.Design/methodology/approach – An integrated mechatronic design approach for LWR is presented. Owing to the partially unknown properties of the environment, robustness of planning and control with respect to environmental variations is crucial. Robustness is achieved in this context through sensor redundancy and passivity‐based control. 
In the DLR robot concept, joint torque sensing plays a central role. Findings – In order to act in unstructured environments and interact with humans, the robots have design features and control/software functionalities which distinguish them from classical robots, such as: load‐to‐weight ratio of 1:1, torque sensing in the joints, active vibration damping, sensitive collision detection, compliant control on joint and Cartesian level. Practical implications – The DLR robots are excellent rese... <s> BIB011 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> When two physical systems (e.g. a robot and its environment) interact they exchange energy through localized ports and, in order to control their interaction, it is necessary to control the exchanged energy. The port-Hamiltonian formalism provides a general framework for modeling physical systems based on the concepts of energy, interconnection and power ports which describe the phenomena of energy storage, energy exchange and external interaction respectively. This monograph deals with energy based control of interactive robotic interfaces and the port-Hamiltonian framework is exploited both for modeling and controlling interactive robotic interfaces. Using the port-Hamiltonian framework, it is possible to identify the energetic properties that have to be controlled in order to achieve a desired interactive behavior and it is possible to build a port-Hamiltonian controller that properly regulates the robotic interface by shaping its energetic properties. Thanks to its generality, the port-Hamiltonian formalism allows to model and control also complex interactive robotic interfaces in a very natural way. In this book, a port-Hamiltonian approach for regulating the interaction between a robot and a local environment, a virtual environment (i.e. haptic interfaces) and a remote environment (i.e.
bilateral telemanipulation systems) is developed. <s> BIB012 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> Force control is very important for many practical manipulation tasks of robot manipulators, such as assembly, grinding, and deburring, which involve interaction between the end-effector of the robot and the environment. Force control of the robot manipulator can be broken down into two categories: Position/Force Hybrid Control and Impedance Control. The former is focused on regulation of both position and manipulation force of the end-effector on the tangent and perpendicular directions of the contact surface. The latter, however, is to realize desired dynamic characteristics of the robot in response to the interaction with the environment. The desired dynamic characteristics are usually prescribed with dynamic models of systems consisting of mass, spring, and dashpot. Literature demonstrates that various control methods in this area have been proposed during recent years, and the intensive research has mainly focused on rigid robot manipulators. On the research area of force control for flexible robots, however, only a few reports can be found. Regarding force control of the flexible robot, a main body of the research has concentrated on the position/force hybrid control. A force control law was presented based on a linearized motion equation [1]. A quasi-static force/position control method was proposed for a two-link planar flexible robot [2]. For serial connected flexible-macro and rigid-micro robot arm system, a position/force hybrid control method was proposed [3]. An adaptive hybrid force/position controller was presented for automated deburring [4]. Two-time scale position/force controllers were proposed using perturbation techniques [5], [6]. Furthermore, a hybrid position-force control method for two cooperating flexible robots was also presented [7].
The issue of impedance control of the flexible robot attracts much attention as well, but very few research results have been reported. An impedance control scheme was presented for micro-macro flexible robot manipulators [8]. In this method, the controllers of the micro arm (rigid arm) and macro arm (flexible arm) are designed separately, and the micro arm is controlled such that the desired local impedance characteristics of the end-effector are achieved. An adaptive impedance control method was proposed for n-link flexible robot manipulators in the presence of parametric uncertainties in the dynamics, and the effectiveness was confirmed by simulation results using a 2-link flexible robot [9]. <s> BIB013 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> This paper presents an energy-based control strategy to be used in robotic systems working closely or cooperating with humans. The presented method bounds the dangerous behavior of the robot during the first instants of the impact by limiting the energy stored into the system to a maximum imposed value. Two critical physical human robot interaction (pHRI) cases are studied: these are the collision either against a free or a clamped head. Safe energy values that can be used as reference were retrieved by analysing experimental data of energy absorption to failure of cranium bones and cervical spinal cords. The energy regulation control is implemented in a series elastic actuator prototype joint. The model and the control scheme of the system are analysed. The proposed control scheme is a position-based controller that adjusts the position trajectory reference as a function of the maximum energy value imposed by the user. Preliminary results are presented to show that the actuator unit and this control scheme are capable of limiting the energy to a maximum imposed value.
<s> BIB014 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> This paper presents an impedance controller for the five-finger dexterous robot hand DLR-HIT II, which is derived in Cartesian space. By considering flexibility in finger joints and strong mechanical couplings in the differential gear-box, modeling and control of the robot hand are described in this paper. The model-based friction estimation and velocity observer are carried out with an extended Kalman filter, which is implemented with parameters estimated by the Least Squares Method. The designed estimator demonstrates good prediction performance, as shown in the experimental results. Stability analysis of the proposed impedance controller is carried out and described in this paper. Impedance control experiments are conducted with the five-finger dexterous robot hand DLR-HIT II in the Cartesian coordinate system to help study the effectiveness of the proposed controller with friction compensation and hardware architecture. <s> BIB015 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> In this paper, we propose an algorithm that can move a robot manipulator at constant velocity with robust stability. The moving velocity of the arm is controlled by an adaptive impedance control algorithm, which increases operation efficiency and keeps the advantages of impedance control. This control algorithm will satisfy practical and efficient applications, such as manipulator control that assists safely in feeding impaired patients. The manipulator is first based on a 1-DOF robot arm which rotates in the vertical plane. The effect of gravity was eliminated by robust control. The algorithm of robust adaptive impedance control increases operation efficiency and operation stability.
Furthermore, the impedance with the robust control design eliminates the steady state error which is caused by the static friction, and the reaction torque observer reduces the ripple of torque and smoothes the output of velocity. <s> BIB016 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> One of the hallmarks of the performance, versatility, and robustness of biological motor control is the ability to adapt the impedance of the overall biomechanical system to different task requirements and stochastic disturbances. A transfer of this principle to robotics is desirable, for instance to enable robots to work robustly and safely in everyday human environments. It is, however, not trivial to derive variable impedance controllers for practical high degree-of-freedom (DOF) robotic tasks. In this contribution, we accomplish such variable impedance control with the reinforcement learning (RL) algorithm PI2 (Policy Improvement with Path Integrals). PI2 is a model-free, sampling-based learning method derived from first principles of stochastic optimal control. The PI 2 algorithm requires no tuning of algorithmic parameters besides the exploration noise. The designer can thus fully focus on the cost function design to specify the task. From the viewpoint of robotics, a particular useful property of PI2 is that it can scale to problems of many DOFs, so that reinforcement learning on real robotic systems becomes feasible. We sketch the PI2 algorithm and its theoretical properties, and how it is applied to gain scheduling for variable impedance control. We evaluate our approach by presenting results on several simulated and real robots. We consider tasks involving accurate tracking through via points, and manipulation tasks requiring physical contact with the environment. In these tasks, the optimal strategy requires both tuning of a reference trajectory and the impedance of the end-effector. 
The results show that we can use path integral based reinforcement learning not only for planning but also to derive variable gain feedback controllers in realistic scenarios. Thus, the power of variable impedance control is made available to a wide variety of robotic systems and practical applications. <s> BIB017 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> This paper presents a new control law for robotic manipulators in unstructured environments which guarantees the achievement of the goal position without incurring in local minima. The passivity of the closed-loop system renders this control scheme well-suited for human-robot coexistence, especially when the robot is supposed to share its workspace with humans. The given control law has been implemented and experimentally tested in a realistic scenario, demonstrating the effectiveness in driving the robot to a given configuration in a cluttered environment without any offline planning phase. <s> BIB018 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Controller Design <s> Enabling robots to safely interact with humans is an essential goal of robotics research. The developments achieved over recent years in mechanical design and control made it possible to have active cooperation between humans and robots in rather complex situations. For this, safe robot behavior even under worst-case situations is crucial and forms also a basis for higher-level decisional aspects. For quantifying what safe behavior really means, the definition of injury, as well as understanding its general dynamics, are essential. This insight can then be applied to design and control robots such that injury due to robot-human impacts is explicitly taken into account. 
In this paper we approach the problem from a medical injury analysis point of view in order to formulate the relation between robot mass, velocity, impact geometry and resulting injury qualified in medical terms. We transform these insights into processable representations and propose a motion supervisor that utilizes injury knowledge for generating safe robot motions. The algorithm takes into account the reflected inertia, velocity, and geometry at possible impact locations. The proposed framework forms a basis for generating truly safe velocity bounds that explicitly consider the dynamic properties of the manipulator and human injury. <s> BIB019
When it comes to controlling the robot to execute a planned motion and accomplish a task, most industrial robots use position controllers. This is because most robots perform simple position-focused tasks, such as spot welding, spray painting, or pick-and-place operations, in a well-known operating environment . In tasks that demand contact with an object during operations, industrial robots adopt force control techniques to regulate the amount of force applied by the robot during the interaction BIB003 . Later, based on operational force and position constraints imposed on a manipulator, a hybrid position/force controller was introduced that uses position control on some DOF and force control for others BIB004 - BIB001 . In general, the pure position controller exhibits an infinite stiffness characteristic working in a zero-stiffness environment, while the pure force controller exhibits a zero-stiffness characteristic working in a stiff environment. For domestic robots that often operate in unstructured environments with humans, pure position control is insufficient because, upon contact with an obstacle, the robot cannot be expected to push through it. Similarly, pure force control is also inadequate, as contact-free tasks and motions are difficult to implement. An alternative control technique essential in domestic robotics is the interaction control scheme, which deals with regulating the dynamic behavior of the manipulator as it interacts with the environment . The core idea behind interaction control is that manipulation is done through energy exchange, and, during the energetic interaction, the robot and the environment influence each other in a bidirectional signal exchange. Thus, by adjusting the dynamics of the robot, how it interacts with the environment during operation can be controlled. One of the most widely used interaction control schemes is impedance control presented in .
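The selection-matrix idea behind hybrid position/force control can be sketched as below: a diagonal matrix S routes each DOF either to a PD position law or to a force-regulation law. The gains and the 2-DOF example are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def hybrid_control_force(x, x_dot, f_meas, x_des, f_des, S,
                         kp=100.0, kd=20.0, kf=0.5):
    """One step of a simplified hybrid position/force control law.

    S is a diagonal selection matrix: S[i, i] = 1 for position-
    controlled DOFs, 0 for force-controlled DOFs. The gains kp,
    kd, kf are illustrative assumptions."""
    S = np.asarray(S, dtype=float)
    I = np.eye(S.shape[0])
    # PD law acting on the position-controlled subspace.
    f_pos = kp * (x_des - x) - kd * x_dot
    # Proportional force regulation on the force-controlled subspace.
    f_force = f_des + kf * (f_des - f_meas)
    return S @ f_pos + (I - S) @ f_force

# 2-DOF example: control position along axis 0, contact force along axis 1.
S = np.diag([1.0, 0.0])
f = hybrid_control_force(
    x=np.array([0.0, 0.0]), x_dot=np.zeros(2),
    f_meas=np.array([0.0, 2.0]), x_des=np.array([0.1, 0.0]),
    f_des=np.array([0.0, 5.0]), S=S)
```

Each DOF thus obeys exactly one of the two pure control modes, which is the constraint-partitioning idea of the hybrid scheme.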
Most of the operating environments of the robot, such as a mass to be moved or rigid obstacles in the workspace, can be described as admittances that accept force inputs and output velocity during interaction. Hence, for possible interactions in such an environment, the manipulator should exhibit an impedance characteristic, which can be regulated via impedance control. Consider a simplified 1-DOF robotic manipulator modeled as a mass m at position x, which is to be moved to a desired position xd. A simple physical controller that can achieve this is a spring connected between the desired virtual point and the mass (Figure 4). To avoid continuous oscillation of the resulting mass-spring system and stabilize it at the equilibrium point, a damper should be added to the system. The resulting controller is an impedance controller that can shape the dynamic behavior of the system. The controller resembles a conventional proportional-derivative controller and introduces a desirable compliance to the system. A number of impedance controller designs have addressed issues such as robustness , BIB005 , adding adaptive control techniques BIB002 , BIB016 , extension with a learning approach BIB017 , dynamics of a flexible robot , BIB013 , and dexterous manipulation BIB011 , BIB007 , BIB015 . Another crucial requirement in controller design for domestic robots is ensuring asymptotic stability even in the presence of apparent uncertainties about the properties of the operating environment BIB011 . To address this issue, several authors have applied passivity theory to design controllers commonly known as passivity-based controllers , BIB006 , BIB012 . Passive systems are a class of dynamic systems whose total energy is less than or equal to the sum of their initial energy and any external energy supplied to them during interaction. Hence, passivity-based controller design ensures a bounded energy content, and the system achieves equilibrium at its minimum energy state.
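The 1-DOF spring-damper impedance controller described above can be sketched as a short simulation. The mass, stiffness, and damping values are illustrative assumptions (chosen here to give critical damping), not values from the survey.

```python
def simulate_impedance(m=1.0, k=25.0, b=10.0, x_d=1.0,
                       dt=1e-3, steps=5000):
    """Simulate a 1-DOF mass under the virtual spring-damper
    impedance law F = k*(x_d - x) - b*v.

    With b = 2*sqrt(k*m) the response is critically damped: the
    damper removes the perpetual oscillation of the pure
    mass-spring system and the mass settles at x_d."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        F = k * (x_d - x) - b * v   # impedance control force
        a = F / m                   # Newton's second law
        v += a * dt                 # semi-implicit Euler step
        x += v * dt
    return x, v

x_final, v_final = simulate_impedance()
```

Raising k stiffens the rendered behavior toward a pure position controller, while lowering it yields the compliance that makes contact with an unstructured environment safer.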
Any energetic interconnection of two passive systems does not affect the passivity of the combined system. As a result, an interconnection of a passivity-based controller, a passive manipulator, and a typical unstructured operating environment that is often passive results in an overall passive system whose Lyapunov stability is always guaranteed. Passive controller designs for domestic robot manipulators have often been addressed together with interaction control in a unified scheme to achieve a compliant, asymptotically stable, and robust manipulator , BIB009 , BIB018 . Safety-aware control schemes that incorporate safety metrics in a controller design are also proposed in the literature. Focusing on collision risks to a human user, these controllers use a given safety metric to detect possible unsafe situations and then adjust the control action so that the acceptable safety levels defined by the metric are maintained and possible injuries are avoided. Using impact potential as a safety metric, BIB008 proposes an impact potential controller for a multiple-DOF manipulator. In this hierarchical controller design approach, the torque output of a high-level motion controller is evaluated against the metric by a protective-layer controller and clipped to an acceptable level in case of a possible unsafe condition. Using the energy levels that cause failure of the cranial and spinal bones as a safety criterion, BIB014 proposes an energy regulation control that modifies the desired trajectory of the controller to limit the overall energy of a manipulator. After analyzing soft-tissue injuries and their relation to robot parameters, BIB019 proposes a velocity shaping scheme, which ensures that possible sharp contact with a multiple-DOF rigid robot will not result in unacceptable injury to a human user. Controller design can also increase postcollision safety by including a collision detection and reaction strategy.
Using model-based analysis, BIB010 defines an energy-based collision detection signal using a disturbance observer and identifies a number of reaction strategies for both stiff and compliant robots.
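As a rough illustration of the residual idea underlying such detection schemes, the sketch below flags a collision when the gap between measured and model-predicted joint torque exceeds a threshold. This is a simplification, not the full disturbance-observer design of BIB010, and the threshold and torque traces are arbitrary assumptions.

```python
import numpy as np

def collision_flags(tau_measured, tau_model, threshold=5.0):
    """Flag time steps where the residual between measured joint
    torque and model-predicted torque exceeds a threshold.

    A genuine contact injects an external torque the dynamic model
    cannot explain, so the residual spikes; small residuals are
    attributed to model error and sensor noise."""
    residual = np.abs(np.asarray(tau_measured) - np.asarray(tau_model))
    return residual > threshold

# Illustrative traces: a contact event occurs at the third sample.
tau_meas = np.array([1.0, 2.0, 9.0, 2.5])
tau_mod = np.array([1.1, 1.8, 2.0, 2.4])
flags = collision_flags(tau_meas, tau_mod)
```

A reaction strategy (stopping, switching to zero-gravity compliance, or retracting) would then be triggered on the first flagged sample.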
The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Conclusions <s> After analyzing fundamental impact characteristics of robot-human collisions in our previous work, the intention in the present paper is to augment existing knowledge in this field, verify previously given statements with standardized equipment of the German Automobile Club (ADAC), and provide a crash-test report for robots in general. Various new insights are achieved and a systematic and extensive set of data is provided. The presented work is divided into two papers. The main purpose of Part I is to give, similarly to reports known from the automobile world1, a fact based and result oriented view on our newest robot crash-test experiments. In Part II detailed discussions of the results listed in the present paper and recommendations towards a standard crash-test protocol for robot safety are carried out. <s> BIB001 </s> The Safety of Domestic Robotics: A Survey of Various Safety-Related Publications <s> Conclusions <s> After analyzing fundamental impact characteristics of robot-human collisions in our previous work, the intention in the present paper is to augment existing knowledge in this field, verify previously given statements with standardized equipment of the German Automobile Club (ADAC), and provide a crash-test report for robots in general. Various new insights are achieved and a systematic and extensive set of data is provided. The presented work is divided into two papers. The main purpose of Part I is to give, similarly to reports known from the automobile world1, a fact based and result oriented view on our newest robot crash-test experiments. In Part II detailed discussions of the results listed in the present paper and recommendations towards a standard crash-test protocol for robot safety are carried out. <s> BIB002
The previous sections presented different safety metrics and safety-related issues in mechanical design, actuation, and controller design of domestic robots. Although mechanical and controller subsystems are treated separately in this article, it is important to note that safety also depends on the interaction between the components making up the complete robot. For example, a failure in the sensory unit is a risk not only in the sensing aspect, but it also has consequences for motion planning and control. Such propagation of risks must be captured at the risk-assessment level of the safety analysis. Continuous improvements in risk elimination or reduction designs are not possible without suitable safety metrics that can be used for validation. These metrics are needed not only for collision but also for other feasible risks in domestic robotics. A number of collision-focused safety metrics for domestic robots were discussed in this article, and an experimental comparison of these metrics that follows a standardized testing procedure is essential to defining a universally acceptable safety metric for collision risks in domestic robotics. A groundwork study toward a standardized safety evaluation of domestic robots for collision risks was performed at a crash-test facility in BIB001 and BIB002 . Lightweight and compliant manipulators are the mechanical designs of choice in domestic robotics. Ongoing research on mechanical design and actuation to achieve better-performing domestic robots should ensure that safety requirements are not violated as well. Control systems should also keep up with mechanical design and actuation advancements to guarantee stability and provide acceptable manipulation capability.
A Survey: Static and Dynamic Ranking <s> DYNAMIC RANKING <s> We consider the problem of ranking refinement, i.e., to improve the accuracy of an existing ranking function with a small set of labeled instances. We are, particularly, interested in learning a better ranking function using two complementary sources of information, ranking information given by the existing ranking function (i.e., a base ranker) and that obtained from users' feedbacks. This problem is very important in information retrieval where the feedback is gradually collected. The key challenge in combining the two sources of information arises from the fact that the ranking information presented by the base ranker tends to be imperfect and the ranking information obtained from users' feedbacks tends to be noisy. We present a novel boosting framework for ranking refinement that can effectively leverage the uses of the two sources of information. Our empirical study shows that the proposed algorithm is effective for ranking refinement, and furthermore significantly outperforms the baseline algorithms that incorporate the outputs from the base ranker as an additional feature. <s> BIB001 </s> A Survey: Static and Dynamic Ranking <s> DYNAMIC RANKING <s> For ambiguous queries, conventional retrieval systems are bound by two conflicting goals. On the one hand, they should diversify and strive to present results for as many query intents as possible. On the other hand, they should provide depth for each intent by displaying more than a single result. Since both diversity and depth cannot be achieved simultaneously in the conventional static retrieval model, we propose a new dynamic ranking approach. In particular, our proposed two-level dynamic ranking model allows users to adapt the ranking through interaction, thus overcoming the constraints of presenting a one-size-fits-all static ranking. 
In this model, a user's interactions with the first-level ranking are used to infer this user's intent, so that second-level rankings can be inserted to provide more results relevant to this intent. Unlike previous dynamic ranking models, we provide an algorithm to efficiently compute dynamic rankings with provable approximation guarantees. We also propose the first principled algorithm for learning dynamic ranking functions from training data. In addition to the theoretical results, we provide empirical evidence demonstrating the gains in retrieval quality over conventional approaches. <s> BIB002 </s> A Survey: Static and Dynamic Ranking <s> DYNAMIC RANKING <s> We present a theoretically well-founded retrieval model for dynamically generating rankings based on interactive user feedback. Unlike conventional rankings that remain static after the query was issued, dynamic rankings allow and anticipate user activity, thus providing a way to combine the otherwise contradictory goals of result diversification and high recall. We develop a decision-theoretic framework to guide the design and evaluation of algorithms for this interactive retrieval setting. Furthermore, we propose two dynamic ranking algorithms, both of which are computationally efficient. We prove that these algorithms provide retrieval performance that is guaranteed to be at least as good as the optimal static ranking algorithm. In empirical evaluations, dynamic ranking shows substantial improvements in retrieval performance over conventional static rankings. <s> BIB003 </s> A Survey: Static and Dynamic Ranking <s> DYNAMIC RANKING <s> Diversified ranking is a fundamental task in machine learning. It is broadly applicable in many real world problems, e.g., information retrieval, team assembling, product search, etc. 
In this paper, we consider a generic setting where we aim to diversify the top-k ranking list based on an arbitrary relevance function and an arbitrary similarity function among all the examples. We formulate it as an optimization problem and show that in general it is NP-hard. Then, we show that for a large volume of the parameter space, the proposed objective function enjoys the diminishing returns property, which enables us to design a scalable, greedy algorithm to find the (1 - 1/e) near-optimal solution. Experimental results on real data sets demonstrate the effectiveness of the proposed algorithm. <s> BIB004
As discussed earlier, static ranking does not take the interaction with the user into consideration and faces issues such as query ambiguity and diversity in user intent. There is an inherent trade-off between the number of results provided per user intent and the number of intents retrieved BIB002 . Dynamic ranking provides a way to combine the otherwise contradictory goals of result diversification and high recall BIB003 . These algorithms interact with the user to infer the intended meaning among the various possible intents, or they reorder the results of the first retrieval pass and provide refined results to the user. They focus on both relevance and diversity. Dynamic ranked retrieval is obtained through the approaches of [6] BIB004 [11] BIB001 . We now discuss these algorithms in detail.
A Survey: Static and Dynamic Ranking <s> Two Level Dynamic Ranking <s> Disclosed is a bar-tumbler type safety lock, as well as a key and a coding process for said lock. According to a single-key-entry embodiment, this lock comprises a body, a two-piece washer, a latch holder and a barrel having recesses in which notched tumblers can slide and which is maintained rotatingly and removably toward the front of said body by a stud and a stopping finger retained in two annular slots made radially in this body and which can be retracted by the introduction of a special extraction key into the lock; the coding of the lock is obtained by the forming, in the annular washer-assembly groove, of notches simultaneously in each of the tumblers by a conventional turning process, with the key inserted. <s> BIB001 </s> A Survey: Static and Dynamic Ranking <s> Two Level Dynamic Ranking <s> Abstract Information retrieval is to retrieve relevant information that satisfies user's information needs. There arises a problem of how to select only information that is relevant to the user. Ranking techniques are used to find the documents in a collection of documents that are most likely to be relevant to the user's query. However, we find out that there could be retrieved documents whose contexts may not be consistent to the query. Mutual information is a measure which represents the relation between a word and another word. So, we will re-evaluate the relation between the terms in the retrieved document and the terms in the query. In this paper, we discuss a model of natural language information retrieval system that is based on a two-level document ranking method using mutual information. At the first-level, we retrieve documents based on automatically constructed index terms. At the second-level, we reorder the retrieved documents using mutual information. We will show that our method achieves considerable retrieval effectiveness improvement over a traditional linear searching method. 
Also, we will analyse seven newly developed formulas that reorder the retrieved documents. Among the seven formulas, we will recommend one formula that dominates the others in terms of the retrieval effectiveness. <s> BIB002 </s> A Survey: Static and Dynamic Ranking <s> Two Level Dynamic Ranking <s> In many retrieval tasks, one important goal involves retrieving a diverse set of results (e.g., documents covering a wide range of topics for a search query). First of all, this reduces redundancy, effectively showing more information with the presented results. Secondly, queries are often ambiguous at some level. For example, the query "Jaguar" can refer to many different topics (such as the car or feline). A set of documents with high topic diversity ensures that fewer users abandon the query because no results are relevant to them. Unlike existing approaches to learning retrieval functions, we present a method that explicitly trains to diversify results. In particular, we formulate the learning problem of predicting diverse subsets and derive a training method based on structural SVMs. <s> BIB003 </s> A Survey: Static and Dynamic Ranking <s> Two Level Dynamic Ranking <s> For ambiguous queries, conventional retrieval systems are bound by two conflicting goals. On the one hand, they should diversify and strive to present results for as many query intents as possible. On the other hand, they should provide depth for each intent by displaying more than a single result. Since both diversity and depth cannot be achieved simultaneously in the conventional static retrieval model, we propose a new dynamic ranking approach. In particular, our proposed two-level dynamic ranking model allows users to adapt the ranking through interaction, thus overcoming the constraints of presenting a one-size-fits-all static ranking. 
In this model, a user's interactions with the first-level ranking are used to infer this user's intent, so that second-level rankings can be inserted to provide more results relevant to this intent. Unlike previous dynamic ranking models, we provide an algorithm to efficiently compute dynamic rankings with provable approximation guarantees. We also propose the first principled algorithm for learning dynamic ranking functions from training data. In addition to the theoretical results, we provide empirical evidence demonstrating the gains in retrieval quality over conventional approaches. <s> BIB004 </s> A Survey: Static and Dynamic Ranking <s> Two Level Dynamic Ranking <s> We present a theoretically well-founded retrieval model for dynamically generating rankings based on interactive user feedback. Unlike conventional rankings that remain static after the query was issued, dynamic rankings allow and anticipate user activity, thus providing a way to combine the otherwise contradictory goals of result diversification and high recall. We develop a decision-theoretic framework to guide the design and evaluation of algorithms for this interactive retrieval setting. Furthermore, we propose two dynamic ranking algorithms, both of which are computationally efficient. We prove that these algorithms provide retrieval performance that is guaranteed to be at least as good as the optimal static ranking algorithm. In empirical evaluations, dynamic ranking shows substantial improvements in retrieval performance over conventional static rankings. <s> BIB005
As we have depicted earlier in figure 1, a query provided by the user can have multiple intents. Conventional ranking algorithms rank the results by maximizing the probability of relevance independently for each document BIB001 , and thus they prefer documents matching the most prevalent intent. In order to provide the best search result, it is better to first understand the intent of the user. For this, diversification-based algorithms have been devised, which include at least one result for as many intents as possible, as given in BIB003 . To overcome this limitation of static, one-size-fits-all rankings, the authors of BIB004 provide a method in which ranking is performed at two levels: (1) the first level provides a list of diverse ranked documents, and (2) the second level provides results related to the intent revealed by the user's interaction with the first-level results. In this method, the second-level results depend upon the first-level head documents, but users retain the flexibility to track back to another document at the first level. The dynamic ranking methods given by BIB002 and BIB005 lack this flexibility. This technique provides better results because it does not depend on the user to provide feedback at every level. Still, it is a type of interactive retrieval, where the user provides feedback through the results presented in the first place, and this feedback is used by the system to again retrieve and rank the documents. The ranking uses a user model which assumes that the user will provide feedback only at the first level and that the user can return to the first-level results. This is also a greedy algorithm. It calculates a performance measure in the form of a utility Ug(Θ|t), where g is a concave, positive, non-decreasing function, Θ is the dynamic ranking, and t is the user intent. In this utility, the di are documents and the γ are position-dependent discount factors which decrease with position in the ranking BIB004 .
For a second-level document, the utility value is zero by default: only if the head document at the first level has been assigned non-zero relevance for an intent will the corresponding documents at the second level have non-zero utility for the intent related to that head document. The diversity of intents depends on the function g: the steeper the function, the greater the diversity of the ranked documents. For a query q and a set of documents D, with the possible intents for the query T(q) and their distribution P(t|q), the algorithm forms a ranking matrix such that the utility is maximized. The matrix formed is of size L x W, where L is the length (number of rows) and W is the width. For every candidate row, the algorithm adds the W documents that result in maximum utility for that row. This continues until L rows have been processed.
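The row-by-row greedy construction can be sketched as follows. This is a simplified illustration of the BIB004 algorithm: binary relevance, no discount factors, and the toy query with intents "car"/"cat" are all assumptions made for the example.

```python
import math

def greedy_two_level(docs, intents, rel, L=2, W=2, g=math.sqrt):
    """Greedy sketch of two-level dynamic ranking.

    docs: candidate document ids; intents: list of (intent, P(t|q));
    rel[(d, t)]: binary relevance; g: concave utility transform
    (the steeper g, the more diverse the chosen heads)."""
    remaining = list(docs)
    covered = {t: 0.0 for t, _ in intents}   # utility mass accrued per intent
    ranking = []
    for _ in range(L):
        if not remaining:
            break
        # Head document: largest expected marginal g-gain over all intents.
        def gain(d):
            return sum(p * (g(covered[t] + rel.get((d, t), 0)) - g(covered[t]))
                       for t, p in intents)
        head = max(remaining, key=gain)
        remaining.remove(head)
        # Second level: fill the row with documents serving the head's dominant intent.
        t_head = max(intents, key=lambda tp: tp[1] * rel.get((head, tp[0]), 0))[0]
        row = [head]
        for d in [d for d in remaining if rel.get((d, t_head), 0) > 0][:W - 1]:
            remaining.remove(d)
            row.append(d)
        for t, _ in intents:
            covered[t] += rel.get((head, t), 0)
        ranking.append(row)
    return ranking

# Ambiguous toy query: d1, d2 serve the "car" intent; d3, d4 serve "cat".
rel = {("d1", "car"): 1, ("d2", "car"): 1, ("d3", "cat"): 1, ("d4", "cat"): 1}
ranking = greedy_two_level(["d1", "d2", "d3", "d4"],
                           [("car", 0.6), ("cat", 0.4)], rel)
```

Because the concave g discounts repeated coverage of the same intent, the second head row serves the minority "cat" intent instead of adding more "car" documents.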
A Survey: Static and Dynamic Ranking <s> GenDer <s> The goal of the Redundancy, Diversity, and Interdependent Document Relevance workshop was to explore how ranking, performance assessment and learning to rank can move beyond the assumption that the relevance of a document is independent of other documents. In particular, the workshop focussed on three themes: the effect of redundancy on information retrieval utility (for example, minimizing the wasted effort of users who must skip redundant information), the role of diversity (for example, for mitigating the risk of misinterpreting ambiguous queries), and algorithms for set-level optimization (where the quality of a set of retrieved documents is not simply the sum of its parts). This workshop built directly upon the Beyond Binary Relevance: Preferences, Diversity and Set-Level Judgments workshop at SIGIR 2008 [3], shifting focus to address the questions left open by the discussions and results from that workshop. As such, it was the first workshop to explicitly focus on the related research challenges of redundancy, diversity, and interdependent relevance – all of which require novel performance measures, learning methods, and evaluation techniques. The workshop program committee consisted of 15 researchers from academia and industry, with experience in IR evaluation, machine learning, and IR algorithmic design. Over 40 people attended the workshop. This report aims to summarize the workshop, and also to systematize common themes and key concepts so as to encourage research in the three workshop themes. It contains our attempt to summarize and organize the topics that came up in presentations as well as in discussions, pulling out common elements. Many audience members contributed, yet due to the free-flowing discussion, attributing all the observations to particular audience members is unfortunately impossible. 
Not all audience members would necessarily agree with the views presented, but we do attempt to present a consensus view as far as possible. <s> BIB001 </s> A Survey: Static and Dynamic Ranking <s> GenDer <s> Diversified ranking is a fundamental task in machine learning. It is broadly applicable in many real world problems, e.g., information retrieval, team assembling, product search, etc. In this paper, we consider a generic setting where we aim to diversify the top-k ranking list based on an arbitrary relevance function and an arbitrary similarity function among all the examples. We formulate it as an optimization problem and show that in general it is NP-hard. Then, we show that for a large volume of the parameter space, the proposed objective function enjoys the diminishing returns property, which enables us to design a scalable, greedy algorithm to find the (1 - 1/e) near-optimal solution. Experimental results on real data sets demonstrate the effectiveness of the proposed algorithm. <s> BIB002
This is a generic diversified ranking algorithm BIB002 . It describes how to provide results catering to the different possible needs of the user. The algorithm diversifies the top k-ranked documents. The paper BIB002 introduces an arbitrary relevance function and an arbitrary similarity function, and the ranking is done using these two inputs. Diversity is a key factor in addressing the uncertainty and ambiguity in an information retrieval system. It is also an effective way to cover different aspects of the information requirement BIB001 . Many diversification-based algorithms have centred on the extent of topic coverage in the result, i.e., the diversification of the result set. The ranking algorithms measure their performance based upon relevance or similarity matrices that depend on topics related to the query and documents. The GenDer algorithm considers both the relevance to the query and the diversification of the results as the main factors. There is always a trade-off between relevance and diversity. If an algorithm focuses on relevance, it can miss some documents with lesser prominence but possible relevance to the intent of the user. On the other hand, when diversity is targeted, relevant documents actually required by the user are missed as the system focuses on providing as many topics as possible. To handle this trade-off, GenDer uses a regularization parameter w to maintain a balance between relevance and diversity. Since finding the exact optimum is NP-hard in general, it provides a near-optimal solution. Notations used BIB002 :
X: set of n candidate documents.
S: similarity matrix of size n x n; it is a symmetric matrix.
r(): ranking function; it returns a relevance value for each document in X.
T: subset of X with k elements; the goal of this technique is to find this subset T.
q: n x 1 reference vector, calculated as q = S.r; the ith element of q gives the importance for the rank of the ith element in X.
w: regularization parameter that defines the trade-off between relevance to the query and diversification among the set of documents.
g(T): goodness function to calculate how good a set of documents is in terms of both relevance and diversity.
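A greedy top-k selection in the spirit of GenDer might look like the sketch below. The goodness used here, w times the summed reference scores of T minus the pairwise similarities within T, is an illustrative stand-in, not the exact objective of BIB002, and the three-document data is an assumption.

```python
import numpy as np

def gender_greedy(S, r, k, w=1.0):
    """Greedy top-k diversified ranking sketch.

    S: n x n symmetric similarity matrix; r: length-n relevance
    scores; w trades off relevance against diversity. At each step
    the document with the largest marginal gain is added: a
    relevance reward w*q[i] minus a similarity penalty to the
    documents already chosen."""
    S = np.asarray(S, float)
    r = np.asarray(r, float)
    q = S @ r                      # reference vector q = S.r
    T = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(r)):
            if i in T:
                continue
            gain = w * q[i] - 2.0 * sum(S[i, j] for j in T)
            if gain > best_gain:
                best, best_gain = i, gain
        T.append(best)
    return T

# Documents 0 and 1 are near-duplicates; document 2 is distinct but less relevant.
S = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
r = np.array([1.0, 0.95, 0.6])
top2 = gender_greedy(S, r, k=2)
```

With this balance of w, the second pick skips the near-duplicate document 1 in favor of the dissimilar document 2, which is exactly the relevance-diversity trade-off the regularization parameter controls.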
A review of feature selection techniques in bioinformatics <s> FEATURE SELECTION TECHNIQUES <s> From the Publisher: ::: With advanced computer technologies and their omnipresent usage, data accumulates in a speed unmatchable by the human's capacity to process data. To meet this growing challenge, the research community of knowledge discovery from databases emerged. The key issue studied by this community is, in layman's terms, to make advantageous use of large stores of data. In order to make raw data useful, it is necessary to represent, process, and extract knowledge for various applications. Feature Selection for Knowledge Discovery and Data Mining offers an overview of the methods developed since the 1970's and provides a general framework in order to examine these methods and categorize them. This book employs simple examples to show the essence of representative feature selection methods and compares them using data sets with combinations of intrinsic properties according to the objective of feature selection. In addition, the book suggests guidelines for how to use different methods under various circumstances and points out new challenges in this exciting area of research. Feature Selection for Knowledge Discovery and Data Mining is intended to be used by researchers in machine learning, data mining, knowledge discovery, and databases as a toolbox of relevant tools that help in solving large real-world problems. This book is also intended to serve as a reference book or secondary text for courses on machine learning, data mining, and databases. <s> BIB001 </s> A review of feature selection techniques in bioinformatics <s> FEATURE SELECTION TECHNIQUES <s> Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. 
The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods. <s> BIB002 </s> A review of feature selection techniques in bioinformatics <s> FEATURE SELECTION TECHNIQUES <s> DNA micro-arrays now permit scientists to screen thousands of genes simultaneously and determine whether those genes are active, hyperactive or silent in normal or cancerous tissue. Because these new micro-array devices generate bewildering amounts of raw data, new analytical methods must be developed to sort out whether cancer tissues have distinctive signatures of gene expression over normal tissues or other types of cancer tissues. ::: ::: In this paper, we address the problem of selection of a small subset of genes from broad patterns of gene expression data, recorded on DNA micro-arrays. Using available training examples from cancer and normal patients, we build a classifier suitable for genetic diagnosis, as well as drug discovery. Previous attempts to address this problem select genes with correlation techniques. We propose a new method of gene selection utilizing Support Vector Machine methods based on Recursive Feature Elimination (RFE). We demonstrate experimentally that the genes selected by our techniques yield better classification performance and are biologically relevant to cancer. ::: ::: In contrast with the baseline method, our method eliminates gene redundancy automatically and yields better and more compact gene subsets. 
In patients with leukemia our method discovered 2 genes that yield zero leave-one-out error, while 64 genes are necessary for the baseline method to get the best result (one leave-one-out error). In the colon cancer database, using only 4 genes our method is 98% accurate, while the baseline method is only 86% accurate. <s> BIB003 </s> A review of feature selection techniques in bioinformatics <s> FEATURE SELECTION TECHNIQUES <s> This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts towards building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development. <s> BIB004
As many pattern recognition techniques were originally not designed to cope with large numbers of irrelevant features, combining them with FS techniques has become a necessity in many applications BIB002 BIB001 BIB004 . The objectives of feature selection are manifold, the most important ones being: a) to avoid overfitting and improve model performance, i.e. prediction performance in the case of supervised classification and better cluster detection in the case of clustering, b) to provide faster and more cost-effective models, and c) to gain a deeper insight into the underlying processes that generated the data. However, the advantages of feature selection techniques come at a certain price, as the search for a subset of relevant features introduces an additional layer of complexity in the modeling task. Instead of just optimizing the parameters of the model for the full feature set, we now need to find the optimal model parameters for the optimal feature subset, as there is no guarantee that the optimal parameters for the full feature set are equally optimal for the optimal feature subset . As a result, the search in the model hypothesis space is augmented by another dimension: the one of finding the optimal subset of relevant features. Feature selection techniques differ from each other in the way they incorporate this search into the added space of feature subsets in the model selection. In the context of classification, feature selection techniques can be organized into three categories, depending on how they combine the feature selection search with the construction of the classification model: filter methods, wrapper methods, and embedded methods. Table 1 provides a common taxonomy of feature selection methods, showing for each technique the most prominent advantages and disadvantages, as well as some examples of the most influential techniques. Filter techniques assess the relevance of features by looking only at the intrinsic properties of the data.
In most cases a feature relevance score is calculated, and low scoring features are removed. Afterwards, this subset of features is presented as input to the classification algorithm. Advantages of filter techniques are that they easily scale to very high-dimensional datasets, they are computationally simple and fast, and they are independent of the classification algorithm. As a result, feature selection needs to be performed only once, and then different classifiers can be evaluated. A common disadvantage of filter methods is that they ignore the interaction with the classifier (the search in the feature subset space is separated from the search in the hypothesis space), and that most proposed techniques are univariate. This means that each feature is considered separately, thereby ignoring feature dependencies, which may lead to worse classification performance when compared to other types of feature selection techniques BIB003 . In order to overcome the problem of ignoring feature dependencies, a number of multivariate filter techniques were introduced, aiming at the incorporation of feature dependencies to some degree. Whereas filter techniques treat the problem of finding a good feature subset independently of the model selection step, wrapper methods embed the model hypothesis search within the feature subset search. In this setup, a search procedure in the space of possible feature subsets is defined, and various subsets of features are generated and evaluated. The evaluation of a specific subset of features is obtained by training and testing a specific classification model, making this approach tailored to a specific classification algorithm. To search the space of all feature subsets, a search algorithm is then "wrapped" around the classification model. However, as the space of feature subsets grows exponentially with the number of features, heuristic search methods are used to guide the search for an optimal subset.
These search methods can be divided into two classes: deterministic and randomized search algorithms. Advantages of wrapper approaches include the interaction between feature subset search and model selection, and the ability to take feature dependencies into account. A common drawback of these techniques is that they have a higher risk of overfitting than filter techniques and are very computationally intensive, especially if building the classifier has a high computational cost. In a third class of feature selection techniques, termed embedded techniques, the search for an optimal subset of features is built into the classifier construction, and can be seen as a search in the combined space of feature subsets and hypotheses. Just like wrapper approaches, embedded approaches are thus specific to a given learning algorithm. Embedded methods have the advantage that they include the interaction with the classification model, while at the same time being far less computationally intensive than wrapper methods.
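To make the filter/wrapper contrast concrete, the following self-contained sketch ranks features with a simple univariate filter score and, alternatively, selects them with a greedy forward wrapper around a leave-one-out 1-nearest-neighbour classifier. The synthetic data, the mean-difference score and the 1-NN wrapper are illustrative assumptions of ours, not a method from the reviewed literature:

```python
import random

random.seed(0)

# Synthetic data: 40 samples, 6 features; only features 0 and 1 are informative.
def make_sample(label):
    x = [random.gauss(0.0, 1.0) for _ in range(6)]
    x[0] += 2.0 * label   # class-dependent shift
    x[1] -= 2.0 * label   # class-dependent shift
    return x

labels = [0, 1] * 20
X = [make_sample(lbl) for lbl in labels]
y = labels

# Filter: score each feature once by the absolute difference of class means.
def filter_scores(X, y):
    scores = []
    for j in range(len(X[0])):
        m0 = sum(x[j] for x, t in zip(X, y) if t == 0) / y.count(0)
        m1 = sum(x[j] for x, t in zip(X, y) if t == 1) / y.count(1)
        scores.append(abs(m1 - m0))
    return scores

# Wrapper ingredient: leave-one-out accuracy of a 1-nearest-neighbour
# classifier restricted to a candidate feature subset.
def loo_accuracy(X, y, feats):
    hits = 0
    for i in range(len(X)):
        best_d, pred = None, None
        for k in range(len(X)):
            if k == i:
                continue
            d = sum((X[i][j] - X[k][j]) ** 2 for j in feats)
            if best_d is None or d < best_d:
                best_d, pred = d, y[k]
        hits += (pred == y[i])
    return hits / len(X)

# Wrapper: greedy (sequential forward) search through feature subsets.
def forward_select(X, y, n_keep):
    chosen = []
    while len(chosen) < n_keep:
        remaining = [f for f in range(len(X[0])) if f not in chosen]
        best = max(remaining, key=lambda f: loo_accuracy(X, y, chosen + [f]))
        chosen.append(best)
    return chosen

scores = filter_scores(X, y)
top_filter = sorted(range(6), key=lambda j: -scores[j])[:2]
top_wrapper = forward_select(X, y, 2)
print("filter picks:", sorted(top_filter), "wrapper picks:", sorted(top_wrapper))
```

Note how the filter ranks all features once and independently of any classifier, whereas the wrapper retrains and re-evaluates the classifier for every candidate subset: this is exactly why wrappers can capture feature dependencies but carry a far higher computational cost.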
A review of feature selection techniques in bioinformatics <s> Content analysis <s> This paper describes a new system, GLIMMER, for finding genes in microbial genomes. In a series of tests on Haemophilus influenzae , Helicobacter pylori and other complete microbial genomes, this system has proven to be very accurate at locating virtually all the genes in these sequences, outperforming previous methods. A conservative estimate based on experiments on H.pylori and H. influenzae is that the system finds >97% of all genes. GLIMMER uses interpolated Markov models (IMMs) as a framework for capturing dependencies between nearby nucleotides in a DNA sequence. An IMM-based method makes predictions based on a variable context; i.e., a variable-length oligomer in a DNA sequence. The context used by GLIMMER changes depending on the local composition of the sequence. As a result, GLIMMER is more flexible and more powerful than fixed-order Markov methods, which have previously been the primary content-based technique for finding genes in microbial DNA. <s> BIB001 </s> A review of feature selection techniques in bioinformatics <s> Content analysis <s> Motivation: Most of the existing methods for genetic sequence classification are based on a computer search for homologies in nucleotide or amino acid sequences. The standard sequence alignment programs scale very poorly as the number of sequences increases or the degree of sequence identity is <30%. Some new computationally inexpensive methods based on nucleotide or amino acid compositional analysis have been proposed, but prediction results are still unsatisfactory and depend on the features chosen to represent the sequences. Results: In this paper a feature selection method based on the Gamma (or near-neighbour) test is proposed. 
If there is a continuous or smooth map from feature space to the classification target values, the Gamma test gives an estimate for the mean-squared error of the classification, despite the fact that one has no a priori knowledge of the smooth mapping. We can search a large space of possible feature combinations for a combination which gives the smallest estimated mean-squared error using a genetic algorithm. The method was used for feature selection and classification of the large subunits of rRNA according to RDP (Ribosomal Database Project) phylogenetic classes. The sequences were represented by dinucleotide frequency distribution. The nearest-neighbour criterion has been used to estimate the predictive accuracy of the classification based on the selected features. For examples discussed, we found that the classification according to the first nearest neighbour is correct for 80% of the test samples. If we consider the set of the 10 nearest neighbours, then 94% of the test samples are classified correctly. Availability: The principal novel component of this method is the Gamma test and this can be downloaded compiled for Unix Sun 4, Windows 95 and MS-DOS from http://www.cs.cfac.uk/ec/ Contact: s.margetts@cs.$ac.uk <s> BIB002 </s> A review of feature selection techniques in bioinformatics <s> Content analysis <s> The GLIMMER system for microbial gene identification finds approximately 97-98% of all genes in a genome when compared with published annotation. This paper reports on two new results: (i) significant technical improvements to GLIMMER that improve its accuracy still further, and (ii) a comprehensive evaluation that demonstrates that the accuracy of the system is likely to be higher than previously recognized. A significant proportion of the genes missed by the system appear to be hypothetical proteins whose existence is only supported by the predictions of other programs.
When the analysis is restricted to genes that have significant homology to genes in other organisms, GLIMMER misses <1% of known genes. <s> BIB003 </s> A review of feature selection techniques in bioinformatics <s> Content analysis <s> When the standard approach to predict protein function by sequence homology fails, other alternative methods can be used that require only the amino acid sequence for predicting function. One such approach uses machine learning to predict protein function directly from amino acid sequence features. However, there are two issues to consider before successful functional prediction can take place: identifying discriminatory features, and overcoming the challenge of a large imbalance in the training data. We show that by applying feature subset selection followed by undersampling of the majority class, significantly better support vector machine (SVM) classifiers are generated compared with standard machine learning approaches. As well as revealing that the features selected could have the potential to advance our understanding of the relationship between sequence and function, we also show that undersampling to produce fully balanced data significantly improves performance. The best discriminating ability is achieved using SVMs together with feature selection and full undersampling; this approach strongly outperforms other competitive learning algorithms. We conclude that this combined approach can generate powerful machine learning classifiers for predicting protein function directly from sequence. <s> BIB004 </s> A review of feature selection techniques in bioinformatics <s> Content analysis <s> Background: MicroRNAs (miRNAs) are small noncoding RNAs, which play significant roles as posttranscriptional regulators. The functions of animal miRNAs are generally based on complementarity for their 5' components.
Although several computational miRNA target-gene prediction methods have been proposed, they still have limitations in revealing actual target genes. Results: We implemented miTarget, a support vector machine (SVM) classifier for miRNA target gene prediction. It uses a radial basis function kernel as a similarity measure for SVM features, categorized by structural, thermodynamic, and position-based features. The latter features are introduced in this study for the first time and reflect the mechanism of miRNA binding. The SVM classifier produces high performance with a biologically relevant data set obtained from the literature, compared with previous tools. We predicted significant functions for human miR-1, miR-124a, and miR-373 using Gene Ontology (GO) analysis and revealed the importance of pairing at positions 4, 5, and 6 in the 5' region of a miRNA from a feature selection experiment. We also provide a web interface for the program. Conclusion: miTarget is a reliable miRNA target gene prediction tool and is a successful application of an SVM classifier. Compared with previous tools, its predictions are meaningful by GO analysis and its performance can be improved given more training examples. <s> BIB005
The prediction of subsequences that code for proteins (coding potential prediction) has been a focus of interest since the early days of bioinformatics. Because many features can be extracted from a sequence, and most dependencies occur between adjacent positions, many variations of Markov models were developed. To deal with the large number of possible features, and the often limited number of samples, BIB001 introduced the interpolated Markov model (IMM), which used interpolation between different orders of the Markov model to deal with small sample sizes, and a filter method (Chi-square) to select only relevant features. In further work, BIB003 extended the IMM framework to also deal with non-adjacent feature dependencies, resulting in the interpolated context model (ICM), which crosses a Bayesian decision tree with a filter method (Chi-square) to assess feature relevance. Recently, the avenue of FS techniques for coding potential prediction was further pursued by , who combined different measures of coding potential prediction, and then used the Markov blanket multivariate filter approach (MBF) to retain only the relevant ones. A second class of techniques focuses on the prediction of protein function from sequence. The early work of BIB002 , who combined a genetic algorithm with the Gamma test to score feature subsets for classification of large subunits of rRNA, inspired researchers to use FS techniques to focus on important subsets of amino acids that relate to the protein's functional class BIB004 . An interesting technique is described in , using selective kernel scaling for support vector machines (SVM) as a way to assess feature weights, and subsequently remove features with low weights. The use of FS techniques in the domain of sequence analysis is also emerging in a number of more recent applications, such as the recognition of promoter regions , and the prediction of microRNA targets BIB005 .
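In the spirit of the dinucleotide-frequency representation used by BIB002 , a minimal sketch of univariate filtering on sequence-composition features might look as follows. It is entirely synthetic: the GC-rich versus AT-rich "coding" model and the mean-difference score are our own illustrative assumptions, not the published Chi-square or Gamma-test procedures:

```python
import random
from itertools import product

random.seed(1)
BASES = "ACGT"
DINUCS = ["".join(p) for p in product(BASES, repeat=2)]  # 16 composition features

def dinuc_freqs(seq):
    """Represent a sequence by its 16 dinucleotide frequencies."""
    counts = dict.fromkeys(DINUCS, 0)
    for i in range(len(seq) - 1):
        counts[seq[i:i + 2]] += 1
    total = max(len(seq) - 1, 1)
    return [counts[d] / total for d in DINUCS]

def random_seq(n, weights):
    return "".join(random.choices(BASES, weights=weights, k=n))

# Synthetic classes: "coding-like" sequences are GC-rich, the rest AT-rich.
coding = [random_seq(300, [1, 3, 3, 1]) for _ in range(30)]
noncoding = [random_seq(300, [3, 1, 1, 3]) for _ in range(30)]
X = [dinuc_freqs(s) for s in coding + noncoding]
y = [1] * 30 + [0] * 30

def class_mean(X, y, j, label):
    vals = [x[j] for x, t in zip(X, y) if t == label]
    return sum(vals) / len(vals)

# Univariate filter: score each dinucleotide feature independently by the
# absolute difference of its class means, then rank.
scores = {d: abs(class_mean(X, y, j, 1) - class_mean(X, y, j, 0))
          for j, d in enumerate(DINUCS)}
ranked = sorted(DINUCS, key=lambda d: -scores[d])
print("top-ranked dinucleotides:", ranked[:4])
```

Because the two synthetic classes differ only in base composition, the pure AT and GC dinucleotides receive high scores while mixed dinucleotides score near zero, illustrating how a univariate filter isolates composition features that discriminate the classes.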
A review of feature selection techniques in bioinformatics <s> Signal analysis <s> Constantly improving gene expression profiling technologies are expected to provide understanding and insight into cancer related cellular processes. Gene expression data is also expected to significantly aid in the development of efficient cancer diagnosis and classification platforms. In this work we examine two sets of gene expression data measured across sets of tumor and normal clinical samples. One set consists of 2,000 genes, measured in 62 epithelial colon samples [1]. The second consists of 100,000 clones, measured in 32 ovarian samples (unpublished, extension of data set described in [26]). We examine the use of scoring methods, measuring separation of tumors from normals using individual gene expression levels. These are then coupled with high dimensional classification methods to assess the classification power of complete expression profiles. We present results of performing leave-one-out cross validation (LOOCV) experiments on the two data sets, employing SVM [8], AdaBoost [13] and a novel clustering based classification technique. As tumor samples can differ from normal samples in their cell-type composition, we also perform LOOCV experiments using appropriately modified sets of genes, attempting to eliminate the resulting bias. We demonstrate a success rate of at least 90% in tumor vs normal classification, using sets of selected genes, with as well as without cellular contamination related members. These results are insensitive to the exact selection mechanism, over a certain range. <s> BIB001 </s> A review of feature selection techniques in bioinformatics <s> Signal analysis <s> Motivation: Many methods have been described to identify regulatory motifs in the transcription control regions of genes that exhibit similar patterns of gene expression across a variety of experimental conditions.
Here we focus on a single experimental condition, and utilize gene expression data to identify sequence motifs associated with genes that are activated under this experimental condition. We use a linear model with two-way interactions to model gene expression as a function of sequence features (words) present in presumptive transcription control regions. The most relevant features are selected by a feature selection method called stepwise selection with Monte Carlo cross validation. We apply this method to a publicly available dataset of the yeast Saccharomyces cerevisiae, focussing on the 800 basepairs immediately upstream of each gene’s translation start site (the upstream control region (UCR)). Result: We successfully identify regulatory motifs that are known to be active under the experimental conditions analyzed, and find additional significant sequences that may represent novel regulatory motifs. We also discuss a complementary method that utilizes gene expression data from a single microarray experiment and allows averaging over a variety of experimental conditions as an alternative to motif finding methods that act on clusters of co-expressed genes. Availability: The software is available upon request from the first author or may be downloaded from http://www.stat. <s> BIB002 </s> A review of feature selection techniques in bioinformatics <s> Signal analysis <s> Background: The identification of relevant biological features in large and complex datasets is an important step towards gaining insight in the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and a faster classification.
Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data. Results: In this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework of genetic algorithms. From the estimated distribution of the algorithm, a feature ranking is derived. Afterwards this ranking is used to iteratively discard features. We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion: We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do) this method provides a dynamical view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features. <s> BIB003 </s> A review of feature selection techniques in bioinformatics <s> Signal analysis <s> Motivation: Understanding the mechanisms that determine gene expression regulation is an important and challenging problem. A common approach consists of identifying DNA-binding sites from a collection of co-regulated genes and their nearby non-coding DNA sequences. Here, we consider a regression model that linearly relates gene expression levels to a sequence matching score of nucleotide patterns. We use Bayesian models and stochastic search techniques to select transcription factor binding site candidates, as an alternative to stepwise regression procedures used by other investigators. Results: We demonstrate through simulated data the improved performance of the Bayesian variable selection method compared to the stepwise procedure.
We then analyze and discuss the results from experiments involving well-studied pathways of Saccharomyces cerevisiae and Schizosaccharomyces pombe. We identify regulatory motifs known to be related to the experimental conditions considered. Some of our selected motifs are also in agreement with recent findings by other researchers. In addition, our results include novel motifs that constitute promising sets for further assessment. Availability: The Matlab code for implementing the Bayesian variable selection method may be obtained from the corresponding author. <s> BIB004 </s> A review of feature selection techniques in bioinformatics <s> Signal analysis <s> The translation initiation site (TIS) prediction problem is about how to correctly identify TIS in mRNA, cDNA, or other types of genomic sequences. High prediction accuracy can be helpful in a better understanding of protein coding from nucleotide sequences. This is an important step in genomic analysis to determine protein coding from nucleotide sequences. In this paper, we present an in silico method to predict translation initiation sites in vertebrate cDNA or mRNA sequences. This method consists of three sequential steps as follows. In the first step, candidate features are generated using k-gram amino acid patterns. In the second step, a small number of top-ranked features are selected by an entropy-based algorithm. In the third step, a classification model is built to recognize true TISs by applying support vector machines or ensembles of decision trees to the selected features. We have tested our method on several independent data sets, including two public ones and our own extracted sequences. The experimental results achieved are better than those reported previously using the same data sets. Our high accuracy not only demonstrates the feasibility of our method, but also indicates that there might be "amino acid" patterns around TIS in cDNA and mRNA sequences. <s> BIB005
Many sequence analysis methodologies involve the recognition of short, more or less conserved signals in the sequence, representing mainly binding sites for various proteins or protein complexes. A common approach to find regulatory motifs is to relate motifs to gene expression levels using a regression approach. Feature selection can then be used to search for the motifs that maximize the fit to the regression model BIB002 BIB004 . In , a classification approach is chosen to find discriminative motifs. The method is inspired by BIB001 who use the threshold number of misclassification (TNoM, see further in the section on microarray analysis) to score genes for relevance to tissue classification. From the TNoM score, a p-value is calculated that represents the significance of each motif. Motifs are then sorted according to their p-value. Another line of research is performed in the context of the gene prediction setting, where structural elements such as the translation initiation site (TIS) and splice sites are modelled as specific classification problems. The problem of feature selection for structural element recognition was pioneered in for the problem of splice site prediction, combining a sequential backward method together with an embedded SVM evaluation criterion to assess feature relevance. In BIB003 an estimation of distribution algorithm (EDA, a generalization of genetic algorithms) was used to gain more insight into the relevant features for splice site prediction. Similarly, the prediction of TIS is a suitable problem for applying feature selection techniques. In BIB005 , the authors demonstrate the advantages of using feature selection for this problem, using the feature-class entropy as a filter measure to remove irrelevant features. In future research, FS techniques can be expected to be useful for a number of challenging prediction tasks, such as identifying relevant features related to alternative splice sites and alternative TIS.
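A rough sketch of such an entropy-style filter for positional sequence features is given below. It is a simplified illustration under our own assumptions: a synthetic site model with one conserved dinucleotide, and per-position information gain in place of the original k-gram feature sets:

```python
import math
import random

random.seed(2)
BASES = "ACGT"

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    h, n = 0.0, len(labels)
    for c in set(labels):
        p = labels.count(c) / n
        h -= p * math.log2(p)
    return h

def info_gain(column, labels):
    """Information gain of one sequence position about the class label."""
    cond = 0.0
    for b in set(column):
        sub = [labels[i] for i, x in enumerate(column) if x == b]
        cond += len(sub) / len(labels) * entropy(sub)
    return entropy(labels) - cond

# Synthetic site model: true sites carry a conserved "GT" at positions 3-4,
# all other positions are uniform background.
def make_seq(is_site):
    s = [random.choice(BASES) for _ in range(8)]
    if is_site:
        s[3], s[4] = "G", "T"
    return "".join(s)

seqs = [make_seq(True) for _ in range(40)] + [make_seq(False) for _ in range(40)]
labels = [1] * 40 + [0] * 40

# Filter: rank positions by information gain and keep the most informative.
gains = [info_gain([s[j] for s in seqs], labels) for j in range(8)]
ranked = sorted(range(8), key=lambda j: -gains[j])
print("most informative positions:", ranked[:2])
```

As expected for a filter, each position is scored independently of any downstream classifier; positions carrying the conserved signal stand out, while background positions receive near-zero gain.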
A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Oligonucleotide arrays can provide a broad picture of the state of the cell, by monitoring the expression level of thousands of genes at the same time. It is of interest to develop techniques for extracting useful information from the resulting data sets. Here we report the application of a two-way clustering method for analyzing a data set consisting of the expression patterns of different cell types. Gene expression in 40 tumor and 22 normal colon tissue samples was analyzed with an Affymetrix oligonucleotide array complementary to more than 6,500 human genes. An efficient two-way clustering algorithm was applied to both the genes and the tissues, revealing broad coherent patterns that suggest a high degree of organization underlying gene expression in these tissues. Coregulated families of genes clustered together, as demonstrated for the ribosomal proteins. Clustering also separated cancerous from noncancerous tissue and cell lines from in vivo tissues on the basis of subtle distributed patterns of genes even when expression of individual genes varied only slightly between the tissues. Two-way clustering thus may be of use both in classifying genes into functional groups and in classifying tissues based on gene expression. <s> BIB001 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Although cancer classification has improved over the past 30 years, there has been no general approach for identifying new cancer classes (class discovery) or for assigning tumors to known classes (class prediction). Here, a generic approach to cancer classification based on gene expression monitoring by DNA microarrays is described and applied to human acute leukemias as a test case.
A class discovery procedure automatically discovered the distinction between acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) without previous knowledge of these classes. An automatically derived class predictor was able to determine the class of new leukemia cases. The results demonstrate the feasibility of cancer classification based solely on gene expression monitoring and suggest a general strategy for discovering and predicting cancer classes for other types of cancer, independent of previous biological knowledge. <s> BIB002 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Constantly improving gene expression profiling technologies are expected to provide understanding and insight into cancer related cellular processes. Gene expression data is also expected to significantly aid in the development of efficient cancer diagnosis and classification platforms. In this work we examine two sets of gene expression data measured across sets of tumor and normal clinical samples. One set consists of 2,000 genes, measured in 62 epithelial colon samples [1]. The second consists of 100,000 clones, measured in 32 ovarian samples (unpublished, extension of data set described in [26]). We examine the use of scoring methods, measuring separation of tumors from normals using individual gene expression levels. These are then coupled with high dimensional classification methods to assess the classification power of complete expression profiles. We present results of performing leave-one-out cross validation (LOOCV) experiments on the two data sets, employing SVM [8], AdaBoost [13] and a novel clustering based classification technique. As tumor samples can differ from normal samples in their cell-type composition, we also perform LOOCV experiments using appropriately modified sets of genes, attempting to eliminate the resulting bias.
We demonstrate a success rate of at least 90% in tumor vs normal classification, using sets of selected genes, with as well as without cellular contamination related members. These results are insensitive to the exact selection mechanism, over a certain range. <s> BIB003 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> We used cDNA microarrays to explore the variation in expression of approximately 8,000 unique genes among the 60 cell lines used in the National Cancer Institute's screen for anti-cancer drugs. Classification of the cell lines based solely on the observed patterns of gene expression revealed a correspondence to the ostensible origins of the tumours from which the cell lines were derived. The consistent relationship between the gene expression patterns and the tissue of origin allowed us to recognize outliers whose previous classification appeared incorrect. Specific features of the gene expression patterns appeared to be related to physiological properties of the cell lines, such as their doubling time in culture, drug metabolism or the interferon response. Comparison of gene expression patterns in the cell lines to those observed in normal breast tissue or in breast tumour specimens revealed features of the expression patterns in the tumours that had recognizable counterparts in specific cell lines, reflecting the tumour, stromal and inflammatory components of the tumour tissue. These results provided a novel molecular characterization of this important group of human cell lines and their relationships to tumours in vivo. <s> BIB004 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Microarrays are a novel technology that facilitates the simultaneous measurement of thousands of gene expression levels.
A typical microarray experiment can produce millions of data points, raising serious problems of data reduction, and simultaneous inference. We consider one such experiment in which oligonucleotide arrays were employed to assess the genetic effects of ionizing radiation on seven thousand human genes. A simple nonparametric empirical Bayes model is introduced, which is used to guide the efficient reduction of the data to a single summary statistic per gene, and also to make simultaneous inferences concerning which genes were affected by the radiation. Although our focus is on one specific experiment, the proposed methods can be applied quite generally. The empirical Bayes inferences are closely related to the frequentist false discovery rate (FDR) criterion. <s> BIB005 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Motivation: DNA microarrays are now capable of providing genome-wide patterns of gene expression across many different conditions. The first level of analysis of these patterns requires determining whether observed differences in expression are significant or not. Current methods are unsatisfactory due to the lack of a systematic framework that can accommodate noise, variability, and low replication often typical of microarray data. Results: We develop a Bayesian probabilistic framework for microarray data analysis. At the simplest level, we model log-expression values by independent normal distributions, parameterized by corresponding means and variances with hierarchical prior distributions. We derive point estimates for both parameters and hyperparameters, and regularized expressions for the variance of each gene by combining the empirical variance with a local background variance associated with neighboring genes. An additional hyperparameter, inversely related to the number of empirical observations, determines the strength of the background variance. 
Simulations show that these point estimates, combined with a t-test, provide a systematic inference approach that compares favorably with simple t-test or fold methods, and partly compensate for the lack of replication. <s> BIB006 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> We consider the problem of inferring fold changes in gene expression from cDNA microarray data. Standard procedures focus on the ratio of measured fluorescent intensities at each spot on the microarray, but to do so is to ignore the fact that the variation of such ratios is not constant. Estimates of gene expression changes are derived within a simple hierarchical model that accounts for measurement error and fluctuations in absolute gene expression levels. Significant gene expression changes are identified by deriving the posterior odds of change within a similar model. The methods are tested via simulation and are applied to a panel of Escherichia coli microarrays. <s> BIB007 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Microarrays can measure the expression of thousands of genes to identify changes in expression between different biological states. Methods are needed to determine the significance of these changes while accounting for the enormous number of genes. We describe a method, Significance Analysis of Microarrays (SAM), that assigns a score to each gene on the basis of change in gene expression relative to the standard deviation of repeated measurements. For genes with scores greater than an adjustable threshold, SAM uses permutations of the repeated measurements to estimate the percentage of genes identified by chance, the false discovery rate (FDR). 
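The hierarchical-variance idea behind such regularized Bayesian t-tests can be sketched in a few lines. This is an illustrative simplification, not the published model: `sigma0_sq` stands for the background variance borrowed from genes of similar expression level, and `v0` is a hyperparameter counting the pseudo-observations that back the prior (both names are ours).

```python
import math
import statistics

def regularized_t(x, y, sigma0_sq, v0=10):
    """Regularized two-sample t-statistic: the pooled empirical variance is
    shrunk toward a background variance sigma0_sq, with v0 pseudo-observations
    backing the prior. Larger v0 means stronger shrinkage."""
    n1, n2 = len(x), len(y)
    m1, m2 = statistics.mean(x), statistics.mean(y)
    # pooled empirical variance of the two groups
    s_sq = (sum((v - m1) ** 2 for v in x) +
            sum((v - m2) ** 2 for v in y)) / (n1 + n2 - 2)
    # shrink toward the background variance
    s_reg = (v0 * sigma0_sq + (n1 + n2 - 2) * s_sq) / (v0 + n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(s_reg * (1 / n1 + 1 / n2))
```

With only three replicates per group and a tiny empirical variance, the background term keeps the denominator from collapsing, which is exactly the compensation for lack of replication described above.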
When the transcriptional response of human cells to ionizing radiation was measured by microarrays, SAM identified 34 genes that changed at least 1.5-fold with an estimated FDR of 12%, compared with FDRs of 60 and 84% by using conventional methods of analysis. Of the 34 genes, 19 were involved in cell cycle regulation and 3 in apoptosis. Surprisingly, four nucleotide excision repair genes were induced, suggesting that this repair pathway for UV-damaged DNA might play a previously unrecognized role in repairing DNA damaged by ionizing radiation. <s> BIB008 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Gene expression studies bridge the gap between DNA information and trait information by dissecting biochemical pathways into intermediate components between genotype and phenotype. These studies open new avenues for identifying complex disease genes and biomarkers for disease diagnosis and for assessing drug efficacy and toxicity. However, the majority of analytical methods applied to gene expression data are not efficient for biomarker identification and disease diagnosis. In this paper, we propose a general framework to incorporate feature (gene) selection into pattern recognition in the process to identify biomarkers. Using this framework, we develop three feature wrappers that search through the space of feature subsets using the classification error as measure of goodness for a particular feature subset being "wrapped around": linear discriminant analysis, logistic regression, and support vector machines. To effectively carry out this computationally intensive search process, we employ sequential forward search and sequential forward floating search algorithms. To evaluate the performance of feature selection for biomarker identification we have applied the proposed methods to three data sets. 
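The SAM procedure just described lends itself to a compact sketch: a moderated d-statistic per gene, where a fudge constant `s0` keeps low-variance genes from dominating, plus label permutations to estimate how many genes would exceed a threshold by chance. This is a toy illustration of the idea, not the reference implementation; the `delta` and `s0` values are arbitrary.

```python
import math
import random
import statistics

def d_stat(gene, labels, s0=0.1):
    """SAM-style moderated statistic: group-mean difference over
    (pooled standard error + s0)."""
    g1 = [v for v, l in zip(gene, labels) if l == 1]
    g0 = [v for v, l in zip(gene, labels) if l == 0]
    m1, m0 = statistics.mean(g1), statistics.mean(g0)
    pooled = math.sqrt((sum((v - m1) ** 2 for v in g1) +
                        sum((v - m0) ** 2 for v in g0)) / (len(g1) + len(g0) - 2))
    se = pooled * math.sqrt(1 / len(g1) + 1 / len(g0))
    return (m1 - m0) / (se + s0)

def sam_fdr(data, labels, delta, n_perm=200, s0=0.1, seed=0):
    """Count genes with |d| > delta, then estimate the FDR of that call set
    by permuting the sample labels. data: genes x samples; labels: 0/1."""
    rng = random.Random(seed)
    observed = sum(1 for g in data if abs(d_stat(g, labels, s0)) > delta)
    total_null = 0
    for _ in range(n_perm):
        perm = labels[:]
        rng.shuffle(perm)
        total_null += sum(1 for g in data if abs(d_stat(g, perm, s0)) > delta)
    expected_false = total_null / n_perm
    return observed, (expected_false / observed if observed else 0.0)
```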
The preliminary results demonstrate that very high classification accuracy can be attained by identified composite classifiers with several biomarkers. <s> BIB009 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> A reliable and precise classification of tumors is essential for successful diagnosis and treatment of cancer. cDNA microarrays and high-density oligonucleotide chips are novel biotechnologies increasingly used in cancer research. By allowing the monitoring of expression levels in cells for thousands of genes simultaneously, microarray experiments may lead to a more complete understanding of the molecular variations among tumors and hence to a finer and more informative classification. The ability to successfully distinguish between tumor classes (already known or yet to be discovered) using gene expression data is an important aspect of this novel approach to cancer classification. This article compares the performance of different discrimination methods for the classification of tumors based on gene expression data. The methods include nearest-neighbor classifiers, linear discriminant analysis, and classification trees. Recent machine learning approaches, such as bagging and boosting, are also considered.
We compare three model-free approaches: (1) nonparametric t-test, (2) Wilcoxon (or Mann‐Whitney) rank sum test, and (3) a heuristic method based on high Pearson correlation to a perfectly differentiating gene (‘ideal discriminator method’). We systematically assess the performance of each method based on simulated and biological data under varying noise levels and p-value cutoffs. Results: All methods exhibit very low false positive rates and identify a large fraction of the differentially expressed genes in simulated data sets with noise level similar to that of actual data. Overall, the rank sum test appears most conservative, which may be advantageous when the computationally identified genes need to be tested biologically. However, if a more inclusive list of markers is desired, a higher p-value cutoff or the nonparametric t-test may be appropriate. When applied to data from lung tumor and lymphoma data sets, the methods identify biologically relevant differentially expressed genes that allow clear separation of groups in question. Thus the methods described and evaluated here provide a convenient and robust way to identify differentially expressed genes for further biological and clinical analysis. Availability: By request from the authors. <s> BIB011 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the type I error rate for a single-hypothesis test, a compound error rate is controlled for multiple-hypothesis tests. For example, controlling the false discovery rate FDR traditionally involves intricate sequential "p"-value rejection methods based on the observed data. 
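As a concrete illustration of the rank-sum alternative to the t-test compared above, the statistic and a two-sided permutation p-value fit in a short sketch (a toy version: real implementations use the exact null distribution or a normal approximation rather than shuffling):

```python
import random

def rank_sum(x, y):
    """Wilcoxon rank-sum statistic W: sum of the ranks of the x-values in
    the pooled sample, with ties sharing their average rank."""
    pooled = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
    ranks = {}
    j = 0
    while j < len(pooled):
        k = j
        while k + 1 < len(pooled) and pooled[k + 1][0] == pooled[j][0]:
            k += 1
        avg = (j + k) / 2 + 1          # average 1-based rank of the tie group
        for m in range(j, k + 1):
            ranks[pooled[m][1]] = avg
        j = k + 1
    return sum(ranks[i] for i in range(len(x)))

def ranksum_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for W, measured as the deviation from
    its null expectation n1*(n1+n2+1)/2."""
    rng = random.Random(seed)
    pooled, n1 = list(x) + list(y), len(x)
    center = n1 * (len(pooled) + 1) / 2
    obs = abs(rank_sum(x, y) - center)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(rank_sum(pooled[:n1], pooled[n1:]) - center) >= obs:
            hits += 1
    return hits / n_perm
```

Because only ranks enter the statistic, a single outlying expression value cannot dominate the result, which is why the rank sum test is the more conservative choice noted above.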
Whereas a sequential p-value method fixes the error rate and estimates its corresponding rejection region, we propose the opposite approach: we fix the rejection region and then estimate its corresponding error rate. This new approach offers increased applicability, accuracy and power. We apply the methodology to both the positive false discovery rate pFDR and FDR, and provide evidence for its benefits. It is shown that pFDR is probably the quantity of interest over FDR. Also discussed is the calculation of the q-value, the pFDR analogue of the p-value, which eliminates the need to set the error rate beforehand as is traditionally done. Some simple numerical examples are presented that show that this new approach can yield an increase of over eight times in power compared with the Benjamini-Hochberg FDR method. <s> BIB012 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Motivation: Two practical realities constrain the analysis of microarray data, mass spectra from proteomics, and biomedical infrared or magnetic resonance spectra. One is the 'curse of dimensionality': the number of features characterizing these data is in the thousands or tens of thousands. The other is the 'curse of dataset sparsity': the number of samples is limited. The consequences of these two curses are far-reaching when such data are used to classify the presence or absence of disease. Results: Using very simple classifiers, we show for several publicly available microarray and proteomics datasets how these curses influence classification outcomes. In particular, even if the sample per feature ratio is increased to the recommended 5–10 by feature extraction/reduction methods, dataset sparsity can render any classification result statistically suspect.
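The sequential Benjamini-Hochberg adjustment that serves as the comparison point in the q-value discussion above is simple enough to sketch. This computes BH step-up adjusted p-values; Storey's q-value additionally estimates the proportion of true nulls, which this sketch omits.

```python
def bh_fdr(pvalues):
    """Benjamini-Hochberg step-up adjusted p-values: entry i is the smallest
    FDR level at which hypothesis i would be rejected."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    for pos_from_end, idx in enumerate(reversed(order)):
        rank = m - pos_from_end            # 1-based rank of this p-value
        running_min = min(running_min, pvalues[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted
```

For example, `bh_fdr([0.01, 0.02, 0.03, 0.5])` leaves the clearly null hypothesis at 0.5 while the three small p-values share an adjusted value of 0.04.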
In addition, several 'optimal' feature sets are typically identifiable for sparse datasets, all producing perfect classification results, both for the training and independent validation sets. This non-uniqueness leads to interpretational difficulties and casts doubt on the biological relevance of any of these 'optimal' feature sets. We suggest an approach to assess the relative quality of apparently equally good classifiers. <s> BIB013 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Since most classification articles have applied a single technique to a single gene expression dataset, it is crucial to assess the performance of each method through a comprehensive comparative study. In an extensive comparison study extending Dudoit et al. (J. Amer. Statist. Assoc. 97 (2002) 77), we evaluate the performance of recently developed classification methods in microarray experiments and provide guidelines for finding the most appropriate classification tools in various situations. We extend their comparison in three directions: more classification methods (21 methods), more datasets (7 datasets) and more gene selection techniques (3 methods). Our comparison study shows several interesting facts and provides biologists and biostatisticians with some insights into the classification tools in microarray data analysis. This study also shows that the more sophisticated classifiers give better performance than classical methods such as kNN, DLDA and DQDA, that the choice of gene selection method has a strong effect on the performance of the classification methods, and thus that classification methods should be considered together with the gene selection criteria.
<s> BIB014 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Summary: This paper studies the problem of building multiclass classifiers for tissue classification based on gene expression. The recent development of microarray technologies has enabled biologists to quantify gene expression of tens of thousands of genes in a single experiment. Biologists have begun collecting gene expression for a large number of samples. One of the urgent issues in the use of microarray data is to develop methods for characterizing samples based on their gene expression. The most basic step in this research direction is binary sample classification, which has been studied extensively over the past few years. This paper investigates the next step: multiclass classification of samples based on gene expression. The characteristics of expression data (e.g. a large number of genes with a small sample size) make the classification problem more challenging. The process of building multiclass classifiers is divided into two components: (i) selection of the features (i.e. genes) to be used for training and testing and (ii) selection of the classification method. This paper compares various feature selection methods as well as various state-of-the-art classification methods on various multiclass gene expression datasets. Our study indicates that the multiclass classification problem is much more difficult than the binary one for the gene expression datasets. The difficulty lies in the fact that the data are of high dimensionality and that the sample size is small. The classification accuracy appears to degrade very rapidly as the number of classes increases. In particular, the accuracy was very low regardless of the choice of methods for large-class datasets (e.g. NCI60 and GCM).
While increasing the number of samples is a plausible solution to the problem of accuracy degradation, it is important to develop algorithms that are able to analyze effectively multiple-class expression data for these special datasets. <s> BIB015 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Motivation: Cancer diagnosis is one of the most important emerging clinical applications of gene expression microarray technology. We are seeking to develop a computer system for powerful and reliable cancer diagnostic model creation based on microarray data. To keep a realistic perspective on clinical applications we focus on multicategory diagnosis. To equip the system with the optimum combination of classifier, gene selection and cross-validation methods, we performed a systematic and comprehensive evaluation of several major algorithms for multicategory classification, several gene selection methods, multiple ensemble classifier methods and two cross-validation designs using 11 datasets spanning 74 diagnostic categories and 41 cancer types and 12 normal tissue types. Results: Multicategory support vector machines (MC-SVMs) are the most effective classifiers in performing accurate cancer diagnosis from gene expression data. The MC-SVM techniques by Crammer and Singer, Weston and Watkins and one-versus-rest were found to be the best methods in this domain. MC-SVMs outperform other popular machine learning algorithms, such as k-nearest neighbors, backpropagation and probabilistic neural networks, often to a remarkable degree. Gene selection techniques can significantly improve the classification performance of both MC-SVMs and other non-SVM learning algorithms. Ensemble classifiers do not generally improve performance of the best non-ensemble models.
These results guided the construction of a software system GEMS (Gene Expression Model Selector) that automates high-quality model construction and enforces sound optimization and performance estimation procedures. This is the first such system to be informed by a rigorous comparative analysis of the available algorithms and datasets. Availability: The software system GEMS is available for download from http://www.gems-system.org for non-commercial use. Contact: [email protected] <s> BIB016 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> One of the main objectives in the analysis of microarray experiments is the identification of genes that are differentially expressed under two experimental conditions. This task is complicated by the noisiness of the data and the large number of genes that are examined simultaneously. Here, we present a novel technique for identifying differentially expressed genes that does not originate from a sophisticated statistical model but rather from an analysis of biological reasoning. The new technique, which is based on calculating rank products (RP) from replicate experiments, is fast and simple. At the same time, it provides a straightforward and statistically stringent way to determine the significance level for each gene and allows for the flexible control of the false-detection rate and familywise error rate in the multiple testing situation of a microarray experiment. We use the RP technique on three biological data sets and show that in each case it performs more reliably and consistently than the non-parametric t-test variant implemented in Tusher et al.'s significance analysis of microarrays (SAM). We also show that the RP results are reliable in highly noisy data. An analysis of the physiological function of the identified genes indicates that the RP approach is powerful for identifying biologically relevant expression changes.
In addition, using RP can lead to a sharp reduction in the number of replicate experiments needed to obtain reproducible results. <s> BIB017 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Background: Microarray studies in cancer compare expression levels between two or more sample groups on thousands of genes. Data analysis follows a population-level approach (e.g., comparison of sample means) to identify differentially expressed genes. This leads to the discovery of 'population-level' markers, i.e., genes with the expression patterns A > B and B > A. We introduce the PPST test that identifies genes where a significantly large subset of cases exhibit expression values beyond upper and lower thresholds observed in the control samples. Results: Interestingly, the test identifies A > B and B < A pattern genes that are missed by population-level approaches, such as the t-test, and many genes that exhibit both significant overexpression and significant underexpression in statistically significantly large subsets of cancer patients (ABA pattern genes). These patterns tend to show distributions that are unique to individual genes, and are aptly visualized in a 'gene expression pattern grid'. The low degree of among-gene correlations in these genes suggests unique underlying genomic pathologies and a high degree of unique tumor-specific differential expression. We compare the PPST and the ABA test to the parametric and non-parametric t-test by analyzing two independently published data sets from studies of progression in astrocytoma. Conclusions: The PPST test yielded findings similar to the nonparametric t-test with higher self-consistency. These tests and the gene expression pattern grid may be useful for the identification of therapeutic targets and diagnostic or prognostic markers that are present only in subsets of cancer patients, and provide a more complete portrait of differential expression in cancer.
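The rank-product statistic reviewed above (BIB017) reduces to a few lines: rank the genes within each replicate and combine a gene's ranks geometrically, so that only consistent behaviour across replicates yields an extreme score. Significance is then assessed by permuting ranks, which this sketch omits.

```python
import math

def rank_products(fold_changes):
    """Rank-product statistic. fold_changes: genes x replicates matrix of
    (log) fold changes. Within each replicate, rank 1 = most up-regulated;
    the statistic is the geometric mean of a gene's ranks across replicates,
    so small values mean consistently strong up-regulation."""
    n_genes, n_reps = len(fold_changes), len(fold_changes[0])
    ranks = [[0] * n_reps for _ in range(n_genes)]
    for r in range(n_reps):
        order = sorted(range(n_genes), key=lambda g: -fold_changes[g][r])
        for pos, g in enumerate(order):
            ranks[g][r] = pos + 1
    return [math.prod(ranks[g]) ** (1 / n_reps) for g in range(n_genes)]
```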
<s> BIB018 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Motivation: Recent attempts to account for multiple testing in the analysis of microarray data have focused on controlling the false discovery rate (FDR). However, rigorous control of the FDR at a preselected level is often impractical. Consequently, it has been suggested to use the q-value as an estimate of the proportion of false discoveries among a set of significant findings. However, such an interpretation of the q-value may be unwarranted considering that the q-value is based on an unstable estimator of the positive FDR (pFDR). Another method proposes estimating the FDR by modeling p-values as arising from a beta-uniform mixture (BUM) distribution. Unfortunately, the BUM approach is reliable only in settings where the assumed model accurately represents the actual distribution of p-values. ::: ::: Methods: A method called the spacings LOESS histogram (SPLOSH) is proposed for estimating the conditional FDR (cFDR), the expected proportion of false positives conditioned on having k 'significant' findings. SPLOSH is designed to be more stable than the q-value and applicable in a wider variety of settings than BUM. ::: ::: Results: In a simulation study and data analysis example, SPLOSH exhibits the desired characteristics relative to the q-value and BUM. ::: ::: Availability: The Web site www.stjuderesearch.org/statistics/splosh.html has links to freely available S-plus code to implement the proposed procedure. <s> BIB019 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> DNA microarray experiments generating thousands of gene expression measurements, are used to collect information from tissue and cell samples regarding gene expression differences that could be useful for diagnosis disease, distinction of the specific tumor type, etc. 
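The threshold idea behind PPST-style tests, described just above, can be illustrated very simply: for each gene, count the case samples falling outside the expression range seen in the controls. The actual PPST uses percentile thresholds and a significance test on the size of that subset; this sketch is only the counting step.

```python
def outside_threshold_count(controls, cases):
    """Number of case samples whose expression lies beyond the full range
    observed in the control samples (a crude stand-in for PPST thresholds)."""
    lo, hi = min(controls), max(controls)
    return sum(1 for v in cases if v < lo or v > hi)
```

A gene where only a subset of cases escapes the control range would score highly here while its sample mean barely moves, which is exactly the kind of marker a t-test misses.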
One important application of gene expression microarray data is the classification of samples into known categories. As DNA microarray technology measures gene expression en masse, this has resulted in data with the number of features (genes) far exceeding the number of samples. As the predictive accuracy of supervised classifiers that try to discriminate between the classes of the problem decays with the existence of irrelevant and redundant features, a dimensionality reduction process is essential. We propose the application of a gene selection process, which also enables the biology researcher to focus on promising gene candidates that actively contribute to classification in these large-scale microarrays. Two basic approaches for feature selection appear in the machine learning and pattern recognition literature: the filter and wrapper techniques. Filter procedures are used in most of the works in the area of DNA microarrays. In this work, a comparison between a group of different filter metrics and a wrapper sequential search procedure is carried out. The comparison is performed on two well-known DNA microarray datasets using four classic supervised classifiers. The study is carried out over the original continuous gene expression data and a three-interval discretization of it. While two well-known filter metrics are proposed for continuous data, four classic filter measures are used over discretized data. The same wrapper approach is used for both continuous and discretized data. The application of filter and wrapper gene selection procedures leads to considerably better accuracy results in comparison to the non-gene-selection approach, coupled with notable dimensionality reductions. Although the wrapper approach is generally more accurate than the filter metrics, this improvement comes at a considerable computational cost.
We note that most of the genes selected by the proposed filter and wrapper procedures in discrete and continuous microarray data appear in the lists of relevant-informative genes detected by previous studies over these datasets. The aim of this work is to contribute to the field of gene selection in DNA microarray datasets and, by an extensive comparison with the more popular filter techniques, to the expansion and study of the wrapper approach in this type of domain. <s> BIB020 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Background: Due to the high cost and low reproducibility of many microarray experiments, it is not surprising to find a limited number of patient samples in each study, and very few common identified marker genes among different studies involving patients with the same disease. Therefore, it is of great interest and challenge to merge data sets from multiple studies to increase the sample size, which may in turn increase the power of statistical inferences. In this study, we combined two lung cancer studies using microarray GeneChip®, employed two gene shaving methods and a two-step survival test to identify genes with expression patterns that can distinguish diseased from normal samples, and to indicate patient survival, respectively.
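A wrapper with sequential forward search, as compared against filter metrics above, can be sketched with any base classifier. Here a nearest-centroid classifier scored by leave-one-out accuracy stands in for the classifiers used in such studies; the function names and the simple stopping rule are illustrative choices, not the published procedure.

```python
import statistics

def nearest_centroid_loo(data, labels, genes):
    """Leave-one-out accuracy of a nearest-centroid classifier restricted
    to the gene subset `genes`. data: genes x samples."""
    n = len(labels)
    correct = 0
    for left_out in range(n):
        cents = {}
        for c in set(labels):
            idx = [i for i in range(n) if i != left_out and labels[i] == c]
            cents[c] = [statistics.mean(data[g][i] for i in idx) for g in genes]
        x = [data[g][left_out] for g in genes]
        pred = min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))
        correct += pred == labels[left_out]
    return correct / n

def forward_select(data, labels, k):
    """Greedy wrapper: repeatedly add the gene that most improves the
    leave-one-out accuracy, stopping early if no candidate helps."""
    selected, best_acc = [], 0.0
    while len(selected) < k:
        scored = [(nearest_centroid_loo(data, labels, selected + [g]), g)
                  for g in range(len(data)) if g not in selected]
        acc, g = max(scored)
        if acc < best_acc:
            break
        selected.append(g)
        best_acc = acc
    return selected, best_acc
```

Every candidate gene triggers a full cross-validated retraining, which is exactly the "considerable computational cost" of wrappers that the comparison above reports.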
In this paper, we address the problem of selection of a small subset of genes from broad patterns of gene expression data, recorded on DNA micro-arrays. Using available training examples from cancer and normal patients, we build a classifier suitable for genetic diagnosis, as well as drug discovery. Previous attempts to address this problem select genes with correlation techniques. We propose a new method of gene selection utilizing Support Vector Machine methods based on Recursive Feature Elimination (RFE). We demonstrate experimentally that the genes selected by our techniques yield better classification performance and are biologically relevant to cancer. In contrast with the baseline method, our method eliminates gene redundancy automatically and yields better and more compact gene subsets. In patients with leukemia our method discovered 2 genes that yield zero leave-one-out error, while 64 genes are necessary for the baseline method to get the best result (one leave-one-out error). In the colon cancer database, using only 4 genes our method is 98% accurate, while the baseline method is only 86% accurate. <s> BIB022 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Background: Determining whether a gene is differentially expressed in two different samples remains an important statistical problem. Prior work in this area has featured the use of t-tests with pooled estimates of the sample variance based on similarly expressed genes. These methods do not display consistent behavior across the entire range of pooling and can be biased when the prior hyperparameters are specified heuristically. Results: A two-sample Bayesian t-test is proposed for use in determining whether a gene is differentially expressed in two different samples. The test method is an extension of earlier work that made use of point estimates for the variance.
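The RFE loop behind SVM-RFE is easy to sketch. Below, a tiny logistic-regression trainer stands in for the linear SVM (an assumption made to keep the example dependency-free); the elimination logic is the same: train, drop the feature with the smallest weight magnitude, retrain on the survivors, repeat.

```python
import math

def train_linear(rows, labels, epochs=200, lr=0.1):
    """Tiny logistic-regression trainer (SGD), standing in for the linear
    SVM of SVM-RFE. rows: samples x features; labels: 0/1. Returns weights."""
    w, b = [0.0] * len(rows[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = 1 / (1 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x)))))
            g = p - y
            b -= lr * g
            for i, xi in enumerate(x):
                w[i] -= lr * g * xi
    return w

def rfe(rows, labels, keep):
    """Recursive feature elimination: retrain on the surviving features and
    drop the one with the smallest |weight| until `keep` features remain."""
    active = list(range(len(rows[0])))
    while len(active) > keep:
        sub = [[row[i] for i in active] for row in rows]
        w = train_linear(sub, labels)
        active.pop(min(range(len(active)), key=lambda j: abs(w[j])))
    return active
```

Retraining after each removal is what lets the multivariate weights expose redundancy, in contrast to one-shot correlation rankings.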
The method proposed here explicitly calculates in analytic form the marginal distribution for the difference in the mean expression of two samples, obviating the need for point estimates of the variance without recourse to posterior simulation. The prior distribution involves a single hyperparameter that can be calculated in a statistically rigorous manner, making clear the connection between the prior degrees of freedom and prior variance. Conclusion: The test is easy to understand and implement, and application to both real and simulated data shows that the method has equal or greater power compared to the previous method and demonstrates consistent Type I error rates. The test is generally applicable outside the microarray field to any situation where prior information about the variance is available and is not limited to cases where estimates of the variance are based on many similar observations. <s> BIB023 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Gene expression microarray is a rapidly maturing technology that provides the opportunity to assay the expression levels of thousands or tens of thousands of genes in a single experiment. We present a new heuristic to select relevant gene subsets in order to further use them for the classification task. Our method is based on the statistical significance of adding a gene from a ranked list to the final subset. The efficiency and effectiveness of our technique is demonstrated through extensive comparisons with other representative heuristics. Our approach shows an excellent performance, not only at identifying relevant genes, but also with respect to the computational cost. <s> BIB024 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Motivation: An important application of microarrays is to discover genomic biomarkers, among tens of thousands of genes assayed, for disease classification.
Thus there is a need for developing statistical methods that can efficiently use such high-throughput genomic data, select biomarkers with discriminant power and construct classification rules. The ROC (receiver operating characteristic) technique has been widely used in disease classification with low-dimensional biomarkers because (1) it does not assume a parametric form of the class probability as required for example in the logistic regression method; (2) it accommodates case-control designs and (3) it allows treating false positives and false negatives differently. However, due to computational difficulties, the ROC-based classification has not been used with microarray data. Moreover, the standard ROC technique does not incorporate built-in biomarker selection. Results: We propose a novel method for biomarker selection and classification using the ROC technique for microarray data. The proposed method uses a sigmoid approximation to the area under the ROC curve as the objective function for classification and the threshold gradient descent regularization method for estimation and biomarker selection. Tuning parameter selection based on the V-fold cross validation and predictive performance evaluation are also investigated. The proposed approach is demonstrated with a simulation study, the Colon data and the Estrogen data. The proposed approach yields parsimonious models with excellent classification performance. Availability: R code is available upon request. Contact: [email protected] <s> BIB025 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> High-throughput gene expression technologies such as microarrays have been utilized in a variety of scientific applications.
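Although the ROC-based approach above embeds the AUC in the classifier's objective, the per-gene quantity underneath is just the Mann-Whitney probability that a case sample scores higher than a control sample, which makes a simple AUC-based gene filter easy to sketch. This marginal, gene-by-gene ranking is only an illustration of the ROC idea, not the sigmoid-approximation method of the abstract.

```python
def auc(cases, controls):
    """Area under the ROC curve for one gene's expression values, via the
    Mann-Whitney relation: fraction of (case, control) pairs where the case
    is higher (ties count one half)."""
    n_pairs = len(cases) * len(controls)
    wins = sum((c > d) + 0.5 * (c == d) for c in cases for d in controls)
    return wins / n_pairs

def rank_genes_by_auc(data, labels, top):
    """Rank genes by |AUC - 0.5|, so genes discriminative in either
    direction score highly. data: genes x samples; labels: 0/1."""
    scores = []
    for g, gene in enumerate(data):
        cases = [v for v, l in zip(gene, labels) if l == 1]
        controls = [v for v, l in zip(gene, labels) if l == 0]
        scores.append((abs(auc(cases, controls) - 0.5), g))
    return [g for _, g in sorted(scores, reverse=True)[:top]]
```

Like the abstract notes for the ROC technique generally, this requires no parametric model of the class probability.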
Most of the work has been done on assessing univariate associations between gene expression profiles and clinical outcome (variable selection) or on developing classification procedures with gene expression data (supervised learning). We consider a hybrid variable selection/classification approach that is based on linear combinations of the gene expression profiles that maximize an accuracy measure summarized using the receiver operating characteristic curve. Under a specific probability model, this leads to the consideration of linear discriminant functions. We incorporate an automated variable selection approach using LASSO. An equivalence between LASSO estimation and support vector machines allows for model fitting using standard software. We apply the proposed method to simulated data as well as data from a recently published prostate cancer study. <s> BIB026 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Background: Selection of relevant genes for sample classification is a common task in most gene expression studies, where researchers try to identify the smallest possible set of genes that can still achieve good predictive performance (for instance, for future use with diagnostic purposes in clinical practice). Many gene selection approaches use univariate (gene-by-gene) rankings of gene relevance and arbitrary thresholds to select the number of genes, can only be applied to two-class problems, and use gene selection ranking criteria unrelated to the classification algorithm. In contrast, random forest is a classification algorithm well suited for microarray data: it shows excellent performance even when most predictive variables are noise, can be used when the number of variables is much larger than the number of observations and in problems involving more than two classes, and returns measures of variable importance.
Thus, it is important to understand the performance of random forest with microarray data and its possible use for gene selection.ResultsWe investigate the use of random forest for classification of microarray data (including multi-class problems) and propose a new method of gene selection in classification problems based on random forest. Using simulated and nine microarray data sets we show that random forest has comparable performance to other classification methods, including DLDA, KNN, and SVM, and that the new gene selection procedure yields very small sets of genes (often smaller than alternative methods) while preserving predictive accuracy.ConclusionBecause of its performance and features, random forest and gene selection using random forest should probably become part of the "standard tool-box" of methods for class prediction and gene selection with microarray data. <s> BIB027 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> BackgroundThe analysis of large-scale gene expression data is a fundamental approach to functional genomics and the identification of potential drug targets. Results derived from such studies cannot be trusted unless they are adequately designed and reported. The purpose of this study is to assess current practices on the reporting of experimental design and statistical analyses in gene expression-based studies.MethodsWe reviewed hundreds of MEDLINE-indexed papers involving gene expression data analysis, which were published between 2003 and 2005. These papers were examined on the basis of their reporting of several factors, such as sample size, statistical power and software availability.ResultsAmong the examined papers, we concentrated on 293 papers consisting of applications and new methodologies. These papers did not report approaches to sample size and statistical power estimation. 
Explicit statements on data transformation and descriptions of the normalisation techniques applied prior to data analyses (e.g. classification) were not reported in 57 (37.5%) and 104 (68.4%) of the methodology papers respectively. With regard to papers presenting biomedical-relevant applications, 41(29.1 %) of these papers did not report on data normalisation and 83 (58.9%) did not describe the normalisation technique applied. Clustering-based analysis, the t-test and ANOVA represent the most widely applied techniques in microarray data analysis. But remarkably, only 5 (3.5%) of the application papers included statements or references to assumption about variance homogeneity for the application of the t-test and ANOVA. There is still a need to promote the reporting of software packages applied or their availability.ConclusionRecently-published gene expression data analysis studies may lack key information required for properly assessing their design quality and potential impact. There is a need for more rigorous reporting of important experimental factors such as statistical power and sample size, as well as the correct description and justification of statistical methods applied. This paper highlights the importance of defining a minimum set of information required for reporting on statistical design and analysis of expression data. By improving practices of statistical analysis reporting, the scientific community can facilitate quality assurance and peer-review processes, as well as the reproducibility of results. <s> BIB028 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Background ::: Identification of molecular markers for the classification of microarray data is a challenging task. Despite the evident dissimilarity in various characteristics of biological samples belonging to the same category, most of the marker – selection and classification methods do not consider this variability. 
In general, feature selection methods aim at identifying a common set of genes whose combined expression profiles can accurately predict the category of all samples. Here, we argue that this simplified approach is often unable to capture the complexity of a disease phenotype and we propose an alternative method that takes into account the individuality of each patient-sample. <s> BIB029 </s> A review of feature selection techniques in bioinformatics <s> Feature selection for microarray analysis <s> Motivation: The false discovery rate (fdr) is a key tool for statistical assessment of differential expression (DE) in microarray studies. Overall control of the fdr alone, however, is not sufficient to address the problem of genes with small variance, which generally suffer from a disproportionally high rate of false positives. It is desirable to have an fdr-controlling procedure that automatically accounts for gene variability. ::: ::: Methods: We generalize the local fdr as a function of multiple statistics, combining a common test statistic for assessing DE with its standard error information. We use a non-parametric mixture model for DE and non-DE genes to describe the observed multi-dimensional statistics, and estimate the distribution for non-DE genes via the permutation method. We demonstrate this fdr2d approach for simulated and real microarray data. ::: ::: Results: The fdr2d allows objective assessment of DE as a function of gene variability. We also show that the fdr2d performs better than commonly used modified test statistics. ::: ::: Availability: An R-package OCplus containing functions for computing fdr2d() and other operating characteristics of microarray data is available at http://www.meb.ki.se/~yudpaw ::: ::: Contact: [email protected] <s> BIB030
During the last decade, the advent of microarray datasets stimulated a new line of research in bioinformatics. Microarray data pose a great challenge for computational techniques, because of their large dimensionality (up to several tens of thousands of genes) and their small sample sizes BIB013 . Furthermore, additional experimental complications like noise and variability render the analysis of microarray data an exciting domain. In order to deal with these particular characteristics of microarray data, the obvious need for dimension reduction techniques was realized BIB001 BIB003 BIB002 BIB004 , and soon their application became a de facto standard in the field. Whereas in 2001 the field of microarray analysis was still claimed to be in its infancy BIB005 , a considerable and valuable effort has since been made to contribute new and adapt known FS methodologies BIB028 . A general overview of the most influential techniques, organized according to the general FS taxonomy of Section 2, is shown in Table 2 . Univariate filter techniques have by far attracted the most attention in this domain BIB010 BIB014 BIB015 BIB016 . This domination of the univariate approach can be explained by a number of reasons:
• the output provided by univariate feature rankings is intuitive and easy to understand;
• the gene ranking output could fulfill the objectives and expectations that bio-domain experts have when wanting to subsequently validate the result by laboratory techniques or in order to explore literature searches. The experts might not feel the need for selection techniques that take into account gene interactions;
• the possible unawareness of subgroups of gene expression domain experts about the existence of data analysis techniques to select genes in a multivariate way;
• the extra computation time needed by multivariate gene selection techniques.
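In its simplest form, the univariate filter paradigm can be sketched in a few lines of Python. The example below is a minimal illustration on synthetic data (the 40 x 1000 matrix, the planted effect in gene 0, and the choice of a Welch-style t-statistic are assumptions made purely for the sketch): each gene is scored independently of all others, and the top-ranked genes form the selected subset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression matrix: 40 samples x 1000 genes, two phenotype groups.
X = rng.normal(size=(40, 1000))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, 0] += 2.0  # plant a strong group difference in gene 0 only

# Univariate filter: score every gene independently with a Welch t-statistic.
a, b = X[y == 0], X[y == 1]
t = (b.mean(0) - a.mean(0)) / np.sqrt(a.var(0, ddof=1) / len(a)
                                      + b.var(0, ddof=1) / len(b))

# Rank genes by |t| and keep the top k as the selected subset.
k = 10
selected = np.argsort(-np.abs(t))[:k]
print(selected[0])  # gene 0 is expected to rank at or near the top
```

Because each score ignores all other genes, such a ranking is cheap and interpretable, but it cannot capture the gene interactions mentioned above.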
Some of the simplest heuristics for the identification of differentially expressed genes include setting a threshold on the observed fold-change differences in gene expression between the states under study, and the detection of the threshold point in each gene that minimizes the number of training sample misclassifications (threshold number of misclassification, TNoM BIB003 ). However, a wide range of new or adapted univariate feature ranking techniques has since been developed. These techniques can be divided into two classes: parametric and model-free methods (see Table 2 ). Parametric methods assume a given distribution from which the samples (observations) have been generated. The two-sample t-test and ANOVA are among the most widely used techniques in microarray studies, although the usage of their basic form, possibly without justification of their main assumptions, is not advisable BIB028 . Modifications of the standard t-test to better deal with the small sample size and inherent noise of gene expression datasets include a number of t- or t-test-like statistics (differing primarily in the way the variance is estimated) and a number of Bayesian frameworks BIB006 BIB023 . Although Gaussian assumptions have dominated the field, other types of parametric approaches can also be found in the literature, such as regression modelling approaches and Gamma distribution models BIB007 . Due to the uncertainty about the true underlying distribution of many gene expression scenarios, and the difficulties in validating distributional assumptions because of small sample sizes, non-parametric or model-free methods have been widely proposed as an attractive alternative to make less stringent distributional assumptions BIB011 .
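One widely used model-free recipe is the permutation test. The sketch below is a minimal illustration on synthetic single-gene data (the group sizes, the 1.5-unit shift, the mean-difference statistic, and the permutation count are all arbitrary choices made for the example): the reference distribution of the statistic is built by randomly reshuffling the phenotype labels, so no parametric form is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Intensity values of a single gene in two phenotype groups (synthetic).
values = np.concatenate([rng.normal(0.0, 1.0, 15),   # group 0
                         rng.normal(1.5, 1.0, 15)])  # group 1
labels = np.array([0] * 15 + [1] * 15)

def mean_diff(vals, labs):
    return abs(vals[labs == 1].mean() - vals[labs == 0].mean())

observed = mean_diff(values, labels)

# Model-free reference distribution: recompute the statistic under
# random permutations of the phenotype labels.
n_perm = 2000
exceed = sum(mean_diff(values, rng.permutation(labels)) >= observed
             for _ in range(n_perm))

# Empirical p-value, with a +1 correction so it can never be exactly zero.
p_perm = (exceed + 1) / (n_perm + 1)
print(p_perm < 0.05)  # the planted shift should be detected
```

Since the null distribution comes from the data themselves, the procedure remains valid regardless of the (unknown) underlying distribution, at the cost of extra computation.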
Many model-free metrics, frequently borrowed from the statistics field, have demonstrated their usefulness in many gene expression studies, including the Wilcoxon rank-sum test, the between-within classes sum of squares (BSS/WSS) BIB010 and the rank products method BIB017 . A specific class of model-free methods estimates the reference distribution of the statistic using random permutations of the data, allowing the computation of a model-free version of the associated parametric tests. These techniques have emerged as a solid alternative to deal with the specificities of DNA microarray data, and do not depend on strong parametric assumptions BIB005 BIB008 . Their permutation principle partly alleviates the problem of small sample sizes in microarray studies, enhancing the robustness against outliers. We also mention promising types of non-parametric metrics which, instead of trying to identify differentially expressed genes at the whole population level (e.g. comparison of sample means), are able to capture genes which are significantly dysregulated in only a subset of samples BIB018 BIB029 . These types of methods offer a more patient-specific approach for the identification of markers, and can select genes exhibiting complex patterns that are missed by metrics that work under the classical comparison of two prelabeled phenotypic groups. In addition, we also point out the importance of procedures for controlling the different types of errors that arise in this complex multiple testing scenario of thousands of genes BIB030 BIB019 BIB012 , with a special focus on contributions for controlling the false discovery rate (FDR). Wrapper methods, which combine a search procedure in the space of gene subsets with a classifier, have also found their way into the microarray domain BIB020 BIB009 . An interesting hybrid filter-wrapper approach is introduced in BIB024 , crossing a univariately pre-ordered gene ranking with an incrementally augmenting wrapper method. Another characteristic of any wrapper procedure concerns the scoring function used to evaluate each gene subset found.
As the 0-1 accuracy measure allows for comparison with previous works, the vast majority of papers use this measure. However, recent proposals advocate the use of methods for the approximation of the area under the ROC curve BIB025 , or the optimization of the LASSO (Least Absolute Shrinkage and Selection Operator) model BIB026 . ROC curves certainly provide an interesting evaluation measure, especially suited to the demand for screening different types of errors in many biomedical scenarios. The embedded capacity of several classifiers to discard input features, and thus propose a subset of discriminative genes, has been exploited by several authors. Examples include the use of random forests (a classifier that combines many single decision trees) in an embedded way to calculate the importance of each gene BIB027 BIB021 . Another line of embedded FS techniques uses the weights of each feature in linear classifiers such as SVMs BIB022 and logistic regression BIB025 . These weights are used to reflect the relevance of each gene in a multivariate way, and thus allow for the removal of genes with very small weights. Partially due to the higher computational complexity of wrapper and, to a lesser degree, embedded approaches, these techniques have not received as much interest as filter proposals. However, an advisable practice is to pre-reduce the search space using a univariate filter method, and only then apply wrapper or embedded methods, hence fitting the computation time to the available resources.
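The weight-based embedded idea admits a compact self-contained sketch. In the example below, a hand-rolled L2-regularized logistic regression trained by gradient descent stands in for the SVM and penalized models cited above, and the data, the two planted informative genes, and all hyperparameters are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 60 samples x 50 genes; only genes 0 and 1 carry signal.
n_samples, n_genes = 60, 50
X = rng.normal(size=(n_samples, n_genes))
w_true = np.zeros(n_genes)
w_true[0], w_true[1] = 3.0, -3.0
y = (X @ w_true + rng.normal(scale=0.5, size=n_samples) > 0).astype(float)

# Embedded selection: fit an L2-regularized logistic regression by gradient
# descent, then rank genes by the magnitude of their learned weights.
w = np.zeros(n_genes)
lr, lam = 0.1, 0.01
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))          # predicted probabilities
    grad = X.T @ (p - y) / n_samples + lam * w  # mean log-loss gradient + L2
    w -= lr * grad

ranking = np.argsort(-np.abs(w))
print(ranking[:2])  # the two planted genes should dominate the ranking
```

Unlike a univariate filter, the weights here are fitted jointly, so the resulting ranking is multivariate; genes with near-zero weights can be discarded, and repeating the fit after each removal yields an RFE-style procedure.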
A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> In this study, a new high-order bioinformatics tool used to identify differences in proteomic patterns in serum was evaluated for its ability to detect the presence of cancer in the ovary. The proteomic pattern is generated using matrix-assisted laser desorption and ionization time-of-flight and surface-enhanced laser desorption and ionization time-of-flight mass spectroscopy from thousands of low-molecular-weight serum proteins. Proteomic spectra patterns were generated from 50 women with and 50 women without ovarian cancer and analyzed on the Protein Biology System 2 SELDI-TOF mass spectrometer (Ciphergen Biosystems, Freemont, CA) to find a pattern unique to ovarian cancer. In the graph of the analysis, each proteomic spectrum is comprised of 15,200 mass/charge (m/z) values located along the x axis with corresponding amplitude values along the y axis. By comparing the proteomic spectra derived from the serum of patients with known ovarian cancer to that of disease-free patients, a profile of ovarian cancer was identified in the peak amplitude values along the horizontal axis. The comparison was conducted using repetitive analysis of ever smaller subsets until discriminatory values from five protein peaks were isolated. The validity of this pattern was tested using an additional 116 masked serum samples from 50 women known to have ovarian cancer and 66 nonaffected women. All of the subjects with cancer and most of the women with no cancer were from the National Ovarian Cancer Early Detection Program at Northwestern University. The nonaffected women had been diagnosed with a variety of benign gynecologic conditions after evaluation for possible ovarian cancer and were considered to be a high-risk population. Serum samples were collected before examination, diagnosis, or treatment and frozen in liquid nitrogen. 
The samples were thawed and added to a C16 hydrophobic interaction protein chip for analysis. In the validation set, 63 of the 66 women with benign ovarian conditions were correctly identified in the spectra analysis. All 50 patients with a diagnosis of ovarian cancer were correctly identified in the analysis, including 18 women with stage I disease. Thus, the ability of proteomic patterns to detect the presence of ovarian cancer had a sensitivity of 100%, a specificity of 95%, and a positive predictive value of 94%. In comparison, the positive predictive value for serum cancer antigen 125 in the set of patients was 35%. Additionally, no matching patterns were seen in serum samples from 266 men with benign and malignant prostate disease. <s> BIB001 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> Motivation: MALDI mass spectrometry is able to elicit macromolecular expression data from cellular material and when used in conjunction with Ciphergen protein chip technology (also referred to as SELDI-Surface Enhanced Laser Desorption/Ionization), it permits a semi-high throughput approach to be taken with respect to sample processing and data acquisition. Due to the large array of data that is generated from a single analysis (8-10 000 variables using a mass range of 2-15 kDa-this paper) it is essential to implement the use of algorithms that can detect expression patterns from such large volumes of data correlating to a given biological/pathological phenotype from multiple samples. If successful, the methodology could be extrapolated to larger data sets to enable the identification of validated biomarkers correlating strongly to disease progression.
This would not only serve to enable tumours to be classified according to their molecular expression profile but could also focus attention upon a relatively small number of molecules that might warrant further biochemical/molecular characterization to assess their suitability as potential therapeutic targets. Results: Using a multi-layer perceptron Artificial Neural Network (ANN) (Neuroshell 2) with a back propagation algorithm we have developed a prototype approach that uses a model system (comprising five low and seven high-grade human astrocytomas) to identify mass spectral peaks whose relative intensity values correlate strongly to tumour grade. Analyzing data derived from MALDI mass spectrometry in conjunction with Ciphergen protein chip technology we have used relative importance values, determined from the weights of trained ANNs (Balls et al., Water, Air Soil Pollut., 85, 1467-1472, 1996), to identify masses that accurately predict tumour grade. Implementing a three-stage procedure, we have screened a population of approximately 100000-120000 variables and identified two ions (m/z values of 13454 and 13457) whose relative intensity pattern was significantly reduced in high-grade astrocytoma. The data from this initial study suggests that application of ANN-based approaches can identify molecular ion patterns which strongly associate with disease grade and that its application to larger cohorts of patient material could potentially facilitate the rapid identification of validated biomarkers having significant clinical (i.e. diagnostic/prognostic) potential for the field of cancer biology. <s> BIB002 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> Feature selection plays an important role in classification. We present a comparative study on six feature selection heuristics by applying them to two sets of data. The first set of data are gene expression profiles from Acute Lymphoblastic Leukemia (ALL) patients. 
The second set of data are proteomic patterns from ovarian cancer patients. Based on features chosen by these methods, error rates of several classification algorithms were obtained for analysis. Our results demonstrate the importance of feature selection in accurately classifying new samples. <s> BIB003 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> Motivation: Novel methods, both molecular and statistical, are urgently needed to take advantage of recent advances in biotechnology and the human genome project for disease diagnosis and prognosis. Mass spectrometry (MS) holds great promise for biomarker identification and genome-wide protein profiling. It has been demonstrated in the literature that biomarkers can be identified to distinguish normal individuals from cancer patients using MS data. Such progress is especially exciting for the detection of early-stage ovarian cancer patients. Although various statistical methods have been utilized to identify biomarkers from MS data, there has been no systematic comparison among these approaches in their relative ability to analyze MS data. Results: We compare the performance of several classes of statistical methods for the classification of cancer based on MS spectra. These methods include: linear discriminant analysis, quadratic discriminant analysis, k -nearest neighbor classifier, bagging and boosting classification trees, support vector machine, and random forest (RF). The methods are applied to ovarian cancer and control serum samples from the National Ovarian Cancer Early Detection Program clinic at Northwestern University Hospital. We found that RF outperforms other methods in the analysis of MS data. 
<s> BIB004 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> Motivation: Two practical realities constrain the analysis of microarray data, mass spectra from proteomics, and biomedical infrared or magnetic resonance spectra. One is the 'curse of dimensionality': the number of features characterizing these data is in the thousands or tens of thousands. The other is the 'curse of dataset sparsity': the number of samples is limited. The consequences of these two curses are far-reaching when such data are used to classify the presence or absence of disease. Results: Using very simple classifiers, we show for several publicly available microarray and proteomics datasets how these curses influence classification outcomes. In particular, even if the sample per feature ratio is increased to the recommended 5–10 by feature extraction/reduction methods, dataset sparsity can render any classification result statistically suspect. In addition, several 'optimal' feature sets are typically identifiable for sparse datasets, all producing perfect classification results, both for the training and independent validation sets. This non-uniqueness leads to interpretational difficulties and casts doubt on the biological relevance of any of these 'optimal' feature sets. We suggest an approach to assess the relative quality of apparently equally good classifiers. <s> BIB005 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> This work introduces novel methods for feature selection (FS) based on support vector machines (SVM). The methods combine feature subsets produced by a variant of SVM-RFE, a popular feature ranking/selection algorithm based on SVM. Two combination strategies are proposed: union of features occurring frequently, and ensemble of classifiers built on single feature subsets. The resulting methods are applied to pattern proteomic data for tumor diagnostics.
Results of experiments on three proteomic pattern datasets indicate that combining feature subsets affects positively the prediction accuracy of both SVM and SVM-RFE. A discussion about the biological interpretation of selected features is provided. <s> BIB006 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> In this paper we try to identify potential biomarkers for early stroke diagnosis using surface-enhanced laser desorption/ionization mass spectrometry coupled with analysis tools from machine learning and data mining. Data consist of 42 specimen samples, i.e., mass spectra divided in two big categories, stroke and control specimens. Among the stroke specimens two further categories exist that correspond to ischemic and hemorrhagic stroke; in this paper we limit our data analysis to discriminating between control and stroke specimens. We performed two suites of experiments. In the first one we simply applied a number of different machine learning algorithms; in the second one we have chosen the best performing algorithm as it was determined from the first phase and coupled it with a number of different feature selection methods. The reason for this was 2-fold, first to establish whether feature selection can indeed improve performance, which in our case it did not seem to confirm, but more importantly to acquire a small list of potentially interesting biomarkers. Of the different methods explored the most promising one was support vector machines which gave us high levels of sensitivity and specificity. Finally, by analyzing the models constructed by support vector machines we produced a small set of 13 features that could be used as potential biomarkers, and which exhibited good performance both in terms of sensitivity, specificity and model stability. 
<s> BIB007 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> Motivation: Early cancer detection has always been a major research focus in solid tumor oncology. Early tumor detection can theoretically result in lower stage tumors, more treatable diseases and ultimately higher cure rates with less treatment-related morbidities. Protein mass spectrometry is a potentially powerful tool for early cancer detection. ::: ::: We propose a novel method for sample classification from protein mass spectrometry data. When applied to spectra from both diseased and healthy patients, the 'peak probability contrast' technique provides a list of all common peaks among the spectra, their statistical significance and their relative importance in discriminating between the two groups. We illustrate the method on matrix-assisted laser desorption and ionization mass spectrometry data from a study of ovarian cancers. ::: ::: Results: Compared to other statistical approaches for class prediction, the peak probability contrast method performs as well or better than several methods that require the full spectra, rather than just labelled peaks. It is also much more interpretable biologically. The peak probability contrast method is a potentially useful tool for sample classification from protein mass spectrometry data. ::: ::: Supplementary Information: http://www.stat.stanford.edu/~tibs/ppc <s> BIB008 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> Motivation: High-throughput and high-resolution mass spectrometry instruments are increasingly used for disease classification and therapeutic guidance. However, the analysis of immense amount of data poses considerable challenges. We have therefore developed a novel method for dimensionality reduction and tested on a published ovarian high-resolution SELDI-TOF dataset. 
::: ::: Results: We have developed a four-step strategy for data preprocessing based on: (1) binning, (2) Kolmogorov–Smirnov test, (3) restriction of coefficient of variation and (4) wavelet analysis. Subsequently, support vector machines were used for classification. The developed method achieves an average sensitivity of 97.38% (sd = 0.0125) and an average specificity of 93.30% (sd = 0.0174) in 1000 independent k-fold cross-validations, where k = 2, ..., 10. ::: ::: Availability: The software is available for academic and non-commercial institutions. ::: ::: Contact: [email protected] <s> BIB009 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> DNA micro-arrays now permit scientists to screen thousands of genes simultaneously and determine whether those genes are active, hyperactive or silent in normal or cancerous tissue. Because these new micro-array devices generate bewildering amounts of raw data, new analytical methods must be developed to sort out whether cancer tissues have distinctive signatures of gene expression over normal tissues or other types of cancer tissues. ::: ::: In this paper, we address the problem of selection of a small subset of genes from broad patterns of gene expression data, recorded on DNA micro-arrays. Using available training examples from cancer and normal patients, we build a classifier suitable for genetic diagnosis, as well as drug discovery. Previous attempts to address this problem select genes with correlation techniques. We propose a new method of gene selection utilizing Support Vector Machine methods based on Recursive Feature Elimination (RFE). We demonstrate experimentally that the genes selected by our techniques yield better classification performance and are biologically relevant to cancer. ::: ::: In contrast with the baseline method, our method eliminates gene redundancy automatically and yields better and more compact gene subsets.
In patients with leukemia our method discovered 2 genes that yield zero leave-one-out error, while 64 genes are necessary for the baseline method to get the best result (one leave-one-out error). In the colon cancer database, using only 4 genes our method is 98% accurate, while the baseline method is only 86% accurate. <s> BIB010 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> Motivation: Mass spectrometric profiles of peptides and proteins obtained by current technologies are characterized by complex spectra, high dimensionality and substantial noise. These characteristics generate challenges in the discovery of proteins and protein-profiles that distinguish disease states, e.g. cancer patients from healthy individuals. We present low-level methods for the processing of mass spectral data and a machine learning method that combines support vector machines, with particle swarm optimization for biomarker selection. ::: ::: Results: The proposed method identified mass points that achieved high prediction accuracy in distinguishing liver cancer patients from healthy individuals in SELDI-QqTOF profiles of serum. ::: ::: Availability: MATLAB scripts to implement the methods described in this paper are available from the HWR's lab website http://lombardi.georgetown.edu/labpage ::: ::: Contact: [email protected] <s> BIB011 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> Motivation: Modern mass spectrometry allows the determination of proteomic fingerprints of body fluids like serum, saliva or urine. These measurements can be used in many medical applications in order to diagnose the current state or predict the evolution of a disease. Recent developments in machine learning allow one to exploit such datasets, characterized by small numbers of very high-dimensional samples. 
::: ::: Results: We propose a systematic approach based on decision tree ensemble methods, which is used to automatically determine proteomic biomarkers and predictive models. The approach is validated on two datasets of surface-enhanced laser desorption/ionization time of flight measurements, for the diagnosis of rheumatoid arthritis and inflammatory bowel diseases. The results suggest that the methodology can handle a broad class of similar problems. ::: ::: Supplementary information: Additional tables, appendicies and datasets may be found at http://www.montefiore.ulg.ac.be/~geurts/Papers/Proteomic-suppl.html ::: ::: Contact: [email protected] <s> BIB012 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> Among the many applications of mass spectrometry, biomarker pattern discovery from protein mass spectra has aroused considerable interest in the past few years. While research efforts have raised hopes of early and less invasive diagnosis, they have also brought to light the many issues to be tackled before mass-spectra-based proteomic patterns become routine clinical tools. Known issues cover the entire pipeline leading from sample collection through mass spectrometry analytics to biomarker pattern extraction, validation, and interpretation. This study focuses on the data-analytical phase, which takes as input mass spectra of biological specimens and discovers patterns of peak masses and intensities that discriminate between different pathological states. We survey current work and investigate computational issues concerning the different stages of the knowledge discovery process: exploratory analysis, quality control, and diverse transforms of mass spectra, followed by further dimensionality reduction, classification, and model evaluation. 
We conclude after a brief discussion of the critical biomedical task of analyzing discovered discriminatory patterns to identify their component proteins as well as interpret and validate their biological implications. <s> BIB013 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> Currently, the best way to reduce the mortality of cancer is to detect and treat it in the earliest stages. Technological advances in genomics and proteomics have opened a new realm of methods for early detection that show potential to overcome the drawbacks of current strategies. In particular, pattern analysis of mass spectra of blood samples has attracted attention as an approach to early detection of cancer. Mass spectrometry provides rapid and precise measurements of the sizes and relative abundances of the proteins present in a complex biological/chemical mixture. This article presents a review of the development of clinical decision support systems using mass spectrometry from a machine learning perspective. The literature is reviewed in an explicit machine learning framework, the components of which are preprocessing, feature extraction, feature selection, classifier training, and evaluation. <s> BIB014 </s> A review of feature selection techniques in bioinformatics <s> Mass spectra analysis <s> We propose a novel method for phenotype identification involving a stringent noise analysis and filtering procedure followed by combining the results of several machine learning tools to produce a robust predictor. We illustrate our method on SELDI-TOF MS prostate cancer data (http://home.ccr.cancer.gov/ncifdaproteomics/ppatterns.asp). Our method identified 11 proteomic biomarkers and gave significantly improved predictions over previous analyses with these data. We were able to distinguish cancer from non-cancer cases with a sensitivity of 90.31% and a specificity of 98.81%. 
The proposed method can be generalized to multi-phenotype prediction and other types of data (e.g., microarray data). <s> BIB015
Mass spectrometry technology (MS) is emerging as a new and attractive framework for disease diagnosis and protein-based biomarker profiling . A mass spectrum sample is characterized by thousands of different mass/charge (m/z) ratios on the x-axis, each with its corresponding signal intensity value on the y-axis. A typical MALDI-TOF low-resolution proteomic profile can contain up to 15,500 data points in the spectrum between 500 and 20,000 m/z, and the number of points grows even further on higher-resolution instruments. For data mining and bioinformatics purposes, it can initially be assumed that each m/z ratio represents a distinct variable whose value is the intensity. As Somorjai et al. BIB005 explain, the data analysis step is severely constrained by both high-dimensional input spaces and their inherent sparseness, just as is the case with gene expression datasets. Although the number of publications on mass spectrometry based data mining is not comparable to the level of maturity reached in the microarray analysis domain, an interesting collection of methods has been presented in the last 4-5 years (see BIB013 BIB014 for recent reviews) since the pioneering work of Petricoin et al. BIB001 . Starting from the raw data, and after an initial step to reduce noise and normalize the spectra from different samples , the following crucial step is to extract the variables that will constitute the initial pool of candidate discriminative features. Some studies employ the simplest approach of considering every measured value as a predictive feature, thus applying FS techniques over initial huge pools of about 15,000 variables BIB001 , up to around 100,000 variables BIB002 .
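The simplest approach just described, scoring every m/z ratio independently and keeping the best, can be sketched as below. Everything here (the synthetic spectra, the use of a Welch t-score, the function names) is an illustrative assumption, not code from any of the reviewed studies.

```python
import math
import random

def welch_t(xs, ys):
    """Welch t-statistic between two lists of intensity values."""
    def mean_var(v):
        m = sum(v) / len(v)
        return m, sum((x - m) ** 2 for x in v) / (len(v) - 1)
    mx, vx = mean_var(xs)
    my, vy = mean_var(ys)
    return (mx - my) / math.sqrt(vx / len(xs) + vy / len(ys))

def rank_features(cases, controls, top_k):
    """Indices of the top_k m/z features by |t|, best first."""
    scores = []
    for j in range(len(cases[0])):
        t = welch_t([s[j] for s in cases], [s[j] for s in controls])
        scores.append((abs(t), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:top_k]]

if __name__ == "__main__":
    rng = random.Random(0)
    # 20 synthetic "spectra" per class, 200 m/z features; feature 5 is shifted.
    controls = [[rng.gauss(0, 1) for _ in range(200)] for _ in range(20)]
    cases = [[rng.gauss(0, 1) + (3.0 if j == 5 else 0.0) for j in range(200)]
             for _ in range(20)]
    print(rank_features(cases, controls, 3))  # feature 5 should rank first
```

On real spectra the pool would hold thousands of such features, which is precisely why these univariate filters remain popular: the cost is linear in the number of m/z ratios.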
On the other hand, many current studies perform aggressive feature extraction procedures using elaborate peak detection and alignment techniques (see BIB013 BIB014 for a detailed description of these techniques). These procedures tend to reduce the dimensionality from which supervised FS techniques start their work to fewer than 500 variables BIB015 BIB008 . A feature extraction step is thus advisable to bring the computational cost of many FS techniques down to a feasible size in these MS scenarios. Table 3 presents an overview of FS techniques used in the domain of mass spectrometry. Similar to the domain of microarray analysis, univariate filter techniques seem to be the most commonly used, although the use of embedded techniques is certainly emerging as an alternative. Although the t-test maintains a high level of popularity BIB003 BIB004 , other parametric measures (such as the F-test BIB015 ) and a notable variety of non-parametric scores BIB008 BIB009 have also been used in several MS studies. Multivariate filter techniques, on the other hand, are still somewhat underrepresented BIB003 BIB007 . Wrapper approaches have demonstrated their usefulness in MS studies through a group of influential works. Different types of population-based randomized heuristics are used as search engines in most of these papers: genetic algorithms BIB001 , particle swarm optimization BIB011 and ant colony procedures . It is worth noting that while the first two references start the search procedure in ≈ 15,000 dimensions by considering each m/z ratio as an initial predictive feature, aggressive peak detection and alignment processes reduce the initial dimension to about 300 variables in the last two references BIB011 . An increasing number of papers use the embedded capacity of several classifiers to discard input features. Variations of the popular method originally proposed for gene expression domains by Guyon et al.
BIB010 , using the weights of the variables in the SVM formulation to discard features with small weights, have been broadly and successfully applied in the MS domain BIB006 BIB007 . Based on a similar framework, the weights of the input masses in a neural network classifier have been used to rank the features' importance in Ball et al. BIB002 . The embedded capacity of random forests BIB004 and other types of decision-tree-based algorithms BIB012 constitutes an alternative embedded FS strategy.
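As a rough sketch of this weight-driven embedded strategy, the recursive scheme below fits a linear model, removes the feature with the smallest absolute weight, and repeats until the desired subset size is reached. A plain logistic regression trained by gradient descent stands in here for the SVM of the original method; the data and every name are illustrative assumptions.

```python
import math
import random

def fit_linear(X, y, epochs=200, lr=0.1):
    """Logistic-regression weights via full-batch gradient descent."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))       # sigmoid
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def rfe(X, y, n_keep):
    """Iteratively drop the active feature with the smallest |weight|."""
    active = list(range(len(X[0])))
    while len(active) > n_keep:
        Xa = [[xi[j] for j in active] for xi in X]
        w = fit_linear(Xa, y)
        worst = min(range(len(active)), key=lambda k: abs(w[k]))
        active.pop(worst)
    return sorted(active)
```

Refitting after every elimination is what makes the procedure multivariate: a feature's weight is judged in the context of the features that remain.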
A review of feature selection techniques in bioinformatics <s> DEALING WITH SMALL SAMPLE DOMAINS <s> Motivation: Microarray classification typically possesses two striking attributes: (1) classifier design and error estimation are based on remarkably small samples and (2) cross-validation error estimation is employed in the majority of the papers. Thus, it is necessary to have a quantifiable understanding of the behavior of cross-validation in the context of very small samples. ::: ::: Results: An extensive simulation study has been performed comparing cross-validation, resubstitution and bootstrap estimation for three popular classification rules---linear discriminant analysis, 3-nearest-neighbor and decision trees (CART)---using both synthetic and real breast-cancer patient data. Comparison is via the distribution of differences between the estimated and true errors. Various statistics for the deviation distribution have been computed: mean (for estimator bias), variance (for estimator precision), root-mean square error (for composition of bias and variance) and quartile ranges, including outlier behavior. In general, while cross-validation error estimation is much less biased than resubstitution, it displays excessive variance, which makes individual estimates unreliable for small samples. Bootstrap methods provide improved performance relative to variance, but at a high computational cost and often with increased bias (albeit, much less than with resubstitution). ::: ::: Availability and Supplementary information: A companion web site can be accessed at the URL http://ee.tamu.edu/~edward/cv_paper. The companion web site contains: (1) the complete set of tables and plots regarding the simulation study; (2) additional figures; (3) a compilation of references for microarray classification studies and (4) the source code used, with full documentation and examples. 
<s> BIB001 </s> A review of feature selection techniques in bioinformatics <s> DEALING WITH SMALL SAMPLE DOMAINS <s> MOTIVATION ::: In genomic studies, thousands of features are collected on relatively few samples. One of the goals of these studies is to build classifiers to predict the outcome of future observations. There are three inherent steps to this process: feature selection, model selection and prediction assessment. With a focus on prediction assessment, we compare several methods for estimating the 'true' prediction error of a prediction model in the presence of feature selection. ::: ::: ::: RESULTS ::: For small studies where features are selected from thousands of candidates, the resubstitution and simple split-sample estimates are seriously biased. In these small samples, leave-one-out cross-validation (LOOCV), 10-fold cross-validation (CV) and the .632+ bootstrap have the smallest bias for diagonal discriminant analysis, nearest neighbor and classification trees. LOOCV and 10-fold CV have the smallest bias for linear discriminant analysis. Additionally, LOOCV, 5- and 10-fold CV, and the .632+ bootstrap have the lowest mean square error. The .632+ bootstrap is quite biased in small sample sizes with strong signal-to-noise ratios. Differences in performance among resampling methods are reduced as the number of specimens available increase. ::: ::: ::: SUPPLEMENTARY INFORMATION ::: A complete compilation of results and R code for simulations and analyses are available in Molinaro et al. (2005) (http://linus.nci.nih.gov/brb/TechReport.htm). <s> BIB002
Small sample sizes, and their inherent risk of imprecision and overfitting, pose a great challenge for many modelling problems in bioinformatics BIB001 BIB002 . In the context of feature selection, two initiatives have emerged in response to this novel experimental situation: the use of adequate evaluation criteria, and the use of stable and robust feature selection models.
A review of feature selection techniques in bioinformatics <s> Adequate evaluation criteria <s> In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing the expression data on very many (possibly thousands) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance because the rule is either tested on tissue samples that were used in the first instance to select the genes being used in the rule or because the cross-validation of the rule is not external to the selection process; that is, gene selection is not performed in training the rule at each stage of the cross-validation process. We describe how in practice the selection bias can be assessed and corrected for by either performing a cross-validation or applying the bootstrap external to the selection process. We recommend using 10-fold rather than leave-one-out cross-validation, and concerning the bootstrap, we suggest using the so-called .632+ bootstrap error estimate designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when correction is made for the selection bias, the cross-validated error is no longer zero for a subset of only a few genes. <s> BIB001 </s> A review of feature selection techniques in bioinformatics <s> Adequate evaluation criteria <s> Motivation: Two practical realities constrain the analysis of microarray data, mass spectra from proteomics, and biomedical infrared or magnetic resonance spectra.
One is the ‘curse of dimensionality’: the number of features characterizing these data is in the thousands or tens of thousands. The other is the ‘curse of dataset sparsity’: the number of samples is limited. The consequences of these two curses are far-reaching when such data are used to classify the presence or absence of disease. Results: Using very simple classifiers, we show for several publicly available microarray and proteomics datasets how these curses influence classification outcomes. In particular, even if the sample per feature ratio is increased to the recommended 5–10 by feature extraction/reduction methods, dataset sparsity can render any classification result statistically suspect. In addition, several ‘optimal’ feature sets are typically identifiable for sparse datasets, all producing perfect classification results, both for the training and independent validation sets. This non-uniqueness leads to interpretational difficulties and casts doubt on the biological relevance of any of these ‘optimal’ feature sets. We suggest an approach to assess the relative quality of apparently equally good classifiers. <s> BIB002 </s> A review of feature selection techniques in bioinformatics <s> Adequate evaluation criteria <s> Motivation: Cancer diagnosis is one of the most important emerging clinical applications of gene expression microarray technology. We are seeking to develop a computer system for powerful and reliable cancer diagnostic model creation based on microarray data. To keep a realistic perspective on clinical applications we focus on multicategory diagnosis. 
To equip the system with the optimum combination of classifier, gene selection and cross-validation methods, we performed a systematic and comprehensive evaluation of several major algorithms for multicategory classification, several gene selection methods, multiple ensemble classifier methods and two cross-validation designs using 11 datasets spanning 74 diagnostic categories and 41 cancer types and 12 normal tissue types. ::: ::: Results: Multicategory support vector machines (MC-SVMs) are the most effective classifiers in performing accurate cancer diagnosis from gene expression data. The MC-SVM techniques by Crammer and Singer, Weston and Watkins and one-versus-rest were found to be the best methods in this domain. MC-SVMs outperform other popular machine learning algorithms, such as k-nearest neighbors, backpropagation and probabilistic neural networks, often to a remarkable degree. Gene selection techniques can significantly improve the classification performance of both MC-SVMs and other non-SVM learning algorithms. Ensemble classifiers do not generally improve performance of the best non-ensemble models. These results guided the construction of a software system GEMS (Gene Expression Model Selector) that automates high-quality model construction and enforces sound optimization and performance estimation procedures. This is the first such system to be informed by a rigorous comparative analysis of the available algorithms and datasets. ::: ::: Availability: The software system GEMS is available for download from http://www.gems-system.org for non-commercial use. ::: ::: Contact: [email protected] <s> BIB003
Several papers have warned about the substantial number of applications not performing an independent and honest validation of the reported accuracy percentages BIB001 BIB003 BIB002 . In such cases, authors often select a discriminative subset of features using the whole dataset. The accuracy of the final classification model is then estimated using this subset, thus testing the discrimination rule on samples that were already used to propose the final subset of features. We feel that the need for an external feature selection process, performed anew in training the classification rule at each stage of the accuracy estimation procedure, is gaining ground in bioinformatics community practice. Furthermore, novel predictive accuracy estimation methods with promising characteristics, such as bolstered error estimation , have emerged to deal with the specificities of small sample domains.
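The selection bias described above is easy to demonstrate numerically: on data with no signal at all, placing feature selection outside the cross-validation loop yields an optimistic accuracy, while re-selecting features inside each training fold restores a near-chance estimate. The toy nearest-centroid classifier, the mean-difference score and all names below are illustrative assumptions, not the procedures of the cited papers.

```python
import random

def top_features(X, y, k):
    """Top-k features by absolute mean difference between the two classes."""
    def score(j):
        a = [xi[j] for xi, yi in zip(X, y) if yi == 0]
        b = [xi[j] for xi, yi in zip(X, y) if yi == 1]
        return abs(sum(a) / len(a) - sum(b) / len(b))
    return sorted(range(len(X[0])), key=score, reverse=True)[:k]

def centroid_predict(X, y, feats, train, i):
    """Nearest-centroid prediction for sample i using the given features."""
    dist = {}
    for c in (0, 1):
        rows = [X[t] for t in train if y[t] == c]
        cent = [sum(r[j] for r in rows) / len(rows) for j in feats]
        dist[c] = sum((X[i][j] - cent[k]) ** 2 for k, j in enumerate(feats))
    return min(dist, key=dist.get)

def loocv_accuracy(X, y, k, selection_inside_loop):
    feats_global = top_features(X, y, k)   # uses ALL samples -> biased
    hits = 0
    for i in range(len(X)):
        train = [t for t in range(len(X)) if t != i]
        if selection_inside_loop:
            feats = top_features([X[t] for t in train],
                                 [y[t] for t in train], k)
        else:
            feats = feats_global
        hits += (centroid_predict(X, y, feats, train, i) == y[i])
    return hits / len(X)

if __name__ == "__main__":
    rng = random.Random(7)
    y = [i % 2 for i in range(20)]                                  # random labels
    X = [[rng.gauss(0, 1) for _ in range(500)] for _ in range(20)]  # pure noise
    print("biased:", loocv_accuracy(X, y, 5, selection_inside_loop=False))
    print("honest:", loocv_accuracy(X, y, 5, selection_inside_loop=True))
```

On such pure-noise data the honest estimate hovers around chance, while the biased protocol can look deceptively accurate, which is exactly the selection bias the cited papers warn about.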
A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> Constantly improving gene expression profiling technologies are expected to provide understanding and insight into cancer related cellular processes. Gene expression data is also expected to significantly and in the development of efficient cancer diagnosis and classification platforms. In this work we examine two sets of gene expression data measured across sets of tumor and normal clinical samples One set consists of 2,000 genes, measured in 62 epithelial colon samples [1]. The second consists of a 100,000 clones, measured in 32 ovarian samples (unpublished, extension of data set described in [26]). We examine the use of scoring methods, measuring separation of tumors from normals using individual gene expression levels. These are then coupled with high dimensional classification methods to assess the classification power of complete expression profiles. We present results of performing leave-one-out cross validation (LOOCV) experiments on the two data sets. employing SVM [8], AdaBoost [13] and a novel clustering based classification technique. As tumor samples can differ from normal samples in their cell-type composition we also perform LOOCV experiments using appropriately modified sets of genes, attempting to eliminate the resulting bias. We demonstrate success rate of at least 90% in tumor vs normal classification, using sets of selected genes, with as well as without cellular contamination related members. These results are insensitive to the exact selection mechanism, over a certain range. <s> BIB001 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> The analysis of the leukemia data from Whitehead/MIT group is a discriminant analysis (also called a supervised learning). 
Among thousands of genes whose expression levels are measured, not all are needed for discriminant analysis: a gene may either not contribute to the separation of two types of tissues/cancers, or it may be redundant because it is highly correlated with other genes. There are two theoretical frameworks in which variable selection (or gene selection in our case) can be addressed. The first is model selection, and the second is model averaging. We have carried out model selection using Akaike information criterion and Bayesian information criterion with logistic regression (discrimination, prediction, or classification) to determine the number of genes that provide the best model. These model selection criteria set upper limits of 22-25 and 12-13 genes for this data set with 38 samples, and the best model consists of only one (no.4847, zyxin) or two genes. We have also carried out model averaging over the best single-gene logistic predictors using three different weights: maximized likelihood, prediction rate on training set, and equal weight. We have observed that the performance of most of these weighted predictors on the testing set is gradually reduced as more genes are included, but a clear cutoff that separates good and bad prediction performance is not found. <s> BIB002 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> Motivation: We recently introduced a multivariate approach that selects a subset of predictive genes jointly for sample classification based on expression data. We tested the algorithm on colon and leukemia data sets. As an extension to our earlier work, we systematically examine the sensitivity, reproducibility and stability of gene selection/sample classification to the choice of parameters of the algorithm. 
Methods: Our approach combines a Genetic Algorithm (GA) and the k-Nearest Neighbor (KNN) method to identify genes that can jointly discriminate between different classes of samples (e.g. normal versus tumor). The GA/KNN method is a stochastic supervised pattern recognition method. The genes identified are subsequently used to classify independent test set samples. Results: The GA/KNN method is capable of selecting a subset of predictive genes from a large noisy data set for sample classification. It is a multivariate approach that can capture the correlated structure in the data. We find that for a given data set gene selection is highly repeatable in independent runs using the GA/KNN method. In general, however, gene selection may be less robust than classification. Availability: The method is available at http://dir.niehs.nih. gov/microarray/datamining <s> BIB003 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> A reliable and precise classification of tumors is essential for successful diagnosis and treatment of cancer. cDNA microarrays and high-density oligonucleotide chips are novel biotechnologies increasingly used in cancer research. By allowing the monitoring of expression levels in cells for thousands of genes simultaneously, microarray experiments may lead to a more complete understanding of the molecular variations among tumors and hence to a finer and more informative classification. The ability to successfully distinguish between tumor classes (already known or yet to be discovered) using gene expression data is an important aspect of this novel approach to cancer classification. This article compares the performance of different discrimination methods for the classification of tumors based on gene expression data. The methods include nearest-neighbor classifiers, linear discriminant analysis, and classification trees. 
Recent machine learning approaches, such as bagging and boosting, are also considere... <s> BIB004 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> Selection of significant genes via expression patterns is an important problem in microarray experiments. Owing to small sample size and the large number of variables (genes), the selection process can be unstable. This paper proposes a hierarchical Bayesian model for gene (variable) selection. We employ latent variables to specialize the model to a regression setting and uses a Bayesian mixture prior to perform the variable selection. We control the size of the model by assigning a prior distribution over the dimension (number of significant genes) of the model. The posterior distributions of the parameters are not in explicit form and we need to use a combination of truncated sampling and Markov Chain Monte Carlo (MCMC) based computation techniques to simulate the parameters from the posteriors. The Bayesian model is flexible enough to identify significant genes as well as to perform future predictions. The method is applied to cancer classification via cDNA microarrays where the genes BRCA1 and BRCA2 are associated with a hereditary disposition to breast cancer, and the method is used to identify a set of significant genes. The method is also applied successfully to the leukemia data. <s> BIB005 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> Motivation: Novel methods, both molecular and statistical, are urgently needed to take advantage of recent advances in biotechnology and the human genome project for disease diagnosis and prognosis. Mass spectrometry (MS) holds great promise for biomarker identification and genome-wide protein profiling. It has been demonstrated in the literature that biomarkers can be identified to distinguish normal individuals from cancer patients using MS data. 
Such progress is especially exciting for the detection of early-stage ovarian cancer patients. Although various statistical methods have been utilized to identify biomarkers from MS data, there has been no systematic comparison among these approaches in their relative ability to analyze MS data. Results: We compare the performance of several classes of statistical methods for the classification of cancer based on MS spectra. These methods include: linear discriminant analysis, quadratic discriminant analysis, k -nearest neighbor classifier, bagging and boosting classification trees, support vector machine, and random forest (RF). The methods are applied to ovarian cancer and control serum samples from the National Ovarian Cancer Early Detection Program clinic at Northwestern University Hospital. We found that RF outperforms other methods in the analysis of MS data. <s> BIB006 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> Motivation: A common objective of microarray experiments is the detection of differential gene expression between samples obtained under different conditions. The task of identifying differentially expressed genes consists of two aspects: ranking and selection. Numerous statistics have been proposed to rank genes in order of evidence for differential expression. However, no one statistic is universally optimal and there is seldom any basis or guidance that can direct toward a particular statistic of choice. ::: ::: Results: Our new approach, which addresses both ranking and selection of differentially expressed genes, integrates differing statistics via a distance synthesis scheme. Using a set of (Affymetrix) spike-in datasets, in which differentially expressed genes are known, we demonstrate that our method compares favorably with the best individual statistics, while achieving robustness properties lacked by the individual statistics. 
We further evaluate performance on one other microarray study. ::: ::: Availability: The approach is implemented in an R package called DEDS, which is available for download from the Bioconductor website (http://www.bioconductor.org/). ::: ::: Contact: [email protected] <s> BIB007 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> BackgroundThe use of mass spectrometry as a proteomics tool is poised to revolutionize early disease diagnosis and biomarker identification. Unfortunately, before standard supervised classification algorithms can be employed, the "curse of dimensionality" needs to be solved. Due to the sheer amount of information contained within the mass spectra, most standard machine learning techniques cannot be directly applied. Instead, feature selection techniques are used to first reduce the dimensionality of the input space and thus enable the subsequent use of classification algorithms. This paper examines feature selection techniques for proteomic mass spectrometry.ResultsThis study examines the performance of the nearest centroid classifier coupled with the following feature selection algorithms. Student-t test, Kolmogorov-Smirnov test, and the P-test are univariate statistics used for filter-based feature ranking. From the wrapper approaches we tested sequential forward selection and a modified version of sequential backward selection. Embedded approaches included shrunken nearest centroid and a novel version of boosting based feature selection we developed. In addition, we tested several dimensionality reduction approaches, namely principal component analysis and principal component analysis coupled with linear discriminant analysis. To fairly assess each algorithm, evaluation was done using stratified cross validation with an internal leave-one-out cross-validation loop for automated feature selection. 
Comprehensive experiments, conducted on five popular cancer data sets, revealed that the less advocated sequential forward selection and boosted feature selection algorithms produce the most consistent results across all data sets. In contrast, the state-of-the-art performance reported on isolated data sets for several of the studied algorithms, does not hold across all data sets.ConclusionThis study tested a number of popular feature selection methods using the nearest centroid classifier and found that several reportedly state-of-the-art algorithms in fact perform rather poorly when tested via stratified cross-validation. The revealed inconsistencies provide clear evidence that algorithm evaluation should be performed on several data sets using a consistent (i.e., non-randomized, stratified) cross-validation procedure in order for the conclusions to be statistically sound. <s> BIB008 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> Background ::: Due to the high cost and low reproducibility of many microarray experiments, it is not surprising to find a limited number of patient samples in each study, and very few common identified marker genes among different studies involving patients with the same disease. Therefore, it is of great interest and challenge to merge data sets from multiple studies to increase the sample size, which may in turn increase the power of statistical inferences. In this study, we combined two lung cancer studies using micorarray GeneChip®, employed two gene shaving methods and a two-step survival test to identify genes with expression patterns that can distinguish diseased from normal samples, and to indicate patient survival, respectively. 
<s> BIB009 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> Motivation: Selecting a small number of relevant genes for accurate classification of samples is essential for the development of diagnostic tests. We present the Bayesian model averaging (BMA) method for gene selection and classification of microarray data. Typical gene selection and classification procedures ignore model uncertainty and use a single set of relevant genes (model) to predict the class. BMA accounts for the uncertainty about the best set to choose by averaging over multiple models (sets of potentially overlapping relevant genes). ::: ::: Results: We have shown that BMA selects smaller numbers of relevant genes (compared with other methods) and achieves a high prediction accuracy on three microarray datasets. Our BMA algorithm is applicable to microarray datasets with any number of classes, and outputs posterior probabilities for the selected genes and models. Our selected models typically consist of only a few genes. The combination of high accuracy, small numbers of genes and posterior probabilities for the predictions should make BMA a powerful tool for developing diagnostics from expression data. ::: ::: Availability: The source codes and datasets used are available from our Supplementary website. ::: ::: Contact: [email protected] ::: ::: Supplementary information: http://www.expression.washington.edu/publications/kayee/bma <s> BIB010 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> Motivation: The classification of high-dimensional data is always a challenge to statistical machine learning. We propose a novel method named shallow feature selection that assigns each feature a probability of being selected based on the structure of training data itself. 
Independent of particular classifiers, the high dimension of biodata can be fleetly reduced to an applicable case for consequential processing. Moreover, to improve both efficiency and performance of classification, these prior probabilities are further used to specify the distributions of top-level hyperparameters in hierarchical models of Bayesian neural network (BNN), as well as the parameters in Gaussian process models. ::: ::: Results: Three BNN approaches were derived and then applied to identify ovarian cancer from NCI's high-resolution mass spectrometry data, which yielded an excellent performance in 1000 independent k-fold cross validations (k = 2,...,10). For instance, indices of average sensitivity and specificity of 98.56 and 98.42%, respectively, were achieved in the 2-fold cross validations. Furthermore, only one control and one cancer were misclassified in the leave-one-out cross validation. Some other popular classifiers were also tested for comparison. ::: ::: Availability: The programs implemented in MatLab, R and Neal's fbm.2004-11-10. ::: ::: Contact: [email protected] <s> BIB011 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> BackgroundSelection of relevant genes for sample classification is a common task in most gene expression studies, where researchers try to identify the smallest possible set of genes that can still achieve good predictive performance (for instance, for future use with diagnostic purposes in clinical practice). Many gene selection approaches use univariate (gene-by-gene) rankings of gene relevance and arbitrary thresholds to select the number of genes, can only be applied to two-class problems, and use gene selection ranking criteria unrelated to the classification algorithm. 
In contrast, random forest is a classification algorithm well suited for microarray data: it shows excellent performance even when most predictive variables are noise, can be used when the number of variables is much larger than the number of observations and in problems involving more than two classes, and returns measures of variable importance. Thus, it is important to understand the performance of random forest with microarray data and its possible use for gene selection.ResultsWe investigate the use of random forest for classification of microarray data (including multi-class problems) and propose a new method of gene selection in classification problems based on random forest. Using simulated and nine microarray data sets we show that random forest has comparable performance to other classification methods, including DLDA, KNN, and SVM, and that the new gene selection procedure yields very small sets of genes (often smaller than alternative methods) while preserving predictive accuracy.ConclusionBecause of its performance and features, random forest and gene selection using random forest should probably become part of the "standard tool-box" of methods for class prediction and gene selection with microarray data. <s> BIB012 </s> A review of feature selection techniques in bioinformatics <s> Ensemble feature selection approaches <s> Motivation: Modern mass spectrometry allows the determination of proteomic fingerprints of body fluids like serum, saliva or urine. These measurements can be used in many medical applications in order to diagnose the current state or predict the evolution of a disease. Recent developments in machine learning allow one to exploit such datasets, characterized by small numbers of very high-dimensional samples. ::: ::: Results: We propose a systematic approach based on decision tree ensemble methods, which is used to automatically determine proteomic biomarkers and predictive models. 
The approach is validated on two datasets of surface-enhanced laser desorption/ionization time of flight measurements, for the diagnosis of rheumatoid arthritis and inflammatory bowel diseases. The results suggest that the methodology can handle a broad class of similar problems. Supplementary information: Additional tables, appendices and datasets may be found at http://www.montefiore.ulg.ac.be/~geurts/Papers/Proteomic-suppl.html Contact: [email protected] <s> BIB013
Instead of choosing one particular FS method and accepting its outcome as the final subset, different FS methods can be combined using ensemble FS approaches. Based on the evidence that there is often not a single universally optimal feature selection technique BIB007 , and due to the possible existence of more than one subset of features that discriminates the data equally well BIB010 , model combination approaches such as boosting have been adapted to improve the robustness and stability of final, discriminative methods BIB001 BIB004 . Novel ensemble techniques in the microarray and mass spectrometry domains include averaging over multiple single feature subsets BIB008 BIB002 , integrating a collection of univariate differential gene expression statistics via a distance synthesis scheme BIB007 , using different runs of a genetic algorithm to assess the relative importance of each feature BIB003 , computing the Kolmogorov-Smirnov test in different bootstrap samples to assign a probability of being selected to each peak BIB011 , and a number of Bayesian averaging approaches BIB005 BIB010 . Furthermore, methods based on a collection of decision trees (e.g. random forests) can be used in an ensemble FS way to assess the relevance of each feature BIB012 BIB013 BIB009 BIB006 . Although the use of ensemble approaches requires additional computational resources, we would like to point out that they offer an advisable framework to deal with small sample domains, provided the extra computational resources are affordable.
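As an illustrative sketch of the rank-aggregation idea behind ensemble FS (all function names and the simple mean-difference score below are our own stand-ins; the cited methods use more elaborate filters and aggregation schemes), per-feature ranks can be averaged over bootstrap resamples of the samples:

```python
import random
import statistics
from collections import defaultdict

def mean_diff_score(xs, ys):
    """Simple univariate filter: class-mean difference scaled by pooled spread."""
    a = [x for x, y in zip(xs, ys) if y == 0]
    b = [x for x, y in zip(xs, ys) if y == 1]
    if not a or not b:          # degenerate bootstrap sample: no information
        return 0.0
    pooled = (statistics.pstdev(a) + statistics.pstdev(b)) / 2 or 1e-9
    return abs(statistics.mean(a) - statistics.mean(b)) / pooled

def ensemble_rank(X, y, n_boot=25, seed=0):
    """Aggregate per-feature ranks over bootstrap resamples; features that
    rank high consistently across resamples come out first."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    rank_sum = defaultdict(float)
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]      # bootstrap sample
        scores = [mean_diff_score([X[i][j] for i in idx], [y[i] for i in idx])
                  for j in range(d)]
        for rank, j in enumerate(sorted(range(d), key=lambda j: -scores[j])):
            rank_sum[j] += rank
    return sorted(range(d), key=lambda j: rank_sum[j])  # lower mean rank first
```

Averaging ranks rather than raw scores makes the aggregate less sensitive to any single resample, which is the stability argument made above.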
A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> Linkage disequilibrium (LD) analysis is traditionally based on individual genetic markers and often yields an erratic, non-monotonic picture, because the power to detect allelic associations depends on specific properties of each marker, such as frequency and population history. Ideally, LD analysis should be based directly on the underlying haplotype structure of the human genome, but this structure has remained poorly understood. Here we report a high-resolution analysis of the haplotype structure across 500 kilobases on chromosome 5q31 using 103 single-nucleotide polymorphisms (SNPs) in a European-derived population. The results show a picture of discrete haplotype blocks (of tens to hundreds of kilobases), each with limited diversity punctuated by apparent sites of recombination. In addition, we develop an analytical model for LD mapping based on such haplotype blocks. If our observed structure is general (and published data suggest that it may be), it offers a coherent framework for creating a haplotype map of the human genome. <s> BIB001 </s> A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> Haplotype-based methods offer a powerful approach to disease gene mapping, based on the association between causal mutations and the ancestral haplotypes on which they arose. As part of The SNP Consortium Allele Frequency Projects, we characterized haplotype patterns across 51 autosomal regions (spanning 13 megabases of the human genome) in samples from Africa, Europe, and Asia. We show that the human genome can be parsed objectively into haplotype blocks: sizable regions over which there is little evidence for historical recombination and within which only a few common haplotypes are observed. The boundaries of blocks and specific haplotypes they contain are highly correlated across populations. 
We demonstrate that such haplotype frameworks provide substantial statistical power in association studies of common genetic variation across each region. Our results provide a foundation for the construction of a haplotype map of the human genome, facilitating comprehensive genetic association studies of human disease. <s> BIB002 </s> A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> Common genetic polymorphisms may explain a portion of the heritable risk for common diseases. Within candidate genes, the number of common polymorphisms is finite, but direct assay of all existing common polymorphism is inefficient, because genotypes at many of these sites are strongly correlated. Thus, it is not necessary to assay all common variants if the patterns of allelic association between common variants can be described. We have developed an algorithm to select the maximally informative set of common single-nucleotide polymorphisms (tagSNPs) to assay in candidate-gene association studies, such that all known common polymorphisms either are directly assayed or exceed a threshold level of association with a tagSNP. The algorithm is based on the r(2) linkage disequilibrium (LD) statistic, because r(2) is directly related to statistical power to detect disease associations with unassayed sites. We show that, at a relatively stringent r(2) threshold (r2>0.8), the LD-selected tagSNPs resolve >80% of all haplotypes across a set of 100 candidate genes, regardless of recombination, and tag specific haplotypes and clades of related haplotypes in nonrecombinant regions. Thus, if the patterns of common variation are described for a candidate gene, analysis of the tagSNP set can comprehensively interrogate for main effects from common functional variation. We demonstrate that, although common variation tends to be shared between populations, tagSNPs should be selected separately for populations with different ancestries. 
<s> BIB003 </s> A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> The immense volume and rapid growth of human genomic data, especially single nucleotide polymorphisms (SNPs), present special challenges for both biomedical researchers and automatic algorithms. One such challenge is to select an optimal subset of SNPs, commonly referred as “haplotype tagging SNPs” (htSNPs), to capture most of the haplotype diversity of each haplotype block or gene-specific region. This information-reduction process facilitates cost-effective genotyping and, subsequently, genotype-phenotype association studies. It also has implications for assessing the risk of identifying research subjects on the basis of SNP information deposited in public domain databases. We have investigated methods for selecting htSNPs by use of principal components analysis (PCA). These methods first identify eigenSNPs and then map them to actual SNPs. We evaluated two mapping strategies, greedy discard and varimax rotation, by assessing the ability of the selected htSNPs to reconstruct genotypes of non-htSNPs. We also compared these methods with two other htSNP finders, one of which is PCA based. We applied these methods to three experimental data sets and found that the PCA-based methods tend to select the smallest set of htSNPs to achieve a 90% reconstruction precision. <s> BIB004 </s> A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> Summary: We developed algorithms that find a set of single nucleotide polymorphism (SNP) markers based on interval regularity, given either the number of SNPs to choose (m) or the desired interval (I), subject to minimum variance or minimum sum of squared deviations from I. 
In both cases, the number of all possible sets increases exponentially with respect to the number of input SNPs (n), but our algorithms find the minima only with O(n^2) calculations and comparisons by elimination of redundancy. Availability: A Windows executable program CHOISS is freely available at http://biochem.kaist.ac.kr/choiss.htm Supplementary information: http://biochem.kaist.ac.kr/choiss.htm <s> BIB005 </s> A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> Objective: Genomic studies provide large volumes of data with the number of single nucleotide polymorphisms (SNPs) ranging into thousands. The analysis of SNPs permits determining relationships between genotypic and phenotypic information as well as the identification of SNPs related to a disease. The growing wealth of information and advances in biology call for the development of approaches for discovery of new knowledge. One such area is the identification of gene/SNP patterns impacting cure/drug development for various diseases. Methods: A new approach for predicting drug effectiveness is presented. The approach is based on data mining and genetic algorithms. A global search mechanism, weighted decision tree, decision-tree-based wrapper, a correlation-based heuristic, and the identification of intersecting feature sets are employed for selecting significant genes. Results: The feature selection approach has resulted in 85% reduction of number of features. The relative increase in cross-validation accuracy and specificity for the significant gene/SNP set was 10% and 3.2%, respectively. Conclusion: The feature selection approach was successfully applied to data sets for drug and placebo subjects. The number of features has been significantly reduced while the quality of knowledge was enhanced. The feature set intersection approach provided the most significant genes/SNPs.
The results reported in the paper discuss associations among SNPs resulting in patient-specific treatment protocols. <s> BIB006 </s> A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> The large-scale genome-wide SNP data being acquired from biomedical domains have offered resources to evaluate modern data mining techniques in applications to genetic studies. The purpose of this study is to extend our recently developed gene mining approach to extracting the relevant SNPs for alcoholism using sib-pair IBD profiles of pedigrees. Application to a publicly available large dataset of 100 simulated replicates for three American populations demonstrates that the proposed ensemble decision approach has successfully identified most of the simulated true loci, thus implicating that IBD statistic could be used as one of the informatics for mining the genetic underpins for complex human diseases. <s> BIB007 </s> A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> Large-scale genome-wide genetic profiling using markers of single nucleotide polymorphisms (SNPs) has offered the opportunities to investigate the possibility of using those biomarkers for predicting genetic risks. Because of the special data structure characterized with a high dimension, signal-to-noise ratio and correlations between genes, but with a relative small sample size, the data analysis needs special strategies. We propose a robust data reduction technique based on a hybrid between genetic algorithm and support vector machine. The major goal of this hybridization is to fully exploit their respective merits (e.g., robustness to the size of solution space and capability of handling a very large dimension of features) for identification of key SNP features for risk prediction. 
We have applied the approach to the Genetic Analysis Workshop 14 COGA data to predict affection status of a sib pair based on genome-wide SNP identical-by-descent (IBD) informatics. This application has demonstrated its potential to extract useful information from the massive SNP data. <s> BIB008 </s> A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> Genetic variation analysis holds much promise as a basis for disease-gene association. However, due to the tremendous number of candidate single nucleotide polymorphisms (SNPs), there is a clear need to expedite genotyping by selecting and considering only a subset of all SNPs. This process is known as tagging SNP selection. Several methods for tagging SNP selection have been proposed, and have shown promising results. However, most of them rely on strong assumptions such as prior block-partitioning, bi-allelic SNPs, or a fixed number or location of tagging SNPs. We introduce BNTagger, a new method for tagging SNP selection, based on conditional independence among SNPs. Using the formalism of Bayesian networks (BNs), our system aims to select a subset of independent and highly predictive SNPs. Similar to previous prediction-based methods, we aim to maximize the prediction accuracy of tagging SNPs, but unlike them, we neither fix the number nor the location of predictive tagging SNPs, nor require SNPs to be bi-allelic. In addition, for newly-genotyped samples, BNTagger directly uses genotype data as input, while producing as output haplotype data of all SNPs. Using three public datasets, we compare the prediction performance of our method to that of three state-of-the-art tagging SNP selection methods. The results demonstrate that our method consistently improves upon previous methods in terms of prediction accuracy. Moreover, our method retains its good performance even when a very small number of tagging SNPs are used.
Contact: [email protected], [email protected] <s> BIB009 </s> A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> High-density single nucleotide polymorphism (SNP) array is a recently introduced technology that genotypes more than 10,000 human SNPs on a single array. It has been shown that SNP arrays can be used to determine not only SNP genotype calls, but also DNA copy number (DCN) aberrations, which are common in solid tumors. In the past, effective cancer classification has been demonstrated using microarray gene expression data, or DCN data derived from comparative genomic hybridization (CGH) arrays. However, the feasibility of cancer classification based on DCN aberrations determined by SNP arrays has not been previously investigated. In this study, we address this issue by applying state-of-the-art classification algorithms and feature selection algorithms to the DCN aberration data derived from a public SNP array dataset. Performance was measured via leave-one-out cross-validation (LOOCV) classification accuracy. Experimental results showed that the maximum accuracy was 73.33%, which is comparable to the maximum accuracy of 76.5% based on CGH-derived DCN data reported previously in the literature. These results suggest that DCN aberration data derived from SNP arrays is useful for etiology-based tumor classification. <s> BIB010 </s> A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> The search for the association between complex diseases and single nucleotide polymorphisms (SNPs) or haplotypes has recently received great attention. For these studies, it is essential to use a small subset of informative SNPs accurately representing the rest of the SNPs.
Informative SNP selection can achieve (1) considerable budget savings by genotyping only a limited number of SNPs and computationally inferring all other SNPs or (2) necessary reduction of the huge SNP sets (obtained, e.g. from Affymetrix) for further fine haplotype analysis. A novel informative SNP selection method for unphased genotype data based on multiple linear regression (MLR) is implemented in the software package MLR-tagging. This software can be used for informative SNP (tag) selection and genotype prediction. The stepwise tag selection algorithm (STSA) selects positions of the given number of informative SNPs based on a genotype sample population. The MLR SNP prediction algorithm predicts a complete genotype based on the values of its informative SNPs, their positions among all SNPs, and a sample of complete genotypes. An extensive experimental study on various datasets including 10 regions from HapMap shows that the MLR prediction combined with stepwise tag selection uses fewer tags than the state-of-the-art method of Halperin et al. (2005). Availability: MLR-Tagging software package is publicly available at http://alla.cs.gsu.edu/~software/tagging/tagging.html <s> BIB011 </s> A review of feature selection techniques in bioinformatics <s> Single nucleotide polymorphism analysis <s> Summary: We have developed an online program, WCLUSTAG, for tag SNP selection that allows the user to specify variable tagging thresholds for different SNPs. Tag SNPs are selected such that a SNP with user-specified tagging threshold C will have a minimum R2 of C with at least one tag SNP. This flexible feature is useful for researchers who wish to prioritize genomic regions or SNPs in an association study. Availability: The online WCLUSTAG program is available at http://bioinfo.hku.hk/wclustag/ Contact: [email protected]
Single nucleotide polymorphisms (SNPs) are mutations at a single nucleotide position that occurred during evolution and were passed on through heredity, accounting for most of the genetic variation among different individuals. SNPs are at the forefront of many disease-gene association studies, their number being estimated at about 7 million in the human genome. Thus, selecting a subset of SNPs that is sufficiently informative but still small enough to reduce the genotyping overhead is an important step towards disease-gene association. Typically, the number of SNPs considered is not higher than tens of thousands with sample sizes of about one hundred. Several computational methods for htSNP selection (haplotype tagging SNPs; a set of SNPs located on one chromosome) have been proposed in the past few years. One approach is based on the hypothesis that the human genome can be viewed as a set of discrete blocks that only share a very small set of common haplotypes BIB001 . This approach aims to identify a subset of SNPs that can either distinguish all the common haplotypes BIB002 , or at least explain a certain percentage of them. Another common htSNP selection approach is based on pairwise associations of SNPs, and tries to select a set of htSNPs such that each of the SNPs on a haplotype is highly associated with one of the htSNPs BIB003 . A third approach considers htSNPs as a subset of all SNPs, from which the remaining SNPs can be reconstructed BIB009 BIB004 . The idea is to select htSNPs based on how well they predict the remaining set of the unselected SNPs. When the haplotype structure in the target region is unknown, a widely used approach is to choose markers at regular intervals BIB005 , given either the number of SNPs to choose or the desired interval.
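The pairwise-association approach can be sketched as a greedy covering procedure over a matrix of pairwise r^2 values (the 0.8 threshold follows the r^2-based selection described above; the function name and the r^2 matrix below are invented for illustration):

```python
def greedy_tag_snps(r2, threshold=0.8):
    """Pick tag SNPs so that every SNP has r^2 >= threshold with at least one tag.
    r2[i][j] is the pairwise linkage-disequilibrium r^2 between SNPs i and j
    (with r2[i][i] == 1.0, so a SNP always tags itself)."""
    n = len(r2)
    untagged = set(range(n))
    tags = []
    while untagged:
        # greedily take the SNP that covers the most currently untagged SNPs
        best = max(range(n),
                   key=lambda i: sum(1 for j in untagged if r2[i][j] >= threshold))
        tags.append(best)
        untagged -= {j for j in untagged if r2[best][j] >= threshold}
    return tags

# Two blocks of correlated SNPs, {0,1} and {2,3} (made-up r^2 values).
r2 = [[1.0, 0.9, 0.1, 0.1],
      [0.9, 1.0, 0.1, 0.1],
      [0.1, 0.1, 1.0, 0.95],
      [0.1, 0.1, 0.95, 1.0]]
```

Here `greedy_tag_snps(r2)` returns one tag per block, so genotyping two SNPs suffices to represent all four at the chosen threshold.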
In BIB007 an ensemble approach is successfully applied to the identification of relevant SNPs for alcoholism, while BIB008 propose a robust feature selection technique based on a hybrid between a genetic algorithm and an SVM. The Relief-F feature selection algorithm, in conjunction with three classification algorithms (k-NN, SVM and naive Bayes), has been proposed in BIB010 . Genetic algorithms have been applied to the search of the best subset of SNPs, evaluating them with a multivariate filter (CFS), and also in a wrapper manner (with a decision tree as supervised classification paradigm) BIB006 . The multiple linear regression SNP prediction algorithm BIB011 predicts a complete genotype based on the values of its informative SNPs (selected with a stepwise tag selection algorithm), their positions among all SNPs, and a sample of complete genotypes. In BIB012 the tag SNP selection method allows the user to specify variable tagging thresholds, based on correlations, for different SNPs.
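A minimal sketch of the regression-based prediction step can be written with ordinary least squares via `numpy` (the 0/1/2 genotype coding, the function name and the data layout are our assumptions, not the cited MLR-tagging implementation):

```python
import numpy as np

def predict_snp_from_tags(G_train, tag_idx, target_idx, new_tags):
    """Regress one SNP (coded 0/1/2) on the tag SNPs over a training sample,
    then predict it for a new individual from that individual's tag genotypes."""
    X = np.c_[np.ones(len(G_train)), G_train[:, tag_idx]]   # intercept + tags
    beta, *_ = np.linalg.lstsq(X, G_train[:, target_idx], rcond=None)
    pred = np.r_[1.0, new_tags] @ beta
    return int(np.clip(np.rint(pred), 0, 2))                # snap to a valid genotype
```

The rounding step reflects that the regression is continuous while genotypes are discrete; a real system would also handle missing data and multi-tag collinearity.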
A review of feature selection techniques in bioinformatics <s> Text and literature mining <s> Motivation: Searching relevant publications for manual database annotation is a tedious task. In this paper, we apply a combination of Natural Language Processing (NLP) and probabilistic classification to re-rank documents returned by PubMed according to their relevance to SwissProt annotation, and to identify significant terms in the documents. <s> BIB001 </s> A review of feature selection techniques in bioinformatics <s> Text and literature mining <s> For the average biologist, hands-on literature mining currently means a keyword search in PubMed. However, methods for extracting biomedical facts from the scientific literature have improved considerably, and the associated tools will probably soon be used in many laboratories to automatically annotate and analyse the growing number of system-wide experimental data sets. Owing to the increasing body of text and the open-access policies of many journals, literature mining is also becoming useful for both hypothesis generation and biological discovery. However, the latter will require the integration of literature and high-throughput data, which should encourage close collaborations between biologists and computational linguists. <s> BIB002 </s> A review of feature selection techniques in bioinformatics <s> Text and literature mining <s> Machine learning for text classification is the cornerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and more accurate. This paper presents an empirical comparison of twelve feature selection methods (e.g. Information Gain) evaluated on a benchmark of 229 text classification problem instances that were gathered from Reuters, TREC, OHSUMED, etc. 
The results are analyzed from multiple goal perspectives (accuracy, F-measure, precision, and recall), since each is appropriate in different situations. The results reveal that a new feature selection metric we call 'Bi-Normal Separation' (BNS) outperformed the others by a substantial margin in most situations. This margin widened in tasks with high class skew, which is rampant in text classification problems and is particularly challenging for induction algorithms. A new evaluation methodology is offered that focuses on the needs of the data mining practitioner faced with a single dataset who seeks to choose one (or a pair of) metrics that are most likely to yield the best performance. From this perspective, BNS was the top single choice for all goals except precision, for which Information Gain yielded the best result most often. This analysis also revealed, for example, that Information Gain and Chi-Squared have correlated failures, and so they work poorly together. When choosing optimal pairs of metrics for each of the four performance goals, BNS is consistently a member of the pair; e.g., for greatest recall, the pair BNS + F1-measure yielded the best performance on the greatest number of tasks by a considerable margin. <s> BIB003
Text and literature mining is emerging as a promising area for data mining in biology BIB002 . One important representation of text and documents is the so-called bag-of-words (BOW) representation, where each word in the text represents one variable, and its value consists of the frequency of the specific word in the text. It goes without saying that such a representation of the text may lead to very high-dimensional datasets, pointing out the need for feature selection techniques. Although the application of feature selection techniques is common in the field of text classification (see e.g. BIB003 for a review), the application in the biomedical domain is still in its infancy. Some examples of FS techniques in the biomedical domain include the work of Dobrokhotov et al. BIB001 , who use the Kullback-Leibler divergence as a univariate filter method to find discriminating words in a medical annotation task, the work of Eom and Zhang, who use symmetrical uncertainty (an entropy-based filter method) for identifying relevant features for protein interaction discovery, and the work of Han et al., which discusses the use of feature selection for a document classification task. It can be expected that, for tasks such as biomedical document clustering and classification, the large number of feature selection techniques that were already developed in the text mining community will be of practical use for researchers in biomedical literature mining.
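A toy sketch of BOW construction plus a univariate entropy-based filter (information gain on word presence; the corpus and function names are invented, and real pipelines add stemming, stop-word removal, etc.):

```python
import math
from collections import Counter

def bow(docs):
    """Bag-of-words: one column per vocabulary word, cell = count in the document."""
    vocab = sorted({w for d in docs for w in d.split()})
    counts = [Counter(d.split()) for d in docs]
    return vocab, [[c[w] for w in vocab] for c in counts]

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(column, y):
    """Information gain of a word, treating the feature as present/absent."""
    parts = {}
    for v, label in zip(column, y):
        parts.setdefault(v > 0, []).append(label)
    conditional = sum(len(p) / len(y) * entropy(p) for p in parts.values())
    return entropy(y) - conditional
```

Ranking the vocabulary by `info_gain` and keeping the top words is the simplest filter-style reduction of a BOW matrix before classification.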
A Survey of Text Similarity Approaches <s> Character-Based Similarity Measures <s> A computer adaptable method for finding similarities in the amino acid sequences of two proteins has been developed. From these findings it is possible to determine whether significant homology exists between the proteins. This information is used to trace their possible evolutionary development. The maximum match is a number dependent upon the similarity of the sequences. One of its definitions is the largest number of amino acids of one protein that can be matched with those of a second protein allowing for all possible interruptions in either of the sequences. While the interruptions give rise to a very large number of comparisons, the method efficiently excludes from consideration those comparisons that cannot contribute to the maximum match. Comparisons are made from the smallest unit of significance, a pair of amino acids, one from each protein. All possible pairs are represented by a two-dimensional array, and all possible comparisons are represented by pathways through the array. For this maximum match only certain of the possible pathways must, be evaluated. A numerical value, one in this case, is assigned to every cell in the array representing like amino acids. The maximum match is the largest number that would result from summing the cell values of every <s> BIB001 </s> A Survey of Text Similarity Approaches <s> Character-Based Similarity Measures <s> Approximate matching of strings is reviewed with the aim of surveying techniques suitable for finding an item in a database when there may be a spelling mistake or other error in the keyword. The methods found are classified as either equivalence or similarity problems. Equivalence problems are seen to be readily solved using canonical forms. 
For similarity problems, difference measures are surveyed, with a full description of the well-established dynamic programming method, relating this to the approach using probabilities and likelihoods. Searches for approximate matches in large sets using a difference function are seen to be an open problem still, though several promising ideas have been suggested. Approximate matching (error correction) during parsing is briefly reviewed. <s> BIB002 </s> A Survey of Text Similarity Approaches <s> Character-Based Similarity Measures <s> To locate matches across pairs of lists without unique identifiers it is sometimes necessary to compare strings of letters. String comparators are used in production computer matching software during the Post Enumeration Survey for the 1990 U.S. census. A string comparator metric is described that partially accounts for: (1) typographical variation in strings such as first name or surname; (2) decision rules that use the string comparator; and (3) improvements in empirical matching results. The string comparator metric for comparing partially agreeing strings extends the Jaro string comparator. How general methods of accounting for partial agreement fit with the Fellegi-Sunter (I. P. Fellegi and A. B. Sunter, 1969) model of record linkage is described. A formal method of modeling how to adjust matching weights between pure agreement and pure disagreement is presented. The procedure is illustrated for files for which the truth of matches is known. It is demonstrated that the theoretical rules of Fellegi and Sunter are still valid when general weighting adjustments accounting for partial agreement are performed. Eight tables contain illustrative data.
<s> BIB003 </s> A Survey of Text Similarity Approaches <s> Character-Based Similarity Measures <s> Probabilistic linkage technology makes it feasible and efficient to link large public health databases in a statistically justifiable manner. The problem addressed by the methodology is that of matching two files of individual data under conditions of uncertainty. Each field is subject to error which is measured by the probability that the field agrees given a record pair matches (called the m probability) and probabilities of chance agreement of its value states (called the u probability). Fellegi and Sunter pioneered record linkage theory. Advances in methodology include use of an EM algorithm for parameter estimation, optimization of matches by means of a linear sum assignment program, and more recently, a probability model that addresses both m and u probabilities for all value states of a field. This provides a means for obtaining greater precision from non-uniformly distributed fields, without the theoretical complications arising from frequency-based matching alone. The model includes an iterative parameter estimation procedure that is more robust than pre-match estimation techniques. The methodology was originally developed and tested by the author at the U.S. Census Bureau for census undercount estimation. The more recent advances and a new generalized software system were tested and validated by linking highway crashes to Emergency Medical Service (EMS) reports and to hospital admission records for the National Highway Traffic Safety Administration (NHTSA).
<s> BIB004 </s> A Survey of Text Similarity Approaches <s> Character-Based Similarity Measures <s> Plagiarism, the unacknowledged reuse of text, does not end at language boundaries. Cross-language plagiarism occurs if a text is translated from a fragment written in a different language and no proper citation is provided. Regardless of the change of language, the contents and, in particular, the ideas remain the same. Whereas different methods for the detection of monolingual plagiarism have been developed, less attention has been paid to the cross-language case. In this paper we compare two recently proposed cross-language plagiarism detection methods (CL-CNG, based on character n-grams, and CL-ASA, based on statistical translation) to a novel approach to this problem, based on machine translation and monolingual similarity analysis (T+MA). We explore the effectiveness of the three approaches for less related languages. CL-CNG is shown not to be appropriate for this kind of language pair, whereas T+MA performs better than the previously proposed models. <s> BIB005
The Longest Common SubString (LCS) algorithm measures the similarity between two strings by the length of the longest contiguous chain of characters that exists in both strings. Damerau-Levenshtein defines the distance between two strings as the minimum number of operations needed to transform one string into the other, where an operation is an insertion, deletion, or substitution of a single character, or a transposition of two adjacent characters BIB002 . Jaro is based on the number and order of the common characters between two strings; it takes into account typical spelling deviations and is mainly used in the area of record linkage BIB004 . Jaro-Winkler is an extension of Jaro distance; it uses a prefix scale which gives more favorable ratings to strings that match from the beginning for a set prefix length BIB003 . The Needleman-Wunsch algorithm is an example of dynamic programming, and was the first application of dynamic programming to biological sequence comparison. It performs a global alignment to find the best alignment over the entirety of two sequences. It is suitable when the two sequences are of similar length, with a significant degree of similarity throughout BIB001 . Smith-Waterman is another example of dynamic programming. It performs a local alignment to find the best alignment over the conserved domain of two sequences. It is useful for dissimilar sequences that are suspected to contain regions of similarity or similar sequence motifs within their larger sequence context. An N-gram is a sub-sequence of n items from a given sequence of text. N-gram similarity algorithms compare the n-grams from each character or word in two strings. Distance is computed by dividing the number of similar n-grams by the maximal number of n-grams BIB005 .
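Two of the measures above admit short reference implementations: a restricted Damerau-Levenshtein distance and a set-based character n-gram similarity (normalising by the larger n-gram set is one of several conventions, and the function names are ours):

```python
def damerau_levenshtein(a, b):
    """Minimum number of insertions, deletions, substitutions and adjacent
    transpositions turning string a into string b (restricted variant)."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def ngram_similarity(a, b, n=2):
    """Shared character n-grams divided by the size of the larger n-gram set."""
    grams = lambda s: {s[i:i + n] for i in range(len(s) - n + 1)}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / max(len(ga), len(gb), 1)
```

For example, `damerau_levenshtein("abcd", "acbd")` is 1, since a single transposition suffices, whereas plain Levenshtein would count two substitutions.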
A Survey of Text Similarity Approaches <s> Corpus-Based Similarity <s> How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched. <s> BIB001 </s> A Survey of Text Similarity Approaches <s> Corpus-Based Similarity <s> Computing semantic relatedness of natural language texts requires access to vast amounts of common-sense and domain-specific world knowledge. We propose Explicit Semantic Analysis (ESA), a novel method that represents the meaning of texts in a high-dimensional space of concepts derived from Wikipedia. We use machine learning techniques to explicitly represent the meaning of any text as a weighted vector of Wikipedia-based concepts. Assessing the relatedness of texts in this space amounts to comparing the corresponding vectors using conventional metrics (e.g., cosine). Compared with the previous state of the art, using ESA results in substantial improvements in correlation of computed relatedness scores with human judgments: from r = 0.56 to 0.75 for individual words and from r = 0.60 to 0.72 for texts. 
Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users. <s> BIB002
Corpus-Based similarity is a semantic similarity measure that determines the similarity between words according to information gained from large corpora. A corpus is a large collection of written or spoken texts that is used for language research. Figure 2 shows the Corpus-Based similarity measures. Hyperspace Analogue to Language (HAL) creates a semantic space from word co-occurrences. A word-by-word matrix is formed in which each matrix element is the strength of association between the word represented by the row and the word represented by the column. The user of the algorithm then has the option to drop low-entropy columns from the matrix. As the text is analyzed, a focus word is placed at the beginning of a ten-word window that records which neighboring words are counted as co-occurring. Matrix values are accumulated by weighting each co-occurrence inversely proportional to its distance from the focus word; closer neighboring words are thought to reflect more of the focus word's semantics and so are weighted higher. HAL also records word-ordering information by treating a co-occurrence differently depending on whether the neighboring word appeared before or after the focus word. Latent Semantic Analysis (LSA) BIB001 is the most popular technique of Corpus-Based similarity. LSA assumes that words that are close in meaning will occur in similar pieces of text. A matrix containing word counts per paragraph (rows represent unique words and columns represent each paragraph) is constructed from a large piece of text, and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of columns while preserving the similarity structure among rows. Words are then compared by taking the cosine of the angle between the two vectors formed by any two rows. Generalized Latent Semantic Analysis (GLSA) is a framework for computing semantically motivated term and document vectors.
It extends the LSA approach by focusing on term vectors instead of the dual document-term representation. GLSA requires a measure of semantic association between terms and a method of dimensionality reduction. The GLSA approach can combine any kind of similarity measure on the space of terms with any suitable method of dimensionality reduction. The traditional term-document matrix is used in the last step to provide the weights in the linear combination of term vectors. Explicit Semantic Analysis (ESA) BIB002 is a measure used to compute the semantic relatedness between two arbitrary texts. This Wikipedia-based technique represents terms (or texts) as high-dimensional vectors; each vector entry represents the TF-IDF weight between the term and one Wikipedia article. The semantic relatedness between two terms (or texts) is expressed by the cosine measure between the corresponding vectors.
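The core of LSA described above, SVD of a term-document count matrix followed by cosine comparison of the reduced term vectors, can be sketched with NumPy. The tiny vocabulary and count matrix are invented purely for illustration:

```python
import numpy as np

# Rows = terms, columns = documents (toy co-occurrence counts in which
# "cat" and "feline" appear in the same documents and "dog" does not).
terms = ["cat", "feline", "dog"]
X = np.array([[3., 1., 0.],
              [1., 3., 0.],
              [0., 0., 1.]])

# Truncated SVD: keep only the k strongest latent dimensions.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * S[:k]   # each row is a reduced term vector

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(term_vecs[0], term_vecs[1]))   # "cat" vs "feline": ~0.6
```

In a real LSA setting the matrix has tens of thousands of rows and the retained dimensionality k is on the order of a few hundred, as the LSA abstract above notes.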
A Survey of Text Similarity Approaches <s> Fig 2: Corpus-Based Similarity Measures <s> This paper presents a simple unsupervised learning algorithm for recognizing synonyms, based on statistical data acquired by querying a Web search engine. The algorithm, called PMI-IR, uses Pointwise Mutual Information (PMI) and Information Retrieval (IR) to measure the similarity of pairs of words. PMI-IR is empirically evaluated using 80 synonym test questions from the Test of English as a Foreign Language (TOEFL) and 50 synonym test questions from a collection of tests for students of English as a Second Language (ESL). On both tests, the algorithm obtains a score of 74%. PMI-IR is contrasted with Latent Semantic Analysis (LSA), which achieves a score of 64% on the same 80 TOEFL questions. The paper discusses potential applications of the new unsupervised learning algorithm and some implications of the results for LSA and LSI (Latent Semantic Indexing). <s> BIB001 </s> A Survey of Text Similarity Approaches <s> Fig 2: Corpus-Based Similarity Measures <s> Words and phrases acquire meaning from the way they are used in society, from their relative semantics to other words and phrases. For computers, the equivalent of "society" is "database," and the equivalent of "use" is "a way to search the database". We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts, we use the World Wide Web (WWW) as the database, and Google as the search engine. The method is also applicable to other search engines and databases. This theory is then applied to construct a method to automatically extract similarity, the Google similarity distance, of words and phrases from the WWW using Google page counts. The WWW is the largest database on earth, and the context information entered by millions of independent users averages out to provide automatic semantics of useful quality. 
We give applications in hierarchical clustering, classification, and language translation. We give examples to distinguish between colors and numbers, cluster names of paintings by 17th century Dutch masters and names of books by English novelists, the ability to understand emergencies and primes, and we demonstrate the ability to do a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in a mean agreement of 87 percent with the expert crafted WordNet categories. <s> BIB002 </s> A Survey of Text Similarity Approaches <s> Fig 2: Corpus-Based Similarity Measures <s> We present a method for measuring the semantic similarity of texts using a corpus-based measure of semantic word similarity and a normalized and modified version of the Longest Common Subsequence (LCS) string matching algorithm. Existing methods for computing text similarity have focused mainly on either large documents or individual words. We focus on computing the similarity between two sentences or two short paragraphs. The proposed method can be exploited in a variety of applications involving textual knowledge representation and knowledge discovery. Evaluation results on two different data sets show that our method outperforms several competing methods. <s> BIB003
The cross-language explicit semantic analysis (CL-ESA) is a multilingual generalization of ESA. CL-ESA exploits a document-aligned multilingual reference collection such as Wikipedia to represent a document as a language-independent concept vector. The relatedness of two documents in different languages is assessed by the cosine similarity between the corresponding vector representations. Pointwise Mutual Information - Information Retrieval (PMI-IR) BIB001 is a method for computing the similarity between pairs of words; it uses AltaVista's Advanced Search query syntax to calculate probabilities. The more often two words co-occur near each other on a web page, the higher is their PMI-IR similarity score. Second-order co-occurrence pointwise mutual information (SCO-PMI) BIB003 is a semantic similarity measure that uses pointwise mutual information to sort lists of important neighbor words of the two target words from a large corpus. The advantage of using SOC-PMI is that it can calculate the similarity between two words that do not co-occur frequently, because they co-occur with the same neighboring words. Normalized Google Distance (NGD) BIB002 is a semantic similarity measure derived from the number of hits returned by the Google search engine for a given set of keywords. Keywords with the same or similar meanings in a natural language sense tend to be "close" in units of Google distance, while words with dissimilar meanings tend to be farther apart. Specifically, the Normalized Google Distance between two search terms x and y is NGD(x, y) = (max{log f(x), log f(y)} - log f(x, y)) / (log M - min{log f(x), log f(y)}), where M is the total number of web pages searched by Google; f(x) and f(y) are the number of hits for search terms x and y, respectively; and f(x, y) is the number of web pages on which both x and y occur. If the two search terms x and y never occur together on the same web page, but do occur separately, the normalized Google distance between them is infinite.
If both terms always occur together, their NGD is zero. Extracting DIStributionally similar words using CO-occurrences (DISCO) assumes that words with similar meaning occur in similar contexts. Large text collections are statistically analyzed to obtain this distributional similarity. DISCO computes distributional similarity between words by using a simple context window of size ±3 words for counting co-occurrences. When the exact similarity of two words is required, DISCO simply retrieves their word vectors from the indexed data and computes the similarity according to the Lin measure . If the most distributionally similar word is required, DISCO returns the second-order word vector for the given word. DISCO has two main similarity measures, DISCO1 and DISCO2: DISCO1 computes the first-order similarity between two input words based on their collocation sets, and DISCO2 computes the second-order similarity between two input words based on their sets of distributionally similar words.
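Given page-hit counts, the NGD formula described above reduces to a few lines of Python. The hit counts in the example are made-up numbers standing in for real search-engine statistics:

```python
from math import log

def ngd(fx: float, fy: float, fxy: float, M: float) -> float:
    """Normalized Google Distance from hit counts f(x), f(y), f(x, y)
    and the total number of indexed pages M."""
    lx, ly, lxy = log(fx), log(fy), log(fxy)
    return (max(lx, ly) - lxy) / (log(M) - min(lx, ly))

# Terms that always co-occur are at distance zero.
print(ngd(1_000, 1_000, 1_000, 1e9))   # 0.0
# Terms that rarely co-occur together are farther apart.
print(ngd(50_000, 40_000, 10, 1e9))
```

The infinite-distance case (f(x, y) = 0) is not handled here; a production implementation would need to special-case a zero joint count before taking the logarithm.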
A Survey of Text Similarity Approaches <s> Knowledge-Based Similarity <s> A combined greeting card and decorative music box structure that includes a flat rectangular box formed from a resilient polymerized resin that is removably closed by a cover that on a first side supports a wind-up music producing mechanism within a confined space defined by the box and cover. A second side of the cover removably supports a greeting card. A panel bearing a decorative insignia is pivotally supported from the box, with the panel when in a first position overlying the cover and concealing the greeting card. When the panel is pivoted to a second position the music producing mechanism is actuated and the greeting card is visible. The panel is illustrated as a picture frame that removably supports a picture defined on a dimensionally stable sheet of material. <s> BIB001 </s> A Survey of Text Similarity Approaches <s> Knowledge-Based Similarity <s> This paper will focus on the semantic representation of verbs in computer systems and its impact on lexical selection problems in machine translation (MT). Two groups of English and Chinese verbs are examined to show that lexical selection must be based on interpretation of the sentences as well as selection restrictions placed on the verb arguments. A novel representation scheme is suggested, and is compared to representations with selection restrictions used in transfer-based MT. We see our approach as closely aligned with knowledge-based MT approaches (KBMT), and as a separate component that could be incorporated into existing systems. Examples and experimental results will show that, using this scheme, inexact matches can achieve correct lexical selection. <s> BIB002 </s> A Survey of Text Similarity Approaches <s> Knowledge-Based Similarity <s> This paper presents a new approach for measuring semantic similarity/distance between words and concepts. 
It combines a lexical taxonomy structure with corpus statistical information so that the semantic distance between nodes in the semantic space constructed by the taxonomy can be better quantified with the computational evidence derived from a distributional analysis of corpus data. Specifically, the proposed measure is a combined approach that inherits the edge-based approach of the edge counting scheme, which is then enhanced by the node-based approach of the information content calculation. When tested on a common data set of word pair similarity ratings, the proposed approach outperforms other computational models. It gives the highest correlation value (r = 0.828) with a benchmark based on human similarity judgements, whereas an upper bound (r = 0.885) is observed when human subjects replicate the same task. <s> BIB003 </s> A Survey of Text Similarity Approaches <s> Knowledge-Based Similarity <s> This paper generalizes the Adapted Lesk Algorithm of Banerjee and Pedersen (2002) to a method of word sense disambiguation based on semantic relatedness. This is possible since Lesk's original algorithm (1986) is based on gloss overlaps which can be viewed as a measure of semantic relatedness. We evaluate a variety of measures of semantic relatedness when applied to word sense disambiguation by carrying out experiments using the English lexical sample data of SENSEVAL-2. We find that the gloss overlaps of Adapted Lesk and the semantic distance measure of Jiang and Conrath (1997) result in the highest accuracy. <s> BIB004 </s> A Survey of Text Similarity Approaches <s> Knowledge-Based Similarity <s> This paper presents a method for measuring the semantic similarity of texts, using corpus-based and knowledge-based measures of similarity. Previous work on this problem has focused mainly on either large documents (e.g. text classification, information retrieval) or individual words (e.g. synonymy tests). 
Given that a large fraction of the information available today, on the Web and elsewhere, consists of short text snippets (e.g. abstracts of scientific documents, imagine captions, product descriptions), in this paper we focus on measuring the semantic similarity of short texts. Through experiments performed on a paraphrase data set, we show that the semantic similarity method out-performs methods based on simple lexical matching, resulting in up to 13% error rate reduction with respect to the traditional vector-based similarity metric. <s> BIB005
Knowledge-Based similarity is one of the semantic similarity measures; it identifies the degree of similarity between words using information derived from semantic networks BIB005 . WordNet BIB001 is the most popular semantic network in the area of measuring the Knowledge-Based similarity between words. WordNet is a large lexical database of English: nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept, and synsets are interlinked by means of conceptual-semantic and lexical relations. As shown in figure 3, Knowledge-Based similarity measures can be divided roughly into two groups: measures of semantic similarity and measures of semantic relatedness. Semantically similar concepts are deemed to be related on the basis of their likeness. Semantic relatedness, on the other hand, is a more general notion of relatedness, not specifically tied to the shape or form of the concept. In other words, semantic similarity is a kind of relatedness between two words, while relatedness covers a broader range of relationships between concepts, including relations such as is-a-kind-of, is-a-specific-example-of, is-a-part-of and is-the-opposite-of BIB004 . There are six measures of semantic similarity; three of them are based on information content: Resnik (res) , Lin (lin) and Jiang & Conrath (jcn) BIB003 . The other three measures are based on path length: Leacock & Chodorow (lch) , Wu & Palmer (wup) BIB002 and Path Length (path). The relatedness value in the res measure is equal to the information content (IC) of the Least Common Subsumer (the most informative subsumer). This means that the value will always be greater than or equal to zero. The upper bound on the value is generally quite large and varies depending upon the size of the corpus used to determine information content values.
The lin and jcn measures augment the information content of the Least Common Subsumer with the sum of the information content of concepts A and B themselves: the lin measure scales the information content of the Least Common Subsumer by this sum, while jcn takes the difference between this sum and the information content of the Least Common Subsumer. The lch measure returns a score denoting how similar two word senses are, based on the shortest path that connects the senses and the maximum depth of the taxonomy in which the senses occur. The wup measure returns a score denoting how similar two word senses are, based on the depth of the two senses in the taxonomy and that of their Least Common Subsumer. The path measure returns a score denoting how similar two word senses are, based on the shortest path that connects the senses in the is-a (hypernym/hyponym) taxonomy.
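The path-based measures above can be illustrated on a hand-built miniature is-a hierarchy. The taxonomy, node names, and helper functions are all hypothetical stand-ins for WordNet; path_sim follows the common 1/(path length + 1) convention, and wup_sim follows the Wu & Palmer definition:

```python
# Hypothetical miniature is-a hierarchy standing in for WordNet.
PARENT = {"entity": None, "animal": "entity",
          "dog": "animal", "cat": "animal", "poodle": "dog"}

def ancestors(c):
    chain = [c]
    while PARENT[c] is not None:
        c = PARENT[c]
        chain.append(c)
    return chain                       # concept up to the root

def depth(c):
    return len(ancestors(c))           # root has depth 1

def lcs(a, b):
    anc_a = set(ancestors(a))
    for c in ancestors(b):             # walk upward from b
        if c in anc_a:
            return c                   # deepest shared subsumer

def path_sim(a, b):
    common = depth(lcs(a, b))
    edges = (depth(a) - common) + (depth(b) - common)
    return 1 / (edges + 1)             # shorter path -> higher score

def wup_sim(a, b):
    return 2 * depth(lcs(a, b)) / (depth(a) + depth(b))

print(path_sim("dog", "cat"))   # 1/3: two edges via "animal"
print(wup_sim("dog", "cat"))    # 2*2/(3+3), i.e. 2/3
```

The information-content measures (res, lin, jcn) use the same Least Common Subsumer but replace node depth with corpus-derived IC values, so the lcs helper would be reused unchanged.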
A Survey of Text Similarity Approaches <s> Hybrid Similarity Measures <s> This paper presents a method for measuring the semantic similarity of texts, using corpus-based and knowledge-based measures of similarity. Previous work on this problem has focused mainly on either large documents (e.g. text classification, information retrieval) or individual words (e.g. synonymy tests). Given that a large fraction of the information available today, on the Web and elsewhere, consists of short text snippets (e.g. abstracts of scientific documents, imagine captions, product descriptions), in this paper we focus on measuring the semantic similarity of short texts. Through experiments performed on a paraphrase data set, we show that the semantic similarity method out-performs methods based on simple lexical matching, resulting in up to 13% error rate reduction with respect to the traditional vector-based similarity metric. <s> BIB001 </s> A Survey of Text Similarity Approaches <s> Hybrid Similarity Measures <s> Sentence similarity measures play an increasingly important role in text-related research and applications in areas such as text mining, Web page retrieval, and dialogue systems. Existing methods for computing sentence similarity have been adopted from approaches used for long text documents. These methods process sentences in a very high-dimensional space and are consequently inefficient, require human input, and are not adaptable to some application domains. This paper focuses directly on computing the similarity between very short texts of sentence length. It presents an algorithm that takes account of semantic information and word order information implied in the sentences. The semantic similarity of two sentences is calculated using information from a structured lexical database and from corpus statistics. 
The use of a lexical database enables our method to model human common sense knowledge and the incorporation of corpus statistics allows our method to be adaptable to different domains. The proposed method can be used in a variety of applications that involve text knowledge representation and discovery. Experiments on two sets of selected sentence pairs demonstrate that the proposed method provides a similarity measure that shows a significant correlation to human intuition <s> BIB002 </s> A Survey of Text Similarity Approaches <s> Hybrid Similarity Measures <s> We present a method for measuring the semantic similarity of texts using a corpus-based measure of semantic word similarity and a normalized and modified version of the Longest Common Subsequence (LCS) string matching algorithm. Existing methods for computing text similarity have focused mainly on either large documents or individual words. We focus on computing the similarity between two sentences or two short paragraphs. The proposed method can be exploited in a variety of applications involving textual knowledge representation and knowledge discovery. Evaluation results on two different data sets show that our method outperforms several competing methods. <s> BIB003 </s> A Survey of Text Similarity Approaches <s> Hybrid Similarity Measures <s> In this paper, we describe our system submitted for the semantic textual similarity (STS) task at SemEval 2012. We implemented two approaches to calculate the degree of similarity between two sentences. First approach combines corpus-based semantic relatedness measure over the whole sentence with the knowledge-based semantic similarity scores obtained for the words falling under the same syntactic roles in both the sentences. We fed all these scores as features to machine learning models to obtain a single score giving the degree of similarity of the sentences. Linear Regression and Bagging models were used for this purpose. 
We used Explicit Semantic Analysis (ESA) as the corpus-based semantic relatedness measure. For the knowledge-based semantic similarity between words, a modified WordNet based Lin measure was used. Second approach uses a bipartite based method over the WordNet based Lin measure, without any modification. This paper shows a significant improvement in calculating the semantic similarity between sentences by the fusion of the knowledge-based similarity measure and the corpus-based relatedness measure against corpus based measure taken alone. <s> BIB004 </s> A Survey of Text Similarity Approaches <s> Hybrid Similarity Measures <s> This paper describes the participation of the IRIT team to SemEval 2012 Task 6 (Semantic Textual Similarity). The method used consists of a n-gram based comparison method combined with a conceptual similarity measure that uses WordNet to calculate the similarity between a pair of concepts. <s> BIB005 </s> A Survey of Text Similarity Approaches <s> Hybrid Similarity Measures <s> We present the UKP system which performed best in the Semantic Textual Similarity (STS) task at SemEval-2012 in two out of three metrics. It uses a simple log-linear regression model, trained on the training data, to combine multiple text similarity measures of varying complexity. These range from simple character and word n-grams and common subsequences to complex features such as Explicit Semantic Analysis vector comparisons and aggregation of word similarity based on lexical-semantic resources. Further, we employ a lexical substitution system and statistical machine translation to add additional lexemes, which alleviates lexical gaps. Our final models, one per dataset, consist of a log-linear combination of about 20 features, out of the possible 300+ features implemented. <s> BIB006
Hybrid methods use multiple similarity measures; many studies have covered this area. Eight semantic similarity measures were tested in BIB001 . Two of these measures were corpus-based and the other six were knowledge-based. First these eight algorithms were evaluated separately, and then they were combined together. The best performance was achieved using a method that combines several similarity metrics into one. A method for measuring the semantic similarity between sentences or very short texts, based on semantic and word order information, was presented in BIB002 . First, semantic similarity is derived from a lexical knowledge base and a corpus. Second, the proposed method considers the impact of word order on sentence meaning: the derived word order similarity measures the number of different words as well as the number of word pairs in a different order. The authors of BIB003 presented a method named Semantic Text Similarity (STS). This method determines the similarity of two texts from a combination of semantic and syntactic information. They considered two mandatory functions (string similarity and semantic word similarity) and an optional function (common-word order similarity). The STS method achieved a very good Pearson correlation coefficient for 30 sentence pairs of data sets and outperformed the results obtained in BIB002 . The authors of BIB004 presented an approach that combines a corpus-based semantic relatedness measure over the whole sentence with the knowledge-based semantic similarity scores obtained for the words falling under the same syntactic roles in both sentences. All the scores were fed as features to machine learning models, such as linear regression and bagging models, to obtain a single score giving the degree of similarity between sentences.
This approach showed a significant improvement in calculating the semantic similarity between sentences by combining the knowledge-based similarity measure and the corpus-based relatedness measure, against the corpus-based measure taken alone. A promising correlation between manual and automatic similarity results was achieved in BIB005 by combining two modules. The first module calculates the similarity between sentences using N-gram based similarity, and the second module calculates the similarity between concepts in the two sentences using a concept similarity measure and WordNet. A system named UKP, with reasonable correlation results, was introduced in BIB006 ; it used a simple log-linear regression model based on training data to combine multiple text similarity measures. These measures were string similarity, semantic similarity, text expansion mechanisms, and measures related to structure and style. The UKP final models consisted of a log-linear combination of about 20 features, out of the more than 300 features implemented.
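The fusion step shared by these hybrid systems, several similarity scores combined by a trained regression model, can be sketched with ordinary least squares over a toy feature matrix. The features, weights, and gold scores below are fabricated for illustration, and plain linear regression is used here in place of the log-linear model the UKP system trained:

```python
import numpy as np

# Each row: scores from two hypothetical similarity measures
# (e.g. a string measure and a semantic measure) for one sentence pair.
features = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [0.5, 0.5],
                     [0.2, 0.9]])
# Gold similarity judgments, constructed so the target is an exact
# linear blend of the two measures (weights 0.3 and 0.7).
gold = features @ np.array([0.3, 0.7])

# Fit one weight per measure, then score an unseen pair by the blend.
weights, *_ = np.linalg.lstsq(features, gold, rcond=None)
score = np.array([0.8, 0.4]) @ weights
print(weights)   # recovers ~[0.3, 0.7]
print(score)     # 0.3*0.8 + 0.7*0.4 = 0.52
```

Real systems feed many more features (the UKP models used about 20) and evaluate the learned combination by correlation with human judgments rather than exact recovery.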
Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> I. INTRODUCTION <s> Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fueled by the recent adaptation of a variety of enabling wireless technologies such as RFID tags and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud centric vision for worldwide implementation of Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A Cloud implementation using Aneka, which is based on interaction of private and public Clouds is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at technological research community. <s> BIB001 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> I. INTRODUCTION <s> Abstract Originally initiated in Germany, Industry 4.0, the fourth industrial revolution, has attracted much attention in recent literatures. 
It is closely related with the Internet of Things (IoT), Cyber Physical System (CPS), information and communications technology (ICT), Enterprise Architecture (EA), and Enterprise Integration (EI). Despite the dynamic nature of the research on Industry 4.0, however, a systematic and extensive review of recent research on it has been unavailable. Accordingly, this paper conducts a comprehensive review on Industry 4.0 and presents an overview of the content, scope, and findings of Industry 4.0 by examining the existing literatures in all of the databases within the Web of Science. Altogether, 88 papers related to Industry 4.0 are grouped into five research categories and reviewed. In addition, this paper outlines the critical issue of the interoperability of Industry 4.0, and proposes a conceptual framework of interoperability regarding Industry 4.0. Challenges and trends for future research on Industry 4.0 are discussed. <s> BIB002 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> I. INTRODUCTION <s> The increasing demand for food, both in terms of quantity and quality, has raised the need for intensification and industrialisation of the agricultural sector. The “Internet of Things” (IoT) is a highly promising family of technologies which is capable of offering many solutions towards the modernisation of agriculture. Scientific groups and research institutions, as well as the industry, are in a race trying to deliver more and more IoT products to the agricultural business stakeholders, and, eventually, lay the foundations to have a clear role when IoT becomes a mainstream technology. At the same time Cloud Computing, which is already very popular, and Fog Computing provide sufficient resources and solutions to sustain, store and analyse the huge amounts of data generated by IoT devices.
The management and analysis of IoT data (“Big Data”) can be used to automate processes, predict situations and improve many activities, even in real-time. Moreover, the concept of interoperability among heterogeneous devices inspired the creation of the appropriate tools, with which new applications and services can be created and give an added value to the data flows produced at the edge of the network. The agricultural sector was highly affected by Wireless Sensor Network (WSN) technologies and is expected to be equally benefited by the IoT. In this article, a survey of recent IoT technologies, their current penetration in the agricultural sector, their potential value for future farmers and the challenges that IoT faces towards its propagation is presented. <s> BIB003 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> I. INTRODUCTION <s> Nowadays, the railway industry is in a position where it is able to exploit the opportunities created by the IIoT (Industrial Internet of Things) and enabling communication technologies under the paradigm of Internet of Trains. This review details the evolution of communication technologies since the deployment of GSM-R, describing the main alternatives and how railway requirements, specifications and recommendations have evolved over time. The advantages of the latest generation of broadband communication systems (e.g., LTE, 5G, IEEE 802.11ad) and the emergence of Wireless Sensor Networks (WSNs) for the railway environment are also explained together with the strategic roadmap to ensure a smooth migration from GSM-R. Furthermore, this survey focuses on providing a holistic approach, identifying scenarios and architectures where railways could leverage better commercial IIoT capabilities. After reviewing the main industrial developments, short and medium-term IIoT-enabled services for smart railways are evaluated. 
The latest research on predictive maintenance, smart infrastructure, advanced monitoring of assets, video surveillance systems, railway operations, Passenger and Freight Information Systems (PIS/FIS), train control systems, safety assurance, signaling systems, cyber security and energy efficiency is then analyzed. Overall, it can be stated that the aim of this article is to provide a detailed examination of the state-of-the-art of different technologies and services that will revolutionize the railway industry and will allow for confronting today's challenges. <s> BIB004 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> I. INTRODUCTION <s> Internet of Things (IoT) is an emerging domain that promises ubiquitous connection to the Internet, turning common objects into connected devices. The IoT paradigm is changing the way people interact with things around them. It paves the way for creating pervasively connected infrastructures to support innovative services and promises better flexibility and efficiency. Such advantages are attractive not only for consumer applications, but also for the industrial domain. Over the last few years, we have been witnessing the IoT paradigm making its way into the industry marketplace with purposely designed solutions. In this paper, we clarify the concepts of IoT, Industrial IoT, and Industry 4.0. We highlight the opportunities brought in by this paradigm shift as well as the challenges for its realization. In particular, we focus on the challenges associated with the need of energy efficiency, real-time performance, coexistence, interoperability, and security and privacy. We also provide a systematic overview of the state-of-the-art research efforts and potential research directions to solve Industrial IoT challenges. <s> BIB005 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> I.
INTRODUCTION <s> The research study aims to examine and evaluate the impacts of Internet of Things (IoT) on Global Supply Chain environments. The Internet of Things (IoT) phenomenon, being a part of a digital revolution, is currently considered as a quite profitable factor for the industries and the markets worldwide. Adopting and incorporating the latest technologies increases competitiveness and develops new ways of communication. The IoT pervades and revolutionizes the supply chain sector, influencing its management and structure. It is proposed that its impact on Supply Chain Management (SCM) is strong and instrumental, promising profits and innovations. IoT systems are increasingly used by companies worldwide, which enjoy their benefits: delivery service improvement, financial profits, cost reduction, wastage minimization, equipment monitoring and prevention, and relief for retailers and consumers. Although the researchers are at the preliminary stages of the research study, this paper aims to adopt an in-depth secondary data collection method through conducting a critical literature review within the field. Through investigative understanding, the researchers aim to adopt a detailed case study approach through service based supply chain providers using a mixed research method. The outcomes and findings will further be analyzed to examine the proposed knowledge framework developed within the research. Even though there is a positive approach for the IoT expansibility dominating the public view, there are still some problems and key challenges that should be taken into consideration. The complicated nature of the IoT brings to light privacy, security and cost issues that should be faced and solved. However, considering its immaturity, time will bring opportunities for problem solving and further development. <s> BIB006 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> I.
INTRODUCTION <s> The latest advances in industry have been accomplished within the 4th Industrial Revolution, mostly noted as Industrie 4.0. This industrial revolution is boosted by the application of Internet of Things (IoT) technologies into the industrial contexts, also known as Industrial Internet of Things (IIoT), which is being supported by the implementation of Cyber-Physical Production Systems (CPPS). In this context, most of the existing work concentrates on developing IIoT models and CPPS architectures, lacking the identification of validation requirements of these platforms. By rushing into releasing state-of-the-art IIoT applications, developers usually forget to implement methodologies to validate these applications. In this paper, we propose a list of requirements for IIoT platform validation, based on its architecture, as well as on requirements established by the industrial reality. A CPPS case study is presented, in order to illustrate some of these requisites and how validation of this type of system could be achieved. <s> BIB007
The Internet of Things (IoT) BIB001 refers to a system of smart devices which are connected to each other through the Internet. The basic structure of IoT systems involves the use of a large number of smart devices which are able to acquire, process, transmit and receive data between one another, thereby enabling us to reliably monitor and precisely control any environment, control system or device through this system of interconnected smart devices. With forecasts predicting an estimated 28.5 billion network-connected devices to become active by 2022 , the IoT technology is poised to make a total economic impact between $3.9 trillion and $11.1 trillion per year in 2025 . While most of the IoT systems developed until now have been consumer-centric, the disruptive nature of this technology has enabled its adoption in a gamut of industrial settings, thus leading to the development of Industrial Internet of Things (IIoT) technology BIB005 . IIoT technology, in essence, refers to a system of interconnected smart devices in an industrial setting which connects industrial resources, including sensors, actuators, controllers, and machines, with each other as well as with intelligent control systems which analyze the acquired data and optimize the ongoing industrial processes in order to improve execution speed, reduce involved costs, and dynamically control the industrial environment BIB005 . One of the most important reasons behind the meteoric rise of IIoT systems in various industries is that IIoT systems can lead to a significant improvement in the efficiency, throughput, and response time of operations inside these industries BIB002 . IIoT has already revolutionized companies in many major industries across the globe, including the mining industry, where IIoT systems have led to the installation of wireless access points in mining tunnels and RFID tracking technology has helped companies in tracking vehicles, leading to an increase in production levels by 400% .
Proposed IIoT systems in agricultural settings can help farmers in nutrient monitoring as well as automated irrigation to improve crop yield BIB003 . The medical industry can also benefit from the capabilities of Industrial IoT systems where emergency services can access data from patients, ambulances and doctors to help all stakeholders in making informed decisions and improve resource utilization . Pilot projects in China have successfully implemented an NB-IoT (Narrow Band IoT) system for smart electrical meters which allows real-time collection of power consumption data thereby enabling the energy grid officials to improve the electricity supply strategy in any area . Similarly, NB-IoT smart parking systems have been deployed in cities to help drivers easily find parking spaces while integration of this system with payment solutions has led to automated transaction authorization for parking payment which has subsequently improved utilization of parking bays . The railway industry can also leverage the power of IIoT solutions to improve the functioning of surveillance systems, signalling systems, predictive maintenance and Passenger or Freight Information Systems in order to improve services and safety BIB004 . Supply Chain Management (SCM) can also benefit by adopting IIoT based systems which will directly enhance tracking and traceability while also aiding in the optimization of shipment routes based on rapidly changing customer requirements BIB006 .
arXiv:1912.00595v1 [cs.NI] 2 Dec 2019
Fig. 1. Edge, Fog, and Cloud Tiers
While the IIoT shows immense potential as a transformative technology, it is important to know the critical requirements that must be validated and verified in the design of IIoT systems so as to maximize the efficiency and performance of these systems BIB007 , . These requirements arise from the challenges often faced by Cyber Physical Systems (CPS) and they include the following:
Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> 1) Scalability 2) Fault Tolerance or Reliability 3) Data Security 4) Service Security 5) Functional Security 6) Data Production and Consumption Proximity <s> The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction. <s> BIB001 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> 1) Scalability 2) Fault Tolerance or Reliability 3) Data Security 4) Service Security 5) Functional Security 6) Data Production and Consumption Proximity <s> In recent years, the number of Internet of Things (IoT) devices/sensors has increased to a great extent. To support the computational demand of real-time latency-sensitive applications of largely geo-distributed IoT devices/sensors, a new computing paradigm named “Fog computing” has been introduced. Generally, Fog computing resides closer to the IoT devices/sensors and extends the Cloud-based computing, storage and networking facilities. 
In this chapter, we comprehensively analyse the challenges in Fogs acting as an intermediate layer between IoT devices/sensors and Cloud datacentres and review the current developments in this field. We present a taxonomy of Fog computing according to the identified challenges and its key features. We also map the existing works to the taxonomy in order to identify current research gaps in the area of Fog computing. Moreover, based on the observations, we propose future directions for research. <s> BIB002 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> 1) Scalability 2) Fault Tolerance or Reliability 3) Data Security 4) Service Security 5) Functional Security 6) Data Production and Consumption Proximity <s> With the various technological advances, mobile devices are not just being used as a means to make voice calls but are being used to accomplish a variety of tasks. Mobile devices are being envisioned to practically accomplish any task which could be done on a computer. This is hurdled by the limited computational resources available with the mobile devices due to their portable size. With the mobile devices being connected to the Internet, leveraging cloud services is being seen as a promising solution to overcome this hurdle. Computationally intensive tasks can be offloaded to the Cloud servers. However, owing to the latency and cost associated with using cloud services, edge devices (termed cloudlets) stationed near the mobile devices are being seen as a prospective alternative to replace/assist the Cloud services. The mobile devices have an easier access to the cloudlets being situated in their vicinity and can offload their task requests to them to be served at a lower cost. This paper considers a network of such connected cloudlets which provide service to the mobile devices in a given area. We address the issue of task assignment in such a scenario (i.e.
which cloudlet serves which mobile device) aimed towards improving the quality of service experienced by the mobile devices in terms of minimizing the latency. Through numerical simulations we demonstrate the performance gains of the proposed task assignment scheme showing lower latency as compared to the traditional scheme for task assignment. <s> BIB003
With the rise in computational power being offered by systems in recent years, the focus of most industries has shifted towards garnering practical and useful patterns from their data, which has been aided by the rapid development in statistical analysis and learning-based algorithms. Today, industries that are making use of IIoT solutions want to utilize the massive amount of data being generated to collect useful insights which can help in reducing unplanned down-times, improving production efficiency, lowering energy consumption, etc. However, in order to process such massive amounts of data, IIoT systems generally require cloud computing services, which often experience large round-trip delays and poor Quality of Service (QoS) as a large amount of data needs to be transferred to centralized data-centres for computation BIB003 . Since most sensors and data acquisition devices in IIoT systems operate at the periphery of the network, most of the data is produced there, which implies that processing it at the edge of the network would be more efficient BIB001 . Therefore, efforts in shifting computational power towards the periphery of the network have given rise to the edge and fog computing paradigms. Edge Computing refers to the computing paradigm in which computations are performed at the edge of the network instead of the core of the network. In this scenario, the "edge" refers to any resource located on any network path between data acquisition devices (situated near the periphery of the network) and the cloud data-centre (situated at the core of the network) BIB001 . The basis of the edge computing paradigm is that computations should be done on the "edge", in proximity to the data sources, which avoids the latency associated with data transfer to the network's core.
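The latency argument above can be made concrete with a toy response-time model. The following is a minimal sketch; the bandwidths, propagation delays, and processing times are purely illustrative assumptions, not measurements from any real IIoT deployment:

```python
# Toy model of total response time for processing a sensor batch
# either at a nearby edge node or at a distant cloud data-centre.
# All parameter values below are illustrative assumptions.

def round_trip_ms(payload_mb, bandwidth_mbps, one_way_prop_ms, processing_ms):
    """Upload time + two-way propagation + processing time, in milliseconds."""
    transfer_ms = payload_mb * 8 / bandwidth_mbps * 1000
    return transfer_ms + 2 * one_way_prop_ms + processing_ms

# A 5 MB batch: the edge node has a slower processor but sits on the LAN,
# while the cloud is faster per task but far away, behind a thinner uplink.
edge_ms = round_trip_ms(5, bandwidth_mbps=1000, one_way_prop_ms=2, processing_ms=50)
cloud_ms = round_trip_ms(5, bandwidth_mbps=100, one_way_prop_ms=40, processing_ms=20)

print(f"edge:  {edge_ms:.0f} ms")   # 94 ms
print(f"cloud: {cloud_ms:.0f} ms")  # 500 ms
```

Under these assumed numbers the edge wins despite its slower processor because the data-transfer term dominates; the comparison would only flip for workloads where per-task computation, rather than data movement, is the bottleneck.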
The Fog Computing paradigm is similar in nature to edge computing in that it also has a decentralized architecture for computation, but with the fundamental difference that Fog Computing can be extended to the core of the network as well BIB002 . This means that resources located at both the edge and the core can be used for computations; consequently, fog computing can aid in the development of multi-tier solutions which can offload service demand to the core of the network as the load increases BIB002 . However, in most fog computing systems, the computational power is concentrated in the LAN resources, which are closer to the data sources and further away from the network core, thus reducing the latency associated with data transfer to the core, as in edge computing. Therefore, the fundamental difference between the edge and fog computing paradigms lies in the location where the computational power and intelligence reside. In the case of edge computing, this computational power is concentrated at the edge of the network, usually in powerful embedded devices like wireless access points or bridges, whereas in the case of fog computing, the compute power usually resides in the LAN resources. The rest of the paper is organized as follows: Section II discusses the background of edge and fog computing systems and how these paradigms address the requirements of modern IIoT systems. Section III describes various applications of edge computing in industrial settings. Section IV elaborates on fog computing applications. In Section V we present several outstanding issues and challenges with these computing paradigms that can be interpreted as future directions for research in this domain. Finally, in Section VI we conclude with the salient points of this paper.
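The multi-tier offloading behaviour described above can be sketched with a simple greedy policy: each task goes to whichever tier (edge, fog LAN, or cloud) is expected to finish it soonest, so queues near the data sources fill first and demand spills toward the core only as the load grows. The node names, latencies, service times, and the policy itself are hypothetical, chosen only to illustrate the idea:

```python
# Greedy multi-tier offloading sketch. All node parameters are assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    latency_ms: float   # one-way network latency to reach this node
    service_ms: float   # time this node takes to process one task
    queue: list = field(default_factory=list)

    def expected_completion_ms(self):
        # wait behind queued tasks + own service time + two-way network latency
        return (len(self.queue) + 1) * self.service_ms + 2 * self.latency_ms

def offload(task, nodes):
    """Place the task on the node expected to finish it soonest."""
    best = min(nodes, key=lambda n: n.expected_completion_ms())
    best.queue.append(task)
    return best.name

tiers = [Node("edge", 2, 30), Node("fog", 8, 15), Node("cloud", 50, 5)]
placements = [offload(f"task{i}", tiers) for i in range(6)]
print(placements)  # the distant cloud is never chosen for this small burst
```

With these numbers the first tasks land on the edge and fog tiers; the cloud, despite its fast processors, only becomes attractive once the nearer queues are long enough to outweigh its round-trip cost.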
Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> II. RELEVANT COMPUTING PARADIGMS AND <s> The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as "the fog". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation. <s> BIB001 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> II. RELEVANT COMPUTING PARADIGMS AND <s> The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction. <s> BIB002 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> II. 
RELEVANT COMPUTING PARADIGMS AND <s> Abstract With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of security-critical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing. <s> BIB003
REQUIREMENTS The edge computing paradigm is a computing technology which enables data to be processed almost exclusively on the "edge" of the network, which refers to locations between the end devices (like sensors, controllers, and actuators) and the centralized cloud servers. The rationale behind the development of this technology is that computations performed closer to the end devices will lead to a lower latency in the system. This is because the system does not need to transfer data between edge devices and central cloud servers, as the computations have been offloaded to closer locations on the edge. Therefore, in edge computing systems, edge devices can not only request content and services from the cloud servers but can also perform computational offloading, caching, storage, and processing, thereby making the edge devices both data producers and consumers BIB002 . The fog computing paradigm can be understood as an extension of the traditional cloud computing model wherein additional computational, data handling, and networking resources (nodes) are placed at locations on the network which are in close proximity to the end devices BIB001 . The consequence of this extension is that processes involving data management, data processing, networking, and storage can occur not only on the centralized cloud servers, but also on the connections between end devices and the cloud servers BIB003 . Fog computing, therefore, can be extremely useful for low latency applications as well as applications that generate an enormous amount of data that cannot be practically transferred to cloud servers in real-time due to bandwidth constraints [20] . As discussed in the previous section, there are many requirements which cyber physical systems need to satisfy in order to be viable for real-world operations and applications.
These include the following: 1) Scalability, which ensures that increased data transfer between nodes does not degrade latency or response time. 2) Fault tolerance and reliability, which guarantee that the system functions normally under variable external factors such as high load conditions. 3) Data security, which ensures that the system is resistant to external attacks attempting to steal confidential information stored in the system or network.
Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> 4) <s> The wide application of Internet of Things (IoT) in industrial automation encourages the emergence of a new paradigm of industrial IoT systems, wireless control system (WCS), where the system and/or control information is delivered over wireless channels. In practical systems, WCSs would consist of multiple control-loops in general, the resource competition among which would seriously increase mutual interferences and transmission collisions, making it difficult to provide the required transmission reliability for the control strategy. To address this issue, we design the control strategy together with the hybrid cooperative transmission scheme for multiloop WCSs in a proactive way. We first define the overall system cost function to explore the impacts of standard linear quadratic regulator control cost and wireless transmission reliability on the control performance. In order to further minimize the overall system cost while guaranteeing the control stability, we then propose a control performance aware cooperative transmission scheme, which is formulated as a constrained optimization problem. Decomposition method and heuristic algorithms are designed based on the feature of network structure to solve the formulated mixed integer nonlinear programming problem efficiently. Finally, simulation results demonstrate that by using the proposed strategy, the overall system cost is significantly reduced, decreasing by 78% and 82% compared to the cases without considerations of system dynamics and without cooperative transmission, respectively. <s> BIB001 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> 4) <s> The rapid developments of the Internet of Things (IoT) and smart mobile devices in recent years have been dramatically incentivizing the advancement of edge computing.
On the one hand, edge computing has provided great assistance for lightweight devices to accomplish complicated tasks in an efficient way; on the other hand, its hasty development has led to security threats being largely neglected in edge computing platforms and their enabled applications. In this paper, we provide a comprehensive survey on the most influential and basic attacks as well as the corresponding defense mechanisms that have edge computing specific characteristics and can be practically applied to real-world edge computing systems. More specifically, we focus on the following four types of attacks that account for 82% of the edge computing attacks recently reported by Statista: distributed denial of service attacks, side-channel attacks, malware injection attacks, and authentication and authorization attacks. We also analyze the root causes of these attacks, present the status quo and grand challenges in edge computing security, and propose future research directions. <s> BIB002
Service security to make the system resistant to external attacks that attempt to disrupt the service provided by the system to the industry, such as Denial-of-Service (DoS) attacks or Blackhole attacks. 5) Functional security so that physical accidents such as fires, explosions, or leaks do not occur at any time, especially in industries handling potentially hazardous substances, such as nuclear plants, chemical plants, and oil rigs. 6) Data production and consumption proximity, which ensures that the devices collecting the data and the systems processing the data are close to each other over the network to reduce latency. In order to realize the strengths offered by the edge and fog computing paradigms, IIoT systems must be designed in accordance with the network structures of these paradigms, since these paradigms adhere to all the requirements of cyber physical systems: 1) Edge and fog computing based systems are scalable, since increased data transfer between nodes can be addressed by introducing additional edge devices to compensate for the added computational load without degrading the network's latency; these devices function in proximity to end devices and hence do not increase data transfer delays over the network. 2) Edge and fog computing systems are reliable and fault tolerant, especially when compared with cloud-based systems: faults in centralized cloud servers would result in a total loss of service, whereas the decentralized nature of edge and fog computing systems ensures that even if some of the computational nodes fail, the remaining healthy nodes can still maintain partial service. Furthermore, if the computational load of the failed nodes can be offloaded to the remaining healthy nodes, the system can still run full service while corrective action is undertaken.
3) Edge and fog computing systems maintain data security within the system due to data decentralization, which means that if an adversary wants to breach the system, it would need to breach each one of the large number of decentralized computing nodes in order to collect the entire system's data. 4) Edge and fog computing systems maintain service security by using advanced defense mechanisms such as per-packet-based detection, data perturbation, and isolation networks for the identification of and defense against attacks BIB002 . 5) Edge and fog computing systems ensure functional security since they can be used to create extremely stable and robust multi-loop control systems for functionally sensitive industrial operations such as temperature control BIB001 . 6) Edge and fog computing systems were developed with the rationale that data consumption (processing, storing, caching, etc.) and production are always in proximity, which is ensured by the fundamental structure of these systems, where computational nodes are located on the edges of the network, which are in close proximity to the end devices at the periphery of the network. The distributed nature of edge and fog computing systems leads to several advantages in terms of reduced communication times and improved reliability, which makes these systems especially useful in a variety of industrial settings that require reliable, latency-sensitive networks for process automation. By realizing the inherent advantages of these paradigms, a large number of industries have started to utilize these paradigms in their system designs, and we shall look at several such use cases in the following sections of this paper.
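The fault-tolerance argument in point 2) can be sketched in a few lines: when one computing node fails, its load is spread over the remaining healthy nodes so that service continues while corrective action is taken. The node names, load figures, and even-split policy below are hypothetical illustrations, not a prescription from any particular system:

```python
# When a computing node fails, its tasks are redistributed evenly over the
# remaining healthy nodes so that service continues. Loads are hypothetical.

def redistribute(loads, failed):
    """Return new per-node loads after spreading the failed node's tasks."""
    healthy = {n: l for n, l in loads.items() if n != failed}
    share = loads[failed] / len(healthy)
    return {n: l + share for n, l in healthy.items()}

loads = {"node_a": 40, "node_b": 30, "node_c": 20}   # tasks per node
after = redistribute(loads, "node_a")                # node_a goes down
print(after)  # {'node_b': 50.0, 'node_c': 40.0}

# total work is preserved: no tasks are lost, only relocated
assert sum(after.values()) == sum(loads.values())
```

A real scheduler would weight the split by the remaining nodes' spare capacity rather than dividing evenly, but the invariant is the same: the total load is conserved across the healthy nodes.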
Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> A. Manufacturing Industry <s> The article focuses on the development of an OPC UA (object linking and embedding (OLE) for process control unified architecture) server. One of the major concerns regarding this new specification is the migration from the old component object model (COM) to the new UA specification. From the various migration strategies described in this article, the authors' server represents a special adapter software solution which aggregates several COM servers, one for each flexible manufacturing system (FMS) modelled in the address space of the UA server. The article focuses on the advantages introduced by the new specification, like the unified modelling capability and the extensible meta model. The address space is exemplified by means of a screw fitting station, which is part of the flexible line, on which the UA server has been tested. From the four generally accepted use cases, the server implements the first two: observation and control. They are mainly supported through the variables and the methods of the address space. During the testing phase, the minimum sampling interval regarding the communication with the underlying COM servers has been determined for different numbers of FMSs. As a result, the special adapter software solution is a fast but also well-performing approach, which works well even in very complex environments. <s> BIB001 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> A. Manufacturing Industry <s> One of the primary requirements in many cyber-physical systems (CPS) is that the sensor data derived from the physical world should be disseminated in a timely and reliable manner to all interested collaborative entities.
However, providing reliable and timely data dissemination services is especially challenging for CPS since they often operate in highly unpredictable environments. Existing network middleware has limitations in providing such services. In this paper, we present a novel publish/subscribe-based middleware architecture called Real-time Data Distribution Service (RDDS). In particular, we focus on two mechanisms of RDDS that enable timely and reliable sensor data dissemination under highly unpredictable CPS environments. First, we discuss the semantics-aware communication mechanism of RDDS that not only reduces the computation and communication overhead, but also enables the subscribers to access data in a timely and reliable manner when the network is slow or unstable. Further, we extend the semantics-aware communication mechanism to achieve robustness against unpredictable workloads by integrating a control-theoretic feedback controller at the publishers and a queueing-theoretic predictor at the subscribers. This integrated control loop provides Quality-of-Service (QoS) guarantees by dynamically adjusting the accuracy of the sensor models. We demonstrate the viability of the proposed approach by implementing a prototype of RDDS. The evaluation results show that, compared to baseline approaches, RDDS achieves highly efficient and reliable sensor data dissemination as well as robustness against unpredictable workloads. <s> BIB002 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> A. Manufacturing Industry <s> In recent years, there have been great advances in industrial Internet of Things (IIoT) and its related domains, such as industrial wireless networks (IWNs), big data, and cloud computing. These emerging technologies will bring great opportunities for promoting industrial upgrades and even allow the introduction of the fourth industrial revolution, namely, Industry 4.0. 
In the context of Industry 4.0, all kinds of intelligent equipment (e.g., industrial robots) supported by wired or wireless networks are widely adopted, and both real-time and delayed signals coexist. Therefore, based on the advancement of software-defined networks technology, we propose a new concept for industrial environments by introducing software-defined IIoT in order to make the network more flexible. In this paper, we analyze the IIoT architecture, including physical layer, IWNs, industrial cloud, and smart terminals, and describe the information interaction among different devices. Then, we propose a software-defined IIoT architecture to manage physical devices and provide an interface for information exchange. Subsequently, we discuss the prominent problems and possible solutions for software-defined IIoT. Finally, we select an intelligent manufacturing environment as an assessment test bed, and implement the basic experimental analysis. This paper will open a new research direction of IIoT and accelerate the implementation of Industry 4.0. <s> BIB003 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> A. Manufacturing Industry <s> Edge computing extends the capabilities of computation, network connection, and storage from the cloud to the edge of the network. It enables the application of business logic between the downstream data of the cloud service and the upstream data of the Internet of Things (IoT). In the field of Industrial IoT, edge computing provides added benefits of agility, real-time processing, and autonomy to create value for intelligent manufacturing. With the focus on the concept of edge computing, this article proposes an architecture of edge computing for IoT-based manufacturing. It also analyzes the role of edge computing from four aspects including edge equipment, network communication, information fusion, and cooperative mechanism with cloud computing. 
Finally, we give a case study to implement the active maintenance based on a prototype platform. This article aims to provide a technical reference for the deployment of edge computing in the smart factory. <s> BIB004
To understand the applications of edge computing in manufacturing, we consider the system architecture for a manufacturing setup as presented in Fig. 3 . After describing this architecture, a case study on the implementation of an active maintenance system on a prototype platform is presented. Finally, this subsection concludes with a summary of the tests and results from this case study, as presented in BIB004 . 1) System Architecture: As depicted in Fig. 3 , the architecture is divided into four domains as follows: a. The application domain provides comprehensive oversight of the entire manufacturing system to aid in its active administration. This oversight includes services such as monitoring of data flow and network health, as well as the capacity to control the system. The application domain therefore allows the system to offer flexible, generalized, and interoperable intelligent applications while also helping maintain service security. b. The data domain provides services such as data cleaning, feature extraction, and intelligent inference, which enable the system to optimize its operations and improve throughput and efficiency. Another important feature of this domain is that it allows end nodes to access data quickly, owing to the proximity of the edge computing node to the end devices, which aids in generating real-time responses to specific events. It is therefore a critical part of dynamically controlled manufacturing systems. c. The network domain, in essence, connects the end devices with the data platform and utilizes the Software Defined Networking (SDN) architecture BIB003 to manage operations in the control plane and network transmission.
A Time-Sensitive Networking (TSN) protocol is also employed within this domain to handle time-sensitive information and to process network-related information in sequence. This domain also offers universal standards for sustaining and supervising the time-sensitive nodes, making it a critical part of the overall system architecture. d. The device domain refers to the devices located or embedded within field apparatus such as machine tools, controllers, sensors, actuators, and robots. This domain must sustain an infrastructure for flexible communication models that supports a variety of communication protocols, maintaining nodes which dynamically change the system's execution strategies based on the inputs obtained from the sensors. On the edge nodes, the information model is typically built with popular protocols such as OPC UA BIB001 and Data Distribution Service (DDS) BIB002 . Finally, the unified semantics of information communication are realized within this domain of the system architecture, which is also responsible for maintaining data privacy and security. 2) Active Maintenance Case Study: With the proliferation of cyber-physical systems, a wide variety of industrial projects are being migrated to edge computing based frameworks because of the improved efficiency, ease of maintenance, and real-time adaptability offered by this computing paradigm. We review a case study on a customized production line for candy packaging, as entailed in . In this study, a private cloud was used to serve customer orders. To make stable, high-speed communications possible, an ad-hoc network was built connecting the edge nodes. Furthermore, to achieve proper exchange of information, a standardized version of the DDS protocol and Ethernet were integrated before the deployment of the system. The functioning of the system can be summarized as: i.
Candy packaging tasks were associated with each robot and these tasks were also linked to the cloud. After receiving their assigned tasks, the robots were required to pick up the particular candy assigned to them and place it into the relevant open packaging. In this operation, the robots represented the backbone network nodes. ii. The system was also capable of shifting nodes to different positions on the production line in case of failures. Therefore, a multi-agent system was established to improve the self-governing functionality in this scenario. iii. The agents of the system, physically represented by robots, were independent and self-directed, meaning that their objectives and behaviour were not constrained by other agents of the system. iv. This multi-agent system was deployed to complete tasks efficiently by assigning various tasks and procedures to different agents. v. The Contract Net Protocol (CNP) was used to assign tasks to agents using techniques such as winning modes, bidding, and open tendering. vi. By means of contests and discussions, the agents were able to bargain and resolve their conflicts, so this self-organized system could efficiently complete the assigned tasks. The implementation of this scenario was made possible with various setups, which include the following: i. With the help of the Hadoop architecture, a distributed data processing system was built wherein, at the local database level, real-time mining and analysis were performed with Hadoop MapReduce and the Hadoop Distributed File System (HDFS). ii. Information such as machine status and logs constituted the sensory data, which was used to create a reasoning-based model that was loaded onto a Raspberry Pi system. iii. On the Raspberry Pi, an OPC UA server was set up to perform pre-processing tasks on the transmission data acquired from different sensory devices.
This data was raw in nature and hence had to be transmitted safely and reliably, which was made possible by the use of the OPC UA server. iv. To integrate the data received from multiple sources, a semantic model was also built which restructured the data to maintain the consistency, accuracy, and merit of the information. This semantic model used data fusion to generate features from the acquired data. Finally, these features were used as input to the reasoning-based model. 3) Tests performed: To estimate the difference in performance obtained by using an edge computing based system instead of a centralized cloud computing system, a cloud-based system was also set up. This system had a centralized control server which managed the different agents of the system. To test the operation time of the systems, both were tasked with completing the same orders under similar distributions of candy types. The number of candies to be packed was varied and the average robot operation completion time was recorded for both systems. The results are summarized in the following two points: i. With an increase in order quantity, the self-organized version built on edge nodes is far more efficient and agile than the centralized system once the number of orders rises above 2000, as the operation completion time for the self-organized system becomes consistently lower than that of the centralized system. ii. With a stable production line, the backbone network speed in the centralized version was observed to be around 16 Mb/s. However, after the deployment of the self-organized system, the backbone network speed dropped to around 5-6 Mb/s, which represents a roughly 65% drop in speed. The results of this study suggest that a decentralized and self-organizing system can be extremely useful in mass-production scenarios due to the reduced operation completion time.
While the study shows that a decentralized system reduces transmission speeds within the backbone network, the system can still function efficiently, as the reduced operation completion time outweighs the drop in backbone network speed, thereby increasing the effective system throughput.
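The CNP-based task assignment used by the packaging robots above can be illustrated with a minimal bidding round. The `Agent` class, the cost function, and the positions below are hypothetical, introduced only to sketch how open tendering and bidding select a winner; they are not the implementation used in the study.

```python
class Agent:
    """A packaging robot that bids on announced tasks (illustrative)."""
    def __init__(self, name, position, load=0):
        self.name = name
        self.position = position  # location along the production line
        self.load = load          # tasks already queued on this robot

    def bid(self, task_position):
        # Lower cost wins: distance to the candy plus a workload penalty.
        return abs(self.position - task_position) + 2 * self.load

def allocate(task_position, agents):
    """Manager announces a task; the lowest-cost bidder wins the contract."""
    winner = min(agents, key=lambda a: a.bid(task_position))
    winner.load += 1
    return winner

robots = [Agent("robot-1", 0), Agent("robot-2", 5), Agent("robot-3", 9)]
for pos in [1, 6, 8]:
    # The nearest lightly-loaded robot wins each round in turn.
    print(f"task at {pos} -> {allocate(pos, robots).name}")
```

Because each agent computes its own bid, no central server needs global knowledge of the line, which mirrors the self-organized behaviour reported in the case study.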
Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> D. Distributed Synchronization Services <s> Using cloud storage to automatically back up content changes when editing documents is an everyday scenario. We demonstrate that current cloud storage services can cause unnecessary bandwidth consumption, especially for office suite documents, in this common scenario. Specifically, even with incremental synchronization approach in place, existing cloud storage services still incur whole-file transmission every time when the document file is synchronized. We analyze the problem causes in depth, and propose EdgeCourier, a system to address the problem. We also propose the concept of edge-hosted personal service (EPS), which has many benefits, such as helping deploy EdgeCourier easily in practice. We have prototyped the EdgeCourier system, deployed it in the form of EPS in a lab environment, and performed extensive experiments for evaluation. Evaluation results suggest that our prototype system can effectively reduce document synchronization bandwidth with negligible overheads. <s> BIB001
One of the biggest use cases of cloud computing based storage is distributed data storage, commonly referred to as cloud storage services, wherein files can be accessed from anywhere on the planet by connecting a system to cloud storage servers which periodically synchronize data across different devices. However, even for small applications like office suite software, cloud storage services can often lead to unnecessary bandwidth consumption while also compromising latency. EdgeCourier BIB001 is a file storage solution which can overcome the problems of traditional cloud computing based distributed storage by making use of the edge-hosted personal service (EPS) technique in conjunction with the ec-sync incremental synchronization approach. The essence of EPS is to use computational resources on the edge nodes (like access points or base stations) to provide localized services for mobile wireless users connected to these edge nodes. The ec-sync synchronization approach requires two participants, the sync-sender and the sync-receiver, both of which are instrumental in the synchronization process, which proceeds as follows: • The sync-sender detects whether any document requires synchronization with the receiver and is responsible for capturing the changes made within the document by going through every sub-document within it. • To capture sub-document changes, the sync-sender compares two files: the edited document and the last-synced version of the same file. • Thereafter, the sync-sender places the detected changes into a file known as an edit-patch, which is transmitted to the sync-receiver. • Upon receiving the edit-patch file, the sync-receiver applies the edit-patch differences to the relevant sub-documents of the last-synced version of the same file to obtain the edited document.
• This edited document is then also shared with the cloud storage services in order to transmit it to the various EPS instances or nodes across the network for global synchronization. Furthermore, an important advantage of having different EPS instances is that they can be managed by a centralized management service (on a cloud service), which can migrate data to and from the edge nodes if needed. This leads to better oversight and increased fault tolerance, as data can be migrated to different resources for analysis or in response to outages experienced at edge nodes. An overview of the EdgeCourier system can be seen in Fig. 4 . Laboratory based studies on the EdgeCourier system BIB001 showed that as the size of the documents to be synchronized grows, the time spent on network transmission becomes notably lower for the EdgeCourier system; for example, a 1 MB document takes 0.6 seconds less on the EdgeCourier system than on the direct sync system. Such distributed synchronization systems can be particularly useful in the software development industry for real-time code synchronization in large team projects. Similarly, the banking industry can also derive critical applications from these systems, such as the real-time synchronization of transactions and other banking data. These examples show that edge computing powered data synchronization systems find many applications in modern industries which require low-latency and reliable network services. As we have seen, these systems reduce data transmission over the network, resulting in reduced latency and less strain on the network's bandwidth capabilities, hence leading to dependable network services.
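The ec-sync exchange described above can be sketched with Python's standard `difflib` module. Modeling a document as a dictionary of named sub-documents, and using `ndiff` deltas as the edit-patch format, are illustrative assumptions rather than EdgeCourier's actual on-the-wire format.

```python
import difflib

def make_edit_patch(last_synced, edited):
    """Sync-sender: diff each sub-document; unchanged parts stay out of the patch."""
    patch = {}
    for name, new_text in edited.items():
        old_lines = last_synced.get(name, "").splitlines()
        new_lines = new_text.splitlines()
        if old_lines != new_lines:
            # Ship only a per-sub-document delta, not the whole file.
            patch[name] = list(difflib.ndiff(old_lines, new_lines))
    return patch

def apply_edit_patch(last_synced, patch):
    """Sync-receiver: rebuild the edited document from the last-synced copy."""
    result = dict(last_synced)
    for name, delta in patch.items():
        result[name] = "\n".join(difflib.restore(delta, 2))  # side 2 = edited text
    return result

last = {"header.xml": "title: draft", "body.xml": "hello world\nsecond line"}
edited = {"header.xml": "title: draft", "body.xml": "hello edge\nsecond line"}
patch = make_edit_patch(last, edited)
assert list(patch) == ["body.xml"]            # only the changed sub-document travels
assert apply_edit_patch(last, patch) == edited
```

The bandwidth saving comes from the `assert list(patch) == ["body.xml"]` step: untouched sub-documents never cross the network, which is the core of the incremental approach.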
Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> E. Healthcare <s> OBJECTIVES: To assess whether talking or reading (silently or aloud) could affect heart rate variability (HRV) and to what extent these changes require a simultaneous recording of respiratory activity to be correctly interpreted. BACKGROUND: Sympathetic predominance in the power spectrum obtained from short- and long-term HRV recordings predicts a poor prognosis in a number of cardiac diseases. Heart rate variability is often recorded without measuring respiration; slow breaths might artefactually increase low frequency power in RR interval (RR) and falsely mimic sympathetic activation. METHODS: In 12 healthy volunteers we evaluated the effect of free talking and reading, silently and aloud, on respiration, RR and blood pressure (BP). We also compared spontaneous breathing to controlled breathing and mental arithmetic, silent or aloud. The power in the so called low- (LF) and high-frequency (HF) bands in RR and BP was obtained from autoregressive power spectrum analysis. RESULTS: Compared with spontaneous breathing, reading silently increased the speed of breathing (p < 0.05), decreased mean RR and RR variability and increased BP. Reading aloud, free talking and mental arithmetic aloud shifted the respiratory frequency into the LF band, thus increasing LF% and decreasing HF% to a similar degree in both RR and respiration, with decrease in mean RR but with minor differences in crude RR variability. CONCLUSIONS: Simple mental and verbal activities markedly affect HRV through changes in respiratory frequency. This possibility should be taken into account when analyzing HRV without simultaneous acquisition and analysis of respiration. <s> BIB001 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> E.
Healthcare <s> Edge computing paradigm has attracted many interests in the last few years as a valid alternative to the standard cloud-based approaches to reduce the interaction timing and the huge amount of data coming from Internet of Things (IoT) devices toward the Internet. In the next future, Edge-based approaches will be essential to support time-dependent applications in the Industry 4.0 context; thus, the paper proposes BodyEdge , a novel architecture well suited for human-centric applications, in the context of the emerging healthcare industry. It consists of a tiny mobile client module and a performing edge gateway supporting multiradio and multitechnology communication to collect and locally process data coming from different scenarios; moreover, it also exploits the facilities made available from both private and public cloud platforms to guarantee a high flexibility, robustness, and adaptive service level. The advantages of the designed software platform have been evaluated in terms of reduced transmitted data and processing time through a real implementation on different hardware platforms. The conducted study also highlighted the network conditions (data load and processing delay) in which BodyEdge is a valid and inexpensive solution for healthcare application scenarios. <s> BIB002
With recent advancements in the domain of medical IoT devices, the healthcare industry has started to adopt IoT solutions that provide vital medical services such as the monitoring of electrocardiogram (ECG) data and the processing of Magnetic Resonance Imaging (MRI) data. However, most traditional IoT based solutions for healthcare rely heavily on cloud-based processing and storage, which has started to create problems as the massive amount of data being generated strains the communication network's capacity. This often leads to unpredictable communication delays and increased network latency, which can significantly impact healthcare operations within a hospital or clinic, especially in time-sensitive situations that require urgent reactions, such as heart attacks or strokes. Therefore, modern medical IoT systems require a flexible multi-level network architecture which can work cohesively with heterogeneous sensors and process the relevant data with minimal latency to produce relevant results and responses. These requirements have led to the adoption of the edge computing paradigm in medical IoT systems due to the benefits it provides in terms of reduced latency and improved reliability, both of which are critical for these systems. In this subsection, we review the BodyEdge architecture BIB002 , shown in the figure below, which is inspired by the edge computing paradigm and aims to achieve the following goals: • Reduced communication delay and latency. • Wide support for scalability and responsiveness. • Limited cost in terms of bandwidth for data transmission (i.e., only limited statistics data needs to be transmitted to the cloud). • Improved privacy (since the edge network may be interpreted as a private cloud). This architecture consists of two complementary parts.
The first is a mobile client called the BodyEdge Mobile BodyClient (BE-MBC), which primarily acts as a relay node for communication between the sensors and the edge gateway using multi-radio communication technology. The second is a performing gateway known as the BodyEdge Gateway (BE-GTW), which is placed at the edge of the network and is primarily responsible for acquiring device data and processing it locally to produce valuable insights and patterns that can be relayed back to the end devices or sensors. In addition, the gateway ensures communication with the cloud to allow users to maintain oversight over the system. To validate the BodyEdge architecture, it was physically implemented in BIB002 and compared with a cloud based architecture for the task of stress detection using cardiac sensors. Within the implementation, the BE-MBC module was installed on a smartwatch paired with a chest band to acquire ECG signals. The BE-GTW was installed both on an independent hardware platform (Raspberry Pi 3) and on an Azure cloud virtual machine in order to perform the comparative study. Finally, the edge-based system with the BE-GTW installed on the Raspberry Pi 3 was tested on 100 athletes to determine stress levels using the Heart Rate Variability (HRV) technique BIB001 , and the average round trip delay time (RTT) for this case was 152 ms. The same experiment was then conducted with the cloud-based system, which yielded an average RTT of 338 ms. This result corroborates our assumptions about the performance benefits offered by edge computing based systems in terms of reduced latency and indicates that medical IoT systems should indeed adopt edge computing based network architectures.
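As a rough illustration of the kind of analysis a gateway like the BE-GTW performs locally, the sketch below computes two standard time-domain HRV statistics (SDNN and RMSSD) from a series of RR intervals. The threshold-based stress rule and its numeric value are illustrative assumptions, not the classifier used in BIB002.

```python
import math

def hrv_features(rr_ms):
    """Time-domain HRV statistics from successive RR intervals (milliseconds)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: standard deviation of all RR intervals.
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / n)
    # RMSSD: root mean square of successive differences.
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}

def stressed(features, rmssd_threshold=25.0):
    """Illustrative rule: low RMSSD suggests reduced beat-to-beat variability."""
    return features["rmssd"] < rmssd_threshold

relaxed = hrv_features([810, 850, 790, 860, 800, 845])  # large variability
tense = hrv_features([610, 612, 609, 611, 610, 612])    # nearly constant RR
assert not stressed(relaxed) and stressed(tense)
```

Running such a lightweight computation on the gateway means only the resulting statistics, not the raw ECG stream, need to reach the cloud, which is exactly the bandwidth goal listed for BodyEdge.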
Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> F. Agriculture <s> Precision Agriculture (PA), as the integration of information, communication and control technologies in agriculture, is growing day by day. The Internet of Things (IoT) and cloud computing paradigms offer advances to enhance PA connectivity. Nevertheless, their usage in this field is usually limited to specific scenarios of high cost, and they are not adapted to semi-arid conditions, or do not cover all PA management in an efficient way. For this reason, we propose a flexible platform able to cope with soilless culture needs in full recirculation greenhouses using moderately saline water. It is based on exchangeable low-cost hardware and supported by a three-tier open source software platform at local, edge and cloud planes. At the local plane, Cyber-Physical Systems (CPS) interact with crop devices to gather data and perform real-time atomic control actions. The edge plane of the platform is in charge of monitoring and managing main PA tasks near the access network to increase system reliability against network access failures. Finally, the cloud platform collects current and past records and hosts data analytics modules in a FIWARE deployment. IoT protocols like Message Queue Telemetry Transport (MQTT) or Constrained Application Protocol (CoAP) are used to communicate with CPS, while Next Generation Service Interface (NGSI) is employed for southbound and northbound access to the cloud. The system has been completely instantiated in a real prototype in frames of the EU DrainUse project, allowing the control of a real hydroponic closed system through managing software for final farmers connected to the platform. <s> BIB001
Modern agriculture has extensively embraced automation and modern technology to improve and optimize existing agricultural processes, aided by the improved connectivity between agricultural resources. As technology becomes increasingly interconnected, edge computing based infrastructures have started to dominate most network-based applications, and to tackle the growing amount of data generated by end devices, the agricultural industry has also started adopting edge computing based architectures to create latency-sensitive applications for agricultural processes. The concept of Precision Agriculture (PA) has seen a significant rise in popularity due to improvements in sensor technologies, and several systems based on edge computing have been proposed, like the precision agriculture platform of BIB001 . These systems make use of intelligent algorithms in conjunction with smart sensors and actuators in the field to provide real-time monitoring services that enable control systems to maintain optimal environments for crop growth. In the system proposed in BIB001 , the architecture is divided into three tiers, namely the crop (Cyber-Physical System, or CPS) tier, the edge computing tier, and the cloud tier. The architecture is illustrated in Fig. 6 . The crop (CPS) tier consists mainly of sensors that aid in real-time monitoring of various environmental parameters such as temperature, humidity, pH, CO 2 levels, solar radiation, and other important factors. In addition to sensors, this tier also supports various actuation devices such as soil nutrition pumps, valves, irrigation devices, ventilation devices, and light-control devices. Within this architecture, operations at this tier require low latency and high reliability in communication so that emergency actions can be taken without human intervention, which is made possible through the edge computing nodes situated closer to the data sources.
Next, edge nodes within the edge computing tier are responsible for executing commands through actuation devices based on inputs received from sensor networks in the crop tier. This tier therefore handles irrigation control, climate control, nutrition control, and other auxiliary tasks like alarm and energy management. Finally, the cloud tier is responsible for long-term data analytics and system management services. The physical implementation of this system showed savings of more than 30% in water consumption along with savings of nearly 80% in some soil nutrients when compared with a regular open crop. In addition to environment monitoring, edge computing powered systems can also be employed for video analytics through UAVs, which can help farmers optimize weeding and harvesting. This clearly illustrates the impact of automation on the agricultural industry and shows how edge computing based architectures can replace cloud computing frameworks, especially in applications that require low latency and high reliability.
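A minimal sketch of one edge-tier control step consistent with the description above: crop-tier readings come in, actuation commands go out, with no round trip to the cloud. The sensor names, setpoints, and command strings are illustrative assumptions rather than the platform's actual rule set.

```python
def edge_control_step(reading, setpoints):
    """Map one batch of crop-tier sensor readings to actuation commands.

    Runs on the edge node so irrigation and climate decisions do not
    depend on cloud connectivity; setpoints here are illustrative.
    """
    commands = {}
    if reading["soil_moisture"] < setpoints["moisture_min"]:
        commands["irrigation_valve"] = "open"
    elif reading["soil_moisture"] > setpoints["moisture_max"]:
        commands["irrigation_valve"] = "close"
    if reading["temperature"] > setpoints["temp_max"]:
        commands["ventilation"] = "on"
    if not setpoints["ph_min"] <= reading["ph"] <= setpoints["ph_max"]:
        commands["alarm"] = "ph-out-of-range"   # auxiliary alarm management
    return commands

setpoints = {"moisture_min": 30.0, "moisture_max": 70.0,
             "temp_max": 32.0, "ph_min": 5.5, "ph_max": 6.8}
cmds = edge_control_step(
    {"soil_moisture": 22.0, "temperature": 35.0, "ph": 6.1}, setpoints)
assert cmds == {"irrigation_valve": "open", "ventilation": "on"}
```

In a full deployment the cloud tier would tune the setpoints from historical analytics, while this loop keeps running locally even during a network outage.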
Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> C. Manufacturing Process Monitoring <s> The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction. <s> BIB001 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> C. Manufacturing Process Monitoring <s> Abstract Small- and medium-sized manufacturers, as well as large original equipment manufacturers (OEMs), have faced an increasing need for the development of intelligent manufacturing machines with affordable sensing technologies and data-driven intelligence. Existing monitoring systems and prognostics approaches are not capable of collecting the large volumes of real-time data or building large-scale predictive models that are essential to achieving significant advances in cyber-manufacturing. The objective of this paper is to introduce a new computational framework that enables remote real-time sensing, monitoring, and scalable high performance computing for diagnosis and prognosis. This framework utilizes wireless sensor networks, cloud computing, and machine learning. 
A proof-of-concept prototype is developed to demonstrate how the framework can enable manufacturers to monitor machine health conditions and generate predictive analytics. Experimental results are provided to demonstrate capabilities and utility of the framework such as how vibrations and energy consumption of pumps in a power plant and CNC machines in a factory floor can be monitored using a wireless sensor network. In addition, a machine learning algorithm, implemented on a public cloud, is used to predict tool wear in milling operations. <s> BIB002
With rapid globalization, industries across the globe have started to adopt modern process control systems which rely heavily on sensor networks that efficiently monitor production lines and processes while collecting valuable data; this data can be used to identify faults before they occur and aids optimization efforts that improve the throughput and performance of the industry. In this regard, we look at a fog computing-based framework for process monitoring in different production environments. The proposed system architecture in BIB002 is described in a sequential manner: • Step 1: Collect machine data from the production environment, which streams real-time data from various sensor networks and communication adapters that function on protocols such as the Simple Object Access Protocol (SOAP), MTConnect, and the Open Platform Communications Unified Architecture (OPC UA). • Step 2: Stream the raw data to a private computational fog node which is responsible for real-time monitoring and for providing time-sensitive control signals to the production environment. This allows the system to function with low response times, improves reliability, and reduces the strain on the network's capacity, as data is processed in a fog computing node situated close to the production environment. • Step 3: In addition, samples from this data can be sent to high-performance cloud data centers and used to build models for predictive maintenance and process optimization. Since these samples are small in size and only sporadically transferred to the cloud, the strain on the network's capacity is minimal, while the models built with the sampled data can be extremely beneficial for the industry in terms of improved throughput and reduced unplanned downtime. • Step 4: Apply these predictive models to raw data to obtain tangible insights into the production environment's real-time health and performance.
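Steps 2 and 3 above can be sketched as a single pass over the machine-data stream: time-sensitive screening happens at the fog node, and only sparse samples are forwarded to the cloud for model building. The vibration threshold and the 1-in-100 sampling rate are illustrative assumptions, not parameters from BIB002.

```python
import random

def fog_node(stream, vibration_limit=4.0, sample_every=100):
    """Real-time screening at the fog node plus sparse cloud uploads."""
    alerts, cloud_samples = [], []
    for i, record in enumerate(stream):
        if record["vibration"] > vibration_limit:   # time-sensitive check, done locally
            alerts.append((i, record["machine"]))
        if i % sample_every == 0:                   # only ~1% of records leave the site
            cloud_samples.append(record)
    return alerts, cloud_samples

random.seed(7)
stream = [{"machine": "cnc-1", "vibration": random.uniform(0.0, 5.0)}
          for _ in range(1000)]
alerts, samples = fog_node(stream)
assert len(samples) == 10   # 1000 records, 1-in-100 sampled for cloud model building
print(len(alerts), "local alerts;", len(samples), "records forwarded to the cloud")
```

The split mirrors the architecture's rationale: alert decisions never wait on the wide-area network, while the cloud still receives enough data to train predictive-maintenance models (Step 4 would apply those models back at the fog node).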
The edge and fog computing paradigms are considered powerful extensions of the cloud computing paradigm; however, they face some common challenges BIB001 that are yet to be addressed. In this section, we describe some of the major issues faced by these paradigms, which can also serve as potential research directions.
Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> B. Security and Privacy <s> Fog computing is a new paradigm that extends cloud computing to the network edges. As data processing, communications, and control are performed more closely to the end-user devices in fog computing, chances for the attackers to gain unauthorized accesses to sensitive data have been greatly increased. In this paper, we propose a new resource-efficient physical unclonable function (PUF) based authentication scheme to protect the security and privacy of the confidential information in edge devices. Unlike other PUF based lightweight authentication schemes, our proposed method remarkably increases the machine learning attack time without requiring a server to store a large amount of challenge response pairs (CRPs). Besides, a new strong PUF with feedback loop is employed in our scheme to further resist the machine learning attacks that have demonstrated efficacy in compromising strong PUFs. Our proof-of-concept implementation shows that the proposed scheme is suitable for resource-constrained end-user devices in terms of memory, computation, and security. <s> BIB001 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> B. Security and Privacy <s> Recent advancements in the Internet of Things (IoT) has enabled the collection, processing, and analysis of various forms of data including the personal data from billions of objects to generate valuable knowledge, making more innovative services for its stakeholders. Yet, this paradigm continuously suffers from numerous security and privacy concerns mainly due to its massive scale, distributed nature, and scarcity of resources towards the edge of IoT networks. Interestingly, blockchain based techniques offer strong countermeasures to protect data from tampering while supporting the distributed nature of the IoT. 
However, the enormous amount of energy consumption required to verify each block of data make it difficult to use with resource-constrained IoT devices and with real-time IoT applications. Nevertheless, it can expose the privacy of the stakeholders due to its public ledger system even though it secures data from alterations. Edge computing approaches suggest a potential alternative to centralized processing in order to populate real-time applications at the edge and to reduce privacy concerns associated with cloud computing. Hence, this paper suggests the novel privacy preserving blockchain called TrustChain which combines the power of blockchains with trust concepts to eliminate issues associated with traditional blockchain architectures. This work investigates how TrustChain can be deployed in the edge computing environment with different levels of absorptions to eliminate delays and privacy concerns associated with centralized processing and to preserve the resources in IoT networks. <s> BIB002 </s> Industrial Internet of Things (IIoT) Applications of Edge and Fog Computing: A Review and Future Directions <s> B. Security and Privacy <s> Abstract Fog computing (fog networking) is known as a decentralized computing infrastructure in which data, applications, compute as well as data storage are scattered in the most logical and efficient place among the data source (i.e., smart devices) and the cloud. It gives better services than cloud computing because it has better performance with reasonably low cost. Since the cloud computing has security and privacy issues, and fog computing is an extension of cloud computing, it is therefore obvious that fog computing will inherit those security and privacy issues from cloud computing. In this paper, we design a new secure key management and user authentication scheme for fog computing environment, called SAKA-FC. 
SAKA-FC is efficient as it only uses the lightweight operations, such as one-way cryptographic hash function and bitwise exclusive-OR (XOR), for the smart devices as they are resource-constrained in nature. SAKA-FC is shown to be secure with the help of the formal security analysis using the broadly accepted Real-Or-Random (ROR) model, the formal security verification using the widely-used Automated Validation of Internet Security Protocols and Applications (AVISPA) tool and also the informal security analysis. In addition, SAKA-FC is implemented for practical demonstration using the widely-used NS2 simulator. <s> BIB003
With increased interest in the edge and fog computing paradigms, people have started to appreciate the capabilities of these paradigms, which enable the extension of the storage, networking, and processing resources of cloud computing servers toward the edge of the network. However, this rise in flexibility and distribution leads to several security and privacy concerns that must be addressed by system designers. After analyzing several different aspects of network security, we can summarize the major security and privacy concerns as follows: 1) Trust and Authentication: Edge and Fog Computing based networks are expected to provide secure and reliable services to all users, which leads to an important requirement: all devices on the network should be able to trust one another. Trust therefore plays a two-way role within edge and fog computing based networks. Fog or edge nodes that offer services to the network must be in a position to validate whether the resources requesting these services are indeed genuine. Similarly, edge or fog nodes that are transmitting data to, or requesting services from, network resources should be able to verify whether these resources are genuine. These concerns have given rise to various authentication mechanisms which can be used to authenticate network resources before transmissions and requests. Systems can employ mechanisms such as permissioned blockchain networks like TrustChain BIB002 , cryptographic authentication schemes like SAKA-FC BIB003 , and hardware-based authentication schemes like Physically Unclonable Functions (PUF) BIB001 to authenticate network resources. 2) Integrity: Edge and Fog Computing systems should always ensure that data transmission within the network is done in a secure manner so that transmitted data is not altered or modified by attackers.
The most prominent method to ensure the integrity of data in networks is through cryptographic signature verification systems such as the GNU Privacy Guard (GPG) system [42] , which is used to digitally sign transmitted data. The received data is then verified at the receiving station to establish its integrity. This is extremely important in edge and fog computing based systems, as they rely heavily on intra-network data transfers due to their distributed topology. 3) Availability: The availability of information refers to the ability of the system to ensure that authorized parties are able to access relevant information whenever needed. The biggest concern with respect to availability of information is Denial of Service (DoS) attacks, which hamper or eliminate accessibility to information. Edge and Fog Computing based systems are generally well equipped to handle DoS attacks since they have distributed computational resources; however, Distributed Denial of Service (DDoS) attacks can still impact them. To protect networks or applications against such attacks, designers often make use of Web Application Firewalls (WAF), smart DNS resolution services, and other intelligent traffic management techniques to ensure service security. 4) Confidentiality: The confidentiality of information represents the ability of the system to protect information from being disclosed to unauthorized parties. This implies that edge and fog computing paradigms should ensure that information is stored securely in order to prevent data leaks, which are especially likely given the distributed architecture of these paradigms. Edge and Fog Computing based architectures often use homomorphic encryption schemes as well as cryptographic hashing techniques to store confidential data at different distributed locations within the network.
Due to the use of these techniques, even if attackers are able to gain access to secure databases, they will not be able to understand the data, as it will be in encrypted form. 5) Data Ownership: This issue stems from the fact that, unlike cloud computing based systems, edge and fog computing based systems store data in distributed locations across the network. Data can be stored locally at the computational nodes, thereby providing complete access and ownership to the end users. However, these paradigms often involve transmission of data between nodes, especially when processing or computations have been offloaded to different nodes on the network, and this complicates data ownership. Thus, system designers should take this behaviour into account while drafting the privacy policy of the network. This also involves considering legal jurisdictions: when data crosses international borders, it may be subject to different regulations, so data transfer methods should account for the data regulation policies of both the source and the destination.
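The integrity requirement above can be illustrated with a minimal sketch. The example below uses a keyed hash (HMAC) as a lightweight stand-in for the GPG-style public-key signatures discussed in the text; the shared key and payload names are purely hypothetical, and a real deployment would verify a digital signature bound to the sender's key pair rather than a pre-shared secret.

```python
import hmac
import hashlib

SHARED_KEY = b"edge-node-42-secret"  # hypothetical pre-shared key between two nodes

def sign(payload: bytes) -> bytes:
    """Sender attaches this tag to the payload before transmission."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    """Receiver recomputes the tag; a mismatch means the data was altered in transit."""
    return hmac.compare_digest(sign(payload), tag)

payload = b'{"sensor": "temp-07", "value": 21.4}'   # illustrative sensor reading
tag = sign(payload)
ok = verify(payload, tag)                # unmodified data passes the check
tampered = verify(payload + b"x", tag)   # altered data fails the check
```

`compare_digest` is used instead of `==` so that verification time does not leak information about how many tag bytes matched.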
Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> INTRODUCTION <s> Adaptive beamforming algorithms can be extremely sensitive to slight errors in array characteristics. Errors which are uncorrelated from sensor to sensor pass through the beamformer like uncorrelated or spatially white noise. Hence, gain against white noise is a measure of robustness. A new algorithm is presented which includes a quadratic inequality constraint on the array gain against uncorrelated noise, while minimizing output power subject to multiple linear equality constraints. It is shown that a simple scaling of the projection of tentative weights, in the subspace orthogonal to the linear constraints, can be used to satisfy the quadratic inequality constraint. Moreover, this scaling is equivalent to a projection onto the quadratic constraint boundary so that the usual favorable properties of projection algorithms apply. This leads to a simple, effective, robust adaptive beamforming algorithm in which all constraints are satisfied exactly at each step and roundoff errors do not accumulate. The algorithm is then extended to the case of a more general quadratic constraint. <s> BIB001 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> INTRODUCTION <s> An overview of beamforming from a signal-processing perspective is provided, with an emphasis on recent research. Data-independent, statistically optimum, adaptive, and partially adaptive beamforming are discussed. Basic notation, terminology, and concepts are included. Several beamformer implementations are briefly described. > <s> BIB002 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> INTRODUCTION <s> Simulations were used to investigate the effect of covariance matrix sample size on the system performance of adaptive arrays using the sample matrix inversion (SMI) algorithm. 
Inadequate estimation of the covariance matrix results in adapted antenna patterns with high sidelobes and distorted mainbeams. A technique to reduce these effects by modifying the covariance matrix estimate is described from the point of view of eigenvector decomposition. This diagonal loading technique reduces the system nulling capability against low-level interference, but parametric studies show that it is an effective approach in many situations. > <s> BIB003 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> INTRODUCTION <s> The performance of the direct matrix inversion (DMI) method for antenna arrays of arbitrary geometry is analyzed by asymptotic statistical techniques. The effects of eigenspace disturbance caused by finite samples on the output interference and noise powers are examined under the unit gain constraint in the direction of the desired signal. The results show that the performance of the DMI method is degraded mostly by the disturbed noise subspace. That suggests the use of an eigenspace-based beamformer in which the weight vector is computed by using the signal-plus-interference subspace component of the sample correlation matrix. Convergence properties of the eigenspace-based beamformer are evaluated for the cases in which the source number is known and in which it is overestimated. Theoretical analyses validated by computer simulations indicate that the eigenspace-based beamformer has faster convergence rate than the DMI method. > <s> BIB004 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> INTRODUCTION <s> Minimum variance beamformers are usually complemented with diagonal loading techniques in order to provide robustness against several impairments such as imprecise knowledge of the steering vector or finite sample size effects. 
This paper concentrates on this last application of diagonal loading techniques, i.e., it is assumed that the steering vector is perfectly known and that diagonal loading is used to alleviate the finite sample size impairments. The analysis herein is asymptotic in the sense that it is assumed that both the number of antennas and the number of samples are high but have the same order of magnitude. Borrowing some results of random matrix theory, the authors first derive a deterministic expression that describes the asymptotic signal-to-noise-plus-interference ratio (SINR) at the output of the diagonally loaded beamformer. Then, making use of the statistical theory of large observations (also known as general statistical analysis or G-analysis), the authors derive an estimator of the optimum loading factor that is consistent when both the number of antennas and the sample size increase without bound at the same rate. Because of that, the estimator has an excellent performance even in situations where the quotient between the number of observations is low relative to the number of elements of the array. <s> BIB005
The problem of estimating the wave number or angle of arrival of a plane wave is referred to as the direction finding or DOA estimation problem. It has wide application in radar, sonar, seismic systems, electronic surveillance, medical diagnosis and treatment, radio astronomy and other areas. Because of its widespread applications and the difficulty of obtaining an optimum estimator, the topic has received a significant amount of attention over the last several decades. Several methods exist to address the problem of estimating the directions-of-arrival (DOAs) of multiple sources using the signals received at the sensors. Array processing requires either knowledge of a reference signal or the direction of the desired signal source to achieve its objectives. Antenna arrays are widely used for direction finding. Beamforming is used along with an array of antennas/sensors to transmit/receive signals to/from a specified spatial direction in the presence of interference and noise. Hence it acts as a spatial filter BIB002 . It is a classic yet continuously developing field with enormous practical applications. In the last decade, there has been renewed interest in beamforming driven by applications in wireless communications, where multiantenna techniques have emerged as one of the key technologies to accommodate the explosive growth in the number of users and the rapidly increasing demand for new high data-rate services. Techniques for estimating the directions of arrival of signals using an antenna array have flourished in recent years. Many methods exist and are classified according to the technique used, the information they require (external or not) and the criterion used (conventional methods, projection onto the noise or source subspace, maximum likelihood methods, etc.). A receive beamformer is commonly used to estimate the signal arriving from a specific direction in the presence of noise and interfering signals.
In a receive beamformer, the outputs of the array of sensors are linearly combined using spatial filter coefficients (the weight vector) so that signals coming from a desired direction are passed to the beamformer output undistorted, while signals from other directions are attenuated. With a central focus on bearing estimation, the prime objective here is to locate the source of a transmitted communication/radar signal. Through this paper, a detailed literature survey is made of the various bearing estimation techniques and algorithms to date. The many estimation algorithms available in the literature have different capabilities and limitations - BIB005 . In some cases, the DOA estimation problem is solved by first estimating the number of sources BIB001 - BIB003 - BIB004 and then applying a high-resolution method to estimate the angular positions of these sources. These high-resolution methods are known to be more robust than conventional techniques. The most general beamforming techniques include conventional as well as adaptive beamformers. For the conventional non-adaptive beamformers, the weight vector for a specific direction of arrival (DOA) depends on the array response alone and can be pre-calculated, independent of the received data. Hence they are data-independent beamformers and present a constant response for all signal/interference scenarios. The adaptive beamformers are data-dependent, since the weight vectors are calculated as a function of the incoming data to optimize the performance subject to various constraints. They have better resolution and much better interference rejection capability than the data-independent beamformers. However, in practical array systems, traditional adaptive beamforming algorithms are known to degrade if some of the exploited assumptions on the environment, sources, or sensor array become wrong or imprecise.
Similar types of degradation can occur when the signal array response is known exactly but the training sample size is small. Therefore, the robustness of adaptive beamforming techniques against environmental and array imperfections and uncertainties remains one of the key issues. The commonly used algorithms acquire the source signals at the Nyquist rate and transmit all measurements to a central processor in order to estimate just a small number of source bearings. The communication load between sensors can be drastically reduced, however, by exploiting spatial sparsity, i.e., the fact that the number of sources we are trying to find is much smaller than the total number of possible source bearings. Compressive sensing (CS) is a recent technique that exploits this sparsity, allowing signals to be acquired at rates well below the Nyquist sampling rate. In this paper, a detailed literature survey of the commonly used DOA estimation techniques is made first, before moving on to the compressive sensing (CS) technique, which may be more advantageous for DOA estimation.
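As a concrete illustration of the spatial-filtering idea above, the following sketch implements a conventional (delay-and-sum) beamformer that scans a uniform linear array (ULA) for the DOA of a single narrowband source. The array size, spacing, noise level, and source angle are illustrative choices, not values taken from any of the surveyed works.

```python
import numpy as np

M = 8                  # number of sensors (illustrative)
d = 0.5                # element spacing in wavelengths
true_doa = 20.0        # source direction in degrees (illustrative)
N = 200                # number of snapshots
rng = np.random.default_rng(0)

def steering(theta_deg):
    """ULA steering vector a(theta) for the given look direction."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulated snapshots: one narrowband source plus white sensor noise
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(steering(true_doa), s) + noise
R = X @ X.conj().T / N                          # sample covariance matrix

# Scan the conventional spatial spectrum a^H R a / M^2 over a grid of angles
grid = np.arange(-90.0, 90.5, 0.5)
spectrum = [np.real(steering(t).conj() @ R @ steering(t)) / M**2 for t in grid]
doa_hat = grid[int(np.argmax(spectrum))]        # DOA estimate = spectrum peak
```

Because the weight vector here is just the (scaled) steering vector, the response is fixed for each look direction regardless of the data, which is exactly the data-independent behaviour described above.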
Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> Diagonal Loading <s> Adaptive beamforming algorithms can be extremely sensitive to slight errors in array characteristics. Errors which are uncorrelated from sensor to sensor pass through the beamformer like uncorrelated or spatially white noise. Hence, gain against white noise is a measure of robustness. A new algorithm is presented which includes a quadratic inequality constraint on the array gain against uncorrelated noise, while minimizing output power subject to multiple linear equality constraints. It is shown that a simple scaling of the projection of tentative weights, in the subspace orthogonal to the linear constraints, can be used to satisfy the quadratic inequality constraint. Moreover, this scaling is equivalent to a projection onto the quadratic constraint boundary so that the usual favorable properties of projection algorithms apply. This leads to a simple, effective, robust adaptive beamforming algorithm in which all constraints are satisfied exactly at each step and roundoff errors do not accumulate. The algorithm is then extended to the case of a more general quadratic constraint. <s> BIB001 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> Diagonal Loading <s> Simulations were used to investigate the effect of covariance matrix sample size on the system performance of adaptive arrays using the sample matrix inversion (SMI) algorithm. Inadequate estimation of the covariance matrix results in adapted antenna patterns with high sidelobes and distorted mainbeams. A technique to reduce these effects by modifying the covariance matrix estimate is described from the point of view of eigenvector decomposition. This diagonal loading technique reduces the system nulling capability against low-level interference, but parametric studies show that it is an effective approach in many situations. > <s> BIB002
Among the many robust adaptive beamformers proposed in the literature, diagonal loading emerges as the most widely used method due to its simplicity and its effectiveness in handling a wide variety of errors, including steering vector errors BIB001 and finite-sample errors BIB002 . However, a serious drawback of the diagonal loading technique is that there is no reliable way to choose the diagonal loading factor, which directly affects its performance.
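Diagonal loading itself is a one-line modification of the minimum-variance weight computation: a scaled identity matrix is added to the sample covariance before inversion. The sketch below assumes the standard MVDR closed form; the loading factor gamma is precisely the parameter the text notes has no reliable selection rule, so the value used here is arbitrary.

```python
import numpy as np

def loaded_mvdr_weights(R, a, gamma):
    """w = (R + gamma*I)^{-1} a / (a^H (R + gamma*I)^{-1} a)."""
    M = R.shape[0]
    Rl = R + gamma * np.eye(M)          # diagonal loading of the covariance
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy check: with a white-noise covariance, loading leaves the
# distortionless response w^H a = 1 intact.
M = 6
a = np.ones(M, dtype=complex)           # illustrative steering vector
R = np.eye(M)
w = loaded_mvdr_weights(R, a, gamma=0.1)
response = w.conj() @ a                 # should equal 1 exactly
```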
Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> LCMV Beamformer <s> In this paper a class of linear constraints, also termed as derivative constraints, which is applicable to broad-band element space antenna array processors, is presented. The performance characteristics of the optimum processor with derivative constraints are demonstrated by computer studies involving two types of array geometries, namely linear and circular arrays. As a consequence of derivative constraints, the beam width in the look direction can be made as broad as desired and the beam spacings can be selected without fear of substantial signal suppression in the event of signal arrivals between beams. However, this increased beam width is achieved at the price of reducing array gain. <s> BIB001 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> LCMV Beamformer <s> We present a new modification of the Hung-Turner (HT) adaptive beam-forming algorithm, providing additional robustness of a narrowband adaptive array in wideband and moving-jammer scenarios. The robustness is achieved by involving the derivative constraints toward the jammer directions in the conventional Hung-Turner (1983) algorithm. The important advantage of the constraints used is that they do not require any a priori information about jammer directions. The computer simulations with wideband and moving jammers show that the proposed algorithm provides the significant improvement of the adaptive array performance as compared with the conventional HT algorithm. At the same time, for a moderate order of derivative constraints, the new algorithm has a computational efficiency, comparable with the conventional HT algorithm. 
<s> BIB002 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> LCMV Beamformer <s> The performance of adaptive beamforming methods is known to degrade severely in the presence of even small mismatches between the actual and presumed array responses to the desired signal. Such mismatches may frequently occur in practical situations because of violation of underlying assumptions on the environment, sources, or sensor array. This is especially true when the desired signal components are present in the beamformer "training" data snapshots because in this case, the adaptive array performance is very sensitive to array and model imperfections. The similar phenomenon of performance degradation can occur even when the array response to the desired signal is known exactly, but the training sample size is small. We propose a new powerful approach to robust adaptive beamforming in the presence of unknown arbitrary-type mismatches of the desired signal array response. Our approach is developed for the most general case of an arbitrary dimension of the desired signal subspace and is applicable to both the rank-one (point source) and higher rank (scattered source/fluctuating wavefront) desired signal models. The proposed robust adaptive beamformers are based on explicit modeling of uncertainties in the desired signal array response and data covariance matrix as well as worst-case performance optimization. Simple closed-form solutions to the considered robust adaptive beamforming problems are derived. Our new beamformers have a computational complexity comparable with that of the traditional adaptive beamforming algorithms, while, at the same time, offer a significantly improved robustness and faster convergence rates. <s> BIB003
To improve the robustness of the beamformer against DOA angle mismatch errors, additional derivative constraints BIB001 can be imposed on the LCMV beamformer so that a wider main beam is obtained to cover all possible directions of the signal of interest. Derivative constraints can also be used at the interference directions when the interfering sources are rapidly moving (nonstationary). Data nonstationarity can cause these sources to move away from the sharp notches of the adapted pattern, which may lead to a strong degradation of the output Signal-to-Interference-plus-Noise Ratio (SINR). An efficient remedy for adaptive array performance in such situations is based upon artificial broadening of the null width toward the directions of the interfering sources using derivative constraints BIB002 , [10] . A robust beamformer for the most general case of an arbitrary dimension of the desired signal subspace is developed in BIB003 , and is applicable to both the rank-one (point source) and higher-rank (scattered source/fluctuating wavefront) desired signal models. The proposed robust adaptive beamformers are based on explicit modeling of uncertainties in the desired signal array response and data covariance matrix, as well as worst-case performance optimization. Closed-form solutions and computationally efficient online implementations of the robust algorithm are also developed in BIB003 .
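A minimal sketch of the LCMV weights with one derivative constraint follows, using the standard closed form w = R^{-1}C(C^H R^{-1}C)^{-1}f. The constraint matrix pairs a distortionless constraint at the look direction with a zero-slope (first-derivative) constraint, which is what broadens the main beam as described above. The ULA geometry, look direction, and white-noise covariance are illustrative assumptions.

```python
import numpy as np

M, d = 8, 0.5                       # sensors and spacing in wavelengths (illustrative)
theta0 = np.deg2rad(10.0)           # presumed look direction
n = np.arange(M)

a = np.exp(-2j * np.pi * d * n * np.sin(theta0))    # steering vector a(theta0)
da = (-2j * np.pi * d * n * np.cos(theta0)) * a     # derivative of a w.r.t. theta

C = np.column_stack([a, da])        # constraint matrix
f = np.array([1.0, 0.0])            # C^H w = f: unit gain, zero slope at theta0

R = np.eye(M)                       # toy covariance (white noise only)
Ri_C = np.linalg.solve(R, C)
w = Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)    # w = R^{-1}C (C^H R^{-1}C)^{-1} f

gain = w.conj() @ a                 # distortionless response: equals 1
slope = w.conj() @ da               # response slope at theta0: equals 0
```

Adding the derivative constraint consumes one more degree of freedom of the array, which is the gain/robustness trade-off the cited works analyze.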
Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> Capon Beamforming <s> The Capon (1969) beamformer has better resolution and much better interference rejection capability than the standard (data-independent) beamformer, provided that the array steering vector corresponding to the signal of interest (SOI) is accurately known. However, whenever the knowledge of the SOI steering vector is imprecise (as is often the case in practice), the performance of the Capon beamformer may become worse than that of the standard beamformer. Diagonal loading (including its extended versions) has been a popular approach to improve the robustness of the Capon beamformer. We show that a natural extension of the Capon beamformer to the case of uncertain steering vectors also belongs to the class of diagonal loading approaches, but the amount of diagonal loading can be precisely calculated based on the uncertainty set of the steering vector. The proposed robust Capon beamformer can be efficiently computed at a comparable cost with that of the standard Capon beamformer. Its excellent performance for SOI power estimation is demonstrated via a number of numerical examples. <s> BIB001 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> Capon Beamforming <s> The standard Capon beamformer (SCB) is known to have better resolution and much better interference rejection capability than the standard data-independent beamformer when the array steering vector is accurately known. However, the major problem of the SCB is that it lacks robustness in the presence of array steering vector errors. In this paper, we will first provide a complete analysis of a norm constrained Capon beamforming (NCCB) approach, which uses a norm constraint on the weight vector to improve the robustness against array steering vector errors and noise. Our analysis of NCCB is thorough and sheds more light on the choice of the norm constraint than what was commonly known. 
We also provide a natural extension of the SCB, which has been obtained via covariance matrix fitting, to the case of uncertain steering vectors by enforcing a double constraint on the array steering vector, viz. a constant norm constraint and a spherical uncertainty set constraint, which we refer to as the doubly constrained robust Capon beamformer (DCRCB). NCCB and DCRCB can both be efficiently computed at a comparable cost with that of the SCB. Performance comparisons of NCCB, DCRCB, and several other adaptive beamformers via a number of numerical examples are also presented. <s> BIB002 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> Capon Beamforming <s> This paper introduces an extension of minimum variance beamforming that explicitly takes into account variation or uncertainty in the array response. Sources of this uncertainty include imprecise knowledge of the angle of arrival and uncertainty in the array manifold. In our method, uncertainty in the array manifold is explicitly modeled via an ellipsoid that gives the possible values of the array for a particular look direction. We choose weights that minimize the total weighted power output of the array, subject to the constraint that the gain should exceed unity for all array responses in this ellipsoid. The robust weight selection process can be cast as a second-order cone program that can be solved efficiently using Lagrange multiplier techniques. If the ellipsoid reduces to a single point, the method coincides with Capon's method. We describe in detail several methods that can be used to derive an appropriate uncertainty ellipsoid for the array response. We form separate uncertainty ellipsoids for each component in the signal path (e.g., antenna, electronics) and then determine an aggregate uncertainty ellipsoid from these. We give new results for modeling the element-wise products of ellipsoids. 
We demonstrate the robust beamforming and the ellipsoidal modeling methods with several numerical examples. <s> BIB003
In BIB003 , the Robust Capon beamformer is proposed, where the covariance fitting formulation of the standard Capon beamformer is coupled with the constraint that the beamformer response be above some level for all steering vectors that lie in an ellipsoid (sphere) centred on the nominal or presumed steering vector of interest. In BIB001 , an additional norm constraint is also used to obtain the doubly constrained Robust Capon Beamformer. A computationally efficient robust adaptive beamforming scheme is developed in BIB002 to account for signal array response mismatch and small training sample size. It includes a quadratic inequality constraint and is implemented with a gradient descent method. All the robust adaptive algorithms surveyed so far are for narrowband signals. In many applications, however, the signals are wideband, and hence robust wideband adaptive algorithms are essential. The most popular approach in the design of wideband adaptive beamformers is to decompose the received broadband signals into narrowband components (subbands) and then to apply separate narrowband beamformers to each subband.
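The standard Capon beamformer underlying all of these robust variants scans the spatial spectrum P(theta) = 1 / (a(theta)^H R^{-1} a(theta)) and reads DOA estimates off its peaks. The sketch below does this for two uncorrelated sources on a ULA; the array size, angles, and noise level are illustrative assumptions, and no robustification (loading, norm, or ellipsoid constraints) is applied.

```python
import numpy as np

M, d, N = 10, 0.5, 400            # sensors, spacing (wavelengths), snapshots
true_doas = [-15.0, 25.0]         # two illustrative source directions (degrees)
rng = np.random.default_rng(1)
n = np.arange(M)

def steering(theta_deg):
    return np.exp(-2j * np.pi * d * n * np.sin(np.deg2rad(theta_deg)))

# Simulated snapshots: two uncorrelated sources plus white sensor noise
A = np.column_stack([steering(t) for t in true_doas])
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = X @ X.conj().T / N
Ri = np.linalg.inv(R)

# Capon spatial spectrum over a grid of candidate directions
grid = np.arange(-90.0, 90.5, 0.5)
P = np.array([1.0 / np.real(steering(t).conj() @ Ri @ steering(t)) for t in grid])

# Take the two highest local maxima as the DOA estimates
loc = [i for i in range(1, len(P) - 1) if P[i] >= P[i - 1] and P[i] > P[i + 1]]
doa_hats = sorted(grid[i] for i in sorted(loc, key=lambda i: -P[i])[:2])
```

When the presumed steering vector is mismatched, these sharp peaks are exactly what degrades, which motivates the loaded and constrained variants surveyed above.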
Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> Tapped Delay Line Beamformer <s> The nulling bandwidth of adaptive arrays with tapped delay-line processing is examined. Linear arrays with up to 10 elements are considered. It is shown how the number of taps in the delay lines and the amount of delay between taps affect the nulling bandwidth. For each size of array, the optimal number of delay-line taps and the optimal intertap delays are determined as functions of the required nulling bandwidth. > <s> BIB001 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> Tapped Delay Line Beamformer <s> In this paper, two robust presteered broadband (PB) beamformers are developed using worst-case designs. The proposed techniques are shown to enjoy a reduced computational complexity and/or significant performance improvements as compared to the existing robust wideband beamforming techniques in scenarios with array response errors. <s> BIB002
An alternate approach in the design of wideband beamformers is to use tapped delay-lines (TDLs) BIB001 , which can form a frequency-dependent response for each of the received broadband sensor signals to compensate for the phase differences of the different frequency components. A robust algorithm for broadband arrays was proposed in BIB002 using worst-case optimization, where a group of constraints is imposed on sampled frequency points over the frequency range of interest to prevent the mismatched desired signal from being filtered out by the beamformer. High computational complexity and the inability to control the response consistency to the mismatched desired signal are the drawbacks of BIB002 , which are addressed by the more recent methods proposed in [18] [19] . Here, the robustness of the wideband beamforming structure is improved using a combination of a frequency invariance constraint and worst-case performance optimization. The problem is formulated as a convex optimization problem and solved using existing convex optimization techniques.
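Structurally, a TDL beamformer replaces each scalar sensor weight with a K-tap FIR filter, so the weights form an M x K matrix and the output is a filter-and-sum over sensors and taps. The sketch below only demonstrates this structure with arbitrary random weights and real-valued signals; designing the weights (adaptively or via the worst-case/frequency-invariance formulations above) is the actual subject of the cited works.

```python
import numpy as np

M, K, N = 4, 5, 64                       # sensors, taps per sensor, samples (illustrative)
rng = np.random.default_rng(2)
X = rng.standard_normal((M, N))          # wideband sensor signals (real, for simplicity)
W = rng.standard_normal((M, K))          # one K-tap FIR filter per sensor (arbitrary here)

def tdl_output(X, W):
    """Filter-and-sum: y(n) = sum_m sum_k W[m,k] * x_m(n - k)."""
    M, K = W.shape
    y = np.zeros(X.shape[1])
    for m in range(M):
        for k in range(K):
            # tap k applies a delay of k samples to sensor m's signal
            y[k:] += W[m, k] * X[m, :X.shape[1] - k]
    return y

y = tdl_output(X, W)
```

Each column of W contributes at a different delay, which is what lets the array realize a different complex gain at each frequency instead of the single gain-and-phase of a narrowband weight vector.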
Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> MVDR Algorithms <s> In this paper, we first study the signal cancellation and interference rejection effects of the optimum (constrained least squares or minimum variance) beamformer in the presence of partially and fully correlated interfering sources. In particular, we derive expressions for the output power and the gain in the interference direction of the beamformer in terms of the source powers, correlation, and the sensor noise power, and show quantitatively the penalties arising from increasing correlation in several scenarios of interest. Next, we show that spatial smoothing progressively decorrelates the sources at a rate that depends on the spacing and directions of the sources, and thus relate the degree of smoothing to the improvement in signal cancellation and interference rejection behavior provided by spatial smoothing. Results of computer simulations are included to support our analysis. <s> BIB001 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> MVDR Algorithms <s> The finite-data performance of a minimum-variance distortionless response (MVDR) beamformer is analyzed with and without spatial smoothing, using first-order perturbation theory. In particular, expressions are developed for the mean values of the power gain in any direction of interest, the output power, and the norm of the weight-error vector, as a function of the number of snapshots and the number of smoothing steps. It is shown that, in general, the smoothing, in addition to decorrelating the sources, can alleviate the effects of finite-data perturbations. The above expressions are reduced to the case in which no spatial smoothing is used. These expressions are valid for an arbitrary array and for arbitrarily correlated signals. For this case, an expression for the variance of the power gain is also developed. 
For a single interference case it is shown explicitly how the SNR, spacing of the interference from the desired signal and the correlation between them influence the beamformer performance. Simulations verify the usefulness of the theoretical expressions. > <s> BIB002 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> MVDR Algorithms <s> Adaptive beamforming methods are known to degrade if some of underlying assumptions on the environment, sources, or sensor array become violated. In particular, if the desired signal is present in training snapshots, the adaptive array performance may be quite sensitive even to slight mismatches between the presumed and actual signal steering vectors (spatial signatures). Such mismatches can occur as a result of environmental nonstationarities, look direction errors, imperfect array calibration, distorted antenna shape, as well as distortions caused by medium inhomogeneities, near-far mismatch, source spreading, and local scattering. The similar type of performance degradation can occur when the signal steering vector is known exactly but the training sample size is small. In this paper, we develop a new approach to robust adaptive beamforming in the presence of an arbitrary unknown signal steering vector mismatch. Our approach is based on the optimization of worst-case performance. It turns out that the natural formulation of this adaptive beamforming problem involves minimization of a quadratic function subject to infinitely many nonconvex quadratic constraints. We show that this (originally intractable) problem can be reformulated in a convex form as the so-called second-order cone (SOC) program and solved efficiently (in polynomial time) using the well-established interior point method. 
It is also shown that the proposed technique can be interpreted in terms of diagonal loading where the optimal value of the diagonal loading factor is computed based on the known level of uncertainty of the signal steering vector. Computer simulations with several frequently encountered types of signal steering vector mismatches show better performance of our robust beamformer as compared with existing adaptive beamforming algorithms. <s> BIB003 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> MVDR Algorithms <s> Adaptive beamforming methods degrade in the presence of both signal steering vector errors and interference nonstationarity. We develop a new approach to adaptive beamforming that is jointly robust against these two phenomena. Our beamformer is based on the optimization of the worst case performance. A computationally efficient convex optimization-based algorithm is proposed to compute the beamformer weights. Computer simulations demonstrate that our beamformer has an improved robustness as compared to other popular robust beamforming algorithms. <s> BIB004
In the Minimum Variance Distortionless Response (MVDR) beamformer, the linear filter weights are adaptively calculated depending on the environment so as to maximally suppress the interferences while leaving the signal of interest undistorted . Here, the computation of the inverse correlation matrix and its multiplication with the steering vector are the most important parts of the optimal weight computation. The array correlation matrix (R) is a measure of the spatial correlation of the signal and noise arriving at the array. Adaptive beamforming techniques measure the array correlation matrix instead of assuming that the noise is white and Gaussian; this measurement is then used to determine the spatial filter coefficients (weights). MVDR shows degraded performance compared to conventional beamformers when there are position errors in the sensors. Root MVDR performs reasonably well above threshold, but its threshold is higher than that of Maximum Likelihood algorithms for closely spaced signals. However, the threshold of the MVDR algorithm is higher than that of root MVDR . MVDR is often used as a preliminary processor to indicate the number of plane waves impinging on the array, their approximate locations, and approximate signal powers. It suffers, however, from the demerit that in the case of two closely spaced plane waves the algorithm treats them as a single plane wave and underestimates the number of signals. The performance of the MVDR beamformer is severely affected by correlation between the look-direction signal and the interferences. Spatial smoothing is a technique used to alleviate the problems due to correlation: the array is divided into smaller subarrays, and the average of all the subarray covariance matrices is used to form a smoothed R matrix. In BIB001 , it is shown that spatial smoothing progressively decorrelates the sources by diagonalizing the source covariance matrix.
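The two operations just described — computing the MVDR weights w = R^{-1}s / (s^H R^{-1}s) and forward spatial smoothing of the array correlation matrix — can be sketched in a few lines of numpy. The function names and the uniform-linear-array assumption behind the smoothing are illustrative, not from the surveyed papers.

```python
import numpy as np

def mvdr_weights(R, s):
    """MVDR weights w = R^{-1} s / (s^H R^{-1} s): unit (distortionless)
    gain toward the steering vector s, minimum output power otherwise."""
    Ri_s = np.linalg.solve(R, s)
    return Ri_s / (s.conj() @ Ri_s)

def spatial_smoothing(R, L):
    """Forward spatial smoothing for a uniform linear array: average the
    covariance matrices of all size-L subarrays of R to decorrelate
    coherent sources before computing the MVDR weights."""
    M = R.shape[0]
    K = M - L + 1                       # number of overlapping subarrays
    return sum(R[k:k + L, k:k + L] for k in range(K)) / K
```

The distortionless constraint w^H s = 1 holds by construction for any Hermitian positive-definite R, which is the defining property of the MVDR solution.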
This decorrelation results in reduced signal cancellation and increased rejection of the coherent interference. In BIB002 , the finite-data performance of the MVDR beamformer with and without spatial smoothing is analyzed. It is shown that the smoothing, in addition to decorrelating the sources, can also alleviate the effects of finite-data perturbations (the covariance matrix errors due to the finite number of snapshots used for its estimation). In the recent past, some robust algorithms with a clear theoretical background have been proposed which make explicit use of an uncertainty set of the array steering vector. In BIB003 , spherical uncertainty sets are used, and in BIB004 ellipsoidal (including flat ellipsoidal) uncertainty sets are used. Here, the beamformer is designed to minimize the output power subject to the constraint that the beamformer response is above some level for all steering vectors that lie in an ellipsoid (sphere) centred on the nominal or presumed steering vector of interest. This guarantees that the signal of interest, whose steering vector is expected to lie in the ellipsoid (sphere), will not be eliminated, and hence robustness is improved. When the ellipsoid is a sphere, the solution to the above-mentioned problem is of the diagonal loading type, where the loading level is obtained from the covariance matrix and the radius of the sphere. In the case where the ellipsoid is not a sphere or is flat, the robust beamformer takes the form of a general (i.e., not necessarily diagonal) loading of the covariance matrix. In either case, the solution is given by (R+Q)^{-1}s, where s denotes the nominal steering vector (in the absence of any uncertainty) and Q stands for the loading matrix. Classes of robust MV beamforming algorithms based on optimization of worst-case performance are proposed in BIB003 BIB004 .
The robustness of the MVDR beamformer is improved in BIB003 , which explicitly models an arbitrary (but norm-bounded) mismatch in the desired signal array response for point-source signal models and uses worst-case performance optimization . This method is based on convex optimization using second-order cone programming (SOCP). Although several efficient convex optimization software tools are currently available, the SOCP-based method does not provide any closed-form solution and does not have a simple online implementation. In BIB004 , the approach of BIB003 is extended to a more general case where, apart from the steering vector mismatch, there is also nonstationarity of the training data. The norms of both the steering vector mismatch and the data matrix mismatch are bounded by known constants, and the weights are calculated by optimizing the worst-case performance.
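The diagonal-loading form of the robust beamformer discussed above can be sketched as follows. This is a minimal sketch: the loading level is simply taken as an input `load` here, whereas BIB003 shows how an optimal value follows from the radius of the steering-vector uncertainty sphere.

```python
import numpy as np

def loaded_mvdr_weights(R, s, load):
    """Robust MVDR via diagonal loading: replace R by R + load*I before
    the usual MVDR weight computation w = (R+load*I)^{-1} s, normalized
    to keep unit gain toward the presumed steering vector s."""
    Rl = R + load * np.eye(R.shape[0])
    Ri_s = np.linalg.solve(Rl, s)
    return Ri_s / (s.conj() @ Ri_s)
```

Larger loading pushes the weights toward the conventional (delay-and-sum) beamformer, trading interference suppression for robustness to steering-vector mismatch.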
Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> MUSIC Algorithms <s> In the classical approach to underwater passive listening, the medium is sampled in a convenient number of "look-directions" from which the signals are estimated in order to build an image of the noise field. In contrast, a modern trend is to consider the noise field as a global entity depending on few parameters to be estimated simultaneously. In a Gaussian context, it is worthwhile to consider the application of likelihood methods in order to derive a detection test for the number of sources and estimators for their locations and spectral levels. This paper aims to compute such estimators when the wavefront shapes are not assumed known a priori. This justifies results previously found using the asymptotical properties of the eigenvalue-eigenvector decomposition of the estimated spectral density matrix of the sensor signals: they have led to a variety of "high resolution" array processing methods. More specifically, a covariance matrix test for equality of the smallest eigenvalues is presented for source detection. For source localization, a "best fit" method and a test of orthogonality between the "smallest" eigenvectors and the "source" vectors are discussed. <s> BIB001 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> MUSIC Algorithms <s> Processing the signals received on an array of sensors for the location of the emitter is of great enough interest to have been treated under many special case assumptions. The general problem considers sensors with arbitrary locations and arbitrary directional characteristics (gain/phase/polarization) in a noise/interference environment of arbitrary covariance matrix. This report is concerned first with the multiple emitter aspect of this problem and second with the generality of solution. 
A description is given of the multiple signal classification (MUSIC) algorithm, which provides asymptotically unbiased estimates of 1) number of incident wavefronts present; 2) directions of arrival (DOA) (or emitter locations); 3) strengths and cross correlations among the incident waveforms; 4) noise/interference strength. Examples and comparisons with methods based on maximum likelihood (ML) and maximum entropy (ME), as well as conventional beamforming are included. An example of its use as a multiple frequency estimator operating on time series is included. <s> BIB002 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> MUSIC Algorithms <s> The authors analyze the performance of Root-Music, a variation of the MUSIC algorithm, for estimating the direction of arrival (DOA) of plane waves in white noise in the case of a linear equispaced sensor array. The performance of the method is analyzed by examining the perturbation in the roots of the polynomial formed in the intermediate step of Root-Music. In particular, asymptotic results for the mean squared error in the estimates of the direction of arrival are derived. Simplified expressions are presented for the one- and two-source case and compared to those obtained for least-squares ESPRIT. Computer simulations are also presented, and they are in close agreement with the theory. An important outcome of this analysis is the fact that the error in the signal zeros has a largely radial component. This provides an explanation as to why the Root-Music is superior to the spectral MUSIC algorithm. > <s> BIB003
The Multiple Signal Classification (MUSIC) algorithm BIB002 BIB001 uses the eigendecomposition (eigenvectors and eigenvalues) of the covariance matrix of the antenna array to estimate the directions-of-arrival of sources, based on the properties of the signal and noise subspaces. Several variants of MUSIC, such as Spectral, Unitary and Root-MUSIC, have been proposed to reduce complexity and increase performance and resolution power. The advantage of Root-MUSIC is the direct calculation of the DOA by searching for the zeros of a polynomial, which replaces the search for spectral maxima BIB003 necessary in the case of MUSIC. This method is limited to uniformly spaced linear arrays, but it allows a reduction in computing time and hence an increase in angular resolution by exploiting certain properties of the received signals. The principle of the Root-MUSIC algorithm is to form a polynomial of degree 2(M-1) and extract its roots BIB003 . Spectral MUSIC has less resolution capability than Root-MUSIC BIB003 . Unitary MUSIC gives the same performance as Root-MUSIC with the advantage of lower computational complexity.
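The Root-MUSIC procedure just described can be sketched compactly for a uniform linear array: eigendecompose R, build the degree-2(M-1) polynomial from the diagonal sums of the noise-subspace projector, and map the roots nearest the unit circle to angles. The steering convention a_m(θ) = exp(j2π(d/λ)m sin θ) and the function name are assumptions made for illustration.

```python
import numpy as np

def root_music_doa(R, n_sources, d_over_lambda=0.5):
    """Root-MUSIC for a uniform linear array: the degree-2(M-1)
    polynomial whose coefficients are the diagonal sums of the
    noise-subspace projector has (double) roots on the unit circle at
    z = exp(j*2*pi*(d/lambda)*sin(theta)) for each source."""
    M = R.shape[0]
    _, eigvecs = np.linalg.eigh(R)            # eigenvalues ascending
    En = eigvecs[:, :M - n_sources]           # noise subspace
    C = En @ En.conj().T                      # noise-subspace projector
    # c_l = sum of the l-th diagonal of C; highest-degree coefficient first
    coeffs = np.array([np.trace(C, offset=l) for l in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1]          # one of each reciprocal pair
    # the n_sources roots closest to the unit circle are the signal roots
    roots = roots[np.argsort(1 - np.abs(roots))][:n_sources]
    return np.arcsin(np.angle(roots) / (2 * np.pi * d_over_lambda))
```

Because the polynomial's roots come in conjugate-reciprocal pairs, keeping only the roots inside the unit circle and selecting those closest to it recovers one root per source.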
Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> ESPIRIT Algorithms <s> The application of a subspace invariance approach (ESPRIT) to the estimation of parameters (frequencies and powers) of cisoids in noise is described. ESPRIT exploits an underlying rotational invariance of signal subspaces spanned by two temporally displaced data sets. The new approach has several advantages including improved resolution over Pisarenko's technique for harmonic retrieval. <s> BIB001 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> ESPIRIT Algorithms <s> An approach to the general problem of signal parameter estimation is described. The algorithm differs from its predecessor in that a total least-squares rather than a standard least-squares criterion is used. Although discussed in the context of direction-of-arrival estimation, ESPRIT can be applied to a wide variety of problems including accurate detection and estimation of sinusoids in noise. It exploits an underlying rotational invariance among signal subspaces induced by an array of sensors with a translational invariance structure. The technique, when applicable, manifests significant performance and computational advantages over previous algorithms such as MEM, Capon's MLM, and MUSIC. > <s> BIB002 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> ESPIRIT Algorithms <s> The authors present a rigorous bias analysis of the MUSIC location estimator, and they derive an accurate and concise bias expression. The analysis is based on the second-order Taylor series expansion of the derivative of the null spectrum, properties of the null spectrum, and statistics of the estimated signal eigenvectors. It is proven that in the derivation the remainder term in the second-order Taylor series can be dropped but the second-order terms cannot be. 
Simulations verify that the bias expression is valid over a wide range of signal-to-noise ratios (SNRs) extending down into the resolution threshold region of MUSIC. Although asymptotic, this expression can be accurately applied to a limited number of snapshot cases. The utility of the expression is shown by using it in a study of MUSIC location estimator characteristics. Estimate bias and standard deviation are compared for variations in SNR, numbers of sensors and snapshots, and source correlation. MUSIC resolvability and estimator performance bounds are addressed, accounting for bias. > <s> BIB003 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> ESPIRIT Algorithms <s> We present a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. We enforce sparsity by imposing penalties based on the /spl lscr//sub 1/-norm. A number of recent theoretical results on sparsifying properties of /spl lscr//sub 1/ penalties justify this choice. Explicitly enforcing the sparsity of the representation is motivated by a desire to obtain a sharp estimate of the spatial spectrum that exhibits super-resolution. We propose to use the singular value decomposition (SVD) of the data matrix to summarize multiple time or frequency samples. Our formulation leads to an optimization problem, which we solve efficiently in a second-order cone (SOC) programming framework by an interior point implementation. We propose a grid refinement method to mitigate the effects of limiting estimates to a grid of spatial locations and introduce an automatic selection criterion for the regularization parameter involved in our approach. We demonstrate the effectiveness of the method on simulated data by plots of spatial spectra and by comparing the estimator variance to the Crame/spl acute/r-Rao bound (CRB). 
We observe that our approach has a number of advantages over other source localization techniques, including increased resolution, improved robustness to noise, limitations in data quantity, and correlation of the sources, as well as not requiring an accurate initialization. <s> BIB004 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> ESPIRIT Algorithms <s> Compressive sensing (CS) is an emerging area which uses a relatively small number of non-traditional samples in the form of randomized projections to reconstruct sparse or compressible signals. This paper considers the direction-of-arrival (DOA) estimation problem with an array of sensors using CS. We show that by using random projections of the sensor data, along with a full waveform recording on one reference sensor, a sparse angle space scenario can be reconstructed, giving the number of sources and their DOA's. The number of projections can be very small, proportional to the number sources. We provide simulations to demonstrate the performance and the advantages of our compressive beamformer algorithm. <s> BIB005 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> ESPIRIT Algorithms <s> A new direction-of-arrival (DOA) estimation method is proposed based on a novel data model using the concept of a sparse representation of array covariance vectors (SRACV), in which DOA estimation is achieved by jointly finding the sparsest coefficients of the array covariance vectors in an overcomplete basis. The proposed method not only has high resolution and the capability of estimating coherent signals based on an arbitrary array, but also gives an explicit error-suppression criterion that makes it statistically robust even in low signal-to-noise-ratio (SNR) cases. Simulation experiments are conducted to validate the effectiveness of the proposed method. The performance is compared with several existing DOA estimation methods and the Cramer-Rao lower bound (CRLB). 
<s> BIB006 </s> Beamforming for Direction-of-Arrival (DOA) Estimation-A Survey <s> ESPIRIT Algorithms <s> This work addresses the problem of direction-of-arrival (DOA) estimation of multiple sources using short and dynamic sensor arrays. We propose to utilize compressive sensing (CS) theory to reconstruct the high-resolution spatial spectrum from a small number of spatial measurements. Motivated by the physical structure of the spatial spectrum, we model it as a sparse signal in the wavenumber-frequency domain, where the array manifold is proposed to serve as a deterministic sensing matrix. The proposed spatial CS (SCS) approach allows exploitation of the array orientation diversity (achievable via array dynamics) in the CS framework to address challenging array signal processing problems such as left-right ambiguity and poor estimation performance at endfire. The SCS is conceptually different from well-known classical and subspace-based methods because it provides high azimuth resolution using a short dynamic linear array without restricting requirements on the spatial and temporal stationarity and correlation properties of the sources and the noise. The SCS approach was shown to outperform current superresolution and orientation diversity based methods in single-snapshot simulations with multiple sources. <s> BIB007
Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) is based on the rotational invariance property of the signal subspace BIB002 : it makes a direct estimation of the DOA and obtains the angles of arrival without calculating a pseudo-spectrum over the whole space or even searching for the roots of a polynomial. ESPRIT is similar to the MUSIC algorithm with slight modifications. The main advantages of this method are that it avoids the heavy search for the maxima of a pseudo-spectrum or a cost function (and therefore a gain calculation) and the simplicity of its implementation. In addition, this technique is less sensitive to noise than MUSIC and Root-MUSIC BIB003 . It has been shown in BIB001 that the MUSIC and ESPRIT algorithms achieve almost identical performance in the case of unmodulated sinusoids, but that ESPRIT is slightly better than MUSIC; ultimately, ESPRIT appears less sensitive to noise than MUSIC BIB002 . These prevalent background algorithms acquire the source signals at the Nyquist rate and take a large number of samples to estimate a parameter which is sparse in some other domain. Hence, it is advisable to exploit this sparsity property to estimate the DOA of the desired signal. The sparsity property of signals has been utilized in a variety of applications including image reconstruction, medical imaging, radar imaging , blind source separation and shape detection . In the literature, sparsity information has also been used for beamforming and source localization. The concept of the spatial sparsity of sources can be exploited to accomplish source localization with arbitrarily shaped sensor arrays for both narrowband and wideband signals using a very small number of measurements, thereby improving the communication efficiency of sensor networks BIB006 BIB004 .
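The rotational-invariance idea behind ESPRIT can be sketched in a few lines of numpy for a uniform linear array: the signal subspaces seen by the two maximally overlapping subarrays are related by a rotation Φ whose eigenvalue phases encode the DOAs. This is the least-squares variant; the total-least-squares ESPRIT of BIB002 is not shown, and the function name is an illustrative assumption.

```python
import numpy as np

def esprit_doa(R, n_sources, d_over_lambda=0.5):
    """Least-squares ESPRIT for a uniform linear array: solve
    Es2 = Es1 @ Phi for the two maximally overlapping subarrays; the
    eigenvalues of Phi are exp(j*2*pi*(d/lambda)*sin(theta))."""
    _, eigvecs = np.linalg.eigh(R)            # eigenvalues ascending
    Es = eigvecs[:, -n_sources:]              # signal subspace
    Es1, Es2 = Es[:-1, :], Es[1:, :]          # overlapping subarrays
    Phi, *_ = np.linalg.lstsq(Es1, Es2, rcond=None)
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.arcsin(phases / (2 * np.pi * d_over_lambda))
```

Note that no spectral search or root extraction appears anywhere — only an eigendecomposition and a small least-squares solve, which is exactly the computational advantage the text attributes to ESPRIT.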
Although prior research has validated the benefits of exploiting spatial sparsity in source localization, such as improved resolution, these methods also require a high sampling rate of the source signals, which increases the communication load between sensors. This is an important consideration for energy-efficient wireless sensor networks. Furthermore, in some applications, data acquisition might be very expensive. A comparison of the different beamforming techniques and algorithms is given in Table 1 . Consider a signal x = ψs which is k-sparse in the basis defined by the columns of ψ, i.e., s has only k non-zero coefficients. According to CS, if non-traditional linear measurements in the form of randomized projections, y = Φx = Φψs, are taken, the signal x can be exactly reconstructed with high probability from the compressive measurements, using far fewer samples, by solving the convex optimization problem min ||s||_1 subject to y = Φψs (10), which can be solved efficiently with linear programming. The key result is that the required number of measurements is linked almost linearly to the sparsity k of the signal. The compression is done at the sensing level, rather than after the sensing. This leads to a large reduction in samples, taking only M measurements with M ≥ cK log(N/K) and M << N (11), where c is a small constant, K is the sparsity in the angle domain and N is the original number of samples used. The technique of compressive sensing can thus be used for DOA estimation, considering only a sparse set of samples. In , a compressive wireless array is proposed for bearing estimation. In BIB005 , a compressive beamforming method is presented. Both approaches apply compressive sampling in the time domain to reduce the ADC sampling rate or the number of time samples for each element of the array. In , the DOA estimation of narrowband sources impinging on a uniform circular array was considered. In BIB004 , the formulation leads to second-order cone (SOC) programming where the optimization is performed over the entire signal space.
The very high computational complexity of this formulation can be reduced by introducing the singular value decomposition (SVD) of the measured data matrix. The method in tries to reconstruct the signals for sparse sources in the time domain with a combined l1-l2 norm minimization similar to BIB004 . In , a new hardware architecture exploiting compressive sensing (CS) for direction estimation is also used. In BIB007 , a Spatial Compressive Sensing (SCS) approach is proposed in which the sensing and reconstruction processes can be performed incrementally while improving the spatial spectrum estimation performance in proportion to an increase in array orientation diversity (the number of array orientations). Finally, array orientation diversity is proposed to address some of the challenging problems that arise in passive sonar applications, such as low bearing resolution when using short arrays, incoherency between sensors when using long arrays, poor estimation performance at endfire, short sample support when the temporal coherency is limited by the motion of the array or sources, spatial correlation of the ambient noise, and correlation among the sources. The work shows that array orientation diversity provides an improvement in spatial spectrum estimation which is not associated with a linear increase in the number of spatial measurements. The compressive bearing estimation approach based on spatial sparsity has several advantages over other approaches in the literature, such as MVDR, MUSIC, and previous sparsity-based methods, which require Nyquist sampling at the sensors BIB007 . Creating a bearing spectrum with many fewer measurements decreases the communication load in wireless networks and enables lower data acquisition rates, which can be very important for high-bandwidth applications. Moreover, the array geometry can be arbitrary but must be known. Other advantages include increased resolution and robustness to noise.
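As a small illustration of sparse recovery for DOA on an angular grid, the sketch below uses Orthogonal Matching Pursuit (OMP) as a greedy stand-in for the l1 program of (10): in the compressive setting the sensing matrix would be Θ = Φψ, with Φ a random projection and ψ the array-manifold dictionary. The grid construction and function name are illustrative assumptions, not details from the surveyed papers.

```python
import numpy as np

def omp(Theta, y, k):
    """Orthogonal Matching Pursuit (k >= 1): greedily pick the k columns
    (grid angles) of the sensing matrix Theta most correlated with the
    residual, then least-squares fit over the selected support.
    A greedy stand-in for min ||s||_1 subject to y = Theta @ s."""
    support, residual = [], y.astype(complex)
    for _ in range(k):
        j = int(np.argmax(np.abs(Theta.conj().T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        residual = y - Theta[:, support] @ coef
    s = np.zeros(Theta.shape[1], dtype=complex)
    s[support] = coef
    return s
```

With an orthogonal manifold dictionary (grid points on the array's natural resolution grid) the recovery is exact; with a random projection Φ applied first, recovery succeeds with high probability rather than with certainty, which is the trade-off CS makes for the reduced number of measurements.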
A considerable reduction in hardware costs may also be achieved.
A survey on cloud‐based sustainability governance systems <s> Introduction <s> IT governance is critical to most organisations and has an influence on the value generated by IT investments. Unfortunately, IT governance is more aspiration than reality in many organisations. This research seeks to address the dearth of empirical evidence about IT governance in practice, presenting the findings of an IT governance case study in an Australian organisation. Recommendations are provided to assist organisations to maximise potential of IT governance and insights are provided for researchers. <s> BIB001 </s> A survey on cloud‐based sustainability governance systems <s> Introduction <s> Currently, business requirements for rapid operational efficiency, customer responsiveness as well as rapid adaptability are driving the need for ever increasing communication and integration capabilities of the software assets. Service Oriented Architecture (SOA) is generally acknowledged as being a potential solution to expose finely grained pieces of software components on a network that are reusable and composable. Provisioning of business services for different business purposes may require the rapid assembly of their core functionality with different infrastructure capabilities and policies in different contexts. In this paper, the authors propose a SOA based governance model that permits to handle non functional requirements in a dynamic way. <s> BIB002
Over the last few years, devices, equipment, cars, residential houses and commercial buildings have been increasingly instrumented with smart meters and monitoring sensors that provide different types of data, not only for monitoring and detecting abnormal status but also for supporting sustainability development. However, to support sustainability development in the ecosystem of facilities, we must have adequate governance processes for sustainability. For example, to monitor resource consumption in near real-time in a large commercial/residential building, several types of monitoring data from equipment and spaces in the building, such as electricity consumption, temperature, water consumption, fans, freezers, chillers, etc., have to be gathered and combined. Then, we need to store and share these types of data over time for short- and long-term data analysis, reporting and auditing of sustainability measurements, such as trend analysis of greenhouse gas (GHG) emissions and electricity consumption. In particular, to meet sustainability compliance rules (e.g. for GHG emissions and air quality) and to maintain the sustainability of these systems, various complex analysis methods need to be conducted to understand the behaviors of the monitored systems, and multiple stakeholders are involved in the monitoring and analysis of these systems. The complexity of data storage, sharing, analysis and application integration poses several challenges for any sustainability governance platform. In particular, sustainability monitoring and analysis of large facilities involve different stakeholders and multi-objective optimization (e.g. to meet law compliance and economical factors). (The work mentioned in this paper is partially funded by the Pacific Control Cloud Computing Lab. The authors thank Vivek Sundaram for his discussion and support on an early draft of this paper, and the Pacific Control Cloud Computing Lab for providing detailed information about the Galaxy platform.)
While several information systems have been built for the management of the energy consumption of facilities in home and enterprise contexts, and their features may be accessed via the internet, such systems are typically hosted and managed by, or dedicated to, only the facility owner. They do not support well multi-stakeholder and multi-objective optimization in compliance with diverse regulations. In our focus on the sustainability governance of GHG emissions and energy consumption, we believe that a cloud computing model would naturally be a candidate for overcoming the above-mentioned challenges for several reasons, such as reducing cost, easing data access and sharing, and enabling complex analysis and compliance assurance. However, to date, most cloud systems are targeted at generic computational resources and storage, or at other domains such as small and medium enterprises (SMEs), rather than at facility sustainability governance. Only a few industrial systems have focused on facility management, such as Galaxy (Pacific Control Systems, 2011), generic sensor data sharing and electricity data management, such as Pachube (Pachube, 2011), or carbon footprint analysis, such as AMEE (AMEE, 2011). Although several enabling techniques have been developed in research communities, they are not well integrated into cloud-based solutions for the sustainability governance of facilities. While existing industrial and research cloud systems enable certain sustainability governance features, they still cover only a few aspects of the ecosystem of sustainability governance. Therefore, we examined how cloud computing offerings can support sustainability governance from the perspectives of data integration, sharing and management, data analytics capability, and interoperable cloud platforms.
In this paper, we analyze three aspects of sustainability governance with a focus on carbon footprints and energy consumption: (1) a detailed analysis model of sustainability governance based on a cloud computing model; (2) a comparison of existing cloud systems enabling sustainability governance; and (3) open research issues. The rest of this paper is organized as follows: Section 2 discusses background and related work. Section 3 discusses the model of sustainability monitoring and analysis in the cloud. We present a detailed analysis of cloud production systems for sustainability monitoring and analysis in Section 4. Section 5 discusses research prototypes that can be used for sustainability governance. We discuss open issues in Section 6. Section 7 summarizes the paper and gives an outlook on our future work. 2. Background and related work 2.1 Sustainability governance in the context of facility management We consider sustainability for humans, which is defined as "the potential for long-term maintenance of well-being" and which has "environmental, economic and social dimensions" (see the detailed definition at http://en.wikipedia.org/wiki/Sustainable_development). In this paper, we focus on sustainability in the context of facility management. A facility can be a building, a home, a car, or a piece of equipment, and its sub-components/elements. In order to support sustainability development for facilities, we focus on techniques for capturing, monitoring and analyzing sustainability measurements that characterize human consumption, and for examining whether such measurements can meet compliance rules and can support the utilization of resources in a sustainable way. Current sustainability measurements are diverse (see http://en.wikipedia.org/wiki/Sustainability_measurement for further information).
However, in this paper we consider sustainability measurements related to facility resource consumption by humans, in particular GHG and energy consumption. In this paper, sustainability governance (see http://en.wikipedia.org/wiki/Governance for what governance means) applied in facility management is related to models, techniques and processes to "maintain monitoring, analysis, management and compliance assurance of sustainability measurements" to meet both consumers' expectations and regulatory requirements. Concretely, to support sustainability governance for facilities, platforms for monitoring, analysis, management and operation of sustainable facilities should consider: . Service governance. Considered part of IT governance (http://en.wikipedia.org/wiki/IT_governance), which has multiple facets BIB001 , service governance can cover several aspects, such as service lifecycle management, quality of service, service change management and service contracts BIB002 . In our work, we focus "on possible services and their quality that support the monitoring and analysis of sustainability measurements for sustainability standards or laws". . Data governance. Data governance (http://en.wikipedia.org/wiki/Data_governance) is complex, but in our work we examine "processes and policies that ensure the quality of data, data security and privacy of the sensory data and sustainability measurements in these platforms, and the data lifecycle to comply with sustainability regulations". . Stakeholder governance. Reflects the role of stakeholders, e.g. how stakeholders access data. This is based on the interests and roles of stakeholders in corporate governance (http://en.wikipedia.org/wiki/Corporate_Governance). In our work, we examine "how well existing platforms support stakeholders in the ecosystem of sustainable facilities".
A survey on cloud‐based sustainability governance systems <s> Facility monitoring using service-oriented architecture <s> The emergence of machine-to-machine (M2M) technologies as a business opportunity is based on the observation that there are many more machines and objects in the world than people and that an everyday object has more value when it is networked. In this paper, we describe an M2M middleware that we have developed for a facility management application. Facility management is a time and labour-intensive service industry, which can greatly benefit from the use of M2M technologies for automating business processes. The need to manage diverse facilities motivates several requirements, such as predictive maintenance, inventory management, access control, location tracking, and remote monitoring, for which an M2M solution would be useful. Our middleware includes software modules for interfacing with intelligent devices that are deployed in customer facilities to sense real-world conditions and control physical devices; communication modules for relaying data from the devices in the customer premises to a centralized data center; and service modules that analyze the data and trigger business events. We also present performance results of our middleware using our testbed and show that our middleware is capable of scalably and reliably handling concurrent events generated by different types of M2M devices, such as RFID tags, Zigbee sensors, and location tracking tags. <s> BIB001 </s> A survey on cloud‐based sustainability governance systems <s> Facility monitoring using service-oriented architecture <s> Cloud computing emerges as a new computing paradigm which aims to provide reliable, customized and QoS guaranteed computing dynamic environments for end-users. 
This paper reviews recent advances of Cloud computing, identifies the concepts and characters of scientific Clouds, and finally presents an example of scientific Cloud for data centers <s> BIB002 </s> A survey on cloud‐based sustainability governance systems <s> Facility monitoring using service-oriented architecture <s> This paper describes the methodic of three-tier client-server architecture implementation for Web service creation. Data from SCADA system application are dynamically loaded to database by implementing ODBC method. For acceleration of data retrieving the parameters from SCADA application are transmitted to procedure that is stored in DB server. This stored procedure exports SCADA data to XML. DB interacts with Web server in XML. SOAP protocol is used for messaging with Web service client browser. The sequence of programming steps for interoperability with concrete Web service, which visualizes data, is shown. <s> BIB003 </s> A survey on cloud‐based sustainability governance systems <s> Facility monitoring using service-oriented architecture <s> Given the energy waste problem in contemporary households and the consequent need for optimal energy use, this article presents a novel network architecture that is generically applicable on domestic appliances, such as white goods, and audiovisual and communication equipment, and is capable of performing real-time management of their energy consumption. Deploying the latest information and communication technology, the proposed architecture enables definition of energy saving applications that perform three main functions: real-time estimation of the energy consumption of the home environment, without exploiting smart metering devices; control of domestic appliances energy use so that energy consumption of the home environment is kept within user-defined limits; and autonomous identification and management of standby devices, targeting minimal energy consumption. 
<s> BIB004 </s> A survey on cloud‐based sustainability governance systems <s> Facility monitoring using service-oriented architecture <s> The guest editor of this special issue on cloud computing defines the term and describes the articles highlighted. <s> BIB005 </s> A survey on cloud‐based sustainability governance systems <s> Facility monitoring using service-oriented architecture <s> This paper presents a ubiquitous solution for providing secure access for monitoring and controlling a smart home by using a mobile device. It describes an end-to-end solution for a user-friendly house and facility monitoring and control system. The solution enables the ubiquitous control of various house and facility automation devices by using a mobile smart phone and mobile internet access. The paper describes the overall system design, the used components, the implementation, the security features, as well as the testing and evaluation of the corresponding service access times. Home control, House automation, Ubiquitous computing, Remote control <s> BIB006 </s> A survey on cloud‐based sustainability governance systems <s> Facility monitoring using service-oriented architecture <s> The increasing usage of smart embedded devices in business blurs the line between the virtual and real worlds. This creates new opportunities to build applications that better integrate real-time state of the physical world, and hence, provides enterprise services that are highly dynamic, more diverse, and efficient. Service-Oriented Architecture (SOA) approaches traditionally used to couple functionality of heavyweight corporate IT systems, are becoming applicable to embedded real-world devices, i.e., objects of the physical world that feature embedded processing and communication. In such infrastructures, composed of large numbers of networked, resource-limited devices, the discovery of services and on-demand provisioning of missing functionality is a significant challenge. 
We propose a process and a suitable system architecture that enables developers and business process designers to dynamically query, select, and use running instances of real-world services (i.e., services running on physical devices) or even deploy new ones on-demand, all in the context of composite, real-world business applications. <s> BIB007 </s> A survey on cloud‐based sustainability governance systems <s> Facility monitoring using service-oriented architecture <s> In recent years, the standards of OGC's Sensor Web Enablement (SWE) initiative have been applied in a multitude of projects to encapsulate heterogeneous geosensors for web-based discovery, tasking and access. Currently, SWE services and the different types of geosensors are integrated manually due to a conceptual gap between these two layers. Pair-wise adapters are created to connect an implementation of a particular SWE service with a particular type of geosensor. This approach is contrary to the aim of reaching interoperability and leads to an extensive integration effort in large scale systems with various types of geosensors and various SWE service implementations. To overcome this gap between geosensor networks and the Sensor Web, this work presents an intermediary layer for integrating these two distinct layers seamlessly. This intermediary layer is called the Sensor Bus as it is based on the message bus architecture pattern. It reduces the effort of connecting a sensor with the SWE services, since only the adaption to the Sensor Bus has to be created. The communication infrastructure which acts as the basis for the Sensor Bus is exchangeable. In this work, the Sensor Bus is based on Twitter. The involved SWE services as well as connected geosensors are represented as user profiles of the Twitter platform. 
<s> BIB008 </s> A survey on cloud‐based sustainability governance systems <s> Facility monitoring using service-oriented architecture <s> In this article, we make a case for using Information Technology (IT) to green building facilities and enumerate the important features that an IT framework should possess towards this end. We present ECView -- an IT framework that was developed at Tata Consultancy Services (TCS) to manage the carbon footprint of office buildings. We demonstrate the advantages of employing IT for greening facilities through a case study that uses ECView to green a TCS building. <s> BIB009 </s> A survey on cloud‐based sustainability governance systems <s> Facility monitoring using service-oriented architecture <s> Measured energy performance data are essential to national efforts to improve building efficiency, as evidenced in recent benchmarking mandates, and in a growing body of work that indicates the value of permanent monitoring and energy information feedback. This paper presents case studies of energy information systems (EIS) at four enterprises and university campuses, focusing on the attained energy savings, and successes and challenges in technology use and integration. EIS are broadly defined as performance monitoring software, data acquisition hardware, and communication systems to store, analyze, and display building energy information. Case investigations showed that the most common energy savings and instances of waste concerned scheduling errors, measurement and verification, and inefficient operations. Data quality is critical to effective EIS use, and is most challenging at the subsystem or component level, and with non-electric energy sources. Sophisticated prediction algorithms may not be well understood but can be applied quite effectively, and sites with custom benchmark models or metrics are more likely to perform analyses external to the EIS. 
Finally, resources and staffing were identified as a universal challenge, indicating a need to identify additional models of EIS use that extend beyond exclusive in-house use, to analysis services. <s> BIB010
The service-oriented architecture (SOA) model has been applied to monitoring facilities over the past few years. However, to date the main use of the service model in this respect has focused on using web services to remotely monitor and control monitored objects[1], with a basic facility management model in which the owner of monitored objects typically monitors and controls her objects. Support for sustainability governance is negligible, as all monitored data is owned and managed by the owner. For example, in BIB006 a home is monitored via web services. A SCADA system accessed via web services is described in BIB003 . Web-based systems for building and energy management have been demonstrated to be very useful BIB010 . Several frameworks have been developed to support the integration of different monitoring sensors to provide data for buildings, houses and transportation vehicles, such as BIB004 , BIB001 and . Such monitoring data can also be exposed through web services and integrated into business processes BIB007 . While middleware can be used to relay monitoring data to consumers, such as BIB008 and , these systems are typically limited to the boundary of a single organization. This means that a system can be used to monitor objects in distributed facilities, but there is only a single owner and consumer [2] of the system. In our view, techniques for integrating sensors are enabling technology for providing data that can be stored and processed by the cloud model, but they do not support a cloud computing model in which they act as a platform for multiple organizations/customers. Recently, several cloud-based platforms to support the monitoring of energy consumption have been introduced, such as and . These systems, handling only data from their own devices, act as platforms to store electricity consumption information that is updated and accessed from different homes. 
However, they are mostly for near real-time monitoring rather than for sustainability governance. Going beyond these monitoring systems, generic cloud-based services have been provided to store different types of monitoring data to facilitate sustainability monitoring and analysis, such as Pachube (2011). Furthermore, there are systems supporting sustainability governance for buildings, such as ECView BIB009 and Galaxy (Pacific Control Systems, 2011) . Some systems support generic ways to determine carbon footprints based on standard profiles for sustainability governance, such as the AMEE platform (AMEE, 2011) . While there are several research reports on generic computational and data storage cloud systems BIB002 , and for e-science BIB005 ), we are not aware of any work discussing how sustainability governance in general utilizes cloud computing offerings and how cloud computing could be useful for sustainability governance. 3. Towards cloud-based sustainability governance 3.1 The ecosystem of facility sustainability governance In sustainability governance, we have to consider the complexity of data. There exist many different types of monitoring data, each type corresponding to a kind of monitored object (e.g. a chiller) or a part of one (e.g. a room). Methods and algorithms for sustainability analysis are complex due to the huge number of different monitored objects and sustainability measurements (e.g. even GHG has ten primary types, as discussed in Center for Sustainable Systems (2010)). These methods and algorithms rely on a large set of reference/standard models which specify basic information and calculation models for determining sustainability measurements of different types of objects. Moreover, for large-scale facilities, several stakeholders conduct different activities that are inherent to the evolution of the ecosystem. 
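To make the idea of standard-profile-based carbon footprint determination concrete, the following is a minimal sketch of the general technique (emission factor times consumption). It is an illustration only: the factor values and function names are assumed for this example and are not taken from AMEE or any other cited platform.

```python
# Illustrative sketch of emission-factor-based footprint calculation.
# The factor values below are placeholders, not real reference data.

EMISSION_FACTORS_KG_CO2E = {        # kg CO2e per unit consumed (assumed)
    ("electricity", "kWh"): 0.5,
    ("natural_gas", "m3"): 2.0,
}

def footprint_kg_co2e(resource: str, unit: str, amount: float) -> float:
    """Convert a consumption reading into kg CO2e via a standard profile."""
    factor = EMISSION_FACTORS_KG_CO2E[(resource, unit)]
    return amount * factor

# A facility's footprint is the sum over its consumption readings.
total = (footprint_kg_co2e("electricity", "kWh", 120.0)
         + footprint_kg_co2e("natural_gas", "m3", 10.0))
```

In a real platform the factor table would be a versioned reference model, since compliance requires recording which profile version produced each measurement.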
Conceptually, in an end-to-end view of sustainability governance, shown in Figure 1 : first, monitoring sensors are used to monitor (sustainable) systems to provide monitoring data for data analysis to determine sustainability measurements. Second, monitoring data is stored for the analysis of sustainability measurements, which involves complex calculation, estimation and prediction methods and utilizes various reference models. Third, application-specific sustainability measurements are provided to governance applications. Along these paths, different activities are performed by different stakeholders, and data is stored, analyzed and shared according to sustainability governance rules. From the system architecture and integration perspective, these main building blocks can be distributed and provided by different providers. Furthermore, interactions among these building blocks can be carried out via the internet. In order to understand why cloud computing could offer benefits for sustainability governance, we must analyze the stakeholders and their roles in the ecosystem, and the evolution of the ecosystem, by considering the above-mentioned end-to-end data flows. We have observed several activities required for sustainability governance that are performed by different stakeholders, shown in Figure 1 . The main activities are: . Gather and store monitoring data for sustainability measurement. This can be done automatically or manually using different methods, such as monitoring sensors pushing data to the platform or the platform pulling data from monitoring sensors.
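The two ingestion methods named above (sensors push data to the platform, or the platform pulls data from sensors) can be sketched as follows. This is a hedged, self-contained illustration of the pattern; the class and method names are assumptions for this example, not an API of any cited system.

```python
# Sketch of push vs. pull ingestion of monitoring data (assumed names).

class Sensor:
    def __init__(self, sensor_id, value):
        self.sensor_id, self.value = sensor_id, value

    def read(self):
        return {"id": self.sensor_id, "value": self.value}

class Platform:
    def __init__(self):
        self.store = []                  # archived monitoring data

    def receive(self, reading):          # push: the sensor calls this
        self.store.append(reading)

    def poll(self, sensors):             # pull: the platform calls the sensors
        for sensor in sensors:
            self.store.append(sensor.read())

platform = Platform()
platform.receive({"id": "meter-1", "value": 42.0})   # push-style ingestion
platform.poll([Sensor("meter-2", 7.5)])              # pull-style ingestion
```

Push suits event-driven sensors with network access; pull suits platforms that must control sampling rates or integrate legacy devices behind gateways.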
A survey on cloud‐based sustainability governance systems <s> Research cloud systems for sustainability governance <s> Many scientific disciplines are now data and information driven, and new scientific knowledge is often gained by scientists putting together data analysis and knowledge discovery “pipelines”. A related trend is that more and more scientific communities realize the benefits of sharing their data and computational services, and are thus contributing to a distributed data and computational community infrastructure (a.k.a. “the Grid”). However, this infrastructure is only a means to an end and scientists ideally should be bothered little with its existence. The goal is for scientists to focus on development and use of what we call scientific workflows. These are networks of analytical steps that may involve, e.g., database access and querying steps, data analysis and mining steps, and many other steps including computationally intensive jobs on high performance cluster computers. In this paper we describe characteristics of and requirements for scientific workflows as identified in a number of our application projects. We then elaborate on Kepler, a particular scientific workflow system, currently under development across a number of scientific data management projects. We describe some key features of Kepler and its underlying Ptolemyii system, planned extensions, and areas of future research. Kepler is a communitydriven, open source project, and we always welcome related projects and new contributors to join. <s> BIB001 </s> A survey on cloud‐based sustainability governance systems <s> Research cloud systems for sustainability governance <s> Taverna is an application that eases the use and integration of the growing number of molecular biology tools and databases available on the web, especially web services. 
It allows bioinformaticians to construct workflows or pipelines of services to perform a range of different analyses, such as sequence analysis and genome annotation. These high-level workflows can integrate many different resources into a single analysis. Taverna is available freely under the terms of the GNU Lesser General Public License (LGPL) from http://taverna.sourceforge.net/. <s> BIB002 </s> A survey on cloud‐based sustainability governance systems <s> Research cloud systems for sustainability governance <s> Introduction Carbon Monoxide (CO) is a poisonous air pollutant produced from the incomplete oxidation of carbon during the combustion process. It has a direct effect on the human body due to its affinity for blood hemoglobin, which inhibits the absorption of oxygen to the blood. The formation of carboxyhemoglobin complex can profoundly affect human health both on an acute and a chronic basis. CO can also be found inside any house at the level of 0.5-30 ppm [http://www.epa.gov/iaq/co.html] because it can be produced from the combustion of household utilities such as heater, stove, fireplace and automobile exhaust in the attached household garage. As CO is a colorless and an odorless gas, CO detectors need to be installed to monitor the CO concentration in a working environment. For an ambient environment, the most popular way of measuring CO uses the principles of nondispersive infrared absorption (NDIR). Other useful methods are Gas Chromatography with flame ionization detector (GC/FID) or Catalytic oxidation techniques. U.S. Environmental Protection Agency (USEPA) employs NDIR as a traditional reference method for CO monitoring regulation. This method is performed by an analyzer and required standard gas system, pump, monitoring station, air conditioner or heater, computing equipment with appropriate programming, and other related equipment. 
All the necessary equipment needs to be housed and operated inside a room, and protected from rain, dust, and sunlight. Such preventive issues make this method complicated, cumbersome, and expensive. Recent advances in wireless sensor networks (WSNs) make them an attractive solution for monitoring air quality. For instance, a wireless system designed to monitor indoor CO2 concentration is described in the literature. Lindsay Seders et al. deployed a sensor network to monitor water quality in St. Mary's Lake on the University of Notre Dame campus. This wireless sensor network used nodes by Mica2 and MDA300 from Crossbow Inc. [http://www.epa.gov/iaq/co.html]. Cardell-Oliver et al. developed and evaluated a reactive sensor network for monitoring soil moisture, which can adaptively change the sampling rate based on rainfall events. The successful deployment of these systems demonstrates that WSNs can be useful for some environmental monitoring scenarios. Very little work has been done for CO monitoring with wireless sensor networks. Agrawal et al. have indicated that WSNs can provide continuous, real-time data of ambient air quality. The sensor systems, combined with the wireless communication network, give the benefit of convenience in deployment, and lower operation and maintenance cost when compared with NDIR technique. The sensor nodes can be powered by either batteries and/or solar energy sources. With the objective of monitoring the area around the University of Cincinnati (UC), 5 out of 15 planned CO sensors were placed on electric poles as shown in Figure 1. This was done to check the proof of the concept and the rest of sensors will be placed in the near future. <s> BIB003 </s> A survey on cloud‐based sustainability governance systems <s> Research cloud systems for sustainability governance <s> The increasing usage of smart embedded devices in business blurs the line between the virtual and real worlds. 
This creates new opportunities to build applications that better integrate real-time state of the physical world, and hence, provides enterprise services that are highly dynamic, more diverse, and efficient. Service-Oriented Architecture (SOA) approaches traditionally used to couple functionality of heavyweight corporate IT systems, are becoming applicable to embedded real-world devices, i.e., objects of the physical world that feature embedded processing and communication. In such infrastructures, composed of large numbers of networked, resource-limited devices, the discovery of services and on-demand provisioning of missing functionality is a significant challenge. We propose a process and a suitable system architecture that enables developers and business process designers to dynamically query, select, and use running instances of real-world services (i.e., services running on physical devices) or even deploy new ones on-demand, all in the context of composite, real-world business applications. <s> BIB004 </s> A survey on cloud‐based sustainability governance systems <s> Research cloud systems for sustainability governance <s> Increasing concern about energy consumption is leading to infrastructure that continuously monitors consumer energy usage and allow power utilities to provide dynamic feedback to curtail peak power load. Smart Grid infrastructure being deployed globally needs scalable software platforms to rapidly integrate and analyze information streaming from millions of smart meters, forecast power usage and respond to operational events. Cloud platforms are well suited to support such data and compute intensive, always-on applications. We examine opportunities and challenges of using cloud platforms for such applications in the emerging domain of energy informatics. 
<s> BIB005 </s> A survey on cloud‐based sustainability governance systems <s> Research cloud systems for sustainability governance <s> This document introduces the TimeCloud Front End, a webbased interface for the TimeCloud platform that manages large-scale time series in the cloud. While the Back End is built upon scalable, fault-tolerant distributed systems as Hadoop and HBase and takes novel approaches for facilitating data analysis over massive time series, the Front End was built as a simple and intuitive interface for viewing the data present in the cloud, both with simple tabular display and the help of various visualizations. In addition, the Front End implements modelbased views and data fetch on-demand for reducing the amount of work performed at the Back End. <s> BIB006
Considering the building blocks of an end-to-end system for sustainability governance in Figure 1 , several techniques have been developed for different purposes but can be used as parts of sustainability governance. For example, for sensors and the sensor web, several techniques have been developed to capture different types of monitoring data and relay monitoring data to central places, or to allow monitoring data to be accessed. However, they do not follow the cloud computing model: either they follow the web model in which everyone can access the data, such as the sensor web (Gibbons et al., 2003), or they are designed for specific purposes BIB003 . Furthermore, their focus is on monitoring rather than sustainability governance (although data is usually archived). Although several techniques are common, such as data querying, data integration and real-time monitoring, we will not discuss them in this survey since we focus on cloud platforms rather than enabling techniques for sensor data integration. In SOA, different techniques have been developed for integrating sensors via the service model, such as in BIB004 . However, although we consider the integration of sensors into SOA-based platforms a fundamental part of end-to-end facility governance, it is not the focus of our study in this paper. Considering our architectural view of SusGov systems, different techniques have been developed atop cloud infrastructure that can be utilized. Table IV describes relevant techniques for cloud-based sustainability governance: . SusGov DaaS. Investigation of cloud computing for storing and processing sensor data has been conducted recently. BIB006 present techniques to access sensor data stored in their cloud using HBase, etc. Although it has not been tested with facility sustainability governance, it could be useful for the development of SusGov DaaS with respect to monitoring data. However, several issues related to stakeholders and how they access data have not been addressed. . SusGov MOaaS. 
With respect to sensor data and its processing, BIB005 presents how sensor data from smart meters can be processed using cloud virtual machines. Their work neither introduces a SusGov system nor establishes SusGov-specific governance techniques. However, they present ideas about a platform (PaaS) for processing data on the fly. This is similar to recent stream data analysis frameworks that handle events using cloud infrastructures. Since handling events on the fly is one goal of SusGov MOaaS, these works could be useful for the design of SusGov MOaaS atop cloud infrastructures. . SusGov AaaS. Several research works have investigated so-called computational sustainability (Gommes, 2009) , in which processes for analyzing sustainability measurements are developed, e.g. utilizing archived data with workflows, data analytics, etc. There are several workflow systems which are able to analyze data from different sources, such as Kepler BIB001 , Taverna BIB002 and Trident (Trident, 2011) . However, they have not been tested and integrated for sustainability analysis. Many analysis algorithms are still implemented as sequential programs or as R and MATLAB scripts. Patnaik et al. show data analytics techniques for analyzing chillers in data centers, but this work is focused on the data analysis aspect in isolation from cloud sustainability governance systems. Overall, a research system based on the cloud computing model for sustainability governance has not been observed. Considerable research effort has been spent on developing enabling techniques that allow us to easily connect sensor data to the cloud, but there is a lack of integrated systems, a lack of techniques to support multiple stakeholders and governance, and a lack of integration of analysis workflows. 6. 
Open research issues on sustainability governance using cloud computing What we have observed in the previous section is that existing systems have basic support for data storage and data retrieval, but they are still limited to basic monitoring and analysis, with very little, if any, support for multiple stakeholders, complex analysis and compliance processes. To improve the support of sustainability governance using the cloud computing model, we believe we need to address the following points. Linked data concepts and monitored object dependencies Currently, our studied systems mainly support a hierarchical structure of monitored objects, data streams and individual data points based on different specifications. We observed two issues: (1) data exchange among cloud systems for sustainability governance; and (2) complex dependencies among monitored objects in the analysis of sustainability measurements. The first issue is the difficulty of utilizing different cloud systems due to the different data models in different cloud systems. The second issue is that it is difficult to support complex analyses requiring multiple types of data. Due to the diversity of types of monitored objects, we do not expect a single data model to represent reference profiles and monitoring data. However, as complex sustainability analysis requires different types of monitoring data, we expect data models to be linked, reflecting the dependencies among the monitored objects they represent. Currently, generic data models are used for measurements produced by monitoring sensors, but they represent individually monitored objects. We lack a mechanism to specify the above-mentioned dependencies. In particular, most systems provide monitoring data, but metadata about monitored objects is inadequate.
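The kind of dependency mechanism argued for above can be sketched with a minimal data model in which monitored objects carry metadata and explicit links to the objects they depend on, so that an analysis can collect every data source it needs. This is a hypothetical sketch under assumed names, not the data model of any surveyed system.

```python
# Sketch: monitored objects with metadata and explicit dependency links
# (assumed model), so an analysis can traverse to all required objects.

class MonitoredObject:
    def __init__(self, object_id, object_type, depends_on=()):
        self.object_id = object_id
        self.object_type = object_type       # e.g. "room", "chiller"
        self.depends_on = list(depends_on)   # links to other monitored objects

    def transitive_dependencies(self):
        """All objects whose monitoring data an analysis of this object needs."""
        seen, stack = [], list(self.depends_on)
        while stack:
            obj = stack.pop()
            if obj not in seen:
                seen.append(obj)
                stack.extend(obj.depends_on)
        return seen

# A building's energy analysis depends on its rooms, which depend on a chiller.
chiller = MonitoredObject("chiller-1", "chiller")
room = MonitoredObject("room-101", "room", depends_on=[chiller])
building = MonitoredObject("bldg-A", "building", depends_on=[room])
```

In practice such links would be expressed in an interchangeable representation (e.g. linked-data vocabularies) rather than in-memory references, so that dependency graphs can cross cloud-system boundaries.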
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> I. INTRODUCTION <s> Software-defined-network technologies like OpenFlow could change how datacenters, cloud systems, and perhaps even the Internet handle tomorrow's heavy network loads. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> I. INTRODUCTION <s> Software-defined network (SDN) has become one of the most important architectures for the management of largescale complex networks, which may require repolicing or reconfigurations from time to time. SDN achieves easy repolicing by decoupling the control plane from data plane. Thus, the network routers/switches just simply forward packets by following the flow table rules set by the control plane. Currently, OpenFlow is the most popular SDN protocol/standard and has a set of design specifications. Although SDN/OpenFlow is a relatively new area, it has attracted much attention from both academia and industry. In this paper, we will conduct a comprehensive survey of the important topics in SDN/OpenFlow implementation, including the basic concept, applications, language abstraction, controller, virtualization, quality of service, security, and its integration with wireless and optical networks. We will compare the pros and cons of different schemes and discuss the future research trends in this exciting area. This survey can help both industry and academia R&D people to understand the latest progress of SDN/OpenFlow designs. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> I. INTRODUCTION <s> OpenFlow is currently the most commonly deployed Software Defined Networking (SDN) technology. SDN consists of decoupling the control and data planes of a network. A software-based controller is responsible for managing the forwarding information of one or more switches; the hardware only handles the forwarding of traffic according to the rules set by the controller. 
OpenFlow is an SDN technology proposed to standardize the way that a controller communicates with network devices in an SDN architecture. It was proposed to enable researchers to test new ideas in a production environment. OpenFlow provides a specification to migrate the control logic from a switch into the controller. It also defines a protocol for the communication between the controller and the switches. As discussed in this survey paper, OpenFlow-based architectures have specific capabilities that can be exploited by researchers to experiment with new ideas and test novel applications. These capabilities include software-based traffic analysis, centralized control, dynamic updating of forwarding rules and flow abstraction. OpenFlow-based applications have been proposed to ease the configuration of a network, to simplify network management and to add security features, to virtualize networks and data centers and to deploy mobile systems. These applications run on top of networking operating systems such as Nox, Beacon, Maestro, Floodlight, Trema or Node.Flow. Larger scale OpenFlow infrastructures have been deployed to allow the research community to run experiments and test their applications in more realistic scenarios. Also, studies have measured the performance of OpenFlow networks through modelling and experimentation. We describe the challenges facing the large scale deployment of OpenFlow-based networks and we discuss future research directions of this technology. <s> BIB003
At least a decade ago it was recognized that new network abstraction layers for network control functions needed to be developed to both simplify and automate network management. Software Defined Networking (SDN) BIB002 - BIB001 is the design principle that emerged to structure the development of those new abstraction layers. Fundamentally, SDN is defined by three architectural principles , : (i) the separation of control plane functions and data plane functions, (ii) the logical centralization of control, and (iii) programmability of network functions. The first two architectural principles are related: together they give network control functions a wider perspective on the network. The idea is that networks can be made easier to manage (i.e., control and monitor) with a move away from significantly distributed control. A tradeoff must then be struck between the ease of management gained through control centralization and the scalability issues that centralization naturally raises. The SDN abstraction layering consists of three generally accepted layers inspired by computing systems, from the bottom layer to the top layer: (i) the infrastructure layer, (ii) the control layer, and (iii) the application layer, as illustrated in Fig. 1 . The interface between the application layer and the control layer is referred to as the NorthBound Interface (NBI), while the interface between the control layer and the infrastructure layer is referred to as the SouthBound Interface (SBI). A variety of standards are emerging for these interfaces, e.g., the OpenFlow protocol BIB003 for the SBI. The application layer is modeled after software applications that utilize computing resources to complete tasks. 
The control layer is modeled after a computer's Operating System (OS) that manages computer resources (e.g., processors and memory), provides an abstraction layer to simplify interfacing with the computer's devices, and provides a common set of services that all applications can leverage. Device drivers in a computer's OS hide the details of interfacing with many different devices from the applications by offering a simple and unified interface for various device types. In the SDN model, the unified SBI and the control layer functionality together provide the equivalent of a device driver for interfacing with devices in the infrastructure layer, e.g., packet switches.

Optical networks play an important role in our modern information technology due to their high transmission capacities. At the same time, the specific optical (photonic) transmission and switching characteristics, such as circuit, burst, and packet switching on wavelength channels, pose challenges for controlling optical networks. This article presents a comprehensive survey of Software Defined Optical Networks (SDONs). SDONs seek to leverage the flexibility of SDN control for supporting networking applications with an underlying optical network infrastructure. This survey comprehensively covers SDN-related mechanisms that have been studied to date for optical networks.
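The device-driver analogy can be made concrete with a small sketch. All class and method names below (`SouthboundDriver`, `install_forwarding_rule`, `set_path`, etc.) are illustrative assumptions, not the API of any real controller such as ONOS:

```python
from abc import ABC, abstractmethod

class SouthboundDriver(ABC):
    """Hides device-specific SBI details behind a unified interface,
    analogous to a device driver in a computer's OS."""
    @abstractmethod
    def install_forwarding_rule(self, device_id, match, action):
        ...

class OpenFlowDriver(SouthboundDriver):
    def install_forwarding_rule(self, device_id, match, action):
        # A real controller would emit an OpenFlow flow-mod message here.
        return f"OF flow_mod to {device_id}: {match} -> {action}"

class NetconfDriver(SouthboundDriver):
    def install_forwarding_rule(self, device_id, match, action):
        # A real controller would push a NETCONF <edit-config> RPC here.
        return f"NETCONF edit-config to {device_id}: {match} -> {action}"

class Controller:
    """Control layer: exposes one northbound call, regardless of
    which southbound protocol each infrastructure device speaks."""
    def __init__(self):
        self.drivers = {}  # device_id -> driver

    def register(self, device_id, driver):
        self.drivers[device_id] = driver

    def set_path(self, path, match, action="forward"):
        # One NBI request is translated into per-device SBI calls.
        return [self.drivers[dev].install_forwarding_rule(dev, match, action)
                for dev in path]

ctl = Controller()
ctl.register("sw1", OpenFlowDriver())       # a packet switch
ctl.register("roadm1", NetconfDriver())     # an optical network element
msgs = ctl.set_path(["sw1", "roadm1"], match={"dst": "10.0.0.2"})
```

The point of the sketch is that the application issuing `set_path` never learns whether a device is OpenFlow- or NETCONF-managed, just as an application never talks to a disk controller directly.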
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> 1. DSL Fundamentals. Alternatives to DSLs: Fiber, Wireless, and Coax. Worldwide Extent. Voice-Band Modems and DSLs. Transmission Modes. DSL Terminology. Rate Versus Reach. Crosstalk. Enabling and Disabling Forces. Applications. Evolution of Digital Transmission. 2. Types of DSLs. DSL Design Margin. DSL Precursors. Basic Rate ISDN. HDSL. ADSL. VDSL. 3. Twisted-Pair Transmission. Twisted-Wire-Pair Origins. Telephone Network and Loop Plant Characteristics. Line Powering. Sealing Current. Transmission Line Characterization. Noises. Spectral Compatibility. More Two-Port Networks. Three-Port Networks for DSLs. References. 4. Comparison with Other Media. Fiber-to-the-Home. Coax and Hybrid Fiber Coax. Wireless Alternatives. Satellite Services. References. 5. Transmission Duplexing Methods. Four-Wire Duplexing. Echo Cancellation. Time-Division Duplexing. Frequency-Division Multiplexing. References. 6. Basic Digital Transmission Methods. Basic Modulation and Demodulation. Baseband Codes. Passband Codes. References. 7. Loop Impairments, Solutions, and DMT. Intersymbol Interference. Multichannel Line Codes. Trellis Coding. Error Control. References. 8. Initialization, Timing and Performance. Initialization Methods. Adaptation of Receiver and Transmitter. Measurement of Performance. Timing Recovery Methods. References. 9. Operations, Administration Maintenance, and Provisioning. OAM&P Features. Loop Qualification. 10. DSL in the Context of the ISO Reference Model. The ISO Model. Theory and Reality. The Internet Protocol Suite. ATM in the Seven-Layer Model. 11. ADSL: The Bit Pump. ADSL System Reference Model. ATU-C Reference Model. ATU-R Reference Model. Specific Configurations to Support ATM. Framing. Operations and Maintenance. Initialization. Reference. 12. ATM Transmission Convergence on ADSL. Functions of ATM Transmission Convergence. Transmission Convergence in an ADSL Environment. 
Reference. 13. Frame-Based Protocols over ADSL. PPP over a Frame-Based ADSL. FUNI over ADSL. Reference. 14. ADSL in the Context of End-to-End Systems. An Overview of a Generic DSL Architecture. Potential ADSL Services and the Service Requirements. Specific Architectures for Deploying ADSL in Different Business Models. Several ADSL Architectures. References. 15. Network Architecture and Regulation. Private Line. Circuit Switched. Packet Switched. ATM. Remote Terminal. Competitive Data Access Alternatives. Regulation. 16. Standards. ITU. Committee T1. ETSI. ADSL Forum. ATM Forum. DAVIC. IETF. EIA/TIA. IEEE. The Value of Standards and Participation in Their Development. Standards Process. Appendix A: Glossary. Appendix B: Selected Standards and Specifications. Appendix C: Selected T1E1.4 Contributions and ADSL Forum Technical Reports (found on website). Index. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to server channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. 
In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> Software-defined networking (SDN) enables programmable SDN control and management functions at a number of layers, allowing applications to control network resources or information across different technology domains, e.g., Ethernet, wireless, and optical. Current cloud-based services are pushing networks to new boundaries by deploying cutting edge optical technologies to provide scalable and flexible services. SDN combined with the latest optical transport technologies, such as elastic optical networks, enables network operators and cloud service providers to customize their infrastructure dynamically to user/application requirements and therefore minimize the extra capital and operational costs required for hosting new services. In this paper a unified control plane architecture based on OpenFlow for optical SDN tailored to cloud services is introduced. Requirements for its implementation are discussed considering emerging optical transport technologies. Implementations of the architecture are proposed and demonstrated across heterogeneous state-of-the-art optical, packet, and IT resource integrated cloud infrastructure. Finally, its performance is evaluated using cloud use cases and its results are discussed. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. 
Related Work <s> Software defined networking and OpenFlow, which allow operators to control the network using software running on a network operating system within an external controller, provide the maximum flexibility for the operator to control a network, and match the carrier's preferences given its centralized architecture, simplicity, and manageability. In this paper, we report a field trial of an OpenFlow-based unified control plane (UCP) for multilayer multigranularity optical switching networks, verifying its overall feasibility and efficiency, and quantitatively evaluating the latencies for end-to-end path creation and restoration. To the best of our knowledge, the field trial of an OpenFlow-based UCP for optical networks is a world first. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> Software-defined network (SDN) has become one of the most important architectures for the management of large-scale complex networks, which may require repolicing or reconfigurations from time to time. SDN achieves easy repolicing by decoupling the control plane from data plane. Thus, the network routers/switches just simply forward packets by following the flow table rules set by the control plane. Currently, OpenFlow is the most popular SDN protocol/standard and has a set of design specifications. Although SDN/OpenFlow is a relatively new area, it has attracted much attention from both academia and industry. In this paper, we will conduct a comprehensive survey of the important topics in SDN/OpenFlow implementation, including the basic concept, applications, language abstraction, controller, virtualization, quality of service, security, and its integration with wireless and optical networks. We will compare the pros and cons of different schemes and discuss the future research trends in this exciting area. 
This survey can help both industry and academia R&D people to understand the latest progress of SDN/OpenFlow designs. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> OpenFlow is currently the most commonly deployed Software Defined Networking (SDN) technology. SDN consists of decoupling the control and data planes of a network. A software-based controller is responsible for managing the forwarding information of one or more switches; the hardware only handles the forwarding of traffic according to the rules set by the controller. OpenFlow is an SDN technology proposed to standardize the way that a controller communicates with network devices in an SDN architecture. It was proposed to enable researchers to test new ideas in a production environment. OpenFlow provides a specification to migrate the control logic from a switch into the controller. It also defines a protocol for the communication between the controller and the switches. As discussed in this survey paper, OpenFlow-based architectures have specific capabilities that can be exploited by researchers to experiment with new ideas and test novel applications. These capabilities include software-based traffic analysis, centralized control, dynamic updating of forwarding rules and flow abstraction. OpenFlow-based applications have been proposed to ease the configuration of a network, to simplify network management and to add security features, to virtualize networks and data centers and to deploy mobile systems. These applications run on top of networking operating systems such as Nox, Beacon, Maestro, Floodlight, Trema or Node.Flow. Larger scale OpenFlow infrastructures have been deployed to allow the research community to run experiments and test their applications in more realistic scenarios. Also, studies have measured the performance of OpenFlow networks through modelling and experimentation. 
We describe the challenges facing the large scale deployment of OpenFlow-based networks and we discuss future research directions of this technology. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> Software Defined Networking (SDN) is an emerging networking paradigm that separates the network control plane from the data forwarding plane with the promise to dramatically improve network resource utilization, simplify network management, reduce operating cost, and promote innovation and evolution. Although traffic engineering techniques have been widely exploited in the past and current data networks, such as ATM networks and IP/MPLS networks, to optimize the performance of communication networks by dynamically analyzing, predicting, and regulating the behavior of the transmitted data, the unique features of SDN require new traffic engineering techniques that exploit the global network view, status, and flow patterns/characteristics available for better traffic control and management. This paper surveys the state-of-the-art in traffic engineering for SDNs, and mainly focuses on four thrusts including flow management, fault tolerance, topology update, and traffic analysis/characterization. In addition, some existing and representative traffic engineering tools from both industry and academia are explained. Moreover, open research issues for the realization of SDN traffic engineering solutions are discussed in detail. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> This paper gives an overview of software-defined optical networks (SDONs). It explains the general concepts on software-defined networks (SDNs), their relationship with network function virtualization, and also about OpenFlow, which is a pioneer protocol for SDNs. 
It then explains the benefits and challenges of extending SDNs to multilayer optical networks, including flexible grid and elastic optical networks, and how it compares to generalized multi-protocol label switching for implementing a unified control plane. An overview on the industry and research efforts on SDON standardization and implementation is given next, to bring the reader up to speed with the current state of the art in this field. Finally, the paper outlines the benefits achieved by SDONs for network operators, and also some of the important and relevant research problems that need to be addressed. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> This paper discusses SDN for optical access networks, with a focus on SDN overlays for existing networks, a unified control plane for next-generation optical access, and an overview of recent research progress in this area. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> The tight connection between advanced mobile techniques and optical networking has already been made by emerging cloud radio access network architectures, wherein fiber-optic links to/from remote cell sites have been identified as the leading high-speed, low-latency connectivity solution. By taking such fiber-optic mobile fronthaul networks as the reference case, this paper will consider their scaling to meet 5G demands as driven by key 5G mobile techniques, including massive multiple input multiple output (MIMO) and coordinated multipoint (CoMP), network densification via small/pico/femto cells, device-to-device (D2D) connectivity, and an increasingly heterogeneous bring-your-own-device (BYOD) networking environment. 
Ramifications on mobile fronthaul signaling formats, optical component selection and wavelength management, topology evolution and network control will be examined, highlighting the need to move beyond raw common public radio interface (CPRI) solutions, support all wavelength division multiplexing (WDM) optics types, enable topology evolution towards a meshed architecture, and adopt a software-defined networking (SDN)-based network control plane. The proposed optical network evolution approaches are viewed as opportunities for both optimizing user-side quality-of-experience (QoE) and monetizing the underlying optical network. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> This paper explores the applicability of the Software Defined Networking (SDN) paradigm to access networks. In particular, it describes Broadband and Enterprise use cases where SDN can play a role in enabling new network services. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> With the proliferation of mobile demands and increasingly multifarious services and applications, mobile Internet has been an irreversible trend. Unfortunately, the current mobile and wireless network (MWN) faces a series of pressing challenges caused by the inherent design. In this paper, we extend two latest and promising innovations of Internet, software-defined networking and network virtualization, to mobile and wireless scenarios. We first describe the challenges and expectations of MWN, and analyze the opportunities provided by the software-defined wireless network (SDWN) and wireless network virtualization (WNV). Then, this paper focuses on SDWN and WNV by presenting the main ideas, advantages, ongoing researches and key technologies, and open issues respectively. 
Moreover, we interpret that these two technologies highly complement each other, and further investigate efficient joint design between them. This paper confirms that SDWN and WNV may efficiently address the crucial challenges of MWN and significantly benefit the future mobile and wireless network. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> Software defined networking (SDN) decouples the network control and data planes. The network intelligence and state are logically centralized and the underlying network infrastructure is abstracted from applications. SDN enhances network security by means of global visibility of the network state where a conflict can be easily resolved from the logically centralized control plane. Hence, the SDN architecture empowers networks to actively monitor traffic and diagnose threats to facilitates network forensics, security policy alteration, and security service insertion. The separation of the control and data planes, however, opens security challenges, such as man-in-the middle attacks, denial of service (DoS) attacks, and saturation attacks. In this paper, we analyze security threats to application, control, and data planes of SDN. The security platforms that secure each of the planes are described followed by various security approaches for network-wide security in SDN. SDN security is analyzed according to security dimensions of the ITU-T recommendation, as well as, by the costs of security solutions. In a nutshell, this paper highlights the present and future security challenges in SDN and future directions for secure SDN. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> SDN is an emerging paradigm currently evidenced as a new driving force in the general area of computer networks. Many investigations have been carried out in the last few years about the benefits and drawbacks in adopting SDN. 
However, there are few discussions on how to manage networks based on this new paradigm. This article contributes to this discussion by identifying some of the main management requirements of SDN. Moreover, we describe current proposals and highlight major challenges that need to be addressed to allow wide adoption of the paradigm and related technology. <s> BIB014 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> Satellite networks have traditionally been considered for specific purposes. Recently, new satellite technologies have been pushed to the market enabling high-performance satellite access networks. On the other hand, network architectures are taking advantage of emerging technologies such as software-defined networking (SDN), network virtualization and network functions virtualization (NFV). Therefore, benefiting communications services over satellite networks from these new technologies at first, and their seamless integration with terrestrial networks at second, are of great interest and importance. In this paper, and through comprehensive use cases, the advantages of introducing network programmability and virtualization using SDN and/or NFV in satellite networks are investigated. The requirements to be fulfilled in each use case are also discussed. <s> BIB015 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> Since wireless network virtualization enables abstraction and sharing of infrastructure and radio spectrum resources, the overall expenses of wireless network deployment and operation can be reduced significantly. Moreover, wireless network virtualization can provide easier migration to newer products or technologies by isolating part of the network. 
Despite the potential vision of wireless network virtualization, several significant research challenges remain to be addressed before widespread deployment of wireless network virtualization, including isolation, control signaling, resource discovery and allocation, mobility management, network management and operation, and security as well as non-technical issues such as governance regulations, etc. In this paper, we provide a brief survey on some of the works that have already been done to achieve wireless network virtualization, and discuss some research issues and challenges. We identify several important aspects of wireless network virtualization: overview, motivations, framework, performance metrics, enabling technologies, and challenges. Finally, we explore some broader perspectives in realizing wireless network virtualization. <s> BIB016 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> Wireless Sensor Networks (WSNs) are the key components of the emerging Internet-of-Things (IoT) paradigm. They are now ubiquitous and used in a plurality of application domains. WSNs are still domain specific and usually deployed to support a specific application. However, as WSNs' nodes are becoming more and more powerful, it is getting more and more pertinent to research how multiple applications could share a very same WSN infrastructure. Virtualization is a technology that can potentially enable this sharing. This paper is a survey on WSN virtualization. It provides a comprehensive review of the state-of-the-art and an in-depth discussion of the research issues. We introduce the basics of WSN virtualization and motivate its pertinence with carefully selected scenarios. Existing works are presented in detail and critically evaluated using a set of requirements derived from the scenarios. The pertinent research projects are also reviewed. Several research issues are also discussed with hints on how they could be tackled. 
<s> BIB017 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Related Work <s> Software-defined networking (SDN) features the decoupling of the control plane and data plane, a programmable network and virtualization, which enables network infrastructure sharing and the "softwarization" of the network functions. Recently, many research works have tried to redesign the traditional mobile network using two of these concepts in order to deal with the challenges faced by mobile operators, such as the rapid growth of mobile traffic and new services. In this paper, we first provide an overview of SDN, network virtualization, and network function virtualization, and then describe the current LTE mobile network architecture as well as its challenges and issues. By analyzing and categorizing a wide range of the latest research works on SDN and virtualization in LTE mobile networks, we present a general architecture for SDN and virtualization in mobile networks (called SDVMN) and then propose a hierarchical taxonomy based on the different levels of the carrier network. We also present an in-depth analysis about changes related to protocol operation and architecture when adopting SDN and virtualization in mobile networks. In addition, we list specific use cases and applications that benefit from SDVMN. Last but not least, we discuss the open issues and future research directions of SDVMN. <s> BIB018
The general principles of SDN have been extensively covered in several surveys, see for instance, BIB005 , , BIB006 , BIB007 - . SDN security has been surveyed in BIB013 , , while management of SDN networks has been surveyed in BIB014 and SDN-based satellite networking is considered in BIB015 . To date, there have been relatively few overview and survey articles on SDONs. Zhang et al. BIB002 have presented a thorough survey on flexible optical networking based on Orthogonal Frequency Division Multiplexing (OFDM) in core (backbone) networks. The survey briefly notes how OFDM-based elastic networking can facilitate network virtualization and surveys a few studies on OFDM-based network virtualization in core networks.

Fig. 1. The infrastructure layer implements the data plane, e.g., with OpenFlow (OF) switches BIB006 or network elements (devices) controlled with the NETCONF protocol . A controller at the control layer, e.g., the ONOS controller , controls the infrastructure layer based on the application layer requirements. The interface between the application and control layers is commonly referred to as the NorthBound Interface (NBI), while the interface between the control and infrastructure layers is commonly referred to as the SouthBound Interface (SBI). The WestBound Interface (WBI) interconnects multiple SDN domains, while the EastBound Interface (EBI) interconnects with non-SDN domains.

Bhaumik et al. BIB008 have presented an overview of SDN and network virtualization concepts and outlined principles for extending SDN and network virtualization concepts to the field of optical networking. Their focus has been mainly on industry efforts, reviewing white papers on SDN strategies from leading networking companies, such as Cisco, Juniper, Hewlett-Packard, Alcatel-Lucent, and Huawei. 
A few selected academic research projects on general SDN optical networks, namely projects reported in the journal articles BIB003 , BIB004 and a few related conference papers, have also been reviewed by Bhaumik et al. BIB008 . In contrast to Bhaumik et al. BIB008 , we provide a comprehensive up-to-date review of academic research on SDONs. Whereas Bhaumik et al. BIB008 presented a small sampling of SDON research organized by research projects, we present a comprehensive SDON survey that is organized according to the SDN infrastructure, control, and application layer architecture. For the SDON sub-domain of access networks, Cvijetic BIB009 has given an overview of access network challenges that can be addressed with SDN. These challenges include the lack of support for on-demand modification of traffic transmission policies and rules, as well as the restriction to vendor-proprietary policies, rules, and software. Cvijetic BIB009 also offers a very brief overview of research progress for SDN-based optical access networks, mainly focusing on studies on the physical (photonics) infrastructure layer. Cvijetic BIB010 has further expanded the overview of SDON challenges by considering the incorporation of 5G wireless systems. Cvijetic BIB010 has noted that SDN access networks are highly promising for low-latency and high-bandwidth back-hauling from 5G cell base stations and briefly surveyed the requirements and areas of future research required for integrating 5G with SDON access networks. A related overview of general software defined access networks based on a variety of physical transmission media, including copper Digital Subscriber Line (DSL) BIB001 and Passive Optical Networks (PONs), has been presented by Kerpez et al. . Bitar BIB011 has surveyed use cases for SDN controlled broadband access, such as on-demand bandwidth boost, dynamic service re-provisioning, as well as value-added services and service protection. 
Bitar BIB011 has discussed the commercial perspective of access networks enhanced with SDN to add cost-value to network operation. Almeida Amazonas et al. have surveyed the key issues of incorporating SDN in optical and wireless access networks. They briefly outlined the obstacles posed by the different specific physical characteristics of optical and wireless access networks. Although our focus is on optical networks, for completeness we note that for the field of wireless and mobile networks, SDN-based networking mechanisms have been surveyed in - BIB012 while network virtualization has been surveyed in BIB016 for general wireless networks and in BIB017 for wireless sensor networks. SDN and virtualization strategies for LTE wireless cellular networks have been surveyed in BIB018 . SDN-based 5G wireless network developments for mobile networks have been outlined in - .
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Survey Organization <s> A flexible and programmable forwarding plane is essential to maximize the value of Software-Defined Networks (SDN). In this paper, we propose Protocol-Oblivious Forwarding (POF) as a key enabler for highly flexible and programmable SDN. Our goal is to remove any dependency on protocol-specific configurations on the forwarding elements and enhance the data-path with new stateful instructions to support genuine software defined networking behavior. A generic flow instruction set (FIS) is defined to fulfill this purpose. POF helps to lower network cost by using commodity forwarding elements and to create new value by enabling numerous innovative network services. We built both hardware-based and open source software-based prototypes to demonstrate the feasibility and advantages of POF. We report the preliminary evaluation results and the insights we learnt from the experiments. POF is future-proof and expressive. We believe it represents a promising direction to evolve the OpenFlow protocol and the future SDN forwarding elements. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Survey Organization <s> P4 is a high-level language for programming protocol-independent packet processors. P4 works in conjunction with SDN control protocols like OpenFlow. In its current form, OpenFlow explicitly specifies protocol headers on which it operates. This set has grown from 12 to 41 fields in a few years, increasing the complexity of the specification while still not providing the flexibility to add new headers. In this paper we propose P4 as a strawman proposal for how OpenFlow should evolve in the future. We have three goals: (1) Reconfigurability in the field: Programmers should be able to change the way switches process packets once they are deployed. (2) Protocol independence: Switches should not be tied to any specific network protocols. 
(3) Target independence: Programmers should be able to describe packet-processing functionality independently of the specifics of the underlying hardware. As an example, we describe how to use P4 to configure a switch to add a new hierarchical label. <s> BIB002
We have mainly organized our survey according to the three-layer SDN architecture illustrated in Fig. 1 . In particular, we have organized the survey in a bottom-up manner, surveying first SDON studies focused on the infrastructure layer in Section III. Subsequently, we survey SDON studies focused on the control layer in Section IV. The virtualization of optical networks is commonly closely related to the SDN control layer. Therefore, we survey SDON studies focused on virtualization in Section V, right after the SDON control layer section. Resuming the journey up the layers in Fig. 1 , we survey SDON studies focused on the application layer in Section VI. We survey mechanisms for the overarching orchestration of the application layer and lower layers, possibly across multiple network domains (see Fig. 2 ), in Section VII. Finally, we outline open challenges and future research directions in Section VIII and conclude the survey in Section IX.

Traditional network elements autonomously establish forwarding actions, such as switching or routing, based on self-evaluated topology information that is often obtained through proprietary vendor-specific algorithms. Therefore, the configuration setups of traditional network elements are generally not reconfigurable without a service disruption, limiting network flexibility. In contrast, SDN decouples the autonomous control functions, such as forwarding algorithms and neighbor discovery, from the network nodes and moves these control functions out of the infrastructure to a centrally controlled logical node, the controller. In doing so, the network elements act only as simple switches that execute the instructions of the controller. This decoupling reduces network element complexity and improves reconfigurability. In addition to decoupling the control and data planes, SDN has significantly improved packet modification capabilities at the line rates of the network elements.
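The flexible line-rate match-and-modify capability described above can be illustrated with a minimal sketch in the spirit of protocol-independent packet processors such as POF and P4: a rule matches raw (offset, length, value) byte ranges against the packet, so the forwarding element needs no knowledge of any specific protocol. All names here are illustrative, not the actual POF or P4 APIs.

```python
# Protocol-oblivious matching sketch: a rule is a list of
# (offset, length, value) tuples checked against the raw packet bytes,
# so the "switch" is unaware of protocol headers.

def match_rule(packet: bytes, rule) -> bool:
    """Return True if every (offset, length, value) tuple in the rule
    matches the corresponding slice of the raw packet bytes."""
    return all(packet[off:off + ln] == val for off, ln, val in rule)

# Example: match the EtherType field (bytes 12-13 of an Ethernet frame)
# against 0x0800 (IPv4), expressed purely as offset/length/value.
ipv4_rule = [(12, 2, b"\x08\x00")]

frame = bytes(12) + b"\x08\x00" + b"payload"
assert match_rule(frame, ipv4_rule)                               # IPv4 matches
assert not match_rule(bytes(12) + b"\x86\xdd" + b"x", ipv4_rule)  # IPv6 does not
```

Because the rule refers only to byte positions, a new header format can be supported by installing new rules, without changing the forwarding element itself.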
P4 BIB002 is a programmable protocol-independent packet processor that can match arbitrary fields within any packet format and apply arbitrary (programmed) actions on the packet before forwarding. A similar forwarding mechanism, Protocol-Oblivious Forwarding (POF), has been proposed by Huawei Technologies BIB001 . 2) Control Layer: The control layer is responsible for programming (configuring) the network elements (switches) via the SBIs. The SDN controller is a logical entity that determines the southbound instructions to configure the network infrastructure based on application layer requirements. To efficiently manage the network, SDN controllers can request information, such as flow statistics, topology information, neighbor relations, and link status, from the network elements (nodes). The software entity that implements the SDN controller is often referred to as a Network Operating System (NOS). Generally, a NOS can be implemented independently of SDN, i.e., without supporting SDN. On the other hand, in addition to supporting SDN operations, a NOS can provide advanced capabilities, such as virtualization, application scheduling, and database management. The Open Network Operating System (ONOS) is an example of an SDN-based NOS with a distributed control architecture designed to operate over Wide Area Networks (WANs). Furthermore, Cisco has recently developed the one Platform Kit (onePK), which consists of a set of Application Program Interfaces (APIs) that allow network applications to control Cisco network devices without a command line interface. The onePK libraries act as an SBI for Cisco ONE controllers and are based on C and Java compilers. 3) Application Layer: The application layer comprises network applications and services that utilize the control plane to realize network functions over the physical or virtual infrastructure.
Examples of network applications include network topology discovery, provisioning, and fault restoration. The SDN controller presents an abstracted view of the network to the SDN applications to facilitate the realization of application functionalities. The applications can also include higher levels of network management, such as network data analytics, or specialized functions requiring processing in large data centers. For instance, the Central Office Re-architected as a Data center (CORD) is an SDN application based on ONOS that implements the typical central office network functions, such as optical line termination, as well as BaseBand Unit (BBU) and Data Over Cable Interface (DOCSIS) processing as virtualized software entities, i.e., as SDN applications.

Fig. 2. Overview of SDN orchestrator and SDN controllers: The SDN orchestrator coordinates and manages at a higher abstracted layer, above the SDN applications and SDN controllers. SDN controllers, which may be in a hierarchy (see left part), implement the orchestrator decisions. A virtualization hypervisor may intercept the SouthBound Interfaces (SBIs) to create multiple virtual networks from a given physical network infrastructure. (The optical orchestrator on the right can be ignored for now and will be addressed in Section VIII-F.)

4) Orchestration Layer: Although the orchestration layer is commonly not considered one of the main SDN architectural layers illustrated in Fig. 1 , as SDN systems become more complex, orchestration becomes increasingly important. We therefore introduce the orchestration layer as an important SDN architectural layer in this background section. Typically, an SDN orchestrator is the entity that coordinates software modules within a single SDN controller, a hierarchical structure of multiple SDN controllers, or a set of multiple SDN controllers in a "flat" arrangement (i.e., without a hierarchy), as illustrated in Fig. 2 .
An SDN controller, in contrast, can be viewed as a logically centralized single control entity that appears as the directly controlling entity to the network elements. The SDN controller is responsible for signaling control actions or rules that are typically predefined (e.g., through OpenFlow) to the network elements. In contrast, the SDN orchestrator makes control decisions that are generally not predefined. More specifically, the SDN orchestrator may make an automated decision with the help of SDN applications or seek a manual recommendation from user inputs; therefore, the results are generally not predefined. These orchestrator decisions (actions/configurations) are then delegated via the SDN controllers and the SBIs to the network elements. Intuitively speaking, SDN orchestration can be viewed as a distinct abstracted (higher) layer for coordination and management that is positioned above the SDN control and application layers. Therefore, we generalize the term SDN orchestrator to denote an entity that realizes a wider, more general (more encompassing) network functionality compared to the SDN controllers. For instance, a cloud SDN orchestrator can instantiate and tear down Virtual Machines (VMs) according to the cloud workload, i.e., make decisions that span multiple network domains and layers. In contrast, SDN controllers realize more specific network functions, such as routing and path computation.
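The orchestrator/controller split just described can be sketched as follows: the orchestrator makes a high-level, not-predefined decision spanning domains (here, placing a VM in the least-loaded domain) and delegates it to per-domain controllers, which translate it into predefined rules for their network elements. All class and method names are illustrative, not any real orchestrator's API.

```python
# Minimal sketch of orchestrator-to-controller delegation.

class Controller:
    """Per-domain SDN controller: turns orchestrator decisions into
    predefined rules pushed to its network elements."""
    def __init__(self, domain):
        self.domain = domain
        self.flow_table = []          # rules destined for the data plane

    def install_rule(self, rule):
        self.flow_table.append(rule)  # predefined action, e.g. via OpenFlow

class Orchestrator:
    """Coordinates multiple controllers at a higher abstraction layer."""
    def __init__(self, controllers):
        self.controllers = {c.domain: c for c in controllers}

    def place_vm(self, vm_id, domain_loads):
        # High-level decision across domains: pick the least-loaded one.
        domain = min(domain_loads, key=domain_loads.get)
        self.controllers[domain].install_rule(("route-to", vm_id))
        return domain

ctrl_access, ctrl_core = Controller("access"), Controller("core")
orch = Orchestrator([ctrl_access, ctrl_core])
chosen = orch.place_vm("vm1", {"access": 0.7, "core": 0.2})
assert chosen == "core"
assert ctrl_core.flow_table == [("route-to", "vm1")]
```

The design point mirrors the text: the placement logic lives only in the orchestrator, while each controller merely realizes the resulting specific network function in its own domain.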
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. SDN Interfaces 1) Northbound Interfaces (NBIs): <s> Constraint-based path computation is a fundamental building block for traffic engineering systems such as Multiprotocol Label Switching (MPLS) and Generalized Multiprotocol Label Switching (GMPLS) networks. Path computation in large, multi-domain, multi-region, or multi-layer networks is complex and may require special computational components and cooperation between the different network domains. This document specifies the architecture for a Path Computation Element (PCE)-based model to address this problem space. This document does not attempt to provide a detailed description of all the architectural components, but rather it describes a set of building blocks for the PCE architecture from which solutions may be constructed. This memo provides information for the Internet community. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. SDN Interfaces 1) Northbound Interfaces (NBIs): <s> This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI.
Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools, and we encourage you to consider deploying OpenFlow in your university network too. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. SDN Interfaces 1) Northbound Interfaces (NBIs): <s> This document specifies the Path Computation Element (PCE) Communication Protocol (PCEP) for communications between a Path Computation Client (PCC) and a PCE, or between two PCEs. Such interactions include path computation requests and path computation replies as well as notifications of specific states related to the use of a PCE in the context of Multiprotocol Label Switching (MPLS) and Generalized MPLS (GMPLS) Traffic Engineering. PCEP is designed to be flexible and extensible so as to easily allow for the addition of further messages and objects, should further requirements be expressed in the future. [STANDARDS-TRACK] <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. SDN Interfaces 1) Northbound Interfaces (NBIs): <s> OpenFlow is currently the most commonly deployed Software Defined Networking (SDN) technology. SDN consists of decoupling the control and data planes of a network. A software-based controller is responsible for managing the forwarding information of one or more switches; the hardware only handles the forwarding of traffic according to the rules set by the controller. OpenFlow is an SDN technology proposed to standardize the way that a controller communicates with network devices in an SDN architecture. It was proposed to enable researchers to test new ideas in a production environment. OpenFlow provides a specification to migrate the control logic from a switch into the controller. It also defines a protocol for the communication between the controller and the switches.
As discussed in this survey paper, OpenFlow-based architectures have specific capabilities that can be exploited by researchers to experiment with new ideas and test novel applications. These capabilities include software-based traffic analysis, centralized control, dynamic updating of forwarding rules and flow abstraction. OpenFlow-based applications have been proposed to ease the configuration of a network, to simplify network management and to add security features, to virtualize networks and data centers and to deploy mobile systems. These applications run on top of networking operating systems such as Nox, Beacon, Maestro, Floodlight, Trema or Node.Flow. Larger scale OpenFlow infrastructures have been deployed to allow the research community to run experiments and test their applications in more realistic scenarios. Also, studies have measured the performance of OpenFlow networks through modelling and experimentation. We describe the challenges facing the large scale deployment of OpenFlow-based networks and we discuss future research directions of this technology. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. SDN Interfaces 1) Northbound Interfaces (NBIs): <s> REST architectural style has become a prevalent choice for distributed resources, such as the northbound API of software-defined networking (SDN). As services often undergo frequent changes and updates, the corresponding REST APIs need to change and update accordingly. To allow REST APIs to change and evolve without breaking its clients, a REST API can be designed to facilitate hypertext-driven navigation and its related mechanisms to deal with structure changes in the API. This paper addresses the issues in hypertext-driven navigation in REST APIs from three aspects. 
First, we present REST Chart, a Petri-Net-based REST service description framework and language to design extensible REST APIs, and it is applied to cope with the rapid evolution of SDN northbound APIs. Second, we describe some important design patterns, such as backtracking and generator, within the REST Chart framework to navigate through large scale APIs in the RESTful architecture. Third, we present a client side differential cache mechanism to reduce the overhead of hypertext-driven navigation, addressing a major issue that affects the application of REST API. The proposed approach is applied to applications in SDN, which is integrated with a generalized SDN controller, SOX. The benefits of the proposed approach are verified in different conditions. Experimental results on SDN applications show that on average, the proposed cache mechanism reduces the overhead of using the hypertext-driven REST API by 66%, while fully maintaining the desired flexibility and extensibility of the REST API. <s> BIB005
A logical interface that interconnects the SDN controller and a software entity operating at the application layer is commonly referred to as a NorthBound Interface (NBI) or as an Application-Controller Plane Interface (A-CPI). a) REST: REpresentational State Transfer (REST) is generally defined as a software architectural style that supports flexibility, interoperability, and scalability. In the context of the SDN NBI, REST is commonly defined as an API that meets the REST architectural style BIB005 , i.e., a so-called RESTful API:
• Client-Server: The two software entities should follow the client-server model. In SDN, the controller can be the server and the application can be the client. This allows multiple heterogeneous SDN applications to coexist and operate over a common SDN controller.
• Stateless: The client is responsible for managing all the states and the server acts upon the client's request. In SDN, the applications collect and maintain the states of the network, while the controller follows the instructions from the applications.
• Caching: The client has to support the temporary local storage of information such that interactions between the client and server are reduced so as to improve performance and scalability.
• Uniform Interface Contract: An overarching technical interface must be followed across all services using the REST API. For example, the same data format, such as JavaScript Object Notation (JSON) or eXtensible Markup Language (XML), has to be followed for all interactions sharing the common interface.
• Layered System: In a multilayered architectural solution, the interface should only be concerned with the next immediate node and not beyond, thus allowing more layers to be inserted, modified, or removed without affecting the rest of the system.
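The stateless, uniform-interface style of a RESTful NBI can be sketched as follows: the application (client) assembles a self-contained request that carries all the state the controller (server) needs, addressed to a uniformly named resource and encoded in a single agreed data format (JSON). The resource path and JSON schema below are hypothetical, not taken from any particular controller's API, and no request is actually sent.

```python
# Sketch of assembling a RESTful NBI request for installing a flow rule.
import json

def build_flow_request(controller_url, switch_id, match, actions):
    """Assemble a self-contained (stateless) flow-installation request:
    method, resource URL, headers, and JSON body."""
    resource = f"{controller_url}/flows/{switch_id}"   # uniform resource naming
    headers = {"Content-Type": "application/json"}     # one format for all calls
    body = json.dumps({"match": match, "actions": actions})
    return "POST", resource, headers, body

method, url, headers, body = build_flow_request(
    "http://controller.example:8181", "sw1",
    match={"eth_type": 2048, "ipv4_dst": "10.0.0.2"},
    actions=[{"output": 3}])

assert method == "POST" and url.endswith("/flows/sw1")
assert json.loads(body)["actions"] == [{"output": 3}]
```

Because each request is self-contained, the controller keeps no per-client session state, which is what lets many heterogeneous applications share one controller, as noted above.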
2) Southbound Interfaces (SBIs): A logical interface that interconnects the SDN controller and the network elements operating on the infrastructure layer (data plane) is commonly referred to as a SouthBound Interface (SBI) or as the Data-Controller Plane Interface (D-CPI). Although a higher-level connection, such as a UDP or TCP connection, is sufficient for enabling the communication between two entities of the SDN architecture, e.g., the controller and the network elements, specific SBI protocols have been proposed. These SBI protocols are typically not interoperable and are thus limited to working with SBI protocol-specific network elements (e.g., an OpenFlow switch does not work with the NETCONF protocol). a) OpenFlow Protocol: The interaction between an OpenFlow switching element (data plane) and an OpenFlow controller (control plane) is carried out through the OpenFlow protocol BIB004 , BIB002 . This SBI (or D-CPI) is therefore also sometimes referred to as the OpenFlow control channel. SDN mainly operates through packet flows that are identified through matches on prescribed packet fields specified in the OpenFlow protocol specification. For matched packets, SDN switches then take prescribed actions, e.g., process the flow's packets in a particular way, such as dropping a packet, duplicating it on a different port, or modifying the header information. b) Path Computation Element Protocol (PCEP): The PCEP enables communication between the Path Computation Client (PCC) of the network elements and the Path Computation Element (PCE) residing within the controller. The PCE centrally computes the paths based on constraints received from the network elements. Computed paths are then forwarded to the individual network elements through PCEP BIB001 , BIB003 . c) Network Configuration (NETCONF) Protocol: The NETCONF protocol provides mechanisms to configure, modify, and delete configurations on a network device.
Configuration data and protocol messages are encoded in the NETCONF protocol using the eXtensible Markup Language (XML). Remote procedure calls are used to realize the NETCONF protocol operations. Therefore, only devices that are enabled with the required remote procedure calls allow the NETCONF protocol to remotely modify device configurations. d) Border Gateway Protocol Link State Distribution (BGP-LS) Protocol: The central controller needs a topology information database, also known as a Traffic Engineering Database (TED), for optimized end-to-end path computation. The controller has to request the information for building the TED, such as topology and bandwidth utilization, via the SBIs from the network elements. This information can be gathered by a BGP extension, which is referred to as BGP-LS.
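All of these SBIs ultimately program per-flow state in the network elements. The OpenFlow match/action model described above can be sketched as a prioritized rule table: the switch matches a packet's header fields against the installed rules and applies the action of the highest-priority match, falling back to a table-miss action (sending the packet to the controller) when nothing matches. Field names and actions here are illustrative, not the OpenFlow wire format.

```python
# Simplified flow-table lookup in the style of OpenFlow match/action.

def lookup(flow_table, pkt_headers):
    """Return the action of the highest-priority matching rule,
    or the table-miss action (send to controller) if none matches."""
    for prio, match, action in sorted(flow_table, key=lambda r: r[0],
                                      reverse=True):
        if all(pkt_headers.get(k) == v for k, v in match.items()):
            return action
    return ("to_controller",)     # table-miss: ask the controller

# Rules installed by the controller: (priority, match fields, action).
table = [
    (10, {"ipv4_dst": "10.0.0.2"}, ("output", 3)),
    (5,  {"eth_type": 0x0800},     ("drop",)),
]

assert lookup(table, {"eth_type": 0x0800, "ipv4_dst": "10.0.0.2"}) == ("output", 3)
assert lookup(table, {"eth_type": 0x0800, "ipv4_dst": "10.0.0.9"}) == ("drop",)
assert lookup(table, {"eth_type": 0x86dd}) == ("to_controller",)
```

The table-miss path is what realizes the reactive control loop: an unmatched packet triggers the controller, which can then install a new rule for the flow.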
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> The complete instruction-by-instruction simulation of one computer system on a different system is a well-known computing technique. It is often used for software development when a hardware base is being altered. For example, if a programmer is developing software for some new special purpose (e.g., aerospace) computer X which is under construction and as yet unavailable, he will likely begin by writing a simulator for that computer on some available general-purpose machine G. The simulator will provide a detailed simulation of the special-purpose environment X, including its processor, memory, and I/O devices. Except for possible timing dependencies, programs which run on the “simulated machine X” can later run on the “real machine X” (when it is finally built and checked out) with identical effect. The programs running on X can be arbitrary — including code to exercise simulated I/O devices, move data and instructions anywhere in simulated memory, or execute any instruction of the simulated machine. The simulator provides a layer of software filtering which protects the resources of the machine G from being misused by programs on X. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> From the Publisher: ::: This fully updated and expanded second edition of Optical Networks: A Practical Perspective succeeds the first as the authoritative source for information on optical networking technologies and techniques. Written by two of the field's most respected individuals, it covers componentry and transmission in detail but also emphasizes the practical networking issues that affect organizations as they evaluate, deploy, or develop optical solutions. 
This book captures all the hard-to-find information on architecture, control and management, and other communications topics that will affect you every step of the way, from planning to decision-making to implementation to ongoing maintenance. If your goal is to thoroughly understand practical optical networks, this book should be your first and foremost resource. Features: focuses on practical, networking-specific issues, everything you need to know to implement currently available optical solutions; provides the transmission and component details you need to understand and assess competing technologies; offers updated and expanded coverage of propagation, lasers and optical switching technology, network design, transmission design, IP over WDM, wavelength routing, optical standards, and more. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Metro wavelength-division multiplexed (WDM) networks play an important role in the emerging Internet hierarchy; they interconnect the backbone WDM networks and the local-access networks. The current circuit-switched SONET/synchronous digital hierarchy (SDH)-over-WDM-ring metro networks are expected to become a serious bottleneck-the so-called metro gap-as they are faced with an increasing amount of bursty packet data traffic and quickly increasing bandwidths in the backbone networks and access networks. Innovative metro WDM networks that are highly efficient and able to handle variable-size packets are needed to alleviate the metro gap. In this paper, we study an arrayed-waveguide grating (AWG)-based single-hop WDM metro network. We analyze the photonic switching of variable-size packets with spatial wavelength reuse. We derive computationally efficient and accurate expressions for the network throughput and delay.
Our extensive numerical investigations-based on our analytical results and simulations-reveal that spatial wavelength reuse is crucial for efficient photonic packet switching. In typical scenarios, spatial wavelength reuse increases the throughput by 60% while reducing the delay by 40%. Also, the throughput of our AWG-based network with spatial wavelength reuse is roughly 70% larger than the throughput of a comparable single-hop WDM network based on a passive star coupler (PSC). <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Wireless mesh networks (WMNs) consist of mesh routers and mesh clients, where mesh routers have minimal mobility and form the backbone of WMNs. They provide network access for both mesh and conventional clients. The integration of WMNs with other networks such as the Internet, cellular, IEEE 802.11, IEEE 802.15, IEEE 802.16, sensor networks, etc., can be accomplished through the gateway and bridging functions in the mesh routers. Mesh clients can be either stationary or mobile, and can form a client mesh network among themselves and with mesh routers. WMNs are anticipated to resolve the limitations and to significantly improve the performance of ad hoc networks, wireless local area networks (WLANs), wireless personal area networks (WPANs), and wireless metropolitan area networks (WMANs). They are undergoing rapid progress and inspiring numerous deployments. WMNs will deliver wireless services for a large variety of applications in personal, local, campus, and metropolitan areas. Despite recent advances in wireless mesh networking, many research challenges remain in all protocol layers. This paper presents a detailed study on recent advances and open research issues in WMNs. System architectures and applications of WMNs are described, followed by discussing the critical factors influencing protocol design. 
Theoretical network capacity and the state-of-the-art protocols for WMNs are explored with an objective to point out a number of open research issues. Finally, testbeds, industrial practice, and current standard activities related to WMNs are highlighted. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Feature Issue on Optical Ethernet (OE) Ethernet passive optical network (EPON) efficiency issues in both upstream and downstream directions are discussed in detail, describing each component of the overall transmission overhead as well as quantifying their effect on the system's performance and comparing them with the other existing passive optical network (PON) access systems, namely, asynchronous transfer mode PON (APON) and generic framing PON (GPON). For EPON, two main transmission overhead groups are defined, namely, Ethernet encapsulation overhead and EPON-specific scheduling overhead. Simulations are performed using the source aggregation algorithm (SAA) to verify the Ethernet encapsulation overhead for various synthetic and measured packet size distributions (PSDs). A SAA based an EPON simulator is used to verify both upstream and downstream overall channel efficiencies. The obtained simulation results closely match the theoretical limits estimated based on the IEEE 802.3ah standard. An estimated throughput of 820 to 900 Mbits/s is available in the upstream direction, whereas in the downstream direction effective throughput ranges from 915 to 935 Mbits/s. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Network virtualization has long been a goal of of the network research community. With it, multiple isolated logical networks each with potentially different addressing and forwarding mechanisms can share the same physical infrastructure. Typically this is achieved by taking advantage of the flexibility of software (e.g. 
[20, 23]) or by duplicating components in (often specialized) hardware[19]. In this paper we present a new approach to switch virtualization in which the same hardware forwarding plane can be shared among multiple logical networks, each with distinct forwarding logic. We use this switch-level virtualization to build a research platform which allows multiple network experiments to run side-by-side with production traffic while still providing isolation and hardware forwarding speeds. We also show that this approach is compatible with commodity switching chipsets and does not require the use of programmable hardware such as FPGAs or network processors. We build and deploy this virtualization platform on our own production network and demonstrate its use in practice by running five experiments simultaneously within a campus network. Further, we quantify the overhead of our approach and evaluate the completeness of the isolation between virtual slices. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Ethernet Passive Optical Network (EPON) has been widely considered as a promising technology for implementing the FTTx solutions to the ''last mile'' bandwidth bottleneck problem. Bandwidth allocation is one of the critical issues in the design of EPON systems. In an EPON system, multiple optical network units (ONUs) share a common upstream channel for data transmission. To efficiently utilize the limited bandwidth of the upstream channel, an EPON system must dynamically allocate the upstream bandwidth among multiple ONUs based on the instantaneous bandwidth demands and quality of service requirements of end users. This paper introduces the fundamental concepts on EPONs, discusses the major issues related to bandwidth allocation in EPON systems, and presents a survey of the state-of-the-art dynamic bandwidth allocation (DBA) algorithms for EPONs. 
<s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Long-Reach optical access is a promising proposal for future access networks. This technology can enable broadband access for a large number of customers in the access/metro area, while decreasing capital and operational expenditures for the network operator. First, the paper reviews the evolutionary path of access networks and shows the drivers from technology and business perspectives for high bandwidth and low cost. A variety of research challenges in this field is reviewed, from optical components in the physical layer to the control and management issues in the upper layers. We discuss the requisites for optical sources, optical amplifiers, and optical receivers when used in networks with high transmission rate (10 Gbps) and large power attenuation (due to large split, transmission over 100 km and beyond, and propagation), and the key topological structures that allow to guarantee physical protection (tree-and-branch, ring-and-spur). Then, some relevant demonstrations of Long-Reach Optical Access Networks developed worldwide by different research institutes are presented. Finally, Dynamic Bandwidth Allocation (DBA) algorithms that allow to mitigate the effect of the increased control-plane delay in an extended-reach network are investigated. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> The ultimate goal of Fiber-Wireless (FiWi) networks is the convergence of various optical and wireless technologies under a single infrastructure in order to take advantage of their complementary features and therefore provide a network capable of supporting bandwidth-hungry emerging applications in a seamless way for both fixed and mobile clients. 
This article surveys possible FiWi network architectures that are based on a Radio-and-Fiber (R&F) network integration, an approach that is different compared to the Radio-over-Fiber (RoF) proposal. The survey distinguishes FiWi R&F architectures based on a three- level network deployment of different optical or wireless technologies and classifies them into three main categories based on the technology used in the first level network. Future research challenges that should be explored in order to achieve a feasible FiWi R&F architecture are also discussed. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Network virtualization is considered an important potential solution to the gradual ossification of the Internet. In a network virtualization environment, a set of virtual networks share the resources of a common physical network although each virtual network is isolated from others. Benefits include increased flexibility, diversity, security and manageability. Resource discovery and allocation are fundamental steps in the process of creating new virtual networks. This paper surveys previous work on, and the present status of, resource discovery and allocation in network virtualization. We also describe challenges and suggest future directions for this area of research. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Network virtualization is a relatively new research topic. A number of articles propose that certain benefits can be realized by virtualizing links between network elements as well as adding virtualization on intermediate network elements. In this article we argue that network virtualization may bring nothing new in terms of technical capabilities and theoretical performance, but it provides a way of organizing networks such that it is possible to overcome some of the practical issues in today?s Internet. 
We strengthen our case by an analogy between the concept of network virtualization as it is currently presented in research, and machine virtualization as proven useful in deployments in recent years. First we make an analogy between the functionality of an operating system and that of a network, and identify similar concepts and elements. Then we emphasize the practical benefits realized by machine virtualization, and we exploit the analogy to derive potential benefits brought by network virtualization. We map the established applications for machine virtualization to network virtualization, thus identifying possible use cases for network virtualization. We also use this analogy to structure the design space for network virtualization. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Optical networks are undergoing significant changes, fueled by the exponential growth of traffic due to multimedia services and by the increased uncertainty in predicting the sources of this traffic due to the ever changing models of content providers over the Internet. The change has already begun: simple on-off modulation of signals, which was adequate for bit rates up to 10 Gb/s, has given way to much more sophisticated modulation schemes for 100 Gb/s and beyond. The next bottleneck is the 10-year-old division of the optical spectrum into a fixed "wavelength grid," which will no longer work for 400 Gb/s and above, heralding the need for a more flexible grid. Once both transceivers and switches become flexible, a whole new elastic optical networking paradigm is born. In this article we describe the drivers, building blocks, architecture, and enabling technologies for this new paradigm, as well as early standardization efforts. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. 
Network Virtualization <s> Long-reach Passive Optical Networks (LR-PONs) aim to combine the capacity of metro and access networks by extending the reach and split ratio of the conventional PONs. LR-PONs appear as efficient solutions having feeder distances around 100km and high split ratios up to 1000-way. On the other hand, transmission of the signals in long distances up to 100km leads to increased propagation delay whereas high split ratio may lead to long cycle times resulting in large queue occupancies and long packet delays. Before LR-PON becomes widely adopted, the trade-off between the advantages and performance degradation problem which is resulting from long reach and high split ratio properties of LR-PONs needs to be solved. Recent studies have focused on enhancing the performance of dynamic bandwidth allocation in LR-PONs. This article presents a comprehensive survey on the dynamic bandwidth allocation schemes for LR-PONs. In the article, a comparative classification of the proposed schemes based on their quality-of-service awareness, base-types, feeder distances and tested performance metrics is provided. At the end of the article, a brief discussion on the open issues and research challenges for the solution of performance degradation in LR-PONs is presented. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Cellular networks are currently experiencing a tremendous growth of data traffic. To cope with this demand, a close cooperation between academic researchers and industry/standardization experts is necessary, which hardly exists in practice. In this paper, we try to bridge this gap between researchers and engineers by providing a review of current standard-related research efforts in wireless communication systems. 
Furthermore, we give an overview of our attempt at facilitating the exchange of information and results between researchers and engineers, via a common simulation platform for 3GPP long term evolution (LTE) and a corresponding webforum for discussion. Often, especially in signal processing, reproducing the results of other researchers is a tedious task because assumptions and parameters are not clearly specified, which hampers the consideration of state-of-the-art research in the standardization process. Also, practical constraints, impairments imposed by technological restrictions and well-known physical phenomena, e.g., signaling overhead, synchronization issues, channel fading, are often disregarded by researchers, because of simplicity and mathematical tractability. Hence, evaluating the relevance of research results under practical conditions is often difficult. To circumvent these problems, we developed a standard-compliant open-source simulation platform for LTE that enables reproducible research in a well-defined environment. We demonstrate that innovative research under the confined framework of a real-world standard is possible, sometimes even encouraged. With examples of our research work, we investigate the potential of several important research areas under typical practical conditions, and highlight consistencies as well as differences between theory and practice. <s> BIB014 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Network virtualization is the key to the current and future success of cloud computing. In this article, we explain key reasons for virtualization and briefly explain several of the networking technologies that have been developed recently or are being developed in various standards bodies. In particular, we explain software defined networking, which is the key to network programmability.
We also illustrate SDN's applicability with our own research on OpenADN - application delivery in a multi-cloud environment. <s> BIB015 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Network virtualization refers to a broad set of technologies. Commercial solutions have been offered by the industry for years, while more recently the academic community has emphasized virtualization as an enabler for network architecture research, deployment, and experimentation. We review the entire spectrum of relevant approaches with the goal of identifying the underlying commonalities. We offer a unifying definition of the term “network virtualization” and examine existing approaches to bring out this unifying perspective. We also discuss a set of challenges and research directions that we expect to come to the forefront as network virtualization technologies proliferate. <s> BIB016 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Network virtualization gives each "tenant" in a data center its own network topology and control over its traffic flow. Software-defined networking offers a standard interface between controller applications and switch-forwarding tables, and is thus a natural platform for network virtualization. Yet, supporting numerous tenants with different topologies and controller applications raises scalability challenges. The FlowN architecture gives each tenant the illusion of its own address space, topology, and controller, and leverages database technology to efficiently store and manipulate mappings between virtual networks and physical switches. <s> BIB017 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems.
Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to severe channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed.
<s> BIB019 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Future generation cellular networks are expected to provide ubiquitous broadband access to a continuously growing number of mobile users. In this context, LTE systems represent an important milestone towards the so called 4G cellular networks. A key feature of LTE is the adoption of advanced Radio Resource Management procedures in order to increase the system performance up to the Shannon limit. Packet scheduling mechanisms, in particular, play a fundamental role, because they are responsible for choosing, with fine time and frequency resolutions, how to distribute radio resources among different stations, taking into account channel condition and QoS requirements. This goal should be accomplished by providing, at the same time, an optimal trade-off between spectral efficiency and fairness. In this context, this paper provides an overview on the key issues that arise in the design of a resource allocation algorithm for LTE networks. It is intended for a wide range of readers as it covers the topic from basics to advanced aspects. The downlink channel under frequency division duplex configuration is considered as object of our study, but most of the considerations are valid for other configurations as well. Moreover, a survey on the most recent techniques is reported, including a classification of the different approaches presented in literature. Performance comparisons of the most well-known schemes, with particular focus on QoS provisioning capabilities, are also provided for complementing the described concepts. Thus, this survey would be useful for readers interested in learning the basic concepts before going into the details of a particular scheduling strategy, as well as for researchers aiming at deepening more specific aspects. <s> BIB020 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. 
Network Virtualization <s> With the rapid growth of the demands for mobile data, wireless networks face several challenges, such as lack of efficient interconnection among heterogeneous wireless networks, and shortage of customized QoS guarantees between services. The fundamental reason for these challenges is that the radio access network (RAN) is closed and ossified. We propose OpenRAN, an architecture for software-defined RAN via virtualization. It achieves complete virtualization and programmability vertically, and benefits the convergence of heterogeneous networks horizontally. It provides open, controllable, flexible and evolvable wireless networks. <s> BIB021 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> An efficient scheduling scheme is a crucial part of Wireless Mesh Networks (WMNs)—an emerging communication infrastructure solution for autonomy, scalability, higher throughput, lower delay metrics, energy efficiency, and other service-level guarantees. Distributed schedulers are preferred due to better scalability, smaller setup delays, smaller management overheads, no single point of failure, and for avoiding bottlenecks. Based on the sequence in which nodes access the shared medium, repetitiveness, and determinism, distributed schedulers that are supported by wireless mesh standards can be classified as either random, pseudo-random, or cyclic schemes. We performed qualitative and quantitative studies that show the strengths and weaknesses of each category, and how the schemes complement each other. We discuss how wireless standards with mesh definitions have evolved by incorporating and enhancing one or more of these schemes. Emerging trends and research problems remaining for future research also have been identified. <s> BIB022 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C.
Network Virtualization <s> Data center networks today face exciting new challenges in supporting cloud computing and other data-intensive applications. In conventional DCNs, different types of traffic are carried by different types of networks, such as Ethernet and Fibre Channel. Typically, Ethernet carries data traffic among servers in LANs, and Fibre Channel connects servers and storages in storage area networks. Due to the existence of multiple networks, the network cost, power consumption, wiring complexity, and management overhead are often high. The concept of a converged DCN is therefore appealing, carrying both types of traffic in a single converged Ethernet. Recent standards have been proposed for unified data center bridging (DCB) Ethernet and Fibre Channel over Ethernet (FCoE) protocols by the DCB Task Group of IEEE and the T11 Technical Committee of INCITS. In this article, we give a survey of the standards and protocols on converged DCNs, focusing mainly on their motivations and key functionalities. The technologies are discussed mainly from a practical perspective and may serve as a foundation for future research in this area. <s> BIB023 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Traditional fair bandwidth sharing by leveraging AIMD-based congestion control mechanisms faces great challenges in data center networks. Much work has been done to solve one of the various challenges. However, no single transport layer protocol can solve all of them. In this article, we focus on the transport layer in data centers, and present a comprehensive survey of existing problems and their current solutions. We hope that this article can help readers quickly understand the causes of each problem and learn about current research progress, so as to help them make new contributions in this field. <s> BIB024 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. 
Network Virtualization <s> We provide models for evaluating the performance, cost and power consumption of different architectures suitable for a metropolitan area network (MAN). We then apply these models to compare today's synchronous optical network/synchronous digital hierarchy metro rings with different alternatives envisaged for next-generation MAN: an Ethernet carrier grade ring, an optical hub-based architecture and an optical time-slotted wavelength division multiplexing (WDM) ring. Our results indicate that the optical architectures are likely to decrease power consumption by up to 75% when compared with present day MANs. Moreover, by allowing the capacity of each wavelength to be dynamically shared among all nodes, a transparent slotted WDM yields throughput performance that is practically equivalent to that of today's electronic architectures, for equal capacity. <s> BIB025 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> As mobile network users look forward to the connectivity speeds of 5G networks, service providers are facing challenges in complying with connectivity demands without substantial financial investments. Network function virtualization (NFV) is introduced as a new methodology that offers a way out of this bottleneck. NFV is poised to change the core structure of telecommunications infrastructure to be more cost-efficient. In this article, we introduce an NFV framework, and discuss the challenges and requirements of its use in mobile networks. In particular, an NFV framework in the virtual environment is proposed. Moreover, in order to reduce signaling traffic and achieve better performance, this article proposes a criterion to bundle multiple functions of a virtualized evolved packet core in a single physical device or a group of adjacent devices. The analysis shows that the proposed grouping can reduce the network control traffic by 70 percent. 
<s> BIB026 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Rigid fixed-grid wavelength division multiplexing (WDM) optical networks can no longer keep up with the emerging bandwidth-hungry and highly dynamic services in an efficient manner. As the available spectrum in optical fibers becomes occupied and is approaching fundamental limits, the research community has focused on seeking more advanced optical transmission and networking solutions that utilize the available bandwidth more effectively. To this end, the flexible/elastic optical networking paradigm has emerged as a way to offer efficient use of the available optical resources. In this work, we provide a comprehensive view of the different pieces composing the “flexible networking puzzle” with special attention given to capturing the occurring interactions between different research fields. Only when these interrelations are clearly defined can an optimal network-wide solution be offered. Physical layer technological aspects, network optimization for flexible networks, and control plane aspects are examined. Furthermore, future research directions and open issues are discussed. <s> BIB027 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Data centers provide a volume of computation and storage resources for cloud-based services, and generate very heavy traffic in data center networks. Usually, data centers are connected by ultra-long-haul WDM optical transport networks due to their advantages, such as high bandwidth, low latency, and low energy consumption. However, owing to the rigid bandwidth and coarse granularity, these networks show inefficient spectrum utilization and inflexible accommodation of various types of traffic. Based on OFDM, a novel architecture named flexible grid optical network has been proposed, and has become a promising technology in data center interconnections.
In flexible grid optical networks, the assignment and management of spectrum resources are more flexible, and agile spectrum control and management strategies are needed. In this paper, we introduce the concept of Spectrum Engineering, which could be used to maximize spectral efficiency in flexible grid optical networks. Spectrum Defragmentation, as one of the most important aspect in Spectrum Engineering, is demonstrated by OpenFlow in flexible grid optical networks. Experimental results are reported and verify the feasibility of Spectrum Engineering. <s> BIB028 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Telecommunications carriers have begun to upgrade their networks with state-of-the-art optical equipment, referred to as optical-bypass technology. The ramifications of this technology are manifold, affecting the architecture, operation, and economics of the network, all of which are covered in this book. The book is oriented towards practical implementation in metro and backbone networks, taking advantage of the authors extensive experience with actual commercial equipment and carrier networks. The book starts with an overview of optical networking, including an introduction to state-of-the-art optical networks. The second chapter covers legacy optical equipment and the new optical-bypass technology, with an emphasis on the architectural impact of the equipment. For example, the discussion covers how the various types of equipment affect the economics and flexibility of the network. One of the challenges of optical-bypass technology is that it requires sophisticated algorithms in order to operate the network efficiently. Chapters three, four, and five describe such algorithms, where the focus is on techniques that have been proven to produce efficient results in realistic carrier networks. The design and planning strategies described in these chapters are readily implementable. 
All of the algorithms presented scale well with network size so that they are suitable for real-time design. Chapters six and seven focus on two important aspects of optical networks, namely efficient bundling of the traffic and protection of the traffic. Rather than cover every aspect of these two subjects, the book focuses on how best to perform bundling and protection in the presence of optical-bypass technology. Again, the emphasis is on techniques that have proven effective in real network environments. The final chapter explores the economics of optical networking. Several studies are presented that offer guidelines as to when and how optical-bypass technology should be deployed. The code for some of the routing algorithms is provided in the appendix, which adds to the utility of the book. <s> BIB029 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Current Gigabit-class passive optical networks (PONs) evolve into next-generation PONs, whereby high-speed Gb/s time division multiplexing (TDM) and long-reach wavelength-broadcasting/routing wavelength division multiplexing (WDM) PONs are promising near-term candidates. On the other hand, next-generation wireless local area networks (WLANs) based on frame aggregation techniques will leverage physical-layer enhancements, giving rise to Gigabit-class very high throughput (VHT) WLANs. In this paper, we develop an analytical framework for evaluating the capacity and delay performance of a wide range of routing algorithms in converged fiber-wireless (FiWi) broadband access networks based on different next-generation PONs and a Gigabit-class multiradio multichannel WLAN-mesh front end. Our framework is very flexible and incorporates arbitrary frame size distributions, traffic matrices, optical/wireless propagation delays, data rates, and fiber faults. 
We verify the accuracy of our probabilistic analysis by means of simulation for the wireless and wireless-optical-wireless operation modes of various FiWi network architectures under peer-to-peer, upstream, uniform, and nonuniform traffic scenarios. The results indicate that our proposed optimized FiWi routing algorithm (OFRA) outperforms minimum (wireless) hop and delay routing in terms of throughput for balanced and unbalanced traffic loads, at the expense of a slightly increased mean delay at small to medium traffic loads. <s> BIB030 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> We discuss hybrid fiber/copper access networks with a focus on XG-PON/VDSL2 hybrid access networks. We present tutorial material on the XG-PON and VDSL2 protocols as standardized by the ITU. We investigate mechanisms to reduce the functional logic at the device that bridges the fiber and copper segments of the hybrid fiber/copper access network. This device is called a drop-point device. Reduced functional logic translates into lower energy consumption and cost for the drop-point device. We define and analyze the performance of several mechanisms to move some of the VDSL2 functional logic blocks from the drop-point device into the XG-PON Optical Line Terminal. Our analysis uncovers that silence suppression mechanisms are necessary to achieve statistical multiplexing gain when carrying synchronous intermediate VDSL2 data formats across the XG-PON. <s> BIB031 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Recent years have witnessed the emergence of machine-to-machine (M2M) networks as an efficient means for providing automated communications among distributed devices. Automated M2M communications can offset the overhead costs of conventional operations, thus promoting their wider adoption in fixed and mobile platforms equipped with embedded processors and sensors/actuators. 
In this paper, we survey M2M technologies for applications such as healthcare, energy management and entertainment. In particular, we examine the typical architectures of home M2M networks and discuss the performance tradeoffs in existing designs. Our investigation covers quality of service, energy efficiency and security issues. Moreover, we review existing home networking projects to better understand the real-world applicability of these systems. This survey contributes to a better understanding of the challenges in existing M2M networks and further sheds new light on future research directions. <s> BIB032 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Software defined networking (SDN) has emerged as a promising paradigm for making the control of communication networks flexible. SDN separates the data packet forwarding plane, i.e., the data plane, from the control plane and employs a central controller. Network virtualization allows the flexible sharing of physical networking resources by multiple users (tenants). Each tenant runs its own applications over its virtual network, i.e., its slice of the actual physical network. The virtualization of SDN networks promises to allow networks to leverage the combined benefits of SDN networking and network virtualization and has therefore attracted significant research attention in recent years. A critical component for virtualizing SDN networks is an SDN hypervisor that abstracts the underlying physical SDN network into multiple logically isolated virtual SDN networks (vSDNs), each with its own controller. We comprehensively survey hypervisors for SDN networks in this paper. We categorize the SDN hypervisors according to their architecture into centralized and distributed hypervisors.
We furthermore sub-classify the hypervisors according to their execution platform into hypervisors running exclusively on general-purpose compute platforms, or on a combination of general-purpose compute platforms with general- or special-purpose network elements. We exhaustively compare the network attribute abstraction and isolation features of the existing SDN hypervisors. As part of the future research agenda, we outline the development of a performance evaluation framework for SDN hypervisors. <s> BIB033 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Network function virtualization (NFV) has drawn significant attention from both industry and academia as an important shift in telecommunication service provisioning. By decoupling network functions (NFs) from the physical devices on which they run, NFV has the potential to lead to significant reductions in operating expenses (OPEX) and capital expenses (CAPEX) and facilitate the deployment of new services with increased agility and faster time-to-value. The NFV paradigm is still in its infancy and there is a large spectrum of opportunities for the research community to develop new architectures, systems and applications, and to evaluate alternatives and trade-offs in developing technologies for its successful deployment. In this paper, after discussing NFV and its relationship with complementary fields of software defined networking (SDN) and cloud computing, we survey the state-of-the-art in NFV, and identify promising research directions in this area. We also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products. <s> BIB034 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Flexgrid technology is now considered to be a promising solution for future high-speed network design. 
In this context, we need a tutorial that covers the key aspects of elastic optical networks. This tutorial paper starts with a brief introduction of the elastic optical network and its unique characteristics. The paper then moves to the architecture of the elastic optical network and its operation principle. To complete the discussion of network architecture, this paper focuses on the different node architectures, and compares their performance in terms of scalability and flexibility. Thereafter, this paper reviews and classifies routing and spectrum allocation (RSA) approaches including their pros and cons. Furthermore, various aspects, namely, fragmentation, modulation, quality-of-transmission, traffic grooming, survivability, energy saving, and networking cost related to RSA, are presented. Finally, the paper explores the experimental demonstrations that have tested the functionality of the elastic optical network, and follows that with the research challenges and open issues posed by flexible networks. <s> BIB035 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> Increasing bandwidth demand drives the need for next-generation optical access (NGOA) networks that can meet future end-user service requirements. This paper gives an overview of NGOA solutions, the enabling optical access network technologies, architecture principles, and related economics and business models. NGOA requirements (including peak and sustainable data rate, reach, cost, node consolidation, and open access) are proposed, and the different solutions are compared against such requirements in different scenarios (in terms of population density and system migration). Unsurprisingly, it is found that different solutions are best suited for different scenarios. The conclusions drawn from such findings allow us to formulate recommendations in terms of technology, strategy, and policy. 
The paper is based on the main results of the European FP7 OASE Integrated Project that ran between January 1, 2010 and February 28, 2013. <s> BIB036 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> NFV is promising to lower the network operator’s capital expenditure and operational expenditure by replacing proprietary hardware-based network equipment with software-based VNFs that can be consolidated into telecom clouds. In particular, NFV provides an efficient way to deploy network services using SFCs that consist of a set of VNFs interconnected by virtual links. A practical but theoretically challenging problem related to NFV management and orchestration is how to jointly optimize the topology design and mapping of multiple SFCs such that the TBC is minimized, which is called the JTDM problem. In this article, we propose a novel heuristic algorithm, Closed-Loop with Critical Mapping Feedback, to efficiently address the JTDM problem. While minimizing the TBC, we also propose scalable and reliable JTDM strategies that can significantly reduce the network reconfigurations and enhance the service reliability, respectively. <s> BIB037 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> C. Network Virtualization <s> The cloud radio access network (C-RAN) provides high spectral and energy efficiency performances, low expenditures, and intelligent centralized system structures to operators, which have attracted intense interests in both academia and industry. In this paper, a hybrid coordinated multipoint transmission (H-CoMP) scheme is designed for the downlink transmission in C-RANs and fulfills the flexible tradeoff between cooperation gain and fronthaul consumption. 
The queue-aware power and rate allocation with constraints of average fronthaul consumption for the delay-sensitive traffic are formulated as an infinite horizon constrained partially observed Markov decision process, which takes both the urgent queue state information and the imperfect channel state information at transmitters (CSIT) into account. To deal with the curse of dimensionality involved with the equivalent Bellman equation, the linear approximation of postdecision value functions is utilized. A stochastic gradient algorithm is presented to allocate the queue-aware power and transmission rate with H-CoMP, which is robust against unpredicted traffic arrivals and uncertainties caused by the imperfect CSIT. Furthermore, to substantially reduce the computing complexity, an online learning algorithm is proposed to estimate the per-queue postdecision value functions and update the Lagrange multipliers. The simulation results demonstrate performance gains of the proposed stochastic gradient algorithms and confirm the asymptotical convergence of the proposed online learning algorithm. <s> BIB038
Analogously to the virtualization of computing resources BIB001 , network virtualization abstracts the underlying physical network infrastructure so that one or multiple virtual networks can operate on a given physical network BIB015 , BIB010 - BIB016 . Virtual networks can span a single physical infrastructure or multiple physical infrastructures (e.g., geographically separated WAN segments). Network Virtualization (NV) can flexibly create independent virtual networks (slices) for distinct users over a given physical infrastructure. Each network slice can be created with prescribed resource allocations. When no longer required, a slice can be deleted, freeing up the reserved physical resources. Network hypervisors BIB011 , BIB006 are the network elements that abstract the physical network infrastructure (including network elements, communication links, and control functions) into logically isolated virtual network slices. In particular, in the case of an underlying physical SDN network, an SDN hypervisor can create multiple isolated virtual SDN networks BIB033 , BIB017 . Through hypervisors, NV supports the implementation of a wide range of network services belonging to the link and network protocol layers (L2 and L3), such as switching and routing. Additionally, virtualized infrastructures can also support higher layer services, such as server load-balancing and firewalls. The implementation of such higher layer services in a virtualized environment is commonly referred to as Network Function Virtualization (NFV) BIB026 - BIB037 . NFV can be viewed as a special case of NV in which network functions, such as address translation and intrusion detection, are implemented in a virtualized environment. That is, the virtualized functions are implemented in the form of software entities (modules) running in a data center (DC) or the cloud BIB034 . In contrast, the term NV emphasizes the virtualization of the network resources, such as communication links and network nodes.
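The slice lifecycle just described, creating a slice with prescribed resource allocations and freeing the reserved physical resources upon deletion, can be sketched as simple capacity bookkeeping. The class and method names below are illustrative assumptions, not the API of any actual network hypervisor:

```python
# Toy capacity bookkeeping for network slices: each slice reserves bandwidth
# on physical links and releases it when deleted. All names are illustrative;
# real SDN hypervisors additionally isolate flow tables and control channels.

class SliceHypervisor:
    def __init__(self, link_capacity: dict[str, float]):
        self.free = dict(link_capacity)                # remaining capacity per link
        self.slices: dict[str, dict[str, float]] = {}  # slice name -> reservations

    def create_slice(self, name: str, demand: dict[str, float]) -> bool:
        """Reserve the demanded bandwidth on each link, all-or-nothing."""
        if any(self.free.get(link, 0.0) < bw for link, bw in demand.items()):
            return False
        for link, bw in demand.items():
            self.free[link] -= bw
        self.slices[name] = demand
        return True

    def delete_slice(self, name: str) -> None:
        """Free the physical resources reserved by a slice."""
        for link, bw in self.slices.pop(name).items():
            self.free[link] += bw

hv = SliceHypervisor({"A-B": 10.0, "B-C": 10.0})
hv.create_slice("tenant1", {"A-B": 6.0})       # succeeds
ok = hv.create_slice("tenant2", {"A-B": 6.0})  # fails: only 4.0 units left on A-B
hv.delete_slice("tenant1")                     # A-B capacity is fully available again
```

The all-or-nothing admission check mirrors the isolation requirement: a slice is only instantiated if all of its prescribed allocations can be honored on the physical substrate.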
D. Optical Networking Background 1) Optical Switching Paradigms: Optical networks are networks that either maintain signals in the optical domain or at least utilize transmission channels that carry signals in the optical domain. In optical networks that maintain signals in the optical domain, switching can be performed at the circuit, packet, or burst granularity. a) Circuit Switching: Optical circuit switching can be performed in space, waveband, wavelength, or time. The optical spectrum is divided into wavelengths either on a fixed wavelength grid or on a flexible wavelength grid. Spectrally adjacent wavelengths can be coalesced into wavebands. The fixed wavelength grid standard (ITU-T G.694.1) specifies center frequencies that are either 12.5 GHz, 25 GHz, 50 GHz, or 100 GHz apart. The flexible DWDM grid (flexigrid) standard (ITU-T G.694.1) BIB018 - BIB027 allows the center frequency to be any multiple of 6.25 GHz away from 193.1 THz and the spectral width to be any multiple of 12.5 GHz. Elastic Optical Networks (EONs) BIB035 - BIB028 that take advantage of the flexible grid can make more efficient use of the optical spectrum. However, as lightpaths are set up and torn down, spectral fragmentation arises, which counteracts the more efficient spectrum utilization BIB012 . b) Packet Switching: Optical packet switching performs packet-by-packet switching using header fields in the optical domain as much as possible. An all-optical packet switch requires BIB002 :
• Optical synchronization, demultiplexing, and multiplexing
• Optical packet forwarding table computation
• Optical packet forwarding table lookup
• Optical switch fabric
• Optical buffering
Optical packet switches typically relegate some of these design elements to the electrical domain. Most commonly, the packet forwarding table computation and lookup are performed electrically.
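The two grid numerologies above lend themselves to a short arithmetic sketch (the function and constant names are ours, not from the standard): on the flexible grid, a frequency slot is identified by a nominal central frequency of 193.1 THz + n × 6.25 GHz and a slot width of m × 12.5 GHz, for integers n and m.

```python
# Flexi-grid slot arithmetic following ITU-T G.694.1; helper names are
# illustrative, not part of the standard.

ANCHOR_THZ = 193.1       # reference central frequency (THz)
CENTER_STEP_GHZ = 6.25   # central-frequency granularity (GHz)
WIDTH_STEP_GHZ = 12.5    # slot-width granularity (GHz)

def flexgrid_slot(n: int, m: int):
    """Return (central frequency in THz, slot width in GHz) for integers n, m."""
    center_thz = ANCHOR_THZ + n * CENTER_STEP_GHZ / 1000.0
    width_ghz = m * WIDTH_STEP_GHZ
    return center_thz, width_ghz

# Example: n = 8, m = 3 describes a 37.5 GHz-wide slot centered at 193.15 THz,
# a slot width that the fixed grid cannot express.
center, width = flexgrid_slot(8, 3)
```

A fixed-grid channel is then simply the special case in which the center frequencies fall on the coarser 12.5, 25, 50, or 100 GHz spacing.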
When there is contention for a destination port, a packet needs to be buffered optically; this buffering can be accomplished with rather impractical fiber delay lines. Fiber delay lines are fiber optic cables whose lengths are configured to provide a certain time delay of the optical signal; e.g., 100 meters of fiber provides 500 ns of delay. An alternative to buffering is to either drop the packet or to use deflection routing, whereby a packet is routed to a different output that may or may not lead to the desired destination. c) Burst Switching: Optical burst switching alleviates the requirements of optical packet forwarding table computation, forwarding table lookup, and buffering, while accommodating bursty traffic that would lead to poor utilization of optical circuits. In essence, it permits the rapid establishment of short-lived optical circuits to support the transfer of one or more packets coalesced into a burst. A control packet is sent through the network to establish the lightpath for the burst, and then the burst is transmitted on the short-lived circuit with no packet lookup or buffering required along the path BIB002 . Since the circuit is only established for the duration of the burst, network resources are not wasted during idle periods. To avoid any buffering of the burst in the optical network, the burst transmission can begin once the lightpath establishment has been confirmed (tell-and-wait) or a short time period after the control packet is sent (just-enough-time). Note that sending the burst immediately after the control packet (tell-and-go) would require some buffering of the optical burst at the switching nodes. 2) Optical Network Structure: Optical networks are typically structured into three main tiers, namely access networks, metropolitan (metro) area networks, and backbone (core) networks BIB029 .
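The fiber delay line figure quoted above (100 m of fiber yielding 500 ns of delay) follows from delay = L · n_g / c, with a group index n_g of roughly 1.5 for standard silica fiber; a minimal sketch with illustrative names:

```python
# Fiber delay line sizing: delay = L * n_g / c. The group index of 1.5 for
# silica fiber is an assumed typical value; it matches the 100 m -> 500 ns
# example, since light then travels at 2e8 m/s inside the fiber.

C_VACUUM = 3.0e8   # speed of light in vacuum (m/s)
GROUP_INDEX = 1.5  # assumed group index of standard silica fiber

def fiber_delay_ns(length_m: float) -> float:
    """Delay in nanoseconds for an optical signal traversing length_m of fiber."""
    return length_m * GROUP_INDEX / C_VACUUM * 1e9

def fiber_length_for_delay_m(delay_ns: float) -> float:
    """Fiber length in meters needed to buffer an optical signal for delay_ns."""
    return delay_ns * 1e-9 * C_VACUUM / GROUP_INDEX
```

The impracticality noted above is apparent from the scaling: buffering a signal for even 1 ms would require 200 km of fiber.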
a) Access Networks: In the area of optical access networks BIB036 , so-called Passive Optical Networks (PONs), in particular, Ethernet PONs (EPONs) and Gigabit PONs (GPONs) BIB005 , have been widely studied. A PON typically has an inverse tree structure with a central Optical Line Terminal (OLT) connecting multiple distributed Optical Network Units (ONUs; also referred to as Optical Network Terminals, ONTs) to metro networks. In the downstream (OLT to ONUs) direction, the OLT broadcasts transmissions. However, in the upstream (ONUs to OLT) direction, the transmissions of the distributed ONUs need to be coordinated to avoid collisions on the shared upstream wavelength channel. Typically, a cyclic polling-based Medium Access Control (MAC) protocol, e.g., based on the MultiPoint Control Protocol (MPCP, IEEE 802.3ah), is employed. The ONUs report their bandwidth demands to the OLT, and the OLT then assigns upstream transmission windows according to a Dynamic Bandwidth Allocation (DBA) algorithm BIB013 - BIB007 . Conventional PONs cover distances up to 20 km, while so-called Long-Reach (LR) PONs cover distances up to around 100 km BIB019 - BIB008 . Recently, hybrid access networks that combine multiple transmission media, such as Fiber-Wireless (FiWi) networks BIB030 - BIB009 and PON-DSL networks BIB031 , have been explored to take advantage of the respective strengths of the different transmission media. b) Networks Connected to Access Networks: Optical access networks provide Internet connectivity for a wide range of peripheral networks. Residential (home) wired or wireless local area networks BIB032 typically interconnect individual end devices (hosts) in a home or small business and may connect directly with an optical access network. Cellular wireless networks provide Internet access to a wide range of mobile devices BIB020 - BIB014 .
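The MPCP report/grant cycle described above can be sketched as a minimal limited-service allocator, in which the OLT caps each ONU's grant at a fixed maximum window. The names and the W_MAX value are illustrative assumptions; practical DBA algorithms (e.g., interleaved polling variants) add grant pipelining and guard times between upstream bursts.

```python
# Minimal limited-service DBA sketch: the OLT grants each ONU the minimum of
# its reported queue backlog and a fixed maximum window W_MAX per cycle.
# Names and values are illustrative; real DBA algorithms also handle guard
# intervals between upstream bursts and interleave report/grant messages.

W_MAX = 15500  # maximum grant per polling cycle, in bytes (illustrative value)

def dba_limited(reports: dict[str, int]) -> dict[str, int]:
    """Map each ONU's reported backlog (bytes) to its granted window (bytes)."""
    return {onu: min(backlog, W_MAX) for onu, backlog in reports.items()}

# A lightly loaded ONU receives its full request; a heavily loaded one is
# capped, which bounds the cycle length and hence the delay seen by the others.
grants = dba_limited({"onu1": 4000, "onu2": 60000, "onu3": 0})
```

Capping the grant is what keeps one backlogged ONU from monopolizing the shared upstream wavelength channel within a polling cycle.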
Specialized cellular backhaul networks BIB038 - BIB021 relay the traffic to/from base stations of wireless cellular networks to either wireless access networks BIB004 - BIB022 or optical access networks. Moreover, optical access networks are often employed to connect Data Center (DC) networks to the Internet. DC networks interconnect highly specialized server units that process and store large amounts of data with specialized networking technologies BIB023 - BIB024 . Data centers are typically employed to provide the so-called "cloud" services for commercial and social media applications. c) Metropolitan Area Networks: Optical Metropolitan (metro) Area Networks (MANs) interconnect the optical access networks in a metropolitan area with each other and with wide-area (backbone, core) networks. MANs typically have a ring or star topology BIB025 - BIB003 and commonly employ optical networking technologies. d) Backbone Networks: Optical backbone (wide area) networks interconnect the individual MANs on a national or international scale. Backbone networks typically have a mesh structure and employ very high-speed optical transmission links.
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> III. SDN CONTROLLED PHOTONIC COMMUNICATION INFRASTRUCTURE LAYER <s> The design of the SILO network architecture of fine-grain services was based on three fundamental principles. First, SILO generalizes the concept of layering and decouples layers from services, making it possible to introduce easily new functionality and innovations into the architecture. Second, cross-layer interactions are explicitly supported by extending the definition of a service to include control interfaces that can be tuned externally so as to modify the behavior of the service. The third principle is "design for change:" the architecture does not dictate the services to be implemented, but provides mechanisms to introduce new services and compose them to perform specific communication tasks. In this paper, we provide an update on the current status of the architecture and the prototype software implementation. We also introduce the concept of "software defined optics" (SDO) to refer to the emerging intelligent and programmable optical layer. We then explain how the SILO architecture may enable the rapid adoption of SDO functionality as well as evolving optical switching models, in particular, optical burst switching (OBS). <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> III. SDN CONTROLLED PHOTONIC COMMUNICATION INFRASTRUCTURE LAYER <s> Software-defined networking (SDN) enables programmable SDN control and management functions at a number of layers, allowing applications to control network resources or information across different technology domains, e.g., Ethernet, wireless, and optical. Current cloud-based services are pushing networks to new boundaries by deploying cutting edge optical technologies to provide scalable and flexible services.
SDN combined with the latest optical transport technologies, such as elastic optical networks, enables network operators and cloud service providers to customize their infrastructure dynamically to user/application requirements and therefore minimize the extra capital and operational costs required for hosting new services. In this paper a unified control plane architecture based on OpenFlow for optical SDN tailored to cloud services is introduced. Requirements for its implementation are discussed considering emerging optical transport technologies. Implementations of the architecture are proposed and demonstrated across heterogeneous state-of-the-art optical, packet, and IT resource integrated cloud infrastructure. Finally, its performance is evaluated using cloud use cases and its results are discussed. <s> BIB002
This section surveys mechanisms for controlling physical layer aspects of the optical (photonic) communication infrastructure through SDN. Extending SDN control down to the photonic level of optical communications allows for flexible adaptation of the photonic components that support optical networking functionalities BIB002 - BIB001 . As illustrated in Fig. 3 , this section first surveys transmitters and receivers (collectively referred to as transceivers or transponders) that permit SDN control of the optical signal transmission characteristics, such as modulation format. We also survey SDN controlled space division multiplexing (SDM), which provides an emerging avenue for highly efficient optical transmissions. Then, we survey SDN controlled optical switching, covering first switching elements and then overall switching paradigms, such as converged packet and circuit switching. Finally, we survey cognitive photonic communication infrastructures that monitor the optical signal quality. The optical signal quality information can be used to dynamically control the transceivers as well as the filters in switching elements.
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> Fiber-optic communication systems form the high-capacity transport infrastructure that enables global broadband data services and advanced Internet applications. The desire for higher per-fiber transport capacities and, at the same time, the drive for lower costs per end-to-end transmitted information bit has led to optically routed networks with high spectral efficiencies. Among other enabling technologies, advanced optical modulation formats have become key to the design of modern wavelength division multiplexed (WDM) fiber systems. In this paper, we review optical modulation formats in the broader context of optically routed WDM networks. We discuss the generation and detection of multigigabit/s intensity- and phase-modulated formats, and highlight their resilience to key impairments found in optical networking, such as optical amplifier noise, multipath interference, chromatic dispersion, polarization-mode dispersion, WDM crosstalk, concatenated optical filtering, and fiber nonlinearity <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> We have proposed and experimentally demonstrated a novel architecture for orthogonal frequency-division- multiplexing (OFDM) wavelength-division-multiplexing passive optical network with centralized lightwave. In this architecture, 16 quadrature amplitude modulation intensity-modulated OFDM signals at 10 Gb/s are utilized for downstream transmission. A wavelength-reuse scheme is employed to carry the upstream data to reduce the cost at optical network unit. By using one intensity modulator, the downstream signal is remodulated for upstream on-off keying (OOK) data at 2.5 Gb/s based on its return-to-zero shape waveform. We have also studied the fading effect caused by double-sideband (DSB) downstream signals. 
Measurement results show that a 2.5-dB power penalty is caused by the fading effect. The fading effect can be removed when the DSB OFDM downstream signals are converted to single sideband (SSB) after vestigial filtering. The power penalty is negligible for both SSB OFDM downstream and the remodulated OOK upstream signals after over 25-km standard single-mode-fiber transmission. <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> We present a fully tunable multistage narrowband optical pole-zero notch filter that is fabricated in a silicon complementary metal oxide semiconductor (CMOS) foundry. The filter allows for the reconfigurable and independent tuning of the center frequency, null depth, and bandwidth for one or more notches simultaneously. It is constructed using a Mach-Zehnder interferometer (MZI) with cascaded tunable all-pass filter (APF) ring resonators in its arms. Measured filter nulling response exhibits ultranarrow notch 3 dB BW of 0.6350 GHz, and nulling depth of 33 dB. This filter is compact and integrated in an area of 1.75 mm2. Using this device, a novel method to cancel undesired bands of 3 dB bandwidth of < 910 MHz in microwave-photonic systems is demonstrated. The ultranarrow filter response properties have been realized based on our developed low-propagation loss silicon channel waveguide and tunable ring-resonator designs. Experimentally, they yielded a loss of 0.25 dB/cm and 0.18 dB/round trip, respectively. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> Due to the requirement of broad bandwidth for next-generation access networks, present passive optical networks (PONs) will be upgraded to 40 Gb/s or higher data rate PONs.
Hence, we propose and experimentally demonstrate a simple and efficient scheme to achieve a symmetric 40-Gb/s long-reach (LR) time-division-multiplexed PON by using four wavelength-division-multiplexed 10-Gb/s external on-off keying format channels to serve as the optical transmitter for downstream and upstream traffic simultaneously. Moreover, the system performance of LR transmission and split ratio have also been analyzed and discussed without dispersion compensation. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> We experimentally demonstrate successful performance of VCSEL-based WDM link supporting advanced 16-level carrierless amplitude/phase modulation up to 1.25 Gbps, over 26 km SSMF with spectral efficiency of 4 bit/s/Hz for application in high capacity PONs. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> This survey guides the reader through the extensive open literature that is covering the family of low-density parity-check LDPC codes and their rateless relatives. In doing so, we will identify the most important milestones that have occurred since their conception until the current era and elucidate the related design problems and their respective solutions. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> We propose a new optical transmitter which is capable of changing flexibly the modulation format of the optical signal. By using this transmitter, we can handle and assign various modulation formats: binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), 8-ary quadrature amplitude modulation (8QAM), and 16QAM. The proposed transmitter is based on a combination of a dual-drive Mach-Zehnder modulator (DD-MZM) and a dual-parallel MZM (DP-MZM) with electrical binary drive signals. 
DD-MZM is a key element to produce the 8QAM and 16QAM formats where each arm of DD-MZM is driven by independent binary data. This is because we can modulate the amplitude and phase of the optical signal by using a frequency chirp of the modulator when we adjust properly the amplitudes of the electrical drive signals. In addition, we show an algorithm by which the proposed transmitter can intelligently select the modulation format in accordance with the signal quality. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> Spectrum-efficient optical transmission with bitrates of 400 Gb/s and beyond can be achieved using flexible modulation with advanced DSPs. The technology options include modulation format, signal baud rate, number of subcarriers, and spectral bandwidth. A fine-granular spectral bandwidth requires a flexible WDM grid as recently defined by ITU-T. Transmission of a signal with multiple optical carriers, each potentially with their own set of modulation options, allows bandwidth-variable multi-flow transceivers. This can reduce the spectrum continuity constraint in the network. At the network layer, these new degrees of freedom create additional levels of complexity and constraints during network design, planning and operation. Which modulation constellation should be chosen for a new optical connection? What are the impacts on transmission reach, spectrum continuity constraints, and network utilization? Routing and spectrum assignment is becoming more complex, and the inevitable spectrum fragmentation reduces the spectral efficiency gained through efficient modulation schemes. Dynamic spectrum defragmentation requires transceivers supporting hitless defragmentation, or it is traffic affecting for the reallocated signals. The sheer number of technology options will increase the operational complexity of the network.
In this paper, we give an overview of technology options for software-defined transceivers for fixed-grid and flex-grid optical transport networks, and their impact for network planning and operation. We evaluate the spectral network efficiency, and operational complexity of selected technology options, such as multi-carrier transmission in fixed WDM grid, bandwidth-variable transponders with multiple subcarriers in a flexible WDM grid, and fully flexible multi-flow transponders. The evaluation is done based on a network planning study on a national European and US reference network. Based on the evaluation result, a guideline is given for a technology strategy with a good balance between flexibility, spectrum efficiency, and network cost. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> A bit-error rate (BER)-adaptive wavelength-switched optical network (WSON) employing the multiformat and multirate transmitter/receiver is experimentally demonstrated. The flexible transmitter is capable of generating binary phase-shift-keying, quadrature phase-shift-keying, eight-ary quadrature amplitude modulation, 8QAM, and 16QAM, as well as changing the symbol rate. On the other hand, the flexible receiver based on a coherent detection scheme is utilized not only to detect the optical signal with any modulation format/rate, but also to send the BER information to an OpenFlow controller via extended OpenFlow protocols. Since the measured BER is a reliable barometer of the optical path conditions, an extended OpenFlow-based control plane with the OpenFlow controller intelligently determines and assigns either appropriate modulation format/rate or a backup path in WSON in accordance with BER information. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. 
Transceivers <s> OpenFlow, which allows operators to control the network using software running on a network operating system within an external controller, has recently been proposed and experimentally validated as a promising intelligent control plane technique. To mitigate the potential scalability issue of an OpenFlow-based centralized control plane and to leverage the mature, well-defined, and feature-complete path computation element (PCE) communication protocol, the complex path computation function required in optical networks can be formally decoupled from the OpenFlow controller so the controller can off-load the task to one or more dedicated PCEs. In addition to the control plane intelligence, future optical networks also feature data plane intelligence such as the introduction of flexible transmitters and receivers, which can dynamically change the modulation format and transmission rate of the optical signal without hardware modifications. In this paper, for the first time, we successfully demonstrate a dynamic transparent wavelength-switched optical network employing flexible transmitters and receivers controlled by an OpenFlow-stateless PCE integrated control plane. Our designed flexible transmitter is implemented by a cascade of a dual-drive Mach-Zehnder modulator (MZM) and a dual-parallel MZM. By adjusting the electrical binary drive signals, the flexible transmitter is able to flexibly switch the symbol rate and the modulation format, including binary phase shift keying, quadrature phase shift keying, 8-ary quadrature amplitude modulation (8QAM), and 16QAM. The flexible receiver is able to automatically detect different modulation formats and symbol rates and measure the bit-error rate.
All the network elements, including optical switching nodes and flexible transmitters and receivers, are extended with OpenFlow interfaces, which can be intelligently controlled by the OpenFlow-stateless PCE integrated control plane with significant protocol extensions. On an actual network testbed with real hardware, we successfully validate dynamic and seamless interworking operations between the OpenFlow controller, a stateless PCE, and all the data plane hardware. The overall feasibility and efficiency of the proposed solutions are verified, and dynamic end-to-end path provisioning and lightpath restoration in such a new network scenario are quantitatively evaluated. We also tested the scalability of the proposed control plane; the experiment results indicated that dynamic path provisioning and restoration can be achieved within hundreds of milliseconds by using the proposed approach, and the overall architecture scales well with a batch of requests. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> We propose a concept of flexible PON and show with experiments and network dimensioning how burst-mode, software-defined coherent transponders can more than double the average capacity per user in TDMA access networks. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> Flexible PONs are a future paradigm under research in parallel with the flexible and elastic optical networks studied for core networks. In the same way as those backbone optical networks can be significantly improved by following software-defined network (SDN) techniques, it is described how SDN PONs can be implemented by highly spectral efficient digital modulation formats. A main challenge is the implementation by cost effective devices.
We will show the progress in alternatives implementations and adequacy of diverse modulation formats to cost effective bandwidth limited optical sources and receivers. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> A novel signal modulation technique, termed optical OFDM-based carrierless amplitude and phase (CAP) modulation (OOFDM-CAP) is proposed, analyzed and evaluated, for the first time, in which multiple real-valued OFDM channels are multiplexed/demultiplexed using digital orthogonal filters embedded in DSP logic. An OOFDM-CAP theoretical model is established, based on which the dependence of the required minimum oversampling factor upon the total number of OFDM channels simultaneously transmitted is identified in simple SSMF systems utilizing intensity modulation and direct detection. In such a system consisting of two OFDM channels, detailed numerical explorations are also undertaken of the impacts of major transceiver design aspects on the OOFDM-CAP transmission performance. These aspects include digital orthogonal filter characteristics, oversampling factors and OOFDM adaptability. It is shown that OOFDM-CAP not only allows the utilization of a minimum oversampling factor as low as 2, but also overcomes all fundamental limitations associated with conventional CAP modulation. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> An intra-data center architecture employing a cost-effective multi-level transmission scheme is presented. Unlike related previous works, the proposed scheme enables the optical multiplexing of two asynchronous and independent optical streams into an eight-amplitude phase-shift keying signal. The scheme is particularly suitable for intra-data center scenarios, where cost-effective solutions have to guarantee adequate bandwidth flexibility. 
The proposed transmitter successfully multiplexes two data streams, namely a 40 Gb/s differential quadrature phase-shift keying signal and a 10 Gb/s on-off keying one, coming from two separate (and not synchronized) apparatus. The experimental implementation, including two non-coherent receivers, reveals satisfactory performance ensuring a correct functionality (bit error rate < 10^-9) for an optical signal-to-noise ratio of about 31 dB. The design of an intra-data center switch architecture encompassing the proposed transmission scheme is then presented. Additionally, the switch includes a specifically defined OpenFlow control enabling bandwidth flexibility according to application service requirements. The overall solution has been successfully implemented and demonstrated in an experimental testbed including traffic tributaries provided through off-the-shelf network elements. The overall configuration is successfully completed within a few milliseconds only. <s> BIB014 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> Next generation optical networks will require high levels of flexibility both at the data and control planes, being able to fit rate, bandwidth, and optical reach requirements of different connections. Optical transmission should be able to support very high rates (e.g., 1 Tb/s) and to be distance adaptive while optimizing spectral efficiency (i.e., the information rate transmitted over a given bandwidth). Similarly, the control plane should be capable of performing effective routing and spectrum assignment as well as proper selection of the transmission parameters (e.g., modulation format) depending on the required optical reach. In this paper we present and demonstrate a software-defined super-channel transmission based on time frequency packing and on the proposed differentiated filter configuration (DFC).
Time frequency packing is a technique able to achieve high spectral efficiency even with low-order modulation formats (e.g., quadrature phase-shift keying). It consists in sending pulses that overlap in time or frequency or both to achieve high spectral efficiency. Coding and detection are properly designed to account for the introduced inter-symbol and inter-carrier interference. We present a software defined network (SDN) controller that sets transmission parameters (e.g., code rate) both at the transmitter and the receiver side. In particular, at the transmitter side, a programmable encoder adding redundancy to the data is controlled by SDN. At the receiver side, the digital signal processing is set by SDN based on the selected transmission parameters (e.g., code rate). Thus, extensions to the OpenFlow architectures are presented to control super-channel transmission based on time frequency packing. Then, the SDN-based DFC is proposed. According to DFC, the passband of the filters traversed by the same connection can be configured to different values. Experiments including data and control planes are shown to demonstrate the feasibility of optical-reach-adaptive super-channel at 1 Tb/s controlled by extended OpenFlow. Then, the effectiveness of the proposed SDN-based DFC is demonstrated in a testbed with both wavelength selective switches and spectrum selective switches, where filters traversed by a connection requires different passband values. Extended OpenFlow messages for time frequency packing and supporting DFC have been captured and shown in the paper. <s> BIB015 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> A multiflow transponder in flex-grid optical networks has recently been proposed as a transponder solution to generate multiple optical flows (or subcarriers). 
Multiflow transponders support high-rate super-channels (i.e., connection composed of multiple corouted subcarriers contiguous in the spectrum) and sliceability; i.e., flows can be flexibly associated to the incoming traffic requests, and, besides composing a super-channel, they can be directed toward different destinations. Transponders supporting sliceability are also called sliceable transponders or sliceable bandwidth variable transponders (SBVTs). Typically, in the literature, SBVTs have been considered composed of multiple laser sources (i.e., one for each subcarrier). In this paper, we propose and evaluate a novel multirate, multimodulation, and code-rate adaptive SBVT architecture. Subcarriers are obtained either through multiple laser sources (i.e., a laser for each subcarrier) or by exploiting a more innovative and cost-effective solution based on a multiwavelength source and micro-ring resonators (MRRs). A multiwavelength source is able to create several optical subcarriers from a single laser source. Then, cascaded MRRs are used to select subcarriers and direct them to the proper modulator. MRRs are designed and analyzed through simulations in this paper. An advanced transmission technique such as time frequency packing is also included. A specific implementation of a SBVT enabling an information rate of 400 Gb/s is presented considering standard 100 GbE interfaces. A node architecture supporting SBVT is also considered. A simulation analysis is carried out in a flex-grid network. The proposed SBVT architecture with a multiwavelength source permits us to reduce the number of required lasers in the network. <s> BIB016 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. 
Transceivers <s> Digital filter multiple access (DFMA) passive optical networks (PONs) are, for the first time to our knowledge, proposed and extensively investigated, where digital signal processing (DSP)-enabled, software-reconfigurable, digital orthogonal filtering is employed in each individual optical network unit (ONU) and the optical line terminal to enable all ONUs to dynamically share the transmission medium under the control of the centralized software-defined controller and the transceiver-embedded DSP controllers. The DFMA PONs fully support software-defined networking with the network control further extended to the physical layer. As digital filtering is the fundamental process at the heart of the proposed DFMA PONs, the filtering-induced tradeoffs among upstream transmission capacity, filter design flexibility, and filter DSP complexity are examined, based on which optimum filter design guidelines are identified for various application scenarios. Furthermore, the performance characteristics of the DFMA PONs are also numerically explored in terms of maximum achievable upstream transmission capacity, differential ONU launch power dynamic range, and ONU count-dependent minimum received optical power. <s> BIB017 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> This article reports the work on next generation transponders for optical networks carried out within the last few years. A general architecture supporting super-channels (i.e., optical connections composed of several adjacent subcarriers) and sliceability (i.e., subcarriers grouped in a number of independent super-channels with different destinations) is presented. Several transponder implementations supporting different transmission techniques are considered, highlighting advantages, economics, and complexity. Discussions include electronics, optical components, integration, and programmability. Application use cases are reported. 
<s> BIB018 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> We experimentally demonstrate multiple advanced functionalities of a cost-effective high-capacity sliceable-BVT using multicarrier technology. It is programmable, adaptive and reconfigurable by an SDN controller for efficient resource usage, enabling unique granularity, flexibility and grid adaptation, even in conventional fixed-grid networks. <s> BIB019 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> Future high-performance network-based applications are delivered over the high-capacity dynamic optical network. Each of these applications requires dedicated network service, which can be provided by the virtual optical network (VON) created by optical network virtualization. The virtualizable bandwidth variable transceiver (V-BVT) is a key enabling technology for optical network virtualization. In this paper, we propose the virtualization of V-BVT to support the virtualization of optical orthogonal frequency-division-multiplexing-based elastic optical network. We present a novel V-BVT architecture, and introduce both online and offline virtualization algorithms for V-BVT. Accordingly, multiple independent but coexisting virtual transceivers that share the same physical transceiver resources are created, in order to serve separate VONs. To guarantee the isolation of the created virtual transceivers together with their quality of transmission (QoT), the impact of physical layer impairments on different V-BVT solutions is also considered and integrated into the virtualization algorithms. In the offline virtualization, both heuristic and integral linear programming methods are proposed, in order to maximize the VON demand accommodation using the given physical V-BVT resources. In the online virtualization, a heuristic method is proposed to accommodate real-time received VON demands. 
By applying both algorithms, multiple virtual transceivers can be dynamically created based on the bandwidth and QoT of the VON demands. Finally, we evaluate and compare the performance of the proposed algorithms, and also verify the V-BVT transmission performance through simulation studies. <s> BIB020 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> This paper proposes HYbriD long-Reach fiber Access network (HYDRA), a novel network architecture that overcomes many limitations of the current WDM/TDM PON approaches leading to significantly improved cost and power consumption figures. The key concept is the introduction of an active remote node that interfaces to end-users by means of the lowest cost/power consumption technology (short-range xPON, wireless, etc.) while on the core network side it employs adaptive ultra-long reach links to bypass the metropolitan area network. The scheme leads to a higher degree of node consolidation and access-core integration. We demonstrate that HYDRA can achieve very high performance based on mature component technologies ensuring very low cost end-user terminals, reduced complexity, and high scalability. <s> BIB021 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> Software-defined transceivers are about to be established in long-haul optical communications. But will they be of equal importance in dynamic access networks? And which technology seems most promising? <s> BIB022 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> Advances in transmission technologies and control plane solutions are driving the introduction of spectrally-efficient ultrahigh rate superchannel transmissions. 
Modulation format, forward error correction/coding, and carrier spacing represent the key transmission parameters to configure in order to obtain efficient network resource utilization according to the specific optical path requirements. So far, several studies have mainly addressed the efficient configuration of a single selected transmission parameter. Instead, the topic of the combined configuration of the whole set of parameters still requires significant investigations, particularly in the case of automatic configuration procedures. In this study, we first review the aforementioned transmission parameters in terms of their adaptation capabilities. Then, a novel procedure for effective configuration of the whole set of transmission parameters is presented. The procedure, besides modulation format and coding configuration, includes a novel self-adaptation technique for the carrier spacing in a superchannel transmission. Moreover, the technique relies on a novel software-defined networking control to reoptimize the superchannel frequency slot width. The technique has been successfully validated in a field trial where a 1 Tb/s superchannel of eight subcarriers has been automatically adapted from a frequency slot width of 200 GHz to a more efficient slot width of 175 GHz, without traffic disruption. <s> BIB023 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> In order to serve the future high-performance network-based Internet applications, optical network virtualization is proposed to offer each application type a dedicated virtual optical network (VON). Virtualizeable bandwidth variable transceiver (V-BVT) is a key enabler in supporting the creation of multiple VONs. In this paper, we present a feasible V-BVT architecture that can be a part of a software-defined optical network. 
The proposed V-BVT has a novelty to offer independent operation, control, and management abilities to the clients or higher level network controllers. In addition, a specific V-BVT virtualization algorithm is proposed, in order to enable the efficient creation of multiple coexisting, but independent virtual transceivers that share the same V-BVT physical resources. The virtual transceiver can provide specific bit rate, subcarrier, modulation format, and a corresponding baud rate to each VON, based on the requirement of the VON demand, V-BVT resources availability, and optical network status. We further realize the proposed V-BVT architecture on an experimental platform with a software-defined network controller. The V-BVT resource allocation through the proposed virtualization algorithm is also performed using the extended OpenFlow protocol. The proposed and experimentally demonstrated V-BVT achieves independence in virtual transceivers control and management in the control plane, while maintaining the coexisting and isolation features in the physical layer. <s> BIB024 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> A. Transceivers <s> AON, one of the most deployed fiber access solutions in Europe, needs to be upgraded in order to satisfy the ever growing bandwidth demand driven by new applications and services. Meanwhile, network providers want to reduce both capital expenditures and operational expenditures to ensure that there is profit coming from their investments. This article proposes several migration strategies for AON from the data plane, topology, and control plane perspectives, and investigates their impact on the total cost of ownership. <s> BIB025
Software defined optical transceivers are optical transmitters and receivers that can be flexibly configured by SDN to transmit or receive a wide range of optical signals BIB022 . Generally, software defined optical transceivers vary the modulation format BIB001 of the transmitted optical signal by adjusting the transmitter and receiver operation through Digital Signal Processing (DSP) techniques. These transceivers have evolved in recent years from Bandwidth Variable Transceivers (BVTs) generating a single signal flow to sliceable multi-flow BVTs. Single-flow BVTs permit SDN control to adjust the transmission bandwidth of the single generated signal flow. In contrast, sliceable multi-flow BVTs allow for the independent SDN control of multiple communication traffic flows generated by a single BVT. 1) Single-Flow Bandwidth Variable Transceivers (BVTs): Software defined optical transceivers have initially been examined in the context of adjusting a single optical signal flow for flexible WDM networking BIB008 . The goal has been to make the photonic transmission characteristics of a given transmitter fully programmable. We proceed to review a representative single-flow BVT design for general optical mesh networks in detail and then summarize related single-flow BVTs for PONs and data center networks. a) Mach-Zehnder Modulator Based Flexible Transmitter: Choi and Liu et al. BIB009 , BIB010 have demonstrated a flexible transmitter based on Mach-Zehnder Modulators (MZMs) and a corresponding flexible receiver for SDN control in a general mesh network. The flexible transceiver employs a single dual-drive MZM that is fed by two binary electrical signals, as well as a parallel arrangement of two MZMs, which are fed by two additional electrical signals. By adjusting the direct current bias voltages and the amplitudes of the drive signals, the combination of MZMs can vary the amplitude and phase of the generated optical signal BIB007 .
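As a rough numerical sketch of how drive voltages set the optical field in such a dual-drive MZM, consider the following Python snippet. It assumes an idealized lossless modulator with a normalized half-wave voltage V_PI, and the nested push-pull arrangement for QPSK is a textbook simplification, not the exact transmitter of BIB009 :

```python
import numpy as np

V_PI = 1.0  # half-wave voltage, normalized (assumed value)

def dual_drive_mzm(v1, v2, e_in=1.0):
    """Ideal dual-drive MZM: each arm adds a phase of pi * V / V_PI."""
    return e_in / 2 * (np.exp(1j * np.pi * v1 / V_PI)
                       + np.exp(1j * np.pi * v2 / V_PI))

# Push-pull operation (v2 = -v1) traces out a real-valued field:
# full transmission, the null point, and a pi phase flip (BPSK).
assert abs(dual_drive_mzm(0.0, 0.0) - 1.0) < 1e-9
assert abs(dual_drive_mzm(0.5, -0.5)) < 1e-9
assert abs(dual_drive_mzm(1.0, -1.0) + 1.0) < 1e-9

def iq_modulator(bit_i, bit_q):
    """Two push-pull MZMs in parallel with a 90-degree offset (QPSK)."""
    v_i = V_PI if bit_i else 0.0
    v_q = V_PI if bit_q else 0.0
    return (dual_drive_mzm(v_i, -v_i)
            + 1j * dual_drive_mzm(v_q, -v_q)) / np.sqrt(2)

# The four drive combinations yield the four QPSK constellation points.
points = [iq_modulator(i, q) for i in (0, 1) for q in (0, 1)]
print([np.round(p, 3) for p in points])
```

In this simplified picture, the SDN control plane only needs to signal the bias and drive amplitudes (here, the `v1`/`v2` levels) to move between formats.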
Thus, modulation formats ranging from Binary Phase Shift Keying (BPSK) to Quadrature Phase Shift Keying (QPSK), as well as 8-QAM and 16-QAM BIB001 , can be generated. The amplitudes and bias voltages of the drive signals can be signaled through an SDN OpenFlow control plane to achieve the different modulation formats. The corresponding flexible receiver consists of a polarization filter that feeds four parallel photodetectors, each followed by an Analog-to-Digital Converter (ADC). The outputs of the four parallel ADCs are then processed with DSP techniques to automatically (without SDN control) detect the modulation format. Experiments in BIB009 , BIB010 have evaluated the bit error rates and transmission capacities of the different modulation formats and have demonstrated the SDN control. b) Single-Flow BVTs for PONs: Flexible optical networking with real-time bandwidth adjustments is also highly desirable for PON access and metro networks, although the BVT technologies for access and metro networks should have low cost and complexity BIB012 . Iiyama et al. have developed a DSP based approach that employs SDN to coordinate the downstream PON transmission of On-Off Keying (OOK) modulation BIB004 and Quadrature Amplitude Modulation (QAM) BIB002 signals. The OOK-QAM-SDN scheme involves a novel multiplexing method, wherein all the data are simultaneously sent from the OLT to the ONUs and the ONUs filter the data they need. The experimental setup also demonstrated digital software ONUs that concurrently transmit data by exploiting the coexistence of OOK and QAM. The OOK-QAM-SDN evaluations demonstrated control of the receiver sensitivity, which is very useful for a wide range of transmission environments. In a related study, Vacondio et al. BIB011 have examined Software-Defined Coherent Transponders (SDCT) for TDMA PON access networks.
The proposed SDCT digitally processes the bursts to adapt the burst-mode transmissions according to the distance of a user from the OLT. The performance results indicate that the proposed flexible approach more than doubles the average transmission capacity per user compared to a static approach. Bolea et al. BIB013 , BIB017 have recently developed low-complexity DSP reconfigurable ONU and OLT designs for SDN-controlled PON communication. The proposed communication is based on carrierless amplitude and phase modulation BIB005 enhanced with optical Orthogonal Frequency Division Multiplexing (OFDM) BIB013 . The different OFDM channels are manipulated through DSP filtering. As illustrated in Fig. 4 , the ONU consists of a DSP controller that controls the filter coefficients of the shaping filter. The filter output is then passed through a Digital-to-Analog Converter (DAC) and intensity modulator for electric-optical conversion. At the OLT, a photo diode converts the optical signal to an electrical signal, which then passes through an Analog-to-Digital Converter (ADC). The SDN-controlled OLT DSP controller sets the filter coefficients in the matching filter to correspond to the filtering in the sending ONU. The OLT DSP controller is also responsible for ensuring the orthogonality of all the ONU filters in the PON. The performance evaluations in BIB017 indicate that the proposed DSP reconfigurable ONU and OLT system achieves ONU signal bitrates around 3.7 Gb/s for eight ONUs transmitting upstream over a 25 km PON. The performance evaluations also illustrate that long DSP filter lengths, which increase the filter complexity, improve performance. c) Single-Flow BVTs for Data Center Networks: Malacarne et al. BIB014 have developed a low-complexity and low-cost bandwidth adaptable transmitter for data center networking.
The transmitter can multiplex Amplitude Shift Keying (ASK), specifically On-Off Keying (OOK), and Phase Shift Keying (PSK) on the same optical carrier signal without any special synchronization or temporal alignment mechanism. In particular, the transmitter design BIB014 uses the OOK electronic signal to drive a Mach-Zehnder Modulator (MZM) that is fed by the optical pulse modulated signal. SDN control can activate (or de-activate) the OOK signal stream, i.e., adapt from transmitting only the PSK signal to transmitting both the PSK and OOK signals, thus providing a higher transmission bit rate. 2) Sliceable Multi-Flow Bandwidth Variable Transceivers: Whereas the single-flow transceivers surveyed in Section III-A1 generate a single optical signal flow, parallelization efforts have resulted in multi-flow transceivers (transponders). Multi-flow transceivers can generate multiple parallel optical signal flows and thus form the infrastructure basis for network virtualization. a) Encoder Based Programmable Transponder: Sambo et al. BIB015 , BIB018 have developed an SDN-programmable bandwidth-variable multi-flow transmitter and a corresponding SDN-programmable multi-flow bandwidth variable receiver, referred to jointly as a programmable bandwidth-variable transponder. The transmitter mainly consists of a programmable encoder and multiple parallel Polarization-Multiplexing Quadrature Phase Shift Keying (PM-QPSK BIB001 ) laser transmitters, whose signals are multiplexed by a coupler. The encoder is SDN-controlled to implement Low-Density Parity-Check (LDPC) coding BIB006 with different code rates. At the receiver, the SDN control sets the local oscillators and the LDPC decoder. The developed transponder allows the setting of the number of subcarriers, the subcarrier bitrate, and the LDPC coding rate through SDN. Related frequency conversion and defragmentation issues have also been examined.
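The set of knobs that the SDN control plane exposes on such a programmable transponder (number of subcarriers, subcarrier bitrate, LDPC code rate) can be mocked up as a simple configuration object. This is an illustrative sketch only; the field names, distance thresholds, and rate values are assumptions, not the extended OpenFlow message fields of BIB015 :

```python
from dataclasses import dataclass

@dataclass
class SuperchannelConfig:
    subcarriers: int        # number of parallel PM-QPSK subcarriers
    subcarrier_gbps: float  # information bit rate per subcarrier
    ldpc_code_rate: float   # information bits / coded bits

def select_config(reach_km: float) -> SuperchannelConfig:
    """Pick a stronger LDPC code (lower rate) for longer optical paths.

    The thresholds and rates below are illustrative placeholders.
    """
    if reach_km <= 500:
        rate = 0.9
    elif reach_km <= 1500:
        rate = 0.8
    else:
        rate = 0.7
    return SuperchannelConfig(subcarriers=8, subcarrier_gbps=100.0,
                              ldpc_code_rate=rate)

print(select_config(2000).ldpc_code_rate)  # 0.7
```

The point of the sketch is that reach-adaptive coding reduces to the controller mapping a path property (length) onto one field of the transponder configuration.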
In BIB016 , a low-cost version of the SDN-programmable transponder with a multiwavelength source has been developed. The multiwavelength source is based on a micro-ring resonator BIB003 that generates multiple signal carriers with only a single laser. Automated configuration procedures for the comprehensive set of transmission parameters, including modulation format, coding configuration, and carriers, have been explored in BIB023 . b) DSP Based Sliceable BVT: Moreolo et al. BIB019 have developed an SDN-controlled sliceable BVT based on adaptive Digital Signal Processing (DSP) of multiple parallel signal subcarriers. Each subcarrier is fed by a DSP module that configures the modulation format, including the bit rate setting, and the power level of the carrier by adapting a gain coefficient. The output of the DSP module is then passed through digital-to-analog conversion that drives the laser sources. The parallel flows can be combined with a wavelength selective switch; the combined flow can be sliced into multiple distinct sub-flows for distinct destinations. The functionality of the developed DSP based BVT has been verified for a metropolitan area network with links reaching up to 150 km. c) Subcarrier and Modulator Pool Based Virtualizable BVT: Ou et al. BIB024 , BIB020 have developed a Virtualizable BVT (V-BVT) based on a combination of an optical subcarriers pool with an independent optical modulators pool, as illustrated in Fig. 5 . The emphasis of the design is on implementing multiple coexisting, yet independent, virtual transceivers that share the same physical transceiver resources. d) HYDRA: HYDRA BIB021 is a novel hybrid long-reach fiber access network architecture based on sliceable BVTs. HYDRA supports low-cost end-user ONUs through an Active Remote Node (ARN) that directly connects via a distribution fiber segment, a passive remote node, and a trunk fiber segment to the core (backbone) network, bypassing the conventional metro network. The ARN is based on an SDN-controlled S-BVT to optimize the modulation format.
With the modulation format optimization, the ARN can optimize the transmission capacity for the given distance (via the distribution and trunk fiber segments) to the core network. The evaluations in BIB021 demonstrate good bit error rate performance of representative HYDRA scenarios with a 200 km trunk fiber segment and distribution fiber lengths up to 100 km. In particular, distribution fiber lengths up to around 70 km can be supported without Forward Error Correction (FEC), whereas distribution fiber lengths above 70 km would require standard FEC. The consolidation of the access and metro network infrastructure BIB025 achieved through the optimized S-BVT transmissions can significantly reduce the network cost and power consumption.
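The reported distance-adaptive behavior can be condensed into a small decision sketch. The function and its return labels are illustrative; only the 70 km and 100 km thresholds (for the evaluated 200 km trunk scenario) are taken from the HYDRA results summarized above:

```python
def arn_fec_mode(distribution_km: float) -> str:
    """FEC selection for the evaluated HYDRA scenario (200 km trunk).

    Reported results: distribution spans up to ~70 km work without FEC,
    while spans up to 100 km require standard FEC.
    """
    if distribution_km < 0:
        raise ValueError("distance must be non-negative")
    if distribution_km <= 70:
        return "no-FEC"
    if distribution_km <= 100:
        return "standard-FEC"
    return "beyond-evaluated-reach"

print(arn_fec_mode(50))   # no-FEC
print(arn_fec_mode(90))   # standard-FEC
```

In a real deployment the ARN would additionally adapt the modulation format per link, but the same threshold-driven control structure applies.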
Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Small erbium-doped amplets and semiconductor optical amplifiers will be used in current and future metro and enterprise networks in various configurations. Many new system architectures will be enabled as these low-cost technologies are used to compensate for transmission and impairment-compensating component losses. This paper discusses the definition, use, and technologies associated with these new classes of optical amplifiers which, though little, will impact next-generation networks a great deal. <s> BIB001 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> As anyone who has operated a large network can attest, enterprise networks are difficult to manage. That they have remained so despite significant commercial and academic efforts suggests the need for a different network management paradigm. Here we turn to operating systems as an instructive example in taming management complexity. In the early days of computing, programs were written in machine languages that had no common abstractions for the underlying physical resources. This made programs hard to write, port, reason about, and debug. Modern operating systems facilitate program development by providing controlled access to high-level abstractions for resources (e.g., memory, storage, communication) and information (e.g., files, directories). These abstractions enable programs to carry out complicated tasks safely and efficiently on a wide variety of computing hardware. In contrast, networks are managed through low-level configuration of individual components. Moreover, these configurations often depend on the underlying network; for example, blocking a user’s access with an ACL entry requires knowing the user’s current IP address. 
More complicated tasks require more extensive network knowledge; forcing guest users’ port 80 traffic to traverse an HTTP proxy requires knowing the current network topology and the location of each guest. In this way, an enterprise network resembles a computer without an operating system, with network-dependent component configuration playing the role of hardware-dependent machine-language programming. What we clearly need is an “operating system” for networks, one that provides a uniform and centralized programmatic interface to the entire network. Analogous to the read and write access to various resources provided by computer operating systems, a network operating system provides the ability to observe and control a network. A network operating system does not manage the network itself; it merely provides a programmatic interface. Applications implemented on top of the network operating system perform the actual management tasks. The programmatic interface should be general enough to support a broad spectrum of network management applications. Such a network operating system represents two major conceptual departures from the status quo. First, the network operating system presents programs with a centralized programming model; programs are written as if the entire network were present on a single machine (i.e., one would use Dijkstra to compute shortest paths, not Bellman-Ford). This requires (as in [3, 8, 14] and elsewhere) centralizing network state. Second, programs are written in terms of high-level abstractions (e.g., user and host names), not low-level configuration parameters (e.g., IP and MAC addresses). This allows management directives to be enforced independent of the underlying network topology, but it requires that the network operating system carefully maintain the bindings (i.e., mappings) between these abstractions and the low-level configurations. 
Thus, a network operating system allows management applications to be written as centralized programs over high-level names as opposed to the distributed algorithms over low-level addresses we are forced to use today. While clearly a desirable goal, achieving this transformation from distributed algorithms to centralized programming presents significant technical challenges, and the question we pose here is: Can one build a network operating system at significant scale? <s> BIB002 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> There have been many attempts to unify the control and management of circuit and packet switched networks, but none have taken hold. In this paper we propose a simple way to unify both types of network using OpenFlow. The basic idea is that a simple flow abstraction fits well with both types of network, provides a common paradigm for control, and makes it easy to insert new functionality into the network. OpenFlow provides a common API to the underlying hardware, and allows all of the routing, control and management to be defined in software outside the datapath. <s> BIB003 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> IP and Transport networks are controlled and operated independently today, leading to significant Capex and Opex inefficiencies for the providers. We discuss a unified approach with OpenFlow, and present a recent demonstration of a unified control plane for OpenFlow enabled IP/Ethernet and TDM switched networks. <s> BIB004 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> A novel software-defined packet over optical networks solution based on the OpenFlow and GMPLS control plane integration is demonstrated.
The proposed architecture, experimental setup, and average flow setup time for different optical flows are reported. <s> BIB005 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> A comprehensive analysis and survey of the optical performance monitoring (OPM) is provided towards the deployment of translucent/transparent optical networks. OPM applications and technologies are reviewed for the different stages of the optical network life cycle: planning and provisioning, impairment mitigations, maintenance and failure location. Recommendations for the most relevant OPMs required to be integrated in translucent/transparent optical networks are provided. <s> BIB006 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> The wavelength routed network (WRN) accommodates traffic demands by establishing lightpaths along the corresponding routing paths. The wavelength of each lightpath is switched individually by traditional wavelength optical cross-connects (T-OXCs) to transit the traffic. Due to the traffic explosion and the resulting growth in wavelength number, WRNs face a challenge of the increase in node-size (i.e., the port number of a T-OXC) as well as the associated cost and control complexity. As an alternative solution, waveband switching (WBS) is introduced to group multiple wavelengths together as a band or fiber. Whenever possible, the group of wavelengths requires just a single port at a multi-granular optical cross-connect (MG-OXC). One fundamental problem in WBS networks is the routing and wavelength assignment (RWA). With the major goal of minimizing the port numbers in WBS networks, the optimal RWA problem was shown to be NP-Hard.
In the literature, various Integer Linear Programming models are proposed to optimally solve a small-size RWA problem, and many heuristic algorithms are proposed to provide a practical solution for the large-scale RWA problem in WBS networks. In this work, we comprehensively review literature studies on waveband switching networks. The topics covered include architecture, RWA problem solving strategies, and future challenges of wavelength conversion, protection, and lightpath rerouting in WBS networks. We aim at presenting a classified view of WBS networks, based on various aspects including the traffic pattern, node and network architecture, grouping policy, and the band configurations. We investigate factors that affect the goal of port reduction and blocking minimization in WBS networks. In addition, we explore several unique features of waveband switching in protection, wavelength conversion and rerouting, along which we point out multiple open challenges in WBS networks that deserve further studies. <s> BIB007 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> This paper discusses the benefits of applying software defined networking (SDN) to circuit based transport networks. It first establishes the need for SDN in the context of transport networks. This paper argues that the use of SDN in the transport layers could be the enabler for both packet-optical integration and improved transport network applications. Then, this paper proposes extensions to OpenFlow 1.1 to achieve control of switches in multi-technology transport layers. The approach presented in this paper is simple, yet it distinguishes itself from similar work by its friendliness with respect to the current transport layer control plane based on generalized multiprotocol label switching (GMPLS). This is important as it will enable an easier and gradual injection of SDN into existing transport networks. 
This paper is completed with a few use case applications of SDN in transport networks. <s> BIB008 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Data centers are experiencing an exponential increase in the amount of network traffic that they have to sustain due to cloud computing and several emerging web applications. To face this network load, large data centers are required with thousands of servers interconnected with high bandwidth switches. Current data center networks, based on electronic packet switches, consume excessive power to handle the increased communication bandwidth of emerging applications. Optical interconnects have gained attention recently as a promising solution offering high throughput, low latency and reduced energy consumption compared to current networks based on commodity switches. This paper presents a thorough survey on optical interconnects for next generation data center networks. Furthermore, the paper provides a qualitative categorization and comparison of the proposed schemes based on their main features such as connectivity and scalability. Finally, the paper discusses the cost and the power consumption of these schemes that are of primary importance in the future data center networks. <s> BIB009 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Present-day networks are being challenged by dramatic increases in data rate demands of emerging applications. A new network architecture, incorporating “optical flow switching,” will enable significant rate growth, power efficiency, and cost-effective scalability of next-generation networks. 
We will explore architecture concepts germinated 22 years ago, technology and testbed demonstrations performed in the last 17 years, and the architecture construct from the physical layer to the transport layer of an implementable optical flow switching network that is scalable and manageable. <s> BIB010 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> An in-band optical signal-to-noise ratio (OSNR) monitor is proposed, based on an instantaneous polarization state distribution analysis. The proposed monitor is simple, and is applicable to polarization division multiplexed signals. We fabricate a high-speed Stokes polarimeter that integrates a planar lightwave circuit (PLC) based polarization filter, high-speed InP/InGaAs photodiodes and InP hetero-junction bipolar transistor (HBT) trans-impedance amplifiers (TIA). We carry out proof-of-concept experiments with the fabricated polarimeter, and successfully measure the OSNR dependent polarization distribution with 100-Gb/s dual polarization quadrature phase shift keying (DP-QPSK) signals. <s> BIB011 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> We present the first elastic, space division multiplexing, and multi-granular network based on two 7-core MCF links and four programmable optical nodes able to switch traffic utilising the space, frequency and time dimensions with over 6000-fold bandwidth granularity. Results show good end-to-end performance on all channels with power penalties between 0.75 dB and 3.7 dB. <s> BIB012 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> This Review summarizes the simultaneous transmission of several independent spatial channels of light along optical fibres to expand the data-carrying capacity of optical communications. 
Recent results achieved in both multicore and multimode optical fibres are documented. <s> BIB013 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Next-generation ROADM networks are incorporating an extensive range of new features and capabilities including colorless, directionless, and contentionless multiplexing and demultiplexing, flexible spectrum channel definition, and higher-order modulation formats. To efficiently support these new features, both new ROADM node architectures along with complementary optical components and technologies are being synergistically designed. In this article, we describe these new architectures, components, and technologies, and how they work together to support these features in a compact and cost-efficient manner. <s> BIB014 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> A large number of factors generate uncertainty in traffic demands and requirements. In order to deal with uncertainty, optical nodes and networks are equipped with flexibility. In this context, we define several types of flexibility and propose a method, based on entropy maximization, to quantitatively evaluate the flexibility provided by optical node components, subsystems, and architectures. Using this method we demonstrate the equivalence, in terms of switching flexibility, of finer spectrum granularity and faster reconfiguration rate. We also show that switching flexibility is closely related to bandwidth granularity. The proposed method is used to derive formulae for the switching flexibility of key optical node components and the switching and architectural flexibility of four elastic optical node configurations.
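As a toy illustration of the entropy-maximization idea just described (not the cited paper's actual metric), a node's switching flexibility can be scored by the Shannon entropy of its reachable switching states; the score peaks at log2(N) bits when all N states are equally usable:

```python
import math

def switching_entropy(state_probs):
    """Shannon entropy (in bits) of a node's switching-state
    distribution -- an illustrative flexibility score."""
    return -sum(p * math.log2(p) for p in state_probs if p > 0)

# A node offering 8 switching states is "most flexible" when every
# state is equally likely: entropy reaches log2(8) = 3 bits.
uniform = [1 / 8] * 8
print(switching_entropy(uniform))       # -> 3.0

# A node that almost always sits in one state scores far lower.
skewed = [0.9] + [0.1 / 7] * 7
print(switching_entropy(skewed) < 3.0)  # -> True
```

This also conveys the equivalence claim informally: anything that multiplies the number of usable states (finer spectrum slots, faster reconfiguration) raises the same logarithmic score.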
The elastic optical nodes presented provide various degrees of flexibility and functionality that are discussed in the paper, from flexible spectrum switching to adaptive architectures that support elastic switching of frequency, time, and spatial resources plus on-demand spectrum defragmentation. We further complement this analysis by experimentally demonstrating flexible time, spectrum, and space switching plus dynamic architecture reconfiguration. The implemented architectures support continuous and subwavelength heterogeneous signals with bitrates ranging from 190 Mb/s, for a subwavelength channel, to 555 Gb/s for a multicarrier superchannel. Results show good performance and the feasibility of implementing the architecture-on-demand concept. <s> BIB015 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> This paper reports on the design, implementation, and evaluation of a multitechnology, multirate, and adaptable network architecture for metropolitan/edge areas. It is empowered by programmability in control and data planes, providing users with an open network platform to redefine and optimize its behavior and performance. It uses a hybrid data plane of fixed-grid [(sub)wavelength] and flex-grid systems to support a broad range of data rates (1 to 555 Gb/s). The programmability in the data plane is achieved by building the nodes with a modular and flexible architecture (architecture on demand nodes) to achieve different functionalities (fixed-/flex-grid switching with or without time multiplexing) on demand. A centralized, modular, and scalable control framework has been constructed for this network. It uses a set of software plug-ins designed for architecture synthesis and adaptation, for policing access to network resources, and as routing and resource allocation algorithms for network operation.
The proposed hybrid network architecture, along with allocation policies and resource allocation algorithms, is evaluated through simulations across a broad range of traffic profiles with bandwidth requests stretching from 1 to 400 Gb/s. Finally, the programmable data-plane/control-plane architecture has been implemented in an experimental testbed, and the functionality of the node and network elements, individually and together, has been tested, demonstrating the feasibility of the system. <s> BIB016 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> A conventional colorless and directionless reconfigurable optical add/drop multiplexer (ROADM) architecture is modified to add intra-node optical bypass and achieve either statistical or absolute contention-free performance. The contention-free performance is accomplished without relying on external transponders and optical transport network (OTN) switches. Furthermore, the overall ROADM has a smaller size, lower power consumption, and lower cost than those of conventional colorless, directionless, and contentionless ROADMs. <s> BIB017 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> There have been many proposals to unify the control and management of packet and circuit networks, but none have been deployed widely. In this paper, we propose a simple programmable architecture that abstracts a core transport node into a programmable virtual switch that meshes well with the software-defined network paradigm while leveraging the OpenFlow protocol for control. A demonstration use-case of an OpenFlow-enabled optical virtual switch implementation managing a small optical transport network for big-data applications is described.
With appropriate extensions to OpenFlow, we discuss how the programmability and flexibility SDN brings to packet-optical backbone networks will be instrumental in solving some of the complex multi-vendor, multi-layer, multi-domain issues service providers face today. <s> BIB018 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> This article surveys all known fields of network coding theory and leads the reader through the antecedents of the network coding theory to the most recent results, considering also information theory and matroid theory. By focusing on providing ideas and not formulas, this survey suits both mathematically oriented readers and newcomers to the area. Additionally, this survey also includes an innovative and clear graph representation of the most prominent literature on network coding theory, its relevance and evolution from the very beginning until today. <s> BIB019 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> We propose a fibre access network paradigm achieving low latency, high throughput and energy efficiency, by combining the best of PON and AON, optical and electrical forwarding, and the concepts of software defined networks, flexible grid, and cache assisted networking. <s> BIB020 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> We report the first field trials of an integrated packet/circuit hybrid optical network. In a long-haul field trial with production traffic, the mean capacity utilization of an Ethernet wavelength is doubled. The transport shares a single lightpath between the circuit and packet layers. Router bypassing is demonstrated at sub-wavelength granularity in a metro network field trial.
In both trials the circuit quality of service is shown to be independent of the load of the network. The vacant resources in the circuit are utilized by the packet layer's statistical multiplexing in an interleaved manner without affecting the timing of the circuit. In addition, an analytical model that provides an upper bound on the maximum achievable utilization is presented. <s> BIB021 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Circuit and packet switching convergence offers significant advantages in core networks to exploit their complementary characteristics in terms of flexibility, scalability and quality of service. This paper considers the possibility of unifying the two different types of transport using the Software Defined Networking (SDN) approach. The proposed architecture applies a modular design to the whole set of node functions, representing the key enabler for a fully programmable network implementation. This paper also proposes a possible extension to the basic concept of flow defined by the current OpenFlow standard to properly support a hybrid network. A set of experiments is performed to assess the main functionality and the performance of the hybrid node, where packet and circuit switching are assumed to be configured through the OpenFlow protocol in a fully automated way. <s> BIB022 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Electrical packet switching is well known as a flexible solution for small data transfers, whereas optical flow switching (OFS) might be an effective solution for large Internet file transfers. The UltraFlow project, a joint effort of three universities, Stanford, Massachusetts Institute of Technology, and University of Texas-Dallas, aims at providing an efficient dual-mode solution (i.e., IP and OFS) to the current network.
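The dual-mode rationale (packet switching for small transfers, OFS for bulk flows) can be sketched with a toy mode-selection policy. The 1.25 Gb/s IP and 10 Gb/s flow rates mirror the UltraFlow testbed figures quoted in this survey, while the 100 ms lightpath setup delay is an assumed illustrative value, not a measured one:

```python
def effective_throughput(size_bits, rate_bps, setup_s):
    """Throughput a transfer actually sees once per-flow setup
    (e.g., lightpath establishment) is amortized over its duration."""
    return size_bits / (setup_s + size_bits / rate_bps)

def pick_mode(size_bits, ip_rate=1.25e9, ofs_rate=10e9, ofs_setup=0.1):
    """Toy policy: use OFS only when the 10 Gb/s lightpath, despite
    its setup delay, beats the always-on 1.25 Gb/s IP path."""
    if effective_throughput(size_bits, ofs_rate, ofs_setup) > ip_rate:
        return "OFS"
    return "IP"

print(pick_mode(1e6))   # short transfer: setup dominates -> "IP"
print(pick_mode(1e10))  # bulk transfer: setup amortized  -> "OFS"
```

The crossover point shifts with the assumed setup delay, which is exactly why OFS is argued to pay off only for large file transfers.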
In this paper, we propose and experimentally demonstrate UltraFlow Access, a novel optical access network that enables dual-mode service to the end users: IP and OFS. The new architecture cooperates with legacy passive optical networks (PONs) to provide both IP and novel OFS services. The latter is facilitated by a novel optical flow network unit (OFNU) that we have proposed, designed, and experimentally demonstrated. Different colored and colorless OFNU designs are presented, and their impact on the network performance is explored. Our testbed experiments demonstrate concurrent bidirectional 1.25 Gbps IP and 10 Gbps per-wavelength Flow error-free communication delivered over the same infrastructure. The support of intra-PON OFS communication, that is, between two OFNUs in the same PON, is also explored and experimentally demonstrated. <s> BIB023 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectrally overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to severe channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future.
In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed. <s> BIB024 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> The principles of software-defined networking as applied to multi-service broadband optical access systems are discussed, with an emphasis on centralized software-reconfigurable resource management, digital signal processing (DSP)-enhanced transceivers and multi-service support via software-reconfigurable network “apps”. <s> BIB025 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> This paper demonstrates a testbed of a reconfigurable optical network composed of four ROADMs equipped with flexgrid WSS modules, optical amplifiers, optical channel monitors, and supervisor boards. A controller daemon implements a node abstraction layer based on the YANG language, providing NETCONF and CLI interfaces. Additionally, we demonstrate the virtualization of the GMPLS control plane, while supporting automatic topology discovery and TE-Link instantiation, enabling a path towards SDN. GMPLS has been extended to collect specific DWDM measurement data, allowing the implementation of adaptive/cognitive controls and policies for autonomic operation based on a global network view. <s> BIB026 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> As the infrastructure of cloud computing and big data, data centers have been deployed widely. How to make full use of the computing and storage resources in data centers has thus become the focus.
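The elastic spectrum allocation that O-OFDM enables can be illustrated with a minimal first-fit spectrum-slot assignment over a single flex-grid link. The `first_fit` helper and the 10-slot link below are illustrative inventions, not an algorithm taken from the cited works:

```python
def first_fit(spectrum, demand_slots):
    """Find the first contiguous run of free slots wide enough for
    the demand on one flex-grid link; mark it occupied and return
    its starting index, or return None if the demand is blocked."""
    run = 0
    for i, free in enumerate(spectrum):
        run = run + 1 if free else 0
        if run == demand_slots:
            start = i - demand_slots + 1
            for j in range(start, i + 1):
                spectrum[j] = False  # allocate the slots
            return start
    return None

# 10-slot link with slots 0-2 already occupied.
link = [False] * 3 + [True] * 7
print(first_fit(link, 4))  # -> 3 (slots 3-6 allocated)
print(first_fit(link, 4))  # -> None (only 3 contiguous slots left)
```

The second, blocked request also hints at why the spectrum defragmentation discussed later in this section matters: free slots can be plentiful yet non-contiguous.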
Data center networks, which include intra-data center and inter-data center networks, are considered an important solution to this problem. Both will depend on optical networking due to its advantages, such as high bandwidth, low latency, and low energy consumption. Data centers interconnected by flexi-grid optical networks are a promising scenario for allocating spectral resources to applications in a dynamic, tunable and efficient manner. Compared with inter-data center networks, optical interconnects in intra-data center networks are a more pressing need and a promising scenario to accommodate these applications in a dynamic, flexible and efficient manner. OpenFlow-based Software Defined Networking (SDN) is considered a technology well suited to data center networks. This paper mainly focuses on data center optical networks based on SDN, which can control heterogeneous networks through a unified resource interface. Architectures and experimental demonstrations of OpenFlow-based optical interconnects in intra-data center networks and OpenFlow-based flexi-grid optical networks for inter-data center networks are presented in the paper, respectively. Finally, some future works are listed. <s> BIB027 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Driven by various broadband applications, data centers have become one of the most important service resources, connected by IP and optical networks. How to use service resources and network resources together effectively has thus become a research focus. Towards realizing this goal, this paper proposes a unified control system for heterogeneous networks, which is implemented with Software Defined Networking (SDN) enabled by the OpenFlow protocol. Data center, IP network and optical network resources can be abstracted into a unified resource interface.
A NOX-based controller can make full use of these resources and provide users with different kinds of services. A remote demonstration is first presented with large-scale multi-layer and multi-domain networks. <s> BIB028 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Cognitive networks are a promising solution for the control of heterogeneous optical networks. We review their fundamentals as well as a number of applications developed in the framework of the EU FP7 CHRON project. <s> BIB029 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> We present results from the first demonstration of a fully integrated SDN-controlled bandwidth-flexible and programmable SDM optical network utilizing sliceable self-homodyne spatial superchannels to support dynamic bandwidth and QoT provisioning, infrastructure slicing and isolation. Results show that SDN is a suitable control plane solution for the high-capacity flexible SDM network. It is able to provision end-to-end bandwidth and QoT requests according to user requirements, considering the unique characteristics of the underlying SDM infrastructure. <s> BIB030 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Recent advances in optical communications not only increase the capacities of communication systems but also improve the system dynamicity and survivability. Various new technologies are invented to increase the bandwidth of individual wavelength channels and the number of wavelengths transmitted per fiber. Multiple access technologies have also been developed to support various emerging applications, including real-time, on-demand and high data-rate applications, in a flexible, cost-effective and energy-efficient manner.
In this paper, we overview recent research in optical communications and focus on the topics of modulation, switching, add-drop multiplexers, coding schemes, detection schemes, orthogonal frequency-division multiplexing, system analysis, cross-layer design, control and management, free space optics, and optics in data center networks. The primary purpose of this paper is to refresh the knowledge and broaden the understanding of advances in optical communications, and to encourage further research in this area and the deployment of new technologies in production networks. <s> BIB031 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> The decoupled architecture and the fine-grained flow control feature of SDN limit the scalability of SDN networks. In order to address this problem, some studies construct a flat control plane architecture, while other studies build a hierarchical control plane architecture to improve the scalability of SDN. However, the two kinds of structure still have unresolved issues: the flat control plane structure cannot solve the super-linear computational complexity growth of the control plane when the SDN network scales to large size, and the centralized abstracted hierarchical control plane structure brings a path stretch problem. To address the two issues, we propose Orion, a hybrid hierarchical control plane for large-scale networks. Orion can effectively reduce the computational complexity growth of the SDN control plane from super-linear to linear. Meanwhile, we design an abstracted hierarchical routing method to solve the path stretch problem. Further, Orion is implemented to verify the feasibility of the hybrid hierarchical approach. Finally, we verify the effectiveness of Orion from both the theoretical and experimental aspects. <s> BIB032 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B.
Space Division Multiplexing (SDM)-SDN <s> Facing the huge traffic challenge, optical networking shows great advantages in capacity and energy. However, its efficiency and flexibility are not satisfactory for data center networks, especially for intra-datacenter communications. This article reviews the typical architectures of data center networks, and suggests that a TWDM-PON system can be used in the edge layer and aggregation layer of datacenter networks. It proposes a software-defined, flexible and efficient passive optical network, combining software-defined technology and network coding, for intra-datacenter communications. Network coding is applied to increase the downstream bandwidth efficiency and overcome network bottlenecks, and software-defined technology provides flexibility in wavelength assignment, dynamically adapting to traffic statistics. To further increase efficiency, a seamless DBA (S-DBA) scheme and an ONU grouping algorithm are proposed to fully utilize the upstream idle time and minimize the traffic entering the core layer of the datacenter networks. They realize flexible scheduling and resource allocation in both the time and wavelength domains. The experimental simulations indicate that the proposed schemes and algorithms provide low delay, good fairness, increased efficiency and network flexibility. <s> BIB033 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> In the last few years, changing infrastructure and business requirements have been forcing enterprises to rethink their networks. Enterprises look to passive optical networks (PON) for increased network efficiency, flexibility, and cost reduction. At the same time, the emergence of Cloud and mobile in enterprise networks calls for dynamic network control and management following a centralized and software-defined paradigm.
In this context, we propose a software-defined edge network (SDEN) design that operates on top of PON. SDEN leverages PON benefits while overcoming its lack of dynamic control. This paper is a work in progress focusing on enabling key flow control functions over PON: dynamic traffic steering, service dimensioning and real-time re-dimensioning. We also discuss how the SDEN edge network can integrate with core SDN solutions to achieve end-to-end manageability. Through experimental case studies conducted on a live PON testbed deployment, we show the practical benefits and potential that SDEN can offer for enterprise network redesign. <s> BIB034 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> In this paper, we propose and experimentally demonstrate a reconfigurable long-reach (R-LR) UltraFlow access network to provide flexible dual-mode (IP and Flow) service with lower capital expenditure (CapEx) and higher energy efficiency. UltraFlow is a research project involving the collaboration of Stanford, MIT, and UT-Dallas. The design of the R-LR UltraFlow access network enables seamless integration of the Flow service with IP passive optical networks deployed with different technologies. To fulfill the high wavelength demand incurred by the extended service reach, we propose the use of multiple feeder fibers to form subnets within the UltraFlow access network. Two layers of custom switching devices are installed at the central office (CO) and remote node to provide flexibility in resource allocation and user grouping. With a centralized software-defined network (SDN) controller at the CO to control the dual-mode service, numerical analysis indicates that the reconfigurable architecture is able to reduce the CapEx during initial deployment by about 30%. A maximum of around 50% power savings is also achieved during low-traffic periods.
The feasibility of the new architecture and the operation of the SDN controller are both successfully demonstrated on our experimental testbed. <s> BIB035 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Data centers provide a volume of computation and storage resources for cloud-based services, and generate huge volumes of traffic in data center networks. Usually, data centers are connected by ultra-long-haul WDM optical transport networks due to their advantages, such as high bandwidth, low latency, and low energy consumption. However, owing to rigid bandwidth and coarse granularity, such networks show inefficient spectrum utilization and inflexible accommodation of various types of traffic. Based on OFDM, a novel architecture named the flexible grid optical network has been proposed, and has become a promising technology in data center interconnections. In flexible grid optical networks, the assignment and management of spectrum resources are more flexible, and agile spectrum control and management strategies are needed. In this paper, we introduce the concept of Spectrum Engineering, which could be used to maximize spectral efficiency in flexible grid optical networks. Spectrum Defragmentation, as one of the most important aspects of Spectrum Engineering, is demonstrated by OpenFlow in flexible grid optical networks. Experimental results are reported and verify the feasibility of Spectrum Engineering. <s> BIB036 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> We propose and discuss the extension of software-defined networking (SDN) and OpenFlow principles to optical access/aggregation networks for dynamic flex-grid wavelength circuit creation.
The first experimental demonstration of an OpenFlow 1.0-based flex-grid λ-flow architecture for dynamic 150 Mb/s per-cell 4G Orthogonal Frequency Division Multiple Access (OFDMA) mobile backhaul (MBH) overlays onto 10 Gb/s passive optical networks (PON), without optical network unit (ONU)-side optical filtering, amplification, or coherent detection, over 20 km of standard single mode fiber (SSMF) with a 1:64 passive split, is also detailed. The proposed approach can be attractive for monetizing optical access/aggregation networks via on-demand support for high-speed, low-latency, high quality of service (QoS) applications over legacy fiber infrastructure.
<s> BIB038 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> In the last few years, changing infrastructure and business requirements have been forcing enterprises to rethink their networks. Enterprises look for network infrastructures that increase network efficiency, flexibility, and cost reduction. At the same time, the emergence of Cloud and mobile in enterprise networks has introduced tremendous variability in enterprise traffic patterns at the edge. This highly mobile and dynamic traffic presents a need for dynamic capacity management and adaptive traffic steering, and calls for new infrastructures and management solutions. In this context, passive optical networks (PON) have gained attention in the last few years as a promising solution for enterprise networks, as they can offer efficiency, security, and cost reduction. However, network management in PON is not yet automated and needs human intervention. As such, capabilities for dynamic and adaptive PON are necessary. In this paper, we present a joint solution for PON capacity management both in deployment and in operation, so as to maximize peak load tolerance by dynamically allocating capacity to fit varying and migratory traffic loads. To this end, we developed the novel approaches of capacity-pool-based deployment and dynamic traffic steering in PON. Compared with traditional edge network design, our approach significantly reduces the need for capacity over-provisioning. Compared with generic PON networks, our approach enables dynamic traffic steering through software-defined control. We implemented our design on a production-grade PON testbed, and the results demonstrate the feasibility and flexibility of our approach. <s> BIB039 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B.
Space Division Multiplexing (SDM)-SDN <s> In this paper, a flexible and reliable 40-Gb/s time and wavelength division multiplexing passive optical network (TWDM-PON) architecture is proposed and demonstrated. Here, a 4 × 10 Gb/s orthogonal frequency-division multiplexing (OFDM) downstream signal is achieved by utilizing four 2.5-GHz directly modulated lasers in the optical line terminal. A reflective semiconductor optical amplifier is utilized in each optical network unit to serve as the upstream transmitter (Tx), transmitting 2.5-Gb/s on-off keying (OOK) and 10-Gb/s OFDM upstream traffic, respectively. In addition, dynamic bandwidth (capacity) allocation and fiber fault protection can also be achieved by applying the newly proposed PON architecture with a software-defined networking approach. <s> BIB040 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> The huge increase in broadband services requires much more bandwidth than ever before; however, due to cost sensitivity, it is not possible to blindly pursue high transmission rates in the access network, which requires us to consider how to improve network efficiency. In this paper, a software-defined passive optical network architecture with network coding (NC) is proposed to reduce downstream bandwidth consumption and thus increase the throughput and network efficiency. To flexibly implement the coding operation on local peer traffic, an NC pair management scheme is provided that keeps compatibility with the current multi-point control protocol (MPCP) in a single optical line terminal (OLT). Considering the trends in OLT pooling and the requirement of smooth network upgrades, software-defined networking (SDN) techniques are applied in the NC-based passive optical networks.
Through re-arranging the affiliations between the OLTs and optical network units (ONUs), the local traffic between peer ONUs will be led from non-NC-supported OLTs to NC-supported OLTs, and then the downstream efficiency will still be quite high even in a hybrid OLT pool. The experiments and evaluation results show that the software-defined passive optical networks with NC reduce nearly 50% of the occupied downstream bandwidth when there is local traffic between peer ONUs, even in a hybrid OLT pool. <s> BIB041 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> A novel latency-aware aggregation node architecture supporting TWDM-PONs is successfully demonstrated. The node, performing traffic scheduling according to sleep-mode operations, includes a lightweight SDN solution, scaled to operate as an intra-node controller. <s> BIB042 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> In this paper, we propose, design, and demonstrate a novel Intra-PON Flow transmission with optical reroute using a Quasi-PAssive Reconfigurable (QPAR) node. The network can be reconfigured adaptively according to the monitored traffic status in a software-defined manner. Simulations show that the PON with reroute architecture can achieve ∼20% higher network capacity compared to the PON without reroute case with the same traffic waiting time or blocking probability requirement. PON with reroute consistently outperforms the PON without reroute configuration with 20% larger throughput and 24% less power consumption with an Intra-PON traffic ratio of 0.3. In addition, adaptive Intra-wavelength assignment with a QPAR node can adapt to the subscription rate growth over time, and provide cost and power savings compared to the PON without reroute and fixed PON with reroute architectures by approximately 20% and 10%.
Moreover, the adaptive Intra-PON architecture with a QPAR node can facilitate efficient multicast transmission for video or file backup among multiple servers located in different access networks, which can provide lower traffic waiting time and 14% power saving, and support roughly 30% higher traffic compared to the fixed PON with reroute design with a multicast ratio of 0.5. <s> BIB043 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> We have recently proposed and demonstrated, by means of simulation, the benefits of a simple yet effective cognitive technique to enhance stateless Path Computation Element algorithms with the aim of reducing the connection blocking probability when relying on a potentially non-up-to-date traffic engineering database. In this paper, we employ that technique, called elapsed time matrix (ETM), in the framework of the CHRON (Cognitive Heterogeneous Reconfigurable Optical Network) architecture and, more importantly, validate and analyze its performance in an emulation environment (rather than in a simulation environment) supporting impairment-aware lightpath establishment. Not only has dynamic lightpath establishment on demand been studied, but also restoration processes when facing optical link failures. Emulation results demonstrate that ETM reduces the blocking probability when establishing lightpaths on demand, and increases the percentage of successful restorations in case of optical link failure. Moreover, the use of that technique has little impact on lightpath setup time and lightpath restoration time. <s> BIB044 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Terabit elastic optical networking (EON) is foreseen as a viable solution to extend the lifetime of a network exploiting the available bandwidth in previously deployed optical fibers.
EON is based on bandwidth-variable transponders capable of supporting multiple bit rates and/or modulation formats according to traffic requirements and node architectures that route arbitrary channel bandwidths. Thus, EON increases the heterogeneity of the network, which may create the need for autonomic adaptive and/or cognitive techniques. In this context, the software-defined networking (SDN) paradigm emerges as an opportunity to enable such techniques thanks to the centralized view of the network by decoupling the control plane and the data plane. This paper surveys different activities carried out at the Optical Technologies Division in Centro de Pesquisa e Desenvolvimento em Telecomunicacoes, Brazil. We review an optical transport SDN controller for virtual optical networks that supports two adaptive algorithms. First, the autonomic flexible transponder reconfigures the transmission modulation format according to a threshold level. Second, the adaptive global spectrum equalization reconfigures the wavelengths' attenuation profiles applied at the optical nodes to improve the signals' optical signal-to-noise ratio (OSNR) at reception. Finally, we report experimental results of an in-band OSNR monitor for advanced modulation formats. <s> BIB045 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> The coherent revolution has signaled a new era for optical networks. Flexible transceivers (TRx) able to adapt a wide range of transport layer parameters such as modulation format, symbol rate, center wavelength, forward error correction (FEC), etc., are the key enabling components that will finally deliver on the promise of dynamic network operation. With flexibility comes the potential for a more optimized network, leading, in turn, to increased network efficiency and capacity.
To be subject to optimization, an optical network has to first be observable, and this is what the ORCHESTRA project aims to introduce: physical layer status monitoring with an unprecedented level of detail, enabled by the digital signal processing (DSP) in the deployed digital coherent receivers, which will function as software-defined optical performance monitors (soft-OPMs). Novel OPM algorithms will be developed and combined with a novel hierarchical monitoring plane, cross-layer optimization algorithms, and active-control functionalities. ORCHESTRA's vision is to close the control loop, enabling true network dynamicity and unprecedented network efficiency. <s> BIB046 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> In flexigrid elastic optical networks, filtering cascade effects strongly affect the overall transmission performance. To avoid these detrimental impairments, each lightpath is typically configured, in an independent way, to encompass additional spectrum resources. Thus, the flat central region of the traversed filters is exploited and their transition region around the cutoff frequencies is avoided. However, handling lightpaths independently leads to less efficient spectrum utilization. In this study, we propose a novel technique, called superfilters, which enables different lightpaths, with different source-destination pairs, to coexist within the same flat region of a single filter configuration. That is, the technique consists of a path-computation strategy that applies differentiated configurations of lightpath traversed filters, which are decoupled from head-end lightpath configurations. Simulative transmission results are provided to assess the benefits of the proposed technique. Then, an experimental implementation is presented, including a software-defined networking implementation successfully applied to a flexigrid network testbed.
<s> BIB047 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> We propose and experimentally validate an SDN-enabled cognitive methodology for EDFA gain adjustment that relies on case-based reasoning. Results show OSNR improvements over time, demonstrating the cognition process regardless of the deployed amplifier type. <s> BIB048 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Global optimization of optical network elements (NEs) is a potential solution to provide optical signal-to-noise ratio (OSNR) requirements for high spectrally-efficient (SE) modulation formats, essential in next-generation optical networks. In this context, software defined networking (SDN) is a suitable paradigm that allows for global monitoring and NE actuation by decoupling data and control planes. By taking that into account, we review our recently proposed SDN dual-optimization application for EDFAs and WSS-based ROADMs which targets the optimization of the OSNR. In this work we detail the implementation of the application based on a state-machine approach. The application is tested using an SDN controller in a metropolitan optical network testbed. Experimental results show OSNR improvements of up to 10 dB for different application strategies and for different numbers of ROADM nodes in cascade. <s> BIB049 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Passive optical networks (PON) have become a promising solution for access networks because of the advantages they offer, such as high efficiency, security, and cost reduction. However, network management in PON is not yet automated and needs network operator intervention. In recent years, software-defined networking (SDN) has become an emerging technology.
Through the separation of control and data plane in SDN switches, SDN provides dynamically fine-grained traffic control that enhances total network controllability and manageability. In this paper, we leverage the benefits of gigabit-capable passive optical network (GPON), while enhancing its capabilities for traffic management to the same level as an SDN switch. More specifically, we abstract the underlying physical GPON into an OpenFlow-enabled virtual SDN switch. The virtual switch can be used to connect multiple sites in widespread geographic locations. Similar to a real OpenFlow switch, a GPON virtual switch can be controlled by a standard OpenFlow controller. In our design, an embedded OpenFlow agent resides in the optical line termination (OLT) of the underlying GPON. The agent communicates with the external OpenFlow controller and simultaneously uses the optical network unit management and control interface inside the OLT to manage ONUs. We created a prototype system based on a commodity GPON network. In the virtual switch, we implemented all the OpenFlow functions, including packet forwarding, bandwidth metering, statistical data collection, and status reporting. The experimental results show that the GPON virtual switch can correctly perform all the functions defined in the OpenFlow 1.3 specification. Its performance on flow entry modification time, dynamic bandwidth control, and switch status monitoring is comparable to the performance of a real OpenFlow switch. <s> BIB050 </s> Software Defined Optical Networks (SDONs): A Comprehensive Survey <s> B. Space Division Multiplexing (SDM)-SDN <s> Our recently proposed cognitive methodology for optical amplifier gain adjustment, which relies on case-based reasoning, showed optical signal-to-noise ratio improvements over time, demonstrating the cognition process regardless of the deployed amplifier type.
In this paper, we extend our preliminary analysis exploring the cognitive methodology benefits for different and larger network topologies. The obtained results show agreement between networks, demonstrating the methodology's suitability regardless of the network scenario. <s> BIB051
Amaya et al. BIB012 , BIB030 have demonstrated SDN control of Space Division Multiplexing (SDM) BIB013 BIB031 . The ROADM provides elementary switching functionality in the optical wavelength domain. Initial ROADM-based node architectures for cost-effectively supporting flexible SDN networks have been presented in BIB014 . Conventional ROADM networks typically have statically configured wavelength channels that transport traffic along a pre-configured route. Changes of wavelength channels or routes in the statically configured networks presently incur high operational costs due to the required physical interventions and are therefore typically avoided. New ROADM node designs allow changes of wavelength channels and routes through a management control plane. Due to these two flexibility dimensions (wavelength and route), these new ROADM nodes are referred to as "colorless" and "directionless". First designs for such colorless and directionless ROADM nodes have been outlined in BIB014 and further elaborated in BIB015 , BIB016 . In addition to the colorless and directionless properties, the contentionless property has emerged for ROADMs . Contentionless ROADM operation means that any port can be routed on any wavelength (color) in any direction without causing resource contention. Designs for such Colorless-Directionless-Contentionless (CDC) ROADMs have been proposed in , BIB017 . In general, the ROADM designs consist of an express bank that interconnects the input and output ports coming from/leading to other ROADMs, and an add-drop bank that connects the express bank with the local receivers for dropped wavelength channels or transmitters for added wavelength channels. The recent designs have focused on the add-drop bank and explored different arrangements of wavelength selective switches and multicast switches to provide add-drop bank functionality with the CDC property , BIB017 . Garrich et al.
BIB038 have recently designed and demonstrated a CDC ROADM with an add-drop bank based on an Optical Cross-Connect (OXC) backplane BIB007 . The OXC backplane allows for highly flexible add/drop configurations implemented through SDN control. The backplane-based ROADM has been analytically compared with prior designs based on wavelength selective and multicast switches and has been shown to achieve higher flexibility and lower losses. An experimental evaluation has tested the backplane-based ROADM for a metropolitan area mesh network extending over 100 km with an aggregate traffic load of close to 9 Tb/s. b) Open Transport Switch (OTS): The Open Transport Switch (OTS) BIB018 is an OpenFlow-enabled optical virtual switch design. The OTS design abstracts the details of the underlying physical switching layer (which could be packet switching or circuit switching) to a virtual switch element. The OTS design introduces three agent modules (discovery, control, and data plane) to interface with the physical switching hardware. These agent modules are controlled from an SDN controller through extended OpenFlow messages. Performance measurements for an example testbed network setup indicate circuit path computation latencies on the order of 2-3 s, which can be reduced through faster processing in the controller. c) Logical xBar: The logical xBar has been defined to represent a programmable switch. An elementary (small) xBar could consist of a single OpenFlow switch. Multiple small xBars can be recursively merged to form a single large xBar with a single forwarding table. The xBar concept envisions that xBars are the building blocks for forming large networks. Moreover, labels based on SDN and MPLS are envisioned for managing the xBar data plane forwarding. The xBar concepts have been further advanced in the Orion study BIB032 to achieve low computational complexity of the SDN control plane. d) Optical White Box: Nejabati et al.
[173] have proposed an optical white box switch design as a building block for a completely softwarized optical network. The optical white box design combines a programmable backplane with programmable switching node elements. More specifically, the backplane consists of two slivers, namely an optical backplane sliver and an electronic backplane sliver. These slivers are set up to allow for flexible arbitrary connections between the switch node elements. The switch node elements include programmable interfaces that build on SDN-controlled BVTs (see Section III-A), protocol-agnostic switching, and DSP elements. The protocol-agnostic switching element is envisioned to support both wavelength channel and time slot switching in the optical backplane as well as programmable switching with a high-speed packet processor in the electronic backplane. The DSP elements support both the network processing and the signal processing for executing a wide range of network functions. A prototype of the optical white box has been built with only an optical backplane sliver consisting of a 192 × 192 optical space switch. Experiments have indicated that the creation of a virtual switching node with the OpenDaylight SDN controller takes roughly 400 ms. e) GPON Virtual Switch: Lee et al. BIB050 have developed a GPON virtual switch design that makes the GPON fully programmable, similar to a conventional OpenFlow switch. Preliminary steps towards the GPON virtual switch design have been taken by Gu et al. BIB033 , who developed components for SDN control of a PON in a data center, and Amokrane et al. BIB034 , BIB039 , who developed a module for mapping OpenFlow flow control requests into PON configuration commands. Lee et al. BIB050 have expanded on this groundwork to abstract the entire GPON into a virtual OpenFlow switch. More specifically, Lee et al.
have comprehensively designed a hardware architecture and a software architecture to allow SDN control to interface with the virtual GPON as if it were a standard OpenFlow switch. The experimental performance evaluation of the designed GPON virtual switch measured response times for flow entry modifications from an ONU port (where a subscriber connects to the virtual GPON switch) to an SDN external port of around 0.6 ms, which compares to 0.2 ms for a corresponding flow entry modification in a conventional OFsoftswitch and 1.7 ms in an EdgeCore AS4600 switch. In a related study on SDN-controlled switching in a PON, Yeh et al. BIB040 have designed an ONU with an optical switch that selects OFDM subchannels in a TWDM-PON. The switch in the ONU allows for flexible dynamic adaptation of the downstream bandwidth through SDN. Gu et al. BIB041 have examined the flexible SDN-controlled re-arrangement of ONUs to OLTs so as to efficiently support PON service with network coding BIB019 . f) Flexi Access Network Node: A flexi-node for an access network that flexibly aggregates traffic flows from a wide range of networks, such as local area networks and base stations of wireless networks, has been proposed in BIB020 . The flexi-node design is motivated by the shortcomings of the currently deployed core/metro network architectures that attempt to consolidate the access and metro networks. This consolidation forces all traffic in the access network to traverse the metro network, even if the traffic is destined to nodes in the coverage area of the access network. In contrast, the proposed flexi-node encompasses electrical and optical forwarding capabilities that can be controlled through SDN. The flexi-node can thus serve as an effective aggregation node in access-metro networks. Traffic that is destined to other nodes in the coverage area of an access network can be sent directly to the access network. Kondepu et al.
have similarly presented an SDN-based PON aggregation node BIB042 . In their architecture, multiple ONUs communicate with the SDN controller within the aggregation node to request the scheduling of upstream transmission resources. ONUs are then serviced by multiple Optical Service Units (OSUs), which exist within the aggregation node alongside the SDN controller. The OSUs are then configured by the controller based on Time and Wavelength Division Multiplexed (TWDM) PON operation. The OSUs switch between normal mode and sleep mode depending on the traffic load, thus saving power. 2) Switching Paradigms: a) Converged Packet-Circuit Switching: Hybrid packet-circuit optical network infrastructures controlled by SDN have been explored in a few studies. Das et al. BIB003 have described how to unify the control and management of circuit- and packet-switched networks using OpenFlow. Since packet- and circuit-switched networking are extensively employed in optical networks, examining their integration is an important research direction. Das et al. have given a high-level overview of a flow abstraction for each type of switched network and a common control paradigm. In their follow-up work, Das et al. BIB004 have described how a packet and circuit switching network can be implemented in the context of an OpenFlow-protocol-based testbed. The testbed is a standard Ethernet network that could generally be employed in any access network with Time Division Multiplexing (TDM). Veisllari et al. BIB021 have studied hybrid packet/circuit optical long-haul metro access networks. Although Veisllari et al. indicated that SDN can be used for load balancing in the proposed packet/circuit network, no detailed study of such an SDN-based load balancing has been conducted in BIB021 . Related switching paradigms that integrate SDN with Generalized Multiple Protocol Label Switching (GMPLS) have been examined in BIB005 , BIB008 , while data-center-specific aspects have been surveyed in BIB009 . Cerroni et al.
BIB022 have further developed the concept of unifying circuit- and packet-switching networks with OpenFlow, which was initiated by Das et al. BIB003 , BIB004 . The unification is accomplished with SDN on the network layer and can be used in core networks. Specifically, Cerroni et al. BIB022 have described an extension of the OpenFlow flow concept to support hybrid networks. OpenFlow message format extensions to include matching rules and flow entries have also been provided. The matching rules can represent different transport functions, such as a channel on which a packet is received in optical circuit-switched WDM networks, time slots in TDM networks, or transport class services (such as guaranteed circuit service or best effort packet service). Cerroni et al. BIB022 have presented a testbed setup and reported performance results for throughput (in bit/s and packets/s) to demonstrate the feasibility of the proposed unified OpenFlow switching network. b) R-LR-UFAN: The Reconfigurable Long-Reach UltraFlow Access Network (R-LR-UFAN) BIB023 , BIB035 provides flexible dual-mode transport service based on either the Internet Protocol (IP) or Optical Flow Switching (OFS). OFS BIB010 provides dedicated end-to-end network paths through purely optical switching, i.e., there is no electronic processing or buffering at intermediate network nodes. The R-LR-UFAN architecture employs multiple feeder fibers to form subnets within the network. UltraFlow coexists alongside the conventional PON OLT and ONUs. The R-LR-UFAN introduces new entities, namely the Optical Flow Network Unit (OFNU) and the SDN-controlled Optical Flow Line Terminal (OFLT). A Quasi-PAssive Reconfigurable (QPAR) node BIB043 is introduced between the OFNU and OFLT. The QPAR node can reroute intra-PON traffic between OFNUs without having to pass through the OFLTs.
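The intra-PON reroute decision just described can be sketched as a simple controller-side check; the subnet map and the QPAR free-channel bookkeeping below are hypothetical illustrations, not the actual R-LR-UFAN interfaces:

```python
# Hypothetical sketch of an SDN controller deciding whether intra-PON
# traffic can bypass the OFLT via the QPAR node. The data structures
# (subnet map, free-channel counter) are illustrative assumptions.

def choose_path(src_ofnu, dst_ofnu, subnet_of, qpar_free_channels):
    """Return 'qpar-optical' when both OFNUs share a subnet and the QPAR
    node has a free channel; otherwise route conventionally via the OFLT."""
    same_subnet = subnet_of[src_ofnu] == subnet_of[dst_ofnu]
    if same_subnet and qpar_free_channels > 0:
        return "qpar-optical"  # optically rerouted, no OFLT processing
    return "via-oflt"          # inter-subnet traffic or QPAR exhausted

subnet_of = {"ofnu1": "A", "ofnu2": "A", "ofnu3": "B"}
print(choose_path("ofnu1", "ofnu2", subnet_of, qpar_free_channels=2))  # qpar-optical
print(choose_path("ofnu1", "ofnu3", subnet_of, qpar_free_channels=2))  # via-oflt
```

In an actual deployment, the controller would make this decision based on the monitored traffic status, which is what enables the reported throughput and power gains.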
The optically rerouted intra-PON channels can be used for communication between wireless base stations supporting inter-cell device-to-device communication. The testbed evaluations indicate that for an intra-PON traffic ratio of 0.3, the QPAR strategy achieves power savings of up to 24%. c) Flexi-grid: The principle of flexi-grid (elastic) optical networking BIB024 - BIB036 has been explored in several SDN infrastructure studies. Generally, flexi-grid networking strives to enhance the efficiency of the optical transmissions by adapting physical (photonic) transmission parameters, such as modulation format, symbol rate, number and spacing of subcarrier wavelength channels, as well as the ratio of forward error correction to payload. Flexi-grid transmissions have become feasible with high-capacity flexible transceivers. Flexi-grid transmissions use narrower frequency slots (e.g., 12.5 GHz) than classical Wavelength Division Multiplexing (WDM), which typically uses 50 GHz frequency slots, and can flexibly form optical transmission channels that span multiple contiguous frequency slots. Cvijetic BIB025 has proposed a hierarchical flexi-grid infrastructure for multiservice broadband optical access utilizing centralized software-reconfigurable resource management and digital signal processing. The proposed flexi-grid infrastructure incorporates mobile backhaul, as well as SDN-controlled transceivers (see Section III-A). In follow-up work, Cvijetic et al. BIB037 have designed a dynamic flexi-grid optical access and aggregation network. They employ SDN to control tunable lasers in the OLT for flexible downstream transmissions. Flexi-grid wavelength selective switches are controlled through SDN to dynamically tune the passband for the upstream transmissions arriving at the OLT. Cvijetic et al. BIB037 obtained good results for the upstream and downstream bit error rate and were able to provide 150 Mb/s per wireless network cell. Oliveira et al.
BIB026 have demonstrated a testbed for a Reconfigurable Flexible Optical Network (RFON), which was one of the first physical layer SDN-based testbeds. The RFON testbed comprises 4 ROADMs with flexi-grid Wavelength Selective Switching (WSS) modules, optical amplifiers, optical channel monitors, and supervisor boards. The controller daemon implements a node abstraction layer and provides configuration details for an overall view of the network. Also, virtualization of the GMPLS control plane with topology discovery and Traffic Engineering (TE)-link instantiation has been incorporated. Instead of using OpenFlow, the RFON testbed uses the YANG data modeling language to obtain the topology information and collect monitoring data for the lightpaths. Zhao et al. BIB027 have presented an architecture with OpenFlow-based optical interconnects for intra-data center networking and OpenFlow-based flexi-grid optical networks for inter-data center networking. Zhao et al. focus on the SDN benefits for inter-data center networking with heterogeneous networks. The proposed architecture includes a service controller, an IP controller, and an optical controller based on the Father Network Operating System (F-NOX) BIB002 , BIB028 . The performance evaluations in BIB027 include results for blocking probability, release latency, and bandwidth spectrum characteristics. The Cognitive Heterogeneous Reconfigurable Optical Network (CHRON) architecture has been outlined in BIB029 - BIB044 . CHRON senses the current network conditions and adapts the network operation accordingly. The three main components of CHRON are monitoring elements, software adaptable elements, and cognitive processes. The monitoring elements observe two main types of optical transmission impairments, namely noncatastrophic impairments and catastrophic impairments.
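To make the interplay of these three components concrete, one iteration of the sensing-and-adaptation cycle can be sketched as follows; the event fields, the OSNR threshold, and the action names are hypothetical illustrations rather than part of the CHRON specification:

```python
# Minimal sketch of one cognitive monitor -> decide -> act iteration.
# Event fields, thresholds, and action names are illustrative assumptions.

CATASTROPHIC = {"fiber_cut", "switch_failure"}  # completely disrupt traffic

def classify_impairment(event):
    """Split monitored events into the two impairment classes."""
    return "catastrophic" if event["type"] in CATASTROPHIC else "noncatastrophic"

def cognitive_step(event):
    """Decide which software adaptable elements the controller instructs."""
    if classify_impairment(event) == "catastrophic":
        return "trigger_restoration"               # e.g., reroute lightpaths
    if event.get("osnr_db", float("inf")) < 18.0:  # illustrative OSNR floor
        return "adapt_transmission_parameters"     # e.g., sturdier modulation
    return "no_action"

print(cognitive_step({"type": "fiber_cut"}))
print(cognitive_step({"type": "osnr_drift", "osnr_db": 15.2}))
```

SDN supplies the missing piece of this loop: the centralized controller that gathers the monitoring events and pushes the resulting decisions to the adaptable elements.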
Noncatastrophic impairments include the photonic impairments that degrade the Optical Signal to Noise Ratio (OSNR), such as the various forms of dispersion, cross-talk, and non-linear propagation effects, but do not completely disrupt the communication. In contrast, a catastrophic impairment, such as a fiber cut or malfunctioning switch, can completely disrupt the communication. Advances in optical performance monitoring allow for in-band OSNR monitoring BIB006 - BIB011 at midpoints in the communication path, e.g., at optical amplifiers and ROADMs. The cognitive processes involve collecting the monitoring information in the controller, executing control algorithms, and instructing the software adaptable components to implement the control decisions. SDN can provide the framework for implementing these cognitive processes. Two main types of software adaptable components have been considered so far BIB045 , BIB046 , namely control of transceivers and control of wavelength selective switches/amplifiers. 1) Transceiver Control: For transceiver control, the cognitive control adjusts the transmission parameters. For instance, transmission bit rates can be adjusted through varying the modulation format or the number of signal carriers in multicarrier communication (see Section III-A). 2) Wavelength Selective Switch/Amplifier Control: In general, ROADMs (see Section III-C1a) employ wavelength selective switches based on filters to add or drop wavelength channels for routing through an optical network. Detrimental nonideal filtering effects accumulate and impair the OSNR BIB047 . At the same time, Erbium Doped Fiber Amplifiers (EDFAs) BIB001 are widely deployed in optical networks to boost optical signal power that has been depleted through attenuation in fibers and ROADMs. However, depending on their operating points, EDFAs can introduce significant noise. Moura et al. BIB048 , BIB051 have explored SDN-based adaptation strategies for EDFA operating points to increase the OSNR.
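A minimal sketch of such case-based gain adaptation is given below; it reduces the retrieve/reuse steps of case-based reasoning to a nearest-neighbor lookup over hypothetical (input power, gain, OSNR) cases and is not the actual algorithm of BIB048 :

```python
# Toy retrieve/reuse step of case-based EDFA gain adjustment.
# The case base and its fields are hypothetical illustrations.

def suggest_gain(case_base, p_in_dbm, default_gain_db=17.0, tol_db=1.0):
    """Among past cases with similar input power, reuse the gain that
    yielded the best OSNR; fall back to a default when no case is close."""
    near = [c for c in case_base if abs(c["p_in_dbm"] - p_in_dbm) <= tol_db]
    if not near:
        return default_gain_db
    return max(near, key=lambda c: c["osnr_db"])["gain_db"]

case_base = [
    {"p_in_dbm": -20.0, "gain_db": 22.0, "osnr_db": 19.5},
    {"p_in_dbm": -19.5, "gain_db": 20.0, "osnr_db": 21.0},
    {"p_in_dbm": -10.0, "gain_db": 15.0, "osnr_db": 23.0},
]
print(suggest_gain(case_base, -20.0))  # reuses the 20.0 dB gain case
```

After each adjustment, the newly measured OSNR would be stored as an additional case, which is how such a cognition process can improve over time.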
In a complementary study, Paolucci et al. BIB047 have exploited SDN control to reduce the detrimental filtering effects. Paolucci et al. group wavelength channels that jointly traverse a sequence of filters at successive switching nodes. Instead of passing these wavelength channels through individual (per-wavelength channel) filters, the group of wavelength channels is jointly passed through a superfilter that encompasses all grouped wavelength channels. This joint filtering significantly improves the OSNR. While the studies BIB048 - BIB047 have focused on either the EDFA or the filters, Carvalho et al. BIB049 and Wang et al. have jointly considered the EDFA and filter control. More specifically, the EDFA gain and the filter attenuation (and signal equalization) profile were adapted to improve the OSNR. Carvalho et al. BIB049 propose and evaluate a specific joint EDFA and filter optimization approach that exploits the global perspective of the SDN controller. The global optimization achieves OSNR improvements close to 5 dB for a testbed consisting of four ROADMs with 100 km fiber links. Wang et al. explore different combinations of EDFA gain control strategies and filter equalization strategies for a simulated network with 14 nodes and 100 km fiber links. They find mutual interactions between the EDFA gain control and the filter equalization control as well as an additional wavelength assignment module. They conclude that global SDN control is highly useful for synchronizing the EDFA gain and filter equalization in conjunction with wavelength assignments so as to achieve improved OSNR.
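As a closing illustration, the benefit of jointly tuning amplifier gain and filter equalization from a controller with a global view can be sketched with a deliberately simplified OSNR model; both the model and the exhaustive search below are hypothetical stand-ins for the actual optimization approaches of Carvalho et al. and Wang et al.:

```python
# Toy joint EDFA-gain / filter-equalization search over a simplified
# OSNR model. Both the model and the search are illustrative assumptions.
import itertools

def worst_osnr(gain_db, eq_db, channel_tilts_db):
    """Crude model: gain lifts all channels but adds ASE noise, while the
    equalization setting offsets the per-channel power tilt."""
    osnrs = [gain_db - abs(tilt - eq_db) - 0.3 * gain_db
             for tilt in channel_tilts_db]
    return min(osnrs)  # the weakest channel limits the network

def joint_optimize(channel_tilts_db, gain_grid, eq_grid):
    """Exhaustive joint search exploiting the controller's global view."""
    return max(itertools.product(gain_grid, eq_grid),
               key=lambda ge: worst_osnr(ge[0], ge[1], channel_tilts_db))

tilts = [0.0, 1.0, 2.0]  # per-channel power tilt in dB
print(joint_optimize(tilts, [15.0, 18.0, 21.0], [0.0, 1.0, 2.0]))
```

Even in this toy model, the jointly chosen equalization setting centers on the tilt range rather than being tuned per element in isolation, mirroring the mutual interactions between gain and equalization control reported by Wang et al.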