Scene Flow Estimation: A Survey <s> Stereo matching <s> Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Stereo matching <s> This paper describes the semiglobal matching (SGM) stereo method. It uses a pixelwise, mutual information (Ml)-based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement, and multibaseline matching. Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments, and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and is best, if subpixel accuracy is considered. The complexity is linear to the number of pixels and disparity range, which results in a runtime of just 1-2 seconds on typical test images. An in depth evaluation of the Ml-based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas are working well on practical problems. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Stereo matching <s> A common problem of optical flow estimation in the multiscale variational framework is that fine motion structures cannot always be correctly estimated, especially for regions with significant and abrupt displacement variation. A novel extended coarse-to-fine (EC2F) refinement framework is introduced in this paper to address this issue, which reduces the reliance of flow estimates on their initial values propagated from the coarse level and enables recovering many motion details in each scale. The contribution of this paper also includes adaptation of the objective function to handle outliers and development of a new optimization procedure. The effectiveness of our algorithm is demonstrated by Middlebury optical flow benchmarkmarking and by experiments on challenging examples that involve large-displacement motion. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Stereo matching <s> Today, visual recognition systems are still rarely employed in robotics applications. 
Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Stereo matching <s> We present a method for extracting depth information from a rectified image pair. We train a convolutional neural network to predict how well two image patches match and use it to compute the stereo matching cost. The cost is refined by cross-based cost aggregation and semiglobal matching, followed by a left-right consistency check to eliminate errors in the occluded regions. Our stereo method achieves an error rate of 2.61% on the KITTI stereo dataset and is currently (August 2014) the top performing method on this dataset. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Stereo matching <s> In this paper we propose a slanted plane model for jointly recovering an image segmentation, a dense depth estimate as well as boundary labels (such as occlusion boundaries) from a static scene given two frames of a stereo pair captured from a moving vehicle. Towards this goal we propose a new optimization algorithm for our SLIC-like objective which preserves connecteness of image segments and exploits shape regularization in the form of boundary length. We demonstrate the performance of our approach in the challenging stereo and flow KITTI benchmarks and show superior results to the state-of-the-art. Importantly, these results can be achieved an order of magnitude faster than competing approaches. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Stereo matching <s> We present a novel global stereo model designed for view interpolation. Unlike existing stereo models which only output a disparity map, our model is able to output a 3D triangular mesh, which can be directly used for view interpolation. To this aim, we partition the input stereo images into 2D triangles with shared vertices. Lifting the 2D triangulation to 3D naturally generates a corresponding mesh. A technical difficulty is to properly split vertices to multiple copies when they appear at depth discontinuous boundaries. To deal with this problem, we formulate our objective as a two-layer MRF, with the upper layer modeling the splitting properties of the vertices and the lower layer optimizing a region-based stereo matching. 
Experiments on the Middlebury and the Herodion datasets demonstrate that our model is able to synthesize visually coherent new view angles with high PSNR, as well as outputting high quality disparity maps which rank at the first place on the new challenging high resolution Middlebury 3.0 benchmark. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Stereo matching <s> In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches. <s> BIB008
|
Stereo matching is essential to scene flow estimation under the binocular setting. A stereo algorithm generally consists of four parts: (1) matching cost computation, (2) cost aggregation, (3) disparity estimation and optimization, and (4) refinement. Methods are categorized as local or global depending on how the cost computation and aggregation are performed: local methods suffer in textureless regions, while global methods are computationally expensive. The semi-global matching (SGM) method combines local smoothness with global pixel-wise estimation and produces a dense matching result at low runtime BIB002, and it is commonly adopted, with modifications, by later methods. A comprehensive review was presented by Scharstein in 2001 BIB001. The top ranks of the Middlebury stereo dataset and the KITTI stereo dataset BIB001 BIB004 are mainly occupied by unpublished papers, indicating the rapid development of this field. Learning-based methods have been applied with promising efficiency and accuracy BIB005 BIB008. In addition, Zhang proposed a mesh-based approach motivated by fast rendering, which ranks first among published methods BIB007, while segmentation-based methods have proven effective against the textureless problem BIB003 BIB006.
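To make the pipeline concrete, the sketch below implements steps (1)-(3) in their simplest local form: SAD matching cost, box-filter cost aggregation, and winner-take-all disparity selection. It is a minimal illustration rather than any specific published algorithm (SGM, for example, replaces the box filter with path-wise optimization of a smoothness-regularized cost); the function name and default parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def block_matching_disparity(left, right, max_disp=64, win=9):
    """Minimal local stereo pipeline on rectified grayscale float images:
    (1) SAD matching cost, (2) box-filter cost aggregation,
    (3) winner-take-all estimation. Refinement (step 4, e.g. a left-right
    consistency check) is omitted for brevity."""
    h, w = left.shape
    big = 1e6  # penalty for pixels with no valid correspondence at disparity d
    cost = np.full((max_disp, h, w), big, dtype=np.float32)

    for d in range(max_disp):
        # (1) matching cost: absolute difference between the left pixel and
        #     the right pixel shifted by disparity d
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # (2) cost aggregation over a square window (the "local" part)
        cost[d, :, d:] = uniform_filter(diff, size=win)

    # (3) winner-take-all: pick the disparity with minimal aggregated cost
    return np.argmin(cost, axis=0)
```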
|
Scene Flow Estimation: A Survey <s> Large displacement <s> We study an energy functional for computing optical flow that combines three assumptions: a brightness constancy assumption, a gradient constancy assumption, and a discontinuity-preserving spatio-temporal smoothness constraint. In order to allow for large displacements, linearisations in the two data terms are strictly avoided. We present a consistent numerical scheme based on two nested fixed point iterations. By proving that this scheme implements a coarse-to-fine warping strategy, we give a theoretical foundation for warping which has been used on a mainly experimental basis so far. Our evaluation demonstrates that the novel method gives significantly smaller angular errors than previous techniques for optical flow estimation. We show that it is fairly insensitive to parameter variations, and we demonstrate its excellent robustness under noise. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Large displacement <s> The emergence of modern, affordable and accurate RGB-D sensors increases the need for single view approaches to estimate 3-dimensional motion, also known as scene flow. In this paper we propose a coarse-to-fine, dense, correspondence-based scene flow formulation that relies on explicit geometric reasoning to account for the effects of large displacements and to model occlusion. Our methodology enforces local motion rigidity at the level of the 3d point cloud without explicitly smoothing the parameters of adjacent neighborhoods. By integrating all geometric and photometric components in a single, consistent, occlusion-aware energy model, defined over overlapping, image-adaptive neighborhoods, our method can process fast motions and large occlusions areas, as present in challenging datasets like the MPI Sintel Flow Dataset, recently augmented with depth information. By explicitly modeling large displacements and occlusion, we can handle difficult sequences which cannot be currently processed by state of the art scene flow methods. We also show that by integrating depth information into the model, we can obtain correspondence fields with improved spatial support and sharper boundaries compared to the state of the art, large-displacement optical flow methods. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Large displacement <s> We present an approach for computing dense scene flow from two large displacement RGB-D images. When dealing with large displacements the crucial step is to estimate the overall motion correctly. While state-of-the-art approaches focus on RGB information to establish guiding correspondences, we explore the power of depth edges. To achieve this, we present a new graph matching technique that brings sparse depth edges into correspondence. An additional contribution is the formulation of a continuous-label energy which is used to densify the sparse graph matching output. We present results on challenging Kinect images, for which we outperform state-of-the-art techniques. <s> BIB003
|
Large displacement occurs frequently when an object moves at high speed or the frame rate is low; articulated motion may cause large displacement as well. The problem is hard to tackle because scene flow algorithms normally assume constancy and smoothness within a small region: under large displacement, the solution of the energy function can be trapped in a local minimum, and the resulting errors are propagated through the iterative procedure. Brox et al. employed a coarse-to-fine scheme together with a gradient constancy assumption to alleviate the impact of large displacement in optical flow estimation BIB001. More recently, several matching-based algorithms have been introduced to handle this issue specifically and have achieved promising results BIB002 BIB003.
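The sketch below outlines the coarse-to-fine warping strategy in generic form: flow estimated at a coarse pyramid level is upsampled, used to warp the second image, and refined by a small increment at the next finer level. It is a skeleton under the assumption that a single-scale solver is supplied; `solve_increment` is a hypothetical placeholder, not code from the cited works.

```python
import numpy as np
import cv2

def coarse_to_fine_flow(img1, img2, solve_increment, levels=5):
    """Pyramid (coarse-to-fine) flow estimation skeleton.

    solve_increment(i1, i2_warped) stands in for any single-scale solver
    returning a small flow increment (du, dv), e.g. one linearized
    variational update in a warping-based method."""
    # build Gaussian pyramids, coarsest level last
    pyr1, pyr2 = [img1], [img2]
    for _ in range(levels - 1):
        pyr1.append(cv2.pyrDown(pyr1[-1]))
        pyr2.append(cv2.pyrDown(pyr2[-1]))

    h, w = pyr1[-1].shape[:2]
    flow = np.zeros((h, w, 2), np.float32)

    for i1, i2 in zip(reversed(pyr1), reversed(pyr2)):
        if flow.shape[:2] != i1.shape[:2]:
            # upsample the current estimate to this level and rescale it
            scale_x = i1.shape[1] / flow.shape[1]
            scale_y = i1.shape[0] / flow.shape[0]
            flow = cv2.resize(flow, (i1.shape[1], i1.shape[0]))
            flow[..., 0] *= scale_x
            flow[..., 1] *= scale_y
        # warp the second image towards the first with the current estimate
        ys, xs = np.mgrid[0:i1.shape[0], 0:i1.shape[1]].astype(np.float32)
        warped = cv2.remap(i2, xs + flow[..., 0], ys + flow[..., 1],
                           cv2.INTER_LINEAR)
        # solve for a small increment at this scale and accumulate it
        du, dv = solve_increment(i1, warped)
        flow[..., 0] += du
        flow[..., 1] += dv
    return flow
```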
|
Scene Flow Estimation: A Survey <s> Varying illumination <s> The quantitative evaluation of optical flow algorithms by Barron et al. led to significant advances in the performance of optical flow methods. The challenges for optical flow today go beyond the datasets and evaluation methods proposed in that paper and center on problems associated with nonrigid motion, real sensor noise, complex natural scenes, and motion discontinuities. Our goal is to establish a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture; realistic synthetic sequences; high frame-rate video used to study interpolation error; and modified stereo sequences of static scenes. In addition to the average angular error used in Barron et al., we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and flow accuracy at motion boundaries and in textureless regions. We evaluate the performance of several well-known methods on this data to establish the current state of the art. Our database is freely available on the Web together with scripts for scoring and publication of the results at http://vision.middlebury.edu/flow/. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Varying illumination <s> We extend estimation of range flow to handle brightness changes in image data caused by inhomogeneous illumination. Standard range flow computes 3D velocity fields using both range and intensity image sequences. Toward this end, range flow estimation combines a depth change model with a brightness constancy model. However, local brightness is generally not preserved when object surfaces rotate relative to the camera or the light sources, or when surfaces move in inhomogeneous illumination. We describe and investigate different approaches to handle such brightness changes. A straightforward approach is to prefilter the intensity data such that brightness changes are suppressed, for instance, by a highpass or a homomorphic filter. Such prefiltering may, though, reduce the signal-to-noise ratio. An alternative novel approach is to replace the brightness constancy model by 1) a gradient constancy model, or 2) by a combination of gradient and brightness constancy constraints used earlier successfully for optical flow, or 3) by a physics-based brightness change model. In performance tests, the standard version and the novel versions of range flow estimation are investigated using prefiltered or nonprefiltered synthetic data with available ground truth. Furthermore, the influences of additive Gaussian noise and simulated shot noise are investigated. Finally, we compare all range flow estimators on real data. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Varying illumination <s> Photometric stereo (PS) is an established technique for high-detail reconstruction of 3D geometry and appearance. To correct for surface integration errors, PS is often combined with multiview stereo (MVS). With dynamic objects, PS reconstruction also faces the problem of computing optical flow (OF) for image alignment under rapid changes in illumination. Current PS methods typically compute optical flow and MVS as independent stages, each one with its own limitations and errors introduced by early regularization. 
In contrast, scene flow methods estimate geometry and motion, but lack the fine detail from PS. This paper proposes photogeometric scene flow (PGSF) for high-quality dynamic 3D reconstruction. PGSF performs PS, OF, and MVS simultaneously. It is based on two key observations: (i) while image alignment improves PS, PS allows for surfaces to be relit to improve alignment, (ii) PS provides surface gradients that render the smoothness term in MVS unnecessary, leading to truly data-driven, continuous depth estimates. This synergy is demonstrated in the quality of the resulting RGB appearance, 3D geometry, and 3D motion. <s> BIB003
|
The brightness constancy assumption does not hold under varying illumination. Yet this situation is common in outdoor scenes, e.g., drifting clouds that block the sunlight, sudden reflections from a window, or lens flares, and it becomes far worse at night when lights flash. In optical flow estimation, additional assumptions such as gradient constancy, together with more sophisticated constraints, have been introduced to improve robustness to illumination changes BIB001. Schuchert et al. specifically studied range flow estimation under varying illumination BIB002, showing that pre-filtering and brightness change models improve accuracy. Gotardo et al. introduced an albedo consistency assumption as a remedy BIB003, and also proposed a relighting procedure as a key element for handling the multiplexed illumination setting.
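As a point of reference, a data term combining brightness and gradient constancy, as widely used in variational flow estimation, can be written as follows (a generic formulation, not the exact energy of any paper cited here):

```latex
E_{\mathrm{data}}(\mathbf{w}) = \int_{\Omega}
    \Psi\big( |I_2(\mathbf{x}+\mathbf{w}) - I_1(\mathbf{x})|^2 \big)
  + \gamma\, \Psi\big( |\nabla I_2(\mathbf{x}+\mathbf{w}) - \nabla I_1(\mathbf{x})|^2 \big)\, d\mathbf{x}
```

Here Ψ is a robust penalty function, w the motion field, and γ a weight on the gradient term; because image gradients are invariant to additive brightness changes, the second term still provides a usable constraint when the brightness constancy term is violated.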
|
Scene Flow Estimation: A Survey <s> Insufficient texture <s> Two novel systems computing dense three-dimensional (3-D) scene flow and structure from multiview image sequences are described in this paper. We do not assume rigidity of the scene motion, thus allowing for nonrigid motion in the scene. The first system, integrated model-based system (IMS), assumes that each small local image region is undergoing 3-D affine motion. Non-linear motion model fitting based on both optical flow constraints and stereo constraints is then carried out on each local region in order to simultaneously estimate 3-D motion correspondences and structure. The second system is based on extended gradient-based system (EGS), a natural extension of two-dimensional (2-D) optical flow computation. In this method, a new hierarchical rule-based stereo matching algorithm is first developed to estimate the initial disparity map. Different available constraints under a multiview camera setup are further investigated and utilized in the proposed motion estimation. We use image segmentation information to adopt and maintain the motion and depth discontinuities. Within the framework for EGS, we present two different formulations for 3-D scene flow and structure computation. One formulation assumes that initial disparity map is accurate, while the other does not. Experimental results on both synthetic and real imagery demonstrate the effectiveness of our 3-D motion and structure recovery schemes. Empirical comparison between IMS and EGS is also reported. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Insufficient texture <s> [email protected] Germany Vasanth Philomin [email protected] Germany Abstract This paper presents a method for estimating disparity images from a stereo image sequence. While many existing stereo algorithms work well on a single pair of stereo images, it is not sufficient to simply apply them to temporal frames independently without considering the temporal consistency between adjacent frames. Our method integrates the state-of-the-art stereo algorithm with the scene flow concept in order to capture the temporal correspondences. It computes the dense disparity images and scene flow in a practical and unified process: the disparity is initialized by a hybrid stereo approach which employs the over-segmentation based stereo and pixelwise iterative stereo; then the scene flow, estimated via a variational approach, is used to predict the disparity image and to compute its confidence map for the next frame. The prediction is modeled as a prior probability distribution and is built into an energy function defined for stereo matching on the next frame. The disparity can be estimated by minimizing this energy function. Experimental results show that the algorithm is able to estimate the disparity images in an accurate and temporally consistent fashion. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Insufficient texture <s> This article presents a novel method for estimating the dense three-dimensional motion of a scene from multiple cameras. Our method employs an interconnected patch model of the scene surfaces. The interconnected nature of the model means that we can incorporate prior knowledge about neighbouring scene motions through the use of a Markov Random Field, whilst the patch-based nature of the model allows the use of efficient techniques for estimating the local motion at each patch. 
An important aspect of our work is that the method takes account of the fact that local surface texture strongly dictates the accuracy of the motion that can be estimated at each patch. Even with simple squared-error cost functions, it produces results that are either equivalent to or better than results from a method based upon a state-of-the-art optical flow technique, which uses well-developed robust cost functions and energy minimisation techniques. <s> BIB003
|
The lack of texture keeps scene flow estimation an ill-posed problem and makes consistency hard to establish. It also challenges stereo matching and may therefore introduce large errors in binocular scene flow estimation; textureless regions remain a major source of estimation error. To overcome this problem, different scene representations have been utilized. For example, Popham et al. introduced a patch-based method BIB003 in which the motion of each patch relies not only on its own texture but also on the motion of neighboring patches, making the estimate more robust in textureless regions. Segmentation-based methods are likewise effective, since they assume uniform motion within small regions to resolve the ambiguity, which also helps with occlusion BIB001 BIB002.
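The underlying idea can be sketched as follows: each patch keeps its locally estimated motion in proportion to a texture-confidence score and otherwise adopts the average motion of its neighbors, so textureless patches are filled in from their surroundings. This is only an illustrative simplification of neighbor-regularized patch motion, not the formulation of BIB003; all names are hypothetical.

```python
import numpy as np

def regularize_patch_motions(motions, neighbors, confidence, iters=20):
    """motions:    (N, 3) initial 3D motion per patch, from local matching
    neighbors:  list of index lists; neighbors[i] = patches adjacent to patch i
    confidence: (N,) values in [0, 1], e.g. derived from local gradient energy

    Iteratively pulls low-confidence (textureless) patches toward the average
    motion of their neighbors, while high-confidence patches keep their own
    data-driven estimate."""
    v = motions.copy()
    for _ in range(iters):
        v_new = v.copy()
        for i, nbrs in enumerate(neighbors):
            if not nbrs:
                continue
            nbr_mean = v[nbrs].mean(axis=0)
            w = confidence[i]  # trust local data where texture is strong
            v_new[i] = w * motions[i] + (1.0 - w) * nbr_mean
        v = v_new
    return v
```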
|
Scene Flow Estimation: A Survey <s> Applications <s> Obstacle avoidance is one of the most important challenges for mobile robots as well as future vision based driver assistance systems. This task requires a precise extraction of depth and the robust and fast detection of moving objects. In order to reach these goals, this paper considers vision as a process in space and time. It presents a powerful fusion of depth and motion information for image sequences taken from a moving observer. 3D-position and 3D-motion for a large number of image points are estimated simultaneously by means of Kalman-Filters. There is no need of prior error-prone segmentation. Thus, one gets a rich 6D representation that allows the detection of moving obstacles even in the presence of partial occlusion of foreground or background. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Applications <s> Detecting humans in films and videos is a challenging problem owing to the motion of the subjects, the camera and the background and to variations in pose, appearance, clothing, illumination and background clutter. We develop a detector for standing and moving people in videos with possibly moving cameras and backgrounds, testing several different motion coding schemes and showing empirically that orientated histograms of differential optical flow give the best overall performance. These motion-based descriptors are combined with our Histogram of Oriented Gradient appearance descriptors. The resulting detector is tested on several databases including a challenging test set taken from feature films and containing wide ranges of pose, motion and background variations, including moving cameras and backgrounds. We validate our results on two challenging test sets containing more than 4400 human examples. The combined detector reduces the false alarm rate by a factor of 10 relative to the best appearance-based detector, for example giving false alarm rates of 1 per 20,000 windows tested at 8% miss rate on our Test Set 1. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Applications <s> The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8% accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results. 
<s> BIB003 </s> Scene Flow Estimation: A Survey <s> Applications <s> [email protected] Germany Vasanth Philomin [email protected] Germany Abstract This paper presents a method for estimating disparity images from a stereo image sequence. While many existing stereo algorithms work well on a single pair of stereo images, it is not sufficient to simply apply them to temporal frames independently without considering the temporal consistency between adjacent frames. Our method integrates the state-of-the-art stereo algorithm with the scene flow concept in order to capture the temporal correspondences. It computes the dense disparity images and scene flow in a practical and unified process: the disparity is initialized by a hybrid stereo approach which employs the over-segmentation based stereo and pixelwise iterative stereo; then the scene flow, estimated via a variational approach, is used to predict the disparity image and to compute its confidence map for the next frame. The prediction is modeled as a prior probability distribution and is built into an energy function defined for stereo matching on the next frame. The disparity can be estimated by minimizing this energy function. Experimental results show that the algorithm is able to estimate the disparity images in an accurate and temporally consistent fashion. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Applications <s> This paper proposes a novel approach to motion capture from multiple, synchronized video streams, specifically aimed at recording dense and accurate models of the structure and motion of highly deformable surfaces such as skin, that stretches, shrinks, and shears in the midst of normal facial expressions. Solving this problem is a key step toward effective performance capture for the entertainment industry, but progress so far has been hampered by the lack of appropriate local motion and smoothness models. The main technical contribution of this paper is a novel approach to regularization adapted to nonrigid tangential deformations. Concretely, we estimate the nonrigid deformation parameters at each vertex of a surface mesh, smooth them over a local neighborhood for robustness, and use them to regularize the tangential motion estimation. To demonstrate the power of the proposed approach, we have integrated it into our previous work for markerless motion capture [9], and compared the performances of the original and new algorithms on three extremely challenging face datasets that include highly nonrigid skin deformations, wrinkles, and quickly changing expressions. Additional experiments with a dataset featuring fast-moving cloth with complex and evolving fold structures demonstrate that the adaptability of the proposed regularization scheme to nonrigid tangential motion does not hamper its robustness, since it successfully recovers the shape and motion of the cloth without overfitting it despite the absence of stretch or shear in this case. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Applications <s> In this paper, we introduce the concept of dense scene flow for visual SLAM applications. Traditional visual SLAM methods assume static features in the environment and that a dominant part of the scene changes only due to camera egomotion. These assumptions make traditional visual SLAM methods prone to failure in crowded real-world dynamic environments with many independently moving objects, such as the typical environments for the visually impaired. 
By means of a dense scene flow representation, moving objects can be detected. In this way, the visual SLAM process can be improved considerably, by not adding erroneous measurements into the estimation, yielding more consistent and improved localization and mapping results. We show large-scale visual SLAM results in challenging indoor and outdoor crowded environments with real visually impaired users. In particular, we performed experiments inside the Atocha railway station and in the city-center of Alcala de Henares, both in Madrid, Spain. Our results show that the combination of visual SLAM and dense scene flow allows to obtain an accurate localization, improving considerably the results of traditional visual SLAM methods and GPS-based approaches. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Applications <s> 3-D motion estimation is a fundamental problem that has far-reaching implications in robotics. A scene flow formulation is attractive as it makes no assumptions about scene complexity, object rigidity, or camera motion. RGB-D cameras provide new information useful for computing dense 3-D flow in challenging scenes. In this work we show how to generalize two-frame variational 2-D flow algorithms to 3-D. We show that scene flow can be reliably computed using RGB-D data, overcoming depth noise and outperforming previous results on a variety of scenes. We apply dense 3-D flow to rigid motion segmentation. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Applications <s> This paper investigates motion estimation and segmentation of independently moving objects in video sequences that contain depth and intensity information, such as videos captured by a Time of Flight camera. Specifically, we present a motion estimation algorithm which is based on integration of depth and intensity data. The resulting motion information is used to derive long-term point trajectories. A segmentation technique groups the trajectories according to their motion and depth similarity into spatio-temporal segments. Quantitative and qualitative analysis of synthetic and real world videos verify the proposed motion estimation and segmentation approach. The proposed framework extracts independently moving objects from videos recorded by a Time of Flight camera. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> Applications <s> Photometric stereo (PS) is an established technique for high-detail reconstruction of 3D geometry and appearance. To correct for surface integration errors, PS is often combined with multiview stereo (MVS). With dynamic objects, PS reconstruction also faces the problem of computing optical flow (OF) for image alignment under rapid changes in illumination. Current PS methods typically compute optical flow and MVS as independent stages, each one with its own limitations and errors introduced by early regularization. In contrast, scene flow methods estimate geometry and motion, but lack the fine detail from PS. This paper proposes photogeometric scene flow (PGSF) for high-quality dynamic 3D reconstruction. PGSF performs PS, OF, and MVS simultaneously. It is based on two key observations: (i) while image alignment improves PS, PS allows for surfaces to be relit to improve alignment, (ii) PS provides surface gradients that render the smoothness term in MVS unnecessary, leading to truly data-driven, continuous depth estimates. This synergy is demonstrated in the quality of the resulting RGB appearance, 3D geometry, and 3D motion. 
<s> BIB009 </s> Scene Flow Estimation: A Survey <s> Applications <s> This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods. <s> BIB010
|
Scene flow estimation is a comprehensive problem. Motion information reveals the temporal coherence between two moments; in a long sequence, scene flow can provide the initialization for the next frame and serve as a constraint in related problems. Scene flow not only profits from these related problems but can also facilitate them in return. Gotardo et al. captured three-dimensional scene flow to recover fine geometric detail BIB009, while Liu et al. utilized scene flow as a soft constraint for stereo matching and as a prediction for the next frame's disparity estimation BIB004. Ghuffar et al. combined local estimation and global regularization in a TLS framework and utilized scene flow for segmentation and trajectory generation BIB008. Beyond that, scene flow is a valuable input for mobile robotics and autonomous driving, which involve multiple tasks such as obstacle avoidance and scene understanding. Franke et al. first fused optical flow and stereo by means of Kalman filters for obstacle avoidance BIB001. Alcantarilla et al. combined scene flow estimation with visual SLAM to enhance robustness and accuracy BIB006. Herbst et al. obtained object segmentation from RGB-D scene flow estimates BIB007, aiming at autonomous exploration of indoor scenes. Menze et al. utilized scene flow to reason about objects by regarding the scene as a set of rigidly moving objects BIB010. Autonomous driving can exploit both the geometric information that encodes distance and the scene flow information that encodes motion for multiple tasks. In addition, scene flow can serve as a feature, analogous to the histogram of optical flow (HOF) BIB003 and motion boundary histogram (MBH) BIB002 descriptors, for object detection and recognition, e.g., facial expression, gesture, and body motion recognition. The additional depth dimension enriches the descriptor and makes it applicable to motions, such as rotation or dolly moves, that optical flow cannot capture. For instance, in 2009, Furukawa et al. captured the motion of facial expressions using scene flow estimation BIB005.
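As an illustration of the descriptor idea, the sketch below bins 3D scene flow vectors by azimuth and elevation, weighted by magnitude, yielding a HOF-like histogram with an extra axis contributed by the depth component. This is a hypothetical construction for illustration only, not a descriptor defined in the cited works.

```python
import numpy as np

def scene_flow_histogram(flow, n_azimuth=8, n_elevation=4, eps=1e-6):
    """flow: (N, 3) array of 3D scene flow vectors (U, V, W) from one region.
    Returns a magnitude-weighted, normalized histogram over flow directions."""
    u, v, w = flow[:, 0], flow[:, 1], flow[:, 2]
    mag = np.linalg.norm(flow, axis=1)
    azimuth = np.arctan2(v, u)              # direction in the image plane
    elevation = np.arcsin(w / (mag + eps))  # contribution of the depth motion
    a_bin = ((azimuth + np.pi) / (2 * np.pi) * n_azimuth).astype(int) % n_azimuth
    e_bin = np.clip(((elevation + np.pi / 2) / np.pi * n_elevation).astype(int),
                    0, n_elevation - 1)
    hist = np.zeros((n_azimuth, n_elevation))
    np.add.at(hist, (a_bin, e_bin), mag)    # weight each bin by flow magnitude
    return (hist / (hist.sum() + eps)).ravel()
```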
|
Scene Flow Estimation: A Survey <s> Point cloud <s> Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is taster because it examines far fewer potential matches between the images than existing techniques Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted tor use in a stereo vision system. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> Scene flow is the 3D motion field of points in the world. Given N (N>1) image sequences gathered with a N-eye stereo camera or N calibrated cameras, we present a novel system which integrates 3D scene flow and structure recovery in order to complement each other's performance. We do not assume rigidity of the scene motion, thus allowing for non-rigid motion in the scene. In our work, images are segmented into small regions. We assume that each small region is undergoing similar motion, represented by a 3D affine model. Nonlinear motion model fitting based on both optical flow constraints and stereo constraints is then carried over each image region in order to simultaneously estimate 3D motion correspondences and structure. To ensure the robustness, several regularization constraints are also introduced. A recursive algorithm is designed to incorporate the local and regularization constraints. Experimental results on both synthetic and real data demonstrate the effectiveness of our integrated 3D motion and structure analysis scheme. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> In this paper, novel algorithms computing dense 3D scene flow from multiview image sequences are described. A new hierarchical rule-based stereo matching algorithm is presented to estimate the initial disparity map. Different available constraints under a multiview camera setup are investigated and then utilized in the proposed motion estimation algorithms. We show two different formulations for 3D scene flow computation. One formulation assumes that initial disparity map is accurate while the other does not make this assumption. Image segmentation information is used to maintain the motion and depth discontinuities. Iterative implementations are used to successfully compute 3D scene flow and structure at every point in the reference image. Novel hard constraints are introduced in this paper to make the algorithms more accurate and robust. Promising experimental results are seen by applying our algorithms to real imagery. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> Two novel systems computing dense three-dimensional (3-D) scene flow and structure from multiview image sequences are described in this paper. We do not assume rigidity of the scene motion, thus allowing for nonrigid motion in the scene. The first system, integrated model-based system (IMS), assumes that each small local image region is undergoing 3-D affine motion. Non-linear motion model fitting based on both optical flow constraints and stereo constraints is then carried out on each local region in order to simultaneously estimate 3-D motion correspondences and structure. 
The second system is based on extended gradient-based system (EGS), a natural extension of two-dimensional (2-D) optical flow computation. In this method, a new hierarchical rule-based stereo matching algorithm is first developed to estimate the initial disparity map. Different available constraints under a multiview camera setup are further investigated and utilized in the proposed motion estimation. We use image segmentation information to adopt and maintain the motion and depth discontinuities. Within the framework for EGS, we present two different formulations for 3-D scene flow and structure computation. One formulation assumes that initial disparity map is accurate, while the other does not. Experimental results on both synthetic and real imagery demonstrate the effectiveness of our 3-D motion and structure recovery schemes. Empirical comparison between IMS and EGS is also reported. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> Just as optical flow is the two-dimensional motion of points in an image, scene flow is the three-dimensional motion of points in the world. The fundamental difficulty with optical flow is that only the normal flow can be computed directly from the image measurements, without some form of smoothing or regularization. In this paper, we begin by showing that the same fundamental limitation applies to scene flow; however, many cameras are used to image the scene. There are then two choices when computing scene flow: 1) perform the regularization in the images or 2) perform the regularization on the surface of the object in the scene. In this paper, we choose to compute scene flow using regularization in the images. We describe three algorithms, the first two for computing scene flow from optical flows and the third for constraining scene structure from the inconsistencies in multiple optical flows. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods allow. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization-two problems commonly associated with the basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> Scene flow is the motion of the surface points in the 3D world. For a camera, it is seen as a 2D optical flow in the image plane. Knowing the scene flow can be very useful as it gives an idea of the surface geometry of the objects in the scene and how those objects are moving. 
Four methods for calculating the scene flow given multiple optical flows have been explored and detailed in this paper along with the basic mathematics surrounding multi-view geometry. It was found that given multiple optical flows it is possible to estimate the scene flow to different levels of detail depending on the level of prior information present. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> The motion field of a scene can be used for object segmentation and to provide features for classification tasks like action recognition. Scene flow is the full 3D motion field of the scene, and is more difficult to estimate than it's 2D counterpart, optical flow. Current approaches use a smoothness cost for regularisation, which tends to over-smooth at object boundaries. This paper presents a novel formulation for scene flow estimation, a collection of moving points in 3D space, modelled using a particle filter that supports multiple hypotheses and does not oversmooth the motion field. In addition, this paper is the first to address scene flow estimation, while making use of modern depth sensors and monocular appearance images, rather than traditional multi-viewpoint rigs. The algorithm is applied to an existing scene flow dataset, where it achieves comparable results to approaches utilising multiple views, while taking a fraction of the time. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> We present a novel method for recovering the 3D structure and scene flow from calibrated multi-view sequences. We propose a 3D point cloud parametrization of the 3D structure and scene flow that allows us to directly estimate the desired unknowns. A unified global energy functional is proposed to incorporate the information from the available sequences and simultaneously recover both depth and scene flow. The functional enforces multi-view geometric consistency and imposes brightness constancy and piecewise smoothness assumptions directly on the 3D unknowns. It inherently handles the challenges of discontinuities, occlusions, and large displacements. The main contribution of this work is the fusion of a 3D representation and an advanced variational framework that directly uses the available multi-view information. This formulation allows us to advantageously bind the 3D unknowns in time and space. Different from optical flow and disparity, the proposed method results in a nonlinear mapping between the images' coordinates, thus giving rise to additional challenges in the optimization process. Our experiments on real and synthetic data demonstrate that the proposed method successfully recovers the 3D structure and scene flow despite the complicated nonconvex optimization problem. <s> BIB009 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> The scene flow describes the 3D motion of every point in a scene between two time steps. We present a novel method to estimate a dense scene flow using intensity and depth data. It is well known that local methods are more robust under noise while global techniques yield dense motion estimation. We combine local and global constraints to solve for the scene flow in a variational framework. An adaptive TV (Total Variation) regularization is used to preserve motion discontinuities. Besides, we constrain the motion using a set of 3D correspondences to deal with large displacements. In the experimentation our approach outperforms previous scene flow from intensity and depth methods in terms of accuracy. 
<s> BIB010 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> In this paper we present a novel method to accurately estimate the dense 3D motion field, known as scene flow, from depth and intensity acquisitions. The method is formulated as a convex energy optimization, where the motion warping of each scene point is estimated through a projection and back-projection directly in 3D space. We utilize higher order regularization which is weighted and directed according to the input data by an anisotropic diffusion tensor. Our formulation enables the calculation of a dense flow field which does not penalize smooth and non-rigid movements while aligning motion boundaries with strong depth boundaries. An efficient parallelization of the numerical algorithm leads to runtimes in the order of 1s and therefore enables the method to be used in a variety of applications. We show that this novel scene flow calculation outperforms existing approaches in terms of speed and accuracy. Furthermore, we demonstrate applications such as camera pose estimation and depth image super resolution, which are enabled by the high accuracy of the proposed method. We show these applications using modern depth sensors such as Microsoft Kinect or the PMD Nano Time-of-Flight sensor. <s> BIB011 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> We present a novel method for dense variational scene flow estimation based a multiscale Ternary Census Transform in combination with a patchwise Closest Points depth data term. On the one hand, the Ternary Census Transform in the intensity data term is capable of handling illumination changes, low texture and noise. On the other hand, the patchwise Closest Points search in the depth data term increases the robustness in low structured regions. Further, we utilize higher order regularization which is weighted and directed according to the input data by an anisotropic diffusion tensor. This allows to calculate a dense and accurate flow field which supports smooth as well as non-rigid movements while preserving flow boundaries. The numerical algorithm is solved based on a primal-dual formulation and is efficiently parallelized to run at high frame rates. In an extensive qualitative and quantitative evaluation we show that this novel method for scene flow calculation outperforms existing approaches. The method is applicable to any sensor delivering dense depth and intensity data such as Microsoft Kinect or Intel Gesture Camera. <s> BIB012 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> This paper investigates motion estimation and segmentation of independently moving objects in video sequences that contain depth and intensity information, such as videos captured by a Time of Flight camera. Specifically, we present a motion estimation algorithm which is based on integration of depth and intensity data. The resulting motion information is used to derive long-term point trajectories. A segmentation technique groups the trajectories according to their motion and depth similarity into spatio-temporal segments. Quantitative and qualitative analysis of synthetic and real world videos verify the proposed motion estimation and segmentation approach. The proposed framework extracts independently moving objects from videos recorded by a Time of Flight camera. <s> BIB013 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> Scene flow is defined as the motion field in 3D space, and can be computed from a single view when using an RGBD sensor. 
We propose a new scene flow approach that exploits the local and piecewise rigidity of real world scenes. By modeling the motion as a field of twists, our method encourages piecewise smooth solutions of rigid body motions. We give a general formulation to solve for local and global rigid motions by jointly using intensity and depth data. In order to deal efficiently with a moving camera, we model the motion as a rigid component plus a non-rigid residual and propose an alternating solver. The evaluation demonstrates that the proposed method achieves the best results in the most commonly used scene flow benchmark. Through additional experiments we indicate the general applicability of our approach in a variety of different scenarios. <s> BIB014 </s> Scene Flow Estimation: A Survey <s> Point cloud <s> We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings. <s> BIB015
|
To truly represent the three-dimensional scene, image pixels need to be projected into scene space. The projection is illustrated in Figure 3(a): the inverse projection π^-1 maps an image pixel x = (x, y) together with a depth value z to a 3D point X = (X, Y, Z),

X = π^-1(x, z) = ( z (x − c_x) / f_x , z (y − c_y) / f_y , z ),

where f_x and f_y are the camera focal lengths and c_x and c_y are the coordinates of the principal point. Equivalently, the projection can be written in matrix form as

z (x, y, 1)^T = A X, with A = [ f_x 0 c_x ; 0 f_y c_y ; 0 0 1 ],

where the matrix A is known as the camera projection (intrinsic) matrix. Hence, scene flow under the point cloud representation BIB005 BIB002 BIB003 BIB004 BIB006 BIB007 BIB009 BIB008 BIB010 BIB011 BIB012 BIB013 BIB014 BIB015 can be written as V = (∆X, ∆Y, ∆Z) = (U, V, W), which truly reveals the three-dimensional displacement.
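A minimal sketch of this back-projection, and of reading off per-pixel scene flow from two registered depth maps, is given below. It assumes dense 2D correspondences (e.g., from optical flow) are already available, uses nearest-neighbour lookup for brevity, and all function names are illustrative.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Inverse projection pi^-1: pixel (x, y) plus depth z -> 3D point (X, Y, Z)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    X = depth * (xs - cx) / fx
    Y = depth * (ys - cy) / fy
    return np.dstack([X, Y, depth])            # (h, w, 3) point cloud

def scene_flow_from_depth(depth_t, depth_t1, flow2d, fx, fy, cx, cy):
    """Given depth maps at t and t+1 and a dense 2D correspondence field
    flow2d of shape (h, w, 2), return V = (U, V, W) per pixel."""
    P_t = backproject(depth_t, fx, fy, cx, cy)
    P_t1 = backproject(depth_t1, fx, fy, cx, cy)
    h, w = depth_t.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # where each pixel moves to at t+1 (rounded to the nearest pixel)
    xs1 = np.clip(np.round(xs + flow2d[..., 0]).astype(int), 0, w - 1)
    ys1 = np.clip(np.round(ys + flow2d[..., 1]).astype(int), 0, h - 1)
    return P_t1[ys1, xs1] - P_t                # (h, w, 3) scene flow field
```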
|
Scene Flow Estimation: A Survey <s> Mesh <s> We present a method to automatically extract spatiotemporal descriptions of moving objects from synchronized and calibrated multi-view sequences. The object is modeled by a time-varying multi-resolution subdivision surface that is fitted to the image data using spatio-temporal multiview stereo information, as well as contour constraints. The stereo data is utilized by computing the normalized correlation between corresponding spatio-temporal image trajectories of surface patches, while the contour information is determined using incremental segmentation of the viewing volume into object and background. We globally optimize the shape of the spatio-temporal surface in a coarse-to-fine manner using the multi-resolution structure of the subdivision mesh. The method presented incorporates the available image information in a unified framework and automatically reconstructs accurate spatio-temporal representations of complex non-rigidly moving objects. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Mesh <s> In this paper we study the problem of recovering the 3D shape, reflectance, and non-rigid motion properties of a dynamic 3D scene. Because these properties are completely unknown and because the scene's shape and motion may be non-smooth, our approach uses multiple views to build a piecewise-continuous geometric and radiometric representation of the scene's trace in space-time. A basic primitive of this representation is the dynamic surfel, which (1) encodes the instantaneous local shape, reflectance, and motion of a small and bounded region in the scene, and (2) enables accurate prediction of the region's dynamic appearance under known illumination conditions. We show that complete surfel-based reconstructions can be created by repeatedly applying an algorithm called Surfel Sampling that combines sampling and parameter estimation to fit a single surfel to a small, bounded region of space-time. Experimental results with the Phong reflectancemodel and complex real scenes (clothing, shiny objects, skin) illustrate our method's ability to explain pixels and pixel variations in terms of their underlying causes—shape, reflectance, motion, illumination, and visibility. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Mesh <s> Scene flow represents the 3-D motion of points in the scene, just as optical flow is related to their 2-D motion in the images. As opposed to classical methods which compute scene flow from optical flow, we propose to compute it by tracking 3-D points and surface elements (surfels) in a multi-camera setup (at least two cameras are needed). Two methods are proposed: in the first one, the translation of each 3-D point is found by matching the neighborhoods of its 2-D projections in each camera between two time steps; in the second one, the full pose of a surfel is recovered by matching the image of its projection with a texture template attached to the surfel, and visibility changes caused by occlusion or rotation of surfels are handled. Both methods detect lost or untrackable points and surfels. They were designed for real-time execution and can be used for fast extraction of scene flow from multi-camera sequences. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Mesh <s> We describe a method for computing a dense estimate of motion and disparity, given a stereo video sequence containing moving non-rigid objects. 
In contrast to previous approaches, motion and disparity are estimated simultaneously from a single coherent probabilistic model that correctly accounts for all occlusions, depth discontinuities, and motion discontinuities. The results demonstrate that simultaneous estimation of motion and disparity is superior to estimating either in isolation, and show the promise of the technique for accurate, probabilistically justified, scene analysis. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Mesh <s> This paper proposes a novel approach to non-rigid, markerless motion capture from synchronized video streams acquired by calibrated cameras. The instantaneous geometry of the observed scene is represented by a polyhedral mesh with fixed topology. The initial mesh is constructed in the first frame using the publicly available PMVS software for multi-view stereo [7]. Its deformation is captured by tracking its vertices over time, using two optimization processes at each frame: a local one using a rigid motion model in the neighborhood of each vertex, and a global one using a regularized nonrigid model for the whole mesh. Qualitative and quantitative experiments using seven real datasets show that our algorithm effectively handles complex nonrigid motions and severe occlusions. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Mesh <s> This paper proposes a novel approach to motion capture from multiple, synchronized video streams, specifically aimed at recording dense and accurate models of the structure and motion of highly deformable surfaces such as skin, that stretches, shrinks, and shears in the midst of normal facial expressions. Solving this problem is a key step toward effective performance capture for the entertainment industry, but progress so far has been hampered by the lack of appropriate local motion and smoothness models. The main technical contribution of this paper is a novel approach to regularization adapted to nonrigid tangential deformations. Concretely, we estimate the nonrigid deformation parameters at each vertex of a surface mesh, smooth them over a local neighborhood for robustness, and use them to regularize the tangential motion estimation. To demonstrate the power of the proposed approach, we have integrated it into our previous work for markerless motion capture [9], and compared the performances of the original and new algorithms on three extremely challenging face datasets that include highly nonrigid skin deformations, wrinkles, and quickly changing expressions. Additional experiments with a dataset featuring fast-moving cloth with complex and evolving fold structures demonstrate that the adaptability of the proposed regularization scheme to nonrigid tangential motion does not hamper its robustness, since it successfully recovers the shape and motion of the cloth without overfitting it despite the absence of stretch or shear in this case. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Mesh <s> This paper addresses the problem of estimating the dense 3D motion of a scene over several frames using a set of calibrated cameras. Most current 3D motion estimation techniques are limited to estimating the motion over a single frame, unless a strong prior model of the scene (such as a skeleton) is introduced. Estimating the 3D motion of a general scene is difficult due to untextured surfaces, complex movements and occlusions. 
In this paper, we show that it is possible to track the surfaces of a scene over several frames, by introducing an effective prior on the scene motion. Experimental results show that the proposed method estimates the dense scene-flow over multiple frames, without the need for multiple-view reconstructions at every frame. Furthermore, the accuracy of the proposed method is demonstrated by comparing the estimated motion against a ground truth. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Mesh <s> Existing scene flow approaches mainly focus on two-frame stereo-pair configurations and reconstruct an image-based representation of scene flow. Instead, we propose a variational formulation of scene flow relative to a coarse proxy geometry, which is better suited for many views. Furthermore, a linear basis is used to represent temporal surface flow, allowing for longer-range temporal correspondence with fewer variables. Our formulation takes known proxy motion into account (e.g, if the proxy is a tracked human subject), which enables 3D trajectory reconstruction when only a single view is available. Additionally, through the appropriate proxy and basis, our framework generalizes existing approaches for scene flow, optic-flow, and two-frame stereo. We illustrate results on real-data for both static and moving proxy surfaces over several frames. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> Mesh <s> In this paper we consider the problem of estimating a 3D motion field using multiple cameras. In particular, we focus on the situation where a depth camera and one or more color cameras are available, a common situation with recent composite sensors such as the Kinect. In this case, geometric information from depth maps can be combined with intensity variations in color images in order to estimate smooth and dense 3D motion fields. We propose a unified framework for this purpose, that can handle both arbitrary large motions and sub-pixel displacements. The estimation is cast as a linear optimization problem that can be solved very efficiently. The novelty with respect to existing scene flow approaches is that it takes advantage of the geometric information provided by the depth camera to define a surface domain over which photometric constraints can be consistently integrated in 3D. Experiments on real and synthetic data provide both qualitative and quantitative results that demonstrate the interest of the approach. <s> BIB009 </s> Scene Flow Estimation: A Survey <s> Mesh <s> We introduce a framework to estimate and refine 3D scene flow which connects 3D structures of a scene across different frames. In contrast to previous approaches which compute 3D scene flow that connects depth maps from a stereo image sequence or from a depth camera, our approach takes advantage of full 3D reconstruction which computes the 3D scene flow that connects 3D point clouds from multi-view stereo system. Our approach uses a standard multi-view stereo and optical flow algorithm to compute the initial 3D scene flow. A unique two-stage refinement process regularizes the scene flow direction and magnitude sequentially. The scene flow direction is refined by utilizing 3D neighbor smoothness defined by tensor voting. The magnitude of the scene flow is refined by connecting the implicit surfaces across the consecutive 3D point clouds. Our estimated scene flow is temporally consistent. Our approach is efficient, model free, and it is effective in error corrections and outlier rejections. 
We tested our approach on both synthetic and real-world datasets. Our experimental results show that our approach out-performs previous algorithms quantitatively on synthetic dataset, and it improves the reconstructed 3D model from the refined 3D point cloud in real-world dataset. <s> BIB010 </s> Scene Flow Estimation: A Survey <s> Mesh <s> Estimating dense 3D scene flow from stereo sequences remains a challenging task, despite much progress in both classical disparity and 2D optical flow estimation. To overcome the limitations of existing techniques, we introduce a novel model that represents the dynamic 3D scene by a collection of planar, rigidly moving, local segments. Scene flow estimation then amounts to jointly estimating the pixel-to-segment assignment, and the 3D position, normal vector, and rigid motion parameters of a plane for each segment. The proposed energy combines an occlusion-sensitive data term with appropriate shape, motion, and segmentation regularizers. Optimization proceeds in two stages: Starting from an initial super pixelization, we estimate the shape and motion parameters of all segments by assigning a proposal from a set of moving planes. Then the pixel-to-segment assignment is updated, while holding the shape and motion parameters of the moving planes fixed. We demonstrate the benefits of our model on different real-world image sets, including the challenging KITTI benchmark. We achieve leading performance levels, exceeding competing 3D scene flow methods, and even yielding better 2D motion estimates than all tested dedicated optical flow techniques. <s> BIB011 </s> Scene Flow Estimation: A Survey <s> Mesh <s> This article presents a novel method for estimating the dense three-dimensional motion of a scene from multiple cameras. Our method employs an interconnected patch model of the scene surfaces. The interconnected nature of the model means that we can incorporate prior knowledge about neighbouring scene motions through the use of a Markov Random Field, whilst the patch-based nature of the model allows the use of efficient techniques for estimating the local motion at each patch. An important aspect of our work is that the method takes account of the fact that local surface texture strongly dictates the accuracy of the motion that can be estimated at each patch. Even with simple squared-error cost functions, it produces results that are either equivalent to or better than results from a method based upon a state-of-the-art optical flow technique, which uses well-developed robust cost functions and energy minimisation techniques. <s> BIB012 </s> Scene Flow Estimation: A Survey <s> Mesh <s> We propose a method to recover dense 3D scene flow from stereo video. The method estimates the depth and 3D motion field of a dynamic scene from multiple consecutive frames in a sliding temporal window, such that the estimate is consistent across both viewpoints of all frames within the window. The observed scene is modeled as a collection of planar patches that are consistent across views, each undergoing a rigid motion that is approximately constant over time. Finding the patches and their motions is cast as minimization of an energy function over the continuous plane and motion parameters and the discrete pixel-to-plane assignment. We show that such a view-consistent multi-frame scheme greatly improves scene flow computation in the presence of occlusions, and increases its robustness against adverse imaging conditions, such as specularities. 
Our method currently achieves leading performance on the KITTI benchmark, for both flow and stereo. <s> BIB013 </s> Scene Flow Estimation: A Survey <s> Mesh <s> 3D scene flow estimation aims to jointly recover dense geometry and 3D motion from stereoscopic image sequences, thus generalizes classical disparity and 2D optical flow estimation. To realize its conceptual benefits and overcome limitations of many existing methods, we propose to represent the dynamic scene as a collection of rigidly moving planes, into which the input images are segmented. Geometry and 3D motion are then jointly recovered alongside an over-segmentation of the scene. This piecewise rigid scene model is significantly more parsimonious than conventional pixel-based representations, yet retains the ability to represent real-world scenes with independent object motion. It, furthermore, enables us to define suitable scene priors, perform occlusion reasoning, and leverage discrete optimization schemes toward stable and accurate results. Assuming the rigid motion to persist approximately over time additionally enables us to incorporate multiple frames into the inference. To that end, each view holds its own representation, which is encouraged to be consistent across all other viewpoints and frames in a temporal window. We show that such a view-consistent multi-frame scheme significantly improves accuracy, especially in the presence of occlusions, and increases robustness against adverse imaging conditions. Our method currently achieves leading performance on the KITTI benchmark, for both flow and stereo. <s> BIB014 </s> Scene Flow Estimation: A Survey <s> Mesh <s> Photometric stereo (PS) is an established technique for high-detail reconstruction of 3D geometry and appearance. To correct for surface integration errors, PS is often combined with multiview stereo (MVS). With dynamic objects, PS reconstruction also faces the problem of computing optical flow (OF) for image alignment under rapid changes in illumination. Current PS methods typically compute optical flow and MVS as independent stages, each one with its own limitations and errors introduced by early regularization. In contrast, scene flow methods estimate geometry and motion, but lack the fine detail from PS. This paper proposes photogeometric scene flow (PGSF) for high-quality dynamic 3D reconstruction. PGSF performs PS, OF, and MVS simultaneously. It is based on two key observations: (i) while image alignment improves PS, PS allows for surfaces to be relit to improve alignment, (ii) PS provides surface gradients that render the smoothness term in MVS unnecessary, leading to truly data-driven, continuous depth estimates. This synergy is demonstrated in the quality of the resulting RGB appearance, 3D geometry, and 3D motion. <s> BIB015
|
Meshes represent a surface as a set of planar polygons, e.g., triangles, which are connected to each other as shown in Figure 3 (b). This representation is an efficient way for rendering and occupies less memory. A mesh is essentially a sort of point cloud representation, as each vertex can be viewed as a point in the three-dimensional point cloud. Scene flow estimation methods with a mesh representation BIB001 BIB005 BIB006 BIB008 BIB009 BIB010 BIB012 are only found under a multi-view setting, where the geometry is estimated jointly with the motion. Early papers viewed the patch as a surface element (surfel) under a multi-view setting BIB002 BIB003 . A few binocular-based scene flow methods utilized patches to fit the surface of the scene BIB004 BIB007 BIB011 BIB012 BIB013 BIB014 BIB015 , on account that such patch-based methods are common in the stereo matching field. In addition, Hornacek uniquely exploited a pair of RGB-D frames to seek patch correspondences in the 3D world space, leading to a dense body motion field that includes both translation and rotation .
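As a minimal illustration of this representation (the array layout, names, and numbers below are hypothetical and not taken from any cited method), a triangle mesh can be stored as a vertex array plus a face index array, and a scene flow estimate then amounts to one 3D displacement vector per vertex:

```python
import numpy as np

# Hypothetical toy mesh: 4 vertices and 2 triangles sharing an edge.
vertices_t0 = np.array([[0.0, 0.0, 2.0],
                        [1.0, 0.0, 2.0],
                        [0.0, 1.0, 2.1],
                        [1.0, 1.0, 2.1]])          # (N, 3) vertex positions at time t
faces = np.array([[0, 1, 2],
                  [1, 3, 2]])                      # (M, 3) vertex indices per triangle

# Per-vertex scene flow: the 3D displacement of each vertex between frames.
scene_flow = np.array([[0.01, 0.00, -0.02],
                       [0.01, 0.00, -0.02],
                       [0.00, 0.01, -0.02],
                       [0.00, 0.01, -0.02]])       # (N, 3)

vertices_t1 = vertices_t0 + scene_flow             # deformed mesh at time t+1

# The connectivity (faces) is unchanged: the vertices act as a point cloud,
# while the triangles define the surface spanned between them.
print(vertices_t1.shape, faces.shape)
```

The fixed connectivity is also what allows neighborhood-based regularization of the per-vertex motion, as used in the mesh-tracking methods cited above.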
|
Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> Scene flow is the 3D motion field of points in the world. Given N (N>1) image sequences gathered with a N-eye stereo camera or N calibrated cameras, we present a novel system which integrates 3D scene flow and structure recovery in order to complement each other's performance. We do not assume rigidity of the scene motion, thus allowing for non-rigid motion in the scene. In our work, images are segmented into small regions. We assume that each small region is undergoing similar motion, represented by a 3D affine model. Nonlinear motion model fitting based on both optical flow constraints and stereo constraints is then carried over each image region in order to simultaneously estimate 3D motion correspondences and structure. To ensure the robustness, several regularization constraints are also introduced. A recursive algorithm is designed to incorporate the local and regularization constraints. Experimental results on both synthetic and real data demonstrate the effectiveness of our integrated 3D motion and structure analysis scheme. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> In this paper, novel algorithms computing dense 3D scene flow from multiview image sequences are described. A new hierarchical rule-based stereo matching algorithm is presented to estimate the initial disparity map. Different available constraints under a multiview camera setup are investigated and then utilized in the proposed motion estimation algorithms. We show two different formulations for 3D scene flow computation. One formulation assumes that initial disparity map is accurate while the other does not make this assumption. Image segmentation information is used to maintain the motion and depth discontinuities. Iterative implementations are used to successfully compute 3D scene flow and structure at every point in the reference image. Novel hard constraints are introduced in this paper to make the algorithms more accurate and robust. Promising experimental results are seen by applying our algorithms to real imagery. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> We present a method to automatically extract spatiotemporal descriptions of moving objects from synchronized and calibrated multi-view sequences. The object is modeled by a time-varying multi-resolution subdivision surface that is fitted to the image data using spatio-temporal multiview stereo information, as well as contour constraints. The stereo data is utilized by computing the normalized correlation between corresponding spatio-temporal image trajectories of surface patches, while the contour information is determined using incremental segmentation of the viewing volume into object and background. We globally optimize the shape of the spatio-temporal surface in a coarse-to-fine manner using the multi-resolution structure of the subdivision mesh. The method presented incorporates the available image information in a unified framework and automatically reconstructs accurate spatio-temporal representations of complex non-rigidly moving objects. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> Two novel systems computing dense three-dimensional (3-D) scene flow and structure from multiview image sequences are described in this paper. We do not assume rigidity of the scene motion, thus allowing for nonrigid motion in the scene. 
The first system, integrated model-based system (IMS), assumes that each small local image region is undergoing 3-D affine motion. Non-linear motion model fitting based on both optical flow constraints and stereo constraints is then carried out on each local region in order to simultaneously estimate 3-D motion correspondences and structure. The second system is based on extended gradient-based system (EGS), a natural extension of two-dimensional (2-D) optical flow computation. In this method, a new hierarchical rule-based stereo matching algorithm is first developed to estimate the initial disparity map. Different available constraints under a multiview camera setup are further investigated and utilized in the proposed motion estimation. We use image segmentation information to adopt and maintain the motion and depth discontinuities. Within the framework for EGS, we present two different formulations for 3-D scene flow and structure computation. One formulation assumes that initial disparity map is accurate, while the other does not. Experimental results on both synthetic and real imagery demonstrate the effectiveness of our 3-D motion and structure recovery schemes. Empirical comparison between IMS and EGS is also reported. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> We present a common variational framework for dense depth recovery and dense three-dimensional motion field estimation from multiple video sequences, which is robust to camera spectral sensitivity differences and illumination changes. For this purpose, we first show that both problems reduce to a generic image matching problem after backprojecting the input images onto suitable surfaces. We then solve this matching problem in the case of statistical similarity criteria that can handle frequently occurring nonaffine image intensities dependencies. Our method leads to an efficient and elegant implementation based on fast recursive filters. We obtain good results on real images. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> In this paper we study the problem of recovering the 3D shape, reflectance, and non-rigid motion properties of a dynamic 3D scene. Because these properties are completely unknown and because the scene's shape and motion may be non-smooth, our approach uses multiple views to build a piecewise-continuous geometric and radiometric representation of the scene's trace in space-time. A basic primitive of this representation is the dynamic surfel, which (1) encodes the instantaneous local shape, reflectance, and motion of a small and bounded region in the scene, and (2) enables accurate prediction of the region's dynamic appearance under known illumination conditions. We show that complete surfel-based reconstructions can be created by repeatedly applying an algorithm called Surfel Sampling that combines sampling and parameter estimation to fit a single surfel to a small, bounded region of space-time. Experimental results with the Phong reflectancemodel and complex real scenes (clothing, shiny objects, skin) illustrate our method's ability to explain pixels and pixel variations in terms of their underlying causes—shape, reflectance, motion, illumination, and visibility. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> Just as optical flow is the two-dimensional motion of points in an image, scene flow is the three-dimensional motion of points in the world. 
The fundamental difficulty with optical flow is that only the normal flow can be computed directly from the image measurements, without some form of smoothing or regularization. In this paper, we begin by showing that the same fundamental limitation applies to scene flow; however, many cameras are used to image the scene. There are then two choices when computing scene flow: 1) perform the regularization in the images or 2) perform the regularization on the surface of the object in the scene. In this paper, we choose to compute scene flow using regularization in the images. We describe three algorithms, the first two for computing scene flow from optical flows and the third for constraining scene structure from the inconsistencies in multiple optical flows. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> Obstacle avoidance is one of the most important challenges for mobile robots as well as future vision based driver assistance systems. This task requires a precise extraction of depth and the robust and fast detection of moving objects. In order to reach these goals, this paper considers vision as a process in space and time. It presents a powerful fusion of depth and motion information for image sequences taken from a moving observer. 3D-position and 3D-motion for a large number of image points are estimated simultaneously by means of Kalman-Filters. There is no need of prior error-prone segmentation. Thus, one gets a rich 6D representation that allows the detection of moving obstacles even in the presence of partial occlusion of foreground or background. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods allow. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization-two problems commonly associated with the basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach. <s> BIB009 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> We present a new variational method for multi-view stereovision and non-rigid three-dimensional motion estimation from multiple video sequences. Our method minimizes the prediction error of the shape and motion estimates. Both problems then translate into a generic image registration task. The latter is entrusted to a global measure of image similarity, chosen depending on imaging conditions and scene properties. Rather than integrating a matching measure computed independently at each surface point, our approach computes a global image-based matching score between the input images and the predicted images. 
The matching process fully handles projective distortion and partial occlusions. Neighborhood as well as global intensity information can be exploited to improve the robustness to appearance changes due to non-Lambertian materials and illumination changes, without any approximation of shape, motion or visibility. Moreover, our approach results in a simpler, more flexible, and more efficient implementation than in existing methods. The computation time on large datasets does not exceed thirty minutes on a standard workstation. Finally, our method is compliant with a hardware implementation with graphics processor units. Our stereovision algorithm yields very good results on a variety of datasets including specularities and translucency. We have successfully tested our motion estimation algorithm on a very challenging multi-view video sequence of a non-rigid scene. <s> BIB010 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> Scene flow represents the 3-D motion of points in the scene, just as optical flow is related to their 2-D motion in the images. As opposed to classical methods which compute scene flow from optical flow, we propose to compute it by tracking 3-D points and surface elements (surfels) in a multi-camera setup (at least two cameras are needed). Two methods are proposed: in the first one, the translation of each 3-D point is found by matching the neighborhoods of its 2-D projections in each camera between two time steps; in the second one, the full pose of a surfel is recovered by matching the image of its projection with a texture template attached to the surfel, and visibility changes caused by occlusion or rotation of surfels are handled. Both methods detect lost or untrackable points and surfels. They were designed for real-time execution and can be used for fast extraction of scene flow from multi-camera sequences. <s> BIB011 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> We describe a method for computing a dense estimate of motion and disparity, given a stereo video sequence containing moving non-rigid objects. In contrast to previous approaches, motion and disparity are estimated simultaneously from a single coherent probabilistic model that correctly accounts for all occlusions, depth discontinuities, and motion discontinuities. The results demonstrate that simultaneous estimation of motion and disparity is superior to estimating either in isolation, and show the promise of the technique for accurate, probabilistically justified, scene analysis. <s> BIB012 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> This paper presents a method for scene flow estimation from a calibrated stereo image sequence. The scene flow contains the 3-D displacement field of scene points, so that the 2-D optical flow can be seen as a projection of the scene flow onto the images. We propose to recover the scene flow by coupling the optical flow estimation in both cameras with dense stereo matching between the images, thus reducing the number of unknowns per image point. The use of a variational framework allows us to properly handle discontinuities in the observed surfaces and in the 3-D displacement field. Moreover our approach handles occlusions both for the optical flow and the stereo. We obtain a partial differential equations system coupling both the optical flow and the stereo, which is numerically solved using an original multi- resolution algorithm. 
Whereas previous variational methods were estimating the 3-D reconstruction at time t and the scene flow separately, our method jointly estimates both in a single optimization. We present numerical results on synthetic data with ground truth information, and we also compare the accuracy of the scene flow projected in one camera with a state-of-the-art single-camera optical flow computation method. Results are also presented on a real stereo sequence with large motion and stereo discontinuities. Source code and sample data are available for the evaluation of the algorithm. <s> BIB013 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> This paper proposes a novel approach to non-rigid, markerless motion capture from synchronized video streams acquired by calibrated cameras. The instantaneous geometry of the observed scene is represented by a polyhedral mesh with fixed topology. The initial mesh is constructed in the first frame using the publicly available PMVS software for multi-view stereo [7]. Its deformation is captured by tracking its vertices over time, using two optimization processes at each frame: a local one using a rigid motion model in the neighborhood of each vertex, and a global one using a regularized nonrigid model for the whole mesh. Qualitative and quantitative experiments using seven real datasets show that our algorithm effectively handles complex nonrigid motions and severe occlusions. <s> BIB014 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> This paper presents a technique for estimating the three-dimensional velocity vector field that describes the motion of each visible scene point (scene flow). The technique presented uses two consecutive image pairs from a stereo sequence. The main contribution is to decouple the position and velocity estimation steps, and to estimate dense velocities using a variational approach. We enforce the scene flow to yield consistent displacement vectors in the left and right images. The decoupling strategy has two main advantages: Firstly, we are independent in choosing a disparity estimation technique, which can yield either sparse or dense correspondences, and secondly, we can achieve frame rates of 5 fps on standard consumer hardware. The approach provides dense velocity estimates with accurate results at distances up to 50 meters. <s> BIB015 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> This paper proposes a novel approach to motion capture from multiple, synchronized video streams, specifically aimed at recording dense and accurate models of the structure and motion of highly deformable surfaces such as skin, that stretches, shrinks, and shears in the midst of normal facial expressions. Solving this problem is a key step toward effective performance capture for the entertainment industry, but progress so far has been hampered by the lack of appropriate local motion and smoothness models. The main technical contribution of this paper is a novel approach to regularization adapted to nonrigid tangential deformations. Concretely, we estimate the nonrigid deformation parameters at each vertex of a surface mesh, smooth them over a local neighborhood for robustness, and use them to regularize the tangential motion estimation. 
To demonstrate the power of the proposed approach, we have integrated it into our previous work for markerless motion capture [9], and compared the performances of the original and new algorithms on three extremely challenging face datasets that include highly nonrigid skin deformations, wrinkles, and quickly changing expressions. Additional experiments with a dataset featuring fast-moving cloth with complex and evolving fold structures demonstrate that the adaptability of the proposed regularization scheme to nonrigid tangential motion does not hamper its robustness, since it successfully recovers the shape and motion of the cloth without overfitting it despite the absence of stretch or shear in this case. <s> BIB016 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> This paper addresses the problem of estimating the dense 3D motion of a scene over several frames using a set of calibrated cameras. Most current 3D motion estimation techniques are limited to estimating the motion over a single frame, unless a strong prior model of the scene (such as a skeleton) is introduced. Estimating the 3D motion of a general scene is difficult due to untextured surfaces, complex movements and occlusions. In this paper, we show that it is possible to track the surfaces of a scene over several frames, by introducing an effective prior on the scene motion. Experimental results show that the proposed method estimates the dense scene-flow over multiple frames, without the need for multiple-view reconstructions at every frame. Furthermore, the accuracy of the proposed method is demonstrated by comparing the estimated motion against a ground truth. <s> BIB017 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> In this paper a novel approach for estimating the three dimensional motion field of the visible world from stereo image sequences is proposed. This approach combines dense variational optical flow estimation, including spatial regularization, with Kalman filtering for temporal smoothness and robustness. The result is a dense, robust, and accurate reconstruction of the three-dimensional motion field of the current scene that is computed in real-time. Parallel implementation on a GPU and an FPGA yields a vision-system which is directly applicable in real-world scenarios, like automotive driver assistance systems or in the field of surveillance. Within this paper we systematically show that the proposed algorithm is physically motivated and that it outperforms existing approaches with respect to computation time and accuracy. <s> BIB018 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse or dense disparity maps. The proposed method is very efficient; with the depth map being computed on an FPGA, and the scene flow computed on the GPU, the proposed algorithm runs at frame rates of 20 frames per second on QVGA images (320×240 pixels). 
Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and the uncertainty measures for the scene flow result. <s> BIB019 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> We present a novel variational method for the simultaneous estimation of dense scene flow and structure from stereo sequences. In contrast to existing approaches that rely on a fully calibrated camera setup, we assume that only the intrinsic camera parameters are known. To couple the estimation of motion, structure and geometry, we propose a joint energy functional that integrates spatial and temporal information from two subsequent image pairs subject to an unknown stereo setup. We further introduce a normalisation of image and stereo constraints such that deviations from model assumptions can be interpreted in a geometrical way. Finally, we suggest a separate discontinuity-preserving regularisation to improve the accuracy. Experiments on calibrated and uncalibrated data demonstrate the excellent performance of our approach. We even outperform recent techniques for the rectified case that make explicit use of the simplified geometry. <s> BIB020 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> In this paper, we introduce a local image descriptor, DAISY, which is very efficient to compute densely. We also present an EM-based algorithm to compute dense depth and occlusion maps from wide-baseline image pairs using this descriptor. This yields much better results in wide-baseline situations than the pixel and correlation-based algorithms that are commonly used in narrow-baseline stereo. Also, using a descriptor makes our algorithm robust against many photometric and geometric transformations. Our descriptor is inspired from earlier ones such as SIFT and GLOH but can be computed much faster for our purposes. Unlike SURF, which can also be computed efficiently at every pixel, it does not introduce artifacts that degrade the matching performance when used densely. It is important to note that our approach is the first algorithm that attempts to estimate dense depth maps from wide-baseline image pairs, and we show that it is a good one at that with many experiments for depth estimation accuracy, occlusion detection, and comparing it against other descriptors on laser-scanned ground truth scenes. We also tested our approach on a variety of indoor and outdoor scenes with different photometric and geometric transformations and our experiments support our claim to being robust against these. <s> BIB021 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> Existing scene flow approaches mainly focus on two-frame stereo-pair configurations and reconstruct an image-based representation of scene flow. Instead, we propose a variational formulation of scene flow relative to a coarse proxy geometry, which is better suited for many views. Furthermore, a linear basis is used to represent temporal surface flow, allowing for longer-range temporal correspondence with fewer variables. Our formulation takes known proxy motion into account (e.g, if the proxy is a tracked human subject), which enables 3D trajectory reconstruction when only a single view is available. Additionally, through the appropriate proxy and basis, our framework generalizes existing approaches for scene flow, optic-flow, and two-frame stereo. We illustrate results on real-data for both static and moving proxy surfaces over several frames. 
<s> BIB022 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> In this paper we consider the problem of estimating a 3D motion field using multiple cameras. In particular, we focus on the situation where a depth camera and one or more color cameras are available, a common situation with recent composite sensors such as the Kinect. In this case, geometric information from depth maps can be combined with intensity variations in color images in order to estimate smooth and dense 3D motion fields. We propose a unified framework for this purpose, that can handle both arbitrary large motions and sub-pixel displacements. The estimation is cast as a linear optimization problem that can be solved very efficiently. The novelty with respect to existing scene flow approaches is that it takes advantage of the geometric information provided by the depth camera to define a surface domain over which photometric constraints can be consistently integrated in 3D. Experiments on real and synthetic data provide both qualitative and quantitative results that demonstrate the interest of the approach. <s> BIB023 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> A simple seed growing algorithm for estimating scene flow in a stereo setup is presented. Two calibrated and synchronized cameras observe a scene and output a sequence of image pairs. The algorithm simultaneously computes a disparity map between the image pairs and optical flow maps between consecutive images. This, together with calibration data, is an equivalent representation of the 3D scene flow, i.e. a 3D velocity vector is associated with each reconstructed point. The proposed method starts from correspondence seeds and propagates these correspondences to their neighborhood. It is accurate for complex scenes with large motions and produces temporally-coherent stereo disparity and optical flow results. The algorithm is fast due to inherent search space reduction. An explicit comparison with recent methods of spatiotemporal stereo and variational optical and scene flow is provided. <s> BIB024 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> We present an approach to 3D scene flow estimation, which exploits that in realistic scenarios image motion is frequently dominated by observer motion and independent, but rigid object motion. We cast the dense estimation of both scene structure and 3D motion from sequences of two or more views as a single energy minimization problem. We show that agnostic smoothness priors, such as the popular total variation, are biased against motion discontinuities in viewing direction. Instead, we propose to regularize by encouraging local rigidity of the 3D scene. We derive a local rigidity constraint of the 3D scene flow and define a smoothness term that penalizes deviations from that constraint, thus favoring solutions that consist largely of rigidly moving parts. Our experiments show that the new rigid motion prior reduces the 3D flow error by 42% compared to standard TV regularization with the same data term. <s> BIB025 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> We introduce a framework to estimate and refine 3D scene flow which connects 3D structures of a scene across different frames. 
In contrast to previous approaches which compute 3D scene flow that connects depth maps from a stereo image sequence or from a depth camera, our approach takes advantage of full 3D reconstruction which computes the 3D scene flow that connects 3D point clouds from multi-view stereo system. Our approach uses a standard multi-view stereo and optical flow algorithm to compute the initial 3D scene flow. A unique two-stage refinement process regularizes the scene flow direction and magnitude sequentially. The scene flow direction is refined by utilizing 3D neighbor smoothness defined by tensor voting. The magnitude of the scene flow is refined by connecting the implicit surfaces across the consecutive 3D point clouds. Our estimated scene flow is temporally consistent. Our approach is efficient, model free, and it is effective in error corrections and outlier rejections. We tested our approach on both synthetic and real-world datasets. Our experimental results show that our approach out-performs previous algorithms quantitatively on synthetic dataset, and it improves the reconstructed 3D model from the refined 3D point cloud in real-world dataset. <s> BIB026 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> This paper is concerned with the recovery of temporally coherent estimates of 3D structure and motion of a dynamic scene from a sequence of binocular stereo images. A novel approach is presented based on matching of spatiotemporal quadric elements (stequels) between views, as this primitive encapsulates both spatial and temporal image structure for 3D estimation. Match constraints are developed for bringing stequels into correspondence across binocular views. With correspondence established, temporally coherent disparity estimates are obtained without explicit motion recovery. Further, the matched stequels also will be shown to support direct recovery of scene flow estimates. Extensive algorithmic evaluation with ground truth data incorporated in both local and global correspondence paradigms shows the considerable benefit of using stequels as a matching primitive and its advantages in comparison to alternative methods of enforcing temporal coherence in disparity estimation. Additional experiments document the usefulness of stequel matching for 3D scene flow estimation. <s> BIB027 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> We propose a depth and image scene flow estimation method taking the input of a binocular video. The key component is motion-depth temporal consistency preservation, making computation in long sequences reliable. We tackle a number of fundamental technical issues, including connection establishment between motion and depth, structure consistency preservation in multiple frames, and long-range temporal constraint employment for error correction. We address all of them in a unified depth and scene flow estimation framework. Our main contributions include development of motion trajectories, which robustly link frame correspondences in a voting manner, rejection of depth/motion outliers through temporal robust regression, novel edge occurrence map estimation, and introduction of anisotropic smoothing priors for proper regularization. <s> BIB028 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> We present a novel method for recovering the 3D structure and scene flow from calibrated multi-view sequences. 
We propose a 3D point cloud parametrization of the 3D structure and scene flow that allows us to directly estimate the desired unknowns. A unified global energy functional is proposed to incorporate the information from the available sequences and simultaneously recover both depth and scene flow. The functional enforces multi-view geometric consistency and imposes brightness constancy and piecewise smoothness assumptions directly on the 3D unknowns. It inherently handles the challenges of discontinuities, occlusions, and large displacements. The main contribution of this work is the fusion of a 3D representation and an advanced variational framework that directly uses the available multi-view information. This formulation allows us to advantageously bind the 3D unknowns in time and space. Different from optical flow and disparity, the proposed method results in a nonlinear mapping between the images' coordinates, thus giving rise to additional challenges in the optimization process. Our experiments on real and synthetic data demonstrate that the proposed method successfully recovers the 3D structure and scene flow despite the complicated nonconvex optimization problem. <s> BIB029 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> Estimating dense 3D scene flow from stereo sequences remains a challenging task, despite much progress in both classical disparity and 2D optical flow estimation. To overcome the limitations of existing techniques, we introduce a novel model that represents the dynamic 3D scene by a collection of planar, rigidly moving, local segments. Scene flow estimation then amounts to jointly estimating the pixel-to-segment assignment, and the 3D position, normal vector, and rigid motion parameters of a plane for each segment. The proposed energy combines an occlusion-sensitive data term with appropriate shape, motion, and segmentation regularizers. Optimization proceeds in two stages: Starting from an initial super pixelization, we estimate the shape and motion parameters of all segments by assigning a proposal from a set of moving planes. Then the pixel-to-segment assignment is updated, while holding the shape and motion parameters of the moving planes fixed. We demonstrate the benefits of our model on different real-world image sets, including the challenging KITTI benchmark. We achieve leading performance levels, exceeding competing 3D scene flow methods, and even yielding better 2D motion estimates than all tested dedicated optical flow techniques. <s> BIB030 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> This article presents a novel method for estimating the dense three-dimensional motion of a scene from multiple cameras. Our method employs an interconnected patch model of the scene surfaces. The interconnected nature of the model means that we can incorporate prior knowledge about neighbouring scene motions through the use of a Markov Random Field, whilst the patch-based nature of the model allows the use of efficient techniques for estimating the local motion at each patch. An important aspect of our work is that the method takes account of the fact that local surface texture strongly dictates the accuracy of the motion that can be estimated at each patch. Even with simple squared-error cost functions, it produces results that are either equivalent to or better than results from a method based upon a state-of-the-art optical flow technique, which uses well-developed robust cost functions and energy minimisation techniques. 
<s> BIB031 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> In this paper we propose a slanted plane model for jointly recovering an image segmentation, a dense depth estimate as well as boundary labels (such as occlusion boundaries) from a static scene given two frames of a stereo pair captured from a moving vehicle. Towards this goal we propose a new optimization algorithm for our SLIC-like objective which preserves connecteness of image segments and exploits shape regularization in the form of boundary length. We demonstrate the performance of our approach in the challenging stereo and flow KITTI benchmarks and show superior results to the state-of-the-art. Importantly, these results can be achieved an order of magnitude faster than competing approaches. <s> BIB032 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> We propose a method to recover dense 3D scene flow from stereo video. The method estimates the depth and 3D motion field of a dynamic scene from multiple consecutive frames in a sliding temporal window, such that the estimate is consistent across both viewpoints of all frames within the window. The observed scene is modeled as a collection of planar patches that are consistent across views, each undergoing a rigid motion that is approximately constant over time. Finding the patches and their motions is cast as minimization of an energy function over the continuous plane and motion parameters and the discrete pixel-to-plane assignment. We show that such a view-consistent multi-frame scheme greatly improves scene flow computation in the presence of occlusions, and increases its robustness against adverse imaging conditions, such as specularities. Our method currently achieves leading performance on the KITTI benchmark, for both flow and stereo. <s> BIB033 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> Photometric stereo (PS) is an established technique for high-detail reconstruction of 3D geometry and appearance. To correct for surface integration errors, PS is often combined with multiview stereo (MVS). With dynamic objects, PS reconstruction also faces the problem of computing optical flow (OF) for image alignment under rapid changes in illumination. Current PS methods typically compute optical flow and MVS as independent stages, each one with its own limitations and errors introduced by early regularization. In contrast, scene flow methods estimate geometry and motion, but lack the fine detail from PS. This paper proposes photogeometric scene flow (PGSF) for high-quality dynamic 3D reconstruction. PGSF performs PS, OF, and MVS simultaneously. It is based on two key observations: (i) while image alignment improves PS, PS allows for surfaces to be relit to improve alignment, (ii) PS provides surface gradients that render the smoothness term in MVS unnecessary, leading to truly data-driven, continuous depth estimates. This synergy is demonstrated in the quality of the resulting RGB appearance, 3D geometry, and 3D motion. <s> BIB034 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. 
To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network. <s> BIB035 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> 3D scene flow estimation aims to jointly recover dense geometry and 3D motion from stereoscopic image sequences, thus generalizes classical disparity and 2D optical flow estimation. To realize its conceptual benefits and overcome limitations of many existing methods, we propose to represent the dynamic scene as a collection of rigidly moving planes, into which the input images are segmented. Geometry and 3D motion are then jointly recovered alongside an over-segmentation of the scene. This piecewise rigid scene model is significantly more parsimonious than conventional pixel-based representations, yet retains the ability to represent real-world scenes with independent object motion. It, furthermore, enables us to define suitable scene priors, perform occlusion reasoning, and leverage discrete optimization schemes toward stable and accurate results. Assuming the rigid motion to persist approximately over time additionally enables us to incorporate multiple frames into the inference. To that end, each view holds its own representation, which is encouraged to be consistent across all other viewpoints and frames in a temporal window. We show that such a view-consistent multi-frame scheme significantly improves accuracy, especially in the presence of occlusions, and increases robustness against adverse imaging conditions. Our method currently achieves leading performance on the KITTI benchmark, for both flow and stereo. <s> BIB036 </s> Scene Flow Estimation: A Survey <s> Multi view stereopsis <s> We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings. <s> BIB037
|
Most of the algorithms in the early 2000s assume a multi-view system, with multiple calibrated cameras set around a complex scene. Multi-view scene flow estimation is usually performed simultaneously with 3D geometry reconstruction. Ample data sources and diverse prior knowledge ensure the robustness of the estimation, and occlusions can be handled well. However, it commonly comes at a high computational cost, since an intricate full-view scene has to be processed. Vedula proposed two choices for regularization and distinguished three scenarios in 1999 BIB007 , which have guided multi-view scene flow estimation until now. In his paper, the multi-view scene flow can be estimated from one optical flow and the known surface geometry; the relation, given as Equation 5, is V(X) = (∂X/∂x) v(x), where X = (X, Y, Z) is the three-dimensional scene point, x = (x, y) is the two-dimensional image pixel, V(X) is the scene flow, v(x) is the optical flow, and ∂X/∂x is the inverse Jacobian, which can be estimated from the surface gradient ∇S(X). Afterwards, Zhang proposed two systems for estimation BIB001 BIB002 BIB004 : IMS assumes that each small patch undergoes a 3D affine motion, while EGS uses segmentation to preserve motion and depth boundaries. These papers modeled the energy function with multiple constraints, providing a basic estimation pipeline. Similarly, Pons presented a common variational framework constrained by image similarity criteria BIB005 BIB010 . Thereafter, different scene representations were introduced to describe the surface BIB006 BIB003 BIB014 BIB016 BIB017 BIB022 BIB031 . Diverse multi-frame tracking methods mentioned for sparse estimation are utilized as well to build temporal coherence BIB008 BIB011 BIB026 . Moreover, Letouzey added an RGB-D camera to the multi-view system with a mesh representation BIB023 , aiming to enrich the geometry information with a depth data constraint.

Under a binocular setting, the data terms of the joint energy are typically combined as E_data = E_fl + E_fr + E_d0 + E_d1 + E_cr, where E_fl and E_fr are the optical flow consistency terms that assume the brightness of the same pixel stays constant between frames. Similarly, E_d0 and E_d1 are the stereo consistency terms that assume brightness constancy between views, and E_cr is the cross term that constrains the constancy across both frames and views. Most binocular-based methods fused stereo and optical flow estimation into a joint framework BIB009 BIB012 BIB013 BIB018 BIB024 BIB027 BIB028 BIB032 . On the contrary, others decoupled motion from disparity estimation, so that the stereo matching method can be replaced at will BIB015 BIB019 BIB034 BIB035 , and Basha utilized a point cloud scene representation as a three-dimensional parametrization of scene flow BIB029 . Moreover, a local rigidity prior was presented along with a segmentation prior and achieved promising results BIB025 BIB030 BIB033 BIB036 . Specifically, Valgaerts introduced a variational framework for scene flow estimation under an uncalibrated stereo setup by embedding an epipolar constraint BIB020 , which makes scene flow estimation possible with two arbitrary cameras. In 2016, Richardt made it possible to compute dense scene flow from two handheld cameras with varying camera settings BIB037 . Scene flow is estimated under a variational framework with the DAISY descriptor BIB021 for wide-baseline matching. Table 1 enumerates some typical methods under a binocular setting with diverse choices of data terms.
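To make the roles of these data terms concrete, the following Python sketch spells out, for a single pixel of the left image at time t, which image samples each residual compares. The function and variable names, the disparity sign convention (x_right = x_left + d), and the nearest-neighbour sampling are illustrative assumptions rather than the formulation of any particular cited method, which typically uses robust penalties and sub-pixel interpolation inside a variational solver.

```python
import numpy as np

def sample(img, x, y):
    # Nearest-neighbour lookup with border clamping (bilinear interpolation in practice).
    h, w = img.shape
    xi = int(np.clip(round(x), 0, w - 1))
    yi = int(np.clip(round(y), 0, h - 1))
    return img[yi, xi]

def data_residuals(Il0, Ir0, Il1, Ir1, x, y, u, v, d, d1):
    """Brightness-constancy residuals for one pixel (x, y) of the left image at time t.

    Il0/Ir0 and Il1/Ir1 are the left/right grayscale images at times t and t+1.
    (u, v) is the optical flow, d the disparity at time t, d1 the disparity at t+1.
    """
    E_fl = sample(Il1, x + u, y + v)      - sample(Il0, x, y)          # left optical flow
    E_fr = sample(Ir1, x + u + d1, y + v) - sample(Ir0, x + d, y)      # right optical flow
    E_d0 = sample(Ir0, x + d, y)          - sample(Il0, x, y)          # stereo at time t
    E_d1 = sample(Ir1, x + u + d1, y + v) - sample(Il1, x + u, y + v)  # stereo at time t+1
    E_cr = sample(Ir1, x + u + d1, y + v) - sample(Il0, x, y)          # cross term
    return E_fl, E_fr, E_d0, E_d1, E_cr
```

In the joint formulations above, such residuals are robustified and summed over all pixels together with smoothness terms on the flow and disparity fields.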
Most methods chose optical flow consistency terms in both views and stereo consistency terms at both time t and time t + 1 BIB009 BIB029 BIB025 BIB030 , while a few methods adopt only part of the terms mentioned above BIB012 BIB015 BIB024 . The cross term was utilized in BIB024 BIB025 BIB028 . Moreover, Huguet BIB013 and Hung BIB028 utilized an additional gradient constancy assumption besides intensity constancy to enhance robustness against illumination changes, which replaces the image intensity value I(x, y) in the energy function with the image gradient G(x, y). Additionally, extra RGB constancy terms (I_G − I_B) are adopted in Hung's paper as well, which extends the gray-value intensity to three-channel color information. However, it has been argued that the image gradient is sensitive to noise and view dependent BIB029 BIB025 . Hence, whether additional assumptions such as gradient constancy are worthwhile remains an open question, requiring further research to balance the pros and cons.
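As a small illustration of the gradient constancy idea (a generic sketch, not the exact formulation of BIB013 or BIB028), the intensity residual I(x+u, y+v, t+1) − I(x, y, t) is complemented or replaced by the same residual computed on the spatial gradient images; the integer-valued warp below is an assumption made only to keep the sketch short.

```python
import numpy as np

def grad(img):
    # Central-difference spatial gradients, stacked as a 2-channel image (gx, gy).
    gy, gx = np.gradient(img.astype(np.float64))
    return np.stack([gx, gy], axis=-1)

def constancy_residuals(I0, I1, u, v):
    """Intensity and gradient constancy residuals for a dense integer-valued flow (u, v).

    I0, I1: consecutive grayscale frames; u, v: per-pixel displacements, same shape as I0.
    """
    h, w = I0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xw = np.clip(xs + np.round(u).astype(int), 0, w - 1)
    yw = np.clip(ys + np.round(v).astype(int), 0, h - 1)

    G0, G1 = grad(I0), grad(I1)
    r_intensity = I1[yw, xw] - I0        # brightness constancy I -> I
    r_gradient = G1[yw, xw] - G0         # gradient constancy  I -> G, 2 channels
    return r_intensity, r_gradient
```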
|
Scene Flow Estimation: A Survey <s> RGB-D data <s> The combined use of intensity and depth information greatly helps in the estimation of the local 3D movements (range flow) of moving surfaces. We demonstrate how the two can be combined in both: a local total least squares algorithm, and an iterative global variational technique. While the former assumes locally constant flow, the latter relies on a smoothly varying flow field. The improvement achieved through incorporating intensity is illustrated qualitatively and quantitatively on synthetic and real test data. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> RGB-D data <s> Abstract We discuss the computation of the instantaneous 3D displacement vector fields of deformable surfaces from sequences of range data. We give a novel version of the basic motion constraint equation that can be evaluated directly on the sensor grid. The various forms of the aperture problem encountered are investigated and the derived constraint solutions are solved in a total least squares (TLS) framework. We propose a regularization scheme to compute dense full flow fields from the sparse TLS solutions. The performance of the algorithm is analyzed quantitatively for both synthetic and real data. Finally we apply the method to compute the 3D motion field of living plant leaves. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> RGB-D data <s> We extend estimation of range flow to handle brightness changes in image data caused by inhomogeneous illumination. Standard range flow computes 3D velocity fields using both range and intensity image sequences. Toward this end, range flow estimation combines a depth change model with a brightness constancy model. However, local brightness is generally not preserved when object surfaces rotate relative to the camera or the light sources, or when surfaces move in inhomogeneous illumination. We describe and investigate different approaches to handle such brightness changes. A straightforward approach is to prefilter the intensity data such that brightness changes are suppressed, for instance, by a highpass or a homomorphic filter. Such prefiltering may, though, reduce the signal-to-noise ratio. An alternative novel approach is to replace the brightness constancy model by 1) a gradient constancy model, or 2) by a combination of gradient and brightness constancy constraints used earlier successfully for optical flow, or 3) by a physics-based brightness change model. In performance tests, the standard version and the novel versions of range flow estimation are investigated using prefiltered or nonprefiltered synthetic data with available ground truth. Furthermore, the influences of additive Gaussian noise and simulated shot noise are investigated. Finally, we compare all range flow estimators on real data. <s> BIB003
|
Depth was regarded as a function of space and time by Spies BIB001 BIB002 , who added the range flow motion constraint and introduced the range flow motion field (a generic form of this constraint is sketched below). On the basis of Spies' theory, Luckins added a color channel constraint as additional information to enhance robustness . Moreover, Schuchert added a gradient constancy assumption and used pre-filtering to handle varying illumination BIB003 . With the development of RGB-D cameras, depth can be acquired easily and accurately.
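For reference, the core constraint can be written as follows; this is the standard textbook form of the range flow motion constraint, with generic notation rather than the exact symbols of the cited papers.

```latex
% Range flow motion constraint (standard form; notation is generic, not copied from the cited papers).
% Z(X, Y, t) is the sensed depth surface and (U, V, W) is the 3D velocity of a surface point.
\begin{equation*}
Z_X\,U + Z_Y\,V + Z_t = W
\end{equation*}
% Combined with a brightness constancy constraint on the intensity channel, this yields the joint
% intensity-plus-range ("range flow") estimation discussed above.
```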
|
Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> The combined use of intensity and depth information greatly helps in the estimation of the local 3D movements (range flow) of moving surfaces. We demonstrate how the two can be combined in both: a local total least squares algorithm, and an iterative global variational technique. While the former assumes locally constant flow, the latter relies on a smoothly varying flow field. The improvement achieved through incorporating intensity is illustrated qualitatively and quantitatively on synthetic and real test data. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods allow. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization-two problems commonly associated with the basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> We describe a method for computing a dense estimate of motion and disparity, given a stereo video sequence containing moving non-rigid objects. In contrast to previous approaches, motion and disparity are estimated simultaneously from a single coherent probabilistic model that correctly accounts for all occlusions, depth discontinuities, and motion discontinuities. The results demonstrate that simultaneous estimation of motion and disparity is superior to estimating either in isolation, and show the promise of the technique for accurate, probabilistically justified, scene analysis. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> This paper presents a method for scene flow estimation from a calibrated stereo image sequence. The scene flow contains the 3-D displacement field of scene points, so that the 2-D optical flow can be seen as a projection of the scene flow onto the images. We propose to recover the scene flow by coupling the optical flow estimation in both cameras with dense stereo matching between the images, thus reducing the number of unknowns per image point. The use of a variational framework allows us to properly handle discontinuities in the observed surfaces and in the 3-D displacement field. Moreover our approach handles occlusions both for the optical flow and the stereo. We obtain a partial differential equations system coupling both the optical flow and the stereo, which is numerically solved using an original multi- resolution algorithm. 
Whereas previous variational methods were estimating the 3-D reconstruction at time t and the scene flow separately, our method jointly estimates both in a single optimization. We present numerical results on synthetic data with ground truth information, and we also compare the accuracy of the scene flow projected in one camera with a state-of-the-art single-camera optical flow computation method. Results are also presented on a real stereo sequence with large motion and stereo discontinuities. Source code and sample data are available for the evaluation of the algorithm. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> This paper presents a technique for estimating the three-dimensional velocity vector field that describes the motion of each visible scene point (scene flow). The technique presented uses two consecutive image pairs from a stereo sequence. The main contribution is to decouple the position and velocity estimation steps, and to estimate dense velocities using a variational approach. We enforce the scene flow to yield consistent displacement vectors in the left and right images. The decoupling strategy has two main advantages: Firstly, we are independent in choosing a disparity estimation technique, which can yield either sparse or dense correspondences, and secondly, we can achieve frame rates of 5 fps on standard consumer hardware. The approach provides dense velocity estimates with accurate results at distances up to 50 meters. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> We extend estimation of range flow to handle brightness changes in image data caused by inhomogeneous illumination. Standard range flow computes 3D velocity fields using both range and intensity image sequences. Toward this end, range flow estimation combines a depth change model with a brightness constancy model. However, local brightness is generally not preserved when object surfaces rotate relative to the camera or the light sources, or when surfaces move in inhomogeneous illumination. We describe and investigate different approaches to handle such brightness changes. A straightforward approach is to prefilter the intensity data such that brightness changes are suppressed, for instance, by a highpass or a homomorphic filter. Such prefiltering may, though, reduce the signal-to-noise ratio. An alternative novel approach is to replace the brightness constancy model by 1) a gradient constancy model, or 2) by a combination of gradient and brightness constancy constraints used earlier successfully for optical flow, or 3) by a physics-based brightness change model. In performance tests, the standard version and the novel versions of range flow estimation are investigated using prefiltered or nonprefiltered synthetic data with available ground truth. Furthermore, the influences of additive Gaussian noise and simulated shot noise are investigated. Finally, we compare all range flow estimators on real data. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> A simple seed growing algorithm for estimating scene flow in a stereo setup is presented. Two calibrated and synchronized cameras observe a scene and output a sequence of image pairs. The algorithm simultaneously computes a disparity map between the image pairs and optical flow maps between consecutive images. This, together with calibration data, is an equivalent representation of the 3D scene flow, i.e. 
a 3D velocity vector is associated with each reconstructed point. The proposed method starts from correspondence seeds and propagates these correspondences to their neighborhood. It is accurate for complex scenes with large motions and produces temporally-coherent stereo disparity and optical flow results. The algorithm is fast due to inherent search space reduction. An explicit comparison with recent methods of spatiotemporal stereo and variational optical and scene flow is provided. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> In this paper, we present a framework for range flow estimation from Microsoft's multi-modal imaging device Kinect. We address all essential stages of the flow computation process, starting from the calibration of the Kinect, over the alignment of the range and color channels, to the introduction of a novel multi-modal range flow algorithm which is robust against typical (technology dependent) range estimation artifacts. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> The motion field of a scene can be used for object segmentation and to provide features for classification tasks like action recognition. Scene flow is the full 3D motion field of the scene, and is more difficult to estimate than it's 2D counterpart, optical flow. Current approaches use a smoothness cost for regularisation, which tends to over-smooth at object boundaries. This paper presents a novel formulation for scene flow estimation, a collection of moving points in 3D space, modelled using a particle filter that supports multiple hypotheses and does not oversmooth the motion field. In addition, this paper is the first to address scene flow estimation, while making use of modern depth sensors and monocular appearance images, rather than traditional multi-viewpoint rigs. The algorithm is applied to an existing scene flow dataset, where it achieves comparable results to approaches utilising multiple views, while taking a fraction of the time. <s> BIB009 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> We present a novel method for recovering the 3D structure and scene flow from calibrated multi-view sequences. We propose a 3D point cloud parametrization of the 3D structure and scene flow that allows us to directly estimate the desired unknowns. A unified global energy functional is proposed to incorporate the information from the available sequences and simultaneously recover both depth and scene flow. The functional enforces multi-view geometric consistency and imposes brightness constancy and piecewise smoothness assumptions directly on the 3D unknowns. It inherently handles the challenges of discontinuities, occlusions, and large displacements. The main contribution of this work is the fusion of a 3D representation and an advanced variational framework that directly uses the available multi-view information. This formulation allows us to advantageously bind the 3D unknowns in time and space. Different from optical flow and disparity, the proposed method results in a nonlinear mapping between the images' coordinates, thus giving rise to additional challenges in the optimization process. Our experiments on real and synthetic data demonstrate that the proposed method successfully recovers the 3D structure and scene flow despite the complicated nonconvex optimization problem. 
<s> BIB010 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> We propose a depth and image scene flow estimation method taking the input of a binocular video. The key component is motion-depth temporal consistency preservation, making computation in long sequences reliable. We tackle a number of fundamental technical issues, including connection establishment between motion and depth, structure consistency preservation in multiple frames, and long-range temporal constraint employment for error correction. We address all of them in a unified depth and scene flow estimation framework. Our main contributions include development of motion trajectories, which robustly link frame correspondences in a voting manner, rejection of depth/motion outliers through temporal robust regression, novel edge occurrence map estimation, and introduction of anisotropic smoothing priors for proper regularization. <s> BIB011 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> There is close relationship between depth information and scene flow. However, it's not fully utilized in most of scene flow estimators. In this paper, we propose a method to estimate scene flow with monocular appearance images and corresponding depth images. We combine a global energy optimization and a bilateral filter into a two-step framework. Occluded pixels are detected by the consistency of appearance and depth, and the corresponding data errors are excluded from the energy function. The appearance and depth information are also utilized in anisotropic regularization to suppress over-smoothing. The multi-channel bilateral filter is introduced to correct scene flow with various information in non-local areas. The proposed approach is tested on Middlebury dataset and the sequences captured by KINECT. Experiment results show that it can estimate dense and accurate scene flow in challenging environments and keep the discontinuity around motion boundaries. <s> BIB012 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> 3-D motion estimation is a fundamental problem that has far-reaching implications in robotics. A scene flow formulation is attractive as it makes no assumptions about scene complexity, object rigidity, or camera motion. RGB-D cameras provide new information useful for computing dense 3-D flow in challenging scenes. In this work we show how to generalize two-frame variational 2-D flow algorithms to 3-D. We show that scene flow can be reliably computed using RGB-D data, overcoming depth noise and outperforming previous results on a variety of scenes. We apply dense 3-D flow to rigid motion segmentation. <s> BIB013 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> The scene flow describes the 3D motion of every point in a scene between two time steps. We present a novel method to estimate a dense scene flow using intensity and depth data. It is well known that local methods are more robust under noise while global techniques yield dense motion estimation. We combine local and global constraints to solve for the scene flow in a variational framework. An adaptive TV (Total Variation) regularization is used to preserve motion discontinuities. Besides, we constrain the motion using a set of 3D correspondences to deal with large displacements. In the experimentation our approach outperforms previous scene flow from intensity and depth methods in terms of accuracy. 
<s> BIB014 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> In this paper, an algorithm is presented for estimating scene flow, which is a richer, 3D analogue of Optical Flow. The approach operates orders of magnitude faster than alternative techniques, and is well suited to further performance gains through parallelized implementation. The algorithm employs multiple hypothesis to deal with motion ambiguities, rather than the traditional smoothness constraints, removing oversmoothing errors and providing significant performance improvements on benchmark data, over the previous state of the art. The approach is flexible, and capable of operating with any combination of appearance and/or depth sensors, in any setup, simultaneously estimating the structure and motion if necessary. Additionally, the algorithm propagates information over time to resolve ambiguities, rather than performing an isolated estimation at each frame, as in contemporary approaches. Approaches to smoothing the motion field without sacrificing the benefits of multiple hypotheses are explored, and a probabilistic approach to Occlusion estimation is demonstrated, leading to 10% and 15% improved performance respectively. Finally, a data driven tracking approach is described, and used to estimate the 3D trajectories of hands during sign language, without the need to model complex appearance variations at each viewpoint. <s> BIB015 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> Scene flow is defined as the motion field in 3D space, and can be computed from a single view when using an RGBD sensor. We propose a new scene flow approach that exploits the local and piecewise rigidity of real world scenes. By modeling the motion as a field of twists, our method encourages piecewise smooth solutions of rigid body motions. We give a general formulation to solve for local and global rigid motions by jointly using intensity and depth data. In order to deal efficiently with a moving camera, we model the motion as a rigid component plus a non-rigid residual and propose an alternating solver. The evaluation demonstrates that the proposed method achieves the best results in the most commonly used scene flow benchmark. Through additional experiments we indicate the general applicability of our approach in a variety of different scenarios. <s> BIB016 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> This paper investigates motion estimation and segmentation of independently moving objects in video sequences that contain depth and intensity information, such as videos captured by a Time of Flight camera. Specifically, we present a motion estimation algorithm which is based on integration of depth and intensity data. The resulting motion information is used to derive long-term point trajectories. A segmentation technique groups the trajectories according to their motion and depth similarity into spatio-temporal segments. Quantitative and qualitative analysis of synthetic and real world videos verify the proposed motion estimation and segmentation approach. The proposed framework extracts independently moving objects from videos recorded by a Time of Flight camera. <s> BIB017 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> In this paper we present a novel method to accurately estimate the dense 3D motion field, known as scene flow, from depth and intensity acquisitions. 
The method is formulated as a convex energy optimization, where the motion warping of each scene point is estimated through a projection and back-projection directly in 3D space. We utilize higher order regularization which is weighted and directed according to the input data by an anisotropic diffusion tensor. Our formulation enables the calculation of a dense flow field which does not penalize smooth and non-rigid movements while aligning motion boundaries with strong depth boundaries. An efficient parallelization of the numerical algorithm leads to runtimes in the order of 1s and therefore enables the method to be used in a variety of applications. We show that this novel scene flow calculation outperforms existing approaches in terms of speed and accuracy. Furthermore, we demonstrate applications such as camera pose estimation and depth image super resolution, which are enabled by the high accuracy of the proposed method. We show these applications using modern depth sensors such as Microsoft Kinect or the PMD Nano Time-of-Flight sensor. <s> BIB018 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> We present a novel method for dense variational scene flow estimation based a multiscale Ternary Census Transform in combination with a patchwise Closest Points depth data term. On the one hand, the Ternary Census Transform in the intensity data term is capable of handling illumination changes, low texture and noise. On the other hand, the patchwise Closest Points search in the depth data term increases the robustness in low structured regions. Further, we utilize higher order regularization which is weighted and directed according to the input data by an anisotropic diffusion tensor. This allows to calculate a dense and accurate flow field which supports smooth as well as non-rigid movements while preserving flow boundaries. The numerical algorithm is solved based on a primal-dual formulation and is efficiently parallelized to run at high frame rates. In an extensive qualitative and quantitative evaluation we show that this novel method for scene flow calculation outperforms existing approaches. The method is applicable to any sensor delivering dense depth and intensity data such as Microsoft Kinect or Intel Gesture Camera. <s> BIB019 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> As consumer depth sensors become widely available, estimating scene flow from RGBD sequences has received increasing attention. Although the depth information allows the recovery of 3D motion from a single view, it poses new challenges. In particular, depth boundaries are not well-aligned with RGB image edges and therefore not reliable cues to localize 2D motion boundaries. In addition, methods that extend the 2D optical flow formulation to 3D still produce large errors in occlusion regions. To better use depth for occlusion reasoning, we propose a layered RGBD scene flow method that jointly solves for the scene segmentation and the motion. Our key observation is that the noisy depth is sufficient to decide the depth ordering of layers, thereby avoiding a computational bottleneck for RGB layered methods. Furthermore, the depth enables us to estimate a per-layer 3D rigid motion to constrain the motion of each layer. Experimental results on both the Middlebury and real-world sequences demonstrate the effectiveness of the layered approach for RGBD scene flow estimation. 
<s> BIB020 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> We propose a novel joint registration and segmentation approach to estimate scene flow from RGB-D images. Instead of assuming the scene to be composed of a number of independent rigidly-moving parts, we use non-binary labels to capture non-rigid deformations at transitions between the rigid parts of the scene. Thus, the velocity of any point can be computed as a linear combination (interpolation) of the estimated rigid motions, which provides better results than traditional sharp piecewise segmentations. Within a variational framework, the smooth segments of the scene and their corresponding rigid velocities are alternately refined until convergence. A K-means-based segmentation is employed as an initialization, and the number of regions is subsequently adapted during the optimization process to capture any arbitrary number of independently moving objects. We evaluate our approach with both synthetic and real RGB-D images that contain varied and large motions. The experiments show that our method estimates the scene flow more accurately than the most recent works in the field, and at the same time provides a meaningful segmentation of the scene based on 3D motion. <s> BIB021 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> The emergence of modern, affordable and accurate RGB-D sensors increases the need for single view approaches to estimate 3-dimensional motion, also known as scene flow. In this paper we propose a coarse-to-fine, dense, correspondence-based scene flow formulation that relies on explicit geometric reasoning to account for the effects of large displacements and to model occlusion. Our methodology enforces local motion rigidity at the level of the 3d point cloud without explicitly smoothing the parameters of adjacent neighborhoods. By integrating all geometric and photometric components in a single, consistent, occlusion-aware energy model, defined over overlapping, image-adaptive neighborhoods, our method can process fast motions and large occlusions areas, as present in challenging datasets like the MPI Sintel Flow Dataset, recently augmented with depth information. By explicitly modeling large displacements and occlusion, we can handle difficult sequences which cannot be currently processed by state of the art scene flow methods. We also show that by integrating depth information into the model, we can obtain correspondence fields with improved spatial support and sharper boundaries compared to the state of the art, large-displacement optical flow methods. <s> BIB022 </s> Scene Flow Estimation: A Survey <s> Literature Year Data term Description <s> We present an approach for computing dense scene flow from two large displacement RGB-D images. When dealing with large displacements the crucial step is to estimate the overall motion correctly. While state-of-the-art approaches focus on RGB information to establish guiding correspondences, we explore the power of depth edges. To achieve this, we present a new graph matching technique that brings sparse depth edges into correspondence. An additional contribution is the formulation of a continuous-label energy which is used to densify the sparse graph matching output. We present results on challenging Kinect images, for which we outperform state-of-the-art techniques. <s> BIB023
|
Literature      Year    Data term           Description
Li BIB002       2005    -                   First method under a binocular setting in the early stage.
Isard BIB003    2006    E_fr, E_dt, E_cr    First to utilize a cross term; formulated under a Markov random field (MRF) framework.
Huguet BIB004   2007    -                   Introduced a basic framework for scene flow under the binocular setting.
Wedel BIB005    2008    E_fl, E_fr, E_dt    Decoupled motion and stereo.
Basha BIB010    2010    -                   -
Cech BIB007     2011    -                   Utilized a seeded growing-propagation framework for fast implementation.
Hung BIB011     2013    -                   Assumed additional RGB intensity and gradient constancy.
Table 1: Typical methods under the binocular setting

RGB-D information can be seen as a cheap data source for geometric knowledge, and the depth information from RGB-D cameras has been used as a cheap and efficient source for layering BIB020 , since it provides the layer ordering directly without an exhaustive search. However, the applicability of RGB-D scene flow estimation is restricted by the limited sensing range and the unstable performance under strong illumination or reflection, so it struggles in outdoor scenes. A comparison between the state-of-the-art consumer RGB-D cameras is presented in Table 2 to show the limitations of current RGB-D sensors in terms of range, frame rate and angle of depth measurement. Moreover, the quality of the depth map is far from satisfactory due to invalid data around object boundary regions, noise and erroneous pixels, as Figure 5 presents. Gottfried was the first to use the Kinect sensor for scene flow estimation BIB008 ; he addressed all essential stages, including calibration, alignment and estimation. Afterwards, following the common optical flow optimization, scene flow was solved under the variational framework BIB013 BIB012 , which was then modified by adding local rigidity priors BIB014 BIB016 BIB017 . Similarly, pixel assignment methods are becoming more and more popular BIB018 BIB019 BIB021 BIB022 ; in this way the discontinuities can be preserved smoothly. In addition, the scene particle method BIB009 BIB015 and the feature matching method BIB023 have been applied to scene flow estimation as supplements to the common variational methods, each with its own pros and cons. The basic data terms for RGB-D scene flow estimation consist of a brightness constancy term (BC) and a depth change consistency term (DCC), which are utilized by most RGB-D scene flow estimation methods BIB001 BIB008 BIB013 BIB014 BIB018 BIB017 ; an illustrative sketch of these two terms is given below. For robustness against varying illumination, Luckins added diverse additional color constraints, e.g., RGB, l*a*b and hue , and Schuchert combined intensity constancy with gradient constancy for per-pixel estimation BIB006 . Sun added layering and occlusion reasoning penalties for better performance BIB020 .
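Since the BC and DCC expressions themselves are not reproduced in this excerpt, the following Python sketch illustrates the two residuals in their commonly used form; the function name, the bilinear warping and the specific variable names are illustrative assumptions, not any cited method's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rgbd_data_residuals(I0, Z0, I1, Z1, u, v, w):
    """Per-pixel brightness constancy (BC) and depth change consistency (DCC) residuals.

    I0, I1 : grayscale intensity images at times t and t+1 (H x W arrays)
    Z0, Z1 : depth maps at times t and t+1 (H x W arrays)
    u, v   : image-plane flow fields (H x W), in pixels
    w      : flow component along the depth axis (H x W), in the same units as Z
    """
    H, W = I0.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    coords = np.stack([ys + v, xs + u])      # sampling positions in the frame at t+1

    I1_warped = map_coordinates(I1, coords, order=1, mode='nearest')
    Z1_warped = map_coordinates(Z1, coords, order=1, mode='nearest')

    bc = I1_warped - I0                      # brightness constancy residual
    dcc = Z1_warped - (Z0 + w)               # depth change consistency residual
    return bc, dcc
```

In an energy-based method these per-pixel residuals would be passed through a robust penalty and summed over the image; the cited approaches differ mainly in the choice of penalty, the relative weighting of BC and DCC, and how occlusions and invalid depth pixels are handled.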
|
Scene Flow Estimation: A Survey <s> Light field data <s> Ordinary cameras gather light across the area of their lens aperture, and the light striking a given subregion of the aperture is structured somewhat differently than the light striking an adjacent subregion. By analyzing this optical structure, one can infer the depths of the objects in the scene, i.e. one can achieve single lens stereo. The authors describe a camera for performing this analysis. It incorporates a single main lens along with a lenticular array placed at the sensor plane. The resulting plenoptic camera provides information about how the scene would look when viewed from a continuum of possible viewpoints bounded by the main lens aperture. Deriving depth information is simpler than in a binocular stereo system because the correspondence problem is minimized. The camera extracts information about both horizontal and vertical parallax, which improves the reliability of the depth estimates. > <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Light field data <s> Light-field cameras have recently become available to the consumer market. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth cues from both defocus and correspondence are available simultaneously in a single capture. Previously, defocus could be achieved only through multiple image exposures focused at different depths, while correspondence cues needed multiple exposures at different viewpoints or multiple cameras, moreover, both cues could not easily be obtained together. In this paper, we present a novel simple and principled algorithm that computes dense depth estimation by combining both defocus and correspondence depth cues. We analyze the x-u 2D epipolar image (EPI), where by convention we assume the spatial x coordinate is horizontal and the angular u coordinate is vertical (our final algorithm uses the full 4D EPI). We show that defocus depth cues are obtained by computing the horizontal (spatial) variance after vertical (angular) integration, and correspondence depth cues by computing the vertical (angular) variance. We then show how to combine the two cues into a high quality depth map, suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Light field data <s> 2D spatial image windows are used for comparing pixel values in computer vision applications such as correspondence for optical flow and 3D reconstruction, bilateral filtering, and image segmentation. However, pixel window comparisons can suffer from varying defocus blur and perspective at different depths, and can also lead to a loss of precision. In this paper, we leverage the recent use of light-field cameras to propose alternative oriented light-field windows that enable more robust and accurate pixel comparisons. For Lambertian surfaces focused to the correct depth, the 2D distribution of angular rays from a pixel remains consistent. We build on this idea to develop an oriented 4D light-field window that accounts for shearing (depth), translation (matching), and windowing. Our main application is to scene flow, a generalization of optical flow to the 3D vector field describing the motion of each point in the scene. 
We show significant benefits of oriented light-field windows over standard 2D spatial windows. We also demonstrate additional applications of oriented light-field windows for bilateral filtering and image segmentation. <s> BIB003
|
Light field data has enabled image refocusing and depth estimation with rich information BIB001 . That is to say, it can not only be treated as a depth data source, but also provides much more information for constraints and regularization. Srinivasan was the first, and so far the only one, to utilize a light field camera (Lytro Illum) for scene flow estimation BIB003 . He proposed an oriented light-field window method as a matching scheme and embedded it into the common RGB-D scene flow estimation framework, where the depth data was acquired using the method proposed by Tao BIB002 . In terms of the data term, he only took the brightness constancy of the oriented light-field window into consideration, as illustrated in Equation 9, where P is the full oriented light-field window operator and the penalty function is the L2 norm (a simplified sketch of such a window comparison is given below). Srinivasan's paper brings new ideas to extend the scene flow estimation area, since light field data compensates for the shortcomings of depth data sources in terms of sensing range and robustness in outdoor scenes.
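Equation 9 and the operator P are only referenced here, so the following Python sketch conveys the underlying idea of comparing depth-sheared ("oriented") light-field windows under an L2 penalty. The light-field indexing convention, the function names and the omission of the windowing and normalization used in the actual paper are illustrative assumptions rather than Srinivasan's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oriented_lf_window(L, y0, x0, shear, half=3):
    """Extract a sheared ("oriented") 4D light-field window around spatial pixel (y0, x0).

    L     : 4D light field, indexed as L[s, t, y, x] (angular s, t; spatial y, x) -- assumed layout
    shear : epipolar-plane slope induced by the hypothesised depth of the pixel
    half  : half-size of the spatial window
    Returns an array of shape (S, T, 2*half+1, 2*half+1).
    """
    S, T, H, W = L.shape
    sc, tc = (S - 1) / 2.0, (T - 1) / 2.0
    s, t, dy, dx = np.meshgrid(np.arange(S), np.arange(T),
                               np.arange(-half, half + 1),
                               np.arange(-half, half + 1), indexing='ij')
    # Shear the spatial sampling positions according to the angular offset.
    ys = y0 + dy + shear * (s - sc)
    xs = x0 + dx + shear * (t - tc)
    coords = np.stack([s, t, ys, xs]).reshape(4, -1)
    win = map_coordinates(L, coords, order=1, mode='nearest')
    return win.reshape(S, T, 2 * half + 1, 2 * half + 1)

def window_cost(L0, L1, y0, x0, flow_yx, shear0, shear1):
    """Sum-of-squared-differences (L2 penalty) between oriented windows at t and t+1."""
    w0 = oriented_lf_window(L0, y0, x0, shear0)
    w1 = oriented_lf_window(L1, y0 + flow_yx[0], x0 + flow_yx[1], shear1)
    return np.sum((w1 - w0) ** 2)
```

The shear applied to the sampling positions is what makes the window "oriented": for a Lambertian point sheared to the correct depth, the samples across the angular dimensions remain consistent, which is the property the matching cost exploits.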
|
Scene Flow Estimation: A Survey <s> Global variational method <s> This contribution investigates local differential techniques for estimating optical flow and its derivatives based on the brightness change constraint. By using the tensor calculus representation we build the Taylor expansion of the gray-value derivatives as well as of the optical flow in a spatiotemporal neighborhood. Such a formulation simplifies a unifying framework for all existing local differential approaches and allows to derive new systems of equations to estimate the optical flow and its derivatives. We also tested various optical flow estimation approaches on real image sequences recorded by a calibrated camera fixed on the arm of a robot. By moving the arm of the robot along a precisely defined trajectory we can determine the true displacement rate of scene surface elements projected into the image plane and compare it quantitatively with the results of different optical flow estimators. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Global variational method <s> Variational methods are among the most successful approaches to calculate the optical flow between two image frames. A particularly appealing formulation is based on total variation (TV) regularization and the robust L1 norm in the data fidelity term. This formulation can preserve discontinuities in the flow field and offers an increased robustness against illumination changes, occlusions and noise. In this work we present a novel approach to solve the TV-L1 formulation. Our method results in a very efficient numerical scheme, which is based on a dual formulation of the TV energy and employs an efficient point-wise thresholding step. Additionally, our approach can be accelerated by modern graphics processing units. We demonstrate the real-time performance (30 fps) of our approach for video inputs at a resolution of 320 × 240 pixels. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Global variational method <s> The novel concept of total generalized variation of a function $u$ is introduced, and some of its essential properties are proved. Differently from the bounded variation seminorm, the new concept involves higher-order derivatives of $u$. Numerical examples illustrate the high quality of this functional as a regularization term for mathematical imaging problems. In particular this functional selectively regularizes on different regularity levels and, as a side effect, does not lead to a staircasing effect. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Global variational method <s> We present an approach to 3D scene flow estimation, which exploits that in realistic scenarios image motion is frequently dominated by observer motion and independent, but rigid object motion. We cast the dense estimation of both scene structure and 3D motion from sequences of two or more views as a single energy minimization problem. We show that agnostic smoothness priors, such as the popular total variation, are biased against motion discontinuities in viewing direction. Instead, we propose to regularize by encouraging local rigidity of the 3D scene. We derive a local rigidity constraint of the 3D scene flow and define a smoothness term that penalizes deviations from that constraint, thus favoring solutions that consist largely of rigidly moving parts. Our experiments show that the new rigid motion prior reduces the 3D flow error by 42% compared to standard TV regularization with the same data term. 
<s> BIB004 </s> Scene Flow Estimation: A Survey <s> Global variational method <s> We present a novel method for recovering the 3D structure and scene flow from calibrated multi-view sequences. We propose a 3D point cloud parametrization of the 3D structure and scene flow that allows us to directly estimate the desired unknowns. A unified global energy functional is proposed to incorporate the information from the available sequences and simultaneously recover both depth and scene flow. The functional enforces multi-view geometric consistency and imposes brightness constancy and piecewise smoothness assumptions directly on the 3D unknowns. It inherently handles the challenges of discontinuities, occlusions, and large displacements. The main contribution of this work is the fusion of a 3D representation and an advanced variational framework that directly uses the available multi-view information. This formulation allows us to advantageously bind the 3D unknowns in time and space. Different from optical flow and disparity, the proposed method results in a nonlinear mapping between the images' coordinates, thus giving rise to additional challenges in the optimization process. Our experiments on real and synthetic data demonstrate that the proposed method successfully recovers the 3D structure and scene flow despite the complicated nonconvex optimization problem. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Global variational method <s> There is close relationship between depth information and scene flow. However, it's not fully utilized in most of scene flow estimators. In this paper, we propose a method to estimate scene flow with monocular appearance images and corresponding depth images. We combine a global energy optimization and a bilateral filter into a two-step framework. Occluded pixels are detected by the consistency of appearance and depth, and the corresponding data errors are excluded from the energy function. The appearance and depth information are also utilized in anisotropic regularization to suppress over-smoothing. The multi-channel bilateral filter is introduced to correct scene flow with various information in non-local areas. The proposed approach is tested on Middlebury dataset and the sequences captured by KINECT. Experiment results show that it can estimate dense and accurate scene flow in challenging environments and keep the discontinuity around motion boundaries. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Global variational method <s> We introduce a framework to estimate and refine 3D scene flow which connects 3D structures of a scene across different frames. In contrast to previous approaches which compute 3D scene flow that connects depth maps from a stereo image sequence or from a depth camera, our approach takes advantage of full 3D reconstruction which computes the 3D scene flow that connects 3D point clouds from multi-view stereo system. Our approach uses a standard multi-view stereo and optical flow algorithm to compute the initial 3D scene flow. A unique two-stage refinement process regularizes the scene flow direction and magnitude sequentially. The scene flow direction is refined by utilizing 3D neighbor smoothness defined by tensor voting. The magnitude of the scene flow is refined by connecting the implicit surfaces across the consecutive 3D point clouds. Our estimated scene flow is temporally consistent. Our approach is efficient, model free, and it is effective in error corrections and outlier rejections. 
We tested our approach on both synthetic and real-world datasets. Our experimental results show that our approach out-performs previous algorithms quantitatively on synthetic dataset, and it improves the reconstructed 3D model from the refined 3D point cloud in real-world dataset. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Global variational method <s> Scene flow is defined as the motion field in 3D space, and can be computed from a single view when using an RGBD sensor. We propose a new scene flow approach that exploits the local and piecewise rigidity of real world scenes. By modeling the motion as a field of twists, our method encourages piecewise smooth solutions of rigid body motions. We give a general formulation to solve for local and global rigid motions by jointly using intensity and depth data. In order to deal efficiently with a moving camera, we model the motion as a rigid component plus a non-rigid residual and propose an alternating solver. The evaluation demonstrates that the proposed method achieves the best results in the most commonly used scene flow benchmark. Through additional experiments we indicate the general applicability of our approach in a variety of different scenarios. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> Global variational method <s> In this paper we present a novel method to accurately estimate the dense 3D motion field, known as scene flow, from depth and intensity acquisitions. The method is formulated as a convex energy optimization, where the motion warping of each scene point is estimated through a projection and back-projection directly in 3D space. We utilize higher order regularization which is weighted and directed according to the input data by an anisotropic diffusion tensor. Our formulation enables the calculation of a dense flow field which does not penalize smooth and non-rigid movements while aligning motion boundaries with strong depth boundaries. An efficient parallelization of the numerical algorithm leads to runtimes in the order of 1s and therefore enables the method to be used in a variety of applications. We show that this novel scene flow calculation outperforms existing approaches in terms of speed and accuracy. Furthermore, we demonstrate applications such as camera pose estimation and depth image super resolution, which are enabled by the high accuracy of the proposed method. We show these applications using modern depth sensors such as Microsoft Kinect or the PMD Nano Time-of-Flight sensor. <s> BIB009 </s> Scene Flow Estimation: A Survey <s> Global variational method <s> We present a novel method for dense variational scene flow estimation based a multiscale Ternary Census Transform in combination with a patchwise Closest Points depth data term. On the one hand, the Ternary Census Transform in the intensity data term is capable of handling illumination changes, low texture and noise. On the other hand, the patchwise Closest Points search in the depth data term increases the robustness in low structured regions. Further, we utilize higher order regularization which is weighted and directed according to the input data by an anisotropic diffusion tensor. This allows to calculate a dense and accurate flow field which supports smooth as well as non-rigid movements while preserving flow boundaries. The numerical algorithm is solved based on a primal-dual formulation and is efficiently parallelized to run at high frame rates. 
In an extensive qualitative and quantitative evaluation we show that this novel method for scene flow calculation outperforms existing approaches. The method is applicable to any sensor delivering dense depth and intensity data such as Microsoft Kinect or Intel Gesture Camera. <s> BIB010 </s> Scene Flow Estimation: A Survey <s> Global variational method <s> This paper presents the first method to compute dense scene flow in real-time for RGB-D cameras. It is based on a variational formulation where brightness constancy and geometric consistency are imposed. Accounting for the depth data provided by RGB-D cameras, regularization of the flow field is imposed on the 3D surface (or set of surfaces) of the observed scene instead of on the image plane, leading to more geometrically consistent results. The minimization problem is efficiently solved by a primal-dual algorithm which is implemented on a GPU, achieving a previously unseen temporal performance. Several tests have been conducted to compare our approach with a state-of-the-art work (RGB-D flow) where quantitative and qualitative results are evaluated. Moreover, an additional set of experiments have been carried out to show the applicability of our work to estimate motion in real-time. Results demonstrate the accuracy of our approach, which outperforms the RGB-D flow, and which is able to estimate heterogeneous and non-rigid motions at a high frame rate. <s> BIB011
|
The global variational method has always been a classical approach for both optical flow and scene flow estimation. As Equation 1 presents, data terms provide local constraints to keep consistency, while regularization terms propagate information globally to yield a dense estimation; the motion field is thus constrained both locally and globally under the total variational (TV) framework.

Regularization term. Under the total variational framework, a total variation (TV) regularizer is commonly chosen as the regularization term, or in other words, the smoothness term; for the binocular or the RGB-D setting it penalizes the gradient magnitude of each component of the flow field. Besides, Basha estimated scene flow over a multi-view 3D point cloud BIB005 ; thus the optical flow constraint was replaced by a three-dimensional scene flow constraint, and the smoothness of depth was added to the regularization term to penalize the shape. Zhang modified the regularization term with an anisotropic smoothness term to choose the more reliable cue between depth and appearance at each pixel BIB006 , and a bilateral filter was utilized for edge preservation as well. A tensor voting approach was utilized under multi-view stereopsis BIB007 , where the scene flow direction was refined by tensor voting in the temporal neighborhood and its magnitude by a physical property between the two frames. Nevertheless, Vogel stated that the TV regularizer is not well suited to scene flow estimation because it cannot handle discontinuities in the depth direction BIB004 . In his paper, scene flow was estimated by simultaneously regularizing the global rigid motion and the local non-rigid residual, where the regularization term consists of a TV regularizer and a local rigidity prior penalized by the Lorentzian function Ψ(s) = log(1 + s^2 / (2σ^2)).

Moreover, Zach introduced a duality-based optimization method for optical flow estimation in 2007 and achieved real-time performance BIB002 . It splits the energy function into coupled sub-problems that are handled within the same framework and optimized in parallel, which remarkably lowers the computational cost and complexity without loss of accuracy. Following Zach's paper, Quiroga introduced an auxiliary flow into the energy function, which decomposes the minimization into two simpler problems BIB008 . By alternately updating the scene flow and the auxiliary flow, the problem can be solved with great efficiency. A brief scheme is illustrated as follows (the standard form of the three steps is sketched below).
Step 1: an auxiliary flow V' is introduced, and the problem is solved by alternately updating V' and V; the energy function defined in Equation 1 is relaxed by coupling V and V' through a quadratic term weighted by a small constant θ.
Step 2: fixing the scene flow V, the auxiliary flow V' is updated by minimizing the data term together with the coupling term.
Step 3: fixing the auxiliary flow V', the scene flow V is updated by minimizing the coupling term together with the regularization term.
Similarly, Ferstl embedded a primal-dual algorithm into a coarse-to-fine framework BIB009 BIB010 , where a total generalized variation (TGV) regularization BIB003 along with an anisotropic diffusion tensor was utilized to preserve edges. Moreover, Jaimez achieved real-time RGB-D scene flow estimation BIB011 . A brief summary of the typical global variational methods is given in Table 4 .
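For completeness, the alternating scheme described in Steps 1-3 can be written in the standard quadratic-relaxation form shown below; E_data and E_reg are generic placeholders here, since the concrete data and regularization terms differ across the cited papers.

```latex
% Quadratic-relaxation (decoupling) form of the alternating scheme in Steps 1-3.
% E_data and E_reg are generic placeholders; the cited papers instantiate them differently.
\begin{align*}
\text{Step 1:}\quad & E(\mathbf{V}, \mathbf{V}') =
  E_{data}(\mathbf{V}') + \tfrac{1}{2\theta}\,\lVert \mathbf{V} - \mathbf{V}' \rVert^2 + E_{reg}(\mathbf{V}) \\
\text{Step 2:}\quad & \mathbf{V}' \leftarrow \arg\min_{\mathbf{V}'}\;
  E_{data}(\mathbf{V}') + \tfrac{1}{2\theta}\,\lVert \mathbf{V} - \mathbf{V}' \rVert^2 \\
\text{Step 3:}\quad & \mathbf{V} \leftarrow \arg\min_{\mathbf{V}}\;
  \tfrac{1}{2\theta}\,\lVert \mathbf{V} - \mathbf{V}' \rVert^2 + E_{reg}(\mathbf{V})
\end{align*}
% As \theta becomes small the coupling term forces V and V' together, so the relaxed problem
% approaches the original E_data(V) + E_reg(V) while each sub-problem stays simple to solve.
```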
|
Scene Flow Estimation: A Survey <s> Pixel assignment method <s> Algorithms that must deal with complicated global functions of many variables often exploit the manner in which the given functions factor as a product of "local" functions, each of which depends on a subset of the variables. Such a factorization can be visualized with a bipartite graph that we call a factor graph, In this tutorial paper, we present a generic message-passing algorithm, the sum-product algorithm, that operates in a factor graph. Following a single, simple computational rule, the sum-product algorithm computes-either exactly or approximately-various marginal functions derived from the global function. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward/backward algorithm, the Viterbi algorithm, the iterative "turbo" decoding algorithm, Pearl's (1988) belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform (FFT) algorithms. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> In this paper, novel algorithms computing dense 3D scene flow from multiview image sequences are described. A new hierarchical rule-based stereo matching algorithm is presented to estimate the initial disparity map. Different available constraints under a multiview camera setup are investigated and then utilized in the proposed motion estimation algorithms. We show two different formulations for 3D scene flow computation. One formulation assumes that initial disparity map is accurate while the other does not make this assumption. Image segmentation information is used to maintain the motion and depth discontinuities. Iterative implementations are used to successfully compute 3D scene flow and structure at every point in the reference image. Novel hard constraints are introduced in this paper to make the algorithms more accurate and robust. Promising experimental results are seen by applying our algorithms to real imagery. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods allow. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization-two problems commonly associated with the basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> Many computer vision applications rely on the efficient optimization of challenging, so-called non-submodular, binary pairwise MRFs. 
A promising graph cut based approach for optimizing such MRFs known as "roof duality" was recently introduced into computer vision. We study two methods which extend this approach. First, we discuss an efficient implementation of the "probing" technique introduced recently by Bows et al. (2006). It simplifies the MRF while preserving the global optimum. Our code is 400-700 faster on some graphs than the implementation of the work of Bows et al. (2006). Second, we present a new technique which takes an arbitrary input labeling and tries to improve its energy. We give theoretical characterizations of local minima of this procedure. We applied both techniques to many applications, including image segmentation, new view synthesis, super-resolution, diagram recognition, parameter learning, texture restoration, and image deconvolution. For several applications we see that we are able to find the global minimum very efficiently, and considerably outperform the original roof duality approach. In comparison to existing techniques, such as graph cut, TRW, BP, ICM, and simulated annealing, we nearly always find a lower energy. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> Accurate estimation of optical flow is a challenging task, which often requires addressing difficult energy optimization problems. To solve them, most top-performing methods rely on continuous optimization algorithms. The modeling accuracy of the energy in this case is often traded for its tractability. This is in contrast to the related problem of narrow-baseline stereo matching, where the top-performing methods employ powerful discrete optimization algorithms such as graph cuts and message-passing to optimize highly non-convex energies. In this paper, we demonstrate how similar non-convex energies can be formulated and optimized discretely in the context of optical flow estimation. Starting with a set of candidate solutions that are produced by fast continuous flow estimation algorithms, the proposed method iteratively fuses these candidate solutions by the computation of minimum cuts on graphs. The obtained continuous-valued fusion result is then further improved using local gradient descent. Experimentally, we demonstrate that the proposed energy is an accurate model and that the proposed discrete-continuous optimization scheme not only finds lower energy solutions than traditional discrete or continuous optimization techniques, but also leads to flow estimates that outperform the current state-of-the-art. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> This paper addresses the problem of estimating the dense 3D motion of a scene over several frames using a set of calibrated cameras. Most current 3D motion estimation techniques are limited to estimating the motion over a single frame, unless a strong prior model of the scene (such as a skeleton) is introduced. Estimating the 3D motion of a general scene is difficult due to untextured surfaces, complex movements and occlusions. In this paper, we show that it is possible to track the surfaces of a scene over several frames, by introducing an effective prior on the scene motion. Experimental results show that the proposed method estimates the dense scene-flow over multiple frames, without the need for multiple-view reconstructions at every frame. Furthermore, the accuracy of the proposed method is demonstrated by comparing the estimated motion against a ground truth. 
<s> BIB006 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> Many methods for object recognition, segmentation, etc., rely on a tessellation of an image into "superpixels". A superpixel is an image patch which is better aligned with intensity edges than a rectangular patch. Superpixels can be extracted with any segmentation algorithm, however, most of them produce highly irregular superpixels, with widely varying sizes and shapes. A more regular space tessellation may be desired. We formulate the superpixel partitioning problem in an energy minimization framework, and optimize with graph cuts. Our energy function explicitly encourages regular superpixels. We explore variations of the basic energy, which allow a trade-off between a less regular tessellation but more accurate boundaries or better efficiency. Our advantage over previous work is computational efficiency, principled optimization, and applicability to 3D "supervoxel" segmentation. We achieve high boundary recall on images and spatial coherence on video. We also show that compact superpixels improve accuracy on a simple application of salient object segmentation. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> Estimating dense 3D scene flow from stereo sequences remains a challenging task, despite much progress in both classical disparity and 2D optical flow estimation. To overcome the limitations of existing techniques, we introduce a novel model that represents the dynamic 3D scene by a collection of planar, rigidly moving, local segments. Scene flow estimation then amounts to jointly estimating the pixel-to-segment assignment, and the 3D position, normal vector, and rigid motion parameters of a plane for each segment. The proposed energy combines an occlusion-sensitive data term with appropriate shape, motion, and segmentation regularizers. Optimization proceeds in two stages: Starting from an initial super pixelization, we estimate the shape and motion parameters of all segments by assigning a proposal from a set of moving planes. Then the pixel-to-segment assignment is updated, while holding the shape and motion parameters of the moving planes fixed. We demonstrate the benefits of our model on different real-world image sets, including the challenging KITTI benchmark. We achieve leading performance levels, exceeding competing 3D scene flow methods, and even yielding better 2D motion estimates than all tested dedicated optical flow techniques. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> This article presents a novel method for estimating the dense three-dimensional motion of a scene from multiple cameras. Our method employs an interconnected patch model of the scene surfaces. The interconnected nature of the model means that we can incorporate prior knowledge about neighbouring scene motions through the use of a Markov Random Field, whilst the patch-based nature of the model allows the use of efficient techniques for estimating the local motion at each patch. An important aspect of our work is that the method takes account of the fact that local surface texture strongly dictates the accuracy of the motion that can be estimated at each patch. Even with simple squared-error cost functions, it produces results that are either equivalent to or better than results from a method based upon a state-of-the-art optical flow technique, which uses well-developed robust cost functions and energy minimisation techniques. 
<s> BIB009 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> We propose a method to recover dense 3D scene flow from stereo video. The method estimates the depth and 3D motion field of a dynamic scene from multiple consecutive frames in a sliding temporal window, such that the estimate is consistent across both viewpoints of all frames within the window. The observed scene is modeled as a collection of planar patches that are consistent across views, each undergoing a rigid motion that is approximately constant over time. Finding the patches and their motions is cast as minimization of an energy function over the continuous plane and motion parameters and the discrete pixel-to-plane assignment. We show that such a view-consistent multi-frame scheme greatly improves scene flow computation in the presence of occlusions, and increases its robustness against adverse imaging conditions, such as specularities. Our method currently achieves leading performance on the KITTI benchmark, for both flow and stereo. <s> BIB010 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> We propose a novel joint registration and segmentation approach to estimate scene flow from RGB-D images. Instead of assuming the scene to be composed of a number of independent rigidly-moving parts, we use non-binary labels to capture non-rigid deformations at transitions between the rigid parts of the scene. Thus, the velocity of any point can be computed as a linear combination (interpolation) of the estimated rigid motions, which provides better results than traditional sharp piecewise segmentations. Within a variational framework, the smooth segments of the scene and their corresponding rigid velocities are alternately refined until convergence. A K-means-based segmentation is employed as an initialization, and the number of regions is subsequently adapted during the optimization process to capture any arbitrary number of independently moving objects. We evaluate our approach with both synthetic and real RGB-D images that contain varied and large motions. The experiments show that our method estimates the scene flow more accurately than the most recent works in the field, and at the same time provides a meaningful segmentation of the scene based on 3D motion. <s> BIB011 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> As consumer depth sensors become widely available, estimating scene flow from RGBD sequences has received increasing attention. Although the depth information allows the recovery of 3D motion from a single view, it poses new challenges. In particular, depth boundaries are not well-aligned with RGB image edges and therefore not reliable cues to localize 2D motion boundaries. In addition, methods that extend the 2D optical flow formulation to 3D still produce large errors in occlusion regions. To better use depth for occlusion reasoning, we propose a layered RGBD scene flow method that jointly solves for the scene segmentation and the motion. Our key observation is that the noisy depth is sufficient to decide the depth ordering of layers, thereby avoiding a computational bottleneck for RGB layered methods. Furthermore, the depth enables us to estimate a per-layer 3D rigid motion to constrain the motion of each layer. Experimental results on both the Middlebury and real-world sequences demonstrate the effectiveness of the layered approach for RGBD scene flow estimation. 
<s> BIB012 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> 3D scene flow estimation aims to jointly recover dense geometry and 3D motion from stereoscopic image sequences, thus generalizes classical disparity and 2D optical flow estimation. To realize its conceptual benefits and overcome limitations of many existing methods, we propose to represent the dynamic scene as a collection of rigidly moving planes, into which the input images are segmented. Geometry and 3D motion are then jointly recovered alongside an over-segmentation of the scene. This piecewise rigid scene model is significantly more parsimonious than conventional pixel-based representations, yet retains the ability to represent real-world scenes with independent object motion. It, furthermore, enables us to define suitable scene priors, perform occlusion reasoning, and leverage discrete optimization schemes toward stable and accurate results. Assuming the rigid motion to persist approximately over time additionally enables us to incorporate multiple frames into the inference. To that end, each view holds its own representation, which is encouraged to be consistent across all other viewpoints and frames in a temporal window. We show that such a view-consistent multi-frame scheme significantly improves accuracy, especially in the presence of occlusions, and increases robustness against adverse imaging conditions. Our method currently achieves leading performance on the KITTI benchmark, for both flow and stereo. <s> BIB013 </s> Scene Flow Estimation: A Survey <s> Pixel assignment method <s> We propose a continuous optimization method for solving dense 3D scene flow problems from stereo imagery. As in recent work, we represent the dynamic 3D scene as a collection of rigidly moving planar segments. The scene flow problem then becomes the joint estimation of pixel-to-segment assignment, 3D position, normal vector and rigid motion parameters for each segment, leading to a complex and expensive discrete-continuous optimization problem. In contrast, we propose a purely continuous formulation which can be solved more efficiently. Using a fine superpixel segmentation that is fixed a-priori, we propose a factor graph formulation that decomposes the problem into photometric, geometric, and smoothing constraints. We initialize the solution with a novel, high-quality initialization method, then independently refine the geometry and motion of the scene, and finally perform a global non-linear refinement using Levenberg-Marquardt. We evaluate our method in the challenging KITTI Scene Flow benchmark, ranking in third position, while being 3 to 30 times faster than the top competitors (x37 [10] and x3.75 [24]). <s> BIB014
|
Pixel assignment methods assume local rigidity, where pixels in a small region share the same motion. They consist of three steps: pixel assignment, region motion estimation, and per-pixel compensation. Each pixel is assigned to a specific region using prior knowledge. The motion of each region is then estimated, while a small motion residual of each pixel is tolerated and compensated by a subsequent refinement. In this way the method combines global denseness with local computational effectiveness, while preserving discontinuities. Zhang first introduced this idea in the early stage by fitting an affine motion model to each segment under a global smoothness constraint BIB002 . The scene was set under a multi-view system without a rigidity assumption. Li followed by applying this kind of method under a binocular setting BIB003 . In the last decade, this pixel-to-segment scene flow estimation has drawn considerable attention for its advantages. Popham proposed a pixel-to-patch assignment BIB006 BIB009 . The motion of each patch was estimated through a common variational method solved by Gauss-Seidel iteration, and the motion of the pixels in each patch was interpolated with a measurement covariance. Jaimez jointly estimated motion and segmentation BIB011 . The scene was assumed to be segmented into several labels, the pixel-to-segment issue was posed as a labelling problem, and the labelling was incorporated in the regularization. The scene flow of each segment is estimated by a global iteratively reweighted least squares (IRLS) minimization. Sun handled the issue with multiple hypotheses to jointly obtain occlusion reasoning, motion estimation and scene segmentation BIB012 . With cheaply acquired depth information, reliable layer ordering can be obtained easily for the globally rigid, locally flexible motion field estimation. Vogel solved the issue by simultaneously looking for a pixel-to-segment mapping and the segment motion BIB008 . The energy function can be formulated as E(S, P) = E_D + E_R + E_S, where S : I → S is the pixel-to-segment mapping which assigns each image pixel p ∈ I to a segment s ∈ S, and P : S → Π is the segment motion mapping which assigns each segment to a 3D rigidly moving plane π ∈ Π. E_D is the data term, E_R is the TV regularizer, and E_S is an additional segmentation regularization term that refines the segmentation during the iterations. Using a superpixel segmentation for initialization BIB007 , the energy is alternately optimized using fusion moves BIB005 and quadratic pseudo-boolean optimization (QPBO) BIB004 , and it achieves state-of-the-art performance. The general idea is depicted in Figure 6 . Afterwards, he introduced a temporal window to enforce coherence over long time intervals BIB010 , and later proposed a detailed version with deep analysis and thorough comparisons that forms the whole theoretical framework BIB013 . Similarly, in 2016, Lv followed the idea by representing the dynamic 3D scene as a collection of rigidly moving planar segments BIB014 . The complex assignment problem is formulated with a factor graph formulation BIB001 , estimated as a non-linear least squares problem and then optimized locally and globally. A brief summary of the typical pixel assignment methods is given in Table 5 .
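To make the alternation between pixel-to-segment assignment and segment motion estimation concrete, the following is a minimal, illustrative sketch and not Vogel et al.'s actual solver: segment motions are plain translations chosen from a proposal set, the data term is a simple squared error, and the regularizers, occlusion reasoning and graph-cut/QPBO machinery are all omitted. Every function and variable name here is our own.

```python
import numpy as np

def data_cost(flows, motion):
    """Toy data term E_D: squared error between observed per-pixel 3D flow
    vectors and a candidate rigid (here purely translational) segment motion."""
    return np.sum((flows - motion) ** 2, axis=-1)

def alternate_assignment_and_motion(flows, init_labels, proposals, n_iters=5):
    """Alternating sketch: (1) with the assignment fixed, each segment picks the
    moving-plane proposal with the lowest data cost; (2) with the segment motions
    fixed, each pixel is re-assigned to the segment whose motion explains it best."""
    labels = init_labels.copy()
    segment_ids = np.unique(labels)
    motions = {s: proposals[0] for s in segment_ids}
    for _ in range(n_iters):
        # segment motion update: fuse the best proposal per segment
        for s in segment_ids:
            costs = [data_cost(flows[labels == s], p).sum() for p in proposals]
            motions[s] = proposals[int(np.argmin(costs))]
        # pixel-to-segment update: refresh the labelling
        per_segment = np.stack([data_cost(flows, motions[s]) for s in segment_ids])
        labels = segment_ids[np.argmin(per_segment, axis=0)]
    return labels, motions

# toy usage: 200 pixels, two true motions, three candidate proposals
rng = np.random.default_rng(0)
flows = np.concatenate([rng.normal([1, 0, 0], 0.05, (100, 3)),
                        rng.normal([0, 0, 1], 0.05, (100, 3))])
labels, motions = alternate_assignment_and_motion(
    flows, init_labels=rng.integers(0, 2, 200),
    proposals=[np.array([1., 0., 0.]), np.array([0., 0., 1.]), np.zeros(3)])
```

Even in this toy form, the two alternating updates mirror the structure of the energy above: motion proposals are selected per segment, then the labelling is refreshed while the motions are held fixed.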
|
Scene Flow Estimation: A Survey <s> Feature matching method <s> Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is taster because it examines far fewer potential matches between the images than existing techniques Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted tor use in a stereo vision system. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Feature matching method <s> In this paper, we introduce a local image descriptor, DAISY, which is very efficient to compute densely. We also present an EM-based algorithm to compute dense depth and occlusion maps from wide-baseline image pairs using this descriptor. This yields much better results in wide-baseline situations than the pixel and correlation-based algorithms that are commonly used in narrow-baseline stereo. Also, using a descriptor makes our algorithm robust against many photometric and geometric transformations. Our descriptor is inspired from earlier ones such as SIFT and GLOH but can be computed much faster for our purposes. Unlike SURF, which can also be computed efficiently at every pixel, it does not introduce artifacts that degrade the matching performance when used densely. It is important to note that our approach is the first algorithm that attempts to estimate dense depth maps from wide-baseline image pairs, and we show that it is a good one at that with many experiments for depth estimation accuracy, occlusion detection, and comparing it against other descriptors on laser-scanned ground truth scenes. We also tested our approach on a variety of indoor and outdoor scenes with different photometric and geometric transformations and our experiments support our claim to being robust against these. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Feature matching method <s> In this paper we consider the problem of estimating a 3D motion field using multiple cameras. In particular, we focus on the situation where a depth camera and one or more color cameras are available, a common situation with recent composite sensors such as the Kinect. In this case, geometric information from depth maps can be combined with intensity variations in color images in order to estimate smooth and dense 3D motion fields. We propose a unified framework for this purpose, that can handle both arbitrary large motions and sub-pixel displacements. The estimation is cast as a linear optimization problem that can be solved very efficiently. The novelty with respect to existing scene flow approaches is that it takes advantage of the geometric information provided by the depth camera to define a surface domain over which photometric constraints can be consistently integrated in 3D. Experiments on real and synthetic data provide both qualitative and quantitative results that demonstrate the interest of the approach. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Feature matching method <s> This paper is concerned with the recovery of temporally coherent estimates of 3D structure and motion of a dynamic scene from a sequence of binocular stereo images. 
A novel approach is presented based on matching of spatiotemporal quadric elements (stequels) between views, as this primitive encapsulates both spatial and temporal image structure for 3D estimation. Match constraints are developed for bringing stequels into correspondence across binocular views. With correspondence established, temporally coherent disparity estimates are obtained without explicit motion recovery. Further, the matched stequels also will be shown to support direct recovery of scene flow estimates. Extensive algorithmic evaluation with ground truth data incorporated in both local and global correspondence paradigms shows the considerable benefit of using stequels as a matching primitive and its advantages in comparison to alternative methods of enforcing temporal coherence in disparity estimation. Additional experiments document the usefulness of stequel matching for 3D scene flow estimation. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Feature matching method <s> The scene flow describes the 3D motion of every point in a scene between two time steps. We present a novel method to estimate a dense scene flow using intensity and depth data. It is well known that local methods are more robust under noise while global techniques yield dense motion estimation. We combine local and global constraints to solve for the scene flow in a variational framework. An adaptive TV (Total Variation) regularization is used to preserve motion discontinuities. Besides, we constrain the motion using a set of 3D correspondences to deal with large displacements. In the experimentation our approach outperforms previous scene flow from intensity and depth methods in terms of accuracy. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Feature matching method <s> We present an approach for computing dense scene flow from two large displacement RGB-D images. When dealing with large displacements the crucial step is to estimate the overall motion correctly. While state-of-the-art approaches focus on RGB information to establish guiding correspondences, we explore the power of depth edges. To achieve this, we present a new graph matching technique that brings sparse depth edges into correspondence. An additional contribution is the formulation of a continuous-label energy which is used to densify the sparse graph matching output. We present results on challenging Kinect images, for which we outperform state-of-the-art techniques. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Feature matching method <s> We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. 
We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings. <s> BIB007
|
Feature matching methods mainly consist of three steps: feature extraction, feature matching, and propagation. While stereo estimation aims to find the spatial correspondence only, scene flow seeks correspondences in both space and time. One approach employed the Lucas-Kanade registration technique BIB001 as a prematcher; the sparse estimation was then propagated to a dense result using a seeded growing method in terms of both stereo and motion. Sizintsev introduced a spatiotemporal quadric element (stequel) matching method BIB004 . Stereo correspondence and scene flow were estimated by solving the match cost under multiple constraints. Richardt utilized the DAISY descriptor BIB002 for computing correspondences between views and frames BIB007 , and the dense scene flow was refined under the conventional variational framework. Moreover, matching methods are also implemented with an RGB-D camera. Letouzey matched SIFT features as a constraint and minimized an energy function as a preliminary estimation BIB003 . By re-projecting this preliminary estimation into the image plane, a preliminary map can be used as the initial value, where the SIFT features serve as non-moving anchor points. Quiroga enforced consistency of the scene flow with a sparse set of SURF features BIB005 . The features were extracted in the color image while depth information was taken as the matching constraint. Hornacek proposed a patch-wise estimation without assuming brightness constancy. The matching cost takes both three-channel CIE L*a*b* information and gradient information into consideration, initialized by SURF features. Pixels with a small matching cost are assumed to share the same motion for propagation. On the basis of Hornacek's work, Alhaija took depth edges as the sparse matching feature with a graph matching approach to handle large displacements and obtained promising results BIB006 . A brief summary of the typical feature matching methods is given in Table 6 . Table 6 : A brief summary of the typical feature matching methods
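As a concrete illustration of the feature extraction and matching steps, the snippet below uses OpenCV's SIFT implementation to obtain sparse anchor correspondences between two consecutive (grayscale) frames. It is only a generic front-end sketch, not the prematcher of any particular paper, and the subsequent densification (seeded growing or variational refinement) is left out.

```python
import cv2
import numpy as np

def sparse_anchor_matches(img_t, img_t1, ratio=0.75):
    """Extract SIFT keypoints in frame t and t+1 and keep unambiguous matches
    (Lowe's ratio test). Returns sparse anchors: (position, 2D displacement)."""
    sift = cv2.SIFT_create()
    kp0, des0 = sift.detectAndCompute(img_t, None)
    kp1, des1 = sift.detectAndCompute(img_t1, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    anchors = []
    for pair in matcher.knnMatch(des0, des1, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:       # keep unambiguous matches only
            p0 = np.array(kp0[m.queryIdx].pt)
            p1 = np.array(kp1[m.trainIdx].pt)
            anchors.append((p0, p1 - p0))
    return anchors  # sparse anchors used to constrain the dense estimation
```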
|
Scene Flow Estimation: A Survey <s> Learning-based method <s> The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Learning-based method <s> Motion estimation algorithms are typically based upon the assumption of brightness constancy or related assumptions such as gradient constancy. This manuscript evaluates several common cost functions from the motion estimation literature, which embody these assumptions. We demonstrate that such assumptions break for real world data, and the functions are therefore unsuitable. We propose a simple solution, which significantly increases the discriminatory ability of the metric, by learning a nonlinear relationship using techniques from machine learning. Furthermore, we demonstrate how context and a nonlinear combination of metrics, can provide additional gains, and demonstrating a 44% improvement in the performance of a state of the art scene flow estimation technique. In addition, smaller gains of 20% are demonstrated in optical flow estimation tasks. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Learning-based method <s> Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network. <s> BIB003
|
Learning methods have received considerable attention over the past few years for solving computer vision tasks. For scene flow estimation, learning can be utilized in two ways. On the one hand, parameters of parts of the pipeline can be learned to enhance robustness or efficiency. Hadfield, for example, used a machine learning technique to introduce a learned cost function as a penalization metric, with limited improvement BIB002 . On the other hand, learning can be utilized for per-pixel end-to-end estimation. Currently, due to the lack of large-scale datasets with ground truth and of appropriate models, Mayer is the only one who utilized a convolutional neural network (CNN) BIB001 to learn scene flow estimation BIB003 , which marks the starting point of learning-based scene flow. He also introduced a large-scale dataset for training, which will be introduced in Section 4.3.6. Optical flow estimation and disparity estimation were decoupled, where the disparity estimation network named "DispNet" was presented on the basis of the proposed FlowNet optical flow network . Each network consists of a contracting part for feature extraction and an expanding part utilizing up-convolutional layers and un-pooling for the final estimation. A loss weighting scheme was implemented to balance the contributions of the high-resolution and low-resolution predictions. The total downsampling factor is 64, and the network considers a maximum displacement of 160 pixels in the input images, far beyond what a 4-level pyramid with a 50% downsampling factor can handle.
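The contracting/expanding design and the multi-scale loss weighting can be sketched as follows. This is a deliberately tiny PyTorch stand-in, not the published DispNet/FlowNet architecture; all layer sizes, channel counts and loss weights are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

class TinyDispNet(nn.Module):
    """Sketch of a contracting/expanding disparity network: strided convolutions
    contract the concatenated stereo pair, up-convolutions expand back, and a
    disparity map is predicted at two scales for a weighted multi-scale loss."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU())
        self.enc3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.up2 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU())
        self.up1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.pred_coarse = nn.Conv2d(64, 1, 3, padding=1)   # disparity at 1/4 resolution
        self.pred_fine = nn.Conv2d(32, 1, 3, padding=1)     # disparity at 1/2 resolution

    def forward(self, left, right):
        x = torch.cat([left, right], dim=1)                 # stack images along channels
        x = self.enc3(self.enc2(self.enc1(x)))
        d2 = self.up2(x)
        d1 = self.up1(d2)
        return self.pred_coarse(d2), self.pred_fine(d1)

def multiscale_loss(preds, gt_disp, weights=(0.5, 1.0)):
    """Weighted multi-scale L1 loss; the ground truth is resized to each
    prediction's resolution (an assumed, but common, training choice)."""
    loss = 0.0
    for w, p in zip(weights, preds):
        gt = nn.functional.interpolate(gt_disp, size=p.shape[-2:], mode="bilinear",
                                       align_corners=False)
        loss = loss + w * (p - gt).abs().mean()
    return loss

# toy forward/backward pass on random data
left, right = torch.randn(1, 3, 64, 128), torch.randn(1, 3, 64, 128)
gt = torch.randn(1, 1, 64, 128)
loss = multiscale_loss(TinyDispNet()(left, right), gt)
loss.backward()
```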
|
Scene Flow Estimation: A Survey <s> Two-dimensional error measure <s> This contribution investigates local differential techniques for estimating optical flow and its derivatives based on the brightness change constraint. By using the tensor calculus representation we build the Taylor expansion of the gray-value derivatives as well as of the optical flow in a spatiotemporal neighborhood. Such a formulation simplifies a unifying framework for all existing local differential approaches and allows to derive new systems of equations to estimate the optical flow and its derivatives. We also tested various optical flow estimation approaches on real image sequences recorded by a calibrated camera fixed on the arm of a robot. By moving the arm of the robot along a precisely defined trajectory we can determine the true displacement rate of scene surface elements projected into the image plane and compare it quantitatively with the results of different optical flow estimators. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Two-dimensional error measure <s> We present a technique for the computation of 2D component velocity from image sequences. Initially, the image sequence is represented by a family of spatiotemporal velocity-tuned linear filters. Component velocity, computed from spatiotemporal responses of identically tuned filters, is expressed in terms of the local first-order behavior of surfaces of constant phase. Justification for this definition is discussed from the perspectives of both 2D image translation and deviations from translation that are typical in perspective projections of 3D scenes. The resulting technique is predominantly linear, efficient, and suitable for parallel processing. Moreover, it is local in space-time, robust with respect to noise, and permits multiple estimates within a single neighborhood. Promising quantiative results are reported from experiments with realistic image sequences, including cases with sizeable perspective deformation. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Two-dimensional error measure <s> The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that “classical” flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. Moreover, we find that while median filtering of intermediate flow fields during optimization is a key to recent performance gains, it leads to higher energy solutions. To understand the principles behind this phenomenon, we derive a new objective that formalizes the median filtering heuristic. This objective includes a nonlocal term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that ranks at the top of the Middlebury benchmark. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Two-dimensional error measure <s> Variational optical flow techniques allow the estimation of flow fields from spatio-temporal derivatives. They are based on minimizing a functional that contains a data term and a regularization term. 
Recently, numerous approaches have been presented for improving the accuracy of the estimated flow fields. Among them, tensor voting has been shown to be particularly effective in the preservation of flow discontinuities. This paper presents an adaptation of the data term by using anisotropic stick tensor voting in order to gain robustness against noise and outliers with significantly lower computational cost than (full) tensor voting. In addition, an anisotropic complementary smoothness term depending on directional information estimated through stick tensor voting is utilized in order to preserve discontinuity capabilities of the estimated flow fields. Finally, a weighted non-local term that depends on both the estimated directional information and the occlusion state of pixels is integrated during the optimization process in order to denoise the final flow field. The proposed approach yields state-of-the-art results on the Middlebury benchmark. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Two-dimensional error measure <s> This paper proposes an optical flow algorithm by adapting Approximate Nearest Neighbor Fields (ANNF) to obtain a pixel level optical flow between image sequence. Patch similarity based coherency is performed to refine the ANNF maps. Further improvement in mapping between the two images are obtained by fusing bidirectional ANNF maps between pair of images. Thus a highly accurate pixel level flow is obtained between the pair of images. Using pyramidal cost optimization, the pixel level optical flow is further optimized to a sub-pixel level. The proposed approach is evaluated on the middlebury dataset and the performance obtained is comparable with the state of the art approaches. Furthermore, the proposed approach can be used to compute large displacement optical flow as evaluated using MPI Sintel dataset. <s> BIB005
|
Since the main datasets only provide ground truth under a two-dimensional representation, in terms of optical flow and disparity, most methods re-project the scene flow onto the image space or simply use the two-dimensional representation mentioned in Section 3.1.1, and the error is measured in terms of optical flow and disparity. But first, we introduce the fundamental error measure, the absolute error.

Absolute error. The absolute error describes the absolute magnitude difference, i.e. the Euclidean distance between the endpoints of the ground truth vector and the estimated vector. It is presented in terms of optical flow and disparity in Equation 12:

AE_of = ||v_e − v_g|| = sqrt((u_e − u_g)^2 + (v_e − v_g)^2),   AE_d = |d_e − d_g|,   (12)

where the subscript e denotes the estimated value and the subscript g denotes the ground truth.

The average endpoint error (EPE). The EPE was introduced by Otte BIB001 . It is the mean of the absolute error over all pixels, as Equation 13 presents:

EPE = (1/n) Σ_{(u,v)∈Ω} ||v_e − v_g||,   (13)

where n is the number of pixels and Ω is the entire image plane.

The root mean square error (RMSE). While the EPE indicates the overall accuracy level, the RMSE reflects both the error distribution and the overall accuracy level. The RMSE in terms of optical flow and disparity is presented in Equations 14a and 14b:

RMSE_of = sqrt( (1/n) Σ_{(u,v)∈Ω} ||v_e − v_g||^2 ),   (14a)
RMSE_d = sqrt( (1/n) Σ_{(u,v)∈Ω} (d_e − d_g)^2 ).   (14b)

In addition, to better compare errors at different scales, normalization is needed. The normalized root mean square error (NRMSE) is scaled by the ground truth, so that it can be compared across different datasets. The NRMSE in terms of optical flow and disparity is presented in Equation 15.

The average angular error (AAE). The average angular error was introduced by Fleet in 1990 BIB002 and measures the angular deviation of the optical flow error:

AAE = (1/n) Σ_{(u,v)∈Ω} arctan( (u_g v_e − u_e v_g) / (u_g u_e + v_g v_e) ).   (16) BIB004

It can also be calculated with arccos(v_g · v_e) or arcsin(v_g × v_e) when the flow vectors are normalized.
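These two-dimensional measures are straightforward to implement; a small NumPy sketch of the EPE, RMSE and AAE for dense optical flow fields is given below. The array shapes and the cross/dot angular formulation follow the equations above; note that other papers use the augmented (u, v, 1) angular error instead.

```python
import numpy as np

def flow_epe(flow_est, flow_gt):
    """Average endpoint error (Eq. 13): mean Euclidean distance between
    estimated and ground-truth flow vectors. Arrays have shape (H, W, 2)."""
    return np.linalg.norm(flow_est - flow_gt, axis=-1).mean()

def flow_rmse(flow_est, flow_gt):
    """Root mean square error of the flow endpoints (Eq. 14a)."""
    return np.sqrt((np.linalg.norm(flow_est - flow_gt, axis=-1) ** 2).mean())

def flow_aae(flow_est, flow_gt, eps=1e-9):
    """Average angular error between 2D flow vectors (radians), using the
    cross/dot-product form discussed above."""
    ug, vg = flow_gt[..., 0], flow_gt[..., 1]
    ue, ve = flow_est[..., 0], flow_est[..., 1]
    angle = np.arctan2(ug * ve - ue * vg, ug * ue + vg * ve + eps)
    return np.abs(angle).mean()

# toy usage on random fields
gt = np.random.randn(4, 5, 2)
est = gt + 0.1 * np.random.randn(4, 5, 2)
print(flow_epe(est, gt), flow_rmse(est, gt), flow_aae(est, gt))
```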
|
Scene Flow Estimation: A Survey <s> Three-dimensional error measure <s> This paper presents a technique for estimating the three-dimensional velocity vector field that describes the motion of each visible scene point (scene flow). The technique presented uses two consecutive image pairs from a stereo sequence. The main contribution is to decouple the position and velocity estimation steps, and to estimate dense velocities using a variational approach. We enforce the scene flow to yield consistent displacement vectors in the left and right images. The decoupling strategy has two main advantages: Firstly, we are independent in choosing a disparity estimation technique, which can yield either sparse or dense correspondences, and secondly, we can achieve frame rates of 5 fps on standard consumer hardware. The approach provides dense velocity estimates with accurate results at distances up to 50 meters. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Three-dimensional error measure <s> We present a novel method for recovering the 3D structure and scene flow from calibrated multi-view sequences. We propose a 3D point cloud parametrization of the 3D structure and scene flow that allows us to directly estimate the desired unknowns. A unified global energy functional is proposed to incorporate the information from the available sequences and simultaneously recover both depth and scene flow. The functional enforces multi-view geometric consistency and imposes brightness constancy and piecewise smoothness assumptions directly on the 3D unknowns. It inherently handles the challenges of discontinuities, occlusions, and large displacements. The main contribution of this work is the fusion of a 3D representation and an advanced variational framework that directly uses the available multi-view information. This formulation allows us to advantageously bind the 3D unknowns in time and space. Different from optical flow and disparity, the proposed method results in a nonlinear mapping between the images' coordinates, thus giving rise to additional challenges in the optimization process. Our experiments on real and synthetic data demonstrate that the proposed method successfully recovers the 3D structure and scene flow despite the complicated nonconvex optimization problem. <s> BIB002
|
According to Equation 3, the reconstructed point at time t+1 can be written as

[X + ΔX, Y + ΔY, Z + ΔZ]^T = ω_c [u + Δu − c_x, v + Δv − c_y, f]^T,   (17)

where ω_c = b/(d + δd) and (c_x, c_y) denotes the principal point. The focal length of each camera is assumed to be the same, f_x = f_y = f. Hence, the transfer function between the three-dimensional point cloud scene flow representation and the two-dimensional representation can be illustrated as in Equation 18:

[ΔX, ΔY, ΔZ]^T = ω_c [u + Δu − c_x, v + Δv − c_y, f]^T − (b/d) [u − c_x, v − c_y, f]^T.   (18)

We can see clearly that the projection between scene flow and optical flow together with disparity and disparity change is complex, and that the accuracy of the disparity change δd can significantly affect the result. That is to say, the protocol mentioned in Section 4.1.1 is not sufficient to evaluate scene flow. Wedel proposed an easy way to measure the error in three dimensions BIB001 : the RMSE and AAE are modified as in Equations 19a and 19b. However, this may not precisely reveal the contribution of the different unknowns to the error. Basha provided a three-dimensional point cloud ground truth, which made the three-dimensional error measure feasible BIB002 . The absolute error can then be measured in a three-dimensional way:

AE_3D = ||V_e − V_g|| = sqrt((ΔX_e − ΔX_g)^2 + (ΔY_e − ΔY_g)^2 + (ΔZ_e − ΔZ_g)^2).   (20)

Hence, the EPE, RMSE, NRMSE, and AAE can be modified accordingly, e.g.

EPE_3D = (1/n) Σ_{P∈S} ||V_e(P) − V_g(P)||,   (21)

where S is the surface in the three-dimensional space.
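Assuming the standard rectified binocular geometry used above (focal length f in pixels, baseline b, principal point (c_x, c_y)), the conversion from the two-dimensional representation to 3D scene flow and the corresponding 3D endpoint error can be sketched as follows; this is an illustrative implementation of Equation 18, not code from any of the cited works.

```python
import numpy as np

def scene_flow_from_2d(u, v, d, du, dv, dd, f, b, cx, cy):
    """Back-project the 2D representation (optical flow + disparity +
    disparity change) to a 3D scene flow field under rectified binocular
    geometry (Eq. 18). All per-pixel inputs are NumPy arrays."""
    X0, Y0, Z0 = b * (u - cx) / d, b * (v - cy) / d, b * f / d
    w = b / (d + dd)                                  # omega_c in the text
    X1, Y1, Z1 = w * (u + du - cx), w * (v + dv - cy), w * f
    return np.stack([X1 - X0, Y1 - Y0, Z1 - Z0], axis=-1)

def epe_3d(sf_est, sf_gt):
    """3D endpoint error: mean Euclidean distance between estimated and
    ground-truth scene flow vectors (arrays of shape (..., 3))."""
    return np.linalg.norm(sf_est - sf_gt, axis=-1).mean()
```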
|
Scene Flow Estimation: A Survey <s> Special metrics <s> Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Special metrics <s> Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Special metrics <s> This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. 
Our experiments also reveal novel challenges which cannot be handled by existing methods. <s> BIB003
|
Besides the protocols mentioned above, the KITTI dataset BIB001 and the Sintel dataset BIB002 analyze the EPE under different circumstances and introduce special metrics for evaluation.

KITTI metric. The KITTI metric is the specific criterion for performance evaluation on the KITTI dataset BIB001 . It employs an EPE threshold of τ ∈ (2, · · · , 5) pixels and calculates the portion of pixels in the entire image whose endpoint error, in terms of optical flow and disparity, is above the threshold (the default threshold is 3 px). With occlusion ground truth, the official protocol reports the following columns: Method, Setting, Out-Noc, Out-All, Avg-Noc, Avg-All, Density, Runtime, Environment. In 2015, Menze modified the dataset by adding background and foreground annotations BIB003 , and the percentages of outliers in the background and in the foreground are distinguished in the evaluation metric. Moreover, a scene flow error was introduced: if either the disparity or the optical flow endpoint error is above the threshold (where the default is 3 pixels), the pixel is counted as a scene flow outlier. The official protocol reports: Method, D1-bg, D1-fg, D1-all, D2-bg, D2-fg, D2-all, Fl-bg, Fl-fg, Fl-all, SF-bg, SF-fg, SF-all.

Sintel metric. The Sintel metric provides a thorough evaluation for the Sintel benchmark BIB002 . It also employs the EPE as the error measure. In particular, it measures the error distribution in terms of both occlusion and large displacement with different thresholds, which clearly reveals the performance under these two fundamental challenges. Moreover, the percentage of erroneous pixels that remain visible in adjacent frames is taken as a criterion that reveals the temporal distribution of the error. Thus, this metric provides sufficient information for evaluation and comparison. The official protocol reports the EPE over all pixels as well as over matched and unmatched (occluded) pixels, and breaks it down by distance to the nearest occlusion boundary and by ground-truth flow speed.
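A KITTI-style outlier computation can be sketched as follows. This follows the fixed-threshold rule described in the text and is not the official evaluation code.

```python
import numpy as np

def outlier_rate(err, tau=3.0):
    """Fraction of pixels whose error exceeds the threshold tau (3 px by default,
    as in the text). Note that the official KITTI 2015 tables additionally require
    the error to exceed 5% of the ground-truth magnitude."""
    return float((err > tau).mean())

def kitti_scene_flow_outliers(flow_est, flow_gt, disp_est, disp_gt, tau=3.0):
    """Sketch of the KITTI-style scene flow outlier measure: a pixel is a scene
    flow outlier if either its optical flow EPE or its disparity error exceeds tau."""
    flow_err = np.linalg.norm(flow_est - flow_gt, axis=-1)   # (H, W)
    disp_err = np.abs(disp_est - disp_gt)                    # (H, W)
    return {"Fl": outlier_rate(flow_err, tau),
            "D1": outlier_rate(disp_err, tau),
            "SF": float(((flow_err > tau) | (disp_err > tau)).mean())}
```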
|
Scene Flow Estimation: A Survey <s> dataset <s> Progress in stereo algorithm performance is quickly outpacing the ability of existing stereo data sets to discriminate among the best-performing algorithms, motivating the need for more challenging scenes with accurate ground truth information. This paper describes a method for acquiring high-complexity stereo image pairs with pixel-accurate correspondence information using structured light. Unlike traditional range-sensing approaches, our method does not require the calibration of the light sources and yields registered disparity maps between all pairs of cameras and illumination projectors. We present new stereo data sets acquired with our method and demonstrate their suitability for stereo algorithm evaluation. Our results are available at http://www.middlebury.edu/stereo/. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> dataset <s> Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> dataset <s> We present a novel method for recovering the 3D structure and scene flow from calibrated multi-view sequences. We propose a 3D point cloud parametrization of the 3D structure and scene flow that allows us to directly estimate the desired unknowns. A unified global energy functional is proposed to incorporate the information from the available sequences and simultaneously recover both depth and scene flow. The functional enforces multi-view geometric consistency and imposes brightness constancy and piecewise smoothness assumptions directly on the 3D unknowns. It inherently handles the challenges of discontinuities, occlusions, and large displacements. The main contribution of this work is the fusion of a 3D representation and an advanced variational framework that directly uses the available multi-view information. This formulation allows us to advantageously bind the 3D unknowns in time and space. Different from optical flow and disparity, the proposed method results in a nonlinear mapping between the images' coordinates, thus giving rise to additional challenges in the optimization process. Our experiments on real and synthetic data demonstrate that the proposed method successfully recovers the 3D structure and scene flow despite the complicated nonconvex optimization problem. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> dataset <s> Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. 
In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti <s> BIB004 </s> Scene Flow Estimation: A Survey <s> dataset <s> Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> dataset <s> This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods. <s> BIB006
|
Existing datasets mainly serve scene flow evaluation under a binocular setting and consist of optical flow ground truth and disparity ground truth. They can also be used for evaluating stereo and optical flow estimation. Due to the lack of RGB-D datasets, to evaluate RGB-D scene flow the disparity map of each dataset should be converted to a depth map with Equation 2 and used as input. As mentioned in Section 3.1, scene flow can be represented as v(u, v, ∆d) or V(∆X, ∆Y, ∆Z). While most datasets only provide optical flow and disparity ground truth, Basha BIB003 provided the three-dimensional ground truth V(∆X, ∆Y, ∆Z), and the Freiburg dataset BIB005 provided additional disparity change ground truth that truly represents scene flow. Moreover, Middlebury BIB002 BIB001 , Basha BIB003 and KITTI BIB004 BIB006 provide occlusion ground truth so that the error outside occluded regions can be evaluated separately. In the following sections, we briefly introduce each commonly used dataset. Thorough information is given in Table 7 . Sample images, including the color image, the optical flow ground truth and the disparity ground truth of each dataset, are presented in Figures 9 and 10.
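For RGB-D evaluation, the disparity-to-depth conversion can be sketched as below, assuming Equation 2 is the usual rectified pinhole relation Z = f·b/d, with focal length f in pixels and baseline b; this is a minimal sketch, not dataset-specific code.

```python
import numpy as np

def disparity_to_depth(disp, f, b, eps=1e-6):
    """Convert a rectified disparity map to depth, presumably as in the text's
    Equation 2: Z = f * b / d. Invalid or zero disparities map to infinite depth."""
    disp = np.asarray(disp, dtype=np.float64)
    depth = np.full_like(disp, np.inf)
    valid = disp > eps
    depth[valid] = f * b / disp[valid]
    return depth
```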
|
Scene Flow Estimation: A Survey <s> Middlebury dataset <s> Progress in stereo algorithm performance is quickly outpacing the ability of existing stereo data sets to discriminate among the best-performing algorithms, motivating the need for more challenging scenes with accurate ground truth information. This paper describes a method for acquiring high-complexity stereo image pairs with pixel-accurate correspondence information using structured light. Unlike traditional range-sensing approaches, our method does not require the calibration of the light sources and yields registered disparity maps between all pairs of cameras and illumination projectors. We present new stereo data sets acquired with our method and demonstrate their suitability for stereo algorithm evaluation. Our results are available at http://www.middlebury.edu/stereo/. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods allow. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization-two problems commonly associated with the basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> This paper presents a method for scene flow estimation from a calibrated stereo image sequence. The scene flow contains the 3-D displacement field of scene points, so that the 2-D optical flow can be seen as a projection of the scene flow onto the images. 
We propose to recover the scene flow by coupling the optical flow estimation in both cameras with dense stereo matching between the images, thus reducing the number of unknowns per image point. The use of a variational framework allows us to properly handle discontinuities in the observed surfaces and in the 3-D displacement field. Moreover our approach handles occlusions both for the optical flow and the stereo. We obtain a partial differential equations system coupling both the optical flow and the stereo, which is numerically solved using an original multi- resolution algorithm. Whereas previous variational methods were estimating the 3-D reconstruction at time t and the scene flow separately, our method jointly estimates both in a single optimization. We present numerical results on synthetic data with ground truth information, and we also compare the accuracy of the scene flow projected in one camera with a state-of-the-art single-camera optical flow computation method. Results are also presented on a real stereo sequence with large motion and stereo discontinuities. Source code and sample data are available for the evaluation of the algorithm. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> The motion field of a scene can be used for object segmentation and to provide features for classification tasks like action recognition. Scene flow is the full 3D motion field of the scene, and is more difficult to estimate than it's 2D counterpart, optical flow. Current approaches use a smoothness cost for regularisation, which tends to over-smooth at object boundaries. This paper presents a novel formulation for scene flow estimation, a collection of moving points in 3D space, modelled using a particle filter that supports multiple hypotheses and does not oversmooth the motion field. In addition, this paper is the first to address scene flow estimation, while making use of modern depth sensors and monocular appearance images, rather than traditional multi-viewpoint rigs. The algorithm is applied to an existing scene flow dataset, where it achieves comparable results to approaches utilising multiple views, while taking a fraction of the time. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> We present a novel method for recovering the 3D structure and scene flow from calibrated multi-view sequences. We propose a 3D point cloud parametrization of the 3D structure and scene flow that allows us to directly estimate the desired unknowns. A unified global energy functional is proposed to incorporate the information from the available sequences and simultaneously recover both depth and scene flow. The functional enforces multi-view geometric consistency and imposes brightness constancy and piecewise smoothness assumptions directly on the 3D unknowns. It inherently handles the challenges of discontinuities, occlusions, and large displacements. The main contribution of this work is the fusion of a 3D representation and an advanced variational framework that directly uses the available multi-view information. This formulation allows us to advantageously bind the 3D unknowns in time and space. Different from optical flow and disparity, the proposed method results in a nonlinear mapping between the images' coordinates, thus giving rise to additional challenges in the optimization process. 
Our experiments on real and synthetic data demonstrate that the proposed method successfully recovers the 3D structure and scene flow despite the complicated nonconvex optimization problem. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> There is close relationship between depth information and scene flow. However, it's not fully utilized in most of scene flow estimators. In this paper, we propose a method to estimate scene flow with monocular appearance images and corresponding depth images. We combine a global energy optimization and a bilateral filter into a two-step framework. Occluded pixels are detected by the consistency of appearance and depth, and the corresponding data errors are excluded from the energy function. The appearance and depth information are also utilized in anisotropic regularization to suppress over-smoothing. The multi-channel bilateral filter is introduced to correct scene flow with various information in non-local areas. The proposed approach is tested on Middlebury dataset and the sequences captured by KINECT. Experiment results show that it can estimate dense and accurate scene flow in challenging environments and keep the discontinuity around motion boundaries. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> The scene flow describes the 3D motion of every point in a scene between two time steps. We present a novel method to estimate a dense scene flow using intensity and depth data. It is well known that local methods are more robust under noise while global techniques yield dense motion estimation. We combine local and global constraints to solve for the scene flow in a variational framework. An adaptive TV (Total Variation) regularization is used to preserve motion discontinuities. Besides, we constrain the motion using a set of 3D correspondences to deal with large displacements. In the experimentation our approach outperforms previous scene flow from intensity and depth methods in terms of accuracy. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> In this paper, an algorithm is presented for estimating scene flow, which is a richer, 3D analogue of Optical Flow. The approach operates orders of magnitude faster than alternative techniques, and is well suited to further performance gains through parallelized implementation. The algorithm employs multiple hypothesis to deal with motion ambiguities, rather than the traditional smoothness constraints, removing oversmoothing errors and providing significant performance improvements on benchmark data, over the previous state of the art. The approach is flexible, and capable of operating with any combination of appearance and/or depth sensors, in any setup, simultaneously estimating the structure and motion if necessary. Additionally, the algorithm propagates information over time to resolve ambiguities, rather than performing an isolated estimation at each frame, as in contemporary approaches. Approaches to smoothing the motion field without sacrificing the benefits of multiple hypotheses are explored, and a probabilistic approach to Occlusion estimation is demonstrated, leading to 10% and 15% improved performance respectively. Finally, a data driven tracking approach is described, and used to estimate the 3D trajectories of hands during sign language, without the need to model complex appearance variations at each viewpoint. 
<s> BIB009 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> In this paper we present a novel method to accurately estimate the dense 3D motion field, known as scene flow, from depth and intensity acquisitions. The method is formulated as a convex energy optimization, where the motion warping of each scene point is estimated through a projection and back-projection directly in 3D space. We utilize higher order regularization which is weighted and directed according to the input data by an anisotropic diffusion tensor. Our formulation enables the calculation of a dense flow field which does not penalize smooth and non-rigid movements while aligning motion boundaries with strong depth boundaries. An efficient parallelization of the numerical algorithm leads to runtimes in the order of 1s and therefore enables the method to be used in a variety of applications. We show that this novel scene flow calculation outperforms existing approaches in terms of speed and accuracy. Furthermore, we demonstrate applications such as camera pose estimation and depth image super resolution, which are enabled by the high accuracy of the proposed method. We show these applications using modern depth sensors such as Microsoft Kinect or the PMD Nano Time-of-Flight sensor. <s> BIB010 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> We present a novel method for dense variational scene flow estimation based a multiscale Ternary Census Transform in combination with a patchwise Closest Points depth data term. On the one hand, the Ternary Census Transform in the intensity data term is capable of handling illumination changes, low texture and noise. On the other hand, the patchwise Closest Points search in the depth data term increases the robustness in low structured regions. Further, we utilize higher order regularization which is weighted and directed according to the input data by an anisotropic diffusion tensor. This allows to calculate a dense and accurate flow field which supports smooth as well as non-rigid movements while preserving flow boundaries. The numerical algorithm is solved based on a primal-dual formulation and is efficiently parallelized to run at high frame rates. In an extensive qualitative and quantitative evaluation we show that this novel method for scene flow calculation outperforms existing approaches. The method is applicable to any sensor delivering dense depth and intensity data such as Microsoft Kinect or Intel Gesture Camera. <s> BIB011 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> Scene flow is defined as the motion field in 3D space, and can be computed from a single view when using an RGBD sensor. We propose a new scene flow approach that exploits the local and piecewise rigidity of real world scenes. By modeling the motion as a field of twists, our method encourages piecewise smooth solutions of rigid body motions. We give a general formulation to solve for local and global rigid motions by jointly using intensity and depth data. In order to deal efficiently with a moving camera, we model the motion as a rigid component plus a non-rigid residual and propose an alternating solver. The evaluation demonstrates that the proposed method achieves the best results in the most commonly used scene flow benchmark. Through additional experiments we indicate the general applicability of our approach in a variety of different scenarios. 
<s> BIB012 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> As consumer depth sensors become widely available, estimating scene flow from RGBD sequences has received increasing attention. Although the depth information allows the recovery of 3D motion from a single view, it poses new challenges. In particular, depth boundaries are not well-aligned with RGB image edges and therefore not reliable cues to localize 2D motion boundaries. In addition, methods that extend the 2D optical flow formulation to 3D still produce large errors in occlusion regions. To better use depth for occlusion reasoning, we propose a layered RGBD scene flow method that jointly solves for the scene segmentation and the motion. Our key observation is that the noisy depth is sufficient to decide the depth ordering of layers, thereby avoiding a computational bottleneck for RGB layered methods. Furthermore, the depth enables us to estimate a per-layer 3D rigid motion to constrain the motion of each layer. Experimental results on both the Middlebury and real-world sequences demonstrate the effectiveness of the layered approach for RGBD scene flow estimation. <s> BIB013 </s> Scene Flow Estimation: A Survey <s> Middlebury dataset <s> 2D spatial image windows are used for comparing pixel values in computer vision applications such as correspondence for optical flow and 3D reconstruction, bilateral filtering, and image segmentation. However, pixel window comparisons can suffer from varying defocus blur and perspective at different depths, and can also lead to a loss of precision. In this paper, we leverage the recent use of light-field cameras to propose alternative oriented light-field windows that enable more robust and accurate pixel comparisons. For Lambertian surfaces focused to the correct depth, the 2D distribution of angular rays from a pixel remains consistent. We build on this idea to develop an oriented 4D light-field window that accounts for shearing (depth), translation (matching), and windowing. Our main application is to scene flow, a generalization of optical flow to the 3D vector field describing the motion of each point in the scene. We show significant benefits of oriented light-field windows over standard 2D spatial windows. We also demonstrate additional applications of oriented light-field windows for bilateral filtering and image segmentation. <s> BIB014
|
The Middlebury stereo dataset BIB002 BIB001 is commonly used as a quantitative evaluation benchmark for optical flow and stereo matching. In particular, the subsets named Teddy, Cones and Venus provide both optical flow and disparity ground truth, and hence they are widely used for scene flow evaluation BIB003 BIB004 BIB005 BIB006 BIB008 BIB007 BIB010 BIB011 BIB009 BIB012 BIB013 BIB014 . Each subset simulates a simple translational motion along the X axis: the 8 cameras are rectified and placed parallel to and equally spaced along the X axis, so the scene motion projected onto the image plane equals the disparity between two cameras. Under a binocular setting, the images from cameras 2 and 6 are taken as the stereo pair at time t, while the images from cameras 4 and 8 are taken as the stereo pair at time t+1. The disparity ground truth is the disparity from camera 2 to camera 6, and the optical flow ground truth is the disparity from camera 2 to camera 4. Similarly, for RGB-D scene flow, the ground-truth disparity map is converted into a depth channel for evaluation.
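As a concrete illustration of this evaluation protocol, the sketch below (Python/NumPy) assembles the binocular input pairs, derives the horizontal ground-truth flow from the disparity map, and converts the disparity into a depth channel via the pinhole relation Z = f*B/d. The file names and the focal length/baseline constants are assumptions for illustration, not values shipped with the dataset, and the flow = disparity/2 relation only follows from the equal camera spacing described above.

```python
import numpy as np
import imageio.v2 as imageio

# Hypothetical file names -- one image per camera position of the 8-camera rig.
im2 = imageio.imread("teddy/im2.png")   # reference view at time t   (camera 2)
im6 = imageio.imread("teddy/im6.png")   # second view    at time t   (camera 6)
im4 = imageio.imread("teddy/im4.png")   # reference view at time t+1 (camera 4)
im8 = imageio.imread("teddy/im8.png")   # second view    at time t+1 (camera 8)

# Ground-truth disparity from camera 2 to camera 6 (0 = unknown in this sketch).
disp_2_6 = imageio.imread("teddy/disp2.png").astype(np.float32)
valid = disp_2_6 > 0

# The cameras are equally spaced along X, so the ground-truth optical flow of the
# reference view is purely horizontal and equals the 2->4 disparity, i.e. half of
# the 2->6 disparity under this equal-spacing assumption.
flow_gt = np.zeros(disp_2_6.shape + (2,), dtype=np.float32)
flow_gt[..., 0] = 0.5 * disp_2_6        # u component (pixels); v stays zero

# For RGB-D scene flow evaluation, the disparity map is turned into a depth
# channel with the pinhole relation Z = f * B / d (f and B are made-up constants).
FOCAL_PX, BASELINE = 3740.0, 0.16
depth = np.zeros_like(disp_2_6)
depth[valid] = FOCAL_PX * BASELINE / disp_2_6[valid]
```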
|
Scene Flow Estimation: A Survey <s> Rotating sphere <s> This paper presents a method for scene flow estimation from a calibrated stereo image sequence. The scene flow contains the 3-D displacement field of scene points, so that the 2-D optical flow can be seen as a projection of the scene flow onto the images. We propose to recover the scene flow by coupling the optical flow estimation in both cameras with dense stereo matching between the images, thus reducing the number of unknowns per image point. The use of a variational framework allows us to properly handle discontinuities in the observed surfaces and in the 3-D displacement field. Moreover our approach handles occlusions both for the optical flow and the stereo. We obtain a partial differential equations system coupling both the optical flow and the stereo, which is numerically solved using an original multi- resolution algorithm. Whereas previous variational methods were estimating the 3-D reconstruction at time t and the scene flow separately, our method jointly estimates both in a single optimization. We present numerical results on synthetic data with ground truth information, and we also compare the accuracy of the scene flow projected in one camera with a state-of-the-art single-camera optical flow computation method. Results are also presented on a real stereo sequence with large motion and stereo discontinuities. Source code and sample data are available for the evaluation of the algorithm. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Rotating sphere <s> This paper presents a technique for estimating the three-dimensional velocity vector field that describes the motion of each visible scene point (scene flow). The technique presented uses two consecutive image pairs from a stereo sequence. The main contribution is to decouple the position and velocity estimation steps, and to estimate dense velocities using a variational approach. We enforce the scene flow to yield consistent displacement vectors in the left and right images. The decoupling strategy has two main advantages: Firstly, we are independent in choosing a disparity estimation technique, which can yield either sparse or dense correspondences, and secondly, we can achieve frame rates of 5 fps on standard consumer hardware. The approach provides dense velocity estimates with accurate results at distances up to 50 meters. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Rotating sphere <s> We present a novel variational method for the simultaneous estimation of dense scene flow and structure from stereo sequences. In contrast to existing approaches that rely on a fully calibrated camera setup, we assume that only the intrinsic camera parameters are known. To couple the estimation of motion, structure and geometry, we propose a joint energy functional that integrates spatial and temporal information from two subsequent image pairs subject to an unknown stereo setup. We further introduce a normalisation of image and stereo constraints such that deviations from model assumptions can be interpreted in a geometrical way. Finally, we suggest a separate discontinuity-preserving regularisation to improve the accuracy. Experiments on calibrated and uncalibrated data demonstrate the excellent performance of our approach. We even outperform recent techniques for the rectified case that make explicit use of the simplified geometry. 
<s> BIB003 </s> Scene Flow Estimation: A Survey <s> Rotating sphere <s> Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse or dense disparity maps. The proposed method is very efficient; with the depth map being computed on an FPGA, and the scene flow computed on the GPU, the proposed algorithm runs at frame rates of 20 frames per second on QVGA images (320×240 pixels). Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and the uncertainty measures for the scene flow result. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Rotating sphere <s> A simple seed growing algorithm for estimating scene flow in a stereo setup is presented. Two calibrated and synchronized cameras observe a scene and output a sequence of image pairs. The algorithm simultaneously computes a disparity map between the image pairs and optical flow maps between consecutive images. This, together with calibration data, is an equivalent representation of the 3D scene flow, i.e. a 3D velocity vector is associated with each reconstructed point. The proposed method starts from correspondence seeds and propagates these correspondences to their neighborhood. It is accurate for complex scenes with large motions and produces temporally-coherent stereo disparity and optical flow results. The algorithm is fast due to inherent search space reduction. An explicit comparison with recent methods of spatiotemporal stereo and variational optical and scene flow is provided. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Rotating sphere <s> We present an approach to 3D scene flow estimation, which exploits that in realistic scenarios image motion is frequently dominated by observer motion and independent, but rigid object motion. We cast the dense estimation of both scene structure and 3D motion from sequences of two or more views as a single energy minimization problem. We show that agnostic smoothness priors, such as the popular total variation, are biased against motion discontinuities in viewing direction. Instead, we propose to regularize by encouraging local rigidity of the 3D scene. We derive a local rigidity constraint of the 3D scene flow and define a smoothness term that penalizes deviations from that constraint, thus favoring solutions that consist largely of rigidly moving parts. Our experiments show that the new rigid motion prior reduces the 3D flow error by 42% compared to standard TV regularization with the same data term. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Rotating sphere <s> We present a novel method for recovering the 3D structure and scene flow from calibrated multi-view sequences. We propose a 3D point cloud parametrization of the 3D structure and scene flow that allows us to directly estimate the desired unknowns. 
A unified global energy functional is proposed to incorporate the information from the available sequences and simultaneously recover both depth and scene flow. The functional enforces multi-view geometric consistency and imposes brightness constancy and piecewise smoothness assumptions directly on the 3D unknowns. It inherently handles the challenges of discontinuities, occlusions, and large displacements. The main contribution of this work is the fusion of a 3D representation and an advanced variational framework that directly uses the available multi-view information. This formulation allows us to advantageously bind the 3D unknowns in time and space. Different from optical flow and disparity, the proposed method results in a nonlinear mapping between the images' coordinates, thus giving rise to additional challenges in the optimization process. Our experiments on real and synthetic data demonstrate that the proposed method successfully recovers the 3D structure and scene flow despite the complicated nonconvex optimization problem. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Rotating sphere <s> We introduce a framework to estimate and refine 3D scene flow which connects 3D structures of a scene across different frames. In contrast to previous approaches which compute 3D scene flow that connects depth maps from a stereo image sequence or from a depth camera, our approach takes advantage of full 3D reconstruction which computes the 3D scene flow that connects 3D point clouds from multi-view stereo system. Our approach uses a standard multi-view stereo and optical flow algorithm to compute the initial 3D scene flow. A unique two-stage refinement process regularizes the scene flow direction and magnitude sequentially. The scene flow direction is refined by utilizing 3D neighbor smoothness defined by tensor voting. The magnitude of the scene flow is refined by connecting the implicit surfaces across the consecutive 3D point clouds. Our estimated scene flow is temporally consistent. Our approach is efficient, model free, and it is effective in error corrections and outlier rejections. We tested our approach on both synthetic and real-world datasets. Our experimental results show that our approach out-performs previous algorithms quantitatively on synthetic dataset, and it improves the reconstructed 3D model from the refined 3D point cloud in real-world dataset. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> Rotating sphere <s> Estimating dense 3D scene flow from stereo sequences remains a challenging task, despite much progress in both classical disparity and 2D optical flow estimation. To overcome the limitations of existing techniques, we introduce a novel model that represents the dynamic 3D scene by a collection of planar, rigidly moving, local segments. Scene flow estimation then amounts to jointly estimating the pixel-to-segment assignment, and the 3D position, normal vector, and rigid motion parameters of a plane for each segment. The proposed energy combines an occlusion-sensitive data term with appropriate shape, motion, and segmentation regularizers. Optimization proceeds in two stages: Starting from an initial super pixelization, we estimate the shape and motion parameters of all segments by assigning a proposal from a set of moving planes. Then the pixel-to-segment assignment is updated, while holding the shape and motion parameters of the moving planes fixed. 
We demonstrate the benefits of our model on different real-world image sets, including the challenging KITTI benchmark. We achieve leading performance levels, exceeding competing 3D scene flow methods, and even yielding better 2D motion estimates than all tested dedicated optical flow techniques. <s> BIB009 </s> Scene Flow Estimation: A Survey <s> Rotating sphere <s> This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods. <s> BIB010
|
In 2007, Huguet used Pov-Ray to render a publicly available, rectified synthetic rotating sphere with optical flow and stereo ground truth BIB001 , which is commonly used as a benchmark BIB002 BIB003 BIB005 BIB006 BIB004 BIB009 BIB010 . The two hemispheres rotate in opposite directions, which leads to strong motion discontinuities. Basha modified this scene by adding a rotating plane behind the sphere, rendered with OpenGL BIB007 . Owing to the ground truth it provides, it has been used to evaluate scene flow methods with a three-dimensional parametrization BIB008 BIB007 . Moreover, it provides five rectified views and can therefore also be used to evaluate multi-view scene flow, although it lacks full-view geometry ground truth.
|
Scene Flow Estimation: A Survey <s> EISATS dataset <s> This paper presents a method for scene flow estimation from a calibrated stereo image sequence. The scene flow contains the 3-D displacement field of scene points, so that the 2-D optical flow can be seen as a projection of the scene flow onto the images. We propose to recover the scene flow by coupling the optical flow estimation in both cameras with dense stereo matching between the images, thus reducing the number of unknowns per image point. The use of a variational framework allows us to properly handle discontinuities in the observed surfaces and in the 3-D displacement field. Moreover our approach handles occlusions both for the optical flow and the stereo. We obtain a partial differential equations system coupling both the optical flow and the stereo, which is numerically solved using an original multi- resolution algorithm. Whereas previous variational methods were estimating the 3-D reconstruction at time t and the scene flow separately, our method jointly estimates both in a single optimization. We present numerical results on synthetic data with ground truth information, and we also compare the accuracy of the scene flow projected in one camera with a state-of-the-art single-camera optical flow computation method. Results are also presented on a real stereo sequence with large motion and stereo discontinuities. Source code and sample data are available for the evaluation of the algorithm. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> EISATS dataset <s> Performance evaluation of stereo or motion analysis techniques is commonly done either on synthetic data where the ground truth can be calculated from ray-tracing principals, or on engineered data where ground truth is easy to estimate. Furthermore, these scenes are usually only shown in a very short sequence of images. This paper shows why synthetic scenes may not be the only testing criteria by giving evidence of conflicting results of disparity and optical flow estimation for real-world and synthetic testing. The data dealt with in this paper are images taken from a moving vehicle. Each real-world sequence contains 250 image pairs or more. Synthetic driver assistance scenes (with ground truth) are 100 or more image pairs. Particular emphasis is paid to the estimation and evaluation of scene flow on the synthetic stereo sequences. All image data used in this paper is made publicly available at http: //www.mi.auckland.ac.nz/EISATS. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> EISATS dataset <s> This paper presents a technique for estimating the three-dimensional velocity vector field that describes the motion of each visible scene point (scene flow). The technique presented uses two consecutive image pairs from a stereo sequence. The main contribution is to decouple the position and velocity estimation steps, and to estimate dense velocities using a variational approach. We enforce the scene flow to yield consistent displacement vectors in the left and right images. The decoupling strategy has two main advantages: Firstly, we are independent in choosing a disparity estimation technique, which can yield either sparse or dense correspondences, and secondly, we can achieve frame rates of 5 fps on standard consumer hardware. The approach provides dense velocity estimates with accurate results at distances up to 50 meters. 
<s> BIB003 </s> Scene Flow Estimation: A Survey <s> EISATS dataset <s> In this paper a novel approach for estimating the three dimensional motion field of the visible world from stereo image sequences is proposed. This approach combines dense variational optical flow estimation, including spatial regularization, with Kalman filtering for temporal smoothness and robustness. The result is a dense, robust, and accurate reconstruction of the three-dimensional motion field of the current scene that is computed in real-time. Parallel implementation on a GPU and an FPGA yields a vision-system which is directly applicable in real-world scenarios, like automotive driver assistance systems or in the field of surveillance. Within this paper we systematically show that the proposed algorithm is physically motivated and that it outperforms existing approaches with respect to computation time and accuracy. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> EISATS dataset <s> Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse or dense disparity maps. The proposed method is very efficient; with the depth map being computed on an FPGA, and the scene flow computed on the GPU, the proposed algorithm runs at frame rates of 20 frames per second on QVGA images (320×240 pixels). Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and the uncertainty measures for the scene flow result. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> EISATS dataset <s> A simple seed growing algorithm for estimating scene flow in a stereo setup is presented. Two calibrated and synchronized cameras observe a scene and output a sequence of image pairs. The algorithm simultaneously computes a disparity map between the image pairs and optical flow maps between consecutive images. This, together with calibration data, is an equivalent representation of the 3D scene flow, i.e. a 3D velocity vector is associated with each reconstructed point. The proposed method starts from correspondence seeds and propagates these correspondences to their neighborhood. It is accurate for complex scenes with large motions and produces temporally-coherent stereo disparity and optical flow results. The algorithm is fast due to inherent search space reduction. An explicit comparison with recent methods of spatiotemporal stereo and variational optical and scene flow is provided. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> EISATS dataset <s> We propose a depth and image scene flow estimation method taking the input of a binocular video. The key component is motion-depth temporal consistency preservation, making computation in long sequences reliable. We tackle a number of fundamental technical issues, including connection establishment between motion and depth, structure consistency preservation in multiple frames, and long-range temporal constraint employment for error correction. 
We address all of them in a unified depth and scene flow estimation framework. Our main contributions include development of motion trajectories, which robustly link frame correspondences in a voting manner, rejection of depth/motion outliers through temporal robust regression, novel edge occurrence map estimation, and introduction of anisotropic smoothing priors for proper regularization. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> EISATS dataset <s> Estimating dense 3D scene flow from stereo sequences remains a challenging task, despite much progress in both classical disparity and 2D optical flow estimation. To overcome the limitations of existing techniques, we introduce a novel model that represents the dynamic 3D scene by a collection of planar, rigidly moving, local segments. Scene flow estimation then amounts to jointly estimating the pixel-to-segment assignment, and the 3D position, normal vector, and rigid motion parameters of a plane for each segment. The proposed energy combines an occlusion-sensitive data term with appropriate shape, motion, and segmentation regularizers. Optimization proceeds in two stages: Starting from an initial super pixelization, we estimate the shape and motion parameters of all segments by assigning a proposal from a set of moving planes. Then the pixel-to-segment assignment is updated, while holding the shape and motion parameters of the moving planes fixed. We demonstrate the benefits of our model on different real-world image sets, including the challenging KITTI benchmark. We achieve leading performance levels, exceeding competing 3D scene flow methods, and even yielding better 2D motion estimates than all tested dedicated optical flow techniques. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> EISATS dataset <s> We propose a method to recover dense 3D scene flow from stereo video. The method estimates the depth and 3D motion field of a dynamic scene from multiple consecutive frames in a sliding temporal window, such that the estimate is consistent across both viewpoints of all frames within the window. The observed scene is modeled as a collection of planar patches that are consistent across views, each undergoing a rigid motion that is approximately constant over time. Finding the patches and their motions is cast as minimization of an energy function over the continuous plane and motion parameters and the discrete pixel-to-plane assignment. We show that such a view-consistent multi-frame scheme greatly improves scene flow computation in the presence of occlusions, and increases its robustness against adverse imaging conditions, such as specularities. Our method currently achieves leading performance on the KITTI benchmark, for both flow and stereo. <s> BIB009 </s> Scene Flow Estimation: A Survey <s> EISATS dataset <s> In this paper we propose a slanted plane model for jointly recovering an image segmentation, a dense depth estimate as well as boundary labels (such as occlusion boundaries) from a static scene given two frames of a stereo pair captured from a moving vehicle. Towards this goal we propose a new optimization algorithm for our SLIC-like objective which preserves connecteness of image segments and exploits shape regularization in the form of boundary length. We demonstrate the performance of our approach in the challenging stereo and flow KITTI benchmarks and show superior results to the state-of-the-art. Importantly, these results can be achieved an order of magnitude faster than competing approaches. 
<s> BIB010 </s> Scene Flow Estimation: A Survey <s> EISATS dataset <s> This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods. <s> BIB011 </s> Scene Flow Estimation: A Survey <s> EISATS dataset <s> 3D scene flow estimation aims to jointly recover dense geometry and 3D motion from stereoscopic image sequences, thus generalizes classical disparity and 2D optical flow estimation. To realize its conceptual benefits and overcome limitations of many existing methods, we propose to represent the dynamic scene as a collection of rigidly moving planes, into which the input images are segmented. Geometry and 3D motion are then jointly recovered alongside an over-segmentation of the scene. This piecewise rigid scene model is significantly more parsimonious than conventional pixel-based representations, yet retains the ability to represent real-world scenes with independent object motion. It, furthermore, enables us to define suitable scene priors, perform occlusion reasoning, and leverage discrete optimization schemes toward stable and accurate results. Assuming the rigid motion to persist approximately over time additionally enables us to incorporate multiple frames into the inference. To that end, each view holds its own representation, which is encouraged to be consistent across all other viewpoints and frames in a temporal window. We show that such a view-consistent multi-frame scheme significantly improves accuracy, especially in the presence of occlusions, and increases robustness against adverse imaging conditions. Our method currently achieves leading performance on the KITTI benchmark, for both flow and stereo. <s> BIB012
|
The EISATS traffic scene dataset is a synthetic stereo image sequence rendered with Pov-Ray, with ground truth for both stereo and motion BIB002 . Sequence 1 consists of 100 frames, while sequence 2 consists of 396 frames with ego-motion. The synthetic traffic scene contains a few moving cars in an open environment. Only a few papers have used these stereo sequences for binocular scene flow estimation BIB003 BIB004 BIB005 BIB007 . The KITTI benchmark, in contrast, provides real-world stereo sequences captured from a driving platform, with sparse ground truth acquired by a laser scanner; it was later complemented by a scene flow dataset obtained by annotating 400 dynamic scenes from the KITTI raw data with detailed 3D CAD models of all vehicles in motion BIB011 . A novel evaluation methodology, the KITTI metric, is also introduced with it and is illustrated in Section 4.1.3. These two binocular datasets have been used by multiple papers over the years BIB001 BIB006 BIB008 BIB009 BIB010 BIB011 BIB012 . The KITTI scenes, designed specifically for autonomous driving, are much more realistic and challenging than the early Middlebury dataset. However, due to the way the data are acquired, there are missing values in both the optical flow and disparity ground truth, as illustrated in Figure 9; the ground-truth density is about 75% to 90%. Thus, the KITTI dataset is not recommended for RGB-D scene flow evaluation, because simulating depth data requires dense disparity ground truth.
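To make the sparsity figure concrete, the following sketch loads KITTI-style ground truth and reports its density as the valid-pixel ratio. The file paths are assumptions, and the 16-bit PNG decoding (disparity = value / 256 with 0 marking missing pixels; flow = (value - 32768) / 64 with a validity channel) follows the encoding commonly documented for the KITTI development kit and should be checked against the release in use.

```python
import numpy as np
import imageio.v2 as imageio

def load_kitti_disparity(path):
    """Read a KITTI-style disparity map: 16-bit PNG, disparity = value / 256, 0 = no GT."""
    raw = imageio.imread(path).astype(np.float32)
    valid = raw > 0
    return np.where(valid, raw / 256.0, 0.0), valid

def load_kitti_flow(path):
    """Read a KITTI-style flow map: 16-bit PNG, (u, v) = (value - 2**15) / 64, 3rd channel = valid."""
    raw = imageio.imread(path).astype(np.float32)
    flow = (raw[..., :2] - 2 ** 15) / 64.0
    return flow, raw[..., 2] > 0

# Ground-truth density (the ~75-90% figure quoted above) is simply the valid-pixel ratio.
disp, disp_valid = load_kitti_disparity("training/disp_noc_0/000000_10.png")  # assumed path
flow, flow_valid = load_kitti_flow("training/flow_noc/000000_10.png")         # assumed path
print("disparity GT density: %.1f%%" % (100.0 * disp_valid.mean()))
print("flow GT density:      %.1f%%" % (100.0 * flow_valid.mean()))
```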
|
Scene Flow Estimation: A Survey <s> MPI Sintel dataset <s> Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> MPI Sintel dataset <s> We propose a novel joint registration and segmentation approach to estimate scene flow from RGB-D images. Instead of assuming the scene to be composed of a number of independent rigidly-moving parts, we use non-binary labels to capture non-rigid deformations at transitions between the rigid parts of the scene. Thus, the velocity of any point can be computed as a linear combination (interpolation) of the estimated rigid motions, which provides better results than traditional sharp piecewise segmentations. Within a variational framework, the smooth segments of the scene and their corresponding rigid velocities are alternately refined until convergence. A K-means-based segmentation is employed as an initialization, and the number of regions is subsequently adapted during the optimization process to capture any arbitrary number of independently moving objects. We evaluate our approach with both synthetic and real RGB-D images that contain varied and large motions. The experiments show that our method estimates the scene flow more accurately than the most recent works in the field, and at the same time provides a meaningful segmentation of the scene based on 3D motion. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> MPI Sintel dataset <s> The emergence of modern, affordable and accurate RGB-D sensors increases the need for single view approaches to estimate 3-dimensional motion, also known as scene flow. In this paper we propose a coarse-to-fine, dense, correspondence-based scene flow formulation that relies on explicit geometric reasoning to account for the effects of large displacements and to model occlusion. Our methodology enforces local motion rigidity at the level of the 3d point cloud without explicitly smoothing the parameters of adjacent neighborhoods. By integrating all geometric and photometric components in a single, consistent, occlusion-aware energy model, defined over overlapping, image-adaptive neighborhoods, our method can process fast motions and large occlusions areas, as present in challenging datasets like the MPI Sintel Flow Dataset, recently augmented with depth information. 
By explicitly modeling large displacements and occlusion, we can handle difficult sequences which cannot be currently processed by state of the art scene flow methods. We also show that by integrating depth information into the model, we can obtain correspondence fields with improved spatial support and sharper boundaries compared to the state of the art, large-displacement optical flow methods. <s> BIB003
|
The MPI Sintel dataset was the largest dataset before 2015 BIB001 ; it consists of 23 training sequences with 1064 frames and 12 test sequences with 564 frames in total. It is derived from an open-source animated film, and the resolution is 1024×436. The scenes are designed to be highly realistic, with fog and motion blur added. Moreover, beta-version depth data were later added, which makes it well suited for RGB-D scene flow evaluation. Zanfir and Jaimez BIB002 BIB003 used this dataset for scene flow evaluation in 2015 and gave quantitative analyses. It is highly recommended for its naturalistic setting and dense ground truth, as well as its comprehensive evaluation protocol. Moreover, the video sequences allow multi-frame implementations. As scene flow estimation advances, this dataset, which contains non-rigid motion and large displacements at high resolution, remains reliable and challenging enough for evaluation.
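Since the depth add-on supplies dense per-pixel depth alongside the optical flow ground truth, a 3D scene flow reference can be derived for RGB-D evaluation. The sketch below illustrates one simple way to do this under assumed pinhole intrinsics, with nearest-pixel warping and occlusions ignored; it is an illustrative construction, not the dataset's official evaluation protocol.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map to camera-space 3D points with a pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    return np.stack([(u - cx) * depth / fx, (v - cy) * depth / fy, depth], axis=-1)

def sceneflow_from_depth_and_flow(depth_t, depth_t1, flow, fx, fy, cx, cy):
    """Per-pixel 3D motion derived from two depth maps and the GT optical flow.

    The flow target is rounded to the nearest pixel and occlusions are ignored,
    so this is only a rough reference; pixels whose flow target leaves the image
    are marked invalid.
    """
    h, w = depth_t.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    ut, vt = u + flow[..., 0], v + flow[..., 1]
    valid = (ut >= 0) & (ut <= w - 1) & (vt >= 0) & (vt <= h - 1)
    u1 = np.clip(np.rint(ut).astype(int), 0, w - 1)
    v1 = np.clip(np.rint(vt).astype(int), 0, h - 1)

    p_t = backproject(depth_t, fx, fy, cx, cy)
    p_t1 = backproject(depth_t1, fx, fy, cx, cy)[v1, u1]   # 3D position of the flow target
    return p_t1 - p_t, valid

# Toy usage with assumed intrinsics for the 1024x436 frames; in practice, depth
# comes from the depth add-on and flow from the .flo ground-truth files.
fx = fy = 1120.0
cx, cy = 511.5, 217.5
depth_t = np.full((436, 1024), 5.0, dtype=np.float32)
depth_t1 = np.full((436, 1024), 4.9, dtype=np.float32)
flow = np.zeros((436, 1024, 2), dtype=np.float32)
scene_flow_gt, valid = sceneflow_from_depth_and_flow(depth_t, depth_t1, flow, fx, fy, cx, cy)
```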
|
Scene Flow Estimation: A Survey <s> Other datasets <s> Scene flow is the 3D motion field of points in the world. Given N (N>1) image sequences gathered with a N-eye stereo camera or N calibrated cameras, we present a novel system which integrates 3D scene flow and structure recovery in order to complement each other's performance. We do not assume rigidity of the scene motion, thus allowing for non-rigid motion in the scene. In our work, images are segmented into small regions. We assume that each small region is undergoing similar motion, represented by a 3D affine model. Nonlinear motion model fitting based on both optical flow constraints and stereo constraints is then carried over each image region in order to simultaneously estimate 3D motion correspondences and structure. To ensure the robustness, several regularization constraints are also introduced. A recursive algorithm is designed to incorporate the local and regularization constraints. Experimental results on both synthetic and real data demonstrate the effectiveness of our integrated 3D motion and structure analysis scheme. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> Abstract We discuss the computation of the instantaneous 3D displacement vector fields of deformable surfaces from sequences of range data. We give a novel version of the basic motion constraint equation that can be evaluated directly on the sensor grid. The various forms of the aperture problem encountered are investigated and the derived constraint solutions are solved in a total least squares (TLS) framework. We propose a regularization scheme to compute dense full flow fields from the sparse TLS solutions. The performance of the algorithm is analyzed quantitatively for both synthetic and real data. Finally we apply the method to compute the 3D motion field of living plant leaves. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> Progress in stereo algorithm performance is quickly outpacing the ability of existing stereo data sets to discriminate among the best-performing algorithms, motivating the need for more challenging scenes with accurate ground truth information. This paper describes a method for acquiring high-complexity stereo image pairs with pixel-accurate correspondence information using structured light. Unlike traditional range-sensing approaches, our method does not require the calibration of the light sources and yields registered disparity maps between all pairs of cameras and illumination projectors. We present new stereo data sets acquired with our method and demonstrate their suitability for stereo algorithm evaluation. Our results are available at http://www.middlebury.edu/stereo/. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> Disparity flow depicts the 3D motion of a scene in the disparity space of a given view and can be considered as view-dependent scene flow. A novel algorithm is presented to compute disparity maps and disparity flow maps in an integrated process. Consequently, the disparity flow maps obtained helps to enforce the temporal consistency between disparity maps of adjacent frames. The disparity maps found also provides the spatial correspondence information that can be used to cross-validate disparity flow maps of different views. Two different optimization approaches are integrated in the presented algorithm for searching optimal disparity values and disparity flows. 
The local winner-take-all approach runs faster, whereas the global dynamic programming based approach produces better results. All major computations are performed in the image space of the given view, leading to an efficient implementation on programmable graphics hardware. Experimental results on captured stereo sequences demonstrate the algorithm's capability of estimating both 3D depth and 3D motion in real-time. Quantitative performance evaluation using synthetic data with ground truth is also provided. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> Scene flow is the motion of the surface points in the 3D world. For a camera, it is seen as a 2D optical flow in the image plane. Knowing the scene flow can be very useful as it gives an idea of the surface geometry of the objects in the scene and how those objects are moving. Four methods for calculating the scene flow given multiple optical flows have been explored and detailed in this paper along with the basic mathematics surrounding multi-view geometry. It was found that given multiple optical flows it is possible to estimate the scene flow to different levels of detail depending on the level of prior information present. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> This paper proposes a method for capturing the performance of a human or an animal from a multi-view video sequence. Given an articulated template model and silhouettes from a multi-view image sequence, our approach recovers not only the movement of the skeleton, but also the possibly non-rigid temporal deformation of the 3D surface. While large scale deformations or fast movements are captured by the skeleton pose and approximate surface skinning, true small scale deformations or non-rigid garment motion are captured by fitting the surface to the silhouette. We further propose a novel optimization scheme for skeleton-based pose estimation that exploits the skeleton's tree structure to split the optimization problem into a local one and a lower dimensional global one. We show on various sequences that our approach can capture the 3D motion of animals and humans accurately even in the case of rapid movements and wide apparel like skirts. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> We present a novel variational method for the simultaneous estimation of dense scene flow and structure from stereo sequences. In contrast to existing approaches that rely on a fully calibrated camera setup, we assume that only the intrinsic camera parameters are known. To couple the estimation of motion, structure and geometry, we propose a joint energy functional that integrates spatial and temporal information from two subsequent image pairs subject to an unknown stereo setup. We further introduce a normalisation of image and stereo constraints such that deviations from model assumptions can be interpreted in a geometrical way. Finally, we suggest a separate discontinuity-preserving regularisation to improve the accuracy. Experiments on calibrated and uncalibrated data demonstrate the excellent performance of our approach. We even outperform recent techniques for the rectified case that make explicit use of the simplified geometry. <s> BIB007 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> This paper addresses the problem of estimating the dense 3D motion of a scene over several frames using a set of calibrated cameras. 
Most current 3D motion estimation techniques are limited to estimating the motion over a single frame, unless a strong prior model of the scene (such as a skeleton) is introduced. Estimating the 3D motion of a general scene is difficult due to untextured surfaces, complex movements and occlusions. In this paper, we show that it is possible to track the surfaces of a scene over several frames, by introducing an effective prior on the scene motion. Experimental results show that the proposed method estimates the dense scene-flow over multiple frames, without the need for multiple-view reconstructions at every frame. Furthermore, the accuracy of the proposed method is demonstrated by comparing the estimated motion against a ground truth. <s> BIB008 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> A simple seed growing algorithm for estimating scene flow in a stereo setup is presented. Two calibrated and synchronized cameras observe a scene and output a sequence of image pairs. The algorithm simultaneously computes a disparity map between the image pairs and optical flow maps between consecutive images. This, together with calibration data, is an equivalent representation of the 3D scene flow, i.e. a 3D velocity vector is associated with each reconstructed point. The proposed method starts from correspondence seeds and propagates these correspondences to their neighborhood. It is accurate for complex scenes with large motions and produces temporally-coherent stereo disparity and optical flow results. The algorithm is fast due to inherent search space reduction. An explicit comparison with recent methods of spatiotemporal stereo and variational optical and scene flow is provided. <s> BIB009 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> We present an approach to 3D scene flow estimation, which exploits that in realistic scenarios image motion is frequently dominated by observer motion and independent, but rigid object motion. We cast the dense estimation of both scene structure and 3D motion from sequences of two or more views as a single energy minimization problem. We show that agnostic smoothness priors, such as the popular total variation, are biased against motion discontinuities in viewing direction. Instead, we propose to regularize by encouraging local rigidity of the 3D scene. We derive a local rigidity constraint of the 3D scene flow and define a smoothness term that penalizes deviations from that constraint, thus favoring solutions that consist largely of rigidly moving parts. Our experiments show that the new rigid motion prior reduces the 3D flow error by 42% compared to standard TV regularization with the same data term. <s> BIB010 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> This paper is concerned with the recovery of temporally coherent estimates of 3D structure and motion of a dynamic scene from a sequence of binocular stereo images. A novel approach is presented based on matching of spatiotemporal quadric elements (stequels) between views, as this primitive encapsulates both spatial and temporal image structure for 3D estimation. Match constraints are developed for bringing stequels into correspondence across binocular views. With correspondence established, temporally coherent disparity estimates are obtained without explicit motion recovery. Further, the matched stequels also will be shown to support direct recovery of scene flow estimates. 
Extensive algorithmic evaluation with ground truth data incorporated in both local and global correspondence paradigms shows the considerable benefit of using stequels as a matching primitive and its advantages in comparison to alternative methods of enforcing temporal coherence in disparity estimation. Additional experiments document the usefulness of stequel matching for 3D scene flow estimation. <s> BIB011 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> In this paper we present a novel method to accurately estimate the dense 3D motion field, known as scene flow, from depth and intensity acquisitions. The method is formulated as a convex energy optimization, where the motion warping of each scene point is estimated through a projection and back-projection directly in 3D space. We utilize higher order regularization which is weighted and directed according to the input data by an anisotropic diffusion tensor. Our formulation enables the calculation of a dense flow field which does not penalize smooth and non-rigid movements while aligning motion boundaries with strong depth boundaries. An efficient parallelization of the numerical algorithm leads to runtimes in the order of 1s and therefore enables the method to be used in a variety of applications. We show that this novel scene flow calculation outperforms existing approaches in terms of speed and accuracy. Furthermore, we demonstrate applications such as camera pose estimation and depth image super resolution, which are enabled by the high accuracy of the proposed method. We show these applications using modern depth sensors such as Microsoft Kinect or the PMD Nano Time-of-Flight sensor. <s> BIB012 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> We present a novel method for dense variational scene flow estimation based a multiscale Ternary Census Transform in combination with a patchwise Closest Points depth data term. On the one hand, the Ternary Census Transform in the intensity data term is capable of handling illumination changes, low texture and noise. On the other hand, the patchwise Closest Points search in the depth data term increases the robustness in low structured regions. Further, we utilize higher order regularization which is weighted and directed according to the input data by an anisotropic diffusion tensor. This allows to calculate a dense and accurate flow field which supports smooth as well as non-rigid movements while preserving flow boundaries. The numerical algorithm is solved based on a primal-dual formulation and is efficiently parallelized to run at high frame rates. In an extensive qualitative and quantitative evaluation we show that this novel method for scene flow calculation outperforms existing approaches. The method is applicable to any sensor delivering dense depth and intensity data such as Microsoft Kinect or Intel Gesture Camera. <s> BIB013 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> This paper investigates motion estimation and segmentation of independently moving objects in video sequences that contain depth and intensity information, such as videos captured by a Time of Flight camera. Specifically, we present a motion estimation algorithm which is based on integration of depth and intensity data. The resulting motion information is used to derive long-term point trajectories. A segmentation technique groups the trajectories according to their motion and depth similarity into spatio-temporal segments. 
Quantitative and qualitative analysis of synthetic and real world videos verify the proposed motion estimation and segmentation approach. The proposed framework extracts independently moving objects from videos recorded by a Time of Flight camera. <s> BIB014 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> This article presents a novel method for estimating the dense three-dimensional motion of a scene from multiple cameras. Our method employs an interconnected patch model of the scene surfaces. The interconnected nature of the model means that we can incorporate prior knowledge about neighbouring scene motions through the use of a Markov Random Field, whilst the patch-based nature of the model allows the use of efficient techniques for estimating the local motion at each patch. An important aspect of our work is that the method takes account of the fact that local surface texture strongly dictates the accuracy of the motion that can be estimated at each patch. Even with simple squared-error cost functions, it produces results that are either equivalent to or better than results from a method based upon a state-of-the-art optical flow technique, which uses well-developed robust cost functions and energy minimisation techniques. <s> BIB015 </s> Scene Flow Estimation: A Survey <s> Other datasets <s> We present an approach for computing dense scene flow from two large displacement RGB-D images. When dealing with large displacements the crucial step is to estimate the overall motion correctly. While state-of-the-art approaches focus on RGB information to establish guiding correspondences, we explore the power of depth edges. To achieve this, we present a new graph matching technique that brings sparse depth edges into correspondence. An additional contribution is the formulation of a continuous-label energy which is used to densify the sparse graph matching output. We present results on challenging Kinect images, for which we outperform state-of-the-art techniques. <s> BIB016
|
Many other datasets with ground truth have been introduced as well. However, because they are rarely referenced, comparisons on these datasets are scarce, and most of them lack public availability and novelty; we list them here for completeness. Similar to the rotating sphere created by Huguet, Zhang introduced an early synthetic deformable sphere rendered with OpenInventor for quantitative analysis BIB001 . Spies modeled a structured light sensor to provide a synthetic textured sphere as well BIB002 . Valgaerts generated a general rotating-sphere scene without rectification BIB007 , while Cech added a fast-moving bar and a slanted background plane and textured the whole scene with white noise using Blender BIB009 , making it much more challenging for scene flow estimation. Moreover, Ferstl created a translating and rotating cube in front of a static plane, textured with white noise BIB012 BIB013 , and Ghuffar created a noisy scene with two cubes moving on a plane in front of a static wall to test robustness against noise and occlusion BIB014 . Vogel generated nine synthetic box datasets with ground truth BIB010 , covering pure rotation, translations along all axes, and translation only in depth, for independent analysis of each motion type. In addition, as early as 2005, Luckins created a sloped plane and a sinusoidal plaid pattern named "splaid" for a rough evaluation ; the samples are of size 100×100 with depth and RGB color. Gong generated a synthetic 3D scene consisting of a rotating earth model, textured with Phong illumination and bump mapping, in front of a translating galaxy background BIB004 ; the scene is rendered with Gaussian noise and the camera moves relative to the earth, which makes it very challenging for scene flow estimation. Ruttle used the HumanEva-II dataset for motion tracking and pose estimation and gave a quantitative analysis BIB005 . Popham BIB008 BIB015 evaluated his algorithm on the multi-view motion capture datasets named "Katy" and "Skirt" BIB006 , which provide sparse motion trajectory ground truth. Sizintsev captured a few sets of sequences with a BumbleBee stereo camera and obtained ground truth with a structured light approach BIB003 , and then used this dataset for evaluation BIB011 . Alhaija captured seven pairs of images with a Kinect specifically to evaluate scene flow under large displacements BIB016 ; the matching ground truth was obtained by manually labeling each segment.
|
Scene Flow Estimation: A Survey <s> The modification of methods in the future <s> Obstacle avoidance is one of the most important challenges for mobile robots as well as future vision based driver assistance systems. This task requires a precise extraction of depth and the robust and fast detection of moving objects. In order to reach these goals, this paper considers vision as a process in space and time. It presents a powerful fusion of depth and motion information for image sequences taken from a moving observer. 3D-position and 3D-motion for a large number of image points are estimated simultaneously by means of Kalman-Filters. There is no need of prior error-prone segmentation. Thus, one gets a rich 6D representation that allows the detection of moving obstacles even in the presence of partial occlusion of foreground or background. <s> BIB001 </s> Scene Flow Estimation: A Survey <s> The modification of methods in the future <s> This paper proposes a novel approach to non-rigid, markerless motion capture from synchronized video streams acquired by calibrated cameras. The instantaneous geometry of the observed scene is represented by a polyhedral mesh with fixed topology. The initial mesh is constructed in the first frame using the publicly available PMVS software for multi-view stereo [7]. Its deformation is captured by tracking its vertices over time, using two optimization processes at each frame: a local one using a rigid motion model in the neighborhood of each vertex, and a global one using a regularized nonrigid model for the whole mesh. Qualitative and quantitative experiments using seven real datasets show that our algorithm effectively handles complex nonrigid motions and severe occlusions. <s> BIB002 </s> Scene Flow Estimation: A Survey <s> The modification of methods in the future <s> In this paper a novel approach for estimating the three dimensional motion field of the visible world from stereo image sequences is proposed. This approach combines dense variational optical flow estimation, including spatial regularization, with Kalman filtering for temporal smoothness and robustness. The result is a dense, robust, and accurate reconstruction of the three-dimensional motion field of the current scene that is computed in real-time. Parallel implementation on a GPU and an FPGA yields a vision-system which is directly applicable in real-world scenarios, like automotive driver assistance systems or in the field of surveillance. Within this paper we systematically show that the proposed algorithm is physically motivated and that it outperforms existing approaches with respect to computation time and accuracy. <s> BIB003 </s> Scene Flow Estimation: A Survey <s> The modification of methods in the future <s> We propose a depth and image scene flow estimation method taking the input of a binocular video. The key component is motion-depth temporal consistency preservation, making computation in long sequences reliable. We tackle a number of fundamental technical issues, including connection establishment between motion and depth, structure consistency preservation in multiple frames, and long-range temporal constraint employment for error correction. We address all of them in a unified depth and scene flow estimation framework. 
Our main contributions include development of motion trajectories, which robustly link frame correspondences in a voting manner, rejection of depth/motion outliers through temporal robust regression, novel edge occurrence map estimation, and introduction of anisotropic smoothing priors for proper regularization. <s> BIB004 </s> Scene Flow Estimation: A Survey <s> The modification of methods in the future <s> We present a method for extracting depth information from a rectified image pair. We train a convolutional neural network to predict how well two image patches match and use it to compute the stereo matching cost. The cost is refined by cross-based cost aggregation and semiglobal matching, followed by a left-right consistency check to eliminate errors in the occluded regions. Our stereo method achieves an error rate of 2.61% on the KITTI stereo dataset and is currently (August 2014) the top performing method on this dataset. <s> BIB005 </s> Scene Flow Estimation: A Survey <s> The modification of methods in the future <s> Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network. <s> BIB006 </s> Scene Flow Estimation: A Survey <s> The modification of methods in the future <s> In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches. <s> BIB007
|
An inspection of the error maps provided by the KITTI benchmark shows that inaccuracies are concentrated mainly at object boundaries. Since this is a common issue across computer vision tasks, edge-preserving and well-designed filtering should be a first priority. GPU implementations have yielded large efficiency gains, and duality-based optimization has been shown to improve the efficiency of global variational methods without sacrificing accuracy. Such methods are likely to become routine in the future wherever efficiency matters. Building on robust and efficient two-frame estimation, several papers have studied motion estimation over long sequences BIB001 BIB003 BIB002 BIB004 . Multi-frame estimation with temporal prior knowledge deserves more attention: a robust temporal constraint can provide methods with a better initial value or a better feature BIB004 to match, and challenges such as varying illumination and occlusion can be handled with its help. The emerging learning-based methods and light field techniques have brought fresh impetus to scene flow estimation. CNN-based learning methods show an upward trend in problems closely related to scene flow estimation, such as stereo matching and optical flow, with promising accuracy and computational cost BIB005 BIB007 . With the help of up-to-date large-scale training datasets BIB006 , learning-based methods have a profound potential to achieve accurate and fast estimation. Light field cameras provide more data than existing data sources, which opens up diverse possibilities for this field. Similar to the emergence of RGB-D cameras, this new source of data may lead to a new and attractive branch. Because scene flow estimation relies heavily on texture and intensity information, applications will suffer at night or under insufficient illumination. Moreover, car headlights and building lights, which are frequent in autonomous driving scenes, may significantly interfere with motion estimation. Hence, scene flow estimation under insufficient illumination is worth studying.
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Introduction <s> In this paper we introduce a novel type of cryptographic scheme, which enables any pair of users to communicate securely and to verify each other’s signatures without exchanging private or public keys, without keeping key directories, and without using the services of a third party. The scheme assumes the existence of trusted key generation centers, whose sole purpose is to give each user a personalized smart card when he first joins the network. The information embedded in this card enables the user to sign and encrypt the messages he sends and to decrypt and verify the messages he receives in a totally independent way, regardless of the identity of the other party. Previously issued cards do not have to be updated when new users join the network, and the various centers do not have to coordinate their activities or even to keep a user list. The centers can be closed after all the cards are issued, and the network can continue to function in a completely decentralized way for an indefinite period. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Introduction <s> This document describes the end-to-end protocol, block formats, and ::: abstract service description for the exchange of messages (bundles) in ::: Delay Tolerant Networking (DTN). This document was produced within ::: the IRTF's Delay Tolerant Networking Research Group (DTNRG) and ::: represents the consensus of all of the active contributors to this ::: group. See http://www.dtnrg.org for more information. This memo ::: defines an Experimental Protocol for the Internet community. <s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Introduction <s> Abstract Cryptographic key exchange is considered to be a challenging problem in Delay Tolerant Networks (DTNs) operating in deep space environments. The difficulties and challenges are attributed to the peculiarities and constraints of the harsh communication conditions that DTNs typically operate in, rather than the actual features of the underlying key management cryptographic protocols and solutions. In this paper we propose a framework for evaluation of key exchange protocols in a DTN setting. Our contribution is twofold as the proposed framework can be used as a decision making tool for automated evaluation of various communication scenarios with regards to routing decisions and as part of a method for protocol evaluation in DTNs. <s> BIB003
|
Cryptography is without a doubt an important and powerful tool for achieving secure communications. Key management, including key distribution and revocation, is a central part of any cryptographically protected communication and is one of the weakest links of system security in general and protocol design in particular . In most communication scenarios, cryptographic keys need to be established between the communicating network nodes before any service can be delivered. Cryptographic key management is considered to be a challenging and open issue in DTNs BIB003 . Such networks are typically encountered in extreme terrestrial environments and in deep space or interplanetary communications, and are characterized by long latency and a high degree of disruption, mainly due to physical phenomena (noise, limitations of wireless radio, etc.). Specifically, the difficulties and challenges are due to the constraints of the restricted networking conditions DTNs typically operate in, rather than the actual features of the underlying key management cryptographic protocols and solutions. Typically, the constraints of the DTN environment make a number of mature and robust key management protocols described in the literature totally or partially unsuitable. Over the past few years, significant research has been performed in the field of communication in DTNs. The DTN architecture introduces an overlay protocol, namely the Bundle Protocol (BP) BIB002 , that interfaces with either the transport or lower layers and sits anywhere between the transport and the application layers. In addition, the DTN architecture is based on the well-known store-and-forward model, an old mechanism used in postal systems since ancient times . The main dissimilarities between the assumptions of traditional Internet-like networks and those of DTNs are intermittent connectivity, implying the lack of a continuous end-to-end path between source and destination, and long propagation delays. Conventional mechanisms for routing and key management do not work in a DTN mainly because of these assumptions. In fact, the literature on routing in DTNs is relatively extensive, but very few works consider security. The unique DTN characteristics, including long round-trip delays, frequent disconnections, fragmentation, etc. , make existing security protocols designed for conventional networks unfit for DTN ecosystems. Cryptographic key management and secure routing are important issues in DTNs, but the solutions proposed so far tend to consider them separately. Several approaches have been adopted to achieve cryptographic key management in such challenged networks. The bulk of research has focused on two main approaches: the traditional Public Key Infrastructure (PKI) and Identity Based Cryptography (IBC) BIB001 . Each of them has its own benefits and drawbacks and is suitable in certain domains of DTN . To our knowledge, so far no work in the literature has attempted to provide a comprehensive survey of the various works addressing cryptographic key management specifically for the DTN domain. Motivated by this fact, the survey at hand offers an extensive study of the relevant literature, spanning the 13-year period from 2005 to 2017. The surveyed protocols are categorized based on four distinct factors, namely the communication type, the method used, the type of challenged network, and the evaluation method.
The rest of the paper is organized as follows: The next section briefly reviews the necessary preliminaries regarding key management and DTN characteristics. Section 3 reviews and classifies all major contributions in the field of key management in DTNs. A discussion on the surveyed schemes is provided in Section 4. Section 5 presents alternative key management taxonomies for DTNs, while Section 6 lists the main security challenges in this type of network. The last section summarizes and concludes the survey by posing open questions and future directions of applying key management to secure DTNs.
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Preliminaries <s> In this paper we introduce a novel type of cryptographic scheme, which enables any pair of users to communicate securely and to verify each other’s signatures without exchanging private or public keys, without keeping key directories, and without using the services of a third party. The scheme assumes the existence of trusted key generation centers, whose sole purpose is to give each user a personalized smart card when he first joins the network. The information embedded in this card enables the user to sign and encrypt the messages he sends and to decrypt and verify the messages he receives in a totally independent way, regardless of the identity of the other party. Previously issued cards do not have to be updated when new users join the network, and the various centers do not have to coordinate their activities or even to keep a user list. The centers can be closed after all the cards are issued, and the network can continue to function in a completely decentralized way for an indefinite period. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Preliminaries <s> We present a decentralized key management architecture for wireless sensor networks, covering the aspects of key deployment, key refreshment and key establishment. Our architecture is based on a clear set of assumptions and guidelines. Balance between security and energy consumption is achieved by partitioning a system into two interoperable security realms: the supervised realm trades off simplicity and resources for higher security whereas in the unsupervised realm the vice versa is true. Key deployment uses minimal key storage while key refreshment is based on the well-studied scheme of Abdalla et al. The keying protocols involved use only symmetric cryptography and have all been verified with our constraint solving-based protocol verification tool CoProVe. <s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Preliminaries <s> A Delay-Tolerant Network (DTN) Architecture (Request for Comment, RFC-4838) and Bundle Protocol Specification, RFC-5050, have been proposed for space and terrestrial networks. Additional security specifications have been provided via the Bundle Security Specification (currently a work in progress as an Internet Research Task Force internet-draft) and, for link-layer protocols applicable to Space networks, the Licklider Transport Protocol Security Extensions. This document provides a security analysis of the current DTN RFCs and proposed security related internet drafts with a focus on space-based communication networks, which is a rather restricted subset of DTN networks. Note, the original focus and motivation of DTN work was for the ‘Interplanetary Internet’. This document does not address general store-and-forward network overlays, just the current work being done by the Internet Research Task Force (IRTF) and the Consultative Committee for Space Data Systems (CCSDS) Space Internetworking Services Area (SIS) - DTN working group under the ‘DTN’ and ‘Bundle’ umbrellas. However, much of the analysis is relevant to general store-and-forward overlays. 
<s> BIB003 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Preliminaries <s> In Mobile ad hoc networks (MANETs) security has become a primary requirement. The characteristics and capabilities of MANETs expose both challenges and opportunities in achieving key security goals, such as confidentiality, access control, authentication, availability, integrity, and non-repudiation. Cryptographic techniques are widely used for secure communications in both TCP and UDP networks. Most cryptographic mechanisms, such as symmetric and asymmetric cryptography, often involve the use of cryptographic keys. However, all cryptographic techniques will be insecure or inefficient if the key management is weak. Key management is also a central component in MANET security. The main purpose of key management is to provide secure methods for handling cryptographic keying algorithms. The tasks of key management include key generation, distribution and maintenance. Key maintenance includes the procedures for key storage, key update, key revocation, etc. In MANETs, the computational load and complexity for key management are strongly subject to restriction by the node's available resources and the dynamic nature of network topology. A number of key management schemes have been proposed for MANETs. In this article, we present a survey of the research work on key management in MANETs according to recent publications. <s> BIB004
|
Cryptographic key management is the process by which cryptographic keys are generated, stored, protected, transferred, loaded, used, and destroyed BIB002 and is one of the most difficult problems in DTN security BIB003 . The reason is that cryptographic key management generally requires multiple round trips in order to securely exchange or establish keys. This is problematic because of the long delays and possible connectivity disruptions in such restricted networks. As further discussed in Section 4, there are currently no key management schemes that appear to suit DTNs. Naturally, poor or weak cryptographic key management will have an adverse effect on the cryptographic techniques, which risk being rendered insecure or inefficient BIB004 . Security initialization or bootstrapping, as the name suggests, refers to how security associations are initially established between the communicating nodes. Key establishment, one of the basic concepts in this context, is defined as a method by which two or more parties come to share a secret value for secure communication. Key establishment is divided into (a) key transport or key distribution and (b) key agreement. In key transport, one party creates or receives a secret value and securely transfers it to the other party. In key agreement, a shared secret value is derived jointly by two (or more) parties; a minimal key agreement sketch is given after the two approaches below. As already mentioned, the bulk of the research so far has focused on two main approaches. More specifically, the two main approaches proposed to date are IBC and PKI. Both are based on asymmetric or public-key cryptography (PKC). • Identity Based Cryptography (IBC)-Shamir first introduced IBC in 1985 BIB001 . In this cryptographic approach, user identifier information, such as an email or IP address, is used as the public key for encryption and for the verification of digital signatures, instead of certificates. In addition, in IBC, the Private Key Generator (PKG) is the central authority (similar to a Certificate Authority, CA, in PKIs) which generates the private keys for the participants. • Public Key Infrastructure (PKI)-Traditional asymmetric or public key cryptography, widely and effectively used in the Internet and a plethora of business realms, relies on a PKI. The latter depends on the availability and security of a CA, a central control point that everyone trusts.
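To make the distinction between key transport and key agreement more concrete, the following is a minimal sketch of an (unauthenticated) Elliptic Curve Diffie-Hellman key agreement followed by key derivation, written in Python with the pyca/cryptography library. It is illustrative only: the curve, the HKDF parameters, and the info label are assumptions, peer authentication is omitted, and, as noted above, this kind of interactive exchange is precisely what long delays and disruptions make difficult in a DTN.

```python
# Minimal, illustrative ECDH key agreement followed by HKDF key derivation.
# Assumptions (not from the survey): curve SECP256R1, 32-byte output key,
# arbitrary "info" label, and no authentication of either party.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_shared_key(own_private_key, peer_public_key) -> bytes:
    """Turn the raw ECDH shared secret into a usable symmetric key."""
    shared_secret = own_private_key.exchange(ec.ECDH(), peer_public_key)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"dtn-key-agreement-example").derive(shared_secret)

# Each party generates its own key pair; in a DTN the public keys would
# still have to be exchanged and authenticated despite disruptions.
alice_private = ec.generate_private_key(ec.SECP256R1())
bob_private = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same symmetric key from the exchanged public keys.
alice_key = derive_shared_key(alice_private, bob_private.public_key())
bob_key = derive_shared_key(bob_private, alice_private.public_key())
assert alice_key == bob_key
```

In a key transport flow, by contrast, one party would generate the symmetric key locally and send it to the peer encrypted under the peer's public key; a corresponding sketch is given in the PKI-related subsection later in the survey.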
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> DTN Characteristics and Key Management <s> The highly successful architecture and protocols of today's Internet may operate poorly in environments characterized by very long delay paths and frequent network partitions. These problems are exacerbated by end nodes with limited power or memory resources. Often deployed in mobile and extreme environments lacking continuous connectivity, many such networks have their own specialized protocols, and do not utilize IP. To achieve interoperability between them, we propose a network architecture and application interface structured around optionally-reliable asynchronous message forwarding, with limited expectations of end-to-end connectivity and node resources. The architecture operates as an overlay above the transport layers of the networks it interconnects, and provides key services such as in-network data storage and retransmission, interoperable naming, authenticated forwarding and a coarse-grained class of service. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> DTN Characteristics and Key Management <s> A delay tolerant network (DTN) is a store and forward network where end-to-end connectivity is not assumed and where opportunistic links between nodes are used to transfer data. An emerging application of DTNs are rural area DTNs, which provide Internet connectivity to rural areas in developing regions using conventional transportation mediums, like buses. Potential applications of these rural area DTNs are e-governance, telemedicine and citizen journalism. Therefore, security and privacy are critical for DTNs. Traditional cryptographic techniques based on PKI-certified public keys assume continuous network access, which makes these techniques inapplicable to DTNs. We present the first anonymous communication solution for DTNs and introduce a new anonymous authentication protocol as a part of it. Furthermore, we present a security infrastructure for DTNs to provide efficient secure communication based on identity-based cryptography. We show that our solutions have better performance than existing security infrastructures for DTNs. <s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> DTN Characteristics and Key Management <s> This paper presents our field experience in data collection from remote sensors. By letting tractors, farmers, and sensors have short-range radio communication devices with delay-disruption tolerant networking (DTN), we can collect data from those sensors to our central database. Although, several implementations have been made with cellular phones or mesh networks in the past, DTN-based systems for such applications are still under explored. The main objective of this paper is to present our practical implementation and experiences in DTN-based data collection from remote sensors. The software, which we have developed for this research, has about 50 kbyte footprint, which is much smaller than any other DTN implementation. We carried out an experiment with 39 DTN nodes at the University of Tokyo assuming an agricultural scenario. They achieved 99.8% success rate for data gathering with moderate latency, showing sufficient usefulness in data granularity. 
<s> BIB003 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> DTN Characteristics and Key Management <s> Abstract Cryptographic key exchange is considered to be a challenging problem in Delay Tolerant Networks (DTNs) operating in deep space environments. The difficulties and challenges are attributed to the peculiarities and constraints of the harsh communication conditions that DTNs typically operate in, rather than the actual features of the underlying key management cryptographic protocols and solutions. In this paper we propose a framework for evaluation of key exchange protocols in a DTN setting. Our contribution is twofold as the proposed framework can be used as a decision making tool for automated evaluation of various communication scenarios with regards to routing decisions and as part of a method for protocol evaluation in DTNs. <s> BIB004
|
Network environments characterized by intermittent connectivity, network heterogeneity, and large delays are called "challenged networks". DTN is a computer networking architecture that aims to address the technical issues present in challenged networking environments, as well as to specify the necessary components for interconnecting heterogeneous networks. The term DTN stems from Fall's paper BIB001 , which introduced an architecture generalized from design work for the InterPlanetary Networking (IPN), which in turn addressed networking challenges in deep-space communications. The two main challenges addressed by DTNs are related to (a) long propagation delays and (b) intermittent connectivity, implying the lack of a continuous end-to-end path. Under such restricted and harsh networking conditions, traditional internetworking protocols (e.g., TCP/IP) are neither applicable nor suitable BIB002 . Networks where DTN architectures may apply include, among others, deep space networks BIB003 . The constraints under which such challenged networks function also have severe effects on security protocols, and therefore traditional solutions cannot be directly applied. The need for secure communications in open networks like DTNs is higher than ever . However, until recently, security was not considered to be an issue for DTNs in space missions. Moreover, the authors in BIB004 propose a practical mechanism to evaluate security protocols, including key exchange protocols, in DTNs. This is done by considering node credentials and network topology. Such a method could help in identifying the most efficient key management scheme in terms of delay for experimentally tested scenarios.
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> Endpoints in a delay tolerant network (DTN) [K. Fall, 2003] must deal with long periods of disconnection, large end-to-end communication delays, and opportunistic communication over intermittent links. This makes traditional security mechanisms inefficient and sometimes unsuitable. We study three specific problems that arise naturally in this context: initiation of a secure channel by a disconnected user using an opportunistic connection, mutual authentication over an opportunistic link, and protection of disconnected users from attacks initiated by compromised identities. We propose a security architecture for DTN based on hierarchical identity based cryptography (HIBC) that provides efficient and practical solutions to these problems. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> Traditional approaches for communication security do not work well in disruption- and delay-tolerant networks (DTNs). Recently, the use of identity-based cryptography (IBC) has been proposed as one way to help solve some of the DTN security issues. We analyze the applicability of IBC in this context and conclude that for authentication and integrity, IBC has no significant advantage over traditional cryptography, but it can indeed enable better ways of providing confidentiality. Additionally, we show a way of bootstrapping the needed security associations for IBC use from an existing authentication infrastructure. <s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> Delay Tolerant Networks (DTN) arise whenever traditional assumptions about todaypsilas Internet such as continuous end-to-end connectivity, low latencies and low error rates are not applicable. These challenges impose constraints on the choice and implementation of possible security mechanisms in DTNs. The key requirements for a security architecture in DTNs include ensuring the protection of DTN infrastructure from unauthorized use as well as application protection by providing confidentiality, integrity and authentication services for end-to-end communication. In this paper, we examine the issues in providing application protection in DTNs and look at various possible mechanisms. We then propose an architecture based on Hierarchical Identity Based Encryption (HIBE) that provides end-to-end security services along with the ability to have fine-grained revocation and access control while at the same time ensuring efficient key management and distribution. We believe that a HIBE based mechanism would be much more efficient in dealing with the unique constraints of DTNs compared to standard public key mechanisms (PKI). <s> BIB003 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> Delay-Tolerant Networking” (DTN) is a neologism used for a new store-and-forward architecture and protocol suite intended for disrupted networks where there is intermittent or ad-hoc connectivity. This has been proposed as one approach to supporting delay-tolerant networks. Work in the late 1990s on the “Interplanetary Internet” forms the basis for current DTN protocols and architecture. That early work considered transport protocols robust to the hours-long propagation delays of deep-space communications. 
DTN is also known, primarily in military circles, as Disruption-Tolerant Networking, due to the dynamic links and outages in the military tactical environment, rather than long-delay links. In both cases, DTN technologies are well-suited to applications that are mostly asynchronous and insensitive to large variations in delivery conditions. DTN networks differ sufficiently from traditional terrestrial networks in their characteristics and connectivity that link, network and transport protocols must be carefully considered and chosen to cope with these different characteristics, or new protocols can be designed that are suited for the problems that these DTN network conditions impose. The “Bundle Protocol” exists within the DTN architecture, which sends bundles over subnet-specific transport protocols, called “convergence layers.” “Bundling” has undergone a large amount of shared development and design over a period of years as a research effort. We examine the Bundle Protocol and its related architecture closely, and discuss areas where we have found that the current Bundle approach is not well-suited to many of the operational concepts that it was intended to support. Problems with the Bundle Protocol and its convergence layers exist in mechanisms for error detection and overall reliability. This weakens the Bundle Protocol's suitability to disrupted and error-prone networks. We show that these reliability issues can lead to performance problems in DTN networks, requiring mitigation. Open research and development areas also exist with design choices in handling timing information, in determining necessary and sufficient security mechanisms, in its Quality of Service capabilities, and in other aspects of application or content identification. We show that the existing DTN bundling architecture has a number of open real-world deployment issues that can be addressed. We suggest possible remediation strategies for these weak areas of the bundle protocol that we have been working on. We also look at alternate approaches to DTN networking. Rather than only providing criticism, this paper identifies open issues, where work on modifying the Bundle Protocol is encouraged and approaches to address its various problems are suggested. <s> BIB004 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> This document provides an overview of the security requirements and ::: mechanisms considered for delay tolerant networking security. It ::: discusses the options for protecting such networks and describes ::: reasons why specific security mechanisms were (or were not) chosen for ::: the relevant protocols. The entire document is informative, given its ::: purpose is mainly to document design decisions. <s> BIB005 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> Delay- and disruption-tolerant networks (DTNs) can bring much-needed connectivity to rural areas and other settings with limited or non-existing infrastructures. High node mobility and infrequent connectivity inherent to DTNs make it challenging to implement simple and traditional security services, e.g., message integrity and confidentiality.In this paper, we focus on the problem of initial secure context establishment in DTNs. Concretely, we design a scheme that allows users to leverage social contact information to exchange confidential and authentic messages. 
We then evaluate the proposed scheme by analyzing real-world social network data, simulating communication scenarios, and through an informal security analysis. <s> BIB006 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> We describe a novel Distributed Key Establishment (DKE) protocol in Disruption (Delay) Tolerant Location Based Social Wireless Sensor and Actor Networks (DTLBS-WSAN). In DKE, we propose that sensor nodes use neighboring signatures to establish their keys. Pre-distributed keys are used by actor nodes to strengthen communication security. We show that nodes can get guaranteed security when actors are connected and cover the network area and high security confidence can be achieved even without actor nodes when the adversary (malicious node) density is small. In DTLBS-WSANs, key (certificate) establishment, storage and look up are performed in a distributed way. Multiple copies of a certificate can be stored at nodes to improve key security and counter the adverse impact of network disruption. <s> BIB007 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> Opportunistic networks are a new and specific type of mobile peer-to-peer networks where end-to-end connectivity cannot be assumed. These networks present compelling challenges, especially from a security perspective, as interactive protocols are infeasible in such environments. In this article, we focus on the problem of key management in the framework of content-based forwarding and opportunistic networks. After analysing this issue and identifying specific security threats such as Sybil attacks, we propose a specific key management scheme that enables the bootstrapping of local, topology-dependent security associations between a node and its neighbours along with the discovery of the neighbourhood topology, thanks to the use of pseudonym certificates and encapsulated signatures. <s> BIB008 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> A delay tolerant network (DTN) is a store carry and forward network characterized by highly mobile nodes, intermittent connectivity with frequent disruptions, limited radio range and physical obstructions. Emerging applications of DTN include rural DTN, vehicular DTN and pocket DTN. The development of DTN raises a number of security-related challenges due to inconsistent network access and unreliable end-to-end network path. One of the challenges is initial secure context establishment as it is unrealistic to assume that public key infrastructure (PKI) is always globally present and available, hence, the public key management becomes an open problem for DTN. In this paper, for the first time, we propose a dynamic virtual digraph (DVD) model for public key distribution study by extending graph theory and then present a public key distribution scheme for pocket DTN based on two-channel cryptography. By distinguishing between owners and carriers, public key exchange and authentication issues in the decentralized pocket DTN environment can be solved by a two-channel cryptography process and our simulation results have proven it. <s> BIB009 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> SUMMARY ::: Delay tolerant networks (DTNs) are resource-constrained dynamic networks where a continuous end-to-end connectivity is not always available. 
In such a challenging network, a fixed infrastructure may not be connected when a DTN is partitioned or the message delay in the network is large. Thus, the traditional public key infrastructure system and identity-based encryption (IBE) system are not suitable for DTNs because they rely on centralized infrastructures and require multiple round-trip interactions. To address this issue, we propose a distributed secret key generation system with self-certified identity (SCI-DKG) that does not require any private key generator and threshold cryptosystem. Initially, each node generates a private key and distributes an initial message including a self-certified identity and secret sharings to members in a DTN. Receivers independently authenticate the identity and extracts some encryption parameters corresponding to the identity from this initial message. We prove that SCI-DKG is chosen ciphertext secure in the standard model, and it can resist potential network attacks. Simulation results show that SCI-DKG has smaller delay and higher successful ratio of secret key generation compared with IBE and hierarchical IBE systems implemented in a DTN. Copyright © 2012 John Wiley & Sons, Ltd. <s> BIB010 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> A Delay Tolerant Network (DTN) is a network where nodes can be highly mobile, with long message delay times, forming dynamic and fragmented networks. Conventional centralised network security mechanisms are unsuitable in such networks, therefore distributed security solutions are more desirable in DTN implementations. Establishing effective trust in distributed systems with no centralised Public Key Infrastructure (PKI) such as the Pretty Good Privacy (PGP) scheme, usually requires human intervention. In this paper, we build and compare different decentralised trust systems for autonomous DTN. We utilise a public key distribution model based on the Web of Trust principle, and employ a simple Leverage of Common Friends (LCF) trust system to establish initial trust in autonomous DTN. We compare this system with two other scenarios (no trust and random trust) for autonomous establishment of initial trust. Comparisons are based on the time it takes to disperse the trust and resilience of the system against a malicious node distributing malicious and False Public Keys. Our results show that the LCF trust system mitigates the distribution of false malicious public keys by 40%. LCF takes 44% longer to distribute 50% of the public keys compared when using no trust system, but is 16% faster in comparison to the random trust method. <s> BIB011 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> To ensure the authenticity, integrity, and confidentiality of bundles, the in-transit Protocol Data Units of bundle protocol (BP) in space delay/disruption tolerant networks (DTNs), the Consultative Committee for Space Data Systems bundle security protocol (BSP) specification suggests four IPsec style security headers to provide four aspects of security services. However, this specification leaves key management as an open problem. Aiming to address the key establishment issue for BP, in this paper, we utilize a time-evolving topology model and two-channel cryptography to design efficient and noninteractive key exchange protocol. 
A time-evolving model is used to formally model the periodic and predetermined behavior patterns of space DTNs, and therefore, a node can schedule when and to whom it should send its public key. Meanwhile, the application of two-channel cryptography enables DTN nodes to exchange their public keys or revocation status information, with authentication assurance and in a noninteractive manner. The proposed scheme helps to establish a secure context to support for BSP, tolerating high delays, and unexpected loss of connectivity of space DTNs. <s> BIB012 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Security Initialization <s> The abstract of this paper is to asure the authenticity, integrity, and confidentiality, the in-transit Protocol Data Units of bundle protocol (BP) in space delay/disruption tolerant networks (DTNs), the Consultative Committee for Space Data Systems bundle security protocol (BSP) specification declares four IP sec style security headers to provide four aspects of security services. In any way, this specification leaves key management as an open problem. Aiming to apply the key establishment issue for Bundle Protocol, in this journal, we utilize a time- evolving topology model and two channel cryptography to design efficient and non interactive key exchange protocol . A time-evolving model is used in formal manner model the periodic and set in advance behavior patterns of space DTNs, and therefore, a system can schedule when and to whom it should send its public key. Meantime, the application of two-channel cryptography enables DTN nodes to replace their public keys or revocation status information, with authentication assurance and in a non interactive manner. The proposed scheme helps to establish a secure environment to defend for BSP, tolerating high delays, and unexpected loss of connectivity of space DTNs. <s> BIB013
|
The first attempt of researchers to solve the problem of security initialization and key management in DTN was based on IBC rather than PKI. This was mainly due to the frequently disconnected nodes and the hostile nature of such networks. Works BIB001 BIB002 BIB003 are characteristic examples of this situation. More specifically, the authors in BIB001 proposed the first work based on IBC for key management in DTN. They state that the traditional PKI-based approach is unsuitable for DTNs due to their disconnected nature. Their work examines the practical aspects related to the deployment of DTN in remote rural and/or disconnected areas. This includes practices for both initial key establishment and roaming among different service providers. They propose a forward-secure Hierarchical Identity Based Cryptography (HIBC) scheme that, according to them, can be proved efficient and practical toward secure channel establishment, mutual authentication of parties, and revocation in DTNs. On the downside, as the authors admit, it is well-known that HIBC suffers from the problem of PKG compromise, whereby all the private keys generated for lower-level PKGs and users can be exposed. To bypass this problem, their work is founded on the assumption that the PKG is trusted and cannot be compromised. Moreover, their work is based on time-based keys (keys that rely on highly synchronized clocks across all entities), which can be a problem of practicality with respect to actual deployment BIB004 . Another work that evaluates IBC in the context of DTN and discusses the trade-offs between PKI and IBC is that in BIB002 . Specifically, the authors investigate how security in DTNs can be bootstrapped from an existing large-scale cellular security infrastructure. They describe how a PKG can verify whether or not a new principal has the right to a public identifier, in contrast to BIB001 . Moreover, in their work, they analysed the applicability of IBC in DTNs and found that it offers no significant advantage over traditional cryptography in terms of authentication. In BIB003 the authors propose an architecture based on HIBC that, according to them, provides end-to-end security services as well as the ability to have fine-grained revocation and access control. In addition, their scheme is claimed to offer efficient key distribution across DTN regions. One drawback of IBC-based works is that there is a need to check the IBC public parameters BIB005 . This is the same problem researchers tried to overcome in PKI with CA certificate verification. However, in BIB002 the authors argue that such a comparison is unfair. Another drawback of IBC is the difficulty of key revocation. The work in BIB006 focuses on the problem of initial secure context establishment in DTNs and proposes a method that allows users to leverage social contacts to exchange confidential and authentic messages. More specifically, if a node does not possess its peer's public key, then it can encrypt the message with the public keys of several nodes near the destination, in terms of either physical proximity or contact frequency. However, this algorithm has the problem of having to constantly maintain contact information for several nodes in the network, and therefore it does not scale well, or it may lead to deadlock if the destination currently has no neighboring nodes. The authors in BIB008 propose a local and self-organised key management scheme for opportunistic networks (OppNets).
They use pseudonym certificates and encapsulated signatures to enable the bootstrapping of local, topology-dependent security associations between a node and its neighbours, along with the discovery of the neighbourhood topology. The authors point out that IBC-based solutions are ill-suited to content-based communication and that self-organised solutions fit better. Their scheme consists of two phases: the setup/initialization phase and the key agreement phase. In BIB007 the authors describe a Distributed Key Establishment (DKE) protocol for location-based social wireless sensor and actor DTNs. Their mechanism uses a combination of key pre-distribution and neighbour key establishment to set up key pairs at nodes. To improve security and counter network disruptions, they also propose a distributed way to store public key certificates and the certificate revocation list (CRL). The authors in BIB010 recognise that both traditional PKI and IBC schemes are unsuitable for DTNs because they rely on centralised infrastructures and require multiple round-trip interactions. They propose a distributed secret key generation system with self-certified identity that does not require any PKG or threshold cryptosystem. This scheme is based on secret key cryptography (SKC). In BIB011 the authors build and compare different decentralised trust systems for implementation in autonomous DTN systems. They employ a key distribution model that is based on the Web of Trust (WoT) principle and compare it with two other decentralised methods. However, in such a model, if a highly trusted node is compromised, the entire model collapses. Various works have also proposed two-channel cryptography BIB009 BIB012 BIB013 as a candidate solution for DTN. Two-channel cryptography techniques were first introduced in and have several applications in constrained and infrastructure-less environments. The authors in BIB009 introduced a model for public key distribution, named Dynamic Virtual Digraph (DVD), which extends conventional graph theory. They also present a public key distribution scheme for pocket DTN based on two-channel cryptography. In BIB012 the authors propose a non-interactive key establishment scheme for the BSP focusing on space DTNs [39] . They use a time-evolving model based on the periodic and prearranged behaviour patterns of space DTNs. Based on this model, they were able to schedule when and where to send the corresponding public key. Another work based on BIB012 is that in . Specifically, the authors propose a scheduled key exchange mechanism for the BSP of space DTNs too. For their mechanism, they also use two-channel cryptography and a non-interactive public key exchange protocol to replace the traditional PKI. A more recent work that utilises a time-evolving topology model and two-channel cryptography to design a non-interactive key exchange protocol is that in BIB013 . In any case, all the aforementioned two-channel cryptography works rely on the security of the authenticated channel and on the strong assumption that the adversary has limited control over that channel; the basic pattern they share is sketched at the end of this subsection. Table 1 summarises all the aforementioned schemes for security initialization in DTN based on four criteria that are (mostly) common to all the surveyed works, namely cryptosystem, cryptographic protocol or method, area of application, and evaluation method.
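To illustrate the pattern shared by the two-channel schemes discussed above, the following is a short, hypothetical sketch (not taken from any of the surveyed protocols): the full public key travels over the insecure high-bandwidth channel, while only a short fingerprint is conveyed over the low-bandwidth authenticated channel, and the receiver accepts the key only if the two agree. The fingerprint length, hash function, and byte formats are assumptions made purely for illustration.

```python
# Hypothetical two-channel public key exchange: the key itself goes over the
# insecure data channel, a short digest over the authenticated channel.
import hashlib
import hmac

FINGERPRINT_LEN = 8  # bytes; an arbitrary choice for this illustration

def fingerprint(public_key_bytes: bytes) -> bytes:
    """Short digest intended for the narrow-band authenticated channel."""
    return hashlib.sha256(public_key_bytes).digest()[:FINGERPRINT_LEN]

def accept_public_key(key_from_data_channel: bytes,
                      fp_from_auth_channel: bytes) -> bool:
    """Accept the key only if it matches the authenticated fingerprint."""
    return hmac.compare_digest(fingerprint(key_from_data_channel),
                               fp_from_auth_channel)

# Sender: publish the public key on the data channel and its fingerprint on
# the authenticated channel, e.g. alongside scheduled contact information.
sender_public_key = b"example DER-encoded public key bytes"
fp = fingerprint(sender_public_key)

# Receiver: cross-check the two channels before trusting the key.
assert accept_public_key(sender_public_key, fp)
assert not accept_public_key(b"tampered public key", fp)
```

As noted above, the security of such a construction rests entirely on the authenticity of the second channel and on the fingerprint being long enough to resist forgery.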
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Two-Party Communication <s> DakNet provides extraordinarily low-cost digital communication, letting remote villages leapfrog past the expense of traditional connectivity solutions and begin development of a full-coverage broadband wireless infrastructure. What is the basis for a progressive, market-driven migration from e-governance to universal broadband connectivity that local users will pay for? DakNet, an ad hoc network that uses wireless technology to provide asynchronous digital connectivity, is evidence that the marriage of wireless and asynchronous service may indeed be the beginning of a road to universal broadband connectivity. DakNet has been successfully deployed in remote parts of both India and Cambodia at a cost two orders of magnitude less than that of traditional landline solutions. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Two-Party Communication <s> Endpoints in a delay tolerant network (DTN) [K. Fall, 2003] must deal with long periods of disconnection, large end-to-end communication delays, and opportunistic communication over intermittent links. This makes traditional security mechanisms inefficient and sometimes unsuitable. We study three specific problems that arise naturally in this context: initiation of a secure channel by a disconnected user using an opportunistic connection, mutual authentication over an opportunistic link, and protection of disconnected users from attacks initiated by compromised identities. We propose a security architecture for DTN based on hierarchical identity based cryptography (HIBC) that provides efficient and practical solutions to these problems. <s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Two-Party Communication <s> A delay tolerant network (DTN) is a store and forward network where end-to-end connectivity is not assumed and where opportunistic links between nodes are used to transfer data. An emerging application of DTNs are rural area DTNs, which provide Internet connectivity to rural areas in developing regions using conventional transportation mediums, like buses. Potential applications of these rural area DTNs are e-governance, telemedicine and citizen journalism. Therefore, security and privacy are critical for DTNs. Traditional cryptographic techniques based on PKI-certified public keys assume continuous network access, which makes these techniques inapplicable to DTNs. We present the first anonymous communication solution for DTNs and introduce a new anonymous authentication protocol as a part of it. Furthermore, we present a security infrastructure for DTNs to provide efficient secure communication based on identity-based cryptography. We show that our solutions have better performance than existing security infrastructures for DTNs. <s> BIB003 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Two-Party Communication <s> This document describes the end-to-end protocol, block formats, and ::: abstract service description for the exchange of messages (bundles) in ::: Delay Tolerant Networking (DTN). This document was produced within ::: the IRTF's Delay Tolerant Networking Research Group (DTNRG) and ::: represents the consensus of all of the active contributors to this ::: group. See http://www.dtnrg.org for more information. This memo ::: defines an Experimental Protocol for the Internet community. 
<s> BIB004 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Two-Party Communication <s> Opportunistic Networking holds a great deal of potential for making communications easier and more flexible in pervasive assistive environments. However, security and privacy must be addressed to make these communications acceptable with respect to protecting patient privacy. In this position paper, we propose Privacy-Enhanced Opportunistic Networking (PEON), a system for using opportunistic networking in privacy-preserving way. PEON uses concepts from anonymous communications, rerouting messages through groups of peer nodes to hide the relation between the sources and destinations. By modifying group size, we can trade off between privacy and communication overhead. Further, individual nodes can make a similar trade off by changing the number of intermediate groups. We describe the cryptographic tools needed to facilitate changes in group membership and the design of simulation experiments that we will conduct to evaluate the overhead and effectiveness of our approach. <s> BIB005 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Two-Party Communication <s> Due to the rapid development in technology, every network, application needs full time connectivity without disruption and delays. The Delay/Disruption Tolerant Networking (DTN) concept is suitable for applications such as rural and disaster areas networks, animal and environmental monitoring plus others. However, due to the shared and unsecured nature of such challenged networks a good cryptographic framework needed in DTN. Identity Based Cryptography (IBC) compares favorably with traditional public key cryptography while generating public key on a fly as required. In this paper, we will provide anonymity solution in DTN using IBC. This has the advantage over public key cryptography with respect to end-to-end confidentiality. Also we use pseudonyms to provide anonymity and hide the identity of the end user. <s> BIB006 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Two-Party Communication <s> Most of the existing authentication and key agreement protocols for delay tolerant networks are not designed for protecting privacy. In this paper, an authentication and key agreement protocol with anonymity based on combined public key is proposed. The proposed protocol eliminates the need of public key digital certificate on-line retrieval, so that any on-line trusted third party is no longer required, only needs an off-line public information repository and key generation center; and realizes mutual authentication and key agreement with anonymity between two entities. We show that the proposed protocol is secure for all probabilistic polynomial-time attackers, and achieves good security properties, including authentication, anonymity, and confidentiality and so on. <s> BIB007 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Two-Party Communication <s> To ensure the authenticity, integrity, and confidentiality of bundles, the in-transit Protocol Data Units of bundle protocol (BP) in space delay/disruption tolerant networks (DTNs), the Consultative Committee for Space Data Systems bundle security protocol (BSP) specification suggests four IPsec style security headers to provide four aspects of security services. However, this specification leaves key management as an open problem. 
Aiming to address the key establishment issue for BP, in this paper, we utilize a time-evolving topology model and two-channel cryptography to design efficient and noninteractive key exchange protocol. A time-evolving model is used to formally model the periodic and predetermined behavior patterns of space DTNs, and therefore, a node can schedule when and to whom it should send its public key. Meanwhile, the application of two-channel cryptography enables DTN nodes to exchange their public keys or revocation status information, with authentication assurance and in a noninteractive manner. The proposed scheme helps to establish a secure context to support for BSP, tolerating high delays, and unexpected loss of connectivity of space DTNs. <s> BIB008
|
Identity-Based Cryptography (IBC). As already mentioned, IBC has been examined as a possible solution for key management in DTN. In this context, we can single out the works in BIB003 BIB006 BIB007 . In BIB003 , the authors introduce an anonymous authentication scheme. They also propose a secure communication solution based on the non-interactive Sakai-Ohgishi-Kasahara (SOK) key agreement scheme. This scheme is based on Boneh-Franklin HIBC, for greater scalability and signature verification. Also, according to the authors, it is more efficient compared to BIB002 , as it incurs no additional routing overhead and can optionally be made non-interactive. Nevertheless, this scheme is very tightly tied to the DakNet model BIB001 , and it assumes a strongly trusted central authority BIB005 . Instead, for DTN a more general approach is required, where a trusted central authority cannot be assumed. The work in BIB006 is based on IBC too. The authors present a method using IBC and pseudonyms to securely transfer medical data from rural areas to a hospital in a remote city. They also state that there is no need to frequently check the public parameters, as suggested in BIB002 . More recently, the author in presents a key distribution protocol for infrastructure-less networks, which is based on the BP BIB004 and, more specifically, on the BSP [39] . The BP is used to send application data across a DTN network, while the BSP provides data integrity and confidentiality services for the BP [39] . It can be argued that with this non-interactive scheme, cryptographic keys can be established for all the BSP mechanisms. The derived keys are then used with the BSP-supported algorithms, for instance HMAC-SHA1 for authentication, RSA for signatures, and AES for encryption (a small illustration of such use is sketched below). However, this scheme assumes a pre-distributed key. The authors in BIB007 present an anonymous combined public key (CPK) based protocol. CPK techniques integrate public key cryptography with IBC. Their CPK cryptosystem is based on elliptic curve cryptography (ECC) and, compared to IBC, eliminates the need for on-line public key certificate retrieval. In fact, this work is also based on IBC, which has proved impractical for DTNs. In addition, IBC solutions are undesirable due to the intractability of problems such as PKG parameter distribution, private key revocation, identity name space management, key escrow, and so forth BIB008 . PKG parameter distribution is the main problem in IBC. More specifically, a single PKG has to generate private keys for all the users and also establish secure channels to transmit them, which is a burdensome job in large networks. The use of a hierarchy in HIBC alleviates the aforementioned problem, making the process faster and more secure in the case of key compromise. Table 2 presents a comparison of IBC-based key establishment schemes proposed for DTNs using the same criteria as in Table 1 .
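As a rough illustration of how such derived or pre-distributed keys could be used, the sketch below computes and verifies an HMAC-SHA1 integrity value over a bundle payload, loosely in the spirit of the BSP authentication blocks mentioned above. The key, the payload framing, and the function names are illustrative assumptions and do not reproduce the actual BSP block formats.

```python
# Illustrative use of a shared symmetric key for bundle authentication,
# loosely in the spirit of BSP's HMAC-SHA1 based blocks (not the real format).
import hashlib
import hmac

def authenticate_bundle(shared_key: bytes, bundle_payload: bytes) -> bytes:
    """Sender: compute an HMAC-SHA1 tag over the bundle payload."""
    return hmac.new(shared_key, bundle_payload, hashlib.sha1).digest()

def verify_bundle(shared_key: bytes, bundle_payload: bytes, tag: bytes) -> bool:
    """Receiver or next hop: recompute the tag and compare in constant time."""
    expected = hmac.new(shared_key, bundle_payload, hashlib.sha1).digest()
    return hmac.compare_digest(expected, tag)

# Example with an assumed pre-distributed 16-byte key (placeholder value;
# in practice the key would come from one of the establishment schemes above).
key = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
payload = b"example bundle payload"
tag = authenticate_bundle(key, payload)
assert verify_bundle(key, payload, tag)
assert not verify_bundle(key, b"modified bundle payload", tag)
```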
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Public Key Infrastructure (PKI) & Pretty Good Privacy (PGP) <s> Traditional approaches for communication security do not work well in disruption- and delay-tolerant networks (DTNs). Recently, the use of identity-based cryptography (IBC) has been proposed as one way to help solve some of the DTN security issues. We analyze the applicability of IBC in this context and conclude that for authentication and integrity, IBC has no significant advantage over traditional cryptography, but it can indeed enable better ways of providing confidentiality. Additionally, we show a way of bootstrapping the needed security associations for IBC use from an existing authentication infrastructure. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Public Key Infrastructure (PKI) & Pretty Good Privacy (PGP) <s> In the last few years, Delay/Disruption Tolerant Networking has grown to a healthy research topic because of its suitability for challenged environments characterized by heterogeneity, long delay paths and unpredictable link disruptions. This paper presents a DTN security architecture that focuses on the requirements for lightweight key management; lightweight AAA-like architecture for authentication/authorisation; resilience to Denial of Service attacks and user anonymity. <s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Public Key Infrastructure (PKI) & Pretty Good Privacy (PGP) <s> Key exchange is considered to be a challenging problem in Delay Tolerant Networks (DTNs) operating in space environments. In this paper we investigate the options for integrating key exchange protocols with the Bundle Protocol. We demonstrate this by using a one-pass key establishment protocol. In doing so, we also highlight the peculiarities, issues and opportunities a DTN network maintains, which heavily influences the underlying security solution. <s> BIB003 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Public Key Infrastructure (PKI) & Pretty Good Privacy (PGP) <s> In the past, security protocols including key transport protocols are designed with the assumption that there are two parties communication with each other and an adversary tries to intercept this communication. In Delay/Disruption Tolerant Networking (DTN), packet delivery relies on intermediate parties in the communication path to store and forward the packets. DTN security architecture requires that integrity and authentication should be verified at intermediate nodes as well as at end nodes and confidentiality should be maintained for end communicating parties. This requires new security protocols and key management to be defined for DTN as traditional end-to-end security protocols will not work with DTN. To contribute towards solving this problem, we propose a novel Efficient and Scalable Key Transport Scheme (ESKTS) to transport the symmetric key generated at a DTN node to other communicating body securely using public key cryptography and proxy signatures. It is unique effort to design a key transport protocol in compliance with DTN architecture. ESKTS ensures that integrity and authentication is achieved at hop-by-hop level as well as end-to-end level. It also ensures end-to-end confidentiality and freshness for end communicating parties. 
This scheme provides a secure symmetric key transport mechanism based on public key cryptography to exploit the unique bundle buffering characteristics of DTN to reduce communication and computation cost . <s> BIB004 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Public Key Infrastructure (PKI) & Pretty Good Privacy (PGP) <s> Security solutions are not easily adapted to the DTN paradigm, for example, traditional key management solutions cannot be applied in such environments. A key management scheme for DTNs must consider the dynamic topology, be self-organized and distributed and be effective in the presence of intermittent connections. There are just a few key management scheme for DTNs. Nevertheless, all of them have serious issues. They generate a huge overhead, have a single point of failure, are centralized or do not provide any security against very simple attacks. This work introduces a new key management scheme for DTNs based on chains of digital signatures (DSC-KM - Digital Signature Chains Key Management Scheme). It is decentralized, fully distributed, and does not have a single point of failure. It allows nodes to leave and joint the network at any time, works in the presence of intermittent connections, has a very small communication and memory overhead. It is based on chains of digital signatures and on a very simple chain exchange mechanism. According to simulation results, it is able to correctly and securely disseminate all certificates through the network in a short period, while maintaining the size of the chains very small. Moreover, it is able to resist to several attacks to the key management infrastructure. <s> BIB005
|
An analysis of the applicability of IBC in DTN systems in BIB001 concluded that, for authentication, IBC has no significant advantage over traditional cryptography. Works such as BIB002 BIB003 BIB004 are based on the classic PKI because it is well examined and widely recognised. However, PKI schemes are associated with limitations such as server unavailability and the overhead of cryptographic operations. The authors in BIB002 proposed a DTN security architecture, which focuses on different key management parameters based on proxy certificates and PKI. Their method supports both hop-by-hop and end-to-end authentication, with the aim of ensuring data correctness before forwarding by means of the BAB. In their work, they identify that a single key management scheme does not suffice for DTNs because of the overlaid heterogeneous networks. The authors in BIB003 propose a one-pass key establishment protocol for space DTNs. Their protocol is based on an adaptation of the Horsters-Michels-Petersen (HMP) protocol. More specifically, they use asymmetric authenticated encryption with message recovery to encrypt the parameters of the new key. In their method, they inject the protocol messages into the bundle payload as part of the message. In addition, an encryption decision-making workflow diagram of a DTN custodian node is presented. The authors in propose a traditional-cryptography-based authentication scheme specially designed for satellite DTNs. According to the authors, the proposed scheme does not depend on the network administrator's availability during post-network-authentication communication and facilitates bundle processing by the recipient in the absence of connectivity. More recently, the work in BIB004 presents an Efficient and Scalable Key Transport Scheme (ESKTS) based on public key cryptography and proxy signatures. This scheme ensures that integrity and authentication are achieved at the hop-by-hop as well as the end-to-end level. It also achieves end-to-end confidentiality and freshness for the end communicating parties. In addition, the authors in propose a secure way of distributing cryptographic keys to the DTN nodes. The proposed DTN security architecture offers a way of key distribution and protection of nodes against possible threats and attacks, while at the same time affording all the security features of the BSP. Last but not least, the authors in BIB005 propose a decentralised, distributed scheme that is based on the Digital Signature Chains Key Management Scheme (DSC-KM) and Pretty Good Privacy (PGP). According to the authors, the strong point of their scheme is that it does not have a single point of failure, in contrast to other proposals that follow the centralised model. As with the previous subsections, Table 3 presents a comparison of PKI and PGP-based schemes proposed for DTNs.
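To make the hop-by-hop versus end-to-end distinction discussed above concrete, the following minimal Python sketch mimics a BAB-like per-link check alongside a PIB-like end-to-end check. It is only an illustration under the assumption that symmetric keys are already in place on each link and between source and destination; it is not the ESKTS protocol or the BSP wire format, and all key names are hypothetical.

```python
import hashlib
import hmac

def mac(key: bytes, data: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the given data."""
    return hmac.new(key, data, hashlib.sha256).digest()

payload = b"bundle payload"
e2e_key = b"source-destination key"                 # hypothetical end-to-end key
hop_keys = [b"key hop0-hop1", b"key hop1-hop2"]     # hypothetical per-link keys

# End-to-end integrity (PIB-like): added once by the source, checked only by the destination.
e2e_tag = mac(e2e_key, payload)

# Hop-by-hop authentication (BAB-like): added and verified anew on every link,
# then stripped before the bundle is forwarded to the next hop.
for link_key in hop_keys:
    bab_tag = mac(link_key, payload + e2e_tag)                             # sending node of this hop
    assert hmac.compare_digest(bab_tag, mac(link_key, payload + e2e_tag))  # receiving node of this hop

# The destination finally verifies the end-to-end tag produced by the source.
assert hmac.compare_digest(e2e_tag, mac(e2e_key, payload))
print("all integrity checks passed")
```

The point of the sketch is simply that the BAB-like tag is recomputed on every link, whereas the PIB-like tag travels unchanged from source to destination.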
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Group Communication <s> The increased demand for mobile communication and use of mobile devices in high-latency, resource impoverished environments has spurred the development and growth of Delay-Tolerant Networks (DTN). DTNs aim to provide interoperability between a range of heterogeneous networks, operating under resource-constrained circumstances and traditional infrastructure networks such as the Internet. Because of the circumstances, DTNs possess some interesting characteristics that make a traditional end-to-end security paradigm unsuitable and increase the value of the overlay's resources. Controlling access to overlay resources and providing for secure group communications over unknown intermediate networks is essential. We propose a novel solution based on previous work in secure group communications using key-graphs and in extension to work on scalable access authorization in self-organizing overlays to provide a scalable mechanism for access control and secure group communications in DTNs. Since resources are especially limited, our implementation focuses on minimizing the traffic on the overlay associated with the maintenance of our solution. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Group Communication <s> Delay Tolerant Network (DTN) has the character of long intermittent connectivity and communication delays, which makes the existing group key management mechanism can not be effectively applied. We proposed a new Chinese Remainder Theorem based group key management mechanism for DTN. Comparing with the early scheme, the existing joined node can derive a new group key from the old group key using hash function in the new user join phase, so the server does not need to broadcast any key update message for the newly user join, and only broadcasts one message for user leave. Meanwhile, aiming at the forward security problem in the many-to-many scenarios, the time-based group key management scheme is introduced. The simulation results show that the group key update success rate, latency and message authentication success rate for our scheme is better than CRGK and LKH schemes. <s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Group Communication <s> In deep space delay tolerant networks rekeying expend vast amounts of energy and delay time as a reliable end-to-end communication is very difficult to be available between members and key management center. In order to deal with the question, this paper puts forwards an autonomic group key management scheme for deep space DTN, in which a logical key tree based on one-encryption-key multi-decryption-key key protocol is presented. Each leaf node with a secret decryption key corresponds to a network member and each non-leaf node corresponds to a public encryption key generated by all leaf node's decryption keys that belong to the non-leaf node's sub tree. In the proposed scheme, each legitimate member has the same capability of modifying public encryption key with himself decryption key as key management center, so rekeying can be fulfilled successfully by a local leaving or joining member in lack of key management center support. In the security aspect, forward security and backward security are guaranteed. 
In the efficiency aspect, our proposed scheme's rekeying message cost is half of LKH scheme when a new member joins, furthermore in member leaving event a leaving member makes tradeoff between computation cost and message cost except for rekeying message cost is constant and is not related to network scale. Therefore, our proposed scheme is more suitable for deep space DTN than LKH and the localization of rekeying is realized securely. <s> BIB003 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Group Communication <s> Background/Objectives: With the unequalled growth in Wireless networks, the associated fields of group application have also seen growth like multimedia teleconferencing, stock quoting, and distance education. Security in group applications need to be maintained which cannot be accomplished by wireless networks and IP multicast, encryptions by a shared group key is required. In this paper we review on network-independent group key management, which can be classified into three types: centralized, decentralized and distributed group key management protocols. Improvements/Methods: We Analyse key management algorithms in wireless networks and compared various factors of performance like storage overhead, commutation overhead and communication overhead during the join and leave process in groups in wireless group key management approaches, with a set of assessment parameters. In this paper we find various parameters to consider when designing a key management algorithm in wireless networks for mobile environment. Findings: It is important to guarantee the safety of this group key and to ensure the group communication. In spite of the fact that group communication encryption can be utilized to secure messages exchanged among group individuals, distributing the cryptographic keys turn into an issue. However, various significant parameters are used to analysis security requirement of application, when the key management algorithm is developed. Strength of key management algorithms is minimization of key cost during the join/leave process of members in the groups. Applications: In addition, we identify the relationships between the various security issues of wireless group key management like performance, security and network compatibility with regard to the algorithms discussed. It will give better idea about designing key management algorithms with minimum cost like storage, communication and computation parameters when the members join/leave from the groups. It will enable them to take better decisions. <s> BIB004
|
Security in group communications is a highly desirable feature in military and law enforcement DTN scenarios, and the need for confidentiality in group communication grows on a day-by-day basis . As presented in Table 4 , only a handful of works BIB001 BIB002 BIB003 cope with group key management, and specifically with rekeying, in DTNs. Before delving into the details of the aforementioned works, it should be noted that group key management protocols have to take into account various security requirements, such as forward and backward secrecy. More specifically, the most important security requirements for a group key management protocol are BIB004 :
• Forward secrecy (FS)-requires that users who left the group and know a contiguous subset of old group keys cannot discover subsequent group keys. This ensures that a member cannot decrypt data sent immediately after it leaves the group.
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> • <s> The increased demand for mobile communication and use of mobile devices in high-latency, resource impoverished environments has spurred the development and growth of Delay-Tolerant Networks (DTN). DTNs aim to provide interoperability between a range of heterogeneous networks, operating under resource-constrained circumstances and traditional infrastructure networks such as the Internet. Because of the circumstances, DTNs possess some interesting characteristics that make a traditional end-to-end security paradigm unsuitable and increase the value of the overlay's resources. Controlling access to overlay resources and providing for secure group communications over unknown intermediate networks is essential. We propose a novel solution based on previous work in secure group communications using key-graphs and in extension to work on scalable access authorization in self-organizing overlays to provide a scalable mechanism for access control and secure group communications in DTNs. Since resources are especially limited, our implementation focuses on minimizing the traffic on the overlay associated with the maintenance of our solution. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> • <s> A Delay-Tolerant Network (DTN) Architecture (Request for Comment, RFC-4838) and Bundle Protocol Specification, RFC-5050, have been proposed for space and terrestrial networks. Additional security specifications have been provided via the Bundle Security Specification (currently a work in progress as an Internet Research Task Force internet-draft) and, for link-layer protocols applicable to Space networks, the Licklider Transport Protocol Security Extensions. This document provides a security analysis of the current DTN RFCs and proposed security related internet drafts with a focus on space-based communication networks, which is a rather restricted subset of DTN networks. Note, the original focus and motivation of DTN work was for the ‘Interplanetary Internet’. This document does not address general store-and-forward network overlays, just the current work being done by the Internet Research Task Force (IRTF) and the Consultative Committee for Space Data Systems (CCSDS) Space Internetworking Services Area (SIS) - DTN working group under the ‘DTN’ and ‘Bundle’ umbrellas. However, much of the analysis is relevant to general store-and-forward overlays. 12 <s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> • <s> Delay Tolerant Network (DTN) has the character of long intermittent connectivity and communication delays, which makes the existing group key management mechanism can not be effectively applied. We proposed a new Chinese Remainder Theorem based group key management mechanism for DTN. Comparing with the early scheme, the existing joined node can derive a new group key from the old group key using hash function in the new user join phase, so the server does not need to broadcast any key update message for the newly user join, and only broadcasts one message for user leave. Meanwhile, aiming at the forward security problem in the many-to-many scenarios, the time-based group key management scheme is introduced. The simulation results show that the group key update success rate, latency and message authentication success rate for our scheme is better than CRGK and LKH schemes. 
<s> BIB003 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> • <s> A delay tolerant network (DTN) is a store carry and forward network characterized by highly mobile nodes, intermittent connectivity with frequent disruptions, limited radio range and physical obstructions. Emerging applications of DTN include rural DTN, vehicular DTN and pocket DTN. The development of DTN raises a number of security-related challenges due to inconsistent network access and unreliable end-to-end network path. One of the challenges is initial secure context establishment as it is unrealistic to assume that public key infrastructure (PKI) is always globally present and available, hence, the public key management becomes an open problem for DTN. In this paper, for the first time, we propose a dynamic virtual digraph (DVD) model for public key distribution study by extending graph theory and then present a public key distribution scheme for pocket DTN based on two-channel cryptography. By distinguishing between owners and carriers, public key exchange and authentication issues in the decentralized pocket DTN environment can be solved by a two-channel cryptography process and our simulation results have proven it. <s> BIB004 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> • <s> Opportunistic networks are a new and specific type of mobile peer-to-peer networks where end-to-end connectivity cannot be assumed. These networks present compelling challenges, especially from a security perspective, as interactive protocols are infeasible in such environments. In this article, we focus on the problem of key management in the framework of content-based forwarding and opportunistic networks. After analysing this issue and identifying specific security threats such as Sybil attacks, we propose a specific key management scheme that enables the bootstrapping of local, topology-dependent security associations between a node and its neighbours along with the discovery of the neighbourhood topology, thanks to the use of pseudonym certificates and encapsulated signatures. <s> BIB005 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> • <s> A Delay Tolerant Network (DTN) is a network where nodes can be highly mobile, with long message delay times, forming dynamic and fragmented networks. Conventional centralised network security mechanisms are unsuitable in such networks, therefore distributed security solutions are more desirable in DTN implementations. Establishing effective trust in distributed systems with no centralised Public Key Infrastructure (PKI) such as the Pretty Good Privacy (PGP) scheme, usually requires human intervention. In this paper, we build and compare different decentralised trust systems for autonomous DTN. We utilise a public key distribution model based on the Web of Trust principle, and employ a simple Leverage of Common Friends (LCF) trust system to establish initial trust in autonomous DTN. We compare this system with two other scenarios (no trust and random trust) for autonomous establishment of initial trust. Comparisons are based on the time it takes to disperse the trust and resilience of the system against a malicious node distributing malicious and False Public Keys. Our results show that the LCF trust system mitigates the distribution of false malicious public keys by 40%. 
LCF takes 44% longer to distribute 50% of the public keys compared when using no trust system, but is 16% faster in comparison to the random trust method. <s> BIB006 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> • <s> In deep space delay tolerant networks rekeying expend vast amounts of energy and delay time as a reliable end-to-end communication is very difficult to be available between members and key management center. In order to deal with the question, this paper puts forwards an autonomic group key management scheme for deep space DTN, in which a logical key tree based on one-encryption-key multi-decryption-key key protocol is presented. Each leaf node with a secret decryption key corresponds to a network member and each non-leaf node corresponds to a public encryption key generated by all leaf node's decryption keys that belong to the non-leaf node's sub tree. In the proposed scheme, each legitimate member has the same capability of modifying public encryption key with himself decryption key as key management center, so rekeying can be fulfilled successfully by a local leaving or joining member in lack of key management center support. In the security aspect, forward security and backward security are guaranteed. In the efficiency aspect, our proposed scheme's rekeying message cost is half of LKH scheme when a new member joins, furthermore in member leaving event a leaving member makes tradeoff between computation cost and message cost except for rekeying message cost is constant and is not related to network scale. Therefore, our proposed scheme is more suitable for deep space DTN than LKH and the localization of rekeying is realized securely. <s> BIB007 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> • <s> To ensure the authenticity, integrity, and confidentiality of bundles, the in-transit Protocol Data Units of bundle protocol (BP) in space delay/disruption tolerant networks (DTNs), the Consultative Committee for Space Data Systems bundle security protocol (BSP) specification suggests four IPsec style security headers to provide four aspects of security services. However, this specification leaves key management as an open problem. Aiming to address the key establishment issue for BP, in this paper, we utilize a time-evolving topology model and two-channel cryptography to design efficient and noninteractive key exchange protocol. A time-evolving model is used to formally model the periodic and predetermined behavior patterns of space DTNs, and therefore, a node can schedule when and to whom it should send its public key. Meanwhile, the application of two-channel cryptography enables DTN nodes to exchange their public keys or revocation status information, with authentication assurance and in a noninteractive manner. The proposed scheme helps to establish a secure context to support for BSP, tolerating high delays, and unexpected loss of connectivity of space DTNs. <s> BIB008 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> • <s> Security solutions are not easily adapted to the DTN paradigm, for example, traditional key management solutions cannot be applied in such environments. A key management scheme for DTNs must consider the dynamic topology, be self-organized and distributed and be effective in the presence of intermittent connections. There are just a few key management scheme for DTNs. Nevertheless, all of them have serious issues. 
They generate a huge overhead, have a single point of failure, are centralized or do not provide any security against very simple attacks. This work introduces a new key management scheme for DTNs based on chains of digital signatures (DSC-KM - Digital Signature Chains Key Management Scheme). It is decentralized, fully distributed, and does not have a single point of failure. It allows nodes to leave and joint the network at any time, works in the presence of intermittent connections, has a very small communication and memory overhead. It is based on chains of digital signatures and on a very simple chain exchange mechanism. According to simulation results, it is able to correctly and securely disseminate all certificates through the network in a short period, while maintaining the size of the chains very small. Moreover, it is able to resist to several attacks to the key management infrastructure. <s> BIB009
|
• Backward secrecy (BS)-mandates that a new user that joins the group and knows a contiguous subset of group keys cannot discover preceding group keys. This ensures that a member cannot decrypt data sent before it joins the group.
• Collusion freedom (CF)-requires that any set of fraudulent users who hold much information about past keys should be incapable of deducing the currently used group key.
• Key independence (KI)-requires that a passive adversary who knows any proper subset of group keys cannot compromise other past or future group keys. That is, the combination of backward and forward secrecy yields key independence.
The first attempt at group key management in DTNs is presented in BIB001 . Specifically, the authors proposed a group-oriented security solution for DTNs that provides access control and secure group communications. They suggest a centralised group key management mechanism based on the Logical Key Hierarchy (LKH). Group key management in DTNs has been studied in BIB003 as well. The proposed protocol capitalizes on the Chinese Remainder Theorem (CRT). The concept of key lifetime is also introduced to alleviate the forward security problem in many-to-many DTN communication scenarios. In addition, the authors suggest that group key management for DTNs should use stateless rather than stateful schemes such as LKH, because there is no need for users to possess any previous keys. On the downside, the drawback of the LKH scheme is that whenever a user joins or leaves, the key tree structure of the group has to be rearranged and the logical key at each ancestor node has to be recomputed; specifically, the computation cost grows with the network scale BIB007 (a minimal sketch of this rekeying cost is given after the list below). This is actually the reason why the LKH scheme is unsuitable for space DTNs. More recently, another research work on group key management is given in BIB007 . More precisely, the authors propose an autonomic group key management (AGKM) scheme based on a one-encryption-key multi-decryption-key (OMPK) key protocol for deep space DTNs. In this work, the authors also prove the forward security, backward security, passive security, and key independence qualities of their proposed protocol. In terms of efficiency, their scheme seems to produce a smaller penalty than other proposed schemes (e.g., LKH), making it more suitable for deep space DTNs. Due to the lack of central key management center support, rekeying can be attained by a local leaving or joining user. Last but not least, the work in proposed a scheme which is based on a modified version of the Chinese Remainder Theorem. By shifting more computing load onto the key server, their scheme optimizes the number of rekey broadcast messages. The Opportunistic Networking Environment (ONE) simulator is used for the evaluation, and the results suggest that this scheme outperforms both the LKH and the Chinese-Remainder-based group key schemes. More precisely, this scheme does not broadcast any key update message in case of user join or leave, thus making it very efficient for secure communication in DTNs. Their work reduced the complexity of a user leave from O(n) to constant O(1).
• No TTP is required-Works that rely on PGP BIB006 BIB009 and two-channel cryptography BIB004 are self-organised BIB005 BIB008 and do not require a TTP.
• Multicast Security-Currently, there is no mechanism to distinguish between a multicast and an anycast endpoint. The DTN security architecture does not address at all the security aspects of enabling a DTN node to register with a particular multicast or anycast endpoint identifier.
• Performance Issues-Security within a DTN imposes both bandwidth utilization costs on the communication links and computational costs at the nodes. In addition, there may be certain limitations regarding how much CPU, storage, energy, and so on can be devoted to security, and the computation cost will undoubtedly depend on the underlying algorithms and their associated parameters.
• Naming-DTN naming is a hard open issue to cope with BIB002 . For instance, how names are to be used in routing, and how this will be mapped to the underlying routing of each convergence-layer network, remains unclear. A properly constructed naming system can aid in simplifying both routing and security. That is, for security and resource allocation reasons, one would overwhelmingly prefer to be able to uniquely identify a source as well as to determine which group or groups this source belongs to.
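The following minimal Python sketch illustrates the LKH rekeying cost referred to in the group key management discussion above. It is not an implementation of any of the cited schemes; it merely assumes a complete binary key tree with the group members at the leaves and shows that a single leave event forces the replacement of the O(log n) keys on the path from the leaving member's leaf to the root.

```python
# Minimal illustration of LKH rekeying cost (not any cited scheme's implementation).
# Members sit at the leaves of a complete binary key tree; when a member leaves,
# every key on the path from its leaf to the root must be replaced and re-encrypted
# under the keys of the sibling subtrees, i.e. O(log n) keys for an n-member group.

def keys_to_update_on_leave(leaf_index: int, num_leaves: int) -> list:
    """Return the heap-numbered internal key nodes (root = 1) that must be replaced."""
    node = num_leaves + leaf_index   # position of the leaf in heap numbering
    path = []
    while node > 1:
        node //= 2                   # move up to the parent key node
        path.append(node)
    return path

if __name__ == "__main__":
    # For a 16-member group, a single leave touches log2(16) = 4 keys.
    print(keys_to_update_on_leave(5, 16))   # -> [10, 5, 2, 1]
```

This per-event cost, multiplied by the rekey messages that must reach the remaining members, is precisely the overhead that the CRT-based and AGKM proposals discussed above try to reduce.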
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Key Revocation <s> A Delay Tolerant Network (DTN) is a network where nodes can be highly mobile, with long message delay times, forming dynamic and fragmented networks. Conventional centralised network security mechanisms are unsuitable in such networks, therefore distributed security solutions are more desirable in DTN implementations. Establishing effective trust in distributed systems with no centralised Public Key Infrastructure (PKI) such as the Pretty Good Privacy (PGP) scheme, usually requires human intervention. In this paper, we build and compare different decentralised trust systems for autonomous DTN. We utilise a public key distribution model based on the Web of Trust principle, and employ a simple Leverage of Common Friends (LCF) trust system to establish initial trust in autonomous DTN. We compare this system with two other scenarios (no trust and random trust) for autonomous establishment of initial trust. Comparisons are based on the time it takes to disperse the trust and resilience of the system against a malicious node distributing malicious and False Public Keys. Our results show that the LCF trust system mitigates the distribution of false malicious public keys by 40%. LCF takes 44% longer to distribute 50% of the public keys compared when using no trust system, but is 16% faster in comparison to the random trust method. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Key Revocation <s> We model a decentralised security credential revocation and replacement scheme.We utilise leverage of common friends trust concepts for trust transferral on keys.We propose a revocation scheme to provide entity confidence and trust transferral.We compare similar schemes on key and certificate removal and replacement metrics.Proposal distributes credentials 35% faster, slowed spoofed credentials by 50%. A Delay Tolerant Network (DTN) is a dynamic, fragmented, and ephemeral network formed by a large number of highly mobile nodes. DTNs are ephemeral networks with highly mobile autonomous nodes. This requires distributed and self-organised approaches to trust management. Revocation and replacement of security credentials under adversarial influence by preserving the trust on the entity is still an open problem. Existing methods are mostly limited to detection and removal of malicious nodes. This paper makes use of the mobility property to provide a distributed, self-organising, and scalable revocation and replacement scheme. The proposed scheme effectively utilises the Leverage of Common Friends (LCF) trust system concepts to revoke compromised security credentials, replace them with new ones, whilst preserving the trust on them. The level of achieved entity confidence is thereby preserved. Security and performance of the proposed scheme is evaluated using an experimental data set in comparison with other schemes based around the LCF concept. Our extensive experimental results show that the proposed scheme distributes replacement credentials up to 35% faster and spreads spoofed credentials of strong collaborating adversaries up to 50% slower without causing any significant increase on the communication and storage overheads, when compared to other LCF based schemes. 
<s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Key Revocation <s> Public-key infrastructure (PKI) is based on public-key certificates and is the most widely used mechanism for trust and key management. However, standard PKI validation and revocation mechanisms are considered major reasons for its unsuitability for delay/disruption tolerant networking (DTN). DTN requires mechanism to authenticate messages at each node before forwarding it in the network. So, certificate revocation lists (CRLs) being distributed in DTN network will need to be authenticated and validated for issuer certificate authority (CA) at each node. In this study, the authors propose new validation and revocation mechanism which is compliant with DTN semantics and protocols. This study also proposes a new design for CRL in compliance with standard PKI X.509 standard to make the proposed mechanism easy to implement for DTN. The new designed CRL is of reduced size as it contains fewer entries as compared with standard X.509 CRL and also arranges the revocation list in the form of hash table (map) to increase the searching efficiency. <s> BIB003
|
Until now, only a couple of works about key revocation in DTNs have been proposed. In particular, the authors in BIB003 propose a new validation and revocation mechanism as well as a new design for a lightweight CRL in compliance with standard PKI (X.509) for DTNs. The newly designed CRL is of reduced size and arranges the revocation list in the form of a hash table (map) data structure to increase the searching efficiency. Moreover, in BIB002 the authors present a secure and fully distributed key revocation and update scheme for DTNs called Distributed Signing (DS) revocations. It targets DTNs without a centralised PKI, ensures entity authentication, and utilises the LCF trust system presented in BIB001 . More specifically, neighbouring friendly nodes attest and vouch for a node's identity during the key revocation process. Table 5 summarizes and compares these two key revocation schemes using the same criteria as in the previous tables.
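As a rough illustration of the hash-table CRL idea in BIB003 (the class and field names below are hypothetical and do not follow the X.509 profile defined in that work), keeping revoked serial numbers in a hash map lets every forwarding node answer a revocation query in constant average time instead of scanning a list:

```python
# Illustrative sketch only: a CRL kept as a hash table keyed by certificate serial
# number, so that revocation checks at intermediate DTN nodes are O(1) on average.

class HashedCRL:
    def __init__(self, issuer: str):
        self.issuer = issuer
        self.revoked = {}                      # serial number -> revocation date

    def revoke(self, serial: int, date: str) -> None:
        self.revoked[serial] = date

    def is_revoked(self, serial: int) -> bool:
        return serial in self.revoked          # hash lookup, no list scan

crl = HashedCRL(issuer="CA-1")
crl.revoke(0x1A2B, "2017-03-01")
print(crl.is_revoked(0x1A2B), crl.is_revoked(0x9999))   # True False
```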
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Standardisation Efforts <s> This document describes the end-to-end protocol, block formats, and ::: abstract service description for the exchange of messages (bundles) in ::: Delay Tolerant Networking (DTN). This document was produced within ::: the IRTF's Delay Tolerant Networking Research Group (DTNRG) and ::: represents the consensus of all of the active contributors to this ::: group. See http://www.dtnrg.org for more information. This memo ::: defines an Experimental Protocol for the Internet community. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Standardisation Efforts <s> This document provides an overview of the security requirements and ::: mechanisms considered for delay tolerant networking security. It ::: discusses the options for protecting such networks and describes ::: reasons why specific security mechanisms were (or were not) chosen for ::: the relevant protocols. The entire document is informative, given its ::: purpose is mainly to document design decisions. <s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Standardisation Efforts <s> This document defines a streamlined bundle security protocol, which ::: provides data authentication, integrity, and confidentiality services ::: for the Bundle Protocol. Capabilities are provided to protect the ::: bundle payload, and additional data that may be included within the ::: bundle, along a single path through a network. <s> BIB003 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Standardisation Efforts <s> Delay/Disruption Tolerant Networking (DTN) introduces a network model ::: in which communications can be subject to long delays and/or ::: intermittent connectivity. DTN specifies the use of public-key ::: cryptography to secure the confidentiality and integrity of messages ::: in transit. The use of public-key cryptography posits the need for ::: certification of public keys and revocation of certificates. This ::: document formally defines the DTN key management problem and then ::: provides a high-level design solution for delay and disruption ::: tolerant distribution and revocation of public-key certificates along ::: with relevant design options and recommendations for design choices. <s> BIB004
|
Until now, a handful of Internet drafts have been released regarding security and key management in DTNs, but no full-fledged solution has yet been proposed. In BIB002 the authors state a series of requirements for key management in DTNs without proposing a solution. In fact, in RFC 6257 [39] and in the Internet draft BIB003 key management is recognized as a cumbersome topic, and the authors explicitly state that its exclusion is the result of an informed decision. The BSP specification [39] defines security features for the BP BIB001 and attempts to protect its operation by introducing security mechanisms that provide confidentiality, integrity, and bundle authentication. More specifically, it describes four security blocks to cater for different security services. These blocks, namely the BAB, the PIB, the PCB and the Extension Security Block (ESB), all follow the format of the Abstract Security Block (ASB). The Consultative Committee for Space Data Systems (CCSDS) released a green book [60] about key management concepts in space environments, describing the basis for the CCSDS standardization activities related to security services and key management schemes for space missions. In addition, an Internet draft about DTN security services is given in BIB003 , where the Streamlined Bundle Security Protocol (SBSP) is introduced. Specifically, SBSP is an improvement and simplification of BSP and provides authentication, integrity, and confidentiality for the "bundles" along the transmission path. It can be combined with Bundle-in-Bundle Encapsulation (BIBE) and supports three security blocks, namely the BAB, the BIB, and the BCB. As expected, SBSP applies only to security-aware nodes. More recently, the DTN Networking Security Key Management and DTN Security Key Management Internet drafts have been released. The former states the key management problem in DTNs and emphasizes that traditional security key management mechanisms are not always feasible in the environments where DTNs typically operate. The latter proposes requirements and presents a design for key management in DTNs. Specifically, the core requirements and design criteria for DTN security key management are described. The newly published Internet draft BIB004 defines the DTN key management problem and at the same time provides high-level solutions for public key distribution and public key revocation. Finally, the Bundle Protocol Security Specification (BPSec) defines a security protocol providing end-to-end data integrity and confidentiality services for the BP. Table 6 summarizes all the security-related Internet drafts and RFCs.
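Purely as an illustration of how these security blocks relate to a bundle (the class and field names below are hypothetical; the actual wire encodings are defined in RFC 6257 and the SBSP/BPSec drafts, not here), one might model a bundle carrying optional SBSP-style blocks as follows:

```python
# Hypothetical, simplified data layout; not the encoding defined by BSP, SBSP, or BPSec.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecurityBlock:
    ciphersuite: str        # e.g. an HMAC, RSA, or AES suite identifier
    security_result: bytes  # MAC, signature, or wrapped session key

@dataclass
class Bundle:
    source: str
    destination: str
    payload: bytes
    bab: Optional[SecurityBlock] = None  # hop-by-hop Bundle Authentication Block
    bib: Optional[SecurityBlock] = None  # end-to-end integrity block (PIB in BSP)
    bcb: Optional[SecurityBlock] = None  # confidentiality block (PCB in BSP)

b = Bundle("dtn://src", "dtn://dst", b"data",
           bib=SecurityBlock("rsa-sha256", b"\x00" * 256))
print(b.bib.ciphersuite)
```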
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Discussion <s> Endpoints in a delay tolerant network (DTN) [K. Fall, 2003] must deal with long periods of disconnection, large end-to-end communication delays, and opportunistic communication over intermittent links. This makes traditional security mechanisms inefficient and sometimes unsuitable. We study three specific problems that arise naturally in this context: initiation of a secure channel by a disconnected user using an opportunistic connection, mutual authentication over an opportunistic link, and protection of disconnected users from attacks initiated by compromised identities. We propose a security architecture for DTN based on hierarchical identity based cryptography (HIBC) that provides efficient and practical solutions to these problems. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Discussion <s> This paper reviews the Internet-inspired security work on delay tolerant networking, in particular, as it might apply to space missions, and identifies some challenges arising, for both the Internet security community and for space missions. These challenges include the development of key management schemes suited for space missions as well as a characterization of the actual security requirements applying. A specific goal of this paper is therefore to elicit feedback from space mission IT specialists in order to guide the development of security mechanisms for delay tolerant networking. <s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Discussion <s> A delay tolerant network (DTN) is a store and forward network where end-to-end connectivity is not assumed and where opportunistic links between nodes are used to transfer data. An emerging application of DTNs are rural area DTNs, which provide Internet connectivity to rural areas in developing regions using conventional transportation mediums, like buses. Potential applications of these rural area DTNs are e-governance, telemedicine and citizen journalism. Therefore, security and privacy are critical for DTNs. Traditional cryptographic techniques based on PKI-certified public keys assume continuous network access, which makes these techniques inapplicable to DTNs. We present the first anonymous communication solution for DTNs and introduce a new anonymous authentication protocol as a part of it. Furthermore, we present a security infrastructure for DTNs to provide efficient secure communication based on identity-based cryptography. We show that our solutions have better performance than existing security infrastructures for DTNs. <s> BIB003 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Discussion <s> Traditional approaches for communication security do not work well in disruption- and delay-tolerant networks (DTNs). Recently, the use of identity-based cryptography (IBC) has been proposed as one way to help solve some of the DTN security issues. We analyze the applicability of IBC in this context and conclude that for authentication and integrity, IBC has no significant advantage over traditional cryptography, but it can indeed enable better ways of providing confidentiality. Additionally, we show a way of bootstrapping the needed security associations for IBC use from an existing authentication infrastructure. 
<s> BIB004 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Discussion <s> We describe a novel Distributed Key Establishment (DKE) protocol in Disruption (Delay) Tolerant Location Based Social Wireless Sensor and Actor Networks (DTLBS-WSAN). In DKE, we propose that sensor nodes use neighboring signatures to establish their keys. Pre-distributed keys are used by actor nodes to strengthen communication security. We show that nodes can get guaranteed security when actors are connected and cover the network area and high security confidence can be achieved even without actor nodes when the adversary (malicious node) density is small. In DTLBS-WSANs, key (certificate) establishment, storage and look up are performed in a distributed way. Multiple copies of a certificate can be stored at nodes to improve key security and counter the adverse impact of network disruption. <s> BIB005 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Discussion <s> A delay tolerant network (DTN) is a store carry and forward network characterized by highly mobile nodes, intermittent connectivity with frequent disruptions, limited radio range and physical obstructions. Emerging applications of DTN include rural DTN, vehicular DTN and pocket DTN. The development of DTN raises a number of security-related challenges due to inconsistent network access and unreliable end-to-end network path. One of the challenges is initial secure context establishment as it is unrealistic to assume that public key infrastructure (PKI) is always globally present and available, hence, the public key management becomes an open problem for DTN. In this paper, for the first time, we propose a dynamic virtual digraph (DVD) model for public key distribution study by extending graph theory and then present a public key distribution scheme for pocket DTN based on two-channel cryptography. By distinguishing between owners and carriers, public key exchange and authentication issues in the decentralized pocket DTN environment can be solved by a two-channel cryptography process and our simulation results have proven it. <s> BIB006 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Discussion <s> To ensure the authenticity, integrity, and confidentiality of bundles, the in-transit Protocol Data Units of bundle protocol (BP) in space delay/disruption tolerant networks (DTNs), the Consultative Committee for Space Data Systems bundle security protocol (BSP) specification suggests four IPsec style security headers to provide four aspects of security services. However, this specification leaves key management as an open problem. Aiming to address the key establishment issue for BP, in this paper, we utilize a time-evolving topology model and two-channel cryptography to design efficient and noninteractive key exchange protocol. A time-evolving model is used to formally model the periodic and predetermined behavior patterns of space DTNs, and therefore, a node can schedule when and to whom it should send its public key. Meanwhile, the application of two-channel cryptography enables DTN nodes to exchange their public keys or revocation status information, with authentication assurance and in a noninteractive manner. The proposed scheme helps to establish a secure context to support for BSP, tolerating high delays, and unexpected loss of connectivity of space DTNs. <s> BIB007
|
In the previous section, we classified the proposed solutions for cryptographic key management in DTNs into three major categories. However, as shown in the corresponding subsections, the majority of the examined approaches are hybrid in nature and may fall into more than one category. Characteristic examples of this situation are the works in BIB003 BIB001 BIB004 BIB005 BIB007 , where the authors propose schemes that can be used for both security initialisation and key establishment. Contributions such as BIB006 BIB002 have shown that establishing the initial secure context at the deployment phase is still an open issue. From Table 1 we can observe that the majority (8 out of 12) of the security initialisation methods are based on PKC. A number of other methods for security initialisation have relied on IBC, such as HIBC and its variations. Also, there is an almost unanimous agreement that traditional PKI is not always suitable for DTNs. Specifically, in disconnected DTNs, without online access to the necessary certificate or the certificate revocation list posted by CAs, sending an encrypted message and authenticating a sender's identity are infeasible. For this reason, apart from PKI, IBC has been examined as a viable solution for security within the DTN context. From Tables 2 and 3 it can be observed that few works propose the usage of pre-shared keys or pre-established trust between the nodes . However, it is obvious that for scalability reasons such schemes apply only to small, fixed-size DTNs. By observing Table 4 it is clear that group key management for DTNs is still in its infancy, with only four proposed works, based on LKH or the Chinese Remainder Theorem. Moreover, as given in Table 5 , even fewer works proposed ways of handling key revocation in DTNs. It is worth mentioning that researchers only tried to address the key revocation issue almost eleven years (in 2016) after the first work related to key management in DTNs had been published BIB001 . To further exemplify this, Figure 2 provides a timeline of all the different methods introduced for key management in DTNs. It can be noticed that all the different methods were proposed between 2005 and 2013, and the rest of the papers build on these already proposed methods. Moreover, the bottom part of the same figure indicates the kind of DTN network addressed by each work. For instance, the first chronologically proposed works on DTN key management focused on rural area DTNs, while the most recent ones focus on large-scale DTNs. In addition, as can be seen from Tables 1-5 , the solutions included in this survey have, where applicable, been evaluated only through theoretical proofs and/or simulations. This means that hardware testbeds and real-life deployments of cryptographic key management are still largely missing from the DTN research area. On top of that, to date, most of the works in the context of key management in DTNs concentrate on rural or space networks, neglecting other DTN applications, including vehicular or undersea ones. Last but not least, the analysis of the various works showed that the most recent ones (after 2012) tend to focus on space DTNs and generally on large-scale DTNs.
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Require or Not Trusted Third Party (TTP) <s> Endpoints in a delay tolerant network (DTN) [K. Fall, 2003] must deal with long periods of disconnection, large end-to-end communication delays, and opportunistic communication over intermittent links. This makes traditional security mechanisms inefficient and sometimes unsuitable. We study three specific problems that arise naturally in this context: initiation of a secure channel by a disconnected user using an opportunistic connection, mutual authentication over an opportunistic link, and protection of disconnected users from attacks initiated by compromised identities. We propose a security architecture for DTN based on hierarchical identity based cryptography (HIBC) that provides efficient and practical solutions to these problems. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Require or Not Trusted Third Party (TTP) <s> A delay tolerant network (DTN) is a store and forward network where end-to-end connectivity is not assumed and where opportunistic links between nodes are used to transfer data. An emerging application of DTNs are rural area DTNs, which provide Internet connectivity to rural areas in developing regions using conventional transportation mediums, like buses. Potential applications of these rural area DTNs are e-governance, telemedicine and citizen journalism. Therefore, security and privacy are critical for DTNs. Traditional cryptographic techniques based on PKI-certified public keys assume continuous network access, which makes these techniques inapplicable to DTNs. We present the first anonymous communication solution for DTNs and introduce a new anonymous authentication protocol as a part of it. Furthermore, we present a security infrastructure for DTNs to provide efficient secure communication based on identity-based cryptography. We show that our solutions have better performance than existing security infrastructures for DTNs. <s> BIB002 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Require or Not Trusted Third Party (TTP) <s> Traditional approaches for communication security do not work well in disruption- and delay-tolerant networks (DTNs). Recently, the use of identity-based cryptography (IBC) has been proposed as one way to help solve some of the DTN security issues. We analyze the applicability of IBC in this context and conclude that for authentication and integrity, IBC has no significant advantage over traditional cryptography, but it can indeed enable better ways of providing confidentiality. Additionally, we show a way of bootstrapping the needed security associations for IBC use from an existing authentication infrastructure. <s> BIB003 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Require or Not Trusted Third Party (TTP) <s> Delay Tolerant Networks (DTN) arise whenever traditional assumptions about todaypsilas Internet such as continuous end-to-end connectivity, low latencies and low error rates are not applicable. These challenges impose constraints on the choice and implementation of possible security mechanisms in DTNs. The key requirements for a security architecture in DTNs include ensuring the protection of DTN infrastructure from unauthorized use as well as application protection by providing confidentiality, integrity and authentication services for end-to-end communication. 
In this paper, we examine the issues in providing application protection in DTNs and look at various possible mechanisms. We then propose an architecture based on Hierarchical Identity Based Encryption (HIBE) that provides end-to-end security services along with the ability to have fine-grained revocation and access control while at the same time ensuring efficient key management and distribution. We believe that a HIBE based mechanism would be much more efficient in dealing with the unique constraints of DTNs compared to standard public key mechanisms (PKI). <s> BIB004 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Require or Not Trusted Third Party (TTP) <s> In the last few years, Delay/Disruption Tolerant Networking has grown to a healthy research topic because of its suitability for challenged environments characterized by heterogeneity, long delay paths and unpredictable link disruptions. This paper presents a DTN security architecture that focuses on the requirements for lightweight key management; lightweight AAA-like architecture for authentication/authorisation; resilience to Denial of Service attacks and user anonymity. <s> BIB005 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Require or Not Trusted Third Party (TTP) <s> Due to the rapid development in technology, every network, application needs full time connectivity without disruption and delays. The Delay/Disruption Tolerant Networking (DTN) concept is suitable for applications such as rural and disaster areas networks, animal and environmental monitoring plus others. However, due to the shared and unsecured nature of such challenged networks a good cryptographic framework needed in DTN. Identity Based Cryptography (IBC) compares favorably with traditional public key cryptography while generating public key on a fly as required. In this paper, we will provide anonymity solution in DTN using IBC. This has the advantage over public key cryptography with respect to end-to-end confidentiality. Also we use pseudonyms to provide anonymity and hide the identity of the end user. <s> BIB006 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Require or Not Trusted Third Party (TTP) <s> The increased demand for mobile communication and use of mobile devices in high-latency, resource impoverished environments has spurred the development and growth of Delay-Tolerant Networks (DTN). DTNs aim to provide interoperability between a range of heterogeneous networks, operating under resource-constrained circumstances and traditional infrastructure networks such as the Internet. Because of the circumstances, DTNs possess some interesting characteristics that make a traditional end-to-end security paradigm unsuitable and increase the value of the overlay's resources. Controlling access to overlay resources and providing for secure group communications over unknown intermediate networks is essential. We propose a novel solution based on previous work in secure group communications using key-graphs and in extension to work on scalable access authorization in self-organizing overlays to provide a scalable mechanism for access control and secure group communications in DTNs. Since resources are especially limited, our implementation focuses on minimizing the traffic on the overlay associated with the maintenance of our solution. 
<s> BIB007 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Require or Not Trusted Third Party (TTP) <s> Key exchange is considered to be a challenging problem in Delay Tolerant Networks (DTNs) operating in space environments. In this paper we investigate the options for integrating key exchange protocols with the Bundle Protocol. We demonstrate this by using a one-pass key establishment protocol. In doing so, we also highlight the peculiarities, issues and opportunities a DTN network maintains, which heavily influences the underlying security solution. <s> BIB008 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Require or Not Trusted Third Party (TTP) <s> Most of the existing authentication and key agreement protocols for delay tolerant networks are not designed for protecting privacy. In this paper, an authentication and key agreement protocol with anonymity based on combined public key is proposed. The proposed protocol eliminates the need of public key digital certificate on-line retrieval, so that any on-line trusted third party is no longer required, only needs an off-line public information repository and key generation center; and realizes mutual authentication and key agreement with anonymity between two entities. We show that the proposed protocol is secure for all probabilistic polynomial-time attackers, and achieves good security properties, including authentication, anonymity, and confidentiality and so on. <s> BIB009 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Require or Not Trusted Third Party (TTP) <s> In the past, security protocols including key transport protocols are designed with the assumption that there are two parties communication with each other and an adversary tries to intercept this communication. In Delay/Disruption Tolerant Networking (DTN), packet delivery relies on intermediate parties in the communication path to store and forward the packets. DTN security architecture requires that integrity and authentication should be verified at intermediate nodes as well as at end nodes and confidentiality should be maintained for end communicating parties. This requires new security protocols and key management to be defined for DTN as traditional end-to-end security protocols will not work with DTN. To contribute towards solving this problem, we propose a novel Efficient and Scalable Key Transport Scheme (ESKTS) to transport the symmetric key generated at a DTN node to other communicating body securely using public key cryptography and proxy signatures. It is unique effort to design a key transport protocol in compliance with DTN architecture. ESKTS ensures that integrity and authentication is achieved at hop-by-hop level as well as end-to-end level. It also ensures end-to-end confidentiality and freshness for end communicating parties. This scheme provides a secure symmetric key transport mechanism based on public key cryptography to exploit the unique bundle buffering characteristics of DTN to reduce communication and computation cost . <s> BIB010 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Require or Not Trusted Third Party (TTP) <s> In deep space delay tolerant networks rekeying expend vast amounts of energy and delay time as a reliable end-to-end communication is very difficult to be available between members and key management center. 
In order to deal with the question, this paper puts forwards an autonomic group key management scheme for deep space DTN, in which a logical key tree based on one-encryption-key multi-decryption-key key protocol is presented. Each leaf node with a secret decryption key corresponds to a network member and each non-leaf node corresponds to a public encryption key generated by all leaf node's decryption keys that belong to the non-leaf node's sub tree. In the proposed scheme, each legitimate member has the same capability of modifying public encryption key with himself decryption key as key management center, so rekeying can be fulfilled successfully by a local leaving or joining member in lack of key management center support. In the security aspect, forward security and backward security are guaranteed. In the efficiency aspect, our proposed scheme's rekeying message cost is half of LKH scheme when a new member joins, furthermore in member leaving event a leaving member makes tradeoff between computation cost and message cost except for rekeying message cost is constant and is not related to network scale. Therefore, our proposed scheme is more suitable for deep space DTN than LKH and the localization of rekeying is realized securely. <s> BIB011
|
Key management solutions can also be categorised based on whether or not they require a TTP. A TTP can be used for key management services such as key generation, key distribution, translation of keying material, and certification. Traditionally, in continuously connected networks the most proven practice for key management is to contact an online TTP. Although the approach that imposes a TTP is secure and resilient, it is not scalable for DTNs. In fact, DTNs require a different approach to key management, because every pair of nodes has to obtain keys from the online TTP, something that cannot be guaranteed in DTNs with intermittent connectivity. Moreover, this approach carries a sizable communication overhead, which is unwelcome in DTNs, and the TTP constitutes a single point of failure. On the other hand, self-organised works may not suffer from the aforementioned problems, but they are applicable only to small networks due to the computational overhead they produce. Most of the works that do not require a TTP tried to solve the security initialisation problem as an alternative, while most of the works that require a TTP attempted to solve key establishment without considering the open issue of security initialisation. Contributions that require a TTP increase the communication overhead, while those that do not rely on a TTP produce additional computational overhead. As a result, neither approach can always be applied in such a hostile environment. Below, we categorise the majority of the works included in this survey based on whether or not a TTP is required. • Require TTP-Works such as BIB005 BIB008 BIB010 that are based on PKI solutions mandate a TTP, and thus a CA. Works such as BIB002 BIB001 BIB003 BIB004 BIB006 that are founded on IBC solutions require a TTP too, namely the PKG. In addition, group key management schemes that rely on LKH, such as BIB007 BIB011 , necessitate a TTP known as a Key Distribution Center (KDC). In contrast, the work in BIB009 , which is based on CPK, eliminates the need for an online TTP and only needs an off-line PKG.
|
Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Centralised, Decentralised and Distributed <s> Group key management is an important functional building block for any secure multicast architecture. Thereby, it has been extensively studied in the literature. In this paper we present relevant group key management protocols. Then, we compare them against some pertinent performance criteria. Keywords—Multicast, Security, Group Key Management. <s> BIB001 </s> Cryptographic Key Management in Delay Tolerant Networks: A Survey <s> Centralised, Decentralised and Distributed <s> Delay Tolerant Network (DTN) has the character of long intermittent connectivity and communication delays, which makes the existing group key management mechanism can not be effectively applied. We proposed a new Chinese Remainder Theorem based group key management mechanism for DTN. Comparing with the early scheme, the existing joined node can derive a new group key from the old group key using hash function in the new user join phase, so the server does not need to broadcast any key update message for the newly user join, and only broadcasts one message for user leave. Meanwhile, aiming at the forward security problem in the many-to-many scenarios, the time-based group key management scheme is introduced. The simulation results show that the group key update success rate, latency and message authentication success rate for our scheme is better than CRGK and LKH schemes. <s> BIB002
|
Key management solutions in DTNs can be divided into three major categories, namely (a) centralised, (b) decentralised, and (c) distributed architectures. Schemes in the first category mandate the use of a TTP and are so far the most commonly used and studied in the literature. Decentralised schemes, on the other hand, use more than one group to manage key distribution, with the aim of sharing the overhead among the parties. The third category pertains to group key management protocols and therefore imposes multiple cryptographic operations, which typically results in large communication and computational overheads BIB002 . Unfortunately, the various solutions proposed for existing wired/wireless networks cannot be applied to DTNs because of this communication and computational overhead BIB002 . Also, in the DTN literature, the difference between the decentralised and distributed models is unclear, and the two are sometimes treated as identical. Overall, due to its nature, the distributed model is more tolerant to infrastructure failures. In contrast, the centralised model constitutes a single point of failure. Moreover, in the centralised model, join and leave operations for members are straightforward, but all communications require interaction with the TTP. The decentralised/distributed model not only makes privacy a hard issue to deal with, but also requires more storage as the number of nodes grows, which is impractical in DTNs with limited storage capabilities. Generally, works following the PGP philosophy are decentralised or distributed, while those that require a TTP, such as a CA or PKG, are centralised. Lastly, it should be noted that the three-fold taxonomy of this subsection is most commonly used in group communications BIB001 . Tables 1-5 summarise the architecture used by each scheme.
|
An overview and perspective on social network monitoring <s> 1.Introduction <s> Most disease registries are updated at least yearly. If a geographically localized health hazard suddenly occurs, we would like to have a surveillance system in place that can pick up a new geographical disease cluster as quickly as possible, irrespective of its location and size. At the same time, we want to minimize the number of false alarms. By using a space–time scan statistic, we propose and illustrate a system for regular time periodic disease surveillance to detect any currently ‘active’ geographical clusters of disease and which tests the statistical significance of such clusters adjusting for the multitude of possible geographical locations and sizes, time intervals and time periodic analyses. The method is illustrated on thyroid cancer among men in New Mexico 1973–1992. <s> BIB001 </s> An overview and perspective on social network monitoring <s> 1.Introduction <s> We argue that social networks differ from most other types of networks, including technological and biological networks, in two important ways. First, they have nontrivial clustering or network transitivity and second, they show positive correlations, also called assortative mixing, between the degrees of adjacent vertices. Social networks are often divided into groups or communities, and it has recently been suggested that this division could account for the observed clustering. We demonstrate that group structure in networks can also account for degree correlations. We show using a simple model that we should expect assortative mixing in such networks whenever there is variation in the sizes of the groups and that the predicted level of assortative mixing compares well with that observed in real-world networks. <s> BIB002 </s> An overview and perspective on social network monitoring <s> 1.Introduction <s> Much of the research involving simultaneous monitoring of several related quality characteristics that follow a multivariate Poisson distribution has relied on using the normal approximation to the Poisson distribution in order to determine the appropriate control limits. In this paper, evaluation and implementation of MEWMA schemes for count rates using the multivariate Poisson framework itself are presented. We demonstrate that the multivariate EWMA chart-based directly on the multivariate Poisson distribution is superior to one based on normal-theory with respect to the in-control average run length. The proposed scheme performs similarly to one based on normal-theory for detecting an out-of-control process. We also illustrate a step-by-step numerical example on the practical use of the new control chart. <s> BIB003 </s> An overview and perspective on social network monitoring <s> 1.Introduction <s> A zero-inflated Poisson (ZIP) process is different from a standard Poisson process in that it results in a greater number of zeros. It can be used to model defect counts in manufacturing processes with occasional occurrences of non-conforming products. ZIP models have been developed assuming that random shocks occur independently with probability p, and the number of non-conformities in a product subject to a random shock follows a Poisson distribution with parameter λ. In our paper, a control charting procedure using a combination of two cumulative sum (CUSUM) charts is proposed for monitoring increases in the two parameters of the ZIP process. 
Furthermore, we consider a single CUSUM chart for detecting simultaneous increases in the two parameters. Simulation results show that a ZIP-Shewhart chart is insensitive to shifts in p and smaller shifts in λ in terms of the average number of observations to signal. Comparisons between the combined CUSUM method and the single CUSUM chart show that the latter's performance is worse when there are only increases in p, but better when there are only increases in λ or when both parameters increase. The combined CUSUM method, however, is much better than the single CUSUM chart when one parameter increases while the other decreases. Finally, we present a case study from the light-emitting diode packaging industry. Copyright © 2011 John Wiley & Sons, Ltd. <s> BIB004 </s> An overview and perspective on social network monitoring <s> 1.Introduction <s> The authors consider statistical process control of multivariate categorical processes and propose a Phase II log-linear directional control chart. <s> BIB005 </s> An overview and perspective on social network monitoring <s> 1.Introduction <s> The aggregation of event counts is a common, and often necessary, practice in many applications. When working with large numbers of events, it may be more practical to consider the number of events... <s> BIB006 </s> An overview and perspective on social network monitoring <s> 1.Introduction <s> We introduce a computationally scalable method for detecting small anomalous areas in a large, time-dependent computer network, motivated by the challenge of identifying intruders operating inside enterprise-sized computer networks. Time-series of communications between computers are used to detect anomalies, and are modeled using Markov models that capture the bursty, often human-caused behavior that dominates a large subset of the time-series. Anomalies in these time-series are common, and the network intrusions we seek involve coincident anomalies over multiple connected pairs of computers. We show empirically that each time-series is nearly always independent of the time-series of other pairs of communicating computers. This independence is used to build models of normal activity in local areas from the models of the individual time-series, and these local areas are designed to detect the types of intrusions we are interested in. We define a locality statistic calculated by testing for deviations from... <s> BIB007 </s> An overview and perspective on social network monitoring <s> 1.Introduction <s> A cumulative sum control chart for multivariate Poisson distribution (MP-CUSUM) is proposed. The MP-CUSUM chart is constructed based on log-likelihood ratios with in-control parameters, Θ0, and shifts to be detected quickly, Θ1. The average run length (ARL) values are obtained using a Markov Chain-based method. Numerical experiments show that the MP-CUSUM chart is effective in detecting parameter shifts in terms of ARL. The MP-CUSUM chart with smaller Θ1 is more sensitive than that with greater Θ1 to smaller shifts, but more insensitive to greater shifts. A comparison shows that the proposed MP-CUSUM chart outperforms an existing MP chart. <s> BIB008 </s> An overview and perspective on social network monitoring <s> 1.Introduction <s> The use of varying sample size monitoring techniques for Poisson count data has drawn a great deal of attention in recent years. Specifically, these methods have been used in public health surveillance, manufacturing, and safety monitoring. 
A number of approaches have been proposed, from the traditional Shewhart charts to cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) methods. It is convenient to use techniques based on statistics that are invariant to the units of measurement since in most cases these units are arbitrarily selected. A few of the methods reviewed in our expository article are not inherently invariant, but most are easily modified to be invariant. Most importantly, if methods are invariant to the choice of units of measurement, they can be applied in situations where the in-control Poisson mean varies over time, even if there is no associated varying sample size. Several examples are discussed to highlight the promising uses of invariant Poisson control charting me... <s> BIB009 </s> An overview and perspective on social network monitoring <s> 1.Introduction <s> Anomalies in online social networks can signify irregular, and often illegal behaviour. Anomalies in online social networks can signify irregular, and often illegal behaviour. Detection of such anomalies has been used to identify malicious individuals, including spammers, sexual predators, and online fraudsters. In this paper we survey existing computational techniques for detecting anomalies in online social networks. We characterise anomalies as being either static or dynamic, and as being labelled or unlabelled, and survey methods for detecting these different types of anomalies. We suggest that the detection of anomalies in online social networks is composed of two sub-processes; the selection and calculation of network features, and the classification of observations from this feature space. In addition, this paper provides an overview of the types of problems that anomaly detection can address and identifies key areas of future research. <s> BIB010
|
There has been an increasing amount of research on the monitoring of social networks. An overview of methods was given in a recent review paper by BIB010 who listed applications including the detection of important and influential network participants, the detection of clandestine organizational structures, and the detection of fraudulent or predatory activity. One of our primary contributions is to add to the discussion of BIB010 by including additional network monitoring papers and discussing the various methods in the context of the considerable amount of related work in industrial process monitoring and public health surveillance. Social network monitoring methods are often illustrated using terrorist networks like the al Qaeda network (see Figure 1) or social networks such as that based on Enron e-mail communications (see Figure 2). The basic idea in social network monitoring is to detect sudden changes in the behavior of a subset of the individuals in the network. Significant increases in the communication levels of the entire network, of smaller sub-networks, or of individuals are often of primary interest in applications, where global changes are typically the easiest to detect. In some cases, however, decreases in communication levels may be of interest. BIB010 referred to regions of the network with structure differing from that expected under normal conditions as anomalies. Of course, to formalize what is meant by an anomaly, there must be an operational definition of the normal conditions. The definition of an anomaly would likely vary from application to application. Networks are expected to evolve over time, however, so it would be unusual to have interest in detecting that any change, however small, has occurred. The focus is usually on detecting sudden large changes in the structure of some portion of the network. We assume that there are n individuals in the network to be monitored. These individuals could refer to people, e-mail addresses, or other entities. We assume that we are collecting network data aggregated over some time period to give, for example, daily or weekly data, with m time periods of data in a baseline sample. For each time period t, t = 1, 2, …, we have information on the communication level between individual i and individual j, say c_t(i, j), i, j = 1, …, n, where i is not equal to j. Most often we are interested in the number of communications between individuals i and j. Alternatively, c_t(i, j) may be an indicator variable indicating whether or not there was at least one contact between i and j, or whether some other criterion on the level of communication between these two individuals was met. In the social network change detection literature, the numbers of contacts between pairs of individuals are frequently modeled by some variant of the Poisson distribution, whereas Bernoulli random variables are typically used to model indicator variables. Communication levels can be quantified as directed or undirected. With directed data, c_t(i, j) reflects only communications between individuals i and j that were initiated by individual i, whereas with undirected data, communications are considered mutual, namely c_t(i, j) = c_t(j, i). There can be a substantial loss of information in transforming directed to undirected data, or in representing communication counts by binary indicator variables. Indeed, with undirected data it is not possible to study how contacts propagate through the network.
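To make this notation concrete, the following minimal Python sketch (our own illustration, not code from any of the cited papers; the function names and the toy event list are hypothetical) assembles the directed counts c_t(i, j) from a log of sender-receiver events and produces the undirected and binary-indicator variants discussed above.

import numpy as np

def directed_counts(events, n):
    # Aggregate (sender, receiver) pairs from one time period into an
    # n x n matrix of directed communication counts c_t(i, j).
    C = np.zeros((n, n), dtype=int)
    for i, j in events:
        if i != j:                      # no self-loops
            C[i, j] += 1
    return C

def to_undirected(C):
    # Mutual counts: c_t(i, j) = c_t(j, i) = total contacts between i and j.
    return C + C.T

def to_binary(C, threshold=1):
    # Indicator form: 1 if the chosen criterion on the communication level is met.
    return (C >= threshold).astype(int)

# Toy example with n = 5 individuals and one period of directed messages.
events_week1 = [(0, 1), (0, 1), (1, 2), (3, 4), (4, 3)]
C1 = directed_counts(events_week1, n=5)      # Poisson-type count data
A1 = to_binary(to_undirected(C1))            # symmetric Bernoulli-type data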
Generally, as discussed by BIB006 , greater levels of data aggregation result in greater losses of information and poorer process monitoring performance. The values c_t(i, j) can be placed into row i and column j of a matrix, say C_t, t = 1, 2, … . The matrix C_t is typically referred to as the adjacency matrix or graph corresponding to the social network at time t. These matrices are usually quite sparse and assumed to have diagonal elements set to zero so that the graph contains no self-loops. Note that if the data are undirected, then the matrix C_t is symmetric. Thus the network monitoring problem can be framed as the detection of certain types of changes in matrices of indicator variables or counts over time. This is a broad generalization of the framework usually considered in the many papers on the monitoring of Bernoulli or count data. The vast majority of the methods for such data studied in the literature on statistical process monitoring are univariate and thus could be applied directly only to a network consisting of two individuals. As has been reviewed elsewhere, there has been much research on the monitoring of sequences of Bernoulli data. Aside from its diagonal of zero elements, C_t will be a matrix of Bernoulli random variables in network monitoring applications in which c_t(i, j) = 1 if there was at least one contact between individuals i and j or some other criterion was met, and 0 otherwise. This represents a substantial multivariate generalization of the usual univariate framework. The monitoring of a single stream of Poisson-distributed data has been widely studied. BIB004 provided a review of methods for monitoring a zero-inflated Poisson distribution. BIB009 reviewed methods for monitoring non-homogeneous Poisson processes. Monitoring with multivariate Poisson vectors has been studied by BIB003 and BIB008 , among others, but no one has studied the monitoring of matrices of Poisson counts in the industrial statistics literature. BIB005 and Yashchin (2012) proposed methods for monitoring categorical data which can be considered in some cases to consist of matrices of counts, but not with the same matrix structure or the same objectives as in network monitoring. It is frequently assumed in the study of public health surveillance methods, however, that each sample of disease incidence counts consists of a set of assumed Poisson random variables that could be viewed as components of a matrix. Sometimes these counts apply to a rectangular set of sub-regions of a larger region of interest. Scan methods, such as those of BIB001 , are frequently used with this type of data to detect clusters of contiguous sub-regions where the Poisson rate seems significantly higher than expected. A primary difference between this problem and that of network monitoring is that location provides a natural ordering of the sub-regions in public health applications, whereas there is usually no natural ordering for individuals in a network. There has been considerable work on the monitoring of computer networks; see, for example, BIB007 . In their review, BIB010 pointed out that the structure of social network data is usually different from that of computer networks and that the objectives are typically different. In addition, computer network monitoring involves considerably more data, collected at a much higher frequency and with much more pronounced periodic patterns than social network data. BIB002 provided a discussion of the differences between these two types of networks.
The social network monitoring literature seems to have been developed somewhat independently of the computer network monitoring literature. In the Appendix we give a brief introduction to social network terminology. Section 2 contains our review of social network monitoring methods. In Section 3 we discuss some issues related to network monitoring. Conclusions and a number of research topics are given in Section 4.
|
An overview and perspective on social network monitoring <s> Monitoring methods <s> Unusual clusters of disease must be detected rapidly for effective public health interventions to be introduced. Over the past decade there has been a surge in interest in statistical methods for the early detection of infectious disease outbreaks.This growth in interest has given rise to much new methodological work, ranging across the spectrum of statistical methods.The paper presents a comprehensive review of the statistical approaches that have been proposed. Applications to both laboratory and syndromic surveillance data are provided to illustrate the various methods <s> BIB001 </s> An overview and perspective on social network monitoring <s> Monitoring methods <s> Anomalies in online social networks can signify irregular, and often illegal behaviour. Anomalies in online social networks can signify irregular, and often illegal behaviour. Detection of such anomalies has been used to identify malicious individuals, including spammers, sexual predators, and online fraudsters. In this paper we survey existing computational techniques for detecting anomalies in online social networks. We characterise anomalies as being either static or dynamic, and as being labelled or unlabelled, and survey methods for detecting these different types of anomalies. We suggest that the detection of anomalies in online social networks is composed of two sub-processes; the selection and calculation of network features, and the classification of observations from this feature space. In addition, this paper provides an overview of the types of problems that anomaly detection can address and identifies key areas of future research. <s> BIB002
|
In this section we briefly describe some of the recent methods proposed for monitoring social networks and relate them to methods in the area of statistical process monitoring. We use categories corresponding roughly to those used by BIB002 . These four categories were also used in the review paper by BIB001 to classify prospective public health surveillance methods. We assume that the reader has some familiarity with statistical process monitoring methods. For more information on these methods, we recommend Montgomery (2013).
|
An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> We propose a Bayesian approach to obtaining control charts when there is parameter uncertainty. Our approach consists of two stages, (i) construction of the control chart where we use a predictive distribution based on a Bayesian approach to derive the rejection region, and (ii) evaluation of the control chart where we use a sampling theory approach to examine the performance of the control chart under various hypothetical specifications for the data generation model. <s> BIB001 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> Many types of control charts have an ability to detect process changes that can weaken over time depending on the past data observed. This is often referred to as the “inertia problem.” We propose a new measure of inertia, the signal resistance, to be the largest standardized deviation from target not leading to an immediate out-of-control signal. We calculate the signal resistance values for several types of univariate and multivariate charts. Our conclusions support the recommendation that Shewhart limits should be used with exponentially weighted moving average charts, especially when the smoothing parameter is small. <s> BIB002 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> Many control charts are used to determine the output of a linear filter applied to process data. An alarm is sounded when the filter output falls outside a set of control limits. In this study, this concept is generalized by observing the linear filter .. <s> BIB003 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> This article develops a control chart for the generalized variance. A Bayesian approach is used to incorporate parameter uncertainty. Our approach has two stages, (i) construction of the control chart where we use a predictive distribution based on a Bayesian approach to derive the rejection region, and (ii) evaluation of the control chart where we use a sampling theory approach to examine the performance of the control chart under various hypothetical specifications for the data generation model. <s> BIB004 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> Learning the network structure of a large graph is computationally demanding, and dynamically monitoring the network over time for any changes in structure threatens to be more challenging still. ::: ::: This paper presents a two-stage method for anomaly detection in dynamic graphs: the first stage uses simple, conjugate Bayesian models for discrete time counting processes to track the pairwise links of all nodes in the graph to assess normality of behavior; the second stage applies standard network inference tools on a greatly reduced subset of potentially anomalous nodes. The utility of the method is demonstrated on simulated and real data sets. <s> BIB005 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> This article develops a control chart for the variance of a normal distribution and, equivalently, the coefficient of variation of a log-normal distribution. 
A Bayesian approach is used to incorporate parameter uncertainty, and the control limits are obtained from the predictive distribution for the variance. We evaluate this control chart by examining its performance for various values of the process variance. <s> BIB006 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> This article develops a control chart for a mean vector when it is monitored by a quadratic form in the exponentially weighted observation vector. A Bayesian approach is used to incorporate parameter uncertainty. We first use a Bayesian predictive distribution to construct the control chart, and we then use a sampling theory approach to evaluate it under various hypothetical specifications for the data generation model. <s> BIB007 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> Abstract : Changes in observed social networks may signal an underlying change within an organization, and may even predict significant events or behaviors. The breakdown of a team's effectiveness, the emergence of informal leaders, or the preparation of an attack by a clandestine network may all be associated with changes in the patterns of interactions among group members. The ability to systematically, statistically, effectively and efficiently detect these changes has the potential to enable the anticipation, early warning, and faster response to both positive and negative organizational activities. By applying statistical process control techniques to social networks we can rapidly detect changes in these networks. Herein we describe this methodology and then illustrate it using four data sets. We nominate four types of dynamic network behaviors for investigation in this paper. These behaviors are not comprehensive; however, it is necessary to define a set of behaviors to focus our investigation of network change. The four behaviors we focus on are network stability, endogenous change, exogenous change, and initiated change. The first data set is the Newcomb fraternity data. The second set of data was collected on a group of mid-career U.S. Army officers in a week-long training exercise. The third data set contains the perceived connections among members of al Qaeda based on open sources, and the fourth data set is simulated using multiagent simulation. The results indicate that this methodology is able to detect change even with the high levels of uncertainty inherent in these data sets. <s> BIB008 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> Graphs are high-dimensional, non-Euclidean data, whose utility spans a wide variety of disciplines. While their non-Euclidean nature complicates the application of traditional signal processing paradigms, it is desirable to seek an analogous detection framework. In this paper we present a matched filtering method for graph sequences, extending to a dynamic setting a previous method for the detection of anomalously dense subgraphs in a large background. In simulation, we show that this temporal integration technique enables the detection of weak subgraph anomalies than are not detectable in the static case. We also demonstrate background/foreground separation using a real background graph based on a computer network. 
<s> BIB009 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> This article develops combined exponentially weighted moving average (EWMA) charts for the mean and variance of a normal distribution. A Bayesian approach is used to incorporate parameter uncertainty. We first use a Bayesian predictive distribution to construct the control chart, and we then use a sampling theory approach to evaluate it under various hypothetical specifications for the data generation model. Simulations are used to compare the proposed charts for different values of both the weighing constant for the exponentially weighted moving averages and for the size of the calibration sample that is used to estimate the in-statistical-control process parameters. We also examine the separate performance of the EWMA chart for the variance. <s> BIB010 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> The cumulative sum (CUSUM) and the exponentially weighted moving average (EWMA) control charts are alternatives to the Xbar chart. The CUSUM's theoretical optimality suggests that it should outperform the EWMA for detecting persistent shifts, but practitioners have long thought that the two perform about equally. Each also involves design decisions on the likely shift in the process. This article quantifies the effect of these choices and concludes that, though the CUSUM outperforms the EWMA at the shift for which each was designed, if the actual shift is smaller than that used in the design, the EWMA may respond faster. <s> BIB011 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> This article develops a Phase I design structure of -Chart, namely Bayesian -Chart, based on Bayesian (posterior distribution) framework assuming the normality of the quality characteristic to incorporate parameter uncertainty. Our approach consists of two stages: (i) construction of the control limits for -Chart based on posterior distribution of unknown mean μ and (ii) evaluation of the performance of the proposed design structure. The proposed design structure of -Chart is compared with the frequents design structure of - Chart in terms of (i) width of the control region and (ii) power of detecting a shift in the location parameter of the process. It has been observed that the proposed design structure of -Chart is performs better than the usual design structure to detecting shifts in the parameter of the process when the prior mean is close to the unknown target value. <s> BIB012 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> We study the effect of the Phase I estimation error on the cumulative sum (CUSUM) chart. Impractically large amounts of Phase I data are needed to sufficiently reduce the variation in the in-control average run lengths (ARL) between practitioners. To reduce the effect of estimation error on the chart's performance we design the CUSUM chart such that the in-control ARL exceeds a desired value with a specified probability. This is achieved by adjusting the control limits using a bootstrap-based design technique. Such approach does affect the out-of-control performance of the chart; however, we find that this effect is relatively small. 
<s> BIB013 </s> An overview and perspective on social network monitoring <s> Control chart and hypothesis testing methods <s> Network modeling and analysis has become a fundamental tool for studying various complex systems. This paper proposes an extension of statistical monitoring to network streams, which is crucial for executive decision-making in various applications. To t.. <s> BIB014
|
We believe that concepts and methods in statistical process monitoring can be used to greater advantage in social network monitoring. One of these concepts, the distinction between the retrospective analysis of baseline data (Phase I) and methods for prospective on-going monitoring (Phase II), is discussed in Section 4.1. In their papers, McCulloh and Carley (2008a, b) and BIB008 used monitoring methods such as the cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) charts to detect changes in the network as a whole. For information on these two types of charts, we recommend BIB011 . They focused on detecting changes in the communication behavior in military units. Global network metrics such as average closeness and average betweenness were used as time series input to the charting methods, but it was pointed out that node or sub-network metrics could have been used instead. BIB008 stated that five or more network graphs should be used to establish a baseline. Current research by BIB013 and others shows, however, that many more network graphs would have to be available in order to estimate the baseline parameters so that the resulting control chart performance would be reliable. BIB014 proposed a method to detect changes in the behavior within and between specified sub-networks with the incorporation of covariate information. For example, in a university environment the sub-networks could correspond to departments and faculty rank could serve as a covariate. The authors modeled the probabilities of contacts between pairs of individuals in the network using a logistic regression model with sub-network membership and covariate data on the individuals used as explanatory variables. A likelihood ratio test was proposed to detect changes in the logistic regression model fit with each new graph. The authors proposed three approaches. One is referred to as the static reference approach, where each new graph is compared to those in a fixed baseline Phase I sample. In the dynamic reference approach, each incoming graph is compared to all previous graphs; if there is no signal of an anomaly, then the current graph is entered into the baseline for the next graph to be observed. The third approach is referred to as the dynamic reference sliding window approach, where the current graph is compared to only the most recent q graphs, where q is the size of the moving window. BIB014 stated that the choice of approach depends on the objective of monitoring, but we see the moving window approach as generally being the most useful because networks tend to evolve over time. Some types of anomalies cannot be detected with the logistic regression method of BIB014 . In some cases the number of contacts within a specified group within the network can be redistributed into any configuration without affecting the estimated regression coefficients or the likelihood ratio test. We note that checking for changes in a logistic regression model over time falls into the category of profile monitoring. Yeh and Huang (2011) reviewed some relevant methods for determining whether or not a logistic regression model has changed over time. BIB014 used simulation to compare their methods to those of BIB008 , where the latter methods are based on global network metrics without the incorporation of the available covariate information. It would have been a fairer comparison, however, to base the McCulloh and Carley (2011) methods on metrics corresponding to the activity within each of the two assumed categories of individuals.
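As an illustration of the chart-based approach described above, the following sketch (our own, and not an implementation from any of the cited papers) applies an EWMA chart to a time series of a single global network metric; the smoothing constant lam, the control-limit multiplier L, and the use of graph density as the metric are assumptions made only for illustration.

import numpy as np

def ewma_alarms(metric_series, m_baseline, lam=0.2, L=3.0):
    # EWMA chart for a global network metric; the first m_baseline values
    # form the Phase I sample used to estimate the in-control parameters.
    x = np.asarray(metric_series, dtype=float)
    mu0 = x[:m_baseline].mean()
    sigma0 = x[:m_baseline].std(ddof=1)
    limit = L * sigma0 * np.sqrt(lam / (2.0 - lam))   # steady-state control limits
    z, alarms = mu0, []
    for t in range(m_baseline, len(x)):
        z = lam * x[t] + (1.0 - lam) * z              # EWMA recursion
        if abs(z - mu0) > limit:
            alarms.append(t)
    return alarms

# Example metric: density of each period's undirected 0/1 graph, e.g.
# densities = [A.sum() / (A.shape[0] * (A.shape[0] - 1)) for A in graphs]
# alarms = ewma_alarms(densities, m_baseline=30)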
A method with assumptions quite similar to those of BIB014 has also been proposed to detect specified types of network changes; it uses a log-linear model for the probabilities of connections between pairs of individuals because the reduced amount of computation allows the method to be used with much larger networks. The monitoring approach of BIB009 is based on eigenvalues of modularity matrices, which were proposed for finding community structure in networks. The modularity matrix is the difference between C_t and the expected value of C_t assuming that edges occur independently. It can be thought of as a residuals matrix. BIB009 considered a window of network snapshots to calculate the differences between observed and expected adjacency matrices. The differences between the matrices were weighted with filter coefficients based on the assumed known signal model. Instead of only choosing the first eigenvector of the resulting matrix, they picked the first two and projected the modularity matrix onto the corresponding space. They assumed that if there is no change, the projected values should be randomly scattered (not clustered) in any arbitrarily defined quadrant. In order to test this hypothesis, they used a contingency test statistic in a 2 x 2 table defined by the quadrants. If the test statistic is large, there is evidence of change. BIB009 assumed that there is a known signal model, where the anomalous subgraph behavior of interest is known, but its position within the background is not. Their matched filter approach is very similar to the cuscore approach of Box and Ramirez (1992) , which BIB003 showed was not effective at detecting delayed process shifts. In their first simulation BIB009 assumed that the anomalous subgraph density is fixed, but the edges changed with each sample. In this case their approach gets better and better as the window size increases. This is appropriate for hypothesis testing, but large window sizes are not efficient for process monitoring because of the buildup of inertia. As discussed by BIB002 , inertia can slow down the detection of delayed process changes. BIB009 assumed in their second simulation that the density of the anomalous subgraph increased linearly over 32 samples. The filter coefficients were then set to be linearly decreasing with age from 1 to 0 over a window of size 32. The problem with their matched filter approach is that one does not know when the signal (anomaly) will occur. BIB009 assumed a hypothesis testing framework, not on-line continuous Phase II monitoring, and evaluated their methods using ROC curves assuming that any anomalies occur immediately.
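The modularity-matrix and eigen-projection ideas described above can be sketched as follows; this is a generic illustration for an undirected 0/1 graph under a degree-preserving null model, not the exact algorithm of BIB009 , and the filter weights are assumed to be supplied by the analyst.

import numpy as np

def modularity_matrix(A):
    # B = A - k k^T / (2m): observed adjacency minus its expected value
    # under an independent-edge, degree-preserving null model (undirected A).
    k = A.sum(axis=1).astype(float)
    return A - np.outer(k, k) / k.sum()

def windowed_projection(graphs, weights):
    # Weight a window of residual matrices with the filter coefficients,
    # then project onto the space spanned by the two leading eigenvectors.
    M = sum(w * modularity_matrix(A) for w, A in zip(weights, graphs))
    eigvals, eigvecs = np.linalg.eigh(M)
    top2 = eigvecs[:, np.argsort(eigvals)[-2:]]
    return M @ top2        # one 2-D coordinate per node; clustering suggests change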
BIB005 proposed a two-stage Bayesian approach to anomaly detection. Their goal was to detect anomalous communication levels between pairs of individuals. Once these pairs are identified, they are used to form a sub-network that can then be analyzed for anomalous behavior. They assumed either a Poisson conditional distribution or a hurdle Poisson conditional distribution for the counts of contacts between pairs of individuals. The hurdle model allows higher probabilities of no contact in a way similar to the use of a zero-inflated Poisson model. They used control limits based on Bayesian predictive distributions for the contacts between each pair of individuals to identify a subset of potentially anomalous pairs. If an observed count is sufficiently far into the tails of the predictive distribution, as measured by a p-value, a signal is given that there could be an anomaly. The predictive distribution for the current count was based on the prior distribution and all data up to, but not including, the current time. They then used standard network inference tools on a smaller sub-network, formed from the pairs of individuals identified as anomalous and their contacts, to identify anomalous network behavior. BIB005 used a p-value threshold of 0.05, which will lead to many pairs of individuals being falsely identified as anomalous in large networks. BIB005 did not realize that a number of researchers have proposed using control charts with the control limits based on Bayesian predictive distributions for quality control applications. These include BIB001 BIB004 BIB006 BIB007 BIB010 BIB012 , and Raubenheimer and van der Merwe (2015) . The primary way in which these methods differ is that only the approach of BIB005 , and closely related work, updated the posterior distribution of the parameter of interest using all prior data, without a distinction between the retrospective analysis of baseline Phase I data and the on-going real-time monitoring in Phase II. In this sense, their approaches and the dynamic reference approach of BIB014 are closely related to the use of self-starting methods. The other researchers used predictive distributions based on only the fixed set of Phase I data to determine the posterior distribution of the parameter or parameters of interest. Heard et al. (2011) pointed out, however, that for a longer term view, local models should be fit within shorter blocks of time, i.e., a moving window version of their method should be used.
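The predictive-distribution control limits can be illustrated with a conjugate Gamma-Poisson model, which is one simple instance of the conjugate counting models used in this line of work; the prior parameters a and b below are assumed values, and the hurdle variant mentioned above is not shown.

from scipy import stats

def predictive_pvalue(new_count, past_counts, a=1.0, b=1.0):
    # Upper-tail p-value of the current pair count under the Gamma-Poisson
    # predictive distribution built from all earlier counts for that pair.
    a_post = a + sum(past_counts)            # Gamma shape after updating
    b_post = b + len(past_counts)            # Gamma rate after updating
    r, p = a_post, b_post / (b_post + 1.0)   # predictive is negative binomial
    return stats.nbinom.sf(new_count - 1, r, p)   # P(count >= new_count)

# Flag the pair (i, j) for the second-stage analysis if the p-value is small,
# e.g. predictive_pvalue(c_t, history_ij) < 0.05 as with the threshold above.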
|
An overview and perspective on social network monitoring <s> Scan methods <s> Most disease registries are updated at least yearly. If a geographically localized health hazard suddenly occurs, we would like to have a surveillance system in place that can pick up a new geographical disease cluster as quickly as possible, irrespective of its location and size. At the same time, we want to minimize the number of false alarms. By using a space–time scan statistic, we propose and illustrate a system for regular time periodic disease surveillance to detect any currently ‘active’ geographical clusters of disease and which tests the statistical significance of such clusters adjusting for the multitude of possible geographical locations and sizes, time intervals and time periodic analyses. The method is illustrated on thyroid cancer among men in New Mexico 1973–1992. <s> BIB001 </s> An overview and perspective on social network monitoring <s> Scan methods <s> We introduce a theory of scan statistics on graphs and apply the ideas to the problem of anomaly detection in a time series of Enron email graphs. <s> BIB002 </s> An overview and perspective on social network monitoring <s> Scan methods <s> AbstractThe inference about the statistical properties of quality control methodologies is based on the assumptions of normality and independence. In real industrial environments though process data is often correlated or exhibits some serial dependence affecting the efficiency of Statistical Process Control (SPC) methodologies. New technology gives managers the option of using more sophisticated SPC models which more accurately reflect the process being monitored, by relaxing some of the assumptions. The aim of this paper is to present, to apply and to evaluate control charts that are designed to account for autocorrelation. <s> BIB003 </s> An overview and perspective on social network monitoring <s> Scan methods <s> Scan statistics are used in public health applications to detect increases in rates or clusters of disease indicated by an unusually large number of events. Most of the work has been for the retrospective case, in which a single set of historical data is to be analyzed. A modification of this retrospective scan statistic has been recommended for use when incidences of an event are recorded as they occur over time (prospectively) to determine whether the underlying incidence rate has increased, preferably as soon as possible after such an increase. In this paper, we investigate the properties of the scan statistic when used in prospective surveillance of the incidence rate under the assumption of independent Bernoulli observations. We show how to evaluate the expected number of Bernoulli observations needed to generate a signal that the incidence rate has increased. We compare the performance of the prospective scan statistic method with that obtained using the Bernoulli-based cumulative sum (CUSUM) technique. We show that the latter tends to be more effective in detecting sustained increases in the rate, but the scan method may be preferred in some applications due to its simplicity and can be used with relatively little loss of efficiency. <s> BIB004 </s> An overview and perspective on social network monitoring <s> Scan methods <s> Learning the network structure of a large graph is computationally demanding, and dynamically monitoring the network over time for any changes in structure threatens to be more challenging still. 
::: ::: This paper presents a two-stage method for anomaly detection in dynamic graphs: the first stage uses simple, conjugate Bayesian models for discrete time counting processes to track the pairwise links of all nodes in the graph to assess normality of behavior; the second stage applies standard network inference tools on a greatly reduced subset of potentially anomalous nodes. The utility of the method is demonstrated on simulated and real data sets. <s> BIB005 </s> An overview and perspective on social network monitoring <s> Scan methods <s> Abstract : Changes in observed social networks may signal an underlying change within an organization, and may even predict significant events or behaviors. The breakdown of a team's effectiveness, the emergence of informal leaders, or the preparation of an attack by a clandestine network may all be associated with changes in the patterns of interactions among group members. The ability to systematically, statistically, effectively and efficiently detect these changes has the potential to enable the anticipation, early warning, and faster response to both positive and negative organizational activities. By applying statistical process control techniques to social networks we can rapidly detect changes in these networks. Herein we describe this methodology and then illustrate it using four data sets. We nominate four types of dynamic network behaviors for investigation in this paper. These behaviors are not comprehensive; however, it is necessary to define a set of behaviors to focus our investigation of network change. The four behaviors we focus on are network stability, endogenous change, exogenous change, and initiated change. The first data set is the Newcomb fraternity data. The second set of data was collected on a group of mid-career U.S. Army officers in a week-long training exercise. The third data set contains the perceived connections among members of al Qaeda based on open sources, and the fourth data set is simulated using multiagent simulation. The results indicate that this methodology is able to detect change even with the high levels of uncertainty inherent in these data sets. <s> BIB006 </s> An overview and perspective on social network monitoring <s> Scan methods <s> Scan statistics are used in spatial statistics and image analysis to detect regions of unusual or anomalous activity. A scan statistic is a maximum (or minimum) of a local statistic—one computed on a local region of the data. This is sometimes called ‘moving window analysis’; in the Engineering literature. The idea is to ‘slide’ a window around the image (or map or whatever spatial structure the data have), compute a statistic within each window, and look for outliers—anomalously high (or low) statistics. We discuss extending this idea to graphs, in which case the local region is defined in terms of the connectivity of the graph—the neighborhoods of vertices. WIREs Comput Stat 2012 doi: 10.1002/wics.1217 ::: ::: ::: ::: This article is a U.S. Government work, and as such, is in the public domain in the United States of America. <s> BIB007 </s> An overview and perspective on social network monitoring <s> Scan methods <s> Unusual clusters of disease must be detected rapidly for effective public health interventions to be introduced. 
Over the past decade there has been a surge in interest in statistical methods for the early detection of infectious disease outbreaks.This growth in interest has given rise to much new methodological work, ranging across the spectrum of statistical methods.The paper presents a comprehensive review of the statistical approaches that have been proposed. Applications to both laboratory and syndromic surveillance data are provided to illustrate the various methods <s> BIB008 </s> An overview and perspective on social network monitoring <s> Scan methods <s> We introduce a computationally scalable method for detecting small anomalous areas in a large, time-dependent computer network, motivated by the challenge of identifying intruders operating inside enterprise-sized computer networks. Time-series of communications between computers are used to detect anomalies, and are modeled using Markov models that capture the bursty, often human-caused behavior that dominates a large subset of the time-series. Anomalies in these time-series are common, and the network intrusions we seek involve coincident anomalies over multiple connected pairs of computers. We show empirically that each time-series is nearly always independent of the time-series of other pairs of communicating computers. This independence is used to build models of normal activity in local areas from the models of the individual time-series, and these local areas are designed to detect the types of intrusions we are interested in. We define a locality statistic calculated by testing for deviations from... <s> BIB009 </s> An overview and perspective on social network monitoring <s> Scan methods <s> Social networks are increasingly attracting the attention of academic and industry researchers. Monitoring communications within clusters of suspicious individuals is important in flagging potential planning activities for terrorism events or crime. Governments are interested in methodology that can forewarn them of future terrorist attacks or social uprisings in disenchanted groups of their populations. This paper will examine a range of approaches that could be used to monitoring communication levels between suspicious individuals. The methodology could be scaled up to either understand changes in social structure for larger groups of people, to help manage crises such are bushfires in densely populated areas, or early detection of disease outbreaks using surveillance methods. The methodology could be extended into these other application domains that are less invasive of individuals’ privacy. <s> BIB010 </s> An overview and perspective on social network monitoring <s> Scan methods <s> Anomalies in online social networks can signify irregular, and often illegal behaviour. Anomalies in online social networks can signify irregular, and often illegal behaviour. Detection of such anomalies has been used to identify malicious individuals, including spammers, sexual predators, and online fraudsters. In this paper we survey existing computational techniques for detecting anomalies in online social networks. We characterise anomalies as being either static or dynamic, and as being labelled or unlabelled, and survey methods for detecting these different types of anomalies. We suggest that the detection of anomalies in online social networks is composed of two sub-processes; the selection and calculation of network features, and the classification of observations from this feature space. 
In addition, this paper provides an overview of the types of problems that anomaly detection can address and identifies key areas of future research. <s> BIB011
|
A number of researchers have proposed what are referred to as scan-based network monitoring schemes. In a frequently cited paper, BIB002 proposed a method for detecting increases in communication levels based on the sizes of the kth-order neighborhoods of each individual, where k = 0, 1, and 2. The degree of an individual was referred to as the size of the 0th-order neighborhood. Standardized statistics were calculated over time for each of the three metrics for each individual using a moving window of a specified length to establish the baseline mean and standard deviation. A lower bound of one was used for the estimates of the standard deviation to avoid signals for relatively small changes in network behavior. A lower bound of one for the estimated standard deviation is also used in the Early Aberration Reporting System algorithm for monitoring count data used by the U.S. Centers for Disease Control and Prevention. With the BIB002 method, the maximum of the three standardized network metrics at each time period is taken over the set of individuals in the network. The signal rule is based on these maxima. These maximum values are themselves standardized based on the estimated mean and standard deviation of previous maxima calculated over a moving window, and a signal is given whenever a maximum is further than five standard deviations from its estimated mean. Their method was applied retrospectively to an Enron e-mail network, but it was stated that the method can be used for prospective network monitoring. We anticipate, however, that this method will be able to detect quickly only very large network changes because of the use of the maximum of the standardized metrics. The standardized metrics corresponding to an individual could become quite large, for example, without causing the maximum value to take an unusually large value. We note that the BIB002 method is not a scan method in the sense of BIB001 or BIB004 . In these examples of more traditional scan methods, the monitoring statistics are based on counts in moving temporal or spatiotemporal windows. The network monitoring scan methods are instead based on maximum values of standardized deviations of metrics over moving windows, where the maximum is taken over all of the nodes in the network. In their Bayesian method, BIB005 updated the estimates of the baseline parameter values after each time period, whereas BIB002 based the comparison baseline on a moving window of observations. Using a moving window approach allows the network behavior to evolve slowly over time without necessarily producing a signal that a process change has occurred. BIB005 , on the other hand, incorporated all data into the estimates of the level of the process to which a metric based on the current sample is compared. The moving window approach seems more reasonable to us. One must keep in mind, however, that data reflecting undetected network changes become incorporated into the baseline with a moving window approach. This makes it more difficult to detect an anomaly that is not detected as soon as it occurs. In addition, moving window approaches will not continue to signal a sustained anomaly.
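A stripped-down version of the scan statistic of BIB002 described above is sketched below; for brevity only one locality statistic per node (such as the degree, the 0th-order neighborhood size) is monitored rather than all three, and applying the lower bound of one in both standardizations is our assumption.

import numpy as np

def scan_alarms(stats_by_time, window=20, sd_floor=1.0, threshold=5.0):
    # stats_by_time: array of shape (T, n), one locality statistic per node
    # per time period. Each value is standardized against a moving-window
    # baseline, the maximum is taken over nodes, and those maxima are in turn
    # standardized; an alarm is raised when the result exceeds the threshold.
    x = np.asarray(stats_by_time, dtype=float)
    T = x.shape[0]
    maxima = np.full(T, np.nan)
    alarms = []
    for t in range(window, T):
        past = x[t - window:t]
        mu = past.mean(axis=0)
        sd = np.maximum(past.std(axis=0, ddof=1), sd_floor)
        maxima[t] = ((x[t] - mu) / sd).max()
        prev = maxima[t - window:t]
        if not np.isnan(prev).any():
            if (maxima[t] - prev.mean()) / max(prev.std(ddof=1), sd_floor) > threshold:
                alarms.append(t)
    return alarms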
One advantage given for this ranking-and-scan approach is its computational efficiency compared to the infeasible approach of scanning over the activity of all subsets of individuals of given sizes. One concern with this approach is that communities within the network may not be captured by the ordering of the individuals. In addition, the network change to be detected may correspond to sub-networks different from those captured by the ordering of individuals. Other scan-based approaches pointed out by BIB011 include a dissertation whose ideas were subsequently published in BIB009 . The application was on computer network monitoring, however, not social network monitoring. Other work involving scan statistics includes BIB007 , Park et al. (2009), and McCullough and Carley (2011). The approach of BIB007 is closely related to that of Priebe et al. (2011). McCullough and Carley BIB006 claimed to use a scan approach similar to that of BIB002 , but we find their scan method to be somewhat ambiguously defined. Other authors combined the scan methods of BIB002 with an analysis of cross-correlations between the network metrics being monitored. The cross-correlations were calculated based on the data in the moving window, which does not include the most current observation. One concern regarding this approach is that the cross-correlations do not necessarily provide any information on anomalous activity that occurs with the current network graph. Also, their use of average correlations can mask important relationships between pairs of metric time series. Finally, it does not seem that they account for the fact that a correlation of a time series with itself will always be unity. BIB011 mentioned Pincombe (2005) as providing a network monitoring method based on time series models. Time series models can be fitted to time series of any network metrics. Unusually large residuals indicate network changes. It is important to note that this type of approach has been widely used for process monitoring in public health surveillance and in industrial and quality-related applications. Woodall and Montgomery (2014) provided an overview of this area and cited several review papers on the use of time series models in process monitoring, including BIB003 . BIB008 reviewed the use of time series approaches in public health surveillance.
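The time-series-model idea attributed to Pincombe (2005) can likewise be sketched. The snippet below is only an illustration under simplifying assumptions: it fits an AR(1) model (an arbitrary choice, not necessarily the model used in that work) to a baseline stretch of a single network metric by least squares and flags later observations whose one-step-ahead residuals are unusually large.

```python
import numpy as np

def ar1_residual_monitor(series, n_baseline, L=3.0):
    """Sketch: fit an AR(1) model to a baseline stretch of a network metric
    (e.g., total edge count per day) and flag later one-step-ahead residuals
    that fall more than L residual standard deviations from zero."""
    x = np.asarray(series, dtype=float)
    base = x[:n_baseline]
    # least-squares AR(1) fit on the baseline: x_t = c + phi * x_{t-1} + e_t
    X = np.column_stack([np.ones(n_baseline - 1), base[:-1]])
    c, phi = np.linalg.lstsq(X, base[1:], rcond=None)[0]
    sigma = np.std(base[1:] - (c + phi * base[:-1]), ddof=2)
    # one-step-ahead residuals for the later (monitoring) portion of the series
    resid = x[n_baseline:] - (c + phi * x[n_baseline - 1:-1])
    return np.flatnonzero(np.abs(resid) > L * sigma) + n_baseline

# toy example: an AR(1)-like metric with a sustained level shift at t = 130
rng = np.random.default_rng(1)
metric = np.empty(160)
metric[0] = 50.0
for t in range(1, 160):
    metric[t] = 50 + 0.5 * (metric[t - 1] - 50) + rng.normal(0, 1)
metric[130:] += 8
print(ar1_residual_monitor(metric, n_baseline=100))
```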
|
An overview and perspective on social network monitoring <s> Other approaches <s> Community detection on networks is a well-known problem encountered in many fields, for which the existing algorithms are inefficient 1) at capturing overlaps in-between communities, 2) at detecting communities having disparities in size and density 3) at taking into account the networks’ dynamics. In this paper, we propose a new algorithm (iLCD) for community detection using a radically new approach. Taking into account the dynamics of the network, it is designed for the detection of strongly overlapping communities. We first explain the main principles underlying the iLCD algorithm, introducing the two notions of intrinsic communities and longitudinal detection, and detail the algorithm. Then, we illustrate its efficiency in the case of a citation network, and then compare it with existing most efficient algorithms using a standard generator of community-based networks, the LFR benchmark. <s> BIB001 </s> An overview and perspective on social network monitoring <s> Other approaches <s> Detection of emerging topics are now receiving renewed interest motivated by the rapid growth of social networks. Conventional term-frequency-based approaches may not be appropriate in this context, because the information exchanged are not only texts but also images, URLs, and videos. We focus on the social aspects of theses networks. That is, the links between users that are generated dynamically intentionally or unintentionally through replies, mentions, and retweets. We propose a probability model of the mentioning behaviour of a social network user, and propose to detect the emergence of a new topic from the anomaly measured through the model. We combine the proposed mention anomaly score with a recently proposed change-point detection technique based on the Sequentially Discounting Normalized Maximum Likelihood (SDNML), or with Kleinberg's burst model. Aggregating anomaly scores from hundreds of users, we show that we can detect emerging topics only based on the reply/mention relationships in social network posts. We demonstrate our technique in a number of real data sets we gathered from Twitter. The experiments show that the proposed mention-anomaly-based approaches can detect new topics at least as early as the conventional term-frequency-based approach, and sometimes much earlier when the keyword is ill-defined. <s> BIB002 </s> An overview and perspective on social network monitoring <s> Other approaches <s> Recent advances in technology have enabled social media services to support space-time indexed data, and internet users from all over the world have created a large volume of time-stamped, geo-located data. Such spatiotemporal data has immense value for increasing situational awareness of local events, providing insights for investigations and understanding the extent of incidents, their severity, and consequences, as well as their time-evolving nature. In analyzing social media data, researchers have mainly focused on finding temporal trends according to volume-based importance. Hence, a relatively small volume of relevant messages may easily be obscured by a huge data set indicating normal situations. In this paper, we present a visual analytics approach that provides users with scalable and interactive social media data analysis and visualization including the exploration and examination of abnormal topics and events within various social media data sources, such as Twitter, Flickr and YouTube. 
In order to find and understand abnormal events, the analyst can first extract major topics from a set of selected messages and rank them probabilistically using Latent Dirichlet Allocation. He can then apply seasonal trend decomposition together with traditional control chart methods to find unusual peaks and outliers within topic time series. Our case studies show that situational awareness can be improved by incorporating the anomaly and trend examination techniques into a highly interactive visual analysis process. <s> BIB003 </s> An overview and perspective on social network monitoring <s> Other approaches <s> This paper develops a methodology to aggregate signals in a network regarding some hidden state of the world. We argue that focusing on edges around hubs will under certain circumstances amplify the faint signals disseminating in a network, allowing for more efficient detection of that hidden state. We apply this method to detecting emergencies in mobile phone data, demonstrating that under a broad range of cases and a constraint in how many edges can be observed at a time, focusing on the egocentric networks around key hubs will be more effective than sampling random edges. We support this conclusion analytically, through simulations, and with analysis of a dataset containing the call log data from a major mobile carrier in a European nation. <s> BIB004 </s> An overview and perspective on social network monitoring <s> Other approaches <s> As social networking sites have risen in popularity, cyber-criminals started to exploit these sites to spread malware and to carry out scams. Previous work has extensively studied the use of fake (Sybil) accounts that attackers set up to distribute spam messages (mostly messages that contain links to scam pages or drive-by download sites). Fake accounts typically exhibit highly anomalous behavior, and hence, are relatively easy to detect. As a response, attackers have started to compromise and abuse legitimate accounts. Compromising legitimate accounts is very effective, as attackers can leverage the trust relationships that the account owners have established in the past. Moreover, compromised accounts are more difficult to clean up because a social network provider cannot simply delete the correspond- <s> BIB005
|
Many methods have been proposed in the network analysis literature for detecting changes in network structure or behavior over time with specific goals in mind. Examples include detecting fraudulent accounts, detecting unusual events affecting network behavior, and detecting changes in community structure. A complete review of these methods is not feasible, but we briefly discuss some of this work in this subsection. BIB001 , for example, proposed a method for identifying changes in community structure over time where the identified communities could possibly overlap. As data are obtained, previously identified communities are updated and new communities can be identified. As another example, BIB003 proposed a method for detecting abnormal events quickly, such as a mass shooting or an earthquake, using social media data that incorporates spatiotemporal information. The approach involves a seasonal trend decomposition in conjunction with control chart methods based on a moving window of values to find unusual peaks and outliers within topic time series. In a related paper, BIB004 developed a method for detecting an extraordinary event using the timing and traffic within a network, assuming no knowledge of the content of the messages. In addition, BIB005 proposed a method for identifying compromised user accounts by building behavioral profiles for the users. Their method involves looking for groups of accounts that all experience similar changes within a short period of time. Their method was illustrated using Twitter and Facebook datasets. BIB002 , on the other hand, proposed a method for detecting emerging topics from social network streams based on the mentioning behavior of the users.
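As a rough sketch of the decomposition-plus-control-chart idea described above (not the authors' implementation), the snippet below assumes that topic extraction has already produced a daily count series for one topic, removes a weekly seasonal pattern with a generic STL decomposition, and applies simple three-sigma limits to the remainder. The burst location, baseline length, and rates are illustrative.

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

# Hypothetical daily message counts for one extracted topic (e.g., from LDA),
# with a weekly seasonal pattern and a short burst around day 140.
rng = np.random.default_rng(2)
days = np.arange(180)
counts = rng.poisson(50 + 15 * np.sin(2 * np.pi * days / 7.0))
counts[140:143] += 120                     # short-lived abnormal event

# Seasonal-trend decomposition, then simple three-sigma limits on the remainder.
res = STL(counts.astype(float), period=7, robust=True).fit()
remainder = res.resid
sigma = np.std(remainder[:120], ddof=1)    # baseline spread estimated from early data
outliers = np.flatnonzero(np.abs(remainder) > 3 * sigma)
print("flagged days:", outliers)
```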
|
An overview and perspective on social network monitoring <s> Phase I vs. Phase II <s> An overview and perspective is provided on the Phase I collection and analysis of data for use in process improvement and control charting. <s> BIB001 </s> An overview and perspective on social network monitoring <s> Phase I vs. Phase II <s> Anomalies in online social networks can signify irregular, and often illegal behaviour. Detection of such anomalies has been used to identify malicious individuals, including spammers, sexual predators, and online fraudsters. In this paper we survey existing computational techniques for detecting anomalies in online social networks. We characterise anomalies as being either static or dynamic, and as being labelled or unlabelled, and survey methods for detecting these different types of anomalies. We suggest that the detection of anomalies in online social networks is composed of two sub-processes; the selection and calculation of network features, and the classification of observations from this feature space. In addition, this paper provides an overview of the types of problems that anomaly detection can address and identifies key areas of future research. <s> BIB002
|
In statistical process monitoring it is important to distinguish between Phase I and Phase II. Phase I includes methods for understanding process behavior based on a fixed baseline set of data. In-control parameter values for appropriate models are estimated in the retrospective Phase I and used to design methods for on-going prospective monitoring in Phase II. In Phase II, we make a decision about the stability of the process relative to the Phase I baseline as each sample is collected over time. Phase I issues and methods were discussed by BIB001 . Generally it would seem to be more difficult to obtain a baseline of stable network data, however, than it would be to obtain such data in a much more controlled industrial environment. Thus we see a greater need for the use of moving window approaches which would be inappropriate for industrial process monitoring because industrial processes are not allowed to wander or evolve. BIB002 referred to methods of network anomaly detection as being either "static" or "dynamic". For static network methods the time order of contacts is ignored with all data aggregated over time. We consider it useful to also distinguish between Phase I dynamic methods to be used on a set of historical data with time order preserved and Phase II dynamic monitoring performed on-line as each new matrix of counts is observed. Generally the methods used for the analysis of Phase I data differ from those used in Phase II. Quick detection of process changes is important in Phase II, for example, but irrelevant in the analysis of Phase I data. Thus EWMA and CUSUM methods are often used in Phase II, while change-point and outlier detection methods are commonly used in Phase I.
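A minimal sketch of this distinction, under the simplifying assumption that a single network-level metric is monitored: the same Phase II observations are standardized either against a fixed Phase I baseline or against a trailing moving window, and only the latter tolerates slow, benign evolution of the network. The drift size, window length, and limits are arbitrary.

```python
import numpy as np

def phase2_zscores(metric, n_phase1, window=None):
    """Sketch: standardize Phase II observations of a network metric either
    against a fixed Phase I baseline (window=None) or against a trailing
    moving window, which lets a slowly evolving network re-baseline itself."""
    x = np.asarray(metric, dtype=float)
    z = np.full_like(x, np.nan)
    for t in range(n_phase1, len(x)):
        base = x[:n_phase1] if window is None else x[t - window:t]
        z[t] = (x[t] - base.mean()) / max(base.std(ddof=1), 1e-8)
    return z

rng = np.random.default_rng(3)
drifting = 100 + 0.05 * np.arange(300) + rng.normal(0, 2, 300)   # slowly evolving metric
z_fixed = phase2_zscores(drifting, n_phase1=100)
z_moving = phase2_zscores(drifting, n_phase1=100, window=50)
# The fixed baseline signals repeatedly on the benign drift; the moving window largely does not.
print((np.abs(z_fixed[100:]) > 3).sum(), (np.abs(z_moving[100:]) > 3).sum())
```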
|
An overview and perspective on social network monitoring <s> Use of computer simulation <s> Social networks are increasingly attracting the attention of academic and industry researchers. Monitoring communications within clusters of suspicious individuals is important in flagging potential planning activities for terrorism events or crime. Governments are interested in methodology that can forewarn them of future terrorist attacks or social uprisings in disenchanted groups of their populations. This paper will examine a range of approaches that could be used to monitoring communication levels between suspicious individuals. The methodology could be scaled up to either understand changes in social structure for larger groups of people, to help manage crises such are bushfires in densely populated areas, or early detection of disease outbreaks using surveillance methods. The methodology could be extended into these other application domains that are less invasive of individuals’ privacy. <s> BIB001 </s> An overview and perspective on social network monitoring <s> Use of computer simulation <s> Anomalies in online social networks can signify irregular, and often illegal behaviour. Anomalies in online social networks can signify irregular, and often illegal behaviour. Detection of such anomalies has been used to identify malicious individuals, including spammers, sexual predators, and online fraudsters. In this paper we survey existing computational techniques for detecting anomalies in online social networks. We characterise anomalies as being either static or dynamic, and as being labelled or unlabelled, and survey methods for detecting these different types of anomalies. We suggest that the detection of anomalies in online social networks is composed of two sub-processes; the selection and calculation of network features, and the classification of observations from this feature space. In addition, this paper provides an overview of the types of problems that anomaly detection can address and identifies key areas of future research. <s> BIB002 </s> An overview and perspective on social network monitoring <s> Use of computer simulation <s> Network modeling and analysis has become a fundamental tool for studying various complex systems. This paper proposes an extension of statistical monitoring to network streams, which is crucial for executive decision-making in various applications. To t.. <s> BIB003
|
We agree with BIB002 that methods need to be compared based on simulated networks. McCulloh and Carley (2011) also pointed out the usefulness of simulation studies. Anomalies can be modeled in the simulated datasets and methods can be compared on the basis of their ability to detect the anomalies. There is a substantial literature on the statistical modeling of networks that offers a diverse set of random graph models that may be helpful in this endeavor; see, for example, recent reviews of statistical network models. There are advantages in using parametric statistical models for the networks so that multiple graphs can be generated to represent a baseline and so that anomalies can be simulated by changing the parameters corresponding, for example, to contacts between individuals within a sub-network. Ideally one should use realistic networks, but the use of simplified networks would likely provide valuable insights on the relative performance of competing methods. If a method is not effective in detecting changes in simple networks, it will be unlikely to be effective with more complex networks. Decisions are required on the number of individuals in the network, the grouping of individuals into sub-networks, the type of covariate information, if any, and the type of anomaly to be detected. In their simulation, BIB003 assumed a given logistic regression model for the probabilities of contacts between pairs of individuals. They assumed that covariate data were available on the individuals, i.e., the data were labelled. Other authors have also used simulation to study the detection performance of their proposed methods. In his simulations, BIB001 assumed that the numbers of contacts between individuals were Poisson distributed.
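A simplified example of this kind of simulation is sketched below. It does not reproduce the logistic regression model of BIB003 ; instead, pairwise communication counts are Poisson with a constant baseline rate, and an anomaly is created by inflating the rate within one sub-network after a change point. All sizes, rates, and the choice of sub-network are arbitrary illustrative values.

```python
import numpy as np

def simulate_count_networks(n_nodes=40, n_periods=100, base_rate=0.5,
                            subnet=range(5), change_point=60, boost=3.0, seed=0):
    """Sketch: simulate a sequence of symmetric communication-count matrices C_t.
    Counts between each pair of nodes are Poisson; after the change point the
    rate within one sub-network is multiplied by `boost` to create an anomaly."""
    rng = np.random.default_rng(seed)
    subnet = np.array(list(subnet))
    mats = []
    for t in range(n_periods):
        lam = np.full((n_nodes, n_nodes), base_rate)
        if t >= change_point:
            lam[np.ix_(subnet, subnet)] *= boost     # anomalous sub-network
        c = rng.poisson(lam)
        c = np.triu(c, k=1)                          # undirected: keep upper triangle only
        mats.append(c + c.T)
    return mats

graphs = simulate_count_networks()
total_counts = [g.sum() // 2 for g in graphs]        # one simple network-level metric to monitor
print(total_counts[55:65])
```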
|
An overview and perspective on social network monitoring <s> Distributional assumptions <s> Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems. <s> BIB001 </s> An overview and perspective on social network monitoring <s> Distributional assumptions <s> Learning the network structure of a large graph is computationally demanding, and dynamically monitoring the network over time for any changes in structure threatens to be more challenging still. ::: ::: This paper presents a two-stage method for anomaly detection in dynamic graphs: the first stage uses simple, conjugate Bayesian models for discrete time counting processes to track the pairwise links of all nodes in the graph to assess normality of behavior; the second stage applies standard network inference tools on a greatly reduced subset of potentially anomalous nodes. The utility of the method is demonstrated on simulated and real data sets. <s> BIB002 </s> An overview and perspective on social network monitoring <s> Distributional assumptions <s> Stochastic blockmodels have been proposed as a tool for detecting community structure in networks as well as for generating synthetic networks for use as benchmarks. Most blockmodels, however, ignore variation in vertex degree, making them unsuitable for applications to real-world networks, which typically display broad degree distributions that can significantly distort the results. Here we demonstrate how the generalization of blockmodels to incorporate this missing element leads to an improved objective function for community detection in complex networks. We also propose a heuristic algorithm for community detection using this objective function or its non-degree-corrected counterpart and show that the degree-corrected version dramatically outperforms the uncorrected one in both real-world and synthetic networks. <s> BIB003 </s> An overview and perspective on social network monitoring <s> Distributional assumptions <s> Social networks are increasingly attracting the attention of academic and industry researchers. Monitoring communications within clusters of suspicious individuals is important in flagging potential planning activities for terrorism events or crime. Governments are interested in methodology that can forewarn them of future terrorist attacks or social uprisings in disenchanted groups of their populations. This paper will examine a range of approaches that could be used to monitoring communication levels between suspicious individuals. The methodology could be scaled up to either understand changes in social structure for larger groups of people, to help manage crises such are bushfires in densely populated areas, or early detection of disease outbreaks using surveillance methods. The methodology could be extended into these other application domains that are less invasive of individuals’ privacy. 
<s> BIB004 </s> An overview and perspective on social network monitoring <s> Distributional assumptions <s> Anomalies in online social networks can signify irregular, and often illegal behaviour. Anomalies in online social networks can signify irregular, and often illegal behaviour. Detection of such anomalies has been used to identify malicious individuals, including spammers, sexual predators, and online fraudsters. In this paper we survey existing computational techniques for detecting anomalies in online social networks. We characterise anomalies as being either static or dynamic, and as being labelled or unlabelled, and survey methods for detecting these different types of anomalies. We suggest that the detection of anomalies in online social networks is composed of two sub-processes; the selection and calculation of network features, and the classification of observations from this feature space. In addition, this paper provides an overview of the types of problems that anomaly detection can address and identifies key areas of future research. <s> BIB005 </s> An overview and perspective on social network monitoring <s> Distributional assumptions <s> Network modeling and analysis has become a fundamental tool for studying various complex systems. This paper proposes an extension of statistical monitoring to network streams, which is crucial for executive decision-making in various applications. To t.. <s> BIB006
|
Modeling a network parametrically requires some distributional assumptions. It is sometimes assumed that the number of communications between pairs of individuals is Poisson distributed. The Poisson means can vary depending on the sub-group membership of the individuals. See, for example, BIB004 . BIB002 used a hurdle variant of the Poisson distribution to account for an increased probability of no communication between two individuals in a given time period. BIB005 stated, however, that social network communication count distributions typically have heavier tails than those associated with the Poisson distribution. The use of Bayesian models can yield negative binomial distributions for the counts. The negative binomial distribution, frequently used in public health surveillance, can be used to model counts that are overdispersed relative to the Poisson distribution. Poisson-distributed numbers of contacts for individuals result from a random graph approach in which contact between any two specified individuals is represented by a Bernoulli random variable with a constant probability. As has often been pointed out, the degree distribution follows a power law for many networks, in which case scale-free random graph models such as the preferential attachment model of BIB001 can be used. Another option is the degree-corrected stochastic block model of BIB003 . In their computer simulations, BIB006 and related work assumed that there was covariate information on the individuals in the network, with the probability of a link between any two individuals modeled using logistic regression or loglinear models. We do not support a binomial model that has been proposed for these counts, in which each of the time periods for which the matrices C_t are obtained is broken into disjoint increments and the probability of at least one connection between two individuals in each increment is assumed to be a fixed value π. Thus, the sum of these Bernoulli random variables is a binomial random variable. The issues regarding how to divide the interval into increments and how to estimate π, however, were not addressed. In addition, if more than one contact is made between individuals in a single time increment, there would be a loss of information with this approach. To simulate networks with parametric models, some assumptions about dependence structure are needed. As a start, it seems reasonable to assume independence of the C_t matrices over time. If a method works poorly under this assumption, it would be unlikely to work well under a more general model.
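The distributional options discussed above can be compared directly by simulation. The sketch below draws pairwise communication counts from a Poisson model, an overdispersed negative binomial model obtained as a gamma-Poisson mixture, and a crude hurdle-style model with an inflated probability of no contact; all parameter values are illustrative, not estimates from any real network.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000          # number of simulated node pairs
mean = 2.0           # target mean number of contacts per period

# Poisson: variance equals the mean.
pois = rng.poisson(mean, n)

# Negative binomial via a gamma-Poisson mixture: overdispersed (variance > mean).
shape = 0.5                                   # smaller shape -> heavier tail
nb = rng.poisson(rng.gamma(shape, mean / shape, n))

# Hurdle-style model: extra probability of no contact at all, then a positive count.
p_zero = 0.6
active = rng.random(n) > p_zero
hurdle = np.zeros(n, dtype=int)
draws = rng.poisson(mean, active.sum())
draws[draws == 0] = 1                         # crude zero-truncation, for the sketch only
hurdle[active] = draws

for name, x in [("Poisson", pois), ("Neg. binomial", nb), ("Hurdle", hurdle)]:
    print(f"{name:14s} mean={x.mean():.2f} var={x.var():.2f} P(0)={np.mean(x == 0):.2f}")
```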
|
An overview and perspective on social network monitoring <s> Performance metrics for monitoring schemes <s> SUMMARY The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to problems of multiple significance testing is presented. It calls for controlling the expected proportion of falsely rejected hypotheses -the false discovery rate. This error rate is equivalent to the FWER when all hypotheses are true but is smaller otherwise. Therefore, in problems where the control of the false discovery rate rather than that of the FWER is desired, there is potential for a gain in power. A simple sequential Bonferronitype procedure is proved to control the false discovery rate for independent test statistics, and a simulation study shows that the gain in power is substantial. The use of the new procedure and the appropriateness of the criterion are illustrated with examples. <s> BIB001 </s> An overview and perspective on social network monitoring <s> Performance metrics for monitoring schemes <s> Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR controlling procedure for independent test statistics and was shown to be much more powerful than comparable procedures which control the traditional familywise error rate. We prove that this same procedure also controls the false discovery rate when the test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses. This condition for positive dependency is general enough to cover many problems of practical interest, including the comparisons of many treatments with a single control, multivariate normal test statistics with positive correlation matrix and multivariate t. Furthermore, the test statistics may be discrete, and the tested hypotheses composite without posing special difficulties. For all other forms of dependency, a simple conservative modification of the procedure controls the false discovery rate. Thus the range of problems for which a procedure with proven FDR control can be offered is greatly increased. 1.1. Simultaneous hypotheses testing. The control of the increased type I error when testing simultaneously a family of hypotheses is a central issue in the area of multiple comparisons. Rarely are we interested only in whether all hypotheses are jointly true or not, which is the test of the intersection null hypothesis. In most applications, we infer about the individual hypotheses, realizing that some of the tested hypotheses are usually true—we hope not all—and some are not. We wish to decide which ones are not true, indicating (statistical) discoveries. An important such problem is that of multiple endpoints in a clinical trial: a new treatment is compared with an existing one in terms of a large number of potential benefits (endpoints). <s> BIB002 </s> An overview and perspective on social network monitoring <s> Performance metrics for monitoring schemes <s> A review is given of various statistical performance metrics that have been used with prospective surveillance schemes, giving consideration to situations under which the metrics are most useful. Approaches and metrics used in industrial process monito.. 
<s> BIB003 </s> An overview and perspective on social network monitoring <s> Performance metrics for monitoring schemes <s> A number of methods have been proposed for detecting an increase in the incidence rate of a rare health event, such as a congenital malformation. Among these are the sets method, two modifications of the sets method, and the CUSUM method based on the Poisson distribution. We consider the situation where data are observed as a sequence of Bernoulli trials and propose the Bernoulli CUSUM chart as a desirable method for the surveillance of rare health events. We compared the performance of the sets method and its modifications with that of the Bernoulli CUSUM chart under a wide variety of circumstances. Chart design parameters were chosen to satisfy a minimax criteria. We used the steady-state average run length to measure chart performance instead of the average run length (ARL), which was used in nearly all previous comparisons involving the sets method or its modifications. Except in a very few instances, we found that the Bernoulli CUSUM chart has better steady-state ARL performance than the sets method and its modifications for the extensive number of cases considered. Thus, we recommend the use of the Bernoulli CUSUM chart to monitor small incidence rates and provide practical advice for its implementation. Copyright © 2007 John Wiley & Sons, Ltd. <s> BIB004 </s> An overview and perspective on social network monitoring <s> Performance metrics for monitoring schemes <s> Graphs are high-dimensional, non-Euclidean data, whose utility spans a wide variety of disciplines. While their non-Euclidean nature complicates the application of traditional signal processing paradigms, it is desirable to seek an analogous detection framework. In this paper we present a matched filtering method for graph sequences, extending to a dynamic setting a previous method for the detection of anomalously dense subgraphs in a large background. In simulation, we show that this temporal integration technique enables the detection of weak subgraph anomalies than are not detectable in the static case. We also demonstrate background/foreground separation using a real background graph based on a computer network. <s> BIB005 </s> An overview and perspective on social network monitoring <s> Performance metrics for monitoring schemes <s> Machine vision systems are increasingly being used in industrial applications because of their ability to quickly provide information on product geometry, surface defects, surface finish, and other product and process characteristics. Previous research for monitoring these visual characteristics using image data has focused on either detecting changes within an image or between images. Extending these methods to include both the spatial and the temporal aspects of image data would provide more detailed diagnostic information, which would be of great value to industrial practitioners. Therefore, in this article, we show how image data can be monitored using a spatiotemporal framework that is based on an extension of a generalized likelihood ratio control chart. The performance of the proposed method is evaluated through computer simulations and experimental studies. The results show that our proposed spatiotemporal method is capable of quickly detecting the emergence of a fault. 
The computer simulations also show that our proposed generalized likelihood ratio control charting method provides a good estimate of the change point and the size/location of the fault, which are important fault diagnostic metrics that are not typically provided in the image monitoring literature. Finally, we highlight some research opportunities and provide some advice to practitioners. Copyright © 2012 John Wiley & Sons, Ltd. <s> BIB006 </s> An overview and perspective on social network monitoring <s> Performance metrics for monitoring schemes <s> Control charts are the most popular monitoring tools used to distinguish between special (assignable) and common causes of variation and to detect any changes in processes. The time that a control chart gives an out-of-control signal is not the real time of change. The actual time of the change is called the change point. Knowing the real time of the change will help and simplify finding the assignable causes of the signal, which may be the result of a shift in the process mean or change in process variability. This article gives an overview of change point estimation in control charts, provides a classification scheme, and describes the research that has previously appeared in the literature. In addition, a gap analysis in this area provides direction for future research. Copyright © 2011 John Wiley & Sons, Ltd. <s> BIB007
|
We require metrics in order to compare the performance of network monitoring methods in computer simulation studies. The standard performance metrics in quality control applications are based on the run length distribution, where the run length is the number of samples observed until a signal is given that a process change has occurred. Typically the average run length (ARL) is used. One would like the ARL to be suitably large when the process is stable and low when a process change occurs. McCullough and Carley (2011) defined an average detection length metric that is equivalent to the ARL. The ARL metric is useful when a change in the process is sustained until it is detected. If a change to the network is temporary, however, then a more reasonable metric is the probability of detecting the process change while it is in effect. This is referred to as the probability of correct detection. A general discussion of this and other performance metrics was given by BIB003 . In assessing performance in detecting a process change, it can be assumed that the process change happens at the time monitoring begins or that the change is delayed. Metrics under these two scenarios are referred to as being zero-state and steady-state, respectively. Generally, steady-state performance metrics are preferred in statistical process monitoring because process changes are frequently delayed and because some methods have good zero-state performance, but poor steady-state performance. See, for example, BIB004 . We expect that the performance of the method of BIB005 will not be as good for delayed network changes as it is for network changes that occur when monitoring begins. In addition to quick detection of network anomalies, the individual or individuals involved in the anomaly may need to be accurately identified. This is analogous to being able to identify the correct geographical region of an outbreak in public health surveillance applications. Appropriate metrics include the percentages of misclassified individuals or a metric such as the Dice similarity coefficient, as used by BIB006 in an image monitoring application. It may also be important to determine the time at which an anomaly first occurred. BIB007 reviewed the statistical process monitoring literature on identifying the time of a process change after a signal that a change has occurred. With large networks, methods may tend to identify one or more individuals or sub-networks as being anomalous at each time period. In these cases the ARL metric is no longer useful. Metrics such as the false discovery rate, based on the ideas of BIB001 and BIB002 , would then be needed. We note that the use of performance metrics is required in order to compare the performance of competing methods in simulation studies. Practitioners, however, should not expect to be able to design monitoring methods such that performance metrics will take specified values, e.g., having an in-control ARL of 100. As has been shown in the process monitoring literature, it is not possible to have enough baseline data to accomplish this objective even in the much simpler univariate case of monitoring the mean of a variable assumed to have a normal distribution.
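The run-length and correct-detection metrics can be estimated by straightforward Monte Carlo. The sketch below uses an arbitrary three-sigma Shewhart-type rule on a standard-normal metric, estimates the in-control ARL, and estimates the probability of correctly detecting a temporary shift that is delayed and lasts only ten periods; it is meant only to illustrate the metrics themselves, not any particular network monitoring method, and the shift size and timing are arbitrary.

```python
import numpy as np

def run_length(rng, shift=0.0, shift_start=0, shift_len=10**9, limit=3.0, max_t=10_000):
    """One monitoring run of a Shewhart-type rule on a standard-normal metric.
    An optional mean shift of size `shift` is active for `shift_len` periods
    starting right after `shift_start`. Returns the first signal time."""
    for t in range(1, max_t + 1):
        mean = shift if shift_start < t <= shift_start + shift_len else 0.0
        if abs(rng.normal(mean, 1.0)) > limit:
            return t
    return max_t

rng = np.random.default_rng(5)

# Zero-state, in-control average run length (ARL0) of a three-sigma rule.
arl0 = np.mean([run_length(rng) for _ in range(2000)])

# Probability of correct detection of a temporary shift lasting 10 periods that is
# delayed until time 50 (a steady-state style scenario): the change must be caught
# while it is in effect, and any earlier signal counts as a false alarm instead.
times = [run_length(rng, shift=1.5, shift_start=50, shift_len=10) for _ in range(2000)]
pcd = np.mean([50 < t <= 60 for t in times])

print(f"estimated ARL0 = {arl0:.0f}, estimated P(correct detection) = {pcd:.2f}")
```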
|
An overview and perspective on social network monitoring <s> Research opportunities and conclusions <s> We introduce a theory of scan statistics on graphs and apply the ideas to the problem of anomaly detection in a time series of Enron email graphs. <s> BIB001 </s> An overview and perspective on social network monitoring <s> Research opportunities and conclusions <s> Learning the network structure of a large graph is computationally demanding, and dynamically monitoring the network over time for any changes in structure threatens to be more challenging still. ::: ::: This paper presents a two-stage method for anomaly detection in dynamic graphs: the first stage uses simple, conjugate Bayesian models for discrete time counting processes to track the pairwise links of all nodes in the graph to assess normality of behavior; the second stage applies standard network inference tools on a greatly reduced subset of potentially anomalous nodes. The utility of the method is demonstrated on simulated and real data sets. <s> BIB002 </s> An overview and perspective on social network monitoring <s> Research opportunities and conclusions <s> The aggregation of event counts is a common, and often necessary, practice in many applications. When working with large numbers of events, it may be more practical to consider the number of events... <s> BIB003 </s> An overview and perspective on social network monitoring <s> Research opportunities and conclusions <s> Social networks are increasingly attracting the attention of academic and industry researchers. Monitoring communications within clusters of suspicious individuals is important in flagging potential planning activities for terrorism events or crime. Governments are interested in methodology that can forewarn them of future terrorist attacks or social uprisings in disenchanted groups of their populations. This paper will examine a range of approaches that could be used to monitoring communication levels between suspicious individuals. The methodology could be scaled up to either understand changes in social structure for larger groups of people, to help manage crises such are bushfires in densely populated areas, or early detection of disease outbreaks using surveillance methods. The methodology could be extended into these other application domains that are less invasive of individuals’ privacy. <s> BIB004 </s> An overview and perspective on social network monitoring <s> Research opportunities and conclusions <s> Anomalies in online social networks can signify irregular, and often illegal behaviour. Anomalies in online social networks can signify irregular, and often illegal behaviour. Detection of such anomalies has been used to identify malicious individuals, including spammers, sexual predators, and online fraudsters. In this paper we survey existing computational techniques for detecting anomalies in online social networks. We characterise anomalies as being either static or dynamic, and as being labelled or unlabelled, and survey methods for detecting these different types of anomalies. We suggest that the detection of anomalies in online social networks is composed of two sub-processes; the selection and calculation of network features, and the classification of observations from this feature space. In addition, this paper provides an overview of the types of problems that anomaly detection can address and identifies key areas of future research. 
<s> BIB005 </s> An overview and perspective on social network monitoring <s> Research opportunities and conclusions <s> Network modeling and analysis has become a fundamental tool for studying various complex systems. This paper proposes an extension of statistical monitoring to network streams, which is crucial for executive decision-making in various applications. To t.. <s> BIB006
|
We believe that the monitoring of social networks is an important application and research area with abundant opportunities available. We agree with McCullough and Carley (2011) that social network change detection represents an exciting new area of research. The following are some research topics of interest: 1. We agree with BIB005 that there is a need to evaluate and compare the performance of existing methods. As they point out, most authors simply illustrate their proposed methods based on case study datasets. One cannot reliably compare performance of methods based on case study results since one rarely knows whether or not any detection is a false alarm. In addition, a method tailor-made for a specific case study may perform poorly in other applications. Comparisons of existing methods would likely spark ideas for new methods. Ideally, new methods should be scalable to large networks. 2. We also agree with BIB005 that research is needed to provide guidance on the selection of the most effective network metrics to monitor in order to satisfy the objectives of the monitoring. 3. Many of the approaches used are of the Shewhart type in that the decision whether or not an anomaly is present is based on each set of graph information individually as it is obtained. See, for example, BIB006 . McCulloh and Carley (2008a, b) advocated use of CUSUM and EWMA methods based on network metrics. We would expect that the CUSUM and EWMA methods would have better detection capability, but performance comparisons are needed (a minimal EWMA sketch on a simulated network-level metric is given after this list). 4. Study is needed on the effect of aggregation over time on the monitoring of networks. This would be a generalization of the work of BIB003 . We expect that detection of anomalies will become more difficult with increasing levels of aggregation, especially with Bernoulli data. In addition, study is needed on the effect of the loss of information in considering Bernoulli data instead of the numbers of contacts between individuals. We anticipate that reducing count data to Bernoulli data could result in a significant loss of information and a greatly reduced ability to detect network anomalies, particularly as graph data are aggregated over longer time intervals. 5. Is it more efficient to identify individuals with anomalous behavior and then analyze the resulting sub-network (as in BIB002 ), or is it better to search over sub-networks directly by monitoring kth-order neighborhood data corresponding to each individual (as in BIB001 )? We anticipate that the latter approach will be more effective because the structure of the sub-network formed by individuals with anomalous behavior may not necessarily be anomalous. 6. We encourage further investigation of monitoring methods based on monitoring the eigenvalues of modularity matrices. It is important to clarify what types of network changes are not detectable with use of a specified number of eigenvalues. 7. The use of false discovery rate approaches seems appropriate for methods based on the simultaneous use of many charts, such as the method proposed by BIB002 . Woodall and Montgomery (2014) listed several papers on the use of the false discovery rate approach in process monitoring. Some of the network monitoring methods, for example those by BIB002 , are already p-value based with a concern over the high number of false positives, so use of a false discovery rate approach seems promising. 8. Additional methods are needed that incorporate covariate information about the network or the contacts.
This could include labels that categorize individuals into groups, the length or size of the message constituting the contact, and the time of any contact. BIB005 referred to the monitoring in this case as a search for dynamic labelled anomalies. Thus far, BIB006 appears to be among the only work to have proposed methods for monitoring with attributed (or labelled) data. 9. Most often the graph count data are not smoothed over time; moving window methods are used instead. BIB004 , however, smoothed the count data using exponential smoothing to build in temporal memory. It is not clear which approach is better. 10. With moving window approaches, what should the length of the window be in a given application? BIB006 used moving windows of sizes 4 and 10, whereas BIB001 used a window length of 20. It also seems possible to improve performance by lagging the window, i.e., by excluding a specified number of the most recent graphs. 11. There will likely be seasonal effects in network data, e.g., day-of-the-week effects or holiday effects. Seasonal effects could be identified using Phase I data. Sometimes the effect of this variation can be removed by aggregating the data over time, e.g., aggregation of daily data by weeks. Seasonal effects are common in public health monitoring applications, so some public health surveillance methods could likely be adapted for use with network data. 12. Methods must be adapted for evolving networks to account for new individuals entering the network and for individuals leaving the network. These events can trigger signals of network change that are not likely of interest. 13. As a quality monitoring research topic, a comparison is needed between Bayesian control charts based on predictive distributions and the self-starting control chart approaches.
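As referenced in item 3 above, the following is a minimal EWMA sketch applied to a single simulated network-level metric (the total number of communications per period). The baseline length, smoothing constant, control limit multiplier, and Poisson rates are arbitrary choices used only for illustration, and the baseline is treated as a fixed Phase I sample rather than a moving window.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, n_baseline=50):
    """Sketch: EWMA chart for one network-level metric. The baseline mean and
    standard deviation come from the first n_baseline in-control periods;
    returns the indices of the periods at which the chart signals."""
    x = np.asarray(x, dtype=float)
    mu0, sd0 = x[:n_baseline].mean(), x[:n_baseline].std(ddof=1)
    z, signals = mu0, []
    for t in range(n_baseline, len(x)):
        z = lam * x[t] + (1 - lam) * z                         # EWMA update
        i = t - n_baseline + 1
        sigma_z = sd0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        if abs(z - mu0) > L * sigma_z:                         # time-varying control limits
            signals.append(t)
    return signals

# Toy example: Poisson total-count metric with a modest sustained increase at t = 120.
rng = np.random.default_rng(6)
metric = rng.poisson(200, 200).astype(float)
metric[120:] = rng.poisson(215, 80)
print(ewma_chart(metric))
```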
|
A Survey on Deep Transfer Learning <s> Deep Transfer Learning <s> A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research. <s> BIB001 </s> A Survey on Deep Transfer Learning <s> Deep Transfer Learning <s> Machine learning and data mining techniques have been used in numerous real-world applications. An assumption of traditional machine learning methodologies is the training data and testing data are taken from the same domain, such that the input feature space and data distribution characteristics are the same. However, in some real-world machine learning scenarios, this assumption does not hold. There are cases where training data is expensive or difficult to collect. Therefore, there is a need to create high-performance learners trained with more easily obtained data from different domains. This methodology is referred to as transfer learning. This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications applied to transfer learning. Lastly, there is information listed on software downloads for various transfer learning solutions and a discussion of possible future research work. The transfer learning solutions surveyed are independent of data size and can be applied to big data environments. <s> BIB002
|
Transfer learning is an important tool in machine learning for addressing the basic problem of insufficient training data. It tries to transfer knowledge from the source domain to the target domain by relaxing the assumption that the training data and the test data must be i.i.d. This has a great positive effect in many domains that are difficult to improve because of insufficient training data. The learning process of transfer learning is illustrated in Fig. 1. Some notations used in this survey need to be clearly defined. First of all, we give the definitions of a domain and a task, respectively: A domain can be represented by D = {χ, P(X)}, which contains two parts: the feature space χ and the marginal probability distribution P(X), where X = {x_1, ..., x_n} ∈ χ. A task can be represented by T = {y, f(x)}. It consists of two parts: the label space y and the target prediction function f(x), which can also be regarded as a conditional probability function P(y|x). Transfer learning can then be formally defined as follows: given a source domain D_S with learning task T_S and a target domain D_T with learning task T_T, transfer learning aims to improve the performance of the target predictive function f_T(·) in D_T by using the knowledge in D_S and T_S, where D_S ≠ D_T and/or T_S ≠ T_T. Surveys BIB001 and BIB002 divide transfer learning methods into three major categories according to the relationship between the source domain and the target domain, a categorization that has been widely accepted. These surveys give a good summary of past work on transfer learning and introduce a number of classic transfer learning methods. Furthermore, many newer and better methods have been proposed recently. In recent years, the transfer learning research community has mainly focused on two aspects: domain adaptation and multi-source domain transfer. Deep learning has come to dominate many research fields in recent years, so it is important to determine how to transfer knowledge effectively with deep neural networks, an approach called deep transfer learning, which can be defined as follows: a transfer learning task is a deep transfer learning task when the target predictive function f_T(·) is a non-linear function represented by a deep neural network.
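A common concrete instance of deep transfer learning is to reuse a network trained on a large source domain and retrain only a small task-specific part on the target domain. The PyTorch sketch below illustrates this generic strategy; it is not a method from the surveyed papers, and the choice of ResNet-18, ImageNet weights, ten target classes, and a single training step on random tensors are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal sketch of network-based deep transfer learning: reuse a network
# pre-trained on a large source domain (ImageNet), freeze its feature layers,
# and learn only a new task-specific head for the target label space.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():          # freeze the transferred (source) parameters
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 10)   # new head for a 10-class target task

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a fake target-domain batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```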
|
A Survey on Deep Transfer Learning <s> Categories <s> Transfer learning allows leveraging the knowledge of source domains, available a priori, to help training a classifier for a target domain, where the available data is scarce. The effectiveness of the transfer is affected by the relationship between source and target. Rather than improving the learning, brute force leveraging of a source poorly related to the target may decrease the classifier performance. One strategy to reduce this negative transfer is to import knowledge from multiple sources to increase the chance of finding one source closely related to the target. This work extends the boosting framework for transferring knowledge from multiple sources. Two new algorithms, MultiSource-TrAdaBoost, and TaskTrAdaBoost, are introduced, analyzed, and applied for object category recognition and specific object detection. The experiments demonstrate their improved performance by greatly reducing the negative transfer as the number of sources increases. TaskTrAdaBoost is a fast algorithm enabling rapid retraining over new targets. <s> BIB001 </s> A Survey on Deep Transfer Learning <s> Categories <s> The goal of transfer learning is to improve the learning of a new target concept given knowledge of related source concept(s). We introduce the first boosting-based algorithms for transfer learning that apply to regression tasks. First, we describe two existing classification transfer algorithms, ExpBoost and TrAdaBoost, and show how they can be modified for regression. We then introduce extensions of these algorithms that improve performance significantly on controlled experiments in a wide range of test domains. <s> BIB002 </s> A Survey on Deep Transfer Learning <s> Categories <s> Text classification is widely used in many real-world applications. To obtain satisfied classification performance, most traditional data mining methods require lots of labeled data, which can be costly in terms of both time and human efforts. In reality, there are plenty of such resources in English since it has the largest population in the Internet world, which is not true in many other languages. In this paper, we present a novel transfer learning approach to tackle the cross-language text classification problems. We first align the feature spaces in both domains utilizing some on-line translation service, which makes the two feature spaces under the same coordinate. Although the feature sets in both domains are the same, the distributions of the instances in both domains are different, which violates the i.i.d. assumption in most traditional machine learning methods. For this issue, we propose an iterative feature and instance weighting (Bi-Weighting) method for domain adaptation. We empirically evaluate the effectiveness and efficiency of our approach. The experimental results show that our approach outperforms some baselines including four transfer learning algorithms. <s> BIB003 </s> A Survey on Deep Transfer Learning <s> Categories <s> Given samples from distributions p and q, a two-sample test determines whether to reject the null hypothesis that p = q, based on the value of a test statistic measuring the distance between the samples. One choice of test statistic is the maximum mean discrepancy (MMD), which is a distance between embeddings of the probability distributions in a reproducing kernel Hilbert space. 
The kernel used in obtaining these embeddings is critical in ensuring the test has high power, and correctly distinguishes unlike distributions with high probability. A means of parameter selection for the two-sample test based on the MMD is proposed. For a given test level (an upper bound on the probability of making a Type I error), the kernel is chosen so as to maximize the test power, and minimize the probability of making a Type II error. The test statistic, test threshold, and optimization over the kernel parameters are obtained with cost linear in the sample size. These properties make the kernel selection and test procedures suited to data streams, where the observations cannot all be stored in memory. In experiments, the new kernel selection approach yields a more powerful test than earlier kernel selection heuristics. <s> BIB004 </s> A Survey on Deep Transfer Learning <s> Categories <s> Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The suc- cess of CNNs is attributed to their ability to learn rich mid- level image representations as opposed to hand-designed low-level features used in other image classification meth- ods. Learning CNNs, however, amounts to estimating mil- lions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be effi- ciently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred rep- resentation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization. <s> BIB005 </s> A Survey on Deep Transfer Learning <s> Categories <s> Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). ::: As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. ::: Overall, the approach can be implemented with little effort using any of the deep-learning packages. 
The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets. <s> BIB006 </s> A Survey on Deep Transfer Learning <s> Categories <s> Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks. <s> BIB007 </s> A Survey on Deep Transfer Learning <s> Categories <s> Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings. <s> BIB008 </s> A Survey on Deep Transfer Learning <s> Categories <s> Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets. <s> BIB009 </s> A Survey on Deep Transfer Learning <s> Categories <s> The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. 
In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks. <s> BIB010 </s> A Survey on Deep Transfer Learning <s> Categories <s> Due to the storage and retrieval efficiency, hashing has been widely deployed to approximate nearest neighbor search for large-scale multimedia retrieval. Supervised hashing, which improves the quality of hash coding by exploiting the semantic similarity on data pairs, has received increasing attention recently. For most existing supervised hashing methods for image retrieval, an image is first represented as a vector of hand-crafted or machine-learned features, followed by another separate quantization step that generates binary codes. However, suboptimal hash coding may be produced, because the quantization error is not statistically minimized and the feature representation is not optimally compatible with the binary coding. In this paper, we propose a novel Deep Hashing Network (DHN) architecture for supervised hashing, in which we jointly learn good image representation tailored to hash coding and formally control the quantization error. The DHN model constitutes four key components: (1) a subnetwork with multiple convolution-pooling layers to capture image representations; (2) a fully-connected hashing layer to generate compact binary hash codes; (3) a pairwise cross-entropy loss layer for similarity-preserving learning; and (4) a pairwise quantization loss for controlling hashing quality. Extensive experiments on standard image retrieval datasets show the proposed DHN model yields substantial boosts over latest state-of-the-art hashing methods. <s> BIB011 </s> A Survey on Deep Transfer Learning <s> Categories <s> Classification of sandstone microscopic images is an essential task in geology, and the classical method is either subjective or time-consuming. Computer aided automatic classification has been proved useful, but it seldom considers the situation where sandstone images are collected from separated regions. In this paper, we provide a method called Festra, which uses transfer learning to handle the problem of interregional sandstone microscopic image classification. The method contains two parts: one is feature selection, which aims to screen out features having great difference between the regions, the other is instance transfer using an enhanced TrAdaBoost, whose object is to mitigate the difference among thin section images collected from the regions. Experiments are conducted based on the sandstone images taken from four regions in Tibet to study the performance of Festra. 
The experimental results have proved both effectiveness and validity of Festra, which provides competitive prediction performance on all the four regions, with few target instances labeled suitable for the field use. HighlightsThe interregional sandstone microscopic image classification problem is formally defined.A transfer learning method called Festra is proposed for the problem.The method combines the feature selection and Enhanced TrAdaBoost. <s> BIB012 </s> A Survey on Deep Transfer Learning <s> Categories <s> Transfer learning has been proven to be effective for the problems where training data from a source domain and test data from a target domain are drawn from different distributions. To reduce the distribution divergence between the source domain and the target domain, many previous studies have been focused on designing and optimizing objective functions with the Euclidean distance to measure dissimilarity between instances. However, in some real-world applications, the Euclidean distance may be inappropriate to capture the intrinsic similarity or dissimilarity between instances. To deal with this issue, in this paper, we propose a metric transfer learning framework (MTLF) to encode metric learning in transfer learning. In MTLF, instance weights are learned and exploited to bridge the distributions of different domains, while Mahalanobis distance is learned simultaneously to maximize the intra-class distances and minimize the inter-class distances for the target domain. Unlike previous work where instance weights and Mahalanobis distance are trained in a pipelined framework that potentially leads to error propagation across different components, MTLF attempts to learn instance weights and a Mahalanobis distance in a parallel framework to make knowledge transfer across domains more effective. Furthermore, we develop general solutions to both classification and regression problems on top of MTLF, respectively. We conduct extensive experiments on several real-world datasets on object recognition, handwriting recognition, and WiFi location to verify the effectiveness of MTLF compared with a number of state-of-the-art methods. <s> BIB013 </s> A Survey on Deep Transfer Learning <s> Categories <s> The exquisite sensitivity of the advanced LIGO detectors has enabled the detection of multiple gravitational wave signals. The sophisticated design of these detectors mitigates the effect of most types of noise. However, advanced LIGO data streams are contaminated by numerous artifacts known as glitches: non-Gaussian noise transients with complex morphologies. Given their high rate of occurrence, glitches can lead to false coincident detections, obscure and even mimic gravitational wave signals. Therefore, successfully characterizing and removing glitches from advanced LIGO data is of utmost importance. Here, we present the first application of Deep Transfer Learning for glitch classification, showing that knowledge from deep learning algorithms trained for real-world object recognition can be transferred for classifying glitches in time-series based on their spectrogram images. 
Using the Gravity Spy dataset, containing hand-labeled, multi-duration spectrograms obtained from real LIGO data, we demonstrate that this method enables optimal use of very deep convolutional neural networks for classification given small training datasets, significantly reduces the time for training the networks, and achieves state-of-the-art accuracy above 98.8%, with perfect precision-recall on 8 out of 22 classes. Furthermore, new types of glitches can be classified accurately given few labeled examples with this technique. Once trained via transfer learning, we show that the convolutional neural networks can be truncated and used as excellent feature extractors for unsupervised clustering methods to identify new classes based on their morphology, without any labeled examples. Therefore, this provides a new framework for dynamic glitch classification for gravitational wave detectors, which are expected to encounter new types of noise as they undergo gradual improvements to attain design sensitivity. <s> BIB014 </s> A Survey on Deep Transfer Learning <s> Categories <s> Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task. <s> BIB015 </s> A Survey on Deep Transfer Learning <s> Categories <s> Adversarial learning has been successfully embedded into deep networks to learn transferable features for domain adaptation, which reduce distribution discrepancy between the source and target domains and improve generalization performance. Prior domain adversarial adaptation methods could not align complex multimode distributions since the discriminative structures and inter-layer interactions across multiple domain-specific layers have not been exploited for distribution alignment. In this paper, we present randomized multilinear adversarial networks (RMAN), which exploit multiple feature layers and the classifier layer based on a randomized multilinear adversary to enable both deep and discriminative adversarial adaptation. The learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. 
Experiments demonstrate that our models exceed the state-of-the-art results on standard domain adaptation datasets. <s> BIB016 </s> A Survey on Deep Transfer Learning <s> Categories <s> We propose a framework that learns a representation transferable across different domains and tasks in a label efficient manner. Our approach battles domain shift with a domain adversarial loss, and generalizes the embedding to novel task using a metric learning-based approach. Our model is simultaneously optimized on labeled source data and unlabeled or sparsely labeled data in the target domain. Our method shows compelling results on novel classes within a new domain even when only a few labeled examples per class are available, outperforming the prevalent fine-tuning approach. In addition, we demonstrate the effectiveness of our framework on the transfer learning task from image object recognition to video action recognition. <s> BIB017 </s> A Survey on Deep Transfer Learning <s> Categories <s> Transfer learning and ensemble learning are the new trends for solving the problem that training data and test data have different distributions. In this paper, we design an ensemble transfer learning framework to improve the classification accuracy when the training data are insufficient. First, a weighted-resampling method for transfer learning is proposed, which is named TrResampling. In each iteration, the data with heavy weights in the source domain are resampled, and the TrAdaBoost algorithm is used to adjust the weights of the source data and target data. Second, three classic machine learning algorithms, namely, naive Bayes, decision tree, and SVM, are used as the base learners of TrResampling, where the base learner with the best performance is chosen for transfer learning. To illustrate the performance of TrResampling, the TrAdaBoost and decision tree are used for evaluation and comparison on 15 UCI data sets, TrAdaBoost, ARTL, and SVM are used for evaluation and comparison on five text data sets. According to the experimental results, our proposed TrResampling is superior to the state-of-the-art learning methods on UCI data sets and text data sets. In addition, TrResampling, bagging-based transfer learning algorithm, and MultiBoosting-based transfer learning algorithm (TrMultiBoosting) are assembled in the framework, and we compare the three ensemble transfer learning algorithms with TrAdaBoost to illustrate the framework’s effective transfer ability. <s> BIB018 </s> A Survey on Deep Transfer Learning <s> Categories <s> The capabilities of (I) learning transferable knowledge across domains; and (II) fine-tuning the pre-learned base knowledge towards tasks with considerably smaller data scale are extremely important. Many of the existing transfer learning techniques are supervised approaches, among which deep learning has the demonstrated power of learning domain transferrable knowledge with large scale network trained on massive amounts of labeled data. However, in many biomedical tasks, both the data and the corresponding label can be very limited, where the unsupervised transfer learning capability is urgently needed. In this paper, we proposed a novel multi-scale convolutional sparse coding (MSCSC) method, that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of learned patterns; and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. 
Extensive experimental evaluation of MSCSC demonstrates the effectiveness of the proposed MSCSC in both regular and transfer learning tasks in various biomedical domains. <s> BIB019
|
Deep transfer learning studies how to exploit knowledge from other fields through deep neural networks. As deep neural networks have become popular in many fields, a considerable number of deep transfer learning methods have been proposed, so it is important to classify and summarize them. Based on the techniques used, this paper classifies deep transfer learning into four categories, summarized in Table 1: instances-based, mapping-based, network-based, and adversarial-based deep transfer learning.
Instances-based: utilize instances from the source domain as supplementary training data through appropriate weight adjustment. [4], BIB001, BIB002, BIB003, BIB012, BIB013, BIB018
Mapping-based: map instances from the two domains into a new data space with better similarity. [23], BIB007, BIB004, BIB009, [2]
Network-based: reuse part of the network pre-trained in the source domain. [9], BIB005, BIB010, BIB011, BIB019, BIB014
Adversarial-based: use adversarial technology to find transferable features that are suitable for both domains. [1], BIB006, BIB008, BIB015, BIB016, BIB017
|
A Survey on Deep Transfer Learning <s> Instances-based deep transfer learning <s> Traditional machine learning makes a basic assumption: the training and test data should be under the same distribution. However, in many cases, this identical-distribution assumption does not hold. The assumption might be violated when a task from one new domain comes, while there are only labeled data from a similar old domain. Labeling the new data can be costly and it would also be a waste to throw away all the old data. In this paper, we present a novel transfer learning framework called TrAdaBoost, which extends boosting-based learning algorithms (Freund & Schapire, 1997). TrAdaBoost allows users to utilize a small amount of newly labeled data to leverage the old data to construct a high-quality classification model for the new data. We show that this method can allow us to learn an accurate model using only a tiny amount of new data and a large amount of old data, even when the new data are not sufficient to train a model alone. We show that TrAdaBoost allows knowledge to be effectively transferred from the old data to the new. The effectiveness of our algorithm is analyzed theoretically and empirically to show that our iterative algorithm can converge well to an accurate model. <s> BIB001 </s> A Survey on Deep Transfer Learning <s> Instances-based deep transfer learning <s> Transfer learning allows leveraging the knowledge of source domains, available a priori, to help training a classifier for a target domain, where the available data is scarce. The effectiveness of the transfer is affected by the relationship between source and target. Rather than improving the learning, brute force leveraging of a source poorly related to the target may decrease the classifier performance. One strategy to reduce this negative transfer is to import knowledge from multiple sources to increase the chance of finding one source closely related to the target. This work extends the boosting framework for transferring knowledge from multiple sources. Two new algorithms, MultiSource-TrAdaBoost, and TaskTrAdaBoost, are introduced, analyzed, and applied for object category recognition and specific object detection. The experiments demonstrate their improved performance by greatly reducing the negative transfer as the number of sources increases. TaskTrAdaBoost is a fast algorithm enabling rapid retraining over new targets. <s> BIB002 </s> A Survey on Deep Transfer Learning <s> Instances-based deep transfer learning <s> The goal of transfer learning is to improve the learning of a new target concept given knowledge of related source concept(s). We introduce the first boosting-based algorithms for transfer learning that apply to regression tasks. First, we describe two existing classification transfer algorithms, ExpBoost and TrAdaBoost, and show how they can be modified for regression. We then introduce extensions of these algorithms that improve performance significantly on controlled experiments in a wide range of test domains. <s> BIB003 </s> A Survey on Deep Transfer Learning <s> Instances-based deep transfer learning <s> Text classification is widely used in many real-world applications. To obtain satisfied classification performance, most traditional data mining methods require lots of labeled data, which can be costly in terms of both time and human efforts. In reality, there are plenty of such resources in English since it has the largest population in the Internet world, which is not true in many other languages. 
In this paper, we present a novel transfer learning approach to tackle the cross-language text classification problems. We first align the feature spaces in both domains utilizing some on-line translation service, which makes the two feature spaces under the same coordinate. Although the feature sets in both domains are the same, the distributions of the instances in both domains are different, which violates the i.i.d. assumption in most traditional machine learning methods. For this issue, we propose an iterative feature and instance weighting (Bi-Weighting) method for domain adaptation. We empirically evaluate the effectiveness and efficiency of our approach. The experimental results show that our approach outperforms some baselines including four transfer learning algorithms. <s> BIB004 </s> A Survey on Deep Transfer Learning <s> Instances-based deep transfer learning <s> Classification of sandstone microscopic images is an essential task in geology, and the classical method is either subjective or time-consuming. Computer aided automatic classification has been proved useful, but it seldom considers the situation where sandstone images are collected from separated regions. In this paper, we provide a method called Festra, which uses transfer learning to handle the problem of interregional sandstone microscopic image classification. The method contains two parts: one is feature selection, which aims to screen out features having great difference between the regions, the other is instance transfer using an enhanced TrAdaBoost, whose object is to mitigate the difference among thin section images collected from the regions. Experiments are conducted based on the sandstone images taken from four regions in Tibet to study the performance of Festra. The experimental results have proved both effectiveness and validity of Festra, which provides competitive prediction performance on all the four regions, with few target instances labeled suitable for the field use. HighlightsThe interregional sandstone microscopic image classification problem is formally defined.A transfer learning method called Festra is proposed for the problem.The method combines the feature selection and Enhanced TrAdaBoost. <s> BIB005 </s> A Survey on Deep Transfer Learning <s> Instances-based deep transfer learning <s> Transfer learning has been proven to be effective for the problems where training data from a source domain and test data from a target domain are drawn from different distributions. To reduce the distribution divergence between the source domain and the target domain, many previous studies have been focused on designing and optimizing objective functions with the Euclidean distance to measure dissimilarity between instances. However, in some real-world applications, the Euclidean distance may be inappropriate to capture the intrinsic similarity or dissimilarity between instances. To deal with this issue, in this paper, we propose a metric transfer learning framework (MTLF) to encode metric learning in transfer learning. In MTLF, instance weights are learned and exploited to bridge the distributions of different domains, while Mahalanobis distance is learned simultaneously to maximize the intra-class distances and minimize the inter-class distances for the target domain. 
Unlike previous work where instance weights and Mahalanobis distance are trained in a pipelined framework that potentially leads to error propagation across different components, MTLF attempts to learn instance weights and a Mahalanobis distance in a parallel framework to make knowledge transfer across domains more effective. Furthermore, we develop general solutions to both classification and regression problems on top of MTLF, respectively. We conduct extensive experiments on several real-world datasets on object recognition, handwriting recognition, and WiFi location to verify the effectiveness of MTLF compared with a number of state-of-the-art methods. <s> BIB006 </s> A Survey on Deep Transfer Learning <s> Instances-based deep transfer learning <s> Transfer learning and ensemble learning are the new trends for solving the problem that training data and test data have different distributions. In this paper, we design an ensemble transfer learning framework to improve the classification accuracy when the training data are insufficient. First, a weighted-resampling method for transfer learning is proposed, which is named TrResampling. In each iteration, the data with heavy weights in the source domain are resampled, and the TrAdaBoost algorithm is used to adjust the weights of the source data and target data. Second, three classic machine learning algorithms, namely, naive Bayes, decision tree, and SVM, are used as the base learners of TrResampling, where the base learner with the best performance is chosen for transfer learning. To illustrate the performance of TrResampling, the TrAdaBoost and decision tree are used for evaluation and comparison on 15 UCI data sets, TrAdaBoost, ARTL, and SVM are used for evaluation and comparison on five text data sets. According to the experimental results, our proposed TrResampling is superior to the state-of-the-art learning methods on UCI data sets and text data sets. In addition, TrResampling, bagging-based transfer learning algorithm, and MultiBoosting-based transfer learning algorithm (TrMultiBoosting) are assembled in the framework, and we compare the three ensemble transfer learning algorithms with TrAdaBoost to illustrate the framework’s effective transfer ability. <s> BIB007
|
Instances-based deep transfer learning refers to selecting partial instances from the source domain as supplements to the training set in the target domain through a specific weight adjustment strategy, i.e., by assigning appropriate weight values to the selected instances. It is based on the assumption that "although the two domains differ, partial instances in the source domain can be utilized by the target domain with appropriate weights." The sketch map of instances-based deep transfer learning is shown in Fig. 2. TrAdaBoost, proposed by BIB001, uses AdaBoost-based technology to filter out instances in the source domain that are dissimilar to the target domain and re-weights the remaining source instances so that they compose a distribution similar to the target domain. The model is then trained on the re-weighted instances from the source domain together with the original instances from the target domain. This reduces the weighted training error across the differently distributed domains while preserving the properties of AdaBoost. TaskTrAdaBoost, proposed by BIB002, is a fast algorithm that enables rapid retraining over new targets. Whereas TrAdaBoost is designed for classification problems, ExpBoost.R2 and TrAdaBoost.R2 were proposed by BIB003 to cover regression problems. Bi-weighting domain adaptation (BIW), proposed by BIB004, aligns the feature spaces of the two domains into a common coordinate system and then assigns appropriate weights to the instances from the source domain. BIB005 propose an enhanced TrAdaBoost to handle the problem of interregional sandstone microscopic image classification. BIB006 propose a metric transfer learning framework that learns instance weights and a distance metric between the two domains in a parallel framework to make knowledge transfer across domains more effective. BIB007 introduce an ensemble transfer learning framework for deep neural networks that can utilize instances from the source domain.
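The core of many of these methods is the TrAdaBoost-style weight update: source instances misclassified by the current learner are assumed to be dissimilar to the target distribution and are down-weighted, while misclassified target instances are up-weighted as in standard AdaBoost. The following Python sketch illustrates one boosting round of this idea; the function and variable names are illustrative, the decision stump is an arbitrary base learner, and the snippet is a simplified reading of BIB001 rather than the authors' reference implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost_round(Xs, ys, Xt, yt, w_s, w_t, n_rounds):
    """One TrAdaBoost-style round (simplified): fit a weighted base learner on the
    union of source and target data, then re-weight both sets of instances."""
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    w = np.concatenate([w_s, w_t])

    learner = DecisionTreeClassifier(max_depth=1)       # arbitrary weak base learner
    learner.fit(X, y, sample_weight=w / w.sum())
    pred = learner.predict(X)

    # Weighted error measured on target instances only (labels assumed in {0, 1}).
    err_t = np.abs(pred[len(Xs):] - yt)
    eps = np.clip(np.sum(w_t * err_t) / np.sum(w_t), 1e-10, 0.499)

    beta_t = eps / (1.0 - eps)                          # AdaBoost factor for target data
    beta_s = 1.0 / (1.0 + np.sqrt(2.0 * np.log(len(Xs)) / n_rounds))  # fixed source factor

    err_s = np.abs(pred[:len(Xs)] - ys)
    w_s_new = w_s * beta_s ** err_s                     # down-weight misclassified source instances
    w_t_new = w_t * beta_t ** (-err_t)                  # up-weight misclassified target instances
    return learner, w_s_new, w_t_new
```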
|
A Survey on Deep Transfer Learning <s> Mapping-based deep transfer learning <s> Given samples from distributions p and q, a two-sample test determines whether to reject the null hypothesis that p = q, based on the value of a test statistic measuring the distance between the samples. One choice of test statistic is the maximum mean discrepancy (MMD), which is a distance between embeddings of the probability distributions in a reproducing kernel Hilbert space. The kernel used in obtaining these embeddings is critical in ensuring the test has high power, and correctly distinguishes unlike distributions with high probability. A means of parameter selection for the two-sample test based on the MMD is proposed. For a given test level (an upper bound on the probability of making a Type I error), the kernel is chosen so as to maximize the test power, and minimize the probability of making a Type II error. The test statistic, test threshold, and optimization over the kernel parameters are obtained with cost linear in the sample size. These properties make the kernel selection and test procedures suited to data streams, where the observations cannot all be stored in memory. In experiments, the new kernel selection approach yields a more powerful test than earlier kernel selection heuristics. <s> BIB001 </s> A Survey on Deep Transfer Learning <s> Mapping-based deep transfer learning <s> Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task. <s> BIB002 </s> A Survey on Deep Transfer Learning <s> Mapping-based deep transfer learning <s> Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets. <s> BIB003 </s> A Survey on Deep Transfer Learning <s> Mapping-based deep transfer learning <s> This paper presents a novel unsupervised domain adaptation method for cross-domain visual recognition. We propose a unified framework that reduces the shift between domains both statistically and geometrically, referred to as Joint Geometrical and Statistical Alignment (JGSA). 
Specifically, we learn two coupled projections that project the source domain and target domain data into low-dimensional subspaces where the geometrical shift and distribution shift are reduced simultaneously. The objective function can be solved efficiently in a closed form. Extensive experiments have verified that the proposed method significantly outperforms several state-of-the-art domain adaptation methods on a synthetic dataset and three different real world cross-domain visual recognition tasks. <s> BIB004
|
Mapping-based deep transfer learning refers to mapping instances from the source domain and the target domain into a new data space. In this new data space, instances from the two domains are similar and suitable for a union deep neural network. It is based on the assumption that "although the two original domains differ, they can be made more similar in an elaborate new data space." The sketch map of mapping-based deep transfer learning is shown in Fig. 3. Transfer component analysis (TCA) and TCA-based methods BIB004 have been widely used in many applications of traditional transfer learning, so a natural idea is to extend the TCA method to deep neural networks. BIB002 extend MMD to comparing distributions in a deep neural network by introducing an adaptation layer and an additional domain confusion loss to learn a representation that is both semantically meaningful and domain invariant. The MMD distance used in this work is defined as
$$\mathrm{MMD}(X_S, X_T) = \Big\| \tfrac{1}{|X_S|}\sum_{x_s \in X_S} \phi(x_s) - \tfrac{1}{|X_T|}\sum_{x_t \in X_T} \phi(x_t) \Big\|,$$
and the loss function is defined as
$$L = L_C(X_L, y) + \lambda\, \mathrm{MMD}^2(X_S, X_T),$$
where $L_C$ is the classification loss on the labeled data $X_L$ and $\lambda$ controls the strength of the domain confusion term. [12] improved on this work by replacing the MMD distance with the multiple kernel variant MMD (MK-MMD) distance proposed by BIB001: the hidden layers related to the learning task in the convolutional neural network (CNN) are mapped into a reproducing kernel Hilbert space (RKHS), and the distance between domains is minimized by a multi-kernel optimization method. BIB003 propose the joint maximum mean discrepancy (JMMD) to measure the relationship between joint distributions. JMMD is used to generalize the transfer learning ability of deep neural networks (DNN) to adapt the data distributions of different domains, improving on the previous works. The Wasserstein distance proposed by [2] can be used as a new distance measure between domains to find better mappings.
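In practice, the squared MMD term above is estimated from mini-batches of source and target features with a kernel. The sketch below computes the standard biased estimate of MMD² with a Gaussian (RBF) kernel in plain NumPy; the bandwidth value, function names, and the way the term is weighted into the total loss are illustrative assumptions rather than the exact formulations of the cited works.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix: k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq_dists = (
        np.sum(A ** 2, axis=1)[:, None]
        + np.sum(B ** 2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(source_feats, target_feats, gamma=1.0):
    """Biased estimate of squared MMD between two batches of features."""
    k_ss = rbf_kernel(source_feats, source_feats, gamma).mean()
    k_tt = rbf_kernel(target_feats, target_feats, gamma).mean()
    k_st = rbf_kernel(source_feats, target_feats, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Illustrative usage inside a training loop:
#   total_loss = classification_loss + lambda_mmd * mmd2(phi_source, phi_target)
```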
|
A Survey on Deep Transfer Learning <s> Network-based deep transfer learning <s> Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The suc- cess of CNNs is attributed to their ability to learn rich mid- level image representations as opposed to hand-designed low-level features used in other image classification meth- ods. Learning CNNs, however, amounts to estimating mil- lions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be effi- ciently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred rep- resentation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization. <s> BIB001 </s> A Survey on Deep Transfer Learning <s> Network-based deep transfer learning <s> The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks. <s> BIB002 </s> A Survey on Deep Transfer Learning <s> Network-based deep transfer learning <s> Due to the storage and retrieval efficiency, hashing has been widely deployed to approximate nearest neighbor search for large-scale multimedia retrieval. Supervised hashing, which improves the quality of hash coding by exploiting the semantic similarity on data pairs, has received increasing attention recently. For most existing supervised hashing methods for image retrieval, an image is first represented as a vector of hand-crafted or machine-learned features, followed by another separate quantization step that generates binary codes. However, suboptimal hash coding may be produced, because the quantization error is not statistically minimized and the feature representation is not optimally compatible with the binary coding. 
In this paper, we propose a novel Deep Hashing Network (DHN) architecture for supervised hashing, in which we jointly learn good image representation tailored to hash coding and formally control the quantization error. The DHN model constitutes four key components: (1) a subnetwork with multiple convolution-pooling layers to capture image representations; (2) a fully-connected hashing layer to generate compact binary hash codes; (3) a pairwise cross-entropy loss layer for similarity-preserving learning; and (4) a pairwise quantization loss for controlling hashing quality. Extensive experiments on standard image retrieval datasets show the proposed DHN model yields substantial boosts over latest state-of-the-art hashing methods. <s> BIB003 </s> A Survey on Deep Transfer Learning <s> Network-based deep transfer learning <s> The exquisite sensitivity of the advanced LIGO detectors has enabled the detection of multiple gravitational wave signals. The sophisticated design of these detectors mitigates the effect of most types of noise. However, advanced LIGO data streams are contaminated by numerous artifacts known as glitches: non-Gaussian noise transients with complex morphologies. Given their high rate of occurrence, glitches can lead to false coincident detections, obscure and even mimic gravitational wave signals. Therefore, successfully characterizing and removing glitches from advanced LIGO data is of utmost importance. Here, we present the first application of Deep Transfer Learning for glitch classification, showing that knowledge from deep learning algorithms trained for real-world object recognition can be transferred for classifying glitches in time-series based on their spectrogram images. Using the Gravity Spy dataset, containing hand-labeled, multi-duration spectrograms obtained from real LIGO data, we demonstrate that this method enables optimal use of very deep convolutional neural networks for classification given small training datasets, significantly reduces the time for training the networks, and achieves state-of-the-art accuracy above 98.8%, with perfect precision-recall on 8 out of 22 classes. Furthermore, new types of glitches can be classified accurately given few labeled examples with this technique. Once trained via transfer learning, we show that the convolutional neural networks can be truncated and used as excellent feature extractors for unsupervised clustering methods to identify new classes based on their morphology, without any labeled examples. Therefore, this provides a new framework for dynamic glitch classification for gravitational wave detectors, which are expected to encounter new types of noise as they undergo gradual improvements to attain design sensitivity. <s> BIB004 </s> A Survey on Deep Transfer Learning <s> Network-based deep transfer learning <s> The capabilities of (I) learning transferable knowledge across domains; and (II) fine-tuning the pre-learned base knowledge towards tasks with considerably smaller data scale are extremely important. Many of the existing transfer learning techniques are supervised approaches, among which deep learning has the demonstrated power of learning domain transferrable knowledge with large scale network trained on massive amounts of labeled data. However, in many biomedical tasks, both the data and the corresponding label can be very limited, where the unsupervised transfer learning capability is urgently needed. 
In this paper, we proposed a novel multi-scale convolutional sparse coding (MSCSC) method, that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of learned patterns; and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. Extensive experimental evaluation of MSCSC demonstrates the effectiveness of the proposed MSCSC in both regular and transfer learning tasks in various biomedical domains. <s> BIB005
|
Network-based deep transfer learning refers to reusing part of the network pre-trained in the source domain, including its network structure and connection parameters, as part of the deep neural network used in the target domain. It is based on the assumption that "a neural network is similar to the processing mechanism of the human brain, following an iterative and continuous abstraction process; the front layers of the network can be treated as a feature extractor, and the extracted features are versatile." The sketch map of network-based deep transfer learning is shown in Fig. 4. [9] divide the network into two parts: the former part is a language-independent feature transform and the last layer is a language-relative classifier. The language-independent feature transform can be transferred between multiple languages. BIB001 reuse front layers trained by a CNN on the ImageNet dataset to compute intermediate image representations for images in other datasets; the CNN is trained to learn image representations that can be efficiently transferred to other visual recognition tasks with a limited amount of training data. BIB002 propose an approach that jointly learns adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain, explicitly learning a residual function with reference to the target classifier by plugging several layers into the deep network. BIB003 learn domain adaptation and deep hash features simultaneously in a DNN. BIB005 propose a novel multi-scale convolutional sparse coding method, which automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of learned patterns, and provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. BIB004 apply deep transfer learning to transfer knowledge from real-world object recognition tasks to a glitch classifier for the detection of multiple gravitational wave signals, demonstrating that a DNN can serve as an excellent feature extractor for unsupervised clustering methods to identify new classes based on their morphology, without any labeled examples. Another very noteworthy result is the work that points out the relationship between network structure and transferability: it demonstrates that some modules may not influence in-domain accuracy yet still influence transferability, identifies which features are transferable in deep networks and which types of networks are more suitable for transfer, and concludes that LeNet, AlexNet, VGG, Inception, and ResNet are good choices for network-based deep transfer learning.
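A minimal sketch of this reuse pattern, assuming PyTorch/torchvision and an ImageNet-pretrained ResNet-18 (the class count and the decision to freeze all transferred layers are placeholders): the convolutional front layers are kept as a generic feature extractor, and only the final task-specific layer is replaced and retrained on the target domain.

```python
import torch.nn as nn
from torchvision import models

num_target_classes = 10  # placeholder for the target-domain label set

# Reuse the front layers pre-trained on the source domain (ImageNet).
model = models.resnet18(pretrained=True)

# Optionally freeze the transferred layers so they act as a fixed feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace only the task-specific last layer and train it on target-domain data.
model.fc = nn.Linear(model.fc.in_features, num_target_classes)
# During training, only model.fc receives gradient updates; fine-tuning would
# instead leave some or all front layers unfrozen, typically with a small learning rate.
```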
|
A Survey on Deep Transfer Learning <s> Adversarial-based deep transfer learning <s> We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. <s> BIB001 </s> A Survey on Deep Transfer Learning <s> Adversarial-based deep transfer learning <s> Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). ::: As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. ::: Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets. <s> BIB002 </s> A Survey on Deep Transfer Learning <s> Adversarial-based deep transfer learning <s> Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings. 
<s> BIB003 </s> A Survey on Deep Transfer Learning <s> Adversarial-based deep transfer learning <s> Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task. <s> BIB004 </s> A Survey on Deep Transfer Learning <s> Adversarial-based deep transfer learning <s> Adversarial learning has been successfully embedded into deep networks to learn transferable features for domain adaptation, which reduce distribution discrepancy between the source and target domains and improve generalization performance. Prior domain adversarial adaptation methods could not align complex multimode distributions since the discriminative structures and inter-layer interactions across multiple domain-specific layers have not been exploited for distribution alignment. In this paper, we present randomized multilinear adversarial networks (RMAN), which exploit multiple feature layers and the classifier layer based on a randomized multilinear adversary to enable both deep and discriminative adversarial adaptation. The learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments demonstrate that our models exceed the state-of-the-art results on standard domain adaptation datasets. <s> BIB005 </s> A Survey on Deep Transfer Learning <s> Adversarial-based deep transfer learning <s> We propose a framework that learns a representation transferable across different domains and tasks in a label efficient manner. Our approach battles domain shift with a domain adversarial loss, and generalizes the embedding to novel task using a metric learning-based approach. Our model is simultaneously optimized on labeled source data and unlabeled or sparsely labeled data in the target domain. Our method shows compelling results on novel classes within a new domain even when only a few labeled examples per class are available, outperforming the prevalent fine-tuning approach. In addition, we demonstrate the effectiveness of our framework on the transfer learning task from image object recognition to video action recognition. <s> BIB006
|
Adversarial-based deep transfer learning refers to introducing adversarial technology, inspired by generative adversarial nets (GAN) BIB001, to find transferable representations that are applicable to both the source domain and the target domain. It is based on the assumption that "for effective transfer, a good representation should be discriminative for the main learning task and indiscriminate between the source domain and the target domain." The sketch map of adversarial-based deep transfer learning is shown in Fig. 5. During training on the large-scale dataset in the source domain, the front layers of the network are regarded as a feature extractor; they extract features from the two domains and send them to the adversarial layer, which tries to discriminate the origin of the features. If the adversarial network achieves worse performance, the two types of features differ little and transferability is better, and vice versa. In the subsequent training process, the performance of the adversarial layer is taken into account to force the transfer network to discover general features with more transferability. Adversarial-based deep transfer learning has developed rapidly in recent years due to its good performance and strong practicality. One line of work introduces adversarial technology into transfer learning for domain adaptation by adding a domain adaptation regularization term to the loss function. BIB002 propose an adversarial training method suitable for almost any feed-forward neural model by augmenting it with a few standard layers and a simple new gradient reversal layer. BIB003 propose an approach that transfers knowledge across domains and across tasks simultaneously for sparsely labeled target-domain data. A special joint loss function is used in this work to force the CNN to optimize the distance between domains, defined as $L_D = L_c + \lambda L_{adver}$, where $L_c$ is the classification loss and $L_{adver}$ is the domain adversarial loss. Because the two losses stand in direct opposition to one another, an iterative optimization algorithm is introduced to update one loss while the other is held fixed. BIB004 propose a new GAN loss combined with discriminative modeling to form a new domain adaptation method. BIB005 propose randomized multi-linear adversarial networks that exploit multiple feature layers and the classifier layer based on a randomized multi-linear adversary, enabling both deep and discriminative adversarial adaptation. BIB006 utilize a domain adversarial loss and generalize the embedding to novel tasks using a metric learning-based approach to find more tractable features in deep transfer learning.
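The gradient reversal layer of BIB002 can be written in a few lines: it is the identity in the forward pass and multiplies the gradient by −λ in the backward pass, so the feature extractor is driven to confuse the domain classifier while the domain classifier itself is trained normally. A minimal PyTorch sketch is given below; the helper names and the wiring into a full model are illustrative assumptions.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient pushes the feature extractor to produce
        # features the domain classifier cannot tell apart.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Illustrative wiring:
#   features      = encoder(x)                          # shared front layers
#   task_logits   = task_head(features)                 # main learning task
#   domain_logits = domain_head(grad_reverse(features)) # adversarial domain classifier
#   total_loss    = task_loss + domain_loss
```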
|
Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Ontology vs. a Knowledge Base <s> Reusable ontologies are becoming increasingly important for tasks such as information integration, knowledge-level interoperation and knowledge-base development. We have developed a set of tools and services to support the process of achieving consensus on commonly shared ontologies by geographically distributed groups. These tools make use of the World Wide Web to enable wide access and provide users with the ability to publish, browse, create and edit ontologies stored on anontology server. Users can quickly assemble a new ontology from a library of modules. We discuss how our system was constructed, how it exploits existing protocols and browsing tools, and our experience supporting hundreds of users. We describe applications using our tools to achieve consensus on ontologies and to integrate information.The Ontolingua Server may be accessed through the URLhttp://ontolingua.stanford.edu <s> BIB001 </s> Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Ontology vs. a Knowledge Base <s> 1 Why develop an ontology? In recent years the development of ontologies—explicit formal specifications of the terms in the domain and relations among them (Gruber 1993)—has been moving from the realm of ArtificialIntelligence laboratories to the desktops of domain experts. Ontologies have become common on the World-Wide Web. The ontologies on the Web range from large taxonomies categorizing Web sites (such as on Yahoo!) to categorizations of products for sale and their features (such as on Amazon.com). The WWW Consortium (W3C) is developing the Resource Description Framework (Brickley and Guha 1999), a language for encoding knowledge on Web pages to make it understandable to electronic agents searching for information. The Defense Advanced Research Projects Agency (DARPA), in conjunction with the W3C, is developing DARPA Agent Markup Language (DAML) by extending RDF with more expressive constructs aimed at facilitating agent interaction on the Web (Hendler and McGuinness 2000). Many disciplines now develop standardized ontologies that domain experts can use to share and annotate information in their fields. Medicine, for example, has produced large, standardized, structured vocabularies such as SNOMED (Price and Spackman 2000) and the semantic network of the Unified Medical Language System (Humphreys and Lindberg 1993). Broad general-purpose ontologies are emerging as well. For example, the United Nations Development Program and Dun & Bradstreet combined their efforts to develop the UNSPSC ontology which provides terminology for products and services (www.unspsc.org). An ontology defines a common vocabulary for researchers who need to share information in a domain. It includes machine-interpretable definitions of basic concepts in the domain and relations among them. Why would someone want to develop an ontology? Some of the reasons are: <s> BIB002
|
Is there a difference between an ontology and a knowledge base, and if so, what is it? The differences are summarized as follows. Contents and scope: according to BIB002, an ontology consists of classes, properties, and restrictions; an ontology together with a set of individual instances of its classes constitutes a knowledge base. In reality, however, there is a fine line where the ontology ends and the knowledge base begins: deciding whether a particular concept is a class or an individual instance depends on the potential applications of the ontology, and the lowest level of granularity in the representation is considered an individual instance. Features of the language used to codify the knowledge: ontologies should be written in an expressive, declarative, portable, domain-independent, semantically well-defined, machine-readable language, which should be independent of any particular choice of target machine-readable language of the application, such as LOOM [31], CycL, or Ontolingua BIB001. Goal of the knowledge codification: ontologies are designed for knowledge sharing and reuse, whereas knowledge bases are not; as a result, ontology definitions should be conceptualized with enough abstraction and generality, which guarantees that they are independent of their final uses.
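To make the distinction concrete, the sketch below uses Python's rdflib to separate the ontology level (class and property definitions) from the knowledge-base level (individual instances of those classes); the namespace and entity names are invented for illustration only.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/design#")  # hypothetical design-domain namespace
g = Graph()

# Ontology level: the shared vocabulary of classes, properties, and their structure.
g.add((EX.Component, RDF.type, RDFS.Class))
g.add((EX.Fastener, RDFS.subClassOf, EX.Component))
g.add((EX.hasMaterial, RDF.type, RDF.Property))
g.add((EX.hasMaterial, RDFS.domain, EX.Component))

# Knowledge-base level: the same vocabulary plus individual instances of its classes.
g.add((EX.bolt_42, RDF.type, EX.Fastener))
g.add((EX.bolt_42, EX.hasMaterial, Literal("stainless steel")))
```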
|
Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Comparison of Ontology Languages <s> We propose a novel formalism, called Frame Logic (abbr., F-logic), that accounts in a clean and declarative fashion for most of the structural aspects of object-oriented and frame-based languages. These features include object identity, complex objects, inheritance, polymorphic types, query methods, encapsulation, and others. In a sense, F-logic stands in the same relationship to the object-oriented paradigm as classical predicate calculus stands to relational programming. F-logic has a model-theoretic semantics and a sound and complete resolution-based proof theory. A small number of fundamental concepts that come from object-oriented programming have direct representation in F-logic; other, secondary aspects of this paradigm are easily modeled as well. The paper also discusses semantic issues pertaining to programming with a deductive object-oriented language based on a subset of F-logic. <s> BIB001 </s> Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Comparison of Ontology Languages <s> In this work I illustrate an approach to the development of a library of problem solving components for knowledge modelling. This approach is based on an epistemological modelling framework, the Task/Method/Domain/Application (TMDA) model, and on a principled methodology, which provide an integrated view of both library construction and application development by reuse. ::: ::: The starting point of the proposed approach is given by a task ontology. This formalizes a conceptual viewpoint over a class of problems, thus providing a task-specific framework, which can be used to drive the construction of a task model through a process of model-based knowledge acquisition. The definitions in the task ontology provide the initial elements of a task-specific library of problem solving components. ::: ::: In order to move from problem specification to problem solving, a generic, i.e. taskindependent, model of problem solving as search is introduced, and instantiated in terms of the concepts in the relevant task ontology, say T. The result is a task-specific, but method-independent, problem solving model. This generic problem solving model provides the foundation from which alternative problem solving methods for a class of tasks can be defined. Specifically, the generic problem solving model provides i) a highly generic method ontology, say M; ii) a set of generic building blocks (generic tasks), which can be used to construct task-specific problem solving methods; and iii) an initial problem solving method, which can be characterized as the most generic problem solving method, which subscribes to M and is applicable to T. More specific problem solving methods can then be (re-)constructed from the generic problem solving model through a process of method/ontology specialization and method-to-task application. ::: ::: The resulting library of reusable components enjoys a clear theoretical basis and provides robust support for reuse. In the thesis I illustrate the approach in the area of parametric design. 
<s> BIB002 </s> Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Comparison of Ontology Languages <s> The interchange of ontologies across the World Wide Web (WWW) and the cooperation among heterogeneous agents placed on it is the main reason for the development of a new set of ontology specification languages, based on new web standards such as XML or RDF. These languages (SHOE, XOL, RDF, OIL, etc) aim to represent the knowledge contained in an ontology in a simple and human-readable way, as well as allow for the interchange of ontologies across the web. In this paper, we establish a common framework to compare the expressiveness and reasoning capabilities of "traditional" ontology languages (Ontolingua, OKBC, OCML, FLogic, LOOM) and "web-based" ontology languages, and conclude with the results of applying this framework to the selected languages. <s> BIB003 </s> Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Comparison of Ontology Languages <s> Several languages have been proposed as candidates for semantic markup. We needed to adopt a language for our current research on developing user-oriented tools operating over the Semantic Web. This paper presents the results of our analysis of three candidates that we considered: XML, RDF, and DAML+OIL along with their associated schemas and ontology specifications. The analysis focuses on the expressiveness of each language, and is presented along several dimensions and summarized in a comparison table. A surprising result of our analysis is the decision to adopt XML(Schema) for practical reasons, since it is able to accommodate a relatively expressive set of constructs and is widely known and commercially supported. We also discuss how we plan to complement XML(S) with a small set of conventions, so that we will have an easier transition to other markup languages in the future. <s> BIB004
|
For the comparison of these ontology languages, Gil and Ratnakar BIB004 proposed a set of dimensions, including context, subclasses and properties, primitive data types, instances, property constraints, property values, negation, conjunction and disjunction, inheritance, definitions, and expressiveness. Based on these dimensions, XML, RDF, and DAML+OIL, along with their associated schemas, were reviewed. Corcho and Gomez-Perez BIB003 established a common framework to compare the expressiveness and reasoning capabilities of "traditional" ontology languages (Ontolingua, OKBC, OCML BIB002, FLogic BIB001, LOOM) and "web-based" ontology languages (SHOE, XOL, RDF, OIL, etc.). The framework distinguishes between knowledge representation and the inference mechanism. Domain knowledge describes the static information and knowledge objects in an application domain. According to Gruber, domain knowledge in ontologies can be specified using five kinds of components: concepts, relations, functions, axioms, and instances; concepts in the ontology are usually organized in taxonomies. The inference mechanism describes how the static structures represented in the domain knowledge can be used to carry out a reasoning process. There is a strong relationship between the two dimensions, as the structures used for representing knowledge are the basis for the reasoning process. The reasoning criteria include the availability of an inference engine, automatic classification, exception handling, monotonic, non-monotonic, simple, and multiple inheritance, executable procedures, constraint checking, and forward and backward chaining. When developing domain ontologies for an application, it is necessary to study not only the knowledge representation and reasoning needs of the application, but also the knowledge representation and reasoning capabilities provided by the candidate languages. The above framework helps developers make well-informed decisions on the selection of the ontology language to use.
|
Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Design Criteria <s> Recent work in Artificial Intelligence is exploring the use of formal ontologies as a way of specifying content-specific agreements for the sharing and reuse of knowledge among software entities. We take an engineering perspective on the development of such ontologies. Formal ontologies are viewed as designed artifacts, formulated for specific purposes and evaluated against objective design criteria. We describe the role of ontologies in supporting knowledge sharing activities, and then present a set of criteria to guide the development of ontologies for these purposes. We show how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data. Selected design decisions are discussed, and alternative representation choices and evaluated against the design criteria. <s> BIB001 </s> Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Design Criteria <s> This phenomenon is quite common: the examples above have been taken from existing toplevel ontologies used in practice, and they are responsible, in our opinion, of many situations of tangleness, confusion, and lack of semantic rigour. We advocate in the present paper a design principle aimed to clarify such situations and produce more reusable and well-founded ontologies. It can be stated as follows: <s> BIB002
|
Based on the work of Gruber BIB001 and Borgo BIB002, Gomez-Perez
|
Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Ontology Building Tools and Environments <s> Reusable ontologies are becoming increasingly important for tasks such as information integration, knowledge-level interoperation and knowledge-base development. We have developed a set of tools and services to support the process of achieving consensus on commonly shared ontologies by geographically distributed groups. These tools make use of the World Wide Web to enable wide access and provide users with the ability to publish, browse, create and edit ontologies stored on anontology server. Users can quickly assemble a new ontology from a library of modules. We discuss how our system was constructed, how it exploits existing protocols and browsing tools, and our experience supporting hundreds of users. We describe applications using our tools to achieve consensus on ontologies and to integrate information.The Ontolingua Server may be accessed through the URLhttp://ontolingua.stanford.edu <s> BIB001 </s> Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Ontology Building Tools and Environments <s> This paper presents WebODE as a workbench for ontological engineering that not only allows the collaborative edition of ontologies at the knowledge level, but also provides a scalable architecture for the development of other ontology development tools and ontology-based applications. First, we will describe the knowledge model of WebODE, which has been mainly extracted and improved from the reference model of METHONTOLOGY's intermediate representations. Later, we will present its architecture, together with the main functionalities of the WebODE ontology editor, such as its import/export service, translation services, ontology browser, inference engine and axiom generator, and some services that have been integrated in the workbench: WebPicker, OntoMerge and the OntoCatalogue. <s> BIB002
|
Numerous commercial and open-source software tools are available for building and deploying ontologies. They can be used for building a new ontology from scratch or for reusing existing ontologies. Apart from common editing and browsing functionality, these tools usually include ontology documentation, ontology export and import in different formats, graphical views of the ontologies built, ontology libraries, and attached inference engines. Increasingly, these tools support the emerging standard ontology languages. Many also offer platforms to interchange information among mutually heterogeneous resources, including legacy databases, semi-structured repositories, industry-standard directories and vocabularies, and streams of unstructured content such as text and media. Denny's survey covered 94 tools with ontology editing capabilities that can be used to build ontology schemas (terminologies) and/or instance data. These editors may be available as standalone, plug-in, or online software, and are not necessarily at production level. Well-known ontology building tools include Apollo [47], LinkFactory, OilEd, OntoEdit, Ontolingua Server BIB001, OntoSaurus, Protege [37], WebODE BIB002, and WebOnto. A comparison study of ontology building tools can be found in BIB002. An evaluation framework and evaluation results for various ontology tools can be found in .
|
Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Ontology in Collaborative Design <s> The diversity of integral attachment snap-fit feature types (e.g. cantilever hooks, bayonet-fingers, compressive hooks, annular snaps, and others), and their possible combinations, sizes and locations and orientations on parts to enable assembly has made it appear that design possibilities may be unbounded. Attempts at understanding, no less optimization, seemed intractable. This paper presents a hierarchical classification scheme that brings order to the design space, and uses that classification scheme to define boundaries and size of the design space for achieving attachment at a level above feature detailing. Classification is based on the essential geometry of parts being assembled. The result is surprising order and simplicity, and the ability to reduce viable options for any assembly situation to a number (e.g. 8–10) that will permit true optimization. <s> BIB001 </s> Ontology as a Mechanism for Application Integration and Knowledge Sharing in Collaborative Design: A Review <s> Ontology in Collaborative Design <s> Part One: Introduction Chapter 1: General Introduction. 1.1 Motivation. 1.2 Book Organization. 1.3 How To Use This Book. Chapter 2: Collaborative Design and Manufacturing. 2.1 Introduction. 2.2 Engineering Design. 2.3 Advanced Manufacturing Systems. 2.4 Next Generation Collaborative Design and Manufacturing Systems. Chapter 3: DAI and Agents. 3.1 Classic AI and DAI. 3.2 Research Themes in DAI. 3.3 Models of DAI Systems. 3.4 Objects vs. Agents. 3.5 Different Types of Agents. 3.6. Why Agents for Collaborative Design and Manufacturing. Part Two: Important Issues Chapter 4: Knowledge Representation in Agent-Based Concurrent Design and Manufacturing Systems. 4.1 Introduction 4.2 What needs to be Represented. 4.3 How to Represent Knowledge in Agent-Based Systems. 4.4 Research Literature and Further References. Chapter 5: Learning in Agent-Based Concurrent Design and Manufacturing Systems. 5.1 Introdution. 5.2 Why to Learn. 5.3 Single-Agent Learning or Multi-Agent Learning. 5.4 When to Learn. 5.5 Where to Learn. 5.6 What is to be Learned. 5.7 How to Learn. 5.8 Examples. 5.9 Research Literature and Additional References. Chapter 6: Agent Structures. 6.1 Introduction. 6.2 Desirable characteristics of an agent. 6.3 Essential Modules (Components) for agents. 6.4 Different Approaches. 6.5 Comparison of Different Approaches. 6.6 Research Literature and further References. Chapter 7: Multi-Agent System Architectures. 7.1 Introduction. 7.2 Organization and System Architectures. 7.3 Different Approaches. 7.4 Select a suitable system architecture for a specific application. 7.5 Research Literature and Additional Readings. Chapter 8: Communication, Cooperation and Coordination. 8.1 Introduction. 8.2 Communication. 8.3 Coordination. 8.4 Cooperation. 8.5 Coordination, Cooperation and Communication. 8.6 Research Literature and Further References. Chapter 9: Collaboration, Task Decompsition and Allocation. 9.1 Introduction. 9.2 Different Approaches for Task Decomposition and Allocation. 9.3 Coordinated Task Allocation by Mediation. 9.4 Distributed Task Allocation. 9.5 Task Decomposition in MetaMorph: an Example. 9.6 Research Literature and Additional References. Chapter 10: Negotiation and Conflict Resolution. 10.1 Introduction. 10.2 Classification of Negotiation Categories. 103. Negotiation Protocols. 10.4 Negotiation Strategies. 
10.5 Negotiation for Conflict Resolution. 10.6 Examples in Concurrent Design and Manufacturing. 10.7 Research Literature and Additional Information. Chapter 11: Ontology Problems. 11.1 Introduction. 11.2 What is Ontology? 11.3 Ontology and Knowledge Sharing. 11.4 Ontology Problems in Concurrent Design and Manufacturing. 11.5 Related concepts, Theories and Methods. 11.6 Ontolingua: A System for Managing Portable Ontologies. 11.7 Research Literature and Additional References. Chapter 12: Other Important Issues. 12.1 Introduction. 12.2 Agent Encapsulation. 12.3 Human machine integration (human participation). 12.4 System dynamics. 12.5. Design and manufacturability assessments. 12.6 Integration of manufacturing Planning, Scheduling and Execution. 12.7 Distributed Dynamic Scheduling. 12.8 Enterprise Integration and Supply Chain Management. 12.9 Legacy problem. 12.10 External interfaces. Part Three: Agent-Based Systems for Engineering Design & Manufacturing Chapter 13: Agent-Based Engineering Design Systems. 13.1 Introduction. 13.2 PACT (PACE) 13.3 SHARE (DSC) 13.4 First-Link, Next-Link and Process Link. 13.5 DIDE. 13.6 SiFAs. 13.7 RAPPID. 13.8 Other projects. 13.9 Summary. Chapter 14: Agent-Based manufacturing Planning, Scheduling and Control. 14.1 Introduction. 14.2 MetaMorph. 14.3 AARIA. 14.4 ADDYMS. 14.5 Other Projects. 14.6 Summary. Chapter 15: Enterprise Integration and Supply Chain Management. 15.1 Introduction. 15.2 ISCM. 15.3 CIIMPLEX. 15.4 MetaMorph II. 15.5 AIMS. 15.6 Other Projects. 15.7 Summary. Part Five: Developing Agent-Based Design and Manufacturing Systems Chapter 16: Methodology, Standards, Tools, Languages, and Frameworks. 16.1 Introduction. 16.2 Tools and Framework. 16.3 Methodology, Languages, and Standards. 16.4 Further references. Chapter 17: Building Agent-Based Design and Manufacturing Systems. 17.1 Introduction. 17.2 Selecting or developing an agent architecture. 17.3 Selecting an approach for agent organization. 17.4 Selecting or developing protocols for inter-agent communication. 17.5 Developing mechanisms for cooperation, coordination and negotiation. 17.6 Selecting platforms, tools and languages. 17.7 Agent-Oriented Design and Analysis. 17.8 Simulation and Implementation. 17.9 Testing, Debugging and Evaluation. Chapter 2: Collaborative Design and Manufacturing, Chapter 3: DAI and Agents. Part Two: Important Issues Chapter 4: Knowledge Representation in Agent-Based Concurrent Design and Manufacturing Systems. Chapter 5: Learning in Agent-Based Concurrent Design and Manufacturing Systems. Chapter 6: Agent Structures. Chapter 7: Multi-Agent System Architectures. Chapter 8: Communication, Cooperation and Coordination. Chapter 9: Collaboration, Task Decomposition and Allocation. Chapter 10: Negotiation and Conflict Resolution. Chapter 11: Ontology Problems. Chapter 12: Other Important Issues. Part Three: Agent-Based Systems for Engineering Design and Manufacturing Chapter 13: Agent-Based Engineering Design Systems. Chapter 14: Agent-Based manufacturing Planning, Scheduling and Control. Chapter 15: Enterprise Integration and Supply Chain Management. Part Four: Developing Agent-Based Design and Manufacturing Systems Chapter 16: Methodlogy, Standards, Tools, Languages, and Frameworks <s> BIB002
|
In the area of collaborative design, ontologies are usually used: (1) to improve communication among humans; (2) to improve data exchange among programs; and (3) to facilitate knowledge management, particularly knowledge sharing. Improving communication among humans involves standardizing the vocabulary and integrating new concepts; the goal is to increase mutual understanding among people from different departments, e.g., between the design department and the production department. A typical example was reported by Genc et al. BIB001, who provided a hierarchical classification scheme in the domain of snap-fit assemblies. Improving electronic data exchange requires compatible representation models. The ontology can be used to specify the concepts and vocabulary needed for developing exchange software (using frameworks like STEP/EXPRESS), or for integrating legacy systems when implementing concurrent engineering. When software agents are used, ontology is critical for sharing knowledge among the agents BIB002. Ontologies can also play an important role in facilitating knowledge management and sharing in automated collaborative design environments. Ontologies can improve a design process by building knowledge bases for reuse or by guiding the design process. Ontologies have been a very active research topic in the area of collaborative design; there were at least 6 papers on this topic presented at CSCWD 2005 [39] and at least 3 papers at CSCWD 2004 .
|
Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Introduction <s> Detailed modeling of complex reaction systems is becoming increasingly important in the development, analysis, design, and control of chemical reaction processes. For industrial processes, complete incorporation of the chemistry into process models facilitates the minimization of byproduct and pollutant formation, increased efficiency, and improved product quality. Processes that involve complex reaction networks include a variety of noncatalytic and homogeneous or heterogeneous catalytic processes (such as fluid catalytic cracking, combustion, chemical vapor deposition, and alkylation). For some systems, large sets of relevant reactions have been identified for use in simulations.1-3 For others, the availability of advanced computing environments has enabled the automated generation of reaction networks and their models, based on computational descriptions of the reaction types occurring in the system.4-6 The use of such complex models is hindered by two obstacles. First, because of their sheer size and the presence of multiple time scales, these models are difficult to solve. Second, the models contain large numbers of uncertain (and sometimes unknown) kinetic parameters; regression to determine the parameters of complex nonlinear models is both difficult and unreliable, and the sensitivity of simulations to parameter uncertainties cannot be easily ascertained. Furthermore, for the purpose of gaining insights into the reaction system’s behavior, it is usually preferable to obtain simpler models that bring out the key features and components of the system. For these reasons, model simplification and order reduction are becoming central problems in the study of complex reaction systems. The simulation, monitoring, and control of a complex chemical process benefit from the derivation of accurate and reliable reduced models tailored to particular process modeling tasks. Model simplification is directly linked to identification of key reactions and sets of species that give valuable insights into the behavior of the network and how it may be influenced. Advanced control schemes such as model predictive control7 or multiple model adaptive control8 must be based on selecting appropriate reduced models and tracking key sets of species. Ideally, a model order reduction algorithm should have broad applicability, enable analysis at several levels of detail, and provide an assessment of the modeling error. <s> BIB001 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Introduction <s> Preface Part I. Introduction: 1. Introduction 2. Motivating examples Part II. Preliminaries: 3. Tools from matrix theory 4. Linear dynamical systems, Part 1 5. Linear dynamical systems, Part 2 6. Sylvester and Lyapunov equations Part III. SVD-based Approximation Methods: 7. Balancing and balanced approximations 8. Hankel-norm approximation 9. Special topics in SVD-based approximation methods Part IV. Krylov-based Approximation Methods: 10. Eigenvalue computations 11. Model reduction using Krylov methods Part V. SVD-Krylov Methods and Case Studies: 12. SVD-Krylov methods 13. Case studies 14. Epilogue 15. Problems Bibliography Index. 
<s> BIB002 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Introduction <s> BackgroundQuantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification.DescriptionBioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database.ConclusionsBioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. <s> BIB003
|
Model complexity can refer to a number of specific properties of mathematical models occurring in a range of scientific contexts. It can, for example, refer to models that are overparameterised relative to the volume of collectable data, models that are too large to be understood intuitively, or models of such a magnitude that they are computationally intractable. In each case, complexity presents a barrier to standard tools of model analysis. Methods of model reduction offer one possible approach for dealing with this perennial issue by seeking to approximate the behaviour of a model with a simplified dynamical system that retains some degree of the predictive power of the original. Model reduction has a long history in the mathematical modelling of biological systems; perhaps the most famous example is Briggs and Haldane's application of the quasi-steady-state approximation (QSSA) for the simplification of a model of the enzyme-substrate reaction. They demonstrated that a simplifying assumption could take the unsolvable, nonlinear, four-dimensional system of coupled ordinary differential equations (ODEs) that constituted the model to a single ODE whilst still providing an accurate description of the dynamics for a wide range of possible parameterisations. The mathematical modelling of biological processes often leads to highly complex systems involving many state-variables and reactions. The relatively recent advent of systems biology, which seeks to model such systems in detail and hence yield a high degree of mechanistic exploratory power, has greatly increased this complexity, such that it is now common to encounter models containing hundreds or even thousands of variables BIB003. Even given this rapid increase in complexity, however, concurrent advances in computing power and simulation algorithms may appear to make model reduction a less essential process than it was in the past: it is now possible to accurately and efficiently compute numerical simulations of even highly complex systems where previously some degree of reduction was necessary to understand even the basic dynamical behaviour of many models. Ease of simulation, however, does not necessarily lead to depth of understanding; for a wide range of analyses, model complexity can present an insurmountable barrier. Methods of model reduction therefore remain a vital topic and a widely applicable tool in the analysis and modelling of biochemical systems. The methods discussed throughout this paper have been employed for a wide range of purposes in the literature, including to obtain more intuitively understood models, to reduce the number of parameters so as to obtain an identifiable model, to lessen the computational burden of parameter fitting, and to enable the embedding of such systems within agent-based modelling approaches. Here, for example, a researcher may be interested in concurrently modelling a large number of cells comprising a tissue; by employing a reduced description of the individual cells, such a problem may be made more computationally feasible. Despite the utility of model reduction methods, familiarity is often limited to a small range of the methods that can be found in the literature. This review therefore seeks to give an overview of the use and application of model reduction methods in this context. Such methods are commonly applied within the fields of engineering and control theory, and a number of reviews of methods within these contexts exist BIB001 BIB002.
Additionally, timescale exploitation methods for the reduction of computational biology models have been reviewed elsewhere, but that work mostly focuses on the fundamental basis of such methods and the potential applicability of model tropicalisation in this context. The aim of this review is therefore to provide a more contextualised and up-to-date overview of such methods, as well as a survey of the current state of the literature, so as to better assess the possible utility of particular model reduction methodologies for application in the field of systems biology. The broader topic of general model reduction methods is an extensive area of study; to review the entire field would be a challenging undertaking and beyond the scope of this paper. As a result, this review limits itself in the following respects: firstly, the survey of the literature is limited to those methods that have been developed, adapted, or applied in the context of biochemical reaction network models; secondly, it is limited to methods addressing models that are comprised of systems of ODEs; thirdly, it focuses particularly on those methods that have seen published application within the previous 15 years. Ideally, such methods will be algorithmic, automatable, and produce highly accurate, significantly reduced approximations. By reviewing such a range of literature we are able to separate methods into categories and provide insight into their suitability for addressing certain classes of problems. In the discussion section we provide an overview of the methods and their general applicability, collating this information in Table 1 to summarise the suitability of the different methods in the context of particular model properties. It is hoped that this can therefore provide guidance to the most appropriate methods currently available for reducing models.
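As a minimal numerical sketch of the Briggs-Haldane reduction mentioned above (in Python with SciPy; the rate constants and initial conditions are illustrative values invented here, not taken from any particular study), the following compares the full four-dimensional mass-action enzyme-substrate model with its one-dimensional Michaelis-Menten (QSSA) approximation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants and initial conditions (E0 << S0 + Km, so the QSSA applies).
k1, km1, k2 = 10.0, 1.0, 1.0
E0, S0 = 0.05, 1.0

def full(t, y):
    # Full mass-action model: E + S <-> C -> E + P, states (E, S, C, P).
    E, S, C, P = y
    bind, unbind, cat = k1 * E * S, km1 * C, k2 * C
    return [-bind + unbind + cat, -bind + unbind, bind - unbind - cat, cat]

def reduced(t, y):
    # Single Michaelis-Menten ODE for the substrate, obtained via the QSSA.
    S = y[0]
    Km, Vmax = (km1 + k2) / k1, k2 * E0
    return [-Vmax * S / (Km + S)]

t = np.linspace(0.0, 50.0, 501)
S_full = solve_ivp(full, (0.0, 50.0), [E0, S0, 0.0, 0.0], t_eval=t, method="LSODA").y[1]
S_red = solve_ivp(reduced, (0.0, 50.0), [S0], t_eval=t).y[0]
print(np.max(np.abs(S_full - S_red)))  # substrate discrepancy stays small for these values
```

With E0 much smaller than S0 + Km, the two substrate trajectories agree closely, illustrating how a four-dimensional model can be replaced by a one-dimensional one with little loss of predictive power.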
|
Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Aims of Model Reduction <s> BackgroundSystems biology models tend to become large since biological systems often consist of complex networks of interacting components, and since the models usually are developed to reflect various mechanistic assumptions of those networks. Nevertheless, not all aspects of the model are equally interesting in a given setting, and normally there are parts that can be reduced without affecting the relevant model performance. There are many methods for model reduction, but few or none of them allow for a restoration of the details of the original model after the simplified model has been simulated.ResultsWe present a reduction method that allows for such a back-translation from the reduced to the original model. The method is based on lumping of states, and includes a general and formal algorithm for both determining appropriate lumps, and for calculating the analytical back-translation formulas. The lumping makes use of efficient methods from graph-theory and ϵ-decomposition and is derived and exemplified on two published models for fluorescence emission in photosynthesis. The bigger of these models is reduced from 26 to 6 states, with a negligible deviation from the reduced model simulations, both when comparing simulations in the states of the reduced model and when comparing back-translated simulations in the states of the original model. The method is developed in a linear setting, but we exemplify how the same concepts and approaches can be applied to non-linear problems. Importantly, the method automatically provides a reduced model with back-translations. Also, the method is implemented as a part of the systems biology toolbox for matlab, and the matlab scripts for the examples in this paper are available in the supplementary material.ConclusionsOur novel lumping methodology allows for both automatic reduction of states using lumping, and for analytical retrieval of the original states and parameters without performing a new simulation. The two models can thus be considered as two degrees of zooming of the same model. This is a conceptually new development of model reduction approaches, which we think will stimulate much further research and will prove to be very useful in future modelling projects. <s> BIB001 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Aims of Model Reduction <s> Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors. <s> BIB002
|
The choice of model reduction method employed is typically constrained by the aims of the researcher. For example, the optimal reduction that retains the biological meaning of the state-variables is likely to be non-optimal in a setting where transformations of the state-variables are permitted. The preferred reduction is also likely to differ if we select the reduction that best approximates all state-variables as opposed to some subset, and depending upon the metric of error that is employed. Note that whilst we have here outlined the process of modelling biochemical reaction networks in the context of the Law of Mass Action, most reduction methods reviewed in this paper are applicable in the broader context of general ODE systems. The Law of Mass Action typically represents the main theoretical basis for the deterministic modelling of systems biology networks; however, it is also common that other terms, such as Hill, logistic, or other mathematical functions, are used to describe certain biological phenomena. Certain methods that are reviewed (e.g. BIB002 ) do require that the model contains only polynomial terms or, in certain instances, that the model has a specific structure, or that it is linear (e.g. BIB001 ). Where the methods do require a more specific structure than a general system of ODEs, this will be highlighted as part of the review.
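To make the distinction between mass-action and non-polynomial rate terms concrete, the short sketch below (Python with SciPy; species names and parameter values are invented purely for illustration) builds a toy ODE model containing one polynomial mass-action binding term and one Hill-type conversion term; reduction methods requiring purely polynomial right-hand sides would handle the former directly but would need the latter to be recast or approximated:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not taken from any published model).
k_on, k_off, v_max, K, n = 1.0, 0.2, 2.0, 0.5, 4

def rhs(t, y):
    a, b, c, p = y
    binding = k_on * a * b - k_off * c         # polynomial Law of Mass Action term: a + b <-> c
    conversion = v_max * c**n / (K**n + c**n)  # non-polynomial Hill-type term: c -> p
    return [-binding, -binding, binding - conversion, conversion]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.8, 0.0, 0.0], t_eval=np.linspace(0.0, 10.0, 101))
print(sol.y[:, -1])  # final values of a, b, c and p
```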
|
Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Conservation Analysis <s> In the general framework of metabolic control theory, we describe a method of mathematical modelling that provides a way of analysing the sensitivity of a metabolic system to perturbations of the environment or of the internal state of this system. The method can be applied to any metabolic system, involving for instance conservation relationships, non-specific external parameters, etc., and leads in particular to a characterization of the control matrices and to a generalization of the summation and connectivity theorems. In this paper, we emphasize the structural characterizations and properties of the systems which depend only on the structure of the metabolic network, and not on the reaction kinetics. The advantage of this approach lies of course in the fact that the structure of the metabolic network is an invariant of the system which depends neither on the environment nor on the internal state of this system. The aim of this paper is to show the efficiency of such a structural approach. <s> BIB001 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Conservation Analysis <s> Abstract Large scale genomic studies are generating significant amounts of data on the structure of cellular networks. This is in contrast to kinetic data, which is frequently absent, unreliable or fragmentary. There is, therefore, a desire by many in the community to investigate the potential rewards of analyzing the more readily available topological data. This brief review is concerned with a particular property of biological networks, namely structural conservations (e.g. moiety conserved cycles). There has been much discussion in the literature on these cycles but a review on the computational issues related to conserved cycles has been missing 1 . This review is concerned with the detection and characterization of conservation relations in arbitrary networks and related issues, which impinge on simulation simulation software writers. This review will not address flux balance constraints or small-world type analyses in any significant detail. <s> BIB002
|
Models of biochemical reaction networks commonly possess subsets of reactants that, under a given linear combination, remain constant at all times. These subsets are typically referred to as conserved moieties and the specific linear combinations as conservation relations. In combination with the system of ODEs described by Eq. (2a), the existence of conservation relations implies that the model can be expressed as a system of differential algebraic equations (DAEs), such that ẋ = Sv(x) (4a) and Γẋ = 0 (4b), where S and v(x) are the stoichiometry matrix and reaction rate vector of Eq. (2a), and Γ is an h × n matrix referred to as the conservation matrix, the rows of which represent the linear combinations of reactants that are constant in time. As Eq. (4b) is linear following integration, it can be solved explicitly and used to eliminate up to h state-variables and their associated ODEs from the system defined by Eq. (4a). This replacement of state-variables via the algebraic exploitation of conservation relations is a common first step in the analysis of biochemical reaction networks and, for large systems, typically results in the elimination of 10-15% of the state-variables. For small networks conservation relations are usually obvious and easily exploited. For very large systems, however, these relations are often not readily apparent, and it is common to turn to algorithmic approaches for finding the conservation matrix Γ. As is discussed in BIB001, this can be achieved by computing the left null space (and hence the linear dependencies) of the network's associated stoichiometry matrix. A review of a range of methods to find the left null space of this matrix, including Gaussian elimination and singular value decomposition, can be found in BIB002. Such methods, however, are often numerically unstable for systems of very high dimension, which can lead to some conservation relations being missed. A more numerically stable method, based upon the construction of a QR decomposition via Householder reflections, has also been developed. An example of the application of algorithmic conservation analysis to a nonlinear example model is provided in Additional file 1 (Supplementary Information, Section 2.1).
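A minimal sketch of this computation is shown below (Python with NumPy/SciPy; the stoichiometry matrix is written out by hand for the textbook enzyme-substrate network E + S <-> C -> E + P and serves purely as an illustration). The rows of the returned matrix span the left null space of the stoichiometry matrix and hence encode the conservation relations:

```python
import numpy as np
from scipy.linalg import null_space

# Stoichiometry matrix S for E + S <-> C -> E + P; rows are species (E, S, C, P),
# columns are the reactions (binding, unbinding, catalysis).
S = np.array([
    [-1,  1,  1],   # E
    [-1,  1,  0],   # S
    [ 1, -1, -1],   # C
    [ 0,  0,  1],   # P
], dtype=float)

# Conservation relations satisfy Gamma @ S = 0, i.e. they span the left null space of S.
Gamma = null_space(S.T).T
print(np.round(Gamma, 3))
# The two rows span the same space as the conserved totals E + C and S + C + P
# (the numerical basis returned is orthonormal, so it differs by a change of basis).
```

For very large networks the rank tolerance used in such a floating-point computation needs care, which is one motivation for the QR-based approach mentioned above.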
|
Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> Cellular functions, such as signal transmission, are carried out by 'modules' made up of many species of interacting molecules. Understanding how modules work has depended on combining phenomenological analysis with molecular studies. General principles that govern the structure and behaviour of modules may be discovered with help from synthetic sciences such as engineering and computer science, from stronger interactions between experiment and theory in cell biology, and from an appreciation of evolutionary constraints. <s> BIB001 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> As biology begins to move into the “postgenomic” era, a key emerging question is how to approach the understanding of how complex biomolecular networks function as dynamical systems. Prominent examples include multimolecular protein “machines,” intracellular signal transduction cascades, and cell–cell communication mechanisms. As the proportion of identified components involved in any of these networks continues to increase, in certain instances already asymptotically, the daunting challenge of developing useful models—mathematical as well as conceptual—for how they work is drawing interest. At one extreme is the hope that fundamental relationships will emerge from essentially statistical analyses of large genomic and proteomic databases enumerating correlations among gene expression, protein level/state/location, and cell behavior. At another extreme is a view that sheer computational power can be harnessed to create comprehensive simulations of the full set of fundamental physicochemical molecular interactions. Recently, an intermediate concept suggests a “modular” framework, treating subsystems of complex molecular networks as functional units that perform identifiable tasks—perhaps even able to be characterized in familiar engineering terms (1). The idea of functional modules as an effective approach to modeling biomolecular systems is quite appealing, because, even in nonbiological applications, engineering design is generally carried out in hierarchical or “nested” fashion. That is, the behavior of a system at the highest (i.e., largest space scale and/or longest time scale) level is typically analyzed and predicted with a model involving properties of the next-lower space/time scales; these properties are then analyzed and predicted with another set of models involving further subdivided space and/or time scales and so forth to a most detailed level as limited by current data. <s> BIB002 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> The sheer complexity of intracellular regulatory networks, which involve signal transducing, metabolic, and genetic circuits, hampers our ability to carry out a quantitative analysis of their functions. Here, we describe an approach that greatly simplifies this type of analysis by capitalizing on the modular organization of such networks. Steady-state responses of the network as a whole are accounted for in terms of intermodular interactions between the modules alone; processes operating solely within modules need not be considered when analysing signal transfer through the entire network. 
The intermodular interactions are quantified through (local) response coefficients which populate an interaction map (matrix). This matrix can be derived from a biochemical or molecular biological analysis of (macro) molecular interactions that constitute the regulatory network. The approach is illustrated by two examples: (i) mitogenic signalling through the mitogen-activated protein kinase cascade in the epidermal growth factor receptor network and (ii) regulation of ammonium assimilation in Escherichia coli. <s> BIB003 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> Motivation: The vastness and complexity of the biochemical networks that have been mapped out by modern genomics calls for decomposition into subnetworks. Such networks can have inherent non-local ... <s> BIB004 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> The physiological responses of cells to external and internal stimuli are governed by genes and proteins interacting in complex networks whose dynamical properties are impossible to understand by intuitive reasoning alone. Recent advances by theoretical biologists have demonstrated that molecular regulatory networks can be accurately modeled in mathematical terms. These models shed light on the design principles of biological control systems and make predictions that have been verified experimentally. <s> BIB005 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> Central functions in the cell are often linked to complex dynamic behaviours, such as sustained oscillations and multistability, in a biochemical reaction network. Determination of the specific mechanisms underlying such behaviours is important, e.g. to determine sensitivity, robustness, and modelling requirements of given cell functions. In this work we adopt a systems approach to the analysis of complex behaviours in intracellular reaction networks, described by ordinary differential equations with known kinetic parameters. We propose to decompose the overall system into a number of low complexity subsystems, and consider the importance of interactions between these in generating specific behaviours. Rather than analysing the network in a state corresponding to the complex non-linear behaviour, we move the system to the underlying unstable steady state, and focus on the mechanisms causing destabilisation of this steady state. This is motivated by the fact that all complex behaviours in unforced systems can be traced to destabilisation (bifurcation) of some steady state, and hence enables us to use tools from linear system theory to qualitatively analyse the sources of given network behaviours. One important objective of the present study is to see how far one can come with a relatively simple approach to the analysis of highly complex biochemical networks. The proposed method is demonstrated by application to a model of mitotic control in Xenopus frog eggs, and to a model of circadian oscillations in Drosophila. In both examples we are able to identify the subsystems, and the related interactions, which are instrumental in generating the observed complex non-linear behaviours. 
<s> BIB006 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> Multisite phosphorylation is an important mechanism for fine-tuned regulation of protein function. Mathematical models developed over recent years have contributed to elucidation of the functional consequences of a variety of molecular mechanisms involved in processing of the phosphorylation sites. Here we review the results of such models, together with salient experimental findings on multisite protein phosphorylation. We discuss how molecular mechanisms that can be distinguished with respect to the order and processivity of phosphorylation, as well as other factors, regulate changes in the sensitivity and kinetics of the response, the synchronization of molecular events, signalling specificity, and other functional implications. <s> BIB007 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> Modularity plays a fundamental role in the prediction of the behavior of a system from the behavior of its components, guaranteeing that the properties of individual components do not change upon interconnection. Just as electrical, hydraulic, and other physical systems often do not display modularity, nor do many biochemical systems, and specifically, genetic and signaling networks. Here, we study the effect of interconnections on the input/output dynamic characteristics of transcriptional components, focusing on a concept, which we call “retroactivity” that plays a role similar to impedance in electrical circuits. In order to attenuate the effect of retroactivity on a system dynamics, we propose to design insulation devices based on a feedback mechanism inspired by the design of amplifiers in electronics. In particular, we introduce a bio-molecular realization of an insulation device based on phosphorylation. <s> BIB008 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> Motivation: In Systems Biology, an increasing collection of models of various biological processes is currently developed and made available in publicly accessible repositories, such as biomodels.net for instance, through common exchange formats such as SBML. To date, however, there is no general method to relate different models to each other by abstraction or reduction relationships, and this task is left to the modeler for re-using and coupling models. In mathematical biology, model reduction techniques have been studied for a long time, mainly in the case where a model exhibits different time scales, or different spatial phases, which can be analyzed separately. These techniques are however far too restrictive to be applied on a large scale in systems biology, and do not take into account abstractions other than time or phase decompositions. Our purpose here is to propose a general computational method for relating models together, by considering primarily the structure of the interactions and abstracting from their dynamics in a first step. ::: ::: Results: We present a graph-theoretic formalism with node merge and delete operations, in which model reductions can be studied as graph matching problems. 
From this setting, we derive an algorithm for deciding whether there exists a reduction from one model to another, and evaluate it on the computation of the reduction relations between all SBML models of the biomodels.net repository. In particular, in the case of the numerous models of MAPK signalling, and of the circadian clock, biologically meaningful mappings between models of each class are automatically inferred from the structure of the interactions. We conclude on the generality of our graphical method, on its limits with respect to the representation of the structure of the interactions in SBML, and on some perspectives for dealing with the dynamics. ::: ::: Availability: The algorithms described in this article are implemented in the open-source software modeling platform BIOCHAM available at http://contraintes.inria.fr/biocham The models used in the experiments are available from http://www.biomodels.net/ ::: ::: Contact: [email protected] <s> BIB009 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> Large-scale model development for biochemical reaction networks of living cells is currently possible through qualitative model classes such as graphs, Boolean logic, or Petri nets. However, when it is important to understand quantitative dynamic features of a system, uncertainty about the networks often limits large-scale model development. Recent results, especially from monotone systems theory, suggest that structural network constraints can allow consistent system decompositions, and thus modular solutions to the scaling problem. Here, we propose an algorithm for the decomposition of large networks into monotone subsystems, which is a computationally hard problem. In contrast to prior methods, it employs graph mapping and iterative, randomized refinement of modules to approximate a globally optimal decomposition with homogeneous modules and minimal interfaces between them. Application to a medium-scale model for signaling pathways in yeast demonstrates that our algorithm yields efficient and biologically interpretable modularizations; both aspects are critical for extending the scope of (quantitative) cellular network analysis. <s> BIB010 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> Biological system models are routinely developed in modern systems biology research following appropriate modelling/experiment design cycles. Frequently these take the form of high-dimensional nonlinear Ordinary Differential Equations that integrate information from several sources; they usually contain multiple time-scales making them difficult even to simulate. These features make systems analysis (understanding of robust functionality) - or redesign (proposing modifications in order to improve or modify existing functionality) a particularly hard problem. In this paper we use concepts from systems theory to develop two complementary tools that can help us understand the complex behaviour of such system models: one based on model decomposition and one on model reduction. Our aim is to algorithmically produce biologically meaningful, simplified models, which can then be used for further analysis and design. The tools presented are applied on a model of the Epidermal Growth Factor signalling pathway. 
<s> BIB011 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> We approach the solution to the problem “are biological networks modular” using a systems theory approach. We propose a method to divide a particular family of gene regulatory networks into “modules” that are functionally isolated from each other, so that the behavior of a composite network of two or more modules can be predicted from the input-output characteristics of the individual modules. This method provides a platform for the creation of new foundational modules using which networks can be decomposed. We present our work for the deterministic case, while providing a well-known example from the literature to validate our approach. <s> BIB012 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Model Decomposition <s> Parameter estimation for high dimension complex dynamic system is a hot topic. However, the current statistical model and inference approach is known as a large p small n problem. How to reduce the dimension of the dynamic model and improve the accuracy of estimation is more important. To address this question, the authors take some known parameters and structure of system as priori knowledge and incorporate it into dynamic model. At the same time, they decompose the whole dynamic model into subset network modules, based on different modules, and then they apply different estimation approaches. This technique is called Rao-Blackwellised particle filters decomposition methods. To evaluate the performance of this method, the authors apply it to synthetic data generated from repressilator model and experimental data of the JAK-STAT pathway, but this method can be easily extended to large-scale cases. <s> BIB013
|
Biochemical reaction networks are often highly modular in nature BIB001 BIB003. This implies that the elements (species or reactions) of most networks in this context, as compared to a randomly generated network, can be more easily partitioned into sub-networks that are highly connected within themselves and possess a low number of connections to elements outside of their partition. Additionally, complex phenomenological behaviours can often be shown to be driven by small sub-networks contained within the larger network BIB002 BIB005. The approach of dividing the system into interacting sub-networks (often referred to as modules) is known as model decomposition. Given the high degree of network modularity common in this field and the likelihood of certain modules dominating the dynamical behaviour of interest, model decomposition is an attractive technique in the modelling of biochemical systems. Methods of model decomposition are also highly complementary to methods of model reduction, as they can be used to separate the system into modules of differing 'importance' and hence be used to guide reduction. For example, it may be the case that only those portions of a signalling pathway model addressing the initial receptor binding of an extracellular ligand and the phosphorylation of a particular protein downstream are of interest to the modeller. In this instance it may make sense to decompose the system into two modules representing these portions and a third module describing the 'unimportant' components of the network. This can then be used to guide model reduction such that the module deemed unimportant can be reduced in isolation and, potentially, approximated with a lower degree of accuracy than the important modules. As an example, consider the phosphorylation cycle [a description of phosphorylation cycles and their modelling can be found in BIB007 ] shown in Fig. 1a. Given a system of this form, a biologically reasonable decomposition is to partition the system into phosphorylation and dephosphorylation modules, depicted in Fig. 1 as modules A and B, respectively. If, for example, the modeller were primarily interested in the dephosphorylation module, it might be possible to reduce the phosphorylation module significantly, as shown in Fig. 1b, whilst still retaining an accurate description of the biological mechanisms of interest. A full review of decomposition methods is beyond the scope of this paper. A wide range of approaches for finding suitable decompositions can be found in the literature BIB004 BIB008 BIB010 BIB011 BIB012. Related methods for determining whether a given model can be found as a sub-network in a larger system have also been discussed BIB009. BIB013 have proposed the decomposition of models into linear and nonlinear sub-modules for the purpose of parameter fitting via Rao-Blackwellised particle filter decomposition methods. Additionally, approaches for determining which sub-modules of a network drive a particular dynamical behaviour of a model (oscillations, for example) BIB006 may have a particular applicability within the context of model reduction, guiding the use of reduction so as to preserve phenomena of interest.
Fig. 1 (caption): I The network depicted represents a simple enzymatic phosphorylation cycle: a kinase K mediates the phosphorylation of a protein X, whilst a phosphatase P performs the process of dephosphorylation. A biologically guided decomposition of the network into two sub-modules A and B is shown, with A representing the unphosphorylated protein and the kinase binding step, B representing the phosphorylated protein and the phosphatase binding step, and only the phosphorylation and dephosphorylation reactions linking the two sub-modules. II An example of a decomposition-guided model reduction of the phosphorylation cycle; here module A, representing the kinase binding, has been reduced to a single state-variable, whilst the full biological detail of the phosphatase binding and dephosphorylation of X has been retained.
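One simple algorithmic route to candidate decompositions of the kind shown in Fig. 1 is community detection on a species-interaction graph. The sketch below (Python with NetworkX) applies greedy modularity maximisation to a hand-built toy graph mimicking the phosphorylation cycle; this is a generic illustration and not the specific method of any of the works cited above:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy species-interaction graph for the phosphorylation cycle of Fig. 1: nodes are
# species and an edge joins two species that take part in a common reaction.
G = nx.Graph()
G.add_edges_from([
    ("X", "K"), ("X", "XK"), ("K", "XK"),       # kinase binding (module A)
    ("Xp", "P"), ("Xp", "XpP"), ("P", "XpP"),   # phosphatase binding (module B)
    ("XK", "Xp"),                               # phosphorylation links A to B
    ("XpP", "X"),                               # dephosphorylation links B back to A
])

modules = greedy_modularity_communities(G)
for i, module in enumerate(modules):
    print(f"module {i}: {sorted(module)}")
```

On this toy graph the detected communities should roughly recover the kinase-binding and phosphatase-binding halves of the cycle; for realistic networks the output of such an algorithm would normally be checked against biological knowledge before being used to guide reduction.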
Coordinate Preserving Timescale Methods
These methods are based upon identifying either species or reactions which can be considered as exhibiting 'fast' dynamics in comparison with the remainder of the network, hence partitioning the system into fast and slow components. Often this involves finding some nondimensionalisation that exposes a small parameter $\delta \ll 1$ that can be used to distinguish between species and reactions occurring on fast and slow timescales. Once such a representation has been found, application of singular perturbation theory enables the reduction of the system.

Singular perturbation for the reduction of systems of first-order ODEs was originally developed by Tikhonov. His original paper is in Russian, but an excellent synopsis in English is given by BIB001 , which guides the description provided here. Tikhonov's theorem on dynamical systems states that, under certain conditions, if a system of first-order differential equations can be expressed in the form

$$\dot{x}_1(t) = f\left(x_1, x_2, t\right), \qquad (7a)$$
$$\delta\,\dot{x}_2(t) = g\left(x_1, x_2, t\right), \qquad (7b)$$

where Eq. (7a) is commonly referred to as the degenerate system and (7b) as the adjoined system, then as $\delta \to 0$ the solution of the whole system tends to that of the degenerate system, such that

$$\dot{x}_1(t) = f\left(x_1, x_2, t\right), \qquad (8a)$$
$$x_2(t) = \varphi\left(x_1, t\right), \qquad (8b)$$

with $\varphi(x_1, t)$ a root of the equations $g(x_1, x_2, t) = 0$. Clearly, Eq. (8b) can be substituted into Eq. (8a) to produce a reduced system of ODEs in terms only of the state-variables $x_1(t)$. In order for this reduction to hold, Tikhonov's theorem requires the following conditions to be met: 1. the root $x_2 = \varphi(x_1, t)$ of $g(x_1, x_2, t) = 0$ must be an isolated root; 2. this root must be a stable steady state of the adjoined system (7b); and 3. the initial conditions used in the reduced system must be in the basin of attraction for this steady state of the adjoined system. This approach to reduction is commonly referred to as singular perturbation. Assuming $\delta = 0$ is equivalent to a first-order truncation of the asymptotic expansion in terms of $\delta$. Higher-order approximations can often be computed, potentially providing more accurate reduced models for somewhat larger values of $\delta$. BIB002 additionally demonstrates how singular perturbation can be applied to a control-theoretic state-space model in the form of (2).
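As a concrete numerical illustration of this style of reduction, the sketch below (a hypothetical two-variable system chosen purely for demonstration, not one drawn from the text) simulates a singularly perturbed system for a small value of $\delta$ and compares it with the corresponding degenerate system.

```python
# A minimal sketch (hypothetical two-variable system, not taken from the text)
# comparing a singularly perturbed system with its Tikhonov-reduced counterpart.
import numpy as np
from scipy.integrate import solve_ivp

delta = 1e-3   # small parameter separating the fast and slow timescales

def full(t, y):
    x1, x2 = y
    return [-x2 * x1,               # slow equation, cf. (7a)
            (x1 - x2) / delta]      # fast equation, cf. (7b)

def reduced(t, y):
    # adjoined system at equilibrium: g(x1, x2) = x1 - x2 = 0  =>  x2 = phi(x1) = x1
    x1 = y[0]
    return [-x1 * x1]

t_eval = np.linspace(0.0, 5.0, 200)
sol_full = solve_ivp(full, (0.0, 5.0), [1.0, 0.0], t_eval=t_eval, method='LSODA', rtol=1e-8)
sol_red = solve_ivp(reduced, (0.0, 5.0), [1.0], t_eval=t_eval, rtol=1e-8)

# After the short initial transient the two descriptions of x1 agree closely.
err = np.max(np.abs(sol_full.y[0, 10:] - sol_red.y[0, 10:]))
print(f"max |x1_full - x1_reduced| after the transient: {err:.2e}")
```

The discrepancy between the two solutions is of the order of $\delta$, consistent with the first-order truncation of the asymptotic expansion described above.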
Species Partitioning
In the case where a timescale separation for the rates of species evolution can be observed, it is possible to partition x such that

$$x(t) = \begin{pmatrix} x_s(t) \\ x_f(t) \end{pmatrix},$$

where $x_s(t)$ represents those state-variables that evolve slowly in comparison with $x_f(t)$. For such a partitioning of a system to exist, it must be possible, via some nondimensionalisation, to express it in the form

$$\dot{x}_s(t) = f_s\left(x_s(t), x_f(t), p\right), \qquad \delta\,\dot{x}_f(t) = f_f\left(x_s(t), x_f(t), p\right), \qquad (10)$$

with the positive constant $\delta \ll 1$ corresponding to the difference in evolution speeds for the different species. Setting $\delta\,\dot{x}_f(t) \approx 0$ yields the system of differential algebraic equations (DAEs)

$$\dot{x}_s(t) = f_s\left(x_s(t), x_f(t), p\right), \qquad (11a)$$
$$0 = f_f\left(x_s(t), x_f(t), p\right). \qquad (11b)$$

Clearly, where Eq. (11b) can be solved, the variables $x_f(t)$ can be eliminated from Eq. (11a) to yield a reduced model. This method of model reduction is commonly referred to as the quasi-steady-state approximation (QSSA), and its most famous application is in reducing the Michaelis-Menten equation, as originally outlined by Briggs and Haldane. An example of the direct application of the QSSA to a nonlinear example model can be found in Additional file 1-Supplementary information Section 2.3. Such a reduction is valid where the timescale of the slowest fast species ($\tau_{f,\max}$) is significantly shorter than the timescale of the fastest slow species ($\tau_{s,\min}$), such that $\tau_{f,\max} \ll \tau_{s,\min}$. This is guaranteed to be the case where a formulation for the model of the form (10) can be found with $\delta \ll 1$; typically such a formulation is found via searching through possible nondimensionalisations of the system.

BIB005 , for example, recently applied the QSSA to a nondimensionalised and singularly perturbed model of the extracellular signal-regulated kinase (ERK) signalling pathway regulated by a Raf kinase inhibitor protein (RKIP). They showed that an 11-dimensional system can be reduced to 5 dimensions, and crucially, this reduced model can, unlike the original system, be solved analytically. This enables the biological insight that the RKIP protein only provides a regulatory role in the ERK pathway far from the system's steady state.

A number of variations of the QSSA approach can also be found in the literature; BIB001 discussed how the QSSA can be extended to singular, singularly perturbed systems and how this approximation can be extended to higher orders via asymptotic expansion. BIB009 and BIB008 have introduced the delay quasi-steady-state approximation (DQSSA), enabling the QSSA method to compensate for the time error incurred by forcing the approximation that the timescale of the fast species is equal to zero. This time error can be particularly problematic for oscillatory systems, where it can result in a mismatched phase. Compensating for this effect can greatly increase the accuracy of the QSSA in the case of such systems. Their approach is demonstrated via application to a 9-dimensional model of circadian rhythms which can be reduced to 2 dimensions; the standard QSSA incurs a 30% error for this reduction due to a mismatch in phase, whereas the DQSSA only incurs a 2% error.

Unfortunately, the QSSA is somewhat limited in the models it can be applied to, as it requires that the species exhibit a clear separation in timescales and a formulation amenable to singular perturbation. For simpler examples, searching through the range of possible nondimensionalisations and employing intuition of the system in order to find such a formulation is often feasible. For very large models, however, such an approach can be prohibitive due to the combinatorial explosion in the range of possible model representations.
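To illustrate the standard QSSA numerically, the sketch below (with hypothetical parameter values, not taken from the text) compares the full mass-action description of the Michaelis-Menten mechanism with the reduced model in which the enzyme-substrate complex is placed in quasi-steady state.

```python
# A minimal sketch (hypothetical parameter values) of the QSSA applied to the
# Michaelis-Menten mechanism E + S <-> ES -> E + P, with the complex ES treated
# as the fast species.
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 100.0, 50.0, 10.0   # binding, unbinding and catalytic rate constants
E_tot, S0 = 0.1, 10.0             # total enzyme and initial substrate (E_tot << S0 + Km)
Km = (km1 + k2) / k1

def full(t, y):
    S, ES = y
    E = E_tot - ES                          # enzyme conservation
    return [-k1 * E * S + km1 * ES,         # substrate (slow)
             k1 * E * S - (km1 + k2) * ES]  # complex (fast)

def qssa(t, y):
    S = y[0]
    # quasi-steady state for the complex: ES = E_tot * S / (Km + S)
    return [-k2 * E_tot * S / (Km + S)]

t_eval = np.linspace(0.0, 100.0, 500)
sol_full = solve_ivp(full, (0.0, 100.0), [S0, 0.0], t_eval=t_eval, method='LSODA', rtol=1e-8)
sol_qssa = solve_ivp(qssa, (0.0, 100.0), [S0], t_eval=t_eval, rtol=1e-8)

err = np.max(np.abs(sol_full.y[0] - sol_qssa.y[0]))
print(f"max substrate discrepancy between the full and QSSA models: {err:.3e}")
```

Here the validity condition discussed above is satisfied because the total enzyme concentration is small relative to the substrate pool, so the complex relaxes rapidly compared with substrate depletion.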
As a result of the difficulties that commonly occur in finding a suitable partitioning of species, a number of publications in this area are dedicated to providing algorithmic methods for determining species that can potentially be considered 'fast'. BIB006 , for example, have devised an algorithmic approach to rank the timescale factors of species via analysis of the system's Jacobian after a short initial transient period. Similarly, a notion of 'speed coefficients' has recently been introduced; these can be calculated for the state-variables of a model via analysis of the system's Jacobian and used to guide the fast/slow partitioning of the species. The zero-derivative principle (ZDP) provides a computational approach for extending the QSSA to higher-order approximations (see Additional file 1-Supplementary information Section 1.2). BIB007 have demonstrated use of the ZDP for the reduction of biochemical reaction networks via application to the Michaelis-Menten enzyme-substrate model and a phosphotransferase system (PTS) within the context of glucose transport. In the case of the PTS model it was demonstrated that a first-order ZDP approximation enabled the reduction of the original 9-dimensional system to a single state-variable whilst retaining a high degree of accuracy which was not attainable solely under the QSSA.

Reaction Partitioning

An alternative approach to partitioning the species x(t) is to instead partition the reaction rates $v(x(t), p)$ into fast and slow groups, such that

$$v\left(x(t), p\right) = \begin{pmatrix} v_s\left(x(t), p\right) \\ \frac{1}{\delta}\, v_f\left(x(t), p\right) \end{pmatrix}, \qquad (12)$$

with $\delta \ll 1$. Here $v_s(x(t), p)$ corresponds to the slow reaction rates and $v_f(x(t), p)$ to those that can be considered fast in comparison (as denoted by the associated small parameter $\delta$). This leads to a dynamical system of the form

$$\dot{x}(t) = S_s\, v_s\left(x(t), p\right) + \frac{1}{\delta}\, S_f\, v_f\left(x(t), p\right), \qquad (13)$$

where $S_s$ and $S_f$ represent submatrices of the stoichiometry matrix comprising those columns corresponding to the slow and fast reactions, respectively. Hence, the dynamics for the species concentrations $\dot{x}(t)$ can be decomposed into fast and slow contributions as a sum, such that $\dot{x}(t) = [\dot{x}(t)]_s + [\dot{x}(t)]_f$. Note here that, unlike the equivalent terms in the species partitioning case, $[\dot{x}(t)]_s$ does not necessarily correspond to a proper subset of x(t); rather it represents the slow dynamical contribution of each reaction to all of the modelled species concentrations. Taking the approximation $\delta \to 0$, singular perturbation yields

$$[\dot{x}(t)]_s = S_s\, v_s\left(x(t), p\right), \qquad (14a)$$
$$S_f\, v_f\left(x(t), p\right) = 0. \qquad (14b)$$

As x(t) still depends on both the slow and fast dynamical contributions, the aim is to solve Eq. (14b) in such a way that (14a) can be decoupled from the fast contributions, leaving a reduced model that accurately describes the slow timescale. This method operates under the assumption that certain reactions occur fast enough so as to be approximated as equilibrating instantaneously; hence, it is commonly referred to as the rapid equilibrium approximation (REA). The most famous application of the REA is Michaelis and Menten's original reduction of the enzyme-substrate reaction model. The rapid equilibrium approximation has been applied in the work of BIB002 , BIB003 and BIB004 to a number of models, in particular a model of the glycolytic pathway in Saccharomyces cerevisiae where they were able to reduce the system from 21 to 18 reactions whilst maintaining a high degree of accuracy, and a model of central carbon metabolism in humans where they were able to similarly achieve a reduction from 25 to 20 reactions.
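The sketch below gives a small numerical illustration of the REA (a hypothetical reaction scheme with made-up rate constants, not one of the models cited above): a fast reversible isomerisation is replaced by its equilibrium relation, leaving a single ODE for the slowly evolving pool.

```python
# A minimal sketch (hypothetical rate constants) of the rapid equilibrium
# approximation: the reversible isomerisation A <-> B is fast relative to the
# slow degradation B -> 0, so A and B are assumed to remain in equilibrium.
import numpy as np
from scipy.integrate import solve_ivp

kf, kr = 500.0, 250.0   # fast forward/reverse rate constants
ks = 1.0                # slow degradation rate constant
K = kf / kr             # equilibrium constant of the fast reaction

def full(t, y):
    A, B = y
    return [-kf * A + kr * B,
             kf * A - kr * B - ks * B]

def rea(t, y):
    # slow variable: the pool T = A + B, with B = K*T/(1+K) at equilibrium
    T = y[0]
    return [-ks * K * T / (1.0 + K)]

t_eval = np.linspace(0.0, 5.0, 200)
sol_full = solve_ivp(full, (0.0, 5.0), [1.0, 0.0], t_eval=t_eval, method='LSODA', rtol=1e-8)
sol_rea = solve_ivp(rea, (0.0, 5.0), [1.0], t_eval=t_eval, rtol=1e-8)

T_full = sol_full.y[0] + sol_full.y[1]
err = np.max(np.abs(T_full - sol_rea.y[0]))
print(f"max discrepancy in the slow pool A + B: {err:.3e}")
```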
More recently, Prescott and Papachristodoulou have developed a variant of this approach (Prescott and Papachristodoulou 2013, 2014) that further generalises the process of dividing such systems based upon differences in reaction timescales and hence partitioning the columns of the stoichiometry matrix. This work yielded an automatable model decomposition method they term layering. They highlight the fact that such an approach can present a more natural means of model decomposition as opposed to the traditional approach of partitioning species into modules.
Finding Timescale Partitions
The main difficulty associated with these timescale partitioning methods is that of finding a formulation of the system for which an appropriate parameter $\delta \ll 1$ can be identified. A range of approaches addressing this issue have been discussed in the literature. BIB002 , BIB003 and BIB004 have proposed, developed and refined an approach of model tropicalisation for the reduction of biochemical models: this is a method of model abstraction which can guide the application of both the species- and reaction-based singular perturbation approaches described above. BIB005 further develop the method of tropicalisation in the context of systems with entirely polynomial governing equations by introducing an algorithm allowing the automatic computation of tropical equilibrations based upon the Newton polytope and edge filtering.

BIB001 have also provided an a posteriori means of analysing systems for the existence of possible QSSA or REA simplifications. The system is simulated under two conditions: the introduction and the removal of a fixed input into the system. The trajectories of these simulations are then plotted in each of the 2-dimensional phase planes between all possible pairs of state-variables. In each case the hysteresis between these two trajectories is used to judge the possibility that each pair can be considered to rapidly equilibrate with respect to one another and hence guide application of the timescale exploitation methods described throughout this section. This method was applied to a 25-dimensional model of β1-adrenergic signalling, where it was shown that a 6-dimensional reduced model was capable of accurately capturing the original system's dynamics.

It has also been demonstrated that, for models which can be recast in the form of S-systems, it is always possible to algorithmically rank the timescales of species and to obtain a simple description of how this ranking varies with the model parameterisation. This is achieved by expressing the system in the form of a generalised Lotka-Volterra model; through the analysis of a specific constant matrix and application of singular value decomposition, it is then possible to study how the timescales of the state-variables depend upon both the specific parameterisation and the stoichiometry of the system. This approach is demonstrated via application to three real-world examples: a model of yeast glycolysis, the citric acid (TCA) cycle and purine metabolism.
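The sketch below gives a deliberately simplified numerical illustration of the intuition behind tropical equilibration; it only compares the orders of magnitude of the monomials of a single hypothetical equation and does not implement the Newton polytope or edge-filtering machinery of the cited works.

```python
# A highly simplified numerical illustration of the idea behind tropical
# equilibration (not the Newton-polytope algorithm of the cited works): compare
# the orders of magnitude of the monomials in a polynomial right-hand side and
# check whether the dominant positive and negative terms balance.
import numpy as np

# Hypothetical monomials of a single equation dc/dt = k1*e*s - km1*c - k2*c
params = dict(k1=1e3, km1=1e2, k2=1.0)
state = dict(e=1e-3, s=1e0, c=1e-2)

monomials = {
    '+k1*e*s':  params['k1'] * state['e'] * state['s'],
    '-km1*c':  -params['km1'] * state['c'],
    '-k2*c':   -params['k2'] * state['c'],
}
orders = {name: np.log10(abs(val)) for name, val in monomials.items()}
print("orders of magnitude:", {name: round(o, 2) for name, o in orders.items()})

# An (approximate) tropical equilibration requires the largest positive and the
# largest negative monomial to share the same order of magnitude.
pos = max(o for name, o in orders.items() if monomials[name] > 0)
neg = max(o for name, o in orders.items() if monomials[name] < 0)
print("dominant terms balanced to within one order of magnitude:", abs(pos - neg) < 1.0)
```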
Coordinate Transforming Timescale Methods
In the previous section it was discussed that often a nondimensionalisation of a system was required in order to clearly expose the timescale differences between species and reactions. In this section, however, it is shown that a change of basis for the state-variables can often be used to obtain a transformed model where timescale separation is significantly more readily apparent and exploitable. Such approaches can often lead to lower-dimensional and more accurate model reductions than the methods so far discussed. However, this is weighed against the fact that the transformations employed will often obfuscate the biological interpretability of the reduced dynamical system.

The methods outlined in this section aim to find a transformation of the state-variables under which the fast and slow dynamics can be decoupled and then used to reduce the system whilst retaining a high degree of accuracy between the simplified and original models. In essence, such methods seek a low-dimensional manifold within the phase space of the system upon which trajectories of interest for the dynamical model can be satisfactorily approximated on the timescale of interest. Usually the aim is to describe the dynamics on the slow timescales and thus seek a manifold that can approximate trajectories after a short initial transient period through to steady state. This is commonly known as an inertial manifold (or, in special cases, as the slow manifold) BIB002 . The methods discussed in this section provide approximations of such manifolds.

The simplest example involves linearisation and transformation of the state-variables into the system's eigenbasis. First note that a system of the form described by Eq. (1) can be linearised (i.e. approximated by a linear system of ODEs) around a given state $x_c$ of the system by calculating the Jacobian matrix

$$J_{x_c} = S\,E,$$

with $E$ commonly referred to as the elasticity matrix, whose entries are given by

$$E_{ij} = \left.\frac{\partial v_i\left(x, p\right)}{\partial x_j}\right|_{x = x_c}.$$

Then, via a first-order Taylor expansion, the system can be approximated in the neighbourhood of $x_c$ by

$$\dot{x}(t) \approx S\,v\left(x_c, p\right) + J_{x_c}\left(x(t) - x_c\right).$$

The eigenvectors $\nu_i$, for $i = 1, \ldots, n$, of $J_{x_c}$ represent directions of movement around this point in phase space, and the corresponding eigenvalues $\lambda_i$ determine the speed of movement along that direction. Hence, if the state-variables are transformed so as to correspond with the directions of the eigenvectors (i.e. into the eigenbasis), clear timescales $\tau_i = 1/\left|\mathrm{Re}(\lambda_i)\right|$ can be associated with each new variable. If there is a sufficiently large gap between any two successive eigenvalues (i.e. an eigengap), a timescale decomposition of the transformed state-variables into slow and fast groups is possible, and hence, singular perturbation can be applied to obtain a reduced system. Unfortunately, if some of the eigenvalues are tightly clustered or are replicated, standard eigendecomposition approaches may suffer issues of numerical inaccuracy. The intrinsic low-dimensional manifold method (ILDM), originally developed as a means of model reduction by BIB003 within the context of combustion chemistry, provides a numerically stable means of applying an eigenbasis decomposition. ILDM has seen a number of applications within the field of biochemical modelling, and a more detailed account of the methodology is given in Additional file 1-Supplementary information Section 1.3. Vallabhajosyula and Sauro have also provided a brief review of the ILDM method within the context of biochemical reaction networks.
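A small numerical sketch of this eigenvalue-based timescale analysis is given below (using a hypothetical Jacobian rather than a system from the text): the Jacobian is diagonalised, a timescale is assigned to each eigendirection, and large gaps between successive timescales indicate where a fast/slow split may be possible.

```python
# A minimal sketch (hypothetical Jacobian) of eigenvalue-based timescale analysis:
# diagonalise the Jacobian at a reference state x_c and assign a timescale
# tau_i = 1/|Re(lambda_i)| to each eigendirection.
import numpy as np

# Hypothetical Jacobian of a 3-species network evaluated at x_c.
J = np.array([
    [-1000.0,  10.0,  0.0],
    [   20.0, -50.0,  5.0],
    [    0.0,   1.0, -0.5],
])

eigvals, eigvecs = np.linalg.eig(J)
order = np.argsort(np.abs(eigvals.real))          # slowest eigendirections first
timescales = 1.0 / np.abs(eigvals.real[order])

print("eigenvalues (slowest first):", eigvals[order])
print("timescales:", timescales)

# Large ratios between successive timescales (eigengaps) indicate how many of the
# transformed state-variables are needed to capture the slow dynamics.
print("ratios between successive timescales:", timescales[:-1] / timescales[1:])
```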
An example of the direct application of the ILDM method to a nonlinear example model can also be found in Additional file 1-Supplementary information Section 2.4. Notably, BIB006 have developed a time-varying form of the ILDM method where the time course of the model is split into multiple intervals with differing reductions. This approach was demonstrated via application to a model of the peroxidase-oxidase reaction coupled with enzyme activity consisting of 10 ODEs. Under their approach the model could be reduced to between 3 and 5 state-variables at each time-interval whilst maintaining a high degree of accuracy. This approach has also been examined via application to a model of glycolysis in yeast cells, in particular to answer the question of how far the ILDM continues to provide an accurate timescale decomposition away from the point of linearisation $x_c$. BIB009 have developed a highly automatable and time-dependent form of the ILDM method for implementation in the COPASI software package BIB007 . Time dependency is achieved by not decoupling the fast and slow transformed state-variables found under the ILDM. Here, instead, the QSSA is applied to the species that are shown to contribute most to the set of fast transformed state-variables. Hence, although it has its roots in ILDM, this approach is coordinate preserving as opposed to employing a change of basis. This approach is demonstrated via application to models of calcium oscillation and glycolysis in Saccharomyces cerevisiae. In both cases good reductions could be obtained, with a maximal relative error of around 0.5% across all reactants in the glycolysis case.

Bykov and Goldshtein (2016) outline a similar method to the ILDM termed the global quasi-linearisation method (GQL) that can be used to exploit fast/slow decompositions of the system. By combining the conservation relations and the singularly perturbed eigendecomposition of the system's GQL matrix, it is possible to replace a number of species with algebraic relations and hence reduce the system. This approach is demonstrated for a 28-dimensional system describing the intracellular signalling of FAS-induced apoptosis; this system was reduced to 15 dimensions whilst incurring <1% relative error.

An alternative coordinate transforming method based upon timescale decomposition is that of computational singular perturbation (CSP). The CSP method was originally published in 1985 by BIB001 and further developed in a series of subsequent papers BIB004 . More recent work by BIB005 and Zagaris et al. (2004a, b) has provided a rigorous analysis of the asymptotic behaviour of CSP and its relationship to other timescale-based methods such as ILDM. Like ILDM, CSP seeks to provide a general framework for applying a timescale decomposition where no obvious nondimensionalisation exposing a singularly perturbed form can be found. This is again achieved by applying a change of basis. Unlike the ILDM method, however, CSP seeks to transform the set of reactions into a new basis that exposes clear timescale differences between the set of transformed reactions. The fast transformed reactions can then be assumed to equilibrate instantaneously, and hence their dynamical contribution can be neglected in a reduced model. In computing these transformed reaction rates, CSP also yields timescale estimates for the original set of reactions and state-variables. These timescale indices can be used to guide the application of more traditional methods of reduction such as QSSA or REA.
CSP is a highly automated approach that iteratively constructs a change of basis for the reactions. In doing so, application of CSP can provide significant analytical insight into the driving factors of a dynamical system. Further details on the application of CSP can be found in Additional file 1-Supplementary information Section 1.4. BIB011 have discussed the implementation of the CSP algorithm in the COPASI software package and also demonstrated its application for the reduction of a model of glycolysis in S. cerevisiae. Specifically, they showed the use of the method in guiding the application of the QSSA and the REA. They were hence able to reduce the original system, involving 22 state-variables and 24 reactions, to a 17-dimensional model detailing 19 reactions that remained accurate for a wide range of dynamical regimes. BIB008 BIB010 similarly applied the method to a model of glycolysis in S. cerevisiae. Here, however, they were only concerned with the long-term dynamical description of the system on a limit-cycle and, additionally, the transformation of the reactions into a new basis was permitted. Under this approach they were able to demonstrate that the limit-cycle is contained within an 11-dimensional manifold and that evolution along this trajectory could be accurately described using only three state-variables. The publication also explored the use of CSP in guiding conventional model reduction approaches, but found that a 10-dimensional reduction attained via guided application of QSSA and REA performed significantly worse than that obtained via the construction of a transformed reaction basis. In a further work, BIB012 sought to analyse a model of the NF-κB signalling system via application of CSP and the computation of timescale indices, but did not propose a specific reduced model.
Sensitivity Analysis
Sensitivity analysis can be local or global and represents a commonly applied methodology in the systems biology literature BIB004 . It is typically employed to determine how robust the system's response is to fluctuations in parameter values; however, sensitivity analysis can also be used in model reduction to guide the elimination of the least influential reactions or species in a system. Given the state-space representation of Eq. (2), the aim of sensitivity analysis is to determine how the output y(t) changes under perturbations to the parameters p and the state-variables x(t). To then reduce the system, the most common approach is simply to eliminate those species or parameters found to be the least sensitive in affecting the model. This is typically achieved by setting insensitive parameters equal to zero and fixing insensitive state-variables to some constant value (typically their steady-state values). Figure 3 provides a schematic depiction of this approach to model reduction. Note that this method of sensitivity analysis preserves the meaning of the reduced state-variables and reactions as no transformation is employed.
Fig. 3 Schematic depiction of sensitivity analysis versus optimisation. (I) Sensitivity analysis allows the ranking of the relative importance of the parameters on the outputs of interest. The least influential parameters can be fixed as constant, lessening the burden of parameter fitting, or can enable model reduction through the elimination of associated parameters. (II) The optimisation approaches differ in that they typically aim to eliminate the least influential state-variables by fixing them to be constant in time.
Local Sensitivity Analysis
Local sensitivity analysis studies the response of the system to small perturbations in the model parameterisation around some specified operating point p = p*. More specifically, such an analysis usually aims to describe variation of the model's state-variables with respect to parameter variation by constructing a sensitivity matrix $R(t) = \{r_{ij}(t)\}$, where the entries represent the effect of perturbing the jth model parameter on the ith state-variable. As is discussed in BIB006 , for example, it is also common to normalise these indices of sensitivity such that measures of sensitivity remain invariant under the rescaling of state-variables. Further details on computing the sensitivity matrix are provided in Additional file 1-Supplementary information Section 1.5. Once a matrix of sensitivity coefficients has been constructed, principal component analysis (PCA) is an established method for ranking the importance of individual reactions and determining which can be eliminated from the model BIB001 . An example of the direct application of normalised, local sensitivity analysis and PCA to a nonlinear example model can be found in Additional file 1-Supplementary information Section 2.5. BIB002 applied this method to a model of the glycolysis and pentose phosphate pathway in E. coli (122 parameters and 22 reactions). Employing sensitivity analysis and PCA, 49 of the parameters could be discarded from the model whilst retaining an acceptable error bound. BIB003 applied an approach using sensitivity analysis, PCA and flux analysis to determine which reactions can be eliminated from a signalling model of the EGFR pathway.
They demonstrated that (in one module of the pathway) the number of reactions could be reduced from 85 to 64 whilst retaining a 5% error bound. Smets et al. (2002) used the same approach for a model of gene expression in the Azospirillum brasilense Sp7 bacterium. Here, 14 parameters in the full model were reduced to 6 without a substantial loss of accuracy. BIB005 introduced an algorithmic derivative-based sensitivity analysis approach to rank parameter importance. The algorithm then attempts to eliminate each parameter in order of sensitivity and gauges the sensitivity of the model output to each elimination. Unfortunately, the resulting reduction was not reliable and demonstrated that local sensitivity analysis is not always sufficient to capture the desired behaviour of the system.
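To make the local workflow concrete, the following Python sketch estimates normalised local sensitivity coefficients by finite differences around a nominal operating point and then applies a PCA-style ranking to the resulting sensitivity matrix. It is a minimal illustration only: the two-state mass-action model, its rate constants, the perturbation size and the importance score are assumptions chosen for this example and are not taken from any of the studies cited above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-state mass-action model (illustrative only):
#   dx1/dt = p0 - p1*x1 - p2*x1*x2
#   dx2/dt = p2*x1*x2 - p3*x2
def rhs(t, x, p):
    x1, x2 = x
    return [p[0] - p[1] * x1 - p[2] * x1 * x2,
            p[2] * x1 * x2 - p[3] * x2]

p_star = np.array([1.0, 0.5, 0.8, 0.3])      # nominal operating point p*
x0 = np.array([0.5, 0.5])
t_eval = np.linspace(0.0, 20.0, 100)

def simulate(p):
    sol = solve_ivp(rhs, (0.0, 20.0), x0, args=(p,), t_eval=t_eval, rtol=1e-8)
    return sol.y                              # shape (n_states, n_times)

# Normalised local sensitivities r_ij(t) = (p_j / x_i(t)) * dx_i(t)/dp_j,
# estimated by central finite differences around p*.
base = simulate(p_star)
S = np.zeros((base.size, p_star.size))        # rows index (state, time) pairs
for j in range(p_star.size):
    dp = np.zeros_like(p_star)
    dp[j] = 1e-4 * p_star[j]
    dxdp = (simulate(p_star + dp) - simulate(p_star - dp)) / (2.0 * dp[j])
    S[:, j] = (p_star[j] * dxdp / np.maximum(base, 1e-12)).ravel()

# PCA of the scaled sensitivity matrix: eigen-decompose S^T S and use the
# principal components to score each parameter's overall influence.
evals, evecs = np.linalg.eigh(S.T @ S)
importance = np.sqrt((evecs ** 2 * evals).sum(axis=1))
for j in np.argsort(importance):
    print(f"p[{j}]: importance score ~ {importance[j]:.3g}")
# Parameters with the smallest scores are candidates for fixing or elimination.
```

In practice the sensitivity coefficients are often obtained by integrating forward sensitivity equations alongside the model rather than by finite differences, particularly when parameters span several orders of magnitude.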
|
Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Global Sensitivity Analysis <s> BackgroundSensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species.ResultsWe present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the four approximation techniques considered in this paper is orders of magnitude smaller than traditional Monte Carlo estimation. Software, coded in MATLAB®, which implements all sensitivity analysis techniques discussed in this paper, is available free of charge.ConclusionsEstimating variance-based sensitivity indices of a large biochemical reaction system is a computationally challenging task that can only be addressed via approximations. Among the methods presented in this paper, a technique based on orthonormal Hermite polynomials seems to be an acceptable candidate for the job, producing very good approximation results for a wide range of uncertainty levels in a fraction of the time required by traditional Monte Carlo sampling. <s> BIB001 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Global Sensitivity Analysis <s> With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. 
In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results. <s> BIB002 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Global Sensitivity Analysis <s> Acute Lymphoblastic Leukemia, commonly known as ALL, is a predominant form of cancer during childhood. With the advent of modern healthcare support, the 5-year survival rate has been impressive in the recent past. However, long-term ALL survivors embattle several treatment-related medical and socio-economic complications due to excessive and inordinate chemotherapy doses received during treatment. In this work, we present a model-based approach to personalize 6-Mercaptopurine (6-MP) treatment for childhood ALL with a provision for incorporating the pharmacogenomic variations among patients. Semi-mechanistic mathematical models were developed and validated for i) 6-MP metabolism, ii) red blood cell mean corpuscular volume (MCV) dynamics, a surrogate marker for treatment efficacy, and iii) leukopenia, a major side-effect. With the constraint of getting limited data from clinics, a global sensitivity analysis based model reduction technique was employed to reduce the parameter space arising from semi-mechanistic models. The reduced, sensitive parameters were used to individualize the average patient model to a specific patient so as to minimize the model uncertainty. Models fit the data well and mimic diverse behavior observed among patients with minimum parameters. The model was validated with real patient data obtained from literature and Riley Hospital for Children in Indianapolis. Patient models were used to optimize the dose for an individual patient through nonlinear model predictive control. The implementation of our approach in clinical practice is realizable with routinely measured complete blood counts (CBC) and a few additional metabolite measurements. The proposed approach promises to achieve model-based individualized treatment to a specific patient, as opposed to a standard-dose-for-all, and to prescribe an optimal dose for a desired outcome with minimum side-effects. <s> BIB003
|
Local sensitivity analysis approaches are strongly dependent upon the nonlinearity in the system and the point p* at which the coefficients are evaluated. The obtained sensitivity coefficient estimates will not necessarily remain accurate far from this point and can give misleading results where nonlinear effects are involved. More statistical approaches that involve sampling large volumes of the parameter space and evaluating the interaction between multiple parameters can lead to more objective estimates of sensitivity. These approaches, known as global sensitivity analysis methods, attempt to establish better estimates of how perturbations in a model's parameterisation propagate through the system and how they affect the model output. Estimating global sensitivity indices can be a challenging task, as it is typically not possible to evaluate them analytically. Hence, researchers resort to numerical approaches where, for large systems, such a process can be extremely computationally expensive due to the need to test sensitivity over a large range of parameter space. A wide range of methods to achieve this exist in the literature, as reviewed by BIB001 , with Monte Carlo sampling being perhaps the most common. Additionally, whilst it does not cover the application of sensitivity analysis to model reduction, BIB002 provides a review of sensitivity analysis methods seen in the literature, including a survey of global sensitivity analyses that have been applied to systems biology models and their estimated computational cost. The use of global sensitivity analysis methods in the reduction of biochemical systems models has seen limited application. One notable example introduced a method of multiparametric variability analysis (MPVA), which tests the sensitivity of the objective function in response to multiple parameter changes simultaneously, as opposed to testing a single parameter's sensitivity at a time. A genetic algorithm (GA)-based approach is then used to search parameter space and find reduced parameter sets that accurately replicate the original dynamics of the output. This approach is demonstrated by application to a 17-dimensional model of the GTPase-cycle module with 48 associated rate parameters. The results show that good agreement can be obtained whilst retaining only 17 parameters. BIB003 applied Sobol's global sensitivity analysis method to three mechanistic models associated with the use of chemotherapy in the treatment of acute lymphoblastic leukaemia. They were able to reduce the number of parameters across the models from 23 to 12. This enabled parameter fitting of these models for individual patients and hence the development of individualised treatment schemes.
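As a concrete illustration of the sampling cost involved, the sketch below estimates first-order Sobol indices for a toy model using a Saltelli-style pick-and-freeze scheme in plain NumPy/SciPy. The model, the assumed parameter ranges (roughly ±50% around nominal values) and the small sample size are hypothetical choices for illustration; dedicated sensitivity analysis packages and far larger sample sizes are normally required for converged indices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical toy model and scalar output (illustrative only): the output is the
# concentration of the second species at t = 10.
def output(p):
    def rhs(t, x):
        x1, x2 = x
        return [p[0] - p[1] * x1 - p[2] * x1 * x2,
                p[2] * x1 * x2 - p[3] * x2]
    sol = solve_ivp(rhs, (0.0, 10.0), [0.5, 0.5], rtol=1e-6)
    return sol.y[1, -1]

rng = np.random.default_rng(0)
n_params, N = 4, 256                          # small N only to keep the sketch quick
lo = np.array([0.5, 0.25, 0.4, 0.15])         # assumed lower bounds of the ranges
hi = np.array([1.5, 0.75, 1.2, 0.45])         # assumed upper bounds of the ranges

# Two independent sample matrices A and B over the parameter hypercube.
A = lo + (hi - lo) * rng.random((N, n_params))
B = lo + (hi - lo) * rng.random((N, n_params))
fA = np.array([output(p) for p in A])
fB = np.array([output(p) for p in B])
var_y = np.var(np.concatenate([fA, fB]))

# First-order Sobol index S_i: fraction of output variance explained by p_i alone,
# using the pick-and-freeze estimator of Saltelli et al. (2010).
for i in range(n_params):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                       # matrix A with column i taken from B
    fABi = np.array([output(p) for p in ABi])
    S_i = np.mean(fB * (fABi - fA)) / var_y
    print(f"p[{i}]: first-order Sobol index ~ {S_i:.2f}")
# Parameters with indices near zero contribute little to the output variance and
# are candidates for fixing at nominal values during model reduction.
```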
|
Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Optimisation Approaches <s> The complexity of full-scale metabolic models is a major obstacle for their effective use in computational systems biology. The aim of model reduction is to circumvent this problem by eliminating parts of a model that are unimportant for the properties of interest. The choice of reduction method is influenced both by the type of model complexity and by the objective of the reduction; therefore, no single method is superior in all cases. In this study we present a comparative study of two different methods applied to a 20D model of yeast glycolytic oscillations. Our objective is to obtain biochemically meaningful reduced models, which reproduce the dynamic properties of the 20D model. The first method uses lumping and subsequent constrained parameter optimization. The second method is a novel approach that eliminates variables not essential for the dynamics. The applications of the two methods result in models of eight (lumping), six (elimination) and three (lumping followed by elimination) dimensions. All models have similar dynamic properties and pin-point the same interactions as being crucial for generation of the oscillations. The advantage of the novel method is that it is algorithmic, and does not require input in the form of biochemical knowledge. The lumping approach, however, is better at preserving biochemical properties, as we show through extensive analyses of the models. <s> BIB001 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Optimisation Approaches <s> Mathematical model reduction is a long-standing technique used both to gain insight into model subprocesses and to reduce the computational costs of simulation and analysis. A reduced model must retain essential features of the full model, which, traditionally, have been the trajectories of certain state variables. For biological clocks, timing, or phase, characteristics must be preserved. A key performance criterion for a clock is the ability to adjust its phase correctly in response to external signals. We present a novel model reduction technique that removes components from a single-oscillator clock model and discover that four feedback loops are redundant with respect to its phase response behavior. Using a coupled multioscillator model of a circadian clock, we demonstrate that by preserving the phase response behavior of a single oscillator, we preserve timing behavior at the multioscillator level. <s> BIB002 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Optimisation Approaches <s> Quantitative modelling and analysis of biochemical networks is challenging because of the inherent complexities and nonlinearities of the system and the limited availability of parameter values. Even if a mathematical model of the network can be developed, the lack of large-scale good-quality data makes accurate estimation of a large number of parameters impossible. Hence, coarse-grained models (CGMs) consisting of essential biochemical mechanisms are more suitable for computational analysis and for studying important systemic functions. The central question in constructing a CGM is which mechanisms should be deemed 'essential' and which can be ignored. Also, how should parameter values be defined when data are sparse? 
A mixed-integer nonlinear-programming (MINLP) based optimisation approach to coarse-graining is presented. Starting with a detailed biochemical model with associated computational details (reaction network and mathematical description) and data on the biochemical system, the structure and the parameters of a CGM can be determined simultaneously. In this optimisation problem, the authors use a genetic algorithm to simultaneously identify parameter values and remove unimportant reactions. The methodology is exemplified by developing two CGMs for the GTPase-cycle module of M1 muscarinic acetylcholine receptor, Gq, and regulator of G protein signalling 4 [RGS4, a GTPase-activating protein (GAP)] starting from a detailed model of 48 reactions. Both the CGMs have only 17 reactions, fit experimental data well and predict, as does the detailed model, four limiting signalling regimes (LSRs) corresponding to the extremes of receptor and GAP concentration. The authors demonstrate that coarse-graining, in addition to resulting in a reduced-order model, also provides insights into the mechanisms in the network. The best CGM obtained for the GTPase cycle also contains an unconventional mechanism and its predictions explain an old problem in pharmacology, the biphasic (bell-shaped) response to certain drugs. The MINLP methodology is broadly applicable to larger and complex (dense) biochemical modules. <s> BIB003 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Optimisation Approaches <s> Biological system models are routinely developed in modern systems biology research following appropriate modelling/experiment design cycles. Frequently these take the form of high-dimensional nonlinear Ordinary Differential Equations that integrate information from several sources; they usually contain multiple time-scales making them difficult even to simulate. These features make systems analysis (understanding of robust functionality) - or redesign (proposing modifications in order to improve or modify existing functionality) a particularly hard problem. In this paper we use concepts from systems theory to develop two complementary tools that can help us understand the complex behaviour of such system models: one based on model decomposition and one on model reduction. Our aim is to algorithmically produce biologically meaningful, simplified models, which can then be used for further analysis and design. The tools presented are applied on a model of the Epidermal Growth Factor signalling pathway. <s> BIB004 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Optimisation Approaches <s> Biological systems are typically modelled by nonlinear differential equations. In an effort to produce high fidelity representations of the underlying phenomena, these models are usually of high dimension and involve multiple temporal and spatial scales. However, this complexity and associated stiffness makes numerical simulation difficult and mathematical analysis impossible. In order to understand the functionality of these systems, these models are usually approximated by lower dimensional descriptions. These can be analysed and simulated more easily, and the reduced description also simplifies the parameter space of the model. 
This model reduction inevitably introduces error: the accuracy of the conclusions one makes about the system, based on reduced models, depends heavily on the error introduced in the reduction process. In this paper we propose a method to calculate the error associated with a model reduction algorithm, using ideas from dynamical systems. We first define an error system, whose output is the error between observables of the original and reduced systems. We then use convex optimisation techniques in order to find approximations to the error as a function of the initial conditions. In particular, we use the Sum of Squares decomposition of polynomials in order to compute an upper bound on the worst-case error between the original and reduced systems. We give biological examples to illustrate the theory, which leads us to a discussion about how these techniques can be used to model-reduce large, structured models typical of systems biology. <s> BIB005 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Optimisation Approaches <s> In this paper, a model reduction procedure is proposed for the simplification of biochemical reaction network models. The approach is capable of reducing ODE models where the right hand side of the equations contains polynomial and/or rational function terms. The method is based on a finite number of mixed integer quadratic programming (MIQP) steps where the objective function effectively measures the fit between the time functions of the selected concentrations of the original and the reduced models, and the integer variables keep track of the presence of individual reactions. The procedure also contains the re-estimation of rate coefficients in the reduced model to minimize the defined model error. Two examples taken from the literature illustrate the operation of the method. <s> BIB006
|
An 'optimisation approach' here refers to those methods of model reduction that seek to reduce a system by testing a range of 'candidate' reduced models of a given dimensionality, calculating an associated error metric (based upon either a posteriori or a priori information) for each, and then selecting the best possible reduction. Of key interest is how the set of candidate reduced models is selected or sampled and what measure of model reduction error is employed in their evaluation. Such methods share a similarity with sensitivity analysis in that they are essentially testing the sensitivity of the error to changes (albeit typically in terms of species as opposed to reactions) in the reduced system. A large range of optimisation-based reduction approaches have been applied in the context of modelling biochemical reaction networks. BIB001 have developed and applied an approach they term elimination of nonessential variables (ENVA). Here the system is repeatedly simulated with, one by one, each state-variable eliminated by fixing it at its steady-state value. For a given dimensionality, the reduced model that most accurately reflects the original model dynamics is then returned. The method was applied to a 20-dimensional model of yeast glycolysis where it was able to yield an accurate 6-dimensional reduced model. BIB003 developed a method that simultaneously uses a model reduction and a parameter re-estimation algorithm. Here the least influential reaction rates are set to zero to obtain a reduction in the number of reactions. The optimal arrangement for eliminating reactions is expressed as a mixed integer nonlinear programming problem that is solved via a GA. This approach is demonstrated via application to a model of the GTPase-cycle, and it is shown that the original 48 reactions in the system can accurately be reduced to 17 whilst retaining sufficient predictive accuracy. BIB006 highlighted a similar method for the optimal elimination of reactions, expressed as a mixed integer quadratic programming problem. Their approach was demonstrated via application to a model of the Arabidopsis thaliana circadian clock involving 7 state-variables and 27 reactions. The model was reduced under three cases relating to no light, a constant light source and a pulsing light source. Across these cases they were able to reduce the model by between 1 and 4 parameters whilst retaining an average error in the species dynamics of <6%. BIB002 describe an optimisation approach based upon the 'parametric impulse phase response curve' (pIPRC), which essentially describes how the phase of the limit-cycle in an oscillatory model varies in response to changes in parameter values, and upon the error associated with approximating such a cycle. Their reduction methodology is then based upon a minimisation of both the number of state-variables and the pIPRC-associated error, such that the reduced model seeks to preserve the oscillation phase. Given these nonlinear constraints, the optimisation problem is solved via a GA that seeks to fix the values of unnecessary state-variables. This approach was demonstrated via application to a 61-dimensional model of the mammalian circadian clock, which was accurately reduced to 13 dimensions whilst incurring only a 5% error in the pIPRC. BIB004 and BIB005 have developed methods for obtaining an a priori upper bound on the worst-case reduction error, under the $L_2$ norm, associated with a particular reduced model.
In their initial work the bound required a time-varying linearisation of the system, such that an error estimate could be calculated by solving a Lyapunov equation. More recently, a worst-case error bound for the nonlinear system has been developed using the sum of squares decomposition for polynomials. These bounds have been used to develop an optimisation-based method of model reduction. Such an approach will often be faster than other methods, as no simulation of the system is required to obtain a metric of reduction accuracy.
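A minimal sketch in the spirit of the ENVA-style candidate-and-test procedures discussed above is given below: candidate reductions are generated by fixing individual state-variables at an estimated steady-state value, each candidate is simulated, and the candidate with the smallest trajectory error is accepted greedily. The three-state network, its rate constants, the error metric and the number of greedy steps are invented for illustration and do not correspond to any of the cited models.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical three-state toy network (rate constants invented for illustration).
def rhs(t, x, frozen, x_fix):
    x = np.where(frozen, x_fix, x)               # frozen species are held constant
    x1, x2, x3 = x
    dx = np.array([1.0 - 0.6 * x1 - 0.9 * x1 * x2,
                   0.9 * x1 * x2 - 0.4 * x2 + 0.2 * x3,
                   0.4 * x2 - 0.7 * x3])
    return np.where(frozen, 0.0, dx)             # frozen species do not evolve

x0 = np.array([0.2, 0.2, 0.2])
t_eval = np.linspace(0.0, 30.0, 200)

def simulate(frozen, x_fix):
    x_init = np.where(frozen, x_fix, x0)         # start frozen species at their fixed value
    sol = solve_ivp(rhs, (0.0, 30.0), x_init, args=(frozen, x_fix),
                    t_eval=t_eval, rtol=1e-8)
    return sol.y

none_frozen = np.zeros(3, dtype=bool)
full = simulate(none_frozen, x0)
x_ss = full[:, -1]                               # crude steady-state estimate from the full run

# Greedy loop: at each step, fix the one additional species whose elimination
# perturbs the trajectories of all species the least.
frozen = none_frozen.copy()
for step in range(2):
    errors = {}
    for i in np.where(~frozen)[0]:
        trial = frozen.copy()
        trial[i] = True
        reduced = simulate(trial, x_ss)
        errors[i] = np.linalg.norm(reduced - full) / np.linalg.norm(full)
    best = min(errors, key=errors.get)
    frozen[best] = True
    print(f"step {step + 1}: fix species {best}, relative trajectory error {errors[best]:.3f}")
```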
|
Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Exact Versus Approximate Schemes <s> A general analysis of approximate lumping is presented. This analysis can be applied to any reaction system with n species described by dy/dt =f(y), where y is an n-dimensional vector in a desired region Ω, and f(y) is an arbitrary n-dimensional function vector. Here we consider lumping by means of a rectangular constant matrix M (i.e. ŷ = My, where M is a row-full rank matrix and ŷ has dimension n not larger than n). The observer theory initiated by Luenberger is formally employed to obtain the kinetic equations and discuss the properties of the approximately lumped system. The approximately lumped kinetic equations have the same form dŷ/dt = Mf/My) as that for exactly lumped ones, but depend on the choice of the generalized inverse M of M. {1,2,3,4}-inverse is a good choice of the generalized inverse of M. The equations to determine the approximate lumping matrices M are presented. These equations can be solved by iteration. An approach for choosing suitable initial iteration values of the equations is illustrated by examples. <s> BIB001 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Exact Versus Approximate Schemes <s> Abstract A general analysis of exact nonlinear lumping is presented. This analysis can be applied to the kinetics of any reaction system with n species described by a set of first-order ordinary differential equations d y /d t = f ( y ), where y is an n -dimensional vector and f ( y ) is an arbitrary n -dimensional function vector. We consider lumping by means of n ( n ⩽ n )-dimensional arbitrary transformation ŷ = h ( y ). The lumped differential equation system is d y D t = y ( h (ŷ))f( h (ŷ)) , where h y (y) is teh Jacobian matrix of h(y) , h is a generalized inverse transformation of h satisfying the relation h( h ) = I n . Three necessary and sufficient conditions of the existence of exact nonlinear lumping schemes have been determined. The geometric and algebraic interpretations of these conditions are discussed. It is found that a system is exactly lumpable by h only if h(y) = 0 is its invariant manifold. A linear partial differential operator A = Σ n i =1 f i ( y )ϑ/ϑ y i corresponding to d y d t = f(y ) is defined. Using the eigenfunctions and the generalized eigenfunctions of A , the operator can be transformed to Jordan or diagonal canonical forms which give the lumped differential equation systems without determination of h . These approaches are illustrated by a simple example. The results of this analysis serve as a theoretical basis for the development of approaches for approximate nonlinear lumping. <s> BIB002 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Exact Versus Approximate Schemes <s> The complexity of full-scale metabolic models is a major obstacle for their effective use in computational systems biology. The aim of model reduction is to circumvent this problem by eliminating parts of a model that are unimportant for the properties of interest. The choice of reduction method is influenced both by the type of model complexity and by the objective of the reduction; therefore, no single method is superior in all cases. In this study we present a comparative study of two different methods applied to a 20D model of yeast glycolytic oscillations. 
Our objective is to obtain biochemically meaningful reduced models, which reproduce the dynamic properties of the 20D model. The first method uses lumping and subsequent constrained parameter optimization. The second method is a novel approach that eliminates variables not essential for the dynamics. The applications of the two methods result in models of eight (lumping), six (elimination) and three (lumping followed by elimination) dimensions. All models have similar dynamic properties and pin-point the same interactions as being crucial for generation of the oscillations. The advantage of the novel method is that it is algorithmic, and does not require input in the form of biochemical knowledge. The lumping approach, however, is better at preserving biochemical properties, as we show through extensive analyses of the models. <s> BIB003 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Exact Versus Approximate Schemes <s> BackgroundCombinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations.ResultsWe introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows to dissect the model into smaller modules that are called layers and can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible.ConclusionThe new layer based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable. Additionally, the method provides very good approximations especially for macroscopic variables. It can be combined with existing reduction methods without any difficulties. <s> BIB004 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Exact Versus Approximate Schemes <s> An algorithm for automatic order reduction of models defined by large systems of differential equations is presented. 
The algorithm was developed with systems biology models in mind and the motivation behind it is to develop mechanistic pharmacokinetic/pharmacodynamic models from already available systems biology models. The approach used for model reduction is proper lumping of the system's states and is based on a search through the possible combinations of lumps. To avoid combinatorial explosion, a heuristic, greedy search strategy is employed and comparison with the full exhaustive search provides evidence that it performs well. The method takes advantage of an apparent property of this kind of systems that lumps remain consistent over different levels of order reduction. Advantages of the method presented include: the variables and parameters of the reduced model retain a specific physiological meaning; the algorithm is automatic and easy to use; it can be used for nonlinear models and can handle parameter uncertainty and constraints. The algorithm was applied to a model of NF-B signalling pathways in order to demonstrate its use and performance. Significant reduction was achieved for the example model, while agreement with the original model was proportional to the size of the reduced model, as expected. The results of the model reduction were compared with a published, intuitively reduced model of NF-B signalling pathways and were found to be in agreement, in terms of the identified key species for the system's kinetic behaviour. The method may provide useful insights which are complementary to the intuitive reduction approach, especially in large systems. <s> BIB005 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Exact Versus Approximate Schemes <s> BackgroundSystems biology models tend to become large since biological systems often consist of complex networks of interacting components, and since the models usually are developed to reflect various mechanistic assumptions of those networks. Nevertheless, not all aspects of the model are equally interesting in a given setting, and normally there are parts that can be reduced without affecting the relevant model performance. There are many methods for model reduction, but few or none of them allow for a restoration of the details of the original model after the simplified model has been simulated.ResultsWe present a reduction method that allows for such a back-translation from the reduced to the original model. The method is based on lumping of states, and includes a general and formal algorithm for both determining appropriate lumps, and for calculating the analytical back-translation formulas. The lumping makes use of efficient methods from graph-theory and ϵ-decomposition and is derived and exemplified on two published models for fluorescence emission in photosynthesis. The bigger of these models is reduced from 26 to 6 states, with a negligible deviation from the reduced model simulations, both when comparing simulations in the states of the reduced model and when comparing back-translated simulations in the states of the original model. The method is developed in a linear setting, but we exemplify how the same concepts and approaches can be applied to non-linear problems. Importantly, the method automatically provides a reduced model with back-translations. 
Also, the method is implemented as a part of the systems biology toolbox for matlab, and the matlab scripts for the examples in this paper are available in the supplementary material.ConclusionsOur novel lumping methodology allows for both automatic reduction of states using lumping, and for analytical retrieval of the original states and parameters without performing a new simulation. The two models can thus be considered as two degrees of zooming of the same model. This is a conceptually new development of model reduction approaches, which we think will stimulate much further research and will prove to be very useful in future modelling projects. <s> BIB006 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Exact Versus Approximate Schemes <s> BackgroundModels of biochemical systems are typically complex, which may complicate the discovery of cardinal biochemical principles. It is therefore important to single out the parts of a model that are essential for the function of the system, so that the remaining non-essential parts can be eliminated. However, each component of a mechanistic model has a clear biochemical interpretation, and it is desirable to conserve as much of this interpretability as possible in the reduction process. Furthermore, it is of great advantage if we can translate predictions from the reduced model to the original model.ResultsIn this paper we present a novel method for model reduction that generates reduced models with a clear biochemical interpretation. Unlike conventional methods for model reduction our method enables the mapping of predictions by the reduced model to the corresponding detailed predictions by the original model. The method is based on proper lumping of state variables interacting on short time scales and on the computation of fraction parameters, which serve as the link between the reduced model and the original model. We illustrate the advantages of the proposed method by applying it to two biochemical models. The first model is of modest size and is commonly occurring as a part of larger models. The second model describes glucose transport across the cell membrane in baker's yeast. Both models can be significantly reduced with the proposed method, at the same time as the interpretability is conserved.ConclusionsWe introduce a novel method for reduction of biochemical models that is compatible with the concept of zooming. Zooming allows the modeler to work on different levels of model granularity, and enables a direct interpretation of how modifications to the model on one level affect the model on other levels in the hierarchy. The method extends the applicability of the method that was previously developed for zooming of linear biochemical models to nonlinear models. <s> BIB007 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Exact Versus Approximate Schemes <s> Bridging systems biology and pharmacokinetics–pharmacodynamics has resulted in models that are highly complex and complicated. They usually contain large numbers of states and parameters and describe multiple input–output relationships. Based on any given data set relating to a specific input–output process, it is possible that some states of the system are either less important or have no influence at all. 
In this study, we explore a simplification of a systems pharmacology model of the coagulation network for use in describing the time course of fibrinogen recovery after a brown snake bite. The technique of proper lumping is used to simplify the 62-state systems model to a 5-state model that describes the brown snake venom–fibrinogen relationship while maintaining an appropriate mechanistic relationship. The simplified 5-state model explains the observed decline and recovery in fibrinogen concentrations well. The techniques used in this study can be applied to other multiscale models. <s> BIB008
|
An exact lumping is one where the dynamics of the reduced system exactly reproduce the corresponding dynamics of the original system, expressed using only the new, lumped state-variables BIB002 . The conditions for exactness only hold true for a certain subset of lumping schemes and for models with specific properties. As a result, the majority of naive lumping schemes, and most of the lumping methodologies discussed in the literature, will only provide approximate reductions. The issue of how to choose a lumping that will minimise the approximation error comprises the main topic of many papers in the literature. Given the above definitions, the term lumping is generally used to refer to linear, proper lumping in the literature. When applied to systems in the form of Eq. (2), this implies reduction via some linear projection $L \in \{0,1\}^{\hat{n} \times n}$, where the rows of L are pairwise orthogonal. The reduced state-variables $\hat{x}(t)$ can then be computed as $\hat{x}(t) = Lx(t)$. The dynamics of the system now acting upon the reduced variables $\hat{x}(t)$ can be obtained via application of the Petrov-Galerkin projection as previously outlined. This yields a reduced system of the form $\dot{\hat{x}}(t) = Lf(\bar{L}\hat{x}(t))$. Note that $\bar{L}$ can be any generalised inverse of L, and therefore, an infinite number of ways of constructing such a matrix exist. In the original Wei and Kuo papers outlining linear, proper lumping they suggest selecting the $\bar{L}$ that reconstructs the steady state of the system, such that $x^* = \bar{L}Lx^*$ with $x^* = \lim_{t \to +\infty} x(t)$. In contrast, BIB005 , following the work of BIB001 , suggest using the Moore-Penrose inverse $L^+$, presumably for the purposes of simplicity and ease of calculation. This choice of lumping inverse, however, can have a significant influence on the model reduction error obtained. An example of the application of linear, proper lumping to a nonlinear example model is given in Additional file 1-Supplementary information Section 2.6. In recent years, lumping has been used to reduce a number of biochemical systems in the literature. BIB003 applied an approach of lumping and subsequent optimisation (which they term LASCO) to a 20-dimensional model of yeast glycolysis. It was demonstrated that this system could be reduced to 8 dimensions whilst retaining good accuracy. It was also shown that subsequent application of their ENVA reduction approach (as previously outlined) could accurately produce further reductions in the model down to a system of only 3 dimensions that maintained the existence of a Hopf bifurcation. BIB005 introduced an algorithmic approach for linear, proper lumping. This is an optimisation-based reduction approach using lumping to obtain candidate reduced models. Their approach seeks to sum two state-variables at each step, testing every possible pair by simulating the resulting reduced model and comparing its output with the original. At each step the pair resulting in the most accurate reduction is lumped, and then the process is repeated a pair at a time. This is continued until the desired reduced dimensionality is reached. Clearly, for large models this can lead to an enormous number of lumpable pairs that need to be tested; however, a range of enhancements to reduce the computational burden of this approach were also provided. Much like Danø et al., subsequent parameter optimisation was also suggested to improve the fit of the reduced model to simulated data from the original. This approach was applied to a 26-dimensional model of the NF-κB signalling pathway.
Reasonable agreement with the original model was retained down to around 13 reduced state-variables, below which the oscillatory behaviour of the system was lost. BIB008 applied the Dokoumetzidis and Aarons methodology to a 62-dimensional model studying the effect of snake venom administration. It was shown that a 5-dimensional model can be produced which reflects the original system dynamics to within a maximal relative error of 20%. BIB004 applied a lumping-style approach they termed 'layer-based reduced modelling'. Finding a lumping under this approach requires a relatively good a priori understanding of the model in order to decompose it into lumpable modules. All components that are strongly connected by a specified class of reactions are considered a 'layer' and are subsequently lumped together. Most notably, they apply their approach to a model of an extended subsystem of the insulin signalling pathway, reducing the 24-dimensional system to 11 dimensions with a reduction error 'within the range of measurement errors in typical experiments'. BIB006 BIB007 introduced proper lumping approaches with an emphasis on the 'zoomability' of the model, i.e. the ability to switch between particular dimensionalities of reduced models depending upon the application and accuracy desired. This was achieved via use of specific, fractional lumping inverses. In both papers the methods used for finding a suitable lumping have their basis in timescale analysis of the system. In their first paper BIB006 a method was developed to analyse linear systems, under which the system is decomposed into fast and slow species. The algorithm then uses a graph-theoretic approach to analyse the fast part of the system, looking for strongly connected components. If found, lumping of the associated species is attempted along with lumping of any linked sink state-variables. This approach is demonstrated via application to a 26-dimensional model of fluorescence emission in photosynthesis, which is reduced to 6 dimensions, yielding only a negligible difference in the output profile of the reduced model. In the second paper BIB007 they extend their approach to nonlinear models. To find a suitable lumping for a nonlinear system they begin by decomposing the model into fast and slow reactions. Conservation analysis is then applied to the stoichiometry matrix associated only with the fast reactions in the system to find what they term the 'apparent conservation relations'. Subsets of the variables in these apparent conservation relations are then lumped to produce a reduced model. This methodology is used to reduce a model of glycolysis in S. cerevisiae from 9 down to 5 state-variables, which still provides an 'excellent description of the state dynamics'.
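The sketch below illustrates linear, proper lumping with a Moore-Penrose lumping inverse on an invented four-state chain in which two species interconvert quickly, making their sum a natural lump. The lumping matrix, rate constants and error metric are assumptions chosen purely for illustration rather than a reproduction of any of the cited reductions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical four-state chain (rate constants invented): x2 and x3 interconvert
# quickly, which makes the pool x2 + x3 a natural candidate lump.
k_in, k12, k23, k32, k3out = 1.0, 0.8, 50.0, 40.0, 0.6

def f(x):
    x1, x2, x3, x4 = x
    return np.array([k_in - k12 * x1,
                     k12 * x1 - k23 * x2 + k32 * x3,
                     k23 * x2 - k32 * x3 - k3out * x3,
                     k3out * x3])

# Proper lumping matrix: rows are orthogonal 0/1 vectors, so x_hat = L x.
L = np.array([[1, 0, 0, 0],
              [0, 1, 1, 0],      # lump the fast pair (x2, x3) into a single pool
              [0, 0, 0, 1]], dtype=float)
L_bar = np.linalg.pinv(L)        # Moore-Penrose inverse as one choice of lumping inverse

x0 = np.array([1.0, 0.0, 0.0, 0.0])
t_eval = np.linspace(0.0, 10.0, 200)

full = solve_ivp(lambda t, x: f(x), (0.0, 10.0), x0, t_eval=t_eval, rtol=1e-8)

# Reduced dynamics from the projection: d/dt x_hat = L f(L_bar x_hat).
reduced = solve_ivp(lambda t, xh: L @ f(L_bar @ xh), (0.0, 10.0), L @ x0,
                    t_eval=t_eval, rtol=1e-8)

lumped_full = L @ full.y         # original trajectories mapped into the lumped space
rel_err = np.linalg.norm(reduced.y - lumped_full) / np.linalg.norm(lumped_full)
print(f"relative error of the lumped 3-state model: {rel_err:.3f}")
```

Replacing the pseudo-inverse with a lumping inverse chosen to reproduce the steady-state distribution of the lumped species, as in the Wei and Kuo formulation, changes the disaggregation step and can noticeably alter the reduction error.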
|
Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Balanced Truncation <s> Abstract In the area of genetically engineered micro-organisms grown in bioreactors, mathematical modeling usually results in balance type models in wiving (i) a (rather) large number of state variables and, (ii) complicated kinetic expressions containing a large number of parameters. Therefore, a generic methodology is developed to reduce the model complexity at the level of the kinetics, while maintaining high prediction power. As a case study to illustrate the method and results obtained, the influence of the dissolved oxygen concentration on the cytN gene expression in the bacterium Azospirillum brasilense Sp7 is modeled. <s> BIB001 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Balanced Truncation <s> Modelling of biochemical systems usually focuses on certain pathways, while the concentrations of so-called external metabolites are considered fixed. This approximation ignores feedback loops mediated by the environment, that is, via external metabolites and reactions. To achieve a more realistic, dynamic description that is still numerically efficient, we propose a new methodology: the basic idea is to describe the environment by a linear effective model of adjustable dimensionality. In particular, we (a) split the entire model into a subsystem and its environment, (b) linearize the environment model around a steady state, and (c) reduce its dimensionality by balanced truncation, an established method for large-scale model reduction. The reduced variables describe the dynamic modes in the environment that dominate its interaction with the subsystem. We compute metabolic response coefficients that account for complexity-reduced dynamics of the environment. Our simulations show that a dynamic environment model can improve the simulation results considerably, even if the environment model has been drastically reduced and if its kinetic parameters are only approximately known. The speed-up in computation gained by model reduction may become vital for parameter estimation in large cell models. <s> BIB002 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Balanced Truncation <s> System reduction is applied to a mathematical model of glycolysis in yeast and to a chain network without and with feedback. The method of system reduction used is linearization of a rational positive system at a steady state, balancing of the local linear system, and truncation of the balanced linear system. For a model of glycolysis in yeast with glucose as input and pyruvate as output, it is shown that a third order linear system locally approximates well the original thirteenth order nonlinear system. <s> BIB003 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Balanced Truncation <s> This paper addresses the problem of model reduction for dynamical system models that describe biochemical reaction networks. Inherent in such models are properties such as stability, positivity and network structure. Ideally these properties should be preserved by model reduction procedures, although traditional projection based approaches struggle to do this. We propose a projection based model reduction algorithm which uses generalised block diagonal Gramians to preserve structure and positivity. 
Two algorithms are presented, one provides more accurate reduced order models, the second provides easier to simulate reduced order models. <s> BIB004
|
One SVD method that has been employed in the reduction of biochemical systems is that of balanced truncation BIB002 (Meyer-Bäse and Theis 2008). The method is most commonly used in the field of control theory and was originally devised in the early 1980s. It was subsequently refined by a number of authors and has become a well-developed methodology covered in many textbooks on control theory BIB001 . It is applicable to controlled models in a state-space representation form and focuses on reducing systems whilst preserving the overall input-output behaviour of the model. Typically, the method is used for the simplification of time-invariant, linear systems and does not rely upon timescale separation of fast and slow processes (Fig. 5). Crucially, balanced truncation seeks to exploit the concepts of controllability (how strongly each of the state-variables responds to changes in the input) and observability (how strongly the output responds to changes in the state-variables). To quantify these concepts it is possible to construct a pair of matrices known as the controllability and observability Gramians. Balanced truncation seeks a 'balancing' transformation of the state-variables under which these Gramians are equalised and diagonalised. This implies that the transformed state-variables are orthogonal in the input-output space of the model, and those contributing least to the overall input-output relationship can therefore be truncated without impacting the remaining variables. In the linear case, balanced truncation begins with a controlled system of the form $\dot{x} = Ax + Bu$, $y = Cx$. The controllability and observability Gramians, P and Q, respectively, can then be obtained by solving the Lyapunov equations $AP + PA^{\top} + BB^{\top} = 0$ and $A^{\top}Q + QA + C^{\top}C = 0$. The aim is then to find a balancing transformation which, when applied to the state-variables, equalises and diagonalises both P and Q. Such a transformation can be obtained via the following steps: first, perform a Cholesky factorisation of both of the Gramians to give $P = LL^{\top}$ and $Q = RR^{\top}$. Now take a singular value decomposition of the newly formed matrix $L^{\top}R$ to obtain $L^{\top}R = U\Sigma V^{\top}$. Using this, the balancing transformation T and its inverse $\bar{T}$ can be computed as $T = \Sigma^{-1/2}V^{\top}R^{\top}$ and $\bar{T} = T^{-1} = LU\Sigma^{-1/2}$. Given a reduced dimensionality $\hat{n}$, the reduced model can be constructed via the transformations $\tilde{A} = \hat{P}TA\bar{T}\hat{P}^{\top}$, $\tilde{B} = \hat{P}TB$ and $\tilde{C} = C\bar{T}\hat{P}^{\top}$, where $\hat{P}$ is an $\hat{n} \times n$ truncation matrix of the form $\hat{P} = [I_{\hat{n}} \;\; 0]$. This gives a reduced, $\hat{n}$-dimensional model of the form $\dot{\hat{x}} = \tilde{A}\hat{x} + \tilde{B}u$, $y = \tilde{C}\hat{x}$. Such an approach has a number of strengths, especially in the construction of highly reduced systems that will provide an accurate approximation of the output for any given input values. Additionally, the method provides the ability to construct an a priori error bound for a given reduction based upon the singular values of the balanced Gramian (known as the Hankel singular values). Unfortunately, the transformation applied to the state-variables will typically mask the biological interpretability of the reduced dynamical system, and as such, balanced truncation can be considered as a black-box approach to model reduction. Balanced truncation was originally devised for the reduction of linear systems; however, in recent years generalisations for nonlinear cases have emerged BIB003 (Edgar 2000, 2002). For nonlinear systems, however, the Gramians computed are typically only an approximation. Given the usually nonlinear nature of biochemical models it is these methods that may possess the most relevance.
In particular, empirical balanced truncation, which constructs approximate Gramians via repeated numerical simulations of the model under perturbations, may be highly applicable within the context of biochemical systems but has not yet seen published use. An example of the application of linearisation and balanced truncation to a nonlinear example model is given in Additional file 1-Supplementary information Section 2.7. In the biochemical modelling literature, balanced truncation has seen relatively limited application. BIB002 outlined an approach that involved partitioning a model into two sets of species: a 'core' set containing the species and reactions of primary interest to the modeller and an 'environmental' set of terms present in the model, but of little interest. The approach then seeks to linearise and apply balanced truncation to the set of environmental species in order to construct a reduced model. This method was applied to a model of glycolysis from the KEGG database. A particular 3-dimensional sub-module was chosen to represent the core set, and the remaining 20 interacting species were found to be environmental relative to these dynamics of interest. It was demonstrated that this environmental set could be reduced to a single state-variable whilst retaining an accurate description of the core dynamics. Härdin and van Schuppen (2006) demonstrated a similar approach of system linearisation followed by balanced truncation, applied to a model of yeast glycolysis. They showed that a 13-dimensional model could be reduced to 3 state-variables. Unfortunately, whilst the application of balanced truncation incurred very little error, the initial linearisation step was shown to suffer a prohibitive error cost. BIB004 developed a method of balanced truncation for application to linearised systems. To avoid the loss of biological interpretability, they impose the condition that the Gramians must be block diagonal, hence preserving meaning between sub-modules, with the interior of modules reduced by a balancing transformation. Their method requires that the system is monotone in order to obtain such block-diagonal Gramians.
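For completeness, the sketch below implements the linear square-root balanced truncation procedure outlined above using SciPy's Lyapunov solver, applied to an invented stable four-state system with a single input and a single output. The system matrices and the retained dimension are arbitrary choices for illustration; for a nonlinear biochemical model a linearisation (or empirical Gramians) would be required first.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Invented stable linear system dx/dt = Ax + Bu, y = Cx (values for illustration only).
A = np.array([[-1.0,  0.2,  0.0,  0.0],
              [ 0.5, -2.0,  0.3,  0.0],
              [ 0.0,  0.4, -3.0,  0.1],
              [ 0.0,  0.0,  0.6, -4.0]])
B = np.array([[1.0], [0.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 0.0, 1.0]])

# Gramians from the Lyapunov equations AP + PA^T + BB^T = 0 and A^T Q + QA + C^T C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: P = LL^T, Q = RR^T, and the SVD of L^T R yields the
# Hankel singular values together with the balancing transformation.
L = cholesky(P, lower=True)
R = cholesky(Q, lower=True)
U, s, Vt = svd(L.T @ R)
S_inv_half = np.diag(s ** -0.5)
T = S_inv_half @ Vt @ R.T        # balancing transformation
T_bar = L @ U @ S_inv_half       # its inverse

print("Hankel singular values:", np.round(s, 4))

# Truncate, keeping the n_hat balanced states that are most controllable/observable.
n_hat = 2
A_red = (T @ A @ T_bar)[:n_hat, :n_hat]
B_red = (T @ B)[:n_hat, :]
C_red = (C @ T_bar)[:, :n_hat]
print("reduced A:\n", np.round(A_red, 3))
```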
|
Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Miscellaneous Methods <s> Biological systems and, in particular, cellular signal transduction pathways are characterised by their high complexity. Mathematical models describing these processes might be of great help to gain qualitative and, most importantly, quantitative knowledge about such complex systems. However, a detailed mathematical description of these systems leads to nearly unmanageably large models, especially when combining models of different signalling pathways to study cross-talk phenomena. Therefore, simplification of models becomes very important. Different methods are available for model reduction of biological models. Importantly, most of the common model reduction methods cannot be applied to cellular signal transduction pathways. Using as an example the epidermal growth factor (EGF) signalling pathway, we discuss how quantitative methods like system analysis and simulation studies can help to suitably reduce models and additionally give new insights into the signal transmission and processing of the cell. <s> BIB001 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Miscellaneous Methods <s> After activation, many receptors and their adaptor proteins act as scaffolds displaying numerous docking sites and engaging multiple targets. The consequent assemblage of a variety of protein complexes results in a combinatorial increase in the number of feasible molecular species presenting different states of a receptor-scaffold signaling module. Tens of thousands of such microstates emerge even for the initial signal propagation events, greatly impeding a quantitative analysis of networks. Here, we demonstrate that the assumption of independence of molecular events occurring at distinct sites enables us to approximate a mechanistic picture of all possible microstates by a macrodescription of states of separate domains, i.e., macrostates that correspond to experimentally verifiable variables. This analysis dissects a highly branched network into interacting pathways originated by protein complexes assembled on different sites of receptors and scaffolds. We specify when the temporal dynamics of any given microstate can be expressed using the product of the relative concentrations of individual sites. The methods presented here are equally applicable to deterministic and stochastic calculations of the temporal dynamics. Our domain-oriented approach drastically reduces the number of states, processes, and kinetic parameters to be considered for quantification of complex signaling networks that propagate distinct physiological responses. <s> BIB002 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Miscellaneous Methods <s> Background:Receptors and scaffold proteins possess a number of distinct domains and bind multiple partners. A common problem in modeling signaling systems arises from a combinatorial explosion of different states generated by feasible molecular species. The number of possible species grows exponentially with the number of different docking sites and can easily reach several millions. 
Models accounting for this combinatorial variety become impractical for many applications.Results:Our results show that under realistic assumptions on domain interactions, the dynamics of signaling pathways can be exactly described by reduced, hierarchically structured models. The method presented here provides a rigorous way to model a large class of signaling networks using macro-states (macroscopic quantities such as the levels of occupancy of the binding domains) instead of micro-states (concentrations of individual species). The method is described using generic multidomain proteins and is applied to the molecule LAT.Conclusion:The presented method is a systematic and powerful tool to derive reduced model structures describing the dynamics of multiprotein complex formation accurately. <s> BIB003 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Miscellaneous Methods <s> Receptors and scaffold proteins usually possess a high number of distinct binding domains inducing the formation of large multiprotein signaling complexes. Due to combinatorial reasons the number of distinguishable species grows exponentially with the number of binding domains and can easily reach several millions. Even by including only a limited number of components and binding domains the resulting models are very large and hardly manageable. A novel model reduction technique allows the significant reduction and modularization of these models. <s> BIB004 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Miscellaneous Methods <s> BackgroundModeling of biological pathways is a key issue in systems biology. When constructing a model, it is tempting to incorporate all known interactions of pathway species, which results in models with a large number of unknown parameters. Fortunately, unknown parameters need not necessarily be measured directly, but some parameter values can be estimated indirectly by fitting the model to experimental data. However, parameter fitting, or, more precisely, maximum likelihood parameter estimation, only provides valid results, if the complexity of the model is in balance with the amount and quality of the experimental data. If this is the case the model is said to be identifiable for the given data. If a model turns out to be unidentifiable, two steps can be taken. Either additional experiments need to be conducted, or the model has to be simplified.ResultsWe propose a systematic procedure for model simplification, which consists of the following steps: estimate the parameters of the model, create an identifiability ranking for the estimated parameters, and simplify the model based on the identifiability analysis results. These steps need to be applied iteratively until the resulting model is identifiable, or equivalently, until parameter variances are small. We choose parameter variances as stopping criterion, since they are concise and easy to interpret. For both, the parameter estimation and the calculation of parameter variances, multi-start parameter estimations are run on a parallel cluster. In contrast to related work in systems biology, we do not suggest simplifying a model by fixing some of its parameters, but change the structure of the model.ConclusionsWe apply the proposed approach to a model of early signaling events in the JAK-STAT pathway. 
The resulting model is not only identifiable with small parameter variances, but also shows the best trade-off between goodness of fit and model complexity. <s> BIB005 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Miscellaneous Methods <s> Mathematical models in biology and physiology are often represented by large systems of non-linear ordinary differential equations. In many cases, an observed behaviour may be written as a linear functional of the solution of this system of equations. A technique is presented in this study for automatically identifying key terms in the system of equations that are responsible for a given linear functional of the solution. This technique is underpinned by ideas drawn from a posteriori error analysis. This concept has been used in finite element analysis to identify regions of the computational domain and components of the solution where a fine computational mesh should be used to ensure accuracy of the numerical solution. We use this concept to identify regions of the computational domain and components of the solution where accurate representation of the mathematical model is required for accuracy of the functional of interest. The technique presented is demonstrated by application to a model problem, and then to automatically deduce known results from a cell-level cardiac electrophysiology model. <s> BIB006 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Miscellaneous Methods <s> In this paper we propose a model-order reduction method for chemical reaction networks governed by general enzyme kinetics, including the mass-action and Michaelis-Menten kinetics. The model-order reduction method is based on the Kron reduction of the weighted Laplacian matrix which describes the graph structure of complexes in the chemical reaction network. We apply our method to a yeast glycolysis model, where the simulation result shows that the transient behaviour of a number of key metabolites of the reduced-order model is in good agreement with those of the full-order model. <s> BIB007 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Miscellaneous Methods <s> BackgroundIn this paper we propose a model reduction method for biochemical reaction networks governed by a variety of reversible and irreversible enzyme kinetic rate laws, including reversible Michaelis-Menten and Hill kinetics. The method proceeds by a stepwise reduction in the number of complexes, defined as the left and right-hand sides of the reactions in the network. It is based on the Kron reduction of the weighted Laplacian matrix, which describes the graph structure of the complexes and reactions in the network. It does not rely on prior knowledge of the dynamic behaviour of the network and hence can be automated, as we demonstrate. The reduced network has fewer complexes, reactions, variables and parameters as compared to the original network, and yet the behaviour of a preselected set of significant metabolites in the reduced network resembles that of the original network. Moreover the reduced network largely retains the structure and kinetics of the original model.ResultsWe apply our method to a yeast glycolysis model and a rat liver fatty acid beta-oxidation model. 
When the number of state variables in the yeast model is reduced from 12 to 7, the difference between metabolite concentrations in the reduced and the full model, averaged over time and species, is only 8%. Likewise, when the number of state variables in the rat-liver beta-oxidation model is reduced from 42 to 29, the difference between the reduced model and the full model is 7.5%.ConclusionsThe method has improved our understanding of the dynamics of the two networks. We found that, contrary to the general disposition, the first few metabolites which were deleted from the network during our stepwise reduction approach, are not those with the shortest convergence times. It shows that our reduction approach performs differently from other approaches that are based on time-scale separation. The method can be used to facilitate fitting of the parameters or to embed a detailed model of interest in a more coarse-grained yet realistic environment. <s> BIB008 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Miscellaneous Methods <s> Biochemical systems involving a high number of components with intricate interactions often lead to complex models containing a large number of parameters. Although a large model could describe in detail the mechanisms that underlie the system, its very large size may hinder us in understanding the key elements of the system. Also in terms of parameter identification, large models are often problematic. Therefore, a reduced model may be preferred to represent the system. Yet, in order to efficaciously replace the large model, the reduced model should have the same ability as the large model to produce reliable predictions for a broad set of testable experimental conditions. We present a novel method to extract an "optimal" reduced model from a large model to represent biochemical systems by combining a reduction method and a model discrimination method. The former assures that the reduced model contains only those components that are important to produce the dynamics observed in given experiments, whereas the latter ensures that the reduced model gives a good prediction for any feasible experimental conditions that are relevant to answer questions at hand. These two techniques are applied iteratively. The method reveals the biological core of a model mathematically, indicating the processes that are likely to be responsible for certain behavior. We demonstrate the algorithm on two realistic model examples. We show that in both cases the core is substantially smaller than the full model. <s> BIB009 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Miscellaneous Methods <s> In systems biology, one of the major tasks is to tailor model complexity to information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small of a model will not be able to describe the data whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. 
The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood. <s> BIB010
|
There are a range of model reduction methods described in the literature that do not sit comfortably within any of the areas so far covered in this review. The following section provides a brief overview of these methods.

Motif Replacement
Such approaches decompose a system into various interconnected sub-modules that can be replaced by simpler motifs. Typically this requires a relatively high degree of heuristic insight in order to spot replacement motifs. BIB001 developed a motif replacement method where the model is initially decomposed into a number of sub-modules, and each module is then treated in isolation. Reactions feeding into a sub-module are considered as inputs, and those exiting the sub-module are considered outputs. Each sub-module is then simulated under perturbations of its inputs in order to construct an overall input-output profile. Comparison of the input-output profiles with each other, and with standard profile types from signal theory, can be used to replace the modules with simpler motifs that replicate their behaviour. The method was demonstrated via application to a model of EGF receptor signalling, enabling the accurate reduction of several sub-modules. A similar approach of partitioning a biochemical network into sub-modules and applying motif replacement based upon their input-output profiles has also been briefly discussed in the literature.

Reduction Workflow
This topic concerns the general heuristics used to guide the application of model reduction methods. BIB005 propose an approach whereby a model is reduced iteratively until the system is sufficiently identifiable, i.e. until the variances associated with the parameter estimates are sufficiently small. This method was demonstrated via application to a model of JAK-STAT signal transduction. Over 6 reduction steps the number of state-variables was reduced from 17 to 10 and the number of parameters from 25 to 10, at which point the model parameters could be accurately estimated given a limited set of input-output data. BIB009 propose an iterative heuristic for obtaining a reduced model. Given a system in the form of (2), with experimental results that can be treated as outputs and experimental conditions that can be treated as inputs, the approach is twofold. Firstly, model reduction is performed via an iterative algorithm involving state-variable and parameter truncation, lumping, and the re-fitting of parameters. Reduction is repeatedly applied until the reduced model can no longer capture the experimental behaviour within an adequate error bound. Secondly, model 'discrimination' is performed to determine the experimental conditions (within a feasible range) that maximise the error between the reduced and original models. If the maximal error exceeds the previously defined limit, then new experimental data obtained under the error-maximising conditions are included and the reduction step is rerun. These steps are applied recursively until a reduced model is obtained that adequately captures the results under all possible experimental conditions.
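A minimal Python sketch of the 'reduce until identifiable' heuristic shared by these workflows is shown below: after a least-squares fit, approximate parameter variances are read off the Fisher information matrix, and the least identifiable parameter is flagged as the next candidate for lumping, fixing or deletion. The toy observation model and all numbers are invented; this is only an illustration of the general idea, not the detailed procedure of BIB005 or BIB009.

```python
import numpy as np
from scipy.optimize import least_squares

# Invented observation model y(t) = p0 * exp(-p1 * t) + p2 with noisy data.
t = np.linspace(0.0, 5.0, 40)
p_true = np.array([2.0, 1.3, 0.2])
rng = np.random.default_rng(0)
y_obs = p_true[0] * np.exp(-p_true[1] * t) + p_true[2] \
        + 0.02 * rng.standard_normal(t.size)

def residuals(p):
    return p[0] * np.exp(-p[1] * t) + p[2] - y_obs

fit = least_squares(residuals, x0=np.array([1.0, 1.0, 0.0]))

# Asymptotic parameter covariance from the Fisher information J^T J,
# scaled by the residual variance (fit.cost is 0.5 * ||r||^2).
dof = t.size - fit.x.size
sigma2 = 2.0 * fit.cost / dof
cov = sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)
cv = np.sqrt(np.diag(cov)) / np.abs(fit.x)     # relative uncertainty per parameter

# A workflow in the spirit described above would simplify the reaction(s)
# associated with the least identifiable parameter (lump, fix or delete it),
# refit, and repeat until every relative uncertainty is acceptably small.
worst = int(np.argmax(cv))
print("least identifiable parameter index:", worst, "relative uncertainty:", cv[worst])
```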
The approach of BIB009 was demonstrated via application to two systems: firstly, a model of a genetic interaction network in flower development of A. thaliana, where it is shown that a reduction from 37 to 31 parameters still maintains accuracy for all reasonable experimental conditions; and secondly, a model of the EGFR signalling pathway, where it is shown that a reduction from 23 to 17 state-variables and from 50 to 25 kinetic parameters was sufficient to yield no more than a 25% error for all possible experimental conditions. BIB010 present a heuristic for reduction whereby a model is reduced until it is identifiable relative to the available experimental data. This is achieved by evaluating parameter profile likelihoods and then seeking to reduce the reactions associated with the least identifiable parameters. Structurally non-identifiable parameters can, at least theoretically if not practically, be eliminated from the system by exploiting its intrinsic symmetries. For the weakly identifiable parameters, the associated reactions are reduced via approaches such as lumping, deletion of species, and algebraic replacement until an identifiable system is obtained.

Reducing Combinatorial Complexity
Particular attention can be given to model reduction in the context of combinatorially complex systems such as those found in the modelling of scaffold proteins. Such proteins have a large number of binding sites and can form complexes in many different combinations. Using a standard modelling approach, each possible binding configuration is considered a separate species and its concentration is modelled as such. Clearly this can lead to a combinatorial explosion in the number of state-variables, and hence there exist a number of model reduction methods which seek to alleviate this complexity. BIB002 demonstrated a model reduction approach for such systems via a transformation of the possible states into 'macro-states', effectively improper lumpings of the original terms. However, this work only applies to scaffold proteins with independent binding sites or with only one controlling domain. Subsequently, BIB003 BIB004 extended this approach to more general models of scaffold protein interactions (or models with similarly combinatorially complex interactions). A hierarchical state-variable transformation is introduced; this transformation is guided by a form of sensitivity analysis under the assumption that many of the possible complexes will have a limited effect on the outputs of interest. BIB007 BIB008 developed an approach that seeks to reduce the set of chemical equations defining a biochemical reaction network via an iterative process of equilibrating and deleting one complex (as defined under chemical reaction network theory; Feinberg 1987) at a time. This approach is applied using an optimisation algorithm until a pre-defined error tolerance is reached. The method is demonstrated via application to a model of yeast glycolysis, where it was found that deletion of 4 complexes (producing a reduction from 12 state-variables, 88 parameters and 12 reactions to 7 state-variables, 50 parameters and 7 reactions) incurred a <8% average error across time and state-variables. A model of fatty acid beta oxidation was also considered, where the deletion of 14 complexes (corresponding to a reduction from 42 state-variables to 29) was achieved while incurring an average error of 7.5%.
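The complex-elimination step in this Kron-reduction-based approach can be illustrated as a Schur complement of the weighted Laplacian over the complexes: a chosen set of complexes is eliminated while the effective connectivity among the retained ones is preserved. The Laplacian values in the sketch below are invented and the kinetics attached to each edge are ignored, so only the structural part of the method is conveyed.

```python
import numpy as np

def kron_reduce(L, keep):
    """Schur complement of a weighted Laplacian over the complexes: eliminate
    every complex not listed in `keep` while preserving the effective
    connectivity between the retained complexes."""
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(L.shape[0]), keep)
    L_kk = L[np.ix_(keep, keep)]
    L_kd = L[np.ix_(keep, drop)]
    L_dk = L[np.ix_(drop, keep)]
    L_dd = L[np.ix_(drop, drop)]
    return L_kk - L_kd @ np.linalg.inv(L_dd) @ L_dk

# Invented weighted Laplacian over four complexes (each row sums to zero).
L = np.array([[ 1.0, -1.0,  0.0,  0.0],
              [-0.5,  1.5, -1.0,  0.0],
              [ 0.0, -0.5,  1.5, -1.0],
              [ 0.0,  0.0, -0.5,  0.5]])

L_red = kron_reduce(L, keep=[0, 3])    # eliminate the two intermediate complexes
print(L_red)                           # reduced 2x2 Laplacian, rows still sum to zero
```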
BIB006 applies an approach of mesh refinement via a posteriori error analysis, commonly used to improve the numerical simulation of partial differential equations via finite element methods, to the reduction of biochemical systems. Via an iterative process, this approach determines which state-variables should be retained and which can be fixed (beginning with the 'all fixed' possibility) within each time-interval in order to meet some pre-assigned error bound.

A further approach, based on differential geometry and known as the manifold boundary approximation method (MBAM), allows the construction of a model manifold M describing the parameter-dependent variation in certain pre-defined outputs or 'quantities of interest' (QoIs). By repeatedly evaluating the Fisher information matrix it is typically possible to construct geodesics along M that can be used to define boundaries in parameter space. These boundaries imply that, at certain positions in parameter space, the QoIs can be captured by a reduced system. Using this information it is possible to construct reduced systems in these regions by allowing certain combinations of parameters to tend to infinity or zero. It is demonstrated that this approach can recover the QSSA for the Michaelis-Menten enzyme-substrate reaction model. The method's application to a 15-dimensional model of ERK activation via the interacting EGF and NGF pathways is also demonstrated; here, models in various states of reduction are recovered depending upon the specific QoIs. Notably, a 6-dimensional network is shown to describe the overall input-output behaviour of EGF, NGF and their effect on ERK.
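A small Python sketch of the Fisher-information computation that underlies this kind of analysis is given below: sensitivities of predefined quantities of interest are obtained by finite differences in log-parameter space, and the eigenvector belonging to the smallest FIM eigenvalue indicates the 'sloppy' parameter combination that an MBAM-style reduction would drive towards a boundary of the model manifold. The model, parameter values and observation times are invented; the geodesic construction itself is not reproduced here.

```python
import numpy as np

def quantities_of_interest(p, t):
    """Invented QoI: a two-parameter exponential response sampled in time."""
    return p[0] * np.exp(-p[1] * t)

def fisher_information(p, t, h=1e-6):
    """FIM J^T J built from sensitivities d(QoI)/d(log p) via central differences."""
    p = np.asarray(p, dtype=float)
    J = np.zeros((t.size, p.size))
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = h * p[i]                         # relative perturbation
        J[:, i] = (quantities_of_interest(p + dp, t)
                   - quantities_of_interest(p - dp, t)) / (2.0 * h)
    return J.T @ J

t = np.linspace(0.0, 0.1, 20)                    # deliberately short observation window
F = fisher_information([1.0, 5.0], t)
eigval, eigvec = np.linalg.eigh(F)

# The eigenvector of the smallest eigenvalue is the 'sloppy' combination of
# parameters; an MBAM-style reduction follows such a direction until a
# boundary of the model manifold is reached, where a simpler limiting model applies.
print("smallest eigenvalue:", eigval[0], "direction:", eigvec[:, 0])
```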
|
Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Discussion <s> Preface Part I. Introduction: 1. Introduction 2. Motivating examples Part II. Preliminaries: 3. Tools from matrix theory 4. Linear dynamical systems, Part 1 5. Linear dynamical systems, Part 2 6. Sylvester and Lyapunov equations Part III. SVD-based Approximation Methods: 7. Balancing and balanced approximations 8. Hankel-norm approximation 9. Special topics in SVD-based approximation methods Part IV. Krylov-based Approximation Methods: 10. Eigenvalue computations 11. Model reduction using Krylov methods Part V. SVD-Krylov Methods and Case Studies: 12. SVD-Krylov methods 13. Case studies 14. Epilogue 15. Problems Bibliography Index. <s> BIB001 </s> Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends <s> Discussion <s> The transient kinetic behaviour of an open single enzyme, single substrate reaction is examined. The reaction follows the Van Slyke–Cullen mechanism, a spacial case of the Michaelis–Menten reaction. The analysis is performed both with and without applying the quasi-steady-state approximation. The analysis of the full system shows conditions for biochemical pathway coupling, which yield sustained oscillatory behaviour in the enzyme reaction. The reduced model does not demonstrate this behaviour. The results have important implications in the analysis of open biochemical reactions and the modelling of metabolic systems. <s> BIB002
|
There exists no one-size-fits-all method of model reduction that can be considered optimal for all large-scale biochemical systems irrespective of the context in which it is applied. Indeed, the 'best' reduced model that can be obtained for a particular system is inextricably linked to the overall aims of the modeller, the scope and scale of the approximation error they are willing to incur, and the nature of the model they are seeking to reduce. This review defined a method of reduction as any approach seeking to approximate the dynamics of a given model by a simpler system featuring a smaller number of reactions or reactants. As was shown, even given this relatively narrow definition, methods for the reduction of biochemical systems can take a wide variety of forms. Table 1 provides an overview of the main methods of model reduction reviewed within this paper and their attributes.

Timescale exploitation methods are particularly applicable where reactions in the system occur across a wide range of timescales (typically dictated by widely varying reaction rate constants) or where the modeller wishes to obtain a reduced model that is accurate within a particular time-interval. Coordinate-preserving timescale exploitation methods usually require that the species of the system can be explicitly designated as either fast or slow. Where this is possible, it gives access to intuitively understood reductions of the system. Coordinate-transforming timescale exploitation methods can be used in a more general setting and will often produce more accurate reductions, but the biological meaning of the reduced model can be somewhat obscured by the change of variables.

Optimisation- and sensitivity analysis-based approaches to model reduction are the most intuitive of the methods reviewed here. These approaches can be applied to any model in general, but they can be highly computationally expensive for large models, where the parameter space to be searched and simulated is often prohibitive. Lumping is a broad class of model reduction, but in its common form of linear, proper lumping it represents a highly algorithmic and relatively intuitive methodology. However, the question of how the best lumping is determined for a nonlinear system remains somewhat open: approaches in the literature often rely upon trial and error, which can be computationally expensive for very large systems. SVD methods represent some of the more esoteric methods that can be applied. They apply transformations to the state-variables that typically produce transformed variables with an obscured biological meaning. However, these methods work especially well when a model can be treated as a black box and only the input-output behaviour is of interest to the modeller, and they can often produce very accurate and low-dimensional reductions.

The relatively recent advent of systems biology has produced a wealth of highly detailed models, providing great insight into the mechanistic underpinnings of physiological systems. It seems inevitable that researchers in both academia and industry will increasingly seek to use these models in new ways beyond exploratory research. As they do so, the perennial issue of complexity will necessarily be brought into focus again. In those areas of science, such as engineering, most used to pragmatic compromise in the face of systemic complexity, methods of model reduction are already a well-utilised research tool.
Hence model reduction techniques, such as those introduced throughout this review, must also become a more familiar tool in the biochemical modeller's arsenal. Whilst such methods have the potential to provide substantial benefits, enabling previously intractable problems to be tackled and allowing modellers to extract insight from complexity, their application should never be considered a 'magic bullet'. Reduced systems typically remain valid only within a specific region of parameter space, or predictive only for a set of pre-defined outputs. Even in archetypal examples such as the QSSA applied to the enzyme-substrate reaction, validity is only guaranteed for particular model parameterisations, and inappropriate use can lead to the loss of dynamical phenomena present in the original system BIB002. In general, model reduction can therefore be thought of as a trade-off between the simplicity of the reduced model and the predictive power that it retains. Hence, before applying such methods, it is important to be clear on how the reduced model will be used, the specific questions to be answered, and how the reduction method should be constrained in terms of acceptable loss of information.

The development and application of model reduction methods for systems biology remains an ongoing and active area of research. There are a number of likely ways forward, including the combination of existing methodologies, the further tailoring of methods to a biological context, and the study of the relationship between model reduction and parameter identifiability. Methods from other fields, such as those based upon proper orthogonal decomposition and Krylov subspaces BIB001, might also find specific applications in this setting.
|
A Survey of Scientific Approaches Considering the Integration of Security and Risk Aspects into Business Process Management <s> IT Risk Reference Model <s> The economic relevance of IT risks is increasing due to various operational, technical as well as regulatory reasons. Increasing flexibility of business processes and increasing dependability on IT require continuous risk assessment, challenging current methods for risk management. Extending IT risk management by a business process-oriented view is a promising approach for taking the occurring dynamics and interlinks into consideration. In this contribution, a systematic modeling of relations between causes (threats) and effects (direct and indirect loss) is pursued, bringing together the economic, process-oriented view with the technical, threat-oriented view of IT risks. It is discussed how the integration of cause and effect relations into the risk management process can improve the data basis for continuous risk assessment. <s> BIB001
|
Sackmann extends current risk management methods with a business process-oriented view, leading to an IT risk reference model (see Figure 7) that bridges the economic and the more technical layers, including vulnerabilities BIB001 [9]. The introduced model consists of four interconnected layers: (1) Business process layer; (2) IT applications / IT infrastructure layer; (3) Vulnerabilities layer; (4) Threats layer. This reference model "serves as foundation for formal modeling of the relations between causes of IT risks and their effects on business processes or a company's returns" BIB001. For expressing these relations (i.e. the sought cause-effect relations), a matrix-based description is used.
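As an illustration of such a matrix-based description, the relations between adjacent layers can be written as incidence matrices and composed by matrix multiplication to obtain, for each threat, its indirect effect on each business process activity. The matrices in the following Python sketch are invented toy data, not taken from BIB001.

```python
import numpy as np

# Invented incidence matrices linking adjacent layers of the reference model:
# threats -> vulnerabilities, vulnerabilities -> IT components/applications,
# IT components -> business process activities (1 = relation exists).
T_V = np.array([[1, 0, 1],        # 2 threats x 3 vulnerabilities
                [0, 1, 0]])
V_I = np.array([[1, 0],           # 3 vulnerabilities x 2 IT components
                [0, 1],
                [1, 1]])
I_P = np.array([[1, 1, 0],        # 2 IT components x 3 process activities
                [0, 1, 1]])

# Composing the layer-wise relations yields, for every threat, the number of
# cause-effect paths through which it can affect each business process activity.
threat_to_process = T_V @ V_I @ I_P
print(threat_to_process)
```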
|
A Survey of Scientific Approaches Considering the Integration of Security and Risk Aspects into Business Process Management <s> Risk-Oriented Business Process Evaluation (ROPE) <s> Today, companies face the challenge to effectively and efficiently perform their business processes as well as to guarantee their continuous operation. To meet the economic requirements, companies often consult business process management experts. The robustness and continuity of operations is separately considered in other domains such as business continuity management and risk management. The shortcoming of this separation is that in most cases a common reasoning and information basis is missing. With the risk-aware process modeling and simulation methodology named ROPE we fill this gap and combine the strengths of the aforementioned domains. In this paper, we present new ROPE simulation capabilities focusing on the determination of resource requirements considering the impact of occurring threats on business processes. Furthermore, we introduce an example scenario to clarify how a company can benefit from applying these extensions. <s> BIB001 </s> A Survey of Scientific Approaches Considering the Integration of Security and Risk Aspects into Business Process Management <s> Risk-Oriented Business Process Evaluation (ROPE) <s> Driven by the steadily growing number of natural disasters, the threat of terrorist and other criminal attacks as well as changed legislation and regulations, companies are increasingly forced to prepare against threats that endanger the survivability of crucial business activities. As a consequence, management has to pay more attention to business continuity issues including serious management commitment and more appropriate funding. Business impact analysis and risk assessment concepts enable adequate business continuity planning as they deliver essential information about the impact of resources' disruption on business. In this paper we present how these concepts can be enhanced through the application of the ROPE (Risk-Oriented Process Evaluation) methodology enabling risk-aware business process management and simulation. Moreover, we present essential extensions of the ROPE simulation capabilities leading to a more efficient and effective business continuity planning. <s> BIB002
|
The ROPE (Risk-Oriented Process Evaluation) methodology focuses on the simulation-based evaluation of threats' impact on the execution of business processes BIB001 BIB002. The basic concept is as follows: business process activities require resources in order to be adequately executed. Threats that occur degrade the functionality of resources until, if not appropriately defeated, one or more of the affected resources become unavailable. In the worst case a resource represents a single point of failure and consequently hinders the execution of the related business process activity. Besides the business processes, counter and recovery measure processes are modeled. When a threat is detected, the appropriate counter measure process is invoked to counteract the threat. Once the threat has been defeated, recovery processes are invoked to re-establish the functionality of the affected resource so that it becomes available again for the respective business process activity. ROPE consists of three modeling layers enabling so-called risk-aware business process modeling and simulation: (1) within the process layer, business as well as counter and recovery measure process activities are modeled; (2) resources within the resource layer are allocated to one or more business process activities and are modeled in a tree-based structure, with resources interconnected by the logical operators AND and OR in order to enable the modeling of redundancies; (3) within the threat/impact layer, identified threats are modeled and assigned to resources. Simulating the whole model enables, on the one hand, the determination of business processes' delays in the case of occurring threats, considering the implemented counter and recovery measures (see Figure 8). On the other hand, it is possible to determine the additional times and costs (of activities and required resources) incurred when invoking counter and recovery measure processes. Many different scenarios can be modeled, enabling simulation-based identification of a company's critical business processes and single points of failure.
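The core of this simulation idea can be sketched compactly: resources form an AND/OR tree per activity, threats toggle resource availability, and an unavailable resource tree adds counter/recovery delay to the process. The class names, example resources and delay values in the Python sketch below are hypothetical and greatly simplified relative to the actual ROPE tooling.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Resource:
    name: str
    available: bool = True      # set to False while an undefeated threat acts on it

@dataclass
class ResourceNode:
    op: str                     # "AND": all children needed, "OR": redundant children
    children: List[Union["ResourceNode", Resource]]

def is_available(node) -> bool:
    if isinstance(node, Resource):
        return node.available
    states = [is_available(child) for child in node.children]
    return all(states) if node.op == "AND" else any(states)

@dataclass
class Activity:
    name: str
    duration: float             # nominal execution time
    resources: ResourceNode     # resource tree allocated to this activity

def process_duration(activities, recovery_delay: float) -> float:
    """Total process time; an activity whose resource tree is unavailable is
    delayed by the time the counter/recovery measure processes require."""
    total = 0.0
    for activity in activities:
        total += activity.duration
        if not is_available(activity.resources):
            total += recovery_delay
    return total

# Example: a server with a redundant mirror (OR), both requiring a network link (AND).
server, mirror, link = Resource("server"), Resource("mirror"), Resource("link")
tree = ResourceNode("AND", [ResourceNode("OR", [server, mirror]), link])
billing = Activity("invoice customers", duration=2.0, resources=tree)

server.available = False    # a threat disabled the server; the mirror absorbs it
print(process_duration([billing], recovery_delay=1.5))   # -> 2.0, no delay
mirror.available = False    # redundancy exhausted
print(process_duration([billing], recovery_delay=1.5))   # -> 3.5, recovery delay added
```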
|
RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Introduction <s> This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge. <s> BIB001 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Introduction <s> We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds. <s> BIB002 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Introduction <s> We present a probabilistic generative model of visual attributes, together with an efficient learning algorithm. Attributes are visual qualities of objects, such as 'red', 'striped', or 'spotted'. The model sees attributes as patterns of image segments, repeatedly sharing some characteristic properties. These can be any combination of appearance, shape, or the layout of segments within the pattern. Moreover, attributes with general appearance are taken into account, such as the pattern of alternation of any two colors which is characteristic for stripes. To enable learning from unsegmented training images, the model is learnt discriminatively, by optimizing a likelihood ratio. ::: ::: As demonstrated in the experimental evaluation, our model can learn in a weakly supervised setting and encompasses a broad range of attributes. 
We show that attributes can be learnt starting from a text query to Google image search, and can then be used to recognize the attribute and determine its spatial extent in novel real-world images. <s> BIB003 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Introduction <s> The objective of this paper is classifying images by the object categories they contain, for example motorbikes or dolphins. There are three areas of novelty. First, we introduce a descriptor that represents local image shape and its spatial layout, together with a spatial pyramid kernel. These are designed so that the shape correspondence between two images can be measured by the distance between their descriptors using the kernel. Second, we generalize the spatial pyramid kernel, and learn its level weighting parameters (on a validation set). This significantly improves classification performance. Third, we show that shape and appearance kernels may be combined (again by learning parameters on a validation set). Results are reported for classification on Caltech-101 and retrieval on the TRECVID 2006 data sets. For Caltech-101 it is shown that the class specific optimization that we introduce exceeds the state of the art performance by more than 10%. <s> BIB004 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Introduction <s> This article presents a novel scale- and rotation-invariant detector and descriptor, coined SURF (Speeded-Up Robust Features). SURF approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (specifically, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper encompasses a detailed description of the detector and descriptor and then explores the effects of the most important parameters. We conclude the article with SURF's application to two challenging, yet converse goals: camera calibration as a special case of image registration, and object recognition. Our experiments underline SURF's usefulness in a broad range of topics in computer vision. <s> BIB005 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Introduction <s> We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. 
We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework. <s> BIB006 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Introduction <s> We present a method to learn visual attributes (eg.“red”, “metal”, “spotted”) and object classes (eg. “car”, “dress”, “umbrella”) together. We assume images are labeled with category, but not location, of an instance. We estimate models with an iterative procedure: the current model is used to produce a saliency score, which, together with a homogeneity cue, identifies likely locations for the object (resp. attribute); then those locations are used to produce better models with multiple instance learning. Crucially, the object and attribute models must agree on the potential locations of an object. This means that the more accurate of the two models can guide the improvement of the less accurate model. Our method is evaluated on two data sets of images of real scenes, one in which the attribute is color and the other in which it is material. We show that our joint learning produces improved detectors. We demonstrate generalization by detecting attribute-object pairs which do not appear in our training data. The iteration gives significant improvement in performance. <s> BIB007 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Introduction <s> We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance. <s> BIB008 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Introduction <s> Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. 
The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge. <s> BIB009 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Introduction <s> With the rise in popularity of digital cameras, the amount of visual data available on the web is growing exponentially. Some of these pictures are extremely beautiful and aesthetically pleasing, but the vast majority are uninteresting or of low quality. This paper demonstrates a simple, yet powerful method to automatically select high aesthetic quality images from large image collections. Our aesthetic quality estimation method explicitly predicts some of the possible image cues that a human might use to evaluate an image and then uses them in a discriminative approach. These cues or high level describable image attributes fall into three broad types: 1) compositional attributes related to image layout or configuration, 2) content attributes related to the objects or scene types depicted, and 3) sky-illumination attributes related to the natural lighting conditions. We demonstrate that an aesthetics classifier trained on these describable attributes can provide a significant improvement over baseline methods for predicting human quality judgments. We also demonstrate our method for predicting the “interestingness” of Flickr photos, and introduce a novel problem of estimating query specific “interestingness”. <s> BIB010 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Introduction <s> The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in diering images. The algorithm was rst proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. <s> BIB011
|
Attributes are the visual properties of objects. They are mainly related to the appearance and geometrical structure of objects BIB003. Visual attributes used by humans to describe an object are called semantic attributes; they are used to represent the parts, shape and materials of objects BIB006. Humans have the ability to describe unfamiliar objects to some extent by using semantic attributes. An object category recognition algorithm based on training images is likely to fail when presented with an image from a category for which no images are included in the training data set. It can be observed in the real world that objects share visual and semantic attributes. Researchers have taken advantage of this shared nature of attributes and have shown that attributes work well in situations where images from a particular object category are absent from the training data set. Attributes have been used to identify an object, or at least parts of an unknown object BIB006 [3] BIB007. Apart from object category recognition, attributes can be used for image retrieval in a way that is natural and favorable for humans. Current methods of image retrieval use local or global features of images, which are more related to the rigid structure of the objects being queried. Semantic attributes can be used to describe an object, due to which they present a natural choice for image retrieval. Attributes have also been used for face verification BIB008, describing the aesthetics and interestingness of images BIB010, and the generation of sentences from images.

The low-level features are extracted from annotated training images and then quantized. The attribute classifiers are then learnt based on these quantized low-level features. As stated earlier, attributes are used to represent materials, parts and shapes: color and texture are used for materials, visual words are used for parts, and edges are used for shapes BIB006. Low-level features used for these attributes are texture descriptors extracted with a texton filter bank, HOG descriptors BIB002 for the construction of visual words, and the Canny edge detector BIB001 for edge detection. Apart from these, SIFT BIB011, rgSIFT BIB009, PHOG BIB004, SURF BIB005 and self-symmetry histograms are also used as low-level features. In most of the works, a classifier is trained for each attribute based on the quantized low-level features. The learned parameters of these classifiers are then used to predict the presence or absence of an attribute in a given image. Most of the algorithms learn these parameters by localizing the objects of interest in a pre-detected bounding box, thus reducing the problem to "what is this" instead of "where is this" BIB006. Some of the algorithms combine localization with attribute learning.

Attributes are mid-level features that can be named and described, making them more effective than low-level features for problems like object category recognition. They can efficiently model the visual characteristics of objects and their spatial relationships in an image. Attributes can also represent image properties and concepts. They have been shown to give better performance than low-level features in face verification BIB008 and image aesthetics prediction BIB010. These results give the motivation to switch from solutions based on low-level features to attributes based solutions. A minimal sketch of the per-attribute classifier training pipeline described above is given below.
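The sketch assumes scikit-learn is available; the feature vectors, attribute names and labels are all invented for illustration and do not correspond to any of the cited data sets.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_images, n_features = 200, 128          # e.g. a 128-bin histogram of quantized
X = rng.random((n_images, n_features))   # colour/texture/visual-word features

# Binary annotation matrix, one column per semantic attribute; the attribute
# names and labels here are invented.
attribute_names = ["furry", "metallic", "has_wheel"]
Y = rng.integers(0, 2, size=(n_images, len(attribute_names)))

# One independent linear classifier per attribute.
classifiers = {}
for j, name in enumerate(attribute_names):
    clf = LinearSVC(C=1.0)
    clf.fit(X, Y[:, j])
    classifiers[name] = clf

# Predicting the attributes of a new image yields a mid-level description that
# can feed category recognition, zero-shot description or retrieval.
x_new = rng.random((1, n_features))
description = {name: int(clf.predict(x_new)[0]) for name, clf in classifiers.items()}
print(description)
```

At test time, the per-attribute predictions form the mid-level description on which downstream tasks such as category recognition or retrieval can build.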
Therefore, an overview of attributes and their applications becomes a prerequisite before one can apply them to the solution of a problem. This article gives the required overview. The contributions of this article are as follows:
• An overview of attributes based solutions for object classification and description, image retrieval and image aesthetics prediction.
• Potential future directions.
The rest of the paper is organized as follows. Section 2 summarizes the techniques that use attributes for object classification and description. An overview of the attributes used for image aesthetics prediction is given in Section 3, Section 4 gives an overview of attributes based image retrieval methods, the attribute data sets and future directions for applying attributes to some potential problems are given in Section 6, and Section 7 concludes the article.
|
RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Attributes based object description <s> The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images. <s> BIB001 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Attributes based object description <s> We show how to outsource data annotation to Amazon Mechanical Turk. Doing so has produced annotations in quite large numbers relatively cheaply. The quality is good, and can be checked and controlled. Annotations are produced quickly. We describe results for several different annotation problems. We describe some strategies for determining when the task is well specified and properly priced. <s> BIB002 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Attributes based object description <s> Some of the most effective recent methods for content-based image classification work by quantizing image descriptors, and accumulating histograms of the resulting visual word codes. Large numbers of descriptors and large codebooks are required for good results and this becomes slow using k-means. We introduce Extremely Randomized Clustering Forests-ensembles of randomly created clustering trees-and show that they provide more accurate results, much faster training and testing, and good resistance to background clutter. Second, an efficient image classification method is proposed. It combines ERC-Forests and saliency maps very closely with the extraction of image information. For a given image, a classifier builds a saliency map online and uses it to classify the image. We show in several state-of-the-art image classification tasks that this method can speed up the classification process enormously. Finally, we show that the proposed ERC-Forests can also be used very successfully for learning distance between images. The distance computation algorithm consists of learning the characteristic differences between local descriptors sampled from pairs of same or different objects. These differences are vector quantized by ERC-Forests and the similarity measure is computed from this quantization. The similarity measure has been evaluated on four very different datasets and always outperforms the state-of-the-art competitive approaches. <s> BIB003 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Attributes based object description <s> This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. 
While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem. However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose. <s> BIB004 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Attributes based object description <s> We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework. <s> BIB005 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Attributes based object description <s> We present a method to learn visual attributes (eg.“red”, “metal”, “spotted”) and object classes (eg. “car”, “dress”, “umbrella”) together. We assume images are labeled with category, but not location, of an instance. We estimate models with an iterative procedure: the current model is used to produce a saliency score, which, together with a homogeneity cue, identifies likely locations for the object (resp. attribute); then those locations are used to produce better models with multiple instance learning. Crucially, the object and attribute models must agree on the potential locations of an object. This means that the more accurate of the two models can guide the improvement of the less accurate model. Our method is evaluated on two data sets of images of real scenes, one in which the attribute is color and the other in which it is material. We show that our joint learning produces improved detectors. We demonstrate generalization by detecting attribute-object pairs which do not appear in our training data. The iteration gives significant improvement in performance. <s> BIB006 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Attributes based object description <s> We propose an approach to find and describe objects within broad domains. We introduce a new dataset that provides annotation for sharing models of appearance and correlation across categories. We use it to learn part and category detectors. These serve as the visual basis for an integrated model of objects. 
We describe objects by the spatial arrangement of their attributes and the interactions between them. Using this model, our system can find animals and vehicles that it has not seen and infer attributes, such as function and pose. Our experiments demonstrate that we can more reliably locate and describe both familiar and unfamiliar objects, compared to a baseline that relies purely on basic category detectors. <s> BIB007
|
The method proposed by Farhadi et al. BIB005 describes objects by their semantic attributes. Base features are extracted from pre-localized objects in the training data set: color and texture for materials, visual words for parts, and edges for shapes. Because undesirable correlations between attributes arise while the attribute classifiers are being learnt, a novel feature selection method is proposed that focuses on within-category prediction ability: the features selected are those that best distinguish objects of the same category with and without a specific attribute. The features selected in this way over all categories are then pooled to learn a classifier for that attribute (a small sketch of this selection step is given at the end of this subsection). Two data sets, a-Pascal and a-Yahoo, have been developed for this purpose and annotated with 64 semantic attributes using Amazon's Mechanical Turk BIB002 . Both data sets contain objects from different categories, yet attribute assignment is performed as across-category prediction, where training and test instances are drawn from disjoint sets of classes. Apart from assigning attributes, the method also reports the absence of typical and the presence of atypical attributes, and it performs well at naming known objects, learning new categories from few visual examples, and learning new categories from purely textual descriptions.
Gang et al. BIB006 combine attribute and object category learning: objects are localized and their attributes are described. The training images are weakly labeled, meaning every training image is annotated as containing an attribute-object pair but the location is unknown. Each image is a "bag" of windows at different scales and locations; an image labeled as positive contains the object in at least one window, while a negative image contains it in none. The attribute and category detectors are combined to simplify localization, since the two detectors support each other at locations where an object (resp. attribute) is present. The main difficulty is the large number of candidate windows, so visual saliency BIB003 and homogeneity BIB001 are used to sub-sample the bag and emphasize interesting windows, reducing it to a small subset based on saliency and homogeneity scores. The combined attribute and object category classifiers are then learnt with mi-SVM, an SVM for multiple-instance learning, under the constraint that both classifiers give their maximum responses in windows where both the object and the attribute are present.
The method of Farhadi et al. BIB007 uses the parts-based model of BIB004 together with category detectors to localize objects and then describes them through the spatial relationships of their attributes. A new data set is constructed and annotated using Amazon's Mechanical Turk BIB002 . Objects are grouped into broader categories, localized, and then assigned attributes. Generalization is improved through efficient knowledge transfer, giving better performance at localization and naming as well as at inferring the pose, composition, and function of objects.
By learning from one set of animals and vehicles, many others can be localized, giving improved generalization across broad domains. Twenty-eight types of objects (animals and vehicles) as well as several types of parts and ten types of materials are annotated; the annotations include object segmentation, object part segmentation, category and part labels, masks for common materials, pose, and viewpoint. The aim of the data set is to study cross-category generalization with respect to localization and description. Detectors are trained on this data set for parts such as "wheels", superordinate categories such as "four-wheeled vehicle", and basic-level categories such as "car", using the parts-based and category-based models. To localize objects in an image, these trained part and category detectors are applied, votes are accumulated from confident detectors to obtain object candidates, and a graphical model is then used to describe each localized object.
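Returning to the within-category feature selection of Farhadi et al. BIB005 described at the start of this subsection, the following is a minimal sketch of the idea (function and variable names are illustrative and scikit-learn is assumed; this is not the authors' code): for each object category, a linear classifier separates instances with and without the attribute, the most heavily weighted features are kept, and the pooled "selected features" are then used to train the final attribute classifier.

```python
import numpy as np
from sklearn.svm import LinearSVC

def select_features_within_category(X, attr_labels, cat_labels, k_per_cat=50):
    """Pick features that best separate presence/absence of an attribute
    *within* each object category, then pool them across categories."""
    selected = set()
    for cat in np.unique(cat_labels):
        idx = cat_labels == cat
        y = attr_labels[idx]
        if len(np.unique(y)) < 2:          # attribute constant in this category
            continue
        clf = LinearSVC(C=1.0).fit(X[idx], y)
        weights = np.abs(clf.coef_).ravel()
        selected.update(np.argsort(weights)[-k_per_cat:].tolist())
    return sorted(selected)

# Final attribute classifier trained on the pooled "selected features":
# feats = select_features_within_category(X_train, has_spots, categories)
# attr_clf = LinearSVC().fit(X_train[:, feats], has_spots)
```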
|
RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Object recognition based on visual attributes extracted from text <s> In recent years, segmentation with graph cuts is increasingly used for a variety of applications, such as photo/video editing, medical image processing, etc. One of the most common applications of graph cut segmentation is extracting an object of interest from its background. If there is any knowledge about the object shape (i.e. a shape prior), incorporating this knowledge helps to achieve a more robust segmentation. In this paper, we show how to implement a star shape prior into graph cut segmentation. This is a generic shape prior, i.e. it is not specific to any particular object, but rather applies to a wide class of objects, in particular to convex objects. Our major assumption is that the center of the star shape is known, for example, it can be provided by the user. The star shape prior has an additional important benefit - it allows an inclusion of a term in the objective function which encourages a longer object boundary. This helps to alleviate the bias of a graph cut towards shorter segmentation boundaries. In fact, we show that in many cases, with this new term we can achieve an accurate object segmentation with only a single pixel, the center of the object, provided by the user, which is rarely possible with standard graph cut interactive segmentation. <s> BIB001 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Object recognition based on visual attributes extracted from text <s> We investigate the task of learning models for visual object recognition from natural language descriptions alone. The approach contributes to the recognition of fine-grain object categories, such as animal and plant species, where it may be difficult to collect many images for training, but where textual descriptions of visual attributes are readily available. As an example we tackle recognition of butterfly species, learning models from descriptions in an online nature guide. We propose natural language processing methods for extracting salient visual attributes from these descriptions to use as ‘templates’ for the object categories, and apply vision methods to extract corresponding attributes from test images. A generative model is used to connect textual terms in the learnt templates to visual attributes. We report experiments comparing the performance of humans and the proposed method on a dataset of ten butterfly categories. <s> BIB002 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Object recognition based on visual attributes extracted from text <s> Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. 
The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge. <s> BIB003 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Object recognition based on visual attributes extracted from text <s> The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in diering images. The algorithm was rst proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. <s> BIB004
|
Wang et al. BIB002 proposed to extract visual attributes, i.e. shape, color, and pattern, from purely textual descriptions given by an expert source. The method builds 'templates' for the visual attributes of 10 butterfly species; these templates contain slots for colors, patterns, and their locations on the butterfly. Natural Language Processing techniques are applied to text extracted from an expert source, i.e. eNature 1 , to fill these templates. The properties of butterflies described by the expert source are both detailed and discriminative, which makes attribute assignment more effective. To match the visual attributes of a butterfly image to those extracted from the text, the butterfly is separated from the background using the semi-automatic segmentation proposed in BIB001 . Two visual attributes identified as salient by the textual descriptions are (a) the dominant (wing) color and (b) colored spots. Candidate spots are detected using the Difference of Gaussian BIB004 , and SIFT BIB003 descriptors are extracted around each candidate; a spot classifier is then trained to differentiate spots from non-spots using hand-marked butterfly images, without using the category of the butterfly. The templates also give the color names of the wings and spots, and for each color name a probability distribution is learnt from the training images in L*a*b* space. A generative model predicts the category of a given butterfly image: priors over the dominant (wing) color and spot color are learnt from the templates, the likelihood of the image is evaluated for all categories, and the category that maximizes the likelihood is assigned to the image. Experiments are also carried out with native and non-native English speakers, who are shown the description of a certain category along with 10 images randomly selected from each category and asked to select the image that best matches the category described in the text. The same experiment is performed with the learnt model, and the results are comparable to those obtained from non-native English speakers.
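The final classification step described above can be illustrated with a short sketch (the names and the diagonal-Gaussian color model are assumptions for illustration, not the authors' implementation): each category's text-derived template supplies color models for the wings and spots, and the category maximizing the likelihood of the observed colors is returned.

```python
import numpy as np

def classify_butterfly(wing_color_feat, spot_color_feats, templates):
    """templates: {category: {"wing": (mean, var), "spot": (mean, var)}}
    Each (mean, var) pair is a diagonal Gaussian in L*a*b* space learnt for
    the color name that the textual template assigns to this category."""
    def log_gauss(x, mean, var):
        return -0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var))

    best, best_ll = None, -np.inf
    for cat, t in templates.items():
        ll = log_gauss(wing_color_feat, *t["wing"])
        ll += sum(log_gauss(s, *t["spot"]) for s in spot_color_feats)
        if ll > best_ll:
            best, best_ll = cat, ll
    return best   # category maximizing the likelihood of the observed colors
```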
|
RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Attributes based Image Retrieval <s> We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films. <s> BIB001 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Attributes based Image Retrieval <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB002
|
Image retrieval methods based on local features BIB001 [28] BIB002 extract local features from the query image and compare them with those of the database images; images with the largest number of matches are considered similar. However, these methods do not correspond to the way humans describe objects, namely by their appearance, shape, and relationships with other objects in the image. Image retrieval methods based on understandable object attributes are therefore the most satisfying for humans. In this section we present recent methods that use attributes for image retrieval.
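For contrast with the attribute-based methods reviewed below, here is a minimal sketch of the local-feature baseline just described, assuming OpenCV is available (the cited systems use more elaborate indexing such as visual words and inverted files): database images are ranked by the number of SIFT matches that pass Lowe's ratio test.

```python
import cv2

def match_count(query_img, db_img, ratio=0.75):
    """Number of SIFT matches between two grayscale images passing Lowe's ratio test."""
    sift = cv2.SIFT_create()
    _, q_desc = sift.detectAndCompute(query_img, None)
    _, d_desc = sift.detectAndCompute(db_img, None)
    if q_desc is None or d_desc is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(q_desc, d_desc, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

# Rank database images by match count; the top-ranked ones are deemed "similar":
# ranking = sorted(db_images, key=lambda img: match_count(query, img), reverse=True)
```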
|
RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Image retrieval based on classemes <s> We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework. <s> BIB001 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Image retrieval based on classemes <s> We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes. <s> BIB002 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Image retrieval based on classemes <s> A key ingredient in the design of visual object classification systems is the identification of relevant class specific aspects while being robust to intra-class variations. While this is a necessity in order to generalize beyond a given set of training images, it is also a very difficult problem due to the high variability of visual appearance within each class. In the last years substantial performance gains on challenging benchmark datasets have been reported in the literature. 
This progress can be attributed to two developments: the design of highly discriminative and robust image features and the combination of multiple complementary features based on different aspects such as shape, color or texture. In this paper we study several models that aim at learning the correct weighting of different features from training data. These include multiple kernel learning as well as simple baseline methods. Furthermore we derive ensemble methods inspired by Boosting which are easily extendable to several multiclass setting. All methods are thoroughly evaluated on object classification datasets using a multitude of feature descriptors. The key results are that even very simple baseline methods, that are orders of magnitude faster than learning techniques are highly competitive with multiple kernel learning. Furthermore the Boosting type methods are found to produce consistently better results in all experiments. We provide insight of when combination methods can be expected to work and how the benefit of complementary features can be exploited most efficiently. <s> BIB003
|
Torresani et al. have used attributes, which they call classemes, for image retrieval. Images are described by low-dimensional descriptors whose components are the outputs of category-specific classifiers applied to the image. Their approach is similar to BIB001 and BIB002 ; however, the images they use are not annotated and their attributes have no specific semantic meaning. Training images are obtained independently from a search engine for each category in the LSCOM ontology . Each category classifier is learnt with the LP-β kernel combiner BIB003 using 13 kernels for 13 types of features, and the vector of classifier outputs representing an image is called the classeme vector. These classemes are then shown to perform well in transfer learning by retrieving 'similar' images for an image of a novel category.
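A small illustrative sketch of classeme-style retrieval (assuming scikit-learn-like classifier objects; this is not Torresani et al.'s implementation): an image is summarized by the scores of a bank of pre-trained category classifiers, and retrieval is a nearest-neighbour search in that compact score space.

```python
import numpy as np

def classeme_vector(features, classifiers):
    """Describe an image by the outputs of pre-trained category classifiers."""
    return np.array([clf.decision_function(features[None, :])[0]
                     for clf in classifiers])

def retrieve(query_vec, db_vecs, top_k=10):
    """Rank database images by Euclidean distance in classeme space."""
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(dists)[:top_k]
```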
|
RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Image retrieval based on compact combination of classemes and fisher vectors <s> The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach. <s> BIB001 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Image retrieval based on compact combination of classemes and fisher vectors <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB002 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Image retrieval based on compact combination of classemes and fisher vectors <s> Attributes were recently shown to give excellent results for category recognition. In this paper, we demonstrate their performance in the context of image retrieval. First, we show that retrieving images of particular objects based on attribute vectors gives results comparable to the state of the art. Second, we demonstrate that combining attribute and Fisher vectors improves performance for retrieval of particular objects as well as categories. Third, we implement an efficient coding technique for compressing the combined descriptor to very small codes. Experimental results on the Holidays dataset show that our approach significantly outperforms the state of the art, even for a very compact representation of 16 bytes per image. Retrieving category images is evaluated on the “web-queries” dataset. We show that attribute features combined with Fisher vectors improve the performance and that combined image features can supplement text features. <s> BIB003
|
The attribute descriptors of Torresani et al. and Fisher vectors based on low-level image features are used by Douze et al. BIB003 for image retrieval. In addition to these descriptors, textual features are extracted from the tags and text surrounding the images. Both the Fisher vectors and the attribute descriptors are normalized and combined, and the combined descriptor performs on par with the state-of-the-art Fisher BIB001 and VLAD BIB002 descriptors while having somewhat lower dimensionality. Dimensionality reduction of both the Fisher vectors and the attribute descriptor is also evaluated, and comparatively low-dimensional versions of the descriptors are noted to give comparable results. Further compactness and performance gains are obtained by encoding the image descriptors, resulting in better performance than the state-of-the-art methods. This approach is shown in Figure 4 .
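A hedged sketch of the descriptor-combination step (the parameter choices and the use of PCA for compression are illustrative assumptions, not the exact pipeline of Douze et al. BIB003 ): the attribute and Fisher vectors are L2-normalized, concatenated, and projected to a lower dimension.

```python
import numpy as np
from sklearn.decomposition import PCA

def l2norm(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def combine(attribute_vecs, fisher_vecs, out_dim=64):
    """Normalize, concatenate and compress attribute + Fisher descriptors."""
    combined = np.hstack([np.apply_along_axis(l2norm, 1, attribute_vecs),
                          np.apply_along_axis(l2norm, 1, fisher_vecs)])
    pca = PCA(n_components=out_dim).fit(combined)
    return pca.transform(combined), pca   # compact descriptors + fitted projection
```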
|
RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Multi-query image retrieval based on weak attributes <s> We propose a novel approach for ranking and retrieval of images based on multi-attribute queries. Existing image retrieval methods train separate classifiers for each word and heuristically combine their outputs for retrieving multiword queries. Moreover, these approaches also ignore the interdependencies among the query terms. In contrast, we propose a principled approach for multi-attribute retrieval which explicitly models the correlations that are present between the attributes. Given a multi-attribute query, we also utilize other attributes in the vocabulary which are not present in the query, for ranking/retrieval. Furthermore, we integrate ranking and retrieval within the same formulation, by posing them as structured prediction problems. Extensive experimental evaluation on the Labeled Faces in the Wild(LFW), FaceTracer and PASCAL VOC datasets show that our approach significantly outperforms several state-of-the-art ranking and retrieval methods. <s> BIB001 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> Multi-query image retrieval based on weak attributes <s> Attribute-based query offers an intuitive way of image retrieval, in which users can describe the intended search targets with understandable attributes. In this paper, we develop a general and powerful framework to solve this problem by leveraging a large pool of weak attributes comprised of automatic classifier scores or other mid-level representations that can be easily acquired with little or no human labor. We extend the existing retrieval model of modeling dependency within query attributes to modeling dependency of query attributes on a large pool of weak attributes, which is more expressive and scalable. To efficiently learn such a large dependency model without overfitting, we further propose a semi-supervised graphical model to map each multiattribute query to a subset of weak attributes. Through extensive experiments over several attribute benchmarks, we demonstrate consistent and significant performance improvements over the state-of-the-art techniques. In addition, we compile the largest multi-attribute image retrieval dateset to date, including 126 fully labeled query attributes and 6,000 weak attributes of 0.26 million images. <s> BIB002
|
Using an approach similar to Siddiquie et al. BIB001 , Yu et al. BIB002 modeled the interdependence of the attributes in multi-attribute queries for large-scale image retrieval based on 'weak attributes'. They are called weak attributes because they may or may not be directly related to the query attributes. A retrieval model is learnt for a multi-attribute query using a max-margin training formulation on training data and is used to predict a subset of images for the given query. However, a query-adaptive selection of weak attributes is needed, because only a small subset of the large pool of weak attributes is related to the query attributes. To model this mapping, a two-layered semi-supervised graphical model is developed: a supervised layer is constructed from the training data with query-attribute labels, an unsupervised layer is constructed from the test data without query-attribute labels, and inference is performed iteratively on both layers in an alternating fashion. In this work, the authors also constructed the largest multi-attribute data set 2 to date, called the a-TRECVID data set. The method has been shown to outperform several state-of-the-art methods on several benchmark data sets.
|
RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> • Animals with Attributes Data set <s> We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes. <s> BIB001 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> • Animals with Attributes Data set <s> The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. <s> BIB002 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> • Animals with Attributes Data set <s> We propose an approach to find and describe objects within broad domains. We introduce a new dataset that provides annotation for sharing models of appearance and correlation across categories. We use it to learn part and category detectors. These serve as the visual basis for an integrated model of objects. 
We describe objects by the spatial arrangement of their attributes and the interactions between them. Using this model, our system can find animals and vehicles that it has not seen and infer attributes, such as function and pose. Our experiments demonstrate that we can more reliably locate and describe both familiar and unfamiliar objects, compared to a baseline that relies purely on basic category detectors. <s> BIB003 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> • Animals with Attributes Data set <s> Crowd-sourcing approaches such as Amazon's Mechanical Turk (MTurk) make it possible to annotate or collect large amounts of linguistic data at a relatively low cost and high speed. However, MTurk offers only limited control over who is allowed to particpate in a particular task. This is particularly problematic for tasks requiring free-form text entry. Unlike multiple-choice tasks there is no correct answer, and therefore control items for which the correct answer is known cannot be used. Furthermore, MTurk has no effective built-in mechanism to guarantee workers are proficient English writers. We describe our experience in creating corpora of images annotated with multiple one-sentence descriptions on MTurk and explore the effectiveness of different quality control strategies for collecting linguistic data using Mechanical MTurk. We find that the use of a qualification test provides the highest improvement of quality, whereas refining the annotations through follow-up tasks works rather poorly. Using our best setup, we construct two image corpora, totaling more than 40,000 descriptive captions for 9000 images. <s> BIB004 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> • Animals with Attributes Data set <s> It is common to use domain specific terminology - attributes - to describe the visual appearance of objects. In order to scale the use of these describable visual attributes to a large number of categories, especially those not well studied by psychologists or linguists, it will be necessary to find alternative techniques for identifying attribute vocabularies and for learning to recognize attributes without hand labeled training data. We demonstrate that it is possible to accomplish both these tasks automatically by mining text and image data sampled from the Internet. The proposed approach also characterizes attributes according to their visual representation: global or local, and type: color, texture, or shape. This work focuses on discovering attributes and their visual appearance, and is as agnostic as possible about the textual description. <s> BIB005 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> • Animals with Attributes Data set <s> We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state of the art, but fairly noisy, estimates of image content to produce even more pleasing results. Finally we introduce a new objective performance measure for image captioning. <s> BIB006
|
This data set 4 was created in the work of Lampert et al. BIB001 to provide a benchmark platform for transfer learning algorithms. It consists of 30475 images from 50 animal categories, marked with the 85 semantic attributes defined by Osherson and Wilkie . The images were collected from the image search engines Google, Flickr, Yahoo, and Microsoft using the animal names.
• Cross Category Object Recognition (CORE) Data set
This data set 5 was developed by Farhadi et al. BIB003 for localizing and describing objects. It contains 2780 images from the ImageNet data set BIB002 with 3192 objects from 28 categories. The annotations include object segmentation, category segmentation, part labels and parts, masks for common materials, viewpoint, and pose; they consist of 26695 parts of 71 types, 30046 attributes of 34 types, and 1052 material images of 10 types. This data set is not as challenging as PASCAL VOC because its purpose is to study the problem of cross-category generalization.
• UIUC PASCAL Sentence Data set
This data set 6 , created by Rashtchian et al. BIB004 , consists of images for which one-sentence descriptions were obtained from annotators using Amazon's Mechanical Turk. Annotation quality is controlled by recruiting only annotators who pass a qualification test.
• SBU Captioned Photo Dataset
This data set 7 , created by Ordonez et al. BIB006 , contains 1 million captioned images from Flickr. The captions are filtered so that they contain at least two words from a keyword list and at least one spatial preposition.
• Attributes Discovery Data set
This data set 8 , created by Berg et al. BIB005 , consists of 37705 annotated images from 4 shopping categories: ties, earrings, bags, and shoes. The images, collected from like.com, have associated textual descriptions.
|
RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> • PubFig: Public Figures Face Database <s> We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance. <s> BIB001 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> • PubFig: Public Figures Face Database <s> When glancing at a magazine, or browsing the Internet, we are continuously being exposed to photographs. Despite of this overflow of visual information, humans are extremely good at remembering thousands of pictures along with some of their visual details. But not all images are equal in memory. Some stitch to our minds, and other are forgotten. In this paper we focus on the problem of predicting how memorable an image will be. We show that memorability is a stable property of an image that is shared across different viewers. We introduce a database for which we have measured the probability that each picture will be remembered after a single view. We analyze image features and labels that contribute to making an image memorable, and we train a predictor based on global image descriptors. We find that predicting image memorability is a task that can be addressed with current computer vision techniques. Whereas making memorable images is a challenging task in visualization and photography, this work is a first attempt to quantify this useful quality of images. <s> BIB002 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> • PubFig: Public Figures Face Database <s> The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in diering images. The algorithm was rst proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. <s> BIB003 </s> RECENT PROGRESS IN ATTRIBUTES BASED LEARNING: A SURVEY <s> • PubFig: Public Figures Face Database <s> Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. 
We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system. <s> BIB004
|
This data set 9 , created by Kumar et al. BIB001 , is a large face data set consisting of 58797 images of 200 people collected from various sources on the internet. The images are taken in uncontrolled environments, with great variation in pose, illumination, expression, scene, and imaging conditions. The images are annotated with attributes selected from a list of 65 facial attributes using the Amazon Mechanical Turk.
These data sets provide annotated data for future research. Some future directions for attribute-based solutions are as follows.
• Attributes have been shown to predict the aesthetics of an image, as reviewed in Section 3. However, it would be interesting to know which semantic attributes make an image memorable. A first attempt at predicting image memorability has been made recently by BIB002 , who investigated its dependence on color, simple image features, object statistics, object semantics, and scene semantics. Further work could explore the dependence of image memorability on semantic attributes.
• Attribute-based visual learning and NLP techniques have been used to generate sentences from images. These sentences describe the objects, their attributes, and their relative spatial positions in the images, which means that only 'adjectives' and 'prepositions' are modeled to some extent in that work. The developed model lacks pose estimation and gives no information about even the simple poses of a person, i.e. whether a person is standing or sitting. Secondly, it does not describe the details of a person, i.e. gender, age, ethnicity, etc., as reported by BIB001 . Thirdly, 'adverbs', which say something about the 'adjectives', could be added; adverbs can be incorporated by using relative attributes. Finally, the clothing attributes proposed by BIB004 could be used to describe the clothing of people as well.
• Semantic attributes can be used to report abnormal things in images, such as accidents. Current methods are based only on local features and context; attributes can also capture abnormal semantics and can therefore give better results in such situations.
• Attributes can be used to identify facial abnormalities that need immediate attention and care, such as lines and wrinkles, facial volume loss, facial contouring, uneven pigmentation, and overall skin condition including ageing, loss of elasticity, reduction of scarring, and acne (active and scars) BIB003 . Kumar et al. BIB001 have listed 65 facial attributes, but they are used for face verification rather than to detect facial abnormalities.
|
A Survey of Service Oriented Architecture Systems Maintenance Approaches <s> RELATED ISSUES <s> The service-oriented architecture (SOA) has become today's reference architecture for modern distributed systems. As SOA concepts and technologies become more and more widespread and the number of services in operation within enterprises increases, the problem of managing these services becomes manifest. One of the most pressing needs we hear from customers is the ability to "discover", within a maze of services each offering functionality to (and in turn using functionality offered by) other services, which are the actual dependencies between such services. Understanding dependencies is essential to performing two functions: impact analysis (understanding which other services are affected when a service becomes unavailable) and service-level root-cause analysis (which is the opposite problem: under-standing the causes of a service failure by looking at the other services it relies on). Discovering dependencies is essential as the hope that the enterprise maintains documentation that describe these dependencies (on top of a complex maze and evolving implementations) is vane. Hence, we have to look for dependencies by observing and analyzing the interactions among services. In this paper we identify the importance of the problem of discovering dynamic dependencies among Web services and we propose a solution for the automatic identification of traces of dependent messages, based on the correlation of messages exchanged among services. We also discuss our lessons learned and results from applying the techniques to data related to HP processes and services. <s> BIB001 </s> A Survey of Service Oriented Architecture Systems Maintenance Approaches <s> RELATED ISSUES <s> In service-oriented architectures, everything is a service and everyone is a service provider. Web services (or simply services) are loosely coupled software components that are published, discovered, and invoked across the Web. As the use of Web service grows, in order to correctly interact with them, it is important to understand the business protocols that provide clients with the information on how to interact with services. In dynamic Web service environments, service providers need to constantly adapt their business protocols for reflecting the restrictions and requirements proposed by new applications, new business strategies, and new laws, or for fixing problems found in the protocol definition. However, the effective management of such a protocol evolution raises critical problems: one of the most critical issues is how to handle instances running under the old protocol when it has been changed. Simple solutions, such as aborting them or allowing them to continue to run according to the old protocol, can be considered, but they are inapplicable for many reasons (for example, the loss of work already done and the critical nature of work). In this article, we present a framework that supports service managers in managing the business protocol evolution by providing several features, such as a variety of protocol change impact analyses automatically determining which ongoing instances can be migrated to the new version of protocol, and data mining techniques inferring interaction patterns used for classifying ongoing instances migrateable to the new protocol. To support the protocol evolution process, we have also developed database-backed GUI tools on top of our existing system. 
The proposed approach and tools can help service managers in managing the evolution of ongoing instances when the business protocols of services with which they are interacting have changed. <s> BIB002 </s> A Survey of Service Oriented Architecture Systems Maintenance Approaches <s> RELATED ISSUES <s> Dynamic evolution is required in SOAs (Service Oriented Architecture) with complex business processes to adapt to the opening environment of Internet and ever changing requirement of user. This paper proposes an approach to identifying conversation dependency between business processes to facilitate the dynamic evolution. In our approach, a business process is represented as a directed graph, and the matrix method is used to identify the execution order of activities in the business process, which determines the conversation dependency. <s> BIB003 </s> A Survey of Service Oriented Architecture Systems Maintenance Approaches <s> RELATED ISSUES <s> Service Oriented Architecture (SOA) enables organizations to react to requirement changes in an agile manner and to foster the reuse of existing services. However, the dynamic nature of service oriented systems and their agility bear the challenge of properly understanding such systems. In particular, understanding the dependencies among services is a non trivial task, especially if service oriented systems are distributed over several hosts belonging to different departments of an organization. In this paper, we propose an approach to extract dynamic dependencies among web services. The approach is based on the vector clocks, originally conceived and used to order events in a distributed environment. We use the vector clocks to order service executions and to infer causal dependencies among services. We show the feasibility of the approach by implementing it into the Apache CXF framework and instrumenting the SOAP messages. We designed and executed two experiments to investigate the impact of the approach on the response time. The results show a slight increase that is deemed to be low in typical industrial service oriented systems. <s> BIB004 </s> A Survey of Service Oriented Architecture Systems Maintenance Approaches <s> RELATED ISSUES <s> The combination of service-oriented applications, with their run-time service binding, and mobile ad hoc networks, with their transient communication topologies, brings a new level of complex dynamism to the structure and behavior of software systems. This complexity challenges our ability to understand the dependence relationships among system components when performing analyses such as fault localization and impact analysis. Current methods of dynamic dependence discovery, developed for use in fixed networks, assume that dependencies change slowly. Moreover, they require relatively long monitoring periods as well as substantial memory and communication resources, which are impractical in the mobile ad hoc network environment. We describe a new method, designed specifically for this environment, that allows the engineer to trade accuracy against cost, yielding dynamic snapshots of dependence relationships. We evaluate our method in terms of the accuracy of the discovered dependencies. <s> BIB005
|
For traditional software the situation is relatively easy, but in SOA a service is developed by its provider to meet as many quality-of-service (QoS) and other nonfunctional requirements as possible, without knowing the end users or which applications will use it in the future. In this context several issues need to be discussed, as follows. The first issue: after changes have been made, how can we estimate their impact on the whole SOA system with respect to its functional and nonfunctional requirements? This is an important factor in evolving software systems . Returning to the service, the basic element considered in this paper, the evolution process has to take into account that a service exposes only its interface and its nonfunctional requirements or dependencies, which usually amounts to incomplete information for analyzing the service. The problem is therefore how to evolve a service that is accessible only through its interface, and how to assess the impact on the whole system after the final integration of the evolved service. The first approach in this context is that of [Basu et al. BIB001 ], who give a technique for discovering dynamic dependencies between services. Dependencies are identified by correlating the messages exchanged among services, taking into account that each service is exposed to other applications. The approach was applied to HP business data from an SOA-based system consisting of several services. [Bertolino et al. 14 ] give a black-box model that specifies the quality attributes of a service, including business requirements, using only its interface, by invoking its operations. A different, perturbation-based technique is used by [Romano, et al. BIB004 ] to expose active dependencies by monitoring how the services operate within the system. The active dependency approach creates an operational dependency graph for a specific combination of system and workload while requiring very few details of the internal implementation of the system. [Ryu et al. BIB002 ] give a technique that handles dependencies between services through completed conversations. The approach analyzes strategies for combining a service's nonfunctional changes with the SOA system; the conversations are obtained by examining the system's executions and are analyzed with a decision tree model. It determines the business protocol dependencies between the system and the evolving service, dividing them into forward and backward dependencies. [Novonty et al. BIB005 ] give a technique that handles dependencies between services in a web application: based on a proposed dependency model and a detailed analysis of the textual features of hyperlinks, a regular expression-based method for extracting linkage information is presented. Other techniques are based on formal or algorithmic approaches; for greater practicability, they use mathematical foundations to record the behavior of the SOA system for dependency-handling purposes. [Alda 18 ] is one such approach, which handles service dependencies in two steps. The first is the ability of the customer to use a group of maintained services that rely on a public service produced by the provider; this amounts to generalizing one service so that it contains a list of inherited services.
[Liu, Ma and Zhao BIB003 ] present another technique of this kind, which identifies conversation dependencies between business processes to facilitate the dynamic evolution of SOA-based systems. The business process is represented as a directed graph and a matrix method is used to determine the execution order of its activities, from which the conversation dependency is derived (a small illustrative sketch of this matrix-based idea is given at the end of this section). The second issue we want to discuss is the comprehension of a service, which corresponds to the "how" question: how can we identify the behavior of a service when it has evolved? We need approaches that help maintenance developers understand the operation and behavior of the evolved service, for example its functional and nonfunctional requirements. Since a service is accessible only through its interface, comprehension is very hard because the data required for this task cannot be accessed. Some approaches that address this issue are presented next; [Bertolino et al. 2009 20 ], explained above under the first issue, is one example. The third issue is to provide an approach that helps maintenance developers complete the functional and nonfunctional development of a service and conclude the process with testing, to verify whether or not the evolved service still meets the previous requirements. [...] also provide an approach in this context to test the functional requirements of a service based only on its interface. The system's maintainer sends to an intermediate provider some conversations between the system and the service, which have been recorded in logs. The intermediate provider then acquires coverage data from the service provider, showing to what extent the conversations cover the functionality of the service. The coverage data can help the SOA system's maintainer to: a. produce further test cases; b. become aware of when an adequacy criterion of its test cases is reached; c. update its test cases by adding tests that cover untested behavior; d. update its test cases by dropping tests that exercise the same case; e. update its test cases by collecting coverage data on successive versions of services.
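To make the matrix-based reasoning of [Liu, Ma and Zhao BIB003 ] concrete, the sketch below (a general illustration of the idea, not their algorithm; activity names are hypothetical) builds the adjacency matrix of a business process graph and computes the execution-order (reachability) relation between activities, from which conversation dependencies can be read off.

```python
import numpy as np

def execution_order(activities, edges):
    """R[i, j] is True if activity i must execute before activity j,
    i.e. j is reachable from i in the business process graph."""
    n = len(activities)
    idx = {a: i for i, a in enumerate(activities)}
    R = np.zeros((n, n), dtype=bool)
    for src, dst in edges:
        R[idx[src], idx[dst]] = True
    for k in range(n):                      # Warshall's transitive closure
        for i in range(n):
            if R[i, k]:
                R[i] |= R[k]
    return R

# Hypothetical process: receiveOrder -> checkStock -> shipOrder
# R = execution_order(["receiveOrder", "checkStock", "shipOrder"],
#                     [("receiveOrder", "checkStock"), ("checkStock", "shipOrder")])
# R[0, 2] is True: shipOrder conversationally depends on receiveOrder.
```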
|
A Recent Survey on the Applications of Genetic Programming in Image Processing <s> Introduction <s> Background on genetic algorithms, LISP, and genetic programming hierarchical problem-solving introduction to automatically-defined functions - the two-boxes problem problems that straddle the breakeven point for computational effort Boolean parity functions determining the architecture of the program the lawnmower problem the bumblebee problem the increasing benefits of ADFs as problems are scaled up finding an impulse response function artificial ant on the San Mateo trail obstacle-avoiding robot the minesweeper problem automatic discovery of detectors for letter recognition flushes and four-of-a-kinds in a pinochle deck introduction to biochemistry and molecular biology prediction of transmembrane domains in proteins prediction of omega loops in proteins lookahead version of the transmembrane problem evolutionary selection of the architecture of the program evolution of primitives and sufficiency evolutionary selection of terminals evolution of closure simultaneous evolution of architecture, primitive functions, terminals, sufficiency, and closure the role of representation and the lens effect. Appendices: list of special symbols list of special functions list of type fonts default parameters computer implementation annotated bibliography of genetic programming electronic mailing list and public repository. <s> BIB001 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> Introduction <s> The use of intelligent techniques in the manufacturing field has been growing the last decades due to the fact that most manufacturing optimization problems are combinatorial and NP hard. This paper examines recent developments in the field of evolutionary computation for manufacturing optimization. Significant papers in various areas are highlighted, and comparisons of results are given wherever data are available. A wide range of problems is covered, from job shop and flow shop scheduling, to process planning and assembly line balancing. <s> BIB002 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> Introduction <s> Genetic algorithms (GAs) and genetic programming (GP) are often considered as separate but related fields. Typically, GAs use a fixed length linear representation, whereas GP uses a variable size tree representation. This paper argues that the differences are unimportant. Firstly, variable length actually means variable length up to some fixed limit, so can really be considered as fixed length. Secondly, the representations and genetic operators of GA and GP appear different, however ultimately it is a population of bit strings in the computers memory which is being manipulated whether it is GA or GP which is being run on the computer. The important difference lies in the interpretation of the representation; if there is a one to one mapping between the description of an object and the object itself (as is the case with the representation of numbers), or a many to one mapping (as is the case with the representation of programs). This has ramifications for the validity of the No Free Lunch theorem, which is valid in the first case but not in the second. It is argued that due to the highly related nature of GAs and GP, that many of the empirical results discovered in one field will apply to the other field, for example maintaining high diversity in a population to improve performance. <s> BIB003
|
The sense of vision plays an important role in human perception. Human vision is restricted to the visual band of the electromagnetic spectrum, whereas machine vision covers nearly the whole spectrum, ranging from gamma rays to radio waves. Image processing (IP) emulates the capabilities of the human eye and brain in extracting features or segmenting regions; IP is therefore a challenging task in the sense that its algorithms have to be accurate, fast, reliable, and robust. Development in this field has accelerated with the decline in computer prices, because IP tasks depend on computer algorithms. Due to its diverse applications, IP cannot be completely distinguished from its closely related fields, such as computer vision and image analysis, because IP is involved in both of them at different levels. In a somewhat restricted definition, IP is a process whose inputs and outputs are images; this definition can be extended to encompass processes that extract features from images in order to identify individual objects. Different intelligent techniques, such as Artificial Immune Systems (AIS), Genetic Algorithms (GA), Artificial Neural Networks (ANN), Ant Colony Optimization (ACO), and Genetic Programming (GP), have been exploited in the field of IP. To differentiate these intelligent IP techniques from conventional mathematical and analytical methods, the term "Computational Intelligence" (CI) is usually used, reflecting their flexibility and adaptability. CI can find optimum solutions to computationally hard problems in a variety of domains BIB002 . This survey focuses on the applications of GP in IP. GP is one of the promising CI techniques; it belongs to the family of Evolutionary Computation (EC) techniques based on the Darwinian theory of evolution. GP evolves its output in the form of a tree, i.e. a computer program, and different programs are generated depending on the terminal and function sets used. Other paradigms do not produce solutions in the form of computer programs but instead involve specialized structures such as weight vectors for neural networks, coefficients for polynomials, or chromosome strings in the conventional GA BIB001 . GP comes under the umbrella of EC along with GA, Evolutionary Programming, and Evolution Strategies BIB003 . GP can be viewed as a special form of the common GA, which uses a fixed-length (though variants now exist) string of bits or real numbers to represent individuals called chromosomes. In contrast to GA, GP represents individuals as trees that can be evaluated to obtain results. Initially, a population of individuals is randomly generated using a terminal set (containing constants, argument-less functions, and variables) and a function set (e.g. +, -, /, if-else, for-next). Based on their fitness, individuals are given chances for reproduction and are varied via crossover and mutation. Crossover is used to search for an optimal solution, whereas mutation introduces rapid changes in the population and thus helps avoid getting trapped in local optima.
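To make the tree-based representation concrete, the sketch below (not taken from any of the surveyed works) builds a random GP individual from a small function set (+, -, *) and a terminal set of variables and constants, evaluates it, and applies a subtree mutation; a full system would add crossover, a fitness function, and selection.

```python
# Minimal sketch of a GP individual as an expression tree (nested lists).
import random
import operator

FUNCTIONS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 'y'] + [round(random.uniform(-1, 1), 2) for _ in range(3)]

def random_tree(depth=3):
    """Grow a random expression tree of the form [op, left, right]."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(FUNCTIONS))
    return [op, random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, env):
    """Evaluate a tree for a given assignment of the terminal variables."""
    if not isinstance(tree, list):
        return env.get(tree, tree)          # variable lookup or numeric constant
    op, left, right = tree
    return FUNCTIONS[op](evaluate(left, env), evaluate(right, env))

def mutate(tree, depth=2):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if not isinstance(tree, list) or random.random() < 0.2:
        return random_tree(depth)
    i = random.randint(1, 2)
    tree = list(tree)
    tree[i] = mutate(tree[i], depth)
    return tree

if __name__ == '__main__':
    ind = random_tree()
    print('individual:', ind)
    print('output at x=2, y=3:', evaluate(ind, {'x': 2, 'y': 3}))
    print('mutated   :', mutate(ind))
```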
The flexible nature of GP, its generality, its need for little or no preprocessing and for only limited knowledge about the size and shape of the solution, and its parallelizability have made it popular in applications such as data modeling, symbolic regression, image and signal processing, medicine, bioinformatics, financial trading, and industrial process control. This survey addresses GP's applicability in the field of IP and is organized as follows. The background of GP and IP is described in Section 2. The importance of the review is presented in Section 3. The similarities of the GP approaches in different categories of IP are given in Section 4 and further reviewed in Section 5. Section 6 discusses the advantages and disadvantages of using GP in IP. Section 7 presents guidelines for applying GP in IP. The comparison and discussion are provided in Section 8, while Section 9 concludes the article.
|
A Recent Survey on the Applications of Genetic Programming in Image Processing <s> a) Image Processing <s> Many seemingly different problems in machine learning, artificial intelligence, and symbolic processing can be viewed as requiring the discovery of a computer program that produces some desired output for particular inputs. When viewed in this way, the process of solving these problems becomes equivalent to searching a space of possible computer programs for a highly fit individual computer program. The recently developed genetic programming paradigm described herein provides a way to search the space of possible computer programs for a highly fit individual computer program to solve (or approximately solve) a surprising variety of different problems from different fields. In genetic programming, populations of computer programs are genetically bred using the Darwinian principle of survival of the fittest and using a genetic crossover (sexual recombination) operator appropriate for genetically mating computer programs. Genetic programming is illustrated via an example of machine learning of the Boolean 11-multiplexer function and symbolic regression of the econometric exchange equation from noisy empirical data. <s> BIB001 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> a) Image Processing <s> 1. Introduction. 2. Background on Genetic Programming. 3. Automatic Synthesis of Controllers. 4. Automatic Synthesis of Circuits. 5. Automatic Synthesis of Circuit Topology, Sizing, Placement, and Routing. 6. Automatic Synthesis of Antennas. 7. Automatic Synthesis of Genetic Networks. 8. Automatic Synthesis of Metabolic Pathways. 9. Automatic Synthesis of Parameterized Topologies for Controllers. 10. Automatic Synthesis of Parameterized Topologies for Circuits. 11. Automatic Synthesis of Parameterized Topologies with Conditional Developmental Operators for Circuits. 12. Automatic Synthesis of Improved Tuning Rules for PID Controllers. 13. Automatic Synthesis of Parameterized Topologies for Improved Controllers. 14. Reinvention of Negative Feedback. 15. Automated Re-Invention of Six Post-2000 Patented Circuits. 16. Problems for Which Genetic Programming May Be Well Suited. 17. Parallel Implementation and Computer Time. 18. Historical Perspective on Moore's Law and the Progression of Qualitatively More Substantial Results Produced by Genetic Programming. 19. Conclusion. Appendix A: Functions and Terminals. Appendix B: Control Parameters. Appendix C: Patented or Patentable Inventions Generated by Genetic Programming. Bibliography. <s> BIB002
|
An image is a visual representation of an object produced on a surface. Before the invention of paper, images were produced on stone and other materials. On a computer, an image is displayed on a monitor, a liquid crystal display, or a multimedia projector. For storage, however, images are defined as two-dimensional matrices of pixel (picture-element) values. These pixel values are the intensity or gray levels of the image and can be represented in the form of a function F(x,y), where x and y are spatial coordinates. If the intensity values within the image are finite discrete quantities, then the image is a digital image. A pixel of size one byte (8 bits) can represent 256 intensity values, from 0 (black) to 255 (white). The values in between this range give different shades, as shown in Figure 1. When the values of such a representation are modified in some way, we call it IP. For example, enhancing the image quality, removing noise, segmenting specific parts, and comparing with other images all involve processing the image in some way. For the image in Figure 1, if we want to change the center pixel to black, we just change its value from 78 to 0.

GP

GP is one of the promising EC techniques and is viewed as a specialization of GA. GP and GA differ mainly in their representation schemes. GA uses strings of bits, integers, or real numbers to represent individuals, whereas GP mainly represents individuals as trees and is well suited for mapping functions, model development, nonlinear regression, and other related problems. Koza has pointed out various interesting problems where GP produced human-competitive results BIB002 . GP is a domain-independent method and can solve high-level problems automatically . Moreover, the pioneering works of Koza, Langdon, Poli, and Banzhaf have boosted research in the field of GP BIB001 . Figure 2 depicts the genetic search cycle of EC techniques: an initial population is generated, and then the fittest individuals are selected as parents based on some evaluation criterion. In the next step, genetic operators (e.g. crossover, mutation, reproduction) are applied to produce offspring. In the last step, the fittest individuals are selected as the population for the next generation. The search cycle continues generation after generation until a termination criterion is fulfilled, and the best candidate is taken as the fittest individual. Figure 3 depicts the basic flow of GP: an initial population is generated randomly and, until the termination criterion is satisfied, parents are selected randomly from the population and the genetic operators are applied. After the application of the genetic operators, the selected individuals are inserted into the next generation. This process is repeated until the termination criterion is met and, finally, the best-evolved GP tree is saved.
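The pixel-level view described above can be illustrated with a few lines of NumPy. The 3x3 intensity values below are made up for illustration; only the centre value 78 and its change to 0 follow the worked example in the text.

```python
# A grayscale image is a 2D array of intensities in [0, 255]; "processing"
# means modifying those values.
import numpy as np

image = np.array([[120, 131, 140],
                  [118,  78, 133],
                  [121, 125, 130]], dtype=np.uint8)

# Setting the centre pixel to 0 turns it black, as in the example in the text.
image[1, 1] = 0

# A simple enhancement: linearly stretch the intensities to the full range.
stretched = ((image - image.min()) / (image.max() - image.min()) * 255).astype(np.uint8)
print(image)
print(stretched)
```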
|
A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Denoising <s> Graphics processor units are fast, inexpensive parallel computing devices. Recently there has been great interest in harnessing this power for various types of scientific computation, including genetic programming. In previous work, we have shown that using the graphics processor provides dramatic speed improvements over a standard CPU in the context of fitness evaluation. In this work, we use Cartesian Genetic Programming to generate shader programs that implement image filter operations. Using the GPU, we can rapidly apply these programs to each pixel in an image and evaluate the performance of a given filter. We show that we can successfully evolve noise removal filters that produce better image quality than a standard median filter. <s> BIB001 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Denoising <s> Generally, the impulse noise filtering schemes use all pixels within a neighborhood and increase the size of neighborhood with the increase in noise density. However, the estimate from all pixels within neighborhood may not be accurate. Moreover, the larger window may remove edges and fine details as well. In contrast, we propose a novel impulse noise removal scheme that emphasizes on few noise-free pixels and small neighborhood. The proposed scheme searches noise-free pixels within a small neighborhood. If at least three pixels are not found, then the noisy pixel is left unchanged in current iteration. This iterative process continues until all noisy pixels are replaced with estimated values. In order to estimate the optimal value of the noisy pixel, genetic programming-based estimator is developed. The estimator (function) is composed of useful pixel information and arithmetic functions. Experimental results show that the proposed scheme is capable of removing impulse noise effectively while preserving the fine image details. Especially, our approach has shown effectiveness against high impulse noise density. <s> BIB002 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Denoising <s> The coefficients in previous local filters are mostly heuristically optimized, which leads to artifacts in the denoised image when the optimization is not adaptive enough to the image content. Compared to parametric filters, learning-based denoising methods are more capable of tackling the conflicting problem of noise reduction and artifact suppression. In this paper, a patch-based Evolved Local Adaptive (ELA) filter is proposed for natural image denoising. In the training process, a patch clustering is used and the genetic programming (GP) is applied afterwards for determining the optimal filter (linear or nonlinear in a tree structure) for each cluster. In the testing stage, the optimal filter trained beforehand by GP will be retrieved and employed on the input noisy patch. In addition, this adaptive scheme can be used for different noise models. Extensive experiments verify that our method can compete with and outperform the state-of-the-art local denoising methods in the presence of Gaussian or salt-and-pepper noise. Additionally, the computational efficiency has been improved significantly because of the separation of the offline training and the online testing processes. 
Highlights: Genetic programming (GP) is used for removing Gaussian and impulse noise. Patch clustering is used to classify the noisy input patches. This adaptive scheme can be used for different noise models. Our method generates the state-of-the-art performance at low or medium noise levels. <s> BIB003 </s> Abstract Composite filters based on mathematical morphological operators (MMO) are getting considerable attraction in image denoising. Most of such approaches depend on pre-fixed combination of MMO. In this paper, we proposed a genetic programming (GP) based approach for denoising magnetic resonance images (MRI) that evolves an optimal composite morphological supervised filter (F_OCMSF) by combining the gray-scale MMO. The proposed method is divided into three modules: preprocessing module, GP module, and evaluation module. In preprocessing module, the required components for the development of the proposed GP based filter are prepared. In GP module, F_OCMSF is evolved through evaluating the fitness of several individuals over certain number of generations. Finally, the evaluation module provides the mechanism for testing and evaluating the performance of the evolved filter. The proposed method does not need any prior information about the noise variance. The improved performance of the developed filter is investigated using the standard MRI datasets and its performance is compared with previously proposed methods. Comparative analysis demonstrates the superiority of the proposed GP based scheme over the existing approaches. <s> BIB004
|
Many researchers have used GP as an effective strategy for removing noise from images. Chaudhry et al. proposed GP for restoring degraded images by evolving an optimal function that estimated pixel intensity. The proposed technique was a hybrid of GP and fuzzy logic, and it denoised gray-level images corrupted by Gaussian noise in the spatial domain. First, a fuzzy-logic-based mapping function was used to decide whether a pixel needed to be corrected, and then GP was applied to evolve an optimal pixel-intensity estimation function. Another denoising method based on local adaptive learning (for Gaussian and salt & pepper noise) was proposed by Yan et al. BIB003 . In the training stage, clustering was used to classify the image based on similar local structures, and then GP was applied to determine an optimal filter (itself a tree-like individual) for each cluster. The function set was composed of Gaussian and bilateral filters as well as arithmetic operators. An increased PSNR was reported for the proposed method in comparison to other local learning-based methods such as K-clustering with Singular Value Decomposition . On the other hand, to denoise Magnetic Resonance Imaging (MRI) images corrupted by Rician noise, an optimal composite morphological filter was generated via GP BIB004 . A GP individual performed morphological operations on the corrupted image to obtain an observed image. The RMSE between the feature sets of the degraded image and the observed image was used to calculate the fitness of each individual. For evaluation, a noisy image was filtered by the developed filter to obtain an estimated image. Moreover, the proposed method was also compared with other techniques in terms of RMSE and PSNR . Another work for removing mixed/Gaussian noise using GP was proposed by Petrovic et al. . A GP-based two-step filter, in which each step has its own estimator, was used so that noisy pixels missed by the first detector could be removed by the second. PSNR was used for evaluating the filter. Harding et al. BIB001 used Cartesian GP to evolve image filters and evaluated their fitness functions on a GPU. The average error on each pixel was used as the fitness score. Majid et al. BIB002 employed GP to estimate optimal values of noisy pixels for impulse noise removal. Noisy pixels were first detected using directional derivatives, and their values were then estimated by a GP estimator that incorporated noise-free pixels. Feature vectors were constructed using noisy pixels with at least three neighboring noise-free pixels.
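Most of the denoising works above score candidate filters by comparing their output with a clean reference image, typically via PSNR or RMSE. The sketch below illustrates such a fitness evaluation; the median filter merely stands in for an evolved GP individual, and the noise model and image are synthetic.

```python
# Fitness of a candidate denoising filter measured as PSNR against ground truth.
import numpy as np
from scipy.ndimage import median_filter

def psnr(clean, restored, peak=255.0):
    mse = np.mean((clean.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def fitness(candidate_filter, noisy, clean):
    """Higher PSNR of the restored image means a fitter individual."""
    return psnr(clean, candidate_filter(noisy))

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
    noisy = clean.copy()
    mask = rng.random(clean.shape) < 0.1            # 10% salt-and-pepper noise
    noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())
    print('fitness of a stand-in filter:',
          round(fitness(lambda img: median_filter(img, size=3), noisy, clean), 2))
```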
|
A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Registration <s> Genetic Improvement (GI) is shown to optimise, in some cases by more than 35percent, a critical component of healthcare industry software across a diverse range of six nVidia graphics processing units (GPUs). GP and other search based software engineering techniques can automatically optimise the current rate limiting CUDA parallel function in the NiftyReg open source C++ project used to align or register high resolution nuclear magnetic resonance NMRI and other diagnostic NIfTI images. Future Neurosurgery techniques will require hardware acceleration, such as GPGPU, to enable real time comparison of three dimensional in theatre images with earlier patient images and reference data. With millimetre resolution brain scan measurements comprising more than ten million voxels the modified kernel can process in excess of 3 billion active voxels per second. <s> BIB001 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Registration <s> Image registration (IR) is a fundamental task in image processing for matching two or more images of the same scene taken at different times, from different viewpoints and/or by different sensors. Due to the enormous diversity of IR applications, automatic IR remains a challenging problem to this day. A wide range of techniques has been developed for various data types and problems. These techniques might not handle effectively very large images, which give rise usually to more complex transformations, e.g., deformations and various other distortions. In this paper we present a genetic programming (GP)- based approach for IR, which could offer a significant advantage in dealing with very large images, as it does not make any prior assumptions about the transformation model. Thus, by incorporating certain generic building blocks into the proposed GP framework, we hope to realize a large set of specialized transformations that should yield accurate registration of very large images. <s> BIB002 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Registration <s> Abstract In feature-based methods, outlier removal plays an important role in attaining a reasonable accuracy for image registration. In this paper, we propose a genetic programming (GP) based adaptive method for outlier removal. First, features are extracted through the scale-invariant feature transform (SIFT) from the reference and sensed images which were initially matched using Euclidean distance. The classification of feature points into inliers and outliers is done in two stages. In the first stage, feature vectors are computed using various distance and angle information. Feature points are categorized into three groups; inliers, outliers and non-classified feature (NCF) points. In the second stage, a GP-based classifier is developed to classify NCF points into inliers and outliers. The GP-based function takes features as an input feature vector and provides a scalar output by combining features with arithmetic operations. Finally, registration is done by eliminating the outliers. The effectiveness of the proposed outlier removal method is analyzed through the classification and positional accuracy. The experimental results show a considerable improvement in the registration accuracy. <s> BIB003
|
Image registration involves matching different images of the same scene that are captured at different times, from different viewpoints, or by different sensors. One objective of image registration is to align the images in such a way that high-level processing can be performed. Only a few researchers have employed GP for image registration. Chicotay et al. BIB002 presented a GP-based approach for registering large images, in which a transformation T maps every pixel p(x, y) of the input image to a pixel p(x', y') in the coordinate system of the reference image. Mutual Information (MI) was used as a measure to search for a function that produced the highest value when there was maximum overlap between the reference and the transformed image. Root mean square error was used to evaluate each individual. A comparison was made with Scale-Invariant Feature Transform (SIFT) [26] based image registration. Although the results were not as good as those of the SIFT-based technique, they were still comparable, considering that, unlike the SIFT-based technique, the proposed technique did not make any assumptions about the transformation model in order to initiate or bound the registration process. The function set included transformation functions such as sine, cosine, power, rotation, and radial basis functions. Langdon et al. BIB001 employed GP optimization to improve the Graphics Processing Unit (GPU) based implementation of the NiftyReg software. The optimization was performed for six different graphics cards. NiftyReg is open-source software for medical image registration, implemented using the Compute Unified Device Architecture (CUDA). GP with a linear, variable-length genome specified changes to the CUDA kernel. Two CUDA parameters (compute level and block size) were also tuned, along with post-evolution bloat removal. Each genome was saved as a text line. Crossover and mutation were prohibited from moving code lines into parts of the kernel where the variables they contain would go out of scope. For each generation, a new image was created randomly and each GPU kernel was run on it. Each answer produced by a GPU kernel was checked against that of the CPU, and its runtime was compared with that of the original kernel run on the same hardware. Outliers within data significantly degrade the performance of a classifier. To overcome such degradation in the performance of an image-registration-related classifier, Lee et al. BIB003 proposed a novel GP-based method. In their method, feature extraction was first performed using the Scale Invariant Feature Transform (SIFT) [26] . The features were then classified into three categories, i.e. inliers, outliers, and non-classified features. The inliers and outliers extracted in the first phase were then provided as training data to GP, which categorized the non-classified features into two groups, i.e. inliers and outliers. After the outliers within the dataset had been identified, they were all removed, and image registration was performed on the preprocessed data (after outlier removal). A block diagram of Lee's technique is shown in Figure 6 .
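As a rough illustration of the mutual-information fitness used in the registration work above, the sketch below computes MI from the joint intensity histogram of a reference image and a crudely transformed one; the histogram binning and the stand-in shift transform are assumptions, not the authors' implementation.

```python
# Mutual information between two images from their joint intensity histogram.
import numpy as np

def mutual_information(ref, moved, bins=32):
    joint, _, _ = np.histogram2d(ref.ravel(), moved.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

if __name__ == '__main__':
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, size=(128, 128)).astype(float)
    shifted = np.roll(ref, shift=3, axis=1)        # crude stand-in for a transform
    print('MI(ref, ref)     =', round(mutual_information(ref, ref), 3))
    print('MI(ref, shifted) =', round(mutual_information(ref, shifted), 3))
```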
|
A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Compression <s> We describe a genetic programming system which learns nonlinear predictive models for lossless image compression. Sexpressions which represent nonlinear predictive models are learned, and the error image is compressed using a Huffman encoder. We show that the proposed system is capable of achieving compression ratios superior to that of the best known lossless compression algorithms, although it is significantly slower than standard al- <s> BIB001 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Compression <s> Edge detection is a subjective task. Traditionally, a moving window approach is used, but the window size in edge detection is a tradeoff between localization accuracy and noise rejection. An automatic technique for searching a discriminated pixel's neighbors to construct new edge detectors is appealing to satisfy different tasks. In this paper, we propose a genetic programming (GP) system to automatically search pixels (a discriminated pixel and its neighbors) to construct new low-level subjective edge detectors for detecting edges in natural images, and analyze the pixels selected by the GP edge detectors. Automatically searching pixels avoids the problem of blurring edges from a large window and noise influence from a small window. Linear and second-order filters are constructed from the pixels with high occurrences in these GP edge detectors. The experiment results show that the proposed GP system has good performance. A comparison between the filters with the pixels selected by GP and all pixels in a fixed window indicates that the set of pixels selected by GP is compact but sufficiently rich to construct good edge detectors. <s> BIB002
|
The increasing use of images and their storage requirements created the need to compress them. The basic idea behind image compression is to remove redundant bits and thus encode the information contained in the image so that, when restored, the image can be recovered from the encoded data without considerable loss. Restoring the exact image is important in cases such as medical diagnosis and security forensics. Transmitting images over the internet also requires compression in order to consume less bandwidth. Fukunaga et al. BIB001 described a GP system for lossless image compression, which learned a nonlinear predictive model that predicted each pixel from its neighboring pixels. Four neighboring pixels were used as terminals for GP. For each image, a unique model was generated and represented as an s-expression. The high computational cost of evaluating the s-expression for each pixel in the image was overcome by removing function-call overhead through the Genome Compiler, which translates s-expressions into efficient SPARC machine code before execution. The proposed method was compared with other compression techniques, including CALIC, LOCO-I, and gzip, and was reported to achieve superior compression, though it was slow. Figure 7 depicts the steps of Fukunaga's method BIB001 . In another technique, Fu et al. BIB002 used a compressed form of genotypic representation for GA, termed compressed GA (cGA). For lossless compression of medical images, a linear GP driven by cGA found a transformation, represented as T(d), which improved the compression ratio of the data d. Moreover, this transformation could remove certain types of redundancy. The terminal set comprised constants, while the function set included four transformation functions. These transformations acted as preprocessing before the actual compression and achieved higher compression than the standard GA-based technique.
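The predictive-coding idea behind the lossless compression work above can be sketched as follows: a predictor (here a simple average of the left and top neighbours, standing in for the evolved GP model) estimates each pixel, and only the residual is handed to the entropy coder; the lower residual entropy indicates the achievable compression gain. The synthetic image and the predictor are illustrative assumptions.

```python
# Predictive coding: encode prediction residuals instead of raw pixel values.
import numpy as np

def entropy(values):
    """Shannon entropy in bits per symbol of an integer array."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def residual_image(img, predictor):
    res = np.zeros_like(img, dtype=np.int32)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            left = int(img[y, x - 1]) if x > 0 else 0
            top = int(img[y - 1, x]) if y > 0 else 0
            res[y, x] = int(img[y, x]) - predictor(left, top)
    return res

if __name__ == '__main__':
    # A smooth synthetic image; natural images are similarly predictable.
    yy, xx = np.mgrid[0:64, 0:64]
    img = ((yy + xx) * 2 % 256).astype(np.int32)
    res = residual_image(img, lambda left, top: (left + top) // 2)
    print('raw entropy     :', round(entropy(img), 2), 'bits/pixel')
    print('residual entropy:', round(entropy(res), 2), 'bits/pixel')
```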
|
A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Segmentation <s> This paper describes an approach to using GP for image analysis based on the idea that image enhancement, feature detection and image segmentation can be re-framed as filtering problems. GP can discover efficient optimal filters which solve such problems but in order to make the search feasible and effective, terminal sets, function sets and fitness functions have to meet some requirements. We describe these requirements and we propose terminals, functions and fitness functions that satisfy them. Experiments are reported in which GP is applied to the segmentation of the brain in medical images and is compared with artificial neural nets. <s> BIB001 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Segmentation <s> This paper describes a texture segmentation method using genetic programming (GP), which is one of the most powerful evolutionary computation algorithms. By choosing an appropriate representation texture, classifiers can be evolved without computing texture features. Due to the absence of time-consuming feature extraction, the evolved classifiers enable the development of the proposed texture segmentation algorithm. This GP based method can achieve a segmentation speed that is significantly higher than that of conventional methods. This method does not require a human expert to manually construct models for texture feature extraction. In an analysis of the evolved classifiers, it can be seen that these GP classifiers are not arbitrary. Certain textural regularities are captured by these classifiers to discriminate different textures. GP has been shown in this study as a feasible and a powerful approach for texture classification and segmentation, which are generally considered as complex vision tasks. <s> BIB002 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Segmentation <s> In this study, we propose a fully automatic algorithm to detect and segment corpora lutea (CL) using genetic programming and rotationally invariant local binary patterns. Detection and segmentation experiments were conducted and evaluated on 30 images containing a CL and 30 images with no CL. The detection algorithm correctly determined the presence or absence of a CL in 93.33 % of the images. The segmentation algorithm achieved a mean (±standard deviation) sensitivity and specificity of 0.8693 ± 0.1371 and 0.9136 ± 0.0503, respectively, over the 30 CL images. The mean root mean squared distance of the segmented boundary from the true boundary was 1.12 ± 0.463 mm and the mean maximum deviation (Hausdorff distance) was 3.39 ± 2.00 mm. The success of these algorithms demonstrates that similar algorithms designed for the analysis of in vivo human ovaries are likely viable. <s> BIB003 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Segmentation <s> Edge detection is a subjective task. Traditionally, a moving window approach is used, but the window size in edge detection is a tradeoff between localization accuracy and noise rejection. An automatic technique for searching a discriminated pixel's neighbors to construct new edge detectors is appealing to satisfy different tasks. 
In this paper, we propose a genetic programming (GP) system to automatically search pixels (a discriminated pixel and its neighbors) to construct new low-level subjective edge detectors for detecting edges in natural images, and analyze the pixels selected by the GP edge detectors. Automatically searching pixels avoids the problem of blurring edges from a large window and noise influence from a small window. Linear and second-order filters are constructed from the pixels with high occurrences in these GP edge detectors. The experiment results show that the proposed GP system has good performance. A comparison between the filters with the pixels selected by GP and all pixels in a fixed window indicates that the set of pixels selected by GP is compact but sufficiently rich to construct good edge detectors. <s> BIB004
|
The main purpose of image segmentation is to partition an image into regions corresponding to different gray levels or structures. If the pixels belonging to a region are homogeneous, they are assigned the same label; otherwise, different labels are assigned. In other words, a good segmentation criterion is to look for homogeneity within regions and heterogeneity between regions BIB001 . Developing a comprehensive way to check the accuracy of image segmentation algorithms remains a major problem. In the field of IP, GP has been widely used for segmenting regions of interest from images. One line of work used GP to combine different and unrelated evaluation measures: three evaluation measures, based on the layout of entropy, similarity within regions, and disparity between regions, were selected to create a composite evaluation measure. In another technique, Song et al. BIB002 used GP to evolve automatic texture classifiers, which were then used for texture segmentation. As opposed to conventional methods, their method does not require the manual construction of models to extract texture features, because the classifier's input is raw pixels instead of features. Moreover, the conventional methods are not universally applicable, as they rely on knowledge of the nature of the texture, which may differ from region to region and image to image. Dong et al. BIB003 attempted to categorize the texture within an image as either Corpora Lutea (CL) (i.e. an endocrine gland that is generated from the follicular tissue after ovulation) or non-CL, based on local neighborhoods. A 16-bit invariant uniform local binary pattern (LBP) histogram of the pixels in the neighborhood was formed to represent the texture description. The feature vector was formed from the histogram bin values, which were fed as input to GP. GP was used to train a classifier to distinguish between CL texture and other textures. For segmentation, a sliding window was used to scan the image in raster order, and each image pixel in the window was assigned a class label by the GP classifier; majority voting was used in case of multiple labels. For CL detection, region-based properties were computed for each output region of an image, and a GP classifier was learned from these properties. Finally, this classifier was used to decide whether a segmented region of an image is a CL or not. To address the tradeoff between localization accuracy (requiring a small window) and noise rejection (requiring a large window) posed by selecting the window size, Fu et al. BIB004 used GP to automatically search discriminating pixels and their neighbors to construct edge detectors. Rather than using a set of pixels from a moving window, GP used the full image. The selected pixels were then used to form linear and nonlinear filters for detecting edges. The parameters of these filters were estimated via a hybrid of Particle Swarm Optimization (PSO) and Differential Evolution. A shifting function, representing four directional shifting functions, was included in the function set. A comparison was made with other detectors and showed good results for the GP-based detectors; the F-measure was employed to evaluate the accuracy of the detectors. Similarly, another GP-based image segmentation technique for extracting regions of interest from the background was proposed by Liang et al. . Feature selection using GP was used to find the effective features that helped to segment out the desired region of interest, and three different types of GP-based feature selection methods were proposed. In all three methods, the fitness function within GP was based on either a single-objective or a multi-objective formulation. Their experimental results showed that the GP-based feature selection that used the multi-objective fitness function improved the classifier performance and also reduced the computational complexity. A block diagram of Liang's technique is shown in Figure 8 .
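A minimal sketch of the sliding-window segmentation scheme used in several of the works above is given below; the window size and the trivial mean-intensity rule standing in for the trained GP classifier are assumptions for illustration only.

```python
# Sliding-window segmentation: a classifier labels each pixel from its neighborhood.
import numpy as np

def segment(image, classify, window=5):
    pad = window // 2
    padded = np.pad(image, pad, mode='reflect')
    labels = np.zeros(image.shape, dtype=np.uint8)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            patch = padded[y:y + window, x:x + window]
            labels[y, x] = classify(patch)         # raster-order scan
    return labels

if __name__ == '__main__':
    rng = np.random.default_rng(2)
    img = rng.normal(60, 10, size=(40, 40))
    img[10:30, 10:30] += 80                        # a brighter "region of interest"
    seg = segment(img, lambda patch: 1 if patch.mean() > 100 else 0)
    print('foreground pixels found:', int(seg.sum()))
```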
|
A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Retrieval <s> This paper examines the feasibility of an approach to image retrieval from a heterogeneous collection based on texture. For each texture of interest (T), a T-vs-other classifier is evolved for small n times n windows using genetic programming. The classifier is then used to segment the images in the collection. If there is a significant contiguous area of T in an image, it is considered to contain that texture for retrieval purposes. We have experimented with sky and grass textures in the Corel Volume 12 image set. Experiments with a single image indicate that classifiers for the two textures can be learned to a high accuracy. Experiments with a test set of 714 Corel images gave a retrieval accuracy of 84% for both sky and grass textures. These results suggest that the use of texture could enhance retrieval accuracy in content based image retrieval systems <s> BIB001 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Retrieval <s> This paper presents two content-based image retrieval frameworks with relevance feedback based on genetic programming. The first framework exploits only the user indication of relevant images. The second one considers not only the relevant but also the images indicated as non-relevant. Several experiments were conducted to validate the proposed frameworks. These experiments employed three different image databases and color, shape, and texture descriptors to represent the content of database images. The proposed frameworks were compared, and outperformed six other relevance feedback methods regarding their effectiveness and efficiency in image retrieval tasks. <s> BIB002 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Retrieval <s> This paper presents a framework for multimodal retrieval with relevance feedback based on genetic programming. In this supervised learning-to-rank framework, genetic programming is used for the discovery of effective combination functions of (multimodal) similarity measures using the information obtained throughout the user relevance feedback iterations. With these new functions, several similarity measures, including those extracted from different modalities (e.g., text, and content), are combined into one single measure that properly encodes the user preferences. This framework was instantiated for multimodal image retrieval using visual and textual features and was validated using two image collections, one from the Washington University and another from the ImageCLEF Photographic Retrieval Task. For this image retrieval instance several multimodal relevance feedback techniques were implemented and evaluated. The proposed approach has produced statistically significant better results for multimodal retrieval over single modality approaches and superior effectiveness when compared to the best submissions of the ImageCLEF Photographic Retrieval Task 2008. <s> BIB003 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Retrieval <s> Web image retrieval is a research area that is receiving a lot of attention in the last few years due to the growing availability of images on the Web. 
Since content-based image retrieval is still considered very difficult and expensive in the Web context, most current large-scale Web image search engines use textual descriptions to represent the content of the Web images. In this paper we present a study about the usage of genetic programming (GP) to address the problem of image retrieval on the World Wide Web by using textual sources of evidence and textual queries. We investigate several parameter of choices related to the usage of a framework previously proposed by us. The proposed framework uses GP to provide a good solution to combine multiple textual sources of evidence associated with the Web images. Experiments performed using a collection with more than 195,000 images extracted from the Web showed that our evolutionary approach outperforms the best baseline we used with gains of 22.36 % in terms of mean average precision. <s> BIB004
|
Due to the decline in the prices of image acquisition devices and the development of efficient IP algorithms, image databases are growing rapidly, so it has become essential to design effective and fast methods for retrieving desired images from such large collections. There are different techniques for image retrieval, such as associating metadata (tags, keywords) with the images, or content-based retrieval, which relies on the similarity between the contents (or features) of a query image and the desired images. Shapes, textures, colors, etc. can be used as features for image retrieval tasks. In the technique of Torres et al. , GP was applied to create a merged similarity function for content-based image retrieval. To improve a content-based system, features can be combined from multiple feature vectors, or weights can be assigned based on image similarities; when the combination becomes more complex, GP is used to combine the image similarities nonlinearly. The resulting composite descriptor is simply a combination of pre-defined descriptors: it takes the similarity values obtained from each descriptor and combines them to produce a more effective similarity function. Ciesielski et al. BIB001 used a segmentation algorithm based on texture-versus-all-else classifiers evolved by GP to retrieve images from a large heterogeneous collection. Calumby et al. BIB003 used GP to iteratively combine multimodal similarity measures, such as those extracted from text and content, into new similarity functions that fit the user's preferences. For each discovered function, the evaluation function returned a measure of quality based on how well the training-set objects were ranked by that function. The proposed method showed higher effectiveness when compared to the best submissions of the ImageCLEF Photographic Retrieval Task . A somewhat similar framework was also described by Ferreira et al. BIB002 . Saraiva et al. BIB004 , on the other hand, used GP to combine multiple textual sources of evidence, such as the image file name, HTML content, page title, alt tag, keywords, description, and text passages around the image, to rank web-based image retrievals.
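The composite-descriptor idea can be sketched as follows: per-descriptor similarities are combined by an evolved expression into a single score used to rank the collection. The particular combination function and the random similarity values below are purely hypothetical.

```python
# Ranking a collection with a composite similarity built from descriptor similarities.
import math
import random

def composite_similarity(sims):
    """A stand-in for a GP-evolved combination of descriptor similarities."""
    return sims['color'] * sims['texture'] + math.sqrt(sims['shape'])

if __name__ == '__main__':
    random.seed(3)
    # Each database image has one similarity value per descriptor (made up here).
    database = {f'img_{i}': {d: random.random() for d in ('color', 'texture', 'shape')}
                for i in range(5)}
    ranked = sorted(database, key=lambda k: composite_similarity(database[k]), reverse=True)
    print('retrieval order:', ranked)
```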
|
A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> Five alternative methods are proposed to perform multi-class classification tasks using genetic programming. These methods are: (1) binary decomposition, in which the problem is decomposed into a set of binary problems and standard genetic programming methods are applied; (2) static range selection, where the set of real values returned by a genetic program is divided into class boundaries using arbitrarily-chosen division points; (3) dynamic range selection, in which a subset of training examples are used to determine where, over the set of reals, class boundaries lie; (4) class enumeration, which constructs programs similar in syntactic structure to a decision tree; and (5) evidence accumulation, which allows separate branches of the program to add to the certainty of any given class. The results show that the dynamic range selection method is well-suited to the task of multi-class classification and is capable of producing classifiers that are more accurate than the other methods tried when comparable training times are allowed. The accuracy of the generated classifiers was comparable to alternative approaches over several data sets. <s> BIB001 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> This paper describes an approach to the use of genetic programming for multi-class image recognition problems. In this approach, the terminal set is constructed with image pixel statistics, the function set consists of arithmetic and conditional operators, and the fitness function is based on classification accuracy in the training set. Rather than using fixed static thresholds as boundaries to distinguish between different classes, this approach introduces two dynamic methods of classification, namely centred dynamic range selection and slotted dynamic range selection, based on the returned value of an evolved genetic program where the boundaries between different classes can be dynamically determined during the evolutionary process. The two dynamic methods are applied to five image datasets of classification problems of increasing difficulty and are compared with the commonly used static range selection method. The results suggest that, while the static boundary selection method works well on relatively easy binary or tertiary image classification problems with class labels arranged in the natural order, the two dynamic range selection methods outperform the static method for more difficult, multiple class problems. <s> BIB002 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> This paper proposes a novel method for breast cancer diagnosis using the feature generated by genetic programming (GP). We developed a new feature extraction measure (modified Fisher linear discriminant analysis (MFLDA)) to overcome the limitation of Fisher criterion. GP as an evolutionary mechanism provides a training structure to generate features. A modified Fisher criterion is developed to help GP optimize features that allow pattern vectors belonging to different categories to distribute compactly and disjoint regions. First, the MFLDA is experimentally compared with some classical feature extraction methods (principal component analysis, Fisher linear discriminant analysis, alternative Fisher linear discriminant analysis). 
Second, the feature generated by GP based on the modified Fisher criterion is compared with the features generated by GP using Fisher criterion and an alternative Fisher criterion in terms of the classification performance. The classification is carried out by a simple classifier (minimum distance classifier). Finally, the same feature generated by GP is compared with a original feature set as the inputs to multi-layer perceptrons and support vector machine. Results demonstrate the capability of this method to transform information from high-dimensional feature space into one-dimensional space and automatically discover the relationship among data, to improve classification accuracy. <s> BIB003 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> Mammography is a widely used screening tool and is the gold standard for the early detection of breast cancer. The classification of breast masses into the benign and malignant categories is an important problem in the area of computer-aided diagnosis of breast cancer. A small dataset of 57 breast mass images, each with 22 features computed, was used in this investigation; the same dataset has been previously used in other studies. The extracted features relate to edge-sharpness, shape, and texture. The novelty of this paper is the adaptation and application of the classification technique called genetic programming (GP), which possesses feature selection implicitly. To refine the pool of features available to the GP classifier, we used feature-selection methods, including the introduction of three statistical measures--Student's t test, Kolmogorov-Smirnov test, and Kullback-Leibler divergence. Both the training and test accuracies obtained were high: above 99.5% for training and typically above 98% for test experiments. A leave-one-out experiment showed 97.3% success in the classification of benign masses and 95.0% success in the classification of malignant tumors. A shape feature known as fractional concavity was found to be the most important among those tested, since it was automatically selected by the GP classifier in almost every experiment. <s> BIB004 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> This paper describes a new approach to the use of Gaussian distribution in genetic programming (GP) for multiclass object classification problems. Instead of using predefined multiple thresholds to form different regions in the program output space for different classes, this approach uses probabilities of different classes, derived from Gaussian distributions, to construct the fitness function for classification. Two fitness measures, overlap area and weighted distribution distance, have been developed. Rather than using the best evolved program in a population, this approach uses multiple programs and a voting strategy to perform classification. The approach is examined on three multiclass object classification problems of increasing difficulty and compared with a basic GP approach. The results suggest that the new approach is more effective and more efficient than the basic GP approach. Although developed for object classification, this approach is expected to be able to be applied to other classification problems. 
<s> BIB005 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> The detection and classification of buried targets such as unexploded ordnance (UXO) using ground penetrating radar (GPR) technology involves complex qualitative features and 2-D scattering images. These processes are often performed by human operators and are thus subject to error and bias. Artificial intelligence (AI) technologies, such as neural networks (NN) and fuzzy systems, have been applied to develop autonomous classification algorithms and have shown promising results. Genetic programming (GP), a relatively new AI method, has also been examined for these classification purposes. In this letter, the results of a comparison between the classification performances of NN versus the GP techniques for GPR UXO data are presented. Simulated 2-D scattering patterns from one UXO target and four non-UXO objects are used in this comparison. Different levels of noise and cases of untrained data are also examined. Obtained results show that GP provides better performance than NN methods with increasing problem difficulty. Genetic programming also showed robustness to untrained data as well as an inherent capability of providing global optimal searching, which could minimize efforts on training processes. <s> BIB006 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> In this paper we explore the application of Genetic Programming (GP) to the problem of domain-independent image feature extraction and classification. We propose a new GP-based image classification system that extracts image features autonomously, and compare its performance against a baseline GP-based classifier system that uses human-extracted features. We found that the proposed system has a similar performance to the baseline system, and that GP is capable of evolving a single program that can both extract useful features and use those features to classify an image. <s> BIB007 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> This paper presents an interactive technique for remote sensing image classification. In our proposal, users are able to interact with the classification system, indicating regions of interest (and those which are not). This feedback information is employed by a genetic programming approach to learning user preferences and combining image region descriptors that encode spectral and texture properties. Experiments demonstrate that the proposed method is effective for image classification tasks and outperforms the traditional MaxVer method. <s> BIB008 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> The use of remote sensing images as a source of information in agribusiness applications is very common. In those applications, it is fundamental to know how the space occupation is. However, identification and recognition of crop regions in remote sensing images are not trivial tasks yet. Although there are automatic methods proposed to that, users very often prefer to identify regions manually. That happens because these methods are usually developed to solve specific problems, or, when they are of general purpose, they do not yield satisfying results. This work presents a new interactive approach based on relevance feedback to recognize regions of remote sensing. 
Relevance feedback is a technique used in content-based image retrieval (CBIR) tasks. Its objective is to aggregate user preferences to the search process. The proposed solution combines the Optimum-Path Forest (OPF) classifier with composite descriptors obtained by a Genetic Programming (GP) framework. The new approach has presented good results with respect to the identification of pasture and coffee crops, overcoming the results obtained by a recently proposed method and the traditional Maximimun Likelihood algorithm. <s> BIB009 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> Classifying images is of great importance in machine vision and image analysis applications such as object recognition and face detection. Conventional methods build classifiers based on certain types of image features instead of raw pixels because the dimensionality of raw inputs is often too large. Determining an optimal set of features for a particular task is usually the focus of conventional image classification methods. In this study we propose a Genetic Programming (GP) method by which raw images can be directly fed as the classification inputs. It is named as Two-Tier GP as every classifier evolved by it has two tiers, the other for computing features based on raw pixel input, one for making decisions. Relevant features are expected to be self-constructed by GP along the evolutionary process. This method is compared with feature based image classification by GP and another GP method which also aims to automatically extract image features. Four different classification tasks are used in the comparison, and the results show that the highest accuracies are achieved by Two-Tier GP. Further analysis on the evolved solutions reveals that there are genuine features formulated by the evolved solutions which can classify target images accurately. <s> BIB010 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> An effective automated pulmonary nodule detection system can assist radiologists in detecting lung abnormalities at an early stage. In this paper, we propose a novel pulmonary nodule detection system based on a genetic programming (GP)-based classifier. The proposed system consists of three steps. In the first step, the lung volume is segmented using thresholding and 3D-connected component labeling. In the second step, optimal multiple thresholding and rule-based pruning are applied to detect and segment nodule candidates. In this step, a set of features is extracted from the detected nodule candidates, and essential 3D and 2D features are subsequently selected. In the final step, a GP-based classifier (GPC) is trained and used to classify nodules and non-nodules. GP is suitable for detecting nodules because it is a flexible and powerful technique; as such, the GPC can optimally combine the selected features, mathematical functions, and random constants. Performance of the proposed system is then evaluated using the Lung Image Database Consortium (LIDC) database. As a result, it was found that the proposed method could significantly reduce the number of false positives in the nodule candidates, ultimately achieving a 94.1% sensitivity at 5.45 false positives per scan. 
<s> BIB011 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> We present an integrated algorithm for simultaneous feature selection (FS) and designing of diverse classifiers using a steady state multiobjective genetic programming (GP), which minimizes three objectives: 1) false positives (FPs); 2) false negatives (FNs); and 3) the number of leaf nodes in the tree. Our method divides a c -class problem into c binary classification problems. It evolves c sets of genetic programs to create c ensembles. During mutation operation, our method exploits the fitness as well as unfitness of features, which dynamically change with generations with a view to using a set of highly relevant features with low redundancy. The classifiers of i th class determine the net belongingness of an unknown data point to the i th class using a weighted voting scheme, which makes use of the FP and FN mistakes made on the training data. We test our method on eight microarray and 11 text data sets with diverse number of classes (from 2 to 44), large number of features (from 2000 to 49 151), and high feature-to-sample ratio (from 1.03 to 273.1). We compare our method with a bi-objective GP scheme that does not use any FS and rule size reduction strategy. It depicts the effectiveness of the proposed FS and rule size reduction schemes. Furthermore, we compare our method with four classification methods in conjunction with six features selection algorithms and full feature set. Our scheme performs the best for 380 out of 474 combinations of data sets, algorithm and FS method. <s> BIB012 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Classification <s> HighlightsWe present a new iterative feature construction approach for supervised learning model based on the meta-heuristic Harmony Search (HS) algorithm and Cartesian Genetic Programming.We propose a novel method to incorporate soft information about the relevance of the constructed features in the HS algorithm so as to enhance its convergence.The performance of the proposed scheme is assessed over datasets from the literature, with promising results that support its suitability to deal with legacy datasets. The advent of the so-called Big Data paradigm has motivated a flurry of research aimed at enhancing machine learning models by following very diverse approaches. In this context this work focuses on the automatic construction of features in supervised learning problems, which differs from the conventional selection of features in that new characteristics with enhanced predictive power are inferred from the original dataset. In particular this manuscript proposes a new iterative feature construction approach based on a self-learning meta-heuristic algorithm (Harmony Search) and a solution encoding strategy (correspondingly, Cartesian Genetic Programming) suited to represent combinations of features by means of constant-length solution vectors. The proposed feature construction algorithm, coined as Adaptive Cartesian Harmony Search (ACHS), incorporates modifications that allow exploiting the estimated predictive importance of intermediate solutions and, ultimately, attaining better convergence rate in its iterative learning procedure. The performance of the proposed ACHS scheme is assessed and compared to that rendered by the state of the art in a toy example and three practical use cases from the literature. 
The excellent performance figures obtained in these problems shed light on the widespread applicability of the proposed scheme to supervised learning with legacy datasets composed of already refined characteristics. <s> BIB013
|
Image classification is the process of classifying images based on their visual content. Various Artificial Intelligence (AI) based technologies, such as Artificial Neural Networks (ANNs) and fuzzy systems, have been applied to develop autonomous classification algorithms and have shown promising results BIB006 . Two broad families of approaches used in image classification are parametric methods (which require a learning phase) and non-parametric methods (which do not). Some examples of parametric classifiers are the Support Vector Machine (SVM), Decision Trees, and GA, whereas the Nearest-Neighbor image classifier is an example of a non-parametric classifier. When GP is used for classification, the inputs are features and the output is a mathematical expression that returns different values for different classes. Using GP for classification requires a threshold to be set on the program output to separate the classes. In static range selection, the boundaries of the program output space are fixed and predefined, whereas in dynamic range selection the boundaries are determined automatically BIB001 . In centered dynamic range selection, the class boundaries are dynamically determined by calculating the center of the program output values for each class. In the slotted dynamic class boundary determination method, the range of program output values is split into many slots; each slot accumulates a value for each class, and the class of a pattern is then determined dynamically by taking the class with the largest value at the slot into which the program output falls BIB002 . Several techniques have used GP for classification BIB013 BIB012 . Nandi et al. BIB004 used GP for feature selection to classify breast masses in mammograms into benign and malignant groups. To narrow down the pool of features, they used procedures such as Sequential Forward Selection and Student's t-test. Once important features were selected, these were divided into two groups, and either a union or an intersection operation was performed over these groups to create a new set of data points for the GP classifier. Similarly, Kobashigawa et al. BIB006 showed that as the problem difficulty level increases, GP achieves better results than ANN methods. Kobashigawa's work also revealed the robustness of GP to unseen examples, along with an inherent capability for global optimal search, which can minimize the effort required during training. On the other hand, Smart et al. BIB002 employed the evolutionary process of GP to dynamically determine the boundaries between images of coins of different denominations. Pixel-level, domain-independent statistical features such as average intensity and variance were given as input to GP to automatically select features relevant to this multi-class image classification problem. Compared to static range selection, reasonably good results were reported on a large dataset using the proposed dynamic methods, centered dynamic range selection and slotted dynamic range selection. Similarly, Atkins et al. BIB007 proposed a GP-based, domain-independent technique for feature extraction and image classification. A block diagram of Atkins's approach is shown in Figure 9 . First, raw images were preprocessed by the filtering layer, whose outputs (the filtered images) were fed to the second layer, called the aggregation layer. The aggregation layer then performed feature aggregation and produced a real value.
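To make the range-selection schemes described above more concrete, the sketch below contrasts static range selection (a fixed, predefined threshold on the program output) with centered dynamic range selection (class centers computed from the outputs observed on training data). This is a minimal, assumed illustration rather than code from any of the cited systems; the two functions, the toy class layout, and the three-class example are all hypothetical.

```python
# Minimal sketch (assumed, not taken from the cited papers): mapping the numeric
# output of an evolved GP program to a class label.

def static_range_selection(output, threshold=0.0):
    # Static range selection: fixed, predefined boundary on the program output.
    return 0 if output < threshold else 1

def centered_dynamic_selection(output, training_outputs_per_class):
    # Centered dynamic range selection: class boundaries follow the centers
    # (means) of the program outputs observed for each class on the training set.
    centers = {c: sum(v) / len(v) for c, v in training_outputs_per_class.items()}
    return min(centers, key=lambda c: abs(output - centers[c]))

# Toy usage with outputs an evolved program produced for three classes.
train_outputs = {0: [-4.1, -3.7, -5.0], 1: [0.2, -0.1, 0.4], 2: [3.9, 4.4, 3.6]}
print(static_range_selection(-0.3))                     # -> 0
print(centered_dynamic_selection(-0.3, train_outputs))  # -> 1 (nearest center)
```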
In Atkins's architecture, the output of the aggregation layer was finally passed on to the classification layer to perform classification. For this layer, a threshold of zero was used, so that a negative output meant class A and a non-negative output classified the image as belonging to class B. The proposed procedure was tested on four different datasets, and the reported results suggested that it outperformed the basic GP methodology with increasing problem difficulty. Figure 9 : Three-tier GP for image classification BIB007 In another approach, Al-sahaf et al. BIB010 presented a GP-based approach that extended the work of Atkins et al. BIB007 and introduced aggregation functions that read in different shapes such as lines, circles, and rectangles, in order to enable sampling windows that were not square. They did not use the filtering layer of Atkins's technique BIB007 and still achieved better results compared to a canonical GP that used extracted features and to classification by the three-tier GP. Guo et al. BIB003 used a Modified Fisher criterion based GP (MF-GP) for generating features. The generated features were evaluated for their discriminating ability by the Minimum Distance Classifier (MDC). Improved results were reported for MF-GP compared to the Multi-layer Perceptron, SVM, and the Alternative Fisher criterion based GP (AF-GP) with MDC. A semi-automatic approach for classifying Remote Sensing Images (RSI) was proposed by Santos et al. BIB008 . GP was used to learn user preferences via user-indicated relevant and non-relevant regions, and image region descriptors encoding color and texture properties were combined. The reported results showed that the method outperformed maximum-likelihood classification when used for RSI classification. In the same way, Santos et al. BIB009 improved the results of the previous work by combining the Optimum-Path Forest (OPF) classifier with composite descriptors obtained by a GP framework. The OPF classifier represents each class of objects by one or more optimum-path trees rooted at key samples, called prototypes, and the OPF-based classification system took user interaction into account. Choi et al. BIB011 proposed a system for automatic detection of pulmonary nodules, which first segmented the lung volume using thresholding, then detected and segmented nodule candidates using multiple thresholding and rule-based pruning. From these nodule candidates, geometrical and statistical features were extracted and a GP-based classifier was trained. The fitness function was constructed by combining the area under the Receiver Operating Characteristic (ROC) curve, the True Positive Rate (TPR), and specificity. They reported that, compared to previously proposed methods for this application, this GP-based classifier showed high sensitivity and a reduced false positive rate. Zang et al. BIB005 developed fitness functions for classification based on probabilities (derived from Gaussian distributions) associated with different classes. Treating the outputs of different classifiers as independent random variables, two fitness functions (overlapped region and weighted distribution distance) were developed. Zang's approach exploited many top GP programs for classification, and the class with the highest probability was used as the class of the object pattern.
In comparison to a basic GP classification approach, which also used multiple best programs and voting, the proposed technique was reported to achieve good results in terms of both classification accuracy and execution time.
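The decision rule attributed to Zang et al. above can be sketched roughly as follows: each top program's training outputs for a class are summarised by a Gaussian, and a test pattern is assigned to the class with the highest combined probability over the selected programs. The Gaussian summaries, the independence assumption, and the product combination are illustrative assumptions, not the exact formulation of the cited fitness functions.

```python
import math

def gaussian_pdf(x, mean, std):
    std = max(std, 1e-6)  # guard against a degenerate (zero-variance) estimate
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def classify_with_programs(program_outputs, class_stats):
    """program_outputs: one output per evolved program for a single test pattern.
    class_stats: {class: [(mean, std) per program]} estimated on training data."""
    scores = {}
    for c, stats in class_stats.items():
        p = 1.0
        for out, (mean, std) in zip(program_outputs, stats):
            p *= gaussian_pdf(out, mean, std)  # programs treated as independent
        scores[c] = p
    return max(scores, key=scores.get)

# Toy usage with two evolved programs and two classes.
stats = {"A": [(0.0, 1.0), (5.0, 2.0)], "B": [(3.0, 1.0), (1.0, 2.0)]}
print(classify_with_programs([0.4, 4.2], stats))  # -> "A"
```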
|
A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Watermarking <s> Embedding of a digital watermark in an electronic document is proving to be a feasible solution for copyright protection and authentication purposes. In this paper, we present an innovative scheme of perceptually shaping watermark to the cover images. A watermark is generally embedded in the selected coefficients of the transformed image using a carefully chosen watermarking strength. Choice of a good watermarking strength, to perceptually shape the watermark according to the cover image is crucial to make a tradeoff between the two conflicting properties, namely: robustness and imperceptibility of the watermark. Traditionally, a constant watermarking strength obtained from spatial activity masking and heuristics has been used for all the selected coefficients during embedding. We consider this tradeoff as an optimization problem and have investigated an evolutionary optimization technique to find optimal/near-optimal perceptual shaping function for DCT based watermarking system. The new scheme provides an excellent tradeoff between the robustness and imperceptibility and is image adaptive. Improved resistance to attacks, especially against JPEG compression of quality 7% and Gaussian noise of variance 17000 has been observed. <s> BIB001 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Watermarking <s> In this paper we propose an algorithm to develop an intelligent perceptual shaping function based on Genetic Programming (GP) in DCT domain. In digital image watermarking, robustness and imperceptibility compete with each other. In this paper we applied GP to make a trade off between these two characteristics. Here, the original image is divided into 8×8 non-overlapping blocks and the DCT coefficients in each block are sorted by means of zigzag. One AC coefficient in each block is changed according to a perceptual shaping function. This perceptual shaping function is obtained from the GP core and is dependent on average of all block coefficients and the related AC coefficient. The experimental results show that this proposed algorithm is robust against some digital image attacks such as low pass filtering, median filtering and JPEG compression. In addition the improvement in watermarked image quality also is achieved. <s> BIB002 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Watermarking <s> This paper presents a novel approach of adaptive visual tuning of a watermark in Discrete Cosine Transform (DCT) domain. The proposed approach intelligently selects appropriate frequency bands as well as optimal strength of alteration. Genetic Programming (GP) is applied to structure the watermark by exploiting both the characteristics of human visual system and information pertaining to a cascade of conceivable attacks. The developed visual tuning expressions are dependent on frequency and luminance sensitivities, and contrast masking. To further enhance robustness, spread spectrum based watermarking and Bose-Chadhuri-Hocquenghem (BCH) coding is employed. The combination of spread spectrum sequence, BCH coding and GP based non-linear structuring makes it extremely difficult for an attacker to gain information about the secret knowledge of the watermarking system. Experimental results show the superiority of the proposed approach against the existing approaches. 
In particular, the margin of improvement in robustness will be of high importance in medical and context-aware applications of watermarking. <s> BIB003 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Watermarking <s> Embedding of the digital watermark in an electronic document proves to be a viable solution for the protection of copyright and for authentication. In this paper we proposed a watermarking scheme based on the wavelet transform, genetic programming (GP) and the Watson distortion control model for JPEG2000. To select the coefficients for watermark embedding, the image is first divided into 32×32 blocks. The Discrete Wavelet Transform (DWT) of each block is obtained. Coefficients in the LH, HL and HH subbands of each 32×32 block are selected based on the Just Noticeable Difference (JND). The watermark is embedded using a carefully chosen watermarking level, and this choice is very important: the two key properties, robustness and imperceptibility, depend on a good choice of the watermarking level. GP is used to obtain a mathematical function representing the optimum watermarking level. The proposed scheme is tested and gives a good compromise between robustness and imperceptibility. <s> BIB004 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Image Watermarking <s> This paper proposes an intelligent hybrid watermarking algorithm for digital images. In digital image watermarking, robustness and imperceptibility compete with each other. In this paper we applied a hybrid intelligent algorithm based on genetic programming and particle swarm optimization to make a trade-off between robustness and imperceptibility. In this way the intelligent method has been applied in the DCT-DWT-SVD domain. First of all the original image is transformed into the DCT domain, then a part of the DCT matrix is decomposed into four subbands using the discrete wavelet transform, and finally the singular values of each subband are shaped perceptually by the singular values of the watermark image to embed the watermark. The optimization problem, which is related to a conflict between robustness and imperceptibility, is solved by means of genetic programming and particle swarm optimization simultaneously, to achieve the best performance in robustness without losing the quality of the host image. Experimental results show improvement in imperceptibility and robustness under several attacks and different images. <s> BIB005
|
The consistently broader use of information technology demands the protection of information, and this has become especially challenging in the field of medical imaging. To overcome the issues related to the protection of information, digital watermarking is used as a promising technique, especially for the authentication of medical-related information. However, when more information (payload) is embedded in the image, it causes distortion in the original image; there is thus also a tradeoff between imperceptibility and payload. In the past, many GP-based watermarking techniques BIB002 BIB005 BIB003 BIB004 [67] BIB001 have been proposed for the development of efficient and reliable watermarking systems. To manage the tradeoff between robustness and imperceptibility in digital image watermarking, GP was employed by Golshan et al. BIB002 . Instead of setting the Perceptual Shaping Function (PSF) to a constant function, GP was utilized to develop an intelligent PSF, and a fitness function based on both robustness and imperceptibility was used to evaluate the performance of each PSF individual. Similarly, Golshan et al. BIB005 used a hybrid approach of GP and PSO for the same purpose. In Gilani et al.'s technique, GP was used to estimate the distortion within the distorted watermarked signals. Both the watermarked and the distorted watermarked signals were fed to a GP module, and the best-estimated distortion function returned by GP was then applied to the original watermarked signal. Varying strengths of Gaussian noise and JPEG compression attacks were tested for the proposed technique. Similarly, Usman et al. BIB003 proposed evolving an application-specific Visual Tuning Function (VTF), in which GP optimizes the balance between imperceptibility and robustness while processing an 8x8 block of the Discrete Cosine Transform (DCT) of the image. The watermark was structured according to the Human Visual System (HVS) and a cascade of attacks. The VTF is expressed as a function of X0,0, X(i,j), and α(i,j), where X0,0 is the DC coefficient and signifies the dependency of the VTF on luminance sensitivity, X(i,j) is an AC coefficient and symbolizes the dependency of the VTF on contrast masking, and α(i,j) represents frequency sensitivity. The current value of Watson's VTF and the DC and AC (DCT) coefficients of the 8x8 block were provided as variable terminals. Each potential VTF was evaluated for imperceptibility-related fitness, whereas for robustness the Bit Correct Ratio (BCR) served as an objective measure. Test images were then watermarked with the evolved VTF. Jan et al. BIB004 proposed that GP could be used to select the watermarking level. Coefficients were selected using a 32x32 block, whose Discrete Wavelet Transform (DWT) was obtained. Luminance, contrast, and the Noise Visibility Function (NVF) were used as terminals for the GP trees, and the watermarking level was expressed as a function of a selected coefficient co, the contrast cont, and the luminance lum. Robustness against different attacks was reported, whereas to check the imperceptibility of the watermark, the Mean Square Error (MSE) and PSNR were used. Similarly, Abbasi et al. [67] used a similar approach but with a block size of 4x4. Khan et al. BIB001 presented a DCT-based watermarking system which employed GP for finding an optimal perceptual shaping function according to the Human Visual System (HVS). Each GP tree represented a perceptual shaping function, which was evolved to embed a high-strength watermark in areas of high variance and a low-strength watermark in areas of low variance.
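The sketch below illustrates the kind of block-wise embedding step that such perceptual shaping functions control: a mid-band DCT coefficient of an 8x8 block is altered by a watermark bit scaled by the shaping value. The simple variance-based stand-in for the shaping function, the chosen coefficient position, and the constants are assumptions for illustration only, not the evolved expressions of the cited schemes.

```python
import numpy as np
from scipy.fftpack import dct, idct  # assumes SciPy is available

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_block(block, bit, shaping_fn, pos=(3, 4)):
    """Embed one watermark bit into a mid-frequency DCT coefficient of an 8x8 block.
    `shaping_fn` returns the local embedding strength; in the cited work this role
    is played by the GP-evolved perceptual shaping / visual tuning function."""
    coeffs = dct2(block.astype(float))
    strength = shaping_fn(coeffs)
    coeffs[pos] += strength * (1 if bit else -1)
    return idct2(coeffs)

# Stand-in shaping function (assumption): stronger embedding in textured blocks.
toy_psf = lambda coeffs: 2.0 + 0.05 * np.var(coeffs[1:, 1:])

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8))
watermarked = embed_block(block, bit=1, shaping_fn=toy_psf)
print(np.round(watermarked - block, 2))  # per-pixel distortion introduced
```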
In Khan et al.'s system, the change in local variance of the watermarked image with respect to the original image was used as the fitness function, and the technique was tested against JPEG compression and Gaussian noise. Recently, another interesting reversible watermarking technique based on GP for the protection of medical-related information was proposed by Arsalan et al. A block diagram of Arsalan's technique is shown in Figure 10 . First, a histogram-modified image was formed by preprocessing the original image, and the Integer Wavelet Transform (IWT) was then applied to the histogram-modified image. After applying the IWT, GP was used to select the coefficients within the wavelet domain for embedding the watermark. The aim of the proposed GP-based intelligent watermarking scheme was to produce a watermarked image having low distortion and high payload.
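Across the watermarking works reviewed above, the fitness of a candidate shaping function balances the imperceptibility of the watermarked image against the robustness of watermark recovery after attacks. The sketch below shows one generic weighted combination of PSNR and the Bit Correct Ratio (BCR); the normalisation, the weights, and the attack placeholder are assumptions and do not reproduce the exact fitness of any cited scheme.

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def bit_correct_ratio(embedded_bits, extracted_bits):
    matches = sum(a == b for a, b in zip(embedded_bits, extracted_bits))
    return matches / len(embedded_bits)

def watermark_fitness(original, watermarked, embedded_bits, extract_after_attack,
                      w_impercept=0.5, w_robust=0.5):
    # Imperceptibility term: PSNR of the watermarked image, crudely normalised to [0, 1].
    impercept = min(psnr(original, watermarked) / 50.0, 1.0)
    # Robustness term: BCR of the bits recovered after a simulated attack.
    robust = bit_correct_ratio(embedded_bits, extract_after_attack(watermarked))
    return w_impercept * impercept + w_robust * robust

# Toy usage with a dummy extractor standing in for "attack then extract".
orig = np.zeros((8, 8))
wm = orig + 1.0
bits = [1, 0, 1, 1]
print(round(watermark_fitness(orig, wm, bits, lambda img: [1, 0, 1, 0]), 3))
```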
|
A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> Abstract The automatic detection of ships in low-resolution synthetic aperture radar (SAR) imagery is investigated in this article. The detector design objectives are to maximise detection accuracy across multiple images, to minimise the computational effort during image processing, and to minimise the effort during the design stage. The results of an extensive numerical study show that a novel approach, using genetic programming (GP), successfully evolves detectors which satisfy the earlier objectives. Each detector represents an algebraic formula and thus the principles of detection can be discovered and reused. This is a major advantage over artificial intelligence techniques which use more complicated representations, e.g. neural networks. <s> BIB001 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> This paper describes a domain independent approach to the use of genetic programming for object detection problems. Rather than using raw pixels or high level domain specific features, this approach uses domain independent statistical features as terminals in genetic programming. Besides position invariant statistics such as mean and standard deviation, this approach also uses position dependent pixel statistics such as moments and local region statistics as terminals. Based on an existing fitness function which uses linear combination of detection rate and false alarm rate, we introduce a new measure called "false alarm area" to the fitness function. In addition to the standard arithmetic operators, this approach also uses a conditional operator if in the function set. This approach is tested on two object detection problems. The experiments suggest that position dependent pixel statistics computed from local (central) regions and nonlinear condition functions are effective to object detection problems. Fitness functions with false alarm area can reflect the smoothness of evolved genetic programs. This approach works well for the detecting small regular multiple class objects on a relatively uncluttered background. <s> BIB002 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> Abstract In this paper, we learn to discover composite operators and features that are synthesized from combinations of primitive image processing operations for object detection. Our approach is based on genetic programming (GP). The motivation for using GP-based learning is that we hope to automate the design of object detection system by automatically synthesizing object detection procedures from primitive operations and primitive features. There are many basic operations that can operate on images and the ways of combining these primitive operations to perform meaningful processing for object detection are almost infinite. The human expert, limited by experience, knowledge and time, can only try a very small number of conventional combinations. Genetic programming, on the other hand, attempts many unconventional combinations that may never be imagined by human experts. In some cases, these unconventional combinations yield exceptionally good results. To improve the efficiency of GP, we propose soft composite operator size limit to control the code-bloat problem while at the same time avoid severe restriction on the GP search. 
Our experiments, which are performed on selected regions of images to improve training efficiency, show that GP can synthesize effective composite operators consisting of pre-designed primitive operators and primitive features to effectively detect objects in images and the learned composite operators can be applied to the whole training image and other similar testing images. <s> BIB003 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> In this paper, we use genetic programming (GP) to synthesize composite operators and composite features from combinations of primitive operations and primitive features for object detection. The motivation for using GP is to overcome the human experts' limitations of focusing only on conventional combinations of primitive image processing operations in the feature synthesis. GP attempts many unconventional combinations that in some cases yield exceptionally good results. To improve the efficiency of GP and prevent its well-known code bloat problem without imposing severe restriction on the GP search, we design a new fitness function based on minimum description length principle to incorporate both the pixel labeling error and the size of a composite operator into the fitness evaluation process. To further improve the efficiency of GP, smart crossover, smart mutation and a public library ideas are incorporated to identify and keep the effective components of composite operators. Our experiments, which are performed on selected training regions of a training image to reduce the training time, show that compared to normal GP, our GP algorithm finds effective composite operators more quickly and the learned composite operators can be applied to the whole training image and other similar testing images. Also, compared to a traditional region-of-interest extraction algorithm, the composite operators learned by GP are more effective and efficient for object detection. <s> BIB004 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> This paper describes three developments to improve object detection performance using genetic programming. The first investigates three feature sets, the second investigates a new fitness function,... <s> BIB005 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> In object detection, the goals of successfully discriminating between different kinds of objects (object classification) and accurately identifying the positions of all objects of interest in a large image (object localisation) are potentially in conflict. We propose a Multi-Objective Genetic Programming (MOGP) approach to the task of providing a decision-maker with a diverse set of alternative object detection programs that balance between high detection rate and low false-alarm rate. Experiments on two datasets, simple shapes and photographs of coins, show that it is difficult for a Single-Objective GP (SOGP) system (which weights the multiple objectives a priori) to evolve effective object detectors, but that an MOGP system is able to evolve a range of effective object detectors more efficiently. <s> BIB006 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> Edge detection is an important task in computer vision. This paper describes a global approach to edge detection using genetic programming (GP). 
Unlike most traditional edge detection methods which use local window filters, this approach directly uses an entire image as input and classifies pixels directly as edges or non-edges without preprocessing or postprocessing. Shifting operations and common standard operators are used to form the function set. Precision, recall and true negative rate are used to construct the fitness functions. This approach is examined and compared with the Laplacian and Sobel edge detectors on three sets of images providing edge detection problems of varying difficulty. The results suggest that the detectors evolved by GP outperform the Laplacian detector and compete with the Sobel detector in most cases. <s> BIB007 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> Object detection in images is inherently imbalanced and prone to overfitting on the training set. This work investigates the use of a validation set and sampling methods in Multi-Objective Genetic Programming (MOGP) to improve the effectiveness and robustness of object detection in images. Results show that sampling methods decrease runtimes substantially and increase robustness of detectors at higher detection rates, and that a combination of validation together with sampling improves upon a validation-only approach in effectiveness and efficiency. <s> BIB008 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> Single pixels can be directly used to construct low-level edge detectors but these detectors are not good for suppressing noise and some texture. In general, features based on a small area are used to suppress noise and texture. However, there is very little guidance in the literature on how to select the area size. In this paper, we employ Genetic Programming (GP) to evolve edge detectors via automatically searching for features based on flexible blocks rather than dividing a fixed window into small areas based on different directions. Experimental results for natural images show that using blocks to extract features obtains better performance than using single pixels only to construct detectors, and that GP can successfully choose the block size for extracting features. <s> BIB009 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> Edge detectors trained by a machine learning algorithm are usually evaluated by the accuracy based on overall pixels in the training stage, rather than the information for each training image. However, when the evaluation for training edge detectors considers the accuracy of each image, the influence on the final detectors has not been investigated. In this study, we employ genetic programming to evolve detectors with new fitness functions containing the accuracy of training images. The experimental results show that fitness functions based on the accuracy of single training images can balance the accuracies across detection results, and the fitness function combining the accuracy of overall pixels with the accuracy of training images together can improve the detection performance. <s> BIB010 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> Basic features for edge detection, such as derivatives, can be further manipulated to improve detection performance. 
How to effectively combine different local features to improve detection performance remains an open issue and needs to be investigated. Genetic Programming (GP) has been employed to construct composite features. However, the range of the observations of an evolved program might be sparse and large, which is not good to indicate different edge responses. In this study, GP is used to construct composite features for edge detection via estimating the observations of evolved programs as triangular distributions. The results of the experiments show that the evolved programs with a large range of observations are not good to construct composite features. A proposed restriction on the range of the observations of evolved programs improves the performance of edge detection. <s> BIB011 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Object Detection <s> In edge detection, a machine learning algorithm generally requires training images with their ground truth or designed outputs to train an edge detector. Meanwhile the computational cost is heavy for most supervised learning algorithms in the training stage when a large set of training images is used. To learn edge detectors without ground truth and reduce the computational cost, an unsupervised Genetic Programming (GP) system is proposed for low-level edge detection. A new fitness function is developed from the energy functions in active contours. The proposed GP system utilises single images to evolve GP edge detectors, and these evolved edge detectors are used to detect edges on a large set of test images. The results of the experiments show that the proposed unsupervised learning GP system can effectively evolve good edge detectors to quickly detect edges on different natural images. <s> BIB012
|
Object detection is the task of finding objects belonging to different categories and is a challenging task, especially in the fields of IP and computer vision. In the field of IP, GP has been used by many researchers BIB001 BIB004 BIB003 BIB007 BIB009 BIB010 BIB011 BIB012 BIB006 BIB002 BIB005 BIB008 for the accurate and efficient detection of objects in cluttered and noisy scenes or images. Howard et al. BIB001 utilized GP to evolve detectors to detect ships in Synthetic Aperture Radar (SAR) imagery. Terminal nodes were real numerical values derived from random constants or pixel statistics, and a program output greater than zero was interpreted as a target detection, while a value of zero or less indicated an ocean pixel. In Lin et al.'s approach BIB004 , GP was used to synthesize composite operators and features from primitive operations and features for object detection. A composite operator was applied to primitive feature images, and the output was segmented to obtain a binary image, which was used to extract the target object from the original image. The size of a composite operator as well as the misclassified pixels were taken into consideration, and the fitness function used in Lin's technique was based on the Minimum Description Length (MDL) principle. In another work, Bhanu et al. BIB003 used a similar composite-operator approach, but instead of an MDL-based fitness function they used the fitness measure fitness = n(G ∩ G') / n(G ∪ G'), where G and G' are the foregrounds in the ground truth and in the detected image respectively, and n denotes the number of pixels in a given region. Martin et al. used GP to create obstacle detection algorithms by analyzing a domain to find its constraints. The lowest non-ground pixels were manually marked, these images were fed to GP, and the evolved output was compared to the ground truth images; a robot was then controlled by the best-evolved program. Edges are traditionally detected using local window filters, but in the work of Fu et al. BIB007 , GP was used for domain-independent global edge detection using the whole raw image as input. Different shifting functions were used along with other commonly used operators, and the F-measure, which combines precision and recall, was used in constructing the fitness function. In another work, Fu et al. BIB009 used GP to evolve edge detectors; instead of dividing a fixed-size window into small areas based on different directions, the system searched for features based on flexible blocks, and the fitness function was again based on the F-measure. Similarly, GP was also used for improving the performance of an edge detection system, where the fitness function was based on the accuracy over the training images BIB010 . In another work by Fu et al. BIB011 , composite features were constructed for edge detection by estimating the observations of the programs evolved by GP as triangular distributions; the Gaussian filter gradient, histogram gradient, and normalized standard deviation were used as the terminal set. In order to detect edges without ground truth, an unsupervised GP system was proposed in BIB012 , with a fitness function derived from the energy functions used in active contours; in comparison with the Sobel edge detector, the evolved GP edge detectors were reported to have better performance. Similarly, Liddle et al. BIB006 used Multi-Objective GP (MOGP) for object detection. MOGP evolves a set of classifiers rather than a single classifier as in the case of Single-Objective GP (SOGP). The proposed technique used the NSGA-II algorithm, whose performance measures are non-dominance ranking and crowding distance.
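Several of the edge-detection fitness functions mentioned above are built from precision and recall over pixel labels. The sketch below scores a binary edge map against ground truth with a weighted F-measure; the alpha weighting is a generic form (alpha = 0.5 gives the usual F1) and is not necessarily the exact formulation used by the cited detectors.

```python
import numpy as np

def f_measure(detected, ground_truth, alpha=0.5):
    # detected, ground_truth: binary arrays marking edge pixels.
    detected, ground_truth = detected.astype(bool), ground_truth.astype(bool)
    tp = np.logical_and(detected, ground_truth).sum()
    precision = tp / max(detected.sum(), 1)
    recall = tp / max(ground_truth.sum(), 1)
    if precision == 0 or recall == 0:
        return 0.0
    # Weighted harmonic combination; alpha = 0.5 reduces to the usual F1 score.
    return (precision * recall) / (alpha * precision + (1 - alpha) * recall)

# Toy usage: score an evolved detector's edge map against a tiny ground truth.
gt   = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
pred = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 0]])
print(round(f_measure(pred, gt), 3))  # -> 0.667
```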
In Liddle et al.'s system, a two-phase training process applied the MOGP algorithm twice using different objectives, e.g., maximizing both the TPR and the True Negative Rate (TNR), or maximizing the Detection Rate (DR) while minimizing the False Alarm Rate (FAR). In the interesting work of Zang et al. BIB002 , GP was used for object detection, but instead of using raw pixels as terminals, they used pixel statistics such as the mean, standard deviation, and moments. A new fitness measure termed "false alarm area" was used along with a combination of DR and FAR. On the other hand, Zang et al. presented domain-independent features such as the mean and standard deviation as terminals for GP to detect multiple objects. They used three different ways of obtaining pixel statistics (rectilinear: based on different rectangles; circular: using circles of different radii; and using the average of pixels). Evaluation of programs was performed with a fitness function combining DR and FAR weighted by two constants K1 and K2. Zang et al. BIB005 introduced a two-phase GP approach for object detection. In the first phase, cutouts from the training images were used with classification accuracy as the fitness function. The second phase was initialized with the population from the first phase, and a window was moved over the whole image. For the second phase, a fitness function combining FAR, DR, FAA, and program size was used, where FAR is the false alarm rate, DR is the detection rate, FAA is the false alarm area (positive classifications minus objects in the image), size is the program size, and K1, K2, K3, and K4 are weighting constants. Hunt et al. BIB008 followed the previous two-phase approach BIB005 , augmented with validation and sampling methods in order to avoid overfitting; validation was performed after every two generations. To measure generalization ability, the hyperarea (area covered by the best Pareto front) and the distance (difference between the performance of the classifier on the training and validation sets) were used. Nguyen et al. [83] used GP for the detection of rice leaves. In Nguyen's work, the dataset was created by taking images from the top of a rice field, and a total of 600 images of size 640 x 840 were captured with the camera; out of these 600, 300 images were used for training the classifier. After capturing the images, the next step was the conversion of the color images into grayscale using a weighted combination of the R, G, and B channels. In order to obtain positive and negative samples from the set of gray images, a window of size 20 x 20 pixels was used to extract sub-regions within the images. If a sub-image contained a portion of a rice leaf, it was labeled as a positive example; otherwise, a negative label was assigned to that subpart. After pre-processing the original images, a total of 9000 images of size 20 x 20 pixels were generated, of which half belonged to the positive class and half to the negative class. For training the GP program, pixels were used as the terminal set, whereas the function set comprised four different arithmetic operators and a square-root function. A weighted sum of the TPR and TNR was used as the fitness criterion, and in order to ensure that the fitness value stayed between 0 and 100 percent, the weights were constrained to satisfy w1 + w2 = 1. A block diagram of Nguyen's technique is shown in Figure 11 .
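To illustrate the window-based evaluation described for the rice-leaf detector, the sketch below cuts a grayscale image into 20 x 20 sub-images and computes the weighted TPR/TNR fitness under the constraint w1 + w2 = 1. The window extraction, the equal weights, and the toy labels are assumptions used only for illustration.

```python
import numpy as np

def extract_windows(gray_image, win=20):
    # Cut a grayscale image into non-overlapping win x win sub-images.
    h, w = gray_image.shape
    return [gray_image[r:r + win, c:c + win]
            for r in range(0, h - win + 1, win)
            for c in range(0, w - win + 1, win)]

def weighted_accuracy(predictions, labels, w1=0.5, w2=0.5):
    # Fitness = w1 * TPR + w2 * TNR, with w1 + w2 = 1 so the value stays in [0, 1].
    assert abs(w1 + w2 - 1.0) < 1e-9
    preds, labs = np.asarray(predictions, bool), np.asarray(labels, bool)
    tpr = np.logical_and(preds, labs).sum() / max(labs.sum(), 1)
    tnr = np.logical_and(~preds, ~labs).sum() / max((~labs).sum(), 1)
    return w1 * tpr + w2 * tnr

# Toy usage: 9 windows from a 60 x 60 image, plus an (imaginary) program's decisions.
windows = extract_windows(np.zeros((60, 60)), win=20)
print(len(windows))  # -> 9
labels      = [1, 1, 0, 0, 0, 1]
predictions = [1, 0, 0, 0, 1, 1]
print(weighted_accuracy(predictions, labels))  # 0.5 * (2/3) + 0.5 * (2/3) ≈ 0.667
```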
|
A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Motion Detection <s> Computational Intelligence: An Introduction, Second Edition offers an in-depth exploration into the adaptive mechanisms that enable intelligent behaviour in complex and changing environments. The main focus of this text is centred on the computational modelling of biological and natural intelligent systems, encompassing swarm intelligence, fuzzy systems, artificial neutral networks, artificial immune systems and evolutionary computation. Engelbrecht provides readers with a wide knowledge of Computational Intelligence (CI) paradigms and algorithms; inviting readers to implement and problem solve real-world, complex problems within the CI development framework. This implementation framework will enable readers to tackle new problems without any difficulty through a single Java class as part of the CI library. Key features of this second edition include: A tutorial, hands-on based presentation of the material. State-of-the-art coverage of the most recent developments in computational intelligence with more elaborate discussions on intelligence and artificial intelligence (AI). New discussion of Darwinian evolution versus Lamarckian evolution, also including swarm robotics, hybrid systems and artificial immune systems. A section on how to perform empirical studies; topics including statistical analysis of stochastic algorithms, and an open source library of CI algorithms. Tables, illustrations, graphs, examples, assignments, Java code implementing the algorithms, and a complete CI implementation and experimental framework. Computational Intelligence: An Introduction, Second Edition is essential reading for third and fourth year undergraduate and postgraduate students studying CI. The first edition has been prescribed by a number of overseas universities and is thus a valuable teaching tool. In addition, it will also be a useful resource for researchers in Computational Intelligence and Artificial Intelligence, as well as engineers, statisticians, operational researchers, and bioinformaticians with an interest in applying AI or CI to solve problems in their domains. Check out http://www.ci.cs.up.ac.za for examples, assignments and Java code implementing the algorithms. <s> BIB001 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Motion Detection <s> This work compares the performance of genetic ::: programming (GP) against traditional fixed-length ::: genome GA approaches on the optimization of wire ::: antenna designs. We describe the implementation of ::: a GP electromagnetic optimization system for wire ::: structures. The results are compared with the traditional ::: GA approach. Although the dimensionality ::: of the search space is much higher for GP than GA, ::: we find that the GP approach gives better results ::: than GA for the same computational effort. In addition, ::: we find that a more expressive antenna structure ::: grammar, dramatically, improves the performance of ::: the GP approach. <s> BIB002 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Motion Detection <s> A machine learning approach is presented in this study to automatically construct motion detection programs. These programs are generated by Genetic Programming (GP), an evolutionary algorithm. They detect motion of interest from noisy data when there is no prior knowledge of the noise. 
Programs can also be trained with noisy data to handle noise of higher levels. Furthermore, these auto-generated programs can handle unseen variations in the scene such as different weather conditions and even camera movements. <s> BIB003 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Motion Detection <s> Detecting moving objects is a significant component in many machine vision systems. One of the challenges in real world motion detection is the unstability of the background. An ideal method is expected to reliably detect interesting movements from videos while ignoring background/uninteresting movements. In this paper, Genetic Programming (GP) based motion detection method is used to tackle this issue, as it is a powerful learning method and has been successfully applied on various image analysis tasks. The investigation here focuses on the various representations of GP for motion detection and the suitability of these approaches. The unstable environments in this study include ripples on river, rainy background and moving cameras. It can be shown from the results that with a suitable frame representation and function set, reliable GP programs can be evolved to handle complex unstable background. <s> BIB004 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Motion Detection <s> Motion detection is a vital part of vision systems, either biological or computerized. Conventional motion detection methods in machine vision can differentiate moving objects from background, but cannot directly handle different types of motions. In this paper, we present Genetic Programming (GP) as a method which not only removes relatively stationary background, but also can be selective on what kind of motions to capture. Programs can be evolved to select a certain type of moving objects and ignore other motions. That is to select fast moving target and ignore slowing moving ones. Furthermore programs can be evolved to handle these tasks even when the camera itself is in relatively arbitrary motion. This general GP method does not require additional process to differentiate various types of motions. <s> BIB005 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Motion Detection <s> Genetic Programming (GP) is reputable for its power in finding creative solutions for complex problems. However the downside of it is also well known: the evolved solutions are often difficult to understand. This interpretability issue hinders GP to gain acceptance from many application areas. To address this issue in the context of motion detection, GP programs evolved for various detection tasks are analyzed in this study. Previous work has shown the capabilities of these evolved motion detectors such as ignoring uninteresting motions, differentiating fast motions from slow motions, identifying genuine motions from a moving background, and handling noises. This study aims to reveal the behavior of these GP individuals by introducing simplified motion detection tasks. The investigation on these GP motion detectors shows that their good performance is not random. There are contributing characteristics captured by these detectors, of which the behaviors are more or less explainable. This study validates GP as a good approach for motion detection. 
<s> BIB006 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Motion Detection <s> This study presents a selective motion detection methodology which is based on genetic programming GP, an evolutionary search strategy. By this approach, motion detection programs can be automatically evolved instead of manually coded. This study investigates the suitable GP representation for motion detection as well as explores the advantages of this method. Unlike conventional methods, this evolutionary approach can generate programs which are able to mark target motions. The stationary background and the uninteresting or irrelevant motions such as swaying trees, noises are all ignored. Furthermore, programs can be trained to detect target motions from a moving background. They are capable of distinguishing different kinds of motions. Such differentiation can be based on the type of motions as well, for example, fast moving targets are captured, while slow moving targets are ignored. One of the characteristics of this method is that no modification or additional process is required when different types of motions are introduced. Moreover, real-time performance can be achieved by this GP motion detection method. <s> BIB007 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Motion Detection <s> Evolving solutions for machine vision applications has gained more popularity in the recent years. One area is evolving programs by Genetic Programming (GP) for motion detection, which is a fundamental component of most vision systems. Despite the good performance, this approach is not widely accepted by mainstream vision application developers. One of the reasons is that these GP generated programs are often difficult to interpret by humans. This study analyzes the reasons behind the good performance and shows that the behaviors of these evolved motion detectors can be explained. Their capabilities of ignoring uninteresting motions, differentiating fast motions from slow motions, identifying genuine motions from moving background and handling noises are not random. On simplified problems we can reveal the behaviors of these programs. By understanding the evolved detectors, we can consider evolution as a good approach for creating motion detection modules. <s> BIB008 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Motion Detection <s> Genetic programming(GP) has become an increasingly hot issue in evolutionary computation due to its extensive application. Anomaly detection in crowded scenes is also a hot research topic in computer vision. However, there are few contributions on using genetic programming to detect abnormalities in crowded scenes. In this paper, we focus on anomaly detection in crowded scenes with genetic programming. We propose a new method called Multi-Frame LBP Difference(MFLD) based on Local Binary Patterns(LBP) to extract pixel-level features from videos without additional complex preprocessing operations such as optical flow and background subtraction. Genetic programming is employed to generate an anomaly detector with the extracted data. When a new video is coming, the detector can classify every frame and localize the abnormality to a single-pixel level in realtime. We validate our approach on a public dataset and compare our method with other traditional algorithms for video anomaly detection. 
Experimental results indicate that our method with genetic programming performs better in detecting abnormalities in crowded scenes. <s> BIB009 </s> A Recent Survey on the Applications of Genetic Programming in Image Processing <s> GP in Motion Detection <s> Within the field of computer vision, change detection algorithms aim at automatically detecting significant changes occurring in a scene by analyzing the sequence of frames in a video stream. In this paper we investigate how state-of-the-art change detection algorithms can be combined and used to create a more robust algorithm leveraging their individual peculiarities. We exploited genetic programming (GP) to automatically select the best algorithms, combine them in different ways, and perform the most suitable post-processing operations on the outputs of the algorithms. In particular, algorithms’ combination and post-processing operations are achieved with unary, binary and ${n}$ -ary functions embedded into the GP framework. Using different experimental settings for combining existing algorithms we obtained different GP solutions that we termed In Unity There Is Strength . These solutions are then compared against state-of-the-art change detection algorithms on the video sequences and ground truth annotations of the ChangeDetection.net 2014 challenge. Results demonstrate that using GP, our solutions are able to outperform all the considered single state-of-the-art change detection algorithms, as well as other combination strategies. The performance of our algorithm are significantly different from those of the other state-of-the-art algorithms. This fact is supported by the statistical significance analysis conducted with the Friedman test and Wilcoxon rank sum post-hoc tests. <s> BIB010
|
In the past, many modeling and background subtraction related techniques have been designed for motion detection. Moreover, to avoid manually coded motion detection systems, different researchers have used GP-based, automatically evolved systems BIB009 BIB005 BIB003 BIB008 BIB010 . It was observed that, in general, the GP-evolved programs outperformed manually coded programs. To tackle unstable backgrounds (such as a rainy background, or a moving background due to a moving camera) in motion detection, GP was employed in BIB004 , where classification accuracy based on motion and non-motion was used as the fitness measure; 20 x 20 pixel cutouts were used as terminals, whereas Min, Max, and Avg were used as the function set. Another difficult task in motion detection is to detect motion from a noisy scene when there is no information about the noise. Pinto et al. BIB003 tackled this problem using a GP-based approach in which motion detectors were generated on the basis of a fitness function; Gaussian noise was added to the videos BIB003 , and the approach showed better results for detecting motion in different environments. In another work BIB006 , GP programs were used for analyzing various types of motion detection tasks, such as detecting simple motion, detecting fast-moving objects, and detecting motion against a noisy background. Another advantage of using GP for motion detection is that the evolved detectors can also tolerate noise, which is why GP is considered one of the best approaches for motion detection . Similarly, Xie et al. BIB009 used GP for anomaly detection in crowded scenes. In Xie's approach, a multi-frame Local Binary Patterns (LBP) difference was used for extracting features from video frames, GP was trained on the extracted features, and the proposed scheme detected abnormalities in real-time videos. Similarly, Song et al. BIB007 proposed a GP-based target motion detection approach that automatically evolved GP programs and separated target motion from other irrelevant motions such as a noisy background. Song et al.'s technique comprised two phases. In the first phase (the evolution phase), the available data were divided into training and test parts, and parameter optimization during training was performed on the basis of the performance of the GP-evolved program on the test data. After the evolution of the GP program, the next phase was the application phase, in which the best-evolved GP program from the evolution phase was used to check the performance on unseen data samples. A block diagram of Song's technique is shown in Figure 12 . As this technique was used for detecting motion in video, two-dimensional arrays of size 20 x 20 were first captured as video frames from different locations of the videos. If the majority of pixels within a frame were labeled as samples by a human expert, the image was considered to belong to the positive class. During the training of the GP program, accuracy was used as the fitness function, whereas detection accuracy versus the number of generations was used as an evaluation measure. Figure 12 : GP-based motion detection technique by Song et al. BIB007 6. Category-wise Applications of GP This section presents different GP-based techniques applied to different categories of IP. Table 1 lists the references as well as the GP parameter settings for each category.
An overall analysis of Table 1 shows that, in all of the reported IP-related applications, a large population is used in comparison to the number of generations. A large population within each generation helps to increase diversity and hence increases the chance of obtaining a better individual within fewer generations. Moreover, most of the GP-related IP applications used tournament selection. The advantage of the tournament selection method is that it helps to maintain a constant selection pressure, and even programs with average fitness have a chance to reproduce a child in the coming generation. Table 1 also shows that a higher crossover probability is used in comparison to the mutation probability, because higher values of the mutation probability increase the search area within the search space and the algorithm may get stuck in local minima. Also, in IP-related applications, ramped half-and-half is the commonly used population initialization method; this method produces initial trees of variable length and thus helps to increase the diversity of the initial population. Table 1 . Analysis of GP applications in IP 7. Advantages and disadvantages of using GP for IP GP is a relatively new technique among the evolutionary computing algorithms and has been widely applied in various IP-related techniques. In the literature, GP has shown excellent performance on optimization and classification related problems; however, advantages and disadvantages are also associated with GP-based optimization techniques, some of which are discussed below. Understandability: GP outputs a program or a collection of programs in the form of mathematical expressions, which are easy to comprehend if simplified and converted to normal notation. Needs Large Training Data: A large dataset is needed for the training process in order to reach an optimal solution. No Guaranteed Solution: Due to its stochastic nature, GP does not guarantee an exact solution; therefore, it cannot be applied in situations where an exact, straightforward solution is required. GP vs GA: Being the prominent types of Evolutionary Algorithms (EAs), both paradigms share some characteristics but differ in others. They mainly differ in the way individuals are represented: GP uses a tree representation, whereas GA uses a string representation BIB001 . In the case of GA, individuals are generally raw data, whereas in GP the individuals are computer programs. The tree-based representation gives GP an edge over GA because of its flexibility; however, GA is faster compared to GP BIB002 . Diverse Search Space: The genetic operators (crossover and mutation) used in GP introduce diversity and thus increase the span of the search space; a larger search space helps in finding the most optimal solution for the problem at hand.
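As a rough, library-free illustration of how the parameter choices summarised in Table 1 fit together (a large population, tournament selection, a high crossover probability and a low mutation probability), the skeleton below runs a generic generational loop. The individual representation and the variation operators are deliberately left abstract, and the whole sketch is an assumption for illustration rather than the setup of any particular cited work.

```python
import random

def tournament_select(population, fitnesses, k=3):
    # Best of k randomly chosen individuals; keeps selection pressure roughly constant.
    contenders = random.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitnesses[i])]

def evolve(init_individual, fitness, crossover, mutate,
           pop_size=500, generations=50, p_cross=0.9, p_mut=0.1):
    population = [init_individual() for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(ind) for ind in population]
        offspring = []
        while len(offspring) < pop_size:
            child = tournament_select(population, fits)
            if random.random() < p_cross:
                child = crossover(child, tournament_select(population, fits))
            if random.random() < p_mut:
                child = mutate(child)
            offspring.append(child)
        population = offspring
    fits = [fitness(ind) for ind in population]
    return population[max(range(pop_size), key=lambda i: fits[i])]

# Toy usage: "individuals" are plain numbers standing in for program trees.
best = evolve(init_individual=lambda: random.uniform(-10, 10),
              fitness=lambda x: -abs(x - 3.14),
              crossover=lambda a, b: (a + b) / 2,
              mutate=lambda a: a + random.gauss(0, 0.5),
              pop_size=100, generations=30)
print(round(best, 2))  # typically lands close to 3.14
```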
|
A Review of Theory and Practice in Scientometrics <s> <s> A study is reported which tested the hypothesis that citation indexes are useful heuristic tools for the historian. In this approach, the history of science is regarded as a chronological sequence of events in which each new discovery is dependent upon earlier discoveries. Models of history were constructed consisting of chronological maps or topological network diagrams. Two such models were used here. The first is based on the events in the history of DNA as described by Dr. Isaac Asimov in the Genetic Code. The second is based on the bibliographic citation data contained in the documents which are the original published studies of events represented in the Asimov book. The interdependencies of linkages among 40 major events (nodes) included in both network diagrams were mapped and compared. The study confirmed 65% (28 of 43) of the historical dependencies in the Asimov network by corresponding linkages established by citations. In addition, 31 citation connections were found which did not correspond to any historical dependencies noted in The Genetic Code. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> <s> A new form of document coupling called co-citation is defined as the frequency with which two documents are cited together. The co-citation frequency of two scientific papers can be determined by comparing lists of citing documents in the Science Citation Index and counting identical entries. Networks of co-cited papers can be generated for specific scientific specialties, and an example is drawn from the literature of particle physics. Co-citation patterns are found to differ significantly from bibliographic coupling patterns, but to agree generally with patterns of direct citation. Clusters of co-cited papers provide a new way to study the specialty structure of science. They may provide a new approach to indexing and to the creation of SDI profiles. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> <s> Citation Indexing: Its Theory and Application in Science, Technology, and Humanities. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> <s> This book analyses changes which have occurred in the organization and management of the UK public services over the last 15 years, looking particularly at the restructured NHS. The authors present an up to date analysis around three main themes: 1. the transfer of private sector models to the public sector 2. the management of change in the public sector 3. management reorganization and role change In doing so they examine to what extent a New Public Management has emerged and ask whether this is a parochial UK development or of wider international significance. This is a topical and important issue in management training, professional and policy circles. Important analytic themes include: an analysis of the nature of the change process in the UK public services: characterisation of quasi markets; the changing role of local Boards and possible adaptation by professional groupings.
The book also addresses the important and controversial question of accountability, and contributes to the development of a general theory of the New Public Management. <s> BIB004 </s> A Review of Theory and Practice in Scientometrics <s>  <s> In this article, we define webometrics within the framework of informetric studies and bibliometrics, as belonging to library and information science, and as associated with cybermetrics as a generic subfield. We develop a consistent and detailed link typology and terminology and make explicit the distinction among different Web node levels when using the proposed conceptual framework. As a consequence, we propose a novel diagram notation to fully appreciate and investigate link structures between Web nodes in webometric analyses. We warn against taking the analogy between citation analyses and link analyses too far. <s> BIB005 </s> A Review of Theory and Practice in Scientometrics <s>  <s> Webometrics, the quantitative study of Web phenomena, is a field encompassing contributions from information science, computer science, and statistical physics. Its methodology draws especially from bibliometrics. This special issue presents contributions that both push forward the field and illustrate a wide range of webometric approaches. <s> BIB006 </s> A Review of Theory and Practice in Scientometrics <s>  <s> I propose the index $h$, defined as the number of papers with citation number higher or equal to $h$, as a useful index to characterize the scientific output of a researcher. <s> BIB007 </s> A Review of Theory and Practice in Scientometrics <s>  <s> The author analyses the basic properties of the h-index, an indicator developed by J. E. Hirsch, on the basis of a probability distribution model widely used in bibliometrics, namely the Pareto distributions. The h-index, based on the number of citations received, measures publication activity and citation impact. It is a useful indicator with interesting mathematical properties, but it cannot substitute for the more sophisticated standard bibliometric indicators. <s> BIB008 </s> A Review of Theory and Practice in Scientometrics <s>  <s> The relationship of the h-index with other bibliometric indicators at the micro level is analysed for Spanish CSIC scientists in Natural Resources, using publications downloaded from the Web of Science (1994–2004). Different activity and impact indicators were obtained to describe the research performance of scientists in different dimensions, being the h-index located through factor analysis in a quantitative dimension highly correlated with the absolute number of publications and citations. The need to include the remaining dimensions in the analysis of research performance of scientists and the risks of relying only on the h-index are stressed. The hypothesis that the achievement of some highly visible but intermediate-productive authors might be underestimated when compared with other scientists by means of the h-index is tested. <s> BIB009 </s> A Review of Theory and Practice in Scientometrics <s>  <s> There is an increasing emphasis on the use of metrics for assessing the research contribution of academics, departments, journals or conferences. Contribution has two dimensions: quantity which can be measured by number/size of the outputs, and quality which is most easily measured by the number of citations. Recently, Hirsch proposed a new metric which is simple, combines both quality and quantity in one number, and is robust to measurement problems.
This paper applies the Hirsch-index (h-index) to three groups of management academics—BAM Fellows, INFORMS Fellows and members of COPIOR—in order to evaluate the extent to which the h-index would serve as a reliable measure of the contribution of researchers in the management field. <s> BIB010 </s> A Review of Theory and Practice in Scientometrics <s> <s> This paper reviews developments in informetrics between 2000 and 2006. At the beginning of the 21st century we witness considerable growth in webometrics, mapping and visualization and open access. A new topic is comparison between citation databases, as a result of the introduction of two new citation databases Scopus and Google Scholar. There is renewed interest in indicators as a result of the introduction of the h-index. Traditional topics like citation analysis and informetric theory also continue to develop. The impact factor debate, especially outside the informetric literature continues to thrive. Ranked lists (of journal, highly cited papers or of educational institutions) are of great public interest. <s> BIB011 </s> A Review of Theory and Practice in Scientometrics <s> <s> We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks. <s> BIB012 </s> A Review of Theory and Practice in Scientometrics <s> <s> This paper explores a new indicator of journal citation impact, denoted as source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists, the rapidity of maturing of citation impact, and the extent to which a database used for the assessment covers the field's literature. It further develops Eugene Garfield's notions of a field's ‘citation potential’ defined as the average length of references lists in a field and determining the probability of being cited, and the need in fair performance assessments to correct for differences between subject fields. A journal's subject field is defined as the set of papers citing that journal. SNIP is defined as the ratio of the journal's citation count per paper and the citation potential in its subject field. It aims to allow direct comparison of sources in different subject fields. Citation potential is shown to vary not only between journal subject categories – groupings of journals sharing a research field – or disciplines (e.g., journals in mathematics, engineering and social sciences tend to have lower values than titles in life sciences), but also between journals within the same subject category. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics higher than periodicals in classical subjects or more general journals. SNIP corrects for such differences. 
Its strengths and limitations are critically discussed, and suggestions are made for further research. All empirical results are derived from Elsevier's Scopus. <s> BIB013 </s> A Review of Theory and Practice in Scientometrics <s> <s> A size-independent indicator of journals’ scientific prestige, the SCImago Journal Rank (SJR) indicator, is proposed that ranks scholarly journals based on citation weighting schemes and eigenvector centrality. It is designed for use with complex and heterogeneous citation networks such as Scopus. Its computation method is described, and the results of its implementation on the Scopus 2007 dataset is compared with those of an ad hoc Journal Impact Factor, JIF(3y), both generally and within specific scientific areas. Both the SJR indicator and the JIF distributions were found to fit well to a logarithmic law. While the two metrics were strongly correlated, there were also major changes in rank. In addition, two general characteristics were observed. On the one hand, journals’ scientific influence or prestige as computed by the SJR indicator tended to be concentrated in fewer journals than the quantity of citation measured by JIF(3y). And on the other, the distance between the top-ranked journals and the rest tended to be greater in the SJR ranking than in that of the JIF(3y), while the separation between the middle and lower ranked journals tended to be smaller. <s> BIB014 </s> A Review of Theory and Practice in Scientometrics <s> <s> This chapter contains sections titled: Introduction, Defining the Object: Science as a “Good as It Is” Social System in the Positivist and Functionalist Traditions, Producing the Evidence: Citation Indexes and the Quantity-Quality Connection, Setting Up the Rules of the Game: Mathematical Life in a Skewed World, Conclusions, Note, References <s> BIB015 </s> A Review of Theory and Practice in Scientometrics <s> <s> This chapter contains sections titled: Introduction, Proliferation of Performance Indicators, Strategic Behavior, Ambivalent Attitudes, The Citation as Institution, The Citation as Infrastructure, References <s> BIB016 </s> A Review of Theory and Practice in Scientometrics <s> <s> Are you applying for tenure, promotion or a new job? Is your work cited in journals which are not ISI listed? Publish or Perish is designed to help individual academics to present their case for research impact to its best advantage. <s> BIB017
|
Scientometrics - "The quantitative methods of the research on the development of science as an informational process" (Nalimov & Mulcjenko, 1971, p. 2). This field concentrates specifically on science (and the social sciences and humanities). Informetrics - "The study of the application of mathematical methods to the objects of information science" (Nacke, 1979, p. 220). Perhaps the most general field, covering all types of information regardless of form or origin (BIB011 ; Wilson, 1999). Webometrics - "The study of the quantitative aspects of the construction and use of information resources, structures and technologies on the Web drawing on bibliometric and informetric approaches" (BIB005 , p. 1217; BIB006 ; Thelwall et al., 2005). This field mainly concerns the analysis of web pages as if they were documents. Altmetrics - "The study and use of scholarly impact measures based on activity in online tools and environments" (Priem, 2014, p. 266). Also called Scientometrics 2.0, this field replaces journal citations with impacts in social networking tools such as views, downloads, "likes", blogs, Twitter, Mendeley and CiteULike. In this review we concentrate on scientometrics as that is the field most directly concerned with the exploration and evaluation of scientific research. In fact, traditionally these fields have concentrated on the observable or measurable aspects of communications - external borrowings of books rather than in-library usage; citations of papers rather than their reading - but currently online access and downloads provide new modes of usage, and this leads to the developments in webometrics and altmetrics that will be discussed later. In this section we describe the history and development of scientometrics BIB015 and in the next sections explore the main research areas and issues. Whilst scientometrics can, and to some extent does, study many other aspects of the dynamics of science and technology, in practice it has developed around one core notion - that of the citation. The act of citing another person's research provides the necessary linkages between people, ideas, journals and institutions to constitute an empirical field or network that can be analysed quantitatively. Furthermore, the citation also provides a linkage in time - between the previous publications of its references and the later appearance of its citations. This in turn stems largely from the work of one person - Eugene Garfield - who identified the importance of the citation and then promulgated the idea of the Science Citation Index (SCI) in the 1950s (and the company, the Institute for Scientific Information, ISI, to maintain it) as a database for capturing citations BIB003 . Since then the picture has changed in several ways. Firstly, alternative sources of citation data have appeared, notably Elsevier's Scopus and Google Scholar, which works in an entirely different way - searching the web rather than collecting data directly. Whilst this extension of coverage is valuable, it also leads to problems of comparison, with quite different results appearing depending on the databases used. Secondly, a whole new range of metrics has appeared, superseding in some ways the original ones such as total number of citations and citations per paper (cpp). The h-index BIB009 BIB008 BIB007 BIB010 is one that has become particularly prominent, now available automatically in the databases. It is transparent and robust but there are many criticisms of its biases.
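Since the h-index figures prominently in what follows, a small worked example may help. The snippet below is an illustrative sketch (the citation counts are invented) that computes an author's h-index directly from its definition: the largest h such that h of the author's papers have at least h citations each.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank          # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Hypothetical author with seven papers and these citation counts:
print(h_index([10, 8, 5, 4, 3, 2, 1]))  # -> 4 (four papers with >= 4 citations)
```

Because the same papers attract different citation counts in different databases, the h-index of one and the same author can differ depending on the source used, which is one reason the choice of database discussed below matters.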
In terms of journal evaluation, several new metrics have been developed, such as SNIP BIB013 and the SCImago Journal Rank (SJR) BIB014 , which aim to take into account the differential citation behaviours of different disciplines: some areas of science such as biomedicine cite very highly and have many authors per paper; other areas, particularly some of the social sciences, mathematics and the humanities, do not cite so highly. A third, technical, development has been in the mapping and visualization of bibliometric networks. This idea was also initiated by Garfield, who developed the concept of "historiographs" BIB001 , maps of connections between key papers, to reconstruct the intellectual forebears of an important discovery. This was followed by co-citation analysis, which used multivariate techniques such as factor analysis, multi-dimensional scaling and cluster analysis to analyse and map the networks of highly related papers, and which pointed the way to identifying research domains and frontiers BIB002 . Co-word analysis, which looked at word pairs from titles, abstracts or keywords, drew on the actor-network theory of Callon and Latour. New algorithms and mapping techniques such as the Blondel algorithm BIB012 and the Pajek mapping software have greatly enhanced the visualization of high-dimensional datasets. But perhaps the most significant change, which has taken scientometrics from relative obscurity as a statistical branch of information science to playing a major, and often much criticised, role within the social and political processes of the academic community, is the drive of governments and official bodies to monitor, record and evaluate research performance. This itself is an effect of the neo-liberal agenda of "new public management" (NPM) BIB004 and its requirements of transparency and accountability. This occurs at multiple levels - individuals, departments and research groups, institutions and, of course, journals - and has significant consequences in terms of jobs and promotion, research grants, and league tables. In the past, to the extent that this occurred it did so through a process of peer review, with the obvious drawbacks of subjectivity, favouritism and conservatism. But now, partly on cost grounds, scientometrics is being called into play, and the rather ironic result is that instead of merely reflecting or mapping a pre-given reality, scientometric methods are actually shaping that reality through their performative effects on academics and researchers BIB016 . At the same time, the discipline of science studies itself has bifurcated (or trifurcated) into at least three elements: the quantitative study of science indicators and their behaviour, analysis and metrication, from a positivist perspective; a more qualitative, sociology-of-science approach that studies the social and political processes lying behind the generation and effects of citations, generally from a constructivist perspective; and a third stream of research that is interested in policy implications and draws on both of the other two. Finally, in this brief overview, we must mention the advent of the Web and social networking. This has brought in the possibility of alternatives to citations as ways of measuring impact (if not quality), such as downloads, views, "tweets", "likes", and mentions in blogs. Together, these are known as "altmetrics", and whilst they are currently underdeveloped, they may well come to rival citations in the future.
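The mapping step described above can be illustrated with a small sketch. The snippet below is illustrative only (the citing papers and their reference lists are invented): it builds a weighted co-citation graph, in which two papers are linked whenever a third paper cites both, and then applies Louvain-style modularity optimization - the approach of the Blondel algorithm BIB012 - to find clusters that would correspond to research specialties. It assumes the networkx library, version 2.8 or later, which ships a Louvain implementation.

```python
from itertools import combinations

import networkx as nx

# Toy citation data: each citing paper lists the earlier papers it cites.
reference_lists = {
    "citing_1": ["A", "B", "C"],
    "citing_2": ["A", "B"],
    "citing_3": ["C", "D"],
    "citing_4": ["D", "E"],
    "citing_5": ["D", "E", "F"],
    "citing_6": ["A", "C"],
}

# Co-citation graph: edge weight = number of papers citing both endpoints.
G = nx.Graph()
for refs in reference_lists.values():
    for u, v in combinations(sorted(set(refs)), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

# Louvain-style community detection (greedy modularity optimization).
communities = nx.community.louvain_communities(G, weight="weight", seed=1)
print(communities)  # e.g. [{'A', 'B', 'C'}, {'D', 'E', 'F'}]
```

Tools such as Pajek perform the same kind of clustering at a much larger scale and add the layout and visualization step.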
Google Scholar can produce profiles of researchers, including their h-index, and Publish or Perish BIB017 enhances searches of Scholar, with the Harzing website (www.harzing.com) being a repository for multiple journal ranking lists in the field of business and management.
|
A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> Classical assumptions about the nature and ethical entailments of authorship (the standard model) are being challenged by developments in scientific collaboration and multiple authorship. In the biomedical research community, multiple authorship has increased to such an extent that the trustworthiness of the scientific communication system has been called into question. Documented abuses, such as honorific authorship, have serious implications in terms of the acknowledgment of authority, allocation of credit, and assigning of accountability. Within the biomedical world it has been proposed that authors be replaced by lists of contributors (the radical model), whose specific inputs to a given study would be recorded unambiguously. The wider implications of the ‘hyperauthorship’ phenomenon for scholarly publication are considered. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> IT may appear blasphemous to paraphrase the title of the classic article of Vannevar Bush but it may be a mitigating factor that it is done to pay tribute to another legendary scientist, Eugene Garfield. His ideas of citationbased searching, resource discovery and quantitative evaluation of publications serve as the basis for many of the most innovative and powerful online information services these days. Bush 60 years ago contemplated – among many other things – an information workstation, the Memex. A researcher would use it to annotate, organize, link, store, and retrieve microfilmed documents. He is acknowledged today as the forefather of the hypertext system, which in turn, is the backbone of the Internet. He outlined his thoughts in an essay published in the Atlantic Monthly. Maybe because of using a nonscientific outlet the paper was hardly quoted and cited in scholarly and professional journals for 30 years. Understandably, the Atlantic Monthly was not covered by the few, specialized abstracting and indexing databases of scientific literature. Such general interest magazines are not source journals in either the Web of Science (WoS), or Scopus databases. However, records for items which cite the ‘As We May Think’ article of Bush (also known as the ‘Memex’ paper) are listed with appropriate bibliographic information. Google Scholar (G-S) lists the records for the Memex paper and many of its citing papers. It is a rather confusing list with many dead links or otherwise dysfunctional links, and a hodge-podge of information related to Bush. It is quite telling that (based on data from the 1945– 2005 edition of WoS) the article of Bush gathered almost 90% of all its 712 citations in WoS between 1975 and 2005, peaking in 1999 with 45 citations in that year alone. Undoubtedly, this proportion is likely to be distorted because far fewer source articles from far fewer journals were processed by the Institute for Scientific Information for 1945–1974 than for 1975–2005. Scopus identifies 267 papers citing the Bush article. The main reason for the discrepancy is that Scopus includes cited references only from 1995 onward, while WoS does so from 1945. Bush’s impatience with the limitations imposed by the traditional classification and indexing tools and practices of the time is palpable. It is worth to quote it as a reminder. Interestingly, he brings up the terms ‘web of trails’ and ‘association of thoughts’ which establishes the link between him and Garfield. 
<s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> Journal articles constitute the core documents for the diffusion of knowledge in the natural sciences. It has been argued that the same is not true for the social sciences and humanities where knowledge is more often disseminated in monographs that are not indexed in the journal-based databases used for bibliometric analysis. Previous studies have made only partial assessments of the role played by both serials and other types of literature. The importance of journal literature in the various scientific fields has therefore not been systematically characterized. The authors address this issue by providing a systematic measurement of the role played by journal literature in the building of knowledge in both the natural sciences and engineering and the social sciences and humanities. Using citation data from the CD-ROM versions of the Science Citation Index (SCI), Social Science Citation Index (SSCI), and Arts and Humanities Citation Index (AHCI) databases from 1981 to 2000 (Thomson ISI, Philadelphia, PA), the authors quantify the share of citations to both serials and other types of literature. Variations in time and between fields are also analyzed. The results show that journal literature is increasingly important in the natural and social sciences, but that its role in the humanities is stagnant and has even tended to diminish slightly in the 1990s. Journal literature accounts for less than 50% of the citations in several disciplines of the social sciences and humanities; hence, special care should be used when using bibliometric indicators that rely only on journal literature. <s> BIB003 </s> A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> This paper addresses research performance monitoring of the social sciences and the humanities using citation analysis. Main differences in publication and citation behavior between the (basic) sciences and the social sciences and humanities are outlined. Limitations of the (S)SCI and A&HCI for monitoring research performance are considered. For research performance monitoring in many social sciences and humanities, the methods used in science need to be extended. A broader range of both publications (including non-ISI journals and monographs) and citation indicators (including non-ISI reference citation values) is needed. Three options for bibliometric monitoring are discussed. <s> BIB004 </s> A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> The Institute for Scientific Information's (ISI, now Thomson Scientific, Philadelphia, PA) citation databases have been used for decades as a starting point and often as the only tools for locating citations and/or conducting citation analyses. The ISI databases (or Web of Science [WoS]), however, may no longer be sufficient because new databases and tools that allow citation searching are now available. Using citations to the work of 25 library and information science (LIS) faculty members as a case study, the authors examine the effects of using Scopus and Google Scholar (GS) on the citation counts and rankings of scholars as measured by WoS. Overall, more than 10,000 citing and purportedly citing documents were examined.
Results show that Scopus significantly alters the relative ranking of those scholars that appear in the middle of the rankings and that GS stands out in its coverage of conference proceedings as well as international, non-English language journals. The use of Scopus and GS, in addition to WoS, helps reveal a more accurate and comprehensive picture of the scholarly impact of authors. The WoS data took about 100 hours of collecting and processing time, Scopus consumed 200 hours, and GS a grueling 3,000 hours. © 2007 Wiley Periodicals, Inc. <s> BIB005 </s> A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> Traditionally, the most commonly used source of bibliometric data is Thomson ISI Web of Knowledge, in particular the Web of Science and the Journal Citation Reports (JCR), which provide the yearly Journal Impact Factors (JIF). This paper presents an alternative source of data (Google Scholar, GS) as well as 3 alternatives to the JIF to assess journal impact (h-index, g-index and the number of citations per paper). Because of its broader range of data sources, the use of GS generally results in more comprehensive citation coverage in the area of management and international business. The use of GS particularly benefits academics publishing in sources that are not (well) covered in ISI. Among these are books, conference papers, non-US journals, and in general journals in the field of strategy and international business. The 3 alternative GS-based metrics showed strong correlations with the traditional JIF. As such, they provide academics and universities committed to JIFs with a good alternative for journals that are not ISI-indexed. However, we argue that these metrics provide additional advantages over the JIF and that the free availability of GS allows for a democratization of citation analysis as it provides every academic access to citation data regardless of their institution's financial means. <s> BIB006 </s> A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> This study examines the differences between Scopus and Web of Science in the citation counting, citation ranking, and h-index of 22 top human-computer interaction (HCI) researchers from EQUATOR—a large British Interdisciplinary Research Collaboration project. Results indicate that Scopus provides significantly more coverage of HCI literature than Web of Science, primarily due to coverage of relevant ACM and IEEE peer-reviewed conference proceedings. No significant differences exist between the two databases if citations in journals only are compared. Although broader coverage of the literature does not significantly alter the relative citation ranking of individual researchers, Scopus helps distinguish between the researchers in a more nuanced fashion than Web of Science in both citation counting and h-index. Scopus also generates significantly different maps of citation networks of individual scholars than those generated by Web of Science. The study also presents a comparison of h-index scores based on Google Scholar with those based on the union of Scopus and Web of Science. The study concludes that Scopus can be used as a sole data source for citation-based research and evaluation in HCI, especially when citations in conference proceedings are sought, and that researchers should manually calculate h scores instead of relying on system calculations. © 2008 Wiley Periodicals, Inc.
<s> BIB007 </s> A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> Given the current availability of different bibliometric indicators and of production and citation data sources, the following two questions immediately arise: do the indicators' scores differ when computed on different data sources? More importantly, do the indicator-based rankings significantly change when computed on different data sources? We provide a case study for computer science scholars and journals evaluated on Web of Science and Google Scholar databases. The study concludes that Google Scholar computes significantly higher indicators' scores than Web of Science. Nevertheless, citation-based rankings of both scholars and journals do not significantly change when compiled on the two data sources, while rankings based on the h index show a moderate degree of variation. <s> BIB008 </s> A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> Hirsch's h index is becoming the standard measure of an individual's research accomplishments. The aggregation of individuals' measures is also the basis for global measures at institutional or national levels. To investigate whether the h index can be reliably computed through alternative sources of citation records, the Web of Science (WoS), PsycINFO and Google Scholar (GS) were used to collect citation records for known publications of four Spanish psychologists. Compared with WoS, PsycINFO included a larger percentage of publication records, whereas GS outperformed WoS and PsycINFO in this respect. Compared with WoS, PsycINFO retrieved a larger number of citations in unique areas of psychology, but it retrieved a smaller number of citations in areas that are close to statistics or the neurosciences, whereas GS retrieved the largest numbers of citations in all cases. Incorrect citations were scarce in WoS (0.3%), more prevalent in PsycINFO (1.1%), and overwhelming in GS (16.5%). All platforms retrieved unique citations, the largest set coming from GS. WoS and PsycINFO cover distinct areas of psychology unevenly, thus applying different penalties on the h index of researchers working in different fields. Obtaining fair and accurate h indices required the union of citations retrieved by all three platforms. © 2010 Wiley Periodicals, Inc. <s> BIB009 </s> A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> Research assessment carries important implications both at the individual and institutional levels. This paper examines the research outputs of scholars in business schools and shows how their performance assessment is significantly affected when using data extracted either from the Thomson ISI Web of Science (WoS) or from Google Scholar (GS). The statistical analyses of this paper are based on a large survey data of scholars of Canadian business schools, used jointly with data extracted from the WoS and GS databases. Firstly, the findings of this study reveal that the average performance of B scholars regarding the number of contributions, citations, and the h-index is much higher when performances are assessed using GS rather than WoS. Moreover, the results also show that the scholars who exhibit the highest performances when assessed in reference to articles published in ISI-listed journals also exhibit the highest performances in Google Scholar.
Secondly, the absence of association between the strength of ties forged with companies, as well as between the customization of the knowledge transferred to companies and research performances of B scholars such as measured by indicators extracted from WoS and GS, provides some evidence suggesting that mode 1 and 2 knowledge productions might be compatible. Thirdly, the results also indicate that senior B scholars did not differ in a statistically significant manner from their junior colleagues with regard to the proportion of contributions compiled in WoS and GS. However, the results show that assistant professors have a higher proportion of citations in WoS than associate and full professors have. Fourthly, the results of this study suggest that B scholars in accounting tend to publish a smaller proportion of their work in GS than their colleagues in information management, finance and economics. Fifthly, the results of this study show that there is no significant difference between the contributions record of scholars located in English language and French language B schools when their performances are assessed with Google Scholar. However, scholars in English language B schools exhibit higher citation performances and higher h-indices both in WoS and GS. Overall, B scholars might not be confronted by having to choose between two incompatible knowledge production modes, but with the requirement of the evidence-based management approach. As a consequence, the various assessment exercises undertaken by university administrators, government agencies and associations of business schools should complement the data provided in WoS with those provided in GS. <s> BIB010 </s> A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> A search for the Standard Model Higgs boson in proton–proton collisions with the ATLAS detector at the LHC is presented. The datasets used correspond to integrated luminosities of approximately 4.8 fb−1 collected at √s = 7 TeV in 2011 and 5.8 fb−1 at √s = 8 TeV in 2012. Individual searches in the channels H→ZZ(⁎)→4l, H→γγ and H→WW(⁎)→eνμν in the 8 TeV data are combined with previously published results of searches for H→ZZ(⁎), WW(⁎), bb̄ and τ+τ− in the 7 TeV data and results from improved analyses of the H→ZZ(⁎)→4l and H→γγ channels in the 7 TeV data. Clear evidence for the production of a neutral boson with a measured mass of 126.0 ± 0.4 (stat) ± 0.4 (sys) GeV is presented. This observation, which has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10−9, is compatible with the production and decay of the Standard Model Higgs boson. <s> BIB011 </s> A Review of Theory and Practice in Scientometrics <s> SOURCES OF CITATIONS DATA <s> In 2011, Thomson-Reuters introduced the Book Citation Index (BKCI) as part of the Science Citation Index (SCI). The interface of the Web of Science version 5 enables users to search for both "Books" and "Book Chapters" as new categories. Books and book chapters, however, were always among the cited references, and book chapters have been included in the database since 2005. We explore the two categories with both BKCI and SCI, and in the sister databases for the social sciences (SoSCI) and the arts & humanities (A&HCI). Book chapters in edited volumes can be highly cited. Books contain many citing references, but are relatively less cited.
We suggest that this may find its origin in the slower circulation of books than of journal articles. It is possible to distinguish bibliometrically between monographs and edited volumes among the "Books". Monographs may be underrated in terms of citation impact or overrated using publication performance indicators because individual chapters are counted separately as contributions in terms of articles, reviews, and/or book chapters. <s> BIB012
|
Clearly, for the quantitative analysis of citations to be successful, there must be comprehensive and accurate sources of citation data. The major source of citations in the past was the Thomson Reuters Web of Science (WoS); more recently, Elsevier's Scopus has provided an alternative, along with Google Scholar (GS), which can be searched through free software such as Publish or Perish 9 . Both of these latter resources are free, whilst access to WoS and Scopus is subscription-based and offers different levels of accessibility depending on the amount of payment, thus leading to differential access for researchers. Many studies have shown that the coverage of WoS and Scopus differs significantly between different fields, particularly between the natural sciences, where coverage is very good, the social sciences, where it is moderate and variable, and the arts and humanities, where it is generally poor BIB003 . In contrast, the coverage of GS is generally higher and does not differ so much between subject areas, but the reliability and quality of its data can be poor BIB010 . Van , in a study of Delft University between 1991 and 2001, found that in fields such as architecture and technology, policy and management, the proportion of publications in WoS and the proportion of references to ISI material were under 30%, while for applied science they were between 70% and 80%. Across the social sciences, the proportions varied between 20% for political science and 50% for psychology. Another study examined the results of the 2001 RAE in the UK and found that while 89% of the outputs in biomedicine were in WoS, the figures for social science and arts & humanities were 35% and 13% respectively. CWTS was commissioned to analyse the 2001 RAE and found that the proportions of outputs contained in WoS and Scopus respectively were: Economics (66%, 72%), Business and Management (38%, 46%), Library and Information Science (32%, 34%) and Accounting and Finance (22%, 35%). There are several reasons for the differential coverage in these databases BIB003 BIB004 , and we should also note that the problem is not just the publications that are not included, but also that the publications that are included have lower citations recorded, since many of the citing sources are not themselves included. The first reason is that in science almost all research publications appear as journal papers (which are largely included in the databases), but in the social sciences, and even more so in the humanities, books are seen as a major form of research output. Secondly, there is a greater prevalence of the "lone scholar" as opposed to the team approach that is necessary in the experimental sciences and which results in a greater number of publications (and hence citations) overall. As an extreme example, a paper in Physics Letters B BIB011 in 2012 announcing the discovery of the Higgs Boson has 2,932 authors and already has over 4000 citations. Such outliers can distort bibliometric analyses, as we shall see BIB001 . Thirdly, a significant number of social science and humanities journals are not, or have not chosen to become, included in WoS, the accounting and finance field being a prime example. Finally, in social science and humanities a greater proportion of publications are directed at the general public or at specialised constituencies such as practitioners, and these "trade" publications or reports are not included in the databases. 9 http://www.harzing.com/pop.htm 10 Higher Education Funding Council for England
The general conclusions of these studies are as follows. First, the coverage of research outputs, including books and reports, is much higher in GS, usually around 90%, and this is reasonably constant across subjects; this means that GS has a comparatively greater advantage in the non-science subjects where Scopus and WoS are weak. Second, partly but not wholly because of the coverage, GS generates a significantly greater number of citations for any particular work - from two to five times as many - because the citations come from a wide range of sources and are not limited to the journals that are included in the other databases. Third, however, the data quality in GS is very poor, with many entries being duplicated because of small differences in spellings or dates, and many of the citations coming from a variety of non-research sources. With regard to the last point, it could be argued that the type of citation does not necessarily matter - it is still impact. Typical of these comparisons is a study that reviewed all the publications of three UK business schools from 1980 to 2008. Of the 4,600 publications in total, 3,023 were found in GS, but only 1,004 in WoS. None of the books, book chapters, conference papers or working papers were in WoS 11 . In terms of number of citations, the overall mean cites per paper (cpp) in GS was 14.7 but only 8.4 in WoS. It was also found that these rates varied considerably between fields in business and management, a topic to be taken up in the section on normalization. When taken down to the level of individual researchers the variation was even more noticeable, both in terms of the proportion of outputs in WoS and the average number of citations. For example, the most prolific researcher had 109 publications: 92% were in GS, but only 40% were in WoS; the cpp in GS was 31.5, but in WoS it was 12.3. Generally, where papers were included in both sources, GS citations were around three times greater. With regard to data quality, Garcia-Perez (2010) studied papers of psychologists in WoS, GS, and PsycINFO 12 . GS recorded more publications and citations than either of the other sources, but also had a large proportion of incorrect citations (16.5%) in comparison with 1% or less in the other two sources. 11 Most studies do not include WoS for books, which is still developing BIB012 . 12 PsycINFO is an abstracting and indexing database of the American Psychological Association with more than 3 million records devoted to peer-reviewed literature in the behavioural sciences and mental health.
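As a concrete illustration of the kind of comparison these studies make, the following sketch computes database coverage and mean cites per paper (cpp) from a small, entirely invented set of publication records; the numbers do not come from any of the studies cited above.

```python
# Hypothetical publication records for one researcher: citation counts per
# database, with None meaning the output is not indexed in that database.
publications = [
    {"title": "P1", "wos": 12,   "gs": 35},
    {"title": "P2", "wos": 3,    "gs": 10},
    {"title": "P3", "wos": None, "gs": 22},   # book chapter, not in WoS
    {"title": "P4", "wos": None, "gs": 4},    # conference paper, not in WoS
    {"title": "P5", "wos": 7,    "gs": 18},
]

def coverage_and_cpp(records, source):
    """Share of outputs indexed in `source` and mean cites per indexed paper."""
    indexed = [r[source] for r in records if r[source] is not None]
    coverage = len(indexed) / len(records)
    cpp = sum(indexed) / len(indexed) if indexed else 0.0
    return coverage, cpp

for source in ("wos", "gs"):
    cov, cpp = coverage_and_cpp(publications, source)
    print(f"{source}: coverage {cov:.0%}, cpp {cpp:.1f}")
# wos: coverage 60%, cpp 7.3
# gs: coverage 100%, cpp 17.8
```

On real data, the same calculation is what produces the coverage percentages and the two-to-five-fold cpp differences reported above.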
|
A Review of Theory and Practice in Scientometrics <s> Indicators of productivity <s> In this article we further develop the theory for a stochastic model for the citation process in the presence of obsolescence to predict the future citation pattern of individual papers in a collection. More precisely, we investigate the conditional distribution--and its mean-- of the number of citations to a paper after time t, given the number of citations it has received up to time t. In an important parametric case it is shown that the expected number of future citations is a linear function of the current number, this being interpretable as an example of a success-breeds-success phenomenon. <s> BIB001 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of productivity <s> An exact probabilistic formulation of the "square root law" conjectured by Price is given and a probability distribution satisfying this law is defined, for which the name Price distribution is suggested. Properties of the Price distribution are discussed, including its relationship with the laws of Lotka and Zipf. No empirical support of applicability of the Price distribution as a model for publication productivity could be found. <s> BIB002 </s> A Review of Theory and Practice in Scientometrics <s> Indicators of productivity <s> Introduction. Chapter I. Lotkaian Informetrics: An Introduction. Informetrics. What is Lotkaian informetrics? Why Lotkaian informetrics? Practical Examples of Lotkaian Informetrics. Chapter II. Basic Theory of Lotkaian Informetrics. General Informetrics Theory. Theory of Lotkaian Informetrics. Extension of the General Informetrics Theory: The Dual Size-Frequency Function H. The Place of the Law of Zipf in Lotkaian Informetrics. Chapter III. Three-dimensional Lotkaian Informetrics. Linear Three-Dimensional Lotkaian Informetrics. Chapter IV. Lotkaian Concentration Theory. Introduction. Discrete Concentration Theory. Continuous Concentration Theory. Concentration Theory of Linear Three-Dimensional Informetrics. Chapter V. Lotkaian Fractal Complexity Theory. Introduction. Elements of Fractal Theory. Interpretation of Lotkaian IPPs as Self-Similar Fractals. Chapter VI. Lotkaian Informetrics of Systems in which Items can have Multiple Sources. Introduction. Crediting Systems and Counting Procedures for Sources and "Super Sources" in IPPs Where Items Can Have Multiple Sources. Construction of Fractional Size-Frequency Functions Based on Two Dual Lotka laws. Chapter VII. Further Applications in Lotkaian Informetrics. Introduction. Explaining "Regularities". Probabilistic Explanation of the Relationship Between Citation Age and Journal Productivity. Chapter VII. General and Lotkaian Theory of the Distribution of Author Ranks in Multi-Authored Papers. The First-Citation Distribution in Lotkaian Informetrics. Zipfian Theory of N-grams and of N-word Phrases: the Cartesian Product of IPPs. Appendix. Appendix I. Appendix II. Appendix III Statistical Determination of the Parameters in the Law of Lotka. Bibliography. Subject Index. <s> BIB003
|
Some of the very early work, from the 1920s onwards, concerned productivity in terms of the number of papers produced by an author or research unit; the number of papers journals produce on a particular subject; and the number of key words that texts generate. They all point to a similar phenomenon - the Paretian one that a small proportion of producers are responsible for a high proportion of outputs. This also means that the statistical distributions associated with these phenomena are generally highly skewed. It should be said that the original works were quite approximate and actually provided few examples; they have been formalised by later researchers. Lotka studied the frequency distribution of the number of publications per author, concluding that "the number of authors making n contributions is about 1/n^2 of those making one", from which can be derived de Solla Price's "square root law" that "half the scientific papers are contributed by the top square root of the total number of scientific authors". So, typically, there are 1/4 as many authors publishing two papers as publishing one, 1/9 as many publishing three papers, and so on. Lotka's Law generates the following distribution: $f(k) = C / k^{2}$, where k = 1, 2, … is the number of contributions, f(k) is the number of authors making k contributions and C is a constant. BIB002 showed that a special case of the Waring distribution satisfies the square root law. Bradford hypothesised that if one ranks journals in terms of the number of articles they publish on a particular subject, then there will be a core of journals that publish the most. If you then group the rest into zones such that each zone has about the same number of articles as the core, then the number of journals in each zone follows this law: $N_n = N_0 k^{n}$, where k = Bradford coefficient, $N_0$ = number of journals in the core zone and $N_n$ = number of journals in the n-th zone. Thus the number of journals needed to publish the same number of articles grows as a power law. Zipf studied the frequency of words in a text and postulated that the rank of the frequency of a word and the actual frequency, when multiplied together, are a constant; that is, the number of occurrences is inversely related to the rank of the frequency. In a simple case, the most frequent word will occur twice as often as the second most frequent, and three times as often as the third: $r f(r) = C$, where r is the rank, f(r) is the frequency of that rank and C is a constant. More generally, $f(r) = \frac{r^{-s}}{\sum_{n=1}^{N} n^{-s}}$, where N is the number of items and s is a parameter. The Zipf distribution has been found to apply in many other contexts, such as the size of cities by population. All three of these behaviours ultimately rest on the same cumulative advantage or success-breeds-success (SBS) mechanisms mentioned above and, indeed, under certain conditions all three can be shown to be mathematically equivalent and a consequence of SBS (BIB003 , Chs. 2 and 3). However, empirical data on the number of publications per year by, for example, a particular author show that the Lotka distribution by itself is too simplistic, as it does not take into account productivity varying over time (including periods of inactivity) or subject. One approach is to model the process as a mixture of distributions. For example, we could assume that the number of papers per year follows a Poisson distribution with parameter λ, but that the parameter itself varies with a particular distribution depending on age, activity and discipline. If we assume that the parameter follows a Gamma distribution, then this mixture results in a negative binomial distribution, which has been found to give a good empirical fit. Moreover, this approach BIB001 shows that SBS is a consequence of the underlying model.
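The Poisson-Gamma mixture mentioned at the end of the previous paragraph can be checked numerically. The sketch below is illustrative only (the shape and scale values are arbitrary): it simulates yearly paper counts whose Poisson rate λ is itself Gamma-distributed across authors, and compares the simulated frequencies with the corresponding negative binomial distribution from scipy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed Gamma distribution of productivity rates across authors.
shape, scale = 2.0, 1.5          # mean rate = shape * scale = 3 papers/year

# Simulate: each author draws a rate lambda, then a Poisson count of papers.
n_authors = 100_000
lam = rng.gamma(shape, scale, size=n_authors)
papers = rng.poisson(lam)

# The Gamma-Poisson mixture is a negative binomial with
# n = shape and p = 1 / (1 + scale).
nb = stats.nbinom(n=shape, p=1.0 / (1.0 + scale))

for k in range(6):
    simulated = np.mean(papers == k)
    print(f"k={k}: simulated {simulated:.3f}  negative binomial {nb.pmf(k):.3f}")
```

With this many simulated authors the two columns agree to roughly two decimal places, which is the sense in which the negative binomial arises naturally from heterogeneous Poisson productivity.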
|